SMB share slower than expected

I get 90 MB/s from my Synology, and it's just a shitty DS216j. 112 MB/s from the Ubuntu 16.04 server I have set up at work. Something's not right here :\

I have a single ISO image that I transfer for this. It's a little bit more than 3GB.
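
For what it's worth, this is roughly how the copy can be timed from a terminal instead of eyeballing the file browser's progress dialog (assuming the share is mounted at /mnt/share, which is just a placeholder, and a coreutils new enough for status=progress):

```
# read the ISO off the share and throw it away,
# printing the effective throughput at the end
dd if=/mnt/share/image.iso of=/dev/null bs=1M status=progress
```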

What kind of hardware do you have? Is it the current FreeNAS (up-to-date)?

Those are the speeds I see as well, but over WebDAV only.

Mine was a Celeron G3220 in a Dell T20 with a Seagate Barracuda 2 TB drive and an integrated Intel NIC.

What disks are you running, and are there any RAID or mirror configurations involved?

The client side doesn't seem to make a difference, since I tested it again with a RAM disk. On the server side I tested it with a RAID-Z1 array of 2+1 disks, which seems to be fast enough, since I get better performance over WebDAV from the same array.
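
For anyone wanting to reproduce the RAM disk part, a tmpfs mount on the client is one way to do it (size and mount point are just examples):

```
# create a tmpfs-backed RAM disk on the client so the local disk
# can't be the bottleneck, then copy the ISO to/from it
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=4G tmpfs /mnt/ramdisk
```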

However, I just noticed something very interesting. I set up a Fedora 25 machine, mounted the CIFS share, and did the same test as I did on Ubuntu. Surprisingly, I am now getting the full gigabit speed! I also connected a different machine, which happens to run Ubuntu too, to the same network switch and repeated the test there, and I still get ~70 MB/s.

So this means that the restriction is on the Ubuntu side, probably because it is using an older/slower protocol version than Fedora 25. Hmm...


I'm gonna look into that.

Are you also using Ubuntu as a client system?

I'm using Ubuntu MATE 16.10 as the client and Ubuntu Server 16.04 LTS on the server.

Maybe the cause really is somewhere in Ubuntu...

Now I only need to find out how to check what protocol each system is using to connect.
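
From what I can find, something like this should show it (sudo smbstatus on the server needs a reasonably recent Samba to print the protocol version; the client command assumes a kernel cifs mount):

```
# on the Samba server: list active connections;
# newer Samba versions include the negotiated protocol version here
sudo smbstatus

# on a Linux client with a kernel cifs mount: look for the vers= option
mount | grep cifs
```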

Speeds for me over WebDAV are exactly the same as through the client and other methods.

So that's something we don't have in common.

I actually printed out the entire Nextcloud manual. I am reading it now.

@comfreak, could you spin up a Fedora 24 VM? If the speeds show the same issue we've been having, we could just look at the deltas, and that would tell us which package needs to be updated, assuming Ubuntu is shipping an old package that is causing the issue. It would have to be something that spans both SMB and WebDAV, but not SCP or SFTP.

I was going to suggest it might be an OS issue.

Is there still an Ubuntu Server distro? I would imagine that one handles file shares better.

I will try that out and post an update with the results :D

So I didn't get to try Fedora 24 yet, but I installed the standard Fedora 25 (not the XFCE spin I used previously), and the performance is the same as on Ubuntu. I only get around 70-74 MB/s as well.

This is quite weird..

I found this thread here on the Nextcloud forums.

I just ran mysqltuner and it gave me recommendations to improve the speed of my database.

Okay, but shouldn't that be completely separate from the storage on the server and the SMB service?

I have Nextcloud running with everything on the same machine, and I am using MySQL with InnoDB as the database engine.

Sorry! I also forgot that your issue is just with SMB while mine is with Nextcloud. I'm still reading up on it though.

A thought popped into my head: check DistroWatch. Sure enough, they list the Samba version each distro ships.

The latest Ubuntu MATE uses Samba version 4.4.5.

Fedora 25 uses Samba version 4.5.

The latest release is Samba 4.6.

Something changed between version 4.4.5 and 4.5 that got fixed, so as long as a distro is using 4.5 or higher, I think Samba should work as expected.
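
To double-check what a given machine actually has installed, rather than going by DistroWatch, something like this should do it:

```
# on the server: version of the Samba daemon
smbd --version

# on the client: version of the Samba client tools
smbclient --version
```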

I also found this bug on their bugzilla page https://bugzilla.samba.org/show_bug.cgi?id=10879

When you were using SMB on Fedora, did you mount the share in fstab, or was it going through the SMB client?

Okay, I managed to narrow it down even further. The reason the transfers via the file browser are so slow has to do with the difference between how the file browser mounts SMB shares and how the mount -t cifs ... command mounts a share.

In fact, when I mount the share via the mount command I get gigabit speeds, and when I open it in the file browser I only get 70-75 MB/s (on the same system). If only I knew what the difference is. It might be a step in the right direction if I found out what the file browser actually does when you open a network share. I always thought it does a simple mount -t cifs ... in the background, but apparently that's not the case...
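
For reference, the manual mount that gives me gigabit speeds is just a plain kernel cifs mount along these lines (server, share, mount point, and username are placeholders; vers=3.0 assumes the server supports SMB3):

```
# kernel cifs mount for comparison with the file browser mount;
# //nas/share, /mnt/share and myuser are placeholders
sudo mkdir -p /mnt/share
sudo mount -t cifs //nas/share /mnt/share -o username=myuser,vers=3.0
```

If the file browser is going through GVFS/libsmbclient in user space instead of the kernel cifs module, that could explain the gap, but I haven't verified that yet.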

That might be of some help. I found the first half useful.

In the bug I linked from their Bugzilla, I think we found the issue.

When copying a large file from my office NAS to my laptop via smbget I get a speed of around 1.6 MBps, using mount with the cifs file system option I get around 100Mbps. I am using Samba 4.1.12 on Fedora 20.

See, it seems to work as expected with mount, but the file browsers probably use the slower method for some reason. Granted, this 'bug' sprang up in 2014, but it is not resolved and still seems to be ongoing.

My guess is that if you mount the SMB share with CIFS in your fstab, that might solve the issue.
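
Something along these lines in /etc/fstab should do it (share, mount point, and credentials file are placeholders; vers=3.0 assumes the server supports SMB3):

```
# /etc/fstab entry for a permanent kernel cifs mount (placeholders throughout)
//nas/share  /mnt/share  cifs  credentials=/etc/samba/creds,vers=3.0,iocharset=utf8,_netdev  0  0
```

The credentials file just holds username= and password= lines so they don't end up in fstab itself.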

Most people probably don't realize there's a bug, because the only way to notice it is to do large file transfers. Most SOHO setups just shuffle standard office docs around, and enterprise setups would have the latest and greatest server hardware and (probably) not see the performance hit.

My suggestion would be to also join the mailing list and let the devs know that this is still a serious issue that needs to be fixed.


I went ahead and tried what you did, and surprisingly, the overhead is present on the slower Wi-Fi connection too.

Wired file browser transfer = ~44 MB/s
Wired cifs mount transfer = ~80 MB/s

Wireless file browser transfer = ~4.6-5 MB/s
Wireless cifs mount transfer = ~5.6-6 MB/s

Tested with different files on the same disk so as to avoid reading cached data.

Edit: Samba version 4.3.11-Ubuntu.
Also, the processor load on the server is very much the same for the two kinds of transfers.

Edit: On Windows 10 it reports a speed of 105 MB/s, but I am unsure whether that is mebibytes or megabytes. Anyway, it is fast.

I have been reading a bit about NetBIOS, SMB2/SMB3, CIFS, and such, but to be honest it is a bit over my head. For one thing, some articles say CIFS is old, slow, and insecure, while others say CIFS is new. From what I gather, "CIFS" originally referred to the old SMB1 dialect, but the Linux kernel client is still called cifs even though it can speak SMB2/SMB3, which would explain the contradiction.
