SMB share slower than expected

I will try that out and post an update with the results :D

So I didn't get to try Fedora 24 yet, but I installed the standard Fedora 25 (not the XFCE spin I used previously) and the performance is the same as on Ubuntu. I only get around 70-74 MB/s as well.

This is quite weird..

I found this thread here on the nextcloud forums.

I just ran mysqltuner and it gave me recommendations to improve the speed of my database.
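
For reference, this is roughly how I ran it (assuming the mysqltuner.pl script is downloaded locally; the credentials are placeholders):

  # run MySQLTuner against the local MySQL/MariaDB instance;
  # it only reads status variables and prints tuning suggestions
  perl mysqltuner.pl --user root --pass 'mypassword'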

Okay, but shouldn't that be completely separate from the storage on the server and the SMB service?

I have Nextcloud running with everything on the same machine, and I am using MySQL with InnoDB as the database engine.

Sorry! I also forgot that your issue is just with SMB while mine is with Nextcloud. I'm still reading up on it, though.

A thought popped into my head: check DistroWatch. And sure enough, the distros ship different Samba versions.

The latest Ubuntu MATE uses Samba version 4.4.5.

Fedora 25 uses Samba version 4.5.

The latest release is Samba 4.6.

Something happened between version 4.4.5 and 4.5 that got fixed. So as long as a distro is using 4.5 or higher, I think Samba should work as expected.
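
If anyone wants to double-check what their install actually ships, these two commands should tell you:

  # version of the Samba server daemon
  smbd --version

  # version of the client tools / libsmbclient
  smbclient --version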

I also found this bug on their bugzilla page https://bugzilla.samba.org/show_bug.cgi?id=10879

When you were using SMB on Fedora, did you mount the share via fstab or were you using the SMB client?

Okay, I managed to narrow it down even further. The reason the transfers via the file browser are so slow has to do with the difference between how the file browser mounts SMB shares and how the mount -t cifs ... command does it.

In fact, when I mount the share via the mount command, I get gigabit speeds, and when I open it in the file browser I only get the 70-75 MB/s (on the same system). If only I knew what the difference is. It might be a step in the right direction to find out what the file browser actually does when you open a network share. I always thought it did a simple mount -t cifs .. in the background, but apparently that's not the case.
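
For reference, these are the two paths I'm comparing; as far as I can tell the file browser goes through GVfs/libsmbclient in userspace rather than the kernel (server address, share name and username are placeholders):

  # roughly what the file browser does (GVfs userspace mount)
  gio mount smb://192.168.1.10/share        # 'gvfs-mount smb://...' on older systems

  # kernel CIFS mount, which is the fast one for me
  sudo mkdir -p /mnt/share
  sudo mount -t cifs //192.168.1.10/share /mnt/share -o username=me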

That might be of some help. I found the first half useful.

In the bug I linked from their Bugzilla, I think we found the issue.

When copying a large file from my office NAS to my laptop via smbget I get a speed of around 1.6 MBps; using mount with the cifs file system option I get around 100 Mbps. I am using Samba 4.1.12 on Fedora 20.

See, it seems to work as expected with mount, but the file browsers probably use the slower method for some reason. Granted, this 'bug' sprang up in 2014, but it is not resolved and still seems to be ongoing.

My guess is that mounting the SMB share with CIFS in your fstab might solve the issue.
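
Something along these lines in /etc/fstab is what I have in mind (server, share, mount point and credentials file are placeholders):

  # /etc/fstab - mount the share with the kernel CIFS client at boot
  //192.168.1.10/share  /mnt/share  cifs  credentials=/etc/smb-credentials,uid=1000,gid=1000,_netdev  0  0

  # /etc/smb-credentials (chmod 600) would contain:
  #   username=me
  #   password=secret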

Most people probably don't realize there's a bug because the only way to trigger it is to do large file transfers. Most SOHO setups just push standard office docs, and enterprises have the latest and greatest server hardware and (probably) don't see the performance hit.

My suggestion would be to also join the mailing list and let the devs know that this is still a serious issue that needs to be fixed.


I went ahead and tried what you did, and surprisingly, the overhead is present on the slower Wi-Fi connection too.

Wired file browser transfer = ~44 MB/s
Wired cifs mount transfer = ~80 MB/s

Wireless file browser transfer = ~4.6-5 MB/s
Wireless cifs mount transfer = ~5.6-6 MB/s

Tested with different files on the same disk so as to avoid cached material.
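
Roughly what each timing run looked like, in case anyone wants to repeat it (paths and file names are placeholders):

  # time a copy of a large file from the cifs mount, using a different
  # source file for every run so nothing is served from cache
  time cp /mnt/share/testfile-2G.bin /tmp/

  # the file browser mount shows up under /run/user/<uid>/gvfs/ on my
  # system and can be timed the same way; alternatively, drop the
  # server's page cache between runs:
  sync && echo 3 | sudo tee /proc/sys/vm/drop_caches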

Edit: Samba version 4.3.11-Ubuntu
And, the processor load on the server is very much the same for the two kinds of transfers.

Edit: On Windows 10 it reports a speed of 105 MB/s, but I am unsure whether that is mebibytes or megabytes per second. Anyway, it is fast.

I have been reading a bit about NetBIOS, SMB2/SMB3, CIFS and such, but to be honest it is a bit over my head. For one thing, some articles say CIFS is old, slow and insecure, while others say CIFS is new.


Well, CIFS does not seem to be suffering from the slowness issue. So regardless of whether it's new or old, it works twice as fast or more.


I checked the raw packets going over the network using Wireshark and these are the differences that I found:


In the request packets:

"Flags"

File browser: "Canonicalized Pathnames" and "Case Sensitivity"
mount command: nothing (0x00)

"Flags2"

File Browser:

  • "Unicode String"
  • "Error Code Type"
  • "Extended Security Negotiation"
  • "Long Names Used"
  • "Extended Attributes"
  • "Long Names Allowed"

mount command:
the same except that "Long Names Used" and "Extended Attributes" are not set.

"Multiplex ID"

is 0 in the file browser and 1 in the mount command.

"Requested Dialects"

The bigger difference is in the "Requested Dialects" field, where the file browser's packet lists:

  • Dialect: PC NETWORK PROGRAM 1.0
  • Dialect: MICROSOFT NETWORKS 1.03
  • Dialect: MICROSOFT NETWORKS 3.0
  • Dialect: LANMAN1.0
  • Dialect: LM1.2X002
  • Dialect: DOS LANMAN2.1
  • Dialect: LANMAN2.1
  • Dialect: Samba
  • Dialect: NT LANMAN 1.0
  • Dialect: NT LM 0.12

but the mount command's packet only lists:

  • Dialect: LM1.2X002
  • Dialect: LANMAN2.1
  • Dialect: NT LM 0.12
  • Dialect: POSIX 2

In the response packets:

"Flags"

File Browser:

  • "Request/Response"
  • "Case Sensitivity"

mount command:
only "Request/Response"

"Flags2"

same as in the respective request packets.

"Multiplex ID"

same as in the respective request packets.

"Negotiate Protocol Response"

Now this is where I guess the actual negotiated protocol is listed.

This field contains an entry called "Selected Index", which has these values:

  • File Browser: "8: NT LANMAN 1.0"
  • mount command: "2: NT LM 0.12"

Otherwise the packets are identical (apart from things like session keys etc.).
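
If anyone wants to reproduce the comparison without the Wireshark GUI, capturing just the negotiation is enough; something like this should do it (interface and server IP are placeholders):

  # show the SMB negotiate requests/responses between client and server
  sudo tshark -i eth0 -f "host 192.168.1.10 and (port 445 or port 139)" -Y "smb || smb2"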


Now I am not exactly sure what these values mean or why one is faster than the other, but I guess this is the reason for the performance differences. Maybe there is a way to tell the file browser's implementation of the SMB protocol which dialects it should understand/offer to the server, so that it uses the same ones as when you run the mount -t cifs command.
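
One thing that might be worth trying (I haven't tested it yet): the file browser goes through libsmbclient, and libsmbclient reads smb.conf, so in theory the dialects it offers can be pinned on the client side with something like this in the client's /etc/samba/smb.conf:

  [global]
      # restrict which dialects libsmbclient (and thus the file browser) will negotiate
      client min protocol = SMB2
      client max protocol = SMB3

If the speed changes with those two lines in place, that would at least confirm the negotiated dialect is the culprit.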


So essentially, one request is sending more data (offering more dialects) than the other. Could that be causing extra latency that affects the throughput?

I'm achieving ~90-120 MB/s reads and writes, so I don't know what your issue is. My server is running Ubuntu MATE 16.04 in a Proxmox VM on a ZFS RAID-Z1 array with an SSD cache.
I just apt-get'ed Samba and configured accounts. My client is a desktop running Win7 Ultimate.

My server is a dual-socket Opteron 6276 with 96 GB of RAM. Everything is wired into enterprise switches. I'll test it at 10 Gb tomorrow if you guys want.
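
For what it's worth, "configured accounts" was just the stock routine, nothing exotic; roughly this (share path and username are placeholders):

  # give an existing system user a Samba password
  sudo smbpasswd -a me

  # minimal share definition appended to /etc/samba/smb.conf
  [tank]
      path = /tank/share
      read only = no
      valid users = me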

We are using a Linux server with Linux clients, on stock installations of Ubuntu and other distros.

We are using midrange hardware; you have a much more powerful setup.

Our results for iperf were the same as yours. We need to keep things on 1000Base-T to have a controlled environment.
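
For context, the iperf baseline was just the usual run between client and server, something like this (server IP is a placeholder):

  # on the server
  iperf -s

  # on the client; gigabit should show roughly 940 Mbit/s if the network itself is fine
  iperf -c 192.168.1.10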


So what I'm hearing is: virtualize your file server in order to distribute the single-threaded core load. lol




There is nothing that isn't stock about my half-assed Ubuntu MATE install with an apt-get default Samba server on it. While the bare metal my server is running on is indeed far more powerful than yours overall, my cores are all running at 2.3 GHz and my RAM is a mix of bargain-bin 1333 MHz and 1066 MHz sticks. My VM is allocated 8 GB of RAM and 8 cores.

If it will be of any help I'll test my connection with a Linux distro as my client.

The only reason I mentioned stock is that this is what performance was like out of the box with no tweaking, for both of us, Comfreak and me.

Yes, this will help quite a bit.

What distro(s) would you like me to test?

Ubuntu 16.10, and then after that something that ships the latest version of Samba, like Manjaro.

Ubuntu (desktop and server, and from what I could find FreeNAS 9.x) ships with Samba 4.4.5, whereas Arch and some others ship with 4.5 or 4.6.

I'll check the version I have installed on my server tomorrow night and will post my test results.