I am running Arch (kernel 5.1.15) and have been playing around with Samba today, making some performance discoveries. I'm hoping someone here can help.
These are the specs:
TR 2920x - ASRock X399 Taichi
32GB quad channel RAM
Intel X520 10Gb NIC
Win 10 VM with SR-IOV VF from the Intel X520 NIC. Networking is done through the X520 by the VF passthrough.
FreeNAS 11.2 U4
Dell R710 - Dual E5620 Xeons
4x 10TB WD White in stripe of mirrors
128GB of RAM
Standard Broadcom 1Gb NIC
Switch: Mikrotik CSS326
Up until this point, I was mounting the FreeNAS Samba share from KDE’s Dolphin. I was running a single 8TB WD and I was getting read speeds of ~76MB/s.
I thought it was normal because the link is only GbE and the 8TB drive was over 85% full.
Today I finally completed the switch from the 1x 8TB to the 4x 10TB pool.
I was getting the same numbers.
Windows, meanwhile, gets 111MB/s reads and writes, which is what is normally expected from a GbE link.
I then tried mounting the shares through fstab and, lo and behold, my network transfer speeds rose to 111-112MB/s. Until I disabled write caching, that is. (Those are numbers I measured at the NIC; Dolphin reports speeds of 1.1GB/s+ because of the caching.)
When mounting the SMB share without caching (the cache=none option), performance drops to 62MB/s. I don't get why performance drops, since everything in the chain (NVMe, HDDs, NICs, network) clearly supports speeds of 112MB/s+.
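For reference, the two fstab entries I'm comparing look roughly like this (the server address, share name, and credentials file path below are placeholders, not my actual setup):

```
# /etc/fstab -- kernel CIFS mount with default (cached) behavior: ~111MB/s
//192.168.1.100/tank  /mnt/tank  cifs  credentials=/etc/cifs-creds,vers=3.0,_netdev  0  0

# Same share with client-side caching disabled: drops to ~62MB/s for me
//192.168.1.100/tank  /mnt/tank  cifs  credentials=/etc/cifs-creds,vers=3.0,cache=none,_netdev  0  0
```

The only difference is the cache=none option, which disables the client page cache for the mount.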
Does anyone know why this happens? Can it be remedied? I don’t like write caching to network devices.
I am already using Samba for the Windows clients, so I wanted to avoid having to run two protocols.
I mean, more power to you if you want to do it that way. I would at least try nfs just to see if it exhibits the same problem.
CIFS would be the proper way if you actually need to use Samba, but NFS is the bog standard.
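If you do want to try NFS side by side, a test entry in fstab on the Arch box would look something like this (server address and export path are placeholders, and the dataset has to be shared over NFS in the FreeNAS UI first):

```
# NFS mount of the same dataset for comparison; large rsize/wsize for streaming transfers
192.168.1.100:/mnt/tank  /mnt/nfs-test  nfs  vers=3,rsize=1048576,wsize=1048576,_netdev  0  0
```

Even if you don't keep it, it would tell you whether the slowdown is specific to the SMB path.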
I think your speed issue may be due to FUSE. If you let the system mount the share, that should go through kernel space. If you mount it manually, I believe that goes through user space. There is a lot of work being done on FUSE right now.
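One quick way to check which path a given mount is taking is to look at its filesystem type (the mountpoint below is a placeholder):

```shell
# A kernel CIFS mount shows FSTYPE "cifs"; a FUSE-backed mount shows "fuse.*".
# Note that Dolphin's built-in smb:// browsing goes through KIO in userspace
# and is not a mount at all, so it won't appear in this output.
findmnt -o TARGET,FSTYPE,OPTIONS /mnt/tank

# Or list every mount the kernel knows about and filter for cifs/fuse entries:
grep -E 'cifs|fuse' /proc/mounts
```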
I did mount it with fstab and cache=none and the performance was still 62MB/s. Isn’t that the system mounting the share?
Yeah, that would be correct. I have no idea why it is so slow then.
You are not the only one. I remember a thread from years ago (can't find it now) where we all chimed in with our speeds over Samba. I had a performance hit of about 50% compared to CIFS, others had something like a 30-40% hit like you, and others still had next to no performance hit. I tried all sorts of things in the .conf to try and better it, to no avail. I remember someone more knowledgeable than me agreeing with the notion that there is an abstraction layer with Samba that there isn't with CIFS; that should make it easier for clients to just hurl files at the server without knowing anything about what kind of file system it has. Your hardware seems capable, so I can't see how that should have a whole lot to do with it.
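For what it's worth, the kinds of smb.conf knobs I remember trying were along these lines. These are real Samba options, but the values here are just examples, and none of it moved the needle for me:

```
[global]
    # Disable Nagle's algorithm and enlarge the socket buffers
    socket options = TCP_NODELAY SO_RCVBUF=131072 SO_SNDBUF=131072
    # Server-side async I/O thresholds, in bytes
    aio read size = 16384
    aio write size = 16384
    # Make sure clients can negotiate SMB3 and its multi-credit transfers
    server max protocol = SMB3
```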
NB: Broadcom NIC on the server side. Do they have the same troublesome history with NICs as they do with wireless on Linux?
Yeah, Broadcom is not really recommended for FreeBSD-based OSes as far as I am aware, but I never had any particular problems with it. I will try a Mellanox 10Gb card at some point, but I don't have a long enough cable yet. Should be pretty interesting to compare caching vs. no caching at 10Gb.