I have a Dell 720XD with 8 Seagate EXOS 10TB SATA-3 drives running RAIDZ3 in 2 vDEVs. I am using an Intel NIC with 2 SFP+ and 2 GbE ports. I have a 10GTek 10Gbps card in a Windows 10 Pro workstation with a Core i7 4990K, 32GB RAM, and a Samsung 870 Pro 512 NVME on a Gigabyte M5 Gaming motherboard. Both are connected to a Mikrotik CRS309-1G-8S+ switch. Jumbo frames have been enabled on the switch and the MTU has been adjusted on the Windows NICs as well.
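One quick sanity check for the jumbo-frame setup: from the Windows box you can verify the MTU end-to-end with a don't-fragment ping. This is a sketch, assuming a 9000-byte MTU; the IP address is a placeholder for your TrueNAS server.

```shell
:: Verify jumbo frames end-to-end from the Windows workstation.
:: 8972-byte payload + 20-byte IP header + 8-byte ICMP header = 9000-byte MTU.
:: Replace 192.168.1.100 with your TrueNAS IP.
ping -f -l 8972 192.168.1.100
```

If any hop in the path is still at a 1500-byte MTU you will get "Packet needs to be fragmented but DF set" instead of replies.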
I am getting great transfer speeds on files up to about 1GB. Once a transfer runs for a few seconds, the speed drops to about 500-600MB/s. That is still awesome, but why can it not sustain the higher rate?
I have tried tweaking everything for SMB-Multichannel but it does not seem to help.
Sorry. TrueNAS Core-12.0-U8. Yes, the network is a layer 2 switch with both the TrueNAS and Windows computer in question connected to it and it has been fully tested with no bottlenecks.
Another oddity is that, sometimes, a transfer will only run at 100MB/s, and then the next one will hit 1GB/s before falling off to 500-600MB/s.
It is just not very consistent. Here is my SMB config on TrueNAS as well.
Feel free to play with the tunables I have collected and tested from others who are tweaking their speeds on 10Gb-and-above networking. I get a pretty consistent 1GB/s across two interfaces on reads and writes with large files.
Is your Atime turned off on your dataset?
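For reference, checking and disabling atime from the TrueNAS shell looks something like this; the pool/dataset name here is just an example, and the dataset options in the GUI do the same thing:

```shell
# Check whether atime is currently enabled on the dataset
# ("tank/share" is a placeholder; use your own pool/dataset).
zfs get atime tank/share

# Disable it so every read doesn't trigger a metadata write.
zfs set atime=off tank/share
```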
Here are the settings I am running in my SMB Auxiliary parameters.
server multi channel support = yes
aio max threads = 100
allocation roundup size = 1048576
aio read size = 1
aio write size = 1
interfaces = "192.168.10.250;speed=10000000000,capability=RSS"
Here are the tunables I am running. Be careful with them: I am running 96GB of RAM and some of the tunes take that into account.
Turning off atime seemed to help a little. I am now running at ~915MB/s (megabytes per second, not Mbps, which is megabits per second - www.actcorp.in/blog/what-is-the-difference-between-Mbps-and-MBps ), and then after a few seconds it drops to as low as 500MB/s. I should have stated that this is a read from the TrueNAS to the PC.
The drives appear to be CMR. They are Seagate EXOS ST10000NM0086-2A Helium Enterprise SATA-3 10TB drives.
I will try some of the tweaks. I had done some of this early on but I kept getting SMBD alarms. Everything was working but the alarms were persistent.
My TrueNAS server has a Mellanox ConnectX-3 40Gb NIC.
My Windows 10 Pro desktop has the same Mellanox ConnectX-3, but connected at 10Gb on both ports. My switch only has one 40Gb connection and 8 10Gb ports. On sends and receives, the file traffic is split across both interfaces.
As far as other things to try, any error logs would be helpful.
Maybe run some benchmarks on the ZFS pool from the CLI and see if the pool itself is being slow.
I am not home atm and don't remember the CLI benchmark commands off the top of my head.
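For when you're back at the box, a rough sequential throughput test against the pool (bypassing the network entirely) can be sketched with dd. The dataset path is a placeholder; run it against a test dataset with compression off, or the zeros will compress away and inflate the numbers.

```shell
# Rough sequential write test straight to the pool
# ("/mnt/tank/testfile" is an example path; use a dataset with compression=off).
dd if=/dev/zero of=/mnt/tank/testfile bs=1M count=10240

# Rough sequential read test; read a file much larger than RAM
# (or remount the dataset first), otherwise you mostly measure the ARC.
dd if=/mnt/tank/testfile of=/dev/null bs=1M

# Clean up the test file afterwards.
rm /mnt/tank/testfile
```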
Question: how are the drives hooked up? I just noticed you have an R720xd like I do. I also have a Dell MD1200 connected to my R720xd.
I have a 720XD with SATA3 drives in the chassis running 6Gbps. I have a 730XD with SAS3 drives in the chassis running 12Gbps. The transfer is slightly better on the 730XD but it is also running TrueNAS Scale (latest release) and the 720XD is on TrueNAS Core (latest release).
Both servers and the Windows machine are connected through Mellanox cables running 10Gbps.
I have tried benchmarking the network and everything looks fine. I do not think the network is the issue.
It has to be something with Windows or the TrueNAS software. Maybe RAM caching? Something that would make the first GB or so run perfectly and then drop off. Admittedly, it is very little drop-off, but I was just wondering if there is a tweak that I missed.
Best I can come up with is that something is causing Samba to make ZFS do more housekeeping than expected. Your disks are, on paper, capable of 250MB/s sequential write throughput each; with RAIDZ3 over 8 disks you have 5 data disks, which should equate to a max of about 5 x 250 = 1250MB/s of data, just a smidge over your network capacity.
Can you try enabling FTP on TrueNAS and try using Filezilla or TotalCommander for some large file transfer, and see if the speed drops after a while?
If yes, it's likely due to ZFS; if no, there are probably Samba settings that can be tweaked to allow more writeback buffering to happen.
If you have Windows 10 Pro you could try NFS and see what the speed results are.
If you don't have Windows 10 Pro, you can set up iSCSI and try some file transfers. Your writes are going to suck, so don't pay attention to them; your reads should be fast.