TrueNAS speeds drop from 1+ GB/s to 500 MB/s

Are you using TrueNAS Core or Scale?

Have you done any network tuning?

Sorry. TrueNAS Core 12.0-U8. Yes, the network is a layer 2 switch with both the TrueNAS box and the Windows computer in question connected to it, and it has been fully tested with no bottlenecks.

Another oddity is that, sometimes, a transfer will only run at 100MBps, and then the next ones will hit 1GBps before falling off to 500-600MBps.

It is just not very consistent. Here is my SMB config on TrueNAS as well.

interfaces = "10.0.1.100;speed=10000000000,capability=RSS"

I actually have a similar issue, except instead of 1 Gbps I am getting around 100 Mbps. Still trying to solve that one.

Am I reading this correctly? You have 4 drives per vdev … and are running raidz3 across 4 drives (times two, once for each vdev)?

If so, 500-600MB/s is actually pretty impressive.


Feel free to play with the tunables I have collected and tested from others who are tweaking their speeds on 10Gb and faster networking. I get a pretty consistent 1GBps across two interfaces in reads and writes with large files.

Is your Atime turned off on your dataset?
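
You can check it, and turn it off, from the shell; this is just an example with a dataset called tank/data, so swap in your actual pool/dataset name:

# Show whether atime is currently enabled on the dataset
zfs get atime tank/data

# Turn it off so reads stop generating metadata writes
zfs set atime=off tank/data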

Here are the settings I am running in my SMB Auxiliary parameters.

server multi channel support = yes
aio max threads = 100
allocation roundup size = 1048576
aio read size = 1
aio write size = 1
interfaces = "192.168.10.250";speed=10000000000,capability=RSS" 

Here are the tunables I am running. Be careful with these: I am running 96GB of RAM and some of the values take that into account.

Variable						Value	Type

cc_cubic_load							YES			LOADER
cc_htcp_load							YES			LOADER
hint.isp.0.role							2			LOADER
hint.isp.1.role							2			LOADER
hint.isp.2.role							2			LOADER
hint.isp.3.role							2			LOADER
hostcache.expire						1			SYSCTL
hw.mps.max_chains						8192		LOADER
kern.ipc.maxsockbuf						8388608		SYSCTL
kern.ipc.somaxconn						2048		SYSCTL
net.inet.ip.intr_queue_maxlen			2048		LOADER
net.inet.tcp.cc.algorithm				htcp		LOADER
net.inet.tcp.cc.htcp.adaptive_backoff	1			SYSCTL
net.inet.tcp.cc.htcp.algorithm			htcp		SYSCTL
net.inet.tcp.cc.htcp.rtt_scaling		1			SYSCTL
net.inet.tcp.delacktime					20			SYSCTL
net.inet.tcp.delayed_ack				0			SYSCTL
net.inet.tcp.hostcache.expire			1			SYSCTL
net.inet.tcp.mssdflt					1448		SYSCTL
net.inet.tcp.recvbuf_auto				1			SYSCTL
net.inet.tcp.recvbuf_inc				524288		SYSCTL
net.inet.tcp.recvbuf_max				16777216	SYSCTL
net.inet.tcp.recvspace					524288		SYSCTL
net.inet.tcp.sendbuf_auto				1			SYSCTL
net.inet.tcp.sendbuf_inc				16384		SYSCTL
net.inet.tcp.sendbuf_max				16777216	SYSCTL
net.inet.tcp.sendspace					524288		SYSCTL
net.route.netisr_maxqlen				2048		LOADER
vfs.zfs.arc_max							92664000000	SYSCTL
vfs.zfs.dirty_data_max					34359738368	SYSCTL
vfs.zfs.l2arc.rebuild_enabled			1			SYSCTL
vfs.zfs.l2arc_noprefetch				0			SYSCTL
vfs.zfs.l2arc_write_boost				40000000	SYSCTL
vfs.zfs.l2arc_write_max					10000000	SYSCTL
vfs.zfs.zfetch.max_distance				33554432	SYSCTL
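
The SYSCTL-type entries can be trialled live from the shell before you commit them as tunables; the LOADER ones only take effect after a reboot. For example (the arc_max figure here is simply ~90% of my 96GB of RAM, so scale it to your own box):

# Set a runtime sysctl immediately (reverts on reboot unless saved as a tunable)
sysctl vfs.zfs.arc_max=92664000000
sysctl net.inet.tcp.recvbuf_max=16777216

# Check what a value is currently set to
sysctl net.inet.tcp.sendbuf_max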

MBps and Mbps are confusing stuff, you know. I hope we are not all mixing up the two measurements. LOL


As above, check atime (access time) is turned off as otherwise you’ll be incurring writes when files are read.

Which, if you’re using SMR drives (which you shouldn’t really) will potentially incur read-modify-write at the drive level.

I’d check the Seagate drives you’re using are CMR and not SMR.
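
From the TrueNAS Core shell you can grab the exact model strings to look up against Seagate's CMR/SMR lists (device names like da0 will differ on your system):

# List attached disks with their model strings
camcontrol devlist

# Identify info (model, firmware, capacity) for a single drive
smartctl -i /dev/da0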


Turning off atime seemed to help a little. I am running at ~915MBps (megabytes per second, not Mbps, which is megabits per second - www.actcorp.in/blog/what-is-the-difference-between-Mbps-and-MBps ) and then after a few seconds it drops to as low as 500MBps. I should have stated that this is a read from the TrueNAS to the PC.

The drives appear to be CMR. They are Seagate Exos ST10000NM0086-2A helium enterprise SATA-3 10TB drives.

I will try some of the tweaks. I had done some of this early on but I kept getting SMBD alarms. Everything was working but the alarms were persistent.

Thanks.

What are the SMBD alarms you are getting?

I honestly do not remember. But as soon as I removed the following parameters, no further issues:

server multi channel support = yes
aio max threads = 100
aio read size = 1
aio write size = 1
max xmit = 65535

Have you done any network testing with iPerf to make sure it’s not a networking problem?

The network is not the issue. The iperf stats are perfect.

C:\Users\barryyancey\Documents\iperf-3.1.3-win64>iperf3.exe -c 10.0.1.100
Connecting to host 10.0.1.100, port 5201
[ 4] local 10.0.1.99 port 50641 connected to 10.0.1.100 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 823 MBytes 6.90 Gbits/sec
[ 4] 1.00-2.00 sec 1.09 GBytes 9.38 Gbits/sec
[ 4] 2.00-3.00 sec 1.10 GBytes 9.43 Gbits/sec
[ 4] 3.00-4.00 sec 1.09 GBytes 9.37 Gbits/sec
[ 4] 4.00-5.00 sec 1.09 GBytes 9.33 Gbits/sec
[ 4] 5.00-6.00 sec 1.09 GBytes 9.38 Gbits/sec
[ 4] 6.00-7.00 sec 1.10 GBytes 9.44 Gbits/sec
[ 4] 7.00-8.00 sec 1.09 GBytes 9.36 Gbits/sec
[ 4] 8.00-9.00 sec 1.09 GBytes 9.36 Gbits/sec
[ 4] 9.00-10.00 sec 1.09 GBytes 9.40 Gbits/sec


[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 10.6 GBytes 9.14 Gbits/sec sender
[ 4] 0.00-10.00 sec 10.6 GBytes 9.14 Gbits/sec receiver


Server listening on 5201

Accepted connection from 10.0.1.99, port 50640
[ 5] local 10.0.1.100 port 5201 connected to 10.0.1.99 port 50641
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 823 MBytes 6.90 Gbits/sec
[ 5] 1.00-2.00 sec 1.09 GBytes 9.38 Gbits/sec
[ 5] 2.00-3.00 sec 1.10 GBytes 9.43 Gbits/sec
[ 5] 3.00-4.00 sec 1.09 GBytes 9.37 Gbits/sec
[ 5] 4.00-5.00 sec 1.09 GBytes 9.33 Gbits/sec
[ 5] 5.00-6.00 sec 1.09 GBytes 9.38 Gbits/sec
[ 5] 6.00-7.00 sec 1.10 GBytes 9.44 Gbits/sec
[ 5] 7.00-8.00 sec 1.09 GBytes 9.36 Gbits/sec
[ 5] 8.00-9.00 sec 1.09 GBytes 9.36 Gbits/sec
[ 5] 9.00-10.00 sec 1.09 GBytes 9.40 Gbits/sec
[ 5] 10.00-10.00 sec 195 KBytes 9.38 Gbits/sec


My apologies, I double-checked and I only have one vdev.

Any other ideas?

I know that some people say the SMB multi-channel might only work with multiple NICs. Is that accurate?

My TrueNAS server has a Mellanox ConnectX-3 40Gb.
My Windows 10 Pro desktop has the same Mellanox ConnectX-3, but connected at 10Gb on both ports. My switch only has one 40Gb connection and 8 10Gb ports. On my sends and receives the file traffic is split across both interfaces.
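
If you want to confirm multi-channel is actually in use on your Windows box, PowerShell will show the active channels while a transfer is running:

# Active SMB connections, one line per channel/interface in use
Get-SmbMultichannelConnection

# Client NICs SMB considers usable, with their RSS/RDMA capability
Get-SmbClientNetworkInterface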

As far as other things to try, any error logs would be helpful.
Maybe run some benchmarks on the ZFS pool from the CLI and see if the pool itself is being slow.
I am not home atm and don't remember the exact CLI benchmark commands off the top of my head, but something along the lines below should be close.
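
A rough sketch, assuming a dataset mounted at /mnt/tank/test and that fio is available (note a dd test against an all-zero or compressible file can be skewed by compression, so use a real large file):

# Sequential read of an existing large file (bigger than RAM, so ARC doesn't hide the disks)
dd if=/mnt/tank/test/bigfile.bin of=/dev/null bs=1m

# Or with fio: 10GiB sequential read at 1MiB blocks in that dataset
fio --name=seqread --rw=read --bs=1M --size=10g --directory=/mnt/tank/test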

Question: how are the drives hooked up? I just noticed you have an R720xd like I do. I also have a Dell MD1200 connected to the R720xd.


@Barry_Yancey what’s your record size?

I have a 720XD with SATA3 drives in the chassis running 6Gbps. I have a 730XD with SAS3 drives in the chassis running 12Gbps. The transfer is slightly better on the 730XD but it is also running TrueNAS Scale (latest release) and the 720XD is on TrueNAS Core (latest release).

Both servers and the Windows machine are connected through Mellanox cables running 10Gbps.

I have tried benchmarking the network and everything looks fine. I do not think the network has anything to do with it.

It has to be something with Windows or the TrueNAS software. Maybe RAM caching? Something that would make the first GB or so run perfectly and then drop off. Admittedly, it is a very small drop-off, but I was just wondering if there is a tweak that I missed.
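
One way to check whether that fast first GB is just ARC is to watch the pool while a transfer runs; if there are almost no disk reads at first and read traffic only ramps up when the speed falls, it is the cache doing the early work (assuming the pool is named tank):

# Per-vdev read/write bandwidth, refreshed every second during the copy
zpool iostat -v tank 1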

128KiB.

The best I can come up with is that something is causing Samba to make ZFS do more housekeeping than expected. Your disks, on paper, are capable of 250MB/s sequential throughput each; with raidz3 over 8 disks (5 data disks plus 3 parity) that should equate to a max of about 1250MB/s of data, just a smidge over your network capacity.

Can you try enabling FTP on TrueNAS and using FileZilla or Total Commander for some large file transfers, and see if the speed drops after a while?

If yes, it’s likely down to ZFS; if no, there are probably Samba settings that can be tweaked to allow more writeback buffering to happen.

Is the HBA in IT Mode?

If you have Windows 10 Pro you could try NFS and see what the speed results are.
If you don’t have Windows 10 Pro, you can set up iSCSI and try some file transfers. Your writes are going to suck, so don't pay attention to them; your reads should be fast.
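
For the NFS route, something like this should get the Windows client going; the feature names are from recent Windows 10 builds and the export path is just an example, so double-check both against your setup:

# Enable the NFS client (elevated PowerShell), then mount the export to a drive letter
Enable-WindowsOptionalFeature -Online -NoRestart -FeatureName ServicesForNFS-ClientOnly, ClientForNFS-Infrastructure
mount -o anon \\10.0.1.100\mnt\tank\share Z: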

This topic was automatically closed 273 days after the last reply. New replies are no longer allowed.