Slow performance with TrueNAS on 10G network

Howdy all! I’m having some performance issues with my NAS. Fairly new to this stuff so I’m asking around a few places to try and cast a wide net.

I recently built a TrueNAS Core system out of my old editing machine. I have 8x8TB Seagate drives (4 are new IronWolf disks, 4 are an older model, ST8000VN0002, which I previously used). They’re in a RAIDZ2 configuration.

NAS Specs:
CPU: i7-4930K
RAM: 32GB of Ripjaws DDR3 RAM
Mobo: ASRock X79 Extreme6
NIC: Intel X540-T1 (one in the server, one in my desktop)
Switch: Netgear XS708E

Cables: Cat6 throughout, a 6-foot run from the server to the switch and a 25-foot run from the desktop.

I’m getting about 180MB/s max write speed to my SMB shares from Windows, which seems normal. However, my read speeds don’t go above 220MB/s. I feel the read speed should be much higher for a RAIDZ2 on a 10G network. For reference, the client desktop in question is using very fast NVMe storage, so that’s not the bottleneck either.
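For context, a quick back-of-the-envelope check (a Python sketch; the 180/220MB/s figures are the ones from above, and it ignores SMB/TCP protocol overhead, so the real ceiling is a bit lower):

```python
# Rough sanity check: how much of a 10GbE link do these SMB speeds actually use?
LINK_GBPS = 10                            # nominal link speed, gigabits per second
line_rate_mbytes = LINK_GBPS * 1000 / 8   # = 1250 MB/s theoretical maximum

for label, speed in [("write", 180), ("read", 220)]:
    pct = 100 * speed / line_rate_mbytes
    print(f"{label}: {speed} MB/s is about {pct:.0f}% of 10GbE line rate")
```

So both directions are sitting well under 20% of what the link can carry, which is why it feels like something other than the wire is the limit.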

I’ve checked all the switch settings, tried different cables, and ran an iperf3 test that got 9.5Gb/s both ways, so it’s definitely not a network issue.

I’ve also tried enabling jumbo frames in Windows, as well as increasing the RSS queues and the send and receive buffers to their maximums. I’m currently waiting on some more RAM to see if it’s a cache issue.

One odd thing is that my CPU never goes above 5% even during heavy transfers.

Can anyone help solve this mystery? Any advice is greatly appreciated.

You’re most likely seeing uneven load and access times because you’re mixing different models, and the older HDDs seem to be PMR ones, so performance isn’t likely to be good. What’s the performance like if you create a RAID-Z array of just your IronWolf HDDs?

Copy a 20GB file off the share to your client PC. Then do it again and again. The repeats should hit 600MB/s assuming your client PC has an SSD.

If you don’t get much faster than 220MB/s, then it could be an SMB or networking problem. Caching should take care of any hard drive speed problems, at least for a 20GB file on your system.
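If it helps, here’s a local Python sketch of that same “copy it twice” idea: the second read of a file is usually served from the OS page cache rather than the disks (the 64 MiB file size here is made up for the demo, standing in for the 20GB copy):

```python
import os, time, tempfile

# Hypothetical local version of the repeat-copy test: time two reads of the
# same file. The second pass typically hits the OS page cache, not the disks.
def timed_read(path):
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1 << 20):   # read in 1 MiB chunks until EOF
            pass
    return time.perf_counter() - start

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(64 << 20))   # 64 MiB of random data as a test file
    path = f.name

cold = timed_read(path)   # first pass: may actually touch the disk
warm = timed_read(path)   # second pass: usually cached, usually much faster
print(f"first read: {cold:.3f}s, second read: {warm:.3f}s")
os.remove(path)
```

If the repeats over SMB don’t speed up the way the local repeats do, that points at the network/SMB path rather than the drives.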

Shoot, I didn’t realize they were PMR!

I already have a few TB of stuff on the NAS, so I’d need to find somewhere to move it temporarily which would be a hassle. Maybe I should just bite the bullet and get all new drives?

Would a RAIDZ2 of 4x IronWolf drives be roughly 2x the read performance of a single drive?
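Roughly, yes: for streaming reads a RAID-Z2 vdev pulls data from about N − 2 disks’ worth of the stripe at once. A hedged back-of-the-envelope sketch (the 200MB/s per-disk figure is an assumption for an 8TB IronWolf, and real results vary with record size, fragmentation, and where on the platters the data sits):

```python
# Back-of-envelope RAID-Z2 streaming-read estimate (a sketch, not a benchmark):
# an N-disk Z2 vdev holds two parity disks' worth of redundancy, so sequential
# reads stream from roughly N - 2 disks' worth of data in parallel.
def raidz2_read_estimate(disks, single_disk_mbs):
    parity = 2
    return (disks - parity) * single_disk_mbs

SINGLE = 200  # assumed ~200 MB/s streaming speed for one 8TB IronWolf

print(raidz2_read_estimate(4, SINGLE))  # 4-wide Z2 -> ~2x one disk
print(raidz2_read_estimate(8, SINGLE))  # 8-wide Z2 -> ~6x one disk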

Long ago I remember having poor file copy performance over the network when I upgraded to 10gig; if I recall correctly, it was a Windows setting, “Receive Side Scaling”, that needed to be disabled in order to get reasonable speeds.

Have you tried using a third-party file copy tool to rule out Windows, such as FastCopy? That’s how I eventually tracked down my issue, because FastCopy ignored some of the Windows settings.
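If you’d rather toggle RSS from the command line than dig through the adapter’s Advanced properties dialog, something like this should work from an elevated PowerShell prompt (a sketch; the adapter name “Ethernet” is a placeholder, check yours with Get-NetAdapter first):

```powershell
# Show the current RSS state for every adapter
Get-NetAdapterRss

# Disable RSS on the 10GbE adapter ("Ethernet" is a placeholder name)
Disable-NetAdapterRss -Name "Ethernet"

# Re-enable it later if needed:
# Enable-NetAdapterRss -Name "Ethernet"
```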

PMR drives are the faster variety of hard drive; they are what you want. It is SMR HDDs that are the poorer-performing ones.

Wait, I think that was it!!! I disabled Receive Side Scaling and suddenly reads jumped to 550MB/s and writes to 450MB/s!!! Do those seem like reasonable speeds for an 8-disk RAIDZ2?

P.S. Regarding your copy tool question, my go-to aside from Windows is TeraCopy, but after changing the RSS setting I still got worse performance than Windows, at 400MB/s read and 250MB/s write.

Good. Those speeds seem fine, perhaps a touch slower than theoretical, but not by much; if you were copying multiple smaller files instead of one large file, I’d expect results like that.
Another experiment you could try is turning off Windows Defender’s real-time protection to see if you gain any speed (obviously try at your own risk, viruses yada yada). If so, you could chalk up the slightly depressed results to Windows scanning the files before copying them.

Interesting that TeraCopy was slower than “native”; usually it’s pretty good. FastCopy has always been the fastest in my experience, though.

Very interesting! I disabled interrupt moderation on the NIC in Windows, and the read speed increased to over 800MB/s with a 10GB file. However, write speed dropped back down to 250MB/s.

I imagine I’d have to do something on the NAS side to correct this. Is disabling interrupt moderation recommended?

Hmm? They’re 5900RPM (from what I could find using Google), so they’re probably quite slow, and I found several posts suggesting that performance wasn’t great with these drives.

As I suggested, a network problem. Any further speed problems are likely down to actual hard drive speeds. If you’re copying a 100GB file onto your NAS, you may notice it’s very fast for a bit and then suddenly drops to a lot slower. That happens as the various caches fill up. In TrueNAS, asynchronous writes are faster due to caching, whereas synchronous writes run at hard drive speed the whole way.
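A toy model of that “fast, then slow” pattern, if it helps to see the arithmetic (all numbers here are made-up assumptions, not measurements from your system):

```python
# Toy model of a large async copy: the first chunk lands in the RAM write
# cache at near line rate, then throughput falls to disk speed once it fills.
def effective_mbs(total_gb, cache_gb, cache_mbs, disk_mbs):
    total_mb = total_gb * 1000
    cached = min(cache_gb * 1000, total_mb)            # portion absorbed by RAM
    secs = cached / cache_mbs + (total_mb - cached) / disk_mbs
    return total_mb / secs                             # average over whole copy

# Assumed numbers: ~8 GB of RAM absorbing writes at 1000 MB/s, disks at 400 MB/s.
print(f"{effective_mbs(100, 8, 1000, 400):.0f} MB/s average over a 100 GB copy")
```

The point of the model: on a 100GB copy the cache only flatters the average a little, so a consistent speed for the whole file (as reported below) mostly reflects real disk throughput.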

Testing it out, there seems to be no significant speed drop with a 100GB file. Speeds are consistent throughout, with writes at 400MB/s and reads at 500MB/s. So I think it’s all good now.

You’ve nailed it. Those are the sorts of speeds I get. I don’t think it will go much faster until we use the next generation of server tech.

“…these are the voyages of the Truenas Enterprise” :stuck_out_tongue:

Glad it’s finally working to its full potential. Thanks, everyone, for your input and advice!