TrueNAS Core - slow read speeds

Read speeds on my TrueNAS Core system are slower than I expected. I just installed 8 × 8 TB HDDs and built a RAIDZ. I created an SMB share on this new pool and moved a couple of GB of video files to it. All of that data seems to be cached in RAM, at least that's what the visualization on the dashboard suggests. So if I now copy the data back to my Windows machine, it should be read from RAM, and the transfer speed should be limited by the 10 Gbit/s network (2 × 10 Gbit/s as a LAG to the 10 G switch, and from there 10 Gbit/s to my workstation). But I'm copying at about 500 MB/s ≈ 4 Gbit/s. CrystalDiskMark gives me similar results. What am I doing wrong? Can I somehow get the full 10 G?
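For reference, the unit math I'm basing that on, just as a sanity check that I'm not mixing up megabytes and megabits:

```python
# Sanity check: 500 MB/s observed vs. a 10 Gbit/s link.
observed_mb_per_s = 500                         # MB/s shown by Windows during the copy
observed_gbit = observed_mb_per_s * 8 / 1000    # 1 MB/s = 8 Mbit/s = 0.008 Gbit/s

print(f"observed: {observed_gbit:.1f} Gbit/s")        # -> 4.0 Gbit/s
print(f"link utilization: {observed_gbit / 10:.0%}")  # -> 40% of 10 Gbit/s
```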

Your problem is SMB. NFS is (much) faster, but since it wasn't invented at M$, the Win-OS doesn't speak it natively :roll_eyes:

What you could try is booting the Win-OS machine from a Linux live CD, setting up iperf3 on both sides (TrueNAS + client), and then running some iperf3 tests. Another option is the Phoronix Test Suite.
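If you'd rather script the tests than type them by hand, here's a rough sketch for the client side (it assumes `iperf3 -s` is already running on the TrueNAS box, and the server IP is a placeholder you'd swap for your own):

```python
import json
import subprocess

SERVER = "192.168.1.10"  # placeholder: the TrueNAS box, running `iperf3 -s`

def run_iperf3(reverse: bool) -> float:
    """Run one iperf3 TCP test and return throughput in Gbit/s."""
    cmd = ["iperf3", "-c", SERVER, "-J"]  # -J: emit JSON results
    if reverse:
        cmd.append("-R")                  # -R: server sends, client receives
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    stats = json.loads(result.stdout)
    return stats["end"]["sum_received"]["bits_per_second"] / 1e9

print(f"client -> server: {run_iperf3(reverse=False):.2f} Gbit/s")
print(f"server -> client: {run_iperf3(reverse=True):.2f} Gbit/s")
```

If this already tops out well below 10 Gbit/s, the pool and SMB are off the hook and you're looking at a network problem.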

HTH!

Weird, it's the exact opposite for me. I can push 15 Gbit/s over SMB (limited by the SSDs) with no issue, but NFS is dog slow no matter what.

OP, you might be expecting that cache to work a lot better than it does; the ZFS ARC is an adaptive read cache, not a guarantee that everything you just wrote stays resident in RAM.

I’m going to guess that there are multiple issues combining here. First of all, what read/“hash” speed do you see when scrubbing? Samba 4.13 and below only use software crypto, which also limits performance. And what kind of performance do you see using iperf3 (I don’t know if that’s packaged for TrueNAS Core)?

No, to the best of my knowledge, iperf3 is not packaged for TrueNAS Core. What is that read/“hash” speed that you mentioned?
For further testing, I think I will put 4 PCIe 4.0 SSDs on an ASUS Hyper M.2 card and stick that in tomorrow. Then I will see if the speeds are better from/to a RAIDZ of 4 fast SSDs. If that gives me 10 G speeds, I can rule out SMB and the network as the problem here.

Run a scrub on the pool and look at the transfer rate it reports; that’s roughly going to be your peak “raw” read speed.
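If you don't want to babysit the console while it runs, a throwaway sketch like this will poll the rate for you (it assumes a pool named `tank`; note the format of the scan line varies a bit between OpenZFS versions, so the regex may need tweaking):

```python
import re
import subprocess
import time

POOL = "tank"  # placeholder: your pool name

# Start a scrub, then poll the "scan:" line of `zpool status`,
# which reports the current scan rate while the scrub runs.
subprocess.run(["zpool", "scrub", POOL], check=True)
while True:
    status = subprocess.run(["zpool", "status", POOL],
                            capture_output=True, text=True).stdout
    rate = re.search(r"scanned at ([\d.]+[KMGT]?)/s", status)
    if rate:
        print(f"scrub read rate: {rate.group(1)}B/s")
    if "scrub repaired" in status:
        break  # scrub finished
    time.sleep(10)
```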

I created the SSD pool out of 4 Seagate FireCuda 530 SSDs. Everything else is the same. If I copy data to the SSD pool from my Windows machine via SMB, I get 8 Gbit/s transfer speeds. If I copy the same data back to the Windows machine, I get about 4.8 Gbit/s. That’s a little faster, but it still doesn’t make sense to me. As more than 230 GB of RAM is free, to my understanding 100 % of the data should be written to and read from the RAM cache…
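To check whether reads are actually being served from the ARC, I guess I can watch the hit/miss counters on the box itself. A quick sketch (untested) using the FreeBSD sysctls that TrueNAS Core exposes:

```python
import subprocess

def arcstat(name: str) -> int:
    """Read one ZFS ARC counter via FreeBSD sysctl (TrueNAS Core)."""
    out = subprocess.run(["sysctl", "-n", f"kstat.zfs.misc.arcstats.{name}"],
                         capture_output=True, text=True, check=True).stdout
    return int(out)

hits, misses = arcstat("hits"), arcstat("misses")
print(f"ARC hit ratio: {hits / (hits + misses):.1%}")
print(f"ARC size:      {arcstat('size') / 2**30:.1f} GiB")
```

If the hit ratio stays high during the copy and I'm still stuck around 5 Gbit/s, the bottleneck would be SMB or the network rather than the disks.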