Help with FreeNAS cache

Ok yeah I see that Fusion IO didn’t maintain their drivers so they were removed.

To eliminate network issues, can you test throughput with iperf3? I want to avoid trying to test everything at once because then it’s hard to tell where the problem is.
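
For reference, it only takes one listener on the FreeNAS side and one client on Windows; the -R flag reverses direction so the FreeNAS box does the sending:

# on the FreeNAS box
iperf3 -s

# on the Windows client (substitute your server's address)
iperf3 -c 10.1.20.250 -R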

Here is my iperf3 output

C:\Users\Ron\Desktop\iperf-3.1.3-win64>iperf3 -c 10.1.20.250 -R
Connecting to host 10.1.20.250, port 5201
Reverse mode, remote host 10.1.20.250 is sending
[  4] local 10.1.20.1 port 62218 connected to 10.1.20.250 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  1.08 GBytes  9.24 Gbits/sec
[  4]   1.00-2.00   sec  1.12 GBytes  9.62 Gbits/sec
[  4]   2.00-3.00   sec  1.13 GBytes  9.74 Gbits/sec
[  4]   3.00-4.00   sec  1.09 GBytes  9.37 Gbits/sec
[  4]   4.00-5.00   sec  1.13 GBytes  9.73 Gbits/sec
[  4]   5.00-6.00   sec  1.13 GBytes  9.75 Gbits/sec
[  4]   6.00-7.00   sec  1.15 GBytes  9.92 Gbits/sec
[  4]   7.00-8.00   sec  1.15 GBytes  9.89 Gbits/sec
[  4]   8.00-9.00   sec  1.16 GBytes  9.96 Gbits/sec
[  4]   9.00-10.00  sec  1.16 GBytes  9.99 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  11.3 GBytes  9.72 Gbits/sec    0             sender
[  4]   0.00-10.00  sec  11.3 GBytes  9.72 Gbits/sec                  receiver

iperf Done.

Network looks good then :ok_hand:

Is it just the nature of TCP/IP and its overhead that small files are slow?
I really wish InfiniBand worked on FreeNAS; from my reading, a lot of that overhead goes away.

Do a quick look at netstat -p tcp -s and pastebin it. Things should not be that slow. You might also try watching gstat or zpool iostat -v POOLNAME 1 while doing those transfers to get a sense of the number of IOPS happening. I might have missed it, but I don’t think you’ve described your pool layout yet either.
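
Concretely, something like this while a copy is running (POOLNAME is a placeholder for your pool name):

# TCP statistics; look for retransmits and out-of-order segments
netstat -p tcp -s

# per-disk busy percentage and latency, refreshed every second
gstat -p -I 1s

# per-vdev IOPS and bandwidth at one-second intervals
zpool iostat -v POOLNAME 1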

Do you think it could be just the hard drive you’re copying things to that’s slow for small IOs/many IOPS?

Maybe I am off base, but I was thinking that with a faster connection I could mount a drive and be close to native SSD speeds. All of this is a bit of experimentation/learning on my part. Now that I have worked in some tweaks, I'll see how it works for my XCP-NG storage.

My ZFS pool is 3 vdevs, 6 drives in each, in RAIDZ1. I am using the Fusion IO for testing, copying files to and from it, so the bottleneck should be the zpool.
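
For reference, the pool is the equivalent of something like this at creation time (da0 through da17 are placeholder device names, not my actual disks):

zpool create Storage \
    raidz1 da0  da1  da2  da3  da4  da5 \
    raidz1 da6  da7  da8  da9  da10 da11 \
    raidz1 da12 da13 da14 da15 da16 da17

From what I've read, each RAIDZ vdev gives you roughly the random IOPS of a single drive, so for small files this pool behaves more like three disks than eighteen.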

It shouldn’t be slower, that’s for sure. One thing that’s working against you for small files is probably the 9000 MTU. Try 1500?
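
If you want a quick A/B test, you can flip it from the shell without touching the saved config; ix0 below is just a placeholder for whatever your 10GbE interface is called:

# temporary change, reverts on reboot; check ifconfig for your actual NIC name
ifconfig ix0 mtu 1500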


I ran zpool iostat 10 while copying a mix of small and larger files.
One thing I have noticed while testing is that the drives never seem to go above 60% to 70% utilization in gstat and Netdata; even watching the hard drive activity lights, they never seem to stay solid.
I just have this feeling it should be faster, but I can't put my finger on it.
I'll try 1500 and see what happens.

                capacity     operations    bandwidth
pool          alloc   free   read  write   read  write
------------  -----  -----  -----  -----  -----  -----
Storage       6.20T  10.1T     26  3.32K   181K  36.2M
freenas-boot  3.34G  10.7G      8      0  38.6K      0
------------  -----  -----  -----  -----  -----  -----
Storage       6.20T  10.1T     25  1.57K   123K  38.0M
freenas-boot  3.34G  10.7G      0      0      0      0
------------  -----  -----  -----  -----  -----  -----
Storage       6.20T  10.1T     26  1.97K   133K  50.4M
freenas-boot  3.34G  10.7G      0      0    101      0
------------  -----  -----  -----  -----  -----  -----
Storage       6.20T  10.1T     28  2.08K   210K  48.4M
freenas-boot  3.34G  10.7G      0      0      0      0
------------  -----  -----  -----  -----  -----  -----
Storage       6.20T  10.1T     41  4.69K   660K   127M
freenas-boot  3.34G  10.7G      0      0    255      0
------------  -----  -----  -----  -----  -----  -----
Storage       6.20T  10.1T     60  4.77K   738K   130M
freenas-boot  3.34G  10.7G      1      0  2.05K      0
------------  -----  -----  -----  -----  -----  -----
Storage       6.20T  10.1T     72  4.46K  1.19M   123M
freenas-boot  3.34G  10.7G     11      0  43.3K      0
------------  -----  -----  -----  -----  -----  -----
Storage       6.20T  10.1T     53  5.63K   895K   157M
freenas-boot  3.34G  10.7G      0      0    204      0

That helped; I lost a bit off the top end but it raised the bottom-end speeds. I can live with that trade-off.
