Ok yeah I see that Fusion IO didn’t maintain their drivers so they were removed.
To eliminate network issues, can you test throughput with iperf3? I want to avoid trying to test everything at once because then it’s hard to tell where the problem is.
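A quick way to isolate just the network layer, assuming iperf3 is installed on both ends (the IP address below is a placeholder for your FreeNAS box):

```shell
# On the FreeNAS box (server side):
iperf3 -s

# On the client, from another machine (192.168.1.10 is a placeholder):
iperf3 -c 192.168.1.10 -t 30        # single TCP stream, 30 seconds
iperf3 -c 192.168.1.10 -t 30 -P 4   # 4 parallel streams
iperf3 -c 192.168.1.10 -t 30 -R     # reverse direction (server sends)
```

If the parallel-stream number is much higher than the single-stream number, that points at per-connection TCP limits (window size, latency) rather than the disks.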
Is it just the nature of TCP/IP, and the per-file overhead means small files are always going to be slow?
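Rough back-of-envelope on that overhead (the 1 ms of per-file round-trips and the 10 Gbit/s link are assumed numbers, just to show the shape of the problem):

```shell
# Per-file protocol overhead dominates small-file copies.
# Assumed: 10,000 files of 4 KiB each, ~1 ms of per-file round-trips
# (open/close, metadata), 10 Gbit/s link.
awk 'BEGIN {
  files = 10000; size_kib = 4; rtt_ms = 1; link_gbps = 10
  data_mib = files * size_kib / 1024              # total payload
  wire_s   = data_mib * 8 / (link_gbps * 1024)    # time at line rate
  ovh_s    = files * rtt_ms / 1000                # per-file overhead
  printf "payload: %.0f MiB, wire time: %.2f s, overhead: %.0f s\n",
         data_mib, wire_s, ovh_s
}'
```

With those assumptions, the data itself needs a few hundredths of a second on the wire, but the per-file round-trips add about 10 seconds, so the link speed barely matters for small files.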
I really wish InfiniBand worked on FreeNAS; from my reading, a lot of that overhead goes away.
Take a quick look at netstat -p tcp -s and pastebin the output. Things should not be that slow. You might also try watching gstat or zpool iostat -v POOLNAME 1 while doing those transfers to get a sense of the number of IOPS happening. I might have missed it, but I don't think you've described your pool layout yet either.
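Something like this while a transfer is running (each in its own shell session; POOLNAME is a placeholder for your pool name):

```shell
# TCP counters: look for retransmits and drops
netstat -p tcp -s

# Per-disk busy %, IOPS, and latency (FreeBSD)
gstat

# Per-vdev bandwidth and IOPS, refreshed every 1 second
zpool iostat -v POOLNAME 1
```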
Do you think it could be just the hard drive you’re copying things to that’s slow for small IOs/many IOPS?
Maybe I am off base, but I was thinking that with a faster connection I could mount a drive and be close to native SSD speeds. All this is a bit of experimentation/learning on my part. Now that I have worked in some tweaks, I'll see how it works for my XCP-NG storage.
My ZFS pool is 3 vdevs, 6 drives in each, in raidz1. I am using the Fusion IO for testing, to copy files to and from, so the bottleneck should be the zpool.
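If I understand raidz right, for small random IO each raidz vdev performs roughly like a single drive, so the pool's random IOPS scale with vdev count, not drive count. A rough ceiling (the 150 IOPS per drive is an assumed figure for 7200 rpm disks):

```shell
# Rough random-IOPS ceiling for 3 raidz1 vdevs.
# Assumed: ~150 random IOPS per spinning drive; each raidz vdev
# delivers roughly one drive's worth of random IOPS.
awk 'BEGIN {
  vdevs = 3; iops_per_drive = 150
  printf "pool random IOPS ~ %d\n", vdevs * iops_per_drive
}'
# -> pool random IOPS ~ 450
```

So for lots of small files the pool behaves more like 3 disks than 18, which would line up with small-file copies feeling slow even though sequential throughput is fine.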
I ran zpool iostat 10 while copying a mix of small and larger files.
One thing I have noticed while testing is that the drives never seem to go above 60% to 70% utilization in gstat and netdata; even watching the hard drive activity lights, they never seem to stay solid.
I just have this feeling it should be faster; I can't put my finger on it.
I’ll try 1500 and see what happens
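Assuming the 1500 here is the interface MTU, I can also check that the path actually passes it with ping and the don't-fragment flag (FreeBSD syntax; 192.168.1.10 is a placeholder, and the payload size is the MTU minus 28 bytes of IP/ICMP headers):

```shell
# FreeBSD ping: -D sets the Don't Fragment bit, -s sets payload size.
# 1500 MTU - 20 (IP header) - 8 (ICMP header) = 1472-byte payload.
ping -D -s 1472 192.168.1.10
```

If that fails while a smaller payload goes through, something on the path is using a smaller MTU than the endpoints think.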