I am a bit of a ZFS/Linux noob, but I have had a TrueNAS pool set up in my homelab for 7+ years at this point. I recently changed some homelab gear (went from an i3-6100 to a Xeon E5-2660 v4, more than doubled my RAM from 28 GB to 64 GB, and went from single-channel to quad-channel memory…), somewhat expecting things to at least stay the same, potentially get faster (lower clocks, but many more threads available for "all" six of my VMs). This is not a high-use homelab; I do simple things like TrueNAS, pfSense, a few Ubuntu VMs, and Docker containers. Honestly the 6100 was fine, but I needed more RAM…
Anyway, I have been writing data to my array lately and came to realize my speeds are slow… like really slow. I was never able to write at full gigabit, but I at least used to get 600–700+ Mbps. I seemingly only get 100–200 Mbps now, and that is pretty frustrating. I am sure other things changed along the way (I didn't write to the NAS much through COVID; didn't take many pictures or videos, which is the main thing I use it for besides Plex, and Plex is all managed in the background so I would never notice any slowness), so I am not sure what is causing this or what I should do to investigate.
I checked latency, as I know SMB hates latency… pings are <1 ms, which is expected.
Any other suggestions on things to test?
The box is an E5-2660 v4 with 64 GB of RAM and 10x 4 TB WD Red 5400 RPM drives in RAIDZ2, running virtualized under Proxmox (ya know… I think it was running under ESXi last time I was hitting it hard; hmm, vNIC driver issue maybe?), with virtual pfSense running on the same box under Proxmox. All networking is gigabit, and I can read from the array at what I would consider close to gigabit speeds (~800–900 Mbps; some overhead plus spinning rust, that seems fine).
iperf3 confirms an issue: one direction runs at full line rate (~935 Mbps), but the other tops out around 500 Mbps. Just not sure where to go from here…
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 110 MBytes 923 Mbits/sec 0 208 KBytes
[ 5] 1.00-2.00 sec 112 MBytes 937 Mbits/sec 0 208 KBytes
[ 5] 2.00-3.00 sec 112 MBytes 937 Mbits/sec 0 208 KBytes
[ 5] 3.00-4.00 sec 111 MBytes 935 Mbits/sec 0 208 KBytes
[ 5] 4.00-5.00 sec 112 MBytes 937 Mbits/sec 0 208 KBytes
[ 5] 5.00-6.00 sec 112 MBytes 937 Mbits/sec 0 208 KBytes
[ 5] 6.00-7.00 sec 112 MBytes 937 Mbits/sec 0 208 KBytes
[ 5] 7.00-8.00 sec 112 MBytes 937 Mbits/sec 0 208 KBytes
[ 5] 8.00-9.00 sec 112 MBytes 937 Mbits/sec 0 208 KBytes
[ 5] 9.00-10.00 sec 112 MBytes 937 Mbits/sec 0 208 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 1.09 GBytes 935 Mbits/sec 0 sender
[ 5] 0.00-10.00 sec 1.09 GBytes 935 Mbits/sec receiver
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 53.2 MBytes 446 Mbits/sec
[ 5] 1.00-2.00 sec 58.0 MBytes 486 Mbits/sec
[ 5] 2.00-3.00 sec 61.7 MBytes 518 Mbits/sec
[ 5] 3.00-4.01 sec 62.1 MBytes 519 Mbits/sec
[ 5] 4.01-5.00 sec 59.7 MBytes 502 Mbits/sec
[ 5] 5.00-6.01 sec 64.2 MBytes 538 Mbits/sec
[ 5] 6.01-7.01 sec 64.4 MBytes 540 Mbits/sec
[ 5] 7.01-8.00 sec 61.7 MBytes 520 Mbits/sec
[ 5] 8.00-9.00 sec 62.0 MBytes 518 Mbits/sec
[ 5] 9.00-10.00 sec 64.0 MBytes 539 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.00 sec 611 MBytes 512 Mbits/sec receiver
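For anyone wanting to reproduce this, here is a rough sketch of the iperf3 invocations that produce output like the above and test both directions over a single setup (the hostname `truenas.local` is a placeholder for your NAS's address):

```shell
# On the NAS / server side: start an iperf3 server (listens on TCP 5201 by default)
iperf3 -s

# From the client: test the client -> server direction for 10 seconds
iperf3 -c truenas.local -t 10

# Same pair of hosts, but reversed (server -> client) via -R,
# so you can compare both directions without swapping roles
iperf3 -c truenas.local -t 10 -R
```

A large gap between the plain and `-R` runs (as seen above) points at the network path rather than disks or SMB.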
EDIT: To add to this, I have a few VMs set up in my homelab. I just tried hitting a different VM (Ubuntu), and things are correct with ~940 Mbits/sec in both directions… so that should rule out the physical networking (and VLANs; both of these servers reside on the same VLAN, under the same hypervisor).
EDIT 2: Solved. Switching to the VirtIO vNIC was the solution… I am now getting ~22 Gbps in both directions between VMs sitting on the same host, and 1 Gbps over my physical network as expected.
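For reference, a sketch of how the vNIC model can be checked and switched to VirtIO from the Proxmox host shell (the VM ID `100`, the MAC address, and the bridge `vmbr0` are placeholders; adjust to your own config, and note the guest needs VirtIO drivers, which FreeBSD-based TrueNAS and Linux both ship):

```shell
# Show the current NIC line for VM 100 (placeholder VM ID)
qm config 100 | grep ^net

# Example of what it might look like with an emulated Intel NIC:
#   net0: e1000=AA:BB:CC:DD:EE:FF,bridge=vmbr0

# Switch net0 to the paravirtualized VirtIO model, keeping the same MAC and bridge
qm set 100 --net0 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0

# Power-cycle the VM so the guest re-detects the new device
qm shutdown 100 && qm start 100
```

The same change can be made in the web UI under the VM's Hardware → Network Device settings by picking "VirtIO (paravirtualized)" as the model.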