iperf3: Proxmox 8 extremely slow NIC speeds on all hosts

With iperf3 from a remote desktop to a PVE host I see 200 Mbits/sec, but with -R I see full speed, so something is happening on the Proxmox side. Literally all of my Proxmox systems have this issue: extremely slow transfer speeds from a remote desktop to the PVE hosts, which in turn makes transfer speeds to the VMs slow too.

I have various PVE boxes (many Intel NICs, many Realtek), and all of them show 200 to 300 Mbps in one direction and full speed in the reverse direction.

I have googled the issue and can't seem to find anyone with a solution, and the fixes suggested for similar issues don't resolve it.


PS C:\Users\argone\Downloads\iperf-3.1.3-win64> .\iperf3.exe -c 10.69.69.3 -R
Connecting to host 10.69.69.3, port 5201
Reverse mode, remote host 10.69.69.3 is sending
[  4] local 10.69.69.10 port 25604 connected to 10.69.69.3 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  95.5 MBytes   801 Mbits/sec
[  4]   1.00-2.00   sec   109 MBytes   912 Mbits/sec
[  4]   2.00-3.00   sec   110 MBytes   923 Mbits/sec
[  4]   3.00-4.00   sec   102 MBytes   855 Mbits/sec
[  4]   4.00-5.00   sec   115 MBytes   963 Mbits/sec
[  4]   5.00-6.00   sec   109 MBytes   912 Mbits/sec
[  4]   6.00-7.01   sec   107 MBytes   893 Mbits/sec
[  4]   7.01-8.00   sec   101 MBytes   850 Mbits/sec
[  4]   8.00-9.00   sec   110 MBytes   919 Mbits/sec
[  4]   9.00-10.00  sec   113 MBytes   950 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  1.05 GBytes   898 Mbits/sec    0             sender
[  4]   0.00-10.00  sec  1.05 GBytes   898 Mbits/sec                  receiver

iperf Done.
PS C:\Users\argone\Downloads\iperf-3.1.3-win64> .\iperf3.exe -c 10.69.69.3
Connecting to host 10.69.69.3, port 5201
[  4] local 10.69.69.10 port 25624 connected to 10.69.69.3 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  26.8 MBytes   224 Mbits/sec
[  4]   1.00-2.00   sec  25.6 MBytes   215 Mbits/sec
[  4]   2.00-3.00   sec  28.9 MBytes   242 Mbits/sec
[  4]   3.00-4.01   sec  31.9 MBytes   266 Mbits/sec
[  4]   4.01-5.00   sec  29.2 MBytes   247 Mbits/sec
[  4]   5.00-6.01   sec  29.1 MBytes   242 Mbits/sec
[  4]   6.01-7.01   sec  30.6 MBytes   258 Mbits/sec
[  4]   7.01-8.01   sec  25.5 MBytes   214 Mbits/sec
[  4]   8.01-9.01   sec  31.9 MBytes   267 Mbits/sec
[  4]   9.01-10.00  sec  31.0 MBytes   262 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec   290 MBytes   244 Mbits/sec                  sender
[  4]   0.00-10.00  sec   290 MBytes   243 Mbits/sec                  receiver

iperf Done.

Here is more information: I am running the latest version of Proxmox on these machines, and since I upgraded from 7.1 to 8.1.3 via clean install I can't easily downgrade. I am also running the latest kernel/firmware, as that is what came with the latest ISO; I can't figure out how to downgrade the kernel since the older version was never installed.

Update: it seems to have to do with MTU. Almost all my systems are 5 Gbit or faster, and many have 10 Gbit NICs. Lowering the MTU from 9000 to 1500 lets some of the links max out. Is it a problem if I run 1500 for the 5 Gbit and slower NICs?

Update: my devices still on gigabit (mini PCs where I can't add 10 Gbit) have really slow speeds with MTU set to 9000. However, my Synology has both a 1 Gbit and a 10 Gbit NIC, and both reach rated speed with MTU 9000. It seems to be a Proxmox kernel issue, as my other 1 Gbit devices that aren't Proxmox have no trouble hitting 1000 Mbps with MTU 9000.
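A quick way to check whether jumbo frames actually survive a given path is to ping with the don't-fragment flag set. This is a generic diagnostic sketch; the target IP is the host from the iperf3 runs above, and the payload sizes are the MTU minus 28 bytes of IPv4 and ICMP headers.

```shell
# Test whether a full 9000-byte frame makes it end to end without fragmenting.
# 8972 = 9000 - 20 (IPv4 header) - 8 (ICMP header)
ping -M do -s 8972 -c 3 10.69.69.3

# If that fails, confirm a standard 1500-byte frame still works:
# 1472 = 1500 - 28
ping -M do -s 1472 -c 3 10.69.69.3

# Show the current MTU on the Proxmox bridge:
ip link show vmbr0 | grep -o 'mtu [0-9]*'
```

If the 8972-byte ping fails while the 1472-byte one succeeds, some link in the path is dropping or fragmenting jumbo frames.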

@PhaseLockedLoop

To get 10 Gbit speeds I need MTU 9000, but my lingering 1 Gbit devices don't reach 1 Gbit even at MTU 9000; in reality it's more like 200 Mbps. When I set MTU 1500 I see 1 Gbit. I know I shouldn't mix and match MTUs on one VLAN, but these are all devices I need LAN access to. How best should I divide them? Or would mixing MTUs for these specific devices be a problem for the rest of my network?

Use MTU 1500.


MTU 1500 doesn't let me get even remotely close to 10 Gbit with iperf3. I see 1.2 Gbit with my 10 Gbit NICs.

And if VPNs might be involved, go a little lower still, to be safe.

Are you saying both the 1 Gbit and the 10 Gbit NICs should be on 1500, or just the gigabit ones?

I see 9.5 Gbit between two 10 Gbit devices with an MTU of 9000. When I lower it to 1500 I see practically 1 Gbit speeds.

Jumbo frames should only really be used if you know every link in the chain can cope with them.

I wonder if the 10 gig is Ethernet instead of fibre/DAC?

I understand that. I have found my mini PCs can't handle 9000 MTU. The thing is, I have 10 devices with 10 Gbit and would like them to operate at full speed, and 4 devices with 1 Gbit or 2.5 Gbit that don't function right with MTU 9000. So I am not sure what to do.

Copper/Ethernet.


8p8c/rj45/Ethernet/patch, or SFP+ fibre/copper modules?

Okay, I don't know about that side, sorry.

It should be able to do better than 1.2 Gbit/sec though.

I really thought it should do 10G at regular packet sizes.

RJ45.


I guess I will try these specific devices at a lower MTU and keep the 10 Gbit ones at 9000, and see if it ever conflicts. I understand MTU 9000 should be fine for gigabit, but it isn't fine with Proxmox; supposedly a known issue. So I will see if anything conflicts in the long run.
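For reference, Proxmox sets MTU per interface in /etc/network/interfaces (it uses ifupdown2), so mixing a 9000-MTU bridge on the 10 Gbit NIC with default 1500 elsewhere is doable per host. This is only a sketch of that config; the interface names and address are examples, not taken from the thread.

```
# /etc/network/interfaces (interface names and address are examples)
auto enp1s0
iface enp1s0 inet manual
        mtu 9000

auto vmbr0
iface vmbr0 inet static
        address 10.69.69.3/24
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0
        mtu 9000
```

After editing, `ifreload -a` applies the change without a reboot. Note the bridge and its port both need the MTU set, or the bridge falls back to the lowest MTU of its members.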

TCP offload enabled or disabled on these NICs?

(Red Hat doc, 'cause I'm not finding a Debian-equivalent one for some reason. Blame the coffee.)
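To check that on the Proxmox host, ethtool can show and toggle the offload features. A sketch, assuming the NIC is named enp1s0 (substitute your actual interface; root required):

```shell
# Show the offload settings most relevant to one-directional throughput issues:
ethtool -k enp1s0 | grep -E 'tcp-segmentation-offload|generic-segmentation-offload|generic-receive-offload'

# Temporarily disable TSO/GSO/GRO to test whether offloading is the culprit:
ethtool -K enp1s0 tso off gso off gro off

# Re-run iperf3 in the slow direction, then re-enable if it made no difference:
ethtool -K enp1s0 tso on gso on gro on
```

Changes made with `ethtool -K` do not persist across reboots, so this is safe to experiment with.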


Will look into this. Also, I don't recall this issue prior to upgrading to Proxmox 8.1.3, which makes me think it is a Proxmox and/or Proxmox kernel issue.

Is it normal to need MTU set to 9000 to saturate a 10 Gb NIC?