Unable to get 10Gb speeds on Hyper-V

Hello folks!

I’ve created a 2-node Windows Server 2022 cluster for Hyper-V and Storage Spaces Direct. Each node has a dual-port ConnectX-4 25Gb NIC. One of those ports is connected directly to the other node at 25Gb and is used for RDMA for the S2D storage traffic. The other port is dedicated to VM compute traffic (virtual switch) and is connected upstream to a 10Gb switch.
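
In case it matters, the compute switch is just a plain external switch bound to that port, created more or less like this (adapter/switch names match the output below, exact parameters from memory):

# External vSwitch on the 10Gb-connected Mellanox port, used only for VM traffic
New-VMSwitch -Name "Compute" -NetAdapterName "Compute" -AllowManagementOS $false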

The OS reports the correct link speeds for the ports:

Get-NetAdapter

Name       InterfaceDescription                    ifIndex Status MacAddress        LinkSpeed
----       --------------------                    ------- ------ ----------        ---------
Cluster    Mellanox ConnectX-4 Lx Ethernet Ad…#2        10 Up     24-8A-07-B4-2A-66 25 Gbps
Management Marvell AQtion 10Gbit Network Adapter         8 Up     E8-9C-25-27-72-2A 1 Gbps
Compute    Mellanox ConnectX-4 Lx Ethernet Adapter       4 Up     24-8A-07-B4-2A-67 10 Gbps

All VMs are attached to the “Compute” external virtual switch, and the VMs report a 10 Gbps link on their virtual NICs:

Get-VMSwitch

Name    SwitchType NetAdapterInterfaceDescription
----    ---------- ------------------------------
Compute External   Mellanox ConnectX-4 Lx Ethernet Adapter

However, if I run iperf3 between two VMs (one on each node), it reports between 4.5 and 5 Gbps. If I run it from my machine (a Mac Studio M2 Ultra with a 10Gb NIC connected to the same switch) to a VM, I get 1.2-1.3 Gbps.
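
The iperf3 runs themselves are nothing fancy, roughly this (the IP is just a placeholder for the target VM):

# Server side, inside one VM
iperf3 -s

# Client side, from the other VM (or from my Mac)
iperf3 -c 192.168.100.10 -t 30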

The 25Gb ports hit their rated speed in iperf3, and if I remove the 10Gb ports from the virtual switch on each node, give them IPs, and run iperf3 directly on those compute NICs, they do 10 Gbps just fine.
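
For that bare-NIC test I did roughly the following on both nodes, then reran the same iperf3 commands (the address is just an example):

# Tear down the vSwitch so the port is a plain NIC again
Remove-VMSwitch -Name "Compute" -Force

# Give the bare port a temporary address for the test
New-NetIPAddress -InterfaceAlias "Compute" -IPAddress 192.168.200.11 -PrefixLength 24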

So what do I need to do for the VMs to get full network speed in this setup? I’ve tried both Linux and Windows Server VMs to rule out an issue with the Hyper-V guest NIC driver, and I also tried disabling Virtual Machine Queues (VMQ) on the physical NIC, with no difference.
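
For reference, this is roughly how I checked and disabled VMQ on the physical NIC (re-enabled afterwards since it made no difference):

# Current VMQ state on the NIC backing the vSwitch
Get-NetAdapterVmq -Name "Compute"

# Turn VMQ off (Enable-NetAdapterVmq to revert)
Disable-NetAdapterVmq -Name "Compute"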

Thanks! Any light would be appreciated.