Hello, I’ve been trying to get VFIO working for the past few days. I have everything set up and I’m able to pass through my second GPU without crashes, but the GPU’s performance is severely degraded by what looks like a PCIe link-speed negotiation problem.
My hardware is:
Host
- MAG X570 TOMAHAWK WIFI
- Ryzen 9 5950X
- 32 GB DDR4-3600
- XFX RX 6900 XT
- Kingston KC3000 PCIe 4.0 NVMe in the primary M.2 slot
- Second M.2 NVMe in the second slot
- 2 SATA SSDs plugged into ports 0 and 1
Guest
- 16 pinned vCPUs (one full CCD: 8 cores / 16 threads)
- 16GB RAM
- MSI RX 580 8GB
- 500 GB raw disk
- Looking Glass set up
Looking at the host with no VM running, I see the card’s link speed stuck at 2.5 GT/s:
    cat /sys/class/hwmon/hwmon0/device/current_link_speed
    2.5 GT/s PCIe
That corresponds to PCIe 1.0. Nothing I do gets the GPU and the motherboard to negotiate a PCIe 3.0 link, and it only happens with the main GPU installed: if I remove the main GPU and leave the secondary card in the second slot, it negotiates 8 GT/s, i.e. PCIe 3.0.
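For completeness, these are the fuller checks I’ve been running against the passthrough card; 0000:2f:00.0 below is just a stand-in for the RX 580’s real address (lspci prints the actual one):

    # find the passthrough GPU's PCI address
    lspci | grep -i vga

    # negotiated vs. maximum link speed and width, straight from sysfs
    cat /sys/bus/pci/devices/0000:2f:00.0/current_link_speed
    cat /sys/bus/pci/devices/0000:2f:00.0/max_link_speed
    cat /sys/bus/pci/devices/0000:2f:00.0/current_link_width

    # LnkCap is what the card supports, LnkSta is what was actually negotiated
    sudo lspci -vv -s 2f:00.0 | grep -E 'LnkCap:|LnkSta:'

current_link_speed stays at 2.5 GT/s no matter what I change.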
A disadvantage of this motherboard is that the second slot runs through the chipset and is only PCIe 3.0, so I’m hard-capped at x4 lanes at PCIe 3.0. That alone shouldn’t hurt much, since the guest GPU isn’t that powerful, but at PCIe 1.0 speeds performance drops badly (~60 fps vs. <20 fps in FurMark at 1440p).
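For what it’s worth, the chipset routing is visible in the PCI tree, and sysfs tells you which bridge (downstream port) the card actually sits behind; 0000:2f:00.0 is the same placeholder address as above:

    # the second x16 slot shows up under the X570 chipset's internal
    # switch in the tree, not under the CPU's root ports
    lspci -tv

    # the parent directory of the device node in sysfs is the bridge
    # the card is attached to
    basename "$(dirname "$(readlink -f /sys/bus/pci/devices/0000:2f:00.0)")"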
I have tried running the system without the second M.2 drive that goes through the chipset, without any SATA drives, and with the PCIe generation forced to 3.0 in the BIOS. Nothing gets the second GPU to PCIe 3.0 speeds except removing the main GPU, which defeats the purpose of a dual-GPU VFIO setup.
I have run out of ideas for what might be causing this. My next resort would be a PCIe bifurcation card to split the main slot into two x8 links, since the BIOS offers that option, but I’d like to keep it as a last attempt.
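Before I spend money on a riser: would forcing a link retrain from the bridge above the card with setpci be a sane thing to try? Roughly what I have in mind is below; 2d:00.0 is a placeholder for whatever bridge the sysfs check above reports, and the register offsets are from the PCIe spec (Link Control 2 at CAP_EXP+0x30, where target speed 3 = 8 GT/s; Retrain Link is bit 5 of Link Control at CAP_EXP+0x10):

    BRIDGE=2d:00.0   # placeholder: downstream port directly above the RX 580

    # set the bridge's Target Link Speed to Gen3 (low 4 bits of Link Control 2)
    sudo setpci -s "$BRIDGE" CAP_EXP+30.w=3:f

    # pulse the Retrain Link bit in Link Control
    sudo setpci -s "$BRIDGE" CAP_EXP+10.w=20:20

    # and see what the endpoint renegotiates to
    cat /sys/bus/pci/devices/0000:2f:00.0/current_link_speed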
Any help or advice on troubleshooting this would be greatly appreciated, thank you!