Guidance on Hardware for GPU Passthrough

Hi!

I am looking at building a new PC to run Linux and to virtualize a Windows installation through KVM with GPU Passthrough. I’ve watched Wendell’s video on Level1Linux about this from three years ago, but his hardware recommendations are a bit dated.

The video I’m talking about: https://www.youtube.com/watch?v=aLeWg11ZBn0&ab_channel=Level1Linux

Can anyone give me some advice on the hardware I’ve selected?

CPU: Ryzen 9 5950x
GPU: 2x MSI GTX 1660 VENTUS XS 6G OC
Mobo: Asus ROG X570 Crosshair VIII Hero (Wi-Fi)
RAM: G.Skill Trident Z Neo Series 32GB F4-3600C16D-32GTZNC
SSD: 2x Samsung (MZ-V7S1T0B/AM) 970 EVO Plus SSD 1TB - M.2 NVMe

Thanks in advance!

It’s a bit easier if you have different models of GPUs, but it is still possible with identical cards.
https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF#Using_identical_guest_and_host_GPUs
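With two identical cards, the usual `vfio-pci.ids=` kernel option can’t tell host and guest GPU apart, since both share the same vendor:device ID. A minimal sketch of the `driver_override` approach that wiki page describes, with made-up placeholder addresses (substitute your guest card’s slot, and include every function in its IOMMU group):

```shell
#!/bin/sh
# bind_vfio_by_address DEV...: point each PCI function at vfio-pci so the
# host driver never grabs it. SYSFS_PCI is overridable purely so the logic
# can be exercised without real hardware; on a live system leave the default.
bind_vfio_by_address() {
    pci="${SYSFS_PCI:-/sys/bus/pci}"
    for dev in "$@"; do
        echo vfio-pci > "$pci/devices/$dev/driver_override"
        echo "$dev" > "$pci/drivers_probe"   # ask the kernel to (re)probe it
    done
}

# Example (placeholder addresses -- 0000:0b:00.0 is the GPU, .1 its audio
# function):
#   bind_vfio_by_address 0000:0b:00.0 0000:0b:00.1
```

This needs to run early (e.g. from the initramfs) so the host driver never binds the guest card in the first place.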

Look into the IOMMU groups for the motherboard you go with. X570 seems to be pretty much OK from what I have seen.
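For anyone checking their board, the groups can be dumped straight out of sysfs with no extra tools. A sketch (the path only exists once IOMMU is enabled in firmware and on the kernel command line):

```shell
#!/bin/sh
# list_iommu_groups [ROOT]: print each IOMMU group and the PCI devices in it.
# ROOT defaults to the real sysfs path; it is a parameter only so the
# function can be exercised on a machine without IOMMU enabled.
list_iommu_groups() {
    root="${1:-/sys/kernel/iommu_groups}"
    for g in "$root"/*/; do
        [ -d "$g" ] || continue
        grp="${g%/}"
        echo "IOMMU group ${grp##*/}:"
        for d in "$grp/devices/"*; do
            [ -e "$d" ] || continue
            echo "    ${d##*/}"   # PCI address; feed to lspci -nns for names
        done
    done
}

list_iommu_groups
```

Everything you want to pass through has to be isolated in its own group (or passed along with everything else in that group).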


Is there a consolidated database of IOMMU/VT-d support tested on different mobos, ideally across different BIOS versions?
Except for Wendell, in one or two videos, I haven’t seen anything like this. Probably too niche for some YouTuber with access to many review motherboards to do. And it would probably take only 4 steps: enable IOMMU/VT-d, boot from a USB stick, run lsiommu, post the results.
If the process were streamlined enough (a script that uploads groupings to an online database, like benchmarks have) we might get IOMMU groupings on release day, which would be sweet.

I was able to get my GTX 1650 to pass through to an LXC container. I used Proxmox; I’m not sure what you are using. I have some notes in my post if that helps. I do know I could not do an LXC passthrough and a KVM passthrough at the same time… yet… moved on to another project ATM.


Yep, that is definitely something that would be good.

Part of the problem is that groups can change. I own one motherboard where the v2 revision has VT-d (Intel’s branding for IOMMU) support with some BIOS versions, but my v1 revision does not. For LGA 2011 Xeon E5 v1, VT-d was enabled but buggy in some engineering sample CPUs, disabled in stepping 1, and fixed/enabled in stepping 2. Some sockets provide different numbers of PCIe lanes depending on the CPU (e.g. X99, X299, Ryzen APUs). Some motherboards change the groups depending on the UEFI/BIOS version. Some motherboards even have different IOMMU groups depending on whether you have IOMMU set to auto vs. enabled.

So it is a bit complex to get good data and to put it in a chart/wiki.

That’s why I meant a real database like benchmarks have. There are already some charts, but they are always incomplete/old, because it’s a PITA to edit them manually with all the variants. You’re right.
However, a simple script like ls-iommu, expanded with info like mobo name, BIOS version, etc., could automatically upload this info over SSL via some web API to a database. There could even be a small distro that you boot just to run the script (easy for reviewers to use).

As TheCakeIsNaOH has already mentioned, passthrough with identical cards can be done. However, it will be extra hassle that could make troubleshooting tougher than necessary. Also remember to set <hidden state='on'/> in the config file.
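For reference, that `<hidden state='on'/>` element lives under `<features><kvm>` in the libvirt domain XML, roughly like this (a minimal fragment, not a full domain definition):

```xml
<!-- fragment of the libvirt domain XML, edited via virsh edit -->
<features>
  <kvm>
    <hidden state='on'/>   <!-- hide the KVM signature from the guest -->
  </kvm>
</features>
```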

I would personally go AMD for the host GPU.


And change the vendor ID, for both AMD and Nvidia; at least that is the current recommendation on the Arch wiki.

https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF#Video_card_driver_virtualisation_detection
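The vendor ID change goes in the `<hyperv>` section of the same `<features>` block of the libvirt domain XML; a minimal fragment (the value shown is an arbitrary placeholder, up to 12 characters):

```xml
<features>
  <hyperv>
    <!-- arbitrary string; replaces the default KVM/Hyper-V vendor ID -->
    <vendor_id state='on' value='randomid'/>
  </hyperv>
</features>
```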


Thank you all for the responses. I will look into going all AMD for the build. Should I also look into getting ECC memory or does it not really matter?

This topic was automatically closed 273 days after the last reply. New replies are no longer allowed.