G’day. Looking to refresh the old desktop at home.
Target state is a Linux PC that can host VMs, play games, and talk to various bits of hardware. I might even set it up to access work, but we'll see; that's just a stretch goal.
I’ve been trialling this on old hardware. Plenty of stuff either works natively or runs via Proton, which was already enough to make me fairly happy - until the GPU died, so now I’m shopping around. I intend to go AMD AM5 + Radeon 7000 series - I like their stuff.
I’ve read about using IOMMU groups to pass a GPU through to a Windows VM, for running fussy games that only behave properly under Windows. It’s a neat trick. I’m reading through various threads about which hardware supports this well before I buy.
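From what I’ve read, once the board is here it should just be a case of walking /sys/kernel/iommu_groups to see how cleanly the GPU is isolated - something along these lines (an untested sketch on my part, just standard sysfs plus lspci):

```python
#!/usr/bin/env python3
# Untested sketch: list each IOMMU group and the PCI devices in it, to check
# whether the GPU (and its audio function) land in a group of their own.
# Assumes the groups are exposed under /sys/kernel/iommu_groups, which needs
# the IOMMU enabled in firmware and on the kernel command line.
import os
import subprocess

GROUPS_DIR = "/sys/kernel/iommu_groups"

for group in sorted(os.listdir(GROUPS_DIR), key=int):
    devices_dir = os.path.join(GROUPS_DIR, group, "devices")
    for dev in sorted(os.listdir(devices_dir)):
        # lspci -nn adds vendor/device IDs next to the human-readable name.
        desc = subprocess.run(
            ["lspci", "-nns", dev], capture_output=True, text=True
        ).stdout.strip()
        print(f"group {group}: {desc or dev}")
```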
But is there a way to do that while still being able to hand the GPU back to the host Linux OS without a reboot? Without that, the benefit’s a bit of an edge case, for me at least.
Some threads mention splitting the GPU between VMs, which has some funny workaround potential - a Linux host running both a Windows VM and a Linux VM. The absurdity builds, lol, but it might work. Surely there’s a better way, though?
You’d need a script that unbinds the GPU from the vfio-pci driver and binds it back to amdgpu. I haven’t tried it myself, but I’ve seen other people here messing around with it.
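Off the top of my head, the sysfs side of it would look something like this - very much an untested sketch, with placeholder PCI addresses you’d swap for your own card and its audio function:

```python
#!/usr/bin/env python3
# Rough, untested sketch of handing the GPU back to the host after the VM
# shuts down. Run as root. The PCI addresses and driver names below are
# placeholders - substitute your own card and its HDMI audio function.
import time
from pathlib import Path

# Hypothetical addresses: GPU at 03:00.0, its audio function at 03:00.1.
TARGET_DRIVERS = {
    "0000:03:00.0": "amdgpu",
    "0000:03:00.1": "snd_hda_intel",
}

for addr, native_driver in TARGET_DRIVERS.items():
    dev = Path(f"/sys/bus/pci/devices/{addr}")
    current = dev / "driver"
    if current.exists():
        # Detach from whatever currently owns the function (vfio-pci here).
        (current / "unbind").write_text(addr)
        time.sleep(1)
    # Point driver_override at the native driver, then ask the kernel to
    # re-probe the device so that driver picks it up again.
    (dev / "driver_override").write_text(native_driver + "\n")
    Path("/sys/bus/pci/drivers_probe").write_text(addr)
```

You’d probably also want to stop your display manager before handing the card to the VM and bring it back up after the rebind, but that part depends on your setup.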
I remember Wendell mentioning in a recent video that he’s had some trouble with VFIO on the 7000 series cards, but I’m not sure exactly what the issues were.
That sounds like SR-IOV? Unfortunately, that’s not something you’re going to find on a consumer GPU.
Looks like there’s a bit of a gap in the product stack on these, between 8 GB and 32 GB. Y’know, a bit like the rest of AMD’s line-up at the moment - product gaps you could drive a truck’s worth of market share through… sigh