LibVF.IO Beginners Tutorial: vGPU & SR-IOV on consumer GPUs

Our friend Erik (not a particularly technical guy - he’s actually never used Linux) followed the text guide to install LibVF.IO and made a video while he did the install.
Playing Warzone at up to 113 FPS in a VM?! GPU Multiplexing with LibVF.IO (Beginner's Guide) - YouTube

My hope is that this could bring some more people to using VFIO. I think some folks have been interested in trying something like this, but the setup process has been a bit too difficult.

Let me know what you think!


Trying to learn more about what I can do with my 5700G and 6800 XT with regard to VFIO. Unfortunately I’ve read that using AMD GPUs for passthrough is very unstable atm?!?

Everyone seems to be passing NVIDIA cards to VMs…

There are a few different issues:

  1. AMD GPUs prior to the 6000 series have a reset bug: they can be passed through fine once, but shutting down the VM and trying to pass the card through again won’t work because it gets caught in some sort of bad state. The vendor-reset kernel module can sometimes mitigate this. My reference 6700 XT works great without any extra measures besides needing to load a vBIOS. My RX 460 required vendor-reset, and also needed PCIe power management (ASPM) disabled in the BIOS and in the GRUB boot settings to stop some crashes (there’s a small sanity-check sketch after this list). There are rare reports of some 6000 series cards from the various vendors having reset issues, but it’s not been common.

  2. The next issue is that very few AMD GPUs support SR-IOV (AMD’s implementation is called “MxGPU”), which basically lets a single GPU be split into multiple virtual ones. Your 6800 XT is NOT one of the cards that supports this.

  3. In relation to the previous issue, AMD is currently not interested in supporting SR-IOV in any manner that would be helpful to you. As far as AMD is concerned, SR-IOV/MxGPU exists only for an exclusive, hand-picked group of high-dollar businesses. Everyone else can go get fucked. There is some old, unmaintained, and buggy driver code around called GIM that kinda makes the very old, SR-IOV-capable S7150 work.

To summarize: normal passthrough will work great on your 6800 XT. SR-IOV does not work on it, and likely never will.
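
For reference, here’s a rough, read-only sketch (in Python, not from LibVF.IO itself) that checks for the mitigations from point 1 (the vendor-reset module and a pcie_aspm=off boot parameter) and for the SR-IOV capability from point 2 via the sriov_totalvfs sysfs attribute. The PCI address is a placeholder - grab your own from lspci.

```python
#!/usr/bin/env python3
# Sanity-check sketch for the reset-bug mitigations and SR-IOV capability.
# The PCI address below is hypothetical - substitute your own from `lspci`.
from pathlib import Path

GPU = Path("/sys/bus/pci/devices/0000:03:00.0")  # placeholder GPU address

# Point 1: is the vendor-reset module loaded (pre-6000-series reset bug)?
modules = Path("/proc/modules").read_text()
print("vendor-reset loaded:",
      any(line.startswith("vendor_reset ") for line in modules.splitlines()))

# Point 1: was ASPM disabled on the kernel command line?
cmdline = Path("/proc/cmdline").read_text()
print("pcie_aspm=off present:", "pcie_aspm=off" in cmdline.split())

# Point 2: does the card advertise SR-IOV at all? Consumer Radeons
# (including the 6800 XT) simply won't have this sysfs attribute.
totalvfs = GPU / "sriov_totalvfs"
if totalvfs.exists():
    print("SR-IOV capable, max VFs:", totalvfs.read_text().strip())
else:
    print("No SR-IOV capability exposed on this device")
```

If sriov_totalvfs isn’t there at all, the hardware/firmware just doesn’t expose SR-IOV, which is exactly what you’ll see on any consumer Radeon.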


So this means I must stop the Linux kernel from using the card at boot so the VM can use it, and the only way to pass it back to the Linux host is to reboot and change boot parameters?!

If this is the case then maybe I’d be best off sticking to dual-boot, since I have Linux games I want to play on the 6800 XT and rebooting kind of defeats the purpose a fair bit. (If it were seamless and without issue, which it isn’t, then maybe it would be OK.)

For some reason I thought GPU passthrough under Linux had gotten to the stage where it was possible to pass most dGPUs to a guest and then back to the host when the guest is shut down, without rebooting or major issues.

Someone on Reddit said they achieved this, but the X server had to be restarted after shutting the Win10 guest down; that isn’t a huge issue. (I couldn’t find much more about it, however.)
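
For what it’s worth, the “pass it to the guest and then back to the host without rebooting” part is usually done by rebinding the card between the host driver and vfio-pci through sysfs. A hedged sketch of what that looks like, with a placeholder PCI address, assuming nothing on the host (display server, other amdgpu users) is still holding the card:

```python
#!/usr/bin/env python3
# Sketch of rebinding a dGPU between the host driver (amdgpu) and vfio-pci
# without a reboot, via sysfs. The PCI address is a placeholder; the GPU's
# audio function usually has to move with it. Run as root.
import sys
from pathlib import Path

DEV = "0000:03:00.0"                       # hypothetical GPU PCI address
device = Path("/sys/bus/pci/devices") / DEV

def bind_to(driver: str) -> None:
    # Detach from whatever driver currently owns the device (if any).
    current = device / "driver"
    if current.exists():
        (current / "unbind").write_text(DEV)
    # Tell the kernel which driver should claim it on the next probe.
    (device / "driver_override").write_text(driver)
    Path("/sys/bus/pci/drivers_probe").write_text(DEV)
    # Clear the override so later probes behave normally again.
    (device / "driver_override").write_text("\n")

if __name__ == "__main__":
    # e.g. `sudo ./rebind.py vfio-pci` before starting the VM,
    #      `sudo ./rebind.py amdgpu` after the guest shuts down.
    bind_to(sys.argv[1])
```

The usual catch is exactly what that Reddit post described: getting X/Wayland to actually release the GPU, which often means logging out or restarting the display manager around the swap.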

I believe Hyper-V can do some GPU partitioning stuff now, but Windows has to be your host. I’m not at all familiar with using it or what the caveats are.

My rough understanding is that with traditional SR-IOV, a GPU exposes multiple virtual functions, each showing up as its own PCI device in its own (hopefully separate, depends on the motherboard) IOMMU group, which can then be passed through to VMs. A quick way to list the groups is below.
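
If you want to see how your own machine carves things up, the groups are all visible under /sys/kernel/iommu_groups. A small read-only Python sketch (nothing LibVF.IO-specific, just sysfs):

```python
#!/usr/bin/env python3
# List every IOMMU group and the PCI devices in it - handy for checking
# whether a GPU (or an SR-IOV virtual function) lands in a group of its own.
# Read-only, safe to run as a normal user.
from pathlib import Path

groups = Path("/sys/kernel/iommu_groups")
for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    devices = [d.name for d in (group / "devices").iterdir()]
    print(f"IOMMU group {group.name}: {', '.join(devices)}")
```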

With “GPU partitioning” the host grabs the entire GPU and then essentially mediates out virtualized GPUs that get a time-share of the real one. This comes at some kind of performance cost. I’ve no idea what exactly is required from a GPU to actually support this, if anything. To my knowledge there is nothing in Linux yet that can do this.

I’m not familiar with “vGPU”, which is an Nvidia-card-only thing. Here’s the relevant project, but it’s going to be a heavily “figure it out yourself” sort of thing.

Historically, for a single-GPU, multiple-VM situation, people run a headless host and set up VMs and scripts that can stop one VM and start up the other (rough sketch below). I believe it’s possible to unbind a GPU from an active VM, but X11 or whatever may not play well, or it requires logging out of and back into the session.
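
That “stop one VM, start the other” script is usually just a few libvirt calls. A minimal sketch using the libvirt Python bindings, with hypothetical domain names, assuming both guests are defined in libvirt and share the same passed-through GPU:

```python
#!/usr/bin/env python3
# Shut down the guest that currently owns the GPU, wait for it to release
# the card, then boot the next guest. Domain names are placeholders.
import time
import libvirt

def swap(active_name: str, next_name: str) -> None:
    conn = libvirt.open("qemu:///system")
    try:
        active = conn.lookupByName(active_name)
        if active.isActive():
            active.shutdown()                  # ask the guest to power off cleanly
            while active.isActive():
                time.sleep(2)                  # wait until the GPU is actually free
        conn.lookupByName(next_name).create()  # boot the other guest
    finally:
        conn.close()

if __name__ == "__main__":
    swap("win10-gaming", "linux-workstation")  # hypothetical domain names
```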

Edit: mixed up vGPU and GPU partitioning, fixed.