LibVF.IO: A Commodity GPU Multiplexing Tool Driven By VFIO & YAML


Normally, this post would be closed for low effort (simply posting links with no commentary is not allowed), but this article makes big claims that I am eager to both try out and discuss.

LibVF.IO automates the creation and management of mediated devices (partitioned commodity GPUs shared by host & guest), identifying NUMA nodes, parsing and managing IOMMU devices, and allocating virtual functions to virtual machines.

If this actually works, I am beyond eager to try this out.

EDIT: So it looks like this is a frontend for SR-IOV capable devices. I don’t think it’s accurate to call them “commodity devices.”

I’m looking at this, reading through it thinking it’ll work on Big Navi, which appears not to be the case at all.


We’re using the mdev-vfio API, which works on devices without SR-IOV capability; it uses the sysfsdev convention instead. We also support SR-IOV if your GPU has it.
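For context, the kernel’s vfio-mdev interface exposes mediated-device types under sysfs, and an instance is created by writing a UUID into a per-type `create` node. Here’s a minimal sketch of that path convention (the PCI address and type name below are illustrative placeholders, not something from LibVF.IO itself):

```python
import uuid


def mdev_create_path(parent_pci_addr: str, mdev_type: str) -> str:
    """Return the sysfs node a new mediated-device UUID is written to.

    parent_pci_addr: PCI address of the physical GPU, e.g. "0000:01:00.0"
    mdev_type: a type listed under mdev_supported_types, e.g. "nvidia-22"
    """
    return (f"/sys/bus/pci/devices/{parent_pci_addr}"
            f"/mdev_supported_types/{mdev_type}/create")


def new_mdev_uuid() -> str:
    """mdev instances are addressed by a UUID chosen at creation time."""
    return str(uuid.uuid4())


# On real hardware (as root) you would then do, conceptually:
#   echo <uuid> > <create path>
# after which the guest attaches to the instance via vfio-mdev.
print(mdev_create_path("0000:01:00.0", "nvidia-22"))
```

Tools like LibVF.IO automate exactly this kind of bookkeeping instead of having you echo UUIDs by hand.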


Ah, thanks for clarifying, I must have misunderstood the article.

Well, I now know how my weekend will be spent.


I know there are lots of things I need to improve, so let me know if I can help if you get stuck at all, or if you think of a way I can make things better!

1 Like

If it doesn’t work on a 3320m I’m uninterested

Will be away this weekend so I can’t get straight to it, but I just got an RTX 3070 so I’m probably gonna give this a shot next week.

Will report my findings here when I do it!

1 Like

I definitely will. It’s been a long time since I’ve really played with new VFIO tech, so I’m glad to see some movement in this space.

To clarify, the mdev-vfio API for AMD works through their MxGPU driver? Am I reading it correctly that the 6000-series Radeon GPUs will not work with this?

Unfortunately AMD has ignored the vfio-mdev API. Nvidia, Intel, and Red Hat developed it in collaboration - it works both for commodity Nvidia GPUs and Intel’s GVT-g capable GPUs. Incredibly (to my surprise) they actually use the same API! These devices are represented as a sysfsdev.

Edit: It does however work with the S7150 and W7100 on SR-IOV mode.
Just specify “sriovdev” under mdevType in the yaml instead of “sysfsdev”.
There are examples for AMD, Intel, and Nvidia in the Examples folder.
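To illustrate the switch (only `mdevType` and its two values are confirmed above - any other keys would come from the real configs in the Examples folder, so treat this as a sketch, not the full schema):

```yaml
# SR-IOV capable card (e.g. AMD S7150 / W7100):
mdevType: sriovdev

# vfio-mdev sysfs-based card (Nvidia vGPU, Intel GVT-g):
# mdevType: sysfsdev
```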


Ahh, well, maybe it’s time to put some pressure on AMD to support this standard.

1 Like

So if I understand correctly, this requires one of three options:

  1. An older AMD S7150 card with the abandoned GIM driver
  2. An older Intel iGPU with GVT-g
  3. An Nvidia GPU that has support for Nvidia vGPU or is able to be unlocked by vgpu_unlock (so no Big Maxwell or Ampere last I checked). And then either a pricey licensed driver stack, or workarounds (with possible :pirate_flag: ) to get the software working.

It also supports the DG1 and hopefully newer. I bought one and I’m still doing testing. :slight_smile:


Yeah, it supports GIM, but I definitely don’t recommend it for the reasons you mention and several others:

Edit: If you really want GIM and a new kernel driver, I did get it working on the W7100:


Oooh, right. I had forgotten that the DG1 is “available” now.

1 Like

Bookmarking this thread. :metal:

Still a VFIO newbie in general, but yeah this sounds pretty nifty.

1 Like

And now we know why I was working on modding gpus :wink:

Great work, it’s the real deal.


It means a lot to hear you think so. :grinning_face_with_smiling_eyes:

@ArcVRArthur If I use an NVIDIA GPU do I need to have a GRID license or does it work without it? I’m planning to buy an A6000 which is based on Ampere but supports SR-IOV natively. What would I need to make it work with your solution?

1 Like

@d4n3l You don’t necessarily need to use Nvidia GRID with LibVF.IO. All the steps that are required to get a working setup with Nvidia, AMD, and Intel GPUs are contained in the guide. :slight_smile:

On Nvidia’s Ampere architecture we’re unfortunately not providing official support just yet, although we are working on adding it.

There are also some other folks working on tangential pieces of the Ampere puzzle whom I would love to speak with, but who are understandably doing their work in a more closed way. To those people: if any of you are interested in speaking with me, I’d love to talk and compare notes! We’re committed to working openly, but if you’d prefer to do so anonymously, that’s okay too.