Are we getting closer to being able to share graphics cards between Linux and Windows?

So, about a year ago I looked into setting up a Windows VM with a KVM switch, using PCIe passthrough for the graphics card. I'm currently running SLI, so I don't believe there are enough PCIe 3.0 lanes to support two SLI configs in my system. What I'm curious about is whether there have been any recent advancements in getting a VM to share the GPU with another OS, and not just virtualize the GPU.

Actually, Linux is working hard on a total solution. The drivers are already better on Linux for AMD and Intel (nVidia is a non-issue now for security- and human-rights-conscious users because nVidia chose to go that route; even on Windows, nVidia users have to pay a subscription to get the better drivers, and since that isn't an option on Linux, nVidia is a bad choice overall). The new drivers, together with kernel 4.10 coming in a few weeks to a few months at most, will take care of that: basically, you buy your hardware for Linux, and the kernel plus good open source driver support lets you use all of it in a Windows guest out of the box, without difficult custom KVM configs and such; everything just works with basically one click. This is already reality, even though some bugs still have to be ironed out to expand hardware support toward slightly older cards, because support for those is brand spanking new.
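For contrast, this is roughly what the "difficult custom kvm configs" route looks like today: hand-binding the guest card to vfio-pci through sysfs before the VM starts. Just a sketch; the PCI address is a placeholder, and it assumes the vfio-pci module is loaded and the IOMMU is enabled:

```python
# Rebind a PCI GPU to vfio-pci by hand. Run as root.
# Find your card's address with `lspci -nn`; this one is a placeholder.

GPU = "0000:01:00.0"

def write(path, value):
    with open(path, "w") as f:
        f.write(value)

# Detach the card from whatever host driver currently owns it, if any.
try:
    write(f"/sys/bus/pci/devices/{GPU}/driver/unbind", GPU)
except FileNotFoundError:
    pass  # no driver was bound

# Tell the kernel to hand this device to vfio-pci on the next probe.
write(f"/sys/bus/pci/devices/{GPU}/driver_override", "vfio-pci")
write("/sys/bus/pci/drivers_probe", GPU)
```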

But basically, 2017 is the year when Windows is run as a userspace application in Linux and people take back control of their hardware, without even needing to understand how things work in Linux, because all the elements are finally combined to make that possible out of the box... except for nVidia of course; that will probably never happen.

3 Likes

At what point, generation-wise, are Nvidia cards no longer able to be passed through? I have a GTX 1060 and am thinking of doing this.

1 Like

There is no high-performance KMS driver for nVidia, and you can't unbind/bind a non-KMS driver because your session won't recover.

Actually:

https://www.archlinux.org/packages/extra/x86_64/nvidia-dkms/

lol, DKMS stands for Dynamic Kernel Module Support (it came out of Dell); it's an automatic versioning system for kernel modules. KMS drivers are "Kernel Mode Setting" drivers, meaning the kernel can detect the display mode and set it at boot, immediately before the session starts. If you use proprietary graphics drivers, the first thing you do is tell the kernel not to try that, because it can't (that's what the "nomodeset" parameter is for). The problem is that if the kernel can't set the mode, then when you unbind and rebind a graphics adapter, the session will crash and can't be recovered, because there is no mode detection and setting mechanism.
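To make the unbind/rebind part concrete, this is the cycle in question, done through sysfs. A minimal sketch: the PCI address and host driver name are placeholders, and it assumes your session is running on a second GPU driven by a KMS driver:

```python
# Detach a GPU from its host KMS driver, let a guest own it, then hand
# it back. With a KMS driver the kernel can re-detect and re-set the
# mode on rebind; with a non-KMS blob the session never recovers.
# Run as root; both constants below are placeholders.

import time

GPU = "0000:01:00.0"   # PCI address of the guest card
DRIVER = "amdgpu"      # host KMS driver; nouveau or i915 work the same way

def write(path, value):
    with open(path, "w") as f:
        f.write(value)

write(f"/sys/bus/pci/drivers/{DRIVER}/unbind", GPU)

time.sleep(1)          # ...the VM would own the card in this window...

write(f"/sys/bus/pci/drivers/{DRIVER}/bind", GPU)
```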

1 Like

Ah, I misunderstood you. I thought you were saying you couldn't pass through an nVidia GPU at all, which requires the DKMS version of the driver to accomplish.

DKMS is just a kernel module management system, often used as an alternative to akmod, the automatic versioning variant of kmod, the most widely used kernel module management package. DKMS does nothing more than rebuild and load the matching version of the binary blobs into the kernel whenever either the (untainted) kernel or the blobs themselves are updated. KMS drivers do not require binary blobs; they work with the untainted kernel. So you don't need DKMS to pass through an nVidia card running on the nouveau driver, because that driver is a KMS driver. If you're not on the nouveau driver and want to pass through an nVidia card, you have to use the old system of passing through a dedicated card, and it will never really be stable, and you'll never get full performance despite using the proprietary driver.
The new system introduced with kernel 4.10 lets you pass through all the hardware that works on the host, with full performance and full security features. It is a game changer lol
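If you're not sure which driver currently owns your cards (nouveau versus the blob), a read-only sysfs walk will tell you. Minimal sketch; it assumes nothing beyond a Linux host:

```python
# Print every display controller on the PCI bus and the driver bound to it.

import os

PCI = "/sys/bus/pci/devices"

for dev in sorted(os.listdir(PCI)):
    with open(f"{PCI}/{dev}/class") as f:
        devclass = f.read().strip()
    if not devclass.startswith("0x03"):      # class 0x03xxxx = display
        continue
    link = f"{PCI}/{dev}/driver"
    driver = os.path.basename(os.readlink(link)) if os.path.islink(link) else "none"
    print(f"{dev}: {driver}")
```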

1 Like

word

So Nvidia passthrough will work with the proprietary driver on kernel 4.10?

I mean, it works on 4.8, so I don't see why not. You still need another card (an iGPU works too) for your host system though, so as long as you aren't on an FX chip, you're good to go.
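If you want to sanity-check whether your board can actually isolate the guest card, listing the IOMMU groups is the quickest way, since passthrough hands a whole group to the guest at once. Small sketch; empty or missing output means the IOMMU isn't enabled in firmware or on the kernel command line (e.g. intel_iommu=on):

```python
# List IOMMU groups and the PCI devices in each one.

import os

GROUPS = "/sys/kernel/iommu_groups"

if not os.path.isdir(GROUPS) or not os.listdir(GROUPS):
    print("no IOMMU groups found: enable VT-d/AMD-Vi and the kernel IOMMU")
else:
    for group in sorted(os.listdir(GROUPS), key=int):
        devices = sorted(os.listdir(f"{GROUPS}/{group}/devices"))
        print(f"group {group}: {', '.join(devices)}")
```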

I don't exactly want to install 4 cards for 2-way SLI.

I know the current nVidia driver has framebuffer support, and I believe KMS as well, or more likely a proprietary version of KMS. If the card has KMS support, can I just hit a KVM switch, then unbind and rebind the SLI cards to a Windows machine?

I do hope AMD has a ton of success forcing NVIDIA to open-source the driver.

Yeah, I can see how that could be unreasonable. I use my nVidia cards for compute and have an AMD card for the VM, so I've never tried the scenario you're describing.

sauce?

Pff, I wanna use a GPU as a CPU!

Here you go. Just pretend it's a gpu

Norp :U

You CAN use bbswitch to work around the crappy KMS stuff, the limitation being that you're confined to the integrated GPU's IO on the host.
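For reference, bbswitch is driven through a tiny proc interface. A minimal sketch, assuming the bbswitch module is loaded and you're root:

```python
# Power the discrete GPU off/on through bbswitch's proc file.

BBSWITCH = "/proc/acpi/bbswitch"

def set_dgpu(on: bool):
    with open(BBSWITCH, "w") as f:
        f.write("ON" if on else "OFF")

def dgpu_state() -> str:
    with open(BBSWITCH) as f:
        return f.read().strip()   # e.g. "0000:01:00.0 OFF"

set_dgpu(False)     # cut power to the discrete card before handing it off
print(dgpu_state())
```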

I'm sure nVidia will open-source the driver for KMS if some rendering farm ends up needing it.

Nvidia doing open source? Next you'll tell me that Oracle will open their database. Not going to happen; Nvidia is not the company to open-source anything.

2 Likes