Update: TIFU


Did you know you had to set kvm=off for the Linux Nvidia drivers in a VM too? Well, that worked for 396.24.10. However, upon upgrading to 396.51.02, I believe Nvidia has added additional detection code that sees past kvm=off, because it just won’t show any picture anymore.
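For anyone replicating this setup: with libvirt, kvm=off is usually applied via the KVM-hiding feature rather than by editing the QEMU command line directly. A minimal, illustrative fragment (not from the original post) looks like:

```xml
<!-- Hides the KVM hypervisor signature from the guest; libvirt translates
     this into kvm=off on QEMU's -cpu flag. Goes inside the <domain> element. -->
<features>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>
```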

DKMS builds fine, the xorg.conf updating between drivers works fine, and the modules even load in the kernel; after uninstalling and attempting to reinstall without rebooting the VM, the installer warns that the DKMS modules are already loaded. But it looks like Nvidia now detects that you’re running a VM even with kvm=off applied and refuses to initialize the driver. On Kubuntu 18.04, a manual Nvidia installation of this driver will now result in kvm=off not working.

If you need to run Ubuntu/Kubuntu in a VM with GPU passthrough on a Vulkan driver, stick to 396.24.10. Nvidia has caught on now, and I have no doubt the Vulkan beta drivers on Windows will suffer the same fate.

I double-confirmed by installing the old driver again, and YES INDEED, it loaded without issue.

Where’s Navi when we need it?


Well shit, I guess AMD might be the only way forward. Or, as you stated, stay on earlier drivers, though in a year or so you might want to update for various reasons.

The link for 396.24.10 is still live for now:

https://developer.nvidia.com/linux-3962410

I’m not doing passthrough yet, but this makes me hesitant to charge the credit card at this point. I will keep my eye on the threads here and see where this goes.

We’ll just have to see how 7nm Navi does, and if AMD opens up to SR-IOV on consumer Navi cards.


Did you set a random vendor ID in your VM’s config? Last time I tried passthrough with an Nvidia card to a Linux guest, that was needed in addition to kvm=off when using the proprietary drivers.

I thought that was Windows-only. That’s a Hyper-V enlightenment. And it worked without that on 396.24.10.

If I added that and did a virsh define, it would put that in the Hyper-V section of the XML, and Ubuntu doesn’t use Hyper-V.
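For context, the vendor ID trick being discussed is this hyperv element in the libvirt domain XML (the value is arbitrary, up to 12 characters; 'randomid' here is just a placeholder). As noted, it only matters for Windows guests:

```xml
<!-- Spoofs the Hyper-V vendor ID string so the Windows Nvidia driver
     doesn't bail out with Code 43. Irrelevant for Linux guests. -->
<features>
  <hyperv>
    <vendor_id state='on' value='randomid'/>
  </hyperv>
</features>
```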

We’ll see when the next workaround comes around…

I must be mistaken then :\ Sorry. A few months ago I remember adding it to my config when I was having issues setting up a Linux guest with passthrough, but since it goes under the Hyper-V section, you’re right - it wouldn’t have been used at all.

That’s very annoying if they have made more changes to prevent people from passing through non-Quadro GPUs.

Someone using QEMU on Arch claims this works:

<qemu:commandline>
    <qemu:arg value='-cpu'/>
    <qemu:arg value='host,kvm=off,-hypervisor'/>
</qemu:commandline>

There could have been Hyper-V enlightenments too, but IDK… It worked on Arch; that doesn’t mean it will work on Fedora. When I added that -hypervisor argument, the guest kernel just always panicked.
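One caveat in case anyone copies that snippet: libvirt only honors <qemu:commandline> if the QEMU XML namespace is declared on the root <domain> element; otherwise virsh define will reject or drop it. An illustrative skeleton:

```xml
<!-- The xmlns:qemu declaration is required for qemu:commandline to survive
     a virsh define. -->
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <!-- ...rest of the domain definition... -->
  <qemu:commandline>
    <qemu:arg value='-cpu'/>
    <qemu:arg value='host,kvm=off,-hypervisor'/>
  </qemu:commandline>
</domain>
```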

Edit: Just confirmed, the Hyper-V enlightenments do nothing to make the driver work for Linux guests.

Holy shit. This got enough attention to get reposted on a harassment subreddit.

“You’re an Nvidiot for choosing Nvidia on Linux, you idiot.”

“You’re using proprietary blobs and software, and you’re complaining about MUH FREEDOMS? How hypocritical! Oh the fucking irony…”

Jesus Christ.

Are you surprised on a “harassment subreddit”? :stuck_out_tongue:

Shows how toxic people can STILL be. Fucking hell.

Update:

TIFU.

API Mismatch in my syslog.

SHIT.

So I am good to buy my parts now?


Yeah, I messed up updating drivers. I didn’t properly check DKMS and the initramfs.
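For anyone hitting the same thing, the checks I skipped look roughly like this on Ubuntu/Kubuntu (illustrative commands; adjust paths and driver versions to your system):

```shell
# Confirm DKMS actually built the nvidia module for the running kernel
dkms status nvidia

# See which nvidia module ended up inside the current initramfs
lsinitramfs "/boot/initrd.img-$(uname -r)" | grep nvidia

# If the versions disagree, rebuild the initramfs and reboot
sudo update-initramfs -u

# An "API mismatch" in syslog means the kernel module and the user-space
# driver libraries are from different driver versions
grep -i "api mismatch" /var/log/syslog
```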


This whole thread is why we need authoritative information on this subject.


I appreciate all the info and time you have given to help this part of the forum. I do wish there were a decent way, forum-wise, to handle this.


The simplest one is not to phrase your helpdesk questions like new bug reports.

@FurryJackman Should I just close this thread or leave it open?