Passthrough VM not starting after upgrading GPU

Hi all,

I’ve got a VM configured to pass through a GPU and a USB controller. I recently upgraded from a 1070 to a 1080 Ti (Asus Strix Gaming), and as soon as I did, the VM stopped launching. I’ve experienced this before, but I forget how I solved it in the past.

What’s odd is that libvirt launches a QEMU process, and that process pins one thread at 100% until you kill it. I’m using Q35 and OVMF.

I’ve checked all my IOMMU groupings and they’re correct, and I’ve tried manually specifying a GPU ROM file for the 1080 Ti, to no avail.
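For anyone else wanting to double-check this: a quick sketch for enumerating IOMMU groups straight from sysfs (the paths are the standard kernel layout; a box without IOMMU enabled will just print the fallback note):

```shell
#!/bin/sh
# Walk /sys/kernel/iommu_groups and print each group's PCI devices.
# If the glob matches nothing (IOMMU disabled or unsupported), say so.
found=0
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    [ -e "$dev" ] || continue            # glob matched nothing
    group=${dev#/sys/kernel/iommu_groups/}
    group=${group%%/*}                   # keep only the group number
    printf 'group %s: %s\n' "$group" "${dev##*/}"
    found=1
done
[ "$found" -eq 1 ] || echo "no IOMMU groups found (is IOMMU enabled in BIOS/kernel?)"
```

Pipe it through `lspci -nns` on the device addresses if you want vendor/device names next to each entry.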

I remember a few years back that Fedora shipped a bad build of OVMF in its repositories; is that still the case?

I’m not seeing any issues in the logs, so I’m kinda stuck here. If someone can give me a pointer or two, I’ll love you forever.


Additional data:

XML: http://ix.io/1MDT

DMESG: http://ix.io/1MDR

I’ve seen that before as well; I think I have the same card as you. I haven’t had to specify a ROM file for it. I assume you made the necessary changes for vfio to grab it? Also trying to remember what I’ve done for this before…

what’s dmesg say?

Yeah. It’s bound properly.
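For anyone following along, a quick sketch of how to confirm which driver owns the card (`0000:08:00.0` is just an example address; substitute your GPU’s). "vfio-pci" means vfio has it:

```shell
#!/bin/sh
# Print the driver currently bound to a PCI device. The address below is
# a placeholder; replace it with your GPU's from lspci.
dev=0000:08:00.0
if [ -L "/sys/bus/pci/devices/$dev/driver" ]; then
    driver=$(basename "$(readlink "/sys/bus/pci/devices/$dev/driver")")
    echo "$dev is bound to $driver"
else
    driver=none
    echo "$dev has no driver bound (or the device is not present)"
fi
```

`lspci -nnk -s 08:00.0` gives you the same answer in its "Kernel driver in use" line.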

I’ll get that info for you shortly.

[186172.966248] virbr0: port 2(vnet0) entered blocking state
[186172.966250] virbr0: port 2(vnet0) entered disabled state
[186172.966354] device vnet0 entered promiscuous mode
[186172.966671] virbr0: port 2(vnet0) entered blocking state
[186172.966673] virbr0: port 2(vnet0) entered listening state
[186174.999214] virbr0: port 2(vnet0) entered learning state
[186175.110390] vfio_ecap_init: 0000:08:00.0 hiding ecap 0x19@0x900
[186177.047112] virbr0: port 2(vnet0) entered forwarding state
[186177.047123] virbr0: topology change detected, propagating

When launching, that’s what I get.

Oh, I’m on a Threadripper 1950x.

can you dump the whole dmesg please?

sudo dmesg | curl -F 'f:1=<-' ix.io

http://ix.io/1MDR

EDIT: the VM XML:

http://ix.io/1MDT

I’m still looking through your dmesg and gathering info. In the meantime: do you pass a drive through directly to boot into Windows, or is it a virtual drive? If the former, does the card work when booting directly into Windows? I’ve experienced a similar problem in the past where I needed to load the card once in Windows natively, and then the VM would work. Also, what GPU are you using for your host? I’m seeing some iffy stuff with the Linux NVIDIA drivers here.

No, it’s a virtual drive.

The card works under Linux (proprietary 418 drivers) when I bind it to nvidia and remove my 1060.

1060 6g.

So I’m using the negativo17 repos for Fedora, DKMS variant. I’m also using ZFS for my spinning rust, so ZFS 0.8.1 should be loaded as well.


I’m up 2 days, so I can reboot and re-dmesg if you’d like.

Can you

sudo virsh dumpxml vmnamehere | tee vmdump.txt | curl -F 'f:1=<-' ix.io

I’m sorry I’m not being of much use, but maybe try making a new VM instance and seeing what happens? Oh yeah, and like you said a freshly booted dmesg would be good too.

Also, a friend over in the passthrough Discord is curious whether you’re on the latest BIOS. The Discord invite link is https://discord.gg/KEXfcc if you want to join; some people smarter than me in there. I’m not seeing anything overt in the dmesg you provided, though, so I’m kind of at a “throw random things at it and see what works” stage. They might have better insight.


Alright, let me reboot, then I’ll give a dmesg then. Once that’s done, I’ll try a new instance.

Motherboard or GPU? I’m not sure about the GPU.

Mobo is at 3.50 (X399 Taichi).

It’s working after a reboot. For the record, I tried rebooting it before; I’m typically not this lazy.

Well, maybe not quite working. Now I’ve got no GPU output, but the VM is starting. I wonder if it’s a bad GPU after all.

I’m going to attach a cirrus adapter to debug.

Oh, looks like an error 43 might be causing issues.

NVIDIA’s 430 driver might have updated its VM detection methods.

EDIT: they did not.
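For reference, the usual libvirt-side mitigation for error 43 is hiding the hypervisor from the driver in the domain XML. A sketch (the `vendor_id` value is arbitrary; any non-default string works):

```xml
<features>
  <hyperv>
    <vendor_id state='on' value='whatever'/>
  </hyperv>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>
```

This goes in the `<features>` element of the domain definition (`virsh edit vmnamehere`), merged with whatever Hyper-V enlightenments are already there.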

Turns out the 1080 Ti requires GeForce Experience to be installed to get drivers, for some reason. Even the Windows Update ones fail.

That doesn’t sound right :thinking: . But at least you got enough to work with now.

btw I totally called it that it would work when rebooting again :stuck_out_tongue:


I think a DDU run and a reinstall of the GPU drivers would make it work without needing to install GeForce Experience.

Possibly. I did have the Windows Update ones installed prior to this, though. :confused:

I’ve got a lot more work to do on this VM. It’s working, mostly, but I need to figure out why I have to kill the “Windows Audio Device Graph Isolation” process (audiodg.exe) every time I launch a game or it’ll just hang there.

I also need to figure out why certain games simply refuse to launch.

I also need to set up audio passthrough so I don’t have to keep switching my headphones.

Have you tried vCPU pinning? A lot of my problems seemed to go away when I did that.
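In case it helps, pinning can be done either in the domain XML (`<cputune>`/`<vcpupin>`) or with virsh. A sketch that just prints the virsh commands for review rather than running them ("win10" and the vCPU-to-core mapping are placeholders; on a 1950X you’d generally keep all the pins within one die):

```shell
#!/bin/sh
# Generate (not run) virsh vcpupin commands that map guest vCPUs 0-3
# onto host cores 8-11. --config makes the pinning persistent.
vm=win10                        # placeholder VM name
for vcpu in 0 1 2 3; do
    cmd="virsh vcpupin $vm $vcpu $((vcpu + 8)) --config"
    echo "$cmd"
done
```

Review the output, adjust the mapping to your topology (`lscpu -e` shows which cores share a die), then run the commands, or paste the equivalent `<vcpupin>` entries into the domain XML.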