Fedora 28 QEMU/KVM Problems

Hi Guys,
Yeah, I’ve been away for a while, but I need your help…here’s the deal.

So I’ve been running Fedora 26 with QEMU/KVM and hardware passthrough; the guest is Win 10 Enterprise. All has been fine, no complaints…nothing, it just flat-out works flawlessly…

The problem is that I upgraded from Fedora 26 to 28 the other day, and now the guest boots extremely slowly: it takes over 30 minutes to boot when it took less than a minute prior to the upgrade. Once the guest is up it has a ton of latency, making playing games impossible. I’ve considered building a new KVM guest to see if a fresh install of Win 10 works as expected, but before I do that I’d like your opinion on what might be causing the problem with the guest. The host system works fine, no issues at all.

Any help or opinion is welcome…

TIA

I had this issue when hardware passthrough wasn’t working correctly. Do you see any VFIO reset messages in host dmesg during guest bootup?
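To check, something like this on the host works (run while the guest is booting; needs root or sudo):

```shell
# Follow the host kernel log and pull out any VFIO messages
# (BAR restores, reset recovery, etc.) as the guest boots.
dmesg --follow | grep -i vfio
```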

[ +1.201959] vfio_bar_restore: 0000:09:00.1 reset recovery - restoring bars
[ +0.005097] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.468490] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.002741] vfio_bar_restore: 0000:09:00.1 reset recovery - restoring bars
[ +0.023972] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.002617] vfio_bar_restore: 0000:09:00.1 reset recovery - restoring bars
[ +0.020249] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.003087] vfio_bar_restore: 0000:09:00.1 reset recovery - restoring bars
[ +0.009907] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.000218] vfio-pci 0000:09:00.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0xffff
[ +0.000116] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.016058] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.000319] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.000033] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.000160] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.000885] vfio_bar_restore: 0000:09:00.1 reset recovery - restoring bars
[ +0.000213] vfio_bar_restore: 0000:09:00.1 reset recovery - restoring bars
[ +0.000033] vfio_bar_restore: 0000:09:00.1 reset recovery - restoring bars
[ +0.000148] vfio_bar_restore: 0000:09:00.1 reset recovery - restoring bars
[ +0.019837] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.000509] vfio_bar_restore: 0000:09:00.1 reset recovery - restoring bars
[ +0.015767] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.000416] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.000736] vfio_bar_restore: 0000:09:00.1 reset recovery - restoring bars
[ +0.000478] vfio_bar_restore: 0000:09:00.1 reset recovery - restoring bars
[ +0.017017] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.000884] vfio_bar_restore: 0000:09:00.1 reset recovery - restoring bars
[ +0.012550] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.000362] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.000292] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.000960] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.000320] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.000313] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.080593] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.000351] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.000293] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.000954] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.000310] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.000308] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.074976] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.000320] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.000290] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.003522] vfio_bar_restore: 0000:09:00.1 reset recovery - restoring bars
[ +0.000326] vfio_bar_restore: 0000:09:00.1 reset recovery - restoring bars
[ +0.000495] vfio_bar_restore: 0000:09:00.1 reset recovery - restoring bars
[ +0.180502] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.000300] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.000312] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.003772] vfio_bar_restore: 0000:09:00.1 reset recovery - restoring bars
[ +0.000417] vfio_bar_restore: 0000:09:00.1 reset recovery - restoring bars
[ +0.000427] vfio_bar_restore: 0000:09:00.1 reset recovery - restoring bars
[ +0.024523] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.000467] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.000328] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.001133] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.000334] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars
[ +0.000337] vfio_bar_restore: 0000:09:00.0 reset recovery - restoring bars

When I saw these errors, Windows would take forever to boot. If you had applied a kernel patch, it probably didn’t carry over after the upgrade, and if you’re using the Java ZenBridgeBaconRecovery script, maybe it isn’t running.
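Two quick things worth checking along those lines (the process name below is just a guess at whatever you launch the script as):

```shell
# A patched kernel usually carries a custom suffix in its release
# string; if this shows a stock Fedora 28 kernel, the patch is gone.
uname -r

# See whether the recovery script is still running at all.
pgrep -af ZenBridgeBaconRecovery
```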

Thanks for the reply…

I did a little more poking around last night. Looking at the console view, the guest is dumping out to a BIOS screen and hanging there, so the guest could be hosed. When I first set up this KVM guest I used SeaBIOS, but a year or so later I converted it to UEFI because it booted up much faster.
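For anyone wanting to confirm which firmware a guest is configured for, the loader line in the libvirt domain XML shows it (the domain name here is just a placeholder):

```shell
# SeaBIOS guests typically have no <loader> element at all;
# UEFI guests point at an OVMF image such as OVMF_CODE.fd.
virsh dumpxml win10 | grep -iE 'loader|nvram'
```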

I built a new KVM guest last night as well, and it works as expected, so I think I’ll just finish setting it up, install my software on it, and delete the old guest when I’m done. I’ve been out of the loop long enough to forget a lot of what I knew, and I really don’t have the time to teach myself again. Thankfully, when I built this system several years ago I kept a handwritten log with notes on what I set up and how. It took about 30 minutes to create the new container, install Windows 10 Enterprise LTSB, and do the GPU passthrough. So I have a working guest again. Actually, since I installed a fresh drive for this, I think I’ll build two containers this time with two different Win 10 builds.
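For the record, a minimal sketch of that kind of build with virt-install (every name, path, size, and PCI address below is a placeholder for my setup, not a recipe):

```shell
# Create a UEFI Win 10 guest and pass through both functions of
# the GPU (video + HDMI audio) in one go.
virt-install \
  --name win10-ltsb \
  --memory 8192 --vcpus 4 \
  --cdrom /isos/win10-enterprise-ltsb.iso \
  --disk /var/lib/libvirt/images/win10-ltsb.qcow2,size=80 \
  --os-variant win10 \
  --boot uefi \
  --hostdev 09:00.0 --hostdev 09:00.1
```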

I did have snapshots enabled and tried to revert to a snapshot I took about a month ago, but that also failed to boot. I’m thinking that between Fedora 26 and Fedora 28 enough has changed in the KVM/QEMU realm that the old guests just won’t work. I also tried booting an older Fedora 26 kernel from the GRUB menu, but that was no help either; the guest did the exact same thing. I probably should have paid more attention and upgraded from 26 to 27 last year, but you know, when something isn’t broke…you just keep on going.
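The revert itself was just the usual virsh dance, something like this (domain and snapshot names are placeholders):

```shell
# See what snapshots exist for the guest, then roll back to one.
virsh snapshot-list win10
virsh snapshot-revert win10 --snapshotname pre-upgrade
```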

At least I have the old guest’s configuration to compare to the new build, so it’s very easy to make sure that, hardware-wise, I get everything back and that all the devices I had passed to the guest, like the USB controller, NIC, etc., get installed into the new container.
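Dumping both domain definitions and diffing them makes that comparison quick (domain names are placeholders for mine):

```shell
# Dump each guest's libvirt XML, then compare the passed-through
# hardware (<hostdev> entries) between old and new.
virsh dumpxml old-win10 > old.xml
virsh dumpxml new-win10 > new.xml
diff old.xml new.xml | grep -A3 hostdev
```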

Thanks again for the reply…