@ChuckH I’m thinking you’re running into a basic Windows 7 issue: it doesn’t automatically install in UEFI mode, and the ISOs from Microsoft don’t do UEFI without a little trickery. And as far as I know, OVMF can’t do BIOS booting.
On top of that, Windows 7 doesn’t like CPUs Skylake and newer, due to Microsoft trying to push people to Win10. So you may have better luck telling the VM it is using an older architecture like Haswell (you’ll lose some CPU flags, but it should still work well).
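For what it’s worth, in the libvirt domain XML that would be a fragment along these lines (Haswell-noTSX is just an example model; check `virsh cpu-models x86_64` for what your QEMU build actually offers):

```xml
<!-- Domain XML fragment: present the guest with an older CPU model so
     Windows 7 doesn't trip over Skylake-and-newer quirks. The model name
     is an example; pick one listed by `virsh cpu-models x86_64`. -->
<cpu mode='custom' match='exact'>
  <model fallback='forbid'>Haswell-noTSX</model>
</cpu>
```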
@Whizdumb I did try installing the driver for the GT 710 while using spice/qxl and still didn’t have any success. If I boot with both, it shows the GT 710 as Code 12 (I think).
@gotomech Well, it does boot with OVMF and spice/qxl no problem, so it should be working with EFI, right? I can try changing the CPU type; USB is already set to 2.
I will try starting over using seabios later this week to see if I have any better luck.
spice/qxl was working for me too with seabios, but I couldn’t even get the ISO to boot with ovmf/uefi. So, yeah, if you can get it going with ovmf and spice/qxl, then it’s a different issue.
You did the original install with spice/qxl with ovmf/uefi, right?
Yeah I did the original install with spice/qxl and ovmf.
I can’t get the VM I set up to boot with seabios. The repair utility can’t find the drive. I’m going to have to start over with a new VM to try seabios.
Tried seabios real quick tonight and I can’t get it to boot with the GPU passed through either. Different problem though: I don’t get any output on the display.
I will have to do more searching on that problem, but it doesn’t look like seabios was the silver bullet.
As far as I know, seabios can’t do GPU passthrough; I never thought that would work. But you should be able to use virtio video from spice and get 3D acceleration from your host card once all the virtio drivers are installed.
As for ovmf, I couldn’t even get the install to work for me, so it seems like the passthrough issue is something different.
The Code 12 is only when I have the GT 710 passed through and spice/qxl added to the machine as well. From what I’ve read, that is not uncommon with Windows 7 guests.
I wasn’t able to get virtio to work. I couldn’t find drivers that worked with Windows 7, so it was limited to 1024×768 with no acceleration. I tried vmvga too but couldn’t find working drivers for it either (the VMware drivers are for newer emulated hardware).
@TheCakeIsNaOH I’ll look into VGA arbitration with seabios. I haven’t tried anything fancy with it yet.
If you run the virtio-win-gt-x64.exe (or x86, depending) at the root of the ISO, it should install the virtio drivers for all possible virtio devices, even if they aren’t currently installed.
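For anyone doing this on a fresh VM: the driver ISO just needs to be attached as a CD-ROM so the installer is reachable from inside the guest. A sketch of the libvirt disk entry (the ISO path and target dev here are examples; adjust to your setup):

```xml
<!-- Attach the virtio-win driver ISO as a CD-ROM so virtio-win-gt-x64.exe
     can be run from inside the guest. The path and target are examples. -->
<disk type='file' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source file='/var/lib/libvirt/images/virtio-win.iso'/>
  <target dev='sdb' bus='sata'/>
  <readonly/>
</disk>
```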
I think it is something with my setup or hardware, not Windows 7 specifically.
I tried a Windows 10 VM and only got marginally better results.
With everything default, I can get Windows 10 to boot to the GPU, but it’s locked at low resolution and the card shows Code 43.
I checked the GPU I am trying to pass; it does support EFI:
./rom-parser vbios.img
Valid ROM signature found @0h, PCIR offset 190h
PCIR: type 0 (x86 PC-AT), vendor: 10de, device: 128b, class: 030000
PCIR: revision 0, vendor revision: 1
Valid ROM signature found @f600h, PCIR offset 1ch
PCIR: type 3 (EFI), vendor: 10de, device: 128b, class: 030000
PCIR: revision 3, vendor revision: 0
EFI: Signature Valid, Subsystem: Boot, Machine: X64
Last image
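For anyone wanting to run the same check: this is roughly how I dumped the vBIOS that rom-parser reads above (the PCI address is an example; find yours with lspci, and note the rom file needs root to read):

```shell
#!/bin/sh
# Sketch: dump a GPU's option ROM so rom-parser can inspect it.
# The PCI address is an example -- substitute your card's (see lspci).
# Reading the rom file requires root.
DEV=/sys/bus/pci/devices/0000:01:00.0
if [ -e "$DEV/rom" ]; then
  echo 1 > "$DEV/rom"         # enable reads of the ROM BAR
  cat "$DEV/rom" > vbios.img  # copy the option ROM out
  echo 0 > "$DEV/rom"         # disable it again
else
  echo "no such device; adjust DEV to your GPU's PCI address"
fi
```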
I went back to just qxl/spice, uninstalled the NVIDIA drivers, and was able to get the VM to boot with all the above options. When I installed the driver (version 456.55) I lost video.
As weird as this will sound, I think your vendor_id value may be too short; it should be 12 characters. I’ve always used “Something123” with no issues; I had weird problems with mine for a while until I changed it to that from some other suggested values.
But try just adding an additional letter to your value, or try mine, and see if it helps.
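In case it helps, here’s where that value lives in the domain XML (a fragment; “Something123” is just my placeholder string, any 12-character value should behave the same):

```xml
<!-- Domain XML fragment: spoof the hypervisor vendor string reported to
     the guest so the NVIDIA driver doesn't refuse to load in a VM.
     "Something123" is exactly 12 characters. -->
<features>
  <hyperv>
    <vendor_id state='on' value='Something123'/>
  </hyperv>
</features>
```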
I have gotten passthrough working with a Windows 10 guest and a Linux guest. It seems that manually setting your CPU model conflicts with the kvm hidden state configuration.
I could get Win10 and Linux to boot and function with the card passed through using the qemu options from:
I didn’t even need all the qemu arguments called out either, only -cpu host,kvm=off. The kvm=off should be taken care of in the virt-manager xml:
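For reference, this is the libvirt equivalent of kvm=off (a fragment of the features section in the domain XML):

```xml
<!-- Domain XML fragment: hide the KVM hypervisor signature from the
     guest, the libvirt equivalent of passing -cpu ...,kvm=off to qemu. -->
<features>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>
```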
I picked up a Quadro card to use for my VMs. It worked, piece of cake, with W10; I don’t even need to hide the VM state or the vendor ID from the Quadro driver. Works so well I almost couldn’t believe it!
Windows 7, still no luck. I tried a few different things (the combo that worked with Win10) but I get stuck at the “Starting Windows” logo. Back to searching I go… Maybe I’ll find something new.
EDIT: I set up RDP to connect to the VM and see whether boot was successful. Looks like the “hang at boot” with the card passed through is not actually a hang. It boots and I can log in via RDP. Device Manager shows the card as Code 12. No virtual video or spice server is set up on the VM. Kinda confused.
On my card there were 4 devices that made up the graphics card on 0000:4c:00:
0 video
1 audio
2 usb 3.1 controller
3 serial bus controller for USB-C
I had to add one of the devices normally or it would give me an error and the VM would not start. I added the last device normally and all the others via qemu commands and it works perfectly!
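To give an idea of what that ends up looking like, here’s a sketch of the relevant domain XML (the 4c:00.x addresses are my card’s; substitute yours, and note the xmlns:qemu attribute is required for qemu:commandline to be parsed):

```xml
<!-- Domain XML sketch: function .0 of the GPU is attached the normal
     libvirt way (not shown); the remaining functions are passed as raw
     QEMU device arguments. PCI addresses are from my card. -->
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <!-- ...rest of the domain definition... -->
  <qemu:commandline>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=4c:00.1'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=4c:00.2'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=4c:00.3'/>
  </qemu:commandline>
</domain>
```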
Can you share your XML config file? I’m trying to do the same but it’s not working. I’m using QEMU 6.
edit1: I was able to make it work by not using virt-manager and using only qemu. It seems virt-manager generates some config that will not work with Windows 7 EFI; interestingly, the same config works with Win10 EFI. From what I read, virt-manager generates a PCIe bridge, and that’s why video cards attached to that bridge don’t work. I tried to add the video card manually with the qemu parameters you gave as an example, but it gives me access denied. That’s why I want to see your full XML config file, if you can share it!
I don’t know how I ended up with so many PCIe devices, but I can confirm this still works for me.
Also note: I had to add one of the PCI devices from my video card the way you normally would via virt-manager to get it to work. (At least I think that’s why I did it that way. It’s been a while since I did this.)
Can you share your qemu command line for this?
Are y’all able to get this to still work on QEMU 6.2?
Every combination of PCI buses I have tried changing in libvirt either got Code 12 on AMD or no boot.
Couldn’t get any further on command-line qemu with 440FX, as it seems they removed some of the ability to create a root PCIe bus with ioh3420, but I do have a new Q35 VM that is patched to boot with CSM off. So maybe I will have more luck with the QEMU command line.
So currently I either have to use qemu from early 2020 or use an old 2020 AMD driver (only with a 440FX VM), as everything newer gives me Code 12.
I’ve been trying the same thing as posted here, trying to get GPU passthrough to work with a Windows 7 guest. I’m using a GTX 970, an Asus Q87M-E motherboard, and Ubuntu Server 20.04 (so no virt-manager). I encountered the Code 12 error as before, and then tried the solution posted by @ChuckH. However, I now encounter an error:
error: internal error: qemu unexpectedly closed the monitor: 2022-07-23T17:58:15.608209Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,id=hostpci0.0,bus=pcie.0,addr=0x10.0,multifunction=on,x-vga=on: vfio 0000:01:00.0: failed to open /dev/vfio/1: No such file or directory
Looking around, it seems to be because it can’t access the GPU from the host, and the posted solution is usually to isolate the GPU from the host. However, I have already done so, and under lspci -nnv I see vfio-pci as the driver that’s in use. Has anyone also encountered this error under these conditions?
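In case it helps anyone debugging the same thing, here’s a rough checklist script for that “failed to open /dev/vfio/N” error (the PCI address is the one from my error message; substitute yours):

```shell
#!/bin/sh
# Sanity checks for "vfio ...: failed to open /dev/vfio/N".
# 0000:01:00.0 is the GPU address from my error; substitute yours (lspci).
DEV=0000:01:00.0
SYS=/sys/bus/pci/devices/$DEV
{
  if [ ! -d /sys/kernel/iommu_groups ] || \
     [ -z "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ]; then
    # No groups at all: IOMMU is off in firmware or on the kernel cmdline
    echo "IOMMU looks disabled: check VT-d/AMD-Vi in BIOS and intel_iommu=on or amd_iommu=on"
  elif [ -e "$SYS/iommu_group" ]; then
    GROUP=$(basename "$(readlink "$SYS/iommu_group")")
    echo "GPU $DEV is in IOMMU group $GROUP"
    # /dev/vfio/<group> only appears once every device in the group is
    # bound to vfio-pci; one leftover device keeps it from showing up
    if [ -e "/dev/vfio/$GROUP" ]; then
      echo "/dev/vfio/$GROUP exists"
    else
      echo "/dev/vfio/$GROUP missing: another device in group $GROUP is likely still on its host driver"
    fi
  else
    echo "device $DEV not found; adjust DEV"
  fi
} | tee vfio-check.txt
```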