Blank screen KVM PCI Passthrough win10

I posted this on the Manjaro newbie corner. I am new to all this, but I seem to be in a kind of precarious situation, where I can’t find any posts about this specific problem. I got referred to this forum, where I will be honest and say I haven’t spent much time yet, but can anyone please help me with this situation?

I edited the link in for you, and you should be able to post links now.

Is the 2080 the boot GPU, the one that shows the UEFI splash screen?

Yes it is… Most of the time. Sometimes it comes up on the 1080. Very rarely though.

OK, there are some extra things you have to do if you are passing through the boot GPU, see this guide.

From what I can tell there is no way to set my primary GPU in my BIOS, so I made an attempt at adding “video=vesafb:off,efifb:off” to my grub config, as well as running this command:

sudo nvidia-xconfig --xconfig=/dev/null --output-xconfig=/etc/X11/xorg.conf

It did not seem to do much; my screens still light up but stay blank.
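For reference, a sketch of what the kernel command line in /etc/default/grub might look like; note that each video= argument needs to be passed separately rather than comma-joined (the exact set of other flags is an assumption here; keep whatever you already have):

```shell
# /etc/default/grub (sketch; keep your existing flags, these are assumptions)
GRUB_CMDLINE_LINUX_DEFAULT="quiet video=vesafb:off video=efifb:off"
```

After editing, regenerate the config with `sudo grub-mkconfig -o /boot/grub/grub.cfg` (the usual Manjaro path) and reboot.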

A bit of an update, however: I downloaded and installed the DCH driver from Nvidia, and now my 2080 shows up in Device Manager. However, it most often goes into Code 43 mode.


And whenever I try to launch the Nvidia Control Panel, it simply never shows anything, even if I run it as admin.

You do probably need to pass the vBIOS to the VM as well.
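With libvirt, that usually means adding a `<rom file=…/>` element to the GPU’s hostdev in the domain XML; a sketch with a placeholder path (the source address matches the 2080 at host 0b:00.0):

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x0b' slot='0x00' function='0x0'/>
  </source>
  <!-- placeholder path; point this at your dumped/downloaded vBIOS -->
  <rom file='/path/to/vbios.rom'/>
</hostdev>
```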

I have been working at this for a while, trying to add the vBIOS… for which I’m not seeing an error in the logs. Now when I start the VM my monitors no longer light up, and they do not react at all anymore. Here is the log:

$ sudo cat /var/log/libvirt/qemu/win10.log

LC_ALL=C \
PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/var/lib/snapd/snap/bin \
HOME=/var/lib/libvirt/qemu/domain-4-win10 \
XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-4-win10/.local/share \
XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-4-win10/.cache \
XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-4-win10/.config \
QEMU_AUDIO_DRV=spice \
/usr/bin/qemu-system-x86_64 \
-name guest=win10,debug-threads=on \
-S \
-object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-4-win10/master-key.aes \
-blockdev '{"driver":"file","filename":"/usr/share/edk2-ovmf/x64/OVMF_CODE.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"}' \
-blockdev '{"driver":"file","filename":"/var/lib/libvirt/qemu/nvram/win10_VARS.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"}' \
-machine pc-q35-5.0,accel=kvm,usb=off,vmport=off,dump-guest-core=off,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format \
-cpu EPYC-IBPB,x2apic=on,tsc-deadline=on,hypervisor=on,tsc-adjust=on,clwb=on,umip=on,stibp=on,arch-capabilities=on,ssbd=on,cmp-legacy=on,perfctr-core=on,clzero=on,wbnoinvd=on,amd-ssbd=on,virt-ssbd=on,rdctl-no=on,skip-l1dfl-vmentry=on,mds-no=on,pschange-mc-no=on,monitor=off,hv-time,hv-relaxed,hv-vapic,hv-spinlocks=0x1fff,kvm=off \
-m 16000 \
-overcommit mem-lock=off \
-smp 16,maxcpus=32,sockets=1,dies=1,cores=16,threads=2 \
-uuid d8ac75b7-2dde-4a9f-94d4-4968af96d9ab \
-no-user-config \
-nodefaults \
-chardev socket,id=charmonitor,fd=30,server,nowait \
-mon chardev=charmonitor,id=monitor,mode=control \
-rtc base=localtime,driftfix=slew \
-global kvm-pit.lost_tick_policy=delay \
-no-hpet \
-no-shutdown \
-global ICH9-LPC.disable_s3=1 \
-global ICH9-LPC.disable_s4=1 \
-boot strict=on \
-device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 \
-device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 \
-device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 \
-device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 \
-device pcie-root-port,port=0x14,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x4 \
-device pcie-root-port,port=0x15,chassis=6,id=pci.6,bus=pcie.0,addr=0x2.0x5 \
-device pcie-root-port,port=0x16,chassis=7,id=pci.7,bus=pcie.0,addr=0x2.0x6 \
-device pcie-root-port,port=0x17,chassis=8,id=pci.8,bus=pcie.0,addr=0x2.0x7 \
-device qemu-xhci,p2=15,p3=15,id=usb,bus=pci.2,addr=0x0 \
-blockdev '{"driver":"file","filename":"/run/media/matrucious/VM/win10.qcow2","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-2-format","read-only":false,"driver":"qcow2","file":"libvirt-2-storage","backing":null}' \
-device virtio-blk-pci,bus=pci.3,addr=0x0,drive=libvirt-2-format,id=virtio-disk0,bootindex=1 \
-blockdev '{"driver":"file","filename":"/home/matrucious/Downloads/virtio-win-0.1.171.iso","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-1-format","read-only":true,"driver":"raw","file":"libvirt-1-storage"}' \
-device ide-cd,bus=ide.2,drive=libvirt-1-format,id=sata0-0-2 \
-netdev tap,fd=32,id=hostnet0,vhost=on,vhostfd=33 \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:7e:80:50,bus=pci.1,addr=0x0 \
-spice port=5900,addr=127.0.0.1,disable-ticketing,image-compression=off,seamless-migration=on \
-device cirrus-vga,id=video0,bus=pcie.0,addr=0x1 \
-chardev spicevmc,id=charredir0,name=usbredir \
-device usb-redir,chardev=charredir0,id=redir0,bus=usb.0,port=1 \
-chardev spicevmc,id=charredir1,name=usbredir \
-device usb-redir,chardev=charredir1,id=redir1,bus=usb.0,port=2 \
-device 'vfio-pci,host=0000:0b:00.0,id=hostdev0,bus=pci.5,addr=0x0,romfile=/home/matrucious/Documents/VM stuff/Zotac.RTX2080.8192.181009.rom' \
-device 'vfio-pci,host=0000:0b:00.1,id=hostdev1,bus=pci.6,addr=0x0,romfile=/home/matrucious/Documents/VM stuff/Zotac.RTX2080.8192.181009.rom' \
-device 'vfio-pci,host=0000:0b:00.2,id=hostdev2,bus=pci.7,addr=0x0,romfile=/home/matrucious/Documents/VM stuff/Zotac.RTX2080.8192.181009.rom' \
-device 'vfio-pci,host=0000:0b:00.3,id=hostdev3,bus=pci.8,addr=0x0,romfile=/home/matrucious/Documents/VM stuff/Zotac.RTX2080.8192.181009.rom' \
-device usb-host,hostbus=5,hostaddr=3,id=hostdev4,bus=usb.0,port=3 \
-device usb-host,hostbus=5,hostaddr=2,id=hostdev5,bus=usb.0,port=4 \
-device virtio-balloon-pci,id=balloon0,bus=pci.4,addr=0x0 \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on
2020-06-16 21:41:43.000+0000: Domain id=4 is tainted: high-privileges
2020-06-16T21:41:45.076548Z qemu-system-x86_64: -device vfio-pci,host=0000:0b:00.0,id=hostdev0,bus=pci.5,addr=0x0,romfile=/home/matrucious/Documents/VM stuff/Zotac.RTX2080.8192.181009.rom: Failed to mmap 0000:0b:00.0 BAR 1. Performance may be slow

Does anyone have some inclination of what the issue may be here?

You are missing this in your -cpu part, which is important for Nvidia:

-cpu hv_vendor_id=whatever,-hypervisor
-M q35,kernel_irqchip=on

WITHOUT hv_vendor_id=whatever, Nvidia WILL detect it’s running under a VM, hence giving you the Code 43 middle finger.
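Since this VM is managed by libvirt, these settings go in the domain XML rather than on the raw QEMU command line; a sketch based on the hv-* flags already in the log above (kvm=off in the log is already hiding the KVM signature, but the Hyper-V vendor ID also matters for Nvidia):

```xml
<features>
  <acpi/>
  <apic/>
  <hyperv>
    <relaxed state='on'/>
    <vapic state='on'/>
    <spinlocks state='on' retries='8191'/>
    <vendor_id state='on' value='whatever'/>
  </hyperv>
  <kvm>
    <hidden state='on'/>
  </kvm>
  <!-- corresponds to kernel_irqchip=on -->
  <ioapic driver='kvm'/>
</features>
```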

And double check that the card is UEFI ready; the x570 definitely is not, but you can patch it.

For the “resource busy” messages you need to do mad unbinding everywhere.

And add this to the kernel command line:

video=vesafb:off video=efifb:off

And also, in the BIOS you can change the PCI/PEG order.

Sorry for my ignorance; I am rather new at this.

Is this something I change in the XML?

The “resource busy” messages disappeared when I passed the vBIOS, and came back when I added the hv_vendor_id. So what do you mean by unbindings?

And I’ll look around for the UEFI and PCI order stuff, but last time I tried I couldn’t find any PCI order settings.

I also already had “video=vesafb:off video=efifb:off” in my grub config.
I added this to my XML config, and now Windows just boot-loops until it boots into Startup Repair.

Let’s get things a bit organized first.

The fact that Windows shows up is a good sign; your QEMU/VFIO is on the right track.

The boot loop means m$$ isn’t recognizing a driver; dropping you into recovery is a symptom of that.

I think trying to install m$$ right into the VM with the GPU attached is a bit harsh, knowing how m$$ loves to reboot multiple times for every new driver.

Usually it’s better to make a barebones install with ‘-vga qxl’ and keep adding the other virtio drivers one by one, rebooting after each.

I don’t use libvirt, but yours looks like a mess: it’s trying to start both ‘-vga cirrus’ AND GPU passthrough, mixing ancient hardware (‘cirrus-vga’) with modern (‘-M q35’, ‘pcie-root-port’). This is most likely confuzzling m$$ as hell.

For the gpu use “-device ioh3420,etc,…”

Also, I am not sure it’s correct to have FOUR vfio-pci devices with the same romfile AND at the same “addr=0x0”, but in different PCI slots!! It looks like you’re taking a single real physical PCIe slot and dividing it into four inside QEMU.

From what I’ve read online, the GPU has two device IDs: GPU and audio. So in QEMU you create one PCIe slot and add those two into it. That way, at least in Linux, they appear in ‘lspci’ just as they do on the host.
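For what it’s worth, one common way to keep a card’s functions together in the guest is to place them on the same guest slot with multifunction=‘on’, rather than spreading them over separate root ports; a hedged sketch for the first two functions (the guest bus 0x05 here is an arbitrary choice):

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x0b' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x0b' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
</hostdev>
```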

Right now, without some logs, I don’t really know where to help. If you can provide the following info it’ll make things easier.

“lspci -k”

At the moment, you want to pass the RTX 2080 and it is NOT in the first PCIe slot, right?

Anyways, you’ve got fancy hardware, and in the end your VFIO VM should work even better than on real hardware.

regards.

Thanks for the clarification. To answer your question, yes, my RTX 2080 is in PCIe slot 1, and my GTX 1080 is in slot 2.
And it is correct that I’m passing 4 PCI devices from the same PCIe slot, because there are four functions: GPU, audio, USB 3.1 controller, and USB-C controller. However, I have tried reinstalling Windows with just two PCI devices added (GPU and audio).
Here is the command output you requested.

lspci -k

    00:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Root Complex
            Subsystem: ASUSTeK Computer Inc. Starship/Matisse Root Complex
    00:00.2 IOMMU: Advanced Micro Devices, Inc. [AMD] Starship/Matisse IOMMU
            Subsystem: ASUSTeK Computer Inc. Starship/Matisse IOMMU
    00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    00:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
            Kernel driver in use: pcieport
    00:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
            Kernel driver in use: pcieport
    00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
            DeviceName:  Onboard IGD
    00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
            Kernel driver in use: pcieport
    00:03.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
            Kernel driver in use: pcieport
    00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    00:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
            Kernel driver in use: pcieport
    00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
            Kernel driver in use: pcieport
    00:08.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
            Kernel driver in use: pcieport
    00:08.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
            Kernel driver in use: pcieport
    00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 61)
            Subsystem: ASUSTeK Computer Inc. FCH SMBus Controller
            Kernel driver in use: piix4_smbus
            Kernel modules: i2c_piix4, sp5100_tco
    00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
            Subsystem: ASUSTeK Computer Inc. FCH LPC Bridge
    00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 0
    00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 1
    00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 2
    00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 3
            Kernel driver in use: k10temp
            Kernel modules: k10temp
    00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 4
    00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 5
    00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 6
    00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 7
    01:00.0 Non-Volatile memory controller: Phison Electronics Corporation E16 PCIe4 NVMe Controller (rev 01)
            Subsystem: Phison Electronics Corporation E16 PCIe4 NVMe Controller
            Kernel driver in use: nvme
    02:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse Switch Upstream
            Kernel driver in use: pcieport
    03:01.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
            Kernel driver in use: pcieport
    03:03.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
            Kernel driver in use: pcieport
    03:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
            Kernel driver in use: pcieport
    03:05.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
            Kernel driver in use: pcieport
    03:08.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
            Kernel driver in use: pcieport
    03:09.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
            Kernel driver in use: pcieport
    03:0a.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
            Kernel driver in use: pcieport
    04:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM961/PM961
            Subsystem: Samsung Electronics Co Ltd NVMe SSD Controller SM961/PM961
            Kernel driver in use: nvme
    05:00.0 Network controller: Intel Corporation Wi-Fi 6 AX200 (rev 1a)
            Subsystem: Intel Corporation Wi-Fi 6 AX200
            Kernel driver in use: iwlwifi
            Kernel modules: iwlwifi
    06:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller
            Subsystem: ASUSTeK Computer Inc. RTL8125 2.5GbE Controller
            Kernel driver in use: r8169
            Kernel modules: r8169
    07:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
            Subsystem: ASUSTeK Computer Inc. I211 Gigabit Network Connection
            Kernel driver in use: igb
            Kernel modules: igb
    08:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
            Subsystem: ASUSTeK Computer Inc. Starship/Matisse Reserved SPP
    08:00.1 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
            Subsystem: ASUSTeK Computer Inc. Matisse USB 3.0 Host Controller
            Kernel driver in use: xhci_hcd
            Kernel modules: xhci_pci
    08:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
            Subsystem: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
            Kernel driver in use: xhci_hcd
            Kernel modules: xhci_pci
    09:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
            Subsystem: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode]
            Kernel driver in use: ahci
            Kernel modules: ahci
    0a:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
            Subsystem: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode]
            Kernel driver in use: ahci
            Kernel modules: ahci
    0b:00.0 VGA compatible controller: NVIDIA Corporation TU104 [GeForce RTX 2080 Rev. A] (rev a1)
            Subsystem: ZOTAC International (MCO) Ltd. TU104 [GeForce RTX 2080 Rev. A]
            Kernel driver in use: vfio-pci
            Kernel modules: nouveau, nvidia_drm, nvidia
    0b:00.1 Audio device: NVIDIA Corporation TU104 HD Audio Controller (rev a1)
            Subsystem: ZOTAC International (MCO) Ltd. TU104 HD Audio Controller
            Kernel driver in use: vfio-pci
            Kernel modules: snd_hda_intel
    0b:00.2 USB controller: NVIDIA Corporation TU104 USB 3.1 Host Controller (rev a1)
            Subsystem: ZOTAC International (MCO) Ltd. TU104 USB 3.1 Host Controller
            Kernel driver in use: vfio-pci
            Kernel modules: xhci_pci
    0b:00.3 Serial bus controller [0c80]: NVIDIA Corporation TU104 USB Type-C UCSI Controller (rev a1)
            Subsystem: ZOTAC International (MCO) Ltd. TU104 USB Type-C UCSI Controller
            Kernel driver in use: vfio-pci
            Kernel modules: i2c_nvidia_gpu
    0c:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1080] (rev a1)
            Subsystem: ZOTAC International (MCO) Ltd. GP104 [GeForce GTX 1080]
            Kernel driver in use: nvidia
            Kernel modules: nouveau, nvidia_drm, nvidia
    0c:00.1 Audio device: NVIDIA Corporation GP104 High Definition Audio Controller (rev a1)
            Subsystem: ZOTAC International (MCO) Ltd. GP104 High Definition Audio Controller
            Kernel driver in use: snd_hda_intel
            Kernel modules: snd_hda_intel
    0d:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function
            Subsystem: ASUSTeK Computer Inc. Starship/Matisse PCIe Dummy Function
    0e:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
            Subsystem: ASUSTeK Computer Inc. Starship/Matisse Reserved SPP
    0e:00.1 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Cryptographic Coprocessor PSPCPP
            Subsystem: ASUSTeK Computer Inc. Starship/Matisse Cryptographic Coprocessor PSPCPP
            Kernel driver in use: ccp
            Kernel modules: ccp
    0e:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
            Subsystem: ASUSTeK Computer Inc. Matisse USB 3.0 Host Controller
            Kernel driver in use: xhci_hcd
            Kernel modules: xhci_pci
    0e:00.4 Audio device: Advanced Micro Devices, Inc. [AMD] Starship/Matisse HD Audio Controller
            Subsystem: ASUSTeK Computer Inc. Starship/Matisse HD Audio Controller
            Kernel driver in use: snd_hda_intel
            Kernel modules: snd_hda_intel
    0f:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
            Subsystem: ASUSTeK Computer Inc. FCH SATA Controller [AHCI mode]
            Kernel driver in use: ahci
            Kernel modules: ahci
    10:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
            Subsystem: ASUSTeK Computer Inc. FCH SATA Controller [AHCI mode]
            Kernel driver in use: ahci
            Kernel modules: ahci

You absolutely need to pass all 4 devices from your 2080 :slight_smile: Everything inside an IOMMU group must be bound to the vfio-pci driver. (Your 2080’s functions all seem to be inside the 0b:* IOMMU group.)
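To double-check which devices actually share a group, something like this works (a sketch; the sysfs root is parameterized only so it can be pointed at a test tree, normally you’d call it with no argument):

```shell
# Print the PCI addresses in each IOMMU group. Feed an address to
# `lspci -nns <addr>` to get the human-readable name and vendor:device ID.
list_iommu_groups() {
  local root="${1:-/sys/kernel/iommu_groups}"
  local g d
  for g in "$root"/*/; do
    g="${g%/}"
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
      echo "  ${d##*/}"
    done
  done
}
```

On this system you’d expect 0b:00.0 through 0b:00.3 to show up together in one group.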

I took a look at your XML file; everything seemed to be in order (comparing to mine).
The only thing that caught my attention was the .rom file pointing to your 4 passed devices… I remember using nvflash… and after some digging I can see 2 ROMs inside /home/ (the downloaded one and the patched one). Could you point out where you found the advice for the ROM?

Have you tried it the other way? Passing the 1080 to your VM, OR swapping the GPUs around (1080 in slot 1, 2080 in slot 2) and trying to pass the 2080 from the second slot.

From your original post, you told us you used the ArchWiki guide. Have you read Archwiki#Passing_the_boot_GPU_to_the_guest
And Archwiki#UEFI_(OVMF)_compatibility_in_VBIOS

Switching my GPUs around is a lot of hassle, as my entire system is water cooled. And I have not yet tried to pass the 1080, as I am stubborn and want to use my best GPU for Windows gaming. For the vBIOS, I was directed to this link. From there I simply went to the website that is listed there, downloaded the AMP Extreme vBIOS, stored it in my local storage, and passed it through. I will take a look at those links you referred to and check the UEFI support. I didn’t really know what this was at the time, and I’m slowly but steadily learning to piece things together.

I just used rom-parser, and I’m fairly certain that the vBIOS is UEFI compatible, as shown in the output here:

./rom-parser /home/matrucious/Documents/VM/Zotac.RTX2080.8192.181009.rom 
Valid ROM signature found @28600h, PCIR offset 170h
        PCIR: type 0 (x86 PC-AT), vendor: 10de, device: 1e87, class: 030000
        PCIR: revision 0, vendor revision: 1
Valid ROM signature found @36e00h, PCIR offset 1ch
        PCIR: type 3 (EFI), vendor: 10de, device: 1e87, class: 030000
        PCIR: revision 3, vendor revision: 0
                EFI: Signature Valid, Subsystem: Boot, Machine: X64
        Last image

I will try to go through the Looking Glass part next.

Indeed - It is UEFI compatible.

In the Archwiki I meant: Passing the boot GPU to the guest

The GPU marked as boot_vga is a special case when it comes to doing PCI passthroughs, since the BIOS needs to use it in order to display things like boot messages or the BIOS configuration menu. To do that, it makes a copy of the VGA boot ROM which can then be freely modified. This modified copy is the version the system gets to see, which the passthrough driver may reject as invalid. As such, it is generally recommended to change the boot GPU in the BIOS configuration so the host GPU is used instead or, if that is not possible, to swap the host and guest cards in the machine itself.

This was also discussed in the passthroughpo.st article TheCakeIsNaOH linked.

I see. So I should really either use the 1080 in this case, or switch the GPUs around?
I have checked my BIOS a few times and cannot find any way to change the PCI order.

Usually PCIe slot 1 is the boot GPU. I think on my motherboard it’s possible to switch between the dedicated GPU and the iGPU for the boot GPU, but YMMV.

I would start with trying to get the 1080 working, as it happens to be in PCIe slot 2. If it “just works”, you can always a) switch the GPUs around (the 2080 probably will work too) or b) try to hack the 2080 to work in slot 1. :slight_smile:

Alright, I’ll start with trying the 1080 instead. I don’t have an iGPU since I’m on Ryzen and don’t have an APU, so I don’t think there’s a way for me to make PCIe slot 2 my boot GPU. And when it comes to hacking the 2080 to work in slot 1, I’m not sure where to start :sweat_smile:
I’ll post an update if I get the 1080 working… I might have to redo a lot of the troubleshooting I’ve done for the 2080, as I have heard passing through Nvidia cards is a general pain.

Never mind my last post… I changed the IDs in modprobe.d/vfio.conf to my 1080, passed them through to my VM as well, and it booted up with no problem whatsoever. I will probably go through the entire process of switching my GPUs around. Thanks a lot for all of your help and for leading me to a better understanding. Much appreciated!
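For anyone landing on this thread later, the modprobe.d change in question looks roughly like this. 10de:1e87 is the 2080’s device ID from the rom-parser output above; the audio function ID shown is an assumption, so verify all IDs for your own card with `lspci -nn`:

```
# /etc/modprobe.d/vfio.conf
# vendor:device IDs to bind to vfio-pci at boot
# (1e87 = the RTX 2080 from this thread; the audio ID is an assumption,
#  verify with `lspci -nn`)
options vfio-pci ids=10de:1e87,10de:10f8
```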