Force boot from IGD not working?

Hi guys, this is my first post here, so bear with me.

I recently wanted to start using VMs instead of a full-blown Windows multiboot, for ease of access and a bit more control through snapshots.
I tested this with a GTX 710 card I had lying around and it worked. Now I want to force my PC to use the iGPU of the APU, so I can pass my graphics card through to QEMU (on Manjaro).
I tried setting the "Initiate Graphics Adapter" option to IGD instead of PEG, but my PC still boots from the card.
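One way to see which adapter the firmware actually picked as primary, independent of what the BIOS menu claims, is the sysfs boot_vga flag. A minimal sketch (standard Linux sysfs paths, nothing specific to this board):

```shell
#!/bin/sh
# For each PCI device that exposes a boot_vga attribute, print its
# address and whether the firmware flagged it as the primary adapter.
# boot_vga=1 marks the GPU the system booted from.
for f in /sys/bus/pci/devices/*/boot_vga; do
    [ -e "$f" ] || continue   # skip when no PCI devices are visible
    printf '%s: boot_vga=%s\n' "$(basename "$(dirname "$f")")" "$(cat "$f")"
done
```

If the dGPU still shows boot_vga=1 after changing the BIOS setting, the firmware is ignoring (or not saving) the option.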
I'd rather not split the card, as it's just a cheap system with a 4060, and resources on the GPU end would already be pretty limited.

I don't know if it's possible to edit posts, but: I did a BIOS update, and now I get some output from my iGPU when I boot, and the MSI splash screen is visible. After that and the built-in loading icon, the screen goes black (this is where I normally see my login screen).
I assume some kind of driver priority is taking place now that my iGPU is detected. I set VGA card detection to false in the BIOS, but apparently this is ignored, based on mhwd:

mhwd -l
> 0000:01:00.0 (0300:10de:2882) Display controller nVidia Corporation:
--------------------------------------------------------------------------------
                  NAME               VERSION          FREEDRIVER           TYPE
--------------------------------------------------------------------------------
video-hybrid-amd-nvidia-prime            2023.03.23               false            PCI
          video-nvidia            2024.05.03               false            PCI
           video-linux            2024.05.06                true            PCI
     video-modesetting            2020.01.13                true            PCI
            video-vesa            2017.03.12                true            PCI


> 0000:10:00.0 (0300:1002:164e) Display controller ATI Technologies Inc:
--------------------------------------------------------------------------------
                  NAME               VERSION          FREEDRIVER           TYPE
--------------------------------------------------------------------------------
video-hybrid-amd-nvidia-prime            2023.03.23               false            PCI
video-hybrid-amd-nvidia-470xx-prime            2023.03.23               false            PCI
           video-linux            2024.05.06                true            PCI
     video-modesetting            2020.01.13                true            PCI
            video-vesa            2017.03.12                true            PCI


> 0000:04:00.0 (0300:10de:128b) Display controller nVidia Corporation:
--------------------------------------------------------------------------------
                  NAME               VERSION          FREEDRIVER           TYPE
--------------------------------------------------------------------------------
video-hybrid-amd-nvidia-470xx-prime            2023.03.23               false            PCI
    video-nvidia-470xx            2023.03.23               false            PCI
    video-nvidia-390xx            2023.03.23               false            PCI
           video-linux            2024.05.06                true            PCI
     video-modesetting            2020.01.13                true            PCI
            video-vesa            2017.03.12                true            PCI


Look into 'vfio-pci binding': you supply the device ID of the external card at 04:00.0 (10de:128b) as a kernel command-line or kernel-module parameter, so the card gets claimed for VFIO passthrough rather than by the NVIDIA driver.
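The vendor:device ID pairs mentioned above can be read straight from lspci (part of pciutils, which Manjaro ships); a quick sketch:

```shell
# List all PCI display-class (0300) devices with their numeric IDs.
# The card to pass through should appear as 04:00.0 ... [10de:128b].
lspci -nn -d ::0300
```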

Added to the kernel command-line parameters, the edit might look like vfio-pci.ids=10de:128b; alternatively, add a file like /etc/modprobe.d/90-vfio-pci.conf containing the line options vfio-pci ids=10de:128b.
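A minimal sketch of both routes, assuming a GRUB-based Manjaro install with mkinitcpio (file names are common defaults, not taken from the original post):

```shell
# Route 1: kernel command line. Append the id to the existing
# GRUB_CMDLINE_LINUX_DEFAULT line in /etc/default/grub, e.g.
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet vfio-pci.ids=10de:128b"
# then regenerate the GRUB config:
sudo grub-mkconfig -o /boot/grub/grub.cfg

# Route 2: module option. Create a modprobe config file instead:
echo 'options vfio-pci ids=10de:128b' | sudo tee /etc/modprobe.d/90-vfio-pci.conf

# Either way, rebuild the initramfs so vfio-pci can claim the card
# early in boot, before the NVIDIA driver loads:
sudo mkinitcpio -P
```

After a reboot, `lspci -nnk -s 04:00.0` should report "Kernel driver in use: vfio-pci" if the binding worked.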

Watch out for your bumblebee/switcheroo configuration, too; it looks like that is involved here as well.

K3n.
