System becoming unbootable when adding dracut modules

I attempted to add the vfio modules to dracut for passthrough, following a guide from this forum on passthrough with openSUSE (linked at the bottom of this post). The guest GPU is in its own IOMMU group; it's after that step that I'm having issues.
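For anyone wanting to verify the same thing, this is the standard snippet many passthrough guides use to list IOMMU groups; nothing here is specific to my setup:

```
#!/bin/sh
# List every IOMMU group and the devices in it.
# Requires booting with the IOMMU enabled (intel_iommu=on or amd_iommu=on).
for g in /sys/kernel/iommu_groups/*; do
    printf 'IOMMU group %s:\n' "${g##*/}"
    for d in "$g"/devices/*; do
        printf '\t%s\n' "$(lspci -nns "${d##*/}")"
    done
done
```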

Everything checked out; I did exactly as the guide said, aside from entering my own device IDs and changing amd to intel. The author's point about "add_driver" not working on openSUSE was the case for me as well. To narrow down the problem I tried adding only the dracut modules, and when I do, the system becomes unbootable.
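For context, this is roughly what that kind of dracut drop-in looks like. The file name is arbitrary, and whether add_drivers alone is enough or force_drivers is needed seems to vary by distro, so treat it as a sketch rather than the guide's exact config:

```
# /etc/dracut.conf.d/vfio.conf (illustrative name)
# Pull the vfio modules into the initrd so they can bind the guest GPU
# before the regular GPU driver loads.
add_drivers+=" vfio vfio_iommu_type1 vfio_pci "
# force_drivers also loads the modules at initrd time; some guides fall
# back to this when add_drivers alone does not take effect.
force_drivers+=" vfio_pci "
```

After editing, the initrd has to be rebuilt with dracut -f for the change to take effect.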

Host GPU: GTX 1080

Guest GPU: RX 480

MB: MSI Z170A Gaming Pro Carbon

OS: openSUSE Tumbleweed

Link to the guide I was following: My notes/tutorial to achieve KVM with passthrough on OpenSUSE and Ryzen/Threadripper system.


I’m going to bump you so you can post links.

Thanks, will edit the post to include it.


Why are you keeping the GTX 1080 for the host and passing through the RX 480 to the guest?

Host is Tumbleweed; guest is the VM. I think you should be passing the GTX 1080 to the VM.

Can you please describe what you mean by not being able to boot? Describe what you see on the screen, e.g. whether the Linux boot process starts, or whether you just see black after the BIOS splash screen passes.

One mistake comes to mind that I cannot rule out from the information you have given us. The guide you linked makes use of the vfio-pci module, which binds the GPU you specified to the vfio stub driver and keeps the OS from loading one of the 'real' drivers for it, so that when you start a virtual machine it does not need to unbind the card from the host.
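In practice that binding is usually done with a modprobe drop-in along these lines. The file name is illustrative, and the IDs below are just the common ones for an RX 480 and its HDMI audio function; yours may differ, so check lspci -nn first:

```
# /etc/modprobe.d/vfio.conf (illustrative name)
# Claim the guest GPU and its audio function for vfio-pci by vendor:device ID.
# 1002:67df / 1002:aaf0 are typical RX 480 IDs; substitute your own from lspci -nn.
options vfio-pci ids=1002:67df,1002:aaf0
# Make sure vfio-pci wins the race against the regular AMD driver.
softdep amdgpu pre: vfio-pci
```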

This is important because most motherboards try to initialize video output on the GPU in the first (uppermost) PCIe slot. If your UEFI initializes this card and then hands off the boot process to your kernel, which then tries to bind that very same card to the vfio stub driver, your system will most probably hang.
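You can check which driver actually claimed each card from a working boot (or a live USB):

```
# Show each GPU with its [vendor:device] IDs and the driver bound to it;
# the guest card should report "Kernel driver in use: vfio-pci".
lspci -nnk | grep -A3 -i vga
```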

If you have a system with two graphics cards, you either need a motherboard where you can select which GPU to initialize first, or you need to make sure that the GPU for the host is in the uppermost PCIe slot and the GPU you want to use for VMs is in one of the lower slots.

AFAIK, on AM4 and Threadripper boards the "first" GPU is instead the one in the chipset's PCIe slots, which is more convenient for our purposes, since otherwise you would be wasting high-speed slots on the host GPU.

The only other way is to get an HDMI switch and disconnect the cable from the GPU you want to pass through when you reboot. The UEFI will detect that only one card is plugged in and will initialize that one.

The error I get when it does not boot is "Error: Failed to load kernel modules". The upper slot is the one with the host GPU, and my IOMMU grouping is such that the guest card needs to be in the bottom slot.
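If you can still get in via an older kernel entry or recovery mode, the journal from the failed boot will usually name the exact module that would not load, and you can rebuild the initrd after fixing the config:

```
# Show the module-load messages from the previous (failed) boot.
journalctl -b -1 -u systemd-modules-load.service
# Rebuild the initrd for all installed kernels after changing the dracut config.
sudo dracut -f --regenerate-all
```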

Apologies if I haven’t provided enough information

You make a good point; it's probably worth switching those around.