Good day!
I successfully created a GPU passthrough setup with Ubuntu as the host and Win11 as the guest.
I’m now polishing some configuration details and trying to understand better how certain things work underneath.
My setup:
AMD Ryzen 7 1800X
ASUS Prime B450 Plus
2x AMD RX580
32GB RAM
Ubuntu 20.04 with 5.15.0-76 kernel
I’m passing through to the Win11 VM the graphics card that sits in the main PCIe x16 slot and is used by the BIOS as the boot VGA device:
$ cat /sys/bus/pci/devices/0000\:07\:00.0/boot_vga
1
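(For anyone reproducing this: you can enumerate every VGA-class PCI device and see which one the firmware used as the boot display with a small sysfs loop. The paths are standard sysfs attributes; the loop simply prints nothing on a machine without PCI VGA devices.)

```shell
# Print each PCI device that exposes a boot_vga attribute,
# together with its value (1 = firmware boot display, 0 = not).
for dev in /sys/bus/pci/devices/*; do
    [ -e "$dev/boot_vga" ] || continue
    printf '%s boot_vga=%s\n' "$(basename "$dev")" "$(cat "$dev/boot_vga")"
done
```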
With an initramfs script, I bind the vfio-pci driver to it and unbind the vtconsoles:
$ cat /etc/initramfs-tools/scripts/init-top/bind_vfio.sh
...
# Unbind the virtual consoles so the kernel releases the framebuffer.
echo 0 > /sys/class/vtconsole/vtcon0/bind
echo 0 > /sys/class/vtconsole/vtcon1/bind

# Load vfio-pci first, so its sysfs bind interface exists before we use it.
modprobe vfio-pci

# GPU and its HDMI audio function.
DEVS="0000:07:00.0 0000:07:00.1"
for DEV in $DEVS; do
    # Tell the PCI core to prefer vfio-pci for this device, then bind it.
    echo "vfio-pci" > /sys/bus/pci/devices/$DEV/driver_override
    echo "$DEV" > /sys/bus/pci/drivers/vfio-pci/bind
done

# Re-enable the consoles; they attach to the remaining GPU.
echo 1 > /sys/class/vtconsole/vtcon0/bind
echo 1 > /sys/class/vtconsole/vtcon1/bind
exit 0
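After boot I can confirm the binding took effect by reading each device’s driver symlink from sysfs (the `0000:07:00.x` addresses are my system’s; substitute your own):

```shell
# Report which driver, if any, owns the passthrough GPU and its audio function.
for dev in 0000:07:00.0 0000:07:00.1; do
    link="/sys/bus/pci/devices/$dev/driver"
    if [ -e "$link" ]; then
        # The driver attribute is a symlink into /sys/bus/pci/drivers/.
        printf '%s -> %s\n' "$dev" "$(basename "$(readlink "$link")")"
    else
        printf '%s -> (no driver bound)\n' "$dev"
    fi
done
```

On my machine both lines end in `vfio-pci`; `lspci -nnk -s 07:00` shows the same information.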
The card is successfully isolated, and my guest OS works like a charm.
But… there is one detail I’m wondering about. The isolated GPU has one monitor attached to it, and that is the screen the BIOS and boot loader are displayed on. After the initramfs runs, that screen goes dark but doesn’t enter sleep mode. I also noticed that when I boot the Win11 VM and then shut it down, the monitor turns off as well.
This doesn’t affect the card’s ability to be reused afterwards, however, so such behaviour seems to be normal for this setup.
My question is: what exactly happens to the GPU during this whole process, and can I “put the screen to sleep” from the host, via the vfio-pci driver itself?
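For reference, this is how I look at what the kernel reports about the device’s power state from the host (`power/runtime_status` is a standard sysfs runtime-PM attribute; `0000:07:00.0` is my card’s address, adjust for yours):

```shell
# Read the runtime power-management status the kernel tracks for the GPU.
dev=/sys/bus/pci/devices/0000:07:00.0
if [ -e "$dev/power/runtime_status" ]; then
    # Typical values: "active", "suspended", "unsupported".
    cat "$dev/power/runtime_status"
else
    echo "device not present on this machine"
fi
```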