For testing, you could try killing GNOME and gdm, logging in over ssh, and then, with sudo, trying a couple of the echo commands, but using your devices' addresses?
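A rough sketch of that test, assuming a systemd setup with GDM (the unit is gdm3 on some distros), and using the example address from the guide:
# from an ssh session on the host
sudo systemctl stop gdm    # takes the Gnome session down with it
# then try an unbind against your own device address, e.g.
echo '0000:0f:00.0' | sudo tee /sys/bus/pci/devices/0000:0f:00.0/driver/unbind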
Hey Tree, that would be my guide you’re talking about
Like gordonthree said, you could run a script like this:
#!/bin/bash
# VGA Controller: unbind nvidia and bind vfio-pci
echo '0000:0f:00.0' > /sys/bus/pci/devices/0000:0f:00.0/driver/unbind
echo '10de 100a' > /sys/bus/pci/drivers/vfio-pci/new_id      # registering the ID makes vfio-pci grab the now-driverless card
echo '0000:0f:00.0' > /sys/bus/pci/devices/0000:0f:00.0/driver/bind   # redundant if new_id already bound it; the write just fails harmlessly
echo '10de 100a' > /sys/bus/pci/drivers/vfio-pci/remove_id   # unregister the ID so the next run's new_id succeeds
# Audio Controller: unbind snd_hda_intel and bind vfio-pci (same dance)
echo '0000:0f:00.1' > /sys/bus/pci/devices/0000:0f:00.1/driver/unbind
echo '10de 0e1a' > /sys/bus/pci/drivers/vfio-pci/new_id
echo '0000:0f:00.1' > /sys/bus/pci/devices/0000:0f:00.1/driver/bind
echo '10de 0e1a' > /sys/bus/pci/drivers/vfio-pci/remove_id
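The 0000:0f:00.x addresses and the '10de xxxx' vendor:device pairs are from my machine; grab yours with lspci (-nn prints the [vendor:device] IDs next to each device):
lspci -nn | grep -i nvidia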
I'm using the virsh tool in my scripts, but this works too; it's just a manual way of accomplishing the same task.
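For reference, the virsh way looks roughly like this (same example addresses; note that virsh names PCI devices as pci_DDDD_BB_SS_F):
# detach both functions from the host before starting the VM
virsh nodedev-detach pci_0000_0f_00_0
virsh nodedev-detach pci_0000_0f_00_1
# ...VM runs...
# give them back to the host afterwards
virsh nodedev-reattach pci_0000_0f_00_0
virsh nodedev-reattach pci_0000_0f_00_1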
Also, it doesn’t matter if you’re using GPUs with the same drivers or not. The only thing to be careful about is this:
## Unload nvidia
modprobe -r nvidia_drm
modprobe -r nvidia_uvm
modprobe -r nvidia_modeset
^^ Since you'll be using two Nvidia GPUs, you won't be able to unload these modules because they'll have a non-zero usage count (this wasn't an issue for me because I had an AMD GPU for my host). So go ahead and just delete those lines for your setup.
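If you want to sanity-check that, lsmod shows the usage count (third column); anything non-zero there means modprobe -r will refuse to unload the module:
lsmod | grep '^nvidia'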