VFIO/Passthrough in 2023 - Call to Arms

Hi wendell (:

I am running a 7800X3D on the X670E Aorus Master with a 4080.
I tried a Sapphire 7900XTX first, which was the card of my dreams, but it sadly had reset issues, so I had to downgrade to team green :frowning: (the card is louder and imo generally worse in every aspect).

But I can detach and reattach it without any problems.

I use the iGPU for the host and the dGPU for VFIO or native gaming, depending on my use case.
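For anyone wondering what that detach/reattach looks like in practice, libvirt's nodedev commands are one way to do it (a sketch only - the PCI address below is a placeholder, take yours from lspci):

virsh nodedev-detach   pci_0000_01_00_0   # hand the dGPU's video function to vfio-pci for the VM
virsh nodedev-detach   pci_0000_01_00_1   # ...and its HDMI audio function
virsh nodedev-reattach pci_0000_01_00_1   # give both back to the regular driver for native gaming
virsh nodedev-reattach pci_0000_01_00_0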

I can really recommend this tutorial for Fedora/Nobara: [tutorial] The Ultimate Linux Laptop for PC Gamers — feat. KVM and VFIO | by bland_man_studios | Medium

Sadly, Nobara has an ACS patch baked in (with no opt-out).

The setup has been running for nearly a year now without any problems on the Looking Glass / VFIO side.
(The 7800X3D iGPU still has lots of system freezes and hangs on Linux, though.)

As soon as the 8900XTX (or whatever comes next) fixes the reset/reattach issues, I'll happily switch back to my beloved team red <3

If I can help in any way, I am happy to offer what I can :smiley:

PS: My VFIO setup is also used for Dolby Atmos gaming in my living room, since Linux apparently still has problems with good audio, 3D audio, HDR and such ^^
PPS: My first VFIO / Looking Glass setup was an X370 Hero VI with a WX3200 for the host and a 6800XT for the VM. But my 6800XT also had a reset bug :confused:
PPPS: My only VFIO problem is that the games I set it up for (because of missing Proton support) started to also block VFIO and only support native Windows -.- so I can't play them anymore.

So does this mean I need to drag all three of my C612 rigs out? I have been using VFIO for a while now and it works really well. I am using older stuff, but I think the basics are still the same regardless. I am using one for a living room gaming VM, and I was running Parsec clients for a while until my cluster went belly up. I have some hardware I could definitely share my journey with.


Current Specs:

  • Gigabyte X570 AORUS ULTRA
  • Ryzen 9 5900X
  • Crucial Ballistix 2x32GB
  • Proxmox as Hypervisor
  • Nvidia 2080 Super - passthrough for Windows VM
  • Nvidia GT 1030 (MSI passive heatsink) - passthrough for Manjaro VM

My story begins roughly around November 2018, when I started to build the PC for my homelab Proxmox cluster with the goal of having my gaming machine hidden away in a rack somewhere. It took me around 3-4 days until I had a working setup with 2 VMs (Windows, Manjaro) and both GPUs as PCI passthrough. During that time it had some hiccups booting up, but it was working ~85% of the time without any problems.

After a kernel update (I do not know which one exactly) around 6-7 months ago, I was surprised that the system got even more stable, with only one bug left when booting up. I use a 4-PC USB 3 hub for switching input devices between the two VMs and my work laptop (home office), and every time the hub is connected to the Manjaro VM, the kernel freaks out with problems on the GPU (the 1030) I pass through to that VM. There is also a problem with the 1030 and the initial setup on Manjaro, but here I am not sure whether the issue is the 1030 or a broken X server configuration: when I try to boot after installing the Manjaro OS I only get a black screen, but when I delete the X configuration (which, by the way, worked during the installation), everything goes great again.

My biggest problems with my current setup are monitors and audio.
Currently I run HDMI/DP cables from both GPUs to my monitors and switch between them manually (input select). It is not ideal, but because I am using split monitors (monitor 1: work laptop, monitor 2: Manjaro - for watching YouTube videos, psst) it is the only way possible.

For audio I am using 3 Soundblaster XG1s into a "Little Bear MC5 Mini 4-channel mixer", which feeds a soundbar / SteelSeries Arctis Nova Pro Wireless.
This needs to be replaced with proper speakers and headphones, but it works for now.

What I wish for (and where you, awesome community, may be able to help me):

  • A "companion" app or something like that which runs on all 3 machines (Windows, Manjaro, laptop) and can switch the monitor inputs. It would be great if it had some sort of API so I can use an ESP with some buttons to switch (see the DDC/CI sketch further down).
  • A "companion" app which creates a virtual audio interface and sends the audio to a third device where I can mix up to 4 input channels.

For both of my wishes I also thought of a DIY device with knobs (volume), buttons (monitor switch) and a DAC. I don't know if something like that already exists.
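One possible building block for the monitor wish: if the monitors speak DDC/CI, their input can be switched over the video cable from whichever machine is connected - ddcutil does this on Linux, and as far as I know tools like NirSoft's ControlMyMonitor can do the same on Windows. A rough sketch (the VCP values 0x0f/0x11/0x12 are typical for DP-1/HDMI-1/HDMI-2 but monitor-dependent, so treat them as placeholders):

#!/bin/sh
# Hypothetical input-switch helper for monitor 1 via DDC/CI.
# Check `ddcutil --display 1 capabilities` for the values your monitor accepts.
case "$1" in
    dp)    ddcutil --display 1 setvcp 60 0x0f ;;  # DisplayPort-1 (typical value)
    hdmi1) ddcutil --display 1 setvcp 60 0x11 ;;  # HDMI-1 (typical value)
    hdmi2) ddcutil --display 1 setvcp 60 0x12 ;;  # HDMI-2 (typical value)
    *)     echo "usage: $0 <dp|hdmi1|hdmi2>" >&2; exit 1 ;;
esac

An ESP with buttons could then simply trigger this over SSH or a tiny HTTP endpoint on each machine.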

Also, I discourage anyone from using the 1030 as a passthrough card if you already have an Nvidia card in your system. I still think that if you want to use one card (in my case Nvidia) as a full passthrough device, then the second one should be AMD, and vice versa.

PS: Forgot to mention, but even Valve Index USB passthrough is working on both Windows and Linux without any problems.


I ran the full Looking Glass VFIO stack for a couple of years, but I just kept running into updates breaking things; by the end, I even had my own kernel with reverted commits.

My setup consisted of two Nvidia cards: a 2060 Super (guest) and a 1050 Ti (host). The 1050 Ti isn't supported by the newer open-source drivers, and that didn't make things any easier. I ran Wayland on my host, which came with another slew of issues. Blacklisting the Nvidia drivers wasn't an option because I needed them for my host GPU, so I bound vfio-pci to the guest GPU through kernel parameters. I ran Windows as my guest OS and had the best performance using a dedicated NVMe SSD as its boot drive. That also allowed me to boot directly into the guest OS without any VM involved (even though it wouldn't actually be a guest then), which I did on a few occasions. I pinned CPU cores and had a qemu hook to keep host process scheduling off those cores when the VM booted (see the sketch below). Tuning gave me significant performance gains, and I found that I could run WSL with paravirtualized GPU acceleration inside my Windows VM with little overhead, to mess around with AI models.
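For anyone curious, the hook was along the lines of the usual systemd cpuset trick - this is just a sketch, not my exact script, and the VM name and core ranges are placeholders:

#!/bin/sh
# /etc/libvirt/hooks/qemu - fence host processes off the pinned cores while
# the VM runs. libvirt calls this with: $1 = domain name, $2 = operation.
VM="win10"              # placeholder domain name
HOST_ONLY="0-1,6-7"     # cores the host keeps while the VM is up (placeholder)
ALL="0-11"              # full core range, restored on shutdown (placeholder)

if [ "$1" = "$VM" ]; then
    case "$2" in
        prepare)
            systemctl set-property --runtime -- user.slice   AllowedCPUs="$HOST_ONLY"
            systemctl set-property --runtime -- system.slice AllowedCPUs="$HOST_ONLY"
            systemctl set-property --runtime -- init.scope   AllowedCPUs="$HOST_ONLY"
            ;;
        release)
            systemctl set-property --runtime -- user.slice   AllowedCPUs="$ALL"
            systemctl set-property --runtime -- system.slice AllowedCPUs="$ALL"
            systemctl set-property --runtime -- init.scope   AllowedCPUs="$ALL"
            ;;
    esac
fi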

Things fell apart when there were new kernel versions that were incompatible with the Nvidia drivers (Arch btw), and then after a kernel update (around 6.1?) the vfio-pci binding would mess with the Nvidia framebuffer. I had to revert some commits on mainline, but then further changes made it difficult to rebase and update. Oh, and then my host SSD died.

Edit: Arch btw


Come to think of it… here is another nice use for VFIO I would like to try; it just had not occurred to me before now. When people come over occasionally and we want to do some LAN PC gaming together, it would be quite nice not to need a second dedicated PC for that, and instead reuse the same (pre-existing) VFIO passthrough guest VM and stream it to the TV to act as the second physical machine, over the local LAN via Moonlight / Sunshine / similar. That would be a lot more convenient for those occasions. Heck, maybe I could even take my old 5700 XT and put it into the home server for a third machine… but maybe VFIO isn't so well supported on the earlier Navi 1 generation, I suppose? Not sure.


I did this with dummy plugs in my Proxmox server. I used Parsec for folks to remote in and play. This actually works well. I have a pair of 1070s I did this with. I'll post everything I have going on, with pictures included, when I get home. Mind you, I am also using two 2699 v3s and two 2690 v3s in two servers. I have a third I need to set up. More on that later.


VFIO for the win <3

Thanks wendell / gnif for introducing me to the tech and delivering a f****** good piece of software <3

My System:
Case: Corsair 5000D
Mainboard: Asus ROG Crosshair X670E Hero (recent BIOS 1516 installed - I would also not recommend that board)
CPU: 7950X (since ~release)
RAM: Dominator Platinum RGB (2 x 32GB, 5600 MHz, DDR5 RAM, DIMM)
iGPU for my Host
Second GPU / main VM GPU: AORUS GeForce GTX 1080 Ti Xtreme Edition 11G
Third GPU: Asus 1060 (not sure exactly which; it was rescued from an Acer PC)
SSD 1: Crucial P5 Plus 2TB for Linux and qcow2 images
SSD 2: Crucial P5 Plus 1TB for my Main Gaming VM
Third NVMe slot: IOCREST M.2 to 10GBase-T Ethernet with an AQC107 chip (thanks to brother AliExpress)
Using 2 Dell S2721DGFA 2560x1440 @ 165Hz displays with the iGPU, and the KVMFR module with Looking Glass <3


Bios Settings:
Every VM/virt option I found is enabled and not left on Auto :smiley:
Iommu Auto
PCIe ar Auto
PBO is set to 105W TDP mode so my CPU does not hit 95°C in <5 seconds and send my AIO to max fan speed!
I even had a Noctua NH-D15 on there before the AIO…

Just found out that in the ASUS BIOS you can save your settings to a text file with Ctrl+F2 → [2023/08/18 14:46:53]Ai Overclock Tuner [EXPO I]EXPO [DDR5-5600 40-40-40-77- - Pastebin.com

OS:
Arch Linux (KDE) with the latest 6.4 kernel and the LTS kernel (both ready to boot my VM)
I don't have many issues on my PC getting it up and running (maybe because of my NV GPUs), but I had MANY issues with the iGPU though! Like flickering: [Solved] Display Flickers w Kernel 6.2 on X11, not on Wayland / Kernel & Hardware / Arch Linux Forums
No picture after a firmware update: amdgpu no output signal / Kernel & Hardware / Arch Linux Forums
And one display (the left one) sometimes disconnects right after login - a restart of that display fixes the issue, so I haven't bothered to check further :smiley:

I've been using this since October 2022 and did not have many issues getting it up and running (apart from the fan hub I destroyed myself with the wrong PSU cable m( - shorted power to ground while building this PC, I don't want to dwell on that too much :stuck_out_tongue: - but the good guys at Corsair replaced it after I told them what happened, even though the main screw-up was mine, and so on).

The IOMMU groups are fine-ish: FINE when the lanes come from the CPU directly, while every lane from the chipset is 'bad' - maybe the ACS patch could save that, but I just try to use the CPU lanes for my VM stuff and then it is fine.
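If anyone wants to compare groups on their own board, the usual snippet for dumping them is:

#!/bin/sh
# List every IOMMU group and the devices inside it.
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo "    $(lspci -nns "${d##*/}")"
    done
done

Devices that share a group behind the chipset have to be passed through together (or need the ACS override).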

My old PC had "better" IOMMU groups, but yeah, it was slow.
My Old System:
Case: Lian Li 50R
Mainboard: Asus Rampage IV Extreme
CPU: 4960x
RAM: 4 x 8 GB 1600Mhz DDR3
Main GPU: AORUS GeForce GTX 1080 Ti Xtreme Edition 11G
Second GPU: Asus 1060 (not sure exactly which; rescued from an Acer PC), also the VFIO GPU
SSD 1: Crucial MX500 2TB for Linux and qcow2 images
SSD 2: Crucial MX500 1TB for my Main Gaming VM


My Modded 10G Switch for my 10G ISP Uplink (modded with 20mm Noctua fan + Silent Adapter)


That’s an Intel board. Do you mean the X670E Crosshair Hero? What’s wrong with it?

Yes, I specified it a bit lower down in my previous post lol

/e: also fixed the above post so it doesn't mislead

I'm a long-time VFIO user, recently with AM5 and a 7000-series AMD GPU. Passthrough works, but requires a workaround for the reset issue. Details below.

Hardware

Motherboard: ASUS X670E ProArt CREATOR WIFI
CPU: AMD 7950X3D
Memory: Kingston FURY Beast DDR5 6000MHz 64GB
dGPU #1: Acer Predator BiFrost Intel Arc A770 OC
dGPU #2: XFX MERC 310 AMD Radeon RX 7900 XTX

BIOS Configuration

iGPU disabled (I simply never tried using it)
Memory running stable at the 6000MHz EXPO profile
ReBAR enabled
IOMMU enabled (not auto, FWIW)

Software

Void Linux on kernel 6.2.13_1
qemu 7.1.0 with patch for ā€œstatic ReBARā€, libvirt 9.5.0
Looking Glass B6

Notes

Motherboard

I chose this motherboard because, AFAIU, the first two PCIe x16 slots are wired directly to the CPU, and can run at x8 gen 4 when both slots are occupied. I don’t know if, or how much, this really matters for performance.

CPU

I’m using vfio-isolate to dedicate 7/14 physical/logical cores to the VM. These are the 3D V-Cache cores, except core zero, because I read somewhere that Linux uses core zero even when isolated.

Arc GPU

The BIOS and the Linux host use the Arc GPU, inserted in the first PCIe slot. The BAR size really needs to be increased on this one - I was getting graphical glitches at the small BAR size. Also requires a recent kernel and Mesa 3D version for decent performance. With ReBAR enabled, the BIOS sets the maximum BAR size at boot and all is well. I haven’t tried PCI passthrough with this GPU.

The nice thing about this GPU is that it's only two slots tall, giving it room to breathe when the second x16 PCIe slot is occupied by another GPU. With two dGPUs stacked, cooling can be a real problem, but this works well in my Define 7 XL case.

Radeon GPU

The Radeon GPU is for a Windows 10 guest VM.

Reset

The VFIO driver is attached at Linux boot via the kernel command line, but if I go straight to booting the VM, I get no graphics output from the VM on the GPU. The workaround is to suspend the Linux system to RAM (S3 sleep), then wake it back up. Once it wakes up, however, the BAR size is back to the default small one, so a script sets it back to the maximum BAR size. Now I can boot the VM with working graphics - but see the details about ReBAR below. If I shut down the VM, I have to suspend to RAM again before booting the VM back up.

ReBAR

As noted above, I’m using ReBAR. With stock qemu 7.1.0, the GPU could not be successfully initialized in Windows when ReBAR is enabled in BIOS. With ReBAR disabled and using the default BAR size, Windows boots with graphics, but I wanted the big BAR in case it boosts performance.

I found the static ReBAR patch in this Reddit thread. With this patch, Windows boots with graphics even when the GPU has the maximum BAR size. Also, the Radeon software considers ā€œAMD SmartAccess Memoryā€ enabled in the performance tuning menu.

Conclusion

Overall it’s a success, and I’ve played a lot of games on the Windows VM with great performance. Suspending to sleep and waking back up only takes a few seconds and isn’t a great inconvenience, but it must be noted that Windows likes to reboot twice during system updates, and because of the aforementioned reset issue, I have no graphics while this happens. It would be nice to have a fix or better work-around to the reset issue.

edit: I had originally said that setting BAR size via sysfs worked while the vfio driver was loaded - this wasn’t the case, my script was unbinding the driver, setting BAR size, then re-binding the driver.

Scripts

reset-vfio-gpu
#!/bin/sh

gpuAddress="0000:08:00.0"
gpuAudioAddress="0000:08:00.1"

echo 1 > "/sys/bus/pci/devices/$gpuAddress/remove"
echo 1 > "/sys/bus/pci/devices/$gpuAudioAddress/remove"
echo "Suspending..."
rtcwake -m mem -s 10 # use zzz instead?
echo "Woke up, waiting..."
sleep 5s
echo 1 > /sys/bus/pci/rescan
sleep 2s
/usr/local/bin/set-pci-bar-size-7900xtx
echo "Reset done"
set-pci-bar-size-7900xtx
#!/bin/sh

# Point this to the Radeon RX 7900 XTX
gpuAddress="0000:08:00.0"
bar1Size="15" # for 32GB
bar2Size="8" # for 256MB
deviceId="1002 744c"

echo -n "$gpuAddress" > /sys/bus/pci/drivers/vfio-pci/unbind
echo "$bar1Size" > "/sys/bus/pci/devices/$gpuAddress/resource0_resize"
echo "$bar2Size" > "/sys/bus/pci/devices/$gpuAddress/resource2_resize"
echo -n "$deviceId" > /sys/bus/pci/drivers/vfio-pci/new_id || echo -n "$gpuAddress" > /sys/bus/pci/drivers/vfio-pci/bind

# Bit Sizes
# 1 = 2MB
# 2 = 4MB
# 3 = 8MB
# 4 = 16MB
# 5 = 32MB
# 6 = 64MB
# 7 = 128MB
# 8 = 256MB
# 9 = 512MB
# 10 = 1GB
# 11 = 2GB
# 12 = 4GB
# 13 = 8GB
# 14 = 16GB
# 15 = 32GB
isolate-windesktop-cores
#!/bin/sh
op="$1"
test -n "$op" || { echo 'usage: isolate-windesktop-cores <enable|disable>' >&2; exit 1; }

#undoDirPath="/tmp/vfio-isolate"
#mkdir -p -m go=r "$undoDirPath"
#undoFilePath="$undoDirPath/undo.bin"

allCores=0-31
hostCores=0,16,8-15,24-31
guestCores=1-7,17-23

case "$op" in
    enable)
        vfio-isolate \
            drop-caches \
            cpuset-create --cpus "C$hostCores" /host.slice \
            move-tasks / /host.slice \
            compact-memory
            #irq-affinity mask "C$guestCores"
        taskset -pc "$hostCores" 2 # kthreadd only on host cores
        ;;
    disable)
        #vfio-isolate restore "$undoFilePath"
        vfio-isolate \
            cpuset-delete /host.slice
            #irq-affinity mask "C$allCores"
        taskset -pc "$allCores" 2 # kthreadd reset
        ;;
    *)
        echo "unknown operation '$op'" >&2
        false
        ;;
esac
windrive-bind-vfio
#!/bin/sh
op="$1"
test -n "$op" || { echo 'usage: windrive-bind-vfio <bind|unbind>' >&2; exit 1; }

vendorId="144d"
deviceId="a808"      # Samsung NVMe controller in my case
deviceClass="67586"  # 0x010802 (NVMe class code) in decimal; informational only
pciAddress="0000:70:00.0"

driver_name(){
    local address="$1"
	basename $(readlink /sys/bus/pci/devices/$address/driver)
}

case "$op" in
    bind)
        if test -d /sys/bus/pci/devices/$pciAddress/driver; then
            driver=$(driver_name $pciAddress)
            if test "$driver" = "vfio-pci"; then
                echo "vfio driver already loaded; doing nothing"
                exit
            fi
            echo -n "unbinding current driver '$driver'... "
            echo "$pciAddress" > /sys/bus/pci/devices/$pciAddress/driver/unbind || exit
            echo "done"
        else
            echo "no driver loaded"
        fi

        echo -n "binding vfio driver... "
        echo "$vendorId $deviceId" > /sys/bus/pci/drivers/vfio-pci/new_id || exit # new_id expects "vendor device" in hex
        echo "done"
        ;;
    unbind)
        if test -d /sys/bus/pci/devices/$pciAddress/driver; then
            driver=$(driver_name $pciAddress)
            if test "$driver" = "nvme"; then
                echo "NVMe driver already loaded; doing nothing"
                exit
            fi
            echo -n "unbinding current driver '$driver'... "
            echo "$pciAddress" > /sys/bus/pci/devices/$pciAddress/driver/unbind || exit
            echo "done"
        else
            echo "no driver loaded"
        fi

        echo -n "binding NVMe driver... "
        echo "$vendorId $deviceId" > /sys/bus/pci/drivers/nvme/new_id || exit # new_id expects "vendor device" in hex
        echo " done"
        ;;
    *)
        echo "unknown operation '$op'" >&2
        false
        ;;
esac

I'm actually using VFIO for my main home rig in the following way.
The host acts as a NAS to keep all my data and SMB shares; then I have:

  • a gaming Windows VM with my main GPU
  • a "home fun/hobby" Linux VM with the secondary GPU
  • a "workstation" Windows VM for my personal things (two, in fact: one using the main GPU and the second using the secondary GPU)
  • a "workstation" Windows VM with SolidWorks, for working from home, with the main GPU
  • an AI Linux VM with the main GPU
  • some other VMs used for experiments

My system:

  • AMD 5800X
  • Gigabyte X570 Aorus Elite
  • 64GB DDR4-3200
  • AMD RX 6700 XT
  • AMD RX 6600
  • 6x 2TB HDDs in ZFS RAIDZ2
  • 2TB NVMe as swap/cache/Windows gaming VM

What worked for me: having multiple systems that I can run at the same time.
What has not worked or has given me big headaches:

  1. Memory overcommitting: I've tried everything I could find on the internet, but as soon as the memory assigned to the VMs exceeds the physical amount of RAM, the system becomes unresponsive - even with NVMe swap, even with balloon drivers (sketched below the list), even with KSM… I had to buy 64GB of RAM, most of which sits unused except on the occasions when I really need it.
  2. Using a single GPU on multiple VMs: for the VMs that do not require a beefy GPU - like the host, office work or internet browsing - that would be a major benefit.
  3. Assigning mouse and keyboard (and host USB controllers) to the different VMs without conflicts has also given me some headaches, but in the end I made it work.
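For context, the ballooning attempt in point 1 was roughly along these lines (the domain name and sizes are just examples): give the VM a high maximum and a lower current allocation, then grow or shrink the balloon at runtime.

# example only - substitute your own domain name and sizes
virsh setmaxmem win-gaming 24G --config          # ceiling, takes effect on next VM start
virsh setmem    win-gaming 12G --config --live   # current balloon target, applied live

Even with that, the host became unresponsive once the combined working sets exceeded physical RAM.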

Bye
Andrea

I started to build my VFIO setup seriously after an LTT video (I think?) about 7 years ago. I've documented the build in a GitHub repository, with some pictures in an Imgur album:

4Gamers1PC Github
Imgur Album

Since then, I have upgraded from two Xeon E5-2670s and an ASRock workstation motherboard to a HEDT platform (X299 + 7980XE).
For the last 6 years my daily driver has been a Windows/Linux VM, running on a host without any video output.

I still use this machine to game with my friends. I just shut down my main VM and boot 2-5 VMs, each with its own GPU (Nvidia RTX 3090, AMD 6800 XT, Intel A750, Nvidia GTX 1060s) and USB controller. Each VM also has its own main drive with Windows 11 and Linux, but also gets an LVM snapshot of the gaming drives (sketched below).
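The gaming-drive snapshots are plain LVM copy-on-write snapshots, roughly like this per session (the volume group, LV names and sizes here are placeholders, not my exact layout):

#!/bin/sh
# Give each guest VM a throwaway snapshot of the shared gaming LV.
for vm in guest1 guest2 guest3; do
    lvcreate --snapshot --size 50G --name "games_$vm" /dev/vg_guests/games
done
# Attach /dev/vg_guests/games_guest1 etc. to the matching VM, and after the
# session drop them again:
#   lvremove -y /dev/vg_guests/games_guest*

Whatever the guests write goes into their snapshot, so the original gaming drive stays untouched.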


This is an older picture, without the AMD and Intel GPUs. The computer is located in another room.

For my latest VFIO adventure I wanted to test whether an old Voodoo 2 GPU still works. For that I ordered a PCIe-to-PCI adapter. It was super fun to get a Windows 98 SE VM with "gpu-passthrough" working :smiley:

VFIO rocks!


Have not attempted VFIO yet. Running a Ryzen 5800X and a Radeon RX 6800 XT on an X570 Aorus Master mobo on Debian 12. I am not looking to change hardware any time soon, so SR-IOV would be a must for me. Even worse, I would want to be able to use the full GPU for Linux games, and then swap over to bifurcating the GPU when I do CAD work in Fusion 360 in a Windows VM.

(I know, I know, I’d save myself a bunch of hassle if I could use a CAD program in Linux natively, but meh. Easier to just live with laggy Fusion360 in a VM. SR-IOV would still be important for Qubes (haven’t gotten it working just yet, but someday!).)

This actually worked? That is amazing! I would be interested to know more about this (and I would assume I’m not alone in that on these forums :smiley: )

My VFIO use case is probably one of the more obscure ones, and it came out of an interesting mix of budgeting and resource sharing.

Essentially, I tried to get an AMD Radeon Instinct MI100 passed through to a VM on Proxmox, on a Dell PowerEdge R7525 - the server needed to serve two entirely separate needs: VM hosting, and GPU compute for an independent group of people.

Through probably weeks of tinkering I ran into the infamous AMD reset bug - I am unsure whether the MI100's CDNA architecture inherited it from the RDNA it's based on, or whether it went the other way around.

As far as I am aware from googling, no one has ever attempted this with the MI100, at least not in a documented manner.

I probably went through whatever VFIO- and passthrough-related resources are out there. I ended up being able to pass the GPU through, as long as the VM booted once and never shut down, because of the reset issue. Right now I have pivoted towards trying to make it work within an LXC container, though that is a side project - the original purpose for the MI100 got cut from the budget, so now I just have an MI100 with no use in a Proxmox host.

TL;DR: Couldn't get PCI passthrough to work, so I re-purposed the guest OS graphics card.

Setup:
Ryzen 9 3950X
Gigabyte X570 AORUS Master
64GB RAM
Host OS = Linux Mint 19.3 Cinnamon
GIGABYTE Radeon RX 5700 XT GAMING OC
Guest OS = Windows 10
MSI GeForce RTX 2070 SUPER GAMING X TRIO

In December 2019 I had finally assembled the above computer, with the goal of running Linux on the host so I could game on the Windows guest if needed. My hope was to make a single computer that could do everything I wanted. The problem was that this was my first Linux computer, and I had almost no experience with anything on the Linux side. I was able to get Linux Mint set up the way I wanted relatively quickly. I tried following these Chris Titus Tech videos (lol, I still had the links) at the time, but I could never get it to work.

PCI Passthrough | System Configuration | Part 1 (https://www.youtube.com/watch?v=3yhwJxWSqXI)
PCI Passthrough | Virtual Machine Setup | Part 2 (https://www.youtube.com/watch?v=GbhUBQdMoJg)

If I remember correctly (it's been a while), the Windows VM would never see the guest card. After a week of poking at it, I decided to re-purpose the RTX 2070 SUPER for my older Windows box so I could play some VR games.

My problem was definitely that I was too new to Linux at the time, and I found the steps complicated because I had no idea what I was doing. To be honest, I still feel that's the case when I look at this new VFIO guide. Since then I haven't bothered to go back and try again, since my Windows box can still play my VR games. This has become less of an issue recently with Steam + Proton.

I hope the CTT videos you tried weren't already 3-4 years old lol - you should use a recent guide, e.g. the Arch wiki with adaptations for your distro, and maybe join a Discord like the VFIO one to get better help. Though to be fair, these guides are outdated almost as soon as the poster releases the video.

Thanks everyone for doing this. This is already a huge database of knowledge. And thanks Level1 team for the giveaway!

I currently do not have a PC (craptop only) to try this setup, but there’s a pretty lengthy YT video about single GPU passthrough and it seems to work for a lot of people:

Here’s the guy showing everything working:

Maybe this helps someone!

Single-GPU passthrough is worse than just dual booting, ngl. If you have an older NV GPU you may have more luck / a better experience with vgpu_unlock, but yeah - that's another can of worms.

The VFIO Discord now has an OpenAI chat just for single-GPU passthrough setups:
madgpt.wipf.nl - dangerous if you can't take a joke

There are reasonable single-GPU setups, but as soon as you have to kill your Linux DE/WM and every app that is using the GPU, it is most likely easier to just dual boot. Single-GPU passthrough breaks a lot - just a newly installed app that happens to be using the GPU when you want to start the VM can break it.

I’m someone who builds mid-range systems for myself every 3-5 years. I wait for that performance per dollar value proposition to show up before I shell out. Important data about my current build and VFIO are as follows:

Mainboard: Gigabyte X470 Aorus Ultra Gaming
CPU: Ryzen 9 5900x
Memory: 64GB 3200MT/s RAM (G.Skill)
BIOS: F62
IOMMU: enabled
Main GPU: Radeon 5700XT
Passthrough GPU: RTX 2080 Ti
Passthrough USB Controller: FL1100-based PCI-E USB 3.0 controller
OS: Arch (btw :wink: )
Guest OS: Windows 10
Guest Display: Pixio PX274P
Hypervisor: libvirt + kvm
Kernel: linux-vfio package from AUR (6.3.7 currently)
GRUB_CMDLINE_LINUX_DEFAULT (from /etc/default/grub): loglevel=3 sysrq_always_enabled=1 amd_iommu=on module_blacklist=nouveau,snd_hda_intel pcie_acs_override=downstream,multifunction vfio-pci.disable_idle_d3=1 vfio-pci.ids=10de:1e07,10de:10f7,1b73:1100 libata.allow_tpm=1 amdgpu.ppfeaturemask=0xffffffff acpi_enforce_resources=lax
Other components: Barrier (Synergy fork) is used for easy non-gaming KB/mouse integration. Scream is used for passing audio back to the host OS. Looking Glass is installed, but I rarely use it as it seemed laggy compared to a dedicated native display; I may re-attempt in a few months. Sunshine is installed for occasional remote game streaming.
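For anyone copying the GRUB_CMDLINE_LINUX_DEFAULT above: it lives in /etc/default/grub and only takes effect after regenerating the config and rebooting (standard GRUB procedure on Arch). Afterwards you can check that the passthrough devices actually picked up vfio-pci:

grub-mkconfig -o /boot/grub/grub.cfg
# after the reboot: should report "Kernel driver in use: vfio-pci" for the 2080 Ti
lspci -nnk -d 10de:1e07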

My config has been very stable for at least the last year. I use my setup primarily for running games that refuse to run under Lutris/WineGE/Proton/whatever. My build was previously based around a 3600x, but a good deal popped up for a 5900x last year so I demoted the 3600x to my homelab NAS build. LMK if any other details are useful.