Increasing VFIO VGA Performance

I figured I should report back on this. I added a few lines to my little VM startup script (the one that sets CPU isolation and the performance governor) to set the IRQ affinity mask to fcfc for all vfio devices. It seems to work well. I’m not sure whether the latency is actually lower than before, but I have noticed less stutter, such that I can now use Looking Glass with DXGI capture at 120 UPS perfectly.

# 0xfcfc = CPUs 2-7 and 10-15; pin every vfio interrupt to that mask
grep vfio /proc/interrupts | cut -b 3-4 | while read -r i ; do
   echo "set mask fcfc to irq $i"
   echo fcfc > /proc/irq/$i/smp_affinity
done

I know I could do a better job of pulling the IRQ numbers with awk or sed so that it also handles three-digit IRQs, but at the time I couldn’t be bothered with the regex. If someone else wants to improve on it, be my guest.
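
If I were to bother, it would probably look something like this (just a sketch, untested; awk takes everything before the colon, so the digit count no longer matters):

# same idea, but let awk pull the IRQ number regardless of how many digits it has
grep vfio /proc/interrupts | awk -F: '{gsub(/ /, "", $1); print $1}' | while read -r i ; do
   echo "set mask fcfc to irq $i"
   echo fcfc > /proc/irq/$i/smp_affinity
done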

Do you still have the problem with the core count?
If so can you post what kernel version you are using?
For Zen (and also Bulldozer) CPUs you also need a recent enough kernel.
The old ones would not export the required CPUID leaf, so the topology and cache information could not be exposed under topoext.
Note that newer kernels can have problems if CRYPTO_DEV_SP_PSP is compiled into them.
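
A quick way to check how your kernel was built (a sketch; it assumes your distro ships its config under /boot or exposes /proc/config.gz):

grep CRYPTO_DEV_SP_PSP /boot/config-$(uname -r)   # =y means built in, =m means module
zgrep CRYPTO_DEV_SP_PSP /proc/config.gz           # alternative if your kernel exposes its config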

If you still do not see the correct number of cores with host-passthrough, you could use CPU-Z to dump the CPUID info and I could have a look at whether it’s some Windows bug or not.
It’s the Save Report button under the About tab.

1 Like

I finally got it working on Monday, and it indeed was the kernel that was too old. I’m now using 4.19.1, which works fine for host-passthrough but has caused some other issues on the host. I might just switch to Arch for the time being.

I haven’t had time to do extensive testing yet, but CPU-Z now correctly tells me 8 cores / 16 threads. It also shows the correct caches, though it reports the numbers for both dies. Windows Task Manager tells me that I have 16 logical processors and doesn’t display a core count, but I think this is normal? It was the case with EPYC as well.

Now it’s on to fixing other issues and tuning it for performance. One thing that is annoying me is that after a cold start / reboot of the host, the guest only gets the GPU as PCI-E 1.0. I have to shut the guest down and “cold start” it again; only then does it get the GPU as PCI-E 3.0. I know it’s still not quite 3.0, as I am unable to apply gnif’s nasty patch, but it is very noticeable in performance. I’ll need to try some things when I have time, but for now I just boot it twice.
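
(For anyone wanting to check the same thing, this is roughly how I verify what link speed the card actually negotiated on the host; the PCI address is just an example, and 2.5GT/s corresponds to PCI-E 1.0 while 8GT/s corresponds to 3.0.)

sudo lspci -s 0b:00.0 -vv | grep -E 'LnkCap|LnkSta'   # LnkCap = what the link can do, LnkSta = what was negotiated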

Thanks for all your help!

You should set the IRQ affinity to a single core; otherwise, when interrupts happen on a different core, the ISR (Interrupt Service Routine) task has to be moved to the core the IRQ was triggered on, increasing latency due to L1/L2 cache misses. Here is how I set it :slight_smile:

# CORE is the host core to pin to, $1 is the IRQ number
MASK=$(printf "%x" $((1<<$CORE)))
echo $MASK > /proc/irq/$1/smp_affinity
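
For completeness, driving that snippet for every vfio interrupt could look something like this (a sketch; core 2 is just an example):

CORE=2
MASK=$(printf "%x" $((1<<CORE)))
for IRQ in $(grep vfio /proc/interrupts | cut -d: -f1 | tr -d ' '); do
    echo $MASK > /proc/irq/$IRQ/smp_affinity
done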

Also make sure you’re not running irqbalance, or set it to one-shot mode, or ban it from touching your vfio interrupts, as it will undo any affinity configuration you make. Always check /proc/interrupts to see whether your affinity settings are sticking.

watch -n0 cat /proc/interrupts

Or, if you only want to watch a single core’s column, pipe it through awk. The first field is the IRQ number and the remaining fields are one column per CPU, so pick the field that matches the core you care about:

watch -n0 "cat /proc/interrupts | awk '{print \$1 \" \" \$12}'"

In theory, yes, but if the available local memory becomes sparse or fragmented, the kernel may shift memory to the other die. If you’re going for the best latency you want to avoid this at all costs. By pinning the memory locally, running out of local memory makes the allocation fault with an OOM error instead of the kernel silently finding more on the other die, which is what you want for a dependable configuration.

Also, numad doesn’t allocate memory; it simply tries to keep all the memory and threads of a single process local, on the same die. When allocating such huge chunks of RAM for a guest VM, it is best to be as specific as possible, no matter what virtualisation software you’re using.

It will attempt to locate processes for efficient NUMA locality and affinity, dynamically adjusting to changing system conditions

The “dynamically” part is the issue: at any point in time it may shift the allocated RAM to the other die if, for instance, another process on the host needs more RAM and its own node has run out.

Also, AFAIK current Qemu doesn’t support NUMA natively; it only supports CPU pinning. As such, when it allocates the guest’s RAM from its primary thread at startup, affinity has not yet been set (it’s not a vCPU thread) and the memory may be allocated on the wrong die. I can confirm this for Looking Glass’s shared memory, which is why I pre-allocate it using numactl before starting the VM.

numactl --length 64m --shm /dev/shm/looking-glass --membind=$NODE
chown qemu:qemu /dev/shm/looking-glass
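
On the same theme, if you’re launching Qemu by hand you can also wrap the whole invocation in numactl so the guest’s own allocations stay on the right node (a sketch; $NODE and the rest of the command line are whatever applies to your setup):

numactl --cpunodebind=$NODE --membind=$NODE \
    qemu-system-x86_64 -name win10 ...   # followed by your usual arguments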
3 Likes

Thanks for the explanation! I will definitely dig down more on this and try different things when I have time to do so. For now it works well enough for my use case but increasing performance and reliability is always a good thing.

1 Like

Some good news, everyone! I just had a proper patch set dropped in my lap that should address the PCIe link speed negotiation. I am about to apply and test; I will let you all know how it goes, and if it’s stable I will post details here so that others can also test.

4 Likes

So far so good; I need to do a bit of a burn-in and some testing across reboots, etc. It seems to be working well. Unfortunately I have run out of time for the moment and will have to get back to this later this afternoon.

1 Like

Ok, I have had some more time to test and play, and while the only feedback so far has been on some coding style issues, I am happy with the results. The change introduces some new configuration values, as Red Hat wish to retain backwards compatibility by default for now and perhaps change things in a later revision of the Q35 platform version.

Huge thanks to Alex Williamson of Red Hat for his time and effort on this!

The patch set is a total of seven parts and is available at https://patchwork.kernel.org/cover/10683043/

Usage is simple: the speed and width of the bus need to be set on the root port, not on the PCIe device. Patch 6 of 7 has an example libvirt configuration, but if you’re not using libvirt here is how to do it directly.

 -device pcie-root-port,id=root_port1,chassis=0,slot=0,bus=pcie.0 \
 -set device.root_port1.speed=8 \
 -set device.root_port1.width=16

You may be able to put those extra options on the root-port device itself as parameters, but I have not bothered to test this. Note that the device is no longer ioh3420; it has been changed to pcie-root-port.
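
If the inline form does work, it would presumably look something like this (untested, as noted, and simply reusing the property names from the -set example above):

 -device pcie-root-port,id=root_port1,chassis=0,slot=0,bus=pcie.0,speed=8,width=16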

This patch set applies against the latest version of Qemu in git. Please note that several things have changed there, namely:

  • Ryzen users now need to explicitly set the host CPU option topoext to enable hyperthreading (see the sketch after this list).

  • The PulseAudio patches no longer apply cleanly; it seems more work was done on the PA code without any regard for these audio problems or the fix that we have been using for some time.

    It would be great if @spheenik would be so kind as to update his patch. If not, I will look into it when I can find some time, however I will need to get up to speed with how PA works in Qemu. If I can help it I’d rather not, as this is taking time away from Looking Glass.

    Edit: After a bit of messing around I managed to get an acceptable sound solution by using ALSA in Qemu, allowing it to use the PulseAudio ALSA shim. I found that the following extra environment variables resolved all sound issues, though I am using AC97, which may also have helped here.

    export QEMU_AUDIO_DRV=alsa
    export QEMU_ALSA_DAC_BUFFER_SIZE=512
    export QEMU_ALSA_DAC_PERIOD_SIZE=128
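
Regarding the topoext point above, for reference this is roughly what it looks like. On the Qemu command line:

 -cpu host,topoext=on

or the libvirt equivalent inside the <cpu> block, which is the same element that appears in the configs posted further down this thread:

 <feature policy='require' name='topoext'/>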
7 Likes

Great news! Thanks for all your hard work, didn’t expect it so early. I was looking forward to this patch and will test it extensively during the weekend.

I can also try to go through the diffs in the PA code and see if I can adapt the patch. No guarantees though, haven’t worked with C in quite a while and haven’t worked on QEMU or PA at all. Is the PA patch for 3.0.0?

Ok, so this is excellent news. I’m really considering getting a Vega 56 to pass through if I can find a good deal on one. So I want to be absolutely sure that you guys have confirmed that the Vega reset bug is actually fixed (with clean shutdown) by using Q35 with this patch and the prescribed pcie-root-port configuration.

It is not fixed, and since AMD have gone dark on the matter it’s unlikely to ever be fixed. We are now at the stage where we are looking at getting a PCI quirk into the VFIO driver to disable reset of the AMD GPUs; however, this will not recover a GPU from a crashed VM.

1 Like

Hmmm, alright. What really confuses me is that there always seem to be some people who can use a Vega GPU without having any reset issues.

I have seen an acceptable (to me) workaround where you detach the device, suspend to RAM, then rescan, which apparently manages to reset it. It’s rare for me to need to turn off my VM without shutting down anyway, so as long as Vega GPUs work fine while loaded into the VM I think I can probably manage.

It does; I run a Vega 64 for the workstation VM that I rely on for an income. The only suggestion is, if you’re running Linux on the Vega, to ensure you have a very recent kernel, as older kernels suffer from some pretty nasty GPU stalls, crashes, etc.

No plans for that at all; I run dual head and have a 580 for operations on the host. I’m honestly just fed up with Nvidia and my 1070, and want to make better use of my FreeSync monitors.

My plan is to try to snag a used 56 with Samsung RAM, flash a 64 BIOS and tune in an undervolt “OC” in bare-metal Windows, then switch over to using it in the VM.

I’ve kept up with Vega benchmarks on Phoronix, and it just doesn’t seem worth it to use a Vega GPU in Linux right now.

Is anyone else having trouble with the latest qemu git commits of the last two days, specifically when using the virtio network adapter with q35/i440fx? When I use the virtio adapter it loses its connection after about a minute or so, and the guest eventually hits a critical_process_died BSOD or just flat out refuses to shut down or reboot (this happens with and without GPU passthrough, FWIW). If I use the e1000 adapter everything seems to work fine from what I can tell so far in my testing. I have also tried the stable and latest virtio drivers, but that didn’t seem to have any effect whatsoever. Figured I’d post this just in case someone else encounters the problem.

The new q35 patches seem to work fine here on my Ryzen 5 1600 / X370-based system, so that’s a plus :smile: It’s good to see that the bus speed issue is finally being resolved.

No, I have not seen this issue. I have, however, found that a Linux guest with a Vega 56 fails to start properly; the GPU hangs when starting X. Fortunately it’s not related to this patch set, as reverting to Qemu 3.0.0 and then applying the set works just fine.

If you wish to apply this patch set to 3.0.0, it nearly applies cleanly; patch 6 fails simply due to a code formatting change and can be applied by hand.
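
For anyone following along, applying the series looks roughly like this (a sketch; it assumes you have already cloned Qemu and saved the seven patches locally, and the file names are only examples):

git checkout v3.0.0 -b pcie-link-speed   # start from the 3.0.0 tag
git am *.patch                           # apply the series in order
# if patch 6 refuses to apply, fix it up by hand, git add the files, then: git am --continue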

I posted earlier in this thread with very bad latency, and I figured out a way to reduce my lag.
I redid my VM completely and just used virt-manager to configure it.

And my latency went down to this:

(latency screenshot)

Here is my config

<domain type='kvm'>
  <name>win10</name>
  <uuid>9151c10a-25d2-4e09-9d38-3a9039b9d1aa</uuid>
  <metadata>
    <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
      <libosinfo:os id="http://microsoft.com/win/10"/>
    </libosinfo:libosinfo>
  </metadata>
  <memory unit='KiB'>12681216</memory>
  <currentMemory unit='KiB'>12681216</currentMemory>
  <vcpu placement='static'>8</vcpu>
  <os>
    <type arch='x86_64' machine='pc-q35-3.0'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/ovmf/ovmf_code_x64.bin</loader>
    <nvram>/var/lib/libvirt/qemu/nvram/win10_VARS.fd</nvram>
    <bootmenu enable='no'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='whatever'/>
    </hyperv>
    <kvm>
      <hidden state='on'/>
    </kvm>
    <vmport state='off'/>
  </features>
  <cpu mode='host-model' check='partial'>
    <model fallback='allow'/>
    <topology sockets='2' cores='4' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
    <timer name='hypervclock' present='yes'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='writeback' io='threads' discard='unmap' detect_zeroes='unmap'/>
      <source dev='/dev/disk/by-id/ata-INTEL_SSDSA2BW160G3_CVPR146008C3160DGN'/>
      <target dev='sda' bus='scsi'/>
      <boot order='1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <target dev='sdb' bus='sata'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/home/vertex/freenas/isos/os/virtio-win-0.1.160.iso'/>
      <target dev='sdc' bus='sata'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x2'/>
    </controller>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0x14'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:f9:27:d2'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x0b' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x0b' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x1b1c'/>
        <product id='0x1b11'/>
      </source>
      <address type='usb' bus='0' port='1'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x046d'/>
        <product id='0xc085'/>
      </source>
      <address type='usb' bus='0' port='2'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x0d8c'/>
        <product id='0x0102'/>
      </source>
      <address type='usb' bus='0' port='3'/>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </memballoon>
  </devices>
</domain>

Any “tuning” I do, like CPU pinning, results in a lot more latency issues.

Also, exposing my CPU via host-passthrough triples the latency.

Any explanation for that?

  • Are you running a recent kernel version?
  • Are you running a recent Qemu version? And if it is very recent and you are on Ryzen, did you specify topoext as a CPU flag?
  • Is your AGESA up to date?
  • Are you pinning the correct cores to the correct threads?
  • Have you exposed the NUMA architecture to Linux? (Memory interleave set to “Channel” in the BIOS.) You can check from the host with the commands below.
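
(A quick sketch using standard tools; with interleave set to Channel you should see two nodes:)

numactl --hardware     # lists the NUMA nodes with their CPUs and memory
lscpu | grep -i numa   # short summary of node count and CPU lists
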
  • 4.19.1-arch1-1-custom
  • qemu 3.0.0-4 on a Threadripper 1950X, and yes, I had topoext as a CPU flag, but not any more in my best-performing config
  • Yes, the newest available BIOS from August
  • Don’t know about this one, but the answers to my post above said yes
  • Yes, I was isolating all CPUs from a single NUMA node; I posted my lstopo above as well

Here is my old config from before, with at least 8 times the latency:

<domain type='kvm'>
  <name>win10-gaming</name>
  <uuid>ca042030-d63d-40a5-8b32-7df0948f81ed</uuid>
  <metadata>
    <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
      <libosinfo:os id="http://microsoft.com/win/10"/>
    </libosinfo:libosinfo>
  </metadata>
  <memory unit='KiB'>12681216</memory>
  <currentMemory unit='KiB'>12681216</currentMemory>
  <vcpu placement='static'>8</vcpu>
  <iothreads>2</iothreads>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='3'/>
    <vcpupin vcpu='4' cpuset='4'/>
    <vcpupin vcpu='5' cpuset='5'/>
    <vcpupin vcpu='6' cpuset='6'/>
    <vcpupin vcpu='7' cpuset='7'/>
    <emulatorpin cpuset='0-1'/>
    <iothreadpin iothread='1' cpuset='0-1'/>
    <iothreadpin iothread='2' cpuset='2-3'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-3.0'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/ovmf/ovmf_code_x64.bin</loader>
    <nvram>/var/lib/libvirt/qemu/nvram/win10-gaming_VARS.fd</nvram>
    <bootmenu enable='yes'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vpindex state='on'/>
      <synic state='on'/>
      <stimer state='on'/>
      <reset state='on'/>
      <vendor_id state='on' value='KVM Hv'/>
    </hyperv>
    <kvm>
      <hidden state='on'/>
    </kvm>
    <vmport state='off'/>
  </features>
  <cpu mode='host-passthrough' check='full'>
    <topology sockets='1' cores='8' threads='1'/>
    <cache level='3' mode='emulate'/>
    <feature policy='require' name='topoext'/>
    <feature policy='require' name='invtsc'/>
    <feature policy='require' name='svm'/>
    <feature policy='disable' name='hypervisor'/>
    <feature policy='require' name='ht'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' present='no' tickpolicy='catchup'/>
    <timer name='pit' present='no' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
    <timer name='kvmclock' present='no'/>
    <timer name='hypervclock' present='no'/>
    <timer name='tsc' present='yes' mode='native'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='writeback' io='threads'/>
      <source dev='/dev/disk/by-id/ata-Crucial_CT256MX100SSD1_14330CF987C3'/>
      <target dev='sda' bus='scsi'/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-to-pci-bridge'>
      <model name='pcie-pci-bridge'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x11'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x12'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0x13'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0x8'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='7' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='7' port='0x9'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='pci' index='8' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='8' port='0xa'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='9' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='9' port='0xb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
    </controller>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x2'/>
    </controller>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <driver queues='4'/>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:09:ab:46'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <link state='up'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </interface>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x1b1c'/>
        <product id='0x1b11'/>
      </source>
      <address type='usb' bus='0' port='2'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x046d'/>
        <product id='0xc085'/>
      </source>
      <address type='usb' bus='0' port='3'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x0d8c'/>
        <product id='0x0102'/>
      </source>
      <address type='usb' bus='0' port='1'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x0b' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x0b' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </memballoon>
    <shmem name='looking-glass'>
      <model type='ivshmem-plain'/>
      <size unit='M'>32</size>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
    </shmem>
  </devices>
</domain>

AMD is making it difficult for me to want to support them. I am planning a new PC build and would buy a new AMD XFX RX 580 card, but this darkness from AMD makes me want to consider an Nvidia card instead. However, Nvidia block their driver from working in a VM when the graphics card is dedicated to it, which doesn’t make me want to buy from them either. Maybe it works today, but what happens with the next driver update: does it continue to work, or do they break it again? I can’t decide which path, AMD or Nvidia, is best. Grr.

I am also planning on giving Looking Glass a go. I have no room for two monitors, keyboards, and mice. If it doesn’t work I’ll have to get a KVM.

I think I am leaning toward Nvidia for the guest GPU; with them it works, and I can avoid updating the driver. If there are other thoughts, please share.

AMD, you are just hurting yourself!