ASUS Prime X570-P for vfio - what will I miss?

I’ve seen a couple of threads talking about motherboards and VFIO, but the suggestions seem to be in the high-price region, so allow me to take a different perspective on the discussion.

What will I miss if I am building a Ryzen 3700x with two GPUs for passthrough (5700XT and an older 970GTX) on a mid-range ASUS Prime X570-P motherboard?


tl;dr: You will likely be fine, but you should ask somebody who owns that board to post its available IOMMU groups. Otherwise you run the risk of extra work or having to return the board for a different model.

VFIO has nothing to do with price range, but it does require some things from the motherboard vendor, and some features may be very beneficial for VFIO gaming. The higher-end boards naturally attract more reviews and builds by enthusiasts.

The biggest source of discussion is always IOMMU grouping, as that varies by vendor and on some boards may be barely usable.
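If anybody with the board wants to share their grouping, something along these lines is a quick way to dump it. A minimal sketch, assuming a kernel booted with the IOMMU enabled (e.g. amd_iommu=on); it just walks /sys/kernel/iommu_groups and labels each device with lspci:

    #!/usr/bin/env python3
    # Minimal sketch: print every IOMMU group and the PCI devices inside it.
    # If /sys/kernel/iommu_groups is empty, the IOMMU is not enabled in
    # firmware or on the kernel command line.
    import os
    import subprocess

    GROUPS = "/sys/kernel/iommu_groups"

    for group in sorted(os.listdir(GROUPS), key=int):
        print(f"IOMMU group {group}:")
        devdir = os.path.join(GROUPS, group, "devices")
        for dev in sorted(os.listdir(devdir)):
            # lspci -nns adds the vendor:device IDs next to the description
            desc = subprocess.run(["lspci", "-nns", dev],
                                  capture_output=True, text=True).stdout.strip()
            print(f"  {desc or dev}")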

The next thing is a feature-rich environment:

  • iGPU outputs (not yet for Ryzen 2)
  • Separate NVMe (easy to pass through; see the sketch after this list)
  • Separate USB controller for 3.1 (also easy to pass through)
  • Supports OC (getting back some of the losses from KVM)
  • Sensible power delivery (stability issues cause more harm to people running complex setups)
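On the “easy to pass through” point: once the NVMe drive or the extra USB controller sits in its own IOMMU group, handing it to a VM is mostly a matter of rebinding it to vfio-pci. A minimal sketch of that rebind, run as root; the PCI address 0000:04:00.0 is hypothetical, substitute your own from lspci:

    #!/usr/bin/env python3
    # Minimal sketch: unbind a PCI device from its native driver and hand it
    # to vfio-pci via driver_override. Run as root with the vfio-pci module
    # loaded (modprobe vfio-pci).
    import os

    DEV = "0000:04:00.0"          # hypothetical address, adjust to your system
    SYS = f"/sys/bus/pci/devices/{DEV}"

    # 1. Detach the device from whatever driver currently owns it (e.g. nvme or xhci_hcd)
    driver = os.path.join(SYS, "driver")
    if os.path.islink(driver):
        with open(os.path.join(driver, "unbind"), "w") as f:
            f.write(DEV)

    # 2. Tell the PCI core that only vfio-pci may claim this device from now on
    with open(os.path.join(SYS, "driver_override"), "w") as f:
        f.write("vfio-pci")

    # 3. Trigger a re-probe so vfio-pci actually picks the device up
    with open("/sys/bus/pci/drivers_probe", "w") as f:
        f.write(DEV)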

Ah, that seems like important information. I’m keeping an eye on the IOMMU groups post.

I keep reading this, now I have to ask …

Why pass the controllers? What’s wrong with passing individual USB devices to the VM if and as required? And why not just give the VM a raw partition/volume?

Those are good questions, and I wanted to include them as points to be discussed, so thanks for asking.

USB is easier to answer: because it works. Individually passed peripherals often reset, and then you have to connect them again. Then there is the question of plug and play, which works out of the box with a passed-through controller. And finally logistics: you can mark ports as exclusive to the VM, which makes them easy to work with for your users. (It is easy to forget you are not always designing systems for people like you.)

In my setup I have my GearVR and Dell monitor on my USB 3.1 controller, and this way the monitor for Windows has USB for Windows. (And tethering for my GearVR works without random issues.)
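If you want to see which physical controller your ports actually belong to before deciding what to dedicate to the VM, this little sketch resolves each USB root hub to its parent PCI device; it only reads sysfs, no assumptions beyond a reasonably recent kernel:

    #!/usr/bin/env python3
    # Minimal sketch: map each USB root hub (bus) to the PCI controller it
    # hangs off, so you know which ports travel together when the whole
    # controller is passed to the VM.
    import glob
    import os

    for bus in sorted(glob.glob("/sys/bus/usb/devices/usb*")):
        real = os.path.realpath(bus)                 # e.g. .../0000:0a:00.3/usb5
        pci_addr = os.path.basename(os.path.dirname(real))
        print(f"{os.path.basename(bus)} -> PCI controller {pci_addr}")

Anything plugged into that controller’s ports then follows it into the VM automatically.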

A raw partition or volume is still handled by the host kernel, and at speeds measured in GB/s that is bound to be noticeable. You are leaving the performance of an expensive NVMe drive on the table.

I wonder. Reportedly even the difference between an NVMe and regular SATA SSD is negligible in games (and anyway it’s just load times), so I have a hard time believing the hit from paravirtualisation would actually matter. (I/O-bound productivity applications are another matter entirely, of course.)

How’s that work? Passing through a dedicated NVMe might avoid a performance hit, but it does require a second (expensive) NVMe; in the raw partition/volume case the host and VM can share one.

The problem with PV shows up when you hit high CPU load on all cores: your IO performance then depends on whatever CPU resources are left. That is the only real downside when gaming.

@Pixo already said it and I am just going to add: NVMe is not just a new interface, it changes the way we work with drives. Sure, you can get an NVMe drive that behaves much like a SATA one, but the nature of the interface is more like a RAM disk with partial SLC backing and full QLC backing.

In the strict sense most SATA SSDs today are also DDR4 ramdisk + SLC + QLC, but they are designed to be used as a faster HDD. My point is that form follows function and then function follows form.

I don’t really think that you would miss out on anything.
But you should double-check the situation on the PCIe lane layout:
how the PCIe x16 slots are wired,
and also the M.2 slots if you want to use those for passthrough (a quick way to check is sketched at the end of this post).

But other than that I don’t really think that you will miss out on that much.
Of course the board only comes with one NIC.
But I believe you can bridge that, if I’m right?

How the IOMMU groups look on that particular board I unfortunately cannot tell.
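For the lane-layout check mentioned above, the negotiated link width and speed can be read straight from sysfs. A minimal sketch that only looks at display controllers and NVMe drives:

    #!/usr/bin/env python3
    # Minimal sketch: print negotiated vs. maximum PCIe link speed/width for
    # GPUs (class 0x03xxxx) and NVMe drives (class 0x0108xx), to confirm how
    # the x16 slots and M.2 slots are actually wired.
    import glob
    import os

    def read(path):
        try:
            with open(path) as f:
                return f.read().strip()
        except OSError:
            return "n/a"

    for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
        cls = read(os.path.join(dev, "class"))
        if cls.startswith("0x03") or cls.startswith("0x0108"):
            cur = read(os.path.join(dev, "current_link_speed"))
            curw = read(os.path.join(dev, "current_link_width"))
            mx = read(os.path.join(dev, "max_link_speed"))
            mxw = read(os.path.join(dev, "max_link_width"))
            print(f"{os.path.basename(dev)}: running {cur} x{curw} (max {mx} x{mxw})")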

How so? The technology may be vastly different from conventional HDDs, even SSDs, resulting in performance that’s orders of magnitude better according to most metrics … but at the end of the day it’s just persistent storage on top of which we put a filesystem.

The “nature we work with drives” has changed insofar as it used to be possible to temporarily disconnect / swap out a drive quickly and easily (even without a hot-swap enclosure); now that can entail having to remove the GPU, motherboard cover and/or CPU cooler(!) …

So NVMe is pretty much the same, SSDs have established a form already. Now NVMe has the same form, but can be so much faster it brings out new opportunities.

I look at it in a similar way as when gigabit networking enabled iSCSI and mdadm over Ethernet: the form of the network did not change, but the speed made those solutions viable.