So on Intel, it seems that with the 6- and 8-core processors (8700, 8700K, 9900K, 9700K, etc.), IOMMU separation is pretty good for chipset devices, but there is no IOMMU separation for CPU-attached devices.
With Z390, we’re seeing AIB partners offer x8/x4/x4 PCIe slot configurations using CPU lanes, which is great for device flexibility and for reducing bottlenecks, but all of those devices share a single IOMMU group, which isn’t great for virtualization.
On the 7700K, at least some boards provided separation for x8/x8 devices, but that no longer seems to be the case (at least with the latest UEFI).
For VFIO users who want to run a virtual machine, it is still possible to use the iGPU on Intel CPUs plus one add-in card, but users who want two add-in cards are relegated to another platform, such as Intel’s own X299 or AMD’s Ryzen/Threadripper.
Generally, users of this kind of virtualization have had an easier, more stable time on higher-end Xeon and X299 Intel CPUs. And while AMD is quickly narrowing the gap, the high clock speeds and performance of the 8-core 9900K may appeal to certain users who run this kind of virtualization.
Are you (or would you be) an Intel user who uses more than one IOMMU group from your CPU PCIe lanes? If so, let me know.
I’m hoping that there are enough of us to make it worth Intel’s while. Both MSI and Gigabyte have directly expressed interest in providing these features for users as well, which is extremely encouraging.
If so, please share:

- Your motherboard model
- The output of `ls-iommu.sh`
- 1-2 sentences on your intended use case (multiple VMs, mixed host/VM add-in graphics, etc.)
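For anyone who doesn’t have a copy of `ls-iommu.sh` handy, here is a minimal sketch of the kind of enumeration such a script typically performs: walking `/sys/kernel/iommu_groups` and printing each group’s devices via `lspci`. The function name is illustrative; the exact script you use may differ.

```shell
#!/bin/bash
# Sketch of an ls-iommu.sh-style helper (function name is illustrative).
# Walks /sys/kernel/iommu_groups and prints the devices in each group.
list_iommu_groups() {
    shopt -s nullglob   # loops simply skip if no IOMMU groups exist
    for group in /sys/kernel/iommu_groups/*; do
        echo "IOMMU Group ${group##*/}:"
        for dev in "$group/devices/"*; do
            # lspci -nns prints the device name plus its [vendor:device] IDs
            echo -e "\t$(lspci -nns "${dev##*/}")"
        done
    done
}

list_iommu_groups
```

Devices that share a group number in this output must be passed through to a VM together (absent ACS overrides), which is why the grouping of CPU-attached slots matters so much here.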
If you’ve purchased X299 in lieu of 1151/Z370 for exactly this reason, I want to hear from you too.