Hello All!
I have seen several instances of people asking about VFIO and the Asus PRO WS W680-ACE motherboard - most specifically, questions around the IOMMU grouping for this board. After months of research and waiting for Asus B2B support to answer these questions (they never did…), I decided to simply do the testing myself.
With this testing now complete - at least the portions of it I have had the resources and time to complete - I wanted to share my findings with everyone… Spoiler alert: this board seems like an amazing solution for VFIO use cases and provides exceptional modularity.
BLUF / TL;DR:
Bad news first… If you are looking for official Asus documentation for any of this, you are unfortunately out of luck - though I am working with Asus B2B support to ensure they have this information for future customers. Also, while the front-line B2B support from Asus is exceptional, don’t expect to get any answers from their engineering teams - as evidenced by my 4-month-old, still-open support case.
So… the good news… Simply put, this board, in my opinion, is the very board the VFIO community has been needing. When it comes to PCI and IOMMU grouping, the news is fantastic: each PCI/PCIe interface on the board is assigned a unique PCI bus and creates a new IOMMU group, meaning every human-accessible interface is individually addressable.
Methodology:
I performed this testing using the following hardware:
- Asus PRO WS W680-ACE Motherboard
- Intel 12100K CPU
- NVIDIA RTX 4090 OC
- NVIDIA GTX 1080 Ti FE
- Western Digital Black 1TB PCIe m.2 NVMe
- Samsung 990 PRO 2TB PCIe m.2 NVMe (2x)
- Intel XL710 2-Port QSFP+ NIC
- QCA 6141 802.11ac WLAN Adapter
Limitations… I have not yet been able to fully test the ‘SlimSAS’ functionality of the board, so I do not know exactly where it sits in, or how it impacts, the PCI architecture of this board. That said, I can report that it only participates in the PCI architecture when put into PCI/PCIe mode… it will not operate on, nor consume, PCIe lanes when put into SATA mode. It is likely, however, that like the on-board NICs, the W680 has dedicated PCIe lanes for this PCIe-enabled ‘SlimSAS’ functionality - but don’t take my word for it.
As for the software used, I installed Proxmox VE 8. Technically, I installed Proxmox VE 7.4 from ISO and updated to 8.x, as there is a compatibility issue between some piece of this hardware and the PVE 8.x ISO installer.
Please use this diagram for understanding the remainder of this post:
Hardware Setup:
- Interface A: Unpopulated
- Interface B: nVidia 1080Ti FE
- Interface C: nVidia 4090 OC
- Interface D: Unpopulated
- Interface E: Intel XL710
- Interface F: WD Black 1TB NVMe
- Interface G: Samsung 990 PRO 2TB NVMe - 1 of 2
- Interface H: Unpopulated
- Interface I: Samsung 990 PRO 2TB NVMe - 2 of 2
- Interface J (On-board NIC 0): Connected to Lab Network 1
- Interface K (On-board NIC 1): Connected to Lab Network 2
Software Configuration:
- PVE 7.4 updated to 8.x
- Installed grub-efi-amd64 (apt install grub-efi-amd64)
- Added intel_iommu=on iommu=pt to the GRUB command line (/etc/default/grub)
- Enabled the vfio, vfio_iommu_type1, vfio_pci, and vfio_virqfd kernel modules (added to /etc/modules) - a sketch of these edits follows this list
- Blacklisted the appropriate drivers (e.g. nvidia / radeon / …) in /etc/modprobe.d/pve-blacklist.conf
- Installed lshw (apt install lshw)
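For anyone replicating this on a stock PVE install, here is a minimal sketch of those steps, assuming the standard Debian-style file locations; the blacklist entries are examples only, and your existing GRUB_CMDLINE_LINUX_DEFAULT contents may differ:

```bash
# /etc/default/grub - enable the IOMMU in passthrough mode
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# /etc/modules - load the VFIO modules at boot
# (on PVE 8 / kernel 6.2+, vfio_virqfd has been folded into the core vfio
#  module, so that entry may just produce a harmless warning)
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

# /etc/modprobe.d/pve-blacklist.conf - keep host drivers off passthrough GPUs
blacklist nouveau
blacklist radeon

# Apply the changes and reboot
update-grub
update-initramfs -u -k all
```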
Findings (CPU / PCH Alignment):
While not specifically documented in one central location, Asus does have some documentation as to the PCI interface alignment of the board. It can be tricky to put together, as you have to compile it from multiple sources… Thankfully, what is reported is true and accurate.
PCI interfaces associated with the CPU: B, C, F
PCI interfaces associated with the PCH: A, D, E, G, H, I, J, K - also worth noting that the ‘SlimSAS’ controller interfaces to the PCH, regardless of PCIe or SATA mode.
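If you want to verify the CPU vs. PCH split on your own build, the PCI topology makes it visible with stock tooling; a minimal sketch (nothing board-specific assumed):

```bash
# Tree view of the PCI topology: slots attached to the CPU's root ports and
# slots behind the PCH's root ports appear under different bridges
lspci -tvnn

# lshw (installed earlier) maps the same devices to their bus addresses
lshw -businfo -class bridge -class display -class storage -class network
```

Cross-referencing the root-port device IDs against Intel’s datasheet listings tells you which bridges belong to the CPU and which belong to the W680 PCH.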
Findings (PCIe Lanes):
So much of this is dependent on the CPU you install; however, specific to the PCH, there are a total of 12x PCIe 4.0 lanes and 16x PCIe 3.0 lanes. While this may seem like a lot, I actually wish Asus/Intel would have included more within the PCH. Nevertheless, this is still very reasonable.
It is also worth noting that each of the on-board NICs has dedicated PCIe 3.0 lanes - thus not consuming from the pool of PCIe 3.0 lanes otherwise available through the PCH.
As for the ‘SlimSAS’ (as mentioned earlier)… I currently cannot comment with authority as to whether the SlimSAS interface (when put in PCIe mode) has dedicated PCIe 4.0 lanes or consumes from the PCH’s pool of otherwise-available PCIe 4.0 lanes. With everything else broken out as it is on this board, it would not surprise me if this interface is given dedicated PCIe 4.0 lanes.
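Relatedly, if you want to see how many lanes a given device actually negotiated (handy for spotting lane sharing), lspci reports both the capability and the live link state; the 01:00.0 address below is just an example slot from my layout:

```bash
# LnkCap = what the device/slot supports; LnkSta = what was actually negotiated
lspci -s 01:00.0 -vv | grep -E 'LnkCap|LnkSta'
```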
Findings (PCI Interface Addressing):
Again, this will look different for you depending on the hardware you have installed (more accurately, where you have or have not installed it). Excluding the ‘SlimSAS’ functionality, if all interfaces on the board are populated, the BIOS/EFI addresses the PCI architecture in the following manner:
- 0000:00:*.* : CPU / PCH interface - critical board / CPU components
- 0000:01:*.* : Interface B
- 0000:02:*.* : Interface C
- 0000:03:*.* : Interface F
- 0000:04:*.* : Interface G
- 0000:05:*.* : Interface A
- 0000:06:*.* : Interface I
- 0000:07:*.* : Interface D
- 0000:08:*.* : Interface E
- 0000:09:*.* : Interface K
- 0000:10:*.* : Interface J
- 0000:11:*.* : Interface H
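For anyone wanting to reproduce this mapping on their own build, it is all visible with stock tooling; a minimal sketch:

```bash
# Domain:bus:device.function for every device, with the 0000: domain shown
lspci -D

# The same view via lshw, which makes a slot-to-bus table like the one above
# easy to rebuild by elimination
lshw -businfo
```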
So what this tells us…
- All PCI/PCIe interfaces exist on a single domain (as expected)
- Each user-accessible PCI/PCIe interface exists on its own bus… (okay, now we are talking)
No cheap manufacturing / design shortcuts here…
Findings (IOMMU Groupings & Passthrough):
So this shouldn’t be a great surprise to anyone by this point, but the story behind the IOMMU grouping capabilities of this board is phenomenal! Simply put, each human-usable PCI/PCIe interface on the board creates its own IOMMU group. In essence, you don’t have to worry about situations where passing through an installed GPU will also pass (for example) an NVMe installed in a corresponding interface.
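If you want to verify the groupings on your own hardware, the usual sysfs walk is all it takes; a minimal sketch:

```bash
#!/usr/bin/env bash
# Print every IOMMU group and the devices it contains
shopt -s nullglob
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${group##*/}:"
    for device in "$group"/devices/*; do
        printf '    '
        lspci -nns "${device##*/}"
    done
done
```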
I have confirmed this by running several virtual machines, individually passing each of these devices through to a separate VM.
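For completeness, once a device is isolated like this, handing it to a PVE guest is a one-liner; the VM ID (100) and the address (0000:01:00.0, my 1080 Ti’s slot per the table above) are placeholders:

```bash
# Attach the device at 0000:01:00.0 to VM 100 as a PCIe device
# (pcie=1 requires the guest to use the q35 machine type)
qm set 100 -hostpci0 0000:01:00.0,pcie=1
```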
Closing:
I hope this helps (and saves some people money) if you are considering this board as a component within your VFIO build. I took the gamble and purchased the board months ago with the hope that Asus would eventually provide the documentation around all this… While they were unable to, I was able to demystify a large portion of the functions, architecture, and operations of this board.
If you have any questions or if you find that I have misrepresented anything in this post, please do not hesitate to reach out… Constructive criticism is always welcome and I am by no means an SME in this space.
Best,
AX