Asus PRO WS W680-ACE & VFIO - GREAT NEWS

Hello All!

I have seen several instances of people asking about VFIO and the Asus PRO WS W680-ACE motherboard - most specifically, questions around the IOMMU grouping for this board. After months of research and waiting for Asus B2B support to answer these questions (they never did…), I decided to simply do the testing myself.

With this testing now complete - at least the portions I have had the resources and time to complete - I wanted to share my findings with everyone… Spoiler Alert: This board seems like an amazing solution for VFIO use cases and provides exceptional modularity.

BLUF / TL-DR:
Bad News First… If you are looking for official Asus documentation for any of this, you are unfortunately out of luck - though I am working with Asus B2B support to ensure they have this information for future customers. Also… while the front-line B2B support from Asus is exceptional, don’t expect to get any answers from their engineering teams - as evidenced by my 4-month-old, still-open support case.

So… The Good News… Simply put, this board, in my opinion, is the very board the VFIO community has been needing. When it comes to PCI & IOMMU grouping the news is fantastic, as each PCI/PCIe interface on the board is assigned a unique PCI bus and creates a new IOMMU group - all human-accessible interfaces are individually addressable. :grin:

Methodology:
I performed this testing using the following hardware:

  • Asus PRO WS W680-ACE Motherboard
  • Intel 12100K CPU
  • nVidia 4090 OC
  • nVidia 1080Ti FE
  • Western Digital Black 1TB PCIe m.2 NVMe
  • Samsung 990 PRO 2TB PCIe m.2 NVMe (2x)
  • Intel XL710 2-Port QSFP+ NIC
  • QCA 6141 802.11ac WLAN Adapter

Limitations… I have as yet been unable to fully test the ‘SlimSAS’ functionality of the board, so I do not know exactly where it sits in, or how it impacts, the PCI architecture of this board. That said, I can report that it will only operate on the PCI architecture when put into PCI/PCIe mode… it will not operate on, nor consume, PCIe lanes when put into SATA mode. It is likely, however, that like the on-board NICs, the W680 has dedicated PCIe lanes for this PCIe-enabled ‘SlimSAS’ functionality - but don’t take my word for it.

As for the software used, I installed ProxMox 8. Technically, I installed ProxMox 7.4 from ISO and updated to 8.x, as there are compatibility issues between some piece of this hardware and the PVE 8.x ISO installer.
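For reference, the 7.x to 8.x jump follows the standard Proxmox upgrade path - roughly the sketch below (repository file names vary between installs, so treat this as an outline rather than a copy/paste recipe):

```bash
# Run the official upgrade checker that ships with PVE 7
pve7to8 --full

# Point APT at the Bookworm-based repositories
# (the exact .list file names under sources.list.d may differ on your install)
sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list
sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list.d/*.list

# Pull in the new release and reboot afterwards
apt update
apt dist-upgrade
```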

Please use this diagram for understanding the remainder of this post:

Hardware Setup:

  • Interface A: Unpopulated
  • Interface B: nVidia 1080Ti FE
  • Interface C: nVidia 4090 OC
  • Interface D: Unpopulated
  • Interface E: Intel XL710
  • Interface F: WD Black 1TB NVMe
  • Interface G: Samsung 990 PRO 2TB NVMe - 1 of 2
  • Interface H: Unpopulated
  • Interface I: Samsung 990 PRO 2TB NVMe - 2 of 2
  • Interface J (On-board NIC 0): Connected to Lab Network 1
  • Interface K (On-board NIC 1): Connected to Lab Network 2

Software Configuration:

  • PVE 7.4 updated to 8.x
  • Installed grub-efi-amd64 (apt install grub-efi-amd64)
  • intel_iommu=on iommu=pt added to GRUB Command Line (/etc/default/grub)
  • vfio, vfio_iommu_type1, vfio_pci, vfio_virqfd kernel modules enabled (added to /etc/modules)
  • Blacklisted appropriate drivers (i.e. nvidia / radeon / …) in /etc/modprobe.d/pve-blacklist.conf
  • Installed lshw (apt install lshw)
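For anyone replicating the above, this is roughly what those steps look like from a shell on a stock PVE install (a sketch from memory - double-check the file contents against the Proxmox PCI passthrough docs before rebooting):

```bash
# Add the IOMMU flags to the kernel command line, then regenerate the GRUB config
# (edit /etc/default/grub by hand, or via sed assuming the stock "quiet" default)
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="quiet"/GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"/' /etc/default/grub
update-grub

# Load the VFIO modules at boot
cat >> /etc/modules <<'EOF'
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
EOF

# Keep host GPU drivers away from the cards you intend to pass through
cat >> /etc/modprobe.d/pve-blacklist.conf <<'EOF'
blacklist nouveau
blacklist nvidia
blacklist radeon
EOF

update-initramfs -u -k all
```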

Findings (CPU / PCH Alignment):
While not specifically documented in one central location, Asus does have some documentation as to the PCI interface alignment of the board. It can be tricky to put together, as you have to compile it from multiple sources… Thankfully, what is reported is true and accurate.

PCI Interfaces Associated with CPU include: B, C, F
PCI Interfaces Associated with PCH include: A, D, E, G, H, I, J, K - also worth noting that the ‘SlimSAS’ controller interfaces to PCH - regardless of PCIe or SATA mode.

Findings (PCIe Lanes):
So much of this is dependent on the CPU you install; however, specific to the PCH, there are a total of 12x PCIe 4.0 lanes & 16x PCIe 3.0 lanes - while this may seem like a lot, I actually wish Asus/Intel had included more within the PCH. Nevertheless, this is still very reasonable.

It is also worth noting that each of the on-board NICs has dedicated PCIe 3.0 lanes - thus not consuming from the pool of PCIe 3.0 lanes otherwise available through the PCH.

As for the ‘SlimSAS’ (as mentioned earlier)… I currently cannot comment with authority as to whether the SlimSAS interface (when put in PCIe mode) has dedicated PCIe 4.0 lanes or consumes from the PCH’s pool of PCIe 4.0 lanes otherwise available. With everything else broken out as it is on this board, it would not surprise me if this interface is provided dedicated PCIe 4.0 lanes.
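Side note: if you want to sanity-check what link width and generation each populated slot actually negotiated on your own board, pciutils makes that easy enough (run as root so lspci can read the full config space):

```bash
# Capable vs. negotiated link speed/width for every PCIe device
lspci -vv | grep -E "^[0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f]|LnkCap:|LnkSta:"
```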

Findings (PCI Interface Addressing):
Again this will look different for you depending on the hardware you have installed (more accurately, where you have or have not installed it). Excluding the ‘SlimSAS’ functionality, if all interfaces on the board are populated, the BIOS / EFI addresses the PCI architecture in the following manner:
  • 0000:00:*.*: CPU / PCH Interface - Critical Board / CPU Components
  • 0000:01:*.*: B
  • 0000:02:*.*: C
  • 0000:03:*.*: F
  • 0000:04:*.*: G
  • 0000:05:*.*: A
  • 0000:06:*.*: I
  • 0000:07:*.*: D
  • 0000:08:*.*: E
  • 0000:09:*.*: K
  • 0000:10:*.*: J
  • 0000:11:*.*: H

So what this tells us…

  • All PCI/PCIe interfaces exist on a single domain (as expected)
  • Each user-accessible PCI/PCIe interface exists on its own bus… (okay, now we are talking)

No cheap manufacturing / design shortcuts here…
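For anyone wanting to compare their own slot population against this mapping, the tree view from pciutils shows each root port and its downstream bus at a glance:

```bash
# PCI topology as a tree - each slot/root port appears as its own downstream bus
lspci -tv
```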

Findings (IOMMU Groupings & Passthrough):
So this shouldn’t be a great surprise to anyone by this point, but the story behind the IOMMU grouping capabilities of this board is phenomenal! Simply put, each human-usable PCI/PCIe interface on the board creates its own IOMMU group. In essence, you don’t have to worry about situations where passing through an installed GPU also passes through (for example) an NVMe installed in an adjacent interface.
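If you want to verify the grouping on your own unit, the usual sysfs walk works fine here (nothing board-specific about it):

```bash
#!/bin/bash
# Print every IOMMU group and the devices it contains
shopt -s nullglob
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${group##*/}:"
    for dev in "$group"/devices/*; do
        echo -e "\t$(lspci -nns "${dev##*/}")"
    done
done
```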

I have confirmed this by running several virtual machines, individually passing each of these devices through to a separate VM.
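For context, the passthrough itself was nothing exotic - in PVE it comes down to attaching each device to its VM by PCI address, along the lines of the example below (the VM ID and address here are placeholders, not my actual layout):

```bash
# Attach the whole device at bus 01 (e.g. a GPU) to VM 101 as a PCIe device
# (pcie=1 assumes a q35 machine type)
qm set 101 --hostpci0 0000:01:00,pcie=1
```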

Closing:
I hope this helps (and saves some people money) if they are considering this board as a component within their VFIO build. I took the gamble and purchased the board months ago with the hopes Asus would eventually provide the documentation around all this… While they were unable to, I was able to demystify a large portion of the functions, architecture, and operations of this board.

If you have any questions or if you find that I have misrepresented anything in this post, please do not hesitate to reach out… Constructive criticism is always welcome and I am by no means an SME in this space.

Best,
AX


Thank you for the review, the board looks all around like a solid choice and premium quality!


I looked at it some more and this board, an i9-14900K and 128GB of ECC RAM would be what I would consider for a workstation if I did not have one already.

  • ASUS Pro WS W680-Ace IPMI: 559€
  • Intel Core i9-14900K: 599€
  • 4x Kingston Server Premier DIMM 32GB: 520€

Total of 1750€ for a mighty fine workstation!

Absolutely! With the latest firmware now supporting the Intel 14th Gen CPUs, you can really make something nice using this board.

Funny that you mentioned the 14900K specifically, as the minute my testing showed I could use this board for a particular VFIO build I have been considering, I ordered the 14900K straight away!

This is really interesting, very much appreciated! Any experience with clock speeds with all four DIMMs populated?

Not at the moment, sorry. Let me finish my build and I can look into it.


Thanks for sharing. I’ve lusted after this board mightily but I think I’m going to stick it out with my Prime H670-PLUS D4 for my little virt server until the next big update.

A couple nits with your breakdown:

So much of this is dependent on the CPU you install

How so? All LGA1700 processors to the best of my knowledge are capable of providing the full complement of x16 (or x8/x8) PCIe 5.0 lanes, x4 PCIe 4.0 lanes, and x8 DMI 4.0 lanes.

It is also worth noting that each of the on-board NICs has dedicated PCIe 3.0 lanes

I know that Intel’s block diagrams sometimes make it look like this is the case but I don’t believe this is true. The Pro WS W680’s x12 PCIe 4.0 lanes go to two x4 M.2 slots and the x4 SlimSAS port. Its x16 PCIe 3.0 lanes go to two x4 expansion slots, one x1 expansion slot, and the x2 Wi-Fi slot. The remaining x5 lanes will be used for various peripherals, including the two 2.5GbE NICs. So you can see that Asus did a pretty good job of allocating the PCH lanes in this implementation.

Or at least that’s my understanding. Corrections welcome.

Sorry to “wake the dead” (referring to the thread itself)

But has anyone else noticed that the PCIe Gen 3 x16 slots (labeled “D” and “E” in the OP’s post) are actually pinned for “x8”? (And not “x4” like Asus’ documentation states?)

Note that I’m not attempting to state they actually provide “x8” connectivity (as I haven’t even set up the board yet), but I can certainly state that they are 38 pins wide (i.e. “x8”) physically.

I would like to ask, could this board handle 2x 3090 GPUs with NVLink? Would both GPUs be able to get x16 PCIe in this setup? And would the spacing/heating situation be OK?
I ask because I’m looking for a dual RTX 3090 setup for ML and trying to get the best bang for my buck on all the other parts. Any guidance would be much appreciated as I don’t know a lot about PC builds.

The GPU slots are only spaced one apart, so maybe if you can watercool at least the top card or mount it vertically with a riser (but that would eliminate NVLink?). But for FE or other air-cooled models, no - they are all triple-slot and you can only pull off dual double-slot cards.

No, from specs:

  • 2 x PCIe 5.0 x16 slots (support x16 or x8 / x8 mode)

x8 would be fine though, especially since the cards can communicate with each other over NVLink, leaving PCIe bandwidth free for communication with the CPU/main memory.

If you need ECC there are maybe more options on AM5 (the ASUS Crosshair and ProArt boards and the ASRock Taichi should support ECC and have bigger slot spacing). But if you need more build advice you should probably open your own thread :wink: