ASUS ProArt X570-CREATOR: PCIe Lanes, Thunderbolt, GPU Passthrough and more

Hi Everyone!

I currently have the X570 Taichi, but I’ve realized that if I wanted Thunderbolt 4 and 10G Ethernet I would have to sacrifice the Gen4 x16 slot dedicated to my 4090, hence this topic about the ASUS ProArt X570-CREATOR.

In this post I’m looking for any feedback or first hand experience on the following topics:

  • Is it possible to pass through the Thunderbolt 4 controller to a Proxmox VM?

  • Does the Thunderbolt support allow me to plug an active Thunderbolt cable into my Thunderbolt dock (display, inputs) and use a VM as a Linux machine without having the case nearby?

  • Can you allocate the 2.5G NIC to Proxmox and the normal VMs, and the 10G NIC to one specific VM?

  • Does this motherboard properly support Micron 32GB DDR4-3200 ECC UDIMMs (1.2V, CL22)?

  • Can I use the M.2_3 slot to attach an M.2 → PCIe riser and install a x1 display card (to free the 4090 from Proxmox)? The idea is that since a single-slot VGA card doesn’t need external power, the M.2 → PCIe conversion would preserve the first x16 slot.

Thank you very much in advance for any recommendations, feedback, thoughts, and really anything you can contribute!

I don’t have any experience with Proxmox, but I have multiple ASUS ProArt X570-CREATOR units. Since I’m ignorant regarding the Proxmox side, I could set up TeamViewer et al. on a third system that is connected to the ProArt X570-CREATOR via KVM/IPMI so you could check these things out for yourself.

Should work like any other Thunderbolt device. As a general hint, it’s very important to adjust some UEFI settings on AMD motherboards for Thunderbolt to function normally - but I only have experience with Thunderbolt on Windows.

Yes, I’ve personally tested this motherboard model and verified that it correctly reports corrected ECC memory errors with a wide range of AM4 CPUs and Pro APUs.

I think I have all the parts to try this out; however, they might be old, so PCIe Gen4 mode would be borderline-ish. I don’t see an obvious reason why this shouldn’t work, though.

Thank you for your kind reply. Have you used the 2.5G and the 10G? Can you use both?

The documentation refers to only one Thunderbolt output for a dGPU input; does that mean only a single-display setup is possible?

Unless I’m totally wrong, I’d also like to know whether the ProArt X570 is, overall, oriented toward serious but not enterprise-level users. I feel like its integration of 10G and Thunderbolt, while it uses the same number of lanes, is still better than on the X570 Taichi, where you would have to “give up” Gen4 x16 for the GPU. Am I totally wrong?

You can connect multiple displays to the motherboard itself when using an APU. When using Thunderbolt with a dGPU, on the other hand, only one Thunderbolt output can be used for a display.

Other displays would have to be directly connected to other ports on the dGPU, not the motherboard’s second Thunderbolt output.

This is why on the “good” separate Thunderbolt AICs you see two dedicated dGPU DisplayPort inputs next to the two Thunderbolt outputs. On the ProArt X570-CREATOR WIFI the second DisplayPort input is wired directly to the CPU socket and is used when you install an APU in the motherboard.

Of course you can use both Ethernet adapters.
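
To make that concrete for the Proxmox side of the question: a common split is to bridge the 2.5G NIC for the host and the “normal” VMs, and dedicate the 10G NIC to one VM. Here’s a minimal sketch of the host’s /etc/network/interfaces; the interface names (enp5s0 for the 2.5G, enp6s0 for the 10G) and the addresses are assumptions, so check yours with `ip link`:

```
auto lo
iface lo inet loopback

iface enp5s0 inet manual

# Host management + default VM bridge on the 2.5G NIC
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports enp5s0
        bridge-stp off
        bridge-fd 0

# Second bridge on the 10G NIC, attached only to the one VM that needs it
auto vmbr1
iface vmbr1 inet manual
        bridge-ports enp6s0
        bridge-stp off
        bridge-fd 0
```

The alternative is to skip vmbr1 entirely and hand the 10G NIC’s PCI function straight to that VM with `qm set <vmid> -hostpci0 <bus:dev.fn>`, which removes the bridge overhead at the cost of host visibility.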

I have had “all” the great X570 motherboards and consider this one to be the best for desktop use. The only other “great one” is the ASUS Pro WS X570-ACE, since it is the ONLY AM4 motherboard that connects its chipset PCIe slot electrically at x8, not x4 like everyone else does. With this you can use Gen3 x8 AICs in the X570 chipset PCIe slot with close-to-native performance, making x8/x8/x8 configurations possible on that specific AM4 motherboard.

The ProArt X570-CREATOR WIFI can give you even more flexibility for “harvesting” M.2 PCIe lanes for other devices. It has three M.2 slots: the usual x4 one from the CPU, one x4 from the X570 chipset, and another x4 one that takes 4 CPU lanes from the second large x16 PCIe slot. This means you can run x8 (top slot, dGPU), x4 (second large PCIe slot), and x4 (third M.2 slot) all on CPU lanes.

It’s highly doubtful, but I can give it a try if I have some time today. I have a 10 AM meeting that looks a little boring…

The problem is that it just looks like a bunch of PCI bridges until you plug something in. In the past I did successfully pass a TB2 drive enclosure connected to the USB4 controller through to a VM, so there might be a version of what you’re trying to do that’s feasible.
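
For reference, here’s roughly how I’d go about locating and assigning it on a Proxmox host. The PCI address and VM ID below are made-up placeholders, and remember that everything in the controller’s IOMMU group has to move to the VM together:

```
# Find the USB4/Thunderbolt controller
lspci -nn | grep -i -e thunderbolt -e usb4

# See which IOMMU group it landed in (placeholder address)
find /sys/kernel/iommu_groups/ -name '0000:2d:00.0'

# Assign the function to the VM (placeholder VM ID 100; pcie=1 needs q35)
qm set 100 -hostpci0 0000:2d:00.0,pcie=1
```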

Yes, although I should note that I’ve found the AQC113CS controller used on this board to be somewhat buggy. Even when it works, it doesn’t support VLAN tagging in Windows (without jumping through some Hyper-V hoops), and it doesn’t support SR-IOV. I ended up disabling it and putting a cheap X520-DA1 in the third slot so I don’t have to worry about it.
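
As a side note, the X520’s ixgbe driver does support SR-IOV under Linux, so one port can be carved into virtual functions and those handed to individual VMs instead of the whole NIC. A quick sketch; the interface name is an assumption:

```
# Create 4 virtual functions on the X520 port (check the name with `ip link`)
echo 4 > /sys/class/net/enp7s0/device/sriov_numvfs

# Each VF shows up as its own PCI device that can be passed to a VM
lspci -nn | grep -i 'virtual function'
```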

Asus consumer desktop boards in general work well with ECC UDIMMs as long as they indicate support for them on the main product/specs page. I’ve used similar ECC UDIMMs in both my ProArt X570-CREATOR and Pro WS X570-ACE without any issues.

There’s not much going on in the Thunderbolt settings on this particular board. Any suggestions in particular?

If you draw lines between consumer, prosumer, and professional products, this board is firmly in the prosumer camp. The next step up to professional (in Asus land) would be something like a WRX80 SAGE or W790 ACE or SAGE.

Endorse. It’s the only board that was able to entice me away from X299 (prior to Ryzen 7000 and Xeon W-2400 offerings becoming available). PCIe Gen4, Zen 3 cores, USB4, two CPU-attached M.2 slots, two CPU-attached expansion slots for two dGPUs, and a bunch of nice-to-haves. My only real beef is with the onboard 10GbE NIC, as I mentioned above.

It’s an absolutely killer feature especially if you have an older Gen2 x8 HBA or NIC you want to use without getting bottlenecked by a x4 link. I still have one kicking around in my homelab.

I see, that’s quite interesting. Too bad content on Thunderbolt is so rare on the internet :-/

My goal is to have one Windows “gaming” VM for occasional keyboard-and-mouse gaming, and I thought that having one cable to a Thunderbolt dock could be a “cheap” way to play games while keeping a great research workstation the rest of the week. If you’re able to confirm passthrough and decent IOMMU separation, that would be awesome.

That’s too bad, but even 2.5G is quite the upgrade for a do-it-all machine ^^

Can you confirm (for my sanity) that this board will allow the following configuration:

  • GPU at Gen4 x16 in the first slot
  • M.2_1 at full speed and M.2_2 at x4 (split from the second slot, which will stay empty)
  • 10G and Thunderbolt
  • A self-powered GPU in the M.2_3 slot (via an M.2 → PCIe riser), just to have display output without causing the x16 to split

Basically, explicitly change every PCIe setting in the AMD CBS → NBIO Common Options from AUTO to Enabled (except DMAr; leave it on Auto if you don’t want to reseat your CPU or USB Flashback your BIOS).

If that hardware configuration description is complete, you don’t even need to harvest any PCIe lanes from an M.2 slot:

  • Top M.2 Slot (x4 from CPU)
  • Top PCIe slot (x16 from CPU)
  • Chipset PCIe slot for the second dGPU (x4 from X570)
  • Chipset M.2 slot (x4 from X570)

?

I get where you’re coming from, but honestly you’ll be doing yourself a favor if you just set up single-GPU pass-through (or go with two dGPUs and a KVM switch or Looking Glass) and skip Thunderbolt altogether.
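
If you do go the pass-through route, the host-side prep on Proxmox is mostly just claiming the card for vfio-pci before the host driver grabs it. A minimal sketch; the vendor:device IDs below are what I’d expect for a 4090 (GPU plus its HDMI audio function), but verify yours with `lspci -nn`:

```
# /etc/modprobe.d/vfio.conf -- bind the GPU to vfio-pci at boot
options vfio-pci ids=10de:2684,10de:22ba
softdep nvidia pre: vfio-pci

# then rebuild the initramfs so it applies early:
#   update-initramfs -u -k all
```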

No, once you populate either PCIEX16_2 or M.2_2, PCIEX16_1 will drop down to Gen4 x8.

Why not just put a dGPU in PCIEX16_3??

Here’s my setup that might work for you with some tweaks. I have an RTX 3090 in PCIEX16_1 and a Radeon Pro W6400 in PCIEX16_2. In “work” mode I use the W6400 for host display out and the 3090 for CUDA. To go into “play” mode I boot a guest VM that takes the 3090 (and one of the three onboard USB3 controllers as well as two NVMe drives in M.2_2 and M.2_3); I use a KVM switch to toggle between host and guest but I could easily use Looking Glass instead.
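
For the curious, the guest side of that is just a handful of hostpci assignments. A sketch of what mine amounts to; all the PCI addresses are examples and will differ on your board:

```
# Assign the 3090 (both functions via the bare 0a:00), one of the three
# onboard USB3 controllers, and the two NVMe drives to guest VM 100.
# pcie=1 needs the q35 machine type.
qm set 100 -hostpci0 0000:0a:00,pcie=1,x-vga=1
qm set 100 -hostpci1 0000:0b:00.3,pcie=1
qm set 100 -hostpci2 0000:01:00.0,pcie=1
qm set 100 -hostpci3 0000:02:00.0,pcie=1
```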

CUDA on the 3090 is hamstrung a bit by having only a Gen4 x8 link to the CPU. To work around that I could slot the W6400 into PCIEX16_3 (or M.2_3) instead and leave PCIEX16_2 and M.2_2 unpopulated. So there’s your solution.

All that being said, it means you don’t need Thunderbolt, and you don’t need the unique bifurcation features offered by this board. Therefore basically any X570 board with a Gen4 x16 CPU-attached slot and a Gen4 x4 PCH-attached slot will be sufficient for your use case. And if you can live with a Gen3 x4 PCH-attached slot for your host dGPU, basically any B550 board will do as well.

If all this is sounding good to you, the next natural step would be to strongly consider jumping to AM5. You get a built-in iGPU for host display out that will probably be more responsive and stable than any dGPU attached via the PCH. You also get four more general purpose CPU lanes that (depending on the board implementation) can be used for a USB4 controller, extra M.2 slot, expansion slot, or other onboard peripheral.

Ah, gotcha, just the usual IOMMU-on-AM4 config then. Cheers.
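
For anyone finding this later, the “usual” config on a Proxmox/AM4 host boils down to a kernel command-line flag, the VFIO modules, and a sanity check. A sketch (amd_iommu is typically already on with recent kernels, so iommu=pt is usually the part that matters):

```
# /etc/default/grub -- then run `update-grub` and reboot
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"

# /etc/modules -- load the VFIO modules at boot
vfio
vfio_iommu_type1
vfio_pci

# Verify after reboot
dmesg | grep -i -e AMD-Vi -e IOMMU
```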

OK, so here is the setup as I understand it:

  • PCIEX16_1 = work GPU at full-speed Gen4 x16
  • PCIEX16_3 = dumb GPU for output when needed
    The mobo manual states that it comes from the chipset, but surely that means it either works at Gen4 x4 or goes into dual-GPU mode at Gen4 x8 each? I would appreciate any confirmation on that :frowning_face:
  • M.2_1 and M.2_3 populated

I see, but the thing is my “server” is in a closet around 10 meters away (based on the existing Ethernet routing distance); that’s why a single cable seems enticing…

From what I’ve seen there should be almost no CUDA/ML downside to Gen4 x8 given a proper dataloader, and only a 2-3% hit in gaming, but I could be very wrong.

If somebody could also confirm Thunderbolt 4 passthrough, that would be awesome and would surely make this board the most complete AM4 experience.

Yes, it comes from the chipset, and will work at Gen4 x4 without sharing or bifurcating or otherwise affecting the CPU lanes.

Perfect. Put your main working SSD in M.2_1 for optimal latency and un-bottlenecked throughput.

If the Thunderbolt option doesn’t work out, you can use one of these to get down to one cable.

I have some CUDA workloads that would absolutely run faster if I could push data over the bus at 256Gbps instead of 128Gbps. I guess it’ll depend on what you’re doing.

The gaming impact of a Gen3 x16 or Gen4 x8 limitation is closer to 1% on GA102, but it might be worse on AD102.

I have some initial results and they’re fairly mixed. I’ll finish testing and share with you ASAP.

This is so interesting; it makes Thunderbolt not a must. Do you know of more content where I could see it in action?

If you have more information about IOMMU groups and how well Thunderbolt passthrough works, I would love to see it!

For the IOMMU groups, I opened a post on the VFIO subreddit about a year and a half ago. Just search for “Asus ProArt X570-Creator WiFi IOMMU Groups”.
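
If you’d rather check your own board directly, this little loop prints every device by IOMMU group, which is the quickest way to see whether the Thunderbolt controller is isolated well enough to pass through:

```
#!/bin/bash
# List all devices per IOMMU group
for d in /sys/kernel/iommu_groups/*/devices/*; do
    n=${d#*/iommu_groups/*}; n=${n%%/*}
    printf 'IOMMU group %s\t' "$n"
    lspci -nns "${d##*/}"
done
```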
