IOMMU groups for VM host: x570 or x99

I’m planning to upgrade my current VM/container host (it’s an old P55).
It’s a multi-purpose machine: NAS, router, DVB TV, Plex, etc.
Right now I have two HBAs, a 4-port NIC and a graphics card (to boot the machine; in the remote future I’ll use it for Plex).
The two HBAs are used by my NAS VM.

So as you can see, I need a few PCIe slots. In the future I’d like to migrate my PCI TV card to a more modern PCIe one, so an x1 slot is also needed for that.

My initial idea was to go with an X470 board, but someone on reddit told me that the southbridge/chipset lumps all of its devices into the same IOMMU group.
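Whichever board you end up with, the grouping is easy to inspect from a live Linux system, since the kernel exposes it under sysfs. A minimal sketch (the sysfs path is the standard kernel location; the `root` parameter exists only so the function can be pointed at a test tree):

```python
import os

def iommu_groups(root="/sys/kernel/iommu_groups"):
    """Map IOMMU group number -> sorted list of PCI addresses in it.

    Devices that share a group can only be passed through to a VM
    together, so HBAs/NICs destined for different VMs must not share."""
    groups = {}
    if not os.path.isdir(root):
        return groups  # IOMMU disabled in firmware/kernel, or no VT-d/AMD-Vi
    for group in sorted(os.listdir(root), key=int):
        groups[group] = sorted(os.listdir(os.path.join(root, group, "devices")))
    return groups

if __name__ == "__main__":
    for group, devices in iommu_groups().items():
        print(f"group {group}: {', '.join(devices)}")
```

If the function returns an empty dict on a running box, the IOMMU is not active at all, which is worth ruling out before blaming the board.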

I’ve been advised to go with an X570 board, but that’s a bit too expensive, and I would end up with a better-performing server than my main PC :sweat_smile:

Anyway, I’ve seen that X99 Xeons are quite cheap on the used market, so an X99 board with 40 lanes would also be a possible choice.

The HBAs are x8 (Dell PERC H310) and the Intel NICs are x4, so a board with two x8 slots and one x4 would be perfect. I don’t care about NVMe, so my idea was to use a riser to connect the network card to one of the NGFF (M.2) slots.

(The X99 board should have the slot available, but I don’t know the IOMMU group situation.)

The graphics card and the DVB tuner card don’t need a dedicated IOMMU group, since I would use a Docker container on the host for Plex/Tvheadend.

What would you suggest? My budget is not huge; someone suggested I get an X399…

My current system is a P7P55D Deluxe, 8 GB DDR3, and an unknown Xeon I got for 5 bucks. My main reason to move is the lack of ECC: DDR3 memory stick prices are going up and I’m filling the RAM quite often now.
(If only I could use my Dell 1950… electricity bill :money_mouth_face:)

As some of you will note, the P55 does not have VT-d. Right now I’m using SCSI passthrough via QEMU.
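Once on a VT-d-capable platform, it’s worth confirming the kernel was actually asked to enable the IOMMU before planning any passthrough; forgetting `intel_iommu=on` is a classic gotcha. A rough check of the boot command line (the option spellings below are the common ones, but distros and kernel versions vary, so treat this as a heuristic):

```python
import os

def iommu_requested(cmdline: str) -> bool:
    """Heuristic: does a kernel command line ask for the IOMMU?

    intel_iommu=on is required on Intel; AMD's IOMMU is on by default
    on modern kernels but is sometimes set explicitly. iommu=pt is a
    common companion option, not an enabler by itself on Intel."""
    opts = cmdline.split()
    return "intel_iommu=on" in opts or "amd_iommu=on" in opts

if __name__ == "__main__" and os.path.exists("/proc/cmdline"):
    with open("/proc/cmdline") as f:
        print("IOMMU requested:", iommu_requested(f.read()))
```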


X99 increases the risk of microcode updates breaking processor features. X79 with an E5 v2 CPU should be better equipped. Only one board, though, meets the criteria for VFIO: the X79-UP4 rev 1.1. Gigabyte boards have a boot-GPU selector, which is extremely handy for VFIO setups, and that is the only one of them that supports E5 v2 CPUs. That’s honestly the best budget option right now, because X99, if you want to unlock the multiplier on E5 v3 CPUs, requires UEFI boot editing. (Very involved, not for the faint of heart.)

What do you mean by that?

What? Why should I have to do that?

If you want to overclock.

Now for your use cases, I don’t think you need to or should overclock your CPU.


No, it’s not my intention to overclock. I would love to hear an opinion on X99 vs X570.
On X99 I would use a PCIe riser with bifurcation if I can’t find a good board with built-in switches.

Ah, if you want to bifurcate, it’s ASRock X99 only, because ASRock boards have that feature.

OK, but how about X570 vs X99?
That was the main question :smiley:
I know that I have to go with ASRock in both cases, due to ECC support on X570/X99 and bifurcation, especially for X99.

X99 only supports ECC with a Xeon, but those are more likely to support registered DIMMs.

Yeah, I know that. I’ve already chosen a 2640 v3, or a 2620 v3, if I go with an X99 board.
I’m worried about the IOMMU groups. I’m new to this, and I have no idea which platform is better for my needs.

For PCIe lane count, X99 is better. You only have 20 direct CPU lanes available on X570.

20 lanes should be fine if those can be assigned to VMs, but having a spare x1 that can be mapped could be helpful.

So do even the X570 chipset PCIe lanes end up in the same IOMMU group?

X570 doesn’t have the IOMMU lumping problem for the chipset, but it’s in constant flux: one AGESA update and it could break.

X570 has another problem though: USB passthrough is broken.
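On the grouping point: whether a bridge or device can be isolated into its own group shows up as the ACS (Access Control Services) capability in `lspci -vv` output. A small parser sketch for that text (assumes stock `lspci` formatting; run `sudo lspci -vv`, since the capability list usually needs root, and feed the saved text in):

```python
def devices_with_acs(lspci_vv: str) -> list:
    """Return PCI addresses whose `lspci -vv` entry advertises ACS.

    ACS lets the IOMMU keep devices behind the same bridge in
    separate groups; without it they get lumped together."""
    devices, current = [], None
    for line in lspci_vv.splitlines():
        if line and not line[0].isspace():  # device header, e.g. "00:01.0 PCI bridge: ..."
            current = line.split()[0]
        elif current and "Access Control Services" in line:
            if current not in devices:
                devices.append(current)
    return devices
```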

It’s needed for my NAS/router/server. No, I don’t think I would need USB passthrough.

Wow, are those features so broken that they forgot to fix them in updates?

I’ve found an ASUS X99-WS/IPMI. It would cost about the same as an X570 board. RAM would be equal in both cases: two sticks of DDR4 ECC UDIMM.

So the difference so far is only the CPU choice, for better long-term support.

BTW I use Docker and Proxmox.

Yeah, X99 will be more stable on that.

What do you think about that mainboard (ASUS X99-WS/IPMI)?
In this case I don’t need bifurcation, since it has PCIe switches built in and a lot of x8 slots!

It should support ECC DDR4 by default with Xeon CPUs.

Feel like I have to reply, since I’m making the move from X99/128 GB ECC RDIMM (6-core Broadwell-EP Xeon, ASRock X99 WS) to X570/128 GB ECC UDIMM (12-core Ryzen 3900, ASUS Pro WS X570-Ace).

I had a well-working ESXi machine until the Meltdown/Spectre patches rolled out. Then, for example, multiple VMs that used storage provided by another VM began to slow down and even became “sluggish” (very short version), and CPU load increased. Additionally, the first sudden system crashes appeared after the release of the microcode patches.

The Ryzen 3900 runs circles around the Xeon, and since I was used to Intel’s yearly quad-core upgrades, I’m sort of amazed at what is now possible on this mainstream platform, with yet another CPU upgrade path via the upcoming 16-core Ryzen 4000.

With the ASUS Pro WS X570-Ace you can use 34 PCIe lanes for your own devices:

  • CPU: x4 from M2_1
  • CPU: x4/x4/x4/x4 via bifurcation from the upper large PCIe slot
  • Chipset: x8 from the lower large PCIe slot
  • Chipset: x4 via U.2 port
  • Chipset: x2 via M2_2

If you plan your setup well and only use PCIe 3.0 devices, you can probably avoid PCIe bottlenecks (the 14 chipset PCIe lanes share the bandwidth of the PCIe 4.0 x4 link between CPU and chipset).
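To put rough numbers on that sharing (theoretical maxima: about 0.985 GB/s per PCIe 3.0 lane after 128b/130b encoding, twice that for 4.0), here is a back-of-the-envelope helper; the example device list is hypothetical, not this board’s exact layout:

```python
# Approximate usable GB/s per lane (after 128b/130b encoding) per PCIe generation.
GBPS_PER_LANE = {3: 0.985, 4: 1.969}

def uplink_headroom(devices, uplink_gen=4, uplink_lanes=4):
    """devices: iterable of (pcie_gen, lane_count) behind the chipset.

    Returns (aggregate_demand, uplink_capacity) in GB/s. Demand above
    capacity only hurts if the devices actually peak simultaneously."""
    demand = sum(GBPS_PER_LANE[gen] * lanes for gen, lanes in devices)
    return demand, GBPS_PER_LANE[uplink_gen] * uplink_lanes

# Example: gen3 x8 slot + gen3 x4 U.2 + gen3 x2 M.2 behind the chipset.
demand, capacity = uplink_headroom([(3, 8), (3, 4), (3, 2)])
```

In that example the aggregate demand (~13.8 GB/s) is well above the ~7.9 GB/s uplink, which is fine for an HBA-plus-NIC workload that rarely saturates everything at once.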

Since my previous setup was CPU-limited I’m happy with this side-grade even though I can’t upgrade system memory in the future.

Not just ECC support but also Thunderbolt support.

Switches and PLX chips make IOMMU groups even worse. Always try to use native CPU lanes for devices that need clean IOMMU groups, and use the chipset for slower peripherals.

By “switches” I meant the muxes that divide the x16 into 2x8, not the active PLX ones :smiley:

Unfortunately those mainboards are 350+ EUR here.
The primary focus of this build is NFS storage and the pfSense VM as my router.

IOMMU grouping through those things is always trial and error. Direct lanes are a much better bet.