ASRock X470 Taichi

I’m just a bit curious about this, and I can’t find more information on it. Btw, the CPU will be a Ryzen 2700, and the VMs I plan to run would be Windows for gaming, Ubuntu for most use, Lubuntu for a few things running at home (printer, scanner, etc.), and other VMs to play around on.
This board has three PCIe x16 slots, two PCIe x1 slots, and two M.2 slots.
PCIE1 and PCIE3 are both x16 slots that can run together at x8 lanes each, and they go straight to the CPU; M.2_1 has x4 lanes straight to the CPU. PCIE5 is an x16 slot but only has x4 lanes, and it gets deactivated if M.2_2 is in use; M.2_2 goes through the chipset.
I plan on building an UNRAID box (I already have the license and am used to the UI). I want one VM for gaming with a graphics card in PCIE1, and an HBA (x8 lanes, for a total of 16 SATA ports) in PCIE3. I was thinking of putting another graphics card in PCIE5 instead of another NVMe drive.
My question is: how would the lanes be split? If I added a card in PCIE2 or PCIE4, how would that affect things?
I’m not a gamer, just a bit casual, nothing crazy; I plan on using an RX 570, and I have a GTX 650 sitting around, which I planned to use for a quick view of the other VMs. But I want the most speed to the storage, as I plan on having a few users accessing Plex at the same time, plus home use of the NAS and other applications going on.


Don’t you need a GPU for UNRAID itself? I know there’s a web GUI for the setup, but installing UNRAID requires a GPU, and if UNRAID doesn’t have a GPU after install, there’s no terminal you can access if you lose connection to the web GUI. Remember, the Ryzen 2700 doesn’t have an iGPU.
So that’s one GPU for the Windows VM, one GPU for the Linux VM, one HBA card for storage, one or two NVMe SSDs, and possibly a GPU for UNRAID itself?
That’s a lot of PCIe devices for an X470 motherboard.
Have you considered Threadripper and its many PCIe lanes?

Technically UNRAID doesn’t need the GPU, and I already have it set up and running; I’m just moving it over and adding more VMs (coming from an i3 3240). I normally run UNRAID without the GUI and always access it from other machines via the web GUI. Never had a problem this way, so far.
GPU 1 and the HBA are the only two whose performance I care about. The other GPU would be the equivalent of an iGPU, in that I don’t plan to do anything intensive with it.
I was thinking of Threadripper, but I don’t need that much performance, or that many lanes. I’m thinking of maybe adding one more card in an x1 slot. I don’t plan on doing much more; if I do, I plan on upgrading (probably five years from now).

Well, if the second GPU is unimportant performance-wise, I’d put GPU 1 in the first x16 slot and the HBA in the second x16 slot: eight lanes each.
GPU 2 should go in the bottom x16 slot (which is x4) through the chipset. It will be pretty limited, since it’s only PCIe 2.0 x4, but you say that doesn’t matter, so no problem there.
That leaves the x4 CPU lanes for an NVMe SSD. Any other SSDs will have to be SATA and just connect to the motherboard.
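Once everything is seated, you can sanity-check what each slot actually negotiated, since sysfs exposes the link state per device. Here’s a minimal Python sketch (the PCI addresses are placeholders; substitute whatever lspci reports for your cards):

```python
#!/usr/bin/env python3
# Minimal sketch: read the negotiated PCIe link width/speed from sysfs.
# The addresses below are placeholders; use the ones `lspci` reports
# for your GPU and HBA.
from pathlib import Path

DEVICES = {
    "GPU 1": "0000:01:00.0",  # placeholder: card in the first x16 slot
    "HBA":   "0000:02:00.0",  # placeholder: card in the second x16 slot
}

for name, addr in DEVICES.items():
    dev = Path("/sys/bus/pci/devices") / addr
    try:
        width = (dev / "current_link_width").read_text().strip()
        speed = (dev / "current_link_speed").read_text().strip()
    except FileNotFoundError:
        print(f"{name}: nothing found at {addr}")
        continue
    print(f"{name} ({addr}): x{width} @ {speed}")
```

If both cards report x8, the CPU slots split the way the manual says.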

Edited for typos.

Thank you.
I wasn’t sure how that part was going to be routed; is it the same for the x1 slots?

I think an x1 slot will cripple the card too much, though I’m not too sure.
But have you checked the IOMMU grouping on the motherboard?
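If you haven’t, the kernel lays the groups out under /sys/kernel/iommu_groups, and something like this Python sketch will dump them (it just shells out to lspci for readable device names):

```python
#!/usr/bin/env python3
# Sketch: list every IOMMU group and the devices in it.
from pathlib import Path
import subprocess

groups = Path("/sys/kernel/iommu_groups")
for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    print(f"IOMMU group {group.name}:")
    for dev in sorted((group / "devices").iterdir()):
        # Ask lspci for a human-readable description of the device.
        desc = subprocess.run(
            ["lspci", "-s", dev.name], capture_output=True, text=True
        ).stdout.strip()
        print(f"  {desc or dev.name}")
```

Anything that shares a group with something the host needs can’t be cleanly passed through on its own.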

Just so you know, the two PCIe 3.0 x16 slots and the NVMe slot connected to the CPU, plus the USB ports that are in the same IOMMU group as the TPM chip, are the only things you’ll be able to pass through to VMs. The rest of the chipset is in its own group, including the bottom PCIe 2.0 x16 (x4) slot.

So, in short, the bottom NVMe slot cannot be passed through?

Correct. The bottom M.2 slot hangs off the PCIe 2.0 x4 chipset link, which generally can’t be passed through.
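You can confirm that on any board, since each device’s IOMMU group is just a symlink in sysfs. A quick sketch (the address is a placeholder for whatever sits in the chipset M.2 slot):

```python
#!/usr/bin/env python3
# Sketch: show which IOMMU group a device belongs to and what shares it.
from pathlib import Path

ADDR = "0000:03:00.0"  # placeholder: NVMe drive in the chipset M.2 slot
dev = Path("/sys/bus/pci/devices") / ADDR
group = (dev / "iommu_group").resolve()  # symlink to /sys/kernel/iommu_groups/N

print(f"{ADDR} is in IOMMU group {group.name}, together with:")
for peer in sorted((group / "devices").iterdir()):
    if peer.name != ADDR:
        print(f"  {peer.name}")
```

On this board that group would include the rest of the chipset devices, which is why the slot can’t go to a VM by itself.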

Bummer… I wonder if this is the same on the X399 with Threadripper.

I’m way too late to answer this, but I’ll do it for posterity anyway. I have an ASRock Fatal1ty X399 Professional Gaming, which is identical to the X399 Taichi but adds a 10 Gbit Ethernet controller and a COM port header (you can actually see the space reserved for those in a Taichi photo), so I expect the situation with IOMMU groups is identical too. And the bad news is that everything that is in the chipset, or connects to a PCIe lane coming off it, is in the same IOMMU group. The single PCIe x1 slot is also connected to the chipset, and consequently everything you plug into it ends up in the same IOMMU group. I’ve tried that. The only thing I haven’t tested is putting an SR-IOV-enabled network adapter there and seeing where the logical devices end up, but I’m not optimistic about it.

With that board, though, it’s not much of a problem, because you don’t have any really high-speed devices connected to the chipset. The three M.2 slots connect directly to the Threadripper CPU, and you can pass them through to a VM. So you are left with the SATA, USB, and networking ports, but these can work quite well via virtio.

As a bonus, with Threadripper you have two USB controllers on the chip (one on each die) and they are nicely laid out at the back of the board in two groups of 4. I pass through one of those to a VM and can probably do it for the other one as well. But the board is a bit picky about where you connect the keyboard/mouse when those are behind a hub (like on a display). So it might not be very practical. I have updated the BIOS several times since I played with these things and it may be more flexible these days. It’s an amazing platform with amazing value and I hate what AMD did to it. But I digress.

Hope that helps someone in the future…