ASRock X399 Taichi / Fatal1ty summary Thread

Following the similar thread for the ASUS Zenith Extreme X399, I think it’s worth creating such a thread for ASRock’s workstation-oriented X399 motherboards, gathering all the workstation-oriented info we have.

Both motherboards support ECC, SR-IOV and PCI-E bifurcation.

Since BIOS 2.00c, PCI-E bifurcation supports both x4/x4/x4/x4 and x8/x8 modes (previously only x4/x4/x4/x4) for high-end cards like 40G NICs, SAS controllers or GPUs (official answer from ASRock support).

Since BIOS 2.00d, both x16 slots support bifurcation (previously only the bottom one), giving either 16/8/8/8/8/1, 16/8/4/4/4/4/8/1, 8/8/8/8/8/8/1 or 4/4/4/4/8/4/4/4/4/8 slot layouts (not counting M.2 -> PCI-E adapters, which can add another 4/4/4).

Passthrough story thread:

IOMMU grouping with all slots populated:
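For anyone who wants to reproduce this kind of listing on their own box, here’s a minimal POSIX sh sketch (assumes `lspci` from pciutils is installed and the IOMMU is enabled on the host):

```shell
#!/bin/sh
# Print every IOMMU group and the PCI devices inside it.
# Requires IOMMU enabled (e.g. amd_iommu=on on the kernel cmdline).

# group_of PATH -> extract the IOMMU group number from a sysfs device path
group_of() {
    g=${1%/devices/*}            # strip the trailing "/devices/0000:xx:00.0"
    printf '%s\n' "${g##*/}"     # keep only the last path component (the group number)
}

for d in /sys/kernel/iommu_groups/*/devices/*; do
    [ -e "$d" ] || continue      # glob didn't match: IOMMU is off or unsupported
    printf 'IOMMU group %s: %s\n' "$(group_of "$d")" "$(lspci -nns "${d##*/}")"
done
```

If this prints nothing at all, the IOMMU is disabled in the BIOS or the kernel cmdline.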


Added updated info based on BIOS 2.00d. It’s a monster board. Also, I think we should take a moment to appreciate how awesome ASRock support is: they implemented two new, totally optional, nice-to-have functions within two weeks of a support-ticket request… Now we can hook up 14 GPUs to this motherboard for “14 gamers, 1 CPU” lol.

2.00d? Is that some beta you got from them?

What’s the situation with TR/X399/ASRock and AGESA versions; are they up to date?

I was going to try Looking Glass/passthrough on my Taichi with a Vega and a GTX 970. It’s so nice not having to bother with the NVIDIA driver, though, that I haven’t been arsed to try it yet…

Considering that 2.00d was created yesterday and sent as a zip (2.00c was created a week ago), I would say yeah, it’s probably not public yet. I’m sure you can poke them to send you one, and to answer the AGESA question. They sent me the Taichi BIOS specifically, but those boards are almost 1:1, so I doubt they didn’t add it to the Fatal1ty as well.

Fun fact: with SR-IOV and bifurcation support, and thus 14 PCI-E slots, these motherboards have the hypothetical capacity to run 448 VMs with accelerated graphics… or 192 when using only the x8 slots (with 32 vGPUs per S7150x2 card, or 24 VMs in high-performance mode at 4 GB/vGPU, 4 per card) :smiley:. You’d need lots of Optane for swap though.
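The arithmetic behind those numbers, for anyone checking (assuming one S7150x2 per slot, 32 vGPUs per card in density mode, 4 per card at 4 GB each):

```shell
# Hypothetical vGPU capacity of the board, one AMD S7150x2 per slot.
echo $((14 * 32))   # all 14 bifurcated slots, density mode
echo $(( 6 * 32))   # only the six x8-capable slots, density mode
echo $(( 6 * 4))    # x8 slots, high-performance mode (4 GB per vGPU)
```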

OoOoOOoo… I want to get 8 Samsung 960 EVOs and run them striped for my boot drive :stuck_out_tongue:

Attempting ESXi with 4 GPUs. Individual VMs randomly crash, and if rebooted, passthrough is dead until the host is rebooted (the VM comes back up with error 43).

System info: ASRock X399 Fatal1ty, 1950X, 64 GB DDR4 (no XMP), 4x NVIDIA 10-series GPUs, 2x 1 TB NVMe drives, no RAID.

1.91 beta BIOS (no VMs would start with passthrough on 2.0). IOMMU and SR-IOV enabled.

ESXi 6.7 has the usual d3d0 reset method set (prevents a host hang at VM reboot) and the hypervisor.cpuid setting in the VM (otherwise ALL VMs always get error 43). Win10 guests.
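For anyone reproducing this setup, those two tweaks look roughly like this (a sketch: the passthru.map line forces the d3d0 reset method for every device with NVIDIA’s vendor ID 10de, and `ffff` is a device-ID wildcard):

```
# In the VM's .vmx file: hide the hypervisor from the guest,
# which is what works around NVIDIA's error 43.
hypervisor.cpuid.v0 = "FALSE"

# In /etc/vmware/passthru.map on the ESXi host: use the d3d0 reset
# method, which prevents the host hang when a passthrough VM reboots.
# vendor-id  device-id  reset-method  fptShareable
10de         ffff       d3d0          false
```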

Things that don’t work: changing the PCIe switches to x1 mode (both of them; I thought this might turn off ASPM).

Also, the USB 3.1 controller won’t pass through, but the two 3.0 controllers will (the 3.1 controller stays perpetually stuck on “needs reboot”).

I’m starting to suspect this mysterious AMD PCIe bug that’s been documented (and fixed) in the Linux hypervisor world. Is anybody getting ESXi to work with this hardware?



I’m looking at the Taichi for a TR 2950X build for KVM & passthrough. Did the latest BIOS do anything interesting? Would any users still recommend this board?


I’m interested in the same except I will be going for a cheaper 1920X or 1950X.

I am looking to make a build with a Linux host, Windows VMs for gaming and work apps (2-3 VMs), a FreeNAS or other VM for NAS duty, and maybe a couple more VMs: one to host a website for my wife’s business, and maybe another for my own business.

Last two are optional…
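For the Linux-host part, the usual starting point is enabling the IOMMU and reserving the guest GPU for vfio-pci at boot. A sketch (the PCI IDs `10de:13c2,10de:0fbb` are just examples for a GTX 970 and its HDMI audio function; substitute your own from `lspci -nn`):

```
# /etc/default/grub on the Linux host
GRUB_CMDLINE_LINUX="amd_iommu=on iommu=pt vfio-pci.ids=10de:13c2,10de:0fbb"

# then regenerate the grub config and reboot:
#   grub-mkconfig -o /boot/grub/grub.cfg
```

After a reboot, `lspci -nnk` should show `vfio-pci` as the kernel driver in use for the reserved GPU.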

Soo, I was wondering something about PCIe bifurcation… It’s mentioned in the OP that both x16 slots now support bifurcating into various configurations, which is great. But what about the x8 slots? Could x4/x4 support be added to those, or is there a technical limitation preventing that? I’m contemplating building a dedicated VM host at work, and the more PCIe devices the better. Any thoughts?

I did this, it was fine

Cool. I plan on frankensteining something like this:
a Supermicro AOC-SLG3-2M2-O into two Delock M.2-to-PCIe-x4 adapters, into two PCIe x4 extension cables, and eventually into NICs, USB controllers and other stuff like that.

Do you think something like this would work?

Does anyone know how the IOMMU groups are arranged when PCIe bifurcation is used on this board?

Where do I find this BIOS 2.00c for the Taichi? Can you please share a copy? Or do the latest BIOSes include PCIe bifurcation? I am thinking of buying the X399 Taichi specifically for PCIe bifurcation so I can run 6 GPUs at x8.

Install the latest (3.30) stable BIOS, not version 2.00 (that was just when the bifurcation changes were introduced).

Wow, that was quick! Just in time for a good Black Friday deal. Thanks a lot :slight_smile: :slight_smile: You made my day. Just curious if the X399 Fatal1ty also got this :slight_smile:

Yes, both boards are nearly identical except for the 10G Ethernet port and some extra software on the Fatal1ty, so they’re nearly guaranteed to receive updates at the same time.

Also, here’s a correction from what I said earlier, pay attention to this:

  • If the current BIOS version is older than P2.10, please update BIOS to P2.10(Bridge BIOS) before updating this version.

So double-check which BIOS it has first, and if it’s older than 2.10, first update to 2.10, then to 3.30 after that.

Also, for those who want VRM info on these boards:


I got a beta 3.30c BIOS, but I think that’s been posted already.


The ASRock X399 Taichi board is currently on sale up here,
and could be paired with a 1920X, which can be picked up for around $350.
That is pretty tempting.
Although the MSI X399 Gaming Pro Carbon is even cheaper atm,
so I need to dive into MSI’s Click BIOS to see which BIOS is better,
since I have not touched MSI boards for a while.

MSI has put a lot of work into polishing their board for AMD. It’s pretty solid tbh, but it works best with 180 W CPUs. Go MEG for greater than 180 W for sure.
