ASRock X399 Taichi / Fatal1ty summary Thread


Following the similar thread for the ASUS Zenith Extreme X399, I think it's worth creating one for ASRock's workstation-oriented X399 motherboards, gathering all the workstation-oriented info we have.

Both motherboards support ECC, SR-IOV, and PCIe bifurcation.

Since BIOS 2.00c, PCIe bifurcation supports both x4/x4/x4/x4 and x8/x8 modes (previously only x4/x4/x4/x4) for high-end cards like 40G NICs, SAS controllers, or GPUs (official answer from ASRock support).

Since BIOS 2.00d, both x16 slots support bifurcation (previously only the bottom one), giving either 16/8/8/8/8/1, 16/8/4/4/4/4/8/1, 8/8/8/8/8/8/1, or 4/4/4/4/8/4/4/4/4/8 slot layouts (not counting M.2-to-PCIe adapters, which can add another 4/4/4).

Passthrough story thread:

IOMMU grouping with all slots populated:
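For anyone who wants to reproduce an IOMMU grouping listing on their own board, a minimal sketch under Linux (assuming IOMMU is enabled in the BIOS and on the kernel command line) is to walk `/sys/kernel/iommu_groups`:

```shell
#!/bin/sh
# Print each IOMMU group and the PCI devices it contains.
# /sys/kernel/iommu_groups is the standard sysfs location on Linux;
# the optional argument only exists so the function can be pointed
# elsewhere for testing.
list_iommu_groups() {
    base="${1:-/sys/kernel/iommu_groups}"
    for group in "$base"/*; do
        [ -d "$group/devices" ] || continue
        echo "IOMMU group $(basename "$group"):"
        for dev in "$group"/devices/*; do
            [ -e "$dev" ] || continue
            echo "  $(basename "$dev")"
        done
    done
}

list_iommu_groups "$@"
```

Devices sharing a group must be passed through to the same VM, so isolated groups per slot are what you want to see here.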


Added updated info based on BIOS 2.00d. It's a monster board. Also, I think we should take a moment to appreciate how awesome ASRock support is: they implemented 2 new, totally optional, nice-to-have functions within 2 weeks of a request in a support ticket… Now we can hook up 14 GPUs to this motherboard for "14 gamers, 1 CPU" lol.


2.00d? Is that some beta you got from them?

What's the situation with TR/X399/ASRock and AGESA versions, are they up to date?

I was going to try Looking Glass/passthrough on my Taichi with a Vega and a GTX 970. It's so nice not having to bother with the NVIDIA driver, though, that I haven't been arsed to try it yet…


Considering that 2.00d was created yesterday and sent as a zip (2.00c was created 1 week ago), I would say yeah, it's probably not public yet. I'm sure you can poke them to send you one, as well as to answer the AGESA question. They sent me the Taichi BIOS specifically, but those boards are almost 1:1, so I doubt they didn't add it to the Fatal1ty as well.

Fun fact: with SR-IOV and bifurcation support, and thus 14 PCIe slots, these motherboards have a hypothetical capacity to run 448 VMs with accelerated graphics… or 192 when using only the x8 slots (with 32 vGPUs per S7150x2 card; in high-performance mode at 4 GB per vGPU it's 4 per card, so 24 VMs) :smiley:. You'd need lots of Optane for swap though.
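Sanity-checking those numbers (a quick sketch; the slot counts and the per-card vGPU figures for the S7150x2 are taken from the post above, not independently verified):

```python
# Hypothetical figures from the post: 14 bifurcated slots total,
# or 6 usable x8 slots in the 8/8/8/8/8/8 layout.
slots_all, slots_x8 = 14, 6
vgpus_per_card = 32      # MxGPU maximum per S7150x2 (per the post)
vgpus_hp_per_card = 4    # high-performance mode, 4 GB per vGPU

print(slots_all * vgpus_per_card)    # 448 VMs in the maximal config
print(slots_x8 * vgpus_per_card)     # 192 VMs on x8 slots only
print(slots_x8 * vgpus_hp_per_card)  # 24 VMs in high-performance mode
```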


OoOoOOoo… I want to get 8 Samsung 960 EVOs and run them striped for my boot drive :stuck_out_tongue:


Attempting ESXi with 4 GPUs. Individual VMs randomly crash, and if rebooted, passthrough is dead until the host is rebooted (the VM comes back up with error 43).

System info: ASRock X399 Fatal1ty, 1950X, 64 GB DDR4 (no XMP), 4 x NVIDIA 10-series GPUs, 2 x 1 TB NVMe drives, no RAID.

1.91 beta BIOS (no VMs would start with passthrough on 2.00). IOMMU and SR-IOV enabled.

ESXi 6.7 has the usual d3d0 reset method set in passthru.map (prevents the host hanging on VM reboot) and the hypervisor.cpuid setting in the VM (otherwise ALL VMs always show error 43). Win10 guests.
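For anyone following along, those two tweaks usually look roughly like this (a sketch of the commonly shared workaround, not verified on this exact board):

```
# Host side, /etc/vmware/passthru.map: force the d3d0 reset method
# for NVIDIA GPUs (10de is NVIDIA's PCI vendor ID, ffff matches any
# device ID).
10de  ffff  d3d0  false

# Guest side, in the VM's .vmx: hide the hypervisor from the guest so
# the NVIDIA driver doesn't refuse to load (error 43).
hypervisor.cpuid.v0 = "FALSE"
```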

Things that don't work: changing the PCIe switches to x1 mode (both of them; I thought this might turn off ASPM).

Also, the USB 3.1 controller won't pass through, but the 2x USB 3.0 controllers will (the 3.1 is perpetually stuck on "needs reboot").

I'm starting to suspect that mysterious AMD PCIe bug that's been documented (and fixed) in the Linux hypervisor world. Is anybody getting ESXi to work with this hardware?




I'm looking at the Taichi for a TR 2950X build for KVM & passthrough. Did the latest BIOS do anything interesting? Would any users still recommend this board?



I'm interested in the same, except I'll be going for a cheaper 1920X or 1950X.

I'm looking to make a build with a Linux host, Windows VMs for gaming and work apps (2-3 VMs), a FreeNAS or similar VM for NAS duties, and maybe a couple more VMs to host a website for my wife's business, plus maybe another for a business of my own.

Last two are optional…