
Is the X399D8A-2T a good motherboard?

Does anyone have any experience with the X399D8A-2T? It has a lot of nice features (if expensive), but I’ve seen the thread on the X470D4U2-2T board, which makes me hesitant to grab one. I’ve looked around, but I haven’t found anyone posting reviews or experiences with the X399D8A-2T.

Why I’m looking at this board:
I’m currently running a 2950X on an X399 Taichi board with 128 GB of RAM. Unfortunately, the Taichi board did not play nice with ESXi and I kept getting the purple screen. I switched over to Server 2019, but I never had any luck passing PCIe devices to the VMs in Hyper-V.

Right now I just want to build a second 2950X node so I can migrate off my current box and rebuild my lab.


I like the specs on that board: 3 x16 slots and 2 x8, that’s some nice bandwidth. They’re all so close together, though. Not great for gaming GPUs?

I briefly ran ESXi 6.7 on my X399 Designare but didn’t get too far into PCI passthrough due to BIOS issues.

Dual gaming cards would be fine, I think. For my workload, I would look at some single-slot Quadro cards on eBay. Plex and my NVR software tend to like team Green better than team Red.

If you find one at a reasonable price, worth a shot? Does VMware recognize any X399 boards on their hardware compatibility list? Could be something funky AMD did in the chipset and there’s no way around it.

A friend of mine is running the X399 Taichi with two GPUs passed to different VMs; his IOMMU groups are a lot better than on my Gigabyte. I think he’s on Fedora or Proxmox for the host OS.

x2 for the Quadro… Plex Media Server runs great on a Quadro P400 I picked up off eBay super cheap, and it’s a small card that doesn’t block airflow to bigger cards.

The lowest-priced ones are on eBay for $546.99, but Amazon has it for $555.49. Either way, after taxes it would be around $590. They’re kind of hard to come by, and Newegg keeps selling out of them.

If I remember right, ESXi would purple screen due to a weird driver issue, so I went with Hyper-V on Server 2016. Unfortunately, when I would try to assign a PCIe device to a VM, it would just hang on the PowerShell command, and I would have to reboot the server to clear it. So in the end the VMs never got the device assigned to them.
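For context, what I was running was the standard DDA (Discrete Device Assignment) sequence, roughly like the sketch below; the device filter and VM name are placeholders, not my exact setup:

```powershell
# Find the device and grab its PCIe location path ("*P400*" is a placeholder)
$dev = Get-PnpDevice -FriendlyName "*P400*" | Select-Object -First 1
$locationPath = (Get-PnpDeviceProperty -InstanceId $dev.InstanceId `
    -KeyName DEVPKEY_Device_LocationPaths).Data[0]

# Disable the device on the host, then dismount it for guest assignment
Disable-PnpDevice -InstanceId $dev.InstanceId -Confirm:$false
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force

# Prep the VM (MMIO sizing matters for GPUs), then hand it the device
# ("PlexVM" is a placeholder name)
Set-VM -VMName "PlexVM" -AutomaticStopAction TurnOff `
    -GuestControlledCacheTypes $true `
    -LowMemoryMappedIoSpace 3Gb -HighMemoryMappedIoSpace 33280Mb
Add-VMAssignableDevice -LocationPath $locationPath -VMName "PlexVM"
```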

Eventually I updated to Server 2019 to see if that made a difference. I found that I could get the PCIe devices assigned to the VMs via PowerShell, but they would never boot. It would always throw an error saying the device was in use even though it was not (I used a lot of PowerShell to verify this). I played with both BIOS settings and Windows settings, but I eventually gave up. Family and friends got annoyed with me taking down Plex between reboots.
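By “verify” I mean checks along these lines, confirming the device was actually dismounted from the host and that no VM, running or not, was still holding it (a sketch using the Hyper-V module’s cmdlets):

```powershell
# Devices dismounted from the host and available for DDA assignment
Get-VMHostAssignableDevice

# Devices currently assigned to any VM (should show who "owns" the device)
Get-VM | Get-VMAssignableDevice | Format-Table VMName, LocationPath
```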

I want to take another stab at it, hence why I want to build another box.

Thanks for the info. I’m looking into the P400 now :grin:

If you find a good deal on a Gigabyte X399 Designare, I can vouch that PCI passthrough is working for a few slots, running the F12i BIOS (Sept 2019). SR-IOV also works, and I have my 10G network card passed through to several VMs. That’s under Fedora Linux; like I said, I didn’t spend much time with VMware. It also worked under Unraid.

My finger might have slipped on the Amazon buy button…

Anyway, it’s mostly for a training lab so I can be more useful at work. I’ll try to do a write-up on it once I get it in hand.
