Is the X399D8A-2T a good motherboard?

Does anyone have any experience with the X399D8A-2T? It has a lot of nice features (if expensive), but I’ve seen the thread on the X470D4U2-2T board, so that makes me hesitant to grab one. I’ve looked around but I haven’t found anyone posting reviews or experiences with the X399D8A-2T.

Why I’m looking at this board:
I’m currently running a 2950X on an X399 Taichi board with 128GB of RAM. Unfortunately, the Taichi board did not play nice with ESXi and I kept getting the purple screen of death. I switched over to Server 2019, but I never had any luck passing PCIe devices to the VMs in Hyper-V.

Right now I just want to build a second 2950X node so I can migrate off my current box and rebuild my lab.

I like the specs on that board, 3 x16 slots and 2 x8, that’s some nice bandwidth. They’re all so close together though, not good for gaming GPUs?

I briefly ran ESXi 6.7 on my X399 Designare but didn’t get too far into PCI passthrough due to BIOS issues.

Dual gaming cards would be fine, I think. For my workload, I would look at some single-slot Quadro cards on eBay. Plex and my NVR software tend to like team Green more than team Red.

If you find one at a reasonable price, worth a shot? Does VMware recognize any X399 boards on their hardware compatibility list? Could be something funky AMD did in the chipset and there’s no way around it.

A friend of mine is running the X399 Taichi with two GPUs passed to different VMs; his IOMMU groups are a lot better than on my Gigabyte. I think he’s on Fedora or Proxmox for the host OS.

x2 for the Quadro… PMS runs great on a Quadro P400 I picked up off eBay super cheap, and it’s a small card that doesn’t block airflow on bigger cards.

The lowest-priced ones are on eBay for $546.99, but Amazon has it for $555.49. Either way, after taxes it would be around $590. They’re kinda hard to come by, and Newegg keeps selling out of them.

If I remember right, ESXi would purple screen due to a weird driver issue, so I went with Hyper-V on Server 2016. Unfortunately, when I would try to assign a PCIe device to a VM, it would just hang on the PowerShell command. I would have to reboot the server to clear it. So in the end the VMs never got the devices assigned to them.

Eventually I updated to Server 2019 to see if that made a difference. I found that I could get the PCIe devices assigned to the VMs via PowerShell, but the VMs would never boot. It would always throw an error saying the device was in use even though it was not (I used a lot of PowerShell to verify this). I played with both BIOS settings and Windows settings, but I eventually gave up. Family and friends got annoyed with me taking down Plex between reboots.
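For anyone curious, what I was attempting was basically the standard Hyper-V Discrete Device Assignment steps. Roughly this sketch, where the location path and VM name are just placeholders, not my actual values:

```powershell
# Hypothetical values; the real location path comes from Device Manager ->
# device properties -> "Location Paths".
$locationPath = "PCIROOT(40)#PCI(0100)#PCI(0000)"
$vmName       = "PlexVM"

# The VM has to be off, and DDA requires the stop action to be TurnOff.
Set-VM -Name $vmName -AutomaticStopAction TurnOff

# GPUs generally need extra MMIO space reserved for the guest.
Set-VM -Name $vmName -GuestControlledCacheTypes $true `
       -LowMemoryMappedIoSpace 3GB -HighMemoryMappedIoSpace 33280MB

# Disable the device on the host (Device Manager or Disable-PnpDevice),
# then dismount it from the host OS and hand it to the VM.
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force
Add-VMAssignableDevice -LocationPath $locationPath -VMName $vmName
```

Reversing it is Remove-VMAssignableDevice followed by Mount-VMHostAssignableDevice, which is where mine would claim the device was still in use.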

I want to take another stab at it, hence why I want to build another box.

Thanks for the info. I’m looking into the P400 now :grin:

If you find a good deal on a Gigabyte X399 Designare, I can vouch that PCI passthrough is working for a few slots, running the F12i BIOS (Sept 2019). SR-IOV also works, and I have my 10G network card passed through to several VMs. That’s under Fedora Linux; like I said, I didn’t spend much time with VMware. It also worked under Unraid.

My finger might have slipped on the Amazon buy button…

Anyway, it’s mostly for a training lab so I can be more useful at work. I’ll try and do a write-up on it once I get it in hand.

I’m using an X399D8A-2T at work, seems decent enough.

The IPMI/BMC works fairly well, although I’ve managed to crash it a couple of times and had to cold reboot the server to get it back. That was while I was in the process of testing it, though; we’re only just going into production with it now, and it’s been stable since I stopped mucking around with it.

I’m using Hyper-V 2019 and have passed through a Quadro P2200 graphics card without any tweaks or hacking beyond the official instructions (planning to use this box to host session-based Remote Desktop Services and trial some thin client options; it’s working really well so far), so for your use case this should work well. Obviously I’d caution that the Quadro is certified for this sort of passthrough where most graphics cards aren’t!
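If it helps, these are the quick checks I’d use to confirm what’s actually been handed over (the VM name here is a placeholder):

```powershell
# Devices currently dismounted from the host and available for assignment
Get-VMHostAssignableDevice

# Devices attached to a given VM
Get-VMAssignableDevice -VMName "RDSHost"
```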

We did do a trial run using the SATA ports for RAID, but hit a snag of sorts: we were limited to RAID 10, and the software didn’t seem very mature (no email alerts!). It talked about there being some way to upgrade to allow RAID 5 and 6, but I’ve been unable to find a means or cost for purchasing the upgrade.

Then, if you wanted to use the M.2 PCIe slots in RAID (and, say, make a small RAID 1 virtual disk on them for a boot drive), that all worked fine, but you had to use the SATA ports in RAID mode too, and you then had to use the onboard RAID modes with those drives. You couldn’t, for example, pass through the disks and use Windows Storage Spaces, because that would mean creating a virtual disk for each physical disk, and you were capped at 8 virtual disks total (which is less than 8 AMD SATA ports plus 2 M.2 ports…)
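If you want to see up front how the controller is presenting the disks, a quick sketch like this shows whether Storage Spaces could pool them at all:

```powershell
# Shows each disk as Windows sees it; disks wrapped by the onboard RAID
# show up as virtual disks, and CannotPoolReason explains any blockers.
Get-PhysicalDisk |
    Select-Object FriendlyName, BusType, MediaType, Size, CanPool, CannotPoolReason
```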

Anyhow, all this is a very long way of saying that I ended up putting in a RAID card to manage our spinning disks.

The board was quite expensive (we’re in the UK and paid £400), but this was offset by the very cheap first-gen Threadrippers that we bought to go in them (we bought two…)

Are you still happy with the X399D8A-2T, or do you have any more info to share, please?
I’m looking at it for a Proxmox server with a ZFS/RAID controller, but there isn’t much information about this motherboard in that area.

Yep, still happy is the short answer.

IPMI has been rock solid since I posted; I’ve not had any problems with remote management (although I haven’t needed to do much with it either). One small note is that I disabled the hardware inventory feature when setting up. I can’t recall the exact reason I did this (perhaps to improve boot time or to fix occasional hanging on reboots?), but there have been no issues and no real need for the feature anyway.

Two quirks I could add (and these are presumably Hyper-V specific, perhaps also specific to the drivers I’ve installed):

  • SR-IOV mode doesn’t seem to work with Intel NICs, so the virtual NICs never go into SR-IOV mode. This is despite (or perhaps even related to) passing through the graphics card successfully. I’ve tried different Intel drivers to see if it makes any difference; no dice. Network speeds are still very good though (probably thanks to the high clock speed of the processor). This is with both the onboard 10GbE and an X520 NIC for SFP+ support (see the diagnostic sketch after this list).

  • Occasionally (perhaps after a month and a bit of uptime) we’ve had replicas hosted on these hypervisors stop applying changes, so the replica status just sits on ‘applying changes’ indefinitely. Then, when you try to reboot the hypervisor, the Virtual Machine Management Service just hangs on ‘stopping’ for a long time. I usually just psexec onto the hypervisor and taskkill the VMMS.exe process, and this allows the reboot to proceed (there’s a sketch of that below too). Once it comes back up, the replications resume as normal (with a bit of delay while it works through the backlog).
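On the SR-IOV quirk, these are the checks I’d run to see where it’s falling down; the VM name is a placeholder:

```powershell
# Does the physical NIC/driver report SR-IOV support and virtual functions?
Get-NetAdapterSriov

# Is IOV enabled on the virtual switch, and if not, why?
Get-VMSwitch | Select-Object Name, IovEnabled, IovSupport, IovSupportReasons

# Has each virtual NIC been given an IOV weight? It needs to be non-zero
# for the adapter to request a VF, e.g.:
#   Set-VMNetworkAdapter -VMName "Replica01" -IovWeight 50
Get-VMNetworkAdapter -VMName "Replica01" |
    Select-Object VMName, Status, IovWeight
```

In my case everything reports as supported but the VFs never attach, hence my suspicion it’s driver related.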
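And the remote kill I mentioned, sketched in PowerShell (the hostname is a placeholder; the psexec/taskkill equivalent is in the comment):

```powershell
# Equivalent to: psexec \\HV01 taskkill /f /im vmms.exe
Invoke-Command -ComputerName "HV01" -ScriptBlock {
    # Kill the hung Virtual Machine Management Service so the reboot can proceed
    Stop-Process -Name vmms -Force
    # Or bring it straight back instead of rebooting:
    # Start-Service vmms
}
```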

Ultimately I wouldn’t classify either of these as any kind of dealbreaker for our usage, and it’s possible Proxmox will run completely without issue, but if you are after something totally rock solid in production then it might be worth spending rather more for an EPYC/Xeon platform. Then again, if you were, I guess you wouldn’t be looking at this!

ZFS would probably be a good way to work around the limitations of the onboard RAID. I’m not a huge fan of MS Storage Spaces for hard drive management, so I still tend to use hardware RAID for spinning disks, then use Storage Spaces for data tiering on top of that RAID.
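For what it’s worth, the tiering setup on top of the hardware RAID looks roughly like this sketch. Pool/tier names and sizes are placeholders, and disks behind a RAID controller usually report MediaType as Unspecified, so you have to tag them by hand once they’re in the pool:

```powershell
# Pool the virtual disks presented by the RAID controller.
New-StoragePool -FriendlyName "TieredPool" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

# Tag the pooled disks so Storage Spaces knows which are fast and slow
# (friendly names here are placeholders for the RAID virtual disks).
Set-PhysicalDisk -FriendlyName "RAID-SSD" -MediaType SSD
Set-PhysicalDisk -FriendlyName "RAID-HDD" -MediaType HDD

# Define the two tiers and carve a tiered ReFS volume out of them.
New-StorageTier -StoragePoolFriendlyName "TieredPool" -FriendlyName "SSDTier" -MediaType SSD
New-StorageTier -StoragePoolFriendlyName "TieredPool" -FriendlyName "HDDTier" -MediaType HDD

New-Volume -StoragePoolFriendlyName "TieredPool" -FriendlyName "Data" -FileSystem ReFS `
    -StorageTierFriendlyNames "SSDTier","HDDTier" -StorageTierSizes 200GB,4TB
```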