ASRock X570D4U, X570D4U-2L2T discussion thread

The problem with X470 is that the chipset link is PCIe 3.0. That means you run into bandwidth issues with NVMe drives. I don’t absolutely need gen 4 speeds for a single drive, but I will be mirroring a pair of NVMe drives, so I do need/want the raw throughput of gen 4 on the chipset link so I have some overhead for other stuff.

In a “standard” X570 configuration, with a PCIe x4 slot going through the chipset and the M.2 slots split across the chipset and CPU lanes, this isn’t as much of a problem. Yeah, you can run into some throughput issues with a gen 4 NVMe drive running through the chipset, but gen 3 NVMe is more than adequate for me. That would leave plenty of overhead for things like 10Gb NICs, some random USB junk, a few SATA connections, or whatever else.

Running both M.2 slots through the chipset will cause bandwidth issues no matter what if you intend to use two non-garbage NVMe drives.
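For a rough sense of the numbers, here is a back-of-the-envelope uplink calculation (a sketch only; real throughput is lower once protocol overhead beyond the line encoding is counted):

# Approximate bandwidth of a x4 chipset uplink
# PCIe 3.0: 8 GT/s per lane, PCIe 4.0: 16 GT/s per lane, both 128b/130b encoded
awk 'BEGIN {
  g3 = 8  * 128/130 / 8;   # ~0.985 GB/s per lane
  g4 = 16 * 128/130 / 8;   # ~1.969 GB/s per lane
  printf "PCIe 3.0 x4: %.2f GB/s\n", g3 * 4;   # ~3.94 GB/s
  printf "PCIe 4.0 x4: %.2f GB/s\n", g4 * 4;   # ~7.88 GB/s
}'

A single decent gen 3 NVMe drive can already push roughly 3 GB/s sequentially, so two of them behind a gen 3 x4 uplink, plus NICs and SATA, leaves essentially no headroom.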


I’m able to POST now. I replaced the 2x32GB kit with a single 8GB stick of random junk RAM in the A1 slot. That let the system POST. Then I shut it down and swapped back to the 2x KSM32ED8/32ME kit.

The fact that the debug code indicated a CPU problem, combined with the fact that this memory is specifically on their QVL, is quite confusing. I can honestly say I’ve never had anything quite like this happen with any system build I’ve done before.

Their memory training isn’t the greatest (I already mentioned my own RAM problems) but I wouldn’t have guessed D0 means this.

So the support wasn’t that useless after all…

Support initially blamed the failed BIOS flash on not having RAM installed, even though I stated up front that the flash didn’t appear to work whether or not the RAM was installed, and that RAM isn’t required for the initial flash anyway. Obviously you can’t POST without RAM, but there was initially no indication that the flash had succeeded at all. So perhaps they were in the right area, but if that’s what they were getting at, it was rather poorly stated. The RAM is on their QVL as well.

Right now the system seems hard-locked at 2666 MT/s, which is not what I want either. I bought 3200 sticks specifically to run them at 3200, possibly higher, and any attempt to fiddle with the memory speeds results in a D1 error. That error code supposedly indicates a northbridge problem, but given the error codes so far, who knows what that actually means.

I’m honestly about to just return this board and go with a standard ATX/mATX board that supports ECC, because this is turning into far more hassle than the upsides of onboard 10Gb LAN and IPMI are worth. Especially since half the IPMI features aren’t functional in the first place.

I think I figured it out. The memory speeds reported in the BIOS and in the overclocking interface are not consistent.

For those who are unaware, DDR means double data rate. In layman’s terms, the memory transfers data twice per clock cycle, so the effective data rate is twice the actual clock. Most systems these days report the doubled value, but if you look at your RAM speed with CPU-Z or something of that sort, you’ll see the actual clock speed, which is half. So if you buy a “3200 MHz” kit, the actual clock is 1600 MHz and the data rate is 3200 MT/s.
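On Linux you can sanity-check what the modules are rated for versus what they are actually running at; a minimal example, assuming dmidecode is installed (older versions label the second field “Configured Clock Speed”):

# "Speed" is the DIMM's rated data rate; "Configured Memory Speed" is what it
# is actually running at right now. Both are reported in MT/s.
sudo dmidecode -t memory | grep -i 'speed:'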

This board reports the doubled value on the first page of the BIOS. However, the overclocking section (which I’m forced to use, since it apparently will not recognize the memory on the QVL at the correct speed) appears to use the actual clock speed.

This means that when I was fiddling around trying to manually enter the clock speed for the RAM, I was shoving in double the value, which would explain why it wouldn’t boot. Even when I thought I was underclocking it as a sanity check, I was actually overclocking it to 4800 MT/s. I’ve seen boards that report the “actual” clock speed before, but I don’t believe I’ve ever seen a board that displays one value in one section and the other value in another. That’s a new one.

Currently chilling at 3600 MT/s with memtest chewing on it. I think I’ll let that bake overnight and we’ll see where we’re at in the morning.
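If you also want to hammer it from inside a running Linux install, memtester is an option (a sketch, assuming memtester is installed; the size and loop count are just examples):

# Lock and test 4 GiB of RAM for one pass; run as root so it can mlock the region
sudo memtester 4G 1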

Yeah, I’ve lost count of how many times I’ve pointed this out to people: these boards all behave differently compared to consumer boards.

ASRock should know better and at the very least be consistent, but I guess it’s expected. These boards are a good introduction to the server side of things for people used to consumer hardware. Annoying, but in the long run those people are set to learn a valuable lesson.
And the home-server niche could become a bigger market with better offerings!

Just a few posts I found in a one-minute search:

The confusion on my end came from the fact that it lists both, combined with the insanely high speed options in the BIOS. I do not believe I’ve ever seen a board do that before, not even the server boards I’ve worked with in the past. There have been plenty of consumer boards that list things in actual clock rates, but it’s always been one or the other.

The real WTF, though, is why on earth it has memory clock options that go that high. Isn’t the world record with liquid nitrogen just over 7000 MT/s, i.e. a 3500 MHz clock? There is no conceivable reality in which half of the options in that menu are attainable. If the board had capped the OC settings at 2000, or even 2400, it might have been a clue to put two and two together.


I think they just didn’t bother. The BIOS back-end code was probably copied from their AM4 consumer boards, and the OC menu was hardly validated or worked on.

AM4 cannot be called a mature server platform at this point. The best we can hope for is that the menu doesn’t get removed entirely.
For example, the RAM voltage setting was removed at some point on the X470D4U. I don’t know if the X570 boards still have it.

Kinda chicken-and-egg: no support because AM4 is not a popular server platform, and not popular because the support and board selection suck.

Has anyone else noticed absurd coil whine with this board?

I got the 2L2T board, and after running the same command, it looks like I have the same issue. The two 10Gb Ethernet ports only link at PCIe x1…

root@SwanDisc:~# for netdevs in $(lspci | grep I210 | awk '{print $1}'); do echo -e "$netdevs - Link Speed: $(cat /sys/bus/pci/devices/0000:$netdevs/max_link_speed) - Link Width: $(cat /sys/bus/pci/devices/0000:$netdevs/max_link_width)x"; done
26:00.0 - Link Speed: 2.5 GT/s PCIe - Link Width: 1x
27:00.0 - Link Speed: 2.5 GT/s PCIe - Link Width: 1x

Any solution out there?

You used my command, which matches the i210 NICs, i.e. the 1Gb ports, not the 10Gb ones.
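Something along these lines should hit the 10Gb controllers instead (assuming the 2L2T’s 10Gb ports are Intel X550; adjust the grep pattern to whatever lspci actually reports on your board):

# Same idea, but match the 10GbE NICs and read the negotiated (current) link
for dev in $(lspci | grep -i 'X550' | awk '{print $1}'); do
  echo "$dev - Link Speed: $(cat /sys/bus/pci/devices/0000:$dev/current_link_speed) - Link Width: $(cat /sys/bus/pci/devices/0000:$dev/current_link_width)x"
done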

Hi all, thanks for the information so far in this thread - it has been very useful, as I just bought one of these boards to use as an ESXi host to consolidate a few machines.

Out of interest, has anyone managed to get a GPU to work in this board?
When I put one in either the x16 or x8 slot, I cannot get past the BIOS: it always fails with 5 beeps, which is the code for no graphics on ASRock boards. (The x8 slot would be enough, since it is open-ended and accepts a full-length PCIe card, and it’s preferable because a dual-slot GPU there wouldn’t cost me the x1 slot.) I also tried plugging an HDMI lead into the GPU during testing, since from memory some PCs just won’t run without anything connected at all, but it didn’t help.
I also tried disabling the onboard graphics, but that was a mistake, as then you can’t even get the IPMI remote console up, and the 3950X, my chosen processor for this build, doesn’t have an iGPU.
I’m on the latest available BIOS and have tried resetting to defaults as well. I must be missing something, as this is my first board that’s neither an enterprise server board nor a “gaming”-focused one.

Hello everyone, I have the X570D4U-2L2T. I would like to add an HBA 9500-16i card, but I can’t find cables to connect it directly to 16x 12Gb SAS drives. Can anyone help?

I’ll try soon enough. Your post worries me, as I was counting on a P2000 card for Plex transcoding. That would be such a deal breaker for me that I’d actually need to sell the motherboard if it doesn’t work.

https://docs.broadcom.com/doc/12354774

You can find cheaper off-brand cables on Amazon, but there’s no guarantee they will work at 12Gbps. I only have SAS2 drives, so it doesn’t matter for me.
The official LSI cable for your scenario would be CBL-SFF8643-SAS8482SB-06M.
Edit: It seems the new-gen cards have replaced the standard port. Look around Broadcom’s website.

Broadcom introduced shitty proprietary cables with their 9400 line (the first where you can also use NVMe drives when using the internal ports): You cannot use standard U.2 cables but have to get these overpriced Broadcom cables.

I’d be careful about the 9500 and the cables it might need, maybe you also have to use special cables for SATA and SAS drives. :confused:

Not true. Rocking a 9460-16i with standard mini SAS HD ports in my server.

I’ve installed a Quadro P2000 card (Pascal GP106, 5GB RAM), single slot, 75W (no PCIe power connector), with a DisplayPort 4K dummy plug, and it works fine.

A couple of things to consider:

  1. This motherboard has a peculiar way of doing PCIe enumeration. It seems to only do it after a complete power off/on, as in, pull the plug out of the wall type of power off, while also waiting for the heartbeat LED on the board to die off.
    So, be sure to do that unplugging after each insertion/removal of any PCIe card, especially GPUs.

  2. To have remote control (web IPMI) work correctly after installing a GPU, you need this setting in your BIOS:
    AMD PBS → Primary Graphics Adaptor → Onboard D-SUB
    Otherwise you’ll only see the “NO SIGNAL” screen forever.

So the card gets picked up by my Linux installation just fine. I haven’t installed drivers yet, but I’ll test this soon enough.

lspci | grep -i nvidia
2d:00.0 VGA compatible controller: NVIDIA Corporation GP106GL (rev a1)
2d:00.1 Audio device: NVIDIA Corporation GP106 High Definition Audio Controller (rev a1)
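If you want to double-check which slot it landed in and the link it negotiated, something like this works (using the 2d:00.0 address from the output above):

# LnkCap = what the device advertises, LnkSta = what was actually negotiated
sudo lspci -s 2d:00.0 -vv | grep -E 'LnkCap:|LnkSta:'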

Yes, the 9400 ones only need Broadcom cables to connect to NVMe SSDs even though they use standard SFF-8643 ports.

I feared that since they changed ports again with the 9500 line that they might have introduced “certified” cables even for SAS and SATA drives.

Thank you for your help. I’m trying to connect the HBA 9500-16i directly to 12x Seagate Exos X16 ST14000NM002G 14TB drives for a home lab and bulk storage.