ASRock X570D4U, X570D4U-2L2T discussion thread

Welp, I’ve made the same mistake. Do you need a GPU for socflash? Or does the onboard vga port work without a gpu?

Onboard VGA will get you something like 1280x720 out. The ASPEED chip on the board that provides the IPMI also has a very weak 2D GPU onboard, so that's what you're using. There's no 3D acceleration at all, so you're best off sticking with a GUI-free OS. FreeDOS works well, as it's just … DOS.

Onboard HDMI will not work without a supported AMD CPU with integrated graphics. And you’ll need the latest BIOS and BMC for that.

I've done it now, and it worked perfectly!


Hi all. I'm relatively new to the forums. I recently got this board and set it up with a 5950X and 128GB of Crucial CT32G4DFD832A RAM. First boot went great; I installed xcp-ng and was up and running for a few days, but after a reboot it won't POST.

I'm getting a 4b error code. I've tried a CMOS reset (unplug, remove battery, jump the CMOS reset header) and also tried individual sticks of RAM so far, but no luck. Any recommendations? I have access to the IPMI, so I can pull more system info.

Thanks!

Oof. Sorry to hear that.

Do you have another CPU you can try? I had a 5900X that went bad on me after a few days. I think it was mishandled in shipping and prone to fail when I got it, but I can’t prove that…

I had the CPU running in another motherboard for a long time, and it ran fine on first install in this board. But that's a good idea. I think I have another AM4 CPU I can try.


Hi there!

I have read this whole thread looking for a problem close to mine, but I didn't find anything yet… I own an X570D4U-2L2T and today upgraded the BMC to v1.35 and the BIOS to 1.70. After upgrading the BIOS and a full reboot, the system stopped detecting my two M.2 drives (Samsung 980 Pro). I tried almost EVERY setting I thought could be related to it… but no luck. From time to time (every 20th restart or so) one of the drives shows up in the EFI, but after exiting and the following restart it's gone again. I already tried downgrading the BIOS to v1.40 and even v1.30, but nothing helps: the drives are not detected anymore.

Has anyone had problems like mine with NVMe drives?

System Specs:

AMD Ryzen 9 3950X
128 GB DDR4-2666 RAM
2 × Samsung 980 Pro

No PCIe card or similar is installed. The system was running without any problems with BIOS v1.30 and BMC v1.20 for more than a year.

Thanks so much in advance!
Greetings unu1

Board finally arrived, and works! However, I’m having a small issue:

The memory installed is only running at 2666MHz, but it's rated for 3200. I tried setting this manually in the OC section, since I couldn't find anywhere else for timings, but shortly after that the BMC started throwing errors saying the RAM was running at 1.75 volts, and now it won't boot anymore. I've not tried to navigate a server BIOS in a good half a decade; would anyone happen to know where else to look to get that set right?

I’d appreciate any pointers!

Also, if anyone knows how to reset the CMOS without getting in and pulling the battery… that’d be amazing, the CS381 is not a friendly case to build in.

EDIT: Just re-flashed the BIOS to solve the not-booting issue. If it works, it works!

Hello, everyone.

Here’s another question betraying my inexperience a bit.

I've got two PCIe 4.0 NVMe drives installed (Sabrent Rocket 4.0 1TB), which I plan to use for VM/CT image storage in Proxmox. Mostly, they'll be responsible for a TrueNAS VM that has a bunch of SAS drives passed through to it.

I’d also like to use both 10Gbps NICs in LACP mode.

I know that one of the NVMe slots shares bandwidth through the chipset, so, question: how likely am I to see bottlenecking in normal use for a personal (non-business) virtualization server/homelab setup?

Put another way, what would I need to be doing to run into problems?

My storage array is a 16-disk SAS2 SSD set, with 8 each connected to an LSI 9207-8i card.

Proxmox itself (the OS) will be installed on a SATA SSD ZFS mirror.

Chipset bandwidth is 8GB/s (4 lanes of PCIe 4.0), so pretty much what a single good NVMe drive can demand, and the 10Gbit NICs and SATA all share that link… I got PCIe 3.0 drives and don't worry about it. You may see some bottleneck, but how often does the NVMe pull full sequential read throughput? My VMs don't.
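To put rough numbers on that sharing, here's a back-of-the-envelope budget in Python. The per-device figures are approximate theoretical maxima I'm assuming (Gen4 NVMe sequential reads, both NICs saturated, a two-drive SATA SSD mirror), not measurements from this board:

```python
# Rough X570 chipset downlink budget. All figures are approximate
# theoretical maxima, not measurements.

chipset_link = 4 * 1.969  # PCIe 4.0 x4 uplink: ~1.969 GB/s/lane ≈ 7.9 GB/s

demands = {
    "chipset NVMe (Gen4 x4, sequential read)": 7.0,
    "2x 10GbE NICs (both saturated)": 2.5,
    "SATA SSD mirror (2 x ~0.55 GB/s)": 1.1,
}

total = sum(demands.values())
print(f"chipset uplink: ~{chipset_link:.1f} GB/s")
for name, gbs in demands.items():
    print(f"  {name}: ~{gbs:.1f} GB/s")
print(f"worst-case simultaneous demand: ~{total:.1f} GB/s "
      f"({total / chipset_link:.0%} of the uplink)")
```

The point being: you only oversubscribe the link when everything pulls its maximum at once, which in a homelab is rare.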

Thanks!

I'm disinclined to spend more money replacing the drives at this point; I don't need their max throughput, unless it's going to cause problems with the CPU-connected drive and the chipset-connected drive running at different speeds.

I’d be much more concerned about the 10Gbps NIC on the chipset not running at full bore. Does it have priority? If so, and I’m not going to get instability from a speed differential on the drives, I think I’m okay?

You can always set the PCIe generation to 3 in the BIOS to see if the system performs better with the NVMe in 3.0 handcuffs. Test-run the NVMe at full sequential throughput while the NICs are in full-duplex transfer, then check the differences.

Hi there,
I am planning a new build based on the X570D4U-2L2T/BCM, but none of the 32 GB memory modules from the QVL are available in my region.
I want to populate all 4 slots for 128 GB of RAM, and I found Micron MTA18ASF4G72AZ-3G2R memory.
Has anyone had experience with these modules and did everything work well?

Welcome to the forums!

These are the modules I’m using, in the same configuration - the QVL lists a couple different variants of the MTA18ASF4G72AZ, just not the 3G2R, so I figured it was close enough.

So, they work - but I’ve not been able to get them running at the full 3200MHz. In fact, trying to change any memory related setting at all stops the system from booting, throws a C8 error on the debug LED, and according to the IPMI, causes the VCCM voltage to spike to 1.72V. Everything survives, but needs a BIOS reflash to get the system working again since it doesn’t even get to a point you can get into the BIOS, and the way the IPMI exposes it is… kind of terrible, honestly.

If you can get by without the full 3200MHz, it’s still worth getting them, since I’m sure someone smarter than me will figure out a way to get them running at full speed eventually, but I’ve had no issues running them at 2666MHz.

Interestingly, the BIOS does recognise them as being able to run at 3200MHz (and 2933MHz) when you go into the right menus, it just refuses to select it when set to Auto and throws those weird errors when you do it manually.
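If it helps to quantify what staying at 2666MHz actually costs, here's a quick theoretical-peak comparison in Python. It assumes dual-channel operation and 8 bytes per 64-bit channel per transfer, and ignores Infinity Fabric effects, which also matter on Ryzen but are harder to pin down:

```python
# Theoretical peak DDR4 bandwidth: MT/s * 8 bytes per 64-bit channel,
# times the number of channels. Real-world throughput is lower.

def ddr4_peak_gbs(mt_s, channels=2):
    return mt_s * 1e6 * 8 * channels / 1e9

for speed in (2666, 2933, 3200):
    print(f"DDR4-{speed}: ~{ddr4_peak_gbs(speed):.1f} GB/s dual-channel")

loss = 1 - ddr4_peak_gbs(2666) / ddr4_peak_gbs(3200)
print(f"running at 2666 instead of 3200 gives up ~{loss:.0%} of peak")
```

So it's roughly a sixth of peak bandwidth on the table, which most server workloads won't notice.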


Hi and thank you for a great thread.

I just received my board (2L2T) and paired it with a Ryzen 7 5700G and 64GB Kingston ECC memory. So far the setup has worked great for TrueNAS Scale!

I only have 2 minor issues that I hope you more experienced users can help out with:

  1. I use the onboard 10Gbit NIC (enp36s0f0), and if I understand correctly this NIC is shared with the IPMI? Without connecting the physical IPMI port I get an IP address and can connect to what looks like a colorful dashboard that I assume is the BMC. What is the physical IPMI port for, and is it any different? Will sharing the link impact the performance of the 10Gbit link?

  2. I have the primary GPU set to onboard, and I have not been able to figure out how to get the remote control in the BMC to work; it only says “No Signal”. The HDMI out works fine, but I do want remote capabilities.

Hope you can help.

The standard LAN ports share an IPMI sideband to the BMC. The BMC's sideband interface has a different IP address than the NIC ports' own IP addresses.

You have to login into the IPMI interface to get access to its functions.

The BIOS has settings to enable or block the IPMI interface. Make sure it is enabled.

In addition to what @KeithMyers mentions, you'll need to plug a network cable into the IPMI port (above the two USB ports) and use the IP address of that connection to get to the web UI (IPMI interface). Once you're in the web UI, go to Remote Control. This will give you the same video-out signal you'd get over HDMI, and allow you to control the entire server from a web page.

I don't know of a way to get the IPMI interface over the HDMI port; I don't think it's designed to do that. In a pinch, you can connect a cable to the VGA port and get video out from the IPMI chip itself, which will let you access the BIOS and even boot into an OS if you need to. But that's a relatively weak 2D GPU that can't do 1080p, so while it's great for troubleshooting, you won't want to use it all the time. And it still doesn't give you access to the IPMI interface itself (though you can access the BIOS).

Thanks for bringing up the NIC sideband thing. I’m still not clear on how much that eats into the 10Gbps port’s bandwidth when it’s not being used. (Though even when it is being used, it can’t be too much bandwidth even when using remote control, maybe 1Gbps at most).
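As a sanity check on that "maybe 1Gbps at most" guess, here's an upper bound for the remote-console video in Python. The 1024x768 @ 30fps figures are my own assumptions, and real BMC KVM streams are compressed, so actual usage sits far below this:

```python
# Upper bound for an *uncompressed* remote-console video stream from
# the BMC. Real BMC KVM traffic is heavily compressed, so actual
# sideband usage is a small fraction of this.

width, height = 1024, 768   # assumed console resolution
bytes_per_pixel = 3         # 24-bit colour
fps = 30                    # assumed frame rate

raw_bps = width * height * bytes_per_pixel * fps * 8
print(f"uncompressed console stream: ~{raw_bps / 1e6:.0f} Mbit/s")
print(f"as a share of a 10GbE link: ~{raw_bps / 10e9:.1%}")
```

Even the uncompressed worst case stays under 1Gbps, so the sideband shouldn't meaningfully dent the 10Gbps port.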

My understanding is that it's a backup for the dedicated IPMI port above the USB ports: if the IPMI port failed without the sideband on the 10Gbps port enabled, you'd be locked out.

Unrelated: which TPM module should I buy? There are 4 on the QVL, but I assume I want one of the two that are TPM2.0.
https://www.asrockrack.com/general/productdetail.asp?Model=X570D4U-2L2T#TPM

I’ve noticed some odd things with my -2L2T setup. For reference, I’m running 4x32GB RAM modules that were on the QVL.

Occasionally I've noticed random reboots of the system. I just happened to notice in the IPMI logs a message that VCCM has hit 1.75V. The RAM is set to auto and should be running at 1.20V, so I don't know how it would spike up to 1.75V. Whenever I look at the voltage in the IPMI, it says it's around 1.22V, which should be fine.

I cannot determine whether this spike is a result of the reboot or the cause of it. The PSU is a new Corsair RM650x, so I don't think that is the issue.

I know people have mentioned that BIOS 1.70 is buggy. Is this one of the bugs and would everyone recommend that I go back to 1.40?

The odd thing is I have a duplicate system running the same configuration which has been rock solid with no VCCM events.

After some further investigating, the VCCM error in the IPMI event log happens every time the machine is powered off or rebooted. Interestingly, it does not occur during power up. Has anyone else noticed this?

Lately I have been facing an issue with the PCIe x16 slot: it keeps disappearing every time I shut down the server. If I turn it off and on, it might come back. I reset the BIOS and nothing changed. Any ideas?