ASRock X570D4U, X570D4U-2L2T discussion thread

Thanks!

It must be a full-power slot, as the reduced-power slots apparently run at 25 W, and that wouldn’t power that card. (That AMD lists the power requirements in terms of TDP makes it confusing, since TDP sometimes refers to heat dissipation rather than power draw.)

:slight_smile:

Glad I found this thread!

Looking at upgrading my current NAS build to the X570D4U-2L2T/BCM, but after reading this thread I’m a bit scared.

So are things stable with this board now? Is there a certain combo of BIOS/BMC that seems to be the most stable for people?

Looking at the RAM, the cheapest I can get from the QVL list is KSM32ED8/32HC at around £67 per stick, which seems reasonable. I will be running the latest Unraid, mostly for file storage/backups, Dockers, and a couple of VMs.

If you can calm my nerves after I’ve read half of this thread, that would be nice, as I really want to upgrade to this board.

Last time I checked, the /32HC wasn’t on the QVL. I know the /32ME is. Kingston changed the SKU from Micron-E to Hynix-C a year ago. I have the /32ME and they’re running fine. Can’t say anything about the Hynix modules.

They always were. If not, people were messing with overclocks, had incompatible memory, or were having trouble upgrading the BIOS or BMC.

Ah yes, you’re right, tired eyes didn’t spot that. Damn, the prices for the /32ME are a bit nuts; I need to look at the QVL again.

Thanks for the info though. I was likely going to order it anyway; I don’t mind a challenge :wink:

Looks like the X570D4U is going to be supported with OpenBMC. Not sure what that actually means in terms of functionality but sounds exciting.
https://lore.kernel.org/lkml/[email protected]/

Hey folks, I’m hoping that you guys can help me out in making sure that my system refresh + move from TrueNAS Core → Proxmox makes sense from a hardware perspective.

I have a small 2U rack case. That’s all I can fit in my rack right now. Practically speaking, that means I have a 2x5.25" drive bay that I currently use for a 3x3.5" hard drive hot-swap enclosure. I also have two other drives in the system, but they’re wedged in a way I don’t like. I’m trying to increase performance / reduce power consumption / move to Proxmox in one fell swoop.

New enclosure: this IcyDock 8x2.5" ToughArmor MB508SP-B.
It uses 2x mini-SAS connectors for the 8 drives. Because the X570D4U has 8x SATA ports, my understanding is that I would need:

A “reverse cable” to connect the mini-SAS to the motherboard.
Since it’s a “reverse” cable, I think that means I connect the 4x SATA connectors to the motherboard and then plug the mini-SAS end into the backplane, right? I should get two of these.

And then on the software configuration side, I hear that it’s best to pass the entire SATA controller through to TrueNAS (which will run in a VM on Proxmox) rather than pass through individual drives, so all 8 drives will be assigned to the TrueNAS VM. I’m going to get 4x 8TB SSDs in RAIDZ1.

And then there are the M.2 slots on the motherboard. I was thinking of getting a 2TB SSD, using it as the boot drive, and using the remaining space for VM storage. Or… should I get one boot device and a separate one for VM storage? These will essentially be the only two storage devices I can assign directly to Proxmox (i.e. not passed through), since the SATA controller is being passed to the TrueNAS VM, right?

Anything else I need to be aware of? Thanks!

I hope that also includes the -2L2T board!

One thing you could try: go into the firmware settings and disable all other serial ports, and then set the SOL port to use the IRQ and base address that would usually be assigned to COM1.
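If you want to confirm from Linux which resources the port actually picked up afterwards, here’s a minimal sketch reading `/proc/tty/driver/serial` (COM1’s legacy resources are 0x3F8 / IRQ 4; run it as root, since the kernel hides the port/IRQ details otherwise):

```python
# Rough sketch: print the I/O port and IRQ Linux sees for each ttyS* UART,
# so you can confirm the SOL/COM port ended up on the legacy COM1 resources
# (port 0x3F8, IRQ 4). Run as root; non-root output omits these details.
with open("/proc/tty/driver/serial") as f:
    for line in f:
        line = line.strip()
        # Entries typically look like: "0: uart:16550A port:000003F8 irq:4 ..."
        if "uart:" in line and "uart:unknown" not in line:
            print(line)
```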

I need to buy an M.2 PCIe SSD. I see that this board has two slots… According to the product page, one is listed as CPU, the other as FCH. What’s FCH?

I think this is a typo and they mean PCH (Platform Controller Hub).

Edit:
OK, I have found this:
AMD is marketing their chipsets as Fusion Controller Hubs (FCH)

So what does that mean exactly? Which slot is which? Which is better to use?

Hi guys,
has anybody succeeded in resolving the random freeze problem? I started experiencing it much more often after switching the board from ESXi to Red Hat (it runs at essentially zero load nowadays). The board completely freezes almost daily. I’ve tried dropping the RAM to 2666 because I found that worked for someone on Reddit, but to no avail… I’ve now disabled C-states and we’ll see.

I’m running an X570D4U with an AMD Ryzen 9 3900X and 2x Kingston DDR4 32GB 3200MHz CL22 ECC.

| Firmware | Version |
| --- | --- |
| BMC | 1.20.00 |
| BIOS | P1.20 |
| PSP | 0.14.0.2A |

Just wanted to throw my 2 cents onto the pile for the VCCM issue. I believe @Null_Dev, @Mircoxi and @Arsoth were the ones to experience it recently. I’m experiencing it on every boot, multiple times over, and can’t take my memory over 2666 (even though it’s on the QVL up to 3200 and is a 3200 kit). Curiously, when I was testing initially, if I only populated 2 slots it would stay at 3200, but now I’m getting a 1.73V assertion on VCCM that gets deasserted 10–11 seconds later at 1.21V (I assume that’s the polling interval of the sensor array).

Has anyone found any recourse?
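In the meantime, here’s a rough sketch of the kind of polling you could run to catch the excursion in a log rather than waiting for the SEL entry. It assumes `ipmitool` is available locally and that the sensor is literally named “VCCM” on your firmware; check `ipmitool sensor list` for the exact name.

```python
# Rough sketch: poll the VCCM sensor via ipmitool and log any reading
# outside a plausible DDR4 VDIMM window, with a timestamp.
# Assumes local ipmitool access (run as root); adjust SENSOR if your
# firmware names the sensor differently.
import datetime
import subprocess
import time

SENSOR = "VCCM"          # assumed sensor name; verify with `ipmitool sensor list`
LOW, HIGH = 1.10, 1.40   # rough sanity window for DDR4 VDIMM, in volts

while True:
    out = subprocess.run(["ipmitool", "sensor", "reading", SENSOR],
                         capture_output=True, text=True).stdout
    # Output looks roughly like: "VCCM             | 1.212"
    try:
        value = float(out.split("|")[1])
    except (IndexError, ValueError):
        value = None
    if value is not None and not (LOW <= value <= HIGH):
        print(f"{datetime.datetime.now().isoformat()}  VCCM reading: {value} V")
    time.sleep(5)
```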

So I have a very stupid setup with an oversized GPU directly over the chipset heatsink, running right up to the front of the case, meaning I can’t fit an intake fan that would blow over the fins. As you can imagine, even with ample airflow in the case, the chipset ain’t very happy about it, especially under load, so I’m looking for a decisive and direct solution to the problem that ASRock left us to deal with ourselves.

I haven’t found anyone measuring things or attempting heatsink replacements, so I’ll document some of my findings.

The target area we need to cool suits a 30mm fan well: the distance between the push-pin holes is ~46.5mm, and the height shouldn’t exceed 13mm (see below for precise measurements).

Out of all the listings online, I’ve been able to reliably find just one active cooling solution that looks like the “proper” fix and has some brand recognition: the Enzotech SLF-30. It lists a slightly wider minimum push-pin distance (47.5mm to 53.5mm) and is slightly taller (13.5mm). There’s only one issue: it’s only available for about $60 at AeroCooler before shipping, which is a stupid amount of money to solve this problem.

Other options include:

  • Any ol’ 30–40mm fan that you could try to strap on top of the heatsink, as some people have done, but that needs at least 10mm of clearance, which is usually unavailable if the full-width top slot is filled.
  • A low-profile 30mm fan, e.g. the MULTICOMP MP001243 (6.5mm, usually used on the Raspberry Pi Active Cooler, and can be bought as part of it) or the GeeekPi Raspberry Pi fan (7mm). One can also use 40mm options and mount them diagonally.
  • A blower-style “laptop” cooler (e.g. WAKEFIELD THERMAL DB0550505H1A-2T0 or SEPA HY45AB05PSE26AP00); at only 2–5mm thick it’s easily mounted, but it can certainly be lacking in CFM.
  • A BGA cooler attached via a thermal pad and tied down with a zip-tie, e.g. the MALICO MC30060V1-0000-G99. The downside is applying sufficient pressure to ensure good contact, since BGA coolers lack conventional mounting mechanisms. There are also 40mm options that are cheaper, but they’ll overhang and can short against onboard components, so probably not a good solution.

Stock heatsink measurements/info for reference:

  • Based on the STP file provided by ASRock, the X570D4U-2L2T/BCM has a 50x30.8x12.6mm (l/w/h) passive heatsink which, when mounted, sits 3.85mm above the PCB surface, for an effective height of 16.55mm.
  • The heatsink is made up of two blocks: the main block that makes chipset contact is 32x30.8x12.6mm, and the “fins” that don’t make direct contact but increase the air contact surface are 9x30.8x10.6mm each (with a 2mm gap from the bottom of the central plate). Note: technically, the render shows a 20x20mm plate slightly below this block, which likely makes the actual contact.
  • Lastly, the distance between the centres of the push pins (screw holes) is ~46.5mm (43.5mm between the nearest hole edges, 50mm between the farthest).
  • In my case, with the stock heatsink installed, there is still 5.5mm of clearance to the GPU, bringing the total clearance from the PCB to ~22mm.

P.S. I personally resorted to buying an RPi active cooling fan for £4, as I want to see whether that’ll be enough without spending dozens of pounds. It’s 7mm tall and I hope it’ll fit once it arrives. I’ll either screw it into the heatsink (between the fins) or zip-tie it on.

Happy to report that the £4 RPi 30x30x7mm fan worked wonders. I hooked it up to the voltage-controlled fan controller that’s built into the Node 804, and at Low it’s inaudible but brings the idle temps down by about 20 degrees, which lets my PWM case fans spin way slower and thus also stay inaudible.

With a hefty GPU right up against it, it sits at 70°C at idle in Windows and doesn’t exceed 85°C under a GPU stress test.

Are you still working on this? I’m no expert, but I’m doing something similar and wanted to check in with some suggestions based entirely on my cavalcade of mistakes. :stuck_out_tongue:

I have a small 2U rack case. That’s all I can fit in my rack right now. Practically speaking, that means I have a 2x5.25" drive bay that I currently use for a 3x3.5" hard drive hot-swap enclosure. I also have two other drives in the system, but they’re wedged in a way I don’t like. I’m trying to increase performance / reduce power consumption / move to Proxmox in one fell swoop.

Keep an eye on the X570 chipset temp. I didn’t realize at first that it really needs to have some sort of airflow over it, and had to redo my whole cooling arrangement (twice) just to try to deal with that. It’s good now, but I really wish Asrock had put some sort of low-profile active cooler on this thing.

Don’t forget to do some sort of heat soak/stress test when it’s all built to make sure the chipset doesn’t melt. If it survives running Cinebench for 30 minutes to an hour, you’re … probably … fine.

New enclosure: this IcyDock 8x2.5" ToughArmor MB508SP-B.
It uses 2x mini-SAS connectors for the 8 drives. Because the X570D4U has 8x SATA ports, my understanding is that I would need a “reverse cable” to connect the mini-SAS to the motherboard.
Since it’s a “reverse” cable, I think that means I connect the 4x SATA connectors to the motherboard and then plug the mini-SAS end into the backplane, right? I should get two of these.

I think I saw @wendell do a video on those reverse cables once. It sounded fiddly: some motherboards get cranky with them, and it can be hard to know whether you’re buying the right cable.

Are you doing 2.5" SSDs or HDDs?

I have that IcyDock on this board, in a 3U Supermicro case. It’s rock solid, and you can swap out the fan on the back for a Noctua of the same size to make it quieter if you’ve got enough intake airflow from other sources. Just make sure you have enough SATA power; my server is old enough that I had to use a MOLEX to SATA splitter.

If you haven’t done all this yet, and have at least an x8 or x16 slot free, I’d strongly suggest forgoing the reverse cables and the on-board SATA ports and getting an HBA in IT mode (take a look at the LSI 9207-8i; they can be had for reasonable prices on eBay and take the cables you need). If you’re using SAS2 or SATA (6 Gbps) 2.5" SSDs, putting eight of them on a single LSI 9207-8i should work fine.
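For a rough back-of-envelope on the bandwidth side (assuming the 9207-8i’s usual PCIe 3.0 x8 host link and ~550 MB/s per SATA SSD, both of which you should check against your actual parts):

```python
# Back-of-envelope: can eight SATA SSDs saturate an LSI 9207-8i?
# Assumptions: PCIe 3.0 x8 host link (~0.985 GB/s usable per lane) and
# ~550 MB/s sequential per SATA 6 Gbps SSD.
hba_gbps = 8 * 0.985   # ~7.9 GB/s to the host
ssd_gbps = 8 * 0.550   # ~4.4 GB/s if all eight SSDs run flat out
print(f"HBA host link: {hba_gbps:.1f} GB/s, 8x SATA SSDs: {ssd_gbps:.1f} GB/s")
print("Bottleneck?", "yes" if ssd_gbps > hba_gbps else "no, plenty of headroom")
```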

Others who are more experienced, please chime in on bottlenecking and whether OP should look at a different card instead. :slight_smile:

And then on the software configuration side, I hear that it’s best to pass the entire SATA controller through to TrueNAS (which will run in a VM on Proxmox) rather than pass through individual drives, so all 8 drives will be assigned to the TrueNAS VM. I’m going to get 4x 8TB SSDs in RAIDZ1.

If you use an HBA card, you can just pass the whole HBA through to your TrueNAS VM. If you’re using the on-board SATA ports, yes, you could pass through the SATA controller, but that can get weird if it’s intermingled enough with the other hardware on the board that it can’t be cleanly separated out for PCIe passthrough.

PCIe passthrough can get complicated quickly; the issue is that the device you want to pass through needs to be in a separate “IOMMU group” from any devices you don’t want to pass through (see: PCI(e) Passthrough - Proxmox VE). If you’re using on-board chips/ports, this is more likely to be a problem than if you’re passing through a slotted card.
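If you want to see how the groups actually fall out on this board before committing, here’s a small sketch that walks `/sys/kernel/iommu_groups` (IOMMU/SVM has to be enabled in the firmware and kernel first, or the listing will be empty):

```python
# List every IOMMU group and the PCI devices in it, so you can check
# whether the on-board SATA controller sits in its own group or shares
# one with things you don't want to pass through.
import os
import subprocess

BASE = "/sys/kernel/iommu_groups"
for group in sorted(os.listdir(BASE), key=int):
    for dev in sorted(os.listdir(os.path.join(BASE, group, "devices"))):
        # lspci -nns <addr> gives a human-readable one-liner for the device
        desc = subprocess.run(["lspci", "-nns", dev],
                              capture_output=True, text=True).stdout.strip()
        print(f"group {group:>3}  {desc}")
```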

And then there are the M.2 slots on the motherboard. I was thinking of getting a 2TB SSD, using it as the boot drive, and using the remaining space for VM storage. Or… should I get one boot device and a separate one for VM storage? These will essentially be the only two storage devices I can assign directly to Proxmox (i.e. not passed through), since the SATA controller is being passed to the TrueNAS VM, right?

If you can budget for it, get two NVMe drives of the same size, and when you install Proxmox, set them up in a mirror (Advanced Settings → ashift 12; the default options are fine). It’s not recommended (for professional settings) to keep the OS and the VMs on the same NVMe pool, but 2 TB should be enough for plenty of VMs and LXCs and the OS to coexist.
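After the install it’s worth a quick sanity check that the mirror and ashift came out the way you intended; a minimal sketch, assuming the installer’s default pool name `rpool`:

```python
# Quick post-install check: confirm the Proxmox root pool is a mirror
# and was created with ashift=12. Assumes the default pool name "rpool";
# substitute your own if you renamed it.
import subprocess

for cmd in (["zpool", "status", "rpool"],
            ["zpool", "get", "ashift", "rpool"]):
    print(subprocess.run(cmd, capture_output=True, text=True).stdout)
```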

If you’ve still got a free x16 or x8 slot and the budget, you can get a carrier card that holds 2-4 more NVMe drives; then you can keep the OS on a (smaller) NVMe drive on the board and use larger NVMe drives on the card for VM storage, for example. Or if you’re using spinning drives, you could use a pair of NVMe drives on the expansion card for a special vdev, SLOG, or L2ARC, if those make sense for your use case.

Anything else I need to be aware of? Thanks!

I’d suggest posting a more general Proxmox/ZFS thread describing your planned setup with all the drives and NVMe you want to use, so someone smarter than me who actually knows how to do all that correctly can help you avoid the mistakes I made / am possibly making now / will make again. :slight_smile:

Keep in mind that if you use the x8 slot with a Ryzen 3xxx processor (or newer), it will run with all 8 lanes, but the x16 slot will become electrically x8 as well. So you can still put an x16 card in there and it will work, but at half the advertised bandwidth (for an NVMe carrier board, for instance, half is likely more than enough; likewise for a GPU, if you adjust your performance expectations).

I don’t know for sure, but I think this is because the CPUs that run on these boards only really provide lanes for the x16 slot and the x1 slot, and to make the x8 slot work the board has to bifurcate the x16 slot in half, so the x16 slot loses 8 data lanes whenever the x8 slot is in use. The nice thing is that this just works out of the box; you don’t have to turn anything on in the BIOS.
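You can also confirm what actually got negotiated from the OS side; a rough sketch reading the PCIe link attributes out of sysfs, which should show the card in the x16 slot dropping to x8 once the second slot is populated:

```python
# Print negotiated vs. maximum PCIe link width/speed for every PCI device
# that exposes them, so you can see which slot is really running x8.
import glob
import os

def read_attr(dev: str, name: str):
    try:
        with open(os.path.join(dev, name)) as f:
            return f.read().strip()
    except OSError:
        return None

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    cur_w = read_attr(dev, "current_link_width")
    max_w = read_attr(dev, "max_link_width")
    cur_s = read_attr(dev, "current_link_speed")
    max_s = read_attr(dev, "max_link_speed")
    if cur_w and max_w:
        print(f"{os.path.basename(dev)}: x{cur_w} (device max x{max_w}), "
              f"{cur_s} (max {max_s})")
```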

There’s information in the manual (pull the latest one from their website) about what happens to the PCIe slot lanes/speeds depending on which CPU you use, as well as what memory speeds you can run depending on how many sticks you’ve installed. For instance, some things are completely disabled when using the 4xxx-series Ryzen mobile processors. They’re not kidding about those configurations and the QVL-approved hardware lists: stray from the memory speed/slot configs, or try RAM or other hardware not on the list, and you may have a bad time. You’ll definitely have an unpredictable time.

I’d suggest getting a bootable Windows 10 install going on a USB drive, too. You’re going to need it for things like updating firmware on disks and LSI HBAs, and it’s easier to run heat-soak benchmarks there. Take a look at Win2USB. Yes, it’s 30 dollars, but it just works for this.

I got one of these fans to try, but I don’t think the Radeon Pro W5700 I have will fit over it. I’ll have to try later.

I wonder if having the fan hanging past the “bottom” edge of the heatsink (to the left in your pic) would still work? At the moment, I have a shorter GPU in place, and I wedged the fan between the two tall SATA pairs.

I nearly forgot that the fan was 5V, and went to connect it to a >5V fan speed switch I have in my case. But thankfully I remembered, so I connected it to the 5vsb pin of the aux panel connector.

EDIT: Tried the W5700, the Pi fan sure doesn’t fit. And jeez, that Pi fan is noisy!

Hey there @UnicodeFiend

Sorry I didn’t get around to replying earlier. The card I’m using is an MSI 2070 Super Trio X, which is incredibly obnoxious in terms of dimensions and unnecessary bits, especially in my case.

That said, it did manage to fit on top of the fan, even if I had to apply a bit of force. There is clearance there, however little.

And yeah, I agree, it’s rather loud, like most small fans are. That’s why I think a voltage-based fan controller is needed, like the one that comes stock with the Node 804 or one of the others available on the market.

Since you mentioned that it doesn’t fit, I assume you tried mounting it hanging past the bottom part. When researching this, I had an idea about mounting it differently: positioned in this little recess here, blowing air through the fins. Maybe worth a shot? Not sure how well it’ll be able to circulate air though.

The Radeon Pro W5700’s heatsink shroud assembly hangs down far enough that it’s barely two millimeters away from touching the SATA port stacks, so if anything, it would need a blower fan blowing through that gap. But since I don’t actually need that kind of horsepower in the machine, I just went back to one of the single-slot GPUs I was using before.

I have a Radeon Pro W6400, which was a mistake to buy (should’ve gone for a W6600), and an NVIDIA “we don’t use the name Quadro anymore” T1000. Both of those GPUs are short enough that I can just put a taller Noctua fan directly on the X570.

The next thing I’m hoping for is for this OpenBMC tree to get merged. This person is working on it, but I don’t know how far along they are in working with upstream to get it merged. I was able to build the first one, but I’m not daring enough to try flashing it, and booting it via TFTP left it stuck in the initramfs, because it expects to be able to mount the flash device.

I’ve also noticed that with the -2L2T board, a Ryzen 7 3700X, and the W6400, as well as two P4610 U.2 SSDs and two 8TB hard drives, the whole machine idles at something like 72 watts, even with PCIe ASPM enabled. The “monitor_cpu” tool from ZenStates seems to show that the CPU never enters the C6 state.
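For anyone who wants to check the same thing without ZenStates, the kernel exposes per-core idle-state residency in sysfs; here’s a rough sketch (state names and numbering vary by platform, so don’t assume the deepest state is literally labelled C6):

```python
# Show how much time CPU0 has spent in each cpuidle state. If the deeper
# states show ~0 residency, the core never really idles down, which would
# line up with a high idle power draw.
import glob
import os

for state in sorted(glob.glob("/sys/devices/system/cpu/cpu0/cpuidle/state*")):
    with open(os.path.join(state, "name")) as f:
        name = f.read().strip()
    with open(os.path.join(state, "time")) as f:
        usec = int(f.read().strip())   # cumulative residency in microseconds
    print(f"{os.path.basename(state)}  {name:<10}  {usec / 1e6:,.1f} s")
```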

That’s some amazing research there, and an exciting branch for OpenBMC indeed. Good to hear that you’ve got plenty of other options; 2mm is indeed not enough for basically anything, eh. I could fit shorter cards, but what’s the fun in that?

You’ve also inspired me to share my own power consumption notes. With a Ryzen 3700X, 4x16TB + 4x4TB HDDs, 2x250GB SATA SSDs, and 2x1TB M.2 NVMe SSDs, after extensive testing, idle comes in at around 120W, broken down roughly as follows:

  • CPU (3700X, energy-efficient mode): 15W, scaling up to 35W under load
  • IPMI: 10W
  • GPU (2070 Super): 30W
  • 4x4TB (Seagate IronWolf) HDD bank: 15W
  • 4x16TB (Seagate Exos) HDD bank: 20W
  • 2x1TB NVMe (Samsung 970 Evo): ~10W
  • 2x250GB SATA SSD (Samsung 870): ~5W
  • Other (fans, mobo, RAM, etc.): 15W

Generally aligns with your 90W figure, it seems, given the difference in storage.
But damn, I don’t like the 30W idle standby draw on the GPU; that could feed an entire RPi 5. I tried some driver magic to maybe bring it down with NVIDIA standby modes, but no luck so far.

My only issue is that, at a 0.12 kW draw, keeping this thing running 24/7 would cost me an astronomical £25/month. Hence I’ve moved most of my 24/7 stuff back to the RPi and now mostly fire this up during work hours.
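For reference, the back-of-envelope behind that number (the ~£0.29/kWh unit rate is just the value that makes 120 W round to £25/month, not a quoted tariff; plug in your own):

```python
# Back-of-envelope: what a 120 W (0.12 kW) 24/7 draw costs per month.
idle_kw = 0.12
kwh_per_month = idle_kw * 24 * 30      # ~86 kWh/month
rate_gbp_per_kwh = 0.29                # assumed unit rate; adjust for your tariff
print(f"{kwh_per_month:.0f} kWh/month -> £{kwh_per_month * rate_gbp_per_kwh:.2f}/month")
```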

In case anyone’s in the same boat - I wrote a 10-minute read on how to make the IPMI switch accessible in your Apple Home to toggle it with ease: Controlling home server over IPMI from an iPhone via Apple Home
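(Under the hood it presumably boils down to the standard ipmitool chassis power commands; here’s a minimal sketch with placeholder host/credentials, in case you’d rather script it directly than go through HomeKit:)

```python
# Minimal remote power control over IPMI using ipmitool (lanplus).
# BMC_HOST/BMC_USER/BMC_PASS are placeholders for your BMC's address and
# credentials; an Apple Home bridge would just wrap calls along these lines.
import subprocess
import sys

BMC_HOST, BMC_USER, BMC_PASS = "bmc.example.lan", "admin", "changeme"

def ipmi_power(action: str) -> str:
    """Run `ipmitool chassis power <action>`; action is status/on/off/soft."""
    return subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
         "-U", BMC_USER, "-P", BMC_PASS, "chassis", "power", action],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

if __name__ == "__main__":
    print(ipmi_power(sys.argv[1] if len(sys.argv) > 1 else "status"))
```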