Low-budget AM5 server board ASRock Rack EPYC4000D4U -- too focused and strange?

So @Exard3k is browsing his shopping list for the next server (or cluster node, to be more precise), and I saw this (relatively new) board from ASRock Rack for 300-350. Usually those ASRock Rack boards are 400-600ish, so this surprised me and grabbed my attention.

Outside of EPYC 4000 support, ECC validation, IPMI and the usual things, the strange MUX setup is probably the defining feature, along with the lack of on-board 10GbE or 25GbE and of any PCH/chipset at all. I had to check the block diagram to make sense of it. Very unorthodox, but interesting.

Looks very flexible at first sight. Wanna use M.2? Fine. Nah, let's use MCIO and U.3 instead? Sure, but then M.2 won't work, or the x16 slot downgrades to x8. Want to use both MCIO connectors at 8 lanes each? No M.2 and only an x8 slot for you!

Basically 24 CPU lanes of freedom?
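
To make the sharing concrete, here's how I currently read the block diagram, written up as a little Python sanity check. The group boundaries and names are my interpretation, not something ASRock spells out, so treat it as a sketch:

```python
# My reading of the lane sharing on the EPYC4000D4U -- the groups below are
# an assumption pieced together from the manual, not ASRock documentation.
MUX_GROUPS = {
    # the x16 slot and MCIO1 fight over the upper 8 PEG lanes
    "peg_upper_x8": {"x16_slot_at_x16", "mcio1_x8"},
    # MCIO2's two x4 links are borrowed from the two M.2 slots
    "m2_1_lanes": {"m2_1", "mcio2_port_a"},
    "m2_2_lanes": {"m2_2", "mcio2_port_b"},
}

def conflicts(wanted: set) -> list:
    """Return the mux groups in which the wanted devices collide."""
    return [name for name, members in MUX_GROUPS.items()
            if len(wanted & members) > 1]

# Full x16 slot plus an x8 device on MCIO1 can't happen at the same time:
print(conflicts({"x16_slot_at_x16", "mcio1_x8", "m2_1"}))  # ['peg_upper_x8']
# Both M.2 slots populated leaves nothing for MCIO2:
print(conflicts({"m2_1", "m2_2", "mcio2_port_a", "mcio2_port_b"}))
```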

Do you like that? Would you buy it?

I'm looking for a Ryzen, ECC, IPMI, 25GbE NIC, 2x HDD + 4x U.3 machine. Is this my board?

https://www.asrockrack.com/general/productdetail.asp?Model=EPYC4000D4U#Specifications

1 Like

I think this is about what I have been waiting for on the AM5 platform since its inception: the ability to flexibly use all PCIe lanes.

What isn't clear to me from the manual is whether the PCIe slot and the shared MCIO support bifurcation (into 4x4 or 2x4, respectively).

The manual says that the second MCIO is 2x4 lanes shared with both M.2 slots, and the MCIO lanes will work if the shared M.2 slot (either one) is not populated. Brilliant!

Where do you see the board for sale that lets you assess a 300-350 price tag?

It's still not total flexibility…like 6x M.2 or triple x8 slots. But it is way more than the usual x8/x8 split on the slot and the "don't need those 4x M.2? Wasted lanes, you sucker!" we are so used to.

I assume that MCIO1 is 8 lanes that can be bifurcated into 2x4 (as is common for MCIO 8i), while MCIO2 looks like it is mandatory 2x4 without the ability to connect an x8 device (like a riser strapping a GPU or NIC to it). I'm unsure about this myself. But both MCIO connectors are 8i; MCIO 4i connectors are narrower. I checked other boards in the past and they look different.
So common sense says you can just get two MCIO 8i cables and run 4x NVMe off them?

I checked Newegg for the US along with other Google hits, and Geizhals.at in the EU, and the price range varied between 300-350 + VAT. Availability in the EU is very, very thin among the usual retailers. New ASRock boards usually take 4-5 months to trickle down into retail channels, and this one is from Nov '24. CTT in Germany (wholesaler, B2B-exclusive) has them in stock, but I don't have a business account, so I can't check or buy from them.

The only problem I have is…there is no port/slot left for a boot drive (only 2x SATA), so I'd probably have to use an external USB drive. Very tight config indeed. And there's that MCIO2 2x4 uncertainty. Then get a ConnectX-4 for the PCIe slot and you're ready to go.

edit: Found a Dutch site that has it in stock. I had Dutch in school (wasn't my best grade), and it states shipment within 4-7 business days, so it is sitting on-demand in some wholesaler's warehouse.

1 Like

You’re correct. AGESA supports PEG x8/x4/x4 and ASRock calls MCIO2 x4x4 because there’s not an x8 PHY behind it. FWIW, every AM5 desktop board I’ve checked has had the x8/x4/x4 option and it wouldn’t make sense for Rack to cripple it out of the EPYC4000D4U.

M2_1 and M2_2 should both be bootable so I’d expect the same for either drive on MCIO2. Not as sure about MCIO1 but, as X870E boards are unrestricted, AGESA should support either drive there as well. But there might be a BIOS limitation as ASRock doesn’t have an X870E with M.2s switched out of PEG.

1 Like

The board is featured in ServeTheHome's latest video about the new EPYC 4005 (Zen 5 Ryzen), if you want a more audio-visual look.

I'm about to pull the trigger on this board, but the announcement of the EPYC 4005 kinda makes me want to wait…

1 Like

This board is so close to being great – and then they messed it up by wasting the 4 “chipset lanes”! From the manual:

  • PCIe lane 20: ASMedia 1061, a Gen2(!) dual SATA controller! So this doesn’t even give full SATA bandwidth on the meager two ports, at 500 MB/s total.
  • PCIe lanes 21 & 22: each goes to an I210-AT 1 Gbit/s NIC. :facepalm:
  • PCIe lane 23: AST2600 BMC.

So anyone using this will pretty much be forced to put a NIC in the single PCIe slot, leaving no good way of getting more SATA ports.

Imagine if they had used PCIe 20-21 for an ASMedia 1164 instead, giving 4x the SATA bandwidth over 4 ports, and then used a better NIC on PCIe 22, giving at least 2.5 Gbit/s (or preferably even 10 Gbit/s).
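
Rough numbers for that comparison, assuming the usual theoretical per-lane figures (ASM1061 = Gen2 x1 uplink with 2 ports, ASM1164 = Gen3 x2 uplink with 4 ports):

```python
# Back-of-the-envelope SATA uplink math for the two controllers mentioned
# above; figures are theoretical link rates after encoding overhead.
def pcie_lane_mbs(gen: int) -> float:
    """Approximate usable MB/s per PCIe lane after encoding overhead."""
    payload_bits = {2: 5e9 * 8 / 10, 3: 8e9 * 128 / 130}
    return payload_bits[gen] / 8 / 1e6

asm1061_uplink = 1 * pcie_lane_mbs(2)     # Gen2 x1, shared by 2 SATA ports
asm1164_uplink = 2 * pcie_lane_mbs(3)     # Gen3 x2, shared by 4 SATA ports
sata3_per_port = 6e9 * 8 / 10 / 8 / 1e6   # SATA 6 Gb/s after 8b/10b

print(f"ASM1061 uplink: {asm1061_uplink:.0f} MB/s for 2 ports")  # ~500
print(f"ASM1164 uplink: {asm1164_uplink:.0f} MB/s for 4 ports")  # ~1969
print(f"one SATA3 port: {sata3_per_port:.0f} MB/s")              # 600
```

So the two ASM1061 ports share ~500 MB/s, less than a single SATA SSD can pull, while the ASM1164's uplink would give roughly four times that across four ports.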

1 Like

I assume that ditching a proper chipset and rewiring stuff with cheap "basics" is just a cost-saving measure. You could do more with those lanes, but that would also add a +50-100 bucks price tag for the stuff that comes with it (chipset M.2 ports, a PCIe x4 slot, that kind of stuff). I think I know why they did it.

2x SATA is nothing to write home about, and PCIe 2.0 x1 even less so. But it's better than having to decide between 8x SATA or NVMe on the MCIO 8i, as is common for modern server boards. So you get the MCIO and still have 2x dedicated SATA that are just fine for HDDs or boot SSD(s).

That's pretty much the intention. It's not a board designed for a server with a GPU or HBA (unless you really love being stuck with a 1 Gbit connection on a server :wink: ).
The slot is for a 25GbE (x8) or 100GbE (x16) NIC.

2.5G is a pure consumer thing. 10G chips are all x4 or x8, so you'd need some chipset/switch/whatever to make that fly. ASRock has 10 or 25 Gbit on pretty much every other board. Hardwiring 8 CPU lanes to 10G/25G kinda defeats the purpose of "24 CPU lanes of freedom".

Yes, I don't like having only 2x SATA on a Gen2 lane, and I don't like having no spare M.2 slot for a boot drive. And maybe I'd want some x4 chipset slot or a passive x8 slot so you can run two cards at x8/x8.

But I also kinda don't want to spend an extra 200-300 bucks for all those niceties. The 350 price tag is what is so great. With some cheap eBay 25/100 Gbit NIC, that's really competitive. The Gigabyte AM5 server board Wendell had in his EPYC 4005 video is 750, and the ASRock Rack B650D4U-2L2Q isn't that much off.

TL;DR:
Yeah, we don't get all 28 lanes. Four are "wasted" on stuff you could do more efficiently (but also at more cost). But the board is purposely stripped down (no chipset) and missing a bunch of things to achieve a low price tag. It's certainly not an all-purpose board that fits all builds.

I hope we see more of these MUX approaches in future boards, along with more MCIO for AM5.

I'm still waiting for retail prices on the EPYC 4465P and will probably order the board along with a CPU (4464P, 7900X, 9900X or the new 4465P; not decided yet. Your thoughts on CPU?)

1 Like

I'm not saying they should add a chipset and make this into just another AM5 board. But I wish they had used the available bandwidth of those four lanes better. Adding two SATA connectors, switching the ASMedia 1061 to a 1164, and removing one NIC couldn't have increased the BOM by a lot (I imagine).

I agree a faster NIC is perhaps more difficult to fit in given the single available lane and the lack of suitable NIC chips.


As for CPU, as a homelabber I don’t see the point of the EPYC 4xxx series (compared to Ryzens). Maybe there’s some actual difference that does matter?

I think what’s neat about this board is how they expose almost all CPU PCIe lanes.

I personally have moved on from SATA in my builds, but I would expect dropping it entirely to be one step too far for any product manager building a motherboard in 2025. That means they have to sacrifice at least one lane of bandwidth to SATA.

Same argument here: you cannot offer a motherboard without NICs in 2025. There is an argument to be made that a single sacrificial NIC would have been sufficient (only one PCIe lane wasted). Or, as you say, an upgrade to a more "modern" 2.5 GbE NIC, but that increases cost.

While none of my builds has a BMC, these are a staple and typically expected by the target customer base. And how would you connect a BMC without using a PCIe lane? Not an option.

That said - I can imagine a more radical design that offers another x4 MCIO connector behind another PCIe switch, letting you forgo SATA, the 1 GbE NICs and the BMC for the benefit of more PCIe lanes - but that would be quite a radical design :slight_smile:

Yeah, I don't think we're that far yet. I can make use of them, either for HDDs or for a boot SATA SSD (all my old SATA SSDs serve as boot drives or are reserved to do so in the future). They just don't cost 4x PCIe lanes here. I'll take it.

I wouldn't buy a board without dual 1G NICs. I want physically separated connections for the WebUI and internal cluster stuff. VMs, containers and the like will go over the high-bandwidth NICs.

I fell in love with the ASPEED AST2500 when I got my X570 board 3 years ago. It's just great utility and so useful. The AST2600 is newer and uses way less power (at least I've been told so).

So without a chipset and with a more energy-efficient IPMI/BMC, a homelabber can probably squeeze a few watts out of this kind of server. I will certainly do the comparison if I decide to go with this board.

EPYC4000EX28 (Extreme-X28 edition). I wouldn't go that far personally, because I would totally plug more U.3 into it, and then I'd have to upgrade CPU and RAM too…too expensive :wink:

I tend towards just getting a 7900X. I want this to be good price/performance, not to overbuild it. Cheap board, good 12 cores, 64 GB of 5600 MT/s ECC. Up for the job.

Interesting. I almost wrote “Who needs dual 1 GbE in 2025?!” in my previous post. :smiley: You don’t use VLANs?

Aquantia's upper line (acquired by Marvell) is x1 and, if the boards I've looked at are anything to go by, implementing dual 10 GbE with a couple of 4.0 lanes is increasingly common. Some of those boards are at the same price point, so the EPYC4000D4U is underspecced here.

Yeah, seeing as there are unpopulated lands for two more SATA connectors on the board. An ASM1166 might also have fit with some layout tweaks.

I'm aware of, but unconvinced by, the arguments that EPYC 4005 is more than a Granite Ridge rebadge. Probably all the 4465P really gets you is an upcharge for a 65 W TDP, if ASRock has crippled the 9900X's 65 W eco mode out of the BIOS.

Personally I find Granite Ridge has enough over Raphael that I'm willing to pay the extra, but for most things there's little difference, and Raphael's different boost profile might offer a bit more energy efficiency in server use despite the larger node. I don't know of good data there.

Subnetting and a lot of cables worked just fine in the past. Maybe I'm a bit too old-school or lazy on the network side of things.

No matter what CPU, it will run at a 65 W TDP, either from the factory or by me setting ECO mode on first boot. I can't (reasonably) cool a 170 W TDP in a 2U form factor, nor am I fond of the noise and power bill associated with it.
A 65 W TDP doesn't really sacrifice much in a server context compared to the gains, and it served me very well on my 5900X.

They did? Is this a new market-segmentation thing for Granite Ridge? My desktop is ASRock + 7900X and I got all the PPT/EDC/TDC jazz along with 65 W and 105 W ECO modes available. The board sucks, but I never had a complaint about BIOS options.

Interesting. I was under the impression that the 4 nm update brought some more power efficiency as well.

I probably can't make much use of AVX-512 and the new ISA stuff anyway, because I'm migrating VMs with a Zen 3 machine one rack unit below. And a homelabber doesn't need SME (Secure Memory Encryption). I need threads and multi-core performance: 20+ threads on a budget with a low power profile, not some 1-4 core gaming boost behavior. And the 16-core SKUs, being the top SKUs, were never that attractive in price to me.

That's why I said "if". It's an option on the desktop board I have a 9900X in, but that doesn't guarantee Rack BIOSes support it. I haven't seen anybody confirm 65 W is actually an option there. It'd be weird if it wasn't, but stranger things have happened.

Yup. But in what I've seen it's not uncommon for Raphael to bench as the more efficient of the two, though there's not a lot of data. Granite Ridge's architectural advantage over Raphael is appreciable, but a good part of the difference is also Granite pulling more boost power.

In my compute-intensive workloads, both Raphael and Granite Ridge bench higher on what's effectively an AVX10/256 profile than on an AVX10/512-ish one. Four 256-bit ports tend to schedule more efficiently than two 512s, and the zmm register count and some of the AVX-512VL instructions help slightly compared to plain FMA.

Well, I mean, you don’t need multiple NICs to use multiple subnets; you should be able to just add multiple IPs to the same NIC? Or do you use separate physical switches, one for each subnet, for separation? (The latter would be pretty cool actually if I had the space and didn’t mind the power consumption.)
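
For illustration, a minimal sketch of both approaches on a Linux box with iproute2 (the interface name, addresses and VLAN ID are placeholders I made up; run as root):

```python
# Two ways to put one physical NIC on multiple subnets; pick whichever fits.
import subprocess

def sh(*args: str) -> None:
    """Run an iproute2 command and fail loudly if it errors."""
    subprocess.run(["ip", *args], check=True)

NIC = "enp1s0"  # placeholder interface name

# Option 1: just add addresses from two subnets to the same NIC.
sh("addr", "add", "192.168.10.2/24", "dev", NIC)
sh("addr", "add", "192.168.20.2/24", "dev", NIC)

# Option 2: a tagged VLAN subinterface, if the switch does 802.1Q.
sh("link", "add", "link", NIC, "name", f"{NIC}.30", "type", "vlan", "id", "30")
sh("addr", "add", "192.168.30.2/24", "dev", f"{NIC}.30")
sh("link", "set", f"{NIC}.30", "up")
```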

The board arrived today. Not vaporware or unobtainium, just ordered from a random-ass German retailer. Paid 305 + tax. I still lack most of the other stuff (not fully decided on CPU yet), but I will certainly put that beauty to work in the coming weeks and post progress in my build log thread.

3 Likes