Thoughts on my AM5 server build

I’m planning to build a new home server. It will run Proxmox with a ZFS storage pool. The ZFS pool (mirror vdevs) will be used for file sharing over SMB, backups and media storage. I will run a single node Kubernetes (K3s) cluster for all my services (Jellyfin, Home Assistant, git server, etc.), a CI runner, game servers and a few VMs for software development.

Here is the hardware I’m planning to use:

For reference, I’m in Germany, so consider that when looking at the prices.

I went with the ASUS Prime X670-P mainly because it has ECC support and one PCIe x16 slot and two x4 slots, while most mainboards only have one additional x4 slot. This should allow for more future expansion (GPU, HBA, NIC).

The Kingston 4800MT/s CL40 (KSM48E40BD8KM-32HM) is the only ECC UDIMM on my mainboard’s QVL. There is also a 5600MT/s CL46 variant (KSM56E46BD8KM-32HA) which is only €10 more expensive but isn’t on the QVL. According to the model number, the only difference between the two (other than the speed/timings of course) should be that one uses Hynix M-Die while the other uses A-Die. Should I get the faster DIMMs or follow the QVL?

While 64 GB of RAM should be enough for now, I might want more RAM in the future. This video recommends running only two DIMMs on AM5, though. The problem is that the largest ECC UDIMMs I can find are the Kingston Server Premier 48 GB 5600MT/s CL46 (KSM56E46BD8KM-48HM), which cost €100 more per DIMM than the 32 GB ones and are also not on the QVL. Is it possible to reliably run 4 DIMMs at 4800MT/s in the future if needed, or should I pay the €200 extra to get 32 GB more RAM with two DIMMs now?
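To make the tradeoff a bit more concrete, here is the rough cost-per-GB sketch I used. Only the ~€100-per-DIMM delta for the 48 GB modules comes from the prices above; the base price for a 32 GB DIMM is a placeholder, so treat the absolute €/GB numbers as illustrative:

```python
# Rough cost-per-GB comparison of the capacity paths discussed above.
# BASE_32GB_DIMM is a placeholder; only the ~100 EUR/DIMM delta for the
# 48 GB modules comes from the quoted prices, so the absolute EUR/GB
# figures are illustrative, not real offers.

BASE_32GB_DIMM = 130.0   # assumed price of one 32 GB ECC UDIMM (EUR)
DELTA_48GB_DIMM = 100.0  # 48 GB DIMMs cost ~100 EUR more per DIMM

paths = {
    "2x 32 GB now (64 GB)":          (2 * BASE_32GB_DIMM, 64),
    "2x 48 GB now (96 GB)":          (2 * (BASE_32GB_DIMM + DELTA_48GB_DIMM), 96),
    "4x 32 GB later (128 GB, slow)": (4 * BASE_32GB_DIMM, 128),
}

for name, (cost, capacity_gb) in paths.items():
    print(f"{name}: {cost:.0f} EUR total, {cost / capacity_gb:.2f} EUR/GB")
```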

I also considered the new 9950X, but the 10-20% performance increase does not warrant the 35% higher price (€655). However, the review of the 9950X says that it also improves support for running 4 DIMMs. Would it make sense to get the 9950X to better support a potential RAM upgrade in the future, while also having better performance and power efficiency?

I’m also not quite sure if the cooling for the CPU is sufficient, but the NH-D12L is the largest cooler I could find that fits in a 4U enclosure. According to Noctua’s compatibility list it has “medium turbo/overclocking headroom” for the 7950X, and since I’m not planning to do any overclocking, it should be fine. I can also add two more 120mm fans in front of the drive bays if necessary.


AMD recently released the EPYC 4000 series, bringing 4th gen EPYC to the AM5 socket. The lowest tier is just 150 USD (quad-core territory, so really basic), but in essence the higher tiers can hold up against Ryzen 7000, mainly as their boost clocks reach 5.7GHz (base 3.7GHz). In your case, the 12c/24t EPYC 4464P model is worth considering.

Availability and pricing in Germany: no idea, sorry. :person_facepalming:

I looked into the Epyc 4000 series, but according to the video covering the launch, they are just rebranded versions of their Ryzen 7000 counterparts with additional enterprise assurances and support. The Epyc equivalent to the Ryzen 9 7950X is the Epyc 4564P. The cheapest I could find it here in Germany is €730 compared to €480 for the Ryzen. I don’t think that price premium for enterprise support is worth it for a home server.

  • CPU - You might want to consider Ryzen 9900X or even 7900 because of the lower TDP
  • RAM - Crucial (Micron) MTC20C2085S1EC48BR also works well (even in 4 DIMM setup but speeds will be at Ryzen official specs)
  • Motherboard - I would highly recommend that you get one with an Intel NIC and an 8-layer PCB
    Such as: Asus ROG Strix X670E-A Gaming WiFi or Asus ROG Strix X670E-F Gaming WiFi
  • SSD/NVMe - I’d go for Crucial or Solidigm instead, as there have been reports of WD NVMe drives acting up with ZFS. Crucial T500 or Solidigm P44 Pro, depending on what’s cheapest
  • HDD - I would by far go for the Toshiba MG-series instead; they simply seem to be a more solid/reliable choice. Pricing is about the same if you look at the MG08ACA16TE
  • Cooler - That’s right at max height so clearance might be an issue. I would also be a bit concerned about TDP, but Noctua says it’s fine so… I do think it’s quite expensive for what it is in general, but given the height requirements you don’t have many models available
  • Get an ATX 3.0 PSU, the Acer Predator GX850 seems to be a very good bang-for-the-buck option in general at the moment… (114 EUR)
  • Noctua fans are fine but quite expensive, Scythe and Akasa SC series might be viable options to shave off some EUR

The rack stuff is probably fine, although I would personally just go for a Fractal Design R-series / XL-series tower instead unless you’re dead set on a rack.

Personally, I run a 7900 with 64 GB of RAM in a setup pretty close to yours (not rack mounted). I didn’t see the benefit of a 170W CPU when the 65W CPU gives me about 80 to 90% of the value, especially since my CPU rarely sees more than 30% load. Save on the CPU and electricity bill now, and maybe upgrade to a more capable CPU in the next generation (granted that it will be on the AM5 platform).

I don’t know your requirements, but if those 4 extra cores / 8 threads are a must and you don’t need the functionality the EPYC variant provides, then go for the 7950X.

Just my 2 cents.

For a VM server build, you need a NIC with SR-IOV support. Get an X550-T2 10Gb NIC or equivalent.
Not all PCIe slots are equal. You want a PCIe slot that is directly connected to the CPU to enable SR-IOV and VFIO. Thus, for the motherboard, I recommend the ASUS ProArt B650 Creator. It gives you 2x PCIe x8 from the CPU.
For the memory, get 2x 48GB instead. You won’t regret it. Also, you don’t need ECC.

@diizzy Thanks for the suggestions!

I’m not too concerned with power draw because I have solar power, which means for most of the year electricity only costs about 8ct/kWh. But lower power draw is always better, of course. Looking at benchmarks, the 7900 would have a significant performance penalty in demanding tasks (code compilation, game servers, etc.) while the idle power consumption shouldn’t be much lower than that of the 7950X. The 9900X is interesting, though. It seems it is very close to the 7950X in benchmarks even with the lower core count while drawing significantly less power and at the same price. The only thing I’m concerned about is the lower core count for virtualization. I’m not too familiar with how KVM handles scheduling, but I would imagine that a lower core count introduces more overhead for context switches between VMs. But I don’t know how big of a difference that makes in practice.
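To put the power argument in numbers, here is the back-of-the-envelope math I’m using. Only the ~8ct/kWh solar rate is my actual figure; the wattage deltas and the grid price are assumptions:

```python
# Annual cost of a few extra watts of idle draw, assuming 24/7 operation.
# Only the ~0.08 EUR/kWh solar rate is from my situation; the grid rate
# and the wattage deltas are made-up examples.

HOURS_PER_YEAR = 24 * 365

def annual_cost_eur(avg_watts: float, eur_per_kwh: float) -> float:
    return avg_watts / 1000 * HOURS_PER_YEAR * eur_per_kwh

for delta_w in (5, 10, 20):  # hypothetical idle-draw difference between CPUs
    solar = annual_cost_eur(delta_w, 0.08)
    grid = annual_cost_eur(delta_w, 0.35)  # assumed grid price for contrast
    print(f"{delta_w:>2} W extra: {solar:5.2f} EUR/yr on solar, {grid:5.2f} EUR/yr on grid")
```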

The Crucial and Kingston DIMMs cost the same. I went with Kingston because they are on the QVL for my mainboard, but if the Crucial ones are better in any way, I would go with those.

For the mainboard I found the ASUS ROG Strix B650E-E Gaming WIFI with an Intel NIC. I couldn’t find how many PCB layers it has, but the B650E-F seems to have 8 layers, so I would imagine this is also true for this one. At €280 it is cheaper than the X670E variants, and it has an additional PCIe x4 slot connected directly to the CPU. Correct me if I’m wrong, but I don’t think the X670E chipset provides any benefits for my use case.

I don’t plan to use ZFS for the SSD because I don’t know of any benefits ZFS provides over LVM-thin for a single drive while having worse snapshot support. But the Crucial T500 is the same price, so I’ll go with that one in case I ever want to use it for ZFS in the future.

The best source for HDD reliability data I know of is the drive stats report by Backblaze which shows slightly lower failure rates for the Seagate ST16000NM001G than the Toshiba MG08ACA16TE while also being cheaper. The Toshiba drives have a smaller (but still decent) sample size and have a little higher average drive age, but I don’t see anything in the report that would indicate that the Toshiba drives are more reliable.
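For anyone who wants to sanity-check drive comparisons themselves, the annualized failure rate Backblaze reports is essentially failures per drive-year. A minimal sketch with made-up counts:

```python
# Annualized failure rate (AFR) the way the Backblaze drive-stats reports
# compute it: failures divided by drive-years (drive-days / 365).
# The counts below are fictional, not taken from any report.

def afr_percent(failures: int, drive_days: int) -> float:
    drive_years = drive_days / 365
    return failures / drive_years * 100

fleet = {
    "Seagate ST16000NM001G": (30, 2_500_000),  # (failures, drive-days), fictional
    "Toshiba MG08ACA16TE":   (10,   700_000),  # fictional
}

for model, (failures, drive_days) in fleet.items():
    print(f"{model}: {afr_percent(failures, drive_days):.2f}% AFR "
          f"over {drive_days / 365:.0f} drive-years")
```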

Having an ATX 3.0 PSU is probably a good idea, I didn’t think of that. I’ll go with the Acer Predator GX850 you suggested.

I really want to rack mount the server because I already have a NAS and a server in a desktop case on a shelf and with the UPS, switch, cables, etc. it is very unorganized. I want to move as much as possible into the rack to have it better organized. This also allows me to properly mount any rack mountable hardware I get in the future.

As far as I know, SR-IOV is only necessary if I want to directly pass through the NIC to the VMs. I only have Gigabit networking anyway, so it should be perfectly fine to run a virtual bridge to connect the VMs to the network. As for the PCIe layout, the ASUS ROG Strix B650E-E Gaming WIFI, which I will probably use now, also has two PCIe slots connected directly to the CPU (one is only x4 though) in case I need them for a SR-IOV capable device in the future.

I know ECC isn’t required, but it is widely recommended for server builds to ensure data integrity. I’m planning to run this server for a long time, and it stores all my important data (with offsite backups) which is why I think ECC does make sense here.
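Once it’s running, I also plan to verify that ECC is actually active rather than just installed. A minimal sketch reading the kernel’s EDAC sysfs counters (assuming the EDAC driver loads on this board at all):

```python
# Check whether the kernel's EDAC driver exposes a memory controller and
# read its ECC error counters. If no controller shows up, ECC is most
# likely not active, even if ECC DIMMs are installed.

from pathlib import Path

EDAC_MC = Path("/sys/devices/system/edac/mc")

def report_ecc_counters() -> None:
    if not EDAC_MC.is_dir():
        print("No EDAC sysfs directory - EDAC driver not loaded?")
        return
    controllers = sorted(EDAC_MC.glob("mc*"))
    if not controllers:
        print("No EDAC memory controller found - ECC probably not active")
        return
    for mc in controllers:
        ce = (mc / "ce_count").read_text().strip()  # corrected errors
        ue = (mc / "ue_count").read_text().strip()  # uncorrectable errors
        print(f"{mc.name}: corrected={ce} uncorrectable={ue}")

if __name__ == "__main__":
    report_ecc_counters()
```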

There’s a whole thread for that. TL;DR: with a matched quad kit, usually yes for Zen 4, but some rigs don’t get that high. Too early to tell for Zen 5.

The present UDIMM upper bounds are 2x48 GB at DDR5-5600+ or 128+ GB at DDR5-YMMV. If you buy a dual kit now and quad it out with another dual kit later, where the YMMV lands will depend on the silicon lottery and how much time you want to spend optimizing timings across the two dual kits. If labor’s much of a consideration, there’s a decent chance rebuying capacity as a quad kit or a larger dual kit will be cheaper.

The homelab value proposition for DDR5 EC4 UDIMMs is fairly niche, so I mostly agree with @jxdking. I’m doing a 9900X 2x48 non-ECC build right now, but it’s a desktop.

Noctua’s marketing rating is more delusional than usual here, but even if you don’t mind the noise of maxing a D12L out and don’t have high ambients, it’ll likely still struggle to support a 7950X’s ~230 W stock. In practice a loop’s needed for PBO into the 270-330 W range the 7950X is capable of, as AIOs have a hard time hitting 270.

4U compatible AM5 (and AM4 and LGA1700) air coolers are unfortunately limited, so far as I know and as you’ve probably noticed. If you can get to a case that can manage 154 mm then a Phantom Spirit 120 SE will just fit. Otherwise a Peerless Assassin 120 Mini’s probably a bit more capable than an NH-D12L. Coolleo’s FF135 might be worth a look but it seems probable Thermalright would have a better base to AM5 match.

There are also several 120s released in the past few years which generally outperform the NF-A12x25 and cost less, plus the Gentle Typhoon and a number of others which outperform it at the higher speeds that seem to be under consideration here.

I think all of ASRock’s AM5 boards support ECC, certainly all the ones I’ve checked, though not all of them have AGESA 1.2.0.2 updates yet. Three or four M.2s is routine for B650 ATX boards and X670s often have four or five. Personally I avoid Asus due to Armoury Crate and a number of other reasons. Pricing I can get on the Prime X670-P’s about the same as the X670E Steel Legend.

Any vaguely current (m)ATX board will have an x16 PEG. Can’t think of any ITX ones offhand that don’t either.

I wouldn’t. Consider shortlisting PSUs with evaluation data, or ones that at least have open test results. There’s not a GPU here so I don’t see a concern with an RM850x 2021. For ATX 3.x (3.1’s actually potentially a downgrade from 3.0) the RM850x Shifts are quite fine, and Corsair released RMx 2024s last week.

+1

Initially, with 32GB DIMMs the additional cost for ECC was approximately €100 which I was willing to pay for better reliability. Considering the problems with running 4 DIMMs, 48GB DIMMs are probably a better idea. The extra €200 for ECC in that case is much harder to justify, so I’ll probably go with non-ECC DIMMs. Which DIMMs are you using? I found a Corsair DDR5-6400 CL32 Kit (CMK96GX5M2B6400C32) for €340 which I should be able to run at 6000MT/s (or maybe 5600 for higher stability).

After @diizzy mentioned the Intel NIC, I looked into it a little more, and it seems Realtek NICs can be unreliable with Linux. All ASRock boards seem to have Realtek NICs (apart from the very expensive server boards). The Armoury Crate software isn’t a concern to me, since I will only be running Linux anyway. What are the other reasons to avoid Asus? In any case, the only other option below €300 seems to be Gigabyte.

After looking at a few reviews, it seems like a great option for almost a third of the price of the NH-D12L, so I’ll get this one instead.

Noise isn’t a concern for me because the server will be in the basement. I’m also not running any workloads that max out all cores for long periods of time, so I think the cooler will be able to handle the load most of the time.

I found the Phanteks T30-120 which outperforms the Noctua NF-A12x25 and can go up to 3000 RPM. The extra 5mm of thickness also shouldn’t be a problem. It is the same price as the Noctua, though. It seems the Scythe Gentle Typhoon you suggested isn’t available in Germany. Are there any better fans than the T30 at a lower price?

The Cooler Master GX II Gold 850W ATX 3.0 seems like a good option for €120. It has the best efficiency at light loads, and this build should draw a lot less than 850W.
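For reference, what I actually care about is wall draw at low load, not the 850 W label. A tiny sketch with assumed loads and efficiencies (the real light-load numbers vary per unit and review):

```python
# Wall draw for a given DC load at an assumed efficiency. Light-load
# efficiency (well below 20% of 850 W) is where this server will sit most
# of the time; the load and efficiency figures below are assumptions.

def wall_draw_watts(dc_load_w: float, efficiency: float) -> float:
    return dc_load_w / efficiency

for dc_load in (40, 60, 100):       # plausible idle-ish DC loads
    for eff in (0.80, 0.87):        # assumed light-load efficiencies
        print(f"{dc_load:>3} W load @ {eff:.0%} -> "
              f"{wall_draw_watts(dc_load, eff):.0f} W at the wall")
```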


It’s not the overall power draw that’s a concern, and most of the services you mention will use little to no CPU time at all, so you’ll likely see less of a difference in performance than you expect from looking at benchmarks.

That motherboard should be fine; however, do have a look at PCIe lane sharing on B650E motherboards (check the manual).

That being said, Asus doesn’t list ECC support for most 7000-series CPUs but does for others; however, this is very likely just a layout issue on their global website.

As for 4 DIMMs, it works, at least on a decent motherboard. What lemma seems to forget is that we’re talking about ECC memory, which is more sensitive / reliably detects bitflips and whatnot.

You can disable Armoury Crate (since at least a year back by now), so it’s not an issue, and there have yet(?) to be documented real-world tests showing that ECC actually works on ASRock boards, so feel free to test and report back.

There’s also quite a bit more that ATX 3.0 covers which lemma seems to have forgotten to mention: ATX 3.0 Power Supplies Explained!

That being said, the Acer PSU is SFX so it’s a no go :-/ (I just looked at one etailer who listed it incorrectly as ATX)
The FSP VITA GM 850W (from €113.97 on Geizhals Deutschland) is an alternative that’s ATX-sized and even ATX 3.1.

Another point worth noting is that Toshiba are known to not have bogus SMART data entries, whereas Seagate have undocumented and sometimes very strange values for “standard entries”.

Just started tuning CMK96GX5M2B6600C32 for 5600 today (it was the same price as 6400). The mobo my build’s on doesn’t have an AGESA 1.2.0.2 BIOS yet, so I opted to set up timings for the 9900X’s supported upper bound and then see about going higher. The 6400’s probably similar to the 6600 in being Hynix M die and having JEDEC 4800 and XMP 6400 profiles with nothing in between.

I’m not aware of any 120 with greater noise-normalized airflow. Lian Li’s P28 ties the T30 within margin of test error, though, and is usually around half the price here. Anecdotally, it also seems the P28 impeller’s fairly good at avoiding some of the T30’s more problematic load noises.

One of them is that, at least in my AM4 experience, some BIOS-level settings are only in Armoury Crate. Not sure if it works through Wine, though Armoury’s enough of a tangle I’d guess probably not, so you might have to dual boot Windows 11 to change certain things. Another’s that, while Asus’ Armoury autoinstall out of the BIOS should fail on Linux, it’s possible (I suspect probable) that’s a pwnable pathway even when turned off in the BIOS.

More generally, Asus has been notably predatory the past few years. Issues surrounding that finally boiled over about a year ago (so there’s quite a bit online about it) and, while there appears to be some effort at reform since, it looks like Asus has a ways to go. I stopped using their boards because Armoury doesn’t plausibly meet our security requirements. But also pricing was high, quality was relatively low, and I was seeing caution signals around declining support. Can’t comment on AM5 quality but the pricing I can access has mostly kept rising.

My history with Asus mobos goes back about 30 years but, y’know, lately not so much.

Just a word of advice: if you’re looking for stability and reliability you want ECC memory, and even if you go without ECC there’s very little to no point in going way out of spec unless you only use it for games and need a few extra fps, with the possible tradeoff of data corruption and other fun stuff. You’re being led astray…

Specs say 5600 tops (for the 9000 series), but again, if you plan on going out of spec you might as well scale down on the other hardware in terms of quality, since it’s not a priority to begin with. Just follow JEDEC specs and you’ll be fine.

I have no idea about the point of the Asus rant here; US/NA != EU (which is where most of the controversy seems to come from), and again, much of the other stuff is speculation at best or plainly incorrect. You can disable Armoury Crate in the BIOS, and it’s been like that for years: [Motherboard] How to disable automatic download of Armoury Crate? | Official Support | ASUS Global

When I’m basically idle, which means only my services on Kubernetes are running and not much else, there is no point in having a 7950X/9900X of course. But from an efficiency standpoint, it also shouldn’t be much worse than with a 7900 because at this load both should have comparable power draw (especially the 9900X). But when I am running heavier workloads like game servers, code compilation, software transcodes etc. the 7950X/9900X should be a lot faster as seen in the benchmarks for these workloads (and draw a lot more power).

The manual lists support for ECC and non-ECC UDIMMs, and the QVL also includes the Kingston ECC UDIMMs for Ryzen 7000.

The main PCIe x16 slot will only run at x8 if the third M.2 slot which is directly connected to the CPU is used. That shouldn’t be a problem, since I will probably never run more than 2 or 3 drives anyway (and probably also won’t use the full x16).

I would like to have ECC, of course. When looking at 2x 32GB DIMMs the price difference is about €100, which is reasonable. However, I am not sure if 64 GB will be sufficient, especially in the long run. My original plan was to just add two additional DIMMs when needed. This recent video still recommends running only two DIMMs with ECC memory if you want a reliable system. Additionally, with 2 DPC I will probably be limited to 3600MT/s. While memory speed seems to be a lot less important for Zen 4, 3600MT/s is a lot slower than 4800MT/s, which is the lowest speed I have found benchmarks for. If I stick with two DIMMs and want more than 64 GB of memory, my only option is 2x 48GB DIMMs, where the price difference for ECC is €200. At that point, I’m not sure if the additional reliability justifies the price difference over running a high-quality non-ECC 6000+MT/s kit at the officially supported 5600 JEDEC speed (or maybe at a slight overclock at 6000MT/s).
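For a sense of scale, the theoretical peak bandwidth gap between those speeds is easy to compute for a dual-channel setup; the real-world impact is smaller, as the benchmarks suggest, but this is the raw difference:

```python
# Theoretical peak bandwidth for dual-channel DDR5 (2x 64-bit channels,
# i.e. 16 bytes per transfer). Real workloads see far less than peak, but
# it shows the raw gap between 3600 and the faster speeds.

BYTES_PER_TRANSFER = 16  # dual channel x 64 bit

for mts in (3600, 4800, 5600, 6000):
    peak_gb_s = mts * 1e6 * BYTES_PER_TRANSFER / 1e9
    print(f"DDR5-{mts}: {peak_gb_s:.1f} GB/s theoretical peak")
```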

All 4-DIMM setups are certified at 3600; it works just fine. If theoretical numbers matter, I guess Threadripper is your next step. Just follow JEDEC specs; it’ll save you a lot of potential headaches further down the road.

If you want to do without ECC and still be within JEDEC specs

I would assume there are reasons why Wendell strongly recommends not running 4 DIMMs in the video I linked. But even if they run without any problems, it will be at 3600MT/s. From the benchmarks I have found for Zen 4 the performance difference between 4800MT/s CL42 and 6000MT/s CL30 isn’t that big (especially for non-gaming benchmarks). I haven’t found any numbers for 3600MT/s, but would assume that comes with a significant performance hit. If you know of anyone who tested a configuration like that, please let me know.

I think I’ll either go with 2x 32 GB Kingston ECC DIMMs or 2x 48GB non-ECC DIMMs. I have to think about whether the additional price and smaller capacity is worth it for ECC.

Given that it works for multiple people here, including myself, and there’s no source for this claim, I’d take it with a grain of salt. I wouldn’t be surprised if there are tighter tolerances on 6-layer PCB motherboards with 4 DIMMs compared to 8-layer ones, and some might simply have better electrical design overall with traces etc.

Take a look at this article from Puget Systems.
They provide sound models on how to think about CPU/system efficiency and the numbers for AMD CPUs to back it up.

Interesting article! That quote is taken out of context, though. I know the 7900 will be significantly more efficient overall. It is designed for high efficiency. I was referring to power draw when the CPU is almost idle. Most of the time my server will be almost idle (just running a few services like Jellyfin, git server, etc. in my Kubernetes cluster) and with that workload the efficiency difference probably isn’t very large. I don’t know of any tests comparing idle/low load power draw. Probably because it is relatively hard to quantify. In the cases where I am running heavier workloads, I care more about the better performance than efficiency. If your goal is to complete a set amount of work as efficiently as possible (which is what the article covers as far as I can tell), the 7900 is definitely the superior option, but for a server that is running 24/7 and idle 90+% of the time the efficiency gains probably aren’t that great.
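That’s basically the weighting I have in mind; with placeholder wattages it looks like this:

```python
# Duty-cycle-weighted average draw: with the box idling ~90% of the time,
# the load-power gap between CPUs gets diluted accordingly. All wattages
# here are placeholders, not measurements.

def weighted_avg_watts(idle_w: float, load_w: float, idle_fraction: float) -> float:
    return idle_fraction * idle_w + (1 - idle_fraction) * load_w

big_cpu = weighted_avg_watts(idle_w=55, load_w=230, idle_fraction=0.9)    # "7950X-ish", made up
small_cpu = weighted_avg_watts(idle_w=50, load_w=120, idle_fraction=0.9)  # "7900-ish", made up
print(f"average draw: {big_cpu:.0f} W vs {small_cpu:.0f} W "
      f"(delta {big_cpu - small_cpu:.0f} W)")
```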


This article is IMO a bit pointless. It compares a bunch of CPUs at settings AMD/Intel declared ‘stock’, but none of them have the same target power use. It is only relevant to an uninformed/non-enthusiast consumer who would buy the system, do zero configuration, and want efficiency.

Anyone else could easily set a different TDP limit for any of these CPUs in the BIOS, or set a max frequency in Windows or the Linux power governor, and get completely different results.

If you configure a 7950X with eco mode you will get much better efficiency than any single CCD CPU in cinebench, for example.

The comparison of 7700X vs. 9700X is also completely skewed since the TDP was adjusted from 105W to 65W. If you want efficiency you can configure the 7700X to 65W too and have a much smaller difference. Or just get a 7700, which would be a fairer comparison.
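The metric that actually matters for these comparisons is work per watt, not the TDP label. Something like this, with placeholder scores and power figures (not benchmark results):

```python
# Efficiency as score per watt of average package power. The scores and
# power figures below are placeholders for illustration only.

def points_per_watt(score: float, avg_package_power_w: float) -> float:
    return score / avg_package_power_w

configs = {
    "7950X stock (placeholder)":    (38000, 230),
    "7950X eco mode (placeholder)": (34000, 105),
    "7700X @ 65 W (placeholder)":   (19000, 65),
}

for name, (score, power) in configs.items():
    print(f"{name}: {points_per_watt(score, power):.0f} points/W")
```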

The most efficient at idle are Intel or the monolithic chips of AMD (8000 series APUs). Probably single CCD Ryzen are slightly more efficient than dual CCD ones at idle, too. But chiplet architectures in general are not great at idling. Or at least not when you use infinity fabric. Whatever interposer Intel is using on their newest laptop chips (Meteor Lake?) is much better in that respect.

My 7950x desktop idles at around 80-100W IIRC, but it is not at all tuned for low idle power. My i3 12100 NAS does 28W idle (it should be able to go lower but somehow I can’t get ASPM working properly – I believe the motherboard/bios are to blame).

I would guess a better tuned for efficiency Ryzen could do about 50W. But don’t quote me on that.
