Current Server build:
Intel i7-8700
MSI Z370 Gaming Plus
64 GB DDR4
no GPU
Current upgrades I’m looking at:
Ryzen 9 9950X (or maybe 7950X, I’d be interested to hear your thoughts)
Noctua NH-D9L
Asus ProArt X870E-Creator WiFi
192 GB DDR5-5600 (two Crucial Pro 96 GB kits, 2x48 GB each)
Server Usage:
I’m using Proxmox as the base OS, with the following running across a couple of VMs: ZFS, SMB/NFS shares, Plex, Immich, AdGuard, your_spotify, game servers, VMs for dev work, etc.
My main reason for upgrading is that I’m running out of RAM.
Between ZFS and all the Docker containers I’m running, I’m using almost all of the 64 GB. I’d like some more headroom to keep exploring different apps (I’ve had the Grafana suite, GitLab, etc. in mind recently).
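I know a chunk of that ZFS usage is the ARC, which can be capped to buy back some headroom before spending on hardware. A minimal sketch of what I’d try first on the Proxmox host (the 16 GiB cap is just an example value, not something tuned for my pool):

    # check the current ARC ceiling (0 = auto), then cap it at 16 GiB (16 * 1024^3 bytes) at runtime
    cat /sys/module/zfs/parameters/zfs_arc_max
    echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max

    # make the cap persistent across reboots
    echo "options zfs zfs_arc_max=17179869184" >> /etc/modprobe.d/zfs.conf
    update-initramfs -u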
As far as the CPU goes, it seems like the 9950X is only a small improvement over the 7950X, so I could be convinced to go with the 7950X instead, especially since it’s a little cheaper. I’d be interested to hear your thoughts on this.
NOTE: I was originally looking at the Ryzen 9 7900, but that CPU officially supports only 128 GB of RAM, and I wanted to future-proof a little more just in case.
Overall I just want to sanity check myself before dropping money on the newer hardware. I appreciate any thoughts or suggestions you might have.
I’m not aware of RAM support differing between AM5 CPUs (they all use the same memory controller), at least within the 7xxx and 9xxx lines.
I understand the latest AGESA updates have really improved compatibility and performance when running AM5 CPUs in two-DIMM-per-channel configurations.
Maybe someone can respond with experience and links that speak to differences between 7xxx and 9xxx generations.
Choosing the right CPU model can make a meaningful difference in a homelab, as you need to find your compromise (or balance) between power savings and performance capacity.
Unless you know you’re going to be bottlenecked by the 7900’s horsepower, I’d go with that one for the money and power savings over the top-of-the-line model.
So far as I know there aren’t any, beyond most of the available Raphael data sitting earlier on the IO-die and AGESA maturation curves. That hasn’t been enough to lift Granite Ridge’s 2DPC 2R support bound above DDR5-3600, though.
Like lots of other people, I built a dual-chiplet Raphael system with 4x48 GB in 2023 and it’s been completely fine, so I don’t see any reason to avoid a 7900. Updating from AGESA 1.1.0.0 to 1.2.0.3a patch A made literally zero difference, as the two AGESAs train to exactly the same values. It’s stable up to 4800 but not at 5200.
I’ve also got a dual-chiplet Granite Ridge with 4x48 GB on 1.2.0.3a patch A that won’t POST at 4000, either fully auto or with EXPO; it just reverts to 3600. I’m hoping for better luck with another dual-chiplet Granite Ridge that arrived last week, but the power supply isn’t here yet.
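If anyone wants to verify what their DIMMs actually trained to without rebooting into the BIOS, dmidecode reports both the rated and the configured speed. A quick sketch (exact output format varies by board and firmware):

    # rated speed of each DIMM vs. the speed the board actually trained to
    sudo dmidecode -t memory | grep -E 'Speed'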
The NH-D9L is underspecced for 142, 162, or 230 W PPT parts and way overpriced. The Phantom Spirit 120 is a good default, but it’ll get loud at 230 W, so a 360 mm AIO is preferable for the 7950X, 9950X, and similar if noise is a consideration and they’re not eco-moded. The A620 Pro SE is probably better than the Phantom in certain respects.
It doesn’t seem like the X870E ProArt offers anything here, unless the objective is to convert money into Asus hassles.
Since the hardware configuration supports ECC UDIMMs, I’d change the memory to Kingston KSM56E46BD8KM-48HM 48 GB DDR5-5600 ECC UDIMMs with SK hynix M-die.
In my region you can get an ASRock W790 board for the same cost as the ProArt X870E. That would give you quad-channel memory and the ability to run 8 sticks of RAM, compared to the 2 commonly recommended for AM5. Xeon W-2400 CPUs are available at reasonable prices.
I would definitely prefer to use the 7900 if it were possible, because of the power efficiency.
From what I’ve seen on AMD’s product pages, the 7900 only supports 128 GB of RAM, while the 9950X and 7950X support 192 GB, which is what I was going off of. If the 7900 supported 192 GB I would definitely go that route.
The only reasons are the built-in 10 GbE and 2.5 GbE plus a good number of PCIe and M.2 slots to give me options later on. I’m really just giving myself some space so that I don’t run into a situation where I want to do something but simply don’t have the connectivity to make it happen, at least for a few years I hope.
I just did a quick check and I’m seeing the ASRock W790 at around $875, whereas the X870E board is currently $480. I only checked a couple of places, so maybe I just haven’t looked hard enough, but at first glance it seems quite a bit more expensive.
I also did a quick check on the Xeon CPUs, and those seem to be a little more expensive as well, at least if I want to stick with 12-16 cores.
The only reason for going with AM5 is that I’m more familiar with consumer hardware and the cost is lower. But I’m not opposed to other suggestions, because I know dedicated server hardware does come with some benefits, like a larger number of PCIe lanes.
Just curious, what are the symptoms of this? (It’s normal for Linux to use almost all available RAM for buff/cache, so a low “free” number does not mean RAM is running out.)
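A quick way to see where the memory is actually going, since ZFS ARC counts as “used” on Linux while the page cache shows up as reclaimable. A rough sketch, assuming the standard zfsutils and Docker CLIs are on the host:

    free -h                    # look at the 'available' column, not 'free'
    arc_summary | head -n 25   # current ARC size vs. its target
    docker stats --no-stream   # per-container memory, on whichever host runs the containers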
DDR5 ECC UDIMM support on Linux is abysmally bad: edac-util doesn’t, or at least didn’t, support it last year when I checked. Memory controllers aren’t properly recognized for ECC, though some sort of auto-correction does work, and TrueNAS and others do recognize the DIMMs as ECC.
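For reference, this is roughly how I went about checking it; edac-util comes from the edac-utils package, and what you actually see depends heavily on the kernel and board:

    dmesg | grep -i edac               # did an EDAC driver bind to the memory controller?
    ls /sys/devices/system/edac/mc/    # registered memory controllers, if any
    edac-util --report=full            # corrected/uncorrected error counts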
However, support for DDR5 ECC RDIMMs is MUCH better, and you usually have more options at higher speeds. I’m currently running an AMD EPYC 8024P with 96 GB of DDR5 ECC RDIMMs (6x16 GB, so plenty of headroom), and with the Gigabyte ME03-CE0 motherboard there’s a lot of room for even more RAM (and pretty much everything else).
The setup isn’t exactly cheap, and peak performance is probably worse than a 7900 or 9900, but it’s server hardware meant to run 24/7. Also, BMC/IPMI is a really awesome feature I never want to be without again.
Just an idea (and something I can speak to from experience): I had stability issues with the i5-12500 I initially used, and the only component I didn’t swap out while troubleshooting was the RAM. Not a single error was found by any memtest, yet Kingston took the RAM back and confirmed it was faulty. They were DDR5 ECC UDIMM sticks.
Why would you need expensive latest-gen CPU cores just for a file server? Do you really need all the latest goodies of Ryzen 9000 (AVX-512 etc.) on a machine that will sit idle or lightly loaded most of the time?
DDR4 is very cheap these days, and 32 GB sticks are affordable.
It also takes less of a speed hit than DDR5 when running 2DPC (4 sticks).
Why not just upgrade the RAM?
And if you have to upgrade the rest, why not go with an AM4 system?
A 6C/12T 5500 is €65-ish, an 8C/16T 5700X is €115-ish, and a 5700G with a decent iGPU is €145-ish.
DDR4 is around €1.3 per GB, so a 32 GB stick is €40-ish.
Yeah, I’ve done some thinking and I might try to stick with what I have for a little longer. In the future maybe I’ll try jumping to an EPYC-class CPU just for fun. My main issue with that jump is the power consumption, but I think it would be cool to try at some point.
I appreciate this take, and you’re 100% right to keep the cost low given the workload.
The issue is that my current motherboard only supports 64 GB of RAM, so I would need to buy a new board, although sticking with the same platform would still be a lot cheaper.
This is also a hobby, and eventually I think it would be cool to build a high-end machine even if it’s just because I can. That doesn’t mean I want my power bill to run to hundreds of dollars, but I think having all that hardware could push me to try crazier things. For now I’m going to take a step back and push my existing hardware until it reaches its limit; if I ever get to that point, I’ll start looking at upgrades again.
Side Note:
I think part of the issue is that I don’t truly know how much hardware or performance I need for server tasks. I’ve built plenty of machines geared toward gaming, but the rule there is pretty much “buy the best performance you can afford.”
The server I’ve used for the last several years has just been old parts from my previous gaming PCs, so there was never much thought put into spec’ing the parts for my workload; I simply had what I had from previous builds.
If you have any recommendations on how to learn more in this area, that would be helpful so I can do a better job of matching the hardware to my specific workload in the future.