Advice on building a home server / development workstation

Hi! I’m planning my first PC build - a home server/workstation setup, and I’d like some guidance on component selection and overall approach.

Core Requirements & Budget:

  • Located in Europe
  • Flexible budget - willing to invest in quality components where needed
  • No existing parts or retailer preferences
  • Planning to use Fedora/Fedora Server as OS

Project Overview: I’m looking to build a system that will serve as both a home server and development workstation. My main challenge is deciding between two approaches:

  1. Setting it up as a server using something like Cockpit (https://cockpit-project.org/) to manage containers and VMs, accessing everything remotely via RDP/SPICE
  2. Building it as a workstation that can run containers and VMs locally

The system will need to handle:

  • Container workloads (PostgreSQL, NextCloud AIO, Jellyfin, LLM inference)
  • Virtual machines (Home Assistant and a development VM for coding/LLM experiments in case of server build)
  • Software development (mainly Rust) in case of workstation build
  • LLM inference (planning to add a GPU later)

Current Planning & Questions: I’ve started a build on PCPartPicker (Part List - AMD Ryzen 9 9900X, Lian Li A3-mATX MicroATX Mini Tower - PCPartPicker) but I’m facing some decisions:

CPU Choice: I’m considering either the AMD Ryzen 9 9900X or 9950X. Since this will run 24/7, I’m wondering if the extra 4 cores/8 threads of the 9950X justify its 50W higher TDP?

Storage Strategy: I’m planning for:

  • Two M.2 slots - one for OS, one for working storage (does this make sense?)
  • Space to add two SATA SSDs later for backup

Other Requirements:

  • RAM expandable to 128GB-192GB over time
  • mATX form factor (seems to hit the sweet spot for size vs features I need)
  • One PCIe slot for a future graphics card

Being new to PC building, I especially need help with:

  • Motherboard selection that meets these requirements
  • Appropriate CPU cooling solution
  • Memory configuration recommendations
  • Storage device selection
  • Power supply sizing

Would really appreciate any guidance on these components and whether my overall approach makes sense for this use case. Also open to completely different suggestions if there’s a better way to achieve what I’m looking for!

Given those requirements, I would look at the AMD EPYC 4004 CPUs, as they are designed for server use and your requirements don't demand much in terms of PCIe lanes. Normally for a server I would suggest a full-fledged EPYC CPU, but if all you need is a single PCIe Gen5 x16 slot then the 4004 line is right up your alley. As for memory, it's simple for what you're after: buy everything your budget will allow. Storage-wise it really depends on your needs, but I'd suggest a fairly large NVMe drive for the OS and at least two large 2.5" SSDs that mirror one another for data resiliency. And then the PSU is more about what GPU you are thinking about.


There are several ways to go here, but the reality is a beefy VM server is more hassle than it is worth as a home setup.

For a home setup there are two extremes: a full lab (preferably with rackmounts) or something small and invisible (but still high end). I prefer the small homelab myself, so that is what I will focus on here. Some would disagree with this take, and that's fine.

For the small lab, I would like to offer two interesting options. First off, the Asustor Flashstor Pro: the same size as a PlayStation 2, up to 12 M.2 storage bays for a mere $799, and a max power draw of roughly 35 W. Insane value for what it offers, which is a damn reliable file server.

The Flashstor is not, however, a powerful VM machine capable of running whatever you throw at it. So while the Flashstor is a capable file server, you really also want something beefier, something that can be turned off when it isn't needed. That is why I tend to recommend this setup as a complement:

PCPartPicker Part List

The above has a few oddities. RAM is twice the cost it should be, because I am emulating ECC memory costs; the Asus motherboard does support ECC but is not a proper server motherboard. The build also has basically no support for AI workloads and heavy GPU tasks. For a home server lab that you can put on a bookshelf and forget, however, I think it fits the bill nicely! :slightly_smiling_face:


Most folks buy a matched quad up front, though those tend to be B-die in my experience. For running mismatched kits I'd suggest Hynix M-die and manual timings. Those are usually 2x32 or 2x48 DDR5-6000+ at CL30 (6000) or CL32 (6200-6600). Corsair Vengeance is what I've worked with, but there are plenty of other non-ECC options. If you want EC4, Kingston's often the most available.

The 9900X I use is under a Phantom Spirit 120. A 9950X eco-moded to 105 or 120 W would be fine with a PS120 as well. For 170 W TDP / 230 W PPT, the Galahad II Trinity Performance 360 and Liquid Freezer III 360 are reasonable references. Upsize accordingly if you're planning to PBO.

I don’t see reason to use Zen 4, whether Ryzen or EPYC 4004, here.

TDP at this point’s just a marketing number. At stock, the power draws to design for are 145, 162, and 230 W PPT for 105, 120, and 170 W TDP. At least until AMD decides they want to use different numbers, though those pairings have been stable for a few years.
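To connect those pairings to the 9900X vs 9950X question, here's a quick back-of-the-envelope sketch in Python. It's purely illustrative and reuses only the stock TDP/PPT figures quoted above:

```python
# Stock AM5 TDP -> PPT pairings as quoted above, in watts (treat as approximate).
ppt_for_tdp = {105: 145, 120: 162, 170: 230}

tdp_9900x, tdp_9950x = 120, 170                   # stock TDPs of the two candidates
delta_tdp = tdp_9950x - tdp_9900x                 # the "50 W higher TDP" from the OP
delta_ppt = ppt_for_tdp[tdp_9950x] - ppt_for_tdp[tdp_9900x]  # stock package power delta

print(f"TDP delta: {delta_tdp} W, stock PPT delta: {delta_ppt} W")
# TDP delta: 50 W, stock PPT delta: 68 W
```

So at stock, the difference to design for is closer to 68 W of package power than the 50 W the TDP labels suggest.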

My workloads differ enough from yours I don’t have a good feel for 9900X versus 9950X. Idle draw for the two’s pretty much the same.

System NVMe + data NVMe is routine. Consider backing up to 3.5s as a mitigation for flash storage’s risk of slow reads on cold data.

There's a distinct lack of good airflow cases, though, and the dGPU'll compromise M.2 cooling. The A3's not bad, but I'd suggest a B650 Steel Legend in a Lancool 207 as a reference. ASRock has full-line ECC UDIMM support if that's important.

Current gen ATX boards commonly have two M.2 slots on a dGPU's cool side, but look at the planned dGPU dimensions, board layout, and the PCIe block diagram to understand what's connected where, what the resulting path bandwidths are, and the mechanical clearances and airflow involved.

SN850X and 990 Pro are standard drive defaults. The 990s are a bit faster but my real world experience is they’re somewhat fragile for both sustained throughput and thermals. So I mostly use SN850Xes. 5.0 x4s are all 12 nm Phison E26 at the moment, which I’ve avoided as they’d fail thermals in our workloads even under Cogage H2s.

Include columns for max major and minor rail draws in your build spreadsheet for all active components, both in-box parts and attached loads like USB devices. Total them up and that's the minimum PSU spec. It's good to also total expected real world operating points as a cross check. I also pull the Cybenetics noise map for every PSU I consider, both as a check on whether the OEM understands how to spin a fan to cool components (more than a few seem to find this challenging) and against build noise targets.

850 W gold would be a very common default for this type of build. But if you're planning a higher power dGPU, then 1000-1200 W platinum (and more likely ATX rather than mATX). I like Corsair RM and HX but see HWBusters for other commonly shortlisted options.
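To make the spreadsheet method above concrete, here's a minimal sketch in Python. Every component name and wattage is a made-up placeholder for illustration, not a measurement or a recommendation; the point is just the per-rail totaling:

```python
# Hypothetical build spreadsheet: max draw per rail, in watts, for each active
# component, in-box and attached. All values are placeholders for illustration.
components = {
    #                  (12 V, 5 V, 3.3 V)
    "CPU (230 W PPT)":  (230,   0,   0),
    "Future dGPU":      (320,   0,   0),
    "2x M.2 NVMe":      (  0,   0,  18),
    "2x SATA SSD":      (  0,  10,   0),
    "Case fans":        ( 15,   0,   0),
    "USB devices":      (  0,  15,   0),
}

rail_totals = [sum(draws[i] for draws in components.values()) for i in range(3)]
minimum_spec = sum(rail_totals)

print(f"12 V: {rail_totals[0]} W, 5 V: {rail_totals[1]} W, 3.3 V: {rail_totals[2]} W")
print(f"Minimum PSU spec (sum of per-component maxima): {minimum_spec} W")
```

A second set of columns with expected real-world operating points, totaled the same way, gives the cross-check mentioned above.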

I did something similar, using one machine for everything. It caused enough hassles, especially once other people started using Jellyfin or Nextcloud, that it eventually got split up into three machines - one being a laptop I can use to remote into either of the other two (workstation and server).

For hosting the containers and VMs, aside from the LLM inference you really don't need much horsepower at all. You may want to set everything up so you can easily migrate your VMs and containers off this machine and onto another one if you ever get annoyed enough with having it all in one place - by, say, having an easily movable VM host the Docker containers instead of running them on your base system.

The Fractal Pop Air mini is an excellent case for such a build.

The Pop Air Mini's not a great choice here. An ATX PSU either blocks what shroud perforation is available or competes with the GPU for what restricted airflow is possible. That's assuming the planned dGPU even fits, since the GDDR quantities LLMs want imply more than three slots with transverse-fin parts. The drive bays are unventilated, another substantial airflow fail in an airflow case, so they're not suitable for 3.5s that are doing much. They're not so great for 2.5s that'll see sustained transfers like initial backup syncs either.

Fractal is capable of providing drive sleds that cool pretty well, as well as decent GPU intake, but they have a hard time doing either consistently or putting both in the same case. It's unfortunate the Pops don't combine 5.25 support with airflow for 3.5s, or show awareness of current dGPU thicknesses and second-position PEG mobos.

The Lancool 207 lacks 5.25 support, which isn't needed here, and is a couple centimeters larger, but it removes the Pop limitations and is also nice for transverse-fin M.2 heatsinks below the dGPU. If "add a GPU later for LLMs" is code for something like a 5090, though, I'd use a Lancool II or probably a III.

If it really needs to be Fractal, the Norths, certain Torrents, the Focus 2, and some Meshify/Define variants are less constrained than the Pop or Pop Mini. But those are all bigger than the 207.

Well, let's take this from my first-hand experience with the Pop Air Mini. There is no airflow issue, and it handles drives just fine, with the capability to hold at least four SSDs or a pair of 3.5 inch and 2.5 inch drives without issue. If you are concerned about airflow through the 5.25 inch drive bays, simply removing the magnetic cover, plus an external fan if needed, is an option.

A few things to consider:

  1. Go for a lower-TDP Ryzen CPU… For a server build, depending on the kind of workloads, I would recommend as many cores as you can afford on your budget.
  2. You can consider rack-mountable PC cases…
    2.1 I would personally recommend air cooling over water cooling for 24x7 workloads.
  3. 128GB will be 4x32GB and 192GB will be 4x48GB. Buy 4-DIMM kits.
  4. Use one of the PSU power calculator websites that are available and come to a base power rating (see the sketch after this list). For my build the anticipated power draw (with all the planned hardware populated: 2 GPUs, a few SSDs and HDDs, etc…) was 940W, so I went for 1000W. You can add more headroom with a margin of 1.25x to 1.5x… but I chose not to, and the IT shop that built the system said 1000W would be enough…
  5. For motherboards you can go server-class (e.g. Supermicro or similar) or prosumer, but this will depend on many factors: OS, use case, etc…
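A minimal sketch of the headroom arithmetic from point 4, using the 940 W estimate quoted there; the multipliers are just the rules of thumb mentioned, not a sizing guarantee:

```python
# Estimated worst-case draw from the PSU calculator, as quoted in point 4 (watts).
estimated_draw_w = 940

# Rule-of-thumb headroom margins mentioned above.
for margin in (1.25, 1.5):
    print(f"{margin}x headroom -> {estimated_draw_w * margin:.0f} W PSU")

# 1.25x headroom -> 1175 W PSU
# 1.5x headroom -> 1410 W PSU
```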

Not sure about MicroATX…

Yes, I understand, as this is exactly what I did when I built my NAS, which runs on a Ryzen 4600G.

Yes, and once again this is exactly what I did with my NAS and OPNsense router.

This has nothing to do with my comments.

You could, but a simple calculator and a little research get you there too.

One can, but it isn't strictly necessary.

mATX can work in all these cases. Both my rack-mounted machines use mATX form factor boards and they work fine. And there are even mATX server motherboards available.