[Recommendations] Most Budget-Friendly Platform for Self-Hosting & Home Lab: Bifurcation & Many Lanes

First, I am loving this community!
Second, I searched through forum posts/threads for a little while and I think I confirmed my answer, but wanted to ask:
What is the most “affordable” platform to get (1) good computing, (2) solid support/reliability, (3) lots of PCIe lanes, (4) PCIe Bifurcation?

To further narrow down, here is my use case:

  • Proxmox Virtual Environment bare metal,
  • Self-host services: running a couple of active game servers & replacing Google/iCloud/OneDrive/Netflix/Pandora/etc. for myself and ~15-25 family members (between my wife's side and mine) (Ubuntu Server Pro, since apparently TrueNAS has problems with shared databases/storage between containers),
  • Room for one of these VMs running at a time:
    • A remote & local/headed gaming VM (with an x16 graphics card [currently a 6700 XT])
    • A Linux VM for LPIC-1 training
    • A macOS VM for learning & data-recovery tools
  • Several offline & segmented VMs (starting with 2) for format shifting (VHS & 35mm slides to digital), using different storage pools
  • Another Ubuntu Server Pro VM for running small business application containers related to format shifting
  • Windows Server & Windows VMs for learning & making tools for computer repair shop needs.

My current gear (some of which doesn't support my needs), an X99/Xeon platform:

  • ASUS X99-WS/IPMI
  • Xeon E5-2697A v4
  • 128GB (8*16) ECC 2400
  • 1 6700 XT
  • 1 ATI All In Wonder PCIe capture/graphics card
  • 1 Intel X710-DA2 dual-port 10Gb SFP+ x8 PCIe card
  • 1 x8 SATA HBA PCIe card
  • 10 worn-out 1TB WD Blue disks (these get upgraded last, since my personal computer matters more than family storage)
  • 2 low-bandwidth Intel enterprise SATA 120 GB SSDs
  • 2 medium-bandwidth Intel enterprise SATA 240 GB SSDs
  • 2 Silicon Power 256GB Gen 3 x4 NVMe SSDs
  • 2 Intel Optane 64GB M10 SSDs
  • ASUS Hyper M.2 x16 Gen 4 PCIe card (for the above NVMe drives)
  • 1 Blu-ray burner from about 2014/2013
  • Rosewill Thor II full tower case
  • 750W PSU (may need to upgrade)
  • (A Dell T330 (no iDRAC) for 10Gb/s on-site backup, plus a 4TB internal HDD as the start of some sort of off-site backup solution)
  • (A Ryzen Pro 2400GE mini PC for Monitoring & Management, Pterodactyl/Pelican management, NGINX, etc.)
  • (2.5GbE/10GbE internal networking)

Despite an existing mod, my current platform will not support bifurcation (I've fought with it for over a month).

Future needs: more NVMe & HDD storage (meaning probably another NVMe PCIe carrier card) and more graphics cards (starting with 1) for the tape-to-digital conversion workflow.

I need to ensure the GPUs can be prioritized to get all of their lanes. In terms of physical connections, I think I need 4 x16 PCIe slots (Gen 3 or higher) & 2 x8 slots to start with. Going by physical connectors the lane count is around 96, but the actual bandwidth needs of the HBA & NIC cards should only be about x4 each, right?
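
Here's the rough lane budget I'm working from, just to show my thinking (the "needed" numbers are my own guesses at realistic bandwidth, not spec-sheet values):

```python
# Rough PCIe lane budget for the cards listed above.
# "slot" = physical connector width; "needed" = my guess at what the card
# actually uses meaningfully (assumptions, not spec-sheet values).
cards = {
    "6700 XT (gaming VM)":        {"slot": 16, "needed": 16},
    "2nd GPU (capture workflow)": {"slot": 16, "needed": 8},
    "ASUS Hyper M.2 (4x NVMe)":   {"slot": 16, "needed": 16},  # wants x4x4x4x4 bifurcation
    "future NVMe carrier card":   {"slot": 16, "needed": 16},
    "ATI All In Wonder capture":  {"slot": 16, "needed": 1},
    "X710-DA2 10Gb NIC":          {"slot": 8,  "needed": 4},   # 2x 10Gb ~ 2.5 GB/s < PCIe 3.0 x4
    "SATA HBA":                   {"slot": 8,  "needed": 4},   # spinning rust won't saturate x4
}

physical = sum(c["slot"] for c in cards.values())
needed   = sum(c["needed"] for c in cards.values())
print(f"physical connector lanes: {physical}")   # 96
print(f"realistic lane usage:     {needed}")     # 65
print(f"fits in a single-socket SP3 EPYC's 128 lanes: {needed <= 128}")
```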

So far I've found that the EPYC 7001 (Naples) platform is the lowest-cost x86-64 platform offering the necessary lanes & bifurcation. I think building two 1st-gen Threadripper stations would cost more than one EPYC Naples build.

I am learning by doing, but I think my current plan is where I'll stay for a while, and I'm happy to start with 16 cores/32 threads, then upgrade as my "production" use and savings grow. Is there another platform/setup that would meet these needs?

1st-3rd gen EPYC is going to be your best bet here as far as PCIe lanes per dollar. Threadripper will dominate in single-threaded performance, if that's important to you, but even used Threadripper gear is substantially more expensive than SP3 EPYC stuff. I have the Tyan S8030, which I found to be a pretty great homelab platform. It's got 5 full-length x16 slots, 2x M.2, and on-board connectors for 12x SATA and 4x U.2. It would handle everything you've got, no problem.

1 Like

Naples (EPYC 1st gen) will have plenty of lanes and such, but it's pretty old tech at this point. 2nd-gen Rome has been the best performance/price platform in recent years. The last digit of the model number tells you the EPYC generation (e.g. 7551 = Naples, 7302 = Rome, 7313 = Milan), so you can quickly figure things out.
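
If it helps, the naming scheme is simple enough to sketch out (covering only the parts I'm sure about):

```python
# Quick decode of an SP3 EPYC model number (only the parts relevant here).
GEN = {
    "1": "Naples (Zen 1, PCIe 3.0)",
    "2": "Rome (Zen 2, PCIe 4.0)",
    "3": "Milan (Zen 3, PCIe 4.0)",
}

def describe(model: str) -> str:
    m = model.strip().upper()
    single_socket = m.endswith("P")      # "P" suffix = single-socket only
    last_digit = m.rstrip("P")[-1]       # last digit = generation
    gen = GEN.get(last_digit, "unknown generation")
    return f"EPYC {model}: {gen}" + (", single-socket only" if single_socket else "")

for m in ("7551P", "7302", "7313"):
    print(describe(m))
```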

Also check out the 8004-series Siena. It's the successor to EPYC Rome as the perf/watt- and cost-optimized option, with 96x PCIe 5.0 lanes, MCIO, and all the nice new things. This is basically your go-to home-server platform if you buy new. The CPUs are basically on Ryzen level per core, although clock speeds are way lower.
I really like all the on-board MCIO connectors, which let you hook up something like 4-10x NVMe (depending on the board) without ever touching a single PCIe slot.

3 Likes

I am a bit confused about your requirements. If I understand correctly you have 1360 GB of SSD storage and 10 TB of HDD storage. And a 16-core Epyc 7001 (say a 7351P) is enough compute, at least initially.

CPU benchmark shows Epyc 7351P as having a CPU Mark score of 26042 (1789 single-thread).

What about splitting your requirements into two machines? One for gaming and other things that need the GPU, and another for storage and all the other VMs.

A lowly AM4 Ryzen PRO 5650G (~€200) has a CPU Mark of 20660 (3246 ST). Combine with a B550 motherboard that can split the x16 lanes into x8/x8 (e.g. Asus ProArt B550-CREATOR). This combo supports ECC UDIMMs. And get 1-3 WD SN850X 4 TB drives (depending on storage needs and redundancy). Each of these can be hooked up to CPU lanes – and you still have x8 CPU lanes for a NIC (although the board already has 2x 2.5 GbE ports).

You also have 4 SATA connectors on this motherboard, which will let you hook up 2-4 modern HDDs (14-20 TB each). If you want more drives, there are two PCIe x1 slots for ASM1064 SATA cards (each giving you 2 SATA ports at full speed, plus 2 more you can use if you don't need the full bandwidth).
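
Rough bandwidth math for those x1 cards, using ballpark figures rather than measurements:

```python
# Ballpark check for HDDs behind a PCIe 3.0 x1 SATA card like the ASM1064.
# Rough assumptions, not measurements.
pcie3_x1_MBps = 985    # usable bandwidth of one PCIe 3.0 lane
hdd_MBps      = 270    # sequential speed of a modern large HDD, best case

for drives in (2, 3, 4):
    aggregate = drives * hdd_MBps
    verdict = "link-limited" if aggregate > pcie3_x1_MBps else "fine"
    print(f"{drives} HDDs flat out: {aggregate} MB/s over a ~{pcie3_x1_MBps} MB/s link -> {verdict}")
```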

Then get an AM5 platform for gaming and enjoy ridiculous amounts of compute for much, much less than any reasonable Epyc platform. (A lowly Ryzen 9700X has a CPU Mark of 37201, or 4656 ST.)

Edit: TL;DR: Upgrade your storage first. Then get (a) modern machine(s) to suit your needs.

4 Likes

This is the way to go in 2025. Most homelabs only need the equivalent of a Raspberry Pi or an N100 to run all necessary 24/7 services. Only boot a powerful 2nd (or 3rd, 4th, …) machine to play.

This keeps hardware requirements focused and power consumption at a reasonable level (you only pay for what you use).

I personally run my always-on stuff on a Proxmox VE node based on a Ryzen 5700G with 128GB RAM, a high-capacity enterprise NVMe drive (Micron 9300) attached, and a 10Gb NIC. That offers plenty of performance at ~40W.
I have an old i5-6500-based box with spinning rust connected via an LSI SAS HBA, running Proxmox Backup Server for about 10 minutes a day to ensure everything is securely backed up.
Otherwise I have other machines for my homelab experiments that I only start when necessary.

This setup provides continuity for the services I (and my family) rely on, and flexibility to tinker and upgrade to my heart's content.

Don't take my actual hardware specs as a blueprint. It's possible to scale the always-on system up and down depending on need and budget. E.g. an earlier iteration ran off an N100 mini PC with 32TB of SATA SSDs attached. I upgraded because I have a specific workload that was pegging the N100's CPU while barely registering on the 5700G, all in the same power envelope.

2 Likes

Which is probably still overkill for 95% of the stuff we usually run. Love the setup btw, good stuff. The Ryzen 5600G was THE homelab CPU for years, and for good reason, not only here at L1T: fast, cheap, efficient, and with an iGPU. It still makes for a great home server today and for years to come. Each modern Ryzen core is insane compared to what we usually throw at it.

:+1:

Avoid a "storage zoo" (varying capacities and performance tiers) as much as possible.
Plan ahead.

My SATA/NVMe drives of yesterday and today become the boot, CACHE/LOG (ZFS) or DB/WAL (Ceph) drives of tomorrow. And depending on the topology/FS you use, varying capacities within the same performance tier can still work, but just because you can doesn't mean it's a good idea in the first place.

But stuff like 64GB and 120GB drives, no matter what else they are… they deserve a proper funeral for their service. The 240GB drives may still be fine as boot drives, depending on their health/endurance/general reliability.

Avoid the "storage zoo". That entire collection can be replaced by a single modern NVMe drive (e.g. the Micron 9300 @jode mentioned) using 4 lanes total and outperforming it in basically every metric.
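
For a sense of scale, here's roughly what that consolidation buys you (capacities taken from your parts list; the replacement drive is just an example):

```python
# What the current SSD collection costs in connectivity vs. one modern NVMe drive.
# Capacities are from the parts list in the OP; the replacement drive size is
# just an example, not a specific recommendation.
zoo = {
    "2x 120GB SATA SSD":  {"uses": "2x SATA ports",                   "GB": 240},
    "2x 240GB SATA SSD":  {"uses": "2x SATA ports",                   "GB": 480},
    "2x 256GB NVMe":      {"uses": "8 lanes (on the x16 Hyper card)", "GB": 512},
    "2x 64GB Optane M10": {"uses": "8 lanes (on the x16 Hyper card)", "GB": 128},
}
for name, d in zoo.items():
    print(f"{name}: {d['GB']} GB, tying up {d['uses']}")
total_GB = sum(d["GB"] for d in zoo.values())
print(f"total: {total_GB} GB across 8 devices, 4 SATA ports and a whole x16 slot")
print("vs. one ~2TB enterprise U.2/M.2 NVMe: 4 lanes, one device to manage")
```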

2 Likes

It’s moments like these where I wish we were all talking in person. It’s fun to geek out & learn, but there’s so much more I’d be discussing.

I think I need some help understanding the different things y'all have said. Please note, I am not trying to argue, just to understand.

But first, I'll build out (on paper) a set of systems that generally match the suggestions other than the EPYC route.

1 Like

I would start by figuring out how many PCIe slots you need (or can get away with). If the answer comes out to something like 10 minimum, you're kind of locked into multiple machines.

As was stated above, a single SSD will vastly outperform ANY configuration of the 4 different lines of low-capacity SSDs (8 drives) you have. IMO more importantly, if you replace 8 drives with 1, you've just saved yourself a bunch of PCIe real estate, which is expensive in terms of $ and upkeep.

More PCIe slots = more computer infrastructure.

1 Like

I suppose you might not be using PCIe slots for these, but for a few hundred dollars you could replace them with a single drive that should be easier to manage going forward.

Wendell has previously talked about how, if your motherboard has an MCIO connector, you can use an adapter cable from Amazon/eBay/etc. to connect, say, an enterprise U.2 SSD that can be high capacity and avoids the slowdowns typically associated with the M.2 connector.

In short, for the design I would focus on:

  • Minimizing the necessary PCIe slots
  • Figuring out how to utilize a sweet new U.2/E1.S/E3.S enterprise SSD via an on-motherboard MCIO connection.
1 Like

So, I haven’t had time to list out costs on a pair of builds, but I am pretty sure of my original plan.

Some things I have not understood are why some recommendations were along the (paraphrased) lines of "get the latest systems" and "spend a few hundred dollars on a single drive", when the original question was about getting the most PCIe expansion & bifurcation as cheaply as possible. Actually, the "newer setup" option is intriguing, and I do appreciate the perspective; I just don't/won't have the cash for that.

Much of the misunderstanding is likely because I was not clear enough about what I intend to do. So, here is what I am doing:

  1. Self-host a bunch of services: game servers for up to ~15 people at once, possibly on different games; family media storage (Immich, Nextcloud, something); entertainment media (Jellyfin, music), possibly 2 streams while others are playing games; other self-hosted services popular on YouTube.
  2. My gaming rig (16GB RAM, 12-16 logical cores, good storage speed), still getting full resources while the server is under full load.
  3. A homelab environment, playing with Windows Server, domain clients, office applications, LDAP, SMB, etc.
  4. A production environment, first used for analogue-to-digital media conversions. High resource usage preferred.

Any two of usage scenarios 2-4 may be running at the same time, while scenario 1 will always be active.

To run those services, I already own some hardware (see OP). I doubt I could sell that hardware for enough to truly consolidate storage without spending more. I’m fairly confident that the total cost of two machines will be roughly the same as one server.

Something that was missed, which I thought was obvious: drive parity. A single drive would fail that requirement. The other thing I thought was obvious was the need for high IOPS, which I suspect even an NVMe pool (if I could afford it) would sometimes struggle with.
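
For reference, here's the rough rule-of-thumb math I'm using for parity vs. IOPS on the HDD pool (all numbers are assumptions for illustration):

```python
# Rough rule-of-thumb pool math for parity vs. IOPS (ZFS-style), using the
# 10x 1TB drives from the OP. All numbers are assumptions for illustration.
n_drives, drive_TB, drive_iops = 10, 1, 150   # ~150 random IOPS per 7200rpm HDD

# Option A: 5x 2-way mirrors -- redundancy via mirroring, IOPS scale with vdevs
mirrors = {
    "usable_TB":  (n_drives // 2) * drive_TB,      # 5
    "read_iops":  n_drives * drive_iops,           # reads spread over all disks
    "write_iops": (n_drives // 2) * drive_iops,    # one write stream per mirror vdev
}

# Option B: a single 10-wide RAIDZ2 -- more capacity, ~single-drive random IOPS
raidz2 = {
    "usable_TB":  (n_drives - 2) * drive_TB,       # 8
    "read_iops":  drive_iops,
    "write_iops": drive_iops,
}

print("5x mirrors:", mirrors)
print("raidz2:    ", raidz2)
```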

Granted, I am building experience and have mostly just research under my belt.

After thinking things through further, I found & took advantage of a good deal on an open-box Supermicro H12SSL motherboard on Newegg and bought an EPYC 7502 off eBay, plus a cooler. I'm ~$730 in for an EPYC Rome 32-core/64-thread setup. Selling my X99 platform brought that down to ~$480. I may have saved some cash with two systems, but I just need to move on and make progress learning the skills to do all this.

Thanks for the advice, everyone! I'll try to make time to post updates.

I assumed that the end goal was having enough storage and CPU grunt to run the services you listed, while spending your money in a way that doesn't paint you into a corner – with PCIe lanes and bifurcation being means to an end rather than the end goal itself.

Your initial post listed lots of requirements, and I didn't realise that lots of PCIe lanes and bifurcation (the latter of which also exists on the systems I recommended, BTW) were the main goal. Sorry for the misunderstanding!

3 Likes