Quiet, power-efficient virtualization platform for 24/7 homelab operation

I’m working on building a homelab virtualization solution. I’ve been trawling things like Reddit, L1 forums, serverbuilds, and servethehome, but haven’t found anything that matches my exact mix of preferences. I’m hoping that by starting a post here I can get some feedback on a build that will leave me with a nice, quiet platform that isn’t too power-hungry.

Budget: $500-1000

What I already have:

  • NZXT H440 ATX Mid-tower case (former pc case, now collecting dust)
  • Synology DS920+ for file storage / current Docker/VM host

I’m looking to move VMs and Docker containers off the Synology so that I can expand what I’m doing (run more stuff) without massively running up the power draw or noise in the room. I also don’t have room for a rackmount solution, as I have no rack. That’s why I’m hoping I can reuse my old tower case, which housed my gaming PC before I built my current machine.

Goals/constraints:

  • More than one 1 GbE port on board (mostly so that I can dedicate interfaces to different things)
    • Alternative: Space for a 4x GbE NIC
  • IPMI (want to run this headless)
    • Must be a browser-based remote console! (No old Java client hell, please. I deal with that enough at work…)
  • Power-efficient idle (less than 60 W at idle, preferably less than 40 W)
  • Intel QuickSync for Jellyfin/video transcoding
    • Alternatively, space for GPU encoding, e.g. an Nvidia Tesla P4 (nice power/performance ratio)
  • Quiet (will be in the room with me - the case should allow a lot of big/slow fans to keep things under control)
    • At minimum, it must be quieter than the Synology with its 4 IronWolf NAS disks
  • Virtualization/Nested Virtualization (Must have, this is the point of the system)
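
For reference, here’s the sort of quick sanity check I’d run on whichever box I end up with to confirm that virtualization and nested virtualization are actually usable (a minimal sketch, assuming a Linux host with the KVM modules loaded; the sysfs paths are the standard locations):

```python
#!/usr/bin/env python3
"""Quick check that virtualization / nested virtualization is usable.

Assumes a Linux host with the KVM modules loaded; the sysfs paths below are
the standard locations, adjust if your distro differs.
"""
from pathlib import Path


def cpu_has_virt_flags() -> bool:
    """True if /proc/cpuinfo advertises VT-x (vmx) or AMD-V (svm)."""
    flags = Path("/proc/cpuinfo").read_text().split()
    return "vmx" in flags or "svm" in flags


def nested_enabled() -> bool:
    """True if either KVM module reports nested virtualization as on."""
    for module in ("kvm_intel", "kvm_amd"):
        param = Path(f"/sys/module/{module}/parameters/nested")
        if param.exists() and param.read_text().strip() in ("1", "Y"):
            return True
    return False


if __name__ == "__main__":
    print("CPU virtualization flags:", "present" if cpu_has_virt_flags() else "missing")
    print("Nested virtualization:", "enabled" if nested_enabled() else "disabled or KVM not loaded")
```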

I’d be open to picking up something in the used enterprise gear space, like a Dell tower server or something in that range, but I need it to be quiet and power efficient, and the on-board IPMI would need to fully support remote console via browser. I don’t know how to determine that when looking at eBay posts and LabGopher results.

I’ve spent a few days digging through posts, trying to figure out which generations of Xeons have things like QuickSync (so I can hand transcoding off to hardware) and which generation(s) are power efficient and/or cool enough to run quietly, but I’m not really sure what I should be looking for. I’ve found things like the CPU Compendium (here: CPU compendium - Google Sheets) useful, but it doesn’t list more recent CPUs for comparison’s sake, so I’m not sure whether these options are worth looking at from a price/performance-per-watt perspective.
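
Related to the QuickSync question: as far as I can tell, the practical test once hardware is in hand is just whether a DRM render node shows up and is accessible to whatever runs Jellyfin/ffmpeg. A rough sketch of what I’d check (assuming a Linux host; /dev/dri is the usual location):

```python
#!/usr/bin/env python3
"""Check for a DRM render node that Jellyfin/ffmpeg could use for hardware
transcoding (QuickSync via VAAPI, or NVENC on a discrete card).

Assumes a Linux host; /dev/dri/renderD* is the standard location.
"""
import glob
import os

render_nodes = sorted(glob.glob("/dev/dri/renderD*"))
if not render_nodes:
    print("No render nodes found - no iGPU/dGPU available for transcoding.")
for node in render_nodes:
    ok = os.access(node, os.R_OK | os.W_OK)
    status = "accessible" if ok else "present, but this user lacks permission (check render/video group)"
    print(f"{node}: {status}")
```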

With this budget I’m looking at building something kinda DIY, with the critical thing being flexibility and expandability without being too power hungry. One thing I’m considering is either a Supermicro ATX-ish board and compatible CPU, or ASRock Rack? Their IPMI solutions seem pretty comparable, and don’t have weird licensing issues like you see with Dell and HP, right? I’ve also looked at PiKVMs, but they’re really expensive, and I feel like a solution integrated into the motherboard would be more cost-effective.

Also considering: AMD solutions? Lots of cores/threads per dollar? What’s the idle power consumption on something like a 5900X? What would I need to look at in terms of Threadripper or EPYC? I’ve noticed that a lot of EPYC chips have a vendor lock - that kinda sucks, and I’d hate to end up with an expensive coaster for my trouble…

Future expandability would lean towards more HDDs/SSDs (the H440 can hold something like 11 3.5" drives) so it could also run as a NAS, supplementing/backing up my Synology, which is my primary data vault. I want a platform I can use to learn things like ZFS and run many virtual machines/containers. I’d like a place where I can spin up and snapshot VMs to try things out, then destroy or roll them back if I screw things up. I’ll be doing the LFCS certification soon, and figure a test bed for various experiments and practice/learning would be helpful.
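
For context, the workflow I’m imagining is basically snapshot → experiment → roll back. A minimal sketch of that loop with ZFS (the pool/dataset names here are just placeholders for illustration):

```python
#!/usr/bin/env python3
"""Sketch of the snapshot -> experiment -> rollback loop with ZFS.

The pool/dataset name below is hypothetical, purely for illustration.
Needs zfs privileges (root or delegated permissions).
"""
import subprocess

DATASET = "tank/vms/lfcs-practice"        # hypothetical dataset backing a test VM
SNAPSHOT = f"{DATASET}@pre-experiment"


def zfs(*args: str) -> None:
    """Run a zfs subcommand and raise if it fails."""
    subprocess.run(["zfs", *args], check=True)


# Take a snapshot before trying something risky.
zfs("snapshot", SNAPSHOT)

# ... break things inside the VM to your heart's content ...

# Roll the dataset back, discarding everything written after the snapshot.
# (Use "rollback", "-r", SNAPSHOT if newer snapshots exist and should go too.)
zfs("rollback", SNAPSHOT)
```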

Is this a reasonable direction to be looking, or should I look more at the small (e.g., “tiny / mini / micro”) business client computers, either as a single node or a few cheap-ish ones in a cluster? How would you approach adding things like discrete GPU(s) to those platforms? What is their power consumption like? Is the drop in cost vs. enterprise gear worth losing any hope of a totally headless system without IPMI / OOB management? Some of these have Intel vPro embedded - does that work as a valid replacement for IPMI?
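
For framing the power question across any of these options, idle watts at 24/7 translate pretty directly into yearly kWh and dollars. A quick back-of-the-envelope (the $0.15/kWh rate is just an assumption; substitute your own):

```python
#!/usr/bin/env python3
"""Back-of-the-envelope: what a 24/7 idle wattage costs per year.

The electricity rate is an assumption - substitute your local $/kWh.
"""
RATE_PER_KWH = 0.15  # assumed rate in $/kWh

for idle_watts in (40, 60, 100):
    kwh_per_year = idle_watts * 24 * 365 / 1000   # watts -> kWh over a year
    cost_per_year = kwh_per_year * RATE_PER_KWH
    print(f"{idle_watts:>3} W idle ~ {kwh_per_year:5.0f} kWh/yr ~ ${cost_per_year:.0f}/yr")
```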

Thank you for your thoughts. I’ve tried to make this as clear as possible, but I will absolutely clarify if there’s anything confusing. I really appreciate the help parsing things out.

Yeah, IPMI, Power Efficient, Less than $1000, pick two.

Here is a core build that should idle low while staying cheap and affordable. No IPMI support (that would require a second-hand server board, which will not be power efficient) or ECC RAM, but it should otherwise work fine for your needs. It can run headless, but you’ll need a display and keyboard for the initial install.

PCPartPicker Part List

Type                     Item                                          Price
CPU                      AMD Ryzen 7 5700G (8 cores / 16 threads)      $178.00
CPU Cooler               ARCTIC Freezer 7 X CO                         $34.78
Motherboard              MSI B550M PRO-VDH WIFI                        $119.99
Memory                   G.Skill Aegis 2x16 GB DDR4-3200 CL16          $69.99
Storage                  Samsung 980 Pro 500 GB M.2 2280               $87.74
Case                     NZXT H440                                     -
Power Supply             be quiet! Pure Power 11 500W                  $69.90
Wired Network Adapter    Intel E1G44HT 4x Gigabit Ethernet, PCIe x4    $160.00
Total                                                                  $720.40

For added capacity, have a look at M.2 → 6x SATA expansion boards.

Circling back on this, I have been pricing out and experimenting with different build configurations. I really want IPMI, and so I’ve decided to let the budget balloon to something in the range of $2000 now…

Ideally I’d like a lot of cores to play around with, so I like the look of current pricing on AMD 5950X chips (16c/32t). I can get one for about $500 new or $330 used.

My first and preferred build was around an AMD 5950X with the ASRock Rack B550D4-4L (quad 1 GbE LAN), but I can’t seem to find that board for sale anywhere right now. I’m not sure if it’s out of production or just out of stock short-term and will be back.

The second build keeps the 5950X but uses an ASRock Rack X570D4U-2L2T (dual 10 GbE LAN), and that board is wildly expensive new right now, so I’m not too happy about that.

So I went back and reconsidered an Intel Xeon build around something that supports QuickSync, and settled on an E-2378G (8c/16t) bolted onto a Supermicro MBD-X12STH-LN4F-O, which is also a quad 1 GbE board. But after pricing the board and CPU, I still can’t really get the price/core ratio where I’d like it. The chip can boost pretty high (5.1 GHz), but I don’t think the workloads I want to throw at it will really leverage that; I’d be paying a premium for clock speed in this case.

The last build, and the one I’m most intrigued by for several reasons, is around an AMD EPYC chip. I haven’t done anything with EPYC and know barely anything about them, but the used market for them makes them really compelling! I’d also just really like to play with them… With that said, I’m afraid of buying a vendor-locked used CPU (AMD has one-time-programmable fuses that can lock these CPUs to a particular vendor’s boards) and ending up with a pretty expensive coaster.

With that out of the way, here’s the build on that platform:
EPYC 7402P (24c/48t) ($420-600ish used) with an ASRock Rack EPYCD8 motherboard ($398 used), both of which can be found on the used market for way under their original prices.

I started leaning towards the EPYC platform because of the number of PCIe lanes available and what I might be able to achieve in a DIY “one server to rule them all” kind of build, for fun and experience. I also liked the video Wendell did on this motherboard (the ASRock Rack EPYCD8-2T variant). It seems like an interesting platform for passing devices through to VMs, say with Proxmox or TrueNAS SCALE.
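
Part of what I’d want to check on whichever board I land on is how the PCIe devices fall into IOMMU groups, since that largely decides how cleanly passthrough will work. A minimal sketch of listing the groups (assuming a Linux host with IOMMU enabled in firmware and on the kernel command line):

```python
#!/usr/bin/env python3
"""List IOMMU groups and the PCI devices in each - a sanity check before
planning GPU/NIC passthrough.

Assumes IOMMU is enabled (amd_iommu/intel_iommu on the kernel command line);
if /sys/kernel/iommu_groups is empty, it is not.
"""
from pathlib import Path

groups_root = Path("/sys/kernel/iommu_groups")
if not groups_root.is_dir() or not any(groups_root.iterdir()):
    print("No IOMMU groups found - enable IOMMU in firmware and kernel options.")
else:
    for group in sorted(groups_root.iterdir(), key=lambda p: int(p.name)):
        devices = sorted(d.name for d in (group / "devices").iterdir())
        print(f"group {group.name}: {', '.join(devices)}")
```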

Anyone have thoughts on this? Critiques?
Thanks for the input.


EPYC 7272. You have your choice of about two or three mainboards in the $300-ish range that will check off MOST of your requests, and you can always add a PCIe card for the rest.

I recommend not using the 7402P, as it is 180 W. I think you could find enough CPU for your use in the 120 W / 155 W range, and you’ll be able to keep said system a lot quieter.

Honestly, most of the eBay sellers are pretty upfront about where the CPU came from, and I have had good luck resolving the few issues that have cropped up for me.


EPYC is a badass platform for a server. The biggest issue with modern consumer-grade hardware is a major lack of PCIe lanes. If you think that could be a limiting factor at any point in the near future, you might as well go with EPYC. I am using a 7252 (Rome generation, 8c/16t) in my NAS and it has performed excellently for my use case. If you don’t necessarily need all that horsepower and are trying to keep your power bills low, there are 120 and 155 watt TDP models that will use less power than the 7402. My NAS is significantly beefier than my old one built on 8th-gen Intel desktop components and only uses slightly more power during normal operation.

To serve as a foil to @Zedicus, I purchased an EPYC processor off of eBay that was completely dead when I started my new NAS build, and its mode of failure made it seem like the motherboard was at fault. This led to a multi-week rabbit hole of motherboard whack-a-mole until I figured out the processor was definitely fried. The seller listed the processor as tested and working, and ignored all my attempts to contact them, but I still managed to return it just in the nick of time to be covered by the eBay return policy.

Even after that shit show, I’m still very glad I chose EPYC. I have purchased a lot of stuff on eBay and that was by far the biggest debacle I have ever had. The key takeaways are:

  1. Always make sure that you buy from a seller with a return policy, even if it costs a bit more.
  2. Always pay with a credit card so you always have a chargeback as an option in a worst-case scenario.
  3. Always buy the components around the same time so that every necessary component will be in your possession while each item is still within the return period.

Another thing I learned in my endeavor is that you should really invest in a torque wrench with a Torx T20 bit that measures in inch-pounds (in-lb) if you are going with EPYC. They are very picky when it comes to proper seating, and the peace of mind that you’re seating it to spec is worth it. If you seat it incorrectly and need to remove the heatsink afterwards, you can easily wind up going through a LOT of thermal paste with those silicon chonkers. AMD recommends 14 in-lb.

