Low-power server

Hello,
I'm a developer and I'd like to build a home server that can handle these tasks:

  • Coding on it remotely (no way!)
  • Some containers for services I develop
  • An NVR container handling 4 or 5 cameras
  • 1 or 2 VMs
  • Game servers (Minecraft, Sons of the Forest, etc.)
  • Storage server

In time, I may also use it for learning AI and as a Plex server.

The OS will be Unraid or TrueNAS, and power consumption needs to be as low as possible.

I have a lot of questions about which architecture to choose.

  • I have seen server CPUs, but I have no idea how to choose one for my needs.

  • I will probably go with a server motherboard:
    ASRock Rack has some that are compatible with AMD AM4/AM5, but none for mainstream Intel if I decide not to use a server CPU. However, I've read that Intel is better at power management.

  • For the RAM, I'd like to start with 64 GB, more if I can.

  • For storage, a good 5 or 10 TB with one or two SSDs should be enough to start, I think.

  • For the game server, video transcoding, or Stable Diffusion: if I go with AMD or a server CPU, I will need to buy a graphics card, no? Should I get an NVIDIA one (4060/4070?)

I will choose the case afterwards (probably a 4U for the server, to keep the option of adding a GPU, and a 9U rack to mount it in).

That is a lot of questions, I know. I have tried to understand all the differences myself, but I'd rather ask people who know more, so I can make a good choice.

Thanks, and have a good day!

1 Like

BUDGET!? … :slight_smile:
We need to know how much you're willing to spend. :wink:

3 Likes

That's a good question; I left it out on purpose, to get a feel for what my needs would actually cost :smile:
It should be between 1k and 2k.

1 Like

While you're waiting for a build guide, have a look through the playlist; you might find some ideas yourself.

2 Likes

Thanks, I will look into that

1 Like

I would consider an eBay EPYC build based around something like a 7252. You gain some nice features of an actual server, it's cost-competitive, it sits at the lowest end of the power curve for high-end computing, and it supports registered ECC.

1 Like

Hmm, that is quite a list of demands.

First off, you should know that building a home server around a server-grade CPU is usually a waste of both power bill and money. While EPYC is really efficient for a server CPU, we are still talking about 200 W+ for most EPYC builds. EPYC has very efficient power-per-thread, but if you only need 12 threads, and most of them only occasionally, why bother? A consumer-grade server is the way to go here unless your needs are exceptional. For a bad car analogy: we all want to drive a Lambo, but unless you actually race, it is a very poor investment to go grocery shopping in one. :slight_smile:

Second, given your modest storage requirements, I would actually suggest splitting this into two builds: one low-power file server and one more active homelab server, both small form factor. This saves you money overall.

First build:

PCPartPicker Part List

This should allow you to get started, and the power consumption is just amazing on this thing: idling at 10 W, with full load across all six drives spiking that to 25 W. If you need more juice than that, for whatever reason, well, here is what I would build as the second build:

PCPartPicker Part List

Type           Item                                        Price
CPU            AMD Ryzen 9 7900                            $409.00
Motherboard    Gigabyte B650I AORUS ULTRA                  $259.99
Memory         G.Skill Flare X5 2x32 GB DDR5-6000 CL30     $169.99
Storage        Kingston KC3000 1TB M.2-2280 PCIe 4.0       $42.13
Case           Jonsbo T8 PLUS Mini ITX Desktop Case        $79.99
Power Supply   SeaSonic FOCUS Plus 650 Gold                $89.99
Total                                                      $1051.09

Now, I realize this puts me at the top of your budget once you add the file server. You could go for a Ryzen 7700 or 7600 and save $100-$200, and you could pick a bigger case that allows a cheaper motherboard - though this case should still fit a 6700 XT or similarly sized card. The power envelope is higher too, with this config idling at around 30-40 W and pulling around 100 W at full load… before the GPU.

I feel like this is a good compromise that will allow you to run whatever you want on the server, and then your storage is just your storage, period. No need for racks or anything fancier. You could combine this into a single server too, but having the option to just switch off the beefy server when not in use, and wake it over the LAN when needed, is a good thing.
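Waking it over the LAN is trivial to script, by the way. A minimal sketch (the MAC address is a placeholder for your server's NIC, and WoL has to be enabled in the BIOS and on the NIC):

```python
import socket

def wake_on_lan(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send a WoL 'magic packet': 6 bytes of 0xFF followed by the MAC repeated 16 times."""
    payload = bytes.fromhex("FF" * 6) + bytes.fromhex(mac.replace(":", "")) * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, (broadcast, port))

wake_on_lan("aa:bb:cc:dd:ee:ff")  # placeholder MAC of the beefy server
```

Run that from the always-on file server (or even a cron job) and the big box only draws power when you actually need it.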

Hope that helps a bit!

3 Likes

Without breaking the bank, you might want to look at AMD's AM5 platform: one of the ProArt series motherboards, depending on your needs, paired with ECC memory. I don't see why you'd go for an ASRock Rack mobo that appears to receive little if any aftermarket support.

The 7800X3D has lowish power consumption, and you can configure its TDP. They're tuned well past the point of diminishing returns, so once you lower the TDP, your performance doesn't suffer too much.

Compiling REALLY likes V-Cache, and so will game servers like Minecraft.

I don't think ECC would be super important to you, but that's up to you. I would get the new odd-sized 48 GB DIMMs.

If you're gonna mess with AI, then the CPU doesn't matter a bit; it's your GPU that matters. AMD is cheaper and more powerful but… everything just works, like out of the box, with NVIDIA.
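To illustrate the "just works" part: with NVIDIA, the usual ML stacks see the card with zero fuss. A quick sanity check, assuming PyTorch with CUDA support is installed:

```python
import torch

# True if the driver + CUDA runtime are visible to the framework
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. the 4060/4070 mentioned above
```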

Welcome to the forum!

I agree with most of the sentiment here. Low power consumption means consumer hardware. I'd say you go one step further and split the computing, for different reasons.

The NVR can be something like an ODROID H3(+) with the HDD case. Get 2x 4 or 8 TB disks (depending on your camera resolution and how much/how long you want to store the footage).

For the server build, I'd agree on the Ryzen X3D variants. If on a budget, grab at least a 7600 (although 8 cores would be a sweet spot for the amount of things you seem to want to run).

The only thing server motherboards have going for them is IPMI, which you can emulate with Pi-KVM anyway. But personally I'd go for remote software solutions, like RustDesk (which I haven't tried) or VNC / RDP. Remote coding on such a platform should be fine. If you need a bit more oomph (lower latency), go with something like Moonlight / Sunshine (if you do GPU passthrough). Thing is, if you don't combine the Stable Diffusion and gaming VMs, you'll need more than one GPU, or you'd need to stop one VM and start the other one that uses the GPU. But you can plan for a VFIO build, so make sure you get a consumer motherboard with decent IOMMU groups.
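A quick way to vet a board before committing to passthrough: boot any live Linux with the IOMMU enabled in the BIOS and dump the groups. A rough sketch of the usual loop, in Python:

```python
from pathlib import Path
import subprocess

# Print every IOMMU group and the PCI devices in it. Devices sharing a group
# must be passed to a VM together, so you want the GPU isolated in its own group.
groups = Path("/sys/kernel/iommu_groups")
for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    for dev in (group / "devices").iterdir():
        desc = subprocess.run(["lspci", "-nns", dev.name],
                              capture_output=True, text=True).stdout.strip()
        print(f"group {group.name}: {desc}")
```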

My servers at home are mostly ARM SBCs (my requirements are small), and I'm planning an x86 server build around the ODROID H3+ that I own, for microVMs.

This is a weird but very common way of being wrong. You are going to end up cobbling something together that almost does what you want, for nearly the same amount of power, without all of the convenience and benefit of an actual server.

Look, I have been in IT for 20+ years and a homelabber for at least as long. I was using thin clients for home stuff since before that was a thing. Low power, quiet, all the way up to server racks and custom mining gear.

Then please give a suggestion for a proper low-power server that works in a SOHO setting, where a server rack is rare and noise is a thing. In my experience you pay at least $1000 extra for the motherboard and CPU alone, and that is at the same core count.

I have nothing against EPYC, Xeon, etc.; I just don't see why you would ever want one in your living room.

4 Likes

128 lanes, and thus more than one full PCIe slot, more memory channels… server-exclusive CPU features (things like QAT come to mind). There is a huge difference; it just depends on whether you need that difference. Expansion is a thing for a homelab. An extra NIC along with an HBA isn't easy to do on consumer boards, and if you need a GPU as well… you're totally out of luck.

The best consumer boards suited for server use have an x8/x8 split option for their two slots, but these features are only available on a handful of boards, most of them more expensive than a fully-fledged server board with 128 lanes.
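To put numbers on it, here's a back-of-envelope lane budget for a typical homelab wishlist (the device mix and per-device lane counts are illustrative):

```python
# Rough lane budget: what a "storage + GPU + fast NIC" homelab wants
devices = {
    "GPU (x16 slot)": 16,
    "HBA (x8)": 8,
    "10GbE NIC (x4)": 4,
    "3x NVMe (x4 each)": 12,
}
needed = sum(devices.values())  # 40 lanes
print(f"lanes needed: {needed}")
print(f"AM5 consumer CPU: 28 lanes -> short by {needed - 28}")
print(f"EPYC SP3: 128 lanes -> {128 - needed} lanes to spare")
```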

This is an intentional pattern, a matter of market segmentation. Those limits are real and won't change. If you want to do more stuff, you have to buy the expensive CPUs.

Depends on the system you are building. Just because it's a server CPU and board doesn't mean you have to use 10k RPM fans. Standard cases and aftermarket coolers work much the same as on desktop, and if your gaming PC isn't sitting in a 1U chassis, you probably don't use 10k RPM fans either. A tower/ITX server form factor works great depending on what you want to do. Even Wendell has put dual Xeons in a Fractal Torrent, and that case is a beast at dissipating even 700 W+ at a reasonable noise level.

2 Likes

I will go with the proverbial 'it depends' :slight_smile:
The variables I usually consider, even before setting the requirements with my workloads, are:

  • Economic budget (in this case, 1-2k)
  • Power budget (in this case, we do not know; low to me is 20 W, but it may be 400 W to someone else…) - let's assume 100 W plus storage
  • Space (4-9U rackmount)
  • Noise (in this case, we do not know; assuming normal to medium noise, no datacenter screamers allowed)
  • The most important: time/experience to dedicate to the project; I'll assume medium experience and infinite time (this is L1Techs :slight_smile:)
  • Propensity to try bleeding-edge stuff (i.e. am I going to follow the canon, or am I willing to 'experiment'?): I'll assume some willingness to go bleeding edge
  • Expected performance of the storage/NAS: you said SSD for storage but did not specify network speed, so I'm assuming 1 Gbit/s - not really much for network transfers (see the quick conversion below) - and 500-600 MB/s for sequential VM loads
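The quick conversion behind that network assumption (decimal units, ignoring protocol overhead):

```python
link_gbit = 1.0
link_mb_s = link_gbit * 1000 / 8  # 1 Gbit/s ~ 125 MB/s on the wire
ssd_mb_s = 550                    # typical SATA SSD sequential read
print(f"{link_gbit} Gbit/s ~ {link_mb_s:.0f} MB/s")
print(f"one SATA SSD ({ssd_mb_s} MB/s) outruns the link {ssd_mb_s / link_mb_s:.1f}x over")
```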

Only then do I start considering the requirements and deciding how many servers I need; that will be a combination of noise/space/money/power and, most important of all, the ports and PCIe lanes needed.

In our case, I am counting:

  • Storage: 10 TB of SSD, 12 TB raw (assuming RAIDZ1): 12 PCIe lanes if NVMe, 4 PCIe lanes if SATA
  • RAM: 64 GB, TBD whether normal or ECC
  • Transcoding GPU: anything from a 4060 up, 16 PCIe lanes
  • Dedicated GPU for VM passthrough: assuming not needed (no VFIO; graphics workloads running on the host OS)

For this use case and a standard, almost off-the-shelf approach, both server and consumer components make sense. The main deciding factor between one or two servers is whether you want your GPU always powered on and, if you go with ECC RAM, how much extra power that draws (4x 16 GB ECC sticks will easily draw 20 W by themselves).
The server option is an all-in-one solution, easier to maintain and with options for upgrades; the consumer option will already be maxed out on PCIe lanes/SATA ports if you go with a single-server solution.
The server option will have an IPMI/VGA port that lets you pass the GPU to a VM without any issue; the consumer option will not, unless you get an APU (more power) and fiddle with that setup a lot.
Cost will be a factor: because of the low power budget, an all-rackmount server solution is not an option unless you go with modern and very expensive gear.
An EPYC Rome build would get you a lot of power, ports, and connectivity for around 130 W at idle (counting ECC RAM, 10 Gbit networking, and one GPU), and on a reasonable budget, but that may still be too much… you will need to do some deciding and planning…

Given Unraid as a possible OS (very limited options for customizing it beyond what the standard UI and the plugin engine provide), I would try to stick with a proven working config as well…

1 Like

Used server gear is usually a drastic cost saving. It can be on par with or cheaper than new desktop gear, and it will include the missing features you didn't know you wanted, like 2x to 4x more PCIe lanes, IPMI, and registered ECC support.

A 7252 EPYC is about the sweet spot at the moment, and a system with a boot drive, a gold-rated PSU, and one stick of RAM in each channel will idle under 100 W. That includes IPMI and maybe 10 Gb networking; all that stuff is an add-on if you go with desktop hardware. Not to mention the kludge of getting ECC in any form on a desktop board.

1 Like

If you don't need horsepower on the CPU and are more focused on peripherals when it comes to features and performance, this is the way to go. And 8 Zen 2 cores are plenty for the average homelab workload, with the option of upgrading to up to 64 cores and populating the remaining memory banks if necessary. All while having the slots to plug in 100 GbE NICs and HBAs left, right, and center.

The CPU will hit its limits with a VFIO gaming VM, game servers, Ceph, heavy compression, or other CPU-intensive services, or with single-threaded services that want high clock speeds. That isn't the norm, however; most of my home server services just generate a low level of background noise on the CPU meter.

Maybe; my homelab is still on a 7351P (more, but even slower, cores) and it runs a fair number of game servers and does transcoding for Emby, all at the same time.

Honestly, I run out of bandwidth on my 1 Gig home internet, or occasionally run out of internal network bandwidth if a bunch of people are also streaming internally at the same time, well before I ever run out of CPU.

If you go with EPYC and notice 'CPU performance issues' that seem out of place, it is most likely actually memory-related. GOOD memory in the correct channel and bank configuration is beyond important on EPYC. That's true of all AMD platforms, really, but due to the design it is even more noticeable on EPYC.

1 Like

Instead of paying the power cost of an entire GPU, consider that an EPYC 7262 draws only a bit more wattage, is the ideal core configuration for utilizing an 8-channel RAM setup, and has a small MHz bump over the 7252. I would have no issue recommending this CPU for stream-server transcoding with NO GPU added; that would be a net gain in both power and cost savings.
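As a sketch of what the no-GPU route looks like in practice, a plain software transcode is just this (file names are placeholders; assumes ffmpeg built with libx265):

```python
import subprocess

# CPU-only transcode: H.264 source to HEVC, audio passed through untouched.
# On a many-core EPYC, libx265 will happily eat all the threads you give it.
subprocess.run([
    "ffmpeg", "-i", "input.mkv",
    "-c:v", "libx265", "-preset", "veryfast", "-crf", "24",
    "-c:a", "copy",
    "output.mkv",
], check=True)
```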

1 Like

I think this is a great point. GPU idle power draw adds up and can be tremendous depending on the model. If the CPU can handle the transcoding just fine, you're saving yourself a slot and a lot of power in the long run. An extra 50-100 W of boosted CPU for minutes or an hour is nothing compared to 10-20 W running 24/7 for possibly years.

I've done my stuff on my 12-core Ryzen server. Super power-efficient at 65 W TDP, and 12 Ryzen cores won't disappoint you no matter how low the TDP is.

Yeah, because that is definitely not breaking the 100W budget.

Old server gear is fine if you can afford the electricity, but the efficiency gains of the last 5 years have been amazing. We are talking more than quadruple the perf/watt.

My two suggestions above give 8 TB of redundant SSD storage below 100 W, with 12 cores to play around with, and although they are split over two systems, together they take up less space than 2U of rack, for ~$2000.

This is a very valid point, but at the same time:

CPU                       EPYC 7262          Ryzen 9 7900
Cores                     8                  12
Threads                   16                 24
TDP                       155 W              65 W
Base clock                3.2 GHz            3.7 GHz
Boost clock (one core)    3.4 GHz            5.4 GHz
Boost clock (all cores)   3.4 GHz            4.7 GHz
L1 cache                  512 kB             768 kB
L2 cache                  4 MB               12 MB
L3 cache                  128 MB             64 MB
PCIe lanes                128                28
PCIe version              4.0                5.0
Memory type               DDR4 @ 3200 MT/s   DDR5 @ 5200 MT/s

Add to this that the EPYC 7262 has no iGPU while the Ryzen 9 7900 does, with hardware decode and encode support for pretty much everything except AV1 encode/decode and VC-1 encode, and I think the 7900 takes this fight home quite handily.
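For what it's worth, tapping that iGPU encoder from Linux is a one-liner too. A hedged sketch via VAAPI (the render node path /dev/dri/renderD128 is the usual default, not guaranteed on every setup; file names are placeholders):

```python
import subprocess

# Hardware transcode on the Ryzen iGPU through VAAPI:
# decode, upload to a GPU surface, HEVC-encode; audio copied through.
subprocess.run([
    "ffmpeg", "-vaapi_device", "/dev/dri/renderD128",
    "-i", "input.mkv",
    "-vf", "format=nv12,hwupload",
    "-c:v", "hevc_vaapi", "-qp", "24",
    "-c:a", "copy",
    "output.mkv",
], check=True)
```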

Pretty much the only three things lacking above are IPMI, the 100 extra PCIe lanes, and ECC RAM; all three can be had with a server motherboard for the 7900, but that is going to cost you an extra $300-$500 at the very least.

Again, EPYC is great for big iron; SOHO is a completely different market. YMMV.

2 Likes