Low power server

Running 3 PCIe cards in a consumer mobo causes headaches, but is quite possible and does not yet necessitate the upgrade to Threadripper & co.

It requires very careful selection of mobo, though. Look for one that offers two 16x slots that can operate at 16/0 or 8/8 (with a third 16x physical/4x electrical slot).
Look for bifurcation support: 16x, 8x/8x, 8x/4x/4x, 4x/4x/4x/4x
Use add-in cards that utilize said bifurcation capabilities (Option 1, Option 2, Option 3)

This way you actually utilize all the connectivity of your consumer platform without having to pay for capabilities that are unquestionably awesome, but maybe not necessary.

I use LSI SAS cards, 40Gbit dual-port NICs, M.2 and U.2/U.3 SSDs in my consumer mobos as needed and desired.

To stay in the context of this thread I should note that powerful hardware consumes a lot of … power.

In the end, running a power-sipping home server is an exercise in limiting the hardware to just what you need.

But the hardware won’t magically warp to PCIe 5 or 6. Just to give you an impression on NICs: 25Gbit cards are usually PCIe 3.0 x8. 100Gbit NICs are PCIe 3.0 x16 or PCIe 4.0. To actually use PCIe 5.0, you need to buy a 400Gbit NIC.

Same goes for HBAs and most other server stuff. Lanes > PCIe standard.
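
To put rough numbers on the lanes-versus-generation point, here is a quick back-of-the-envelope sketch (my own figures: the per-lane rates account for PCIe 128b/130b encoding overhead, and the NIC rates ignore protocol overhead):

```python
# Rough usable throughput per PCIe lane in GB/s, after encoding overhead
# (128b/130b for gen 3 and later).
GB_PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}

def slot_gb(gen: int, lanes: int) -> float:
    return GB_PER_LANE[gen] * lanes

def nic_gb(gbit: int) -> float:
    return gbit / 8  # line rate in GB/s

for gbit, gen, lanes in [(25, 3, 8), (100, 4, 16), (400, 5, 16)]:
    print(f"{gbit:>3} Gbit NIC moves ~{nic_gb(gbit):5.1f} GB/s; "
          f"PCIe {gen}.0 x{lanes} offers ~{slot_gb(gen, lanes):5.1f} GB/s")
```

A 25Gbit NIC fits comfortably in a 3.0 x8 slot, and even a 400Gbit NIC only just justifies PCIe 5.0 x16, which is why lane count matters long before the newest PCIe generation does.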

For peace of mind, ECC always works. And older-generation memory is dirt cheap; I can see this being a major go-to factor. I’m personally OK with ECC UDIMMs, but not having registered memory is always a downside when I’m weighing the arguments.

It’s quite the job to figure this out. For my use, I narrowed it down to a few boards, because I really need the “perfect” board layout to get the most out of it. I really like the B650 ProArt from ASUS, which is really low-priced for what it can do. And there’s its big brother with 10GbE if you want an upgrade and need the 10GbE.

But digging through datasheets (because every store and manufacturer website almost always states “3x x16 slots”) is a very tedious and anti-consumer endeavor. I’m glad I’ve got the L1T Forums, with people who don’t fall for every marketing pitfall and who share their experience.


That is a very important point. I’d love to get all the bandwidth of the 24 PCIe 5.0 lanes the AM5 platform offers nicely broken down into 48 PCIe 4.0 lanes - ideally into six x8 slots.
Paying for PCIe 5.0 support doesn’t make a lot of sense to me today; there are basically no consumer-accessible PCIe 5.0 add-in cards available.
I was so looking forward to AM5 until I saw that it doesn’t give me anything but added CPU power that I cannot feed (with data from sufficient PCIe lanes/slots).

Tell me about it.
At AM5 launch the situation was dire in this regard (much worse than AM4). The manufacturers have silently added better options since then.


https://www.asrockrack.com/general/productdetail.asp?Model=EPYC3451D4U-2L2T2O8R#Specifications

actually that would be ideal considering the built-in 10Gb, and storage, plus all the PCIe slots. AND a low(ish) power CPU.

that COST though.

I really like the dual SFP+ AND dual Intel X710 10G copper NICs. I don’t think I’ve ever seen that combination before. 5x OCuLink for 5x NVMe, only 1 or 2 slots because of the form factor, and quad-channel RDIMMs. A shame that all the embedded stuff is either ancient and expensive, or fairly recent and overly expensive.

Great all-round homeserver board with a fairly low power footprint. Also allows for a forbidden router and all kinds of shenanigans in a really small package. I don’t think it’s worth the price tag, but I really like the package as a whole. Ticks a lot of boxes at the same time.

the convenience and completeness ALMOST make up for the cost. I have thought hard about this purchase multiple times over the last 2 years, never pulled the trigger though.


It is available on some of the Siena boards, i.e. EPYC 8xx4.
Dual 10G with Intel X710:
https://www.asrockrack.com/general/productdetail.asp?Model=SIENAD8-2L2T#Specifications

The micro-ATX does dual SFP28:
https://www.asrockrack.com/general/productdetail.asp?Model=SIENAD8UD-2L2Q#Specifications

These CPUs start at $410 for 8 cores and 90 W max TDP. They are low-cost Genoa EPYCs with cut-down IO dies.

My Genoa EPYC 9124 uses 60 watts just on the IO die, even when the CPU dies are using under 2 watts.


Makes me feel like @Zedicus right now. That is some nice, juicy piece of hardware. And the Siena CPUs (8004 series) are really cheap compared to Genoa. The 16-core Siena isn’t that much more money than a 7950X.

I need a better job, now!

The whole Siena line is much slower than Ryzen though; max clock speed is 3.1 GHz.

https://www.amd.com/content/dam/amd/en/documents/products/epyc/amd-epyc-8004-series-processors-datasheet.pdf

My impression is that Siena can compete with used EPYC 7000-series.

My CPU is the slowest Genoa, which has 16 cores and a max clock of 3.7 GHz.

My Windows Geekbench score is 1920 single-threaded.

The 7000-series Threadrippers are going to be crazy fast, with a max clock of 5.3 GHz, and a minimum price for the CPU alone of $1,499.

I know an argumentum ad populum when I see one, and mine ain’t one. I always take into account the scenario for which someone is asking about certain hardware, and in this case, server hardware for this homelab seems completely counterproductive.

Context matters. I guess I formulated my response wrong. Of course there are others; I haven’t mentioned things like ECC support (at least as far as DDR5 is concerned, where the RAM slots are different) and, like you mentioned, other features, plus quite a few more. The reason I said only IPMI is because I was referring to the AM4 and AM5 ASRock Rack motherboards that OP mentioned. Sure, you can run DDR4 ECC on older Ryzen CPUs, but they’re still consumer CPUs without the pricey server CPU features.

For most home servers like the one specified above, you can easily get away with a consumer ATX motherboard.

And the cost of running that 24/7 (remember OP needs an NVR) will make it so that within 6 months you’d have been better off buying consumer hardware that uses as much power at full peak as an older server does at idle. I’m exaggerating a bit, but in my case I got an older Xeon build for free and it doubled my power bill (I used to keep all my systems on 24/7, but I had to stop doing that with this thing and only power it on when I felt like it).

An Odroid H3 will idle at 10W with 2x HDDs, almost 1/10 the idle consumption of an EPYC 7252 server. For the sake of argument, let’s say that the Pentium N6005 in the H3+ is so weak that it needs to run full tilt all the time, and that the cameras record 24/7 rather than in bursts on motion detection. We’re still talking maybe 55W for the full package at full tilt, compared to 100W at idle. There’s a reason most DVRs and NVRs ship with a really low power CPU (I have a DVR myself; the 6 cameras combined use more power than the DVR, at least at night when they’re blasting the IR lights).
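
To put a price on that idle gap, a quick sketch; the 0.30 $/kWh rate is my assumption, substitute your own tariff:

```python
# Yearly electricity cost at a constant draw; 0.30 $/kWh is an assumed rate.
PRICE_PER_KWH = 0.30
HOURS_PER_YEAR = 24 * 365

def yearly_cost(watts: float) -> float:
    return watts / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH

for watts in (10, 55, 100):
    print(f"{watts:>3} W around the clock -> ${yearly_cost(watts):6.2f}/year")

# The 100 W idle server vs the 10 W Odroid is ~$236/year at this rate.
print(f"delta: ${yearly_cost(100) - yearly_cost(10):.2f}/year")
```

At that rate the idle difference alone can pay for a small low-power box within a couple of years.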

I agree with you and Exard about GPU idle power, which adds a lot, but OP needs a GPU not just for gaming but also for Stable Diffusion. Maybe it’s worth splitting that into a separate build? Not sure; doing a few very high-end builds will increase the costs too much for the power bill savings alone to make up for it, which is why I’d still suggest an 8-core Ryzen X3D with the GPU. Again, context matters, and OP is looking for a mostly desktop workload, with some minimal things that are generally attributed to servers, like running 2 VMs and a few containers.

Highly dependent on where you live and how expensive electricity is. Both where I lived before and where I am now, electricity isn’t cheap.

For a business, or even a professional making money by running the machine, going ECC is understandable and I would encourage it too. But for a home lab, where no money is lost to downtime, going with more expensive hardware just for the redundancy and other protection features becomes questionable.

And over the years, electricity goes up in price anyway, unless you’re lucky enough to live in a place that just got a cheap fuel source and a new power plant built nearby.


The homelab of today is starting to look way different than the data center of yesteryear. Nowadays I see more people, myself included, going to great lengths to avoid full-on virtualization and instead running containers that are not live-migratable, but use way less power (and are way jankier; just check out the struggle to add an NFS share in a container).
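
For the curious, that NFS dance usually goes through the container engine’s volume driver rather than a plain mount inside the container. A sketch with the Docker SDK for Python; the server address 192.168.1.50 and the export path are placeholders:

```python
import docker  # pip install docker

client = docker.from_env()

# Create an NFS-backed named volume via the "local" driver.
# 192.168.1.50:/export/media is a placeholder for your NAS export.
client.volumes.create(
    name="media-nfs",
    driver="local",
    driver_opts={
        "type": "nfs",
        "o": "addr=192.168.1.50,rw,nfsvers=4",
        "device": ":/export/media",
    },
)

# Any container can now mount the share by volume name.
out = client.containers.run(
    "alpine",
    command=["ls", "/media"],
    volumes={"media-nfs": {"bind": "/media", "mode": "rw"}},
    remove=True,
)
print(out.decode())
```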

And microVM technologies like Firecracker allow you to skip most of the virtualization overhead for Linux VMs and allocate only a subset of the hardware to them, which is all that most services need. Full-blown VMs in the homelab are mostly used for running other OSes (Windows and the BSDs, or others that you can’t easily run as a microVM).
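
To illustrate the “subset of hardware” point, here is a minimal Firecracker sketch; it assumes you already have a guest kernel (“vmlinux”) and a rootfs image on disk, both placeholders here:

```python
import json
import subprocess

# Minimal Firecracker microVM definition. "vmlinux" (an uncompressed guest
# kernel) and "rootfs.ext4" are placeholders you must provide yourself.
vm_config = {
    "boot-source": {
        "kernel_image_path": "vmlinux",
        "boot_args": "console=ttyS0 reboot=k panic=1",
    },
    "drives": [{
        "drive_id": "rootfs",
        "path_on_host": "rootfs.ext4",
        "is_root_device": True,
        "is_read_only": False,
    }],
    # The whole point: hand the service a sliver of the host, nothing more.
    "machine-config": {"vcpu_count": 1, "mem_size_mib": 256},
}

with open("vm_config.json", "w") as f:
    json.dump(vm_config, f, indent=2)

# --no-api boots straight from the config file instead of waiting for
# configuration calls on the API socket.
subprocess.run(["firecracker", "--no-api", "--config-file", "vm_config.json"])
```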

These squeeze more utility out of low-end consumer hardware. You can get the same benefits out of server-grade hardware (arguably even more, because servers can run more things, so the aggregate virtualization overhead is larger and avoiding it saves more performance). However, servers in the homelab are mostly going to idle. Even consumer CPUs are mostly going to idle, which is why I see more appeal in lower power consumption and in skipping features that are very nice to have, like ECC and lots of expansion.


Adding to the final part: when you’ve got enterprise hardware in a homelab, your chances of going the extra mile to run microVMs are very low. When your hardware is sitting idle, you aren’t going to make an effort to have it idle even more, which is why most people with enterprise gear will just run full-blown VMs anyway.

It competes with CPUs at a similar price tag. With Siena you don’t need to get “old” EPYC Rome. You get equal or better performance, much better power efficiency, and PCIe 5.0, CXL, DDR5 and all those nice new goodies, without breaking the bank when buying the CPU.

If you need the clock speeds, get TR or Genoa (F-SKUs).

Modern EPYC on a budget is within reasonable homelab territory now. Most stuff just doesn’t need high clocks, but does need lanes, lots of RDIMMs, IPMI and such.

An 8-core server CPU with 70W TDP and a $400ish price tag, plus unlocking a state-of-the-art server board? My stuff scales linearly with threads and cores; clock speed is a nice bonus.

Great price tag.

I missed the Siena launch last month, but this is very appealing to me and I think we’ll see a bunch of homelab stuff around this.


And now for something completely different.

My daily driver is an M2 MacBook Air, 24GB/2TB. I too don’t want a PC running all the time, making heat and noise.

Apple charges a crazy amount for RAM, but the SSD they provide and use for virtual memory (512GB and larger) has more throughput than one channel of DDR4.

The max power I have seen it consume is 12 to 17 watts.
When generating Stable Diffusion images using the “Draw Things” app, while running a few thousand tabs and playing 3D games, it usually uses 12 watts of power.

The M2 has 2 channels of LPDDR5.
The M2 Pro has 4 channels of LPDDR5.
The M2 Max has 8 channels of LPDDR5.
The M2 Ultra has 16 channels of LPDDR5.

You can get a Mac mini with built-in 10Gb Ethernet and then use several Thunderbolt SSDs for your storage. You can virtualize other operating systems via a number of methods, as long as the guest OS uses the ARM instruction set.
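
One of those methods is plain QEMU on top of Apple’s Hypervisor.framework (UTM wraps roughly the same thing in a GUI). A sketch; the firmware path is where Homebrew puts QEMU’s EDK2 build, and guest.qcow2 is a placeholder disk image:

```python
import subprocess

# Boot an ARM Linux guest on Apple silicon using QEMU's hvf accelerator
# (Hypervisor.framework, i.e. no instruction emulation). Adjust the
# firmware path for your install; guest.qcow2 is a placeholder.
subprocess.run([
    "qemu-system-aarch64",
    "-machine", "virt",
    "-accel", "hvf",
    "-cpu", "host",
    "-smp", "4",
    "-m", "4096",
    "-drive", "if=pflash,format=raw,readonly=on,"
              "file=/opt/homebrew/share/qemu/edk2-aarch64-code.fd",
    "-drive", "file=guest.qcow2,if=virtio",
    "-nographic",
])
```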

Making images in Stable Diffusion with Draw Things and its 10 GPU cores, I think it is about 1/10 the speed of an RTX 4070, but power-wise it still comes out ahead.

The laptop doesn’t have those advantages, but at this point everything is local, and I back up to an NVMe M.2 SSD via 10Gbps USB4.

I am going to put a Time Machine drive in the server, but that is a future project.

If you back up to a mechanical hard drive with Time Machine (the built-in backup software), the hard drive only spins up while in use. Time Machine can run hourly, daily, or weekly, I think. I run it daily.

Wow! Thanks for the answers and advice, I didn’t expect that so many people would be here!
I forgot to mention that I have an old (not that old) 800W power supply for my PC, so I will go for an ATX build.
For the server case, I will surely take a 4U in case I need a graphics card in the future. For the whole rack I will take a 9U.

I have read all of your comments (I didn’t say I understood them all :sweat_smile:)

Yesterday, I looked at what I could choose and ended up with this config:
CPU: Intel Core i7-13700 (65W)
Fan: Noctua NH-D9L (don’t know if necessary)
Motherboard: ASUS PRIME H770-PLUS ATX LGA1700
RAM: Corsair Vengeance 64 GB (2 x 32 GB); a few people mentioned tbd, what is that?
Storage: I hesitate between SSDs or WD Blue or a mix of them

Now I see that most of your answers push me towards AMD. I’m not against it, but I have seen a lot of comments on their inefficiency with transcoding. So I assumed Intel would be better for Plex and the NVR; am I wrong?

For the RAM, the final word is: do I need ECC? Because it’s more expensive and it consumes more.

Many people have argued between taking an EPYC or a 7900. I have not been able to understand the pros/cons between the two (EPYC consumes much more but offers certain functions?). For you, is Intel out of the game for my config?

“If you’re gonna mess with AI then the CPU doesn’t matter a bit, it’s your GPU that matters”: I’m actually doing it on my personal computer on an AMD Radeon 7900 XTX (yeah I know, it would be better with an Nvidia card). I think I will wait before doing AI on my server ^^

“Expected performance of the storage/NAS: you said SSD for the storage, did not specify network speed, assuming 1Gbit/s, so not really much for network transfers, 500-600MB/s for sequential VM loads”: my network speed is 1Gb (my switch limits the 10Gb my cabling could do).

Am I wrong to say that, if I take a GPU, it will be used by only one application at a time?

I have seen some comments about separating the NVR part from the rest. If I don’t take a GPU, I will draw less power. I have services (nothing big, they currently run on a Raspberry Pi 4 8GB) that will run permanently, and I will add more. Will I gain that much (power/price) by building a NVR, or by buying one separate from the server?

Thank you for the answers; do not hesitate to criticize and propose configurations, knowing that I still have a server case (4U) and a rack (9U) to buy within the budget :slight_smile:

Have a great day !

Boy, this is going to cause another 200 replies :slight_smile:
You don’t need anything, unless you decide you do.
ECC draws more power but gives you error correction, so depending on what you are running as services you may want the additional stability/peace of mind.
The chipset you chose does not support it, so you don’t need to worry …

EPYC is a server platform. It takes from you:

  • Money
  • Power
  • High single-core clocks

It gives you:

  • PCIe lanes/slots (128 vs 24-28) that you probably don’t need at the moment
  • scalability (up to 192 cores) that you probably don’t need
  • 10Gbit on the mobo (you won’t be able to get it on your setup unless you go crazy with x1 cards or NVMe adapters)
  • stability
  • ECC support (actually required)
  • IPMI

That brings down the power requirements and the PCIe lane requirements a lot, so low-power AMD or Intel consumer hardware definitely looks like the best option for your use case.

Only you can answer this question … an NVR (and/or a NAS) usually requires 24/7 operation, and you usually do not want it to go down randomly. So if you are not disciplined and you only have the one server, you will probably bring it down because you were just ‘trying out’ something with a new kernel, or a new VM, or a new config, and then you’ll need to scramble to bring it back up …


Intel is better for transcoding at the moment, but this might change. AMD currently has the perf/watt crown for general computing though; the power draw of that 13700 is between 60W-210W, while the AMD 7900 is between 20W-90W.

As for EPYC vs Ryzen… Depends on what you want to do with your server. In a home setting you get three distinct advantages that may or may not matter for your use case:

  • IPMI allows you to shut down or sleep your server and wake it up remotely, which can save quite a lot on your power bill
  • ECC, which will detect and correct memory errors and decrease file bitrot significantly on the device
  • A full set of seven physical x16 PCIe slots with x16 lanes each (at least in theory)

You are paying a premium for these three features, and none of them are that important TBH; not to mention there are a few server boards out there that have ECC and IPMI support for AM4 and AM5.
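
For reference, the remote power control from the first bullet above typically boils down to a couple of ipmitool calls. A sketch; the BMC address and credentials are placeholders:

```python
import subprocess

BMC = {"host": "192.0.2.10", "user": "admin", "password": "changeme"}  # placeholders

def ipmi(*args: str) -> str:
    """Run an ipmitool command against the BMC over the LANplus interface."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", BMC["host"],
           "-U", BMC["user"], "-P", BMC["password"], *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

print(ipmi("chassis", "power", "status"))  # e.g. "Chassis Power is off"
ipmi("chassis", "power", "on")             # wake the machine remotely
```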

Only if you lose significant amounts of money when data gets corrupted, which is unlikely in a home setting. If you get a bad RAM stick, ECC will light up like a Christmas tree, but if you verify your RAM is fine and periodically (every 6 months or so) run memtest, we are talking one cosmic-ray bitflip every 3-6 months or so.

Another way to think about it is that, as a rule of thumb, you will see one bitflip for every exabyte of data you load into your RAM. That is roughly 2^20 TB of data. Even if you did nothing but stream data through your RAM day in and day out at, say, a sustained 50 GB/s, loading an exabyte would take the better part of a year, and normally you do not do this with RAM, especially not on a home server with low usage.
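
The arithmetic behind that, as a sketch (both the one-flip-per-exabyte rate and the 50 GB/s sustained bandwidth are rule-of-thumb assumptions, not measurements):

```python
# Time to push one exabyte through RAM at a sustained 50 GB/s.
EXABYTE = 10**18        # bytes
BANDWIDTH = 50 * 10**9  # bytes per second, an optimistic dual-channel figure

seconds = EXABYTE / BANDWIDTH
print(f"{seconds / 86400:.0f} days to one statistically expected bitflip")
# -> ~231 days, and a lightly loaded home server moves far less data
```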

What you are saying is, if I go with AMD and later need to upgrade my CPU, I could have both good transcoding and low power draw?

On the transcoding standpoint: while AMD has gotten better with their AV1 encoder, it still performs poorly compared to both Intel’s QSV and Nvidia’s NVENC at lower bitrates.

H.264 is still extremely poor in all scenarios for AMD’s VCE, even with RDNA 3, which may be something you want to consider if you do a lot of encoding (see the VMAF results; choose QSV for Intel, VCE for AMD).

But encoding quality may not matter that much for NVR. In that case, AMD may be a better choice.

From the power consumption standpoint, the 13700, while still worse than the 7900 in terms of perf per watt, is not that drastic. The CPU has PL1=65W and can boost to PL2=219W for 56 seconds in all-core workloads. This is unlike the K variant (13700K), where the motherboard usually lets the CPU consume as much power as it can, which is usually beyond the point of diminishing returns (the non-K variant has these values locked).

Alternatively, you may also want to look at the 13700T, which is TDP-optimized (PL1=35W, PL2=106W).
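
On Linux you can inspect those PL1/PL2 values through the RAPL powercap interface. A sketch; note the sysfs zone index can vary between machines and reading may need root:

```python
from pathlib import Path

# Package-level RAPL domain: constraint 0 is the long-term limit (PL1),
# constraint 1 the short-term limit (PL2).
rapl = Path("/sys/class/powercap/intel-rapl:0")

for n in (0, 1):
    name = (rapl / f"constraint_{n}_name").read_text().strip()
    microwatts = int((rapl / f"constraint_{n}_power_limit_uw").read_text())
    print(f"{name}: {microwatts / 1_000_000:.0f} W")
# A locked non-K 13700 should report long_term: 65 W, short_term: 219 W
```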

Another thing you may want to consider is idle power consumption. Ryzen, while it has better performance per watt when doing a task, also has higher idle power consumption due to the IO die (this can be as high as 30W at idle versus Intel’s 10W). This may not be a problem if you intend to run a full VM and a few game servers all the time, though.

For the NVR, as I said, I suggest you split it off into another box. But don’t go with WD Blue or SSDs for it; they are likely to fail fast. Go with Skyhawks or Purples, which are surveillance HDDs designed for this use case. The GPU in the Odroid H3 has support for Intel Quick Sync, so I’d say it should be fine. But I noticed missing info on my part: I don’t know the cameras’ megapixel count, meaning that with 6 cameras it could be a slight struggle. I’m assuming they aren’t 4K-capable cameras.

For the question about using the GPU for multiple purposes at once: if you only go with a single server, you cannot use it for diffusion, gaming and NVR transcoding at the same time if each of them is in a separate container or VM on the host. Assuming you pass the GPU to a single VM that serves both gaming and Stable Diffusion, you might be able to (assuming it has the power to handle both), but then you’d also need the NVR software on your gaming VM.

There are other tricks, like SR-IOV, but let’s not get into that; it’s difficult and needs special enterprise cards. Best to just split the gaming/programming build from the NVR. This also gives you the ability to power off and reboot your server without affecting the NVR.

For a NVR build, you basically need a NAS build with transcoding capabilities and HDDs specifically made for surveillance footage. For the other build, it’s basically up to you. We don’t know what exact services and OSes you are going to run in the VMs and containers, but any 6-core CPU, with or without efficiency cores, should be able to handle that. The reason most people recommend AMD is the lower power consumption with basically similar performance, which makes for a good homelab server, unless you go really low-end for even lower power consumption (like ARM SBCs).


A quick-and-dirty NVR appliance for this is a Synology with its Surveillance Station. There are some caveats to using Synology parts, and as a non-user I am not fully aware of its issues; you should read up on it if you are interested, @haloremi. The fun part is that if you completely cut it off from the internet, or at least configure your Synology properly, you could use shady Hikvision cameras relatively safely and not inadvertently contribute to Chinese government espionage. But you already have cameras anyway.
