Best power-efficient platform for a home server?

My server today

  • Unraid OS
  • Asus X299 TUF MK1
  • Intel i7-7820X
  • 32GB DDR4-3600 (4 DIMMs)
  • Intel Optane 480GB PCIe
  • 14 HDDs, 1-18TB each (15-minute spindown), 66TB total
  • 8-port LSI HBA
  • No GPU
  • 18 always-on Docker containers / no VMs

On the same UPS (APC 700W line-interactive):

  • Old 3rd-gen Intel laptop with one NIC as router/pfSense
  • 5-port VLAN switch
  • Fibre media splitter

Living in southern Sweden, the price of electricity before 2019 was $0.07-0.15/kWh including taxes. For the last 2 years we have been paying $0.30+/kWh even in summer.
I put a watt-meter smart plug in front of the UPS to log power usage. I was reading about 170W idle with 3 HDDs spinning, the powersave governor, XMP on, turbo off, and pfSense PowerD set to minimum.
Now I have enabled all C-states on the server and run a Linux powertop tuning pass on Unraid, and it hovers around 118W (2 HDDs spinning) and 190W with all HDDs on. That is $400+ a year in electricity (at a median of 150W), or $2000 over 5 years.
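
For reference, a quick sanity check of that figure (a minimal sketch; the 150W median and $0.30/kWh are just the numbers quoted above):

```python
# Rough yearly electricity cost for a 24/7 server, using the figures quoted above.
MEDIAN_DRAW_W = 150        # median wall draw in watts
PRICE_PER_KWH = 0.30       # $/kWh, current southern Sweden price
HOURS_PER_YEAR = 24 * 365

kwh_per_year = MEDIAN_DRAW_W / 1000 * HOURS_PER_YEAR      # ~1314 kWh
cost_per_year = kwh_per_year * PRICE_PER_KWH              # ~$394
print(f"{kwh_per_year:.0f} kWh/year -> ${cost_per_year:.0f}/year, ${cost_per_year * 5:.0f} over 5 years")
```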

I know that X299 is a power-hungry platform and that I have too many (small) drives; I only want 18TB drives going forward. I'm going to invest about $2000+ this summer with the goal of halving power consumption. But there is a lack of benchmarks for choosing a good motherboard on Intel's new 12th-gen Z690 platform.
Ideally I would move pfSense into a VM. But I manually enter the encryption key on Unraid at every start, and I find it a horrible mess to get that working well.

My questions are:

1: I want to migrate to ZFS/TrueNAS. But now that power usage matters more than ever, I don't feel it's a good option for me. My server hits at least 2-3 HDDs all the time, so in my mind the whole ZFS pool would be spinning all the time, but I lack the ZFS knowledge to be certain of it. What do you think about ZFS vs Unraid in terms of power consumption? Is there a good layout where my Optane SSD takes most of the read and write hits so the disks don't spin up?

2: Which motherboard manufacturers (and vendors in general) are going all in on lowering power consumption, both in UEFI options and overall?

3: Should I buy a "T" version of an Intel 12th-gen CPU, or can I buy a "K" version and turn it into a "T"-ish version in UEFI?

4: Any 18TB disk recommendations with low power draw? I have a Seagate ST18000NM000J with no problems yet, but I've read it isn't reliable enough that I'd want to buy all my disks of that type.

5: DDR4 vs DDR5 power consumption? I want DDR5 for ECC, and my impression is that every DDR generation reduces power consumption. Is that right?

6: Any tips on how to run the VLAN/router in a VM while still being able to reach Unraid/TrueNAS when that VM isn't started? Or a good, secure remote way to supply the disk-encryption key so disks auto-mount, with proper user permissions, without the router/WAN being up? I can't figure that out.

Not the same as unregistered ECC DRAM.

118W from the wall for X299 with Optane + HBA + a few disks is pretty good.


Have you looked into EPYC boards? Most of the typical motherboard peripherals are on the CPU (SoC), which is good for power efficiency; there are even boards without a chipset.

I would probably separate the server into two systems: one NAS and one application server.

For the NAS I would go with a low-power SBC solution like the Helios64. It's a freakin' NAS, it doesn't need to be better than that! Unfortunately the Helios64 team folded last year; they might resume ops once the supply chain picks up the pace, but… Synology might be an alternative, though they are expensive and proprietary and love making you sell your firstborn to afford their shiznit.

For the other server, I would go with a 65W Ryzen like the 5700X or 5600G, coupled with a decent X570 motherboard, the Asus Crosshair VIII Hero perhaps? That should give you plenty of cores to play around with, though you'd need a 2U enclosure to fit it properly with all the M.2 SSDs.

This should give you an idle footprint of around 50-60W for both servers combined, with a roughly 200W footprint at full load.

2 Likes

There is no DDR5 with ECC yet

A Synology 1621+ or TrueNAS Mini with those Seagate 18TB drives you have should reduce your power consumption fairly well

1 Like

I thought DDR5 already had some measure of error correction built into its specification (hence the lack of ECC DDR5 modules)?

According to my sources on the Internet, DDR5's ECC is on-chip ECC. While that is good to have, regular ECC is bus-based (i.e. it makes sure the transfer from memory to CPU is OK).

So you do want both regardless, if it's super important that data doesn't get fucked up.

3 Likes

Is that another way of saying that DDR5 has only some sort of partial ECC support, whereas DDR4 has checks at several important points/steps (I don't know the correct terminology for this)?

Well, DDR4 ECC checks the bus for bit errors, while full DDR5 ECC checks both the chip and the bus. So DDR4 ECC is less complete than full DDR5 ECC, but standard DDR5 only gives you the on-chip part, without the bus-level checking that DDR4 ECC provides.

Personally, I think ECC is overhyped in either case. The risk of losing anything due to not having ECC is pretty minor for most people; it's nice to have, but the benefits do not currently outweigh the cost unless you are working with terabytes of important data transfers per day.

My 2014 Intel Atom C2550 has a 14W TDP, and pulls 50-60W at the wall outlet with 2 sticks of DDR3, a SATA SSD, a PCIe HBA, and 4x 5900rpm SATA drives.

It's old, but there are later models from Intel in roughly the same vertical segment, at least the C3000 platform. Then there is/was also Xeon-D, which used to run a bit hotter but still not so bad. These are usually SoCs where the CPU is integrated with the motherboard.

Though I’m not sure whether they kept up with making such low-power parts in the recent couple of years.

Although DDR≤4 plus memory-controller based ECC (which checks the data from the bus and is what we used to call simply "ECC" up to now) would also catch errors originating on the chip. So it's not really that DDR5 with its on-chip ECC plus future additional memory-controller based ECC would be more complete; rather, it would have redundant(*) ECC checks for data on the chip, whereas "normal" DDR5 with only on-chip ECC would still be incomplete. And present-day DDR4 ECC is complete in the sense that every aspect of the memory is checked at least once.

(*)Edit: Actually I’m far from sure there will be such redundancy. I have no intuition on whether server-grade DDR5 will add additional ECC bits, or simply expose the ECC bits sitting on the chip to the memory controller, letting it read and write those!

My understanding is that most memory errors occur as bit-flips on the chip, and therefore DDR5's on-chip ECC will catch most of them. And as hardware and software progress, it has become simply untenable not to have any ECC on consumer memory, given that the risk of bit-flips scales with the amount of data sitting on the chip over time, and the latter increases every year. Errors originating on the bus, on the other hand, scale with throughput, and are therefore more likely to happen in server and workstation applications.

The honorable gentleman Dr. Ian Cutress summed it up pretty well:

So yeah, "on-die ECC" is more of a marketing gag for DDR5. We'll see true ECC DDR5 memory once Sapphire Rapids from Intel and AMD's Zen 4 hit the market. Until then, there is no option but to take DDR4.

4 Likes

It is true that the whole pool is active to increase performance. If you have 4 drives, you usually want the performance of 4 drives combined. But if you have full load "all the time", you probably need that performance, and double the speed means only half the time spent active. And with ZFS you get caching of the most frequently/recently used data in the ARC, reducing HDD activity by a big margin. Most people don't recommend spinning down drives at all; the ~2W delta never justified all the disadvantages for me either, despite living in Germany, which seems to have a similar price/kWh.
The lowest amount of disk activity is achieved with a high cache hit rate, meaning lots of memory and/or an L2ARC. But you don't want Optane for that. Writes are different, but explaining what the log device (SLOG) is would be beyond the scope here.
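
If you want to see how well the ARC is already absorbing reads before deciding on more RAM or an L2ARC, you can check the hit rate. A minimal sketch, assuming OpenZFS on Linux where the kstat file /proc/spl/kstat/zfs/arcstats is exposed (arc_summary reports the same numbers if you have it installed):

```python
# Rough ARC hit-rate check from OpenZFS's kstat interface (Linux only).
# Assumes the zfs module is loaded and /proc/spl/kstat/zfs/arcstats exists.

def read_arcstats(path="/proc/spl/kstat/zfs/arcstats"):
    stats = {}
    with open(path) as f:
        for line in f.readlines()[2:]:          # skip the two kstat header lines
            name, _kind, value = line.split()
            stats[name] = int(value)
    return stats

stats = read_arcstats()
hits, misses = stats["hits"], stats["misses"]
print(f"ARC hit rate since boot: {100 * hits / (hits + misses):.1f}% "
      f"({hits} hits, {misses} misses)")
```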

Well, you want to disable features not in use, and you don't want a GPU. And if you need 10Gbit Ethernet, go fiber, not copper.

I recommend going AMD. You can save a bunch of money on the boards and can, for example, lower 8-core Ryzens down to a 45W TDP via the BIOS if you want. The AM4 platform also offers ECC memory support on selected boards and is generally more mature. Servers in general are about cost, power and reliability, not about cutting-edge single-thread performance.

All 16-18TB drives on the market are enterprise class, meaning the most bang for the buck, designed for 24/7 use in businesses. I'm running 6x 16TB Toshiba MG08 from 3 different batches (according to the serial numbers) and they have been spinning flawlessly and fast for 8-9 months now. I can recommend them unless noisy drives are a concern. Getting as few drives as possible and increasing TB/drive is best for power considerations.
Business doesn't prioritize power draw or noise, and there are no energy-saving HDDs in that bracket. If you want low-power storage, SSDs will be superior, and lines like WD Blue or the Samsung 980 make for very efficient consumer NVMe.
In "ages past" there were high-capacity 5400rpm drives, WD Green I believe? Those things don't really exist anymore. HDDs have one edge over flash and that is TB/$ in the server market. Consumer-class HDDs are basically legacy hardware in 2022.

edit: Oh, and check the efficiency of the PSU. If you have a 50-100W draw, you want the best efficiency at that load; for a 500W PSU, 100W is 20% load. High-efficiency PSUs are more expensive and might not justify the cost for the savings they provide.

If you pay €0.10 per kWh, every 10 kWh saved is a single euro.

Put another way: every 10W of extra idle draw on a system with 99.9% uptime costs around €0.72 per month if you pay €0.10 per kWh. That is €8.64 per year, and assuming you run that PSU for 10 years, ~€86 worth of savings for every 10W of idle draw you shave off.

An efficient low-power PSU around 300W would cost, what, €40? A €20 premium? It pays for itself in three years' time.
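
A minimal sketch of that payback math, using the €0.10/kWh rate and the 10W / €20 example figures from this post:

```python
# Payback estimate for a more efficient PSU (example figures from the post above).
PRICE_PER_KWH = 0.10                   # €/kWh
WATTS_SAVED = 10                       # assumed idle-draw reduction
PSU_PREMIUM = 20                       # € extra paid for the more efficient unit
HOURS_PER_YEAR = 24 * 30 * 12 * 0.999  # 99.9% uptime, 30-day months as in the post

saved_per_year = WATTS_SAVED / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH
print(f"~€{saved_per_year:.2f} saved per year, ~€{saved_per_year * 10:.0f} over 10 years, "
      f"vs a €{PSU_PREMIUM} premium")
```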

1 Like

If your goal is to keep the same level of performance capability and only lower the power draw, 2000 USD is plenty and then some, if you are prepared to accept that you don't need the latest generation of hardware.

I'd suggest you look at this Supermicro board:

CPUs for that board go as low as €80 for a 10-core unit.
It has 10 SATA ports (you could get rid of your LSI SAS controller if you used bigger hard drives), you could initially reuse your DDR4 RAM and eventually buy ECC, it comes with an IPMI port, and it supports plenty of expansion options like a 4x M.2 card in the x16 PCIe slot, plus 10Gbit networking.

Looking at the TrueNAS forum, there are plenty of users running it and reporting sub-70W idle loads with:

4x 8GB Kingston KVR1333D3E9SK2/16G
Supermicro X10SLL-F
Intel Xeon CPU E3-1241 v3
LSI MegaRAID 9207-8i
5x 10TB WD Red RaidZ1 backup pool for RaidZ2
1x 3TB backup pool for 1TB Evo 860
1x Intel 40GB SSD (boot)
ESXi 6.7
FreeNAS 11.1, pfSense

It really depends on how fixed you are on buying latest-generation hardware vs. very good previous-generation server-grade hardware that has seen some use in the past years…

Regarding all the comments above about ECC: I do know that it is not as good as "real" ECC. My question was more about whether it draws less power, because they lower the voltage to the RAM every generation. The fact that it has some ECC function is more of a bonus.

Small NAS systems are not what I want. Firstly, I don't want to lock myself into a brand that can go bust, as one of you said. I love DIY.
Also, I do a lot of video processing. An Intel CPU with an iGPU (AMD doesn't work well for x264) or x86 + Nvidia GPU is a must going forward; I'm archiving a lot of old videos to x265 to decrease their size, and doing that on the CPU is not power efficient at all. I know 12th-gen Intel iGPU video en/decoding isn't working that well at the moment, but it should improve with kernel updates. I'm not in a hurry.

So I feel that Unraid is still my best option.

I have a 3700X as a gaming PC. It's not a good idle machine at all; it draws a lot more than new Intel at idle. I put a Zigbee power-monitoring switch on it too: 90W with a 10-series Nvidia GPU (very low idle), 2 sticks of DDR4, Windows 11 on battery saver, a Platinum PSU and 2 NVMe SSDs, nothing more. Intel is power hungry at the top end but very, very low at idle in the new generation, while the old X299 platform is a wattage beast both ways, idle and under load. That much I know after hours of searching benchmarks. My gaming PC is on maybe 10-20 hours a week, so I don't care that much about its draw.
Then I want some Docker containers to run only on the efficiency cores, the transcoding app on the iGPU (once the Linux kernel is mature) plus some of the performance cores, and my Home Assistant and PostgreSQL database on a few performance cores.
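
For steering containers onto specific cores, Docker's cpuset pinning works. A minimal sketch using the Docker SDK for Python; the core IDs 16-19 are just an assumed E-core layout, so check yours with lscpu before copying it (the plain docker run equivalent is the --cpuset-cpus flag):

```python
# Sketch: pin a container to (assumed) E-cores only, via the Docker SDK for Python.
# Core numbering varies by CPU and firmware; verify P-core vs E-core IDs with `lscpu --extended`.
import docker

client = docker.from_env()
container = client.containers.run(
    "alpine",                 # placeholder image; substitute your actual container
    "sleep 3600",
    cpuset_cpus="16-19",      # assumed E-core IDs on a hypothetical 12th-gen layout
    detach=True,
)
print(container.name, "pinned to CPUs 16-19")
```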

I forgot to say that I have a Seasonic Prime Titanium TX-1000, so the PSU is one of the most efficient available today (Titanium is a higher 80 Plus certification than Platinum).

Madmatt:
I like old server hardware, and I have had it before. But for my £$€ I go with consumer products with newer tech; you need to spend a LOT of money on brand-new server gear to get low power draw and good CPU performance.
My budget allows for 3-4 18TB drives, an i7 CPU (an i9 is overkill), a motherboard and 64-128GB of RAM. If I go with DDR5 I'm not going for XMP, so I don't need the most expensive modules. And I want to know the difference in DDR4 vs DDR5 power draw. This is a server that will run 24/7 for 5-10 years; every watt counts.

1 Like

Yeah, a 3700X + GPU will not be your best idle buddy, like, ever, especially if you have a ton of cooling installed. That's like expecting a Ferrari to have great fuel economy. It's built to be bloody fast, not economical.

Anecdotally, a 5600G system with 2 sticks of RAM, two fans, 2 SSDs and 1 HDD draws around 30W at idle from the wall socket.

Ok, first off, are you certain AMD sucks that bad at x264? Things happen fast in this space and what was true even a generation ago may not be today. A generation ago… Intel was still stuck in 14 nm land, for instance. A generation ago AMD had no chance in hell of beating Nvidia in performance. And so on.

Second, even if AMD is bad for x264 / x265, do you really need those specific codecs? Might be more interesting to use something more efficient, you know, like Ogg Vorbis used to be better than mp3s 15 years ago.

Third, even if you need those codecs, would it kill ya to encode on the desktop instead of at the server level?

Right now it sounds more like you’re justifying an Intel+Nvidia build than seriously considering an AMD build. Don’t be a fanboi! At the opposite end of the spectrum, if you really need stuff only Nvidia provides then by all means do that. Just make sure you’re operating on current info. At the end of the day you do you.

This is not what is meant here. A 1000W PSU is made to be efficient somewhere around 300W to 800W or so. If you run a server operating in the 100-300W range, you want something closer to a 400W supply to sit on the efficient part of the curve.

See this for more info: 80 PLUS Platinum efficiency; What does it mean, and what's the benefit to me? - Power Supply Units - Corsair Community

Big iron is almost always power hungry even if it is efficient. You have all that power from a nearby nuclear plant for cheap, so why not use it? Don't confuse efficiency with low power.

Furthermore, these days big iron is pretty much exclusively supercomputer stuff. The days of every company requiring a server rack in the basement are long gone, and while some niche places still require on-site servers, most don't. A Raspberry Pi or two can handle most SOHO server needs just fine these days, with the sole exception of on-site storage.

It is time to stop the big-iron server nostalgia and realize that servers these days are often a small, low-powered box in the corner of an office. We have come a long way from the minicomputers of the sixties… :slight_smile:

DDR5 runs at a lower voltage but higher clocks, so the power draw is roughly ±0, but DDR4 is half the price. The 12700 has slightly higher idle draw (5-10%) than a 5600G and is massive overkill for what you want to do; I would aim for the 12400 instead, if you must have Intel.

Furthermore, in 10 years we will probably see 64 cores sold as i3 barebones. Heck, most servers will probably be sold as Intel NUCs or similar by then. So whatever you get now, you will need to upgrade it in three years regardless.

2 Likes

Maybe spend the money on solar instead?

Right now I suspect you'll be spending a lot to gain little, as a large part of your energy expense will be drives and cooling, which won't really change between platforms.

I'd personally go for a Ryzen for this (maybe an APU, and skip the GPU); Z690 isn't exactly power efficient either.

DDR4 is cheaper, Ryzen boards are cheaper, the processors are cheaper, which will offset some of the electricity expense, and they will probably use less power anyway.

Batch transcodes. That’s what servers are for. Running batches. Doesn’t matter so much how long they take.
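
If you go the batch route, here is a minimal sketch of the idea, assuming ffmpeg with libx265 is installed; the paths and CRF value are placeholders, and you could swap in a hardware encoder (e.g. hevc_qsv) once the iGPU path works for you:

```python
# Sketch: batch-transcode a folder to x265 on the server, e.g. overnight.
# Assumes ffmpeg with libx265 is installed; paths and quality settings are placeholders.
import subprocess
from pathlib import Path

SRC = Path("/mnt/user/archive/originals")   # hypothetical input share
DST = Path("/mnt/user/archive/x265")        # hypothetical output share
DST.mkdir(parents=True, exist_ok=True)

for video in sorted(SRC.glob("*.mkv")):
    out = DST / video.name
    if out.exists():
        continue                            # resume-friendly: skip already converted files
    subprocess.run([
        "ffmpeg", "-i", str(video),
        "-c:v", "libx265", "-crf", "26", "-preset", "slow",
        "-c:a", "copy",                     # keep the audio track as-is
        str(out),
    ], check=True)
```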

2 Likes

Search for "x264 or x265 ffmpeg GPU AMD/Intel/Nvidia quality" and then search for "Plex AMD GPU transcoding". Then you'll understand.
I have been encoding on my gaming PC for 2 years, but I want to do it at the server level so I can shut down my PC and not have both on. That's a waste of energy.
I ALWAYS pick the best tool for the job, ALWAYS. I've never been a fanboi in my life, whether it's motorcycles, PCs, phones, cars, etc. I always change my opinion and never settle.
I have had 4 AMD computers in my life and I'm sitting and writing on one right now. But today Intel 12th gen is ahead in my view, and I think their CPU is a perfect fit for me this time. A CPU with an iGPU and efficiency cores to handle all the small tasks on the server is perfect; I don't need an extra GPU, and it draws a lot less than the X299/7820X. And ffmpeg GPU transcoding works well with Intel (just not 12th gen on Linux yet).

Yeah, I know. I bought it when the server had two 1080 Tis last mining season; back then it drew 600+ watts all the time. But swapping a $400 1000W Titanium PSU for a ~600W one (Titanium is better than Platinum and tested for greater than 90% efficiency at 10% load) is ludicrous. For maybe 1-3 watts of savings. Nahh.

I have an order in with E.ON for €20,000 of solar this summer, so that is sorted.

AMD doesn't work with Plex, and many of its other encoders are bad. Search for it yourself. I don't want an Nvidia GPU in Linux… the drivers are a f*cking mess. So Intel is my remaining option. Don't be a fucking AMD fanboi, you too. I would have bought AMD if Plex worked with it; the last PC I bought was an AMD. I'm getting tired of you fanbois.

That's not the picture I got from researching Z690.
Everybody talks about max wattage, not idle wattage.

I'm running a Ryzen 3400G with Plex myself right now.

It transcodes just fine using far less than one core?