Low power server

For transcoding with QuickAssist from Intel, you don’t need to buy in right away. If you determine you need it, it’s available on an x4 PCIe card for under $300.

search for:
intel quickassist 8970

Another alternative is the N100/N200/N300 line of Intel CPUs. Extremely power efficient.

I am experimenting right now with an N100-based NUC-style device (idles at ~7 W) as the basis for a ~50 TB NAS that should idle around 20-30 W.

  • The embedded GPU supports QSV for transcoding (rough sketch below).
  • Small footprint (no rack needed)
  • Quiet

Just throwing this out there as another (arguably extreme) option.
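
For anyone curious, the QSV path looks roughly like this once the hardware is in place. This is a minimal sketch, assuming an ffmpeg build with QSV support and placeholder file names; it is not a tuned pipeline:

```python
# Minimal sketch: H.264 -> HEVC transcode on the iGPU via Intel QSV.
# Assumes ffmpeg was built with QSV support (libvpl/libmfx) and that the
# /dev/dri render node is accessible; input/output names are placeholders.
import subprocess

def qsv_transcode(src: str, dst: str) -> None:
    cmd = [
        "ffmpeg",
        "-hwaccel", "qsv",        # decode on the iGPU
        "-c:v", "h264_qsv",       # QSV H.264 decoder
        "-i", src,
        "-c:v", "hevc_qsv",       # QSV HEVC encoder
        "-global_quality", "25",  # rough quality target (ICQ mode)
        "-c:a", "copy",           # pass audio through untouched
        dst,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    qsv_transcode("input.mp4", "output.mkv")
```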


I don’t see why he’d need to make that into another box. As for HDDs, just go with anything that’s NAS/enterprise rated; Purple etc. is just stupid marketing… SSDs will do fine as long as you have reasonable compression.

Thanks for the clarifications. One last question about the rack and rackmount: I see all kinds of prices. What would you advise me to get (depth for the rack and for the rackmount server, how much clearance I should leave between them, brand, website; I live in the EU)?

I will go with a 9U rack and a 4U chassis for the server.

Rack life is a rabbit hole. I just went through the transformation from tower to (almost) full rack and… it’s as expensive as the audiophile hobby. It’s a total commitment; I’d say go for it if your space is limited, you plan to have lots of computers, and you want a nice hobby to keep you busy.

You also need the wallet to match, because you are sort of in the enterprise space. If you live with other people (spouse, girlfriend, parents, etc.), you will have to consider their opinion on living with enterprise-grade equipment, and they will have a say in the noise levels you produce.

I’d also go for 25U+ because it’s roomy enough and reasonably future proof. Rack equipment is pretty much a single purchase for life.

In retrospect, I should have opted for a mini rack instead of a standard rack. Playing with Raspberry Pis and SBCs/NUCs while keeping it small and low powered is just elegant. What’s funny is, I’m nowhere near the IT and general computing field at all…

As someone who has seen WD Blues (which aren’t enterprise HDDs, but regardless) used in surveillance DVRs (not my setup), I’ve always bought surveillance HDDs for myself and recommended them to others. The reasoning is simple: the enterprise drives like WD Reds and Seagate IronWolves are aimed more at performance, with 7200 RPM and large caches.

Look at the spec sheets for Purples and SkyHawks: they can be found in 5400 RPM variants and they’re made for constant writes. Despite the latter, I doubt there’s much difference in longevity between Reds and Purples, or IronWolves and SkyHawks. Their power consumption is slightly lower, so if all you need is mostly sequential writes, get the NVR-grade HDD over the enterprise one.

Except for the power draw, which could be a fair point if you don’t need high capacity to store at least a few weeks of footage (say you went on vacation and came back to find your house had been burglarized right after you left), why would you buy an SSD for surveillance footage? Typically the price doesn’t make much sense even with the power savings of SSDs.
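
For a sense of scale on “a few weeks of footage”, here’s the back-of-the-envelope math; the camera count and the 4 Mbit/s per-camera bitrate are made-up assumptions for illustration, not anyone’s actual setup:

```python
# Back-of-the-envelope surveillance storage estimate.
# Assumed figures (purely illustrative): 4 cameras recording 24/7 at 4 Mbit/s each.
CAMERAS = 4
BITRATE_MBIT_S = 4   # per camera, continuous recording
DAYS = 21            # "a few weeks"

bytes_per_day_per_cam = BITRATE_MBIT_S * 1_000_000 / 8 * 86_400   # ~43 GB/day
total_tb = CAMERAS * bytes_per_day_per_cam * DAYS / 1e12

print(f"~{total_tb:.1f} TB for {CAMERAS} cameras over {DAYS} days")
# Comes out to roughly 3.6 TB, i.e. a single cheap NVR-grade HDD covers it,
# which is why an SSD rarely makes financial sense here.
```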

Unless you go with a LackRack design.

Certainly not wife-approved, but it can be a compromise instead of buying a really expensive 9U or 12U rack.

IMO, you shouldn’t plan for future-proof racks. The reasoning is that racks hold their second-hand value really well (unless you scratch or bend them), and something like a 24U rack is insanely large. Moving that thing is no easy feat.

If you live in your own home and you’re not renting, maybe it makes sense. But even then, I’d argue to go smaller first and only buy something bigger if the need arises; otherwise, with all the empty slots, you’ll just be planning how to fill them, which is going to be a money sinkhole. Don’t do it, lmao.

Empty racks create a void in the soul, which can only be filled by filling the rack - Internet Confucius or something.


WD Reds come in both 5400 RPM and 7200 RPM variants (that’s also true for the Purple series, where 5400 is actually 5640), not that I would recommend either in general.

If we compare, let’s say, the Western Digital WD84PURZ to the Toshiba MG08ADA800E, there’s very little that makes the WD a better buy, especially since it’s only rated for 1 million hours MTBF compared to 2 million for the Toshiba, and the Toshiba drive is also cheaper. The Toshiba is rated 3 watts higher due to being 7200 RPM.

The Purple Pro has comparable specs to the Toshiba unit (slightly better MTBF), but it’s also noticeably more expensive and uses more power. Same story with Seagate Exos units… (very similar to Toshiba’s MG series in terms of claims and specs). Toshiba also has a “surveillance” series, but anything larger than 6 TB seems to be 7200 RPM…

One thing I have done in the past to make my drives run cooler is rubber-band a 2 in × 2 in × 1/2 in heatsink to the top of the drive, where the spindle is. Dry, with no thermal interface.

If the drive is sitting on a table without a dedicated fan, this drops its temperature by about 20°F, taking it from too hot to no problem. Using a thermal pad would probably be even better.

Of course intentional airflow is better, but in a lab setting, that is a useful tip.

The room with my computers now has a ceiling fan which is very convenient.

I do the same thing with GPUs, on the backplate, to help further cool the memory and core.


I spent many hours, days, weeks, and months researching low-power servers on and off. I looked into things like the Raspberry Pi, the Udoo Bolt, various motherboards with soldered low-power CPUs, you name it. Your needs don’t sound too demanding, but if you want expandability down the line, you need to just go for server-grade equipment. One storage drive will fill up really quickly, and I hope you have backups planned, because drives die and one drive per array is asking for trouble.

In the end I realised that servers are the way they are and you just need to live with it. I would suggest just getting a proper server setup, which is what I’m going through right now. If you get a low-powered server, chances are you’ll have to throw it all out and start again, because you’ll outgrow it pretty quickly and low-end machines have no room for growth.

I’m not sure if you’re looking for one server or two. It sounds like you want two: a low-powered storage and services server, plus a basic gaming machine, since you’re not attaching a 4060/4070 to a Raspberry Pi, Udoo Bolt, or low-powered integrated machine.

Good morning!
To clarify, I own my house and I have an outbuilding where I work fully remote. The server will go in there, in a room separate from the “office”. Inside I have a 3D printer (an old Creality 10S), so if you tell me a single server can make more noise than a 3D printer, I will think twice about it.
As for the Raspberry Pi and the like, I have several and do projects on them, but you hit their limits fast (and have you ever tried starting a VM on one?).

For the rack, I will not go for a 25U+ x) I don’t need that much empty space!

You may also want to look at the workstation version of that Intel processor.

You can read a good overview of the options here:

The motherboards start at around $650 and the CPUs under $400, though the $700 CPU is a better deal. For $1,200 you get nearly double the PCIe 5.0 lanes and DDR5 channels.

Be aware that it doesn’t have many of the accelerators that are on the Scalable CPUs, though the low end of those CPUs may still be within reach.

The chart of which CPUs have which accelerators is here:


I saw one of the notes on Stable Diffusion on the Intel CPU. With the AI accelerator, they were able to drop the time per iteration from 27 seconds to 7 seconds per core, so for 60 cores ($17,000) they could do nearly 10 iterations per second.

The 4070 ($600) does 11 iterations per second without tuning, and 34 iterations per second with tuning. The 4090 ($1500) is more than twice as fast.
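
To make the comparison explicit, here’s the arithmetic behind those numbers; it assumes the per-core figures scale linearly across all 60 cores, which is optimistic in the CPU’s favour:

```python
# Back-of-the-envelope: aggregate Stable Diffusion throughput and cost per it/s.
# Assumes perfect scaling across cores and the prices quoted above.
cpu_cores = 60
sec_per_iter_per_core = 7       # with the AI accelerator enabled
cpu_price = 17_000

cpu_its = cpu_cores / sec_per_iter_per_core   # ~8.6 it/s ("nearly 10")

gpus = {"RTX 4070 (untuned)": (11, 600), "RTX 4070 (tuned)": (34, 600)}

print(f"CPU: {cpu_its:.1f} it/s -> ${cpu_price / cpu_its:,.0f} per it/s")
for name, (its, price) in gpus.items():
    print(f"{name}: {its} it/s -> ${price / its:,.0f} per it/s")
# The CPU lands around $2,000 per it/s versus roughly $18-55 per it/s for the
# 4070, which is why the GPU wins this particular comparison by a wide margin.
```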

I’ve been very interested in Epyc Siena (the Epyc 8004 series) for this ever since I heard about the platform.
I’m looking to build a server that is going to be primarily a NAS (4U, 10+ drives, eventually up to 20, ZFS, probably TrueNAS Scale), but I also want to do some other stuff on it. Probably no VMs. But some self-hosting of various services, some Docker containers maybe. Maybe some live transcoding but only for myself. I’m new to this and I feel like I might add more things in the future as I discover them (so having performance and I/O headroom would be great).

I’ve been agonizing over what CPU to get.
An older Epyc sounds good in theory because it’s got all the lanes (I’ll need multiple HBAs, maybe a network card, multiple SSDs probably…).
I also really want ECC, and I want to be sure it actually works. I’ve heard stories of it not working properly on Ryzen, sometimes even while reporting that it works (when it actually doesn’t). I don’t really want to deal with that. I’d also love to log ECC errors if any happen, not just have them silently corrected, mostly out of curiosity. Not sure if that’s possible on Ryzen.
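
On the logging point: on Linux, corrected and uncorrected ECC events are exposed by the EDAC subsystem (tools like rasdaemon log them for you), and you can also poll the raw counters yourself. A minimal sketch, assuming a Linux host with the EDAC driver for the memory controller loaded; the sysfs layout can vary by platform:

```python
# Minimal sketch: read corrected/uncorrected ECC error counters from Linux EDAC.
# Assumes an EDAC driver for the memory controller is loaded; on platforms
# without one, /sys/devices/system/edac/mc/ will simply be empty or missing.
from pathlib import Path

def read_edac_counters():
    for mc in sorted(Path("/sys/devices/system/edac/mc").glob("mc*")):
        ce = int((mc / "ce_count").read_text())   # corrected errors
        ue = int((mc / "ue_count").read_text())   # uncorrected errors
        yield mc.name, ce, ue

if __name__ == "__main__":
    controllers = list(read_edac_counters())
    if not controllers:
        print("No EDAC memory controllers found (driver not loaded?)")
    for name, ce, ue in controllers:
        print(f"{name}: corrected={ce} uncorrected={ue}")
```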

Unfortunately I also live in a country where electricity is very expensive, so apart from the drives I’d love to keep the idle power consumption down. I’d expect the system to be idling or close to idling a lot of the time.
So regular Epyc seems less than ideal, because it idles pretty high, mostly because of the I/O die from my understanding, and nothing can really be done about that.
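
For a feel of what’s at stake, a quick idle-power cost calculation; the €0.40/kWh rate is an assumed placeholder for “expensive electricity”, and the wattages are just illustrative idle figures:

```python
# Rough yearly cost of 24/7 idle power draw.
# The electricity price is an assumption; adjust to the actual tariff.
PRICE_PER_KWH_EUR = 0.40

def yearly_cost(idle_watts: float) -> float:
    kwh_per_year = idle_watts / 1000 * 24 * 365
    return kwh_per_year * PRICE_PER_KWH_EUR

for watts in (30, 60, 100):   # e.g. frugal NAS vs. a higher-idling platform
    print(f"{watts:>3} W idle -> ~{yearly_cost(watts):.0f} EUR/year")
# 30 W is about 105 EUR/year and 100 W about 350 EUR/year, so tens of watts
# of idle difference add up to real money over a few years of running 24/7.
```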

But Epyc Siena seems likely to have better idle power consumption, with its cut-down I/O die and a general tuning towards lower power. The lane count is still substantial. They’re also comparatively really cheap, and you get a modern platform.
They do have low base and boost clocks (~3.0 GHz max), so they’re not really suited for anything requiring single-core performance, I suppose. I wonder if one could still run e.g. a Minecraft server on these.

Is this kind of the perfect “in-between” platform, I wonder?
I wanted to compare with Xeon, but there are so many SKUs that it’s pretty confusing to me… They seem pretty expensive for what you actually get compared to AMD, but maybe it would be worth it if Sapphire Rapids has lower idle power consumption than (regular) Epyc.

Is anyone else interested in Epyc Siena for a similar use-case?

Siena is your in-between platform. It sits between consumer hardware and the full EPYC platform. Threadripper and Intel’s W-2400/W-3400 are workstation platforms that also sit between those two, but with more emphasis on CPU horsepower.

You can check out Intel W-2400, but you won’t get the power efficiency of Siena. An 8-core at 70 W TDP or a 16-core at 120 W is really, really low as far as server hardware goes.

EPYC Rome (the Zen 2 generation of EPYC) offers fairly cheap CPUs with modest TDP figures. It has been the power-efficient, low-cost option in recent years.

That’s a normal clock speed in server land. 64- and 96-core EPYCs have been 2, 2.4, or 2.8 GHz most of the time. Most server workloads scale with threads, so cores matter more than clocks, and high clocks use way more power. I wouldn’t want to game on them or run CPU-intensive single-threaded services on them, but for my home server and all the VMs, 3 GHz is fine.

If you want higher clock speeds, you have to get the CPUs with the “F” suffix. But I doubt you want to pay the price tag associated with them.

Threadripper… the new 7000 series: 92 lanes, quad-channel memory… an awesome package for a home server. But the smallest CPU is 24 cores at $1.5k.

Consumer hardware and platforms like Siena or W-2400 allow for cheap CPUs, which leaves more budget for the board and periphery. There are some consumer boards that make for good home servers, although all are severely limited by their 24 or 28 lanes. But you can do it if you plan and select hardware carefully, as many on this forum have proven.


I would be, except I bought an EPYC 9124 (Genoa) this spring. If Siena had been available this spring, I might have gotten that instead.

At the time, the SP5 motherboard was the same price as the AM5 motherboard that had enough features for me. The CPU cost $500 more than the equivalent AM5 part.

If I had bought the AM5 motherboard, I would have been stuck fairly early and would have had to remove functional hardware in order to add functional hardware. The EPYC lets me run what I think I need and quickly upgrade it if needed.

If I need to in the future, I can quickly add a few hundred gigs of RAM for a project.

One thing to remember about AMD sockets: they last for several CPU generations, often about 8 years of CPU upgrades. So even though the I/O die is a drawback this year, in 3 or 4 years it will get a die shrink and may halve the idle power draw. Personally, I turn mine off and use my laptop when I don’t need its features.

I currently have 48 PCIe lanes in use:

  • 16: RTX 4070 for compute
  • 16: older Nvidia card for display
  • 4: 118 GB Optane, Windows boot
  • 4: 118 GB Optane, Proxmox boot + TrueNAS boot + virtual memory for hosted VMs
  • 4: 6.4 TB Intel U.2 (various vendors have them for around $350, retail $7,800), ZFS, the TrueNAS data drive used for hosting additional VMs and an NFS share
  • 2: IPMI
  • 2: 10G networking

On top of that, a SATA rotational drive serves as the TrueNAS backup location, spun up daily or after boot via zfs send/receive (see the sketch below), and another SATA rotational drive is the Apple Time Machine target for daily backups.
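
The daily spin-up-and-replicate step can be scripted; this is a minimal sketch of an incremental zfs send/receive, where the pool, dataset, and snapshot names are made-up placeholders rather than my actual layout:

```python
# Minimal sketch: incremental ZFS replication to a backup drive.
# Pool/dataset/snapshot names are placeholders; needs root (or sudo) to run.
import datetime
import subprocess

SRC = "tank/data"      # hypothetical source dataset
DST = "backup/data"    # hypothetical dataset on the backup SATA drive

def replicate(prev_snap: str) -> str:
    new_snap = f"{SRC}@daily-{datetime.date.today().isoformat()}"
    subprocess.run(["zfs", "snapshot", new_snap], check=True)

    # zfs send -i <previous> <new> | zfs receive -F <destination>
    send = subprocess.Popen(
        ["zfs", "send", "-i", prev_snap, new_snap], stdout=subprocess.PIPE
    )
    subprocess.run(["zfs", "receive", "-F", DST], stdin=send.stdout, check=True)
    send.stdout.close()
    if send.wait() != 0:
        raise RuntimeError("zfs send failed")
    return new_snap

if __name__ == "__main__":
    # The previous snapshot name is a placeholder from an earlier run.
    replicate(f"{SRC}@daily-2024-01-01")
```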

TrueNAS gets no SLOG, L2ARC, metadata vdev, etc., as the data drive is faster than any mitigation I could dream up.

Look at my Genoa build thread for links to stuff.


Give me “honest” slots with open ends (and clearance) over “fake x16” slots any day!


Edit:
:bell: SHAME :bell: On :bell: You :bell: ASROCK! :bell:


I don’t understand why manufacturers don’t do what you suggest more often; surely a smaller open-ended slot would reduce the BOM cost compared to a full x16 mechanical slot?


But people don’t want x1 slots, they want x16 slots. So your proposal is bad for sales.


Is this actual comedy? Or are we really at this stage of justifying design decisions?

The comedy is in the pages: the mobo layout is on page 15, the detail on lane availability on page 35.

This is the future we are heading toward:
