Hi, I’m currently focusing on mini PCs to build a cluster of Proxmox servers. It must be power efficient, because power is expensive here in Europe. I have some experience running Proxmox on a Minisforum UM773 Lite, and Proxmox runs nicely on it. I’m not sure which route to go when building a cluster. I saw the video from Wendell where he talks about connecting mini PCs via Thunderbolt and USB4. Can you guide me in the right direction? What is the wise thing to do?
For a high-speed interconnect for a Ceph SSD cluster, Thunderbolt is a cheap (and very janky) solution. But for lower speeds, just get some 2.5 Gbps USB NICs and either a switch, or do a ring with 3 nodes, instead of Thunderbolt. Even a single USB 3.0 5 Gbps port should suffice for connecting the 2x 2.5 Gbps NICs.
It also depends on the cost: for the price of 2x USB NICs, you might be better off looking for USFFs that already have these ports built in. Wendell was using Thunderbolt because his PCs already had two TB ports each.
Sounds like a rather fragile setup in many ways, and you might want to think twice about going that route. Even if you disregard the rather questionable hardware design and aftermarket support (BIOS updates etc.), you have no storage redundancy at all unless you pair NVMe with SATA, which will probably be quite lackluster in terms of performance. There’s no support for ECC memory, which you might want to consider if availability is a concern. There isn’t necessarily a huge gap in power efficiency between desktop and laptop CPUs: depending on the workload, one efficient desktop CPU box can more or less replace two mini PCs, or at least one and a half (given that the mini PCs can run at full blast continuously and come with a 120W PSU), and you get proper mirroring, ECC support etc. without much of a difference in the end.
Be careful with USB Ethernet adapters, as they tend to be unreliable in general and have design issues such as thermal ones (i.e. overheating under load).
https://www.notebookcheck.com/R7-7735HS-vs-R9-7900_14952_15014.247552.0.html
I 100% agree.
But: if you want a cluster on a limited budget and performance is of secondary concern, RPi or mini PC clusters are an obvious choice despite their drawbacks.
A single desktop PC pulling as much weight as 3x mini PCs? Totally possible, and likely cheaper. But the choice of a cluster here isn’t driven by performance necessity, but by HA homelabbing and experimenting (I think).
The lack of ECC isn’t that big a deal, because a node crashing or writing bad data isn’t that critical when you can afford to lose entire nodes and the cluster repairs itself later. It wouldn’t stop me from using ECC regardless.
Yeah, I wouldn’t trust USB NICs for 24/7 server use, and they get expensive really quickly above 2.5G.
If you want Ceph as your cluster storage, you need enterprise drives and plenty of CPU cores if you want performance, because each node is a hyperconverged server now, sharing its CPU with everything else.
I remember at least one how-to guide from Wendell and one (fairly recent) thread on building a cluster based on TB connections. Check the forum search for those.
Thank you all for the replies and input.
This is indeed for a homelab, but I do run some crypto related nodes.
My main concern is the power bill. I pay almost 40 cents per kilowatt-hour, so power efficiency is the most crucial thing at the moment. I have a Xeon E5-2699v3 that I used for Proxmox, but that server draws way too much power now. That is why I’m turning to mini PCs.
I could indeed connect every server with its own 2.5Gb USB NIC.
By the way, I’m also looking into making my storage server more power efficient. I’m running unRAID at the moment on an old E3-1270v5, but I want to move to something else. I had a Synology, but sold it after the HDD debacle…
I pay the same
Consider the following calculation:
3x Ryzen 7 7735HS @ 35W TDP
vs.
1x Ryzen 9 7900X/7950X capped at 65W (certainly one of my favorites in terms of perf/watt)
The single system wins on power efficiency. And you don’t need a switch and x amount of 1-2.5G or TB connections that all use power… parts you don’t duplicate don’t draw power: 3x power bricks vs. 1x high-efficiency PSU, etc.
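To put rough numbers on that comparison, here is a back-of-the-envelope sketch, assuming both setups run at full TDP around the clock (real average draw will be lower) and the ~40 cents/kWh rate mentioned in this thread:

```python
# Rough sketch: monthly energy cost of 3x 35 W mini PCs vs. one 65 W desktop,
# assuming constant TDP-level draw 24/7 over a 30-day month (a worst case;
# TDP is only a crude proxy for real power draw).

PRICE_EUR_PER_KWH = 0.40   # electricity price stated earlier in the thread
HOURS_PER_MONTH = 24 * 30

def monthly_cost(watts: float) -> float:
    """Monthly energy cost in EUR for a constant draw in watts."""
    kwh = watts * HOURS_PER_MONTH / 1000
    return kwh * PRICE_EUR_PER_KWH

cluster = monthly_cost(3 * 35)   # 3x Ryzen 7 7735HS at 35 W each
single = monthly_cost(65)        # one desktop chip capped at 65 W

print(f"3x mini PC cluster: ~EUR {cluster:.2f}/month")  # ~EUR 30.24
print(f"single desktop box: ~EUR {single:.2f}/month")   # ~EUR 18.72
```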
A plain Ryzen 9 7900 (the non-X variant) will perform about as fast as both your boxes combined (rough estimate), is officially a 65W TDP part, and will likely even be faster given the newer architecture and instruction sets, depending on the workload. The 7950X3D can also be configured in that mode, looking at https://www.pcgameshardware.de/Ryzen-9-7950X3D-CPU-279466/Tests/Tuning-Benchmark-Review-Release-7800X3D-1414209/, although the recommendation seems to be 105W if you plan to load the CPU heavily.
Power is crazy expensive…
The Ryzen 9 7900 has a TDP of 65 watts. Funny, I also checked this CPU some time ago. I could also add an SFP+ card for 10Gb connectivity. I already have a 10Gb UniFi aggregation switch, and the SFP+ card.
What do you advise for a motherboard? I’m not sure if I want to go the server board route… I looked into both options when I was checking out the Ryzen 7900.
SFP+ certainly uses less power than 10G RJ45.
I like the ProArt B650 from ASUS. Fairly cheap, but you can split the x16 slot into x8/x8 and use two cards. But pick whatever you need in terms of SATA or M.2 ports and PCIe slot layout.
I prefer cheap basic boards and only pay more if it has a feature I really want.
Something from Asus with an 8-layer PCB, depending on your requirements; that also leaves you with the option of going with ECC memory.
The Asus ROG STRIX B650E-E appears to be a pretty competent “budget” board with an Intel NIC and two x8 PCIe slots if you want to expand at some point. There are cheaper variants otherwise, but those will also drop you down to boards without the x8/x8 PCIe layout and with Realtek NICs, which is why I’d look at this one over the ProArt B650. Be aware that there will be some PCIe lane sharing going on with B650 boards.
Otherwise, the Asus ProArt X670E-CREATOR WIFI has been a popular board among users here, but it’s at the upper end of pricing, though with more PCIe lanes etc.
Yes true!
I will have a look. Thank you for your suggestion. How do you split that slot?
It’s a simple BIOS option. Or the board is wired so that as soon as you populate the second slot, both drop to x8 mode. Check the board manual.
I will check that board too. Thank you.
An Intel NIC is indeed nice, and I think preferable.
Ah, understood!
Indeed, it’s a bit different depending on board.
@MvL
The ROG Strix B650E-E Gaming WiFi is usually a bit more expensive (not by much), but if you can catch it on sale it can come out cheaper.
Also pay a bit of attention to PCIe lane sharing, depending on your requirements.
Both nice boards! The ROG Strix is indeed a bit more expensive, but has an Intel NIC, four M.2 slots, and WiFi. The cheaper one is interesting too!
So, I’ve said it for a while… my new favorite low-power setup is an Asustor Flashstor 6-bay all-NVMe NAS as a foundational server, plus a separate Ryzen 9 7900 box.
NAS: Asustor Flashstor 6 bay full SSD
First, the NAS. The Asustor Flashstor 6-bay costs ~€500 including VAT, and 4TB SSDs are ~€200 each. This means a fully decked-out Flashstor with 20TB of redundant storage costs ~€1.7k. This beast draws 13W idle and less than 25W when fully hammering those SSDs. That equates to a worst case of ~18 kWh a month; a more realistic estimate would be somewhere between 10-14 kWh.
The only downside is the lack of ECC, but this is not critical in a home setting; we are talking maybe three or four bitflips a year, and the chance of one ever impacting data you actually care about is low. Most of the time it will mean a millisecond lost in an audio file, a green pixel instead of a red one in a photograph, or a corrupt frame in a video. Annoying, but easy to fix.
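For the curious, the worst-case figure above is just constant draw times hours in a month; a quick sketch of the arithmetic, assuming 24/7 uptime over a 30-day month:

```python
# Sanity check of the Flashstor power figures quoted above
# (13 W idle, <25 W under full load), assuming 24/7 uptime.

def kwh_per_month(watts: float, hours: float = 24 * 30) -> float:
    """Energy in kWh for a constant draw over a 30-day month."""
    return watts * hours / 1000

print(kwh_per_month(25))  # 18.0 kWh -- the worst case quoted above
print(kwh_per_month(13))  # 9.36 kWh -- the pure-idle floor
# A realistic mixed load therefore lands in the quoted 10-14 kWh band.
```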
Server: InWin Chopin Ryzen 9 7900 with ECC support
I used the German PCPartPicker here to get prices in euros, but apart from the CPU, pretty much everything in this build is replaceable. I went with an InWin Chopin build here, but things like the case, motherboard et cetera can and should be swapped out according to your needs. The RAM as specified is not ECC, but the system does support (unbuffered) ECC RAM.
Type | Item | Price |
---|---|---|
CPU | AMD Ryzen 9 7900 | €422.29 |
CPU Cooler | Noctua NH-L9A-AM5 CHROMAX.BLACK | €59.90 |
Motherboard | Asus ROG STRIX B650E-I GAMING WIFI | €281.81 |
Memory | Corsair Vengeance 2x32 GB DDR5-6400 CL32 | €224.50 |
Storage | TEAMGROUP MP34 1 TB M.2-2280 PCIe 3.0 | €55.28 |
Storage | TEAMGROUP MP34 1 TB M.2-2280 PCIe 3.0 | €55.28 |
Case | In Win Chopin MAX w/200W PSU | €139.00 |
Total | | €1238.06 |
This server will draw a maximum of 200W, which equates to a worst case of ~145 kWh per month. A more realistic number would be around 60 ± 10 kWh a month.
Summary
So with this setup you would have this (PDM = Power Draw / Month):
Unit | Price | PDM (worst) | PDM (actual) |
---|---|---|---|
NAS | ~€1700 | ~18 kWh | ~12 kWh |
General Purpose Server | ~€1200 | ~140 kWh | ~60 kWh |
Total | ~€2900 | ~160 kWh | ~72 kWh |
Translated into money, that is somewhere between ~€30 and ~€65 a month to run these two systems 24/7, but the 7900 machine does not have to be powered on all the time. If you halve the time the 7900 is powered on, you also make a big impact on your power bill.
Every watt you save, by the way, equates to a saving of ~29 cents a month at your current electricity price.
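If you want to redo this math at your own rate, here is the same calculation as a small sketch, assuming the €0.40/kWh mentioned earlier in the thread:

```python
# Monthly cost of the summary table above at EUR 0.40/kWh.
PRICE = 0.40  # EUR per kWh

for name, kwh_worst, kwh_actual in [("NAS", 18, 12), ("Server", 140, 60)]:
    print(f"{name}: EUR {kwh_worst * PRICE:.2f} worst case, "
          f"EUR {kwh_actual * PRICE:.2f} realistic, per month")

# Each watt shaved off 24/7 draw saves 0.72 kWh a month:
print(f"per watt saved: EUR {24 * 30 / 1000 * PRICE:.2f}/month")  # ~EUR 0.29
```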
I see you constantly recommending the Flashstor even when it is not asked for. Can you please start listening to what people want? Sure, you like it, but it is still somewhat niche relative to what people want, because it is not flexible at all.
The 7900 would also use nowhere near 200W at idle, so that is a bad comparison as well.
As for power-efficient stuff in the Netherlands, Intel is still king because of its low idle power use; for cheap clusters you can always go for N100 boards.
The Tweakers (Dutch) topic on power-efficient servers would help you a lot. The basic 4W PC is covered in “blogartikelen over zuinige computers” (blog articles about power-efficient computers) - Complete systemen en laptops - GoT.
10G Ethernet is not efficient at all; the NIC alone would use more power at idle (4-10W) than the PC above. I have a server where the 2.5Gb USB3 NIC uses about 2W.
TDP is not the same as power draw; most CPUs can turn things off to save power, and idle consumption depends a lot on the motherboard. I have a 100W Intel 13400 PC that uses 4W at idle in Debian.
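To illustrate why idle draw matters more than TDP for a 24/7 box, a quick sketch using the figures from this thread (the 4W idle PC, and a 10G copper NIC’s 4-10W idle with 7W taken as an illustrative midpoint), at the ~€0.40/kWh rate mentioned above:

```python
# Yearly cost of always-on idle watts at EUR 0.40/kWh (rate from this thread).
PRICE = 0.40  # EUR per kWh

for label, watts in [("4 W idle PC", 4), ("10G copper NIC, ~7 W idle", 7)]:
    kwh_year = watts * 24 * 365 / 1000
    print(f"{label}: {kwh_year:.1f} kWh/year = EUR {kwh_year * PRICE:.2f}")
# 4 W -> 35.0 kWh/year, ~EUR 14
# 7 W -> 61.3 kWh/year, ~EUR 25
```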
I’ve seen people run N100 boards as a cluster for high availability; those are also very efficient.
Here is a list of computers with idle power usage, put together by a German forum.
@MvL Can you tell us more about what applications you want to run and how much availability you need? How much storage? That will tell us more about what kind of setup would be best and most efficient.
The OP asked for a setup that uses as little power as possible. Please let me know of another NAS configuration that uses 15W from the wall when populated with six drives and I will listen. The Flashstor isn’t perfect, but the whole system draws about as much power during stress testing as a single 3.5" enterprise HDD.
Does this make the Flashstor the end-all be-all? No, but it currently sits in a particularly neat sweet spot, and decking it out with a full 24TB costs maybe a €200-€300 premium over a regular low-capacity NAS plus drives, so it’s not outside the price range by any means. That said, there are 50W 4-bay NAS boxes that can be had for €1.3k including 4x12TB HDDs, so it is not perfect.
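For what it’s worth, you can put a break-even time on that trade-off. A rough sketch using only the numbers quoted above (€1.7k and ~12 kWh/month realistic for the Flashstor, €1.3k and 50W for the HDD box, €0.40/kWh):

```python
# Break-even of the all-SSD premium against the cheaper 50 W HDD NAS,
# using the prices and power figures quoted in this thread.
PRICE = 0.40                                   # EUR per kWh
ssd_cost, ssd_kwh = 1700, 12                   # Flashstor, realistic monthly kWh
hdd_cost, hdd_kwh = 1300, 50 * 24 * 30 / 1000  # 50 W box -> 36 kWh/month

saving = (hdd_kwh - ssd_kwh) * PRICE           # ~EUR 9.60/month
months = (ssd_cost - hdd_cost) / saving
print(f"SSD premium pays for itself after ~{months:.0f} months")  # ~42
```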
There are reasons not to go for it: limited RAM, no ECC, no 10 GbE, low storage capacity, abysmal CPU performance… All valid criticisms if you need those things. But as a dedicated NAS, well…
This is the reason I keep recommending it to people who want a low-power server with modest needs in a SOHO setting. If you require something beefier, then by all means go for a used Xeon build, or take my suggested 7900 build, swap the motherboard for one of those fancy IPMI-supporting AM5 server boards for $500, swap the RAM for ECC, and put it all in a case with 8 HDD slots. It is, after all, your time, money and use case.
He asked for a cluster; that does not fit with a Flashstor. Unless you ask for extra info, don’t just suggest what you like based on really limited information.
There are tons of things that use 15W; just look at my link. Those are all cheap PCs that manage it while still offering the flexibility of buying your own hard drives and running different software, instead of being stuck with NVMe drives whose speed you can’t even really max out.
SSDs will just drop into a low-power state, which makes it really easy to run a low-power system.
The price premium is important and something you keep forgetting; not everyone wants to go all-flash. I really want to implore you to listen more instead of advising the same thing.
The thing is a good choice if someone wants some compact SSD storage and doesn’t want to fool around with it, or when you want a lot of SSD storage in a single system. But for a lot of homelab situations it is just an expensive SSD box.
I don’t know where your conception of low power for just a 7900 comes from; did you ever test an AMD system’s power draw? They use a lot of power compared to low-power Intel servers, which would be the best midpoint for most people who want a low-power server that has storage and can also run applications.