New home lab: moving to mini PCs to build a Proxmox cluster

A Proxmox cluster that uses as little power as possible. Fanboyism doesn’t help, because you can’t (and shouldn’t) run Proxmox on that thing.

That’s the topic, and @nutral is right on this.

So let’s get back to our productive and efficient German (and Dutch) thread and get the best for @MvL and his homelab :slight_smile:

Being able to read both Dutch and German (Frisian) in an English-speaking forum certainly makes me feel erudite and knowledgeable.

I really like that low-cost N100 board, but I kinda feel you get a better deal with the Minisforum Ryzen. The N100 is really more embedded/RPi league than laptop/desktop Ryzen/Core.

It’s kind of a tradeoff: an N100 board as a high-availability node is nice because it’s cheap and you can run it off a wall adapter (12 or 19 V, I believe), but its power is limited. It kind of depends on the workload.

I’m running 22 services on Proxmox in 3 containers, but my 13400 has averaged about 0.4% CPU usage…


So a board that is about 5-10% more powerful than the Flashstor is adequate for Proxmox duty, but the Flashstor is not. Yeah, that makes sense :stuck_out_tongue:

Which is the reason I suggested the Flashstor here in the first place. But fair enough, I’m withdrawing it now.

Just ran some quick calculations for myself. I was checking PSUs and whether the expensive Platinum/Titanium ones are worth it or not.

Use case: 24/7 server, €0.40/kWh, 100 W (2.4 kWh/day) average power draw.

2.4 kWh × 365 days × €0.40 = €350 per year

So every 10 W translates to €35 on the power bill per year.

80% efficiency PSU (entry-level 80 Plus units or good power bricks):

100 W / 0.8 = 125 W at the wall

90% efficiency PSU (a higher-grade PSU at 50% load; lower at 10-20% load):

100 W / 0.9 = 111 W at the wall

That’s about €50 per year off the bill from the PSU alone.

Titanium PSU (94% at 50% load):

100 W / 0.94 = 106 W at the wall

Another €18 per year over 80 Plus Gold, with the added advantage of holding 90% efficiency even at 10% load.
And I think this is critical, and it’s why I will get Titanium even if it costs €100 more than a cheap PSU. We’re not running a 750 W PSU at 50% load; the 10-20% figure is usually where we are with home servers.

The PSU really matters if you look at the savings long-term, even in the 100 W bracket. That’s not even talking about 200-400 W servers, where all the savings get multiplied. And less waste heat = a silent PSU out of the box.

So is a €200 Titanium PSU worth it for a 100 W home server in Germany? Yes. It’s German efficiency out of necessity :wink:
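
If you want to rerun the math with your own tariff and load, here’s a quick Python sketch of the same calculation (the tariff, load, and efficiency figures are just my assumptions from above; plug in your own):

```python
# Sketch of the PSU efficiency math above.
# Assumptions (mine, adjust to taste): 24/7 operation,
# EUR 0.40/kWh, 100 W average load on the DC side.

PRICE_PER_KWH = 0.40        # EUR
LOAD_W = 100                # average draw of the components
HOURS_PER_YEAR = 24 * 365

def annual_cost(efficiency: float) -> float:
    """Yearly electricity cost in EUR for a PSU of the given efficiency."""
    wall_watts = LOAD_W / efficiency
    return wall_watts / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH

for label, eff in [("80 Plus (80%)", 0.80),
                   ("Gold-ish (90%)", 0.90),
                   ("Titanium (94%)", 0.94)]:
    print(f"{label}: {LOAD_W / eff:.0f} W at the wall, "
          f"EUR {annual_cost(eff):.0f}/year")
```

Running it reproduces the roughly €50/year gap between 80% and 90%, and the extra ~€18 from Gold to Titanium.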


Appreciate all the input! :slight_smile:

The main reason to run a Proxmox cluster is that I run nodes for all kinds of web3 projects. Right now I run 9 VMs, each with the following specs: 2 CPUs, 4 threads, 8 GB memory, 220 GB storage. This will double by the end of next year. Next to that I run a Docker VM with all kinds of containers. It is also my playground to learn and test stuff.
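
A quick back-of-the-envelope of what that adds up to (a sketch using the per-VM specs above, assuming all VMs stay identical):

```python
# Back-of-the-envelope sizing for the VM fleet described above.
# Per-VM specs are from the post (2 CPUs / 4 threads per VM).

VMS_NOW = 9
PER_VM = {"vcpus": 4, "ram_gb": 8, "disk_gb": 220}

def totals(vm_count: int) -> dict:
    """Aggregate resources for `vm_count` identical VMs."""
    return {k: v * vm_count for k, v in PER_VM.items()}

print("today:", totals(VMS_NOW))               # 36 vCPUs, 72 GB RAM, ~2 TB disk
print("after doubling:", totals(VMS_NOW * 2))  # 72 vCPUs, 144 GB RAM, ~4 TB disk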

If I can avoid downtime with the nodes, that would be nice. The less downtime I have, the better. That is the idea behind the cluster.

I also run a Plex server for my media. For that server I’d like to use Quick Sync for hardware transcoding. Maybe in a VM? I have no experience with Plex in a VM.

I want to separate storage from the Proxmox cluster.

When I started my rack, power usage was not an issue, but now with the rising prices it is. So power efficiency is the way to go.

So you want a Proxmox cluster then? How much horsepower will you need? If it’s only a little, the N100s could work.

For Plex, hardware transcoding is kind of a pain on Proxmox, so I would build a NAS that acts as a backup destination and run Plex in a Docker container (that’s what I’ve done: Nutral's HomeLab blog). That build uses about 30 W with a PicoPSU, 3× 16 TB hard drives and 2× 4 TB SSDs.

Where do you need the high availability? If it’s in Docker containers, then Kubernetes or Docker Swarm would be a better fit.

In Proxmox clusters you need a quorum, so at least 3 nodes; then 1 node can stop working and the cluster keeps running. Whenever you write to shared storage it needs to be written to all 3 nodes, so that will take some Ethernet bandwidth if you’re writing a lot.
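
To make the quorum rule concrete, here’s a tiny sketch of the majority math (illustration only; Proxmox’s corosync does this internally):

```python
# Majority-quorum arithmetic behind the "at least 3 nodes" rule.

def quorum(nodes: int) -> int:
    """Votes needed for a majority in a cluster of `nodes` members."""
    return nodes // 2 + 1

def tolerated_failures(nodes: int) -> int:
    """How many nodes can drop out while the rest still has quorum."""
    return nodes - quorum(nodes)

for n in (2, 3, 4, 5):
    print(f"{n} nodes: quorum={quorum(n)}, "
          f"survives {tolerated_failures(n)} failure(s)")
# 2 nodes survive 0 failures, which is why 3 is the practical minimum.
```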

It’s important to know how much CPU power is actually used; in a mostly idle system the considerations are quite different from constant CPU usage.

I also have to ask, what do you want to spend on it?

I will check out your blog. Cool!

I’ve tested 9 VMs on a Minisforum UM773 Lite and the CPU was hovering around 20-25%.

Not sure how much I want to spend, but this is probably a project spanning 1-2 years. It also depends on how expensive the hardware for one server is…

About Plex: I indeed also had the idea that, if Plex can’t run well on Proxmox, I could build an Intel-based NAS for Plex.

The downside with the N100/N200 is that performance is quite limited (not terrible, though), it only supports up to 16 GB of memory reliably, and it won’t be able to replace any of the current boxes.

Oftentimes you just need 3 machines to form a quorum holding a redundant data set that you can then test your app against. Being able to pull a cable to simulate a single link going down is quite useful.

Just because you learn how to deploy a 3-machine cluster does not mean that is what it will be deployed on when it goes into production. Maybe it will be 5, maybe it will be 30. Being able to see when it goes up, or down, or finishes replication and is ready to accept requests is a useful skill.

I need at least 64 GB per Proxmox server, but I think 128 GB is better.

Not completely sure whether I should choose Intel or AMD. From what I understand, AMD is more power-efficient under load, while Intel is more power-efficient at idle.

Are ITX motherboards more power-efficient than the micro-ATX or ATX variants?

Does anyone have a Proxmox cluster running on NUCs?

More is always better :slight_smile: If you are counting every single watt, fewer DIMMs are more power-efficient, and DDR5 at 1.1 V draws less than overclocked gaming memory or DDR4.

I’ve seen very impressive benchmarks where AMD and Intel are locked to low TDPs. I was checking the 13900K and 13900T and comparing them to AMD, but that should apply to the i5s too. You really get a lot of juice out of very little power. AMD is usually your perf/watt champion, but Intel with E-cores is damn good as well. Intel SKUs with a T suffix have lower base clocks and seem very compelling in the low-power department.

It’s more about the chipset and the components on the board than the form factor. I don’t think boards with the same chipset and the same cards/CPU/memory differ much from one another.
If you have IPMI and/or an onboard 10 Gbit Intel X550 controller (which I got with my ASRock Rack X570D4U-2L2T), the power floor is a lot higher than with a barebones cheap B650.

Fans don’t use that much power, but they add up… so if the rest of the system is very low-power, you don’t need a lot of fans either.

Intel will be a bit troublesome due to the P- and E-cores, and seems more troublesome than AMD when using 4 DIMMs, but I might be wrong on that one. The physical size of a motherboard isn’t much of a concern when it comes to efficiency (there will be a slight loss, but it’s negligible).


Thanks everybody :slight_smile:

I think I’m leaning slightly towards AMD because of the performance per watt.

I’ve only seen mini PCs with a max of 64 GB?

Interesting thread! I saw a video of Wendell discussing this with a Minisforum UM790. Very interesting option. Is the maximum cluster size 3 machines when using Thunderbolt? Or, if I’m correct, does AMD have something different from Thunderbolt?

M.2 drives are really interesting for fast shared storage.

I’ve seen videos of the Flashstor from Asustor. Something rackmount would be cool, but that’s mostly more expensive…

Let’s face it…MiniPCs are laptops without a display and battery.

When talking clusters and hyperconverged infrastructure, the approach is to scale out/horizontally… meaning more (small and cheap) nodes. You are paying the (huge) network latency tax anyway, so 3 or 10 nodes doesn’t make a difference.

I think 64 GB on really limited nodes is plenty. If you need more, you scale up/vertically with better hardware (a larger form factor) or scale out with more nodes.

I still don’t see the benefit in this case compared to a desktop box, though; as we mentioned before, it’s not much more than “just because”, and it means more issues in the long run. There’s a lot more complexity you have to deal with that seems completely unnecessary in this scenario.


Ah, wait… you already have a rack, my bad. While the Flashstor is an awesome piece of SOHO equipment, if you have already invested in rack infrastructure, then I would take a good look at the previously mentioned N100 boards and 2× ITX 1U servers. 2U is OK too, of course.

For rackmount SSD storage, I would take a look at the new EDSFF form factors E1 and E3:

It still seems to be a pretty young market, but perhaps something from Supermicro or HP could be good here? When going rack, it’s all about trading a slightly higher power bill for much denser ops. Here is what Supermicro is offering in this space:

As for a rackmounted cluster, well… how about dual-node 1U ITX systems? Naturally they do not support extra add-on cards very well, but do they need to? I was thinking of something like this; you can probably find a Chinese knock-off somewhere.

As for power efficiency, going rackmounted opens up some interesting options, like PicoPSUs fed from a single 12 V DC line hooked up to a rackmounted UPS. No idea if this exists on the market, really, but IMO this is by far the most efficient way to do power. This way you can measure the entire power draw at a single point in the wall, plus the UPS goodness is awesome.

Even if there are no direct-to-12 V UPSes on the market, running everything through a single UPS will still probably be far more power-efficient.

Finally, as others have mentioned, many small machines will draw more power than a single big machine on a per-core basis. A 32-core EPYC may draw 200 W, but if each of those 32 cores runs one service, that is 6.25 W per service. A cluster of 4-core machines drawing 30 W each comes to 7.5 W per service. So small does not scale as well, but the question then becomes whether you actually need 32 cores: if you only need 16, the big iron still draws 200 W while the small machines draw only 120 W, whereas at the full 32 cores it is 200 W vs 240 W. So yes, this is very relevant too.
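
If you want to play with those numbers, here is the same comparison as a small sketch (the wattages are the assumptions above, with one single-core service pinned per core):

```python
# Sketch of the per-service power comparison above:
# a 200 W 32-core EPYC vs 30 W 4-core mini nodes.

EPYC_W, EPYC_CORES = 200, 32
MINI_W, MINI_CORES = 30, 4

def cluster_watts(services: int) -> int:
    """Total draw of enough mini nodes to host `services` single-core services."""
    nodes = -(-services // MINI_CORES)  # ceiling division
    return nodes * MINI_W

for services in (16, 32):
    print(f"{services} services: EPYC {EPYC_W} W "
          f"({EPYC_W / services:.2f} W/service) vs "
          f"minis {cluster_watts(services)} W "
          f"({cluster_watts(services) / services:.2f} W/service)")
# 16 services: 200 W vs 120 W; 32 services: 200 W vs 240 W.
```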

Hope that helped you somewhat :slight_smile:

I’ve seen some messages here that kinda go both ways. There’s merit to criticizing the design, but in fairness to OP this isn’t a terrible design either.

If you’re on a tight budget but want to experiment and learn “clustering”, buy some used corporate desktops. Paying the premium for something new from Minisforum or elsewhere is… probably not advisable for this use case?

As an example:

That bad boy is $100 cheaper than the Minisforum machine OP was looking at. The Core i5-8500 still has gas left in its tank, and you get 16 GB of RAM and a 512 GB SSD as part of the deal. Sure, the CPU isn’t as fast, but if you don’t have hard requirements for your workload, or you don’t even really know what your workload is, save the money.

I’ve done some interesting stuff with these things:
Guide to Turning a Project TinyMiniMicro Node into a pfSense Firewall (servethehome.com)
STH Project TinyMiniMicro the Plex Server Setup Guide - ServeTheHome

I am a huge fan of what Patrick at STH has dubbed “TinyMiniMicro”. The fact that you can grab pretty beefy little computers for sub-$300 used on eBay is fantastic. I futzed around doing something similar over on the TrueNAS forums early this year and learned some hard lessons.
Cluster Relative SMB Performance | TrueNAS Community

There are definitely some “gotchas”. USB mGig adapters are… finicky at best. They work fine for their intended use cases (i.e., docking a laptop where you may want to access files on a NAS), but when you consistently load them up with client traffic in a server use case, YMMV. Plus the very fact that they are USB presents some stupid challenges to overcome.

Also, FWIW: don’t worry about power in this general area. Small 1L PCs sip power, and modern processors (anything Kaby Lake onward, really) have VERY LOW idle power consumption. Quibbling over a 65 W vs 45 W variant really comes down to a couple euro a month. Until you get into the wonderful world of graphics cards, you honestly shouldn’t see a massive difference in your electricity bills. Think of the money spent as an investment in your own future.

It’s a lot cheaper than college, and I’d bet you’ll learn just as much, but in different ways. :stuck_out_tongue: Textbook knowledge is not nearly as valuable unless you have the practical experience to back it up. I’ve had guys working for me straight out of college. I’ve also had retired guys working for me looking to kill time. I’ll take the old greybeard telco technicians over an “up and coming” programmer any day if the work requires any sort of out-of-the-box thinking.

However you end up doing it, remember to have fun. Also, please document your journey here. Learning from folks’ successes and failures is how we all grow together.

EDIT: Also, I’ve seen quite a few folks (quite successfully!) cut their teeth on home labbing with crappy old laptops with broken screens.

You make some sacrifices in I/O but you get a “free UPS”.


Oh lawdy, if only you saw the guide I’ve got cooking.


I can’t wait :stuck_out_tongue: Make sure you @ me
