"Superbuilds" vs. multiple discrete PCs

Hi all!

I’ve got a question that I regularly bounce around, and I’m looking for feedback.

I currently run an independent Unraid server alongside my main desktop. Unraid has three primary functions: 1) handling Docker containers, 2) Plex, and 3) NAS duties.

I am actively waiting for AM5 to decide whether I will go AMD or Intel when I rebuild my current workstation. I am, however, wondering about the costs and benefits of merging these machines into one hardware solution and virtualizing Windows. I’m certain there are trade-offs, but I want to make sure I fully understand the potential repercussions if I go that route. It would be beneficial not to clutter up my office with multiple machines (out of sight/out of mind doesn’t really work in my home), and the cost savings are probably also significant.

I’m open to all viewpoints here.

Thank you!

3 Likes

Hi

I did a “superbuild” due to space constraints in my home, but also because my old system was an ITX build and I wanted to do something different. I think the cost factor of one superbuild is worse than multiple smaller builds, because you’re more likely to need expensive “speciality items” rather than just reusing the diverse pile of computer gear that collects over the years. Plus, with multiple smaller builds you can focus each build on exactly the problem you want it to solve.

2 Likes

One reason I had for doing my “superbuild” is power use. The overhead of running the hardware at idle is shared in a superbuild but multiplied across a collection. Each service or VM will still have similar power costs, but the idle overhead of the CPU, RAM, motherboard, and PSU isn’t multiplied by the number of systems. Unless you go for just a couple of (or mostly really low-power) separate systems, you are likely to end up using more electricity at idle with a collection.
That said, I wish I still had a separate system to run production services on. It’s a lot more comfortable to have a separate system to freely mess around with.
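
To put very rough numbers on that idle overhead, here’s a minimal sketch; the 40 W per-box overhead and $0.15/kWh price are assumptions, not measurements, so plug in your own figures:

```python
# Very rough, assumed numbers: ~40 W of idle platform overhead per box
# (CPU, RAM, motherboard, PSU losses) at an assumed $0.15/kWh.
IDLE_OVERHEAD_W = 40          # assumed idle overhead per system, in watts
PRICE_PER_KWH = 0.15          # assumed electricity price, in $/kWh
HOURS_PER_YEAR = 24 * 365

def yearly_cost(watts: float) -> float:
    """Yearly electricity cost of a constant draw of `watts`."""
    return watts / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH

one_superbuild = yearly_cost(IDLE_OVERHEAD_W)       # overhead paid once
three_boxes = yearly_cost(IDLE_OVERHEAD_W * 3)      # overhead paid three times

print(f"superbuild idle overhead: ~${one_superbuild:.0f}/yr")   # ~$53/yr
print(f"three-box idle overhead:  ~${three_boxes:.0f}/yr")      # ~$158/yr
```

Nothing exact, just to show how the per-box overhead scales with the number of systems while the per-service cost stays roughly the same.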

1 Like

Superbuilds are fun until something dies, and then everything you do with the superbox is suddenly unavailable.

Since I set up my own NAS years ago, it’s really saved my ass a few times when a “do stuff” machine has died, or I’ve done something boneheaded that I didn’t have time to fix and needed to restore from “store stuff”. If you’ve got one “store stuff” machine with a ZFS array, or a cloud backup (extra points if you’ve got both!), and one “do stuff” machine, then that can be a pretty good setup that will require Something Very Bad (fire, flood, hit by meteor) before you lose something.

That, and virtualizing things within the same machine can also help. I try to stay away from installing things on the base OS for “do stuff” machines, and instead have different VMs for different tasks. That can make it really easy to just yank the VM and spin it up again on different hardware if something dies.
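
As a minimal sketch of that “yank the VM” idea, assuming libvirt/KVM and its Python bindings; the VM name and output path are made-up examples, and you’d still have to copy over the disk image the XML points at:

```python
# Sketch: export a VM's libvirt definition so it can be re-defined on other
# hardware. Requires the libvirt Python bindings (python3-libvirt);
# "media-vm" and the output path are hypothetical examples.
import libvirt

conn = libvirt.open("qemu:///system")
try:
    dom = conn.lookupByName("media-vm")
    # Grab the persistent (inactive) definition rather than live runtime state.
    xml = dom.XMLDesc(libvirt.VIR_DOMAIN_XML_INACTIVE)
finally:
    conn.close()

with open("/tmp/media-vm.xml", "w") as f:
    f.write(xml)

# On the replacement host, after copying the disk image the XML references:
#   conn = libvirt.open("qemu:///system")
#   conn.defineXML(xml)                     # register the VM
#   conn.lookupByName("media-vm").create()  # and start it
```

The same idea works by hand with `virsh dumpxml` on the old box and `virsh define` on the new one.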

If it’s just you, then the “store stuff” machine doesn’t have to be anything big and fancy. If it’s just a backup target, then an off the shelf NAS like a Synology box would probably do fine. If you end up running a media server, or using it to store VM disks or something then you might need to scale up the requirements to fit what you need.

3 Likes

Superbuild, as in multiple systems contained in the same package? [e.g. ATX + ITX]
Or one spec’d-out system, with resources being divvied up [appropriate to needs]?

1 Like

One spec’d system.

1 Like

I’d go with multiple smaller machines that are easier to keep cool and less noisy. Also, a workstation is something you can at least theoretically turn off when you’re asleep or away from home.

5 Likes

The choice really depends on your workload. For lots of tiny VMs/containers, small computers work well and are much more flexible. For a large, single workload, a “superbuild” can work best.

If you want a NAS with a non-trivial number of hard drives, then one larger build is definitely easier. Most smaller builds I’d consider would be mini computers (like a Lenovo Tiny), which are basically the size of one or two 3.5″ HDDs.

Personally, I have one NAS running Jellyfin in a Fractal 7 case, and a Lenovo Tiny running all my mostly-idle VMs. This gives me enough flexibility to do NAS maintenance without taking down my VMs, and if I really needed to, I could move the VMs to the NAS temporarily for maintenance on the small unit. It also means I could leave the NAS off if I really wanted.

2 Likes

Two schools of thought here… well, I guess three considerations, as mentioned above, plus my own reasoning/thoughts on the issue.

1.) Temperature and troubleshooting are two reasons to go with multiple machines. If one fails, it should be less costly to replace and cause less downtime if you need to migrate services.

If you get low-power machines like a Pi or other SBCs (single-board computers), which I don’t suggest at current silly prices, or fanless rigs like my Protectli firewall, they draw next to no power. You can also SSH into any machine as needed when an issue comes up.

2.) Single mega PC… I have considered this with something like the Phanteks Enthoo Pro 2 dual-system case:
Phanteks (PH-ES620PTG-DBK01) Enthoo Pro 2 Full Tower – High-Performance Fabric mesh, Tempered Glass, Dual System/PSU Support, Massive Storage, Digital-RGB Lighting, Black https://a.co/d/hIRjBP8

There’s a ton of room for storage; the trade-off is I start to lose room for cooling. I only go custom water loop, so that’s a consideration for me.

I’d also want to run a smaller ITX system for some duties and ATX for gaming, or vice versa. So I didn’t have to buy more hardware, I’d probably use my ASUS ROG X570-I for gaming and my ASRock Rack X470D4U mATX board as the server.

There are positives and negatives to each, depending on your needs. For now I’m happy with a dedicated machine for each use: a Pi for Pi-hole and Unbound, a Protectli firewall, an mATX Fractal server with room for six 3.5″ HDDs plus 2.5″ SATA and NVMe drives for data hoarding, file serving, and some services, my daily-driver micro PC with a Ryzen 5 PRO 4650G, and finally my gaming PC. It’s nice to have backup machines as well in case issues arise :slight_smile:

So I guess you could count me in the multiple-machine camp. I could virtualize Windows and all, but that’s a lot of work and can cause issues depending on the hardware and software used.

@ThatGuyB, @Trooper_ish and @PhaseLockedLoop could all have good input as well

2 Likes

Yeah, I changed from multiple Pis to a single ITX build.

A few Pis were able to do the tasks I needed, from NAS storage through DNS, access point, and Emby server to firewall.

But a single ITX board allowed x86_64 binaries, a native SATA bus, and PCI options.

I still find uses for the Pis around, and I’m totally glad I got them.

1 Like

I would say the superbuild would be a lot more expensive and a lot more fragile (if the superbuild breaks, all the PCs on it break with it).

Also, if you have five separate computers, you can just upgrade incrementally and push the leftovers downwards. While I certainly don’t think you should run an old 4-core/8-thread 250W CPU over an 8-core 65W Ryzen 7 or Core i7, a case could be made for upgrading one PC every year and rotating the parts downwards, all according to budget and needs.

Then again, the superbuild gives geek cred and a lot of knowledge about virtualisation and server-room stuff, so if you need that, then…

3 Likes

Many comments, a lot to unpack…

Single superbuild
Pros:

  • potential to learn about virtualization, resource splitting and a few skills related to this type of job;
  • in theory, less waste, both in power and other idle resources, as you’d have one system in use at an average rate instead of many systems idling;
  • scaling up means that potentially all tasks will finish faster when they demand resources;
  • everything is at your fingertips, no need to connect stuff to other boxes for setup or troubleshoot;

Cons:

  • more expensive and potentially devastating on cash-flow;
  • if you go with old enterprise hardware, it’s going to be a powerhog even at idle;
  • something breaks, then everything breaks;

Multiple PCs
Pros:

  • something breaks, just that thing breaks;
  • potential for power sipping both at idle and during workloads;
  • big advantage when it comes to actually powering off those things that you don’t need and starting them on demand;
  • good on cash-flow, buy one now, buy another one later, you don’t have to have a big saving on-hand;

Cons:

  • tasks may take longer to complete;
  • with resources not being properly distributed, one PC could work a lot, while others idle;
  • scaling out is not linear and your workloads need to support it;
  • a bit more of a hassle to troubleshoot, but not a deal breaker;

I always found it cheaper to go multi-low-end than one big box, especially given the advantage of buying one now and another one later. I had two ASRock J3455M boxes and a Pentium G4560 PC, all for probably less than a single 6-core Ryzen box, or maybe around the same price, but I couldn’t do a single beefy build at the time. And my three PCs were using less power than one Xeon X3450 that I had for free. I used to keep all of them powered on all the time, but I started powering off the chungus because it was using so much power at idle it was ridiculous. It only had a 500W PSU, but my power bill doubled at the time, and this was around when the coof started.

I sometimes find a middle ground, but one thing I despise is superbuilds that contain the router. The “forbidden router” that Wendell does is a horrifying thought for me. Knowing that my system could break and take down my internet access, and with it the ability to research how to solve the problem, is something I cannot live with.

I usually recommend people do a router build with small boxes like Mintbox 2 or Protectli or just keep a router around, then do a build for everything else.

I used to have a VFIO build on the aforementioned Pentium and it worked fine, but Windows was choking a bit and I had other issues with it, mostly because Manjaro was such a thorn in my side.

I am biased towards multiple builds if you have the space for them, but even if you don’t, doing a rack or stacking them somehow is still better than a chungus build. I’m not against superbuilds, I’m just against using those for everything.

One example I give people is that their home router isn’t their PC; imagine having to keep your PC on just to access the internet on your phone. It’s wasteful! And I argue the same for a NAS. I believe people should have a NAS in their homes; nobody should be storing their most important files on their PC. Build a small NAS, store your files there, and encrypt and send them to a cloud or somewhere offsite. Bonus points: you can access your files from any device, and migrating to a new PC is easier.

Now here comes the even worse part of my ramble. I believe one should have a PC build for their own consumption, even a VFIO build good enough for just two OSes, and a second build that stays on 24/7. The second one should be spec’d per needs: it can serve as a NAS, a small hypervisor for services like Jellyfin or whatnot, and even a forbidden router, provided one keeps a backup AIO router around to reach the internet in case that build goes poof.


I somehow managed to derail the thread from “one superbuild vs. many small builds” to “how much power does a build need and how many builds should there be?” It is slightly related, but a bit outside the topic.

My current infrastructure consists of:

  • a RPi 3 router (WiFi WAN);
  • a RPi 4 8GB - daily driver PC;
  • a powered-off RPi 2 that I may use as a temporary TFTP server for netboot;
  • a powered-off Pine64 RockPro64 that will become a router once I figure out how to run OpenBSD on it (and if that fails, FreeBSD);
  • an Odroid N2+ currently used as a makeshift NAS, which will become an LXD container host;
  • an Odroid HC4 that I haven’t figured out yet, which will be my NAS;
  • an unfinished Threadripper 1950X build (I still need to order ECC RAM). This will be a VFIO build for Windows and Linux gaming (in case something doesn’t work on Linux, I will power on Windows) and a testing ground for things I don’t plan to run in LXD, like a test virtual Ceph cluster for learning, a virtual cloud infrastructure (not for sale or rent), and an automation zone for spinning up different VMs, testing, updating, customizing, purging, and so on;

The plan for the SBC infrastructure is to make it redundant by adding another switch and doubling up the NAS, router, and LXD host. I plan on automating some tasks, like updating, but I don’t know what alerting system I’m going to use. I’ve used Centreon in the past, but I don’t like its hacky Nagios roots; I’d rather use Nagios itself if that’s the case. I’m torn between Zabbix and Grafana + Prometheus + Alertmanager. I need something that can serve as an automatic response: updates needed → do the update; restart needed → send me a notification.
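
Not a verdict on Zabbix vs. Prometheus, but a minimal sketch of the kind of automatic response I mean, assuming a Debian/Ubuntu-style host (which drops /var/run/reboot-required when a restart is pending); the webhook URL is a hypothetical placeholder for whatever notification channel ends up in place:

```python
# Sketch of the "automatic response" idea: refresh the package index, apply
# updates if any are pending, and notify if a reboot is required afterwards.
# Assumes a Debian/Ubuntu-style host; the webhook URL is a hypothetical
# placeholder for whatever notification channel you actually use.
import json
import os
import subprocess
import urllib.request

WEBHOOK_URL = "https://example.invalid/notify"   # hypothetical endpoint

def notify(message: str) -> None:
    """POST a small JSON payload to the notification webhook."""
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps({"text": message}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

def updates_available() -> bool:
    """True if apt reports any upgradable packages."""
    subprocess.run(["apt-get", "update"], check=True)
    out = subprocess.run(
        ["apt", "list", "--upgradable"], capture_output=True, text=True
    ).stdout
    # The first line is "Listing..."; anything after it is an upgradable package.
    return len(out.strip().splitlines()) > 1

if updates_available():
    # "updates needed -> do the update"
    subprocess.run(["apt-get", "-y", "upgrade"], check=True)

if os.path.exists("/var/run/reboot-required"):
    # "restart needed -> send me a notification"
    notify(f"{os.uname().nodename}: reboot required after updates")
```

Whatever monitoring stack I pick would just wrap this sort of check with scheduling and alert routing.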

The reason I went with SBCs is that I eventually want to power them via solar panels, have them run 24/7, and get redundancy. A cluster of three Intel NUCs would have been easier, but at the cost of probably more power use and a higher price. It would have opened the gate to full x86 compatibility, although I probably don’t need that: everything I run is open source, so I can just compile it.

1 Like

If you are a pragmatic person, I would stay away from “superbuilds”, since there will almost certainly be “small” issues that are going to f you over.

ASK ME HOW I KNOW :upside_down_face:

On the other hand, the amount of psychological gratification should not be underestimated if your project actually works as planned after a thorny odyssey.

3 Likes

Depends on the runtime and what you use your main desktop for. Accounting for enough compute power to run everything you need concurrently is what, in my opinion, will decide the route to take.
Especially if you want to integrate your desktop into the one machine too.
If you do, keep in mind that it would effectively be on 24/7/365, and that’s a cost you need to factor in.

In your situation I’d go with a Plex + Docker machine, a desktop PC, and a NAS machine.
That’s because a NAS is where you keep critical data, so it should run as little software as possible to keep it secure. Plus, if it’s only serving data, it can be low powered and stay on at all times without costing too much in electricity.
A Plex + Docker machine can have more horsepower, be “less secure”, and not be kept on all day every day (unless you have Docker instances that need full-time availability).
A desktop machine on its own can fail without bringing everything down, can be turned off at night or when you’re out to save power, and can have all the compute power it needs.

1 Like

This topic was automatically closed 273 days after the last reply. New replies are no longer allowed.