Low cost, high efficiency server refresh. Searching for best options?

Back to topic: if you want a good NAS base that doesn’t take up that much space and still has some POWAH to fool around with, I’d recommend something like this as a base package. Highlights include:

  • 6 cores, 12 threads (great for some virtualization schtuff)
  • Intel 2.5GbE LAN port
  • Chassis houses 6 mechanical bays (though you will require an M.2 SATA expansion card)
  • Latest gen for AMD
  • 134W max power, should be possible to undervolt for 25-30W idle
  • PSU is fully modular, which is a godsend for small cases
  • It has Intel WiFi too! :smiley:

PCPartPicker Part List

Type           Item                                           Price
CPU            AMD Ryzen 5 5600G                              $176.33
Motherboard    ASRock B550 Phantom Gaming-ITX/ax              $169.99
Memory         TEAMGROUP T-Force Zeus 2x16 GB 3200MHz CL16    $82.99
Storage        Samsung 960 Evo 250 GB NVMe                    $47.99
Case           Fractal Design Node 304                        $112.70
Power Supply   EVGA SuperNOVA GA 550 W                        $59.99
Total                                                         $649.99

Possible upgrades:

  • ECC RAM ( $50 extra for RAM, $30 extra for 5650G Pro )
  • m.2 expansion port with SATA ports ( $25 on Amazon )
  • A 10 GbE NIC ( $133 on Amazon )

But I’d say the budget for that is well beyond what the OP is looking for; I’m posting this just to show what a couple of extra $$ will get you, if you can.

Is it verified that ECC messages/errors etc. are reported correctly and not just being silenced?

For a budget home NAS, you could go a lot lower than those specs. This is overkill for most people, unless you ARE going to run VMs. If you can get away with just running Docker or Kubernetes, then save your money.

Fairly sure you could get away with an RK3399 SoC (2x A72 and 4x A53 cores) or something similar in performance for a moderate number of users (hardware crypto is more or less a requirement, though). It would be nice to have two PCIe slots (one for a SATA controller and one for a dual-port NIC) if you wanted to combine both firewalling and a NAS into one device.

Firewalling 1 Gbit is more than fine; running Snort and similar might be a bit ambitious, though. I had no issues reaching line speed with Samba off SATA drives on my RockPro64 board.

You mean expose it to the internet? Why would you be running Snort on your NAS? Maybe on your firewall/gateway, and even then only if you’re going to expose services, which isn’t a very good idea on your home’s internet connection behind your ISP’s firewall, and without a static IP. This diizzy person is beginning to make my head feel dizzy.

I just wanted to clarify some performance data, if it’s a plain NAS device there’s no need to run snort or such software :wink:

Thank you everyone for the continued ideas and information - sorry that I’m only able to reply now!

It looks like it would be useful to clarify the scope of what I plan to use my server for from now on. The current computer was bought to do a lot at once, and I’ve reduced what I use it for by quite a bit. What I currently plan, in priority order:

  1. Samba/SSHFS file server. Ideally I’d like to continue using ZFS to just connect my current mirrored drives, and keep benefiting from its features. I’m not too attached to it, so I can change if it will make a big difference
  2. Store git repositories for backup and sync between the computers I work on
  3. Will probably move my ad blocking DNS server over to a (very bare) VM, to save having to power and connect an old Raspberry Pi 2. Would just make things a bit neater
  4. As this (other than my router) is the only always-on computer that I have, I would like to use it as a VPN server. Due to some peculiarities with my internet connection, this would also mean acting as a VPN client
  5. Hosting a single (very small) Minecraft server would be a bonus, but is not a 24/7 task, and I can happily just forward network traffic to another computer when I’d like to do this. I have enough that there will always be one free to do this job, and they can share the files on the network share

Everything outside these would either be lightweight enough to run anywhere, or likely fine to run on another computer on-demand.

Within this, a few tasks probably need to be considered specifically:

  • I host media files (not using any media-specific software) to play back on various devices, but haven’t needed any immediate transcoding. I can always do that in batches on my laptop or desktop if needed, as the time it takes doesn’t matter much.
  • The media library will not be regularly added to, all I could add at the moment is backups of my small DVD/Blu-Ray collection. I’m not recording TV any more (decided to save on the TV licence) which was the previous source of this.
  • The server also hosts automatic incremental backups of my computers, so keeping up with regular transfers of lots of small files would be great.

I absolutely see the reasoning to suggest SSD storage and 2.5+ GbE NICs, however for my situation, these would need me to spend a lot of money upgrading hardware, or provide no real benefit. A quick overview of the computer environment in my house:

  • I use an all-in-one Mikrotik LTE router + AP for internet. LTE is by a fairly big margin the best internet I can buy where I’m living (and actually costs less than the alternative). The AP is slooow, topping out at 20 Mb/s, but all it’s used for is light duty browsing and streaming. I’ll buy a new AP if I ever need more, but considering that anything faster than ~70 Mb/s down is either beyond the LTE tower or needs a higher-end modem+antenna, there isn’t much point.
  • Wired to the router is the most basic managed 8-port gigabit TP-Link switch, and I’ve put in Cat5e runs to the rooms computers are used in. This is why I’m unlikely to move past gigabit, as I have no 2.5 GbE clients to connect, and would also have to upgrade the switch to support them. I can however do link aggregation with dual gigabit NICs if more bandwidth is needed in the future
  • Four client computers to the server, can’t imagine using more than 2 at once: Gaming/General Purpose Desktop (wired), HTPC (currently wireless), work desktop (wired, previously mentioned mac mini) and laptop (wireless)

So far I’ve been warned about USB enough to consider it a lower-priority option, and quite like the idea of looking further into using the RockPro64, if I can find parts to set it up how I’d like. I get the recommendation of x86, it being a first-class supported architecture (and more efficient than it gets credit for), but the options for x86 do tend towards the expensive (but more versatile, although I don’t really need that). It’s not out as an option, but probably less likely.

I’m still waiting on getting a power meter to find out the running costs of the current server, but at a guess it’s probably in the region of £50 to £100 per year, especially if making proper use of it. So the budget for something better is probably around £100 to £200. I do have parts I can re-use (boot drive, storage drives, Intel NIC) along with probably being able to make or modify my own cases for non-ATX computers. If a new device can replace or improve another one that I currently use at the same time, more budget is available as I can sell on the other device (double-duty as HTPC or work computer; I don’t really want to deal with virtualizing the gaming desktop).
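
For anyone wanting to sanity-check that running-cost guess, the arithmetic is simple; here is a quick sketch in Python (the wattages and the ~30p/kWh rate are assumptions for illustration, not measurements):

```python
def yearly_cost(avg_watts: float, price_per_kwh: float = 0.30) -> float:
    """Rough yearly electricity cost (GBP) for an always-on device."""
    kwh_per_year = avg_watts / 1000 * 24 * 365  # watts -> kWh over a year
    return kwh_per_year * price_per_kwh

# A tuned low-power build vs an older box idling higher:
print(round(yearly_cost(20), 2))  # 52.56
print(round(yearly_cost(40), 2))  # 105.12
```

So £50 to £100 per year corresponds to roughly a 20-40 W average draw at UK-ish rates, which is a useful target when comparing builds.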

If you want something “right now” forget about ECC.

I can’t verify compatibility between 12th gen and FreeBSD 13.1-RELEASE or 14-CURRENT for now, but since you’re not mixing core types with that i3 CPU I think you’ll be fine.

Oh I completely agree. Just wanted to show what is out there at the somewhat higher tiers. What I like with this setup is that it is $50 extra for CPU, $30 extra for RAM and ~$60 extra for motherboard compared to scraping the bottom of the barrel, and you have more power in your home server than you can reasonably use. You could even use it as a guest gaming machine if you’d like! :slight_smile:

Incidentally, tweaking the build for U.K. prices brings pretty much the same parts minus the 2.5 GbE LAN (still Intel though), and costs 590 quid instead:

Got pretty much the same deal over at uk.pcpartpicker.com with AMD, except… Add £30 for 2 more cores, £45 for twice the RAM, and roughly the same price for the motherboard. So ~£320 for an Intel core and ~£400 for the AMD core. Add ~£200 for case, PSU and boot drive, and you will be set for a low power monster. :slight_smile: Both are solid builds though.

Though, we’re soaring way above the price roof still, given that OP requested:

At that budget, very little is on the table. Though, if we’re only talking about a core, and do not really care about performance too much, this core is probably the cheapest you can get away with for ~£250:


Thanks for clarifying. As it turns out, my suspicions were correct. The only workloads you’re planning which might be of any concern at all are the ZFS and VPN. I have been building computers for longer than @wendell, and have decades of experience running these types of workloads on far lesser hardware. Let me assure you that literally ANY cheap, low-powered computer you scrounge together with relatively recent parts is GOING TO SUFFICE. You do NOT need a powerful computer for what you’re planning to do. You also don’t need to use ARM or a SBC. Those are far too limiting, and offer you zero upgrade path. Sure, they’re a lot of fun to hack around with, but I wouldn’t rely on one for anything serious.

Recently, Wendell has made several videos about building a low cost, low power server to handle some really demanding tasks. He highly recommends the Asrock DeskMeet as the system he’d want for himself. I’m with him on that, and would love to own the DeskMeet for myself. I already own their DeskMini, and it has served me well. I’ve really enjoyed it.

You don’t need to go out and buy a completely new system, either. You can literally just throw whatever cheap motherboard and APU together with at least 8GB RAM, and call it a day. IT WILL SUFFICE, and save you TONS of not just money, but other headaches in the long run. You could even buy used, but I’d stick to relatively recently released parts. Just get something standard and upgradable, and it’ll serve your needs well for years to come.


I should add that if you’re planning on doing some light video transcoding, then I’d get an AMD APU over Intel, because the integrated graphics will get you a lot further. Also, if you’ll use TrueNAS Scale, it has the ability to containerize a lot of that work, which means that you don’t need to pass anything through to the hypervisor. It can use the system’s integrated graphics, which would be adequate. Both Intel and AMD graphics drivers are already baked into Linux, and the people over at IX Systems have even included a recent Nvidia driver in case you decide to add a discrete card later. No need to pass it through and deprive other apps. Just like on your gaming rig, they can all share the same resources.

Frankly, you could do that with Unraid, but then you can’t use ZFS. Personally, I don’t think ZFS is a necessity, and getting rid of it would free you up and expand a lot of cost-saving options. FYI, LVM can also do snapshots. Also, keep in mind that snapshots are NOT a BACKUP. Rollbacks are convenient, especially on a mission-critical system which MUST remain online. Think of snapshots the same way you think about RAID: it adds resiliency, and possibly some redundancy. However, you still NEED a proper backup and recovery solution.

I do second this, but with a couple of caveats. First off, I can only suggest three CPUs in the current market: the quad core Intel Core i3 10100, the quad core Intel Core i3 12100, and the hexacore AMD Ryzen 5 5600G.

The reason I do not suggest anything cheaper/older is that due to the GPU craze, everything with any sort of GPU performance seems to have been swept up, which is CRAZY. Yes, even the Ryzen and Athlon 3000 and older APUs. They could be found second hand, though, I suppose, if you care to look enough.

Now let’s do a quick overview of the CPUs and cheapest Motherboard / CPU combo for the CPUs in question:

Field              Intel Core i3 10100   Intel Core i3 12100   AMD Ryzen 5 5600G
Cores              4                     4                     6
Threads            8                     8                     12
Socket             LGA1200               LGA1700               AM4
Low tier chipset   H510                  H610                  A520
Mid tier chipset   B560                  B660                  B550
CPU price          £105                  £129                  £160
Low tier combo     £164 (board £59)      £189 (board £60)      £208 (board £48)
Mid tier combo     £184 (board £79)      £228 (board £99)      £236 (board £76)

So when going for the cheapest, it’s easy, right? An H510 chipset board and a 10100 and we’re off to the races with 35 quid to spare for RAM!

Unfortunately not; here comes the sad part. The H510 and H610 have had so many features cut that they are ABYSMAL for server use. You could do it, but it just makes more sense to upgrade to the higher models. Meanwhile, the A520 has no such problems; while it too has features cut, those will not impact your server in any meaningful way.

This completely rules out the 12100 as a worthy competitor, and since only £24 separates the 10100 (on a mid tier board) from the 5600G, the 5600G will provide more cores and a better power curve overall. Thus, unless those £24 really matter to you, go with the 5600G on an A520.

As for RAM, 8 GB is sufficient, and the cheapest 2x4 GB kits cost £26 for a 2133 MHz kit, but a 2x8 GB 3200 MHz kit costs only £52. That is both faster and twice the capacity at pretty much double the price.
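
Worth spelling out the per-gigabyte maths on those two kits (using the quoted prices):

```python
# (capacity in GB, price in GBP) for the two kits quoted above
kits = {
    "2x4 GB 2133 MHz": (8, 26.0),
    "2x8 GB 3200 MHz": (16, 52.0),
}
for name, (gb, price) in kits.items():
    print(f"{name}: £{price / gb:.2f} per GB")  # both work out to £3.25/GB
```

So the bigger kit costs exactly the same per gigabyte, and the speed bump comes along for free.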

So, is it worth £50 for a substantially more powerful PC? This is up to you, and the second hand market might have something to say about it, too. I say personally, that while a slight overreach, you will have a better experience on the AMD system with 16 GB RAM. This is however an opinion, in the end go with what feels right for you. :slight_smile:


I finally found that video I was thinking of. I thought it was Level 1 Techs, but it was actually LTT who made it, which threw me off. Also, it’s a little dated. Still, the advice is sound, and definitely holds up for the type of system you’re trying to build. Certainly, you don’t need gaming performance, but the important information is at the end when he wraps it all up, and explains how to go about planning a budget build, and what to expect from it. For once, I’d actually stand behind Linus, which can often be dangerous.

Some of these workloads will perform noticeably poorly on “any” CPU, and some, such as git and Minecraft, rely on good single-threaded performance, so an Intel i3 or better (performance-wise) is probably the best choice for this workload/usage. ARM isn’t going to cut it at this price point, and interacting with rather large git repos (for example) isn’t fun at all.

Getting a 10th gen Intel or older CPU is a bad idea in general due to vulnerabilities and the performance penalties from applying mitigations. If you can get a system dirt cheap the price/performance might work out better, but buying a new one isn’t a recommended purchase. This also goes for AMD: you want a Zen 3 based CPU.

Getting sticks smaller than 8 GB today is also not a good idea. While using a single stick will reduce performance somewhat (mainly 3D performance using integrated graphics) due to not being able to run in dual channel mode, you won’t use up the memory slots, which avoids potentially introducing instability further down the line if you need to expand and end up populating all 4 slots (if your motherboard supports that many).

Intel’s QuickSync performs better than AMD’s counterpart in almost any scenario, so I’m not sure why you’d go for AMD if you’re going for transcoding. There are a number of articles around covering this.

@thechadxperience
You should read up on ZFS; it’s a much better choice than most other file systems, if you care about your data that is.

Ugh! Bruh… If you care about your data, you need to be more concerned about backups before all else.

Seriously, though. If I were the OP, ZFS would be the very first thing I would get rid of.

Reasoning:

A.) ZFS is a resource hog. It was designed to only offer performance benefits on capable hardware. As a home user, you’ll only observe performance benefits from running ZFS if your network speed exceeds the capability of your storage array. The OP is limited to 1 GbE. As you so kindly explained, a single spinning HDD is capable of 250% of the performance of his network. That means he’s already saturating what his network will allow, which is why the RAM cache that ZFS ARC adds isn’t helping his performance. He’d require a network capable of providing at least double his storage performance before he’d see any tangible benefit. That requires upgrading all the NICs in all his computers and switches, which his low budget simply won’t permit. Also, it would consume more power. And lastly, why does he need it?
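
To put rough numbers on that bottleneck argument (the HDD figure is an assumption for illustration, not a benchmark):

```python
# 1 GbE is 1000 Mb/s on the wire, i.e. 125 MB/s before protocol overhead;
# Samba/TCP overhead usually leaves ~110-117 MB/s in practice.
gbe_mb_s = 1000 / 8          # theoretical: 125 MB/s
hdd_mb_s = 250               # assumed sequential speed of a fast modern HDD

effective = min(gbe_mb_s, hdd_mb_s)
print(f"Effective ceiling: {effective:.0f} MB/s, set by the network")
print(f"Disk headroom: {hdd_mb_s / gbe_mb_s:.0%} of the link")
```

Whatever the exact HDD figure, the smaller of the two numbers wins, and on a 1 GbE network that is the link, not the disk.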

B.) I’ve said it before, and I’ll say it again: snapshots are NOT backups. You still NEED a dedicated backup and recovery solution. He probably doesn’t NEED snapshots. Even if he does, there are still better solutions to run considering his hardware constraints. For example, LVM also supports snapshots, though they are limited to the volume level. If he simply must have filesystem-level snapshots, he could use BTRFS, which is less demanding. Since he only has 2 HDDs, he doesn’t need to worry about parity RAID, which requires at least 3. However, I doubt he needs even that. Snapshots only provide resiliency, convenience, and continuous uptime. Unless you require being able to recover your server without having to take it offline, and I have my doubts, this is merely a “nice to have”. Just remember to offset those benefits against the drawbacks.

C.) Filesystem compression doesn’t help files which can’t be compressed, like MP3, FLAC, MP4, AV1, etc., which are already compressed as it is. You really need to understand what other file types you’ll be storing, and whether compressing them is going to save that much space. Also, remember that his system is bottlenecked by 1 GbE, which his storage capability already exceeds, so compression probably won’t help his performance, either.
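
The point about already-compressed formats is easy to demonstrate: high-entropy data doesn’t shrink again. A quick sketch with zlib standing in for a filesystem’s inline compressor:

```python
import os
import zlib

text = b"the quick brown fox jumps over the lazy dog " * 500  # repetitive data
media_like = os.urandom(20_000)  # random bytes, like MP3/MP4/AV1 payloads

for name, data in [("text", text), ("media-like", media_like)]:
    ratio = len(zlib.compress(data)) / len(data)
    print(f"{name}: compressed to {ratio:.2%} of original size")
```

The repetitive text shrinks dramatically, while the random blob actually comes out slightly bigger, which is exactly what happens when a filesystem tries to compress a media library.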

D.) If he’s planning on running additional services, which he is, then ZFS is only going to starve his system of critical shared resources which could be put to better use elsewhere, like running all those container workloads. Without ZFS, he could switch to Linux which offers better support for containers, and which I would recommend over a hypervisor, because they can share system resources, without having to exclusively passthrough devices. That means he can run more workloads on less hardware.

E.) He can use the money he’d save being able to buy a less powerful system on some extra storage drives to install into his old server, which he could then use for dedicated backup duty. He doesn’t need to keep it running 24x7, since it’s only going to be used for occasional backup workloads. Just turn it on until it’s finished, and then shut down.

If he expects to retain ZFS, then he should seriously start to consider upgrading his network before all else. Even more so than his budget constraints, that’d be his main limiting factor. Also, if I were him, I’d consider running Unraid over FreeBSD or TrueNAS any day. In fact, the rumor on the street says that Unraid is the reason IX Systems decided to create TrueNAS Scale. They saw the writing on the wall and realized they’d have to come up with something which could compete with it. I think Scale is great… For those with the budget to permit it. For everyone else, there are even better options.


Here’s another idea, if you’ve got the patience to learn it. Red Hat is now letting individuals use their products at no cost on up to 16 systems, virtual or metal. They offer a product called Red Hat Hyperconverged Infrastructure which has all your hyperconvergence needs baked into a single interface, based on Cockpit. It’s a powerful one-click solution to get you started. It’ll offer far greater power, performance and capability than anything FreeBSD or TrueNAS can offer. However, it’s also a lot more complicated and has a steeper learning curve. Although, if you can get good at that, you could get certified and land yourself a good-paying job. Something you might consider, if you’d like to one day be able to afford better hardware. Did you know Red Hat provides free training and self-learning resources?


It definitely looks like once I can get a figure for what the current computer costs me, it’s a decision between a higher spec build (using as many of my current parts as possible), or if there’s a good SBC that can handle the core parts of what I do.

The good news is that I’m not worried about git as a workload at all; all I want to do is have access to the base OS to manually create the directory structure that I follow, and to use SSH with the same user accounts that I already use for NAS file management. I don’t code too frequently any more, and what I do is generally for low performance devices, which can usually work with their own repositories.

Minecraft would just be a nice-to-have if it works on an option without a great deal of extra power consumption; I’m happy to run that on another computer on the rare occasions it’s needed, and can probably even automate that easily enough. A lot of my devices are set up with remote shell access and wake-on-LAN. Likewise, video transcoding can either take ages as a background task, or just be offloaded to an actually good computer at the time it’s needed.
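
Since wake-on-LAN came up: it’s simple enough to script without any extra tooling. A minimal sketch in Python (the MAC address below is a placeholder, not a real device):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a WOL magic packet: 6 bytes of 0xFF, then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the local network."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

# wake("aa:bb:cc:dd:ee:ff")  # placeholder MAC; run it, then SSH in once it's up
```

Pair that with a shutdown over SSH once the job is done and the on-demand Minecraft/transcoding box takes care of itself.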

It does sound like a really narrow band of viable used-but-recent systems if Zen 3 / Intel 10th gen are the oldest recommended. My newest non-server computers are only Zen 2 and Intel 8th gen, and honestly they have way more performance than I need.

A probably feeble attempt to justify my technology choices (I’m known among peers as the girl who uses weird old tech): I’ve used linux on my personal servers for a long time in the past, and while it is much more widely supported and has all the features, it just doesn’t feel as good to use for this particular case. I really appreciate the simplicity of BSD, as I tend to do things the (old) manual way, with a scheduler and shell scripts for automation, and minimal wrapper/manager software. It’s not what you’d do in a job, where the value is produced, but on a personal server, I really like it all neat and tidy, which is hard to feel when you have (for example) management tools for the one VM used, etc.

To give an analogy of the above; it feels a bit like editor choice to me; I personally choose to use vim most often when I’m working on 1-3 files, but open vscodium or an IDE when working in a big project.

I’m not using ZFS primarily for snapshots or compression; the other features are what matter for me! I use per-directory size limitation, and feel a lot safer with regular scrub runs to check for and find errors in the data (assisted by the mirror). Can’t actually remember what other things I’m using; I set it up quite a long time ago and haven’t logged in to check for a while.

I wasn’t aware of being able to access the commercial products for limited quantities, which is definitely interesting (thank you for pointing it out). Don’t think this is the time and place to use that though, as my limited budget is caused by life situations and not the types of jobs I could be in. I’ll definitely consider it if I’m looking for a more high-scaled toolset or a career change though!

The issue with older platforms is that you have hardware vulnerabilities, and the software mitigations (where possible) come at a high price in many cases (noticeably reduced performance), which is why I would think twice about getting hardware of older generations.

This thread has been going on for a while.

RockPro64 with a non-official SATA card and the official case, running FreeBSD, should be all that OP needs. Sips power, has ZFS, doesn’t have ECC, but makes up for that by being cheap both in hardware cost and in running costs.

An alternative, if OP is fine with skipping ZFS and using BTRFS, is the Odroid HC4 (only because I could not get ZFS to run on this thing on the official Ubuntu, Armbian, or other distros, but I think it comes down to user error and the fact that dkms sucks; I should have used the source). I would still suggest the RockPro64, only because it is better supported. The HC4 may be better supported in 2 or 3 years, but by that time it may be EOL already. Which doesn’t mean much, because software for it will still be updated, just that the hardware won’t be manufactured any more.


Hi! Yet another uncalled-for opinion, to make your decision making process even harder:

I can 100% vouch for the “just grab a used Mac Mini” route. I had an old 2014 Mac Mini with 16 gigs of RAM which I originally used for playing around with OS X. Since macOS 12, the 2014 Mac Minis have been out of support, so I figured “Hey, this might be the perfect home server machine: OK CPU, good RAM, SSD option installed, silent, low profile, and looks nice on the shelf!”

Of course it wasn’t without a bit of work: I tried putting Ubuntu Server 20.04 on it, but the version of GRUB that comes with it gets thrown off by the weird UEFI firmware on the Mini, so you have to install version 22.04.

The SD card slot really, really doesn’t work with this version of Linux, but everything else has been really great. I run PiHole, a GitLab instance (5 GB total repository size, so it’s a pretty good size!), a personal MediaWiki server, and an internal DNS server, and I used to have a Jenkins server running, but that’s been lying stagnant for a while.

The server itself is absolutely brain-dead simple. I do everything from the boring old Linux command line over SSH, religiously keep everything compartmentalized into Docker containers, and keep all the data that should be persistent on bind mounts. Those bind mounts are under a root /data directory, which I just manually rclone to Backblaze every now and then.

Anyway, just another random user chiming in. I’ve tried out TrueNAS, Proxmox, Unraid, etc., and while they clearly have value, at the end of the day I’m not a sysadmin, and there’s a real risk of forgetting how to configure/administrate them. (I have this problem with my MikroTik router: it’s super powerful, but I invariably forget how to configure it, and when something happens I have to spend a day figuring out how it works again, and then fix it. Obviously, someone who is a network engineer is not going to have that problem, and using high-end hardware would actually play to their strengths. So it depends on what your own skill set is.)
