Low cost, high efficiency server refresh. Searching for best options?

@risk
As mentioned before, USB controllers are kinda crap (at least if you care about your data) and Thunderbolt enclosures are very expensive, so that’s probably not an option.

@thechadxperience
One downside with any of these systems (“chinese systems”) is after-sales support, both in terms of warranty repairs and BIOS/firmware updates; at the very least I would be concerned if no BIOS updates are available.

Looking at the description, running FreeBSD barebones is probably a better choice and probably is what’s being done right now. I’m quite surprised that many here seem to lack(?) the knowledge to actually set up a server with various services instead of just adding VMs and Docker instances to everything, which of course has its upsides and downsides.

The Intel DeskMeet isn’t that bad, but it’s not a great deal either, and given ASRock’s rather odd track record on recent Intel motherboards I’d do quite a bit of research before going that route. The AMD variant also adds the cost of a decent NIC, and at least here it’s not much of a deal looking at the pricing.

@thro
As I mentioned earlier, Intel NICs are much preferred over Realtek, so that would add to the cost, and I don’t really see why you’d go for a Zen+ based CPU which is very dated by now and less efficient (the price difference is negligible). The same goes for the B450 chipset when B550 is around, and X570 is a downright bad choice in that regard.

@EleaOwl
Your biggest obstacle would be ECC memory support, which right now is hard to get cheaply on a home-server budget. I would highly recommend that you keep that as a hard requirement even if it means you’ll need to stick with your current setup for a while longer. A decent W680 motherboard would offer you many options (as long as it uses DDR4), and while very new it will most likely run fine on 13.1-RELEASE/13-STABLE, but availability is very sparse.

Reasoning: it’s not going to be doing much, and when idle the power draw difference is likely insignificant vs. the total system. Where I am they’re much cheaper than the newer Ryzen APUs, including Zen3. YMMV; if you can get a newer Zen APU for similar money, get that.

Point being, a low-end Ryzen APU even with 4 cores is likely going to destroy the performance of your v1 Xeon for most things, idle a lot cheaper, and include an onboard GPU to avoid needing to install a discrete card.

If you have a cheap GPU to use: I’ve been very impressed by the 3300X system I built as a “what’s the trashiest Zen2 build I can do - and just how bad is it” experiment during the early pandemic.

I mentioned B450 for that specific CPU as B550 does not support it (X570 does - be aware, B550 boards drop support for Zen+ based APUs, X chipsets do not). For a NAS or home server, X570 also gives you better IO for more/faster drives than B550 does. You can get X570 boards for similar cost to B550 these days too.

Given that, unless you want to do overclocking (which B550 boards excel at vs. X570), I’d say B550 is the worst option for a home server, and X570 actually gives you the best Ryzen system IO and CPU choice.

I’ve got Zen+ (X470), Zen2 (B550) and Zen3 (X570) based systems here in my house; if I were to build another, I’d probably go X570 again unless it was for gaming only.

edit:
Not trying to be a dick about it, but I’ve made the mistakes with B550 and CPU support :smiley:

This might be a handy read, in the same boat (UK) with Lecky costs


Funny that you say X570 provides better IO, as there have been multiple reports of X570 actually performing worse for SATA (which I guess is a primary objective in this setup). It also adds a fan, which is an additional point of failure. B550 is certainly not the best chipset out there, but looking at the primary usage it seems like a much better fit in this case if you’re going for AMD.

I would also be a bit careful throwing around numbers; looking at Intel Xeon E5-1620 Power Consumption - ServeTheHome and https://www.reddit.com/r/Amd/comments/pkp726/5600g_owners_what_is_your_idle_power_consumption/ it might not be as much of a difference as you’d expect (if any).

By better IO I mean more ports and also PCIe 4 to all slots (not just the GPU slot), so you could add PCIe-to-M.2 cards with more bandwidth down the road. Typically you get 2.5 gig ethernet included as well (which is probably plenty for a home server where most clients will be gig or slower and only a few users hit the box at the same time).
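To put a rough number on “probably plenty”, here’s a quick back-of-envelope using line rates only (real-world SMB/NFS throughput will come out a bit lower due to protocol overhead):

```python
# How many full-speed 1 GbE clients a 2.5 GbE server uplink can feed
# at once, ignoring protocol overhead (real-world numbers are lower).
uplink_mb_s = 2.5e9 / 8 / 1e6   # 2.5 Gb/s line rate -> MB/s
client_mb_s = 1.0e9 / 8 / 1e6   # 1 Gb/s client line rate -> MB/s

print(f"Uplink ceiling: {uplink_mb_s:.0f} MB/s")
print(f"Full-speed gig clients served at once: {uplink_mb_s / client_mb_s:.1f}")
```

So even two gig clients hammering the box simultaneously still leave headroom on a 2.5 GbE uplink.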

X570S (newer) boards don’t have a chipset fan; I waited for the S boards before buying into X570 for exactly that reason. I’ve not noticed any SATA speed issues myself, and I do have a heap of SATA SSDs hanging off mine. Maybe that’s an early X570 board issue; mine is an X570S Aorus Elite AX.

edit:
Not shitting on B550 as a chipset in general (I do own one, after all) - for gaming builds it’s great, as you get excellent memory overclocking and overclocking support in general, and it still has PCIe 4 to the GPU. I just don’t feel it’s the best choice for building a home server on within the AMD lineup, that’s all.

I’m well aware of the benefits of X570(S), but is it really relevant given the use case? The cheapest X570S boards come out at the price of mid-range B550 ones and still use a Realtek NIC, which is less desirable for this kind of build (again, “low cost”).

About SATA performance, https://www.reddit.com/r/Amd/comments/fwh7q0/sata_performance_is_gimped_on_x570_compared_to/ - I’m not saying that this applies to all use cases and motherboards but it is a potential issue.

lol, you’re talking about a 1-10% performance difference (10% being the extreme) looking at those graphs.

Certainly not worth giving up PCIe 4 to all slots imho.

edit:
downloading CrystalDiskMark out of curiosity to check now. I actually have a 1TB SATA 860 EVO amongst other drives in this box. Unfortunately it’s 88% full, so the results may be tainted by that.

Hmm… still checking.

As I mentioned, I have the Odyssey Blue, which does receive BIOS updates. Also, I’ve contacted their support to ask questions and gotten prompt responses. So, my experience with Seeed Studio has been very positive. One thing you might not know is that they make a lot of products which are used by governments and corporations, so they must have good support. They are especially popular in Europe. This isn’t one of those no-name Amazon fly-by-nights.

I disagree. TrueNAS isn’t just middleware. It in fact IS FreeBSD, tuned and refined to offer better performance and stability. It also includes an easy-to-use interface, which can be convenient and potentially faster for getting some things done. If you’re just running a NAS, then why not let somebody else do all the hard work of making it stable and fast for you?

I’m not as much of a fan of ASRock, although my experience has been decent. I’m no fan of their BIOS, but it’s fine. For a home server that’s just going to sit on a shelf and serve files, who cares? You’re never going into the BIOS that often, anyway.

You seem not to realize just how powerful these little systems actually are. I know a guy who is running over 25 containers on his Pi, and it runs perfectly stable. In fact, he’s even using it for a NAS!

Just think, how much more powerful would an i3 or Ryzen APU be?

How is the one it includes not a decent NIC? Plus, why would you settle for whatever 1 GbE NIC they throw in with the motherboard when you could add a 10 GbE card? Did you consider that you’re actually saving money which could be better spent elsewhere? Also, there is support in FreeBSD for Realtek cards, but it’s not always automatic. However, enabling it requires only a single command, and you only need to run it once.

Is anything in today’s market?

Unfortunately, most are only gen 3, though. Same goes for most NVMe drives you’d be putting into it. Although, for a low-power file server using only 2.5 GbE, why would you waste money putting NVMe drives in it? They usually include 1 or 2 slots you can use for your ZIL (SLOG), metadata, or L2ARC, if you need it.

And some would argue this is actually BETTER for a highly efficient, low-power home server that’s just going to sit on the shelf at idle power draw most of the time. It’s certainly better from an economic standpoint. Of course, the OP is only running dual HDDs, so even 1 GbE would be sufficient for that.

If you get BIOS updates that’s great; such information is actually worth mentioning, as in many cases you just hear crickets… :frowning:

I would be a bit careful about some of those claims; can you elaborate on “offer better stability”? If you read the first post, it mentions that serving files might not be the only task for this box, so a NAS-centric distro might not be the best choice.

Just because someone does X doesn’t mean it’s a good idea and/or the preferred way to go. You can of course shoehorn a lot of containers onto an RPi or whatever, but performance isn’t going to be good if all of them have moderate load, let alone bursty workloads.

What I meant was that Realtek NICs aren’t ideal; you most likely want to look at Intel or Broadcom, for example, instead. I’m quite aware of the situation with Realtek NICs in FreeBSD; not to nitpick, but if you’re referring to the port, it’s not a single command and there are issues. pkg-message « realtek-re-kmod « net - ports - FreeBSD ports tree

Most of the issues I’m aware of are related to Realtek’s 2.5 GbE NICs, and that ASRock system doesn’t have one. Besides, we aren’t talking about a mission-critical system here. So, even if there is some performance degradation, it’s not as if it’ll need to serve hundreds or thousands of simultaneous clients or anything. For a home NAS with dual HDDs, he doesn’t require all that much performance. I’m unaware of any issues with their 1 GbE NICs. I’ve even successfully used cheap Chinese Realtek-based NICs of questionable origin with FreeBSD, TrueNAS Core, and OPNsense before. Never had a problem using them, myself.

How much electricity does it cost for you to run the server every month? I’m wondering if it would be cheaper over the next 2~3 years if you run the E5 downclocked and downvolted ( if possible on a Dell ) versus the cost of procuring new hardware.
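A rough sketch of that comparison; the wattages and the per-kWh rate below are placeholder assumptions, not measurements, so substitute your own idle draw and local tariff:

```python
# Back-of-envelope running-cost comparison for an always-on box.
# All wattages and the electricity rate are assumed example values.

def monthly_cost(watts, price_per_kwh, hours=24 * 30):
    """Cost of a constant load over a 30-day month."""
    return watts * hours / 1000 * price_per_kwh  # W * h -> kWh -> cost

rate = 0.35          # assumed price per kWh
xeon_idle_w = 90     # assumed idle draw of the old E5 box
apu_idle_w = 30      # assumed idle draw of a modern APU build

saving = monthly_cost(xeon_idle_w, rate) - monthly_cost(apu_idle_w, rate)
print(f"Monthly saving: {saving:.2f}")
print(f"Saving over 3 years: {saving * 36:.2f}")
```

With those example numbers the 3-year saving lands around the cost of a budget APU build, which is why the measured idle draw (not the TDP) is the figure worth chasing down first.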

A single spinning disk will do > 200 MB/sec sequential these days, and if it’s coming out of cache or an async write, a ZFS box even with 1-2 spinning drives will easily outrun gig-e (which is only ~100 MB/sec).
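The arithmetic behind that, for anyone curious (line-rate figures only; SMB/NFS overhead shaves the link number down further, which is where the ~100 MB/sec real-world figure comes from):

```python
# 1 GbE line-rate ceiling vs. a single modern HDD's sequential throughput.
gig_e_mb_s = 1e9 / 8 / 1e6   # 1 Gb/s -> 125 MB/s before protocol overhead
hdd_seq_mb_s = 200           # typical modern 3.5" HDD, sequential

print(f"1 GbE ceiling: {gig_e_mb_s:.0f} MB/s")
print(f"One HDD already exceeds the link by {hdd_seq_mb_s / gig_e_mb_s:.1f}x")
```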

He mentioned having 1 GbE now, and sounds satisfied with it. Plus, he’s wanting to reduce power draw. So, while what you’re saying about the speed is true, and the server would be capable of providing more than it can over that connection, keep in mind that a faster NIC is going to draw more power and suck more money out of his wallet.

Personally, I’ve been using a USB SSD connected to the 1 GbE LAN via the built-in file server capability of my Asus OpenWrt-based router, and am fine with its performance for small loose files. If you really want to conserve power then that’s the best way, because you’re already using the WiFi router as it is, and the USB drive doesn’t consume any power when it’s not in use. So you’re not adding anything that consumes more electricity.

In fact, the only reason I’ve added an HCI NAS at all is because I need to run additional services which the router’s hardware simply isn’t capable of. I wanted to save money, so I built mine out of an old gaming PC, which in hindsight I’m beginning to regret since it uses so much power. Lately I’m even having to keep it powered off when not in use. Unfortunately, they don’t give you a lot of options with mini PCs or SBCs, which is what I plan on replacing it with. They are getting better though, and I expect to make the jump over the next year or so.

I was merely suggesting some inexpensive, low-power solutions which would allow him to keep his dual-HDD and 1 GbE setup. Further, both would offer the ability to upgrade and run additional services with ease. They are both x86_64, so he doesn’t have to worry whether his hardware is fully supported, or whether it will receive regular security patches and updates. Both would also be simple to set up via the default methods, and easily portable to another system down the road. Less of a headache to set up and manage. Also, if he’s going to run VMs he can use the “host” feature set and gain the full performance benefits. Less worry about compatibility when transferring his VMs to another platform. Better access to support and other resources.

I was merely adding some suggestions which I felt were lacking, since everyone else was recommending ARM-based SBCs, which feel kinda hackneyed and incomplete, IMHO. I wasn’t sure whether others were aware that x86_64 systems can be compact and energy efficient, too. Some laptop CPUs can even get down to only 15 watts TDP. They actually do make mini PCs with those same chips, and you can use low-power laptop RAM in them. However, I’m unaware of any all-in-one system which offers the capability to have dual HDDs in the same chassis to be able to recommend to him. Most only support a single SATA connection, if even that. Since he mentioned possibly spending the money on a used Mac Mini, I thought the Seeed Studio reServer would be even better for him. If going with a Mac Mini, why even use FreeBSD at all? Did you know that macOS can be made to support ZFS, too?

If you don’t mind sacrificing ZFS, and just want a simple file server on your network, then you could simply attach one of the Sabrent HDD enclosures to an Asus WiFi-6 router, and call it a day. Some of them even have 2.5 GbE. If you need additional services, consider running the Merlin firmware on it.

If you feel you really need it, a pfSense router can do ZFS. So you could save some power by running it all on one box. It’s just that most people don’t have a lot of RAM on their pfSense box, so you’d have to install it onto something which supports that much memory. Again, the reServer has dual Intel 2.5 GbE NICs and would make a handsome and totally overkill router, which could double as a ZFS server since it supports up to 64 GB DDR4. Some versions (unavailable at present) even support ECC. The ASRock DeskMeet could do the same with an add-in NIC, and costs even less.

Something a lot of you are overlooking is that the Pi and other SBCs don’t have a lot of RAM to run ZFS with. The Pi 4 maxes out at only 8 GB (unobtainable at present). The people over at iXsystems recommend 8 GB minimum. However, a lot of ARC caching might be important if you want good performance from HDDs.
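For sizing, a commonly quoted rule of thumb (folklore rather than a hard requirement; the 8 GB base is the iXsystems minimum mentioned above) is roughly 8 GB plus 1 GB per TB of pool:

```python
# Folklore ZFS RAM sizing: 8 GB base + ~1 GB per TB of pool storage.
# This is a community rule of thumb, not an official requirement.
def suggested_ram_gb(pool_tb, base_gb=8):
    return base_gb + pool_tb

for tb in (2, 8, 16):
    print(f"{tb} TB pool -> ~{suggested_ram_gb(tb)} GB RAM")
```

By that yardstick, even a maxed-out 8 GB Pi is already at the floor before you add a single drive.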

Regardless, you want things to be reliable; if you want to use Realtek no one is stopping you, but there’s a reason why they’re not recommended and not used in “server/enterprise” hardware.

Yes, USB adapters “work”, but they’re unreliable long term, and one-off test samples don’t serve as much data in that regard. Just because “it works” doesn’t mean it’s a good idea; there have also been reports of controllers doing rather amusing things, so your mileage may vary.

For the record, unless you have a fairly beefy ARM/ARM64-based router, file sharing crawls (Samba works, ksmbd is still dodgy) and overall system performance (throughput) suffers greatly, so I’d say it’s one of those “it works, but poorly in practice” cases. In that case a decent SBC is a much better choice in every way and can hopefully use a native connection (SATA or NVMe depending on your requirements).


My USB drive SMB server on my Asus RT-AC68U has been working excellently for almost 10 years running, and still receives regular firmware updates from Asus. No complaints here.

While you are not wrong, you are missing one aspect here. Sometimes crap quality is all the quality you need.

I’m not disagreeing that a USB NAS enclosure is a pretty crappy setup especially compared to big iron rigs or a proper hot-swap NAS case. But for a home server with 2-3 persons using it and a very limited budget, it could be a viable option. “It works” is, after all, infinitely better than “I got nothing”. :slight_smile:

Of course, the only reason to buy it over a proper DIY NAS box is lack of time, money and/or knowledge of PC building. I’d like to make the argument that a DIY NAS box ticks the “nice to have” boxes far more than say, a laptop with a USB enclosure. At the same time, $50 vs $600 is a big difference and the question is, do you really need those extra bells and whistles?

In an ideal world I’d love a low-powered NAS whose only purpose is to shuffle data back and forth. Personally I’d love a PCB solution with four M.2 slots on the backside, an FPGA with firmware for direct data shuffling, dual NICs, USB-C for graphics and that’s about it for I/O - the CPU is basically just running some diagnostics and a bare-bones OS from a 16 GB ROM, with a flash card for volatile storage. This should allow for something like a 100x135x40 mm enclosure (or half a liter) that sips 30W of power from a USB-C phone charger. :drooling_face: But I digress. ^^

Finally, here is some recommended reading, this guy builds DIY NAS boxes and tries to keep the content current:


I’m not advocating for getting the latest and greatest hardware [1] or such, but suggesting bottom-of-the-barrel solutions with questionable reliability isn’t what I would call good “advice”. That’s when I think people should add a comment about it rather than say “it’s all fine and dandy”. I’ve done my share of shoehorning, like running Samba 3.x on a ~600 MHz single-core MIPS CPU, and while it “works” (with some interesting random issues) it’s essentially a waste of time in the end because it’s barely usable.

[1] - My main “server, firewall, NAS” is a Dell T20 using a G3220 CPU which obviously needs an upgrade :wink:


Normally I would agree - Unless you make it crystal clear such a solution is, in fact, bottom of the barrel but about the only thing the user can afford at said budget.

It is a solution, that will probably be outgrown pretty fast, but when you need to pay twice the money for the solution you really want, and that solution is just not viable on said budget, then what can you do?

You can always not buy of course, but if the need is there for a NAS… Better a bad one than none at all, yes?