Home Server

I second that you won’t need ECC. Funnily enough, my NAS (i.e. old hardware with a bunch of hard drives) doesn’t have ECC, but a second PC I have does.

Depending on what your electricity costs, it might be cheaper to use hardware you already have (e.g. an old gaming PC) and just take the hit on electricity.

Not using ECC still poses a data corruption risk in the best case, or a data loss risk in the worst, depending on the file system and RAID configuration used. The chance is small, sure, but it’s there, and it’s not going to be fun when it does happen.

Whether that risk is worth it is up to the user, but it should be disclosed nonetheless.
I’ve seen enough failed RAM and other components that I wouldn’t personally run a NAS without ECC, but to each their own. Just do your research before deciding that running without ECC is worth the risk (and please note that while these risks are well documented for ZFS, they are not limited to ZFS).


The R6 certainly isn’t a bad choice. Good quality, plenty of room for HDDs and SSDs, good noise dampening.

However, a quick look at Fractal’s site shows that there is no option which would allow a side fan.
That fan comes in real handy if you plan on using an HBA or RAID card; those things can get toasty.
Check out the R5 instead: the windowless version has that fan slot.

You can get something like this:

Just mount a 120 / 140 mm fan above the PCIe card slots. The R6 should have ample airflow in the case so that it does not get too hot inside.

I am on a Ryzen 1700X but thinking about upgrading to a 3700X or 3900X. My old computer, an i7-2700K (got it on sale for cheaper than the 2600K), is what my family is using right now. I don’t mind paying a decent amount, and I already have 4 of the TB hard drives (Reds, but luckily they are the CMR type). I was thinking of getting 2 more IronWolf drives.

I am finally out of school and working (not in tech; I was a chem major, but I’m trying to learn Linux and potentially Python, and maybe more languages if I can figure it out, since more jobs in my area are in engineering and programming). So I have some money I can allocate to this, but I’m trying to go as affordable as possible while still ending up with a solid server build for several years.

There is a Supermicro board, the M11SDV-8CT-LN4F, that is mini-ITX with an EPYC chip for about 600 bucks, but I don’t see much information about it, and I was also hoping for something a little cheaper; if it is worth it, then sure. Being mini-ITX it also has some drawbacks compared to ATX. I would like ECC, which makes this a harder project. From everything I’ve read, it just seems safer to go ECC if it is only slightly more expensive than normal memory.

Thanks for all the feedback. It is very helpful.

There is a thread about an AM4 server board here somewhere.

I don’t know if the 1700X works with that though.

The amount you are able to spend should not be your primary decision point; the “business need” should be your primary driver. If you don’t need multiple concurrent users and massive memory bandwidth, then EPYC is a bad choice, given that it reduces your choice of boards etc.

For your use case you really are looking at consumer or prosumer parts, so a Ryzen or possibly Threadripper solution, or a vintage Xeon. All could use ECC if that really is needed. Unless your server is hosted on the moon it really isn’t critical, and you should spec your system around the rest of your needs before considering whether ECC is needed.

One option for you may be to separate your “NAS” from the VM box and run two physical machines with separate specs. The NAS needs a low-spec CPU and lots of RAM; the VMs need a higher-spec CPU and more flexible RAM options (which would otherwise get eaten into by your ARC). I run separate boxes for this reason.

OK, I was in your spot two years ago and I went a different path than any of the suggestions here, so I will give my two cents.

About two and a half years ago I went ahead and started scooping up slightly older enterprise-level equipment that had been well taken care of. It was all made to last and perfect for my adventure. Also, based on things I’ve learned since, this purpose-built equipment has many advantages over consumer-level equipment.

Put it this way: learning about this equipment and how to properly use it has expanded my knowledge greatly over the past two years. Things like SAS backplanes, QPI links, iDRAC or IPMI, load balancing, multipathing, etc. were all Greek to me two years ago. That being said, here’s how I look at it from a used-gear standpoint: Dell and HP server equipment is reliable and readily available, and people have known working configs for the things you likely want to do.

When you start mixing consumer-level gear with common enterprise tasks you can be in for a whole other adventure, and you might be one of only a few people with that exact setup. There’s a very good chance what you want to do will work, but you might be in for some not-very-well-known workarounds to get things working. And if you use a fringe combination of equipment that no one is familiar with, there are fewer people to help you through it, and your frustrated searches for answers will turn up that much less.

Also, servers are designed to house large amounts of memory and generally have more resources available for expansion, and they have more redundancy in their subsystems. I have grown into this server and been able to find a plethora of similar setups when troubleshooting. This is not to say building a server from scratch is out of the question, because there are plenty who have done so successfully; there are just some pros to finding lightly used enterprise gear. One thing I can guarantee, regardless of the path you take, is that you will learn a lot and the journey is rewarding.

What would you recommend for a low-power NAS with potentially Plex or some other media player system and automatic backups, or something comparable for moderate home use? Any suggestions would be great, as I’m struggling at the moment to figure out what to go with. I can do the VM stuff on my normal desktop, but the NAS is something I would like, and from reading around, ECC seemed better to have than not; but if bit rot and data loss aren’t that serious without ECC, I guess I could go without it.

Could you give me an idea of what I should look for in a Dell or HP server, hopefully in more of a desktop/ATX size rather than a rack-mount form factor?

Also, even with ECC there is still a data corruption risk.

There is always a chance of something going wrong.

That said, AMD does ECC on their cheap consumer CPUs, and you don’t need the fastest RAM for a storage server (it’s much faster than HDDs anyway). Slow unbuffered ECC isn’t that much more expensive than slow unbuffered non-ECC, so you might as well go for it.

I wouldn’t bother with EPYC, but would go with either ATX or micro-ATX, just so you get a PCIe slot for a 10-gig or 40-gig NIC.

I wouldn’t go with ancient (pre-Skylake) Xeons – too power hungry, slow, and bulky, and usually not good value for money compared to a much more modern AM4 Ryzen.


For storage, for home use, I’d go smaller: 2 to 4 drives in RAID 1 or RAID 10 (of course with checksums and copy-on-write, something like btrfs maybe).

10–14 TB per drive of spinning rust, M.2 flash for the OS.

Regular onboard SATA should be fine; SATA is slow anyway. HBAs cost money and you’d just be using them in JBOD mode anyway … a year down the road that HBA will become just another thing that can go wrong during an update, while netting you zero benefit.

I wouldn’t care whether they’re SMR or not; access times simply don’t matter as much for this use case.


On the software side of things,

Filesystem: btrfs is easier and more flexible than ZFS; ZFS is older and more widespread. With btrfs you can convert RAID levels in place (two copies of data across five drives if you want, for example, or RAID 10 to RAID 5). With ZFS you’re largely stuck with your drive layout and can only swap a drive for a bigger one.
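
For what it’s worth, here’s a minimal sketch of what such an in-place conversion can look like, driving the btrfs-progs CLI from Python; the mount point and target profiles are placeholder assumptions, not a recommendation:

```python
# Minimal sketch: convert btrfs data/metadata RAID profiles in place.
# Assumes btrfs-progs is installed and a pool is mounted at /mnt/pool (placeholder).
import subprocess

POOL = "/mnt/pool"  # placeholder mount point

def convert_profiles(data_profile: str, metadata_profile: str) -> None:
    """Start an in-place balance that rewrites data and metadata
    to the requested profiles (e.g. data to raid10, metadata to raid1)."""
    subprocess.run(
        ["btrfs", "balance", "start",
         f"-dconvert={data_profile}",
         f"-mconvert={metadata_profile}",
         POOL],
        check=True,
    )

if __name__ == "__main__":
    convert_profiles("raid10", "raid1")
    # Progress can be checked separately with: btrfs balance status /mnt/pool
```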

By all means go with ZFS instead if you prefer; there are a lot more resources for it online and it’s vastly more popular.

For the OS, I’d go with a regular Debian-based server install, but switch it to the “testing” repos so that it becomes a rolling-release Debian that I can update whenever I feel like it, without ever being stuck on ancient versions of anything.

If you use ZFS, make sure you stick to LTS kernels (whatever the distro). ZFS really expects to be built against the specific kernel version it’s running on, and as awesome as stuff like DKMS and friends is… it’s better to avoid having to rely on it altogether.

Client side backup software:
… no idea … I mean, for a developer it’d be possible to whip up your own: just snapshot the client filesystems (using the Volume Shadow Copy Service, or whatever the equivalent is on ReFS) and rsync everything over to a Samba share…

That’s more or less what all the good backup software does anyway.

There’s probably a solution.
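
To illustrate that DIY approach, here’s a minimal Python sketch that mirrors a client directory onto a mounted Samba share with rsync; the paths are placeholder assumptions, and snapshotting the source first (VSS on Windows, LVM/btrfs snapshots on Linux) is left out:

```python
# Minimal DIY backup sketch: rsync a client directory to a mounted SMB share.
# SOURCE and DEST are placeholders; adjust to taste.
import pathlib
import subprocess
import sys

SOURCE = pathlib.Path.home() / "Documents"   # placeholder: what to back up
DEST = pathlib.Path("/mnt/nas-backup")       # placeholder: where the share is mounted

def run_backup() -> int:
    if not DEST.is_dir():
        print(f"backup target {DEST} is not mounted", file=sys.stderr)
        return 1
    # -a preserves permissions and timestamps, --delete keeps the mirror exact.
    result = subprocess.run(
        ["rsync", "-a", "--delete", f"{SOURCE}/", f"{DEST}/{SOURCE.name}/"],
        check=False,
    )
    return result.returncode

if __name__ == "__main__":
    sys.exit(run_backup())
```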

For media:

minidlna works great; all the decoding is done by your TV.

Plex is OK, except that if you want to do anything with HDR other than just subtitles it’s regularly buggy. And any kind of transcoding at a decent bitrate requires a subscription and an NVIDIA card… or other contortions. Generally it’s fine to just cast Plex to a Chromecast Ultra and watch HDR Blu-rays that way.

Jellyfin… I never got it to work well (once I had to dig into it to figure out why it wasn’t working, saw C#, and just gave up… someone else can fix whatever was wrong).

Kodi on the client, with files mounted over the network:

  • yes, works well.

Hard to argue with the higher-end Synology boxes for simplicity.

People overstate the need for ECC all the time but don’t explain why very well. I run NAS boxes with and without it. It’s fine if you don’t. You should have a valid backup before you worry about ECC because losing your array is more likely than data corruption. Focus on how you’re gonna back up your storage now before you build your array.

If you’re dead set on building something, then I recommend the ASRock X470D4U with your choice of AMD CPU.

Someone should make a pyramid diagram of concerns when building a storage system / NAS, ordered by likelihood and magnitude of possible damage (what you’d call expectation in stochastics).

Edit: I’ve given it a bit more thought and I’ll probably do a bracket of NAS worries.


@Musrocs1478 The HP tower style is the ProLiant ML series of servers, and the Dell equivalent is the PowerEdge T series. So you have to pick a generation in your price range and search eBay. As for expansion, I recommend something at least PCIe 3.0 with a full-size board, so that you can add a 6Gb or even 12Gb HBA and/or a 10Gb NIC if your system grows. You might also add a Quadro to offload transcoding if you do a lot of out-of-network streaming. Even running multiple 4K streams I’ve never seen my CPU load go much above idle, because in-network nearly everything direct-streams to the client.

Also, if you are just dipping your toe in, so to speak, the Synology box might be the way to go, as @Adubs suggested. This limits a lot of variables and lets you get familiar with some of the systems without the learning curve of any of the other suggestions, including mine.

Some actual numbers on the likelihood of RAM errors, and there’s also this one from Microsoft that asserts that DRAM errors are actually even more likely to occur than previous studies indicated.
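
For a rough sense of what per-Mbit failure rates translate to, here’s a back-of-the-envelope sketch; the FIT figure is a made-up placeholder to be swapped for whichever study’s number you trust, not a value taken from either paper:

```python
# Back-of-the-envelope: expected DRAM faults per year for an assumed FIT rate.
# FIT = failures per billion (1e9) device-hours. The rate below is a
# hypothetical placeholder, NOT a number taken from the cited studies.
FIT_PER_MBIT = 5000          # placeholder failure rate per Mbit
RAM_GIB = 32                 # placeholder system RAM size
HOURS_PER_YEAR = 24 * 365

mbits = RAM_GIB * 1024 * 8   # GiB -> Mbit
expected_faults_per_year = FIT_PER_MBIT * mbits * HOURS_PER_YEAR / 1e9

print(f"{RAM_GIB} GiB at {FIT_PER_MBIT} FIT/Mbit ~ "
      f"{expected_faults_per_year:.0f} expected faults per year")
```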

Knowing someone who lost an array (unrecoverable) due to filesystem corruption (on a Synology, fwiw; not sure if it was RAM or something else, and it’s not like they’re ever likely to tell us either way, but the hard drives were fine), you’re free to take your chances. I know I’d rather not. And please, save me the “but backups”. A lot of the data we store is expendable (in the sense that backing it up isn’t worth the expense), but that doesn’t mean rebuilding your music or Plex (or “Linux ISOs”) library is fun when spending a few hundred more could have saved potentially weeks of rebuilding those libraries.

If you’re using ZFS (or Btrfs) for the checksumming and then can’t be bothered with ECC, then why bother at all? All filesystems blindly trust the system memory (at least the commonly used ones do), so if something goes wrong there you will get data corruption of varying degrees of severity. At that point you’re relying purely on backups for consistency (since you can’t trust the file system any more), which means you also need to verify your local data against your backups to make sure it still matches, or you’ll end up overwriting your backup with potentially corrupt data.
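
That verification step can be as simple as comparing hashes; here’s a minimal Python sketch, assuming the live data and the backup are both mounted locally (the two paths are placeholders):

```python
# Minimal sketch: flag files whose local and backup copies no longer match,
# by comparing SHA-256 hashes before the next backup run overwrites anything.
# LOCAL and BACKUP are placeholder paths.
import hashlib
import pathlib

LOCAL = pathlib.Path("/tank/media")         # placeholder: live data
BACKUP = pathlib.Path("/mnt/backup/media")  # placeholder: backup copy

def sha256(path: pathlib.Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify() -> list[pathlib.Path]:
    """Return files that are missing from the backup or differ from it."""
    mismatches = []
    for local_file in LOCAL.rglob("*"):
        if not local_file.is_file():
            continue
        backup_file = BACKUP / local_file.relative_to(LOCAL)
        if not backup_file.is_file() or sha256(local_file) != sha256(backup_file):
            mismatches.append(local_file)
    return mismatches

if __name__ == "__main__":
    for path in verify():
        print(f"MISMATCH: {path}")
```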

Aside from data corruption, metadata corruption can, as alluded to earlier, take out your entire pool and potentially make it unrecoverable. And it’s not just theoretical; I’ve seen it happen (though thankfully haven’t had to experience it myself).

If someone considers the risk worth saving the money for the use case, then by all means, they can go right ahead. But they ought to be made aware of the risk they are taking.

No, I won’t. If your data is important to you then you should use both, but ZFS and ECC RAM aren’t a replacement for a backup solution.

Which is why you need a backup anyway: you can lose your pool for many reasons.

I’m not going to argue the merits of ZFS without ECC, because there are many reasons why people would choose to use ZFS and not use ECC. If you want an argument, that’s for another thread.


“A lot of the data we store is expendable (in the sense that backing it up isn’t worth the expense), but that doesn’t mean rebuilding your music or Plex (or “Linux ISOs”) library is fun when spending a few hundred more could have saved potentially weeks of rebuilding those libraries.”

Cloud storage gets expensive once you start storing terabytes of data, and if one has an offline backup solution that can handle very large amounts of data, like a tape streamer, then skimping on ECC makes even less sense since none of those solutions are cheap either.

There’s a difference between losing your pool, and your pool getting silently corrupted, and potentially taking your backups with it (the array completely dying to corruption is likely the less common, though more extreme, outcome).

As I mentioned, if people know the risks of not using ECC, then fine, “you do you” as they say.
But I see people in this thread basically claim it is “overrated” without giving any data to back up that claim, which runs contrary to the advice given by most experts like iXsystems (FreeNAS) or one of the ZFS authors.

The previous post links to two papers on the likelihood of DRAM errors occurring. If, after considering those, people still feel they like those odds, then that’s fine. I don’t exactly feel like I need to argue the point further as we each accept different levels of risk, which, in the end, is what all of this is about: managing risk.

I said what I did for a specific reason. If you disagree with that reason, then fair enough. Don’t drag the topic into an argument.
