Home Server

Hello everyone,
I want to build a home server. I have researched a bit and come to two conclusions: one, I don’t know enough about server stuff, and two, I still want to build one anyway, but with some requirements that I think are a good idea. I want a home server that acts as a NAS, runs Plex or some other media application, and does snapshots (I have never used them before, but it would be great to let my family’s computers and my own create backups for future issues), plus possibly some VM stuff or other interesting things I may learn in the future (mainly headroom for a little growth).
I don’t really know what hardware to go with. I would prefer something that isn’t too power hungry. I am thinking of getting six 4 TB hard drives, since my budget doesn’t allow for 8 TB drives yet, plus ECC memory. I think I would need an HBA card, as they seem to be preferred over motherboard SATA controllers, and possibly a separate NIC. I would like a small footprint and have a Lian Li PC-Q08 from way back; however, it seems more difficult to go with a mini-ITX build and there is less room for growth. I am thinking of possibly getting a Fractal Design Define R6. Lots to consider, and I am just overwhelmed at this point.

I am trying to learn Linux as well and have been reading books and going through Wendell’s course on Udemy. I was thinking Ubuntu Server with ZFS or something similar. Any recommendations or tutorials I should look into would be greatly appreciated. I have been searching the web, and my ADD has made this even more difficult with the endless black hole of information pouring out.

Before spending a huge pile of money on drives, you might want to start small with just two and a boot drive. I really like a small 250 GB M.2 for the OS.

I also use a Fractal Design tower case for my server and it is great. Very quiet and lots of space. One note though: I am not sure how many fans they ship with. Mine just had two and I had to order two more to make sure I had airflow over all the drive bays. Also I wanted the fans to match even though no one sees them. Shallow of me, I know.

I like AMD stuff for its ECC support, if you can find the right board. With the right board I don’t think you need an HBA or another NIC, unless you plan to go 10 Gbps.

Take a look at the FreeNAS Mini XL (way more expensive than I expected, so probably avoid it).

+1 on every suggestion from @zlynx

A server is just a role that a computer plays. Typically, enterprise-grade kit is used only because it is designed for 24/7 use in a loud and warm environment. If you are building at home you can just reuse old PC hardware. Since you have old cases, do you have any old hardware to try first before you spend any money? Including hard drives. You don’t need high-end hardware, and most appliance-type NAS boxes use ARM or Intel Atom CPUs.

I assume from the six-drive idea you are thinking RAIDZ2? This is good for bulk data with high availability, but if you are on a budget the roughly 16 TB of usable space can be achieved in other ways. Think carefully about whether you want six hard drives spinning all day every day versus, say, three 8 TB drives in RAIDZ1 plus a backup. Do you need 100% uptime? If you are going to use LVM rather than ZFS you can just buy a mirrored pair of drives and expand the pool later.
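
In case it helps to visualise the two layouts, here is a rough ZFS sketch. The pool name and device paths are placeholders (in practice you would use /dev/disk/by-id names for whatever drives you end up with):

    # Six drives in RAIDZ2: any two can fail, roughly 16 TB usable out of 6x4 TB
    zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

    # Or start small with a mirrored pair and grow by adding another mirror later
    zpool create tank mirror /dev/sdb /dev/sdc
    zpool add tank mirror /dev/sdd /dev/sde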

You won’t need ECC. You may not need an HBA either, if the motherboard is newish, and again, you aren’t putting it under massive load.

Think about using separate pools for your virtual machines so they are not on the same drives as your critical data.
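
For example, that could be as simple as a second small pool on SSDs; the pool, dataset, and device names here are made up:

    # Hypothetical separate SSD-backed pool just for VM disks, kept apart from the data pool
    zpool create vmpool mirror /dev/nvme0n1 /dev/nvme1n1
    zfs create -o compression=lz4 vmpool/images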

Otherwise I would suggest you spec out a build and post a pcpartpicker list here and we can advise further. Good luck.

I second that you won’t need ECC. Funnily enough, my NAS (i.e. old hardware with a bunch of hard drives) is not ECC, but a second PC I have is.

Depending on what your electricity cost is it might be cheaper to use used hardware you already have (i.e.: old gaming PC) and just take the hit on electricity.

Not using ECC still poses a data corruption risk, in the best case, or a data loss risk in the worst, depending on file system and raid configuration used. The chance is small, sure, but it’s there, and it’s not going to be fun when it does happen.

Whether that risk is worth it is up to the user, but it should be disclosed nonetheless.
I’ve seen enough failed RAM and other components that I wouldn’t personally run a NAS without ECC, but to each their own. Just do your research before deciding that running without ECC is worth the risk (and please note that while these risks are well documented for ZFS, they are not limited to ZFS).

The R6 certainly isn’t a bad choice. Good quality, plenty of room for HDDs and SSDs, good noise dampening.

However, a quick look at Fractal’s site shows that there is no option which would allow a side fan.
That fan comes in real handy if you plan on using an HBA or RAID card. Those things can get toasty.
Check out the R5; the windowless version has that fan slot.

You can get something like this:

Just mount a 120 / 140 mm fan above the PCIe card slots. The R6 should have ample airflow in the case so that it does not get too hot inside.

I am on a Ryzen 1700X but thinking about upgrading to a 3700X or 3900X. My old computer is what my family is using right now, which is an i7 2700K (got it on sale cheaper than the 2600K). I don’t mind paying a decent amount, and I already have four of the 4 TB hard drives (Reds, but luckily they are the CMR type). I was thinking of getting two more IronWolf drives.

I am finally out of school and working (not in tech; I was a chem major, but I am trying to learn Linux and potentially Python and maybe more languages if I can figure it out, since more of the jobs in my area are in engineering and programming). So I have some money I can allocate to this, but I am trying to go as affordable as possible while still having a solid server build for several years.

There is a Supermicro board that is ITX with an EPYC chip for about 600 bucks, the M11SDV-8CT-LN4F, but I don’t see much information about it, and I was also hoping for a little cheaper; if it is worth it, then sure. It is also mini-ITX, so it has some drawbacks compared to ATX. I would like ECC, which makes this a harder project. From everything I have read, it just seems safer to go ECC if it is only slightly more expensive than normal memory.

Thanks for all the feedback. It is very helpful.

There is a thread about an AM4 server board here somewhere.

I don’t know if the 1700X works with that though.

The amount you are able to spend should not be your primary decision point; the “business need” should be your primary driver. If you don’t need multiple concurrent users and massive memory bandwidth, then EPYC is a bad choice, given that it reduces your choice of boards, etc.

For your use case you really are looking at consumer or prosumer parts, so a Ryzen or possibly Threadripper solution, or a vintage Xeon. All can use ECC if that really is needed. Unless your server is hosted on the moon it really isn’t critical, and you should spec your system around the rest of your needs before considering whether ECC is needed.

One option for you may be to separate your “NAS” from the VM box and run two physical machines with separate specs. The NAS needs a low-spec CPU and lots of RAM; the VMs need a higher-spec CPU and more flexible RAM options (which will get eaten into by your ARC if you combine the two). I run separate boxes for this reason.
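
If you do end up combining them, one mitigation, sketched here assuming ZFS on Linux and an arbitrary 8 GiB cap, is to limit the ARC so it cannot eat the VMs’ RAM:

    # Cap the ZFS ARC at 8 GiB (value in bytes) so the VMs keep their RAM
    echo "options zfs zfs_arc_max=8589934592" | sudo tee /etc/modprobe.d/zfs.conf
    sudo update-initramfs -u   # Debian/Ubuntu; the new cap applies after a reboot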

Ok, I was in your spot two years ago and went a different path than any of the suggestions, so I will give my two cents.

About two and a half years ago I went ahead and started scooping up slightly older enterprise-level equipment that was well taken care of. It was all made to last and perfect for use in my adventure. Based on what I’ve learned, this purpose-built equipment has many advantages over consumer-level equipment.

Put it this way: learning about this equipment and how to properly use it has expanded my knowledge greatly over the past two years. Things like SAS backplanes, QPI links, iDRAC or IPMI, load balancing, multipathing, etc. were all Greek to me two years ago. That being said, here’s how I look at it from a used standpoint: Dell and HP server equipment is reliable and readily available, and people have known working configs for the things you likely want to do.

When you start mixing consumer-level gear with common enterprise tasks you can be in for a whole other adventure, and you might be one of only a few people with that exact setup. There is a very good chance that what you want to do will work, but you might be in for some not very well known workarounds to get things going. There are also fewer people to help you through it if you use a fringe combination of equipment that no one is familiar with, which fragments your frustrated search strings when trying to find an answer.

Also, servers are designed to house large amounts of memory and generally have more resources available for expansion, plus more redundancy in their subsystems. I have grown into this server and been able to find a plethora of similar setups when troubleshooting. This is not to say building a server from scratch is out of the question, because plenty of people have done it successfully, but there are some pros to finding lightly used enterprise gear. One thing I can guarantee, regardless of the path you take, is that you will learn a lot and the journey is rewarding.

What would you recommend for a low-power NAS with potentially Plex or some media player system and automatic backups, or something comparable, for moderate home use? Any suggestions would be great, as I am confused at the moment about what to go with. I can do the VM stuff on my normal desktop, but the NAS is something I would like, and from reading around, ECC seemed better to have than not. But if bit rot and data loss aren’t that serious without ECC, I guess I could go without it.

Could you give me an idea of what I should look for in a Dell or HP server that is hopefully more of a desktop/ATX size rather than a rack-mount form factor?

Also, using ECC still poses a data corruption risk.

There is always a chance of something going wrong.

That said, AMD does ECC on their cheap consumer CPUs, and you don’t need the fastest RAM for a storage server (it’s much faster than the HDDs anyway). Slow unbuffered ECC isn’t that much more expensive than slow unbuffered non-ECC, so you might as well go for it.
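
If you go that route, it is worth double-checking that the board actually enables ECC rather than just tolerating the DIMMs; on Linux something like this should tell you (exact output wording varies by board):

    # "Multi-bit ECC" under Error Correction Type means the platform reports ECC as active
    sudo dmidecode -t memory | grep -i "error correction"

    # Memory controllers showing up under EDAC is another good sign that error reporting works
    ls /sys/devices/system/edac/mc/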

Wouldn’t bother with EPYC, but would go with either ATX or micro-ATX, just so you get a PCIe slot for a 10 Gb or 40 Gb NIC.

Wouldn’t go with ancient (pre-Skylake) Xeons – too power hungry, slow, and bulky, and usually not good value for money compared to a much more modern AM4 Ryzen.


For storage, for home use, I’d go smaller: 2 to 4 drives in RAID 1 or RAID 10 (of course with checksums and copy-on-write, something like btrfs maybe).

10–14 TB per drive of spinning rust, M.2 flash for the OS.
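
A two-drive btrfs mirror along those lines is only a couple of commands; the device names and mount point here are placeholders:

    # Mirror both data and metadata across two drives
    mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
    mount /dev/sdb /mnt/storage

    # Run a scrub now and then so checksum errors get found and repaired from the other copy
    btrfs scrub start /mnt/storage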

Regular SATA on board should be fine; SATA is slow anyway, HBAs cost money, and you’d just be using them in JBOD mode anyway… a year down the road that HBA will become just another thing that can go wrong during an update and will net you zero benefit.

I wouldn’t care whether they’re SMR or not, access times just simply don’t matter as much for the use case.


On the software side of things,

Filesystem: btrfs is easier and more flexible than ZFS; ZFS is older and more widespread. With btrfs you can convert RAID levels in place (two copies of data across five drives if you want, for example, or RAID 10 to RAID 5). With ZFS you’re stuck with your drive layout and can only swap a drive for a bigger drive.
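
The in-place conversion looks roughly like this (device and mount point are placeholders; note that btrfs raid5/6 still has known caveats, so raid1/raid10 are the safer profiles):

    # Add a new drive to an existing btrfs filesystem
    btrfs device add /dev/sdd /mnt/storage

    # Rebalance onto new profiles: data as raid5, metadata kept as raid1
    btrfs balance start -dconvert=raid5 -mconvert=raid1 /mnt/storage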

By all means go with ZFS instead – there are a lot more resources for it online and it’s vastly more popular.

For the OS, I’d go with a regular Debian-based server install, but switch it to the “testing” repos, so that it becomes a rolling-release Debian that I can update when I feel like it, without ever being stuck on ancient versions of anything.
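
The switch is basically just pointing apt at “testing” instead of a release codename, something like:

    # /etc/apt/sources.list -- track "testing" instead of e.g. "buster"
    deb http://deb.debian.org/debian testing main contrib non-free

    # then roll forward whenever you feel like it
    sudo apt update && sudo apt full-upgrade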

If you use ZFS, make sure you stick to LTS kernels (whatever the distro). ZFS, being an out-of-tree module, really needs to be built against the kernel version it’s running on, and as awesome as stuff like DKMS and friends is… better to avoid having to rely on it altogether.

Client side backup software:
… no idea … I mean, for a developer it’d be possible to whip up your own that just snapshots the client filesystems (using the shadow copy service, or whatever the equivalent is on the client’s filesystem) and just rsyncs everything over to a Samba share…

That’s pretty much what all the good backup software does, more or less.

There’s probably a solution.
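
If you did want to roll your own in the meantime, a minimal sketch could be a cron job per client pushing to the server, with the server snapshotting afterwards; the hostnames, dataset names, and paths here are all made up:

    # On the client: push the home directory to the backup box over SSH
    rsync -aAX --delete --exclude='.cache/' /home/alice/ backupbox:/tank/backups/alice/

    # On the server afterwards: snapshot the backup dataset so older versions stick around
    zfs snapshot tank/backups@$(date +%F)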

For media:

minidlna works great; all the decoding is done by your TV.
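
Its config is tiny, which is most of the appeal. The handful of lines you would typically touch (paths and name are placeholders):

    # /etc/minidlna.conf -- about all you normally need to change
    media_dir=V,/tank/media/movies
    media_dir=A,/tank/media/music
    friendly_name=HomeNAS
    inotify=yes

    # then: sudo systemctl restart minidlna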

Plex is OK, except that if you want to do anything with HDR other than just subtitles, it’s regularly buggy. And any kind of transcoding requires a subscription and an Nvidia card to do at a decent bitrate… or other contortions. Generally it’s OK to just cast Plex to a Chromecast Ultra and watch HDR Blu-rays that way.

Jellyfin… I never got it to work well (once I had to dig into it to figure out why it wasn’t working, saw C#, and just gave up… someone else can fix whatever was wrong).

Kodi on the client, with mounted files over the network:

  • yes, works well (see the mount example below).
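
For the mount, a read-only NFS entry in the client’s fstab is usually enough (server name and paths are placeholders):

    # /etc/fstab on the Kodi box -- mount the server's media export read-only
    nas:/tank/media  /mnt/media  nfs  ro,noatime,_netdev  0  0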

Hard to argue with the higher-end Synology boxes for simplicity.

People overstate the need for ECC all the time but don’t explain why very well. I run NAS boxes with and without it. It’s fine if you don’t. You should have a valid backup before you worry about ECC because losing your array is more likely than data corruption. Focus on how you’re gonna back up your storage now before you build your array.

If you’re dead set on building something, then I recommend the ASRock X470D4U with your choice of AMD CPU.

Someone should make a pyramid diagram of concerns when building a storage system / NAS, ordered by likelihood and magnitude of possible damage (what you call expectation in stochastics).

Edit: I had a bit of a thought about it and I’ll probably do a bracket of NAS worries.

@Musrocs1478 The HP tower style is the ProLiant ML-series servers and the Dell is the PowerEdge T series. So you have to pick a generation in your price range and search eBay. As for expansion, I recommend something at least PCIe 3.0 with a full-size board so that you can add a 6 Gb or even 12 Gb HBA and/or a 10 Gb NIC if your system grows. You also might add a Quadro to offload transcoding if you do a lot of out-of-network streaming. Even running multiple 4K streams I’ve never seen my CPU load much above idle, because in-network nearly everything direct-streams to the client.

Also, if you are just dipping your toe in, so to speak, the Synology box might be the way to go, as @Adubs suggested. This limits a lot of variables and lets you get familiar with some of the systems without the learning curve of any of the other suggestions, including mine.

Some actual numbers on the likelihood of RAM errors, and there’s also this one from Microsoft that asserts that DRAM errors are actually even more likely to occur than previous studies indicated.

Knowing someone who lost an array (unrecoverable) due to filesystem corruption (on a Synology, FWIW; not sure if it was RAM or something else, and it’s not like they’re ever likely to tell us either way – the hard drives were fine, though), you’re free to take your chances. I know I’d rather not. And please, save me the “but backups”. A lot of the data we store is expendable (in the sense that the expense of backing it up isn’t worth the cost), but that doesn’t mean that rebuilding your music or Plex (or “Linux ISOs”) library is fun, when spending a few hundred more could have saved potentially weeks of rebuilding said libraries.

If you’re using ZFS (or btrfs) for the checksumming and then can’t be bothered with ECC, then why bother at all? All filesystems blindly trust system memory (at least the commonly used ones do), so if something goes wrong there you will have data corruption of varying degrees of severity. At that point you’re relying purely on backups for consistency (since you can’t trust the filesystem any more), which means you also need to verify your local data against your backups to make sure they still match, or you’ll end up overwriting your backup with potentially corrupt data.
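
For what it’s worth, one way to do that verification, assuming an rsync-reachable backup and made-up paths, is a checksum-only dry run that lists anything that no longer matches:

    # -r recurse, -n dry run, -c compare by checksum; differing files get itemised, nothing is copied
    rsync -rnc --itemize-changes /tank/media/ backupbox:/backup/media/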

Aside from data corruption, metadata corruption can, as alluded to earlier, take out your entire pool and make it potentially unrecoverable. And it’s not just theoretical; I’ve seen it happen (though thankfully I haven’t had to experience it myself).

If someone considers the risk worth the money saved for their use case, then by all means, they can go right ahead. But they ought to be made aware of the risk they are taking.