Home Built NAS Question

Hey Syndicate Peeps!

So it is time for me to finally build the home NAS. I originally had musings of using one of the many RPis I have floating around, but have since thought better of it.

I went for second-hand AMD hardware and purchased an FM2 mobo that supports DDR3. My question is in two parts:

Hardware:
More specifically, HDDs. All vendors are now selling NAS drives made for small consumer setups with "special features", but are vague on the details. In Australia these are between $45 and $60 more expensive per drive than normal drives, e.g. WD Blue vs WD Red. How much better are they in real terms? Is it just marketing placebo to make me feel safer?
Given that the filesystem (BTRFS or ZFS) is handling redundancy in RAID 5 or 6, is it worth it? Or should I buy another normal drive with the money saved, adding some capacity and providing better redundancy?

File-systems:

I am deciding between ZFS and BTRFS. At the moment I like BTRFS's ability to add drives to your pool dynamically, and I think it is stable enough to use. But I haven't had any experience using it, or with what bugs/things to look out for. What would be a good amount of RAM to install to support the filesystem? Is it as much as 1GB per TB, or is it a leaner beast?
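
From what I've read, growing a BTRFS pool later looks something like this (I haven't actually tried it yet; the mount point and device name are just examples):

```shell
# Add a new disk to an existing btrfs filesystem mounted at /mnt/pool
btrfs device add /dev/sdd /mnt/pool

# Rebalance so existing data gets spread across the new disk
btrfs balance start /mnt/pool
```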

ZFS looks cool; I have wanted to play with it for ages, and also to experience FreeNAS. But I think it is fairly inflexible and requires significant system resources. Happy to be convinced otherwise, though, or maybe there are features I haven't considered.

Thanks in Advance

You're going to want the same drives in a RAID if at all possible. NAS drives are made to run 24/7 in a NAS, and you should really be buying them for a NAS; just buying as many cheap drives as you can isn't going to cut it for long-term use.

In any case you're going to want plenty of RAM for whatever filesystem you're going with.

BTRFS has been stable for a while I think, but it's still new and whatnot.

Also, if you care about your data you'll want to go ECC at some point.

@Streetguru

Thanks as always.

I will have symmetrical drives. I take your point that they are on 24/7. I guess I could drop the extra $, but most of my desktops over the years have effectively run 24/7 (fingers crossed) without incident.

I am happy to accept that, but I'm wondering what that functionally means. When I say cheap, I am not talking about a no-name HDD, more like buying Blue drives over Red drives.

Can the lack of ECC be mitigated by having more RAM and setting up some sort of RAM disk/swap? (I assume probably not.)

Coz my mobo doesn't support ECC

ECC would be a whole motherboard/CPU upgrade, yeah. You can get an 1150 board with an i3 on the cheap and it supports ECC; the board itself has to support ECC, naturally.

The drives are generally better built and have some built-in error correction.

Greens vs Reds here:

Also, you're going to need a battery backup in case of power loss, since I don't think you're going to have a RAID card with all that.

Great link/article

Yeah, no ECC support on my FM2 mobo. A battery backup is on the shopping list.

OK, TLER (Time-Limited Error Recovery) seems pretty important.

For anyone else who is interested:

http://www.wdc.com/en/library/other/2579-001098.pdf

I am using BTRFS on my server at home without any trouble, and I have a real mish-mash of disks. It is by no means a backup, just there to store movies and the like. I have a roughly 2TB pool and 2GB of RAM.

BTRFS is pretty stable. The RAID 5/6 support is fairly new, but development looks to have shifted from implementing to maintaining. If you need RAID56 I'd suggest using kernel 4.3+.
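
For anyone following along, creating a btrfs RAID5 array is roughly this (the three devices are placeholder names, adjust for your system):

```shell
# Confirm the running kernel is 4.3 or newer before relying on btrfs RAID56
uname -r

# Create a three-disk RAID5 filesystem, with both data (-d) and
# metadata (-m) in raid5; /dev/sdb..sdd are placeholder device names
mkfs.btrfs -d raid5 -m raid5 /dev/sdb /dev/sdc /dev/sdd
mount /dev/sdb /mnt/pool
```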

If you're doing a traditional RAID you pretty much need NAS or enterprise drives, as they have firmware which prevents issues that can pop up when using regular desktop drives in a RAID configuration. But for things like ZFS and BTRFS it doesn't really matter. However, the NAS disks do have a higher mean time between failures (MTBF), so on paper at least they should last longer, on average anyway.

Anecdotally, I used to use WD Greens in my server, and they all failed within a month of each other after 18 months. I replaced them all with WD Reds and I haven't had any issues since. However, I was still using a pair of Seagate eco drives (basically the same thing as Greens) which have lasted forever; I replaced them recently as I don't trust them, but there's no indication of anything going bad with them.

So there really is nothing to say that buying more expensive disks will save you from disk failures, but disks with a higher MTBF should last longer. If you're going to rely on the software to protect against disk failures, then you can probably get away with cheaper disks and just deal with the failures if they occur, and it will probably turn out cheaper.

I'm not sure how important ECC memory is for BTRFS; I know it's recommended for ZFS, but you can live without it in a home environment. I can also tell you that BTRFS doesn't have the same steep RAM requirements as ZFS. You won't need loads of RAM; even 4GB would probably be okay. It depends more on what else you plan on running on the system.

I can tell you about my experience with this home NAS deal; I hope it helps.

I got somewhat newer hardware than yours because I wanted to do video transcoding, but the kernel of the matter is the same.

I have three 4TB Seagate NAS drives, using FreeNAS with ZFS. Two are in a RAID 1 (the ZFS equivalent, a mirror) and one is on its own. I think the extra expense for the 24/7 usage scenario is worth it. I mean, they're not SAS drives for multiple simultaneous R/W, but they're plenty fast and responsive over gigabit.
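
If it helps, that layout from the shell amounts to roughly this (FreeNAS does it through the GUI; device names here are just examples):

```shell
# Two-disk mirror pool -- the ZFS equivalent of RAID 1
zpool create tank mirror /dev/ada1 /dev/ada2

# The third drive as its own single-disk pool (no redundancy)
zpool create scratch /dev/ada3

# Check health and layout
zpool status
```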

I have 16GB of DDR3 off-the-shelf RAM, non-ECC and so on. Yeah, don't yell at me. I had 8GB before and it was absolutely fine; I just wanted a bigger ARC. The 1GB of RAM for each TB of data is a solid measure, but I see a lot of people not following it and being OK with their systems. I mean, how mission-critical do you want to be?
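
If you want to see what the ARC is actually doing, on FreeBSD/FreeNAS something like this works (the cap is optional; 8G is just an example value):

```shell
# Current ARC size in bytes
sysctl -n kstat.zfs.misc.arcstats.size

# Optional: cap the ARC by adding a line like this to /boot/loader.conf
# vfs.zfs.arc_max="8G"
```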

The point is, for a home NAS with no real wish to use it for virtual machines or transcoding, anything goes. The really important elements are the HDDs. I guess NAS drives are a bit overpriced, but they're meant to run 24/7. Normal drives CAN run 24/7, until one day they just don't. And you're screwed.

I guess there's the argument that with what you'd pay for a decent array of NAS drives (say three 4TB) you could get six 2TB desktop drives (like WD Greens), use RAIDZ2 and get the same storage space with extra redundancy, and just be aware that they might die a bit more often (I say MIGHT). But then case space, energy and heat get into the equation. Fewer drives, less vibration, less heat... you get the picture. Just my two cents.

If I were you I'd get the UPS and two of the biggest NAS drives you can afford. Done. And maybe start planning some sort of backup anyway, in case the array dies.

I can only speak for ZFS on FreeNAS, but I'm pretty happy with it. Rather solid, though I've never tried RAIDZ1; there's all kinds of aggro against it on the FreeNAS forum. Not as much against Z3. Still, it would seem that the best solution would be some sort of RAID 10 equivalent.
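
The RAID 10 equivalent in ZFS is just a pool of striped mirrors, e.g. (placeholder device names again):

```shell
# Four disks as two mirrored pairs, striped together (RAID 10 equivalent)
zpool create tank mirror /dev/ada1 /dev/ada2 mirror /dev/ada3 /dev/ada4
```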

Ciao bello.

This is the key with ZFS: ECC isn't mandatory, it's just recommended for enterprise work, which is what they push; that's why it comes up so often. If you can have it, go for it, but it's not mandatory. You use your desktop all the time with non-ECC RAM just fine.

Interestingly, NAS drives aren't that much more expensive. A WD Red 3TB is only £10 more than a Green. Get the Reds.

Just $10? Wow, there's a bigger price delta in the EU. If so, go for the Reds and never look back.

My keyboard is not UK for some reason. £10, so about $15 more between Greens and Reds in the UK.

Oh wait, I did say pounds. xD Yeah: $15, £10, €13.

My bad! Yeah, I just looked at Amazon (haven't bought a NAS HDD in a while) and the prices have dropped even here in Italy. Just go for the NAS drives; it's a small tranquillity tax after all. :)
