I need to build a NAS

Hi. I need a good NAS and would like to know what you would do.
Maybe you have some suggestions.

I need 30+ TB, so at least 8 drives. As I see it, I have 3 options.

  1. Build my own NAS from old hardware
  2. Build my own NAS from new hardware
  3. Buy a prebuilt NAS
I want to use ZFS, Nextcloud, and a handful of small servers (TTS/mail/website/IRC…).
Budget: ~3000 € (or $)

I've never built a NAS, but I assume it's like building a desktop PC with drives.
Still, some questions came up.
I heard ZFS needs RAM, about 1 GB per TB. Really? If I build an 8x6 TB NAS, do I really need 64 GB of RAM? Isn't 32 enough?
Second, do I really need ECC RAM, or DDR4 RAM? Someone said he would call it reckless not to use ECC. I think that's exaggerated.
Do I need a CPU that supports AES-NI for encryption?

If I buy new hardware, I would maybe build a Ryzen 3 system. ECC should work, right?
And it sounds like a good price/performance build, so no server parts, then.

If I use old hardware, I would use my current PC components.
CPU: Intel i7-3770k (Ivy-Bridge)
Mainboard: Gigabyte GA-Z77X-D3H
RAM: 16 GB DDR3-1866 CL10 (2x8 GB) Kingston HyperX Beast (KHX18C10AT3K2/16X)
would upgrade to 4x8 = 32 GB
Power supply: be quiet! Pure Power L8 630 W (80+ Bronze)

Does anyone see a problem with this hardware?
It's DDR3 RAM without ECC. There aren't enough SATA 3 ports, so I would buy a cheap PCIe card.
Do I need a RAID card? Isn't software RAID enough? RAID cards sound like extremely expensive hardware that just runs some sort of software along the way.
There is no true hardware RAID.

Then I would build a sweet new Ryzen 5/7 desktop PC to replace the old parts missing from my main PC.

And for both of this options I would need a case.
But that’s a secondary problem.

And prebuilt sounds expensive and/or offers too little flexibility for me.
And real storage servers are on a whole other price level.

Btw, I am running an 8 TB NAS (2x4 TB) in RAID 0, plus a lot of external hard drives, right now.
I feel like I have to do something soon.
And the RAID 0 NAS holds all my vacation photos, important docs, and videos.
Of course without a backup.
Yeah, I need a NAS. Quickly.

That's per "effective" TB, not hardware TB, and probably less if you enable compression and disable deduplication. You can tune the default ARC size along with other parameters to get this figure down significantly, though I'd recommend 24 GB+ if you want the NAS to do other stuff at the same time.
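As a sketch of what that ARC tuning looks like on Linux with OpenZFS (on FreeBSD the equivalent knob is `vfs.zfs.arc_max` in `/boot/loader.conf`), capping the ARC is a single module parameter; the 16 GiB figure here is only an illustration, not a recommendation:

```shell
# Cap the ZFS ARC at 16 GiB (illustrative value) on Linux/OpenZFS.
# Persist the limit across reboots via a modprobe option:
echo "options zfs zfs_arc_max=$((16 * 1024 * 1024 * 1024))" | sudo tee /etc/modprobe.d/zfs.conf

# Apply it immediately without rebooting:
echo $((16 * 1024 * 1024 * 1024)) | sudo tee /sys/module/zfs/parameters/zfs_arc_max

# Verify the current ARC size and its ceiling:
awk '/^(size|c_max)/ {print $1, $3}' /proc/spl/kstat/zfs/arcstats
```

The ARC gives memory back under pressure anyway; the cap just stops it from competing with Nextcloud and the other services for RAM.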

ECC is a plus, but not a must. If you can’t afford it and your array is sufficiently redundant, it probably won’t matter.

The "scrub of death" is an old wives' tale, and ZFS isn't magically made any less reliable than any other file system by running on non-ECC RAM.
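For what it's worth, scheduled scrubs are how ZFS catches bit rot regardless of your RAM: a scrub reads every block, verifies checksums, and repairs from redundancy. A rough sketch, with the pool name `tank` as a placeholder:

```shell
# Kick off a scrub and check its progress (pool name is hypothetical):
zpool scrub tank
zpool status tank

# Or schedule it monthly via cron, e.g. 03:00 on the 1st of each month:
# 0 3 1 * * /sbin/zpool scrub tank
```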

Again, nice feature to have to be sure, but not absolutely necessary.

The PSU is the most important part of the build. Go with a Seasonic unit or another good ODM's model.

No; in fact, you'd want to avoid RAID cards, as they hide the disks from ZFS and block the direct access it needs.

DO NOT do this.

ZFS handles redundant drive arrays on its own quite well; the moment you introduce other mechanisms, you invalidate all the safeties it has in place for managing them.
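To make that concrete, here's a hedged sketch of building an 8-drive pool where ZFS itself provides the redundancy (RAIDZ2 survives any two simultaneous drive failures). The pool name and device IDs are placeholders; `/dev/disk/by-id/` paths are preferred because they don't shuffle between boots:

```shell
# Create an 8-drive RAIDZ2 pool named "tank" (all names hypothetical).
# ashift=12 aligns writes to 4K physical sectors, typical for modern disks.
zpool create -o ashift=12 tank raidz2 \
  /dev/disk/by-id/ata-DRIVE1 /dev/disk/by-id/ata-DRIVE2 \
  /dev/disk/by-id/ata-DRIVE3 /dev/disk/by-id/ata-DRIVE4 \
  /dev/disk/by-id/ata-DRIVE5 /dev/disk/by-id/ata-DRIVE6 \
  /dev/disk/by-id/ata-DRIVE7 /dev/disk/by-id/ata-DRIVE8

# Cheap, fast compression is almost always worth enabling:
zfs set compression=lz4 tank

zpool status tank
```

The drives are handed to ZFS raw, with no RAID card in between, which is exactly what keeps its checksumming and self-healing intact.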

Other notes:

  • Your used-hardware setup is totally viable
  • Don't cheap out on your drives
  • Get a high-speed NIC if you can afford one
  • You may want to get a proper SATA/SAS HBA if you need more connections down the line

OK, thanks.
I put "understanding ZFS" on my to-do list :grin:

The best thing you can do is read the Oracle documentation. Interfacing with ZFS is simple and easy, and it'll save you headaches down the line if you have a drive failure or an outright system failure.
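The failure handling the docs cover really is that short. As a sketch, replacing a dead disk in a redundant pool (pool and device names hypothetical) looks like:

```shell
# Identify the FAULTED/UNAVAIL device:
zpool status tank

# Swap in the new disk and let ZFS rebuild (resilver) onto it:
zpool replace tank /dev/disk/by-id/ata-DEADDRIVE /dev/disk/by-id/ata-NEWDRIVE

# Watch resilver progress until the pool reports ONLINE again:
zpool status tank
```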

Also: Budget for getting a UPS


*shudders* Yeah, you definitely need something right now. Like yesterday.

Eh? I always understood that dedupe was the real RAM hog…

Mhh… yes and no. Technically ZFS runs fine on regular RAM, but the problem is that ZFS trusts the CPU and RAM, because it was designed with ECC in mind; if there's a bit flip anywhere, it just goes through. It's really up to you. With ECC on Ryzen you basically depend on the mainboard's support for it.

Oh, also, I forgot to say… since there are physical limits on how much RAM you can get, you can also use an SSD for some of the stuff ZFS does.

I said disable dedupe. You’re agreeing with me.


Oh, oops, yeah, I misread that :slight_smile:

a) That isn't how it works; ZFS is actually more robust than most, if not all, common filesystems when it comes to sussing out and not writing invalid data.
b) Name a filesystem that doesn't risk corruption from storing invalid data.

Good advice, I completely forgot to mention setting up an L2ARC. The one disadvantage is that it increases boot times, but if it’s a NAS it’s going to be on all the time anyway.


L2ARC???
I can and probably will use a SATA SSD as a boot drive. Am I OK with that,
or does the L2ARC need to be a separate SSD?

I didn't say this was a ZFS issue. On the contrary, what I mean is that ZFS does have all the checks in place, but ECC can help with hardware-related problems on the RAM side. If there's a problem with regular RAM, it just goes through, because ZFS trusts the RAM. With ECC it will redo whatever it's doing, or shut down if the problem persists, so as not to corrupt the data.

An L2ARC is like the ZFS ARC, only it's ancillary and lives on an SSD instead of in RAM.

You can put your boot drive and L2ARC on the same drive, but it isn't necessarily recommended, for a few reasons:

  • L2ARCs are write-intensive, so they will somewhat shorten the life of your boot drive, whereas keeping them separate makes both last longer
  • Removing or changing an L2ARC device is normally easy, because it isn't a persistent part of the filesystem, but if your boot drive is on the same device, it complicates things
  • If your L2ARC is saturating the I/O on your boot drive at any point, it can cause performance issues.
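That ease of removal is why a dedicated cache device is painless: attaching and detaching an L2ARC is a single command each way (pool and device names hypothetical):

```shell
# Add an SSD as an L2ARC (cache device) to an existing pool:
zpool add tank cache /dev/disk/by-id/ata-SSD1

# Because the cache holds no persistent data, it can be removed at any time
# without affecting anything stored in the pool:
zpool remove tank /dev/disk/by-id/ata-SSD1
```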

@mihawk90 Ah, the way you worded it made it sound like you were saying ZFS was designed with ECC as a requirement:

Looks like I misinterpreted you, my bad.

The problem is that, for some reason, a lot of people hate ZFS and spread misinformation about its requirements and reliability. Don’t know why, but that’s the reason many are quick to jump in and correct this type of thing.

Most of these people are probably just folks without experience who messed things up because they didn't read the manual; ZFS is different from most filesystems out there, and you can run into trouble if you assume it's the same, sure. But a certain percentage are FSF nuts and btrfs fans indulging their cognitive bias, which is troublesome.

I am a bit insane.
If I mostly use my NAS as an archive NAS, do I really need a UPS, or a better PSU than my be quiet! one?
Can I destroy something if the power goes out? Or do I "only" risk corrupting data that is currently being written to the disks? Files already on the NAS should be fine, right?
I will probably buy a UPS down the line, but I have a limited supply of money right now.
Also, my router has been running for 8 months without a shutdown, and I probably rebooted it 8 months ago. There are no blackouts in Germany.

FreeNAS/FreeBSD syncs I/O by default, but that will never completely account for potential in-flight data loss. If the purpose of the machine is to archive data and retain its integrity, then these are non-trivial factors in making sure it fulfills that purpose.
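If you want to see or control that behavior per dataset, the `sync` property is the knob (dataset name hypothetical); `sync=disabled` trades exactly the in-flight data mentioned above for speed:

```shell
# Inspect how synchronous writes are handled on a dataset:
zfs get sync tank/archive

# sync=standard honors application fsync() requests (the default);
# sync=always forces every write to stable storage before acknowledging it;
# sync=disabled acknowledges immediately and risks in-flight data on power loss.
zfs set sync=standard tank/archive
```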

Also, a reliable power supply reduces the likelihood of taking other components with it when it fails, not just its own failure rate.

And if you don't have enough native SATA power connectors for your purposes and end up on cheap adapters, a quality unit is what prevents a house fire.

Another topic:

WD Red,
HGST Deskstar NAS,
or Seagate IronWolf?

I currently tend toward the HGST drives.
I've also had no problems with Reds so far.

Yeah, not as a requirement, but at least an optimum :smile:

Also, btrfs fans? I don't think it's been out long enough to have a fan base :stuck_out_tongue: Especially after that raid5/6 incident :smile: (though to be fair, that wasn't officially released at the time and was already marked unstable, soooo whoever lost data on that, shit happens).

There are, but they're really rare. A UPS, I'd say, you really only need when you have critical data. It's a bit different in the US, because the power lines there seem to be really shoddy, as I understand it (Wendell mentioned in the last news episode that the voltage was really wonky). But yeah, here the regulation is pretty strict about that.

You'd be surprised. A lot of FSF/copyleft nuts don't like the fact that ZFS's license isn't fully GPL-compatible. Also, a lot of Linux communities encourage a "least effort" methodology, regardless of outcomes. This makes a lot of people rationalize using btrfs/xfs/ext4 over ZFS in redundant applications, leading to specious and pedantic misinterpretations of how ZFS actually works being presented as "problems" with it.

If you can accept that people argue for hours over which CLI text editor is better, I don't see how this is considered unreasonable behavior in the OSS community.

Filesystem war?
I use ext4. And I like it. It's the best one!!

I've never tried a filesystem other than NTFS, ext1/2/3/4, and FAT.
I've been using Linux for a year. I've never had a server.
My RPi was never really used.
The text editors I like are xed and nano. I'm currently running Manjaro Linux with Cinnamon.
And I can't stand GNOME.
My world is still small. :smile:

Most studies put HGST/Hitachi and Toshiba on top for reliability.


Note that HGST/Hitachi sold their drive division to WD last year, meaning we don't yet have a gauge for how much WDC has changed their process or QA since then.


Hard-drive wars are a bit pointless in my opinion (and for the record, I really don't like the Backblaze statistics, because their conditions don't apply in a home environment).

Basically, some people will tell you "oh, I had only bad experiences with X, and that's why I use Z", while the next will say "oh, that's total BS, I only had issues with Z and never with X". The problem with that is that a) no one ever lists part numbers, so there's no point of reference anyway, and b) quality can shift from generation to generation.

If you buy a new hard drive now, there is no way of telling its reliability, unless you buy 5-year-old drives for which reports and statistics are available, and at that point you're buying old products. So that's a bit of an issue…

Also, regarding the HGST and WD merger, from Wikipedia:

On October 19, 2015, Western Digital Corporation announced a decision from China’s Ministry of Commerce (“MOFCOM”) which enabled the company to integrate substantial portions of its HGST and WD subsidiaries under Western Digital Corporation (“Western Digital”), but they must offer both HGST and WD product brands in the market and maintain separate sales teams for two years from the date of the decision. As such, as of October 19, 2015, HGST is a Western Digital brand, and no longer a separate entity.

How that influences things is at least debatable.