I’m thinking about building/purchasing a NAS and I have the following questions:
If I were to purchase a new one, I’d like this PC: https://kobol.io/
It seems really neat; the problem is that unRAID does not run on ARM devices, so I would have to use OpenMediaVault. Does anyone have experience with OMV? Afaik there exists a ZFS plugin, but how does it compare to unRAID and FreeNAS?
If I were to build a NAS myself, I have the following options:
I could use my old 2500K CPU with my old RAM and PSU. I would have to purchase a Sandy-Bridge mini-ITX mainboard and a case for my NAS. However, the question is how many of those mainboards are still available.
I could use my current CPU, a 2700X and purchase a new 3900X(T?) which I would put into my X370 mainboard. I could use the PSU mentioned above and I’d also have some RAM lying around. In this case, I’d have to purchase the 3900X(T?) and a mini-ITX board. Considering AM4 is a current platform, my selection will be much larger.
What’s your opinion in this regard? What would you do/purchase? I’d like to mention that for me it is important that the PC will be small, so I won’t be able to use my ATX boards for my NAS.
I run it. It is based on Debian, so the underlying software is rock solid, although the web interface is not quite as solid.
I have not used the ZFS plugin, but I believe it has a kernel module that has to be built. So once the ZFS module is loaded, it is as good as any other Linux distro with ZFS, but when you update the kernel or ZFS, if the module build fails, then ZFS will stop working. In Ubuntu and some other distros I think the module is prebuilt, which is much less likely to break.
OMV is a good solid choice. I used it for years for my own NAS systems. Never had ZFS break. It’ll be as simple or as complicated as you want it, as it’s basically just Debian plus some management scripts and a web UI. Ultimately, using it and ZFS is what finally got me comfortable enough to get off Windows.
The biggest source of problems will be setting up permissions and samba shares, realizing you did it wrong, and going back to fix it. This will happen regardless of what distro you use.
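One common way to sidestep the permissions churn is to give the share directory a common group and the setgid bit, so files created inside inherit that group. A minimal sketch, assuming a hypothetical path (on a real NAS you would use your actual share path and a group your samba users belong to):

```python
import os
import stat

def setup_share_dir(path: str) -> int:
    """Create a share directory with mode 2770 (rwxrwx--- plus setgid)."""
    os.makedirs(path, exist_ok=True)
    # setgid bit: new files and subdirs inherit the directory's group,
    # which keeps samba users from locking each other out
    os.chmod(path, 0o2770)
    return stat.S_IMODE(os.stat(path).st_mode)

print(oct(setup_share_dir("/tmp/nas_share_demo")))  # 0o2770
```

You would still `chgrp` the directory to your shared group and point the samba share at it; the setgid trick just stops newly created files from silently getting the wrong group.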
If you are not updating the kernel or ZFS, then you are fine. I also think you would be probably fine if you use the standard kernel, since there are generally not too many breaking changes with the stable kernel. It is just something to be aware of, and to watch out for when you update.
Where you are much more likely to run into issues is if you are planning to use the backports kernel, or a mainline kernel.
Respectfully this is not the best question to ask first. The first questions have to be:
What do I need the NAS for? If just for media playback on a couple of devices then an ARM based NAS is fine. If it is for massive data backup then you may want something that will support high speed networking etc. If you need virtual machines, then you need many cores and RAM.
2a) How much storage do I need now and in the future and 2b) how much ‘uptime’ do I need? This will dictate whether you can get away with a simple ARM based single / double drive array or if you need beefier processing and storage controllers (expansion slots, backplanes etc) to hold multiple drives. If you expect the data to grow rapidly, the ability to add drives is a useful feature of Unraid. If you need high uptime (not the same as data resilience, but related) then ZFS is king.
Do you want low maintenance / noise / fire and forget (choose ARM), or do you want to constantly check whether your 10 year old motherboard and RAM are going to keep trucking with your precious cat photos?
Personally I use FreeNAS and old enterprise gear to get good availability and transfer with capacity to grow, but you should consider your use cases before choosing your parts.
I presume that before updates are pushed to OMV, the maintainer checks whether anything breaks, including the popular plugins, of which ZFS is one. Like I said, I used it for many years and once installed, it just worked.
No idea about unraid, never used it.
As for that article, there are a few things you need to realize about ZFS and storing data that’s important to you.
You need incremental backups. This is not optional; otherwise you are playing a waiting game for losing it all. Preferably you should also have an offsite backup, so that the risks of lightning strikes, theft, fire, flooding, and little children are unlikely to affect all copies of your data at once.
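Incremental backups in ZFS come down to `zfs send -i` piped into `zfs receive`. A small sketch of how those commands line up; the pool, dataset, and snapshot names are made up, and on a real system you would run the returned pipeline through a shell (or over ssh for the offsite copy):

```python
from typing import Optional

def send_recv_cmd(dataset: str, backup: str, cur: str,
                  prev: Optional[str] = None) -> str:
    """Build a `zfs send | zfs receive` pipeline; incremental if prev is given."""
    if prev is None:
        send = f"zfs send {dataset}@{cur}"  # full initial send
    else:
        # -i sends only the delta between the two snapshots
        send = f"zfs send -i {dataset}@{prev} {dataset}@{cur}"
    return f"{send} | zfs receive -F {backup}"

print(send_recv_cmd("tank/data", "backup/data", "2020-07-02", "2020-07-01"))
# zfs send -i tank/data@2020-07-01 tank/data@2020-07-02 | zfs receive -F backup/data
```

After the first full send, every later backup only ships what changed since the previous snapshot, which is what makes nightly offsite runs practical.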
ZFS is really fucking complicated and was originally meant for enterprise applications (meaning you are expected to have backups). Not only is trying to shuffle data around between disks in a raidz2/3 type of layout hard, it’s often not sensible to even bother, as it would take longer anyway. What you are expected to do is back up, scrub your backups, destroy the main pool, recreate the pool with the disks you want to add, send the data back over, and scrub your main pool.
The “hidden costs” of ZFS aren’t due to some special deficiency of ZFS. They are a fundamental part of storing your data safely. If you have 1TB of data, you also need another 1TB in order to check it for errors with a simple mirror. You also need an additional 1-2TB for backups. Basically, take the amount of data you think you have, and plan on needing 4x that much storage.
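The 4x rule of thumb above as arithmetic (the multipliers are assumptions, adjust to taste): a mirrored main pool doubles the raw requirement, and the backup target plus snapshot/growth headroom roughly doubles it again.

```python
def planned_raw_tb(data_tb: float, mirror: int = 2,
                   backup_copies: float = 2.0) -> float:
    """Rough raw capacity to budget: mirrored main pool plus backup space."""
    return data_tb * mirror + data_tb * backup_copies

print(planned_raw_tb(1.0))  # 1TB of data -> 4.0 TB of raw storage to budget
print(planned_raw_tb(4.0))  # 4TB of data -> 16.0 TB
```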
ZFS has hardcore fanboys because it’s about data protection first. Performance and flexibility are optional, though certainly reasonably obtainable.
As far as ARM goes, it’s not something that matters to me. What you should really be concerned about is how much data you may want to store, and leaving yourself options for the future. I get the appeals of things like low power usage in a nice cute package, but if you actually run numbers the cost delta isn’t a worthwhile thing to obtain in exchange for having a gimped system you can’t grow into and change over the next several years.
My own priorities are:
Can hold a shit ton of hard drives. I wouldn’t recommend anything less than 8 bays.
The largest capacity drives available, as long as the cost/TB doesn’t drift too far from the other options. I’m standardized on 10TB disks, but you should consider 12/14/16TB drives. Shuck WD Easystores and Elements when on sale, and take care that they are NOT SMR. Don’t fall into the “lots of small drives to go faster” trap. If you need something fast, use SSDs.
PCIe lanes, and physical slots. I have never once regretted having too many, but my early purchases were made useless by not having enough. PCIe lanes let you add 10/40G network devices (because even a single hard drive can saturate a 1G network), HBAs to handle more disks, even things like NVMe risers.
My main purpose will be to back up (*) data I consider important and to be able to access it from whatever device I choose. While media access isn’t mandatory, it might be a plus in the long run. I might also be tempted to run various other tasks on it, but this is also not mandatory.
Currently, I would need about 4TB of data, which isn’t that much. I don’t need high/permanent uptime to be honest and if the NAS turns out to be too noisy, I might also turn it off during the night.
I don’t mind checking if everything still works; afaik ZFS is built with the assumption that hardware failures will occur.
Maybe I should also read up on BTRFS again. Afaik, BTRFS’ RAID5/6 support is not that mature. (Regardless of that, I would still need the backups ofc).
Thanks, I’m aware of that since it has been mentioned in various videos on the site’s channel.
I think 8 bays would be overkill for me.
As mentioned above, this exceeds my required capacity. However, I do intend to exceed it a bit.
Then the build above looks like a perfect fit, given the need for a small size.
Am running a ZFS-based NAS at home; what do you mean by complicated?
The learning curve for a base setup and replicated snapshot scripts is very low, as fortunately plenty of documentation exists (including scrub SMS/mail-on-issue scripts and automatic daily/weekly/monthly snapshots plus clearing); for me, the same learning curve applies to BTRFS.
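The "clearing" half of those snapshot scripts is mostly bookkeeping. A minimal retention sketch in that spirit (snapshot naming scheme and the retention window are assumptions; a real script would run `zfs destroy` on the expired names):

```python
from datetime import date, timedelta

def expired(snapshots: list[str], today: date, keep_days: int = 7) -> list[str]:
    """Return snapshots named auto-YYYY-MM-DD that are older than keep_days."""
    cutoff = today - timedelta(days=keep_days)
    out = []
    for name in snapshots:
        # parse the date out of the hypothetical auto-YYYY-MM-DD naming scheme
        d = date.fromisoformat(name.removeprefix("auto-"))
        if d < cutoff:
            out.append(name)
    return out

snaps = ["auto-2020-06-01", "auto-2020-07-10"]
print(expired(snaps, date(2020, 7, 12)))  # ['auto-2020-06-01']
```

Real-world tools layer weekly and monthly tiers on top of this, but the keep/destroy decision is the same shape.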
Would not call this cost “hidden” or unique to ZFS, any type of redundancy or parity requires more space.
In the end it depends on the redundancy level that is chosen: plain mirror (2x), raidz1 (+1 disk), raidz2 (+2 disks), raidz1 combined with mirrors (2x+1), or even no redundancy at all but a separate machine spun up every now and then to receive the latest ZFS snapshot changes via zfs send and zfs receive.
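The rough usable capacity for each of those layouts is easy to work out (this ignores ZFS metadata and TB-vs-TiB overhead, so treat the numbers as upper bounds):

```python
def usable_tb(n_disks: int, disk_tb: float, layout: str) -> float:
    """Approximate usable capacity for common ZFS layouts, before overhead."""
    parity = {"mirror": None, "raidz1": 1, "raidz2": 2, "raidz3": 3}[layout]
    if parity is None:
        return disk_tb * n_disks / 2       # 2-way mirrors: half the raw space
    return disk_tb * (n_disks - parity)    # raidzN: N disks' worth of parity

print(usable_tb(4, 4, "mirror"))  # 8.0
print(usable_tb(4, 4, "raidz2"))  # 8.0 -- same space, different failure modes
print(usable_tb(5, 4, "raidz1"))  # 16.0
```

Note that 4 disks in raidz2 and 4 disks in mirrors give the same usable space; the difference is which combinations of two failures they survive.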
That is a very good point. @Azulath, do you plan on running any software directly on the NAS? That would then be limited by what the ARM processor supports (virtualization, CPU cache, hardware support for certain encryption, and other performance-relevant features).
The Helios64 mentioned in the first post has 5 disk slots.
Let’s say you use 4x4TB disks => ~8TB usable (using raidz2, allowing 2 disks to fail), plenty of room to grow, plus snapshots for things that change and that you want to be able to recover.
Note that the machine only has 4GB of soldered-on RAM, so using e.g. de-duplication is not recommended; that said, you can still enable it for individual datasets of the overall ZFS pool without running into issues.
“Extending” is something to think about as well. Upgrading the pool from e.g. 4TB to 8TB disks requires swapping out all disks before the storage pool can be extended to the full size. BTRFS has features to allow uneven disk sizes; OpenZFS does not yet (to my knowledge) support this, as they always kept everything working (on FreeBSD and Linux alike, unlike BTRFS in some cases).
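The swap-every-disk upgrade path boils down to one `zpool replace` per disk, with a resilver in between each, after which the `autoexpand` property lets the pool grow into the new size. A sketch of the command sequence; the pool and device names are hypothetical:

```python
def upgrade_cmds(pool: str, old: list[str], new: list[str]) -> list[str]:
    """Build the ordered zpool commands for swapping disks to larger ones."""
    # autoexpand=on lets the pool grow once every vdev member is larger
    cmds = [f"zpool set autoexpand=on {pool}"]
    for o, n in zip(old, new):
        # each replace triggers a resilver; wait for it before the next swap
        cmds.append(f"zpool replace {pool} {o} {n}  # wait for resilver")
    return cmds

for c in upgrade_cmds("tank", ["sda", "sdb"], ["sdc", "sdd"]):
    print(c)
```

The pool only reports the new size once the last old disk has been replaced, which is why partial upgrades buy you nothing in the meantime.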
Thinking about it, a “dirty” solution could also be, depending on the needs, to run the system from a Raspberry Pi 4. There are tutorials for OMV with ZFS on a RPi4. Attach the disk(s) via USB and you are ready: no noise (if the disks are not cooled by any active fans), compact overall size, cheaper, still ARM, and since the disks are on USB it is easy to just attach them to a laptop and run zfs import to read all the same data. Simple to use and dead easy to build and assemble.