I’m trying to gin up a backup solution for my data and am curious about opinions re: “DIY vs COTS” for a noob to NAS. I’m weighing:
Purchase a uGreen 4 bay box
Repurpose an old AM4 system and DIY (Ryzen 5 3600X on a B450 Tomahawk)
For either of the above solutions, I already bought a pair of 22TB WD Gold enterprise drives. I could maybe get another if I wanted a setup with parity built in, but honestly I’ve been running without ANY backup for close to a decade (just system + external HDD, with NO backup for either), so going from 0 to 2 levels of backup will be an upgrade, lack of parity be damned.
Use case is strictly local backup of just personal data from multiple PCs. I do not have any need for remote access. This would be meant as the final backstop in my failure model - the last part of the 3-2-1 idea (though sans the offsite component). As such, a semi-monthly backup cadence would probably be aggressively optimistic. I could see myself entirely air-gapping/unplugging it when not in use.
Some other stuff that I’ll try to keep brief: I’m currently running a Linux-centric (Mint) setup. I haven’t touched Windows in several months, and would prefer to keep it that way. I’m not averse to the DIY route, but I am wringing my hands a bit over unknown unknowns I could trip up on - as I’ve done more research, I just keep having more questions instead of nailing a solution. Things like “do I need a dedicated GPU? (efficiency, bottlenecks, et al)”, or “do I need to worry about saturating bandwidth?”. Also, file systems, OSes, etc. I’m sure as I research these in the coming days more stuff will pop up. What I’m wondering about right now is whether a lot of the conversation around NAS is for people with requirements much more demanding than mine, or even for people bordering on commercial/enterprise solutions. Presupposing the former is true, I then wonder if maybe I’m wandering into ‘overkill’ territory. But that is presupposing.
It makes little sense to have a “backup” that only runs now and then; it should just work all the time, so that when/if a problem arises it’s easy to spot.
It all depends, but I would say that FreeBSD 14 and ZFS is a very solid foundation with very few surprises along the way. ECC is preferable but not a requirement, and remember that RAID is not a backup.
Something like syncthing goes a long way or even rsync. Pair that with ZFS snapshots on the server and you have a very good base to use.
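To make the snapshot half of that base concrete, here’s a minimal sketch (assumes a pool named tank with a dataset tank/backups - both invented names - and needs root on a box with ZFS set up):

```shell
# Take a dated, read-only snapshot of the backup dataset
zfs snapshot tank/backups@$(date +%Y-%m-%d)

# List existing snapshots for the dataset
zfs list -t snapshot -r tank/backups

# Old file versions stay browsable under the hidden .zfs directory
ls /tank/backups/.zfs/snapshot/
```

Snapshots are copy-on-write and essentially free, so taking one after every syncthing/rsync run gives you point-in-time recovery at almost no cost.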
Most motherboards will require you to have a video card of some sort to boot
If you don’t know whether bandwidth is going to be an issue, it very likely won’t be.
Synology is doing some very shady things and I wouldn’t be surprised if the rest of the NAS vendors followed suit, so I would go DIY.
I agree with @diizzy that the backup shouldn’t work randomly. Make it as low power as possible and leave it plugged in and online all the time.
As far as the backup cadence, whatever works for you. I do a daily incremental backup to my DAS (two 20TB drives in RAID 1) for 60 days, then the oldest incremental gets merged into the initial full backup, but that works for me.
Another +1 for the DIY route and permanent accessibility.
However, don’t go the TrueNAS or even FreeBSD route; just stick to what you already know: Mint. Linux is perfectly fine as a NAS OS, it just lacks the polished GUIs the commercial competitors have. But anything they can do, Linux can do as well. A few things required/recommended:
Use RAID1 for storage, better yet: get that 3rd drive and create a RAID5 from the outset. Converting a RAID1 to RAID5 is technically possible if you know what you’re doing, but very involved and best avoided if you can spare the extra investment of that 3rd drive. It also doubles your usable capacity: three 22TB drives in RAID5 give you 44TB, versus the 22TB a two-drive RAID1 would!
Use a separate OS drive, a 256GB NVMe M.2 SSD is plenty
Partition the OS drive such that a runaway process cannot eat all drive space, so separate partitions for /, /boot, /usr, /var, /var/log and /tmp are strongly suggested, while /home/[username] should really be left empty
Mount the actual storage in a separate directory in the tree, I used /storage and pointed that to the RAID6 I use in /etc/fstab (/dev/md0 in my case).
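For illustration, the array creation and fstab entry from the steps above could look something like this - the device names and the ext4 choice are just examples (verify devices with lsblk first; mdadm --create is destructive):

```shell
# Build a 3-drive RAID5 array from the data disks (example device names!)
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc

# Put a filesystem on it and create the mount point
mkfs.ext4 /dev/md0
mkdir /storage

# Persist the array layout so it assembles at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# /etc/fstab entry; nofail keeps the system booting if the array is absent
echo '/dev/md0  /storage  ext4  defaults,nofail  0  2' >> /etc/fstab
mount -a
```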
you can set up a cron job on the server to sync desktops and server storage using rsync and a bash script.
Rsync syntax is
rsync [options] [source] [target]
Both source and target can be on another machine, so in principle you can start an rsync session between desktop1 and the server from desktop2. Not recommended, though.
Why would you go for a distro that’s clearly not meant for the purpose? That seems rather ill-advised, and Mint is heavily focused on desktop computing. ZFS is the standard for NAS boxes these days. BTRFS can be doable, but only for mirrors.
Under the hood, Mint is “just” Debian with a friendly GUI sauce. Debian is the go-to distro for servers, so IMO your “conclusion” is incorrect.
I also dispute your statement that ZFS is standard for NASes, as in the very same sentence you contradict yourself by mentioning BTRFS. There are more file systems quite suitable for NAS use, especially on Linux, as they’re pretty much native to Linux whereas ZFS isn’t (it’s from the Solaris ecosystem, for those wondering). I’ve used JFS ever since I built my first NAS back in 2008-ish, but XFS is also a good candidate, as is ext4. Notice all of those are journaling file systems, meaning they will replay their logs as fsck checks are performed. I’ve stated previously on these forums that ZFS is overhyped, overrated and basically broken, as the tool chain is incomplete. Try resizing (expanding/contracting) a ZFS partition, never mind a pool. On ext4, JFS and XFS the tool chain is equipped to do just that; I haven’t paid much attention to BTRFS in that respect (as I won’t use it until the RAID5/6 issue is fully resolved), but ZFS most certainly does not have that capability.
But that discussion is beyond the scope of the OP’s question
https://linuxmint.com/ - “It is completely free of cost and almost all of its components are Open Source. Linux Mint stands on the shoulder of giants, it is based on Debian and Ubuntu.”
That’s from the front page, most of what you refer to seems a bit out of date?
XFS lives on somewhat but again, it’s not really in the same league?
Having a quick look, it lacks snapshots, compression, encryption, deduplication, RAID, subvolumes, and checksums.
Actually, it isn’t. Read the last mail from John Paul Adrian Glaubitz in that very same thread. TL;DR: ReiserFS is basically dead, JFS is still very much active in the kernel dev cycle.
So it says so on the front page: based on Debian and another distro that’s based on Debian.
That’s because those file systems were never designed for these features. Apropos, you mentioned RAID in that list: my NAS runs JFS on RAID6, converted some years ago from the initial RAID5 by just adding a drive and having mdadm reshape the array. Conveniently handled via Webmin.
Anyway, it’s good to see OpenZFS is finally completing the minimum standard tool chain by adding resizing. Now add the same functionality to ZFS proper (as in: on BSD systems). (Yes, I know it’ll trickle down upstream eventually, but IMO it’s still an oversight on the devs’ part to implement it this late in the dev cycle.)
I never claimed to be sane. Anyway, I’m not gonna argue any further in this thread despite not agreeing with you on those issues; the OP must be really put off by now. (Sorry!)
From a cost/performance standpoint the DIY route is definitely the way to go, but you do pay with time and, especially so for Ryzen, a pretty hefty idle wattage. Whether the latter is an issue depends on your local electricity prices, but in Europe there are months where we pay peak prices over $1 per kWh.
This may be a hot take, but: I would definitely not recommend BTRFS for any NAS workload unless you consider yourself well qualified. It lacks the guardrails of ZFS, and you run the risk of unknowingly doing something that compromises your data integrity without the system yelling at you.
Put the disks in a ZFS mirror and create enough datasets that you can replicate them to a remote location, either via the interwebs or, if you prefer air-gapped, over USB drives.
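A rough sketch of that layout - pool name, device IDs and dataset names are all invented, and it needs root plus the ZFS packages installed:

```shell
# Mirror the two 22TB drives into one pool; by-id paths survive
# device renumbering across reboots
zpool create tank mirror \
    /dev/disk/by-id/ata-WDC_DRIVE_SERIAL1 \
    /dev/disk/by-id/ata-WDC_DRIVE_SERIAL2

# One dataset per source machine keeps snapshots and replication granular
zfs create tank/backups
zfs create tank/backups/desktop1
zfs create tank/backups/laptop1

# Snapshot everything and replicate to an external USB pool (air-gapped copy)
zfs snapshot -r tank/backups@weekly
zfs send -R tank/backups@weekly | zfs receive -F usbpool/backups
```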
Unless you are very familiar with Linux, I suggest you start “soft” with either TrueNAS Scale or something like OpenMediaVault. The latter is more home-oriented but is basically just Debian with a somewhat janky albeit functional web UI. TrueNAS is probably more solid, but it’s also oriented towards enterprise use.
Another popular commercial solution is UnRaid, but I have no personal experience with this.
If you want to roll your own completely using Mint or Debian, then I suggest something like:
Your Ryzen system with PBO turned off. This effectively caps the TDP to 65W max on the CPU
256GB OS drive with separate partitions for the usual suspects like /tmp etc.
Cockpit for your basic admin tasks like power on/off
Enough RAM for ZFS to work its magic, 16GB should suffice
Some setup you probably want in Debian:
Automatic scrubs of the pool, can’t remember if this is default in Debian
Use sanoid to automatically create snapshots
Use syncoid to send the snapshots off-site
Some monitoring if the drives report errors
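To make those four bullets concrete, here’s a sketch of the relevant pieces - dataset names and retention numbers are examples, not recommendations:

```shell
# /etc/sanoid/sanoid.conf -- automatic snapshot policy
# [tank/backups]
#     use_template = production
#     recursive = yes
# [template_production]
#     daily = 30
#     monthly = 6
#     autosnap = yes
#     autoprune = yes

# syncoid sends the snapshots elsewhere (off-site box or USB pool)
syncoid -r tank/backups user@offsite:tank/backups

# Debian's zfsutils-linux package ships a periodic scrub job in
# /etc/cron.d/zfsutils-linux; if yours doesn't, a monthly cron entry works:
# 0 2 1 * * root /usr/sbin/zpool scrub tank

# smartmontools for drive-error monitoring
apt install smartmontools
smartctl -H /dev/sda
```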
And as always, I recommend having a replacement drive or two handy for when they inevitably break. I am fortunate to live 10 minutes from a shop that sells hardware, but they do not always stock the “good” stuff like enterprise drives of your chosen capacity.
I rebuilt my NAS recently. I used an old business PC (Ryzen 3000), just added the drives that I needed, and am running a ZFS pool. I am running Rocky Linux 9 as the base OS for its near-DECADE support life and the exhaustive documentation (because it is Red Hat (RHEL) based). Debian is also a good choice, especially if you have never messed with RHEL.
I would suggest that whatever storage you think you will need, double it (If you can afford it. You can always add another few drives later).
Rsync is probably a good choice for backups but I do not use my NAS for backups.
I would also agree with a few previous posters that you should plan to keep the system on and run regular (whatever that means for you and your use-case) automated or semi-automated backups.
If you are not the most familiar with CLI system administration, Cockpit is also a great remote (or on-network) management platform, and I use it on every headless system that I run.
Seagate Exos 14TB disk for Time Machine on local Macs
manual scripted rsync to push data to the RAID1 volume as needed (mainly syncing home dirs, from all systems, not just my Macs)
Backblaze Personal Backup runs on the Mac Mini to backup the RAID1 volume (unlimited backups + file version history)
this has worked pretty well for many years
key piece of this is usage of Backblaze Personal Backup (unlimited flat rate backups) which only works on macOS or Windows
this has been a key piece of the 3-2-1 strategy and has managed to help me avoid needing to do a custom Linux server config just for important backups. Though I do consider it often still.
Commercial systems are nice when you are not footing the bill. Dell, Pure, Quantum, IBM, all very nice when stuff needs to work and you are instantly raked over the coals if it does not.
As Synology demonstrated, vendor lock-in is an expensive problem.
DIY is good because free time is free and money is finite. Since most commercial solutions are stacked on top of the free systems, if you know the basics, you will find your way round the commercial systems easier too.
As for myself:
I am running 2 NASes. One is basic to the point of being an ethernet-to-SATA bridge with a web GUI (OpenMediaVault), good enough as a backup target. The other is so janky I refuse to share details (Odroid H3, USB HDD enclosure, requires manual intervention to boot up properly, etc.)
I go with DIY mainly so I’m in full control of my data. I have a SATA HDD formatted to something no-frills (NTFS on Windows, ext4 on Linux) for easy data recovery and cross-OS management. I share it with a basic right-click → Share on Windows or a quick vsftpd server on Linux/FreeBSD, and it’s easily accessible anywhere on the LAN.
I used to use an old Phenom II X4 desktop with 2 HDDs (one main, one for videos for Kodi set-top boxes). I now have an Ivy Bridge laptop as my NAS, with the HDDs in a dual-bay USB-C enclosure. Disks and data for a small lab or home are pretty easy to manage with a spare computer, and I got that laptop for free
I had a Seagate Personal Cloud NAS years ago that did RAID1. One drive failed, I wasn’t able to view data from either drive by plugging it into a Linux box as-is, and the NAS OS was stored on the HDDs themselves, so one drive gone = array gone = giant plastic paperweight, even with brand-new disks. There was a Debian-on-Seagate-NAS guide somewhere that worked easily enough, though, and I used that NAS for a week before just breaking out an old computer.
My DIY NAS was so flexible, I had my cable modem and routers powered off its Molex (hardware on 12V, fans 5V)