So I have been thinking about a storage solution.
=> long term storage: 4TB drives in a raidz2 config with 6 drives = 16TB usable, which seems like it would meet my needs for the next 12 months or so.
=> 8TB drives on the other hand would probably meet my needs for the next 5 years and are extremely tempting as a target…
However… Money.
I can budget one 8TB drive every couple of months through the year, but I can’t budget six 8TB (or even 4TB) drives in one pop.
It seems like you can add disk space to unraid pretty darn flexibly, and I could start using the disks as I bought them.
BUT
How do you get your storage transferred from unraid to ZFS (or any other solution) without an intermediate storage solution to offload the data while rebuilding the disk pool?
Buy six 4TB drives over a couple of months… use them in unraid… a couple of months later buy six more 4TB drives… build a ZFS pool, transfer the data, and then build a second ZFS pool from the original disks?
Is there a better way that doesn’t involve buying all the disks in one shot or leaving disks on the shelf until you have enough for an efficient pool?
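To make the question concrete, here’s roughly the second-batch step I’m picturing (pool name, device names, and paths are all placeholders, not a real setup):

```
# Build the raidz2 pool from the second batch of six 4TB drives.
# raidz2 keeps 4 of the 6 disks for data, so 4 x 4TB = ~16TB usable.
zpool create tank raidz2 /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl

# Copy everything off the unraid array (locally or over the network).
rsync -aHAX --progress /mnt/user/ /tank/

# After verifying the copy, the original six disks are free to become
# a second pool -- or a second vdev in this one.
```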
That’s not the process I’m describing, but it (raidz vdev expansion) is a feature that should hit FreeBSD/illumos some time late this year or early next year.
Until then, striping vdevs works just fine.
If you want to use ZFS on Linux, there’s no telling when it’ll be ported. ZoL doesn’t even have features from 2012 ported yet.
If you are using FreeBSD/FreeNAS/illumos, you can stripe vdevs, then just expand them when the feature hits RELEASE. If you’re using ZoL, striped vdevs will probably be the only way to expand arrays for the foreseeable future.
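As a rough illustration of the striped-vdev approach (FreeBSD-style device names, all hypothetical):

```
# Start with a single raidz2 vdev when the first six drives are in hand:
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# Later, stripe a second raidz2 vdev into the same pool; ZFS then
# spreads new writes across both vdevs:
zpool add tank raidz2 da6 da7 da8 da9 da10 da11

# Confirm the layout:
zpool status tank
```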
I did a similar thing with BTRFS on my machine. I prefer mirrored sets on ZFS, which would make this easier.
Keep 2 SATA ports empty to start building the final ZFS pool, then shrink the unraid/btrfs storage 2 drives at a time and add them as another ZFS mirrored pair until all the data is on ZFS.
Doing mirrored pairs drops usable space from 16TB to 12TB, but you can upgrade drives a pair at a time, or even one at a time, and grow the pool.
For example, with a final 6 x 4TB layout: if one drive fails, add an 8TB drive and resilver. When you can, replace the other (still good) drive in that pair with an 8TB one; that mirror grows to 8TB, and the pulled 4TB drive becomes a spare covering the other 4 drives in the system.
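Something like this, with made-up device names (the two spare ports hold the first pair):

```
# Start the final pool on the two empty SATA ports:
zpool create tank mirror /dev/sdg /dev/sdh

# Free two drives from the unraid/btrfs side, move a chunk of data,
# then widen the pool with the freed pair:
zpool add tank mirror /dev/sda /dev/sdb
# ...repeat pair by pair until everything lives on ZFS.
```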
My motherboard has 8 SATA ports + 1 NVMe. I’m at 5 drives at the moment myself, but mixed sizes.
I’ve had the same dual-mirror pool since 2012.
I replace disks one at a time. It’s in a 4-bay system. When I migrate it (soon) to a new machine, I’ll just pick up the disks, stuff them in the new box in any order, and let ZFS sort it out.
With more bays, I could add a third 2-drive mirror and the pool would begin to balance writes across it, with a preference for the less-full vdev.
Without adding more drives, I replace one drive in a mirror, wait for it to sync, then replace the other drive. Once a mirror has synced back up, I can expand it to the new disk size.
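For reference, that swap-and-grow sequence looks roughly like this (device names are examples):

```
# Let vdevs grow automatically once every disk in them is bigger:
zpool set autoexpand=on tank

# Replace one side of the mirror and wait for the resilver:
zpool replace tank da0 da4
zpool status tank        # watch until resilvering finishes

# Then replace the partner; the mirror expands on its own:
zpool replace tank da1 da5

# If autoexpand was off, expand a device manually:
zpool online -e tank da5
```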
edit:
if your end goal is ZFS, why not just install FreeNAS and skip unraid?
I recently (6 months ago) converted/rebuilt my second storage box from ZFS to btrfs, since I dislike the lack of flexibility in ZFS: I needed to add space, and btrfs lets you rebalance. I had been running btrfs and ZFS on the two boxes in parallel for about 2 years prior. One box is running a mix of 3x10T and 2x5T HGST drives and the other is running 4x4T WD Reds; both boxes keep their data in raid1.
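The rebalance workflow is the main draw; roughly (mount point and devices are examples):

```
# Add a disk of any size to a mounted btrfs filesystem:
btrfs device add /dev/sdx /mnt/archive

# Re-spread existing raid1 chunks across all devices, new one included:
btrfs balance start --full-balance /mnt/archive

# Removing a device migrates its chunks off first:
btrfs device remove /dev/sdy /mnt/archive
```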
I don’t know what the RMA process is like for Seagate; all of their drives I’ve used have failed just outside of warranty. WD’s RMA is good. I haven’t had HGST/Toshiba drives fail yet, so I’m not sure about those.
The two boxes are just archives/backups; they don’t need to be fast as long as metadata operations are quick.