Unraid as a stopgap to ZFS … or how to get there from here?

So I have been thinking about a storage solution.
=> Long term: 4TB drives in a raidz2 config with 6 drives = 16TB usable seems like it would meet my needs for the next 12 months or so.
=> 8TB drives, on the other hand, would probably meet my needs for the next 5 years and are extremely tempting as a target…
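For reference, the usable-capacity math behind those numbers, as a rough sketch (raw decimal TB, ignoring ZFS metadata overhead and TB/TiB differences):

```python
def raidz_usable_tb(drives, size_tb, parity):
    """Usable capacity of a single raidz vdev:
    (number of drives minus parity drives) x drive size."""
    return (drives - parity) * size_tb

# 6 x 4TB in raidz2 (two drives' worth of parity)
print(raidz_usable_tb(6, 4, 2))  # 16 TB usable
# 6 x 8TB in raidz2
print(raidz_usable_tb(6, 8, 2))  # 32 TB usable
```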

However… Money.

I can budget for one 8TB drive every couple of months through the year, but I can’t budget for six 8TB (or even 4TB) drives in one pop.

It seems like you can add disk space to Unraid pretty darn flexibly, and I could start using the disks as I bought them.


How do you get your storage transferred from Unraid to ZFS (or any other solution) without intermediate storage to offload the data onto while rebuilding the disk pool?

Buy 6 4TB drives over a couple of months … use them in Unraid … a couple of months later buy 6 more 4TB drives … build a ZFS pool and transfer the data, and then build a second ZFS pool from the original disks?

Is there a better way that doesn’t involve buying all the disks in one shot or leaving disks on the shelf until you have enough for an efficient pool?


Why not just set up a ZFS pool in the first place?


… Again, because it will take several months to accumulate the disks for a ZFS pool, whereas with Unraid I can add disks as I purchase them.

Just set up multiple vdevs as you add disks and grow the pool each time.

As long as you use drives of the same size (best practice anyway), it’s a seamless solution:

You have a raidz1 of 3 disks.

You add 3 new disks as a second raidz1 vdev and add that vdev to the existing pool.

You now have one pool of 6 disks. (1 pool = 2 vdevs, each vdev = 3 disks)

This way you can buy as you go, you don’t have to go through the trouble of migrating later, and you get zfs the entire time.

Pool (Striped)
┣ Vdev1 (Raidz1)
┃    ┠   4 tb drive
┃    ┠   4 tb drive
┃    ┗   4 tb drive
┗ Vdev2 (Raidz1)
     ┠   4 tb drive
     ┠   4 tb drive
     ┗   4 tb drive
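The layout above can be built incrementally with two commands. This is a sketch; the `/dev/sdX` device names are placeholders (on FreeBSD you’d use `ada`/`da` names, and on Linux `/dev/disk/by-id/` paths are safer):

```shell
# First batch of 3 drives: create the pool with one raidz1 vdev
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc

# Months later, second batch: add a second raidz1 vdev.
# ZFS now stripes new writes across both vdevs.
zpool add tank raidz1 /dev/sdd /dev/sde /dev/sdf
```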


That’s not the process I’m describing, but raidz expansion is a feature that should hit FreeBSD/illumos some time late this year or early next year.

Until then, striping vdevs works just fine.

If you want to use ZFS on Linux, there’s no telling when it’ll be ported. ZoL doesn’t even have some features from 2012 ported yet.

If you are using FreeBSD/FreeNAS/illumos, you can stripe vdevs, then just expand them when the feature hits RELEASE. If you’re using ZoL, striped vdevs will probably be the only way to expand arrays for the foreseeable future.


I did a similar thing with btrfs on my machine. I prefer mirrored sets on ZFS, which would make it easier.

Keep 2 SATA ports empty to build the final ZFS pool, then shrink the Unraid/btrfs storage drives 2 at a time and add them as another ZFS mirrored pair until all the data is on ZFS.

Doing mirrored pairs drops usable capacity from 16TB to 12TB, but you can upgrade drives a pair at a time (or even one at a time) and grow the raid.

For example, with a final 6 x 4TB drives: one fails, you add an 8TB drive and resilver. When you can, replace the other good drive in that pair with an 8TB drive and the mirror grows to 8TB, leaving a spare 4TB drive to cover the other 4 drives in the system.
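The capacity trade-off being described, as a quick sketch (decimal TB, ignoring ZFS overhead): each 2-way mirror contributes the size of its smaller drive.

```python
def mirror_pool_usable_tb(pair_sizes_tb):
    """Usable capacity of a pool of 2-way mirrors: each mirror
    vdev contributes the capacity of its smaller member."""
    return sum(min(a, b) for a, b in pair_sizes_tb)

# 6 x 4TB as three mirrored pairs: 12TB usable (vs 16TB for 6-wide raidz2)
print(mirror_pool_usable_tb([(4, 4), (4, 4), (4, 4)]))  # 12

# Upgrade one pair to 2 x 8TB and that mirror grows to 8TB
print(mirror_pool_usable_tb([(8, 8), (4, 4), (4, 4)]))  # 16
```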

My motherboard has 8 SATA ports + 1 NVMe. I’m at 5 drives at the moment myself, but mixed sizes.


+1 to mirrored VDEVs.

You can add disks 2 at a time. E.g., my current NAS:

pool: tank
state: ONLINE
scan: scrub repaired 0 in 0 days 01:44:19 with 0 errors on Sun Jun 24 01:44:20 2018

    NAME                                            STATE     READ WRITE CKSUM
    tank                                            ONLINE       0     0     0
      mirror-0                                      ONLINE       0     0     0
        gptid/936cc415-c5d6-11e6-a21f-c8cbb8caf480  ONLINE       0     0     0
        gptid/dddd73fa-c5f1-11e6-a1f6-c8cbb8caf480  ONLINE       0     0     0
      mirror-1                                      ONLINE       0     0     0
        gptid/2a86e102-c5f2-11e6-a1f6-c8cbb8caf480  ONLINE       0     0     0
        gptid/2e69ba8f-c5d7-11e6-a21f-c8cbb8caf480  ONLINE       0     0     0

errors: No known data errors

I’ve had a dual-mirror pool since 2012. It’s the same pool.

I replace disks one at a time. It’s in a 4-bay system. When I migrate it (soon) to a new machine, I’ll just pick up the disks, stuff them in the new box in any order, and let ZFS sort it out.

With more bays, I could add a third 2-drive mirror and the pool would begin to balance writes across it, with a preference for the less-full vdev.
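Adding that third 2-drive mirror is a one-liner; the device names here are placeholders:

```shell
# Add a third 2-way mirror vdev to the existing pool.
# New writes start balancing across all three mirrors.
zpool add tank mirror /dev/ada4 /dev/ada5
```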

Without adding more drives, I replace one drive in a mirror, wait for it to resilver, then replace the other drive. Once a mirror has synced back up, I can expand it to the new disk size.
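That drive-at-a-time upgrade can be sketched as follows (the disk names are placeholders; with `autoexpand=on`, the mirror grows automatically once both members are the larger size):

```shell
# Let the pool grow when every drive in a vdev has been replaced
zpool set autoexpand=on tank

# Replace one side of the mirror and wait for the resilver to finish
zpool replace tank old_4tb_disk_1 new_8tb_disk_1
zpool status tank   # check resilver progress before touching the other drive

# Then replace the other side; when it resilvers, the vdev expands
zpool replace tank old_4tb_disk_2 new_8tb_disk_2
```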

If your end goal is ZFS, why not just install FreeNAS and skip Unraid?

I recently (6 months ago) converted/rebuilt my second storage box from ZFS to btrfs, since I dislike the lack of flexibility in ZFS: I needed to add space, and btrfs lets you rebalance. I had been running btrfs and ZFS on two boxes in parallel for about 2 years prior. One box is running a mix of 3x10T and 2x5T HGST drives and the other is running 4x4T WD Red; both boxes keep their data in raid1.

I don’t know what the RMA process is like for Seagate; all their drives I used have failed just outside of warranty. WD RMA is good. I haven’t had HGST/Toshiba drives fail yet, so I’m not sure about those.

The two boxes are just archives/backups; they don’t need to be fast as long as metadata operations are quick.

You could’ve just striped mirrored vdevs, though, if you’re only using mirrors.

Guess it’s appropriate, since the entire architecture of btrfs is built on a bad working knowledge of ZFS.