Getting off of Unraid, going with Ubuntu ZFS

Short story: I can't get the desired I/O performance out of Unraid. I've been reading up on ZFS, and I figured it would be worth a shot. I have four 3TB drives, a single 6TB, and a single 640GB. The four 3TB drives and the 6TB are in a pool (the command: `zpool create z-storage raidz /dev/sda /dev/sdb /dev/sdc /dev/sde /dev/sdf -f`). In the little I have tested, I'm getting 4x the network transfer over NFS that I was getting with Unraid.

Before I 100% commit to this zpool, I would like a second opinion: what do you think is the best way to configure the zpool, and where could this potentially go wrong in the future?

My concern is that you won't have full fault tolerance on the array, since the 6TB has no duplicate. If you're looking for reliability, that's a problem.


Other than that, it’s looking good to me.

Oracle has pretty good documentation, and I haven't gotten through all of it yet, but would using raidz2 or raidz3 in the command provide that full fault tolerance?
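From what I understand, on paper yes: raidz survives one disk failure per vdev, raidz2 two, and raidz3 three. A rough sketch of the trade-off (the zpool line is commented out and just reuses the device names from your original command, so double-check them on your box; the capacity math assumes all five disks count as 3TB, since the 6TB gets truncated in a raidz of 3TB disks):

```shell
# Same pool as before, but with two disks of parity instead of one:
#   zpool create z-storage raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sde /dev/sdf
# Rough usable capacity, treating all five disks as 3TB:
echo "raidz  (1 parity): $(( (5 - 1) * 3 ))TB usable"
echo "raidz2 (2 parity): $(( (5 - 2) * 3 ))TB usable"
echo "raidz3 (3 parity): $(( (5 - 3) * 3 ))TB usable"
```

That still doesn't fix the mixed-size issue, though; it only changes how many disks can die before the vdev (and the pool) is lost.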

So if you're trying to pool the 4x3TB and the 1x6TB together, you're not going to be able to do that.

A raidz vdev effectively needs identically sized disks: you can mix sizes, but every member is treated as the smallest, so you'd lose 3TB off the 6TB disk.

I'm only a layman, but if you have another free SATA port available, go for mirrored pairs of the 3TB drives and leave the 6TB out, or buy another 6TB drive now to make a third mirrored pair.

It will be much easier to upgrade going forward, since you only deal with two disks at a time.
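Something like this, as a sketch (the zpool lines are commented out and the device names are assumed from the earlier post, so check yours with `lsblk` first; `/dev/sdX` stands for a hypothetical future disk):

```shell
# Two mirrored pairs of the 3TB disks; either disk in a pair can fail.
#   zpool create z-storage mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sde
# Later, a second 6TB lets you add a third pair without rebuilding:
#   zpool add z-storage mirror /dev/sdf /dev/sdX
# Usable capacity: each mirror contributes one disk's worth.
echo "2x 3TB mirrors:    $(( 3 + 3 ))TB usable"
echo "plus a 6TB mirror: $(( 3 + 3 + 6 ))TB usable"
```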


Suppose I make two software RAID0s (2x3TB each); that would give me 3x6TB, and then I zpool those. How much bad juju is that?

This is what I have read as well, apparently a good practice.


That's some seriously bad juju. You want to give ZFS the raw disks; mdadm does some funny business that ZFS doesn't like.


AMEN to @SgtAwesomesauce

NEVER put ZFS on RAID. ZFS handles both the RAID functions and the filesystem, rather than one or the other (like NTFS on Windows, for instance), so it really must talk to the disks directly; in other words, NOT through a RAID controller in a RAID configuration (and some controllers, even without RAID enabled, are just a bad idea, if I understand correctly).

Let me clarify: using a RAID controller itself is fine, but not with RAID being used, and it depends on the controller. There's too much here to note; do your research. It isn't super complicated, but in short: you want Ubuntu to see each individual disk, NOT a RAID array. That's the overall thing to remember for ZFS.


I think the other guys are suggesting that you make a pool of two mirrored pairs of 3TB disks, which would give some fault tolerance: if any one of the drives goes bad, your pool would still work until you can replace the failed drive.

If you added the single 6TB drive as its own vdev and it had a fault, the whole pool would fail.

So it's much better to just use the 4x3TB drives and wait until you have a second 6TB drive before adding the pair to the pool.
Or use the 6TB drive mirrored with one of the 3TB drives, which would only use half of its capacity, but would let you swap out the 3TB drive it's paired with for a 6TB drive later down the road. Then you'd have that mirror in a pool with one pair of 3TB drives, so just four drives...
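If you go that mixed-mirror route, the upgrade path would look roughly like this (sketch only, commands commented out; `/dev/sdX` stands for the hypothetical future 6TB disk, and `autoexpand` is what lets the mirror grow once both sides are 6TB):

```shell
# 6TB mirrored with a 3TB: a mirror is only as big as its smallest disk.
#   zpool create z-storage mirror /dev/sdf /dev/sda mirror /dev/sdb /dev/sdc
#   zpool set autoexpand=on z-storage
# Later, swap the 3TB half for a new 6TB and that vdev grows to 6TB:
#   zpool replace z-storage /dev/sda /dev/sdX
# Usable capacity before and after the swap:
echo "before: $(( 3 + 3 ))TB"   # min(6TB,3TB) mirror + 3TB mirror
echo "after:  $(( 6 + 3 ))TB"   # min(6TB,6TB) mirror + 3TB mirror
```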