ZFS advice

Hello Level1Techs Forum,

I’m a newbie when it comes to ZFS, and I need some advice. I’d like to migrate my Unraid pool to ZFS (on Unraid). I currently have:
4x 4TB Seagate
4x 8TB WD (one USB)
2x 12TB WD (USB)
and a 2TB Samsung drive (to be used as the drive to start the Unraid pool)

My current plan is to build a raidz1 with the 4TB drives and go on from there.
I have a couple of questions regarding that:
Can I add a second raidz1 pool (with the 8TB drives) to the existing one (and later the 12TB drives)?
Will I need an SSD for caching?
I have 64GB of ECC memory (about 48GB usable after Dockers and VMs); is that enough when I upgrade to 4x4TB, 4x8TB (4x12TB in the future), all raidz1?
Can I saturate 10Gbit?
As my files are mainly music and movies, should I set the pool to use compression?
How can I share my zpool via SMB?
Is my plan to use raidz1 any good?
Is there anything else I might have missed?

Thanks in advance for reading this.:+1:

PC specs
Ryzen 2600 @ 3.8GHz
64GB Kingston ECC @ 2666MHz
ASRock Taichi X370
GeForce GT 710
Unraid 6.8.3

No. Not for streaming movies and music.

Not sure how Unraid handles that; FreeNAS wants 8GB for itself + 1GB per 1TB of physical disk space. Until you put in the 12TB drives you should definitely be fine.

No.

You will be able to transfer a lot faster than gigabit, for sure. 10Gbit maybe not quite with just four drives for now.

You would (usually) add a vdev to a pool, not a new pool. But that’s just terminology, so yes.

“need” is a strong word. It certainly doesn’t hurt, but it’s not required either.

The filesystem doesn’t really matter for SMB, once it’s mounted the SMB daemon can just access the drives. They are independent components. Unraid already has SMB, it just works transparently.

You can but I wouldn’t.

If you lose a vdev you lose the pool. 1 drive fault tolerance isn’t great.

Depending on how much RAM you assign to ARC, potentially no.

Sure. The 1GB per TB recommendation is a rule of thumb, not a hard line.
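If you do want to cap how much RAM ZFS grabs for ARC, on Linux that’s a module parameter; the 32GiB value below is purely illustrative, not a recommendation:

```ini
# /etc/modprobe.d/zfs.conf (Linux; the cap value here is hypothetical)
# Limit ARC to 32 GiB = 32 * 1024^3 bytes
options zfs zfs_arc_max=34359738368
```

On a running system the same parameter can be adjusted via /sys/module/zfs/parameters/zfs_arc_max without a reboot.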

In theory, with ARC, yes; in practice this isn’t as easy as it seems, especially if you use Samba.

If your CPU can handle this without issue then sure.

Samba is the way to do this but that’s its own separate can of worms from ZFS. Which OS you use ZFS on will determine how you do that. When you get to that point we can talk about it more.
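As a rough sketch of what that can look like on a generic Linux box (the share name, path, and user below are made up; Unraid manages its SMB config through its own UI):

```ini
# /etc/samba/smb.conf fragment (hypothetical share name, path, and user)
[media]
    path = /mnt/zfs/folder1
    read only = no
    browseable = yes
    valid users = paul
```

Since SMB only sees a mounted path, nothing in this fragment is ZFS-specific.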

Do you have a way to back up the data?

That’s a tough question to answer until you get to that point where you realize that for yourself.

Thanks for the reply

Yes, I back up all my data to Google Drive using rclone (encrypted).

The reason I’d like to do it that way is: I want to have one network drive with all my subfolders.

If you have a backup and don’t mind losing all the local data then raidz1 is fine. The problem becomes repairing a vdev in the pool. Just like raid5 you’re prone to have another disk fail while rebuilding a failed disk. Then your entire pool will be gone even though only 1 vdev has problems.

You should stick with mirrors instead to get better performance. Rebuilds are quicker, and you’re able to grow as needed. The downside is you halve your raw capacity, but that’s a small price to pay imo.
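A quick back-of-envelope comparison for the four 4TB drives, assuming ideal disk sizes and ignoring padding and metadata overhead:

```shell
#!/bin/sh
# Usable capacity: raidz1 vs. two-way striped mirrors, for 4x 4TB drives
disks=4
size_tb=4
raidz1=$(( (disks - 1) * size_tb ))  # raidz1 loses one disk's worth to parity
mirrors=$(( disks * size_tb / 2 ))   # two-way mirrors lose half the raw space
echo "raidz1: ${raidz1}TB usable, mirrors: ${mirrors}TB usable"
```

Either way, real-world usable space will come in a bit lower than these idealized numbers.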

@paul_dema please read this.

https://jrs-s.net/2018/04/11/primer-how-data-is-stored-on-disk-with-zfs/

Thanks, this link was very informative, but mirroring would lose me too much capacity.
I mean, I currently run a “raid 5” with 11 disks in Unraid, so choosing raidz would still be light-years safer, I’d say.

Okay so far this is what I would do to create the pool:

zpool create \
-o ashift=12 \
-O compression=lz4 \
-m /mnt/zfs \
pool1 \
raidz /dev/sdg /dev/sdi /dev/sdp /dev/sds

Do you guys know a way to use the device serial instead of /dev/sd*, or is it fine to use /dev/sd*?

# after creating the pool
zfs create pool1/folder1
zfs create pool1/folder2

safer yes… faster… definitely not

With serials, you could use
ls -l /dev/disk/by-id/
for a list, and then use the ata-wdcsomething identifier?

So it ends up like:
zpool create -o ashift=12 poolname raidz3 ata-wdcezrx123456 ata-wdcezrx234567 ata-wdcezrx345678 ata-wdcezrx456789


You want to do it by serial, because the /dev/sdX layout, while easier to type, screws you over if you ever move the drives or plug in a USB device and the ordering changes.


Huh? The by-id path avoids that. I agree it’s better not to use /dev/sdX.

Sorry, I replied to your response. I meant to piggy-back off of what you said.


Okay, no worries.
I use ls -l to see which serial lines up with which /dev/sdX, and thought I’d mistyped, or left in the intention to just check /dev/sdX.

Thanks,