ZFS On Unraid? Let's Do It (Bonus Shadowcopy Setup Guide) [Project]

@wendell Is there an upgrade path from FreeNAS 11.3U2 that doesn’t involve me nuking my main pool? If so, would it be “safe” to do? If the answer to these is no, I’m going to have to buy some more drives… Also, does anyone know whether Supermicro backplanes supply 3.3V on the power pins? I’m looking at some cheap 14TB drives that use PWDIS.

I tried reading the article carefully and experimenting, but one thing that’s not clear is whether I have to create at least one “normal” Unraid array, or whether I can just use my single ZFS zpool and still use the SMB/NFS-related features.

Thanks, John

@John_Goodwin In the video Wendell did with Gamers Nexus on their NAS, he mentioned needing at least one Unraid-style array for plugins and such. I’m not sure whether you need one for SMB to work, unless that’s a plugin. You might try watching those videos.

I’m getting a log message after attempting to add the zfs-auto-snapshot.sh script and files. I’m sure I have done something wrong. The log message is
Oct 4 21:59:01 Beast crond[3324]: failed parsing crontab for user root: PATH="/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin"
Any help would be greatly appreciated.
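
For what it’s worth, Unraid’s crond (dcron) doesn’t accept environment-variable assignments such as PATH=... inside crontab files, which is usually what that parse error means. A minimal sketch of an entry with the PATH line dropped and full paths used instead; the install path, label, and schedule here are assumptions, not necessarily what the guide uses:

# hourly zfs-auto-snapshot of every dataset tagged com.sun:auto-snapshot=true ("//"),
# keeping the last 24 snapshots; script path is assumed
0 * * * * /usr/local/sbin/zfs-auto-snapshot.sh --quiet --syslog --label=hourly --keep=24 //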

I was wondering if I could ‘pause’ the array or let it sleep, as it’s in my bedroom and the disks are far louder than six Noctua fans at low speed.

Background info:
I’ve got a 5-disk pool: 3 x 4TB 2.5" HDDs, a 500GB special vdev and a 2TB NVMe cache.
My system:
i7 8700T, 16GB RAM, 1.5U short server; other config:
2 x 16GB USB sticks as the Unraid array, 1 x 128GB NVMe for system and appdata, a pfSense VM with dual 10-gig passthrough, Pi-hole DNS and LanCache DNS.

If I spin down the disks with hdparm -y /dev/sdX they will spin up within 20 seconds…

I could follow the guide right up to the step where you test performance with fio, but I get really bad read/write speeds of about 22 MB/s (both about the same). zpool iostat -v even reports only ~18M write bandwidth for the individual drives.
I also tested the SSD I used for Unraid’s array and could only get speeds of 36 MB/s (again, read and write about the same).
I then tested the read speeds with hdparm -t and it reports about 180 MB/s for the HDDs and about 445 MB/s for the SSD, which seems a lot more like what I should get. What is slowing me down?
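
For comparison, this is roughly the kind of sequential fio run I would expect the guide to mean; the target path is an assumption, and --end_fsync=1 is there so the write number isn’t just RAM/ARC speed:

# sequential 1M write test into a dataset on the pool (path assumed)
fio --name=seqwrite --directory=/mnt/yourpool/dataset --rw=write --bs=1M --size=4G --numjobs=1 --ioengine=psync --end_fsync=1 --group_reporting
# re-read the same file; note it may be served from ARC unless --size exceeds RAM
fio --name=seqwrite --directory=/mnt/yourpool/dataset --rw=read --bs=1M --size=4G --numjobs=1 --ioengine=psync --group_reporting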

Specs:
MB: ASUS PRIME B450M-A (BIOS Version 2409)
CPU: AMD Ryzen 5 3600 6-Core @ 3950 MHz
HVM: Enabled
IOMMU: Enabled
Memory: 32 GiB DDR4-3200
Unraid Array: Samsung SSD 840 Series - 250 GB
ZFS Pool (raidz1-0): 3 x Seagate IronWolf NAS HDD 6TB, SATA 6Gb/s (ST6000VN001)

Zpool created with ashift 12?

Not at creation, but I set it afterwards. Didn’t change anything.
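
As far as I know, ashift is baked into each vdev when the vdev is created, so setting it afterwards only affects vdevs added later. You can check what the existing vdevs actually got with zdb, roughly like this (pool name “tank” is an assumption):

# report the ashift each vdev was created with
zdb -C tank | grep ashift
# if that shows ashift: 9 on 4K-sector drives, the only real fix is to
# destroy and recreate the pool with it forced, e.g.:
# zpool create -o ashift=12 tank raidz sdX sdY sdZ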

I think if you stub a drive for passthrough then it doesn’t count because it’s not technically visible to unraid.

I found out the hard way you can’t do anything with unraid when you have more disks attached than the license allows.

BTRFS is creeping up, but :slight_smile:

Who needs a billion-dollar FS when a new shiny one is better :slight_smile:

P.S. I need to add a new HDD to my porn stash

It’s funny: you can tell a mid-level programmer from a senior programmer, because “throw all this out, it’s trash” usually comes from someone good enough to have an ego, but not good enough to read old code and really understand what it’s doing.

ZFS needs refactoring in places, sure, but that’s not a reason to start again. If you look at the history of btrfs, they tripped over so many, SO MANY, bugs that projects before them had already tripped over.

A great programmer has the humility to learn something from anyone, even the code janitors.

ZFS or die :slight_smile:

I will make you butter my FS :slight_smile: in time

I’m going to Unraid my next build because I am lazy

I found

zpool create dumpster raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde

caused issues with unraid seeing the mount points but

zpool create -m /mnt/dumpster dumpster raidz sdb sdc sdd sde

works fine
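
Presumably that’s because the first form mounts the pool at /dumpster in the filesystem root, while Unraid expects to find it under /mnt. If the pool already exists, it shouldn’t need recreating; something like this moves it (same pool name as above):

# move the existing pool's mountpoint under /mnt so Unraid can see it
zfs set mountpoint=/mnt/dumpster dumpster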

I seem to be stuck pretty hard here making the Samba share. I’ve tried every which way and I always end up with no write access. I have added the h8750 credentials to Credential Manager. What could I be missing here?

[test22]
path = /mnt/dump/dataset
browseable = yes
guest ok = no
writeable = yes
write list = h8750
read only = no
create mask = 0775
directory mask = 0775
vfs objects = shadow_copy2
shadow: snapdir = .zfs/snapshot
shadow: sort = desc
shadow: format = zfs-auto-snap_%S-%Y-%m-%d-%H%M
shadow: localtime = yes
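
After editing a share stanza it can save some head-scratching to sanity-check the config and reload Samba; a rough sketch (the rc script path is what Unraid’s Slackware base normally uses, so treat it as an assumption):

# check that the share definition parses cleanly
testparm -s
# restart Samba so the new/changed share is picked up (path assumed on Unraid)
/etc/rc.d/rc.samba restart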

I see the issue here I just don’t know how to fix it.

root@Tower:/mnt/dump# ls -la
total 3
drwxr-xr-x 6 root root 6 Jan 12 18:43 ./
drwxr-xr-x 6 root root 120 Jan 12 21:45 ../
drwxr-xr-x 2 root root 2 Jan 12 18:35 dataset/
drwxr-xr-x 2 root root 3 Jan 12 18:37 docker/
drwxr-xr-x 2 root root 2 Jan 12 18:43 isos/
drwxr-xr-x 3 root root 3 Jan 12 18:42 vms/

After some more poking around, I found this helped:
chown nobody:users /mnt/dump
chown nobody:users /mnt/dump/dataset
And

chmod 775 /mnt/dump/
chmod 775 /mnt/dump/dataset
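
If new files written over SMB still end up owned by root, one hedged extension of the same idea is to apply it recursively and set the setgid bit so anything created inside keeps the users group (same paths as above):

# recursively hand the dataset to Unraid's nobody:users
chown -R nobody:users /mnt/dump/dataset
# 2775: group-writable, and setgid so new files inherit the users group
chmod -R 2775 /mnt/dump/dataset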

@wendell I tried adding an Optane 900p 280GB as my SLOG device, but I do not see any write speed increase. I’ve set sync=always, compression=lz4, atime=off, and recordsize=1M. I even tried modifying the ZFS tunable parameters, but nothing I do seems to make a difference. I checked zpool iostat -v and saw that the Optane is only doing ~100MB/s writes. When I use the Optane by itself I get ~2000MB/s writes. If I do a CrystalDiskMark test or transfer a large file through SMB, the write speed is the same as without the SLOG. Could the issue be due to the ZFS plugin? Or is there other ZFS tuning I need to do?
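
One quick way to see whether sync writes are even the bottleneck (and therefore whether a SLOG can help this workload) is to temporarily compare sync=always against sync=disabled on the dataset. This is only a diagnostic sketch with assumed pool/dataset names, and sync=disabled shouldn’t be left on for data you care about:

# force every write through the ZIL/SLOG, then run the same SMB or fio test
zfs set sync=always tank/data
# now bypass the ZIL entirely and repeat the test; if throughput barely changes,
# the SLOG was never the limiting factor for this workload
zfs set sync=disabled tank/data
# put it back to the default when done
zfs set sync=standard tank/data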

Hi,
maybe this can explain it a little better?

So does this mean adding a SLOG device won’t increase the write speed even if I set sync=always to push the entire ZIL to the NVMe? Am I still limited to the speed of the spinning disks? That would make sense, because I have a raidz1 pool of 5x10TB Seagate Exos drives and I get around ~550MB/s over SMB. Adding the Optane didn’t bring an improvement in write speed. I just want to clarify before I return this Optane NVMe.

You should be able to add it as a cache device for that pool if I remember correctly.
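
Something like this, assuming the pool is named tank and the Optane shows up as /dev/nvme0n1 (both assumptions); a dedicated log vdev can be removed safely once you no longer want it:

# drop the Optane as a dedicated log device
zpool remove tank /dev/nvme0n1
# re-add it as an L2ARC read cache instead
zpool add tank cache /dev/nvme0n1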