ZFS On Unraid? Let's Do It (Bonus Shadowcopy Setup Guide) [Project]

I ran these commands and now I can access the share:

chmod -R 775 /dumpster

chown -R nobody:users /dumpster

I also have a new write-up on the whole process.

Process to use ZFS in Unraid (thanks to Wendell from Level1Techs for the baseline)

Unraid needs to be running with at least 1 data drive; preferably 1 parity and 1 data drive.

Install Unraid, choose your options, and set it up how you want.

Backup the data from Unraid.

All of the following steps have to be done from the local GUI (monitor and keyboard connected to the Unraid server); do not try to do it over a remote session.

Install ZFS Unraid Plugin (Done via the Unraid Install plugin option)

https://raw.githubusercontent.com/Steini1984/unRAID6-ZFS/master/unRAID6-ZFS.plg

Check which drives you have available under “Unassigned Devices” that are not parity, data, or cache drives.

In my case I am using 9 drives /dev/sdb-/dev/sdj

Using the terminal in Unraid, run the following commands:

(list drives)

lsblk

This will list all hard drives in your system.

(Create Zpool)

zpool create dumpster raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

(substitute raidz2 or raidz3 depending on how much parity you want)

depending on how many drives you have, this may take a few seconds
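The raidz level trades capacity for redundancy: a raidz1, raidz2, or raidz3 vdev dedicates one, two, or three drives' worth of space to parity. A rough back-of-the-envelope capacity check (the 9x 3TB raidz3 numbers are just this build's example layout) can be sketched as:

```shell
# Rough usable capacity of a raidz vdev:
#   usable = (drives - parity) * drive_size
drives=9        # number of drives in the vdev
parity=3        # 3 for raidz3 (use 1 or 2 for raidz1/raidz2)
size_tb=3       # per-drive size in TB
usable_tb=$(( (drives - parity) * size_tb ))
echo "raidz${parity} of ${drives}x${size_tb}TB: ~${usable_tb} TB usable (before ZFS overhead)"
```

For this 9-drive raidz3 that works out to roughly 18 TB before ZFS metadata and padding overhead.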

(add vdev)

zpool add dumpster raidz1 /dev/sdf /dev/sdg /dev/sdh

zpool add dumpster raidz1 /dev/sdi /dev/sdj

(and so on for any remaining drives)

(View Zpool Status)

zpool status

(Zpool Status with vdevs)

Your zpool is set up.

Using the GUI, go to

https://slackware.pkgs.org/14.2/slackonly-x86_64/fio-3.7-x86_64-1_slonly.txz.html

scroll down and click on

Binary Package fio-3.7-x86_64-1_slonly.txz

Make sure you save this file to the ROOT folder (no other folder will work)

Using the terminal, run the following command:

upgradepkg --install-new ./fio-3.7-x86_64-1_slonly.txz

Now onto setting up the ZFS dataset (file system and folder creation)

Using terminal

(ZFS Dataset)

zfs create dumpster/test -o casesensitivity=insensitive -o compression=off -o atime=off -o sync=standard

(ZFS verification)

zfs list

NAME            USED  AVAIL  REFER  MOUNTPOINT
dumpster       32.0G  7.65T   140K  /dumpster
dumpster/test  32.0G  7.65T  32.0G  /dumpster/test

Now onto the testing of the Zpool array

Using terminal

(FIO Disk / pool testing)

fio --direct=1 --name=test --bs=256k --filename=/dumpster/test/whatever.tmp --thread --size=32G --iodepth=64 --readwrite=randrw --sync=1

Results from a raidz3 with 9x 3TB drives

Results from raidz1 with 3 vdevs (9x 3TB drives total)

Now how to share the dataset and make sure permissions are setup correctly:

Using Terminal

chmod -R 775 /dumpster

chown -R nobody:users /dumpster
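To sanity-check the mode bits after the chmod, stat can print them back in octal. Demonstrated here on a scratch directory (/tmp/dumpster-demo is just a stand-in path for illustration, not the actual pool mount):

```shell
# Create a scratch dir and apply the same mode the guide uses
mkdir -p /tmp/dumpster-demo
chmod 775 /tmp/dumpster-demo

# Print the permissions in octal; expect 775 (rwxrwxr-x)
stat -c '%a' /tmp/dumpster-demo
```

Run the same stat against /dumpster on the real pool; nobody:users with 775 is what Unraid's SMB layer expects for shares writable by network users.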

Using a Windows PC connected to the same network, type the server name or IP into the File Explorer address bar as a UNC path, e.g. \\servername or \\2.2.2.2

You should see the share “Data”. Go ahead and open it, check that it lets you create a new folder, and you should see a 32GB file in there called whatever.tmp (left over from the fio test).


Hey Wendell, why did you decide to install all those for the cache, plus muck with Unraid, as opposed to just installing the normal Steam cache bundle and putting it on its own br0 IP on the network?

What normal Steam cache bundle? The one I tried didn’t work with HTTPS SNI, and the lancache container caches way more than just Steam…

I have my ZFS pools set up and my snapshots auto-snapshotting, however… Windows clients cannot see the shadow copies.

I believe it’s because the “samba-vfs-modules” package is missing; the guide mentions nowhere how to install this in Unraid. I see no community plugin, and the GitHub guide gives instructions for installing it on Debian.

Any ideas?

So, I found a lot of the commands for this have flipped arguments. But I have ZFS set up, I have Samba set and shares working, and I have the autosnapshot script running on a schedule.

I see the snaps in Previous Versions in Windows. Problem is this: I set snapshots on dumpster/users, and I also chowned and chmodded the dir recursively, so permissions are correct.

Files in dumpster/users are in the snapshots. All subdirs, e.g. dumpster/users/user1, show snapshots, but they appear to be empty, even from the command line. I tried adding --recursive to the run of the script, but that didn’t resolve it.

One other thing I didn’t see in your original post: how do I share ZFS out over NFS? And would it be a good idea to add an SSD cache to the zpool if I’m using it as a datastore for ESXi?

The samba vfs modules were there for me… oddly. Perhaps it is the snapshot formatting. Can you post ls -l .zfs from your snapshotted folder? Where the snapshots are?

The SMB/vfs formatting requires a double digit number at the end for it to work

zfs-auto-snap_01-2019-11-17-2050/
zfs-auto-snap_01-2019-11-17-2110/
2019-11-17-170000/
zfs-auto-snap_01-2019-11-17-2055/
zfs-auto-snap_01-2019-11-17-2115/
zfs-auto-snap_01-2019-11-17-2037/
zfs-auto-snap_01-2019-11-17-2100/
zfs-auto-snap_01-2019-11-17-2120/
zfs-auto-snap_01-2019-11-17-2048/
zfs-auto-snap_01-2019-11-17-2105/

That looks correct. What about your SMB conf? Can you paste that in? You don’t have a share by the same name elsewhere in the GUI, do you?

/boot/config/smb-extra.conf:

#unassigned_devices_start
#Unassigned devices share includes
include = /tmp/unassigned.devices/smb-settings.conf
#unassigned_devices_end
[migrate]
path = /migrate
browseable = yes
guest ok = no
writeable = yes
write list = totallysecretusername
read only = no
create mask = 0775
directory mask = 0775
vfs objects = shadow_copy2
shadow: snapdir = .zfs/snapshot
shadow: sort = desc
shadow: snapprefix = ^zfs-auto-snap_(frequent){0,1}(hourly){0,1}(daily){0,1}(monthly){0,1}
shadow: localtime = yes

So where did you get this? This is a way different formatting of the snap prefix than what was in the how-to above:

path = /mnt/dumpster/vms
browseable = yes
guest ok = yes
writeable = yes
read only = no
create mask = 0775
directory mask = 0775
vfs objects = shadow_copy2
shadow: snapdir = .zfs/snapshot
shadow: sort = desc
shadow: format = zfs-auto-snap_%S-%Y-%m-%d-%H%M
shadow: localtime = yes

So that shadow: format line is literally telling Samba how to interpret the snapshot names on the file system. Your config’s regex doesn’t match the file system layout, so no shadow copies are found. Mixing how-tos won’t work.

I am betting that is the problem. The formatting here follows the default zfs-auto-snapshot script format, and the format string that you pasted in looks way off, and not from this how-to.
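To see why a mismatch breaks this: Samba only surfaces snapshot directories whose names parse against the shadow: format string, so a name that doesn’t fit is simply invisible as a previous version. A quick grep approximation of the zfs-auto-snap_%S-%Y-%m-%d-%H%M layout (the regex here is just an illustration, not what Samba uses internally):

```shell
# Names that fit zfs-auto-snap_%S-%Y-%m-%d-%H%M: a two-digit field,
# then the date, then HHMM -- approximated as a regex for illustration
pattern='^zfs-auto-snap_[0-9]{2}-[0-9]{4}-[0-9]{2}-[0-9]{2}-[0-9]{4}$'

echo "zfs-auto-snap_01-2019-11-17-2050" | grep -Eq "$pattern" && echo "matches"
echo "2019-11-17-170000"                | grep -Eq "$pattern" || echo "no match -> invisible to SMB"
```

The first name, from the listing above, fits; the bare date-stamped one does not, which is consistent with the “double digit number at the end” requirement mentioned earlier.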

A share named [migrate] doesn’t appear elsewhere in the Unraid config, right?

No, it doesn’t appear anywhere in the config.

I tried the how-to first with no success, tried several things, then fiddled around with it; I just forgot to switch it back before posting the config.

I swapped it back, copied it from your config, still nothing.

Did you restart samba?

yep, but good news, it magically started working lol.

Thanks for your help, this has been interesting.

That was the issue, but Windows caches the Shadow Copy tab for a short time, so it may have thrown you off.

Very weird; I’ve been working on this for several hours and only fiddled with the settings in the last 45 minutes, reloading Samba after every config change. Thanks for your efforts on this how-to :slight_smile:

What would you say the benefit of this is vs. say… Proxmox + Cockpit (with the upcoming ZFS plugin)? Especially since you can’t use most of the built-in Unraid shares stuff.

The VM + VFIO dead-easy passthrough is nice, and the Docker UI is nice.

It may be worth a revisit once the zfs plugin is good to go, for sure


Wendell,

My ZFS array is awesome to work with alongside the Unraid array. I have a Windows VM that is tasked with doing a nightly sync and checking for updates. Best of both worlds with ZFS and Unraid; I just wish the ZFS plugin had a GUI and could set everything up in one place, but I will take what I’ve got.

I wonder how many people have actually implemented this setup, besides GN and me.

@Wendell,

Is there a way for us plebs to pre-load the entirety of steam library for use in a lan event?

Woah, aim high there, little buddy!


Tell it to this guy!