ZFS On Unraid? Let's Do It (Bonus Shadowcopy Setup Guide) [Project]

I will have to look tonight, but the primer only mentions the SLOG and L2ARC. I swear you can add just a cache device.

I've done both: adding the Optane as a SLOG (write cache) and as L2ARC (read cache). The performance was underwhelming. I didn't see much increase in write speeds, but the read speeds were slightly better, especially when I access the pool from multiple clients. I was hoping that the Optane would give me much better write performance, since my workloads include dumping hundreds of gigs worth of video footage a day. If someone else has had a better experience or knows a way to get 1500MB/s or more write speed, please let me know. I am using 40Gbps fiber between the host and client, with the client having PCIe 4.0 NVMe storage. I want to saturate the 40G connection if possible. What setup is best for me to achieve high write speeds on the ZFS pool?

I'm pretty sure ZFS won't sustain speeds much faster than the pool itself.
So for a faster transfer, I would posit that a pool of faster vdevs might be worth investigating. (Mirrors are a good start, flash drives better, NVMe probably the best you can get.)

Or a different method of caching; ZFS wants to protect data, so it goes to lengths to ensure its accuracy, with checksums etc., and it works to store the data safely via redundant vdevs.
A reason for the ZIL/SLOG is to safely store the data while ZFS works out where and how to store it on the pool.
IIRC, it caches approximately 3-5 seconds worth of data before flushing it out to the pool.
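
For reference, attaching a SLOG to an existing pool is a single command; the pool and device names here are hypothetical:

  zpool add tank log /dev/nvme0n1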

You might try making a dummy pool, like with RAM disks, to experiment with speeds etc., but you would need a fair amount of spinning HDDs in mirrors to consume a 40-gig input stream: 40Gb/s is roughly 5GB/s, so at ~200MB/s per mirror vdev you'd be looking at something like 25 vdevs.
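
If you want to try that, here's a minimal sketch using file-backed vdevs on tmpfs (a throwaway pool; all paths hypothetical):

  # create four file-backed "disks" in RAM
  mkdir /dev/shm/zfstest
  truncate -s 2G /dev/shm/zfstest/d{0,1,2,3}
  # stripe two mirrors across them
  zpool create testpool \
    mirror /dev/shm/zfstest/d0 /dev/shm/zfstest/d1 \
    mirror /dev/shm/zfstest/d2 /dev/shm/zfstest/d3
  # benchmark, then throw it away
  zpool destroy testpool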

Or Unraid has that thing where it caches stuff to a single drive, and at the end of the day, if the drive has not died yet, it writes it out to the array?
Not sure if one can mirror the cache drive, or whether one loses everything not yet written out.
I don't understand the Unraid thing, but I am sure it has some cache mechanism.

I guess I have to look into SSD or even NVMe vdevs. It's gonna hurt the wallet for sure tho. I do use the Unraid cache, but the max they allow is 2 cache drives, and they can only be mirrored; it won't let you stripe the drives. Plus, whenever the files are offloaded into the array, the write speed is dependent on the parity drive, so it's very slow, and when that happens the array is unusable.

Forgive me, for I am super new to this kind of stuff, but once I have the zpool created and working, how do I make that storage available over the network? For the time being I am not using any VMs or anything like that; I mainly want a fast network share that has the benefits of ZFS and the UI of Unraid. I have three hard drives that I already plan on using for the Unraid array that was mentioned as a requirement in the video; I'm just not sure what to do with all the SSDs in my zpool to make them accessible as a network drive.

I guess my main question is how you accessed warp and engineering and holodeck through another computer after creating them. I am assuming you got them to show up in the Shares tab of Unraid, but I have no idea how to make that happen.

It looks like you just share your dataset as a Samba share like you would a directory. The datasets just look like directories under /mnt/.
Presumably the Unraid GUI has a page for setting up Samba shares.
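
Something along these lines in smb.conf should do it; the share name, path, and user are hypothetical, borrowing a dataset name from the post above:

  [warp]
  path = /mnt/warp
  browsable = yes
  writable = yes
  valid users = youruser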

The L2ARC drive can be a small £20 120GB SSD. It can be slapped onto a running pool with one command.
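
For example (pool and device names hypothetical):

  zpool add tank cache /dev/sdX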

Probably best to create the pool as a stripe of mirrors.
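
A sketch of that layout, with hypothetical device names:

  # two mirror vdevs striped together into one pool
  zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd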

Or rather, why use Unraid if you're not going to use its amazing magic of being able to throw whatever disks into the box as and when? All other storage systems are quite tricky to add drives to. I extended an LVM XFS system without too much trouble, but it was hardly easy or intuitive. TrueNAS and Proxmox come with ZFS, so they seem the obvious choices.

Just a little hint here.

You can install fio using the Nerdpack plugin.
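
Handy for benchmarking the pool. A sample sequential-write test (the target directory is hypothetical):

  fio --name=seqwrite --directory=/mnt/tank --rw=write --bs=1M --size=8G --numjobs=1 --group_reporting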

Man, I can't believe I'm just now finding this.
I have a TrueNAS box with 8x4TB, an Unraid box with 5x3TB, and another small Ryzen box that I just set up with SnapRAID/MergerFS to play with slapping in a bunch of old laptop drives, because I did not want to pay for another Unraid license.

After seeing this, I'm thinking I can combine them all into the Unraid box: 6x4TB z2 + 5x3TB z2 = 25TB for ZFS, use the old laptop disks for the Unraid array, then take the remaining 2x4TB, slap them in a small system as a ZFS mirror, and take that to my parents' for an off-site backup of critical stuff like family photos.

This is going to be a fun excursion.

Hello,
I've created several datasets to tune parameters, but I've shared my whole ZFS tank in Samba.
Shadow copies refuse to show up for any of my files; they work only in the root of my share.

Layout of datasets:

  tank
  tank/documents
  tank/media
  tank/backup

smb.conf:

  [tank]
  path = /tank
  browsable = yes
  writable = yes
  read only = no
  force user = lshallo
  dfree command = /etc/samba/space.sh
  vfs objects = shadow_copy2
  shadow:snapdir = .zfs/snapshot
  shadow:snapdirseverywhere = yes
  shadow:sort = desc
  shadow:format = -%Y-%m-%d-%H%M
  shadow:snapprefix = ^zfs-auto-snap_(frequent){0,1}(hourly){0,1}(daily){0,1}(monthly){0,1}
  shadow:localtime = no

Samba: 4.11.6
ZFS: 0.8.3

Well, firstly, thank you for this; you made implementing ZFS a breeze on Unraid!

I have a question: can you use the "Recycle Bin" plugin on a ZFS pool?
As I understand it, once you delete something it is gone forever. Is there a way to add a Trash-Bin-like folder?

Thanks!

Yes, for SMB shares it's an smb.conf setting around vfs objects.

Yeeahhhh, that's awesome!!

For those interested I did

  # send deleted files to a hidden .recycle directory instead of removing them
  vfs objects = recycle
  recycle:repository = .recycle
  # keep the directory tree and multiple versions of same-named files
  recycle:keeptree = yes
  recycle:versions = yes
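
One caveat: nothing empties .recycle on its own, so a periodic cleanup job is worth adding. A hypothetical cron entry that purges recycled files older than 30 days (path assumed):

  0 3 * * * find /mnt/tank/.recycle -type f -mtime +30 -delete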

It's two years later. Is this still the case? I'm new to Linux and was about to set up ZFS on Unraid when I realized I'm not using any core Unraid functionality.

Plus, after setting up my new Synology box, I think I'd prefer flexibility. The Synology GUI was nice… until you need something that isn't there. Then you're trying to figure out how they do their magic in an SSH terminal.

This is what I'm setting up:

  • ZFS
    • raid1 (Two SSDs):
      • SQL tempDB
      • SQL log files
    • raidz2 (Eight 4TB WD Red Pros):
      • SQL data files
      • SQL Backups → rsync to Synology
      • file shares → rsync to Synology
      • machine images
  • Docker
    • MSSQL
    • caddy (reverse proxy, static website)
    • discourse forum
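
A hedged sketch of the pool creation for that layout; the device names and recordsize tuning are assumptions, not a definitive recipe:

  # two-SSD mirror for tempDB and log files
  zpool create fast mirror /dev/sda /dev/sdb
  # eight-drive raidz2 for data files, backups, shares, and images
  zpool create tank raidz2 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj
  # databases often like a smaller recordsize than the 128K default
  zfs create -o recordsize=16K tank/sqldata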

They added support for ZFS to Unraid!

So TrueNAS SCALE is maybe the best fit.

Instead of Docker, you can do Podman; the video I did yesterday is a good start on this.

Podman is Docker-compatible and has a web GUI on an alternate port, versus TrueNAS SCALE.
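
If it helps, the CLI is essentially a drop-in match for Docker's; a hypothetical example:

  podman run -d --name caddy -p 8080:80 docker.io/library/caddy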