ZFS On Unraid? Let's Do It (Bonus Shadowcopy Setup Guide) [Project]

Probably best to create the pool as a stripe of mirrors.

Or rather, why use Unraid if you’re not going to use its amazing magic of being able to throw whatever disks into the box as and when? All other storage systems are quite tricky to add drives to. I extended an LVM XFS system without too much trouble, but it was hardly easy or intuitive. TrueNAS and Proxmox come with ZFS, so they seem like obvious choices.

Just a little hint here:

You can install fio using the Nerdpack plugin.
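
If it helps anyone getting started, a basic random-read test with fio might look something like this; the file path and sizes are just placeholders:

  # Hypothetical 4K random-read benchmark against a scratch file on the pool
  fio --name=randread --filename=/mnt/pool/fio-test.tmp \
      --rw=randread --bs=4k --size=4G --direct=1 --group_reporting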

Man, I can’t believe I’m just now finding this.
I have a TrueNAS box with 8x4TB, an Unraid box with 5x3TB, and another small Ryzen box that I just set up with SnapRAID/MergerFS to play with slapping in a bunch of old laptop drives because I did not want to pay for another Unraid license.

After seeing this, I’m thinking I can combine them all into the Unraid box: 6x4TB z2 + 5x3TB z2 = 25 TB usable for ZFS, use the old laptop disks for the Unraid array, then take the remaining 2x4TB, slap them in a small system as a ZFS mirror, and take that to my parents’ for an off-site backup of critical stuff like family photos.

This is going to be a fun excursion.

Hello,
I’ve created several datasets to tune parameters, but I’ve shared my whole ZFS tank via Samba.
Shadow copies refuse to show up for any of my files; they only work in the root of my share.

Layout of datasets:
tank
tank/documents
tank/media
tank/backup

smb.conf:

  [tank]
  path = /tank
  browsable = yes
  writable = yes
  read only = no
  force user = lshallo
  dfree command = /etc/samba/space.sh
  vfs objects = shadow_copy2
  shadow:snapdir = .zfs/snapshot
  shadow:snapdirseverywhere = yes
  shadow:sort = desc
  shadow:format = -%Y-%m-%d-%H%M
  shadow:snapprefix = ^zfs-auto-snap_(frequent){0,1}(hourly){0,1}(daily){0,1}(monthly){0,1}
  shadow:localtime = no

Samba: 4.11.6
ZFS: 0.8.3
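
For reference, with snapdirseverywhere set, each child dataset exposes its own .zfs/snapshot directory, and the names in there have to match the configured snapprefix/format. The snapshot names can be checked against that pattern with something like (paths from the layout above):

  # List the snapshot names Samba will try to match against shadow:format
  ls /tank/.zfs/snapshot
  ls /tank/documents/.zfs/snapshot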

Well, firstly, thank you for this; you made implementing ZFS a breeze on Unraid!

I have a question: can you use the “recycle bin” plugin on a ZFS pool?
As I understand it, once you delete something, it’s gone forever. Is there a way to add a trash-bin-like folder?

Thanks!

Yes, for SMB shares it’s an smb.conf setting around vfs objects.

Yeeahhhh that’s awesome!!

For those interested, this is what I did:

  # Move deleted files into a per-share .recycle directory instead of unlinking them
  vfs objects = recycle
  recycle:repository = .recycle
  # Preserve the original directory tree inside .recycle
  recycle:keeptree = yes
  # Keep numbered copies when the same file name is deleted repeatedly
  recycle:versions = yes

It’s two years later. Is this still the case? I’m new to Linux and was about to set up ZFS on Unraid when I realized I’m not using any core Unraid functionality.

Plus, after setting up my new Synology box, I think I’d prefer flexibility. The Synology GUI was nice…until you need something that isn’t there. Then you’re trying to figure out how they do their magic in an SSH terminal.

This is what I’m setting up (rough pool-creation sketch after the list):

  • ZFS
    • raid1 (Two SSDs):
      • SQL tempDB
      • SQL log files
    • raidz2 (Eight 4TB WD Red Pros):
      • SQL data files
      • SQL Backups → rsync to Synology
      • file shares → rsync to Synology
      • machine images
  • Docker
    • MSSQL
    • caddy (reverse proxy, static website)
    • discourse forum

They added support for ZFS to Unraid!

So TrueNAS Scale is maybe the best fit.

Instead of Docker, you can do Podman; the video I did yesterday is a good start on this.

Podman is Docker-compatible, with a web GUI on an alternate port, vs. TrueNAS Scale.

How is the podman-compose project going? As I understand it… it’s very alpha, and for those who use compose to orchestrate, it might be a deal breaker if it’s not functioning well yet or isn’t a drop-in replacement.

It’s fine. There are a couple of rough spots, but minor changes to your docker-compose YAML are probably all that’s needed if something doesn’t work.

It’ll be fine for anything homelab-ish.
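
As a rough illustration, pointing podman-compose at an existing compose file is usually all it takes (file name assumed):

  # Install podman-compose and run an existing compose file as-is
  pip3 install podman-compose
  podman-compose -f docker-compose.yml up -d
  podman ps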

Podman does not work with gitlab-runner, which is a deal breaker for me.

I looked at TrueNAS Scale and was excited until I saw you trying to manage the VMs. Like I get that it’s easier to get started with these types of appliance distros, but I find it very frustrating to fight a black box with a CLI.

What am I really losing by going with Fedora Server + Cockpit?

The git version from a few weeks ago should work. At least, it has so far for me.

This is very similar to how I did it:

https://der-jd.de/blog/2021/04/16/Using-podman-instead-of-docker-for-your-gitlab-runner-as-docker-executor/

Kind of a lot. Minimally, you’d want to set up ZFS manually, plus the SMB shares and the vfs objects, so you get shadow copies on the SMB share that represent the ZFS snapshots.

You can totally use virsh from the CLI and it’s fine. It just doesn’t show in the TrueNAS GUI (you wouldn’t want it to, because the GUI will clobber your config anyway).
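
For anyone who hasn’t used it, the usual libvirt workflow from the shell looks roughly like this (VM name and XML path hypothetical):

  # Define a VM from an XML file, start it, and check its state
  virsh define /path/to/myvm.xml
  virsh start myvm
  virsh list --all

  # Edit the definition later without touching the GUI
  virsh edit myvm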

Ohhhh interesting. I’ll have to take a look.


Edit: Looks like it may get official support now that paying customers want it.

So is Unraid actually adding proper support for ZFS, or is it still all just talk? I haven’t kept a very good eye on Unraid for a while, but I’m keeping an eye on it now; I haven’t seen much aside from a few minor posts.

I’m trying to move away from FreeNAS and thought Scale would be good, but Docker/Kubernetes gave me nothing but headaches, lol. Right now it’ll probably be TrueNAS Core with an Alpine VM for Docker stuff, but Unraid with native ZFS might be enough for me to cough up the $$$ for the unlimited license, lol.

Run status group 0 (all jobs):
   READ: bw=154MiB/s (161MB/s), 154MiB/s-154MiB/s (161MB/s-161MB/s), io=16.0GiB (17.2GB), run=106862-106862msec
  WRITE: bw=153MiB/s (160MB/s), 153MiB/s-153MiB/s (160MB/s-160MB/s), io=15.0GiB (17.1GB), run=106862-106862msec

On my setup, the numbers are too low, considering the drives are rated at 7200 rpm. This is mainly with forced sync.

fio --direct=1 --name=test --bs=256k --filename=/zfs/isos/test/whatever.tmp --thread --size=32G --iodepth=64 --readwrite=randrw --sync=1
test: (g=0): rw=randrw, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=psync, iodepth=64
fio-3.23
Starting 1 thread
test: Laying out IO file (1 file / 32768MiB)
Jobs: 1 (f=1): [m(1)][12.6%][r=8704KiB/s,w=7424KiB/s][r=34,w=29 IOPS][eta 31m:48s]

System (Unraid 6.10.0-rc3):

  1. Ryzen 3800x
  2. 64GB RAM @ 3200 MT/s
  3. Motherboard: ASUS X570-E
  4. Unraid main array has a Samsung 980 Pro
  5. HDDs connected directly to the MoBo SATA ports.

zpool status
pool: data
state: ONLINE
config:

NAME                                   STATE     READ WRITE CKSUM
data                                   ONLINE       0     0     0
  raidz1-0                             ONLINE       0     0     0
    ata-ST10000NM001G-1                ONLINE       0     0     0
    ata-ST10000NM001G-2                ONLINE       0     0     0
    ata-ST10000NM001G-3                ONLINE       0     0     0

Hey all,

I followed this guide around a year ago with 5 drives in a SilverStone CS380 with 8 bays. Foolishly, I didn’t consider that I could not remove physical disks from the pool, but I would ideally like to fill up the rest of the bays with more HDDs, since I ran into a bit more money recently.

My thoughts are that I could reasonably remove a disk from the pool and have two pools with 4 physical disks each, or simply add 3 more disks to the pool. I’m not super versed on which option would be better for me, and I was hoping to steal some of the smarts from you folks.

zpool status dumpster
  pool: dumpster
 state: ONLINE
  scan: scrub repaired 0B in 04:47:46 with 0 errors on Sat Apr  2 04:42:47 2022
config:

        NAME        STATE     READ WRITE CKSUM
        dumpster    ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
            sdg     ONLINE       0     0     0

Any thoughts worth considering before these drives ship?
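
For context on the first option: a raidz vdev can’t be shrunk, so pulling a disk out of the existing raidz1 would mean destroying and recreating the pool. Growing the pool instead, by adding a second raidz1 vdev, would look something like this (device names hypothetical; by-id paths are safer than sdX):

  # Add a second 3-disk raidz1 vdev; writes then stripe across both vdevs.
  # -f is needed because the new vdev is a different width than the existing 5-disk one.
  zpool add -f dumpster raidz1 \
      /dev/disk/by-id/ata-DISK6 /dev/disk/by-id/ata-DISK7 /dev/disk/by-id/ata-DISK8
  zpool status dumpster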