ZFS On Unraid? Let's Do It (Bonus Shadowcopy Setup Guide) [Project]

How is the podman-compose project going? As I understand it … it’s very alpha, and for those of us who use compose to orchestrate, that might be a deal breaker if it’s not functioning well yet or isn’t a drop-in replacement.

It’s fine. There are a couple of rough spots, but minor changes to your docker compose YAML are probably all that’s needed if it doesn’t work.

It’ll be fine for anything homelabish


Podman does not work with gitlab-runner which is a deal breaker for me.

I looked at TrueNAS Scale and was excited until I saw you trying to manage the VMs. Like I get that it’s easier to get started with these types of appliance distros, but I find it very frustrating to fight a black box with a CLI.

What am I really losing by going with fedora server + cockpit?

The git version from a few weeks ago should work. At least it has so far for me.

This is very similar to how I did it

https://der-jd.de/blog/2021/04/16/Using-podman-instead-of-docker-for-your-gitlab-runner-as-docker-executor/
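
For anyone skipping the link, the gist of that approach (treat this as a sketch; socket paths and the rest of the runner config may differ on your distro) is to expose the Podman API socket and point the docker executor at it:

# enable the Podman API socket the runner will talk to
systemctl enable --now podman.socket

# /etc/gitlab-runner/config.toml (relevant excerpt only)
[[runners]]
  executor = "docker"
  [runners.docker]
    host = "unix:///run/podman/podman.sock"
    image = "alpine:latest"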


Kind of a lot. At a minimum you’d want to set up ZFS manually, plus Samba and the vfs objects, so that you get shadow copies on the SMB share that map to the ZFS snapshots.
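
For a sense of what that means, the Samba side is mostly a vfs_shadow_copy2 stanza pointed at the .zfs/snapshot directory. A minimal sketch (share name, path, and snapshot naming below are placeholders, and shadow:format has to match however your snapshots are actually named):

[share]
    path = /tank/share
    vfs objects = shadow_copy2
    # ZFS exposes snapshots under <dataset mountpoint>/.zfs/snapshot
    shadow:snapdir = .zfs/snapshot
    shadow:sort = desc
    shadow:localtime = yes
    # e.g. for snapshots named auto-2024-01-01_1200
    shadow:format = auto-%Y-%m-%d_%H%M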

You can totally use virsh from the CLI and it’s fine. It just doesn’t show in the TrueNAS GUI (you wouldn’t want it to, because it would clobber your config anyway).
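
For the curious, the day-to-day virsh workflow is just a handful of commands (the VM name and XML path here are made up):

virsh define /path/to/myvm.xml   # register a VM from its XML description
virsh start myvm
virsh console myvm               # attach to the serial console; Ctrl+] to detach
virsh list --all                 # show defined and running VMs
virsh shutdown myvm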

Ohhhh interesting. I’ll have to take a look.


Edit: Looks like it may get official support now that paying customers want it.

So is Unraid actually adding proper support for ZFS, or is it still all just talk? I haven’t kept a very close eye on Unraid for a while, but I’m watching it now and haven’t seen much aside from a few minor posts.

I’m trying to move away from FreeNAS and thought Scale would be good but Docker / Kubernetes gave me nothing but headaches lol. Right now it’ll probably be TrueNAS Core with an Alpine VM for Docker stuff but Unraid with native ZFS might be enough for me to cough up the $$$ for the unlimited license lol.


Run status group 0 (all jobs):
   READ: bw=154MiB/s (161MB/s), 154MiB/s-154MiB/s (161MB/s-161MB/s), io=16.0GiB (17.2GB), run=106862-106862msec
  WRITE: bw=153MiB/s (160MB/s), 153MiB/s-153MiB/s (160MB/s-160MB/s), io=15.0GiB (17.1GB), run=106862-106862msec

On my setup the numbers seem too low, considering the drives are rated at 7200 RPM. This is with forced sync.

fio --direct=1 --name=test --bs=256k --filename=/zfs/isos/test/whatever.tmp --thread --size=32G --iodepth=64 --readwrite=randrw --sync=1
test: (g=0): rw=randrw, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=psync, iodepth=64
fio-3.23
Starting 1 thread
test: Laying out IO file (1 file / 32768MiB)
Jobs: 1 (f=1): [m(1)][12.6%][r=8704KiB/s,w=7424KiB/s][r=34,w=29 IOPS][eta 31m:48s]

System (Unraid 6.10.0-rc3):

  1. Ryzen 3800x
  2. 64GB RAM @ 3200 MT/s
  3. Motherboard: ASUS X570-E
  4. Unraid main array has Samsung 980-pro
  5. HDDs connected directly to the MoBo SATA ports.

zpool status
pool: data
state: ONLINE
config:

NAME                                   STATE     READ WRITE CKSUM
data                                   ONLINE       0     0     0
  raidz1-0                             ONLINE       0     0     0
    ata-ST10000NM001G-1                ONLINE       0     0     0
    ata-ST10000NM001G-2                ONLINE       0     0     0
    ata-ST10000NM001G-3                ONLINE       0     0     0
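
For what it’s worth, the fio run above uses the default psync engine, so iodepth=64 doesn’t actually add any queue depth, and --sync=1 makes every write O_SYNC, which raidz1 without a SLOG handles slowly. A comparison run without those constraints might look something like this (same file and block size; just a sketch, not a tuned benchmark):

# async engine so the queue depth applies, and no forced O_SYNC writes
fio --direct=1 --name=test --bs=256k --filename=/zfs/isos/test/whatever.tmp --ioengine=libaio --size=32G --iodepth=64 --readwrite=randrw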

Hey all,

I followed this guide around a year ago with 5 drives in a SilverStone CS380 with 8 bays. Foolishly, I didn’t consider that I could not remove physical disks from the pool, but I would ideally like to fill up the rest of the bays with more HDDs since I came into a bit more money recently.

My thought is that I could either remove a disk from the pool and end up with two pools of 4 physical disks each, or simply add 3 more disks to the existing pool. I’m not super versed in which option would be better for me, and I was hoping to steal some of the smarts from you folks.

zpool status dumpster
  pool: dumpster
 state: ONLINE
  scan: scrub repaired 0B in 04:47:46 with 0 errors on Sat Apr  2 04:42:47 2022
config:

        NAME        STATE     READ WRITE CKSUM
        dumpster    ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
            sdg     ONLINE       0     0     0

Any thoughts worth considering before these drives ship?
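
If it helps frame the options: raidz vdevs can’t be shrunk or removed with zpool remove, so splitting off a second pool would mean destroying and rebuilding, whereas growing the pool is just adding a second raidz1 vdev. A rough sketch of the latter (device names here are placeholders; by-id paths are safer than sdX):

# dry run first (-n only prints the resulting layout)
zpool add -n dumpster raidz1 ata-NEWDISK1 ata-NEWDISK2 ata-NEWDISK3
# then for real
zpool add dumpster raidz1 ata-NEWDISK1 ata-NEWDISK2 ata-NEWDISK3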

Has anyone been able to get the fio file?
I built a new server and am unable to find it, and I’d like to test the ZFS speeds.

So with Unraid officially rolling out ZFS, what’s the word from everyone who has tried it out?

Hello, it’s working fine, but it’s very rudimentary and basic. The GUI does not allow any special config, like SLOG or L2ARC, and no special vdevs at this point.

But as a working ZFS setup via a few clicks in a GUI, it works fine.


I use ZFS as the main array format, and I also have 3 pools with NVMe drives; they work well.
Very basic, but you can do stuff via the command line.
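
Since the GUI doesn’t cover SLOG / L2ARC / special vdevs yet, the stock OpenZFS commands still work from a shell. A rough sketch (pool and device names are placeholders):

# SLOG (separate intent log) for sync writes
zpool add tank log /dev/disk/by-id/nvme-slog-device
# L2ARC read cache
zpool add tank cache /dev/disk/by-id/nvme-cache-device
# special vdev for metadata; mirror it, since losing it loses the pool
zpool add tank special mirror /dev/disk/by-id/nvme-a /dev/disk/by-id/nvme-b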

Has anyone ever managed to get shadowcopy to work with nested datasets? Since each dataset has its own snapshots, they appear in /share/sub/.zfs/, but shadowcopy expects them in /share/.zfs/, so it doesn’t work.
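
One thing that might be worth trying (untested here, just what vfs_shadow_copy2 documents for nested snapshot directories): shadow:snapdirseverywhere makes Samba look for the snapshot dir along the whole path instead of only at the share root, and crossmountpoints is usually needed since child datasets are separate mounts. A sketch, with a placeholder path and snapshot naming:

[share]
    path = /mnt/pool/share
    vfs objects = shadow_copy2
    shadow:snapdir = .zfs/snapshot
    shadow:snapdirseverywhere = yes
    shadow:crossmountpoints = yes
    # still has to match the snapshot names on every dataset
    shadow:format = auto-%Y-%m-%d_%H%M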