ZFS On Unraid? Let's Do It (Bonus Shadowcopy Setup Guide) [Project]

Turns out that worked! I just mis-clicked Shutdown instead of Reboot :sweat_smile:

Thanks for the help!

2 Likes

I can fully understand why there was the one Windows 10 VM with passthrough, because FFmpeg VA-API is still spotty on Linux. ESPECIALLY on Navi.

But I set up a from-scratch build of FFmpeg with AMD VCE enabled, driven by batch scripts. HandBrake still hasn't fixed their variable frame rate issue that wreaks havoc with Premiere.

FFmpeg with -r 60000/1001 is good for compressing their A-roll and B-roll. The problem with recalling A-roll from their renders is that Premiere has already turned 59.94fps into 60.00fps. Raw footage should follow the framerate of the source file.
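A single pass ends up looking roughly like this (filenames and the bitrate are just placeholders, and h264_amf is only there in builds compiled with AMF support):

ffmpeg -i a-roll_raw.mkv -r 60000/1001 -c:v h264_amf -b:v 50M -c:a copy a-roll_5994.mp4

The -r 60000/1001 pins the output to true 59.94 instead of letting it drift to a rounded 60.00.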

Blame the NTSC for deciding that drop-frame framerates were what was needed to bring color TV to market.

I am able to see my shares in Windows, but when creating or copying a file over I get an error saying I need permission.
Here is what I have set up so far:
#unassigned_devices_start
#Unassigned devices share includes
include = /tmp/unassigned.devices/smb-settings.conf
#unassigned_devices_end

[data]
path = /dumpster
browseable = yes
guest ok = yes
writeable = yes
read only = no
create mask = 0775
directory mask = 0775
vfs objects = shadow_copy2
shadow: snapdir = .zfs/snapshot
shadow: sort = desc
shadow: format = zfs-auto-snap_%S-%Y-%m-%d-%H%M
shadow: localtime = yes

[test2]
path = /dumpster

# Secure
public = yes
writeable = yes
write list = mike, test, mytime34

Not comma separated? Space separated is fine… What are the file system permissions for it from the terminal? e.g. ls -l /
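Also worth a sanity check that the shadow: format string lines up with the snapshot names you actually have; something like this will show them (pool name taken from your paste):

zfs list -t snapshot -o name -s creation dumpster
ls /dumpster/.zfs/snapshot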

Should be nobody:users with 775 on the root path ‘dumpster’. Though making an additional dataset under dumpster is recommended
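If you go that route, a minimal sketch would be something like this (the dataset name "data" is just an example, to match the [data] share):

zfs create dumpster/data
chown nobody:users /dumpster/data
chmod 775 /dumpster/data

Then point the share's path at /dumpster/data instead of /dumpster.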

Wendell,
Thank you for the response.
I am used to using Unraid, but not the command line (at least not in Linux).

ls -l /
total 9
drwxr-xr-x 2 root root 3720 Nov 24 2018 bin/
drwxrwxrwx 9 root root 8192 Nov 12 21:01 boot/
drwxr-xr-x 15 root root 4920 Nov 12 22:47 dev/
drwxr-xr-x 3 root root 3 Nov 12 20:42 dumpster/
drwxrwxrwx 54 root root 2880 Nov 12 22:46 etc/
drwxr-xr-x 2 root root 40 Jun 25 13:18 home/
lrwxrwxrwx 1 root root 10 Jun 25 13:18 init -> /sbin/init*
drwxrwxrwx 7 root root 140 Jul 11 03:26 lib/
drwxr-xr-x 5 root root 3340 Nov 10 15:16 lib64/
drwxrwxrwx 3 root root 60 Nov 12 21:29 media/
drwxr-xr-x 9 root root 180 Nov 12 22:46 mnt/
drwx--x--x 3 root root 60 Nov 12 17:18 opt/
dr-xr-xr-x 558 root root 0 Nov 10 15:15 proc/
drwx--x--- 2 root root 160 Nov 12 22:53 root/
drwxr-xr-x 9 root root 240 Nov 12 17:18 run/
drwxrwxrwx 2 root root 5000 Jul 11 03:26 sbin/
dr-xr-xr-x 13 root root 0 Nov 10 15:15 sys/
drwxrwxrwt 15 root root 320 Nov 13 07:37 tmp/
drwxrwxrwx 15 root root 360 Jul 11 03:26 usr/
drwxr-xr-x 14 root root 320 Jan 9 2017 var/

So yep

I think you want

chown nobody:users /dumpster

And

chmod 775 /dumpster

Then restart samba
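On Unraid (Slackware under the hood) that's usually something like:

/etc/rc.d/rc.samba restart

or, if you only changed smb.conf, a reload is enough:

smbcontrol all reload-config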

Thank you again.

I have the following setup and am hoping to make a "better" Unraid setup:
TR 2920X
32GB RAM
4x 1TB 660p NVMe drives (drives for VMs)
3x 2TB 660p NVMe drives (Unraid cache)
4x 6TB HGST 7200rpm SAS 12Gb/s drives (2x parity, 2x array drives), on the lower controller
9x 3TB HGST 5400rpm SATA drives (ZFS pool), on the upper controller
2x GTX 1070s
DS4226 with dual controllers, connected to a 12Gb/s HBA

I want a nice fast array, but I also want a backup with redundancy.

I do have a question about the snapshot installation. Did you install that from the GUI or did you do it remotely? I tried remotely and was getting errors

From the GUI, just as in the link to Unraid's forum.

Once you've got the script at some path, you can call it from User Scripts on a schedule.
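As a rough example, the whole User Scripts entry can be as small as this (the script path is just wherever you saved it, and the label/keep values are up to you):

#!/bin/bash
# hourly snapshot of the dumpster pool, keep the last 24
/boot/scripts/zfs-auto-snapshot.sh --quiet --syslog --label=hourly --keep=24 dumpster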

I started over and followed your posting to the letter (unlike last time, when I was using the remote setup).

root@Pughhome:~# fio --direct=1 --name=test --bs=256k --filename=/mnt/dumpster/test/whatever.tmp --thread --size=32G --iodepth=64 --readwrite=randrw ( --sync=1 )
-bash: syntax error near unexpected token `('

Fio is just for testing so you could skip that bit. Which part are you stuck on?
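If you do want to run it anyway, the parentheses around --sync=1 are what bash is tripping on; they look like they were only marking the flag as optional. Dropped, it'd be:

fio --direct=1 --name=test --bs=256k --filename=/mnt/dumpster/test/whatever.tmp --thread --size=32G --iodepth=64 --readwrite=randrw --sync=1

One other gotcha: fio creates the file but not the directory, so the path in --filename has to exist first.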

I am trying to run the FIO test. I want to see how well the pool runs

I did have to change one part of the process.
I had to delete the "raidz" option, as it would not initiate; once I ran it without that, it created the zpool without issue.
zpool create raidz dumpster /dev/sdb /dev/sdc /dev/sdd /dev/sde

root@Pughhome:~# zpool status
pool: dumpster
state: ONLINE
scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    dumpster    ONLINE       0     0     0
      sdb       ONLINE       0     0     0
      sdc       ONLINE       0     0     0
      sdd       ONLINE       0     0     0
      sde       ONLINE       0     0     0
      sdf       ONLINE       0     0     0
      sdg       ONLINE       0     0     0
      sdh       ONLINE       0     0     0
      sdi       ONLINE       0     0     0
      sdj       ONLINE       0     0     0

errors: No known data errors

Here is the message I get when trying to run the FIO without the (--sync) option:

root@Pughhome:~# fio --direct=1 --name=test --bs=256k --filename=/mnt/dumpster/test/whatever.tmp --thread --size=32G --iodepth=64 --readwrite=randrw
test: (g=0): rw=randrw, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=psync, iodepth=64
fio-3.7
Starting 1 thread
test: Laying out IO file (1 file / 32768MiB)
fio: pid=0, err=2/file:filesetup.c:161, func=open, error=No such file or directory

Run status group 0 (all jobs):

You may need raidz1: I'm not sure this pool isn't just a big stripe, which is not great.
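The vdev type goes between the pool name and the devices, so the raidz1 version would look roughly like (this wipes the pool, so only do it while there's nothing on it):

zpool destroy dumpster
zpool create dumpster raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj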

1 Like

Here is my ZFS list

root@Pughhome:~# zfs list
NAME            USED  AVAIL  REFER  MOUNTPOINT
dumpster       1.31M  23.7T    96K  /dumpster
dumpster/test    96K  23.7T    96K  /dumpster/test

I will destroy the zpool and try raidz1.

Thank you again for the help

Np. I can remote in, worst-case scenario. It's helpful for me to know what trips up newbs so I can write better guides.

1 Like

I found the issue with the zpool creation.

I had to put "dumpster" first and "raidz" after it; now my array is correct.
root@Pughhome:~# zpool create dumpster raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj
root@Pughhome:~# zpool status
pool: dumpster
state: ONLINE
scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    dumpster    ONLINE       0     0     0
      raidz1-0  ONLINE       0     0     0
        sdb     ONLINE       0     0     0
        sdc     ONLINE       0     0     0
        sdd     ONLINE       0     0     0
        sde     ONLINE       0     0     0
        sdf     ONLINE       0     0     0
        sdg     ONLINE       0     0     0
        sdh     ONLINE       0     0     0
        sdi     ONLINE       0     0     0
        sdj     ONLINE       0     0     0

errors: No known data errors
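Now I just need to recreate the test dataset and set the permissions again before the share and the fio run will work, something like:

zfs create dumpster/test
chown nobody:users /dumpster /dumpster/test
chmod 775 /dumpster /dumpster/test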

2 Likes