ZFS On Unraid? Let's Do It (Bonus Shadowcopy Setup Guide) [Project]

Well, that eliminates "not enough PCIe lanes" as the cause.

What are the NVMe drives being used for? Cache drives for the pool?
Going by the book, for that amount of storage you should have more RAM, but how much disk space are you actually using at the moment?

All the drives are empty. This setup is a replacement storage system for an 8-year-old 8x3TB Windows Home Server box and some random 5TB and 6TB external drives.

The NVMe drive was to try out tiered storage in Storage Spaces to see if it would increase performance, but that was harder to set up and not well supported outside of the Windows Server OS. I want to use the NVMe for fast network transfers to the server, editing off it, multiple-user access, etc.; not really sure yet.

Regarding RAM, I was thinking of upgrading to 64GB later on. I heard in the video that 3 vdevs in a pool would saturate a 10Gbps NIC, so I was hoping that 2 vdevs would get me at least halfway there, or at least be comparable to RAID 5 or 6: ~400MB/s writes and a bit more for reads.

OK, so the NVMe drives are not being used as cache drives in ZFS. That eliminates them.
One thing that came to mind is that maybe the 1700 can't handle pushing all of that data, IPC-wise. Or maybe your RAM speed is low, causing some of the data movement to be slow? To be honest I'm sort of at a loss, but I want to help because I'm about to do something similar with Proxmox.

1 Like

Yeah, I'm at a loss too, because the setup and the use case are very similar to Steve's from Gamers Nexus. In the video they have a Ryzen 5 3600(X), they start off with 16GB of ECC RAM, they have 3 vdevs in their zpool, and they have an NVMe drive as the only array device in Unraid.

I was able to get "decent" speeds with Storage Spaces on the same hardware, and double the performance with RAID 6 on a RocketRAID card, so I'm really at a loss as to why ZFS performs so poorly. I even tried a fio test with 1GB and still got the same results.
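For reference, a fio sequential-write run along those lines would look roughly like this (the mount point and job parameters are illustrative, not the exact test above):

# 1GB sequential write with 1M blocks; fsync at the end so the result isn't just RAM caching
fio --name=seqwrite --directory=/mnt/zpool --rw=write --bs=1M --size=1G --end_fsync=1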

What's the speed of your RAM?

I don't think the RAM speed would affect it much, since it's running at standard first-gen Ryzen speeds.

[screenshot: raidz]

Note that the command given on the ZFS plugin line doesn't seem to create a real raidz array, as the resulting size is much smaller. Wendell's instructions are the better option.
Using the .sh script wasn't an easy job for me, as I'm not a Linux user.
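For comparison, creating the raidz vdev explicitly looks roughly like this; the pool name, mount point, ashift, and device names here are only placeholders, so double-check your own device IDs before running anything destructive:

# create a 6-disk raidz1 pool mounted under /mnt (use /dev/disk/by-id paths on real hardware)
zpool create -o ashift=12 -m /mnt/dumpster dumpster raidz1 \
    /dev/disk/by-id/ata-SSD1 /dev/disk/by-id/ata-SSD2 /dev/disk/by-id/ata-SSD3 \
    /dev/disk/by-id/ata-SSD4 /dev/disk/by-id/ata-SSD5 /dev/disk/by-id/ata-SSD6

# confirm the layout and the usable capacity
zpool status dumpster
zfs list dumpster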

I had a hard time understanding how important it is to point the snapshotted share at a dataset rather than at the root of the pool, otherwise it will not work.
path = /mnt/dumpster/vms
I was trying: path = /mnt/dumpster/
and ended up with empty snapshots.
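A minimal sketch of why that matters, assuming a pool called dumpster mounted at /mnt/dumpster with a child dataset vms: each dataset exposes its own hidden .zfs/snapshot directory, so the share path has to point at the dataset whose snapshots you want to browse.

zfs create dumpster/vms
zfs snapshot dumpster/vms@test

ls /mnt/dumpster/vms/.zfs/snapshot   # shows "test"
ls /mnt/dumpster/.zfs/snapshot       # only shows snapshots of the pool's root dataset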

Thanks a lot for that content and the tutorial, it definitely pushed me towards Unraid. However, I would never have imagined how much time it takes to set up and debug a VM, with all the tiny bugs that come with it. It would have been impossible without resources like these.

I wanted to use Unraid's cache to do video editing, but it's not a good idea. For some obscure reason it is super slow with a lot of small files and was making Resolve buggy. Thanks to you, I now have a raidz pool with 6 SATA SSDs.
It works (nearly) great, and I'm a happy man now!!

1 Like

@wendell

Question: How do you proactively remove a drive from a pool before it fails?

I've got some pretty old drives in the pool and some of the SMART numbers are worrying me. I'd prefer to get them out of the pool before they actually fail. Is that possible, or do I have to back up the pool and recreate it?

As no one else has chimed in yet, I'll give you a starting point for your googling…

If the drive is still more or less okay, the easiest way is to put another drive into the machine, use zpool attach to copy the old drive's data onto the new drive, then zpool detach the old drive from the array once the new drive has resilvered, whenever it's convenient for you.
You could instead use the zpool replace function with the new and old drives both connected, and the system will kick the old drive offline once the new one has resilvered.

Depending on your redundancy, you could technically offline the old drive, swap it for a new one, and then replace the now-"missing" drive with the new one, but in my opinion it's better to keep all drives in place while resilvering, for both speed and redundancy.

Please be aware that zpool attach is very different from zpool add, and the two must not be confused.
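A rough sketch of the replace route, with placeholder pool and device names (check yours with zpool status first):

# see the pool layout and identify the ailing drive
zpool status tank

# with both old and new drives connected, swap the old member for the new one;
# ZFS resilvers onto the new drive and drops the old one when it finishes
zpool replace tank /dev/disk/by-id/ata-OLD_DRIVE /dev/disk/by-id/ata-NEW_DRIVE

# watch resilver progress
zpool status -v tank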

1 Like

Can the new drive be of a bigger size and work in these scenarios? I would prefer to replace this old 4TB with an 8TB or greater.

1 Like

Yeah, same size or bigger.
It generally won't use the extra space until every drive in the vdev has been upgraded, but having the extra capacity sitting there for later doesn't hurt anything.
It will not work with a smaller drive though, just in case you were wondering.
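One footnote on that, as a sketch with a placeholder pool name: once every member of the vdev has been swapped for a larger drive, the extra capacity only shows up with the autoexpand property set, or after expanding the devices by hand.

# let the pool grow automatically once all vdev members are larger
zpool set autoexpand=on tank

# or expand an already-replaced device manually
zpool online -e tank /dev/disk/by-id/ata-NEW_DRIVE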

One cool thing is that you can replace a drive before it goes offline completely, and that's generally faster than resilvering a dead one from parity.

I will often replace 3 or 4 drives at once when changing pool capacity, and it works fine.

2 Likes

Neat.
Turns out I managed to make a 6-SSD ZFS pool unrecoverable in only a few weeks, haha.
I searched online about the ZFS errors and got confirmation that I couldn't do much.
Possibly two SSDs got errors, probably due to a forced reboot.
However, I didn't get any notification in the UI.
I need to find a script to display the status info somewhere.
Use the command:
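(presumably zpool status, which shows pool health, per-device read/write/checksum error counters, and any scrub or resilver in progress:)

# full detail, including per-device error counters
zpool status -v

# terse health summary, handy for a periodic notification script
zpool status -x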

Hopefully no critical data was lost. However, I would love an easy way to back up datasets between zpools.
That way I could send my SSD data to some HDD-based pool.
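Plain zfs send piped into zfs receive can do exactly that without extra tooling; a minimal sketch with made-up pool and dataset names:

# one-off full copy of a snapshot from the SSD pool to an HDD pool
zfs snapshot ssdpool/projects@backup-1
zfs send ssdpool/projects@backup-1 | zfs receive hddpool/projects

# later, send only the changes since the last common snapshot
zfs snapshot ssdpool/projects@backup-2
zfs send -i @backup-1 ssdpool/projects@backup-2 | zfs receive hddpool/projects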

znapzend doesn't allow the script used in this topic to be run alongside it, so no shadow copies.
zrep and sanoid aren't Unraid-compatible.

I'm curious to know what you guys are using?

Footnote: I ended up setting up multiple scripts on cron schedules,
using the argument --label=XXX (frequent, hourly, daily, and so on).

Edit: this doesn't work with shadow copies, as the different labels produce multiple snapshot name formats.
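For reference, the kind of cron entries I mean look roughly like this (the script path, dataset, and retention counts are only examples):

# crontab entries calling the snapshot script with different labels
*/15 * * * *  /boot/scripts/zfs-auto-snapshot.sh --quiet --label=frequent --keep=4  -r tank
0 * * * *     /boot/scripts/zfs-auto-snapshot.sh --quiet --label=hourly   --keep=24 -r tank
0 3 * * *     /boot/scripts/zfs-auto-snapshot.sh --quiet --label=daily    --keep=31 -r tank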

Hi Wendell,

This is my first time replying/posting anything here. I love your posts on the Level1Enterprise channel,

but you have not updated it for some time now. As you said in this video,
<RAID: Obsolete? New Tech BTRFS/ZFS and "traditional" RAID>,
never rely your RAID system on a RAID card alone. I have been looking for a good solution with decent processing power and a replaceable processing unit for a storage system, and I found this:

<RAID No More: GPUs Power NSULATE for Extreme HPC Data Protection> (YouTube)
"A Highly Reliable GPU-Based RAID System" by Matthew L. Curry

Will this become a new way of doing storage at the consumer level? A RAID controller has onboard processors, its own RAM, and very high bandwidth. If we can harness the processing power of GPUs and the big PCIe Gen 4 bandwidth of the new platforms, would something like dual GPUs as a RAID processing powerhouse, with cheap hard drives as the storage cells for the file system, be possible?

Just wondering if you have any experience with this kind of setup? It might pull some of those GPUs back from the very edge of the recycle bin.

Thank you.

1 Like

Interesting!

But you should know that most modern CPUs have SIMD capabilities that are insanely fast, to the point that GPUs probably aren't needed. GPUs are also not quite as accurate. Having helped some university folks DIY some GPU-based solutions… funny business happens with GPUs a lot more often than errors happen in CPU-based calculations. That can probably be dealt with in software.

If you pay attention when assembling a softraid (md) array on Linux, you'll notice it benchmarks how fast AVX2/SSE/etc. run on that particular system.
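If you're curious, those results end up in the kernel log; something along these lines will show them (the actual numbers vary per machine):

# the md raid6/xor code logs per-algorithm throughput (avx2, ssse3, ...) at module load
dmesg | grep -iE 'raid6|xor'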

At array speeds like 20 gigabytes/sec+, bandwidth to main memory becomes a real problem for these kinds of on-the-fly calculations.

2 Likes

Thank you for the response, that helps a lot. It's very helpful to have someone who has already worked with this type of RAID configuration and is sharing their experience.

The Level1 team has been doing a fantastic job bringing enterprise-level tech down to general consumer IT, for both practical use and education. Looking forward to your future videos. :+1: :+1: :+1:

1 Like

Thanks for the great guide and for listing all the required steps. I managed to create a RAIDZ1 pool and it works like a charm. Very pleased.
After a week of having the ZFS pool online I moved on to the next step, but I'm stuck here:

Do I have to follow all the steps in the GitHub link?
Because I only managed to do:

$ git clone https://github.com/zfsonlinux/zfs-auto-snapshot.git
$ cd zfs-auto-snapshot

With the next command I'm already stuck, and I don't know how to proceed:

root@unRAID:/zfs-auto-snapshot# git merge origin/leecallen
error: Merging is not possible because you have unmerged files.
hint: Fix them up in the work tree, and then use 'git add/rm <file>'
hint: as appropriate to mark resolution and make a commit.
fatal: Exiting because of an unresolved conflict.

Any help would be appreciated.
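(Side note in case it helps anyone hitting the same message: if the working tree is stuck in a half-finished merge and nothing local needs keeping, one way back to a clean state is:)

# abort the conflicted merge and check where the checkout stands
git merge --abort
git status

# or just start over from a fresh clone
cd .. && rm -rf zfs-auto-snapshot
git clone https://github.com/zfsonlinux/zfs-auto-snapshot.git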

all you really need is this:


Copy it to somewhere you can run it, and run it manually to get a feel for how it works. Then put the same command you're running manually into a scheduled task, and it will create the snapshots on that schedule.
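A minimal sketch of that, assuming the script was saved as /boot/scripts/zfs-auto-snapshot.sh and the pool is called tank (adjust the path, dataset, and retention to taste):

chmod +x /boot/scripts/zfs-auto-snapshot.sh

# take recursive snapshots of everything under tank, keeping the last 24 with this label
/boot/scripts/zfs-auto-snapshot.sh --label=hourly --keep=24 -r tank

# list the snapshots it created
zfs list -t snapshot -o name,creation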

Alright, it's that simple. Got it, thanks :smiley:

Edit:
Ok, next (hopefully last) problem.
I was able to create some snapshots, and edited/saved a text file in between in the folder that is supposed to be snapshotted.
My SMB settings are (applied and done):

[Privat]
path = /tank/privat/
browseable = yes
guest ok = yes
writeable = yes
read only = no
create mask = 0775
directory mask = 0775
vfs objects = shadow_copy2
shadow: snapdir = .zfs/snapshot
shadow: sort = desc
shadow: format = zfs-auto-snap_%S-%Y-%m-%d-%H%M
shadow: localtime = yes

And when I navigate in the terminal to the path:

root@unRAID:/tank/.zfs/snapshot

I see some snaps:

zfs-auto-snap_01-2020-04-26-1621/ zfs-auto-snap_01-2020-04-26-1623/ zfs-auto-snap_01-2020-04-26-1624/ zfs-auto-snap_01-2020-04-26-1656/ zfs-auto-snap_01-2020-04-26-1657/

I did a Samba reload in the terminal, but I can't get "Previous Versions" to appear in the properties of the test.txt I edited.

Any ideas?

Edit 2:
@wendell :smiley:

Edit 3:
Ok it is working now. Somehow it took some time…?! Very happy. :partying_face:

2 Likes