ZFS On Unraid? Let's Do It (Bonus Shadowcopy Setup Guide) [Project]

I keep seeing people ask why one might do this.

One reason you might really want to use Unraid is the simplicity of the user-created apps ‘store’.

Once that plugin is installed you gain access to a bunch of user-generated content that is a click away.

Install any one of these Docker containers and boom, you’re off to the races. There’s a ton of stuff on here, and it’s a one-click install (with maybe a little initial setup). This is extremely appealing and more akin to a software center than what most people think of when they create Docker containers.

Some might prefer a more granular approach with greater control, but I just wanted something to just work™, and so far Unraid has done a better job than FreeNAS for me. I had some problems with FreeNAS that forced me to troubleshoot it from time to time, seemingly out of nowhere. I don’t have to do that with Unraid.

It should be said that I’m not a slick daddy sysadmin or a code monkey. I’m just a user, and I’m coming from a Synology box. This is just enough rope for me to do something with. I considered the roll-your-own solution with Ubuntu 19.10 and Cockpit but opted for this instead, and I’m glad I did.

7 Likes

Long time fan, first time posting, but this build is what I’ve been looking for these past several years, as I feel it will meet my needs well into the future. If part 2 doesn’t cover it, could you go in depth on how the storage arrays are configured? I like the idea of having an array on the first box and being able to add a disk shelf as my needs grow, but I’m unsure how they are configured or what steps need to be taken when adding one to an already existing Unraid device. I get that some sort of HBA card would be needed to connect the two boxes, but if you could walk through configuring and adding it, that would be helpful.

2 Likes

This is probably not documented anywhere and it’s not a good long-term solution, but you can set up VMs with PCIe passthrough devices using vm-bhyve, which comes preinstalled. You’ll have to set the rc tunables vm_enable=YES and vm_dir=zfs:${your storage pool/dataset}, set the loader tunable pptdevs=${your PCI devices to reserve for passthrough}, and do the configuration of the VMs on the command line with the vm command. It may not have an easy polished GUI (and you’ll lose the convenient web VNC client), but where there’s a will there’s a way.
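For anyone curious, here’s a rough sketch of what that looks like end to end. The dataset tank/vms, the guest name, the ISO URL, and the PCI address 4/0/0 are all placeholders; get your real addresses from pciconf -lv, and on FreeNAS the rc/loader values would normally be added as tunables in the UI rather than by editing the files directly:

```
# /etc/rc.conf (or the equivalent rc tunables)
vm_enable="YES"
vm_dir="zfs:tank/vms"          # dataset that will hold the VMs (placeholder)

# /boot/loader.conf (or the equivalent loader tunable): reserve the device for passthrough
pptdevs="4/0/0"                # bus/slot/function of the device(s) to pass through (placeholder)

# one-time datastore init, then create and configure a guest on the CLI
vm init
vm create -s 50G myguest
vm configure myguest           # add a line like: passthru0="4/0/0"
vm iso http://example.com/installer.iso
vm install myguest installer.iso
vm start myguest
```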

I know you’re already well finished with this project but I figured someone might find this idea useful anyway.

3 Likes

I attempted this and it worked on the 12.x branch but not on 11 on AMD, though with more fiddling I might have been able to get it working.

I did have better luck on FreeBSD…

https://clonos.tekroutine.com/ might also be worth trying, but I haven’t had a chance to yet.

If nothing else, I wonder if stuff from this can be copy-pasted into FreeNAS… :smiley: :smiley: :smiley:

1 Like

I don’t think I’ve had a chance to try that one out yet either, but it does look appealing.
In general I tend to stick with vanilla FreeBSD. I know my way around it well enough to make it work for my basic needs. I don’t get too fancy with things, and when I do find weird bugs I get satisfaction out of trying to fix them and submit a patch.

I think FreeNAS 12 is switching to libvirt for managing the bhyve VMs, and I saw commits about fixing PCI passthrough, which presumably means they intend to expose that feature somehow. I haven’t messed with that in FreeNAS at all myself, but I have been hacking on libvirt on vanilla FreeBSD, adding a few missing features to the bhyve backend and the ZFS storage backend. I honestly don’t blame people for running Linux when they need to host VMs.

Thanks for the reply. I guess we’ll have to see in the second part what those specific use cases were.
Anyway, if a tech channel can’t handle a bit of NAS/server/Linux configuration, they’re not very “tech”, are they? :wink:

Proxmox VFIO howto: https://pve.proxmox.com/wiki/PCI(e)_Passthrough

Note that I just found the page; I don’t pretend to know what you’d use it for or why…

@wendell I just found this thread. One thing I would note, and I am sure you already have a way to overcome it: you have listed that you want to move to 64GB of 2666 on this setup.

I have both the regular and the 2T board, and the 2T board’s documentation seems to be a bit out of date compared to the X470D4U’s, but if you look at the memory chart for Matisse on the D4U board (which should have the same limitations for both):

You are limited to only 2400 with all four slots populated with dual-rank (DR) memory. I don’t think you can get 16GB DIMMs that are ECC and single-rank (SR), right (assuming you are using 16GB DIMMs)? So you may be in the same boat I am, stuck at DDR4-2400 to get to 64GB. Still not that bad, but I wanted to call it out.

There isn’t really a perf penalty from running slower memory when you have more of it, because the controller operates the ranks in parallel. Usually it’s pretty close. Plus you may be able to OC a bit, since the IMC is quite good on Ryzen 3000.

Agreed. I did upgrade, though, from a 2700X (fire sale at Micro Center) to a 3700X just to get the bump in perf.

My DDR4-2400 ECC 16GB sticks dropped to 1866 on the 2700X, which was too low for me, so I returned the 2700X and forked over the cash for the 3700X to get DDR4-2400 speed across four sticks.

Just wanted to mention that to you, in case you were going to spend extra $ to get 2666 only to have it drop to 2400. Might be cheaper for you to just get 2400 out of the gate, that’s all.

1 Like

So I have the same case, but that 9405W seems crazy expensive. It’s like 650 euros here. I’m clueless and have never used an HBA. A cheaper HBA, like an LSI SAS 9300-8i, should work for all the internal bays, right?

Yep, that’ll work fine. The OEM cost on that adapter is around US $250, but the only reason I got that one is to run 1 shelf off a 4x4 channel split for Steve. Plus it supports NVMe later.

1 Like

@wendell Thank you for your response, and damn, I wish I was an OEM :laughing: $250 is actually reasonable. But like I said, I have a SilverStone CS381, and I wish to combine it with a yet-to-be-purchased 3700X and an ASRock Rack X470D4U2-2T or X470D4U. However, I am having a lot of trouble deciding on the CPU cooler. In the video, literally everything you try has some sort of issue. There seems to be very little space to work with, and I noticed that SilverStone advertises the case with: “For CPU cooling, the CS381 has room to accommodate up to a 240mm radiator for nearly unlimited CPU choices.”
But after watching the video, it looks like it’s very hard to even place a radiator, and honestly a radiator sitting right on top of the power supply makes me nervous, so I would rather use air cooling. Do you know of any option that would allow me to properly cool this system without the “not recommended” heatsink flip? I haven’t been able to find anything myself.

Hey guys, just wanted to let you know there is an easier tool (IMO) for automatic snapshots called znapzend.
The link has been added to the original ZFS plugin post on the unRAID forums.

I can’t link in the forum, but the plugin is called znapzend-plugin-for-unraid.
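For reference, a znapzend plan ends up looking roughly like this on the command line. The dataset tank/shares and the retention values are just examples, and on Unraid the plugin presumably takes care of running the daemon for you:

```
# keep hourly snapshots for a week and daily snapshots for a month
# on a dataset called tank/shares (placeholder)
znapzendzetup create --tsformat='%Y-%m-%d-%H%M%S' \
  SRC '7d=>1h,30d=>1d' tank/shares

# show the configured plans
znapzendzetup list

# the znapzend daemon then creates and prunes snapshots on that schedule
znapzend --daemonize
```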

1 Like

@wendell

(you mentioned multipath in the part 2 video around 17-18min)

Since there’s only a single HBA, do the drives show up with multiple paths because of the backplane in the disk shelf, or did you mean device mapper?

If you meant multipath, do you like to use friendly names, or turn it off?

Because of the disk shelf, there are 4 groups of 4 channels on the HBA, and the disk shelf is connected via 2 groups of 4 channels. So from the kernel’s perspective there are two paths to each disk, because each group of 4 SAS ports is technically seen as a “different controller” even though they are on the same PCB.

I usually like using friendly names for the WWNs with multipath.
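Roughly, the relevant bits of /etc/multipath.conf look like this (the WWID and alias below are placeholders):

```
defaults {
    user_friendly_names yes   # maps get mpathN names instead of raw WWIDs
    find_multipaths     yes   # only build maps for devices that actually have >1 path
}

# optionally pin a readable alias to a specific WWN/WWID
multipaths {
    multipath {
        wwid  35000c500a1b2c3d4
        alias shelf-disk-01
    }
}
```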

1 Like

Do you filter the devices in the multipath config so it only picks up/manages the disk shelf drives, or do you configure it to just find multipath devices?

I like to blacklist ‘*’ and then define the devices I want it to see/create multipaths for.
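Something like this, roughly (the WWIDs are placeholders; pull the real ones with something like /lib/udev/scsi_id -g -u /dev/sdX):

```
blacklist {
    wwid ".*"                  # ignore everything by default
}

blacklist_exceptions {
    wwid "35000c500a1b2c3d4"   # only the shelf drives get multipath maps
    wwid "35000c500a1b2c3d5"
}
```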

Well, I just let it do auto because I want it to work when I migrate the shelf elsewhere. Plus I like active/active pathing for speed. :smiley:

So, I was looking at Unraid for a server designed specifically for my own game streaming in my house.

However, after doing a little research, I thought I might be better off running CentOS with Cockpit.

This is going to be a headless system, so I need a WebGUI…

My plan was for the Linux host to be a NAS/Plex server, to give each of the separate gaming VMs its own 256GB SSD, and then to keep the game user folders on the Linux/NAS side so it would be easy to add more storage.

After seeing this, would Unraid be a better choice?