ZFS On Unraid? Let's Do It (Bonus Shadowcopy Setup Guide) [Project]

Hi. I was wondering whether iSCSI works with this combination, meaning Unraid and ZFS. Right now I am running FreeNAS and everything is an iSCSI target except one SMB share. Also, if it does work, is the performance the same or at least similar?

If this works I am ditching FreeNAS, simply because of the Docker and VM support.

This is one of the things I was wondering about. I know iSCSI is a kernel process, so it would need to be added the same way ZFS was. The problem is I've been unable to find out how to get it added, and Unraid says they will not be adding it. I'm very tempted to drop Unraid as my storage box and use it as an application box, since it's very user friendly for community apps and virtual machines (though I have VMware ESXi 6.7 for that).

Hi! Forgive me if I am reviving an old thread. I came across your video for GamersNexus and this thread, and one of the things I am unclear on is how to connect the disk shelf. While watching the video, you guys never actually showed the installation of the HBA and the connection from the head unit to the disk shelf.

Wanting to replicate this setup, I went ahead and purchased an LSI SAS 9200-8e and a NetApp DS4246 disk shelf on eBay. It seems I need SFF-8436 to SFF-8088 cables, based on what I have read here and the eBay seller's item page. Do I need 2 cables (both coming from the HBA into the “square” and “circle” ports of one IOM6)? Or do I need 3 cables (one from the HBA, one from the “square” port on IOM6-1 to the “circle” port on IOM6-2, and one from the “circle” port on IOM6-2 back to the HBA, per the NetApp connection guide)? Do I even need the second IOM6?

Thanks!

One cable into each IOM6 and that's it. You do need a fancy $100 cable to go from the QSFP+ connector to the external connector on your SAS controller.

Two cables to each IOM6 doubles your bandwidth if you do active/active.

Thanks! I did order those fancy cables, but was able to find them cheaper on Amazon ($43 each).

Thanks for the guide. I'm one of those newbs who's expecting to be tripped up when I build my new unRAID setup later this week. I loved the sound of everything unRAID has to offer (mainly the way it handles VMs and containers), but the idea of taking 6x 4TB WD Reds and not getting any sort of performance boost over a single drive really bothered me. Looks like this might be the solution.

I may have overread this, but do the ZFS pools you create show up in unRAID? Like I said, I may have missed that part, as I've been researching what to use on my EPYC server for 3 days now.

Before I build this thing and embark on a whole bunch of trial and error, I was hoping someone might be able to take a quick look at my plan and see if I’m doing anything glaringly stupid.

The plan is to put 6x 4TB WD Reds in RAIDZ-1 with ZFS, since I don't need more than one parity disk and want the most performance I can get out of them. (Am I right in thinking that putting all 6 drives in an unRAID array would limit read performance to the speed of a single drive?) Then, hopefully unRAID will allow me to use a single 1TB WD Black NVMe drive as a single-drive "array" with no parity? (That's where all my VMs will live.)

I’m also wondering about this - if unRAID can’t see my ZFS pool, how do I create shares that are visible to VMs and containers? Do I manually configure the shares in samba? Apologies for the simple questions.

No. You’ll use the terminal via web browser to interact with this.

This is correct, unless you set up caching, which is a whole other topic.

I haven't tried NVMe to confirm, but yes, you should be able to do it this way.

As you stated, Samba is the way you would go about that, assuming you don't use NFS, which I would personally recommend. It's up to you, but you are correct that you will have to dig into that manually.
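
If you do go the NFS route, ZFS can export a dataset directly via its sharenfs property. A minimal sketch, where the tank/media dataset and the subnet are placeholders and the NFS server has to be enabled on the box:

# share a dataset read/write to the local subnet using ZFS's built-in NFS sharing
zfs set sharenfs="rw=@192.168.1.0/24" tank/media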

Yeah, once I started poking around I was able to create the pool, mount it, and share it. I guess my next question is whether I need to add more memory to this server. I'm at 64GB right now, and I want to add at least 3-4 VMs to it and utilize ZFS. Since ZFS is a memory hog, I suspect I'll be running out of memory between 50-60TB of disk and the VMs. The plan is that I'll have at least 20 drives of mixed sizes that I want to put into 1 zpool. HOPEFULLY I can pull that off. On another topic, the VM performance of this thing is amazingly fast. Faster than Citrix XenServer and FreeNAS.

One thing not mentioned in this thread: to create and mount a zpool you would need to do the following.

zpool create -m /mnt/yourpoolname poolname raidz1 /dev/sda /dev/sdb

Then you can edit the smb config file mentioned to create the share.
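
To make that concrete, here's a rough sketch of the Samba side. On Unraid the SMB extras file is /boot/config/smb-extra.conf; the share name, path, and user below are placeholders:

# append a share definition pointing at the zpool mountpoint
cat >> /boot/config/smb-extra.conf << 'EOF'
[yourpoolname]
    path = /mnt/yourpoolname
    browseable = yes
    writeable = yes
    valid users = youruser
EOF

# tell the running smbd to pick up the new share
smbcontrol all reload-config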

Wouldn’t it be better to use /dev/disk/by-id/ for each drive, because drives shift around?

Not such a problem for daily use, since on boot ZFS checks the zpool.cache for existing devices, but replacing a drive when one goes wrong (all drives die eventually) might be easier with by-id paths if drive letters have shifted and you have to replace /dev/sdb (the old drive, which died and now shows up as /dev/sdc) with /dev/sdb (the new drive the system assigned that letter).

Just a thought?
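
For what it's worth, an existing pool can be switched over to by-id paths without recreating it; something like this, with yourpoolname as a placeholder:

# see the stable identifiers for each drive
ls -l /dev/disk/by-id/

# re-import the pool using by-id paths instead of /dev/sdX
zpool export yourpoolname
zpool import -d /dev/disk/by-id yourpoolname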

I've got the basics of setting ZFS up and have tested it so far; now it comes down to what would be the best setup to use.

I have 4 512GB NVMe drives installed in this server, and will have 8 6TB drives at first to get the data moved over from the old NAS. Then I'll be adding another 14 drives to the system from the old NAS. I'm thinking of doing a raidz or a mirror of 2 of the NVMe drives for VMs, and I'm curious how you would add a cache drive, or whether a cache drive would even be worth it. And can you do a raidz1 cache drive too?
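
From what I can tell so far, cache (L2ARC) vdevs can only be added as plain striped devices, not raidz or mirrored, so I'm guessing the pattern would be something like this (the pool name tank and the device names are just placeholders):

# add a single NVMe as an L2ARC cache device (cache vdevs cannot be raidz or mirrored)
zpool add tank cache /dev/disk/by-id/nvme-example-drive-1

# a mirrored pair of NVMe drives as a separate pool for VMs
zpool create -m /mnt/vms vms mirror /dev/disk/by-id/nvme-example-drive-2 /dev/disk/by-id/nvme-example-drive-3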

Pretty happy overall with my setup now, I went with:

1x 1TB WD Black NVMe as a single disk array, which holds all VMs - no important data lives on the VM disk images, so those get backed up daily to the ZFS array
1x 1TB WD Black NVMe as a passthrough PCI device to the Windows VM for native performance
6x 4TB WD Reds in the ZFS array with a single parity disk (~20TB total storage)

Something I noticed is that the ARC seems to expand and fill almost all available RAM (on this machine that's about 40GB of spare memory). While apparently that memory should be freed up when other things need it, that isn't the case when launching VMs in unRAID: if I have a 16GB Windows VM running and ZFS hogs the remaining RAM, unRAID won't allow any other VMs to launch until that memory is freed up. (I put a limit on the size of the ARC to solve this, but thought it was a bit odd.)
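
For anyone wondering, the cap I set looks roughly like this (8GiB is just the number I picked; the value is in bytes, and since Unraid's root filesystem lives in RAM it has to be reapplied at boot, e.g. from the go file):

# cap the ZFS ARC at 8 GiB (takes effect immediately, value in bytes)
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max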

The only thing I wish is that I could pin ZFS to a specific CPU core - does anyone know if this is possible?

Yeah, the ZFS ARC doesn't release the memory until reboot most of the time. Sure, it'll fluctuate 1-4GB, but to release all of it requires a reboot. As for pinning to specific CPU cores, I don't think that's possible.

You could, if you wanted, export the pools, then import them again.
That'd clear the ARC, but you're probably better off tuning it to a lower max if it's often taking too much RAM for your taste.
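
Something like this, assuming a pool named tank (the pool is briefly unavailable while exported):

zpool export tank
zpool import tank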

To All (Wendell Included)

I have been running my Unraid/ZFS setup since Nov and it is still working exactly as I hoped.

TR 2920X, 64GB RAM, GT 710, 10Gb Solarflare fiber NIC
Cache: 3x 2TB 660p + 4x 1TB 660p
Disk shelf DS4246: 4x 6TB HGST drives (2 parity, 2 main),
9x 3TB HGST drives (3x3 ZFS pool)
5TB removable drive for important backups (kept in the safe)
1 VM = 8GB RAM, GTX 1070 FTW, 200GB vdisk (on the cache array), HDMI out to an HDMI & USB2 extender over PoE upstairs to a 24" 1080p monitor, USB mouse/keyboard. (I can play any game without issue.)

The ZFS array is used for speedy access and at night it does a comparative backup to the main array.
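
For anyone curious, a comparative backup like that can be as simple as a nightly rsync job; the mount points below are placeholders, not my exact paths:

# example crontab entry: mirror the ZFS pool to the Unraid array at 02:30 every night
30 2 * * * rsync -a --delete /mnt/zfspool/ /mnt/user/backup/zfspool/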

@wendell or anyone that can help.

Let me start by thanking wendell and everyone who has contributed to this post. I've been following this guide and have run into some issues, and I have no idea what else to do. (Some background about myself: I'm more or less tech literate, but when it comes to stuff outside of Windows I'm pretty much a noob.)

Problem: a ZFS pool composed of two raidz vdevs, 2x (5x 10TB), has abysmal performance as reported by fio: reads and writes around 10MiB/s with --sync=1 and around 40MiB/s without. fio output at the bottom.

System: Ryzen 7 1700, 16GB ECC (2x 8GB), LSI HBA 9400-16i, GT 710 (PCIe x1), motherboard SATA connectors not used, Mellanox 10Gbps NIC, 10x 10TB WD HDD, WD Black M.2 NVMe 512GB.

Previous setup: Windows 10 Pro for Workstations, ReFS-formatted Storage Spaces dual parity with a 10-column setup on the LSI controller. Sequential write performance was comparable to single-disk native performance (~180MB/s), but it constantly fluctuated from that peak down to as low as 28MB/s, and the fuller it got, the more the peak write rate decreased. I also tried RAID 6 with a HighPoint RocketRAID 2840, which gave more consistent transfer rates; sustained sequential write was about 376MB/s. I did not like that Storage Spaces performance decreased as the drives got fuller, and I didn't like that I was limited to NTFS with the RAID card, which has no native file integrity features.

Expectation: in the guide, a zpool with 1 raidz vdev was getting ~180MiB/s, comparable to single-disk performance. Since I'm using 2 vdevs, I was under the impression it would be a bit higher.

Use case: similar to what was covered in the video. Repository for media and video projects, Plex server, Steam/LAN cache, etc. Total expected users: about 5; expected simultaneous users: 2-3.

What I tried: I followed the guide to the best of my ability, and I did catch some typos here and there that were covered in the comments. I had to use the fio-2.21 package because when I ran fio-3.14 I was getting a missing-file error.
I tried to flash the latest firmware/BIOS to the LSI card, but was unsuccessful.

Hypotheses (for the slow performance):
Firmware/BIOS on the LSI card not up to date.
Not enough RAM.
Inefficient zpool setup.
A faulty disk, though all show healthy in SMART and each one benchmarks about the same as the others.
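
Things I plan to check next, in case it narrows this down (BigBoii is my pool name, taken from the fio path below):

# confirm the pool layout and that nothing is degraded or resilvering
zpool status -v BigBoii

# check the vdev ashift; ashift=9 on 4K-sector drives would explain poor performance (expect 12)
zdb -C BigBoii | grep ashift

# check sync, recordsize, and compression on the dataset being tested
zfs get sync,recordsize,compression BigBoii

# watch ARC hit rates during a test run (arcstat ships with OpenZFS)
arcstat 1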

Questions:
-What is causing the horrendous performance, and how can I fix it?
-Would changing it to a zpool with 3 raidz vdevs be better?

-How do you mount the zpool to Unraid? I'm kind of unsure whether I need the shadow copies, but I wanted to mount the zpool and test it as a share.

Thank you in advance.

fio output

:~# fio --direct=1 --name=test --bs=256k --filename=/BigBoii/test/whatever.tmp --thread --size=32G --iodepth=64 --readwrite=randrw
test: (g=0): rw=randrw, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=psync, iodepth=64
fio-2.21
Starting 1 thread
Jobs: 1 (f=1): [m(1)][99.8%][r=41.0MiB/s,w=42.5MiB/s][r=164,w=170 IOPS][eta 00m:02s]
test: (groupid=0, jobs=1): err= 0: pid=12411: Sat Feb 15 00:26:41 2020
read: IOPS=81, BW=20.4MiB/s (21.4MB/s)(15.9GiB/797870msec)
clat (usec): min=68, max=294247, avg=12117.06, stdev=9721.38
lat (usec): min=69, max=294248, avg=12117.77, stdev=9721.38
clat percentiles (usec):
| 1.00th=[ 732], 5.00th=[ 5792], 10.00th=[ 7136], 20.00th=[ 8256],
| 30.00th=[ 9024], 40.00th=[ 9536], 50.00th=[ 9920], 60.00th=[10304],
| 70.00th=[10816], 80.00th=[12736], 90.00th=[17792], 95.00th=[27264],
| 99.00th=[58624], 99.50th=[65280], 99.90th=[98816], 99.95th=[108032],
| 99.99th=[201728]
bw ( KiB/s): min= 6144, max=46080, per=0.10%, avg=20897.58, stdev=4825.14
write: IOPS=82, BW=20.6MiB/s (21.6MB/s)(16.1GiB/797870msec)
clat (usec): min=40, max=105560, avg=108.13, stdev=663.48
lat (usec): min=43, max=105563, avg=111.80, stdev=663.57
clat percentiles (usec):
| 1.00th=[ 46], 5.00th=[ 56], 10.00th=[ 75], 20.00th=[ 78],
| 30.00th=[ 80], 40.00th=[ 82], 50.00th=[ 89], 60.00th=[ 97],
| 70.00th=[ 99], 80.00th=[ 103], 90.00th=[ 114], 95.00th=[ 124],
| 99.00th=[ 334], 99.50th=[ 438], 99.90th=[ 2288], 99.95th=[ 9536],
| 99.99th=[14144]
bw ( KiB/s): min= 2560, max=46592, per=0.10%, avg=21124.17, stdev=6594.64
lat (usec) : 50=1.21%, 100=34.00%, 250=14.23%, 500=0.74%, 750=0.66%
lat (usec) : 1000=0.40%
lat (msec) : 2=0.10%, 4=0.31%, 10=24.07%, 20=20.79%, 50=2.61%
lat (msec) : 100=0.83%, 250=0.05%, 500=0.01%
cpu : usr=0.14%, sys=2.17%, ctx=72474, majf=0, minf=0
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwt: total=65179,65893,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
READ: bw=20.4MiB/s (21.4MB/s), 20.4MiB/s-20.4MiB/s (21.4MB/s-21.4MB/s), io=15.9GiB (17.1GB), run=797870-797870msec
WRITE: bw=20.6MiB/s (21.6MB/s), 20.6MiB/s-20.6MiB/s (21.6MB/s-21.6MB/s), io=16.1GiB (17.3GB), run=797870-797870msec

Is the LSI card flashed to IT mode?