FreeNAS build: need ZFS pool recommendations

Hi good people of the interweb! I have been running FreeNAS for a while now and it is great! I recently started growing my home environment by adding an XCP-ng box. Things are OK, but I am running into bottlenecks somewhere, primarily with Plex. My Plex server runs on a Windows VM that has access to both my 1Gb network and my 10Gb network. I do not believe it is a network issue or a CPU/RAM issue, so that leaves how my ZFS pools are built. I just ordered 7x 4TB HDDs to move all my data off my existing drives and allow for a rebuild; they will eventually wind up in a second FreeNAS box to handle backups of the main rig. If you have made it this far into the post, you are probably wondering what hardware I have.

FreeNAS box
MB: Supermicro X8DTL-iF
CPU: dual Xeon 5620s
RAM: 24GB ECC DDR3
HBA: LSI SAS9211-8i
NIC: Chelsio 10Gb 2-port PCIe optical adapter card 110-1088-30
HDD: 12x 2TB WD 7200RPM 64MB
HDD: 1x 4TB WD 7200RPM 64MB
SSD: 5x Intel 320 40GB
Case: Supermicro SuperChassis CSE-846TQ-R900B

XCP-ng box
MB: Supermicro X8DTH-6F
CPU: dual Xeon 5670s
RAM: 112GB ECC DDR3
NIC: Chelsio 10Gb 2-port PCIe optical adapter card 110-1088-30
SSD: OCZ Vertex 3 120GB

I am unsure how to view my current volume topology. If memory serves, the 2TB drives are in a RAIDZ2 with a striped SSD SLOG (what I called ZIL/zlog), and the 4TB just has one SSD for its SLOG. What would be the best way to reconfigure for max performance with two-drive fault tolerance? Would it be worthwhile to move some RAM from my hypervisor to my FreeNAS box to pick up performance? Also, I am using block storage to run the VMs over a 10Gb DAC.

I don’t have an answer for you, sorry. But I do have a question, if you don’t mind.

If I’m reading this right you use your NAS to store the images but run the VM on another box using the 10G DAC? How has performance been? Is there a noticeable increase in latency?

zpool status

will give you your current zpool topology
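A quick sketch of the relevant commands (`tank` is a placeholder pool name; output depends on your system, so run these on the FreeNAS box itself):

```shell
# Show the vdev layout (topology) for every imported pool:
zpool status

# Or just one pool -- "tank" is a placeholder name here:
zpool status tank

# "zpool list" shows capacity and usage per pool, but not the vdev layout:
zpool list
```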

For max performance with 7 drives you’d only be able to have one hot spare: you’d go for three 2-way mirrors plus a hot spare. For two spares you’d need 8 drives.
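As a sketch, that 7-drive layout would look something like this. The pool name `tank` and device names `da0` through `da6` are placeholders; verify your actual device names (e.g. with `camcontrol devlist` on FreeBSD) before running anything destructive:

```shell
# Three striped 2-way mirrors plus one hot spare (all names are placeholders):
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  spare da6
```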

Mirrors are always faster than RAIDZ-X, but you pay the penalty in usable space.
Mirrors are also easier to upgrade, since you can expand one mirror at a time.
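To put rough numbers on that space penalty for the OP's 12x 2TB pool, a back-of-the-envelope calculation in shell arithmetic (ignores ZFS metadata overhead and reservations):

```shell
# Usable space, ignoring ZFS overhead, for 12 x 2 TB drives:
drives=12; size_tb=2
echo "striped 2-way mirrors: $(( drives / 2 * size_tb )) TB usable"   # 12 TB
echo "single RAIDZ2 vdev:    $(( (drives - 2) * size_tb )) TB usable" # 20 TB
```

So mirrors give up roughly 8TB here in exchange for much better random IOPS.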


Run iperf3 on your FreeNAS box to test network speed first. The Intel NICs in my firewall are garbage and have been giving me sporadic 100-500Mbps throughput.
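For example (the IP is a placeholder for the NAS's 10GbE address):

```shell
# On the FreeNAS box (server side):
iperf3 -s

# From the client, pointed at the NAS's 10GbE interface. -P 4 runs four
# parallel streams, which is often needed to saturate a 10Gb link:
iperf3 -c 192.168.10.1 -P 4
```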

I don’t believe that RAM is going to be your issue unless you have most of the NAS full. I haven’t had an issue with Proxmox using RAIDZ1 on 4x 8TB disks with an SSD as cache. That being said, I have 96GB of RAM. I can get read speeds of roughly 800MB/s and writes of 400MB/s on the unit, so even with Z1/Z2 you should be able to get good speed.

I’m in the process of moving my data from a VM that I was using as my NAS to an actual bare-metal NAS I’ve built. I looked at FreeNAS, but in the end opted to go with Rockstor: while I’ve liked ZFS so far, I don’t like the stupidly high RAM recommendations, and you have little to no flexibility after the pool is built. Since I have 32TB worth of new drives and 32TB worth of working drives, I didn’t want to waste my time with the two separate pools ZFS would have forced me into.

Rockstor uses BTRFS, so I am able to dynamically expand and shrink pools after I have created them. This means I can eventually have a single 64TB pool (minus parity, of course).

Speed-wise, I’m getting 500MB/s write and 800MB/s read on Rockstor without any caching, using BTRFS RAID 6. Even better, I don’t need a stupid amount of RAM.

This might sound counterintuitive, but see if performance is better with the L2ARC/SLOG disabled. Also, you typically want the SLOG to be on a mirror.

Definitely go with striped mirrors for the storage if you really want to leverage the 10GbE.

Try using jumbo frames (MTU 9000) on the point-to-point connection between the NAS and the hypervisor.
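A sketch of setting and verifying that, using FreeBSD/FreeNAS syntax; `cxl0` is a placeholder for the Chelsio interface name and the IP is a placeholder for the far end:

```shell
# Set MTU 9000 on the 10GbE interface (placeholder interface name):
ifconfig cxl0 mtu 9000

# Verify jumbo frames actually pass end-to-end: a don't-fragment ping with a
# payload that only fits in a jumbo frame (8972 = 9000 - 20 IP - 8 ICMP):
ping -D -s 8972 192.168.10.2
```

If the large ping fails while a normal one works, some hop in between is still at MTU 1500.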


So I’ve done experimentation with Plex on Windows. It doesn’t end well. I would highly recommend you use the Plex plugin on your FreeNAS box, or use a Linux distro.

Make sure MTU is set to 9000 on your 10Gb NICs throughout the connection. It makes a big difference compared to the default.

Adding more RAM won’t hurt your FreeNAS rig, as anything extra will be used by the ARC. Unless you NEED all that RAM in your hypervisor host, I’d say set your FreeNAS box up with 64GB of RAM.
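If you want to see how well the ARC is being used before and after adding RAM, FreeNAS (FreeBSD) exposes the counters via sysctl; a quick sketch:

```shell
# Current ARC size in bytes:
sysctl kstat.zfs.misc.arcstats.size
# Configured ARC ceiling:
sysctl vfs.zfs.arc_max
# Hit/miss counters -- a low hit ratio suggests more RAM would help:
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
```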


Thanks for the replies! MTU is already cranked on both ends. The 7 drives I mentioned will just be for the data swap from my main pool; my primary pool is the 12x 2TB drives. I am playing with iperf now, getting the lay of the land, so to speak. I will report back what I find.

For what I am doing I have not seen any issues, nor would I expect any. I am running 10 Ubuntu 18.04 LTS VMs for my masternodes, no problems. My only issue is playing around with Plex on Windows, something that is IO- and CPU-demanding. The only reason I went with Windows was that I wanted to play around with GPU passthrough; I have not had good luck in the past getting video drivers working in Linux.