FreeNAS Build / vdev Suggestions

So I am upgrading my server, enclosure, and drive size, but I'm still unsure how to build out my vdevs.
It mostly stores video files, plus a depot for ISOs, EXEs, common files needed for working on stuff, and a download folder.

Current System
HP DL380 G5: 1x E5345, 22GB RAM, 1x 1GbE NIC, H200e
8x 146GB RAIDZ2 [working drive]
3U SGI Rackable SE3016 16-bay [fan modded]
16x 2TB RAIDZ2 [archive storage]

New System
Dell R710: 2x X5650, 64GB RAM, teamed 4x 1GbE NICs, H200e
SC847E26-RJBOD1 [new 45-drive JBOD]

I have:
16x 3TB SATA drives
16x 2TB SATA drives [current drives]
10x 1TB SATA drives
An assortment of smaller drives I would rather not use.

I know of up to 25x 2TB SAS drives I could buy for a good deal; I believe you can mix SAS and SATA in FreeNAS.
If there is a strong recommendation to get a few more 3TB drives, I think I could do that.
I was thinking of using some of the 1TB or 2TB drives in a mirrored vdev to replace my old working drive and get better performance.

I will be running one VM and one or two Docker containers on FreeNAS as well.
Currently I need 21TB usable after the 20% free-space overhead FreeNAS likes to have.
I would like to be able to replace a set of drives with larger ones (e.g., swap 10x 1TB drives for 10x 6TB drives) so I can upgrade the smaller drives over time.
I keep a full backup in cold storage that I refresh every month, so I can reuse the 2TB drives that are in use now.
I don’t know if 2x 8-drive RAIDZ1 vdevs is better or worse than one 16-drive RAIDZ2 vdev.
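A quick back-of-the-envelope comparison of those two layouts (a sketch: raw data capacity only, before ZFS metadata and the recommended free-space headroom):

```shell
# Raw data capacity of a RAIDZ vdev, in TB: (drives - parity) * drive_size_tb
raidz_tb() { echo $(( ($1 - $2) * $3 )); }

# 2x 8-drive RAIDZ1 vs. 1x 16-drive RAIDZ2, both with 2TB drives
two_z1=$(( $(raidz_tb 8 1 2) + $(raidz_tb 8 1 2) ))
one_z2=$(raidz_tb 16 2 2)

echo "2x 8-drive RAIDZ1:  ${two_z1} TB"   # 28 TB raw
echo "1x 16-drive RAIDZ2: ${one_z2} TB"   # 28 TB raw
```

Raw capacity comes out identical (~28TB, about 22.4TB after the 20% headroom, so either clears the 21TB target on paper); the real difference is fault tolerance: the RAIDZ1 pair is lost if a second drive in the same vdev dies, while the single RAIDZ2 survives any two failures.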

I would like to do either a Steam cache or a network game drive. [Haven’t really looked into this yet]

Future upgrades:
10GbE NIC
4x 1.6TB SSDs [network game drive?]

I’d put 16 drives in 2x 8-drive RAIDZ2 (with 3TB drives this would give you ~36TB). You can have different-sized vdevs in a pool (so you could do 2x 8-drive RAIDZ2 with the 2TB drives for another couple of vdevs in the pool), and ZFS will auto-balance writes across them.

i.e.,
tank:
vdev1: 8-drive RAIDZ2, 3TB drives (~18 TB)
vdev2: 8-drive RAIDZ2, 3TB drives (~18 TB)
vdev3: 8-drive RAIDZ2, 2TB drives (~12 TB)
vdev4: 8-drive RAIDZ2, 2TB drives (~12 TB)

total pool: ~60 TB
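For reference, that layout maps to a single `zpool create` with four raidz2 groups; a sketch with hypothetical `da0`..`da39` device names (on FreeNAS you would normally build this in the GUI, which also uses gptid labels rather than raw device names):

```shell
# Hypothetical da0..da31 device names; each "raidz2" keyword starts a new vdev.
# vdev1 + vdev2: 8x 3TB each; vdev3 + vdev4: 8x 2TB each.
zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
  raidz2 da8 da9 da10 da11 da12 da13 da14 da15 \
  raidz2 da16 da17 da18 da19 da20 da21 da22 da23 \
  raidz2 da24 da25 da26 da27 da28 da29 da30 da31

# A fifth vdev can be added to the same pool later:
zpool add tank raidz2 da32 da33 da34 da35 da36 da37 da38 da39
```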

I’m not sure if it still applies, but I do recall something about RAIDZx having slight performance advantages with certain drive counts (I can’t remember the details; one of the RAIDZ levels likes an odd number of drives in a vdev). But given that you’re on Gig-E and this is mostly “static” or archive-type data (i.e., not a lot of random writes), I’d go for the capacity anyway.

Why so many vdevs? Performance. Upgradeability (you can grow the pool by replacing the 8 drives of a single vdev at a time). Fault tolerance. And performance again when you go 10GbE.

Why RAIDZ2?

RAIDZ1 (i.e., roughly RAID5) hasn’t been recommended on 1TB or larger drives by most storage vendors, due to the long rebuild time after a failure. And that’s when your array has a hot spare or two.

Hence I’d suggest RAIDZ2 VDEVs (so you can handle 2 failures per VDEV). Mirrors are faster but your workload doesn’t sound like you’d need that performance bias, and RAIDZ2 will give you a lot more space.

I never knew you could mix drive sizes like that inside a pool. That is neat and useful.

Yeah, the only time I push performance is during my backups; I will be moving the PC I use for that to 2x GbE teaming. And on my working drive, which I will probably make a mirrored vdev, since that is the only volume I hit with a lot of reads and writes before the data gets moved to the large pool for long-term storage and eventual backup.

Can anyone confirm that SAS and SATA drives can be mixed in FreeNAS? This guy only wants $17 for each of these drives; I was at least going to grab a few for cold spares.

Yup, drives only have to be the same within a VDEV.

Inside a pool you can mix different sized VDEVs and you can even mix and match different VDEV RAID levels. e.g., you could have a pool with a RAIDZ2 VDEV and a mirror VDEV.

It’s not something i’d particularly suggest (as once you settle on a VDEV type, you’re stuck with it), but you can do it.

e.g., in a pinch, if you’re out of space and only have 2 drives available you COULD add a mirror VDEV to your pool to expand it. But for the rest of the pool’s life, you have a mirror VDEV in there, so ideally you’d only do that as an emergency and re-create a new pool when you have more disks or something.

As far as SAS and SATA drives inside FreeNAS/ZFS go - sure.

You can mix them. The drive performance will be different, and you’ll likely be limited to the SATA performance (as every write in a pool is striped across all vdevs in the pool, ZFS will need to wait for the SATA disks to keep up), but ZFS won’t care.

However, it may be worth creating two pools in that case, so you can take advantage of the faster SAS performance: i.e., create an archive-type SATA pool and a SAS pool.

e.g., leave your SATA pool as the RAIDZ2 layout above, and then create a higher-performance SAS pool with 8x 2-drive mirror vdevs (for example).
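That SAS pool could look something like this (a sketch; the pool name and `da40`..`da55` device names are made up):

```shell
# Hypothetical SAS pool of 8x 2-drive mirrors (RAID10-style): each "mirror"
# keyword starts a new 2-disk vdev, and ZFS stripes across all eight.
zpool create fast \
  mirror da40 da41 \
  mirror da42 da43 \
  mirror da44 da45 \
  mirror da46 da47 \
  mirror da48 da49 \
  mirror da50 da51 \
  mirror da52 da53 \
  mirror da54 da55
```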

Thro, I think I did this how we discussed. I am waiting on getting the 2TB drives out of the old machine, but
I have the 16x 3TB in two 8-drive RAIDZ2 vdevs, and I made what I believe is a mirrored vdev setup, the “RAID10” equivalent. My only question is: how would I go about replacing a vdev in the future? From what I can tell, I would have to replace one drive at a time and resilver until I had swapped all 8 drives. Or is there a way to add a full 8 drives and tell it to swap vdevs?

First thing…

ZFS does NOT mirror across vdevs (edit: I see you actually did also create a RAID10 equivalent with 2x mirrors).

With the RAIDZ2s…
You have created the equivalent of RAID60: you have 2x RAIDZ2 (RAID6 equivalent), and ZFS is STRIPING across both vdevs (not mirroring). I.e., if one entire vdev were to fail, you lose all your data. The redundancy is within a vdev; ZFS stripes across all vdevs in a pool. Make sense? Hypothetically, if you had each vdev on its own controller and had a controller failure in that situation (wiping out all drives in a single vdev), you’re boned.

So given that... correct, you’d need to pop one drive at a time, resilver, and repeat to upgrade a vdev. You cannot remove a vdev from a pool, only add new vdevs.

You can’t really just “swap vdevs”, unfortunately. What you COULD do, however, if your new vdev is big enough, is create a new pool with a single big vdev, replicate your pool to the new pool, and then add a new vdev to the new pool.

But that would result in all the data being on one VDEV and not striped across both initially.

If you can plug in more drives without unplugging existing ones, then you should be able to replace existing disks with new ones without much shenanigans.

I’m not as familiar with the FreeNAS interface, but ZFS has a zpool replace command for replacing a disk, so FreeNAS should have some way of doing that through the interface. You should be able to replace as many disks as you can have extra slots for.

From the docs I gather that after attaching new disks you should be able to click on a drive in your pool, click replace, and select a new disk to replace it with. You should be able to do this to several drives in parallel.
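On the command line, the per-disk swap looks something like this (device names are hypothetical; FreeNAS wraps the same operation in its volume status screen):

```shell
# Replace one old disk (da16) with a new, larger one (da40) and watch the resilver.
zpool replace tank da16 da40
zpool status tank

# Let each vdev grow automatically once all of its members have been upsized.
zpool set autoexpand=on tank
```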

Yeah, disk replacement: easy.

A vdev, however, is a collection of disks; you can’t replace a vdev in its entirety in one hit, as it is a member of a stripe set within your pool.

So you need to replace a VDEV disk by disk, and then expand the VDEV once all the disks in it are replaced.

I guess I’m not understanding the scenario correctly.
It seems like the OP wants to swap out the old disks for bigger disks and doesn’t want to do it serially, one disk at a time (hence the question about swapping whole vdevs).
If the vdev layout will be the same, then as I said you can simply replace all the disks.
Alternatively, you can attach a new vdev as a mirror of the one you want to replace, then detach the original after the resilvering is done.

WARNING: The second option is easy to accidentally screw up if you don’t know what you’re doing, so make sure you know what you’re doing and be aware that if you screw up you could be stuck with a pool that needs ALL the disks in it. RTFM, and I accept no responsibility for the loss, pain, and suffering of anyone who does this wrong.


Alright, thanks again guys.

I think I will only add the 8x 2TB drives when I need them, and then be done adding vdevs to that pool; when I need more space I will swap the 2TBs for whatever is cost-effective at the time.

The only other thing I might play with is a striped array for a Steam cache [who cares if a cache drive dies].
I will probably be running a VM off FreeNAS, so I’ll add whatever drive I need for that as well.

So glad I have this up and running. Time for some burn-in.
Thanks for all the help!


Just be aware of a couple of things:

  • ZFS can read different data from both drives in a mirror (since it has block checksums to know if data is bad, it doesn’t need to compare data between the disks to know if it’s broken), so you aren’t losing read performance vs. RAID0. You do lose the space and the write IOPS, but the reads are (I believe) just as good, or close to it.
  • If you add disks to another pool or array, you’re leaving performance on the table vs. making them part of your existing pool. You may well have a reason to do that (and that reason may be something as simple as “I want to”), but if you don’t, I’d consider adding them to the same pool.
  • If you’re using the storage as a VM backing store (via NFS), be sure to turn off “atime” (access time recording) on that data set. Otherwise performance will tank, as ZFS will be attempting to update the access time (which is a write, and in ZFS all writes are FULL STRIPE) on the VM disk file for every single read of it. In fact, unless you care about when a specific file was last READ for auditing purposes or such, I’d turn off atime in general.
  • Just on data sets: think of security in terms of data sets, not folders. You can have virtually unlimited data sets, and they can be nested under other data sets. A data set is basically a folder in ZFS that you can set different security permissions or storage options on. If you feel you may want different security or other options on a folder on a ZFS system, create a data set for it, not just a regular folder. This in particular took me a little while to get my head around. E.g., I’d create a different data set for VM storage vs. your other file storage and maybe use different options on it (e.g., turn off atime as above).
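The last two points combined, as shell commands (dataset names are made up; the same options are exposed in the FreeNAS dataset dialog):

```shell
# A dedicated dataset for VM storage with access-time updates disabled.
zfs create -o atime=off tank/vms

# Turning atime off on an existing dataset (children inherit unless overridden).
zfs set atime=off tank/archive

# Datasets can nest, and each can carry its own options and permissions.
zfs create tank/vms/win10
```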