NAS ZFS migration

So I like the price (and likely driver support) of the 10G fiber stuff, but I’m really geeking out on the RJ45 10G, as my house’s current Cat6 setup can be leveraged.

On FreeBSD it’s pretty different: sda would be da0 or ada0 depending on what kind of controller it’s attached to, sdb1 would be e.g. da1p2, and there is no exact /dev/disk/by-id, but there are analogs like /dev/gpt/<GPT label> or /dev/label/<GEOM label>.

FreeNAS uses gptid, which ends up being some uuid-looking thing like gptid/88c2d680-35b4-11e9-9030-f01faf47a547
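
If you want to map those gptid entries back to physical disks, a few stock FreeBSD commands should get you there (a rough sketch; run them in the NAS shell):

    # List GEOM labels, including the gptid/... names FreeNAS shows in zpool status
    glabel status

    # Show partition tables along with any human-readable GPT labels
    gpart show -l

    # Map da*/ada* device nodes back to the physical drives and controllers
    camcontrol devlist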

2 Likes

Shame that it’s so much more expensive for those cards

1 Like

So if I’m doing 2 vdevs, can I set up just one 6-drive vdev first, load data onto it, then build the second 6-drive vdev? I actually want rarely accessed files on a vdev that uses the power-saving HDD spin-down, and the other vdev can be the very active storage (iSCSI, Plex, Nextcloud, etc.).

If you want tiered storage like that, you’re probably better off with two separate pools.

1 Like

Ah, I see, so I’m thinking I should make a pool for each vdev. One pool will be for the lightly accessed storage; the other pool will have the zvol for iSCSI, plus datasets for frequently accessed files.
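
A rough command-line sketch of that layout, just to illustrate (pool names, disk names, and the zvol size are placeholders; the FreeNAS UI does the equivalent for you):

    # Cold pool: rarely-touched files, drives can spin down
    zpool create cold raidz2 da0 da1 da2 da3 da4 da5

    # Hot pool: the busy storage
    zpool create hot raidz2 da6 da7 da8 da9 da10 da11

    # zvol to back the iSCSI extent, plus datasets for the frequently used shares
    zfs create -V 500G hot/iscsi0
    zfs create hot/plex
    zfs create hot/nextcloud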

1 Like

This was a cool article

FML, so I didn’t review this thread entirely before I finally went through the FreeNAS wizard, and I appear to have created 2 zpools with one vdev each, so:
Pool1: vdev of 6x 2TB drives, RAIDZ2
Pool2: vdev of 6x 2TB drives, RAIDZ2

What I should have done:
Pool1: vdev 1, 6x 2TB drives, RAIDZ2
       vdev 2, 6x 2TB drives, RAIDZ2

Correct?

*Edit: guess I’m impatient. I nuked pool2, made a vdev with those 6 drives, put them in pool1, and I’m assuming it started to stripe.

Oh, and I upgraded to 11.2… O. M. G. That GUI is legit, love it… I have been drinking though…

3 Likes

Yeah, that sounds right. If you nuked pool2, then recreated the vdev and added it to pool1, it should make more space available and stripe any new blocks going forward.
I’m pretty sure ZFS doesn’t rebalance existing data.
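
From the shell, that is roughly the following (destructive; disk names are placeholders, and the FreeNAS UI does the same thing underneath):

    # Destroy the second pool (wipes everything on it)
    zpool destroy pool2

    # Add those six drives to pool1 as a second RAIDZ2 vdev
    zpool add pool1 raidz2 da6 da7 da8 da9 da10 da11

    # Confirm both vdevs now show up under pool1
    zpool status pool1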

1 Like

Ouch… the OCD in me is debating nuking it all, doing it right, then reloading my backup onto it so it’s striped.

1 Like

Yeah, it would balance that way.

But most advice is to just let it balance over time, unless you need the data striped for a particular reason.
That’s why the traditional advice is mirrors: just add a pair or triplet when the pool gets to like 70-80% or whatever (but deffo before 90%).
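
Growing a pool of mirrors is just tacking another pair on as a new vdev, something like this (disk and pool names are placeholders):

    # Add another two-way mirror vdev to an existing pool of mirrors
    zpool add tank mirror da12 da13

    # Keep an eye on overall capacity for the 70-80% rule of thumb
    zpool list tank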

1 Like

My advice is don’t stress over benchmarks and just let it work.

5 Likes

Correct.

If you have inexplicable performance issues later, balance might be the issue. Striping is not a completely accurate description of what ZFS does; it’s more like load balancing. It writes records to whichever vdev is available first, so a vdev that is very full will be slower and will receive fewer writes. And if a vdev gets very full, it will start slowing things down, because writing even one record to it will take a long time.

At least that’s my understanding, and I believe I have experienced severe performance issues before that were attributable to vdev balance. I am not sure how to check balance, but there is probably a way to see individual vdev usage.

1 Like

something like

zpool iostat -v

and see if any figures are off/uneven?

Edit: you meant no way to check balance as in which vdev is more filled up, rather than which is performing better or worse.
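
For the fill-level side of it, per-vdev allocation is visible too, e.g. (pool name is a placeholder):

    # Capacity, allocation, and free space broken out per vdev
    zpool list -v tank

    # Per-vdev bandwidth/IOPS, sampled every 5 seconds
    zpool iostat -v tank 5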

1 Like

In version 0.7, they added a change that allows ZFS to switch over and start preferring the lowest-latency vdev when really stressed with certain types of writes (random, small-block, synchronous). Under typical circumstances, ZFS will prefer to redistribute writes across vdevs mostly evenly if it can, though I am also unclear how set in stone that is. There may be some sort of complicated formula involving latency, free space, and data evenness across vdevs at play, but good luck finding that needle.

And you are right that “striping” is incorrect to use with ZFS. It distributes blocks across vdevs in a way that is highly likely to destroy the soul of someone with perfectionism. You can have one vdev that’s 70% full, add another vdev, and ZFS says “this is perfect just the way it is” and continues on to fill them both evenly. The only way to “rebalance” is to destroy the pool and recreate it.

It should also be noted that all reads are limited by the slowest vdev, and each vdev is limited by the slowest device in the vdev. An SSD paired with an HDD as a mirror isn’t going to be very thrilling.

2 Likes

Do you mean ZoL 0.7?

My understanding came from Wendell’s interview with Allan Jude. He sort of mentions it in passing. Not sure if/what differences exist between the FreeBSD and Linux implementations, though.

Oops yeah, fixed.

1 Like

You can copy files, or zfs send|recv datasets, within an existing pool and then delete the old ones.
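
Something like this, assuming a dataset named tank/old that you want rewritten across the now-larger pool (all names here are placeholders):

    # Snapshot the dataset and rewrite it into a new dataset on the same pool
    zfs snapshot tank/old@rebalance
    zfs send tank/old@rebalance | zfs recv tank/old-new

    # After verifying the copy, drop the original and rename
    zfs destroy -r tank/old
    zfs rename tank/old-new tank/old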

1 Like

I want to start logging SMB events (successful object access, denied access, deletions, etc.).

Then use a REST API someone made for Splunking the data.
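
On FreeNAS that generally means Samba’s full_audit VFS module, set via the share’s auxiliary parameters. A rough sketch (the exact operation names and syslog facility vary by Samba version, so treat these as examples):

    vfs objects = full_audit
    full_audit:prefix = %u|%I|%S
    full_audit:success = mkdir rmdir rename unlink pread pwrite
    full_audit:failure = connect
    full_audit:facility = LOCAL5
    full_audit:priority = NOTICE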

But curious about this: