Yet another FreeNAS build...help!

Trouble is, if the LSI 9305-24i isn’t backwards compatible with SAS2, I would still need four PCIe slots for the GPU + 3x LSI 9211-8i’s.

Also, I’m not entirely sure ECC would work on that Asrock board.

Another issue is that there are no Aquantia drivers included with FreeBSD (afaik); a driver is being developed here: https://github.com/Aquantia/aqtion-freebsd

Oh shit, you’re right. I forgot.

I haven’t come across a board that doesn’t work with ECC on pure Ryzen CPUs.
(So, excluding the APUs. Nobody supports ECC on those.)


@noenken BTW thanks a lot for actively helping out, really appreciated.

It’s a port now. The driver won’t be included with FreeNAS until 12, but if you need it sooner you can build it yourself (I can help).
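
If you want to try it before then, it’s roughly the usual out-of-tree FreeBSD kernel module routine, something like this (a sketch only; check the repo’s README for the exact targets, and the module name is my assumption):

    # grab the in-development Aquantia driver and build it against your kernel (as root)
    git clone https://github.com/Aquantia/aqtion-freebsd.git
    cd aqtion-freebsd
    make                      # needs the FreeBSD kernel sources/headers installed
    make install
    kldload if_atlantic       # assumed module name; check what 'make install' actually produced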

The Norco has SFF-8087 Mini-SAS connectors and SATA/SAS 6G drive bays, so that’s SAS2.

The SAS3 card is probably overkill when your bottleneck is mechanical hard drives. For what you’re spending on this card you could get a chassis with a proper SAS backplane and get away with using a single SAS2 card. My server can have all 24 drives running off a 4i card.


I’m pretty sure SAS is backwards compatible, with both previous generations of SAS and with SATA.


When you say “proper SAS backplane”, I suppose you mean it would have a single connector, unlike the Norco, which has 1x SFF-8087 per 4 drives?

“Our backplane compatible with SAS-3 (12.0 Gb)”

That’s at least the Norco supplier’s response; not sure if that’s the actual truth though.

What’s the head unit you’re using BTW?

Hi there @SgtAwesomesauce @noenken

I’m thinking of upgrading my current vdev of 8x4TB drives to 8x10TB WD HDDs. The vdev is set up as RAID-Z2, so I can survive a maximum of 2 drives dying during a resilver.

All drives report good SMART health, with ~11,000 power on hours. These are WD Red NAS drives (5400 rpm).

What are my chances of killing my pool during the rebuild? The danger would be if the new resilvered drive dies (during resilvering) + 1 more drive dies within the pool.

FYI considering the volume of data, I don’t really have a full backup of my pool. That’s why I’m building a second system, but it won’t be online till early 2020.

Thoughts?

Wait, you want to switch from 8 drives to 8 bigger drives? Can’t you just copy the contents over?


I’m not making predictions about hard drives. Might totally work without a hitch or might implode and create a singularity that swallows the planet. I don’t know. ¯\_(ツ)_/¯
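
If you did have somewhere to put it, the copy itself is just a replication job, roughly like this (a sketch; the pool names are made up):

    # snapshot everything, then replicate the whole pool to a second pool
    zfs snapshot -r oldpool@migrate
    zfs send -R oldpool@migrate | zfs recv -F newpool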


Well that could work if I had another pool of 20TB around hehe…

Learned something valuable here: I can add a 9th drive (since I have 8) and replace the existing drives one by one. This way the pool is never in a degraded state, and even during the resilver process I retain 2-disk parity.

At the end, I’ll have 8 larger drives and 1 smaller drive; offlining and removing that last one is the final step!
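
Per drive, that would look roughly like this (placeholder pool/gptid names; I believe the FreeNAS UI’s Replace button does the same thing and handles the encryption side):

    # new 10TB sits in the spare bay; the old 4TB stays online during the copy
    zpool replace tank gptid/OLD-4TB gptid/NEW-10TB
    zpool status tank    # wait for the replace to finish before starting the next drive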

cc @freqlabs


Huh, didn’t know that.


There is no degraded resilver. If you’re doing a replace operation, it copies the data from the original drive to the replacement, then the pool will automatically expand to use all the space when you finish replacing the last one.

Basically, the array is never in a degraded state because it’s essentially just making a copy of a still-operational disk until the process is over, and that original disk stays in use until the replacement completes.
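
One thing worth checking before the last swap, if memory serves, is that the pool is actually allowed to grow (“tank” and the gptid are placeholder names):

    zpool get autoexpand tank    # should be 'on' for the pool to grow on its own
    zpool set autoexpand=on tank
    # if the extra space still doesn't show up after the last replace:
    zpool online -e tank gptid/NEW-10TB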


Also, if you are loony tunes like me, you can replace multiple disks simultaneously, no problem, as long as you’ve got slots for the drives.
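
i.e. just fire off several replaces and let them run in one pass (same placeholder names as above):

    zpool replace tank gptid/OLD-1 gptid/NEW-1
    zpool replace tank gptid/OLD-2 gptid/NEW-2    # both show up under 'replacing' vdevs in zpool status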


Rather shocking response. Earlier he claimed this was a SAS3 12Gb/s backplane, yeesh!

Now I’m trying to see if Supermicro Singapore can help. They need 3-days to respond to an email though :thinking:

Once all the drives in the vdev are replaced (there’s only 1 vdev in my pool), would the GELI keys be ‘re-keyed’ as well?

Will take a new backup of the key just in case.
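
i.e. grab a fresh copy of the key from the UI plus a shell-side backup, something like this (the /data/geli path is where I believe FreeNAS keeps the pool key, will double-check; device names are placeholders):

    # copy the pool's geli key file somewhere off-box
    cp /data/geli/*.key /mnt/backup-location/
    # back up the geli metadata of each encrypted member as well
    geli backup /dev/gptid/MEMBER-UUID /mnt/backup-location/MEMBER-UUID.geli-meta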

I’m not sure, tbh.


One last daft question: the original drives that made up the vdev prior to the replace operation, are they still a working ‘vdev’ once they’re removed? That is, if they were put into another FreeNAS host, would all the original data still be there and could that pool theoretically be imported again?

TL;DR: does the replace delete/kill the original pool, post-upgrade?

Reason I ask: it would save time and let me go @wendell’s route. Hear me out:

I have 8 more bays connected to a second LSI 9211-8i controller just sitting there; I could install the 8x10TB drives into those bays, then replace each 4TB with a 10TB and remove all the 4TBs.

HOWEVER, if any of the 10TBs start giving trouble, wouldn’t I effectively still have a fallback set on the 4TBs?

Caveat: if my guess about the GELI keys being replaced turns out to be right, then I don’t think my theory would hold up.

Only if the 10TB drives fail during the “resilver” (technically a replace) will it work the way you expect. If the resilver completes and then enough 10TB drives drop out of the array to cause a failure, the pool will be lost, though.

If the replace fails (i.e. before the sync completes), the original drives keep working as normal. The original drives will not be importable once they are marked as having been replaced out of the old pool.


Yeah, no harm in doing that.

I’d give each of the 10TB disks a burn-in first though, just to rule out SIDS.

dd if=/dev/zero of=/dev/disk/by-id/new10tbX bs=1M a couple of times would suffice.
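
If you want to hit all of them with a couple of passes, a quick loop does it (device names are placeholders for whatever the new disks show up as; this destroys anything on them, obviously):

    # two full write passes per new disk, then a SMART check for reallocated/pending sectors
    for disk in /dev/da8 /dev/da9 /dev/da10; do
        dd if=/dev/zero of="$disk" bs=1M
        dd if=/dev/zero of="$disk" bs=1M
        smartctl -a "$disk"
    done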
