Trouble is, if the LSI 9305-24i isn’t SAS2 backwards compatible, I’d still need four PCIe slots: one for the GPU plus three LSI 9211-8i’s.
Also, I’m not entirely sure ECC would work on that ASRock board.
Another issue is that FreeBSD doesn’t include an Aquantia driver (afaik); one is being developed here: https://github.com/Aquantia/aqtion-freebsd
The Norco has SFF-8087 mini-SAS connectors and SATA/SAS 6G drive bays; that’s SAS2.
The SAS3 card is probably overkill when your bottleneck is mechanical hard drives. For what you’re spending on this card you could get a chassis with a proper SAS backplane and get away with using a single SAS2 card. My server can have all 24 drives running off a 4i card.
I’m thinking of upgrading my current vdev of 8x4TB drives to 8x10TB WD HDDs. The vdev is set up as RAIDZ2, so I can survive at most two drives dying during a resilver.
All drives report good SMART health, with ~11,000 power on hours. These are WD Red NAS drives (5400 rpm).
What are my chances of killing my pool during the rebuild? The danger would be the freshly resilvered drive dying (during resilvering) plus one more drive failing within the pool.
FYI considering the volume of data, I don’t really have a full backup of my pool. That’s why I’m building a second system, but it won’t be online till early 2020.
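Just for scale, here’s a back-of-envelope estimate of the drive-failure risk. All the numbers are made up for illustration (3% annualized failure rate, a 24-hour resilver, independent failures, unrecoverable read errors ignored), and it assumes the worst case where the old drive is pulled first, so one of RAIDZ2’s two “lives” is already spent and two more failures among the remaining seven kill the pool:

```shell
# Rough odds of losing a RAIDZ2 pool mid-rebuild, worst case (old drive
# already removed). AFR, resilver time, and independence are assumptions.
awk 'BEGIN {
  afr = 0.03; hours = 24; n = 7
  p  = afr * hours / 8760             # per-drive failure prob during the window
  p0 = (1 - p)^n                      # no further failures
  p1 = n * p * (1 - p)^(n - 1)        # exactly one further failure (survivable)
  printf "P(pool loss) ~ %.6f%%\n", (1 - p0 - p1) * 100
}'
```

The result is a tiny fraction of a percent, and a replace with the old drive still attached keeps full redundancy, so the real risk is lower still. The bigger practical worry at 10TB is read errors during the rebuild, which this sketch deliberately ignores.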
Wait, you want to switch from 8 drives to 8 bigger drives? Can’t you just copy the contents over?
I’m not making predictions about hard drives. Might totally work without a hitch or might implode and create a singularity that swallows the planet. I don’t know. ¯\_(ツ)_/¯
There’s no degraded resilver. With a replace operation, ZFS copies the data from the original drive to the replacement, then the pool will automatically expand to use all the space when you finish replacing the last one.
Basically, the array is never in a degraded state, because until the process is over it’s just making a copy of a still-operational disk, and only the disk in question is involved.
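For the record, the in-place swap is just a loop of `zpool replace` with the old disk still attached. A minimal sketch of what that looks like; the pool name `tank` and the FreeBSD device names `da0`/`da8` are made up (check `zpool status` for yours), and the `run` wrapper only prints each command so nothing here touches a real pool:

```shell
run() { echo "+ $*"; }   # dry-run: prints instead of executing; drop it to run for real

run zpool set autoexpand=on tank      # let the pool grow after the last swap
run zpool replace tank da0 da8        # old 4TB da0 -> new 10TB da8, old disk stays attached
run zpool status tank                 # watch the resilver progress
```

Repeat the replace for each 4TB/10TB pair, waiting for each resilver to complete before starting the next; with `autoexpand=on` the extra capacity shows up once the last drive is swapped.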
One last daft question: are the original drives that made up the vdev (prior to the replace operation onto the new drives) still a working vdev afterwards? In other words, once they’re removed, if they were put into another FreeNAS host, would all the original data still be there, and could that pool theoretically be imported again?
TL;DR: does the replace delete/kill the original pool post-upgrade?
Reason I ask: it would save time while still going @wendell’s route. Hear me out -
I have 8 more bays connected to a second LSI 9211-8i controller just sitting there; I could install the 8x10TB drives into those bays, then replace each 4TB with a 10TB and remove all the 4TBs.
HOWEVER, if any of the 10TBs started giving trouble, wouldn’t I effectively still have a fallback set on the 4TBs?
Caveat: if my guess is right that the GELI keys get replaced, then I don’t think my theory holds up.
It will only work the way you expect if a 10TB drive fails during the “resilver” (technically a replace). If the resilver completes and enough 10TB drives then drop out of the array to cause a failure, the pool is lost, though.
If the replace fails before the sync completes, the original drives keep working as normal. But the original drives will not be importable once they are marked as having been replaced out of the old pool.