I am a bit confused… and sorry if this is turning into a ZFS tutorial…
I did the following:
zpool create tst draid1:3d:7c:1s sde sdf sdg sdh sdm sdn sdo
thinking it would create two draid1 vdevs of three disks each and leave one disk as a spare…
But I'm not sure that is actually what happened.
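If I read the zpool-create man page right, the spec breaks down like this:

# draid[<parity>][:<data>d][:<children>c][:<spares>s]
# draid1:3d:7c:1s =
#   draid1 -> one parity disk per redundancy group
#   3d     -> 3 data disks per redundancy group (so group width 3 + 1 = 4)
#   7c     -> 7 children, i.e. physical disks in this one vdev
#   1s     -> one disk's worth of spare capacity, spread over all 7 children

So, if I get it right, that is always one single vdev, with the redundancy groups living inside it…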
Here is what the pool looks like…
  pool: tst
 state: ONLINE
config:

        NAME                   STATE     READ WRITE CKSUM
        tst                    ONLINE       0     0     0
          draid1:3d:7c:1s-0    ONLINE       0     0     0
            sde                ONLINE       0     0     0
            sdf                ONLINE       0     0     0
            sdg                ONLINE       0     0     0
            sdh                ONLINE       0     0     0
            sdm                ONLINE       0     0     0
            sdn                ONLINE       0     0     0
            sdo                ONLINE       0     0     0
        spares
          draid1-0-0           AVAIL
And I do get the correct AVAIL…
NAME   USED  AVAIL  REFER  MOUNTPOINT
tst   1.09M  71.2T   279K  /tst
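A quick back-of-the-envelope check (ignoring slop and metadata overhead):

# usable data capacity ≈ (children - spares) * data / (data + parity)
# (7 - 1) * 3 / (3 + 1) = 4.5 disks' worth of data

which lines up with the AVAIL I see here.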
Is it just me, or is the view a bit strange…
I would have liked it like this…
  pool: tst
 state: ONLINE
config:

        NAME                   STATE     READ WRITE CKSUM
        tst                    ONLINE       0     0     0
          draid1:3d:7c:1s-0    ONLINE       0     0     0
            sde                ONLINE       0     0     0
            sdf                ONLINE       0     0     0
            sdg                ONLINE       0     0     0
          draid1:3d:7c:1s-1    ONLINE       0     0     0
            sdh                ONLINE       0     0     0
            sdm                ONLINE       0     0     0
            sdn                ONLINE       0     0     0
            sdo                ONLINE       0     0     0
        spares
          draid1-0-0           AVAIL
But I can also see the “issue” that the number of disks doesn't make sense that way… and you can't list a single physical disk under the spares either, because the spare capacity is distributed across all of them… so the more I think about it, the more I kinda get it.
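And if I understand the docs right, the distributed spare gets used like a normal spare, just addressed by its draid1-0-0 name (the replacement disk name below is made up):

# say sde fails: rebuild onto the distributed spare capacity
zpool replace tst sde draid1-0-0
# once a new physical disk (sdx here, hypothetical) is installed,
# replace the dead disk for real, which frees up draid1-0-0 again
zpool replace tst sde sdx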
So for my large pool it would be something like
zpool create toolarge draid2:12d:24c:1s
But I think I am running into the issue that this way the “vdevs”, or redundancy
groups as it were, have to have an equal number of disks, and that won't fly if I want one distributed spare…
So I think I may be forced to go with two distributed spares, or one classic hot spare…
zpool create toolarge draid2:11d:24c:2s
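The one-classic-hot-spare variant would then, I assume, be 23 disks in the vdev plus one spare on the side (placeholders, not real device names):

zpool create toolarge draid2:11d:23c <23 disks…> spare sdx

Though from what I've read, the distributed spare is kind of the point of dRAID: every disk takes part in the rebuild, so resilvering onto draid2-0-0 should be much faster than onto a single hot spare.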
Redundancy is the enemy of capacity, I guess.