I know I just posted about ZFS, but I figured it’d be easier to ask a different question on a separate post…
Is there such a thing as column size when creating a mirrored ZFS pool?
zpool create poolname mirror disk1 disk2 mirror disk3 disk4 mirror disk5 disk6 mirror disk7 disk8
zpool create poolname mirror disk1 disk2 disk3 disk4 mirror disk5 disk6 disk7 disk8
I came from Windows Storage Spaces, and it does matter there, because it dictates how many drives you must add to your pool when you want to expand.
A 2 column mirror would consist of 4 disks, and you’d have to add 4 disks every time.
- Does the difference between the two sample commands above matter?
- Does ZFS allow me to add simply 2 disks each time when expanding a mirror?
Wouldn’t the difference be that
- First example has four vdevs of two-way mirrored drives: disks 1+2 are mirrored, 3+4, 5+6, 7+8; and
- Second example has two vdevs of four-way mirrored drives: disks 1+2+3+4 are all mirrored to each other, and give an available capacity of one drive, as are 5+6+7+8. (Overkill if you ask me. If the data is important enough to warrant 4-way mirroring, you should instead/also have three mirrored servers in a cluster, in two different geographical locations.)
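To make the layout difference concrete, here's roughly how `zpool status` would group the disks for each command (a sketch; exact output formatting varies by ZFS version):

```shell
# First command -> four two-way mirror vdevs, striped together:
#   mirror-0: disk1 disk2
#   mirror-1: disk3 disk4
#   mirror-2: disk5 disk6
#   mirror-3: disk7 disk8
# Usable capacity: ~4 disks' worth; each vdev survives 1 disk failure.
#
# Second command -> two four-way mirror vdevs, striped together:
#   mirror-0: disk1 disk2 disk3 disk4
#   mirror-1: disk5 disk6 disk7 disk8
# Usable capacity: ~2 disks' worth; each vdev survives 3 disk failures.
```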
As to the actual question: Vdevs in a zpool need not be of the same type. You can add a set of mirrored drives, or a set in RAID-Zn (or even a standalone drive - but don’t do that except in strictly test environments) to a zpool, regardless of what types of vdevs already exist in it. It’s recommended to use similar vdevs in the same zpool, though.
You need to make good decisions when adding new vdevs: you can add any kind at any time, but removing them is much more restricted. Historically you could never remove a top-level vdev at all; newer OpenZFS (0.8 and later) can evacuate and remove mirror or single-disk vdevs with `zpool remove`, but RAID-Z vdevs still can’t be removed. To shrink a pool beyond that, you need to migrate all data to a whole new zpool.
Thank you for the response, I’m glad you corrected me before I even started.
I did not know that 1+2+3+4 meant the drives are all exact copies of each other. I’ve seen videos and examples online, but there was no direct explanation of how the data is spread between drives…
It would’ve sucked to set everything up only to discover that I had 1/4 of the capacity I wanted!
In general, whenever you’re faced with a dilemma like this, you can make some sparse files, set them up as loopback block devices, and play around with zpool and ZFS commands on those loopback devices.
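For example, a throwaway sandbox might look like this (a sketch; requires root and ZFS installed, and the `/dev/loopN` names are assumptions - use whatever `losetup` actually returns):

```shell
# Make four 1 GiB sparse files to stand in for real disks
mkdir -p /tmp/zfs-test
truncate -s 1G /tmp/zfs-test/disk{1..4}.img

# Attach each file as a loopback block device (prints the device name)
for f in /tmp/zfs-test/disk*.img; do losetup -f --show "$f"; done

# Assuming they came back as /dev/loop0../dev/loop3,
# recreate the "four disks, two mirrors" layout from the question:
zpool create testpool mirror /dev/loop0 /dev/loop1 mirror /dev/loop2 /dev/loop3

zpool status testpool   # inspect the vdev layout
zpool list testpool     # ~2 GiB usable: two 1 GiB mirrors striped

# Tear everything down when done experimenting
zpool destroy testpool
losetup -d /dev/loop{0..3}
rm -r /tmp/zfs-test
```

Because it’s all sparse files, you can destroy the pool and try a different layout in seconds without touching real hardware.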
You can expand a ZFS pool by either
- adding a new VDEV
- replacing all the disks in an existing VDEV (with bigger ones)
- one by one replacing all the disks in all VDEVs (same as above but for every VDEV)
so, if you’re running say 2x mirror vdevs, you can get more space by either replacing the drives in one mirror with bigger ones, or adding 2 new drives as another mirror, or 3 new drives as a RAIDz VDEV, etc.
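In commands, those expansion options look roughly like this (a sketch; "tank" and the `/dev/sdX` names are placeholders for your pool and disks):

```shell
# Option 1: grow the pool by adding a third mirror vdev (just 2 new disks)
zpool add tank mirror /dev/sde /dev/sdf

# Option 2: grow an existing mirror by swapping its disks for bigger ones,
# one at a time, letting each resilver finish before the next replace
zpool set autoexpand=on tank
zpool replace tank /dev/sda /dev/sdg   # watch zpool status for resilver
zpool replace tank /dev/sdb /dev/sdh   # once both are replaced, the vdev grows
```

Note the extra space only appears after every disk in that vdev has been replaced (and with `autoexpand=on`, or a manual `zpool online -e`).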