I was working at iXsystems (the primary developers of FreeNAS) a few years ago, and they liked to have at least two drives' worth of redundancy in any commercial deployment. Where raidz2 was not cutting it, i.e. for people using a large disk array instead of an SSD to back a database or the boot drives of VMs, they would build pools of mirrors.
ZFS handles mirrors differently than most hardware mirrors. Write events go to every drive in the mirror, but each read event only has to go to a single drive, so in practice a mirror serves reads much faster than a striped array of the same number of drives.
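If you want to see this for yourself, a quick sketch on a FreeBSD test box (the da0..da2 device names are placeholders for throwaway disks) is to build a small mirror and watch the per-disk counters while reading from it:

    # create a throwaway three-way mirror (destroys the contents of those disks)
    zpool create testpool mirror da0 da1 da2
    # watch per-device activity once per second; reads spread across all three
    # members, while every write lands on all of them
    zpool iostat -v testpool 1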
They put 3 to 5 drives in each mirror, then made a pool of those vdevs, often up to 32 vdevs in a pool, plus a bunch of hot spares and some flash caching drives. An array like that would have the usual HA (High Availability) redundancy, i.e. 2 motherboards (hosts) in an HA chassis, 2 controller cards per host, and 2 data paths all the way to every drive in each drive shelf. Each motherboard can bring 32 SAS channels to bear on the drives. Write events would consume a lot of channels, but the amazing part was the read events. A read event goes to a drive that has the data, and with a mirrored vdev, every drive in the vdev can be the drive with the data.
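Scaled way down, a pool laid out that way looks roughly like the sketch below (the FreeBSD-style device names, daN for spinning disks and nvdN for NVMe flash, are placeholders; the production pools had far more mirror vdevs):

    # three 3-way mirror vdevs, a few hot spares, and flash for L2ARC read cache
    zpool create tank \
        mirror da0 da1 da2 \
        mirror da3 da4 da5 \
        mirror da6 da7 da8 \
        spare da9 da10 da11 \
        cache nvd0 nvd1
    # confirm the layout
    zpool status tank

ZFS stripes writes across the mirror vdevs, so adding more vdevs adds both capacity and throughput.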
When you look at how much hardware is committed to making these systems highly available and performant, it just does not make financial sense to build the vdevs as raidz arrays instead of mirrors. With 3+ drive mirrors you can lose a hard drive and rebuild the vdev while it still serves reads at full speed. And because there are many vdevs in the pool, ZFS will direct write events to mirrored vdevs that are not busy resilvering, so the pool as a whole stays performant.
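As a rough sketch of what a failure looks like in practice (da4 and da12 are placeholder names for the dead disk and its replacement):

    # swap a failed mirror member for a fresh disk and let it resilver
    zpool replace tank da4 da12
    # watch resilver progress; the remaining members keep serving reads the whole time
    zpool status -v tank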
If you are going to have several pools per HA pair, you might as well spread your mirrored vdevs across the disk shelves, so that if a shelf gets lost because someone drops it down a flight of stairs while moving it to a different rack, you don't lose any data.
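One way to arrange that (a sketch, assuming you have labeled each disk by shelf and bay, e.g. with glabel on FreeBSD) is to build each mirror from one disk per shelf, so losing an entire shelf only costs each mirror a single member:

    # each mirror vdev takes one disk from each of three shelves
    # the label/shelfN-bayN names are hypothetical; use whatever labeling scheme you have
    zpool create tank \
        mirror label/shelf0-bay0 label/shelf1-bay0 label/shelf2-bay0 \
        mirror label/shelf0-bay1 label/shelf1-bay1 label/shelf2-bay1 \
        mirror label/shelf0-bay2 label/shelf1-bay2 label/shelf2-bay2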
Also every disk shelf should have at least 3 hot spares of every drive type that it contains upon deployment. The hot spares can temporarily decrease as drives die and get RMAed, but if you go down to 1 or 0, you should buy some more drives to add to that enclosure (or keep nearby to add) so that you maintain a safe number of hot spares.
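Topping the spares back up is just a matter of attaching them to the pool (da20..da22 are placeholders). On FreeBSD, zfsd will pull a spare in automatically when a member faults, and the autoreplace pool property lets a fresh disk inserted into a failed disk's slot take over on its own:

    # add three hot spares of the matching drive type to the pool
    zpool add tank spare da20 da21 da22
    # let a new disk in a failed disk's physical slot be used automatically
    zpool set autoreplace=on tank
    # spares show up in their own section of the status output
    zpool status tank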