I didn’t vote…
What is your requirement for uptime? What are your performance expectations (i.e. what are you planning to use this storage for? VMs? DBs? Live file store? Backup archive?)
All the VDEV types have their strong points…
Well here’s the thing: you still need to monitor for and respond to drive failures. Hot spares don’t get you off the hook for that - otherwise 3 failed drives will eventually take you out.
If a drive fails you still need to replace it to maintain your resiliency goal; hot spares just buy you a little more time to get to it, as the hot-spare drive looks after you between the first failure and any further failures whilst you source a replacement. You’d still want to replace the failed drive as soon as you can - you can just sleep easier whilst waiting for the new drive to arrive or be purchased.
So given that… if you can swap in a cold spare within 4-5 days using hot-swap drive bays, i’d be tempted to forego the hot spares and use them for more VDEVs. (I say 4-5 days as Netapp have been taking a week to get disks to me lately and they don’t seem too fussed, given it’s RAID-DP with a single hot spare per aggregate.)
I.e., i’d maybe consider 2x 6-drive RAIDZ2 with zero hot spares (maybe keep a spare drive on hand for responding to a failure). With 2x RAIDZ2 you could sustain 4 failures with no loss (vs. 2 for the single RAIDZ2), provided they were evenly distributed between both VDEVs. Which is a bet… but if we assume failures are equally likely to hit any drive, it’s a bet i’d be willing to take, especially as i’d be planning to replace failed drives ASAP anyway. You could still have all the failures land in one VDEV, but… stats/probability/etc (rough odds sketched below)… i’d risk it. But again, it depends on your risk profile.
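For a rough feel of that bet, here’s a quick back-of-envelope sketch in Python (my own toy model, not vendor data: 12 drives in 2x 6-wide RAIDZ2, 3 drives fail before any can be replaced, each drive equally likely to be one of them):

```python
from math import comb

# Toy model only: 12 drives split into 2x 6-wide RAIDZ2, and 3 drives fail
# before any can be replaced; each drive assumed equally likely to fail.
drives, vdev_width, vdevs = 12, 6, 2
failures = 3

total_ways = comb(drives, failures)            # ways to pick which 3 drives fail
bad_ways = vdevs * comb(vdev_width, failures)  # ways all 3 land in the same VDEV

print(f"P(3 failures in one 6-wide RAIDZ2 VDEV) = {bad_ways / total_ways:.1%}")
# ~18% in this model - whereas a single RAIDZ2 across all 12 drives
# loses the pool on *any* third failure.
```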
Yeah, it’s a bit more of a risk, but i think the 2x IOPS performance is worth it, and you should have backups anyway. For the number of disks involved, so long as they’re CMR, i think cold spares and 2x RAIDZ2 would be plenty of resiliency.
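On the 2x IOPS point: the usual rule of thumb (and it is only a rule of thumb - i’m assuming typical 7200 RPM CMR drives, not benchmarking yours) is that each RAIDZ VDEV delivers roughly the random IOPS of a single member drive, so random IOPS scales with VDEV count:

```python
# Rule of thumb: each RAIDZ VDEV ~ one drive's worth of random IOPS.
per_drive_iops = 150  # assumed figure for a typical 7200 RPM CMR drive

layouts = {"1x 12-wide RAIDZ2": 1, "2x 6-wide RAIDZ2": 2}
for name, vdev_count in layouts.items():
    print(f"{name}: ~{vdev_count * per_drive_iops} random IOPS")
```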
I agree with avoiding RAIDZ1 (unless this is test/play storage with an acceptable restore window, or otherwise not mission-critical) - 16 TB drives are way too big and take too long to rebuild for that. Tis why i didn’t vote - i’m always for getting as many VDEVs as you can within an acceptable fault-tolerance objective - but RAIDZ1 with huge modern drives is skirting the bounds of pointlessness.
Enterprise array vendors were already advising against single-parity RAID for drives over 1 TB some 15 years ago, due to the rebuild time required and the perceived risk of a second disk failure during the rebuild - even at 1 TB. Enterprise array vendors are very risk averse, so take that with a pinch of salt, but the drives you’re using are 16x that size.
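To put rough numbers on the rebuild-time point (my own assumptions, not vendor figures): even a best-case, flat-out sequential rewrite of a 16 TB drive takes most of a day, and real resilvers on a pool that’s still serving I/O are usually much slower:

```python
# Assumptions: whole 16 TB rewritten sequentially at ~200 MB/s average,
# with no competing load. Real-world resilvers often take days.
drive_tb = 16
avg_mb_per_s = 200

hours = (drive_tb * 1e12) / (avg_mb_per_s * 1e6) / 3600
print(f"Best-case resilver of a {drive_tb} TB drive: ~{hours:.0f} hours")
# ~22 hours at the absolute floor - and that whole window is exposure time
# for another failure in the same VDEV (fatal on RAIDZ1, survivable on RAIDZ2).
```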
All that said, i currently have an OLD out-of-support Netapp doing non-critical temporary storage for file restores, and it’s had a single failed disk (i think a 15k RPM 600 GB SAS drive) in one of its RAID-DP (think raid6/raidz2) aggregates for 4+ years now…
It’s still ticking (if it dies i’ll re-create the aggregate with fewer drives; it’s just scratch space).
I do not suggest doing that, but as above, it really depends on how mission-critical the storage is and how fast you can respond…