Reading this a few days too late, but given you have no redundancy currently, if it was me I’d do the following.
Use the six new drives plus two virtual block devices (sparse files work fine for this) to make a raidz2 that is effectively 8 devices wide.
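A rough sketch of that first step, assuming the six new drives are /dev/sdc through /dev/sdh and the new pool is called tank (every device name, pool name and file path here is a placeholder, and by-id paths are safer than sdX letters in practice):

    # Size the stand-in files exactly like a real disk; if the files
    # end up even slightly bigger, the later "zpool replace" onto real
    # 12TB drives would refuse with "device is too small"
    SIZE=$(blockdev --getsize64 /dev/sdc)
    truncate -s "$SIZE" /var/tmp/fake1.img /var/tmp/fake2.img

    # 8-wide raidz2: six real disks plus the two sparse files
    # (add -f if zpool objects to mixing disks and files)
    zpool create tank raidz2 \
        /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh \
        /var/tmp/fake1.img /var/tmp/fake2.img

The sparse files cost almost no real space, since nothing gets written to them before they’re offlined.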
Then offline the two virtual devices so the array is degraded but running on only the six physical new drives.
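Continuing the sketch, take the file-backed devices offline before any real data lands on them:

    # Pool drops to DEGRADED but keeps running on the six real drives
    zpool offline tank /var/tmp/fake1.img
    zpool offline tank /var/tmp/fake2.img
    zpool status tank   # the two file vdevs should now show OFFLINE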
Copy the data from the three-drive pool onto the degraded array (zfs send/recv).
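For the copy, something like this, assuming the old pool is named oldpool:

    # Snapshot the whole hierarchy, then replicate it into the
    # degraded new pool (-F is safe here, tank is still empty)
    zfs snapshot -r oldpool@migrate
    zfs send -R oldpool@migrate | zfs recv -F tank

    # Sanity-check everything arrived before touching the old pool
    zfs list -r tank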
Destroy the 3-drive pool, then use two of those drives to replace the virtual devices in the degraded array, and let it resilver up to the full 8-drive raidz2.
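Roughly, with the same placeholder names (the freed 12T drives assumed to appear as /dev/sdi and /dev/sdj):

    # Only after verifying the copy: retire the old pool
    zpool destroy oldpool

    # Swap two of the freed drives in for the offline file vdevs;
    # ZFS resilvers onto them
    zpool replace tank /var/tmp/fake1.img /dev/sdi
    zpool replace tank /var/tmp/fake2.img /dev/sdj

    # Watch progress; the pool returns to ONLINE when done
    zpool status tank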
Assign the last remaining 12TB drive as a hot spare.
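And the last step, with the remaining old drive assumed to be /dev/sdk:

    # Once the resilver completes, the last drive becomes a hot spare
    zpool add tank spare /dev/sdk

    # The stand-in files are no longer referenced and can go
    rm /var/tmp/fake1.img /var/tmp/fake2.img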
End result is 72TB of very redundant storage.
But then you’re trusting all six brand-new drives to function flawlessly through the rebuild process, compared to “only” three tried-and-tested drives needing to keep working through the send/recv to avoid losing data.
I guess it all depends on how valuable the data is and what risks one is willing to take. Interesting option!
That’s why I said “if it was me”… I can’t make the call for the OP, just providing an option.
The initial copy to the degraded array is safe as they still have a copy on the original set of drives.
The OP clearly has concerns about the new drives, even after zeroing them - so it comes down to how much risk the data can tolerate. They did say it’s not worth $10k, so maybe creating a degraded raidz2 is an acceptable level of risk.
An alternative might be to buy one more 12T drive and degrade the raidz2 by only a single disk (which preserves one disk’s worth of redundancy throughout), then at the end, instead of a hot spare, use the two leftover 12T drives as a mirror pair.
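That variant would look roughly like this (same placeholder naming as the earlier sketches; one stand-in file instead of two, and the leftovers become a mirror vdev rather than a spare):

    # 7 real disks + 1 sparse file, so one disk of redundancy remains
    truncate -s "$SIZE" /var/tmp/fake1.img
    zpool create tank raidz2 \
        /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi \
        /var/tmp/fake1.img
    zpool offline tank /var/tmp/fake1.img

    # ...send/recv and destroy oldpool as above, then:
    zpool replace tank /var/tmp/fake1.img /dev/sdj

    # Add the two leftover 12T drives as a mirror vdev; -f is needed
    # because zpool warns about mixing mirror and raidz in one pool
    zpool add -f tank mirror /dev/sdk /dev/sdl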
For a commercial situation, no way I’d recommend it; this is “home user with losable data only” type stuff (no family photos, tax documents, musical compositions, kids’ school work etc.).
(and yes, I have created a degraded raidz1 array to migrate onto before).
OP - if it fits your tolerance for risk, it’s something like the sketches above; obviously tweak depending on your host’s situation.