Two of the major design points about ZFS that trip people up are:
- It DOES NOT go back and retroactively change things when a setting is changed. ZFS writes data once, and then leaves it the fuck alone forever.
- Certain settings are pool-wide or dataset-wide in scope; some can never be changed after creation, and others can be changed but only affect data written afterward.
Meaning the only way to make certain adjustments is to create a new pool/dataset and copy things over (in some cases, you DON'T want to use ZFS send, as it will preserve some of the original settings). This is fine for enterprise, where replacing a 4U rack of drives is the quickest and therefore cheapest solution to making major changes, and the admin already knows what to do and not to do.
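A quick sketch of what this looks like in practice (pool and dataset names like tank/data are made up for illustration, and these commands need root plus an actual ZFS pool):

```shell
# Compression can be changed at any time, but ONLY blocks written
# after the change use the new algorithm -- existing blocks are
# left exactly as they were originally written:
zfs set compression=zstd tank/data

# ashift (sector-size alignment) is the classic immutable example:
# it is fixed per vdev at pool creation time and can never be
# changed afterward, so it has to be right up front:
zpool create -o ashift=12 tank mirror sda sdb

# To get OLD data rewritten under new settings, copy it into a
# fresh dataset. Note that a replication stream (zfs send -R) or
# zfs send -p carries the source dataset's properties along with
# it, which may be exactly what you're trying to get away from --
# a plain file-level copy sidesteps that:
zfs create tank/data-new
rsync -a /tank/data/ /tank/data-new/
```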
But a home NAS guy who barely knows the command line, or doesn't yet understand that redundancy is not a backup, can encounter severe problems if he doesn't know where to hang out to learn what not to do.
The greatest risk of data loss in ZFS is the admin himself. I love ZFS, but I find it difficult to recommend without being able to judge a person's ability and willingness to put in the time required to get safely squared away. It's a bit like a gun or a lathe: it can be a wonderful tool, or it can destroy something precious, typically (but not always) through personal negligence.