ZFS Space Usage w/ESXi

I have a ZFS on Linux pool that I use for VM storage on ESXi via NFS. I’ve been testing different dataset settings on this pool for performance, but I’ve noticed that when I migrate VMs between datasets with Storage vMotion, the space the disks consume on the pool balloons. For example, a 1.3 TB disk with about 1.1 TB in use suddenly takes up 2.1 TB on the pool, and a 2 TB disk is now using 4 TB. Both happened right after migrating from one dataset to another through VMware.
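For reference, this is roughly how I’ve been checking usage on the ZFS side (the pool/dataset/VM names below are placeholders for my actual ones):

```
# Break down where the space is going: live data vs. snapshots vs.
# reservations vs. child datasets
zfs list -o space tank/vmstore

# Compare physical usage against the logical bytes written
# (logicalused ignores compression and allocation overhead)
zfs get used,logicalused,referenced,logicalreferenced,compressratio tank/vmstore

# On the NFS export itself: apparent file size vs. blocks actually consumed
du -h --apparent-size /tank/vmstore/myvm/myvm-flat.vmdk
du -h /tank/vmstore/myvm/myvm-flat.vmdk
```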

I’m not sure what’s going on here. I thought it might have something to do with the ZFS recordsize, but it doesn’t seem to matter what it’s set to on the dataset, since VMware always reports the NFS datastore as having a 4K block size.
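For completeness, this is the kind of recordsize tuning I’ve been doing (again, tank/vmstore is a placeholder, and 64K is just one of the values I’ve tried):

```
# Check the current recordsize on the dataset
zfs get recordsize tank/vmstore

# Change it; this only affects blocks written from now on, so existing
# VMDKs keep whatever block size they were originally written with
zfs set recordsize=64K tank/vmstore
```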

I’m in the process of testing a ZVOL exported over iSCSI to see whether migrating to/from that type of datastore corrects the disk file sizes. But even there, the ZVOL is already reporting 2x the space usage compared to what VMware says it has written during the vMotion. I usually keep plenty of spare space on the pool, but I’m going to need every bit of it if this continues…
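This is what I’m comparing on the ZVOL side (tank/iscsi-vol is a placeholder name):

```
# volblocksize is fixed at creation time; a mismatch with the pool's
# raidz/ashift layout can add allocation overhead
zfs get volsize,volblocksize tank/iscsi-vol

# Physical space charged to the zvol vs. logical bytes written.
# Note: a non-sparse zvol carries a refreservation equal to volsize,
# which also shows up in its space accounting.
zfs get used,logicalused,referenced,refreservation,compressratio tank/iscsi-vol
```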

Any ideas what might be going on here?