How likely is it to lose an encrypted ZFS dataset?

Assuming the keyfile is still present, how likely is it to lose an encrypted dataset to, for example, bitrot on the target drive?
With the ever-increasing surveillance state, I am increasingly paranoid about someone busting my door down, claiming it's a search warrant for illegal material, and finding old backups of since-lost media, resulting in heavy fines and no recourse. I'm in the process of switching to ZFS with native encryption, at least until I can move away from society and live out in the woods or some such.

It shouldn't really be any different from an unencrypted dataset. ZFS creates the checksum for each block after encryption, so bitrot gets detected and corrected all the same.
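
You can see that both are in play on any encrypted dataset; this is just a sketch with a made-up dataset name:

    # checksums and encryption are independent dataset properties and coexist
    # ("tank/secure" is a made-up dataset name)
    zfs get checksum,encryption tank/secure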

2 Likes

Bitrot/corruption is detected as standard, but one needs more than a single copy* of a file to heal the rot.
Otherwise ZFS will just report the dead file (or block).

Also, I don’t use ZFS native encryption, but doesn’t it store some metadata/headers/file names in an unencrypted way?
As in, data is encrypted, but not totally anonymous?

I thought, though I could be wrong, that you might want to make an encrypted container on each drive, then make a pool out of the containers, for the contents to be fully secret?
Like with GELI or LUKS or whatever?

Would like to know that one

*as in, one needs a mirrored pair or a raidz array to self-heal.
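
A rough sketch of what that footnote means in practice, with made-up disk and dataset names:

    # a mirrored pair gives ZFS a second copy of every block to heal from
    zpool create tank mirror /dev/sda /dev/sdb

    # on a single disk, copies=2 keeps two copies of each block within the
    # dataset; that can heal bitrot, but not a dead drive
    zfs set copies=2 tank/important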

1 Like

I think it's only the dataset's name and properties which are stored unencrypted, so that if you export and re-attach it, it's possible to read its properties.

All the data in it should be fully encrypted, as far as my experiments have shown, but I could be wrong.
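
You can check that quickly yourself; the following is only a sketch with made-up pool/dataset names and keyfile path:

    # create an encrypted dataset (names and keyfile path are made up)
    zfs create -o encryption=aes-256-gcm -o keyformat=raw \
        -o keylocation=file:///root/tank.key tank/secure

    # unload the key; the file data is now unreadable...
    zfs unmount tank/secure
    zfs unload-key tank/secure

    # ...but the dataset name and its properties are still plainly visible
    zfs get encryption,keystatus,keylocation,mountpoint tank/secure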

IMO the bigger factor here should be the host and how encryption keys are stored. If someone busts down the door, it’s more likely that the operating system which hosts the array and stores the keys will be their way in.
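
One way to hedge against that, assuming you can live with typing a passphrase after boot, is to not keep the key on the host at all (made-up dataset name below):

    # prompt for a passphrase instead of reading a keyfile off the disk
    zfs create -o encryption=on -o keyformat=passphrase \
        -o keylocation=prompt tank/secure

    # after a reboot the dataset stays locked until the passphrase is entered
    zfs load-key tank/secure && zfs mount tank/secure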

2 Likes

Thanks, that is a great relief!

1 Like

Oh, and losing a whole dataset is very easy through user error; one might fat-finger pruning snapshots and add a space before the @ (it should give an error, but it might destroy regardless), or one could fat-finger a replication the wrong way and write an old backup over a good current copy.
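
Roughly what those fat-fingers look like, with made-up names (don't run these):

    # intended: destroy only the old snapshot
    zfs destroy -r tank/media@old

    # fat-fingered space before the @: the target is now the dataset itself,
    # and -r takes its children and snapshots with it (current versions should
    # reject the stray extra argument, but don't bet your data on it)
    zfs destroy -r tank/media @old

    # replication the wrong way: -F rolls the destination back and the stale
    # backup overwrites the good current copy
    zfs send backup/media@old | zfs recv -F tank/media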

But backups should still be done.

Especially if the main drive is a single disk, saving a backup gives a much better chance of recovering dead/corrupt/rotten files.

That’s a pretty good point. I guess I should have a secondary encrypted drive with keys to unlock the boot drive.
Redundancy is really expensive and it's not mission-critical data, but the plan is to eventually have manual redundancy (copies of data on multiple drives), starting with the more important backups first. Just waiting for spinning rust prices to dip again.

From zfs-change-key.8 — OpenZFS documentation

Enabling the encryption feature allows for the creation of encrypted filesystems and volumes. ZFS will encrypt file and volume data, file attributes, ACLs, permission bits, directory listings, FUID mappings, and userused / groupused data. ZFS will not encrypt metadata related to the pool structure, including dataset and snapshot names, dataset hierarchy, properties, file size, file holes, and deduplication tables (though the deduplicated data itself is encrypted).

So dedup could potentially leak information. You also don’t want to name your dataset “highly_illegal_secrets_password_is_anon123”.

So yes, if you are trying to dodge government-level threats that are willing to target you specifically, you're gonna need something else or take extra precautions.

3 Likes

I'll name my datasets appropriately 🙂
Dedup is a big nope; every time I've enabled it, it has had so many problems with performance and reliability. Encryption sounds like it should be fine; the big concern with it for me is the possibility of losing all my data very suddenly and very unrecoverably.

3 Likes

Dedup is a big nope; every time I've enabled it, it has had so many problems with performance and reliability.

Agreed. The Deduplication Revolution and its consequences have been a disaster for the OpenZFS community.

Properties would presumably also include mountpoint, so one should be careful with naming?

Like /home/trooperish/tax/offshoreassets/unreported

Or /zpoolname/share/stolen-secret-Tesla-plans/
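
Even with every key unloaded, something like this (made-up pool and dataset names) will happily print the full dataset tree and mountpoints:

    # dataset names and mountpoints are pool metadata, not encrypted payload
    zfs list -r -o name,mountpoint,encryption,keystatus tank

    # a neutral name and mountpoint leak a lot less
    zfs rename tank/share/stolen-secret-Tesla-plans tank/share/ds042
    zfs set mountpoint=/srv/ds042 tank/share/ds042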

3 Likes

That's a good point. Yeah, that will be apparent. It will also show what kind of encryption you are using, so if you use aes-256-gcm, it'll show that.

2 Likes

Another thing is that the data is only encrypted at rest on the disk. Once in memory it is not encrypted, from my understanding. So you could send and receive datasets to a friendly government-hosted backup server and be as fine as if they grabbed your disks, but if for some reason you mounted the dataset in a VM on their server, well you just fucked up in multiple ways. I know there are ways to mitigate this, but you’d have to trust those mitigations to actually be active.
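
For the government-hosted backup server case specifically, raw sends help: the stream stays encrypted end to end and the receiving pool never needs your key. A sketch with made-up names and host:

    # -w / --raw sends the already-encrypted blocks as-is; the remote side can
    # store and scrub the dataset but cannot mount it without the key
    zfs snapshot tank/secure@2024-01-01
    zfs send -w tank/secure@2024-01-01 | ssh backuphost zfs recv backuppool/secure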

There's also probably some theoretical way for them to get what they need from your still-running computer when they kick down the door and plug in their HackLinuxUSB stick.

2 Likes

I see a big drop in performance for large file transfers with deduplication, but day-to-day operation is more or less the same (albeit with a 6-core/12-thread CPU). I've found that if I enable dedup when I first make the pool, turn it off for the data migration, and then turn it back on after the data is all local to the TrueNAS, it runs OK.
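
For reference, dedup is a per-dataset property and it only affects data written while it's on, so toggling it around a migration leaves a mix of deduped and non-deduped blocks; something like this (made-up names) shows whether it's actually paying off:

    # dedup only applies to blocks written while the property is on
    zfs set dedup=on tank/vms
    zfs set dedup=off tank/vms

    # overall ratio for the pool; close to 1.00x means it isn't helping
    zpool get dedupratio tank

    # summary of the dedup table, the thing that eats RAM and hurts performance
    zdb -D tank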

I don’t think this is recommended, however. What are the reliability issues? I don’t like the sound of that.

Locking up completely in large read or write operations with dedup enabled, basically locking the disk with zero I/O usage.
Browsing the files and accessing anything under 10 GB in size works great, but when you have disk images that are 30 or 60 or 100+ GB compressed, things get a bit messy. It could be the WD Blue (64 MB cache) drive contributing to that, but I'm not sure if I can trust ZFS with dedup on.

Then again, I haven't tested long reads with ZFS very thoroughly. Everything's been writing to ZFS, never reading from it. I should probably have done more reliability testing before migrating all my unimportant hoarded data to it. Guess I'll do that next.

Also, my understanding is that if you copy a large amount of data with dedup off and then turn it on, all that data is just… not deduplicated, making it pointless. I'd imagine deduplication is great for running 5 instances of a largely identical OS, but for data hoarding, not so much.

Not if the rotten files are written over your backups (i.e. backing up bit-rotten files). You could go to previous backups to recover them, but if the bit rot goes back long enough, i.e. if you don't notice the rotten file in time, your old backup may not save you, because it may already contain the bad data.

Healing only works locally. How it works is that ZFS scrubs a file or block and verifies its checksum (there's a sketch of the commands after the list); if it doesn't match, you have two scenarios:

  • you are not using a redundant array (mirror or parity), so the data is just marked as corrupted and you get no healing.
  • you are using a redundant array, so healing works either by doing a parity calculation to reconstruct the correct data and overwrite the corrupted block, or by reading the good copy from the mirrored drive(s) and writing it over the corrupted one.

Backups by themselves do not protect data from bit-rot.
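
A minimal sketch of that flow (made-up pool name); the scrub walks every block, and the status output lists anything it could not repair:

    # read and verify every block in the pool, repairing from redundancy
    # where a good copy or parity exists
    zpool scrub tank

    # shows scrub progress, per-device checksum error counts, and any files
    # with permanent (unrepairable) errors
    zpool status -v tank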

I'm out of the loop, what? Deduplication is amazing when you run many VMs that share the same core files; it can save quite a bit of space. Performance depends on the scenario: if you don't have a lot of duplicated files, you are likely to incur a performance penalty, but if you do have lots of duplicates, you can actually see an increase in performance. Why has it been a disaster? I've never heard of this.

2 Likes

ZFS checks the checksum on each read too, so it should not back up a corrupt file.

Files of course can get corrupt afterwards.
But a single copy of a file has a 0% chance of healing, so having a backup gives you better than 0%.

Yep, and it also does so automagically on a file read, if the file does not match the checksum.
Otherwise it throws an error and isolates the file if not enough good copies remain (i.e. all copies fail the checksum).

In that case, you gotta pull the file from elsewhere

When it was introduced there was a lot of excitement and hype; you can still find ancient posts of people eagerly looking forward to the feature. You can also still come across old posts from after it came out where people are beginning to realize it wasn't what they thought it was going to be. It's actually kinda funny in a sense, and my comment is really more about the ZFS community shooting themselves in the foot.

As it turns out, there are vanishingly few people who can actually gain any benefit from dedup. Instead, most who tried it only got poorer performance (especially on consumer hardware) and occasional strange issues, to the point where "you aren't using dedup, are you?" is one of the diagnostic questions to ask when trying to resolve problems.

Now dedup has had considerable fixes and improvements since its release and isn't as bad as it once was (it's much better with special vdevs made with enterprise NVMe drives, or better yet Optane), and there are certainly viable use cases such as your many VMs, and of course the various enterprise workloads it was intended for. But most homelabbers, who are obviously the loudest internet ZFS users, absolutely do not want the feature enabled and should just get a bigger drive.

You are probably the 3rd or 4th person I’ve ever seen use it and have a positive experience.

2 Likes

I'm not using it, as I don't run a lot of VMs anymore (I didn't use it in the past either). It's just that I've seen others on the internet use it and looked into it a bit; opinions were mixed when talking about benchmarks, but I didn't know the backstory of the implementation. Thank you for sharing.

One relatively recent thing I've seen was Jeff from Craft Computing's video on ZFS dedup.

All hail the Spider-Man meme.

I'm running mirrored WD SN750 NVMe drives for the special vdev. When I run gstat while doing a transfer over Samba, I see it's running around 100k IOPS off the special vdev. So I decided to do an experiment replacing the NVMe drives with SATA SSDs, and it tanked performance to the point of being unusable. It also started causing me network timeout issues. I definitely see what you're talking about now lol.

(I put the NVMe drives back in and it's running fine again.)
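
For anyone following along, the special vdev is added per pool and really wants to be mirrored fast flash, since losing it loses the pool; a rough sketch with made-up device names:

    # add a mirrored special vdev to hold metadata (and small blocks /
    # dedup tables, depending on settings)
    zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1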

2 Likes