Data recovery prevention questions

I’ve been reading about this a lot lately, but some of the topics are kind of fuzzy and google is failing me.

In the case of a hard drive, preventing data recovery is simpler in that you can overwrite the disk with zeros or perhaps random data. There was some research in the 1990s that suggested recovery was still possible with expensive equipment unless this was done multiple times but my reading suggests this doesn’t really apply to newer drives.

My question is: is overwriting a file with zeros good enough to destroy the file? Every time I try to google this question I get nothing but topics about wiping the whole disk, which is pretty time consuming. I’ve seen some tools that purport to do that, but I was a bit skeptical.
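For a single file on a plain HDD filesystem, a tool like GNU shred can do the overwrite in place. A minimal sketch (the filename is just an example; whether this actually helps on SSDs or copy-on-write filesystems is a separate question, as the replies below get into):

```shell
# Create an example file, then overwrite it in place with shred:
# -x  don't round the write up to a full filesystem block
# -n 1  one pass of random data
# -z  finish with a pass of zeros (hides the fact it was shredded)
printf 'sensitive contents' > secret.txt
shred -x -n 1 -z secret.txt    # add -u to also delete the file afterwards
```

Without `-u` the file stays in place, now containing only zeros.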

SSDs are another animal due to wear leveling. SSDs encrypt everything stored with their own internal key. A secure erase, fired off with a manufacturer utility or something like hdparm, simply “forgets” the key and generates a new one, which is basically instant. Barring a bug (I can’t find information about how buggy things are now, but someone tested this in 2011 and secure erase on common SSDs was very buggy and unreliable!) or a backdoor, how secure is this? I know this is a pretty fuzzy topic, but it’s interesting.
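For reference, the ATA Secure Erase sequence under hdparm usually looks like the sketch below. The device path and temporary password are placeholders, and the `DRY_RUN` guard only prints the commands instead of running them, since the real thing is destructive:

```shell
#!/bin/sh
# Sketch of ATA Secure Erase via hdparm. /dev/sdX is a placeholder --
# triple-check the device before doing this for real.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

DEV=/dev/sdX
# 1. Check the drive isn't "frozen" (a suspend/resume cycle often unfreezes it):
run hdparm -I "$DEV"
# 2. Set a temporary user password (required before the erase command):
run hdparm --user-master u --security-set-pass Eins "$DEV"
# 3. Issue the erase; on a self-encrypting drive this just rotates the key:
run hdparm --user-master u --security-erase Eins "$DEV"
```

Checking `hdparm -I` output afterwards (the "Security" section) is the usual sanity check that the erase was accepted.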

Another question I had on the SSD secure erase, is there any reason to believe running that frequently would kill the drive?

That research paper from the ’90s is old, and the author conceded not long afterward that as HDD areal density kept increasing, it would become impractical to recover data from such densely packed storage. Recovery was basically theoretical even on the drives of that era; it was difficult then and it is effectively impossible now.

I suggest you overwrite an HDD with random data, just to be sure; once is more than enough. SSDs should be the same: a single full pass of wear won’t hurt the drive, especially if you write the data in one go (e.g. streaming from something like /dev/urandom) rather than in lots of small files.
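A single-pass random overwrite with dd might look like this. It's shown against a scratch file so it's safe to run; for a real wipe you'd point `of=` at the block device itself, which is obviously destructive:

```shell
# Create a 4 MiB scratch file standing in for a disk.
dd if=/dev/zero of=scratch.img bs=1M count=4 status=none

# One pass of random data over the whole thing. For a real disk it would be:
#   dd if=/dev/urandom of=/dev/sdX bs=1M oflag=direct status=progress
dd if=/dev/urandom of=scratch.img bs=1M count=4 conv=notrunc,fsync status=none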

If it’s an HDD, probably, though I’m not sure how you would go about doing that. There could be software out there for it, but my knowledge is finite.

I wouldn’t be too worried about it, unless you are doing heavy writing to it anyway. The only problem I see is that a file may not actually be overwritten, just marked as deleted, and its cells may not get rewritten until free space is running low. So writing 0s over a file on an SSD may not accomplish much (correct me if I’m wrong).

Best advice, if you have data you want to keep secure, is to always encrypt files with strong, unique passwords. Whether you use filesystem encryption (LUKS, ZFS native encryption), or an unencrypted filesystem plus secure containers like KeePass for passwords, or compress and encrypt files on disk and only decompress/decrypt them onto a RAM disk (tmpfs), you should be fine even if you lose the disk before you have a chance to destroy the data on it.
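As a rough sketch of the "encrypt at rest, decrypt only into RAM" idea — the filenames and the inline passphrase are purely illustrative; a real setup would use something like gpg or age with an interactively entered passphrase:

```shell
printf 'secret notes\n' > notes.txt

# Encrypt at rest (AES-256-CBC with PBKDF2 key derivation).
openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:correct-horse \
    -in notes.txt -out notes.txt.enc

# Overwrite and remove the plaintext copy.
shred -x -n 1 -u notes.txt

# Decrypt only onto RAM-backed storage (tmpfs), e.g. /dev/shm on Linux.
RAMDIR=${RAMDIR:-/dev/shm}
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:correct-horse \
    -in notes.txt.enc -out "$RAMDIR/notes.txt"
```

The plaintext then only ever lives in RAM, so losing the disk loses only the ciphertext.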

I would not trust those. There was a BitLocker bug in the past where it reported that it was encrypting data using the SSD’s own encryption mechanism (Samsung drives, and I believe Crucial ones, were affected), but it didn’t actually protect anything. That said, I’ve never used hdparm to try to reset a drive’s encryption, but again, I would rather trust my own security measures.

To be honest, I’d be way more worried about online data exfiltration (while the disks are unencrypted to run the OS, or while your encrypted archives are unpacked in RAM) than about evil-maid attacks.


I think sdelete is an example of an overwrite-in-place program. But if I understand it correctly, you could do it with almost anything.
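Right — on a plain HDD filesystem, even dd can overwrite a file in place if you tell it not to truncate. A sketch (with the caveat raised elsewhere in the thread that the filesystem may still hold other copies):

```shell
printf 'old secret data' > target.bin    # 15-byte example file
size=$(wc -c < target.bin)

# Overwrite the existing bytes with zeros without truncating the file:
# conv=notrunc keeps the file the same length and writes over its data.
dd if=/dev/zero of=target.bin bs=1 count="$size" conv=notrunc status=none
```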

I’m not sure if that bitlocker bug is the same thing, although I haven’t looked into it much. My understanding is that basically all SSDs actually encrypt everything stored, but the key is part of the drive itself so in a way you don’t see it. The secure erase simply obliterates this key and issues a new one.

ram disks are an interesting idea…

Not if you are using a copy-on-write file-system like ZFS, or use file-system snapshots, or if you’ve ever copied the file from one place to another, or if the file was ever loaded into memory and parts written out to swap, or…

There are quite a few scenarios where you can’t ensure the contents of your old file aren’t still on the disk somewhere without wiping the whole thing (or at least filling up all the free space).

A bit, but HDDs do sector reallocation, which is a bit like wear-leveling and happens outside of your knowledge or control. There probably won’t be much of your data left in those reserved sectors, but even wiping a whole disk isn’t 100%.

The easiest way is to re-partition the drive multiple times.
Re-partitioning erases the partition table and creates a new one.
An accidentally erased partition is possible to recover, but not one that’s been re-partitioned and formatted multiple times.

DBAN does work, but it’s slow because it overwrites the drive with multiple passes (its DoD method uses 3 or 7 passes, and the Gutmann method uses 35). It may report errors on a SATA drive, and I definitely would not use it on an SSD.

GParted is a Linux tool, but it’s very easy to use with its GUI.
GParted comes with most Linux live ISOs, but you can also download the standalone GParted Live ISO and put it on a bootable USB drive.

Most formatting programs do NOT wipe the entire disk. That would be a waste of time (likely: HOURS). You can reformat over and over, and your old files will still be easy to recover.

What will make deleted data unrecoverable is filling the new file systems with new data, which will eventually happen to overwrite those deleted files.
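A crude way to force that is to fill the filesystem’s free space with a junk file and then delete it. In this sketch `TARGET` is a placeholder for the mount point, and `count=8` caps the write so the example doesn’t actually fill your disk — a real free-space fill would omit `count` and let dd run until the filesystem is full:

```shell
TARGET=${TARGET:-.}    # mount point of the filesystem to scrub (placeholder)

# Write zeros into a filler file (capped here; drop count= for a real fill,
# letting dd stop on its own when the disk is full), then sync and delete it.
dd if=/dev/zero of="$TARGET/filler.bin" bs=1M count=8 conv=fsync status=none || true
sync
rm -f "$TARGET/filler.bin"
```

Once the filler has occupied (and then released) the free space, previously deleted file contents in that space have been overwritten.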

This topic was automatically closed 273 days after the last reply. New replies are no longer allowed.