Veracrypt vs SSD health

I plan to set up an SSD with a linux partition, and the remaining bulk of the drive as a Veracrypt partition for cross-platform access. What would be the consequence of having let’s say 90% of the drive as a VC partition?

Would it look to the SSD as though the drive is full, thus impacting performance and longevity due to the drive seeing no empty space for dumping data and wear leveling?


From the Linux side, there would be an unformatted partition occupying 90% of the drive. If you do not format it, the OS will completely ignore it after the initial setup.

On the Windows side, there would be an unformatted partition occupying 10% of the drive, and it would likewise be ignored after initial setup. Windows 10 has been known to shrink the existing partition in order to place recovery/setup/update images and files there during system upgrades, but it will not touch previously existing partitions.

From the perspective of the operating systems involved, the partitions are unformatted because the OSes do not understand the formatted structure, so it would not be technically correct to call these partitions "encrypted" from their perspective; only you know that, not the software. However, unformatted partitions are not empty space, i.e. "unpartitioned space." As a rule, the OSes will not touch existing partitions, formatted or not, after the initial setup.

Remember that wear leveling is done internally by the controller inside of the SSD. Disks/SSDs in general do not understand files, file systems, partitions, encryption etc. All that junk is higher level stuff in the storage stack. What disk controllers understand is LBA or “write these bits to that logical block” with some logic mixed in to understand which blocks have been written to and which are free.

This implies that, in a dual-boot configuration involving encryption, the logical disk management software (i.e. the OS) can only ever influence wear leveling over the fraction of the drive it has access to, because the drive has been told on a previous occasion to write to every logical block. Since those logical blocks were written to once and then never again, the corresponding physical cells cannot be marked free in a uniform fashion.

The drive will not mark those blocks as free until they are written to again, and that would never happen because each OS completely ignores the space taken up by unformatted partitions. Given uneven usage of one OS over the other, this leads directly to a premature death of the drive.
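As a rough sketch of the bookkeeping described above, here is a toy flash translation layer in Python. Everything here is invented for illustration; real controllers work in pages and erase blocks and run garbage collection, but the core idea (a pool of free physical cells that shrinks as logical blocks are written and never trimmed) is the same:

```python
class ToyFTL:
    """Toy flash translation layer: maps logical blocks to physical cells."""

    def __init__(self, physical_cells):
        self.free_cells = set(range(physical_cells))  # erased, reusable cells
        self.mapping = {}                             # LBA -> physical cell
        self.wear = [0] * physical_cells              # writes per cell

    def write(self, lba):
        """Write a logical block: place it on the least-worn free cell."""
        if not self.free_cells:
            raise RuntimeError("free pool exhausted")
        cell = min(self.free_cells, key=lambda c: self.wear[c])
        self.free_cells.remove(cell)
        old = self.mapping.get(lba)
        if old is not None:
            self.free_cells.add(old)  # the old copy can be erased and reused
        self.mapping[lba] = cell
        self.wear[cell] += 1

    def trim(self, lba):
        """TRIM/discard: the OS tells the drive this block holds no data."""
        cell = self.mapping.pop(lba, None)
        if cell is not None:
            self.free_cells.add(cell)


# A drive with 100 physical cells, where FDE wipes 90 logical blocks once.
ftl = ToyFTL(100)
for lba in range(90):
    ftl.write(lba)
print(len(ftl.free_cells))  # prints 10: only 10 cells absorb all future wear
```

Note that a single `trim()` call per stale block would return those cells to the free pool, which is exactly why the TRIM/discard behavior of the OS matters so much here.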

In other words, the solution for maintaining SSD health in encrypted environments is obvious: do not preemptively write every sector, as is typical with FDE.

Modern FDE software can either write garbage to an entire partition before placing/encrypting data onto the OS-level volume (to defeat disk analysis techniques and provide plausible deniability), or it can apply encryption dynamically, encrypting whatever is currently on the volume and then encrypting new contents as they are written.

Just use the second, "thin provisioning" style mode and the SSD's life will be mostly unaffected: the physical cells behind the untouched logical blocks were never marked as written by LBA commands, and unused file-system-level space stays marked as unused, allowing the corresponding physical cells to participate in wear leveling.
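To put rough numbers on the difference between the two modes (these figures are invented for illustration, not measurements from any real drive), compare the free pool each mode leaves behind and what that does to per-cell wear:

```python
# Illustrative numbers only: a drive exposing 1,000,000 logical blocks,
# a 900,000-block encrypted partition (the 90% in question), of which
# only 100,000 blocks actually hold files, and no TRIM issued afterwards.
TOTAL_LBAS = 1_000_000
PARTITION_LBAS = 900_000   # the 90% VeraCrypt partition
DATA_LBAS = 100_000        # blocks genuinely occupied by data

# Mode 1: garbage-fill the whole partition first (plausible deniability).
free_after_wipe = TOTAL_LBAS - PARTITION_LBAS      # 100,000 blocks free

# Mode 2: encrypt dynamically; only real data is ever written.
free_after_dynamic = TOTAL_LBAS - DATA_LBAS        # 900,000 blocks free

# Spread the same lifetime write volume over each free pool:
LIFETIME_WRITES = 9_000_000
print(LIFETIME_WRITES // free_after_wipe)     # 90 writes per free cell
print(LIFETIME_WRITES // free_after_dynamic)  # 10 writes per free cell
```

This simple model ignores static wear leveling, where some controllers rotate even long-lived written cells back into service, so the real-world gap is smaller than 9x; but the direction of the effect is the point.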

I am not actually sure VeraCrypt supports this second mode for volumes that are used to boot, although it should be simple enough to test for. Check their documentation. BitLocker definitely supports it if VeraCrypt does not.

Usually this is at: C:\Program Files\VeraCrypt\docs\html\en

The wear leveling will not be 100% uniform, because cells linked to logical addresses currently written by the OS that is not active cannot participate in wear leveling until that OS is booted and told to write to those logical blocks again, which would then mark the old physical cells as free.

So just switch OSes every once in a while.

wikipedia: https://en.wikipedia.org/wiki/Wear_leveling

Edit: After thinking about this some more: maybe disk partition management software (the OS) will use LBA to mark a logical sector as free, from the file system's perspective, after in-place FDE, since the whole point is to write garbage, not necessarily to keep the sector in use. If so, while this would still exclude that physical cell from being written to on an HDD (which is how I am used to thinking of disks), SSDs may write to those cells from outside their partition range anyway, since that is the whole point of wear leveling. Yeah, that makes more sense, so even the full-encryption mode of traditional FDE should not decrease the lifespan of SSDs, provided you switch OSes occasionally.


Thanks for such a detailed write up. I’m going to have to reread it several times to understand fully.

Although dual-booting has been my typical setup until now, this question wasn't based on dual boot. The VeraCrypt-encrypted partition in question would just be a non-boot "data" partition. If it matters, I was thinking of using NTFS on it for easy cross-platform access.

It’s not quite relevant to this thread, but I’m tempted to see if I can get away with using VMs for my Windows needs, thus simplifying the drive setup (dual boot encryption seems like a pain).

So, is it correct to say that as long as encrypted partitions are mounted, wear leveling should take place, thus not affecting SSD lifespan? Does the filesystem used matter?

Also, if an encrypted partition is not mounted, is it holding up wear leveling? Or can the SSD still do wear leveling at a lower level?

And how is write performance affected if a large encrypted partition is or is not mounted?

Being mounted has less effect than being partitioned/formatted. The OS/disk controller will use unformatted/unpartitioned space for leveling. But if neither of the partitions is mounted, the drive won't need leveling, as no data would be added or removed; it would just sit there quietly waiting to be called into service.

Ok, but that's not much of a choice if the drive is being used, unless you mean over-provisioning.

What happens to longevity and performance if most/all of the drive is partitioned and formatted, and one or both partitions are mounted?

If one partitions all the available space, the drive should mostly perform to its rated write cycles during the warranty period. To extend its lifetime, one may choose to over-provision. There are no guarantees, and all drives die, but the companies offer warranties on the calculation that most of their products will perform within tolerances, with an acceptable failure rate.

So what I’m hearing is not to worry and use the drive as needed/desired.
