Three Way Mirroring on Two SSDs

Is it worth putting a three-way mirror on two SSDs? I believe this can be done with BTRFS or ZFS, but I'm wondering how much, if at all, this improves data integrity versus a two-way mirror. In the event of a total drive failure one could lose two of the three copies, which really isn't better than a two-way mirror, just less space efficient. In the event of bad-sector reporting from one device, would the three-way mirror on two devices prove more reliable? I don't really know much about the typical failure modes of SSDs, but if someone else does, maybe they can shed some light on the topic for me.

I think the recommended way to do copies is still the usual 3-2-1 (3 copies minimum, on 2 different media, 1 of them off-site), and also the obligatory "RAID is not a backup" reminder.

If I am understanding you correctly, you want 2 copies on 2 disks and to "spread" a 3rd copy between the 2 disks? There seems to be no benefit in doing a three-way mirror on two disks. A saner way forward is to use snapshots rather than a 3rd copy.

I want to spread 3 copies across two drives. I wouldn't bother trying that on two hard drives, but SSDs are a somewhat different animal. From what I understand, they are more likely to develop bad-block errors than to give out completely. One thing I don't understand is the locality of failures on an SSD. If I have failures occurring on one partition of an SSD with two partitions, are they likely to be confined to that single partition, or just as likely that both partitions are failing in the same fashion?

SSDs do some sort of CRC check at the block level when they write. You'll be fine.

Put that 3rd copy elsewhere. Also, technically the mirror isn't a "copy", so… you really don't have a copy. If I were you, I'd actually keep your second copy on a different computer and sync them occasionally. And upload a 3rd copy to Backblaze or Google Drive or something.

SSDs thankfully don't wear out when you read them (unlike spinning HDDs), so just run integrity checks regularly if you are paranoid.

I definitely agree with the backups, but I'm concerned with data corruption as much as outright data loss. The problem with a two-drive mirror can be that if one drive is misreporting, the file system has to make an educated guess as to which one is reporting correctly and which one is wrong. I think three mirrored copies on three separate devices is a good way to handle that problem. With three mirrored copies on two devices, the problem gets a bit fuzzier in my mind.

I don't have much experience with SSDs, but so far it's been either fine or catastrophic failure for me. I never got to the stage where things were starting to fail with actual errors. I am getting cautionary warnings that my SSD is getting old, but no actual errors yet.

If you are looking for a way to keep error correction even after losing a whole disk, then your best bet is to take each drive and partition it into two parts. Then you take all 4 partitions and put them in a 4-wide mirror. That way, if you lose a disk, you still have a full mirror pair to correct errors with. The obvious downside is the 25% space efficiency.
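
A minimal sketch of that layout with ZFS, assuming two blank SSDs at /dev/sda and /dev/sdb and a pool named "tank" (all placeholder names, and this wipes both drives):

    # Split each SSD into two equal GPT partitions (destructive!).
    parted -s /dev/sda mklabel gpt mkpart zfs1 1MiB 50% mkpart zfs2 50% 100%
    parted -s /dev/sdb mklabel gpt mkpart zfs1 1MiB 50% mkpart zfs2 50% 100%

    # One 4-wide mirror vdev across all four partitions: every partition
    # holds a full copy, so losing one whole SSD still leaves an intact
    # two-way mirror, at the cost of 75% of the raw capacity.
    zpool create tank mirror /dev/sda1 /dev/sda2 /dev/sdb1 /dev/sdb2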

There is also the dataset property copies=2, but I can't speak for how ZFS actually allocates the copies.

It should be noted that metadata is already redundant.


Pretty much what @Log said.

3-way mirror (odd number) on two drives (even number) isn’t really possible.

BTRFS offers 3-way mirrors with 3 disks (the raid1c3 profile), or DUP, which is RAID1-like on a single disk.
ZFS offers basically the same as mentioned above.
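
For illustration, this is roughly what those look like with mkfs.btrfs (device names are placeholders; raid1c3 needs a reasonably recent kernel):

    # Three-way mirror of data and metadata across three disks:
    mkfs.btrfs -d raid1c3 -m raid1c3 /dev/sda /dev/sdb /dev/sdc

    # DUP: two copies of everything on a single disk. Protects against
    # bad sectors, not against losing the drive itself.
    mkfs.btrfs -d dup -m dup /dev/sda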

The benefit of the copies=2 property in ZFS is that you can set it at the dataset level, so you can choose individual "directories" in which the data is stored twice on each side of the mirror, without needing to apply it to the entire pool.
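
Something like this, with "tank/important" as a placeholder dataset name (note that copies only applies to data written after the property is set):

    zfs create tank/important
    zfs set copies=2 tank/important
    zfs get copies tank/important   # verify the property took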

The question is what problem you want to solve. Protecting the remaining drive during a resilver after one side of the mirror failed? That needs a 3rd physical drive.

Or do you want better integrity? Well, a single mirror has two checksums and two blocks of data. ZFS runs totally fine with a 2-way mirror. The only "vulnerable" time, when talking about correcting corrupt data, is the window between a drive failure and the completed resilver of the new drive. Even IF some errors occur then, ZFS will tell you, but it can't repair them.

To sum things up: you always get correct data from ZFS with a 2-way mirror. There is no such thing as double-valid data; it is either corrupt or it isn't. n-way mirrors and copies=2 or 3 have their niche use cases (things like crypto wallets or archival data, self-healing on a single disk), but are generally considered unnecessary. If you want protection from two disk failures, you need 3 (mirror) or 4 (RAIDZ2) drives.
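
For reference, the two layouts that survive a double drive failure, with placeholder pool and device names:

    # 3-way mirror: 3 drives, any 2 may fail, 1/3 usable capacity.
    zpool create tank mirror /dev/sda /dev/sdb /dev/sdc

    # RAIDZ2: 4 drives, any 2 may fail, 1/2 usable capacity.
    zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd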


With ZFS, the checksums for integrity that Exard mentioned are the bit that makes the educated "guess."

Instead of guessing, it verifies the checksum on every read, so it doesn't actually need to guess.

IIRC, BTRFS does this too.

It is complicated, but the way I understand it:

When a transaction is prepared to be written to the pool, ZFS creates a checksum for the data, then it saves a copy to each side of the mirror (or splits it among a RAIDZ, or copies=x stores more copies).

When reading, ZFS checks the data against its checksum. If they don't match, it automatically checks the other copy in the pool. If that doesn't match either, it sends an error to the OS that the data does not exist, and reports/makes a note of it.
If the second copy does match its checksum, ZFS notes internally that the first copy is garbage, serves the good copy to the OS, and rewrites the good copy to replace the bad one.

The only slight problem with this is if memory corruption happens at the exact time the transaction is being prepared. In that case the data is corrupted, but it gets faithfully checksummed and stored with a valid checksum all over the pool. When read, the checksum will match the corrupted stored data, and ZFS will serve it to the OS as good data, because it is exactly as it was when written. That's the reason ECC memory is promoted, but it is pretty rare.

Most file systems don't checksum at all, so they would just save and serve corrupted data regardless. Some checksum just the metadata.
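
If you want to watch that bookkeeping, zpool status keeps per-device error counters ("tank" is a placeholder pool name):

    zpool status -v tank   # the CKSUM column counts checksum errors per device
    zpool clear tank       # reset the counters once you've investigated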


One thing I'd like to add is that in a normal RAID1 mirror (BIOS RAID, RAID card, whatever), there are only two blocks of data and nothing else. If suddenly one block doesn't match the other one, you have to guess and gamble, or do nothing.
With an additional checksum (520-byte-sector drives (old school) or modern filesystems like ZFS and BTRFS) you can have:

single disk scenario: data+checksum, but checksum doesn’t match. Report corruption, but can’t repair (no other good data available)

mirrored-drive scenarios:

  • data1+checksum1 matching data2+checksum2 → all well!

  • data1+checksum1 matching checksum2 but not data2 → purge the unclean block! copy data1 to data2 because we know data1 matches both checksums. → self-healed, let’s get back to work!

This is a bit simplified, but should do for demonstration purposes.
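
You can safely watch that self-healing happen on a throwaway pool backed by two sparse files; the pool name "demo" and the file paths below are made up for the demonstration:

    # Two 256 MiB file-backed "drives" in a mirror.
    truncate -s 256M /tmp/d1.img /tmp/d2.img
    zpool create demo mirror /tmp/d1.img /tmp/d2.img

    # Write some test data (the pool mounts at /demo by default).
    dd if=/dev/urandom of=/demo/testfile bs=1M count=64

    # Trash part of one side of the mirror; seeking past the start
    # avoids the ZFS labels at the front of the device.
    dd if=/dev/urandom of=/tmp/d1.img bs=1M seek=16 count=32 conv=notrunc

    # The scrub finds the bad checksums on d1.img and repairs the
    # blocks from the intact copy on d2.img.
    zpool scrub demo
    zpool status -v demo   # once the scrub finishes: CKSUM errors on d1.img, all healed

    zpool destroy demo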

A scrub in both filesystems ensures that this gets detected periodically for all stored blocks, so it doesn't get any worse than the examples mentioned above.
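
Kicking one off is a one-liner in either filesystem, and a cron entry covers the "periodically" part (pool name, mount point, and schedule are placeholders):

    zpool scrub tank              # ZFS: scrub the whole pool
    btrfs scrub start /mnt/data   # BTRFS: scrub the mounted filesystem

    # e.g. in root's crontab, scrub every Sunday at 03:00:
    # 0 3 * * 0 /sbin/zpool scrub tank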

A nice lecture about this stuff from Philip Paeps: "The ZFS filesystem" - Philip Paeps (LCA 2020) - YouTube


This reminds me of a somewhat related/similar question I had.

Is there a way to set up mirrors within a single HDD to speed up access times?

Since HDDs have to wait until the platters spin to the location they want to read, it should be possible to put multiple mirrors on one drive in such a way that read speed (or rather latency) improves.

It'd be an interesting way to repurpose my older, smaller, but still usable drives as a cache drive/array that's a bit faster than reading from the main array itself.

Thank you for the explanation. I wasn't factoring in the checksums being used to verify data integrity. It looks like a two-copy, two-device mirror would provide what I was looking for without going to higher-order mirroring.


ZFS doesn't limit how many ways your mirror has. Back before Sun was bought, they made a 45-drive, 3 RU server (x4500 IIRC). Someone at Sun got hold of one and made a 45-way mirror. ZFS didn't even bat an eye.

In the UNIX tradition, ZFS will give you enough rope to shoot your foot off.
