Does hard drive encryption work with RAID?

Wanted: "Perfect" Persistent, High Performance, Secure, Encrypted Disk Subsystem

I think this is relevant because to me it represents the minimum level of data integrity we should all have.

Persistent: I want this data to last my lifetime (i.e., I will always be able to migrate it when I upgrade my system).

Encrypted: so no one can snoop on my data.

Secure: protected from drive failure or similar kinds of data corruption.

High Performance: we all want raw speed from our disk subsystems; you can never have enough throughput.

Of course, this would still be vulnerable to physical attacks, such as the FBI kicking your door in and confiscating the subsystem itself, then the NSA brute-force decrypting it because they have the first 40 bits of your encryption key. Or even just someone stealing your computer while you're not home, or your house burning down, flooding, getting hit by a meteor, nuclear war, etc. So some sort of cloud backup service would be necessary to fulfill the "persistent" requirement.

I am the proud owner of a WD 1TB RE4 "RAID edition?" (I think that's what the RE stands for, lol) hard drive. My plan is to purchase a second one and put them together as a RAID 1 volume. This should fulfill two of the above requirements:

1) My data will be mirrored, so if one drive fails I can keep using my system uninterrupted.


2) High performance: RAID 1 (or perhaps RAID-Z, which I am now aware of thanks to your resident genius Wendell, thank you sir!) should improve read times, which is the most-used operation anyway.

So my question is this: do I just have to add a TPM module to my motherboard and enable encryption in the UEFI, and is this technology compatible with a soft RAID solution like Intel's Matrix Storage Manager? Also, does the drive need to support encryption, like the 840 Pro's advertised support for 256-bit AES encryption? Or is it more motherboard dependent? I am using an ASUS Maximus V Gene.

Also, is there any difference between TPMs? I see one online for 10 bucks "for ASUS motherboards," but I also see them for like 80 bucks.

In case you couldn't tell, I'm trying to do this on the cheap; otherwise I would be putting a bunch of Samsung 840 Pros in something like a RAID 1E to saturate an LSI Nytro MegaRAID 8120-4i's PCIe x8 throughput, lol, or just a straight SSD RAID on a PCIe card.

In the future I plan to add three more drives, to make a four-drive RAID 10 or 1E with a spare to automatically rebuild a degraded volume.

A note to all you RAID 0'ers out there... never again!! I had two SSDs, which of course were touted with a very high MTBF when they came out, then one died... and it was time to install Windows again, lol.

 

RAID-Z is only available on the ZFS filesystem, which currently works best on FreeBSD (there is a Linux port, but it's still behind the FreeBSD implementation). So if you don't want to work on FreeBSD, you should use RAID 5. Don't use RAID 0 if you can use RAID 5: RAID 5 gives you roughly the same speedup as RAID 0, but it also survives a drive failure (note that it protects against drive failure, not against silent data corruption; RAID 0 protects against neither).
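To make the parity point concrete, here's a minimal Python toy (not real RAID code; the drive contents are made up) showing how the XOR parity used by RAID 5 lets you rebuild whatever was on a failed drive from the surviving drives plus the parity block:

```python
# Toy illustration of RAID 5 style XOR parity (not real RAID code).
# Each "drive" is just an equal-length byte string.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# Three data "drives" plus one parity block computed across them.
d0 = b"hello wo"
d1 = b"rld, thi"
d2 = b"s is dat"
parity = xor_blocks([d0, d1, d2])

# Simulate losing d1: XOR-ing the parity with the remaining drives
# recovers the missing block, because x ^ x == 0.
recovered = xor_blocks([d0, d2, parity])
assert recovered == d1
print("recovered:", recovered)
```

Real RAID 5 rotates the parity blocks across the drives stripe by stripe, but the reconstruction idea is the same.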

I currently use eCryptfs for encryption, but from what I've seen, full-disk encryption is faster (though not as well integrated with the login manager on Linux distributions). If you use it only as a data partition, you don't have to worry about that, and I'd go with full-disk encryption (that's actually what I've done with my external drive).

I'm not sure what Windows does, but at least on Linux those features are all software-based. You don't need special hardware for this!

So, my personal favorite hard disk setup:

Three same-sized HDDs in RAID 5 with eCryptfs on /home, and an SSD with one partition used as bcache and the rest mounted as /. All other storage completely encrypted with LUKS.

edit: Whoops, I meant RAID 0, not RAID 1.

Just to clarify, RAID 1 does not have any speedup. It's purely redundant, with a storage loss equal to half of the combined capacity of the drives involved.

I've read in the past that:

RAID 0 gives a speedup because files are spread in pieces over both drives, so they can both be read simultaneously, with a theoretical doubling of read speed.

Files in RAID 1 are duplicated on both drives, so I thought the same would be true if both drives can be read from simultaneously, with no write speed increase because you have to write the same file to both drives simultaneously as well.

And RAID 10 is the best of both worlds, because your data is striped across multiple drives for speed and mirrored for integrity.

I've read that RAID 5 is not as good for a performance increase because of the additional on-the-fly parity calculations.

Also, RAID 1E is much like RAID 10 in that data is striped and mirrored, and different pieces of a file can be read simultaneously from different drives.
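For what it's worth, here's a rough Python sketch of those scaling claims, assuming every drive sustains the same raw throughput (the 150 MB/s baseline is a made-up figure) and ignoring controller overhead and parity math; whether RAID 1 reads actually get balanced across both drives is exactly the point being debated here, so it's left as a flag:

```python
# Back-of-the-envelope model of the read/write scaling claims above.
# Assumes identical drives and ignores controller overhead and parity cost.

SINGLE_DRIVE_MBPS = 150  # made-up baseline figure, for illustration only

def raid0(drives):
    # Striped: reads and writes are spread over all drives.
    return {"read": drives * SINGLE_DRIVE_MBPS,
            "write": drives * SINGLE_DRIVE_MBPS}

def raid1(drives, balanced_reads=False):
    # Mirrored: every write goes to every drive; whether reads are
    # balanced across the mirrors is the open question in this thread.
    read = drives * SINGLE_DRIVE_MBPS if balanced_reads else SINGLE_DRIVE_MBPS
    return {"read": read, "write": SINGLE_DRIVE_MBPS}

def raid10(drives):
    # Striped mirrors: all drives can serve reads, but only half the
    # drives' worth of throughput is available for writes.
    return {"read": drives * SINGLE_DRIVE_MBPS,
            "write": (drives // 2) * SINGLE_DRIVE_MBPS}

print("RAID 0, 2 drives: ", raid0(2))
print("RAID 1, 2 drives: ", raid1(2, balanced_reads=True))
print("RAID 10, 4 drives:", raid10(4))
```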

RAID-Z sounds cool (as per Wendell's info about how it can detect data corruption at the block level), but I'm on Windows 7, so I guess that's a no-go for now.

On encryption, I guess I just need to order a TPM and start playing around with it, or use a software encryption technique as opposed to enabling whole-disk encryption in the BIOS/UEFI.

I think I'm going to give TrueCrypt a try, after a little research pointed me at that free software.

You're right, RAID 10 is preferable to RAID 5 for performance. The problem is that you need at least 4 drives for RAID 10, and its usable size is only half the total raw capacity (the size of one of the mirrored stripes), while RAID 5 only requires 3 drives and its usable size is (size of smallest drive * (number of drives - 1)).

If you're using four 1TB drives in RAID 10, you get 2TB of usable space.

If you're using four 1TB drives in RAID 5, you get 3TB of usable space.
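Those two figures fall out of a simple formula; here's a quick Python sanity check, assuming equally sized drives:

```python
# Usable capacity for a few RAID levels, assuming equally sized drives.

def usable_capacity_tb(level, drive_size_tb, drives):
    if level == "raid1":
        return drive_size_tb                    # everything beyond one drive is mirror copies
    if level == "raid10":
        return drive_size_tb * drives / 2       # half the drives hold mirror copies
    if level == "raid5":
        return drive_size_tb * (drives - 1)     # one drive's worth of space goes to parity
    raise ValueError("unknown RAID level")

print("4x1TB RAID 10:", usable_capacity_tb("raid10", 1, 4), "TB")  # 2.0 TB
print("4x1TB RAID 5: ", usable_capacity_tb("raid5", 1, 4), "TB")   # 3 TB
print("3x1TB RAID 5: ", usable_capacity_tb("raid5", 1, 3), "TB")   # 2 TB
```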

edit:

"Files in RAID 1 are duplicated on both drives, so I thought the same would be true if both drives can be read from simultaneously, with no write speed increase because you have to write the same file to both drives simultaneously as well."

The kernel/controller has to make sure the data is the same on both drives, therefore it must read from both drives.

"as per Wendell's info about how it can detect data corruption at the block level"

Most RAID systems do that.