RAID Obsolete? Part 2: H/W Raid & BTRFS Overview | Tek Syndicate






What happens when RAID goes wrong?


Be sure to watch Part 1 first so this video makes sense. It picks up where the other one leaves off.


Part 1:


https://www.youtube.com/watch?v=yAuEgepZG_8



In this video we introduce some failures in disk arrays in order to see how the different setups react.


We start by looking at what happens with a hardware RAID controller. Are errors detected? Reported? Corrected or correctable?


We looked at Linux's Multi-Device (md) RAID in Part 1 -- now we're going to repeat that test procedure with a hardware RAID controller (with a battery backup unit, or BBU), and then do the same with a two-drive BTRFS array.


We're going to introduce some errors and see how each of these respective multi-disk technologies reports and/or recovers from that situation.
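The core of this kind of test is simple: checksum the files, corrupt the storage underneath, and re-verify. Here's a minimal, self-contained sketch of that loop using only standard Linux tools (a hypothetical stand-in run against a temp directory; the video's actual runs are against mounted arrays):

```shell
#!/bin/sh
# Sketch of a corruption test: checksum a file, damage it, re-check.
dir=$(mktemp -d)

# 1. Write a known test file (1 MiB of zeros) and record its checksum.
head -c 1048576 /dev/zero > "$dir/file1"
(cd "$dir" && md5sum file1 > manifest.md5)

# 2. Simulate silent corruption: overwrite a single byte in place.
printf 'X' | dd of="$dir/file1" bs=1 seek=1000 conv=notrunc 2>/dev/null

# 3. Re-verify. md5sum -c flags the mismatch; note it cannot repair it.
if (cd "$dir" && md5sum -c manifest.md5 >/dev/null 2>&1); then
  result="clean"
else
  result="corruption detected"
fi
echo "$result"
rm -rf "$dir"
```

On a checksumming filesystem like btrfs, step 3 is where the filesystem itself would notice and (with a second copy) repair the block; on plain md RAID or a hardware controller, only an external checksum pass like this one catches it.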




This is a companion discussion topic for the original entry at https://teksyndicate.com/videos/raid-obsolete-part-2-hw-raid-btrfs-overview

When you started talking about btrfs, I found a FreeNAS-like solution called Rockstor. It's in early development, but from what I tested in Hyper-V it looks like it's going to be the Linux version of FreeNAS. Have you seen this distro, and what do you think?

Theoretically, couldn't you just use the md5sum to get the file back? You got the md5 sum from the file; can't you do the opposite?

md5s do not make the files recoverable. An MD5 is only a small number of bytes, while the file has one or more regions of corruption. If it were easy to go backward from an md5 sum to the original input, that would be bad news for people using md5 algorithms to store passwords (hint: even with something as small as a password, it's less "expensive" to keep a database of md5 sums of known passwords than to try to reverse an md5 back into a password). With these files, it's that same problem, but with a 500 MB file instead of a 3-20 character password.
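To make that concrete, here's a quick sketch in plain shell (nothing btrfs-specific, and the two strings are just stand-ins for a clean and a corrupted file): the checksum changes when the data changes, but it carries far too little information to rebuild the data.

```shell
# Hash two inputs that differ by one character.
clean=$(printf 'pretend this is 500 MB of file data' | md5sum | cut -d' ' -f1)
bad=$(printf 'pretend this is 500 MB of file datX' | md5sum | cut -d' ' -f1)

# The digests differ, so the corruption is detectable...
[ "$clean" != "$bad" ] && echo "mismatch detected"

# ...but each digest is only 32 hex digits (16 bytes). There is no way to run
# md5sum "in reverse": countless different inputs share any given digest.
printf '%s' "$clean" | wc -c   # 32
```

The digest tells you *that* the file changed, never *what* it used to contain.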

@wendell In the video, you mentioned that no RAID card supports BTRFS. In theory, could one? And if so, I'd want it to throw the battery away and use error-correcting non-volatile RAM instead (MRAM or something). I've heard about non-volatile RAM for years, but I've never seen it come to fruition.

It is surprising to me that this type of file system was not adopted sooner. I know OSes are very careful about switching to new things, but it makes so much sense. Thanks, Wendell, for the video; I learned a lot.

I guess the trade-off between BTRFS and a normal mirror must be performance vs. data integrity?

Microphone seems low.

Awesome video, I learnt a lot.

RAID 1 with two disks seems pointless. You're getting the capacity of one drive, the speed of one drive, and redundancy only if a drive fails in the right way, where you end up with a working copy. Wouldn't it make more sense to RAID 0 the drives and spend the time saved by the doubled performance backing up to tape/writing out the 1s and 0s by hand?

At least you would know that your data isn't safe, instead of being tricked by "mirroring" into thinking you had a backup, since RAID isn't a backup.

Very cool video!

I entirely agree with the description of the present state of btrfs; most functionality and reliability seems to be effectively distro-dependent. I've been using it in conjunction with LVM setups, which add quite a bit of flexibility, especially for integrating a RAID 10 solution with btrfs. On openSUSE, btrfs works better than on anything else, even Fedora, in my experience. I love it for desktops, but I don't really see btrfs server-side, to be honest. One of the tricks you can do (as with XFS) is to write the journal to a separate volume. That solves quite a few problems and can improve performance quite a bit. SSDs are great for this kind of thing: two partitions for journaling and a partition for caching on an SSD is easy to set up, cheap, high-performance, and highly reliable.
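For XFS specifically, the external-journal layout mentioned here looks roughly like this. This is a sketch with hypothetical device and volume names (`/dev/ssd/xfslog`, `/dev/vg0/data`), not a tested recipe; adjust for your own layout:

```shell
# Put the XFS log (journal) on a small SSD partition, data on an LVM volume.
mkfs.xfs -l logdev=/dev/ssd/xfslog /dev/vg0/data

# The external log device must also be named at mount time.
mount -o logdev=/dev/ssd/xfslog /dev/vg0/data /mnt/data
```

Metadata-heavy workloads benefit most, since the journal writes no longer compete with data I/O on the spinning disks.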

Both btrfs and f2fs need to develop a bit more before they can really be the game-changers they have the potential to be.

So now that I have a more complete understanding of the failures of RAID 1, 5, 10 and the like: is there anything superior to RAID 0 in terms of reads and writes for an array of SSDs? I do a fair amount of video editing and have been using a pair of SanDisk 120GB SSDs in RAID 0 as a "scratch disk". While it's fast enough for my purposes, I was wondering if there was anything I could do with that hardware that would be any faster (short of purchasing a PCIe SSD, as this is a hobby and doesn't justify the cost).

Nice demonstration Wendell. I would however not recommend btrfs for production yet.
Actually, just last week a kernel upgrade corrupted my filesystem; luckily it was recoverable.
If you want something that really is production ready, you should use ZFS. Even on Linux it has been considered stable for years. This is not the case with btrfs.

The question then is, does this give you any protection or does it just add more risk?

I personally use EXT4 on my SSD, which has been stable since I started using Linux like 6 years ago. Aside from that I use ZFS On Linux in a setup containing 2 mirrored 3TB Western Digital Reds.

Sure, play with it in a (K)VM ;)

Wendell, did you experiment with native ZFS on Linux? Debian might start shipping this in the near future. Any particular reason you chose the less mature btrfs for the testing?

@wendell:
I'm interested in the entire result of the last run of the md5sum checker (with btrfs, after inserting corruption). What we see in the video (at 27:50) is that everything was OK at least until file 124. Could it (in this example or theoretically) be that the hashes of every copy of a file were affected by the corruption, so that we get at least one dummy file which cannot be recovered?

I don't believe this is possible as long as only one drive is faulty. The error reporting could be better about telling you which drive has the bad data, though.

Btrfs fixed every single corruption because it always had a good copy on the other drive.
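That repair path is what a btrfs scrub exercises: read every block, verify its checksum, and rewrite any bad copy from the good mirror. A rough sketch with placeholder device names (`/dev/sdb`, `/dev/sdc`); this destroys anything on those disks, so try it in a VM:

```shell
# Two-disk btrfs mirror: both data (-d) and metadata (-m) stored as raid1,
# so every block has a checksummed copy on each drive.
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
mkdir -p /mnt/pool
mount /dev/sdb /mnt/pool

# Scrub the pool: verify all checksums and repair bad copies from the
# good mirror. -B runs in the foreground so you see the summary.
btrfs scrub start -B /mnt/pool
btrfs scrub status /mnt/pool
```

A scheduled scrub is what catches corruption on blocks you haven't read recently, before the second copy can also go bad.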


Open question to all :)

My current server is configured with these drives:

1 SSD (OS only; currently Windows Server 2008 R2)

2x 500GB WDC in RAID 1 (on an Adaptec 1420 RAID controller).
2x 1TB WDC in RAID 1 (on an Adaptec 1420 RAID controller).

If I change the 2x 500GB drives to 2x 1TB drives and run two RAID 1 sets of 1TB each on the RAID controller, would there be a benefit to installing Linux on the SSD and using the two RAID 1 sets with BTRFS?

Would that give a more secure platform, since the RAID controller offers redundancy on the RAID 1 sets and BTRFS adds redundancy/file-health checking via its own design?
Or will that be a waste of resources and result in a divide-by-zero scenario? :)

Hi Wendell. Great piece on hardware RAID and ZFS/BTRFS; I found it quite thorough and informative! I was just wondering if you'd by any chance ever had a look at Windows' latest file system, ReFS, and had any thoughts about it and its software RAID system, "Storage Spaces"?

Hi @wendell (and other knowledgeable people)

I have a computer in the works, 1x SSD for the OS and 2x WD reds for mass storage. I'm using Windows 7 as the OS for Solidworks and rendering with a Quadro.
Originally, it was just going to be a RAID 1 on the WD Reds and that's it. Then I saw these videos and wondered what the best way to go is. One potentially dumb suggestion would be to run Windows 7 from within Linux on the SSD and have BTRFS look after the WD Reds...? Or is it better to stick to the original plan?
Computer Specs: Impact VII, 4790K, 16GB RAM, Quadro K4200, 850Pro SSD, 2x 4TB WD red pros.
Cheers

Hi xXDeltaXx,

One possible solution would be to stick with Windows by upgrading to Windows 8.1 (or wait until Windows 10) and use their software RAID solution "Storage Spaces". Look it up and see what you think of it.

Sort of off topic, but not quite. In the video you talk about using a RAID card, but in Logan's NAS build he doesn't mention using one. Is there any particular reason why? I'm pretty green when it comes to RAID stuff, and I'm looking to build a home media server with Plex and ~6 HDDs in RAID.

-Aaron