RAID: Tech in Transition | Level1techs

I would also like to see this. Kinda curious to see how well it stacks up to ZFS and BTRFS.

the shirt rocks! it inspired me to submit one to the t-shirt thread.

No, because ReFS doesn't support hard links, so it's not recommended for a system disk. NTFS has far more features and is fast. ReFS is aimed more at extremely large sets of data.

Yeah, ReFS is more for data storage/archiving; it doesn't support compression or booting from the drive.

You use Samba, and it gives Windows clients access to the shares. :)


You should probably get ahold of Linus before he sets up his new server for his office, because he is all excited over a massive RAID array... errrr... help him quickly!

Question: (I asked this on YouTube and got no response, so I figured I'd ask here)

Would an NTFS disk image on a VM, or a PXE boot setup, benefit from the error correction of the ZFS or Btrfs filesystem hosting the image?

You've talked about Btrfs and ZFS. I have an ext4 server running Samba (AD). I will (hopefully soon) migrate the data to the new server, and one of the features I want is deduplication (10 copies of the same institutional video is killing me).
The new server has a Xeon E5-xxxx, 16GB of RAM, and 2TB of storage (we won't need more than this for a while). I am testing Btrfs (converted from ext4), but I would like to use ZFS. I am using Debian, and there are fewer than 200 users on the shares. Is that enough to use it without any problems?
Also, I am having problems rsyncing the old server to the new one since I converted to Btrfs; it's taking too long.

I guess I answered myself, huh...

Yes, bitrot on the underlying hardware would be detected and corrected on the fly without the VM being aware anything even happened.

This is common among SANs, though not specific to ZFS. And as a VM image host you may have performance tuning considerations; it takes a bit of work to make ZFS fast as well.
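To make the self-healing idea concrete, here is a toy sketch of how a checksumming, mirrored store can detect and repair silent corruption on read. All class and method names are made up for illustration; this is the general technique, not ZFS internals.

```python
import hashlib

# Toy model of checksum-based self-healing, loosely in the spirit of a
# ZFS mirror. Block checksums are stored separately from the data, so a
# silently flipped bit in either copy is caught on read and repaired
# from the good copy. All names here are illustrative, not ZFS APIs.

class MirroredStore:
    def __init__(self):
        self.copies = [{}, {}]   # two mirrored copies of each block
        self.checksums = {}      # block id -> expected checksum

    @staticmethod
    def _checksum(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def write(self, block_id: int, data: bytes) -> None:
        for copy in self.copies:
            copy[block_id] = data
        self.checksums[block_id] = self._checksum(data)

    def read(self, block_id: int) -> bytes:
        expected = self.checksums[block_id]
        for copy in self.copies:
            data = copy[block_id]
            if self._checksum(data) == expected:
                # Found a good copy: heal any copy that no longer matches.
                for other in self.copies:
                    if self._checksum(other[block_id]) != expected:
                        other[block_id] = data
                return data
        raise IOError("all copies corrupt")

store = MirroredStore()
store.write(0, b"important data")
store.copies[0][0] = b"important dat\x00"       # simulate bitrot on one disk
assert store.read(0) == b"important data"       # bad copy detected, good one returned
assert store.copies[0][0] == b"important data"  # and the bad copy was repaired in place
```

The VM reading through such a layer would never see the corrupt bytes, which is the point made above.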

That's good that it would be protected. I hope one day Intel will build filesystem error-correcting accelerators into the CPU like they do with AES, but I think Microsoft would have to get behind it. NTFS is so outdated. >,< "New Technology File System" my ass, it wasn't even new when Microsoft released it with NT 3.1! Its design was already out in the wild four years earlier as HPFS on OS/2 1.2.

Yeah, Samba is filesystem-agnostic, so Windows doesn't know or care what's running underneath as long as the file-sharing protocol (SMB) matches up with what it expects.
AND you can even expose ZFS snapshots to your Windows clients as if they were shadow copies.
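For reference, that trick is typically done with Samba's vfs_shadow_copy2 module, which maps snapshots under ZFS's hidden `.zfs/snapshot` directory to the Windows "Previous Versions" tab. The share path and snapshot naming format below are illustrative; the `shadow:format` string must match however your snapshots are actually named.

```
[share]
    path = /tank/share
    vfs objects = shadow_copy2
    shadow:snapdir = .zfs/snapshot
    shadow:sort = desc
    # Must match your snapshot names, e.g. created with:
    #   zfs snapshot tank/share@2016-01-31-1200
    shadow:format = %Y-%m-%d-%H%M
```

With that in place, right-clicking a file in Explorer and choosing "Restore previous versions" lists the ZFS snapshots.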


Be careful with deduplication. You'd have to check the wiki, but dedup makes the RAM requirements shoot way up, since the dedup table has to be held in memory. I believe very recent versions of ZFS let you use an SSD instead of RAM for the database where that sort of stuff is kept; this may not be true of ZFS on Linux. There are actually two ways to run ZFS on Linux, via FUSE and natively as a kernel module. You probably want the native one.
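As a rough back-of-the-envelope for how fast those RAM requirements grow: a commonly quoted ballpark is on the order of 320 bytes of dedup table (DDT) per unique block, so the total scales with pool size divided by average block size. The figure and the function below are an approximation for illustration, not a guarantee.

```python
# Rough estimate of ZFS dedup table (DDT) memory usage. The ~320 bytes
# per unique block is a commonly quoted ballpark; real usage varies with
# pool layout and dedup ratio.
BYTES_PER_DDT_ENTRY = 320

def ddt_ram_bytes(pool_bytes: int, avg_block_bytes: int) -> int:
    """Approximate memory needed to hold the DDT for a full pool."""
    unique_blocks = pool_bytes // avg_block_bytes
    return unique_blocks * BYTES_PER_DDT_ENTRY

TIB = 1 << 40
# A 2 TiB pool with a 64 KiB average block size needs on the order of
# 10 GiB just for the dedup table -- easy to blow past 16GB of RAM.
est = ddt_ram_bytes(2 * TIB, 64 * 1024)
print(f"{est / (1 << 30):.1f} GiB")  # -> 10.0 GiB
```

Smaller average block sizes (lots of small files) push the estimate up even further, which is why dedup surprises people.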

I would try this for a few months and see how it goes. I set up an old 16-core Opteron box with compression, dedup, and about six 750GB disks I had laying around in RAIDZ1. I copied a bunch of crap to it and everything was fine; then one day the box was mysteriously crawling. This was a test box so it didn't matter much, but it was crazy how slow it was. I rebooted. Uh oh, ZFS won't mount. After some fiddling, it turned out I was out of RAM and hitting swap when trying to mount the ZFS volume. Oops. Added some more RAM, and that got it to where I could mount. Then I upgraded FreeNAS, updated the zpool schema version, and added a 64GB SSD. Then I took the RAM back out, and it was fine. So it pays to actually create some disastrous situations on your own, so you can get a feel for what happens when things go sideways.


This video makes me question the PERC H700 RAID 5 setup in my house. @wendell so constant patrol reads with weekly full consistency checks aren't enough? Ditch the HW RAID and go for ZFS? I know this video is aimed at enterprise, but I feel like a lot of us here would like to visit this topic from a home DIY angle as well. Might be a good idea for another discussion video.

It will work if the drives report a failure to read. If a drive just silently returns bad data, it won't recover. Worse, if a drive returns incorrect info, the inconsistency may be "corrected" to the wrong data. In the upcoming video we randomly wrote gibberish to 1,000 4KB sectors on one drive of a RAID 5 setup. Both md and the PERC 5 ended up with about 500 corrupt files (about 50%, as one would expect, depending on the odds of the corruption hitting a data or a parity sector).

When md did the "corrections," it had no way of knowing which version was right, so 50% of the time it just ran with the first one.
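The ambiguity can be sketched with a tiny XOR-parity example: with single parity, a consistency check can see that the stripe is inconsistent, but nothing identifies which member is lying, so a rebuild can happily regenerate the wrong data. This is purely illustrative, not md's actual code.

```python
# Single-parity (RAID 5 style) XOR demo: a consistency check detects
# that *something* in the stripe is wrong, but with one parity block
# there is no way to tell which member holds the bad data.

def xor_parity(*blocks: bytes) -> bytes:
    """XOR all blocks together byte-by-byte (blocks must be equal length)."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

d0 = b"\x11\x22\x33\x44"
d1 = b"\xaa\xbb\xcc\xdd"
parity = xor_parity(d0, d1)          # healthy stripe: d0 ^ d1 == parity

# A drive silently returns garbage for d0 -- no read error is reported.
d0_bad = b"\x00\x22\x33\x44"

# A scrub sees the stripe is inconsistent...
assert xor_parity(d0_bad, d1) != parity

# ...but the identical symptom would appear if d1 or the parity block
# were the corrupt one. "Repairing" by trusting d0_bad rebuilds garbage:
rebuilt_d1 = xor_parity(d0_bad, parity)  # what a rebuild would compute
assert rebuilt_d1 != d1                  # wrong data, written silently
```

That is exactly the coin flip described above: without a per-block checksum, the array has no arbiter for which side to trust.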

You can confirm this by creating a RAID 1 on the H700. You can use the drives in a RAID 1 without the H700, i.e. each will work as a stand-alone drive. When the drives are out of sync, how does the H700 know which drive to trust if neither reports errors?

Granted, SAS drives (and even nearline SATA drives) are better at self-reporting errors, which is what the controller counts on to know which drive not to trust.

This is probably one of the better educational videos you guys have made. I feel you tackled the many diverse RAID implementations well and you helped me get a better understanding of it all.

Going into this I thought that Windows software RAID and/or disk spanning was an OK option. I now know better! I look forward to the rest of the videos in this series.

If I were just going to build a media server/home NAS kind of thing, do I really need to go all out with it? Couldn't I just get some hard drives, throw 'em in, and slap RAID 1 on it? Or should I use SSDs for that kind of thing, since it's mostly reading and not much writing?

@wendell can you tell me what controller I should buy? I can't find any good ones for 30 bucks. I've seen Dell RAID cards; are they any good?

Wendell, as always, you have given me more information to help me with my everyday job. Wonderful video; I cannot wait to see the testing in action.

@wendell what about bitrot on SSDs? If we just RAID SSDs together instead of HDDs, can we avoid bitrot, or does some other magical doodad appear?