As the title says: why do people still recommend RAID rather than filesystems like ZFS/BTRFS?
It's simply because most people don't keep up with strides in technology like filesystems. Most people will recommend RAID out of "if it ain't broke, don't fix it" (though my motto is: if it ain't broke, fix it until it is).
Edit: kind of like how Windows STILL uses NTFS. People want the traditional option, and will fork over more money for it.
If it ain't broke... Break it!
I have two smallish SSDs that are on the slower side. RAID 0 doubles the throughput, which is nice for games since some are getting 60+ GB in size.
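If anyone's wondering why that roughly doubles throughput: RAID 0 just splits the data into fixed-size chunks and deals them out across the drives, so a big sequential read can pull from both drives at once. Here's a toy Python sketch of the striping idea (purely conceptual, nothing like a real RAID driver; the 64 KiB stripe size is just a number picked for the example):

```python
# Toy illustration of RAID 0 striping: chunks alternate across two "disks",
# so each disk only has to serve about half of a large sequential read.
STRIPE_SIZE = 64 * 1024  # 64 KiB stripes, an arbitrary choice for the example

def stripe(data: bytes, num_disks: int = 2) -> list[list[bytes]]:
    """Split data into stripe-sized chunks and deal them out round-robin."""
    disks = [[] for _ in range(num_disks)]
    chunks = [data[i:i + STRIPE_SIZE] for i in range(0, len(data), STRIPE_SIZE)]
    for i, chunk in enumerate(chunks):
        disks[i % num_disks].append(chunk)
    return disks

def read_back(disks: list[list[bytes]]) -> bytes:
    """Reassemble the original data by reading the disks round-robin."""
    out = []
    for i in range(max(len(d) for d in disks)):
        for disk in disks:
            if i < len(disk):
                out.append(disk[i])
    return b"".join(out)

if __name__ == "__main__":
    game_file = bytes(1_000_000)           # pretend this is a big game file
    disks = stripe(game_file)
    per_disk = [sum(len(c) for c in d) for d in disks]
    print(per_disk)                         # each "disk" holds ~half the bytes
    assert read_back(disks) == game_file    # and together they reconstruct it
```

Each drive only serves about half the bytes, and since they do that in parallel you get roughly 2x sequential throughput. Random I/O and latency don't improve the same way, which is part of why RAID 0 matters less once you're on SSDs.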
"If It ain't broke... Break it!"
Overclock testing in a nutshell.
As to the OP, Wendel talked about this. I've always oohed and aahed over RAID, but other than load times on slow HDDs (something increasingly irrelevant with SSDs getting cheaper) and mild file protection, RAID doesn't actually have much benefit... and it has plenty of potential downsides. Here's Wendel on the topic. Throw this out the next time you see RAID advised. SSDs for speed and redundancy for security is where it's at, folks!
Running two 850 EVOs in RAID 0, I don't have a problem. If it ain't broke...
Maybe I'm missing something, but how is a file system going to bring me redundancy? Does ZFS or BTRFS have some sort of mirroring between multiple disks etc?
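Edit: from what I've read, they do. ZFS has mirror and RAID-Z vdevs, btrfs has raid1/raid10 data profiles, and since both filesystems checksum their blocks, they can tell which copy is bad and repair it from the good one. A toy Python sketch of that self-healing mirror idea (purely conceptual, not how either filesystem actually implements it):

```python
import hashlib

# Toy "self-healing mirror": every block is written to two disks along with a
# checksum, so a corrupted copy can be detected and repaired from its twin.
disk_a: dict[int, bytes] = {}
disk_b: dict[int, bytes] = {}
checksums: dict[int, str] = {}

def write_block(block_no: int, data: bytes) -> None:
    disk_a[block_no] = data
    disk_b[block_no] = data
    checksums[block_no] = hashlib.sha256(data).hexdigest()

def read_block(block_no: int) -> bytes:
    expected = checksums[block_no]
    for primary, backup in ((disk_a, disk_b), (disk_b, disk_a)):
        data = primary[block_no]
        if hashlib.sha256(data).hexdigest() == expected:
            # Serve the good copy, and heal the other one if it has rotted.
            if hashlib.sha256(backup[block_no]).hexdigest() != expected:
                backup[block_no] = data
            return data
    raise IOError(f"both copies of block {block_no} are corrupt")

if __name__ == "__main__":
    write_block(0, b"save game data")
    disk_a[0] = b"bit rot!!!!!!!"           # silently corrupt one copy
    print(read_block(0))                     # still returns the good data...
    assert disk_a[0] == b"save game data"    # ...and repairs the bad copy
```

A plain RAID 1 mirror also keeps two copies, but without per-block checksums the controller can't tell which copy is the right one when they disagree, which is the usual argument for putting the redundancy in the filesystem.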
Also, because of hot, nasty, badass speed.
+1
Autonomous caching seriously rocks, and RAID hardware is a long-term solution. You can't get the same sustained performance out of tweakery and "regular" hardware over the long term.
Then there is also maintenance: a hardware RAID is very easy to maintain, which means lower costs.
You can set up a RAID controller as JBOD, with two backplanes as a redundant environment. That brings together the best of software and hardware.
Btrfs is not perfect. It's a good compromise and a serious step forward for smaller systems and SOHO solutions, but at the enterprise level it's still workstation-grade, not server-grade. There are still some serious issues with btrfs, like when your system goes down from a total hardware failure and you have to scrub your array from an external live distro because it no longer runs at a decent speed natively after the crash... been there, done that.

There is no substitute for multiple levels of redundancy and multiple safety nets, like snapshotting to an external SAN for instance; always a good idea to prevent misery and lost time. But snapshotting multiple workstations costs a lot of resources, and in the end you'll need a solid server-side solution, one that involves a lot of different filesystems, layers of redundancy, and an unholy amount of storage that needs to be accessed efficiently and as bloody crazy fast as possible. You just need the hardware to make that happen, and that hardware is high-speed autonomous-caching RAID controllers.
Flexibility is everything; some applications will need different solutions. The whole reason servers run Linux is to have that flexibility, so you don't have to settle for a "one size fits none" solution that costs a lot of money but doesn't offer targeted performance.
There is also a scale problem. No motherboard will reliably hook up 32 or more devices at maximum performance, so you have to go through add-on interfaces anyway. Why not maximize the performance of those? Then you end up with a hardware RAID controller on a standardized backplane solution.
@Zoltan is, as always, dead on the money. I'd add that for a lot of SOHO or small businesses running a Windows server, a hardware RAID is really the best and only solution. Once you move away from the Windows server environment your options change, but the reality is that hardware RAID is still a viable option, especially in small businesses.