Doesn’t sound like you actually watched it. He steps through all of the data integrity failure modes, like the RAID5 write hole.
He points out that modern RAID solutions no longer check parity on every read, and instead rely on the drives to self-report problems, which means you are never going to catch silent bit rot.
You absolutely cannot trust your hardware. There is no hardware on the planet that can be trusted: bit flips and other random errors are not just likely, they are statistically guaranteed once you process enough data.
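A quick back-of-the-envelope sketch of why "guaranteed" is the right word. The per-bit error rate below is an assumed figure of the sort quoted on consumer-drive spec sheets, not a measurement:

```python
# Assumed unrecoverable-read-error rate, roughly what consumer-drive
# spec sheets quote (1 error per 10^15 bits read). Illustrative only.
BIT_ERROR_RATE = 1e-15

def expected_bit_errors(bytes_read: int) -> float:
    """Expected number of bit errors after reading this many bytes."""
    return bytes_read * 8 * BIT_ERROR_RATE

drive = 12 * 10**12  # one full read of a 12 TB drive
print(f"{expected_bit_errors(drive):.3f} expected errors per full read")

# Scrub that drive monthly for five years (60 full reads):
print(f"{expected_bit_errors(drive * 60):.2f} expected errors over 5 years")
```

At one full read the expected error count is still small, but it scales linearly with data processed, so over an array's service life you should plan on hitting errors, not hope to avoid them.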
ZFS verifies the checksum of every single block as you read it, and if it doesn’t match, repairs the data from redundancy and rewrites the bad block on the fly. And it does it FAST.
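A conceptual sketch of that read path (this is a toy model, not the actual ZFS code): every block carries a checksum, every read verifies it, and a mismatch triggers a repair from a redundant copy before the data is returned:

```python
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class MirroredBlock:
    """Toy two-way mirror with a per-block checksum, ZFS-style."""
    def __init__(self, data: bytes):
        self.copies = [bytearray(data), bytearray(data)]
        self.sum = checksum(data)

    def read(self) -> bytes:
        for copy in self.copies:
            if checksum(bytes(copy)) == self.sum:
                # Found a good copy: heal any sibling that fails verification.
                for j in range(len(self.copies)):
                    if checksum(bytes(self.copies[j])) != self.sum:
                        self.copies[j] = bytearray(copy)  # rewrite bad copy
                return bytes(copy)
        raise IOError("all copies corrupt: unrecoverable")

block = MirroredBlock(b"important data")
block.copies[0][0] ^= 0xFF            # simulate bit rot on one "disk"
assert block.read() == b"important data"  # bad copy detected, read still succeeds
assert block.copies[0] == block.copies[1]  # and the rotten copy was healed
```

Traditional RAID can't do this: parity tells you two copies disagree, but without a checksum it can't tell you which one is right.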
Additionally, most distributions schedule a scrub of the pool every month by default, to detect bit rot in data that is not frequently read.
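The scheduling is typically done by the distro rather than by ZFS itself. A config fragment along these lines (pool name and paths are illustrative) is what a monthly scrub usually looks like:

```shell
# Kick off a scrub by hand on a pool named "tank":
#   zpool scrub tank
# Check progress and repaired/checksum-error counts:
#   zpool status -v tank

# Illustrative cron entry for a monthly scrub, in the spirit of what
# Debian/Ubuntu ship in /etc/cron.d/ with the ZFS utilities:
0 2 1 * * root /sbin/zpool scrub tank
```

The scrub walks every allocated block and verifies its checksum, so even data you haven't touched in years gets the same detect-and-repair treatment as data on the hot path.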
I’d agree that BTRFS is an inferior solution. The only thing it has going for it is licensing: OpenZFS is released under the CDDL, which is incompatible with the GPL and keeps ZFS out of the mainline kernel. BTRFS had merit when it started, but it just hasn’t panned out. Even many of the most die-hard Linux fans seem to agree at this point.
Modern hardware RAID is just a disaster waiting to happen.
Yes, I know, we should all have multiple offsite cold storage backups, but how long does it take you to restore from that backup when the shit hits the fan?

