RAID 5 is already a bit moot with 4 TB consumer drives, especially if you bought a batch around the same time (they will approach end of life together, so failure rates will likely be higher by the time you do your first real rebuild). As for rebuild speed, with a better setup (like RAID 6/7 or the ZFS equivalent… which IMO both make more sense than "hot backups") the idea is that you don't stop everything and wait for a rebuild; the array rebuilds while still serving the typical file requests from clients. The typical client duty cycle on the drives of a server, NAS or SAN is very rarely high on average anyway, so it doesn't even slow the rebuild much.
Also for reference, rebuilding an array isn't crazy long at modern drive speeds; even on 10 TB drives it'll happen overnight for a business. Writing 6 TB sequentially to a Toshiba X300 6 TB/8 TB happens at over 150 MB/s, i.e. a total rebuild of a full (or 75%-full, respectively) drive in about 11 hours, if the attached CPU/controller can keep up with parity generation etc. in your chosen format. And given that it's not much load on the rest of the array to supply 150 MB/s, the speed delivered from a NAS/SAN to users shouldn't drop much during a rebuild even without user prioritization… nor is a rebuild a "stress" far out of the ordinary for any one other drive in the array.
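For the arithmetic above, here's a quick sanity check (just a sketch; the 150 MB/s sustained rate and the capacities are the figures from this post, and `rebuild_hours` is my own helper, not any RAID tool's API):

```python
# Back-of-the-envelope rebuild time: the drive's capacity written
# sequentially at a sustained rate. Uses decimal units, matching how
# drive capacities are marketed (1 TB = 1,000,000 MB).

def rebuild_hours(capacity_tb: float, write_mb_per_s: float) -> float:
    """Hours to write capacity_tb terabytes at write_mb_per_s MB/s."""
    total_mb = capacity_tb * 1_000_000
    return total_mb / write_mb_per_s / 3600

print(f"{rebuild_hours(6, 150):.1f} h")         # full 6 TB drive -> ~11.1 h
print(f"{rebuild_hours(8 * 0.75, 150):.1f} h")  # 8 TB drive at 75% full -> same 6 TB written
```

Real rebuilds add parity computation and any concurrent client I/O on top of this, so treat it as a best-case floor.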
Regarding failure rates, neither the drive manufacturers nor the more independent reviews claim a higher failure rate per TB-year of throughput for higher-capacity but otherwise equivalent drives. You may have more read heads than a single lower-capacity drive, but you have fewer moving parts than an equivalent capacity built from lower-capacity drives. One of the big reasons for helium isn't tighter tolerances; it's the ability to seal the drive from external factors and reduce power draw. That (alongside having less reciprocating mass per terabyte) is one of the reasons the larger-capacity drives can be much more power efficient than their peers (I say peers because enterprise drives tend to be hungrier than their consumer equivalents of the same capacity), enabling a similar total draw per populated drive bay despite radically higher total capacity.
Cost-per-terabyte-wise, of all the drives listed on PCPartPicker, limited to 7200 rpm 3.5" and sorted by price per gigabyte, the top options are the 3 TB Seagate Barracuda, 3 TB Toshiba, 6 TB Toshiba X300, 6 TB Hitachi (my personal pick, at about 2.8 cents per gigabyte), then the 5 TB and 4 TB Toshiba X300s. With an exception for stock clearance on older drive designs (which can be totally worth it, since a new-in-box last-gen drive doesn't necessarily have a higher failure chance than the drive that replaces it), price per gigabyte isn't really a shortcoming of big drives, especially once you consider the cost per bay in your storage array (or the hassle of swapping more often in a cold backup rotation).
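To illustrate the per-bay point: even when two drives tie on raw price per gigabyte, amortizing a fixed cost per chassis bay tilts the math toward the bigger drive. (A sketch only; the prices and the $40 bay cost below are hypothetical round numbers I made up, and `effective_cost_per_tb` is my own helper.)

```python
# Effective cost per TB once a fixed per-bay cost (chassis, controller
# port, power, swap hassle) is attributed to each drive.

def effective_cost_per_tb(drive_price: float, capacity_tb: float,
                          cost_per_bay: float) -> float:
    return (drive_price + cost_per_bay) / capacity_tb

# Hypothetical: a $85 3 TB drive vs a $170 6 TB drive -- identical raw
# $/TB -- with $40 of bay cost attributed to each.
print(effective_cost_per_tb(85, 3, 40))    # ~41.7 $/TB
print(effective_cost_per_tb(170, 6, 40))   # 35.0 $/TB
```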
I do, however, totally agree about getting a drive that suits your use case, having good backups, and learning not to worry… I personally wouldn't RAID 0 a bunch of older drives (it's just not been worth the headaches in the past for me, even when there's no irreplaceable data on the array).
Indeed, I suspect that for home users, more often than not you'll find that once you understand your needs you don't even need RAID arrays: even if the storage pool is big, the daily modification rate is pretty small, and uptime of most of the data isn't essential so long as it can eventually be recovered, so you can maintain complete data replacement within acceptable time via infrequent full backups plus something like a nightly delta. Having said that, a two-disk RAID 1 (even in poor software like Windows' built-in mirroring) is still certainly an option for storage inside a machine you're directly interfacing with, as it allows near-zero downtime on an unexpected drive failure (just remove the dead disk). Just remember, RAID is not a backup!
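The nightly-delta idea above can be sketched in a few lines (my own illustration, not a production backup tool — use something battle-tested for real data; the function name and layout are mine):

```python
# Copy only files modified since the last run into a date-stamped
# folder under dest_root: a bare-bones "nightly delta" on top of an
# infrequent full backup.
import shutil
import time
from pathlib import Path

def delta_backup(src: str, dest_root: str, since_epoch: float) -> int:
    """Copy files under src modified after since_epoch; return the count."""
    dest = Path(dest_root) / time.strftime("%Y-%m-%d")
    copied = 0
    for path in Path(src).rglob("*"):
        if path.is_file() and path.stat().st_mtime > since_epoch:
            target = dest / path.relative_to(src)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, target)  # copy2 preserves mtimes/permissions
            copied += 1
    return copied
```

A real tool would also handle deletions, hard links, and open files, which is why rsync-style utilities exist.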