from experience seagate are garbage, they work fine… till they don't. most of the time there's no warning, SMART was showing the drive as healthy yet they died. at least with WD you usually get a heads up before failure
when you buy drives for a nas just remember: the more drives you run together, the more they affect each other
1 to 4 drives wd blue are fine
i'd start using red at 4 but they're not a must till 8+ to be honest
20+ use red pro or enterprise
as a permanent solution external drives are too vulnerable, and usb is way too flaky for reliable long term use. those are best for offline backup, not running storage
Ruffalo the price is not the same everywhere, for example the WD My Book 8TB USB 3.0 External Desktop Hard Drive (WDBBGB0080HBK) is $320 in Australia. you shouldn't assume the op is in the us
anyway i'm gonna go in the opposite direction from everyone else about drive sizes. when everyone says buy big and ruffalo says one should never buy below 6tb in 2019, i completely disagree
i have probably 120 hdds running here and none is above 4tb. the reason is sata drive throughput isn't fast enough
in raid, drives always rebuild slowly… especially if the system is in use. it's not rare for a 2tb drive to take 24 hours to rebuild. while this is the main reason why bigger is not always better, it's not the only one
with big drives you end up with a small number of drives in your raid. if you only have 1 parity you are more than likely to lose everything, because a second drive can fail during the massive amount of time an 8tb takes to rebuild. and if you don't replace the drive right away and all your drives are the same age, a second one might even fail before the rebuild starts
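to put rough numbers on that, here's a back-of-the-envelope sketch. the ~25 MB/s effective rebuild rate is my assumption for an array that's still serving users (it matches the "2tb in ~24 hours" figure above), not a measured spec:

```python
# rough rebuild-time estimate: time = drive capacity / effective rebuild rate
# 25 MB/s is an assumed rate for a busy array; an idle healthy rebuild
# can run several times faster

def rebuild_hours(capacity_tb, rate_mb_s=25):
    """hours to rebuild one drive at a given effective rate"""
    capacity_mb = capacity_tb * 1_000_000  # TB -> MB (decimal, like drive vendors)
    return capacity_mb / rate_mb_s / 3600

for tb in (2, 4, 8):
    print(f"{tb}tb drive: ~{rebuild_hours(tb):.0f} hours")
```

at that rate an 8tb drive spends close to four days rebuilding, which is a lot of time to be one failure away from losing the pool.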
next let's imagine you have 2 8tb drives in raid 1. the read and write speed will be just abysmal. if your main system has an ssd, try going to a hdd-only system for a few days: you might be able to support 1, maybe 2 users at most, or you'll get sporadic performance
raid 5 and raid-z1 have the same single parity issue despite needing 3 drives minimum, but slightly better throughput… sometimes. the overhead is troublesome at times
if you look at raid 6 or raid-z2, it requires at least 4 drives (6 or 10 is better)
the benefit is simple: better read and write speed, and 2 parity drives help a lot when rebuilding
then you have what i use, raid-z3
my chassis are 24 drive bays, with 2 pools of 11 live drives plus 1 hot spare each
the main pools are all made of 2tb drives, and the pools they replicate to are 4tb drives
zfs memory usage really isn't bad, as long as you don't use deduplication
running a 24 x 4tb system (11+1 x2) on 8gb of ecc ram works well
meanwhile with dedupe, well, that's a mixed bag. the 1gb of ram per tb is just a guideline and sometimes it's not enough. from experience, z3 with 11x2tb + 1 hot spare, x2, gives about 31tb of usable space, but the system would never work properly with 32gb of ram. it just wouldn't
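the dedup ram math roughly works like this. ~320 bytes per dedup table (DDT) entry is the usual ballpark and 128k is the default zfs recordsize, both are assumptions here, and real pools hold lots of smaller blocks, which is exactly why the 1gb/tb guideline falls short:

```python
# rough ZFS dedup table (DDT) ram estimate:
#   entries = pool data / average block size
#   ram     = entries * bytes per DDT entry (~320 bytes is the common ballpark)
# smaller average blocks mean many more entries, so the estimate balloons fast

def ddt_ram_gb(data_tb, avg_block_kb=128, entry_bytes=320):
    blocks = data_tb * 1_000_000_000 / avg_block_kb  # TB -> KB, then / block size
    return blocks * entry_bytes / 1_000_000_000      # bytes -> GB

# 31tb of data with the default 128k recordsize
print(f"128k blocks: ~{ddt_ram_gb(31):.0f}gb of ram just for the DDT")
# the same data dominated by 16k blocks
print(f"16k blocks:  ~{ddt_ram_gb(31, avg_block_kb=16):.0f}gb")
```

even at the best case of all-128k blocks, 31tb of deduped data wants well over 32gb for the table alone, before the arc gets anything, which matches what i saw.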