Would there be any downside in replacing my 5TB WD Black HDD with a 14TB EXOS Enterprise?

I’ll use it for running games and media storage. It just seems too good to be true, since I paid about the same for my 5TB WD Black approx. 5 years ago. Will it be significantly slower?

Would something like a WD Black 10TB be a better alternative?


In my experience, a HDD is a HDD, no matter what marketing tells you.

I’m personally using Toshiba MG08 Enterprise drives. Excellent performance (for a HDD anyway, they’re all terribly slow). And 7200 rpm drives tend to be rather noisy, so you might want to look out for that as well. I certainly wouldn’t want those enterprise drives on my desk.

I’d go super cheap on the HDD and put that money into proper SSDs. Because even the fastest HDD RAID, with HDDs hand-crafted and blessed by the WD CEO, can’t compete with a cheap SSD when loading 5,000 small game files.
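If you’d rather measure that on your own machine than take it on faith, here’s a minimal Python sketch. The folder paths are placeholders — point it at a copy of the same asset folder on the HDD and on the SSD — and run it after a reboot, otherwise the OS cache will make both look fast.

```python
# Rough timing of small-file reads, e.g. a game's asset folder, on two drives.
# Paths below are hypothetical examples; substitute your own.

import time
from pathlib import Path

def time_small_file_reads(folder: str) -> tuple[int, float]:
    """Read every file under `folder` and return (file_count, seconds elapsed)."""
    start = time.perf_counter()
    count = 0
    for path in Path(folder).rglob("*"):
        if path.is_file():
            path.read_bytes()  # force the actual read from disk
            count += 1
    return count, time.perf_counter() - start

for drive in (r"D:\games\assets_copy", r"E:\games\assets_copy"):  # hypothetical paths
    n, secs = time_small_file_reads(drive)
    print(f"{drive}: {n} files in {secs:.1f}s")
```

On a cold cache the SSD copy typically finishes in a fraction of the HDD time, which is the whole point.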


Enterprise drives might produce more noise.
Still, if you have enough SATA ports left,
then I would add extra backup storage rather than replacing it.

I think I’ll take Mystery’s route here.

An SSD larger than 5 TB is insanely expensive and you can get so much more storage with a conventional disk. Exos drives are good drives. Buy 2 and go RAID 1 if you want just a tad bit of extra security.

RAID is not a Backup solution

You should be fine, I agree with @Exard3k. Unless there is some radical new technology on one of the drives, they are basically the same. As long as the cache is decent for the drive size, you’re gold. I have a 16TB Exos and it’s as quiet as can be. I do, however, have an older 8TB Barracuda that screams. Honestly, I panicked the first time I fired it up because it vibrated the damned case so badly. Comically and sadly, that’s why I keep all my drives on a rack outside their attached machines.


True, but it can save your butt. :smiley:

The downside of RAID is all the drives are around the same age and sometimes even from the same batch. So when one drive goes bad, the chance of another drive going bad during the rebuild is fairly high. Experienced it twice: once during a RAID 5 rebuild, where I had to rebuild the array and restore from backups, and once during a RAID 6 rebuild, which survived the experience but took a loooong time to rebuild.

IT - Weeks of boredom followed by moments of sheer terror.
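To put a rough number on “fairly high”, here’s a back-of-the-envelope Python sketch. The drive count, AFR and rebuild time are my own illustrative assumptions, and it assumes failures are independent — which same-age, same-batch drives definitely aren’t — so treat the result as an optimistic lower bound.

```python
# Back-of-the-envelope estimate of a second whole-drive failure during a rebuild.
# AFR, rebuild time and drive count are illustrative assumptions.
# Independence is assumed; same-batch correlation makes real-world odds worse.

HOURS_PER_YEAR = 8766  # average, including leap years

def p_second_failure(n_drives: int, afr: float, rebuild_hours: float) -> float:
    """Chance that at least one surviving drive fails while the array rebuilds."""
    p_single = afr * rebuild_hours / HOURS_PER_YEAR  # per-drive chance in the window
    return 1 - (1 - p_single) ** (n_drives - 1)

# Example: 8-drive array, 3% annual failure rate, 48-hour rebuild.
print(f"{p_second_failure(8, 0.03, 48):.2%}")  # ~0.11% if failures were independent
```

The printed number looks small, but once the drives share a batch and a wear history the per-drive odds in that window climb well past the nominal AFR, which is exactly the scenario described above.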

I went ZFS RaidZ2

All non ZFS RAID is inferior in my mind

The upside is the 14TB HDD will probably have a larger cache and be faster overall. The downside is that a 14TB HDD is probably a Helium drive. Helium WILL LEAK through aluminum and seals. One day, you’ll go to power up the HDD and – nothing. That day may be 10 years down the road, but that’s the reason I’ve avoided any He HDDs. 10TB is about as high in capacity as you can go before manufacturers need He in the drive to reach higher data densities. I store archival data on HDDs in safes. They can go for years before I need to access that data – so the last thing I need is for a drive to not read its label and have to send the drive out for an expensive repair…

I hate to say this because A) I’m cheap B) I’m old (I remember that phrase “built to last” when it meant something) and C) I’m cheap but…

You’d hope by the time a drive dies (He or not) you’d have moved to something larger / better. I have my 16TB backing up to an array of smaller drives (which is a pita) but it helps with the age/batch issues. I know 10ish years ago every Seagate drive I had died all at the same time, different models, different batches. Just dead. RMA’d them. The RMAs came back… and died in a month?! Drives are “just not made like they used to be.” - but thank god we don’t have to park them now… (who is old enough to know what that means?)

In the past, when building RAID 6 mdadm volumes for clients, I would purposely buy drives from different vendors to help mitigate correlated failures.

In an 8-drive RAID 6 volume, at least 2 of the disks would be from a different vendor…

I’ve found this doc to be very useful. It details failure rates of drives in a data center, broken down by vendor.

I put HGST Enterprise drives in my NAS based on this report in 2019. Not one issue since.

Drive Failure Stats
