I’m running 3x vdevs of 8x HDDs each, and for my primary backup array I’ve been changing the drives to UltraStar 16TB units (SATA). Since I live out in the boonies, I order these from Amazon or B&H Photo, usually as single-pack “Retail” drives.
Since I’m not in the US, warranty support gets voided as “out of region”. In the past year I’ve probably had 3-4 drives present errors. The most recent has been 2x units with read/write errors after just 2 months of operation.
Each pair of drives runs ~$700, and this is getting expensive, really fast!
I’m starting to pull my hair out - any suggestions? Should I try something from Seagate?
In general, for the same class of HDD the failure rate will be the same; the exception is particularly bad batches/designs, of which Seagate used to be the champion (10+ years ago). But nowadays, looking at the stats, that’s not the case anymore.
You’re running 24 drives, so proper mounting/vibrations/power/temperature may be a factor in the failure rate …
Also, I am not familiar with the latest generation of Ultrastars, so I am assuming they are not SMR drives that you’re using for a purpose they were not made for?
These are the drives that I’ve noticed issues with:
They are marked as CMR. I’ve now started to shift the replacement drives to different slots on my HBA to potentially isolate any “bad” connections as well.
Having run these arrays for over 2 years now, I also had my very first “data corruption” event, and ZFS nicely corrected it. Was pretty sweet to finally see this in action.
Previously, all my issues have been read/write errors in the zpool report.
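For anyone curious, I spot these with a scrub plus the per-device counters; roughly like this (`tank` stands in for my actual pool name):

```bash
# kick off a scrub, then read the per-device counters
zpool scrub tank
zpool status -v tank   # non-zero READ/WRITE/CKSUM columns point at the bad drive
```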
Secondary host (Supermicro): 16x drives (2x vdevs). I’ve got 8 at the top and 8 at the bottom, with two rows of 4 bays (4x2) empty in the middle.
The primary replicates to backup 1, which then makes a local replication to backup 2 on the secondary host (rough sketch below).
Yes, I understand that these aren’t 3-2-1 backups, but given the volume of data, it’s the best I can do, with super-critical data being pushed into Backblaze.
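The replication itself is just plain `zfs send`/`zfs recv`; roughly like this (pool, dataset, snapshot, and host names here are made up for illustration):

```bash
# initial full replication of a dataset to the backup pool
zfs snapshot -r tank/data@2023-07-14
zfs send -R tank/data@2023-07-14 | ssh backup-host zfs recv -Fdu backup1

# later runs only send the delta between two snapshots
zfs send -RI tank/data@2023-07-14 tank/data@2023-07-21 | \
  ssh backup-host zfs recv -Fdu backup1

# the local hop to backup 2 is the same thing without the ssh
zfs send -RI backup1/data@2023-07-14 backup1/data@2023-07-21 | zfs recv -Fdu backup2
```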
i would try going to a shop.
picking them up and taking them home yourself.
one of the biggest killers of drives is home delivery.
not kidding, every drive i have had delivered to my door has failed within warranty, more often than not long before the warranty was out.
8 drives across 4 builds failed in 1 year.
one of them had 4 drives, 2 in raid 0, all 4 failed within 4 months.
every one i picked up from the shop.
still working as far as i know.
tried seagate and toshiba. both have been decent brands for longevity on 1-2 tb drives in 20+ builds.
but again i picked all of em up from the store, and while only a few of them have been for nas/jbod, they do the job well enough for my mates and friends-of-family customers.
A lot of the time the errors look like this - sadly I haven’t saved the system log but typically it’s a read/write type error. Next time I stumble on this I’ll make sure to share that here.
All my drives are shipped from the US and they make a LONG trip to Sri Lanka in a box via DHL or UPS - it’s basically a trip half-way across the globe.
The most recent pair had the drives inside a LARGE box, then each drive inside its own box, with plastic “ears” to hold the drives. One of the “ears” was fully shattered like glass.
Look into the SMART data on the drives for the errors; that can help clarify what kind of an error it is.
If the drive is reallocating sectors, then yeah, it’s a drive problem.
Things like cable problems generate transmission errors that show up under attributes like Ultra DMA CRC Error Count.
Also note that the values for SMART data can’t be taken at face value. They could be a percentage, an absolute value, or just straight up weird and arbitrary bullshit.
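A quick way to pull the relevant attributes, assuming smartmontools is installed (the device path is just an example):

```bash
# overall health verdict plus the full attribute table
smartctl -H -A /dev/sda

# the attributes that matter for the failure modes above:
#   5   Reallocated_Sector_Ct   - remapped sectors (genuine drive problem)
#   197 Current_Pending_Sector  - sectors waiting to be remapped
#   199 UDMA_CRC_Error_Count    - transmission errors (cable/backplane/HBA)
smartctl -A /dev/sda | grep -E 'Reallocated|Pending|CRC'
```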
Also, what do you mean by “failure”? As in the drive completely stops working, or you pulled it because of errors in ZFS?
well that sucks… guess there’s no local newegg or similar close by…
also just saw what’s going on over there.
stay safe mate.
one thing that can save some wear on drives is the way you mount em.
the other real killer of mechanical drives is sympathetic vibrations.
so make sure your cages are snug and the drives are snug in them.
dampen the drives with rubber grommets of various densities, as they change the vibrations each drive absorbs from and emits to the other drives in the box.
the aim being to stop the drives from syncing up, which adds wear to the moving parts.
much like what’s going on here. a similar effect happens inside your drives, with the motors and actuators having to absorb the vibrations till they hit a sync harmonic.
as i said, this adds extra wear to components, and if the harmonics aren’t dampened to a frequency the drives won’t sync to, you can destroy a whole array pretty quickly through harmonic resonance.
I think Hitachi drives are better than Seagate, or at least that’s true of the two types I’m using. I swapped the Seagates out of one of my arrays for Hitachis and it’s a lot better now, cooler and with no error buildup in ZFS.
I don’t think those errors mean the drive is gone; you can probably resilver them away, which is what I was doing until one broke and I decided to replace them all, one at a time (example below).
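“Resilvering them away” for me mostly meant clearing the counters and re-verifying; something like this, with placeholder pool/device names:

```bash
# reset the error counters on the suspect drive, then re-verify the pool
zpool clear tank ata-HUH721616ALE604_XXXXXXXX   # device name is hypothetical
zpool scrub tank
zpool status -v tank   # counters climbing again = the drive really is dying
```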
SAS drives are better because they are designed for arrays, see the metronome post.
I would suggest you start again building a new server using what you’ve learned and maybe this time the drives will be long lived.
I’m running WD Red Plus at the moment, no complaints so far, other than WD as a company being traitorous fucks who keep screwing people with firmware changes, submarined SMR drives in the normal red line, etc.
Recent (lol) drive history for me: ran a bunch of WD blacks in ZFS for 6-7 years just fine. Replaced with a bunch of Seagate drives (forget which), died after 4 years or so. Replaced with the red plus a few months ago.
I’m not sure, and you may want to check - but wouldn’t the 16 TB drives be SMR? If so… bad for ZFS. If not (CMR recording), play on… but definitely check the spec sheet of any spinning rust you buy these days. SMR doesn’t work properly with RAID or ZFS.
With both previous sets of drives, ZFS has done its thing and scrubbed/fixed/worked just fine until one of the drives totally died and/or I replaced the pool.
I’m with @anon7678104 on this, rough delivery can permanently reduce the life of a HDD significantly; I’m sure we’ve all heard the yahoo shopping cart horror story about this.
My suggestion, if you can’t get the HDDs locally, would be to order a large number of drives all at once; most reputable sellers will ship larger quantities of HDDs in their original packaging, which is designed to protect the drives better than “normal” packing.
Here’s the original box of some Toshiba MG08s, basically saying that as long as more than 11 drives are shipped at once, they get the superior packing:
My MG08s came with exactly the same packaging. I didn’t get the full box (only 5 drives), but shittons of extra padding. Always good when your retailer and manufacturer know how to do it properly.
slightly offtopic: How are your MG08s doing? Got 6 drives running for 9 months now (around 5.5k hours) without any complaints. I’m really happy with them so far.
Nice. I bet the sellers use the OEM packing when they have extra boxes laying around from handling lots of low-drive-count orders… hopefully that will be the case for my next order.
I’ve only had them ~5 months so far, but everything has been good. I’ve got 12 running currently and am going to purchase some more to expand to 16 soon.
I had been researching what drives to get after my HGSTs started to get long in the tooth, and after the Q1 Backblaze report came out saying the MG08s were superior to other contemporary drives, I pulled the trigger.
Recently, I’ve been a tad lazy and just replace them whenever ZFS read/write errors are posted. I also tried the wipe and resilver, but then within a few days errors would stack up on the same drive.
In the past I used to be on the ball, and any time I pulled a drive I would subject it to a stringent script. It would:
1. Run a short SMART test
2. Run an extended SMART test
3. dd-burn 0’s across the whole drive
4. Do a full badblocks run
5. Run another short SMART test
6. Run another extended SMART test
7. Log everything to a super long, detailed log file.

Over the course of a few years I just built up a git repo with all these detailed logs. Most times I would throw a drive back into circulation and it would most likely throw up issues ~6 months later. Since then, I’ve stopped doing this.
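For anyone who wants to replicate it, here’s a rough sketch of that routine; the device path and log name are placeholders, and it WIPES the drive, so triple-check the target:

```bash
#!/bin/bash
# destructive burn-in for a new or suspect drive - WIPES ALL DATA
# usage: ./burnin.sh /dev/sdX
set -u   # not -e: smartctl exit codes are bit masks, not simple pass/fail
DRIVE="$1"
LOG="burnin-$(basename "$DRIVE")-$(date +%F).log"

{
  smartctl -t short "$DRIVE"; sleep 120; smartctl -a "$DRIVE"
  smartctl -t long "$DRIVE"   # returns immediately; the test itself runs for
                              # many hours on a 16TB drive, so poll with
                              # `smartctl -a` until it reports completion
  dd if=/dev/zero of="$DRIVE" bs=1M status=progress   # zero the whole drive
  badblocks -wsv -b 8192 "$DRIVE"   # full destructive pattern test; the large
                                    # block size keeps big drives under
                                    # badblocks' 32-bit block-count limit
  smartctl -t short "$DRIVE"; sleep 120
  smartctl -t long "$DRIVE"
  smartctl -a "$DRIVE"
} |& tee "$LOG"
```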
Quick example - I’ve got two 16TB drives in the same vdev with two issues. One is failing SMART checks; the other has these errors as recent as 13th July. From the 14th onwards, this issue is not in the log, and ZFS hasn’t raised any warnings (yet!).