Any info on Western Digital (WD) MTBF on Warranty Replacement drives?

Hi all-

Tried googling “WD MTBF replacement drives” but found no info on this. The drives I get back from Thailand always have a ‘white’ sticker, not the red/white sticker (at least for the Red and Red Pro NAS drives) that comes on their retail drives.

I’m assuming these are simply refurbished drives, and I do recall that one of these warranty replacement drives previously died much sooner than a typical retail drive in one of my Synology boxes.

So my concern is that if I throw in a warranty replacement when my ZFS (FreeNAS) array needs resilvering, it may die during the resilver process, with potentially much worse consequences on the Synology RAID boxes, as they are older.

There doesn’t seem to be much info on the MTBF or performance standards of these warranty replacement drives.

Cheers M.

I presume they intentionally keep it vague, as they just use whatever passed their refurb validation. Don’t they issue the refurb with a new one-year warranty, regardless of how much warranty the original had left?
The refurb would presumably have less remaining life than a fresh drive, but performance should be about the same?

When I had to warranty a WD drive 3 or 4 years ago, I got a refurb back with the remaining warranty from the original purchase. That drive is still in use today with no issues. The original drive had a 3-year warranty as I recall; I bought it sometime in 2014.

It really depends on the series or model of the drive. I’ve known users/workplaces who received either a refurb or a new drive based on their supply channel. Failure rates on WD “Factory Refurbs” are no different from a new drive’s bathtub curve; some “Factory Refurbs” are QA-tested drives that can no longer be sold as “new,” while other “refurbs” could be a mix of customer/retail returns with a potentially shorter MTBF. Generally you can’t verify a refurb drive’s history, as some OEMs reset/recalculate the SMART data, including the power-on hours.
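To illustrate that last point, here’s a minimal sketch of reading a drive’s power-on hours via smartctl (assumes Linux with smartmontools installed and root privileges; `/dev/sda` is a placeholder for whatever device your replacement shows up as). Just remember that a near-zero reading on a refurb may only mean the counter was reset:

```python
# Minimal sketch: pull the Power_On_Hours SMART attribute with smartctl.
# Assumes smartmontools is installed and this runs with root privileges.
import subprocess

def power_on_hours(device: str) -> int | None:
    """Return the raw Power_On_Hours value, or None if not found."""
    out = subprocess.run(
        ["smartctl", "-A", device],
        capture_output=True, text=True, check=False,
    ).stdout
    for line in out.splitlines():
        fields = line.split()
        # ATA attribute rows look like:
        # ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
        if len(fields) >= 10 and fields[1] == "Power_On_Hours":
            # Some firmwares report the raw value as e.g. "12345h+32m".
            return int(fields[9].split("h")[0])
    return None

if __name__ == "__main__":
    print("Power_On_Hours:", power_on_hours("/dev/sda"))  # placeholder device
    # A refurb reporting near-zero hours may simply have had SMART reset,
    # so treat this as a hint, not proof of actual wear.
```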

Most companies should be testing their refurbs. I’ve heard of fewer WD refurbs failing in the typical 3-year time frame; Seagate refurbs tend to follow a bathtub curve and either fail within 3-6 months or last 6 years :open_mouth: (their 3TB HDD series is proof of this: a high number were bad, either from firmware or QA issues, with replacements being equally questionable. If a drive from that dark era managed to last 2 years, it was pure luck.)

It’s always a good idea to do a 100% write/read cycle on new drives. If a drive is going to die early, that often shows up as faults like reallocated sectors, which gives you a hint. The only problem is that it takes so long on large drives.
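For anyone who wants to script that, here’s a minimal sketch of a full write/read pass using badblocks, followed by a SMART check for early-failure hints (assumes Linux with badblocks and smartmontools installed, run as root; `/dev/sdX` is a placeholder, and the write-mode test destroys all data on the drive):

```python
# Minimal sketch of a destructive burn-in: full write+verify pass, then a
# SMART attribute check. Assumes Linux, badblocks, smartmontools, root.
import subprocess

DEVICE = "/dev/sdX"  # placeholder; the -w test below WIPES this drive

# Write test patterns to every sector and read them back
# (-w = destructive write mode, -s = show progress, -v = verbose,
#  -b 4096 = 4 KiB blocks). Expect this to take many hours on large drives.
subprocess.run(["badblocks", "-wsv", "-b", "4096", DEVICE], check=True)

# Afterwards, look for the attributes that hint at early failure.
report = subprocess.run(
    ["smartctl", "-A", DEVICE], capture_output=True, text=True
).stdout
for attr in ("Reallocated_Sector_Ct", "Current_Pending_Sector",
             "Offline_Uncorrectable"):
    for line in report.splitlines():
        if attr in line:
            print(line)  # non-zero raw values here are a bad sign
```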
