[used vs new] How are your 10TB IronWolfs holding up - would you buy 4-year-old HDDs? (ST10000VN0004)

The seller says those came out of a lightly utilized Synology.

Why should I buy a pair of 10TB IronWolfs that have 39k hours on the clock?
Why shouldn’t I?

Unless they’re dirt cheap, it doesn’t make much sense. It’s only worth it if you don’t care about reliability and/or are on a very tight budget, and even then I would still question the reliability.


I love how, no matter what it is or what condition it’s in, sellers always put “lightly used”.

With those drives, the warranty is likely expired by now and they’ve been online 24/7 for almost 4.5 years. Personally, I would pass.

I certainly wouldn’t bet a dollar on 40k-hour drives surviving days of resilvering.


The seller sent me SeaTools screenshots with SMART data - must’ve been an 8-bay Synology - some of them look OK-ish (only 30k hours).

EDIT: no reallocated sectors, and all the spin-retry counters are 0 …meh, kinda OK-ish, if the price is right.
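For anyone curious, here’s roughly the triage I run on a report before buying (just a sketch against smartctl’s `--json` output from smartmontools 7+; the attribute names are the common ATA ones and the device path is hypothetical):

```python
# Sketch: flag the usual pre-purchase red flags in a SMART report.
# Assumes smartmontools >= 7.0 (for --json) and common ATA attribute names.
import json
import subprocess

RED_FLAGS = {"Reallocated_Sector_Ct", "Spin_Retry_Count",
             "Current_Pending_Sector", "Offline_Uncorrectable"}

def check_drive(device: str) -> None:
    out = subprocess.run(["smartctl", "-A", "--json", device],
                         capture_output=True, text=True).stdout
    table = json.loads(out)["ata_smart_attributes"]["table"]
    for attr in table:
        raw = attr["raw"]["value"]
        if attr["name"] in RED_FLAGS and raw > 0:
            print(f"RED FLAG: {attr['name']} = {raw}")
        elif attr["name"] == "Power_On_Hours":
            print(f"Power-on time: {raw} h (~{raw / 8766:.1f} years)")

check_drive("/dev/sda")  # hypothetical device path
```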

And there’s the whole snafu around the almost-not-secret sauce Seagate uses to calculate its “Seek Error Rate” - the raw value isn’t an actual error counter, it’s fancy math packing two numbers into one - so a raw value in the millions might not actually be terrible.
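The usual community interpretation (an assumption - Seagate doesn’t officially document this) is that the 48-bit raw value packs the actual seek-error count into the top 16 bits and the total number of seeks into the bottom 32 bits:

```python
# Sketch: decode a Seagate-style 48-bit Seek Error Rate raw value.
# ASSUMPTION (community reverse-engineering, not official Seagate docs):
# top 16 bits = seek errors, bottom 32 bits = total seeks performed.

def decode_seek_error_rate(raw: int) -> tuple[int, int]:
    errors = raw >> 32          # upper 16 bits: actual seek errors
    seeks = raw & 0xFFFFFFFF    # lower 32 bits: total seeks
    return errors, seeks

# A scary-looking raw value of ~200 million decodes to zero errors:
errors, seeks = decode_seek_error_rate(200_000_000)
print(f"{errors} errors over {seeks} seeks")  # -> 0 errors over 200000000 seeks
```

So a huge raw number can simply mean the drive has done a few hundred million seeks without a single error.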

Well, they want ~150 bucks each, and THAT is definitely too much for 39k-hour drives.

LOL, imma end up shucking external USB drives - I’m downgrading my HPE MicroServer Gen10+ from a 4-wide RAID-Z1 pool of 6TB HGST Ultrastars down to a 2-wide 10TB ZFS mirror. …With the NVMe cache I don’t need that many spindles, and I’d prefer the reduced power consumption, while the redundancy (n+1) remains the same. …EDIT2: the reduced CPU load without the parity calculation is also quite nice to have on an i3-9100F.

Probably makes more sense to get a pair of enterprise drives and some kind of warranty?


Yeah, that’s exactly my ticklish spot - at least one year, or even half a year, of remaining warranty would be really nice.

Side note: one of my 5 Ultrastars shat itself after a few weeks - no worries, I got a refurbished/re-certified replacement. But yeah, I’m not so comfy with a total out-of-warranty gamble - unless it’s really cheap; then my trust in the bathtub curve takes over. :sweat_smile:

Btw: some of those drives had a few “Emergency Retract” incidents logged in their SMART report.

I’m not sure if I’m interpreting this correctly, but it sounds like a near-miss head crash - I assume, without proof, that those drives have accelerometer-fed safety routines built in. (Someone bumped the desk with too much g-force, or something.)

Well, I do that too - you gotta present your item in the best light. However, when I say it, it’s usually technically correct - I’m very proud of my 700+ 100% eBay reputation, and I intend to maintain it. :vulcan_salute: :sweat_smile: …When I sell crap, it gets labelled accordingly, so nobody gets a reason to complain.

Especially with HDDs - I always include a SMART report in the photo gallery.

EDIT: same with SSDs - e.g. I’m not gonna sell my old 120GB SK Hynix drive; it’s got ~75TB written, and I’m not even sure what its TBW rating is …it probably doesn’t have one …it’s effectively an unreliable asset, unless I can find a use case for it, like a 3-wide mirror, for funsies, to test how far it goes (an unlikely scenario).
EDIT2: that’s the point - I don’t want to waste people’s precious time and money. Nobody should buy an SSD like that, so I’m not selling it, at any price. …I might just be typing out what I’m over-thinking …well, you’re welcome …I don’t think this will hurt. :beers: cheers


Yes - don’t create problems for people unless you warn them about the problem first. It’s very odd, for instance, how broken GPUs can fetch nearly as much as the same model in working condition. Almost as if people will pay a premium for a challenge.

There’s a reason many drives come with a 3-year warranty.

Hard drive failure rates follow a bathtub curve, and the steep ramp in failures at the tail end starts around year 4. The OEMs know this. SAN vendors know this. That’s why array maintenance costs skyrocket in year five (it’s typically cheaper to do a forklift upgrade). These vendors have the stats.

Backblaze publishes its drive stats, which also show this.

I would not buy 4-year-old drives unless they’re very cheap, you plan extensive levels of redundancy, and you don’t value the time you’ll spend swapping them out (or, alternatively, you’re using them to store data you don’t care too much about).

What you save on drives you’ll likely pay for in either an increased failure rate/data loss, or the cost incurred to provide the resiliency to counter it (e.g. a beefier or larger system, or more disk shelves, so you can run more redundancy - plus the cooling, noise, and power to suit).

Edit:

Also, top comment on the resilver time above. For years now, drives larger than 1TB haven’t been recommended for RAID arrays with single-drive redundancy, due to the risk of a second disk failing during the rebuild. That was array vendor advice in 2008, with new drives. Sure, that’s for an enterprise array, but 10TB drives will take far longer to rebuild or resilver in the event of a failure and replacement - rough numbers below.
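To put rough numbers on it (a sketch with assumed figures: ~180 MB/s average throughput and the typical consumer-drive URE spec of 1 error per 10^14 bits read; real resilvers on a busy pool run slower than this):

```python
# Sketch: best-case rebuild time, and the odds of hitting an unrecoverable
# read error (URE) while reading the surviving drive(s) during the rebuild.
# ASSUMED numbers: ~180 MB/s average throughput, URE rate of 1e-14 per bit.
import math

def rebuild_hours(capacity_tb: float, mbps: float = 180.0) -> float:
    return capacity_tb * 1e12 / (mbps * 1e6) / 3600

def p_ure_during_rebuild(bytes_read: float, ure_per_bit: float = 1e-14) -> float:
    # P(at least one URE) = 1 - (1 - p)^bits, computed stably via log1p/expm1
    return -math.expm1(bytes_read * 8 * math.log1p(-ure_per_bit))

for tb in (1, 4, 10):
    print(f"{tb:>2} TB: ~{rebuild_hours(tb):4.1f} h best case, "
          f"~{p_ure_during_rebuild(tb * 1e12):.0%} chance of a URE")
```

That’s 15+ hours best case for a 10TB drive, and days on a loaded array. (To be fair, a ZFS resilver only reads allocated blocks, and a URE there typically costs you a file rather than the pool - but the long window of degraded redundancy is the real worry.)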

You would want to run at least RAID6 (or, even better, a triple mirror or RAID-Z3), which, as per my comment above, means more slots in your NAS, etc.

Again - unless you don’t care about the data.

I’ve steered clear of huge drives at home for this reason. I really don’t need to mirror the internet. 4TB of rust is the upper limit of what I personally consider sane for general-purpose RAID sets. YMMV, but just be aware of the rebuild time required, and judge your risk of a multi-drive failure during that window. I’m a bit more paranoid than most, I suspect, as I’ve had enterprise exposure and the shit on my NAS is IMPORTANT to me.

Need more space? Buy more spindles.


Meh. User choice. Personally, I’d rather have a RAID 10 array of multiple 1TB drives totaling that same capacity. I’m of the opinion that having that much capacity on a single drive is a recipe for disaster - coz baby, when it’s gone, it’s gone. But with the redundancy and striping that come with RAID 10, you can almost always save your data even when multiple drives fail, as long as no mirror pair loses both of its members.

I have a simple RAID 10 array using multiple very, very old WD VelociRaptors and still haven’t had an issue with them. I’m pretty sure the drives are well over 12 years old by now. They have served me well.

Again: user preference. But I don’t recommend buying large-capacity drives that way when a good array can give you the same capacity, and in the event of a failure you simply replace a much smaller drive at a much lower cost, without risking data loss so much. Especially these days, with so many SMR mechanical drives on the market - those rusty old IronWolves could lose their temper real fast. (Just my opinion.)


I don’t think a 10TB drive is 10 times more likely to break than a 1TB one - probably more like twice as likely.

If you have 10x 1TB drives in your array, you have, say, 5 times the likelihood that some drive will break compared to the chap with a single 10TB. However, you don’t lose any data - you simply replace the drive. If he buys a second 10TB, he doubles his chance that a drive will break, but now he’s protected too.

Now you both have about 10TB of storage, but you have 2.5 times the chance of a drive failure compared to the chap with 2 drives.

Another difference: when he replaces his broken 10TB and you replace your broken 1TB, you’re thrashing 9 surviving drives to rebuild 1TB, and he’s thrashing 1 drive to copy 10TB. If either of you has a second failure during that window, you’ve both lost everything.
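To put toy numbers on that (entirely assumed figures: 2% annual failure odds for a 1TB drive, twice that for a 10TB drive, failures treated as independent):

```python
# Sketch: toy comparison of "many small drives" vs "two big mirrored drives".
# ASSUMED inputs: a 1TB drive has 2% annual failure odds, and a 10TB drive
# is taken to be twice as likely to fail (as argued above). Failures are
# treated as independent, which real-world stats suggest is optimistic.
p1 = 0.02          # assumed annual failure probability, 1TB drive
p10 = 2 * p1       # assumed annual failure probability, 10TB drive

# Chance that at least one drive in the setup fails within a year:
p_any_of_ten = 1 - (1 - p1) ** 10    # 10x 1TB array
p_any_of_two = 1 - (1 - p10) ** 2    # 2x 10TB mirror

print(f"10x 1TB: {p_any_of_ten:.1%} chance some drive fails")  # ~18.3%
print(f"2x 10TB: {p_any_of_two:.1%} chance some drive fails")  # ~7.8%
print(f"ratio:   {p_any_of_ten / p_any_of_two:.1f}x")          # ~2.3x
```

That lands in the same ballpark as the 2.5x linear estimate - and in both setups the first failure costs no data; the race is what happens during the rebuild window.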

It really is swings and roundabouts.

My array is a bit like RAID 10 - it’s 8 drives in two striped RAID-Z1 vdevs (RAID-Z1 + RAID-Z1).

Yeah, kind of like an XJ12 - the load is distributed more evenly for smoother, more efficient performance. But like I stated, it really is user preference. Plus, it’s a lot cheaper to replace one 1TB drive than one 10TB drive. As for the thrashing part of it, 10TB is a lot of work for one actuator to manage. :::sigh::: Mechanical drives are such a gamble. Have you heard of the one they developed recently that can provide the data transfer speeds of a standard SSD?
