Testing old harddrives in Linux (pre-failure)

I have an old machine that has a number of hard drives in it, all 500 GB or less. The SMART status shows they’re all terribly old and are “pre-failure”. Is there any way to stress test the drives to weed out the ones that are truly bad? They’re in a zpool right now which went degraded a few months ago, presumably because of hardware issues. I’d love to prune out the worst of them. Any suggestions?

Well…running a resilver on that pool will stress them quite a bit. Other than that, you can scrub the pool 24/7 (hourly, or less often depending on how long a scrub job takes) or run I/O benchmarks like fio. Or both. A scrub is mostly sequential reads; pair that with a recurring fio job doing random writes and those drives will have a hell of a time.
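A minimal sketch of that scrub-plus-fio combo. The pool name `tank` and the device `/dev/sdX` are placeholders for your setup, and the commands are only echoed here (not executed) because the fio job writes directly to the raw device and destroys its contents:

```shell
#!/bin/sh
# Sketch: recurring scrub plus a random-write fio job.
# "tank" and /dev/sdX are placeholders -- substitute your pool/device.
# The commands are echoed rather than run, so this is safe to execute as-is.

# Kick this off hourly from cron to keep the drives busy with reads:
SCRUB_CMD="zpool scrub tank"

# 4K random writes, queue depth 16, for one hour. --direct=1 bypasses the
# page cache so the drive itself takes the load. WIPES data on the device.
FIO_CMD="fio --name=stress --filename=/dev/sdX --rw=randwrite --bs=4k --iodepth=16 --ioengine=libaio --runtime=3600 --time_based --direct=1"

echo "scrub: $SCRUB_CMD"
echo "fio:   $FIO_CMD"
```

Running the fio job against a drive that's still in the pool will obviously wreck it, so pull the drive (or use a spare) before pointing fio at it.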

If you have a case where you can keep the drives above 40-45°C at all times, that helps too.

But HDDs don’t play by the rules in my experience. They fail whenever they feel like it, not when the user is expecting/demanding it. So brute-forcing horrible conditions might lead you to safe assumptions, only for a drive to fail a month after you considered everything to be running well again.

In the end these are <=500GB drives and considered e-waste by today’s standards. But even old/bad drives make good boot devices.


I use badblocks to test drives. You can use -sw to write test patterns to the whole drive and read them back to find bad sectors. Just keep in mind it WILL erase the drive.
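A sketch of that destructive pass, with `/dev/sdX` as a placeholder device. Since -w erases everything, the command is only echoed here rather than executed:

```shell
#!/bin/sh
# Sketch: destructive badblocks pass on a placeholder device /dev/sdX.
# -w writes test patterns across the drive and reads each back;
# -s shows progress; -b 4096 matches the typical physical sector size.
# ERASES the drive, so echo the command instead of running it here.
BB_CMD="badblocks -sw -b 4096 /dev/sdX"
echo "would run: $BB_CMD"
```

Only run it on a drive you've already evacuated from the pool. Afterwards it's worth re-checking SMART (e.g. with smartctl) to see whether the reallocated-sector counts grew during the test.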

