Shucking a Seagate - a performance horror story

I had a 5TB Seagate Backup Plus drive that performed moderately well, until you did a full system backup. Then the sucker would throttle hard after a few minutes’ worth of writing data. It would take 12 hours to back up 1TB of data. So I decided to shuck the thing on the long shot that it was a USB 3 controller issue. Once I got it hooked up to my MSI Tomahawk’s onboard SATA controller, I saw numbers like these:

Obviously, these are execrable transfer speeds. What on earth could be causing them? I know, I know, it’s a bloody Seagate drive, but it should still be able to surpass a measly 100 MB/s, shouldn’t it?

It passes a SMART test with flying colors. Model number is ST5000DM000.

Could be SMR, try using writes larger than 1M
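
For example, something like this on Linux would show whether bigger sequential writes behave differently (a rough sketch, not a proper benchmark; the mount path and file name are placeholders):

```
# WARNING: writes a 4 GiB test file to the mounted drive; adjust the path first.
# 1 MiB blocks, bypassing the page cache:
dd if=/dev/zero of=/mnt/backup/testfile bs=1M count=4096 oflag=direct status=progress

# 16 MiB blocks for comparison:
dd if=/dev/zero of=/mnt/backup/testfile bs=16M count=256 oflag=direct status=progress
```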

2 Likes

Page 13 of the manual
Lists the recording method as “TGMR”
I don’t know if this is DM-SMR, but there is an article suggesting those drives are DM-SMR. It’s a single source, though:

tunneling giant magnetoresistance (TGMR)

None of that means anything to me. https://www.researchgate.net/publication/290575521_Structure_and_performance_of_TGMR_heads_for_next-generation_HDDs

But yes, the thinking seems to be TGMR = SMR.

2 Likes

pretty much any recent consumer drive over 2TB is likely SMR
some 2TB and maybe smaller drives are SMR

SMR is a performance disaster. any rewrite needs a RAID 5-style read-modify-write on an entire zone (within a single drive). they’re basically write-once unless you’re willing to deal with totally garbage performance. and they basically need OS support to work properly, which isn’t included in most RAID software. so they shit the bed and fail to rebuild when in a RAID set.

imho hard drives are dead to me, specifically because of these issues - smr is everywhere and it’s difficult to determine whether your drives are using it or not. you could maybe go for enterprise rust, but ssd isn’t much more money, with way more performance and likely better reliability.

sucks but… time to go to ssd it would appear, with all the bs games hard disk OEMs are playing. i’ve gone cloud for home archive too because i don’t want to deal with this shit.

@thro I seem to recall that current WD Red drives are guaranteed to be non-SMR. Don’t quote me on that though.

So I found an old PCIe 2.0 SAS/RAID controller, two 1TB WD Blues and built a RAID 0 array. I know, I like living dangerously. Buying a 2TB SSD is just too much of a financial drain ATM.

I’ve been looking at some 12TB Toshiba, 16TB Seagate Exos/IronWolf Pro, and 18TB WD Gold drives (all of them are low $/TB). They’re all CMR with a 3-5 year warranty.

WD Red drives are hit and miss for any given size.

Do you mean price, or CMR vs SMR? If it is WD Red Plus or Pro, they are CMR; if it is not Plus or Pro, it is SMR. I also have not had bad luck with shucking my WD drives. I have shucked like 10 and all are CMR.

@imrazor
Try a diagnostic tool. See if there are bad sectors.
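
On Linux, something along these lines would run a long SMART self-test plus a read-only surface scan (sdX is a placeholder for the actual drive; the badblocks read-only scan is non-destructive but takes many hours on a 5TB disk):

```
# Kick off an extended (long) SMART self-test, then check the self-test log later:
sudo smartctl -t long /dev/sdX
sudo smartctl -a /dev/sdX

# Read-only surface scan (non-destructive, slow):
sudo badblocks -sv /dev/sdX
```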

Not according to SMART. Haven’t tried a surface scan yet.

Another way to see if it is SMR is to check it for TRIM support. Not sure how to do that on Windows though.

Not all SMR drives have it, but many do. They use methods involving CMR cache areas and flash-SSD-style block translation, which means that TRIM makes sense for them. They don’t need to rewrite an SMR zone if the data in it has been TRIMmed.

So on Linux, anyway, running fstrim -a (or fstrim with other arguments) can vastly improve the performance of a DM-SMR drive.
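
If you want to check whether the drive advertises TRIM at all, something like this should work on Linux (sdX is a placeholder):

```
# Non-zero DISC-GRAN / DISC-MAX means the kernel sees discard (TRIM) support:
lsblk --discard /dev/sdX

# For SATA drives, hdparm lists it as "Data Set Management TRIM supported":
sudo hdparm -I /dev/sdX | grep -i trim

# Trim all mounted filesystems that support it, verbosely:
sudo fstrim -av
```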

By the way, I keep hearing the nonsense that SMR drive performance is bad for RAID. It isn’t. As long as your RAID rebuild goes from block 1 to the end of the drive without randomly bouncing around, the performance is about the same as a CMR drive’s.

1 Like

That’s more like it…

The solution? A good old-fashioned defrag. Need to stop playing with SSDs so often…
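
For anyone who wants to do the same thing without the Optimize Drives GUI, a PowerShell sketch (the drive letter is just an example):

```
# Analyze fragmentation first, then defragment (Windows 8 / Server 2012 and later)
Optimize-Volume -DriveLetter D -Analyze -Verbose
Optimize-Volume -DriveLetter D -Defrag -Verbose
```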

Well, duh.

2 Likes

Yeah, unfortunately that requires a purchase first.

I’m done with the shenanigans. I’ve downsized my data to fit on SSD and back up to the cloud.

I get that many can’t/won’t take that step, and it sucks if you’re in that position, but as far as shady hard drive shit is concerned, I’m done.

:frowning_face:

1 Like

Not sure if you’ve seen the reports of rebuild failure with ZFS.

It’s bad for ZFS RAIDZ due to the read-modify-write at the drive level, which causes them to fail during rebuild.

Also, when SMR was first a thing, there was a discussion between one of the vendors and the ZFS team, explaining that OS-level support was needed to handle these drives properly. The ZFS guys were at a loss as to how to make ZFS accommodate them.

It’s on YouTube, if you feel like looking it up (I don’t have the link handy; at work).

1 Like

I have seen all that.

ZFS rebuilds drives by retracing the original tree structure. It does NOT start at block 1 and continue to the end. It’s the very definition of “randomly bouncing around.”

I believe that more recent versions of ZFS have changed that so that scrub and rebuild are more linear, but I’m not sure of the status on that.

Only for mirror vdevs, for now.

Your drive was 40% full, now it’s 25% full… fragmentation? Grumble !!!@#$?!*/

How?

Not sure. I’ve seen that kind of weirdness before with SSDs (i.e., deleted files not being accounted for in the volume size), but not hard drives. I tried a lot of different things to improve performance. It’s possible I deleted some files in the process. But it wasn’t until I “optimized my drive” in Windows that performance improved.
