Should I trust this HDD



I haven’t played roulette with anything. I have two backups of my data, and therefore I stand by my word: buy whatever is on sale - AND BACK UP YOUR DATA.



I’d say it’s the same issue for all HDDs, while an SSD would be the solution for most - the reliability of SSDs is much better. (Though constant r/w will wear one out quite fast unless it uses better cells.)


If I wasn’t stoned rn I’d respond to a lot of the Seagate stuff, but I wanna say: look closely at SSD sales. I just got a 500GB WD Black NVMe SSD for my laptop at almost 50% off on sale. Fuggin steal.

Could that be a move for you?? It’d be reliable, I’d think at least.


I think Backblaze mentioned somewhere that their use case does actually involve aggressive spin-downs of their drives, so the comparison might be more apt than you expect.


“Aggressive spin downs” still sounds very different from typical end user backup workload which is maybe one power cycle per day :smiley:

Point being: different workload, different environmental conditions, etc… different reliability.

Backblaze’s metrics are interesting, but we need to be careful not to take them out of context and apply them to an entirely different scenario.


OK, sub the roulette analogy for “just because you drove drunk and didn’t get arrested/die/kill someone doesn’t make it a good idea”.

As to the “Backblaze numbers aren’t relevant” argument: is there another large-scale test of drives out there with failure rates by brand? Is there any evidence that their numbers aren’t an accurate representation of failure rates? All I see is a few lines mentioning how stressful the data center is, and what’s wrong with that? “The little donut of death”, Fire Strike, AIDA64, etc. aren’t real world, and we use them anyway, because if a CPU/GPU can’t handle the stressful benchmark then it will likely have trouble with games and real-world applications.
People always say “this is more stressful, and the extra stress changes the numbers” - please give me a source. Show me this is not an assumption and that there is evidence, please.


What isn’t a good idea is to trust your data to only one HDD - no matter what brand it is. Even the most magical HDDs with the best reputation fail.
I’m not arguing with Backblaze’s data. But they keep buying Seagates, because they are cheap, and Backblaze has redundancy and backups.
It doesn’t matter what brand of HDD you pick - as long as you have multiple copies of your data.


Using an NVMe drive for an offline backup would not be very effective for me, as my only slot is in use and sits under my GPU, so it’s not very accessible in the first place.

I purchased the WD drive on New Year’s Day at a better price ($53 vs $55) than the Seagate drive anyway. Newegg shipped it out Monday of this week; I should have it by Saturday.

When I was comparing the drives by the spec sheet, I was surprised by the queue-depth-1 performance differences given the RPM difference.

The main thing is I do not want to have to replace the backup drive because it failed.


Doesn’t have to be NVMe, I just meant an SSD in general.


Currently using a 500GB WD Blue, a 1TB WD Blue, and a 120GB Corsair NVMe SSD.
If I were to get a large SSD, it would replace at least one of my HDDs.
Using a large SSD for offline backup feels like too much money for little usage time, though it would be nice to have, as the initial backup process would take minutes versus hours.


Not saying it is MORE or LESS stressful.

Am saying the workload and usage patterns are different, and thus results will differ.

Backblaze will give you some general info on drive reliability under those types of conditions, but applying those metrics to occasional desktop use and claiming that Seagate is shit because they have a failure rate of, say, 18% over 3 years in a Backblaze storage pod is… questionable. Yet that is one of the reasons given to avoid Seagate, as per @orgake’s post above.

I.e., sure - look at the Backblaze stats, but unless your usage pattern is similar, take any suggestion that Seagate drives are not good enough for a desktop backup drive with a massive trailer load of salt.

It’s akin to saying that because 15% of construction workers will get skin cancer, 15% of office workers will too - without taking working conditions into account.

Also, at the end of the day: every brand has a failure rate. Plan for failure. If you’re trying to buy a drive that will never fail, your strategy is doomed from the outset. It may take 5-10 years to fail, it may take 10 hours, days, months - whatever. Assume every drive will fail and plan accordingly.

As per the study, Backblaze loves Seagates despite the failure rate, because of the price. Whether to buy one comes down to value…


Not talking about backups at all.
They keep buying them, but some models they don’t buy anymore after high failure rates. And it matters, because whether a drive lasts 5 years or 2 years is a big difference in value.


You’ve heard “extraordinary claims require extraordinary evidence”, right? I’m not asking for extraordinary, just evidence. You say the data from Backblaze doesn’t apply - give me evidence. You question it and strawman with your cancer comparison, but your whole argument is “I don’t think this is true”, without anything to convince me or anyone else. Give me evidence that Backblaze’s numbers don’t apply here. Give me a good argument with facts behind it…

And if you want to talk value: something that will reliably last 5+ years versus something that has a good chance of not making it to 4 is a big value difference when the price difference is what, 5%? Maybe 10% at worst.
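To put that value gap in numbers, here is a rough cost-per-usable-year sketch - the prices and lifespans below are made-up illustrations, not real drive data:

```python
# Cost per expected year of service; all inputs are illustrative assumptions.
def cost_per_year(price_usd, expected_life_years):
    return price_usd / expected_life_years

shorter = cost_per_year(53.0, 3.5)  # ~5% cheaper drive, shorter expected life
longer = cost_per_year(55.0, 5.0)   # slightly pricier drive, longer expected life
print(f"${shorter:.2f}/yr vs ${longer:.2f}/yr")  # -> $15.14/yr vs $11.00/yr
```

Under these assumptions the slightly pricier drive is considerably cheaper per year of service.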


Give me evidence that it does?

The test conditions are entirely different.


I think NVMe is much better for SSDs because of the queue depth, right? I assume the potential for optimizing writes to reduce wear increases with greater queue depth.


The Backblaze report is the evidence. You are saying it does not apply because… reasons?


Depends on the cell setup. And I know nothing about cell setups, other than that the Intel 660p will literally burn itself out in a week because its cell setup is so shit.


I think NVMe has practically no effect on wear, and AHCI has a queue as well.
I’m no expert, but the point of the queue in AHCI was to allow the HDD to rearrange requests so as to minimize head movements, because head movements are slow.
But that feature is worthless for SSDs, and having it creates overhead. In fact, SSDs are so fast that the problem is now on the other end, and the greater queue depth in NVMe is there to take load off the CPU, so that the SSD doesn’t constantly interrupt the CPU.
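For a sense of scale, these are the commonly quoted queuing limits from the two specs (exact maximums vary by controller, so treat them as ballpark figures):

```python
# Commonly quoted command-queuing limits: AHCI (NCQ) vs NVMe.
ahci_queues, ahci_depth = 1, 32          # one queue, 32 commands per the AHCI spec
nvme_queues, nvme_depth = 65_535, 65_536 # up to ~64K queues x ~64K entries per the NVMe spec

print(ahci_queues * ahci_depth)   # -> 32 commands in flight, max
print(nvme_queues * nvme_depth)   # -> 4294901760
```

The point is not that any SSD ever needs billions of outstanding commands; it is that NVMe lets each CPU core have its own deep queue instead of everything funneling through one 32-slot queue.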

In any case, I would suggest you do not worry about write wear on modern SSDs, because a normal user is highly unlikely to ever wear one out. I highly recommend this article from The Tech Report: The SSD Endurance Experiment: They’re all dead,
where SSDs showed actual measured write endurance of 100 TB to a freaking 2.5 PB! :exploding_head:
You would have to rewrite the entire SSD every day for at least a year to even approach any point of concern.
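As a back-of-envelope check on that claim - the 100 TB and 2.5 PB figures are the low and high ends from the article, and the 250 GB drive size is an illustrative assumption:

```python
# Years of writing the drive's full capacity every single day before
# reaching a given endurance figure. Drive size is an assumed example;
# 100 TB and 2.5 PB are the low/high results from the endurance test.
def years_to_wear_out(drive_tb, endurance_tb, full_rewrites_per_day=1):
    return endurance_tb / (drive_tb * full_rewrites_per_day) / 365

print(round(years_to_wear_out(0.25, 100), 1))   # -> 1.1 years at the 100 TB floor
print(round(years_to_wear_out(0.25, 2500), 1))  # -> 27.4 years at the 2.5 PB outlier
```

And nobody rewrites their entire drive daily; at a more realistic 20 GB/day, even the 100 TB floor works out to well over a decade.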


Evidently you have never heard of experimental controls.

Again, comparing Backblaze to a desktop workload is like doing an experiment where you stick two people outside with no water:

  • one of them in the sahara
  • one of them in say, Sweden

and expecting the results of sun exposure to be the same.


OK, so a scientific control requires two test groups. You can’t just assert the result for the second group without running the test. Show me group two, or evidence that it will be different, other than your assumptions.

Trying and failing to find actual evidence to support your theory will be a better persuader than I can be.