Mechanical hard drives hardly worth buying


XPS is hardly what the standard consumer base is using, and is hardly the tier I would be referring to. Obviously your mileage may vary but normal people don’t spend more than like $800 on a laptop for normal use. That’s my experience with like 90% of people I’ve dealt with.

Data recovery is always expensive. But data retention on a chip vs. a platter that's on its way out usually favors the platter. Plenty of platters with failing controllers and bad sectors have still let me get a good pull, but as above, your mileage may vary.


You were talking about super thin laptops with too much CPU in them. That’s pretty much the subtitle of the XPS 15.


I know most of you are gonna cringe at this, but I have an HDD that is damn near 17 years old at this point. It's one of the first 1 TB WD drives. And it's STILL going strong. Yes, load times are vastly slower than on my 8-year-old Corsair SSD, but damn, I've seen more errors on my SSD than my 17-year-old hard drive has ever had. And it's had data written to it nearly constantly as a media drive for a solid 10 years, because everyone in my household watches the media on it.

I’ve never had a problem with this drive and will probably end up keeping it even longer. When it does die, though, I’m moving to a 4 or 5 TB drive. I hope I get the same longevity out of that drive as I’ve had out of this one.


hard drives still have their place. as mentioned: archive storage.

for me though my computers are all SSD only now.

2 reasons: price per GB, and high-bandwidth internet making streaming generally less hassle than content hoarding.

my nas has a bunch of old media (and backups) on it but i haven’t put anything new on it for some time. netflix and the like (and sufficiently fast internet) have made it simply not worth my time anymore.

now that i am not attempting to hoard a whole internet’s worth of media on the off chance that i may want to watch it some day, my local storage requirements have dropped significantly.

as far as limited write cycles for SSDs go… i’d suggest doing some research. “you won’t hit them” is the TLDR version, unless you’re running a highly IO-bound server environment. and if you are hitting them, a hard drive would likely never manage those numbers either, due to being so much slower, and it would probably die of mechanical failure in the meantime anyway.
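the TLDR checks out with simple arithmetic. a sketch, assuming an illustrative 600 TBW endurance rating and a heavy 50 GB/day desktop workload (both made-up round numbers, not specs for any particular drive):

```python
# Rough SSD endurance estimate. Both figures below are illustrative
# assumptions, not the spec sheet of any real model.
RATED_TBW = 600            # terabytes written the drive is rated for
DAILY_WRITES_GB = 50       # a heavy end-user workload

days_to_exhaust = RATED_TBW * 1000 / DAILY_WRITES_GB
years_to_exhaust = days_to_exhaust / 365

print(f"{years_to_exhaust:.1f} years")  # ≈ 32.9 years
```

even with deliberately pessimistic numbers, the rated endurance outlives any plausible consumer ownership period.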


Before I got my first SSD I did extensive research on the matter. It feels like you didn’t read my post fully.

While there is variance between every SSD, sooner or later you will reach the write limit if you intend to keep SSDs as long as a good number of people keep their HDDs.

There is a reason why SLC, and only marginally MLC, is used in enterprise SSDs where data has to be written constantly: SLC has the best write-cycle endurance available.

As said in my other post, lithography also plays a role in diminishing write-cycles: the smaller the process node (nm), the fewer write-cycles there are, simply because of physics (e.g. bits flipping due to the close proximity of neighbouring cells, and electricity activating adjacent cells). This also plays a role in the higher-bit-count cells, because they need more power to activate when writing. Stacked NAND is the next place where endurance drops a bit, for similar reasons.

And sure, in the end those are only “bit cells”, and one or a few of them dying isn’t much of a problem because of over-provisioning and reserve cells, but the cells are increasing in bits (TLC, QLC), which means more data is lost, faster, when cells do die.

Also, if we’re talking about components failing before reaching those limits, then the controller on an SSD would be the first. And that is also a reason you can pretty much never get the data back from an SSD after a crash: only the controller knows where a specific piece of a file physically sits, because of wear-leveling distribution, for example.

The amount of data that has to be written only ever increases, sadly.

If we are never supposed to reach those limits, why do things like over-provisioning, wear-leveling, ECC, reserve cells and so on even exist?
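Those mechanisms are what make the limits survivable in the first place. A toy sketch of the wear-leveling idea, using a simple round-robin remap (far simpler than any real controller, purely for illustration):

```python
# Toy wear-leveling demo: without it, repeatedly updating one "hot"
# logical block burns out a single physical block; with a round-robin
# remap, the same writes are spread evenly across all blocks.
NUM_BLOCKS = 8
WRITES = 8000

# No wear-leveling: every update of logical block 0 hits physical block 0.
naive_wear = [0] * NUM_BLOCKS
for _ in range(WRITES):
    naive_wear[0] += 1

# Round-robin wear-leveling: each write lands on the next physical block.
leveled_wear = [0] * NUM_BLOCKS
for i in range(WRITES):
    leveled_wear[i % NUM_BLOCKS] += 1

print(max(naive_wear))    # 8000 -- one block takes all the wear
print(max(leveled_wear))  # 1000 -- wear spread evenly
```

the worst-case wear on any single block drops by a factor of the block count, which is exactly why controllers bother doing it.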

Also NAND likes it warm and cozy while the controller doesn’t, can’t really cool the one thing and keep the other warm :stuck_out_tongue: .

I’m not saying HDDs are in any way better than SSDs, nor the opposite, just that flaws exist in every medium and you should know those limits and not think “I/You will never reach those limits, nothing to worry about”; that is dangerous thinking if you care about your data.

Edit: That escalated quickly :smiley:


the thing with hdd’s is that although the data is encoded to the disk as magnetic pulses, so is the format scheme. a periodic reformat of the disk while clearing out old data refreshes and renews the magnetic “lines” of the drive’s format.
data corruption on an hdd is often not so much the data itself as the format “lines” blurring to unreadability.
I too have some very old hdd’s still working (a Quantum 5-inch drive with DOS 6.22 on it, 2.5 gig in size) (sounds like a Harley when spinning up) :laughing: mostly old games and plotter software on it.
and yet I’ve seen newer hdd’s fail a lot too.
so far the most stable ones I’ve seen have been from 250 gig to 650 gig in size.
I haven’t yet gotten ssd’s to any extent (except the wife’s laptop has one)


Real world:

Contemporary 250-500 GB SSDs have been endurance tested, real world, to 1-2+ PB of writes before failure. The Tech Report did that by punishing them with full-speed re-writes all day, every day, for many months, which an end user simply won’t be doing. This was with drives such as the 840 EVO from a few years back.

If you’ve ever managed to accumulate that quantity of writes on a hard drive, i call bullshit - they simply aren’t really fast enough to achieve those quantities - and especially not under an end user workload.
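The “not fast enough” point is easy to sanity-check. A sketch assuming illustrative sustained rates of 150 MB/s for a consumer HDD and 500 MB/s for a SATA SSD (round numbers, not any specific model):

```python
# How long would it take to push 1 PB through a drive at full,
# uninterrupted sequential speed? Rates below are rough assumptions.
PETABYTE_MB = 1_000_000_000  # 1 PB expressed in MB (10^9 MB)

for name, mb_per_s in [("HDD @ 150 MB/s", 150), ("SATA SSD @ 500 MB/s", 500)]:
    days = PETABYTE_MB / mb_per_s / 86_400  # seconds per day
    print(f"{name}: {days:.0f} days of nonstop writing")
```

even at a best-case sequential rate the HDD needs on the order of 77 days of literally continuous writing, and real random-access workloads run far below that rate.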

Also - UNLIKE a spinning disk, modern SSDs give you plenty of advance warning that you are approaching the write limit.

Sure, they can fail due to other reasons, but so can spinning disks. Everything has a failure rate. Plan appropriately.


Reminder that HAMR and MAMR are breathing new life into the density arms race:


In some places HDDs are still a viable option.
I myself like to do archival storage on Blu-ray XL discs, but to keep the stuff I need on short notice spinning, I have my HDDs.
In the hall where I work we also keep HDDs for storage (and tape for archival).


It still depends on the use-case scenarios.
I mean, SSDs are faster, but not more reliable.
So large data centers will still use mechanical storage drives for a couple of years.
HDDs are also cheaper and have larger capacities.


It really feels like you didn’t read through either my posts or the article, because both prove my points.

Almost all of the drives in the test were MLC drives (two-bit cells) with, generally, more endurance than TLC; as also seen in the test, the 840 EVO was the only TLC drive.
That it survived as long as it did and was the third to die is nothing really unexpected; as I said, variation is also a thing within the same generation.

You can reach those numbers when you’re like most people and keep your storage devices for long periods of time, switching them from system to system, for example (because they’re not broken and you can save some money that way).
I still have some 1 TB HDDs from 9+ years ago, and those are almost constantly written to (full backups which replace the old ones). But here there is also a wide variance between makes and models (platter size, platter count, etc.).

And the main concern simply is that with more bits per cell (which have to be fully re-written when just one bit changes, and which need more power to write), smaller nodes (the physical distances between cells shrink), and more and larger data that needs writing (games take more space, updates get more frequent, logs need constant writing on Linux, Windows and other OSes, streaming still buffers to the storage medium most of the time), we get a shorter life-span out of those drives with every new generation, not a longer one.
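The rewrite overhead here is real: NAND cannot be overwritten in place, so even a tiny update costs a whole page write. A sketch assuming a 16 KB page size (a common figure, not tied to any specific chip):

```python
# Changing even a few bytes forces the controller to read, modify and
# rewrite an entire NAND page. The ratio is the write amplification
# for that single update.
PAGE_SIZE = 16 * 1024   # assumed NAND page size in bytes
UPDATE_SIZE = 4         # bytes the application actually changed

amplification = PAGE_SIZE / UPDATE_SIZE
print(f"write amplification: {amplification:.0f}x")  # 4096x
```

in practice controllers batch small writes to soften this, but the underlying page-granularity constraint is why small random writes age NAND disproportionately.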

That is only the case if you’re watching out for it, because when an SSD fails, most are simply no longer recognized after a reboot and you have almost no chance of data recovery.

That I totally agree with.

And again, this isn’t to say that HDDs are better than SSDs or the reverse, just that both still have a place in the various environments out there (some more than others).

God friggin damnit, I think I lost my points a couple of times while writing this…
Edit: Found one again :smiley:


I still very much doubt it. Even the worst drive in that test, a 250 GB 840 EVO (a TLC drive), did almost a petabyte before errors.

Unless I fucked up the math, that’s about 11 full, non-compressible re-writes per day, every day, for a full year, on a 250 GB drive. Larger drives scale their maximum writes up with capacity, since there is more NAND to distribute the load over.
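Running those numbers explicitly:

```python
# ~1 PB of total writes on a 250 GB drive, spread over one year.
DRIVE_GB = 250
TOTAL_WRITES_GB = 1_000_000   # ~1 PB

full_rewrites = TOTAL_WRITES_GB / DRIVE_GB   # 4000 complete rewrites
per_day = full_rewrites / 365                # ~11 rewrites per day
gb_per_day = per_day * DRIVE_GB              # ~2.7 TB written per day

print(f"{per_day:.0f} full rewrites/day, {gb_per_day / 1000:.1f} TB/day")
```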


  1. there is no fucking way you’d do that on a spinning disk, even over 5-7 years. I don’t believe you’d ever sustain 2-3 TB of writes per day to a contemporary spinning disk with any actual workload. They’re too slow.
  2. there is no way you’re going to be doing that on an SSD as a typical end user doing typical end-user stuff. Not even as a power user over, say, 3 years. Not even likely as a typical user over the previous 30 years. Where are you getting 2-3 TB of data per day (to kill it in a year; divide by years to suit) to throw at it? This is a consumer drive, remember?

MAYBE, if you have the drive say 90% full and then hammer the last 10% with constant writes… maybe. But even then you’d need hundreds of gigabytes of new data per day landing on that last 10% of the drive, which is still way beyond what you’d achieve in real-world conditions as a typical end user.

that said, if you are talking enterprise, you buy SLC. But you weren’t. And for end users, as above, MLC (or even TLC, as above) is FINE. Which is why Samsung offers 5-10 year warranties now. Try getting that on a hard drive.

And even there, as per below … hard drives are relegated to archive now. To get performance you either need way too many spindles, or you get it via compression and SSDs. Hard drives are replacing tapes for archive.



SSDs are far more resilient to physical shock, i.e. when you drop your machine, have an earthquake, etc.

Datacentre is already starting to go SSD. We are by no means cutting edge and any new SAN we buy is all flash, other than for archive data.

e.g… this one i put in last week (box is still sitting behind my desk, lol)


I’d end up getting one or two for a RAID 0 config for my Steam library if they weren’t so dangerously expensive. I am still in denial that the $/TB is getting so low for higher-capacity SSDs.

Any of my machines that has an SSD has only one per device.


More like 8x as expensive. I’ve seen 1 TB SSDs for a price comparable to an 8 TB HDD, or even cheaper, really.


Of course it also depends on the SSD in question and the chips used.
The main drive in my system is a Samsung Pro SSD, which is generally more expensive than, say, the EVOs or cheaper brands like Kingston’s SSDNow line.
My Pro drive still works fine today, running 5+ years already.

But of course there are also a lot of crap SSDs around.

The thing with storage devices is that it’s all a matter of luck, really.
Sure, there are some numbers to be found on the internet about drive failure rates and whatnot, but that still doesn’t mean very much.
In the end it’s all a matter of luck, and eventually every storage device will fail.


I don’t agree with this at all. Hard drives are here to stay for a while longer, especially since SSDs have two big disadvantages that make them a deal-breaker if I either needed a lot of storage or was on a tight budget.

  • Price per GB:
    The gap has closed and SSDs have come such a long way that, for a laptop, it’s actually somewhat viable to replace the secondary HDD with another SSD, which is great because in laptops SSDs have the advantage of no moving parts. BUT hard drives have also slowly dropped in price per GB, and they come with much larger capacities for the same price. Good luck finding a 4 TB SSD for under $2000, because I got a 4 TB external HDD for $120 a year ago.

  • Finite Write Cycles and Lack of Data Recovery
    It is much harder, if not impossible, to recover data from an SSD; with a hard drive, there is at least a chance of salvaging the data should the drive fail. This means that for storing valuable data I can’t rely on just an SSD, or at least it may not be a good idea. There are other limits too, such as finite write cycles, but I think that’s a hard wall to even approach, so it may not be a problem for most.
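The price-per-GB point in the first bullet is easy to put in numbers, using the figures quoted above ($120 for a 4 TB HDD, $2000 as the hypothetical 4 TB SSD price):

```python
# Price-per-GB comparison using the figures from this post.
hdd_price, hdd_gb = 120, 4000    # 4 TB external HDD for $120
ssd_price, ssd_gb = 2000, 4000   # hypothetical 4 TB SSD for $2000

hdd_per_gb = hdd_price / hdd_gb  # $0.03/GB
ssd_per_gb = ssd_price / ssd_gb  # $0.50/GB

print(f"HDD: ${hdd_per_gb:.2f}/GB, SSD: ${ssd_per_gb:.2f}/GB")
print(f"SSD is {ssd_per_gb / hdd_per_gb:.0f}x the price per GB")
```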

These inherent disadvantages mean HDDs won’t be going away anytime soon.

That said, SSDs are obviously the better choice for running an operating system and programs. I tend to go for a small SSD as a boot drive (512 GB SSD in my laptop; a project I have planned will use a 240 GB SSD I picked up as a boot drive, though it’s also getting NAS HDDs anyway). SSDs most certainly have advantages over hard drives, but they have their disadvantages too, and they’re still much more expensive.

I am just happy now that SSD prices are in the realm of affordability. 1 TB for $150 doesn’t sound terrible considering how much faster SSDs are, and they don’t suffer from the shock and vibration issues that HDDs can have. But again, I managed to get 4 TB of HDD last year for $120, though the SSD is 10x faster, I bet.


Here’s a 2TB SATA SSD for $250. This was $320 back in June

Recently Gamers Nexus did their weekly hardware news recap, and in it they discuss that RAM and SSD prices are expected to continue dropping through 2019, with SSDs approaching an all-time low of $0.08 per GB. I actually remember back in 2004 when this same figure was being discussed for drives over 1 TB.
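For comparison, the quoted prices work out as:

```python
# $/GB of the $250 / 2 TB drive above, versus the projected $0.08/GB floor.
price, gb = 250, 2000
per_gb = price / gb
print(f"${per_gb:.3f}/GB today")                             # $0.125/GB
print(f"${0.08 * gb:.0f} for 2 TB at the projected floor")   # $160
```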

Looking forward to prices dropping even more :slight_smile:


Damn, 2 TB SATA SSD for $250 is pretty crazy. Unfortunate about RAM prices, they still don’t seem to be acceptable while SSD prices at least are.


I don’t think you get to post pictures of a box from a company that literally writes its own firmware to get around MLC flash’s durability issues…