Dual actuator HDDs - are they here to stay?

Found a few minutes to tabulate write behaviour on most of the lowest-cost HMB drives available here (a rough sketch of how a fill test like this can be scripted is below the table). The takeaway: avoid QLC and tiny pSLC caches, basically.

| Drive | Cell | pSLC cache (GB/TB) | pSLC write (GB/s) | Direct write (GB/s) | Folding write (GB/s) | US$/TB |
|---|---|---|---|---|---|---|
| Patriot VP4300 | TLC | 345 | 6.8 | 2.8 | 0.90 | 55 |
| WD SN7100 | TLC | 350 | 6.5 | 2.9 | 0.85 | 70 |
| 990 Evo Plus | TLC | 113 | 6.1 | 1.5 | 1.8 | 65 |
| 990 Evo | TLC | 57 | 4.0 | 1.5\* | 1.7 | 63 |
| Corsair MP600 Elite | TLC | 50 | 6.2 | 1.4 | 1.4 | 65 |
| Silicon Power US75 | TLC | 160 | 5.8 | 2.5 | 0.85 | 55 |
| Silicon Power UD90 | TLC | 69 | 4.6 | 1.8 | 0.28 | 48 |
| Patriot P400 | TLC | 300 | 4.7 | 2.0 | 0.50 | 50 |
| Crucial P310 | QLC | 200 | 6.3 | 0.37 | 0.37 | 64 |
| Kingston NV3 | QLC | 274 | 5.7 | 0.25 | 0.25 | 63 |
| Crucial P3 Plus | QLC | 275 | 4.4 | 0.10 | 0.10 | 63 |

\* chuckholes on transition
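
For anyone wanting to reproduce numbers like these, here is a minimal sketch of a fill test (not the exact method used for the table above): it streams incompressible data until well past the pSLC cache and logs per-chunk throughput, so the pSLC → direct → folding steps show up in the output. The file path, chunk size, and total size are placeholders you'd adjust for the drive under test.

```python
# Rough pSLC fill test: stream large chunks of incompressible data to a file
# on the drive under test and log per-chunk throughput. Speed "steps" in the
# log roughly mark the pSLC -> direct -> folding transitions.
# TEST_FILE, CHUNK_MB and TOTAL_GB are placeholders.
import os
import time

TEST_FILE = "/mnt/testdrive/fill.bin"   # placeholder path on the drive under test
CHUNK_MB = 256
TOTAL_GB = 500                          # should comfortably exceed the pSLC cache

chunk = os.urandom(CHUNK_MB * 1024 * 1024)   # incompressible data
written_mb = 0

with open(TEST_FILE, "wb") as f:
    while written_mb < TOTAL_GB * 1024:
        t0 = time.monotonic()
        f.write(chunk)
        f.flush()
        os.fsync(f.fileno())            # keep the page cache out of the numbers
        dt = time.monotonic() - t0
        written_mb += CHUNK_MB
        print(f"{written_mb / 1024:7.1f} GB written  {CHUNK_MB / dt:8.0f} MB/s")
```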
2 Likes

This is an interesting topic that I rarely see discussed in much detail.

Seagate has a good explanation on how they come up with these workload ratings:

The TL;DR is that the HDD workload rating is the knee in the workload-versus-failure-rate curve; basically, it's how much you can read/write to the drive per year before there is a statistical increase in failure rate. However, that statistical increase beyond the rated workload can be very, very small… or not.

Another thing I don't see discussed very often is that workload intensity is a far bigger contributor to drive failure than raw throughput. What I mean is that constant seeking wears out a drive faster than sequential reads/writes do. It would be very hard to quantify this in a single statistic, though.

While it seems self-evident that "thrashing" an HDD reduces its lifespan, here's some corroborating evidence from the Exos 2X14 datasheet:


The max workload is per LUN on the SAS drives rather than per drive.

Now that you mention it, I don’t think I’ve ever seen a SAS 2x18 in the wild.

1 Like

Wouldn’t something like head flying hours per TB get pretty close as a measure of seek time per unit of data transferred?
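
As a back-of-envelope illustration (the numbers here are invented, not from any real drive), the metric would just be lifetime head-flying hours divided by lifetime terabytes transferred:

```python
# Hypothetical "seekiness" metric: head-flying hours per TB transferred.
# Both values are made-up examples; on a real drive they would come from
# SMART data (head-flying hours and total host reads + writes).
head_flying_hours = 12_000    # hypothetical lifetime value
tb_transferred = 900          # hypothetical lifetime host reads + writes, in TB

print(f"{head_flying_hours / tb_transferred:.1f} head-flying hours per TB")
```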

Me neither. And the SATA 2X18 I have is the only one I know of IRL installed anywhere. Server Part Deals gets the SAS version occasionally and it sells out, so presumably there are small deployments around somewhere.

1 Like

Never might be a strong statement. There is too much of a performance advantage and too much R&D investment in NAND flash. I would contend HDDs are going extinct.

This data is definitely accurate. I have a few P41 Platinum SSDs and their SLC cache fills up really fast… they apparently have some bug where the SLC cache doesn't get cleared out… so a lot of the time the write speed ends up under 200 MB/s.

1 Like

Going extinct is one thing; it will take a long time before they actually are gone. They still have unmatched density and price, which are big factors in the computer electronics market.

Haven't you been paying attention? 30 TB SSDs are now available to consumers, and 122 TB drives are now available for purchase by enterprises. Meanwhile, HDDs are spinning their wheels at 30 TB and are projected to double that by 2035. So density is already lost for HDDs, and once the 122 TB E1.L form factor is out, a 1U blade server can replace an entire rack of HDD storage shelves.

How long do you think it will be before prices drop to affordable levels, e.g. less than $30 per TB?

Unfortunately for many of us, NAND flash manufacturers are reported to have already reduced their output in an attempt to drive SSD prices per TB back up. We’ll see if the Chinese SSD makers follow that trend also, but with the new tariffs on pretty much anything made in China, South Korea and Taiwan, prices for SSDs, RAM etc are already going up.

1 Like

Given NAND price-per-TB stability since the ~10 nm scaling limit was reached, possibly never on the primary (new) market. The lower end of the used market gets down to roughly US$30/TB here, but a lot of the used market overlaps with new pricing on more performant TLC NVMe drives.

Price per TB isn't everything. Endurance makes a bigger difference in a large number of enterprise situations. Newer SSDs are actually less reliable than older ones; we have been going backwards in this regard for the past several years.

While endurance is important, it's not as if NVMe SSDs are hopeless on endurance. There are roughly five factors to weigh in a TCO analysis (a toy sketch of how they might combine follows the list):

  • Price per TB (Liquid capital)
  • Density (Real estate)
  • Energy efficiency (price / kWh)
  • Performance (hours to perform the same operation)
  • Endurance & Reliability (Replacement rate, predictability)
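
As a rough illustration only, here is a toy sketch of how four of those five factors might roll up into a per-TB cost over a five-year life. Every number is a made-up placeholder, performance is left out because it is workload-specific, and the helper name is just for this example.

```python
# Toy 5-year cost-per-TB model. All inputs are hypothetical placeholders;
# substitute real quotes, power prices, rack costs and failure rates.
def cost_per_tb(price_per_tb, tb_per_rack_u, watts_per_tb,
                annual_failure_rate, years=5,
                usd_per_kwh=0.15, usd_per_rack_u_year=300):
    energy = watts_per_tb / 1000 * 24 * 365 * years * usd_per_kwh
    space = usd_per_rack_u_year * years / tb_per_rack_u
    replacement = price_per_tb * annual_failure_rate * years
    return price_per_tb + energy + space + replacement

# hypothetical example inputs
hdd = cost_per_tb(price_per_tb=15, tb_per_rack_u=300, watts_per_tb=0.3,
                  annual_failure_rate=0.015)
ssd = cost_per_tb(price_per_tb=60, tb_per_rack_u=1000, watts_per_tb=0.15,
                  annual_failure_rate=0.007)
print(f"HDD ~${hdd:.0f}/TB over 5 years, SSD ~${ssd:.0f}/TB over 5 years")
```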

SSDs currently win on 4 of the 5 metrics, and they are also more predictable about when they will fail. This much we know. SSD endurance? If you wrote at 16,000 MB/s (the theoretical limit of PCIe 5.0 x4) constantly, with current wear-levelling algorithms and whatnot, you would not thrash that drive for decades; we are talking 30+ years of constant use. Most HDDs give up after 10 years of constant use because the mechanical parts simply cannot keep up.

It is only in the specific case of cold storage that there may be some endurance problems: SSDs are too new for us to know the data retention behaviour of modern drives. It could be a nothingburger, or it could be something to keep in mind; there simply isn't conclusive data yet. What we do know is that one lab ran experiments on drives that had reached the end of their rated write cycles, and those reports cite retention of anywhere from 20 to 300 weeks depending on temperature. More data is absolutely welcome.

For nearline and online storage, SSDs make more sense for 99.99999% of applications, provided the cost per TB isn't astronomically higher. You will always be able to find that one exception, but mass markets don't really cater to outliers.

Not even REMOTELY true. It CAN be true as long as the controller's HEURISTIC algorithms work for your access patterns. If your patterns ever break through that, the whole drive can easily collapse like the house of cards that it is.

BTW, a similar thing applies to DRAM "robustness". Theoretically, a Rowhammer "attack" should be impossible.
The whole Rowhammer "attack" means degrading the content of a cell by repeatedly, but TOTALLY LEGITIMATELY, accessing the content of adjacent rows.
Which means that the modern implementation of DRAM cells is FATALLY FLAWED.
And the modern "solution"?

  1. Implement counters that detect frequent access to certain rows and, in that case, trigger a premature refresh of the adjacent rows (a toy sketch of this follows the list).
  2. Even though the "solution" above costs performance, it isn't enough; it's just a band-aid. The second band-aid is randomizing the row numbers, so that rows N and N+1 generally don't map to physically adjacent rows.
  3. But since even both "solutions" are trivial patches that can't really stop a determined attacker, let alone coincidences, here is the third part: declare any access pattern that falls through those two "solutions" as "evil".
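
To make point 1 concrete, here is a toy simulation of a TRR-style counter: track activations per row and, once a row is hammered past a threshold, refresh its neighbours early. The threshold and the hammering loop are arbitrary illustration values, not any vendor's actual parameters.

```python
# Toy model of a target-row-refresh style mitigation: count activations per
# row and refresh the physical neighbours early when a row looks hammered.
# THRESHOLD and the hammer loop are arbitrary illustration values.
from collections import defaultdict

THRESHOLD = 50_000          # activations within one refresh window (made up)
activations = defaultdict(int)

def refresh(row: int) -> None:
    print(f"early refresh of row {row}")

def activate(row: int) -> None:
    activations[row] += 1
    if activations[row] >= THRESHOLD:         # suspected hammering
        for neighbour in (row - 1, row + 1):  # the victim rows
            refresh(neighbour)
        activations[row] = 0

# crude hammer loop against row 1000
for _ in range(120_000):
    activate(1000)
```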

Much the same "solutions" exist in the DEEPLY FLAWED NAND flash industry.
BTW, did you know that NAND flash has VERY similar weaknesses, like read-disturb degradation, alongside many others unique to it?

I did use numbers from a 64 GB flash drive from 2018 and extrapolated. Seeing as that is not enough, I looked around for more recent data and found some numbers from Solidigm. Their 61 TB drives allow 36 TB of data to be written each day for 5 years before putting a dent in the warranty claims.

36 TB is 36,000 GB, so we are looking at a constant write speed of roughly 400 MB per second, which is still faster than hard drives. That's the theoretical worst-case scenario with no wear levelling. Adding wear levelling brings the throughput number to around 4,000 MB/s. Remember, warranty numbers are set at the 95% point of the bell curve, meaning 5% of all drives produced will fail before the warranty expires and the other 95% at a later date.
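
Quick sanity check of that arithmetic, using only the figures quoted above (61 TB capacity, 36 TB/day, 5-year warranty):

```python
# Back-of-envelope check of the Solidigm figures quoted above.
drive_tb = 61
daily_writes_tb = 36
warranty_years = 5

seconds_per_day = 24 * 3600
sustained_mb_s = daily_writes_tb * 1_000_000 / seconds_per_day   # TB -> MB
dwpd = daily_writes_tb / drive_tb
lifetime_pb = daily_writes_tb * 365 * warranty_years / 1000

print(f"~{sustained_mb_s:.0f} MB/s sustained")             # ~417 MB/s
print(f"~{dwpd:.2f} drive writes per day")                 # ~0.59 DWPD
print(f"~{lifetime_pb:.0f} PB written over the warranty")  # ~66 PB
```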

Endurance is already better than hard drives - in the general case. You can of course find exceptions to this, as there are exceptions to everything. Life is messy. Deal with it.

That is not to say SSDs are perfect. But definitely better than hard drives, on this metric.

Statistics is often the art of lying through mathematics; manufacturers' statistics, often doubly so.
If you love this kind of thing, suit yourself, just don't expect everyone else to follow.

People are free to do all kind of weird shit.

16 GB/s writes 1 TB in 62.5 s, which burns through 600 write cycles of endurance in 10.4 hours.

So 30 years is high by a factor of 3000-25,000 for drives of 1-8 TB.
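
A sketch of that arithmetic (600 P/E cycles is the endurance assumption stated above, 16 GB/s the PCIe 5.0 x4 figure from the earlier post):

```python
# Hours to exhaust 600 P/E cycles at a sustained 16 GB/s, versus 30 years
# of continuous writing (assumptions taken from the posts above).
PE_CYCLES = 600
WRITE_GB_PER_S = 16
HOURS_IN_30_YEARS = 30 * 365 * 24

for capacity_tb in (1, 2, 4, 8):
    tbw = capacity_tb * PE_CYCLES                     # total TB writable
    hours = tbw * 1000 / WRITE_GB_PER_S / 3600        # TB -> GB, s -> h
    print(f"{capacity_tb} TB: {hours:6.1f} h to exhaust "
          f"(30 years is ~{HOURS_IN_30_YEARS / hours:,.0f}x longer)")
```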

It’s easily lower for consumer flash, even comparing to WRL drives rather than continuous rated ones.

Have you not been paying attention?

The only space that matters when talking volume production is the enterprise. Whatever shenanigans go on in consumer space, well, it is consumer space. The shit tier of technology. Consumers get whatever crumbs trickle down from enterprise.

Enterprise moves in cycles - a form of planned economy, actually. Five years: buy the tech, work dem contracts, get good deals, replace in 5 years. Thus, are there hard drives around still? Yeah, enterprise ain't about to chuck out their pricey 6-month-old solution for something newfangled. Heck, they won't chuck out their 16-year-old solution unless something remarkably better comes along. Biz likes it slow.

Storage, at the moment, has a shelf life, both in cost terms and in implementation terms. If $1,000 bought me 16 TB of NVMe SSD or 30+30 TB of HDD space today, well, I'd probably go with the 30+30 TB. Fast forward 5 years: those $1,000 now buy me 256 TB of NVMe SSD but only 120 TB of HDD (40+40+40). On top of that, I can build a 6 PB 1U server with the SSDs, compared to an entire rack of HDDs doing the same. That is some serious energy and server-hall savings right there.

HDDs can have the upper hand in some small niche applications, but by 2030 the market for new HDDs will rapidly disappear.

… Or at least it would, if Trump weren't hell-bent on bringing down the global world order; now pretty much all bets are off. :slight_smile:

Somewhat.
By now it seems to me that your whole account has been set up to serve as a shit-stirrer, provocateur, or at least an automated bot / debate trigger, meant to cause an avalanche of responses so that an automated tool can analyze them and profile users for further tracking.

Bro, I go after the data, always. Provide solid data, not just anecdotal evidence, and you have a case.

I have argued why I believe my analysis is correct, using the best data available to me. It may be a bit on the optimistic side, but I always base my world view on data, data, data. I try to avoid arguments based on feelings, but since I am human, and I know I am human, that will fail sometimes.

If you find no value in an analysis that says HDDs are doomed according to all known modern economic theory, then fine, let's agree to disagree. I still do not see much data pointing the other direction.

Of course.
Accounts that fit this pattern often follow their data points - a subcloud of "interesting" users to be tracked.
"Hybrid warfare", as it is now called.
"Invisible warriors" and the rest of the Fort Bragg crap:
Fort Bragg marketing materials - a honeypot to attract street meat:
GHOSTS IN THE MACHINE: PSYWAR
GHOSTS IN THE MACHINE 2

Typically, "Bro" accounts don't work alone. They have other workers that side with various branches of the "debate".
It's all automated now.

At least across the EU, most of these accounts are doing GLADIO-type stuff - organizing covert murders/"accidents"/"disappearances"/etc.:

New twist: "debates" while asleep - RF BrainScan and similar toys.
"Bro accounts" used to do their work in pubs etc. during WWII, then they went online to forums, mailing lists and so on, only to end up on big platforms, and now they get to visit you in your dreams - typically as a cloud of interesting strangers in a coincidental friendly chat, perhaps an exchange over a beer in a "pub", etc.

A virtual world inside a virtual world! WTF! Ain't that something?
VMware should sponsor these things, perhaps stage competitions, etc. :roll_eyes:

But now they have been making visits to people while asleep on a MASSIVE scale, across the EU vassal states at least (for some 10+ years at least).
Check the whole TW/X thread about the new, "revolutionary" tech that "accidentally sprang up out of nowhere" in various spots, the latest one being Meta:
Remember RF BrainScan?

Dual actuators are just one more technology, like the new recording methods and other improvements we have seen in HDDs over the last decades. With goods as homogeneous as HDDs, you really want an argument to sell your stuff, because otherwise a 20 TB HDD is nothing more than a bag of sugar. It's all the same; storage devices are a homogeneous commodity these days. €/TB being the only relevant decision factor for the customer? Bad for sales… so keep on innovating.

Dual-port SAS, Fibre Channel HDDs, 520-byte sectors, SMR, dual actuators… they are the extra bit that makes you buy them, because that feature is worth more to you than the price hike.

For me personally it's more like "hey, do you want to pay 50% extra to get maybe 50% more, sometimes, and after some minor troubles?" I just keep using and buying bog-standard enterprise stuff with the best €/TB. It doesn't get drives mixed, it keeps upgrading simple and smooth… and who knows how long feature X will be supported or special driver support will be provided?

I stay on the conservative storage path…CMR, €/TB, no experiments, carefree and cheap experience.

In the age of software defined storage, you just need reliable and good storage and figure out the details and requirements in software.

As long as software doesn’t demand certain features, I never pay a premium for storage. That was important to me when I only had one HDD in my only computer. Today I have arrays and clusters and individual drives don’t really matter.

2 Likes