Looking around for drive options, I’ve noticed that both Seagate (Exos 2X) and WD offer dual-actuator drives, but neither goes above 20TB capacity.
Which seems odd, as if it were just a failed experiment that never got off the ground.
Are they here to stay, perhaps with further evolution?
As nice as they are, one would want to see both actuators covering all surfaces (=two heads per surface) for redundancy.
Also, it would be nice to be able to R/W on all heads simultaneously.
I’m not sure why this hasn’t been implemented. The reason used to be that the servo can keep only one of the heads on track at a time, but the latest generations have stacked actuators with a micro-piezo on the head doing micro-corrections, so one would expect each head to be able to stick to its own track…
I don’t think so. Existing FLASH can never replace HDDs in some roles - like a massive store of old, cold data. It lacks robustness, longevity and write endurance.
I thought multi-head drives would be great for RAID, but people say that when anything fails in one head stack, the whole drive, other half included, goes down too.
That seems to have been a roadblock, but it sounds like they’ll evolve this branch to get around it soon:
With 122.88 TB drives out this year and 255 TB SSDs due out next year, I think this way of thinking is obsolete now. HDDs will possibly reach 100TB in 2030, but before 2035 we will see the first PB SSD come out on the market. HDDs can no longer compete in storage density, and cold storage is not really a problem if the drives are powered on every six months or so (something you need to do for HDDs regardless).
HDDs’ glory days are over. By now it’s just a question of how long it takes the market to realize this; the sooner high-capacity SSDs reach that $25/TB number (for a 256TB drive that is ~$6k), the sooner the market will shift. This is an inevitability now.
HDDs are going… But they are not gone. Not yet. Same with ICE cars, same with coal plants, same with copper driven telephones, same with CRT screens and smartphones and DVD rentals and…
Dual actuators in this sense are kind of a desperate attempt to get more write speed out of a dying technology. Maybe they’re here to stay for HDDs, but there’s just too much data being stored now, and SSDs are just too darn fast.
Check their price. Even the most expensive HDDs are cheap compared to that.
A TLC NAND FLASH cell can go through 1,000 or so write cycles. A mechanical HDD can rewrite the same sector over and over again practically forever (IIRC something like 10^15 cycles are guaranteed).
An HDD can be shelved basically forever. NAND FLASH degrades even when sitting idle and has to be refreshed/rewritten periodically by the controller.
NAND FLASH is perishable media. Good for hot data in a datacenter, not for long-term data storage.
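To put rough numbers on that endurance gap, here’s a back-of-the-envelope sketch; the 1,000 P/E cycle figure and the write-amplification factor below are assumptions for illustration, not datasheet values:

```python
# Back-of-the-envelope NAND endurance estimate (all inputs are assumptions):
#   total host writes before wear-out ≈ capacity * P/E cycles / write amplification
capacity_tb = 4            # drive capacity in TB (assumed)
pe_cycles = 1000           # program/erase cycles per TLC cell (assumed)
write_amplification = 2.0  # extra internal writes per host write (assumed)

tbw = capacity_tb * pe_cycles / write_amplification
print(f"~{tbw:,.0f} TB of host writes before wear-out")   # -> ~2,000 TB

# An HDD has no comparable per-sector write limit; the same sector can be
# rewritten essentially indefinitely, so its life is bounded by mechanics.
```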
Not that long ago I had some of the early IDE hard disks in my drawer.
A Quantum, 52 MEGABYTES. It was probably 35+ years old at that point.
Still perfectly functional.
I don’t think so. Dual actuators were an attempt to address sore spots - shortening the window of vulnerability during RAID rebuilds, speeding up self-tests, etc.
HDDs may die off, but NAND FLASH isn’t the one that’s going to finish them off.
Besides, NAND FLASH has hit a ceiling itself. All of the capacity increases are due to more layers on the chip.
Each layer has its cost, so NVMe isn’t getting cheaper, since there is no significant process shrink.
ICE cars aren’t going anywhere, buddy. You were right about everything else but ICE cars. EVs won’t take over anything unless we have personal nuclear reactors in every single vehicle.
Coal plants were never efficient, but coal is cheap. Those would have been gone a long time ago if it weren’t for all the anti-nuclear propaganda of the past 30 years.
HDDs’ glory days have definitely ended. My personal NAS has had mostly SSDs for the past two years now. The large, slower SSDs are cheap enough that I don’t have to deal with HDDs anymore… especially when many of the newer ones are so unreliable.
However, SSDs are also hitting a wall… 2TB SSDs were cheaper a few years ago. Today I find that many of them are more expensive, and many of the consumer drives are unreliable… many don’t even have a DRAM cache…
So for smaller storage, SSDs will take over. For large storage… I think HDDs will continue for a long time, especially when it comes to shelf life.
I’d put LTO into the archival storage category; you can only read/write them several dozen times before they are no longer usable because the tape has degraded/stretched.
I suspect at some point solid-state storage will get cheap enough that rotating HDDs no longer make sense, but I think that is a long, long way away.
Businesses - of course - have to balance hardware cost against labor costs and other costs, but I still think we are a ways away from where spinning hard drives no longer make sense.
This is less of a problem the larger the drive gets, as while each cell can only be rewritten a limited number of times, there are a lot of them, and in mass storage applications most of the data just sits there. The portions of the drives used for dynamic content are much smaller.
Mostly for this reason (and the fact that the TLC drives we use today have much lower write amplification), a modern consumer TLC drive will have much greater total write endurance than an early, small enterprise SLC drive.
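Here’s a rough sketch of that scaling; the workload, endurance and write-amplification numbers are just assumptions to show the trend:

```python
# Why wear matters less as drives get bigger: the same daily write volume
# consumes a smaller fraction of a larger drive's total endurance.
# All numbers below are illustrative assumptions.
daily_writes_tb = 0.1   # 100 GB of host writes per day (assumed workload)
pe_cycles = 1000        # assumed TLC endurance
write_amp = 2.0         # assumed write amplification

for capacity_tb in (1, 4, 16):
    tbw = capacity_tb * pe_cycles / write_amp          # total endurance in TB
    years = tbw / daily_writes_tb / 365
    print(f"{capacity_tb:>2} TB drive: ~{years:,.0f} years at 100 GB/day")
# -> roughly 14, 55, and 219 years respectively
```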
I think you overstate the reliability of shelved HDDs and overstate the vulnerabilities of solid-state media. Solid-state media today is very good. I would consider it, on average, to be MUCH more reliable than hard drives when it comes to random failures. Sure, SSDs have a finite life when it comes to writes, but unless you are doing something extreme, you are unlikely to wear them out before they are obsolete.
Yes, they do need to see power every now and then, so they can monitor the state of the voltages in each cell and make sure they don’t drift to the point where you get a flipped bit, but this retention is getting better all the time, and already today they can go without power for several months without risk.
So, full disclosure: I’m not an enterprise and I don’t work in enterprise IT, but I do have a larger sample size in my personal collection than most do.
At any time I have about 40 spinning hard drives in active use between near-line and backup servers.
I also at any time have close to that number of SSDs across my server and client machines. And some of these SSDs get horribly abused in caching and other operations.
And I have been doing this for over a decade now.
In the early days of consumer SSDs (~2009-2013) I had many failures. It was BAD. The OCZ SATA SSDs I started out with (various OCZ Agility, OCZ Vertex, OCZ Vector, etc.) would seemingly fail if you looked at them wrong (I seriously never had one of those last longer than 2 years).
But ever since 2014 or so I have only had two SSDs have issues, and only one of those suffered data loss: a Sabrent Rocket 4 that suffered a random failure in my workstation. (I also for a while used Samsung 980 Pros in a boot mirror in one of my servers, as I got a good deal on them, but one would intermittently, every 6 months or so, lock up and require a power cycle to come back online, requiring a resilver with the other member of the mirror. That was my bad for using a non-enterprise drive in an enterprise-like application.)
My mix of solid-state drives used elsewhere, ranging from the very high end (Optane) to prosumer models (Samsung Pros and the WD SN850X), to consumer models like Samsung Evos (in low-write applications), and even a large number of discount-branded “Inland Premium” drives (MicroCenter’s store brand), have all worked very, very well.
I’d argue the risk of data loss from hard drives is orders of magnitude higher than the risk of data loss from SSDs at this point.
Of course, this is why we have redundant configurations (and backups)
Would I put an SSD in a time capsule to be unearthed 100 years from now? No*. But for all practical purposes I’d argue their reliability beats the pants off hard drives these days.
*(and I probably wouldn’t do that with a hard drive either)
As far as I am concerned, the biggest benefit of hard drives today is simply cost. You get a lot more gigabytes per dollar from hard drives than you do from SSDs, especially high-end and very large enterprise SSDs. This even holds true when you factor in things like the cost of rack space, the cost of power, and the cost of personnel to occasionally swap out a bad disk in the pool and resilver.
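A toy sketch of that comparison; every input below (prices, power draw, rack and labor costs, and the example drives themselves) is a placeholder assumption, not a quote:

```python
# Toy cost-per-usable-TB comparison over five years. Every figure here is a
# placeholder assumption; plug in your own quotes, power rates and labor costs.
YEARS = 5
POWER_EUR_PER_KWH = 0.30      # assumed electricity price
RACK_U_EUR_PER_YEAR = 100.0   # assumed cost of one rack unit per year

def eur_per_tb(price_eur, capacity_tb, watts, u_share, swap_labor_eur):
    power = watts / 1000 * 24 * 365 * YEARS * POWER_EUR_PER_KWH
    rack = u_share * RACK_U_EUR_PER_YEAR * YEARS
    return (price_eur + power + rack + swap_labor_eur) / capacity_tb

# Assumed example drives: a 26 TB enterprise HDD vs a 25.6 TB enterprise SSD.
hdd = eur_per_tb(price_eur=550, capacity_tb=26, watts=9, u_share=1 / 24, swap_labor_eur=50)
ssd = eur_per_tb(price_eur=7400, capacity_tb=25.6, watts=15, u_share=1 / 48, swap_labor_eur=10)
print(f"HDD ≈ €{hdd:.0f}/TB, SSD ≈ €{ssd:.0f}/TB over {YEARS} years")
# -> overheads narrow the gap a bit, but the SSD stays several times pricier per TB
```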
I’ve never used them. The last time I refreshed my main storage server, in 2022, I bought 12x 16TB Seagate Exos X18s. I looked at the multi-actuator drives at the time, but they were a little more expensive, and I didn’t think they had been on the market long enough for me to trust them. I wanted to give them a little more time to mature and see what long-term reliability looked like.
If I got a good deal on them today - however - I’d totally consider them. But my main storage pool is probably all set for a while.
I don’t see it. Check the latest FLASH developments and the new memory types. Do you see much smaller cells on new processes? The best and finest CPU stuff is now at 2nm; flash cells are still at 16nm, crawling toward 12nm. A good part of the advances was made by robbing the customer - cramming more bits into one cell (MLC, TLC, now even QLC) and totally obliterating what little endurance it had. There are no generational upgrades to finer geometry with higher capacities. Higher capacities are achieved with more layers, which have a fixed cost. So chips are getting denser AND more expensive.
Beat me to it. Only got one thing to add, and that is that HDDs take up a lot of space. When you can trade a full rack of 4U servers with 24x20TB drives each for a single 1U server with 24x256TB drives, I have a feeling price per TB isn’t really going to matter that much - that is cutting the storage component of your server farm by over 90%.
The opportunity to rent out a ton of rack space or halve your electricity bill is worth quite a bit of money, and it already makes a ton of sense for most companies requiring Petabyte levels of storage.
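Rough math behind that, with assumed server counts and capacities:

```python
# Rack-density math behind the trade described above.
# Server counts and capacities are illustrative assumptions.
rack_units = 40                       # usable U in one rack (assumed)
hdd_servers = rack_units // 4         # 4U servers, each with 24 x 20 TB HDDs
hdd_capacity_tb = hdd_servers * 24 * 20

ssd_capacity_tb = 24 * 256            # one 1U server with 24 x 256 TB SSDs

print(f"HDD rack: {hdd_capacity_tb:,} TB in {rack_units}U")
print(f"SSD box : {ssd_capacity_tb:,} TB in 1U")
# -> ~4,800 TB for the whole rack vs ~6,144 TB in a single 1U box
```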
What drives costs down? Consumers? No man, we get the bottom scrapings of the barrel after the enterprise crowd has had its way six ways to Sunday. If we are lucky, sweet deals trickle down to us plebs.
Sorry, I keep forgetting you guys in the U.S. of A. now have an official policy of sticking your heads in the sand while hoping Ford and GM recover. Have a look at the global data instead of what Fox News is spoonfeeding you.
You want to keep arguing that, feel free to start a different thread, but the data is just not agreeing with your opinion.
Actually, this is a real problem. I’ve worn out quite a few drives this way. Yes, you have many cells, but with enough write bandwidth and frequency, drives start failing after some time. Also:
- data retention of heavily worn-out cells is worse.
- the controller does its best to spread the writes, but it can’t do magic, and eventually some parts of the drive get hit more than others.
- writing has its limits. The controller can only clear bits (write zeros) into fresh pages. If something has to be corrected by writing a “1” anywhere, the only good way to do it is either to relocate the whole sector elsewhere or to erase the whole friggin’ block (= MANY pages) and rewrite fresh content back. It’s easy to see how even a small change can trigger a lot of writing.
- the controller has to keep and update metadata, with the same set of shitty problems.
All this is non-existent on an HDD. Want to rewrite the same sector with any data a bazillion times? No problemo.
All this handwaving seems simple until the moment that YOU have to touch NAND FLASH.
Check a datasheet and spend a minute on a thought experiment about how you would tackle those issues with your controller, and everything will become much clearer.
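If it helps, here’s a toy sketch of that thought experiment; the page and block sizes are typical-looking assumptions, not from any particular datasheet:

```python
# Toy model of the erase-block problem: NAND pages can only be programmed
# (bits cleared) after the block containing them is erased, so flipping even
# one byte back toward its erased state forces the controller to rewrite far
# more than one byte. Page/block sizes are typical-looking assumptions.
PAGE_SIZE = 16 * 1024        # bytes per page (assumed)
PAGES_PER_BLOCK = 256        # pages per erase block (assumed)
BLOCK_SIZE = PAGE_SIZE * PAGES_PER_BLOCK

def worst_case_nand_writes(host_bytes: int) -> int:
    """Worst case: an in-place update forces a full block erase + rewrite."""
    return BLOCK_SIZE

host_write = 512             # host wanted to update a single 512-byte sector
nand_write = worst_case_nand_writes(host_write)
print(f"host wrote {host_write} B, NAND rewrote {nand_write:,} B "
      f"(write amplification ~{nand_write // host_write}x)")
# On an HDD the same 512-byte sector is simply overwritten in place.
```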
Space is cheap in a homelab. Drives aren’t.
Let’s check some prices:
Top-of-the-line 26TB HDD: Western Digital Ultrastar DC HC590 26TB - €550.36 (€21.17/TB)
Micron 9550 MAX - 3DWPD Mixed Use, 25.6TB, SED, 2.5" / U.2 / PCIe 5.0 x4 - €7,403.40 (€289.20/TB)
That is 13.66 TIMES more expensive per TB. I don’t see that falling to parity any time soon.
But even if it did, this STILL can’t replace HDDs in all of their roles - long-term archival, cold storage, etc.
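For reference, the per-TB math above, straight from the quoted list prices:

```python
# The per-TB figures above, reproduced from the quoted list prices.
hdd_price_eur, hdd_tb = 550.36, 26      # WD Ultrastar DC HC590
ssd_price_eur, ssd_tb = 7403.40, 25.6   # Micron 9550 MAX

hdd_per_tb = hdd_price_eur / hdd_tb     # ≈ €21.17 / TB
ssd_per_tb = ssd_price_eur / ssd_tb     # ≈ €289.20 / TB
print(f"HDD: €{hdd_per_tb:.2f}/TB, SSD: €{ssd_per_tb:.2f}/TB, "
      f"ratio ≈ {ssd_per_tb / hdd_per_tb:.2f}x")   # ≈ 13.66x
```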
HDDs taking up space isn’t too much of an issue in most applications. Power is a bigger problem than space right now. That is front and center in the world of AI.
In terms of ICE cars vs EVs: unfortunately you are lost in globalist propaganda. China is irrelevant as a whole and all the data they publish is fake. Their cars are total dogpoo, but their ultra-cheap prices are causing problems in many parts of the world. Many people are upset with them because they suck: they have constant software and hardware problems… support isn’t good… and, just like with everything else from China, good luck getting parts after a few years have gone by. They retool factories and what you bought no longer exists.
I think that is enough for this thread. Just make sure not to spread propaganda around and mix up computers with cars, since the industries are totally different in every way.
Yeah, but you are completely ignoring the comparatively high risk of random failures with absolutely any mechanical component that has moving parts.
I touch NAND flash all the time. Sometimes even in inadvisable ways (like running consumer TLC drives in write-heavy cache applications for years on end).
Right now I have two striped 4TB WD SN850Xs in a write-heavy cache application in my ZFS pool. They have been in this role since December 2023. They still haven’t registered a single percent in the “Percentage Used” category as pulled via smartctl.
I decided to take a calculated risk with these, as being cache drives, even if they die, I don’t lose any data.
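For anyone curious, a minimal sketch of how I pull that number, assuming smartctl 7+ with JSON output and an NVMe device (adjust the device path for your system):

```python
import json
import subprocess

def percentage_used(device: str = "/dev/nvme0") -> int:
    """Read the NVMe 'Percentage Used' wear estimate via smartctl's JSON output."""
    out = subprocess.run(
        ["smartctl", "-j", "-a", device],
        capture_output=True, text=True, check=True,
    ).stdout
    health = json.loads(out)["nvme_smart_health_information_log"]
    return health["percentage_used"]   # 0-100(+), the drive's own wear estimate

if __name__ == "__main__":
    print(f"Percentage Used: {percentage_used()}%")
```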
The thing is this:
Flash NAND wear is highly predictable. If you monitor it you just replace the drive before it fails.
Hard drives - being mechanical devices with moving parts - have a relatively high rate of random mechanical failures.
Sure, SSDs can suffer random failures, but they do so at an orders-of-magnitude lower rate than hard drives do.
From that perspective (if we ignore redundant pools and backups for a moment), if I could only have a standalone drive, I’d consider my data MUCH safer on an SSD than on a hard drive. Just keep an eye on it and replace it when the wear goes up.
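A rough sketch of that replacement math; the days in service, wear figure and threshold below are just example inputs:

```python
# Naive linear extrapolation of when to schedule a replacement, based on how
# much "Percentage Used" has accumulated so far. Inputs are example values.
days_in_service = 500     # how long the drive has been in this role (example)
pct_used = 4              # value reported by smartctl today (example)
replace_at_pct = 90       # planned replacement threshold, before 100% (policy)

if pct_used > 0:
    days_per_pct = days_in_service / pct_used
    days_left = (replace_at_pct - pct_used) * days_per_pct
    print(f"~{days_left / 365:.1f} years until planned replacement")
else:
    print("No measurable wear yet; check again later.")
```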
Any hard drive is a ticking time bomb that could last a couple of days or could last decades. You never know.