NAND Flash is catching up on endurance really quickly (Micron and Toshiba are doing great stuff lately), and write IOPS are getting into the 300-400k range, so the gap is shrinking every year.
Those old 32GB 3D XPoint cache drives (H10/H20) really don’t cut it anymore, as their throughput is very limited by today’s standards.
I do see the (3D-X) point in the 4800x and 5800x devices, however. But I’m sure this technology won’t be buried; the buyer of the IP and manufacturing assets will surely release related products for the target audience.
This may also be an opportunity for non-Intel systems to get support for NV-DIMMs mid-term.
For a ZFS SLOG, Unraid cache devices, or anything not writing 24/7 straight, I’m confident in recommending modern datacenter mixed-use SSDs today. Wear-out and performance characteristics per € are just way more competitive in 2022.
Perhaps I speak only for myself, but I care very little that Optane as product or brand is dead, I mourn the loss of 3D XPoint as a technology.
While there is interesting potential for changing the overall system architecture when storage gets close to memory in terms of speed and becomes byte-addressable, my interest in 3D XPoint has always been in the endurance and longevity on offer:
Long Lived Storage Devices
While backups are essential — and best practice, and one should have everything on three devices, and in two physical locations (insert standard spiel about nothing being really saved if it is not backed up) — maybe I am just starting out, and do not have the knowledge or the money to set up an elaborate backup scheme that is tailored to my workflow.
I want to have something I can use without worrying for a few decades. I want to have a part of my system that I can trust to outlive my CPU or memory.
I have described my ideal of data storage before:
While Optane was hardly perfect (it is not guaranteed to retain data for years without power), it was at least an improvement, a step in the right direction: something that was not visibly walking its way to the dustbin with every use like NAND is, or vulnerable to everyday bumps and physical shocks like HDDs are.
Could you provide links about this? As far as I am aware, endurance has been inexorably dropping since manufacturers began moving to MLC rather than SLC.
Each step in the density progression, from 2-bit MLC to 3-bit (TLC) to 4-bit (QLC) to 5-bit (PLC), makes the cells more frail than the previous configuration on equivalent silicon. I have heard very little about improved endurance through manufacturing; far more often I hear about improved wear-levelling techniques on behalf of the NAND controller.
The only thing I can think of is that the endurance of a disk as a whole increases with capacity, and capacities have generally skyrocketed since then. If you’re comparing an ancient 64GB SSD with a modern 2TB SSD, then yeah, at equal capacity the endurance would have decreased, but no one is using 64GB drives anymore.
I think the biggest travesty of Optane is that Intel clearly had no idea how to sell it. Or rather, dickhead internal marketing types were trying to do the vendor lock-in thing with it.
Limiting it to intel platform when you have a shit platform: DUMB
Selling it with hard drives as a cache on shitty low end platforms: DUMB
If they had simply pushed it as general high-speed flash at scale, it would have survived. But no… dicking around muddying the message by implying it was Intel-platform-only was the biggest self-own I’ve seen in the storage industry in a long time.
Just on NAND endurance… outside of massively extreme edge cases… it simply isn’t an issue for almost everybody.
Most people simply don’t generate or modify enough data fast enough for it to be a concern.
The Optane speed advantage can be mitigated somewhat by bigger RAM caches. The number of people using Optane as non-volatile system memory is tiny (probably due to the platform lock-in BS).
It’s cool tech, but the cost and limited addressable market, plus platform lock-in for the advantages it has, killed it. It just didn’t solve enough problems that people weren’t already solving other ways for less.
I’m referring to the data sheets of their datacenter products. Kioxia still has SLC drives with 60 DWPD, and Micron has 12TB drives with 60PB endurance (along with very good PCIe 4.0 characteristics and IOPS), which is 3 DWPD, but capacity and wear leveling make up for it.
I wouldn’t use them in a DIMM slot like PMEM, but for storage purposes, the case where you wear out 60PB is very special.
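For anyone curious how the quoted figures line up, here is a quick back-of-the-envelope check in Python. The 5-year warranty window is my assumption (typical for datacenter drives), not something from the data sheet:

```python
# Rough check of the quoted endurance: 60 PB of rated writes on a 12 TB drive
# over an assumed 5-year warranty works out to roughly 2.7 DWPD, which
# vendors round to the advertised 3 DWPD.
rated_writes_tb = 60_000      # 60 PB expressed in TB
capacity_tb = 12
warranty_days = 5 * 365       # assumed 5-year warranty

dwpd = rated_writes_tb / (capacity_tb * warranty_days)
print(round(dwpd, 2))  # 2.74
```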
Yup, I have a ye olde Samsung 860 Pro, and it was a main OS drive for a while with whatever I wanted fast on it. I haven’t checked, but I suspect it is still in the high 90%s of estimated life left; I should check. It’s just a random storage drive now, not even used much, as I have multiple M.2 drives to fill what it did previously. It’s just on show in the case as it’s a good-looking 2.5" drive.
Nothing I do in my PC’s practical life will wear out any solid state drives I have; I am more worried about the still-existing spinning rust, I need to move that to some cheap flash.
I imagine that at a multi-tier company like Intel, you can’t really compete against opponents who specialize in that area. Intel should focus on good CPUs, especially now that AMD is eating their lunch.
This reminds me of every time Apple killed one of its accessories: AirPort stations, printers, etc. Although the AirPort was an extremely dumb router, it was the most “forget there is a router” experience. They didn’t have the luxury of competing against companies whose sole business is SOHO network equipment.
Interesting, DWPD is another metric to add to MTBF and TBW in my list of things to remember when comparing drives:
TBW - dependent on size of drive
DWPD - dependent on length of warranty
MTBF - dependent on unknown assumptions, may vary by manufacturer?
I understand that in most cases even TLC can be fairly safe to use endurance-wise, but I do very much worry about accidentally misconfiguring or programming the equivalent of a filesystem fork bomb.
For ripped CDs/DVDs/Blu-rays, or games and music that can be re-downloaded, this is not a concern, unless the original discs are damaged or the online store goes bankrupt.
For my irreplaceable personal data files, however, I almost prefer SATA or PCIe 3; the faster the interface, the faster you can burn through that limited TBW/DWPD rating.
I want “fix it and forget it” hardware; let the endless rat race or cat-and-mouse affair go on in software or compute, but let us have a rock-solid foundation for local data storage, I say.
Maybe, but I previously found speculation that Xeon was subsidising the cost of Optane; partly as an explanation for why we never saw Micron’s QuantX:
Yes, didn’t Wendell do a video at one point using it with AMD StoreMI/Enmotus Fuzedrive and come to the conclusion that it was dramatically better than the usecase that Intel actually intended?
And actually, isn’t Intel Optane the reason they hired Allyn Malventano? One of the good things about PCPer back in the day was that Allyn seemed to know more about storage than any other adjacent pundit.
As a consumer, I find DWPD (full disk writes per day) more intuitive and meaningful to use as long as we agree on let’s say 5 years of typical use.
Is 600 TBW good? Well, yes and no. For a WD Blue 4TB SATA SSD, it’s really, really bad. But for a 1TB Samsung 980 Pro, it’s really good by consumer standards. Modern 800GB datacenter drives have around 4000 TBW. All three drives are listed as TLC.
TBW isn’t useful by itself; you always have to check the drive capacity and do the math to see if it’s good. With varying drive capacities, this gets increasingly difficult.
DWPD lets consumers spot the bad apples of endurance more transparently and compare drives. But this is consumer land, and companies don’t want to present bad figures or be easily compared to the competition, so they stick to TBW.
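To illustrate the conversion the posts above do by hand, here is a small Python sketch. The helper name and the flat 5-year warranty assumption are mine; the TBW and capacity figures are the ones quoted above:

```python
# DWPD = total rated writes / (drive capacity * days in the warranty period).
# Assumes a 5-year warranty; adjust warranty_years for drives with other terms.

def tbw_to_dwpd(tbw: float, capacity_tb: float, warranty_years: float = 5) -> float:
    """Convert a TBW rating into full Drive Writes Per Day."""
    return tbw / (capacity_tb * warranty_years * 365)

print(round(tbw_to_dwpd(600, 4.0), 2))   # WD Blue 4TB SATA: ~0.08 DWPD
print(round(tbw_to_dwpd(600, 1.0), 2))   # Samsung 980 Pro 1TB: ~0.33 DWPD
print(round(tbw_to_dwpd(4000, 0.8), 2))  # 800GB datacenter TLC: ~2.74 DWPD
```

The same 600 TBW rating lands in very different DWPD classes depending on capacity, which is exactly why TBW alone is so hard to compare.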
Christ, what’s the standard disk usage for consumers? I don’t think I even do two or three whole disk writes a year with my SSDs. I think metrics like this just demonstrate to consumers how little most of us have to worry about endurance.
I think both are useful, though. When I’m looking at endurance, I don’t think “how many times can I fill this up” is necessarily as useful as “how long can I use this until it breaks”. I know that a big capacity drive will live longer than a small drive with the same NAND – I think that’s part of the reason people opt for large drives in the first place so it’s relevant to compare.
For consumers, endurance values are really good now. I do think that most drives have enough endurance not to worry about it. But we’re also talking about heavy-use scenarios, especially when comparing to Optane.
My point was that even in most enterprise scenarios, NAND Flash has come a long way in terms of endurance. And especially for the L1T audience, not having to rely on MLC or Optane for your Unraid cache SSD, ZFS SLOG, or scratch drive for heavy writes is entirely doable with datacenter NAND drives that are miles away from Optane prices.