Just on NAND endurance… outside of extreme edge cases… it simply isn’t an issue for almost anybody.
Most people simply don’t generate or modify enough data fast enough for it to be a concern.
The Optane speed advantage can be mitigated somewhat by bigger RAM caches, and the number of people using Optane as non-volatile system memory is tiny (probably due to the platform lock-in BS).
It’s cool tech, but the cost and limited addressable market, plus the platform lock-in for the advantages it had, killed it. It just didn’t solve enough problems that people weren’t already solving other ways for less.
I’m referring to the data sheets of their datacenter products. Kioxia still has SLC drives with 60 DWPD, and Micron has 12TB drives with 60PB endurance (along with very good PCIe 4.0 characteristics and IOPS), which works out to roughly 3 DWPD, but capacity and wear leveling make up for it.
I wouldn’t use them in a DIMM slot like PMEM, but for storage purposes, the case where you wear out 60PB is very special.
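The 60PB-to-DWPD conversion above is easy to sanity-check. A minimal sketch, assuming a 5-year warranty window (a typical term for datacenter drives, not stated in the post):

```python
# Back-of-envelope check: does 60PB endurance on a 12TB drive
# really come out to ~3 DWPD? Assumes a 5-year warranty window.
capacity_tb = 12.0     # drive capacity in TB
endurance_pb = 60.0    # rated endurance in PB written
warranty_years = 5     # assumed warranty term

endurance_tbw = endurance_pb * 1000          # 60 PB = 60,000 TBW
full_writes = endurance_tbw / capacity_tb    # total full-drive writes
dwpd = full_writes / (warranty_years * 365)  # drive writes per day

print(f"{full_writes:.0f} full writes, {dwpd:.2f} DWPD")
```

So the rating allows 5,000 complete fills of the drive, which over five years is about 2.7 drive writes per day — hence the rounded “3 DWPD” figure.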
Yup, I have a ye olde Samsung 860 Pro, and it was my main OS drive for a while with whatever I wanted fast on it. I haven’t checked, but I suspect it’s still in the high 90%s of estimated life left; I should check. It’s just a random storage drive now, not even used much, as I have multiple M.2 drives to fill what it did previously. It’s just on show in the case, as it’s a good-looking 2.5" drive.
Nothing I do in my PC’s practical life will wear out any solid state drives I have. I’m more worried about the still-existing spinning rust; I need to move that to some cheap flash.
I imagine that as a multi-tier company like Intel, you can’t really compete against opponents who specialize in that area. Intel should focus on good CPUs, especially now that AMD is eating their lunch.
This reminds me of every time Apple killed one of its accessories: the AirPort station, printers, etc. Although the AirPort was an extremely dumb router, it was the most “forget there is a router” experience. They didn’t have the luxury of competing against companies whose sole business is SOHO network equipment.
They totally could have sold a lot of it if they weren’t trying to be assholes with the restrictions on it; the price would have come down with economies of scale.
Interesting, DWPD is another metric to add to MTBF and TBW in my list of things to remember when comparing drives:

- TBW - dependent on size of drive
- DWPD - dependent on length of warranty
- MTBF - dependent on unknown assumptions, may vary by manufacturer?
I understand that in most cases even TLC can be fairly safe to use endurance-wise, but I do very much worry about accidentally misconfiguring or programming the equivalent of a filesystem fork bomb.
For ripped CDs/DVDs/Blu-rays, or games and music that can be re-downloaded, this is not a concern, unless the original discs are damaged or the online store goes bankrupt.
For my irreplaceable personal data files, however, I almost prefer SATA or PCIe 3; the faster the interface, the faster you can burn through that limited TBW/DWPD rating.
I want “fix it and forget it” hardware; let the endless rat race or cat-and-mouse affair go on in software or compute, but let us have a rock-solid foundation for local data storage, I say.
Maybe, but I previously found speculation that Xeon was subsidising the cost of Optane; partly as an explanation for why we never saw Micron’s QuantX:
I can only hope that if 3D XPoint is being permanently abandoned that we eventually find something better, and are not stuck with NAND for a quarter century.
Yes, didn’t Wendell do a video at one point using it with AMD StoreMI/Enmotus FuzeDrive and come to the conclusion that it was dramatically better than the use case that Intel actually intended?
And actually, isn’t Intel Optane the reason they hired Allyn Malventano? One of the good things about PCPer back in the day was that Allyn seemed to know more about storage than any other adjacent pundit.
As a consumer, I find DWPD (full disk writes per day) more intuitive and meaningful to use, as long as we agree on, let’s say, 5 years of typical use.
Is 600 TBW good? Well, yes and no. For a WD Blue 4TB SATA SSD, it’s really, really bad. But for a 1TB Samsung 980 Pro, it’s really good by consumer standards. Modern 800GB datacenter drives have something like 4000 TBW on TLC. All three drives are listed as TLC.
TBW isn’t useful by itself; you always have to check drive capacity and do calculations to see if it’s good. With varying drive capacities, this gets increasingly difficult.
DWPD lets consumers spot the endurance bad apples more transparently and compare drives. But this is consumer land, and companies don’t want to present bad figures or be easily compared to the competition, so they stick to TBW.
Christ, what’s the standard disk usage for consumers? I don’t think I even do two or three whole disk writes a year with my SSDs. I think metrics like this just demonstrate to consumers how little most of us have to worry about endurance.
I think both are useful, though. When I’m looking at endurance, I don’t think “how many times can I fill this up” is necessarily as useful as “how long can I use this until it breaks”. I know that a big-capacity drive will live longer than a small drive with the same NAND – I think that’s part of the reason people opt for large drives in the first place, so it’s relevant to compare.
For consumers, endurance values are really good now. I do think that most drives are enduring enough to not worry about it. But we’re also talking about heavy use scenarios especially comparing to Optane.
My point was that even in most enterprise scenarios, NAND flash has come a long way in endurance. And especially for the L1T audience, not having to rely on MLC or Optane for your Unraid cache SSD, ZFS SLOG, or a scratch drive for heavy writes is entirely doable with NAND flash (datacenter) drives that are miles away from Optane prices.
In typical consumer use I haven’t even killed the Intel SSD 310 80GB mSATA drive (from 2011) in my ancient W520 ThinkPad (HD Sentinel says it has 84% life left), so like hell I’m gonna kill the used enterprise drives that have 99%+ of their life left. Which is exactly why I buy them.
Good old MLC. Basically unbreakable. I have a 120GB consumer drive with MLC that won’t die no matter how hard I try. That’s why TLC and QLC have such a bad reputation.
Modern remnants of the MLC age are still rated for 60 DWPD, with way less capacity than TLC and probably worse performance. But your average consumer could use this in a DIMM slot and be fine with it.
The fastest full-drive write I’ve seen from Micron 3D TLC is 2GB/s, from the FireCuda 530. But this write performance should stay constant even on a cheaper PCIe 3.0-only drive. The MLC 970 Pro is still faster.
Typically I do large writes for application installs, and 80%-plus is reads.
Video editing (as a common consumer workload) excepted, but even that “common” consumer workload covers maybe 1-5% of the consumer userbase, and a smaller fraction of those are doing anything above 1080p, which requires large amounts of high-speed write.
Almost all home users won’t outrun a SATA SSD with their workload.
Edit: Unfortunately, in the M.2 form factor that only goes up to a max of 2.2GB/s. You’d have to jury-rig an M.2 to SlimSAS to U.2 adapter to get the most speed out of it.