Anyone else having failure issues with "SanDisk SSD Plus"? (2TB)

Not sure if this is a crazy coincidence, or whether there is a hardware issue with this model/series.

I have 3 out of 3 failures after 1 year of usage (all bought in the same year) for the “SanDisk SSD Plus” 2TB model. All 3 died at almost the same time, with the same failure mode (undetectable by the BIOS).

1 in a NAS last week
2 in a gaming PC yesterday (RAID 1 death!!!)

These were on 2 separate machines, each with its own separate UPS.

I have 20+ SSDs of various ages and models in various other machines at home, with no issues. (I run my own home rack, so that number does stack up.)

The most frustrating is the RAID 1 failure, as all the data on it is now potentially gone (yes, yes, 3-2-1, but this was just a gaming PC, so no real sentimental data was lost).

Wanting to see if others have faced similar issues with this drive and perhaps wrote it off, since it was only 1 drive (even if it was <2 years old).

Or is there potentially a manufacturing issue with a batch of these, and if so, how do I validate that?

If we assume a 1% AFR for an SSD, three independent failures in the same year is roughly a one-in-a-million event (0.0001%).
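For reference, a quick back-of-the-envelope version of that calculation (a minimal sketch, assuming the failures are independent and a flat 1% AFR):

```python
# Probability that 3 independent drives, each with a 1% annual
# failure rate (AFR), all fail within the same year.
afr = 0.01
p_all_three = afr ** 3  # ~1e-06, i.e. about 0.0001%, one in a million

print(f"{p_all_three:.1e}")          # prints "1.0e-06"
print(f"{p_all_three * 100:.4f}%")   # prints "0.0001%"
```

Of course, drives from the same batch are rarely independent (shared firmware, shared controller, shared PSU), which is exactly why three-for-three points at a common cause.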

I stopped using those SSD Plus drives due to failures and changed to Crucial / other drives.

Can you share more details?

Such as your usage timeframe and type? And how many you bought / how many of this series are still alive? That way we can see if there is a clear problem trend for this model.

I really hope it’s not a 40k-hour bug, “consumer edition”.

Had two die within a year in a gaming PC, not heavy use, just seemingly total death. I didn’t want to deal with that anymore, so I moved on. Sorry for the lack of detail; I think I still have one of the drives around somewhere, but it’s totally dead. Maybe a controller or power failure, I didn’t investigate further.


Yikes, another total loss within a year, with similar symptoms.

Similar use case, not heavy use, gaming PC. I’m guessing nearly always on, 24/7?

Anyway, thanks for the details. I think if there is enough confirmation of similar experiences, I might just throw one of my dead drives at a recovery lab (and bite the bullet on the bill) to see what the cause of death was.


OK, more anecdotes of high failure rates.

Especially concerning since this model is still on the market and is frequently on sale.

I have one (240GB) and it works fine, but it’s slow (to be expected). It’s used in very light-load scenarios (booting LibreELEC) and has worked fine for that purpose for years. Occasionally it craps out, around 2 times a year, but I’m not sure if the USB-SATA adapter or the SSD is to blame. It happens so infrequently I haven’t bothered to look.

While I expect products to work, they’re obviously made to a cost, and you already know you’re not really getting “premium grade” hardware just by looking at the performance. If you want something cheap and SATA, I’d recommend something that uses the Marvell 88SS1074 controller and TLC memory; that seems to be a solid combo as far as SATA devices go (and has decent performance). However, you can probably find cheaper NVMe drives these days. If you “feel lucky”, I guess Samsung QVO SSDs might be an option, but I don’t think it’s worth the difference in price.

Don’t even bother with DRAM-less SSDs; they’re e-waste except for very specific applications.


I mean, it’s not like I’ve said that you should avoid SanDisk/Toshiba NAND due to random failures :see_no_evil: for anything remotely important. It’s cheap, but most of the time it’s pretty clear what you should expect from those drives.

Not much to discuss here; it’s a known issue with Toshiba/SanDisk NAND, it has random failures. Some people guess it might be an nVidia nForce-chipset kind of situation, although IMO the variation of affected manufacturers is too large. The best part is that the SSD Plus doesn’t even hit a top-3 spot for random failures; we have the Kingston A400, Intel 605p, and WD Green there, and 2 of those drives share the same NAND manufacturer…


Urgh, please don’t talk about QLC; in its current shape it’s a scam.

While DRAM-less drives tend to be lower quality, there are situations where they make more sense. But overall, if you have the budget and need a SATA drive, just get something like a Crucial MX500 or ADATA SU800.


I’ve had a very good experience with the WD Blue 3D and SanDisk Extreme Pro (which is essentially the same drive), even abusing a bunch of them under ZFS, but they’re for sure not enterprise grade.

I know everything is moving to solid state these days… But I personally trust spinning rust (HDD) storage more than solid state drives.

It’s very rare to have an HDD destroy itself so badly that you can’t salvage the data off the platters, at least in my experience. I’ve been using them for nearly 40 years now and haven’t lost any data on HDDs. Though I’ve had drives fail over time, none of them were ever so far gone that I lost data before I was able to transfer it off.

By contrast, when an SSD goes, it’s typically just gone, and it can be very sudden. They don’t give the same warning signs that you get from HDDs.

So I still use HDDs for long-term storage of any kind.


That’s why it’s called a random failure: you never know when it’s coming. Personally, I have, I think, 3 Kingston A400s that are still alive after years of use, but I’ve encountered a lot of dead ones too, so always back up your data.

In perfect conditions the average HDD will last longer than the average SSD, but in real-world usage I’d argue that both are very unpredictable.

As for data loss, if an HDD is dying you can usually pull the data off of it before it completely dies, and even when it has a head failure or PCB failure, you can often still recover the data. With SSDs, after some failures your data is physically gone, so there is no way to get it back.

Yeah, I do get the sentiment of not trusting these for “important stuff”, and I’m of the same opinion. (Used them mostly for a gaming computer’s OS and Steam drive, so losing them is “OK”.)

The issue is more the failure rate: we have 3 separate user reports of a 100% failure rate within a year of continuous use of 3-5 disks (always-on computers).

Not with ZFS, not in some sustained-write use case.

That is in no way acceptable, even for low-end consumer drives. There are even cheaper drives that do better by this metric (e.g. TeamGroup).

In my experience, avoid Toshiba/SanDisk NAND whenever possible to minimize random data loss. But yeah, IMO it’s not acceptable to have a 100% failure rate within a year for some users. Personally, I’ve only had a few pass through my hands, so I can’t comment on real-world failure rates; I might need to write to a few contacts in retail.

Update: I decided to send one drive to a data recovery specialist, so that I can find out the failed component without speculating between the NAND, controller, or power delivery :sweat_smile:

Will find out in a week.

(It’s really not worth the cost for the data, but I would go crazy speculating otherwise.)


I appreciate the urge to know :slight_smile:

I suppose the one you sent in was from the failed RAID 1? Do you know whether they failed at the same instant, or over a longer timespan? If they failed at the same instant, it’s unlikely to be random failures, but rather something triggered by a deterministic process, which makes it even more interesting to find out the cause. Thanks for taking on the cost and effort to investigate!
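To put a rough number on “unlikely to be random”: a sketch assuming independent failures, a flat 1% AFR, and treating a shared one-hour window as “the same instant” (all assumptions for illustration, not data from this thread):

```python
# Chance that two independent drives with a 1% AFR both fail
# within the same one-hour window of a given year.
afr = 0.01
hours_per_year = 365 * 24  # 8760

# Probability a given drive fails during one specific hour,
# assuming the failure moment is uniform over the year.
p_per_hour = afr / hours_per_year

# Drive A fails at some point (~afr); drive B fails in that same hour.
p_same_hour = afr * p_per_hour

print(f"{p_same_hour:.1e}")  # prints "1.1e-08" -- about 1 in 90 million
```

So two simultaneous deaths by pure chance are vanishingly unlikely; a shared trigger (power event, firmware counter, identical wear) is the far better bet.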

Yes, it was one of the failed RAID 1 drives that was sent in. Both died at the same instant.

Hence the desire to know what happened as well :rofl:


UPDATE FROM DATA RECOVERY: It seems the SSD has a really, really poor-quality data/power controller and can be far too sensitive to power spikes. Unsure if this is within or out of spec.

It’s also somewhat consistent with what @stratego shared: that they used really poor-quality controllers.

What seems likely is that some voltage/current spike (maybe the new RTX 3070?), transient load, bad PSU, or something similar blew the SSD.

That explains why two SSDs could fail together: both being the same model, they had the same “poor tolerance” to the same spike.

That being said, it is suspected (but not confirmed) that the drives failed near the boundary of the current PSU specification, because the Samsung SSD on the same power rail had no issues.

So that leaves us with two possible scenarios:

  1. I now have a really bad PSU, which may occasionally drift slightly out of spec, and the first SSD failed by what seems like a separate coincidence. This explanation does not seem impossible by the odds (still low), and since most of my parts are of reasonably good quality, they handled it without issue.

  2. These SSDs run at the borderline of the ATX PSU specification, or may even be below it. This would explain the elevated failure rates informally reported by several users. It would also explain the reports of multiple back-to-back failures if they were plugged into a poor PSU (or poor power cable) running at the edge of the specification.

If you are running with a good PSU and good power cable connectors, these SSDs might be OK?

To be clear, I don’t think it’s a PSU failure, because many more things would have died otherwise. It could very well be a little bit of both 1 and 2. Heck, we’re in territory where it could have been cat fur in the PSU that caused the power spike :rofl:

But I think it’s safe to say these SSDs run at far worse tolerances than other typical SSDs.

However, it is uncertain whether this violates the specification (a real design flaw that needs a recall) or merely meets it at the bare minimum (and is easily killed by small, random out-of-spec events), short of factory-testing more SSDs lol
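For context on “the edge of the specification”: the ATX spec allows a ±5% tolerance on the main DC output rails, so the 5 V rail feeding a SATA drive must stay within 4.75-5.25 V. A trivial sketch of that bound (the example voltages are made up):

```python
# ATX main DC rails carry a +/-5% tolerance, e.g. 4.75-5.25 V on
# the 5 V rail that powers SATA SSDs.
def within_atx_spec(nominal: float, measured: float, tol: float = 0.05) -> bool:
    """Return True if the measured rail voltage is within +/-tol of nominal."""
    return abs(measured - nominal) <= nominal * tol

print(within_atx_spec(5.0, 5.20))  # True  -- in spec, but close to the edge
print(within_atx_spec(5.0, 5.30))  # False -- out of spec; a marginal drive
                                   #          controller may not survive this
```

A drive designed with no margin beyond those limits would behave exactly as described above: fine on a clean PSU, dead after one transient that a better-built drive shrugs off.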


Note to self: in the future, if you are going to pair this with another low-quality SSD, apply the advice we use in datacenters and mix up the brands. Pair it with a TeamGroup drive or something (since you were probably aiming for a budget build).

Or perhaps, once again, back up to another system :sweat_smile:
