Looking for a high endurance SATA SSD for metadata storage

So, I currently have a consumer-based mindset of only buying consumer MLC for metadata storage in a ZFS pool, and that has me considering two 2TB 860 Pros from the used market, run as a mirror, for metadata.

I have no clue about enterprise SATA options with better endurance, or ones better optimized for metadata I/O (I’m sticking with SATA since I don’t have much to spend on HBAs). Which drive can I buy two identical units of to serve as a SATA metadata mirror? My plan is 6x 16TB Exos X16s alongside this.

I would also like M.2 SATA HBA recommendations, ideally one that doesn’t use port multipliers.

If you have the PCIe slots or bifurcation available, most U.2 enterprise drives will grossly outlast any consumer flash storage.

Otherwise, there is enterprise SATA flash from Samsung, Micron, etc.

The Kingston DC600M is the only enterprise SATA SSD I know of.

Consider a 50/50 mix of brands/manufacturers, for paranoia reasons.


pffft, when do Seagates fail?..

the Exos is beating the WD ALE6L0s at 16TB, huh… did not expect that

The 860 Pro is a good option, as is the SM883. The older 850 Pro and SM863/a are also worth a look.

I’m planning to get Manufacturer Recertified Exos drives from Wendell’s source.


I’m getting massively different quotes for this MAX series at 1.92TB. It’s not plentiful on the used market either.

The generation before (5300 MAX) seems to perform better at the same TBW, so I think I might just go for that.


This is the second part of the puzzle, because I need a single four-lane M.2 slot to handle all the SATA connectivity for the JBOD and ZFS (I cannot bifurcate).

Hmm, the 5400 MAX series is still available locally and prices are not bad (960GB model). But we are late to the party; 08/2023 was an excellent time to buy brand new.

Prices are for a brand-new drive from a retail distributor, with 21% VAT included.

Pretty sweet pricing for a 5 DWPD-rated drive.

Same for larger capacities; the 1.92 TB variant used to cost less than the 960GB one does right now.


More often than any other brand on the list:
[image: drive failure rates by brand]


Or not. @TryTwiceMedia has been debunked on the multiple error pathways here and is still pushing the misinfo anyway, so I’ll just leave it at the link.


Since you did not describe your workload and you are going for RAIDZ2 with 6x 16TB drives, I will just assume your workload is mostly Linux ISOs and that you will only store metadata, not small files :wink:

The requirements for I/O performance and endurance are extremely low.
Heck, I can’t even think of another use case with lower requirements for an SSD! Even trash like QLC would be fine! Sure, some SSDs will have lower latency, but that is almost irrelevant for your setup.
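To put rough numbers on that (back-of-envelope only; the ~0.3% metadata ratio and the 100GB/day ingest below are assumptions for illustration, not measurements from any real pool):

```python
# Back-of-envelope endurance math for a metadata-only special vdev.
# ASSUMPTIONS: metadata is ~0.3% of logical pool data (varies with
# recordsize), and the pool ingests ~100 GB of new data per day.

POOL_DATA_TB = 4 * 16 * 0.9    # 6x16TB RAIDZ2 -> 4 data drives, ~90% filled
METADATA_RATIO = 0.003         # rough rule of thumb, tune to your data
DAILY_INGEST_GB = 100          # assumed new data per day

metadata_size_gb = POOL_DATA_TB * 1000 * METADATA_RATIO
daily_metadata_gb = DAILY_INGEST_GB * METADATA_RATIO

SSD_TBW = 600                  # typical rating for a 1TB consumer TLC drive
years_to_tbw = SSD_TBW * 1000 / daily_metadata_gb / 365

print(f"metadata footprint: ~{metadata_size_gb:.0f} GB")       # ~173 GB
print(f"metadata writes:    ~{daily_metadata_gb:.1f} GB/day")  # ~0.3 GB/day
print(f"years to hit TBW:   ~{years_to_tbw:,.0f}")             # thousands
```

Even if both assumptions are off by two orders of magnitude, the SSD dies of old age long before it wears out.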

That is why I would get the cheapest and worst 128GB SSDs you can get your hands on.
Even used ones from eBay.

BUT, I would use two different vendors and two different controllers (a lot of them use Phison) so that a single firmware bug cannot affect both drives. We have had broken firmware on ADATA SSDs with Phison controllers in the past. Or, even better, if you have the ports, use a 3-way mirror.


QFT

A metadata vdev sees the fewest writes of any drive in a ZFS pool. It just serves reads when your ARC is too small and sits idle 99.9% of the time. Writes are trivial… any old 250GB SATA drive will be plenty. Your HDDs will wear out ten times over before any crap consumer SATA SSD will.

Get cheap, reliable drives and put the money into more useful stuff. Once you write PBs of data a day (and no, no HDD is rated for that kind of workload), then we’ll talk about “SSD endurance”.

When it has to be a SATA SSD, I’m a big fan of the Kingston DC600M Data Center Series Mixed-Use SSDs:

  • Power-loss protection

  • Energy-efficient (especially for an SSD with proper power-loss protection)

I’ve been using these to record video from Blackmagic cameras, with absolutely zero issues from 0–99% drive usage.

Cough:


That’s quite an extensive thread. I need an OOTB solution, so I would absolutely buy a blessed controller/PCB combo from the Level1 store if it were available. I just need 6 ports for the HDD array, and I’ll put the SSDs on the B650 chipset. The only other device I might add is a slim UHD BD drive, by dremeling the case to convert the slot-load ODD cutout to fit a tray ODD.

I could simply buy a 960GB drive for metadata storage then, as opposed to a 2TB one. The 2TB capacity was only about getting a higher TBW rating, based on my understanding of consumer SSDs.

TLDR:

  • Look for an M.2 6x SATA adapter with the ASM1166 chipset and a reinforced backplate that gives it more structural integrity.

  • Update the chipset’s firmware to the latest public version (search in thread).

  • Plug SATA cables into the M.2 adapter carefully so you don’t bend the PCB.

  • Be sure to have some airflow over the adapter.

  • Test the adapter for a day or so, for example by overwriting every block of the 6 connected HDDs before filling them up with your data (that should be done with any HBA).

  • Check all HDDs’ SMART data before and after that complete overwrite process, looking out especially for C7 (CRC) errors; see the sketch after this list.
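And here is a minimal sketch of that before/after SMART check, assuming smartmontools 7+ (for JSON output), root privileges, and that the six HDDs show up as /dev/sda through /dev/sdf; adjust the device list to your system:

```python
# Compare SMART attribute 199 (0xC7, UDMA_CRC_Error_Count) before and
# after a full overwrite pass (e.g. `badblocks -wsv /dev/sdX` on
# still-empty drives). Requires smartmontools >= 7.0, run as root.
import json
import subprocess

DEVICES = [f"/dev/sd{c}" for c in "abcdef"]  # ASSUMPTION: your six HDDs

def crc_errors(dev: str) -> int:
    """Return the raw value of SMART attribute 199 for a SATA drive."""
    out = subprocess.run(
        ["smartctl", "-A", "--json", dev],
        capture_output=True, text=True, check=True,
    ).stdout
    for attr in json.loads(out)["ata_smart_attributes"]["table"]:
        if attr["id"] == 199:          # C7 hex = 199 decimal
            return attr["raw"]["value"]
    return 0                           # attribute not reported by drive

before = {dev: crc_errors(dev) for dev in DEVICES}
input("Run the overwrite pass now, then press Enter to re-check... ")
for dev in DEVICES:
    delta = crc_errors(dev) - before[dev]
    status = "OK" if delta == 0 else f"{delta} new C7 errors, check cabling!"
    print(f"{dev}: {status}")
```

New C7/199 errors almost always point at the cable or the adapter rather than the disk, which is exactly what you want to catch before trusting the card.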

Enterprise SSDs tend to start at 1 DWPD for read-intensive drives and go up from there for mixed-use or write-intensive drives, even with TLC.

E.g., the Intel/Solidigm S4520 has 2.5–3 DWPD depending on the size.

Once you see the kind of endurance enterprise SSDs have, consumer SSDs (including MLC) start looking like toys.

Endurance was a concern in ages past. Today TBW/DWPD ratings are so high that even consumer drives outlast older enterprise drives. 1 DWPD on a 4TB drive means writing 4TB each day, 365 days a year, for 5 years straight to hit that number.
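Spelled out, since that arithmetic is the whole argument (assuming the usual 5-year warranty window):

```python
# Total writes needed to exhaust a 1 DWPD rating on a 4TB drive.
capacity_tb, dwpd, warranty_years = 4, 1, 5
tbw = capacity_tb * dwpd * 365 * warranty_years
print(f"{tbw} TBW")  # 7300 TBW, i.e. ~7.3 PB of writes
```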

99% of the time, you won’t need 10% of that.

Capacity increases and general improvements have made SSDs basically indestructible, the exceptions being cheap crap SSDs and special use cases where you write multiple TB to each disk on a daily basis. And even writing 10TB a day to an array of 10 drives is still just 1TB per drive.
And you can always use overprovisioning and/or namespaces to increase your DWPD. You don’t have to stick with the stock overprovisioning values; manufacturers even tell you how to tweak them for your needs.
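As a simplified illustration of the overprovisioning point (made-up numbers; it also ignores that extra spare area lowers write amplification, which makes the real-world gain even bigger):

```python
# Effective DWPD when usable capacity shrinks (via HPA, a smaller
# partition, or a resized NVMe namespace) while rated TBW stays fixed.
rated_tbw = 7300        # drive's rated endurance in TB written
warranty_years = 5

for usable_tb in (4.0, 3.2, 2.0):  # stock vs. two overprovisioned setups
    dwpd = rated_tbw / (usable_tb * 365 * warranty_years)
    print(f"{usable_tb} TB usable -> {dwpd:.2f} DWPD")
```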

TL;DR: SSDs don’t wear out.


Yeah. The most-used SSD or NVMe drive I have at the moment under a consumer/homelab type workload is seven years old and at 76% life.

Current consumer TLC is mostly ~0.33 DWPD, sometimes a bit over 0.4 DWPD. At work our most-used drives see ~0.1 DWPD, so paying 2x+ for enterprise NAND is unattractive.
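That ~0.33 figure falls straight out of a typical consumer TLC spec sheet; 600 TBW on a 1TB drive with a 5-year warranty is a common rating (check your model’s datasheet, this is just the generic case):

```python
# DWPD implied by a typical consumer TLC endurance rating.
tbw, capacity_tb, warranty_years = 600, 1, 5
dwpd = tbw / (capacity_tb * 365 * warranty_years)
print(f"{dwpd:.2f} DWPD")  # ~0.33
```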
