SAS SSDs (when and why)

What are the situations in which one would choose SAS SSDs over NVMe devices? I can basically only think of one: you’re out of PCIe lanes / you can get a lot more SAS devices in one node than PCIe devices, generally. Are there other use cases that I’m missing? More recent SAS standards are quite fast, so it seems like not a bad option overall, but I rarely hear of SAS SSDs being selected.

They can sometimes be cheaper as well, but I would say it’s mostly that 4 or 8 PCIe lanes can become 8 or 16 or 24 or even 36 SAS devices, with each one able to hit its rated bandwidth in isolation, on a <$100 PCIe card, as opposed to a $400 PCIe card to turn 16 lanes into 8 x4 devices.
NVMe is cool and all, but big corpos have made it really, really suck.
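That fan-out arithmetic can be sketched in a few lines. All the figures below are illustrative assumptions (an x8 PCIe 3.0 HBA uplink feeding 24 12G SAS ports through an expander), not specs of any particular card:

```python
# Back-of-envelope HBA fan-out math (illustrative numbers, assumed):
# a SAS HBA turns a few PCIe lanes into many device ports; each port is
# slower than x4 NVMe, but plenty for one SSD running in isolation.

PCIE3_GBPS_PER_LANE = 0.985  # ~GB/s usable per PCIe 3.0 lane
SAS3_GBPS_PER_PORT = 1.2     # 12Gbit SAS-3 ~= 1.2 GB/s per port

hba_lanes = 8    # x8 uplink to the host
sas_ports = 24   # ports behind an expander

uplink = hba_lanes * PCIE3_GBPS_PER_LANE          # total host bandwidth
per_drive_isolated = min(SAS3_GBPS_PER_PORT, uplink)  # one drive busy
per_drive_all_busy = uplink / sas_ports               # all drives busy

print(f"HBA uplink:        {uplink:.2f} GB/s")
print(f"One drive alone:   {per_drive_isolated:.2f} GB/s")
print(f"All {sas_ports} drives busy: {per_drive_all_busy:.2f} GB/s each")
```

So each drive can hit its full rated speed alone, and the x8 uplink only becomes the bottleneck when many drives stream at once.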


There is a huge installed base that runs on SATA and SAS SSDs. That’s probably the majority of people interested in those drives. And SSDs are fast and good enough for the job they are doing.

Sure, 24Gbit SAS is nice and all…but it can’t keep up with PCIe bandwidth which is 32Gbit per lane atm with PCIe5.
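For rough numbers, comparing raw line rates (encoding overhead ignored; the figures below are assumptions for illustration, not measurements):

```python
# Link-rate comparison: one "24G" SAS-4 lane vs a typical x4 NVMe drive.
SAS4_GBIT = 22.5            # SAS-4 ("24G") line rate per lane, Gbit/s
PCIE5_GBIT_PER_LANE = 32    # PCIe 5.0 runs at 32 GT/s per lane

x4_nvme_gbit = 4 * PCIE5_GBIT_PER_LANE  # common x4 NVMe drive link

print(f"SAS-4 link:       {SAS4_GBIT} Gbit/s")
print(f"x4 PCIe 5.0 NVMe: {x4_nvme_gbit} Gbit/s")
```

A single SAS-4 link doesn't even match one PCIe 5.0 lane, let alone the four lanes a typical NVMe drive gets.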

I’m not that much into SAS SSDs, but I don’t see any major development for SATA/SAS. We’ve had 32TB NVMe drives for almost 2 years now, with 60TB approaching next year. SATA/SAS feels more like EOL and legacy support when I check manufacturers and products.

It’s a bit like 15k RPM SAS HDDs…yeah we still sell them but you should really move to SSDs soon and that’s where the future is.

From a home server perspective…we’re mostly dealing with 24/28 lane mainboards (or less → RPi/miniPC/embedded). I have my special metadata vdev and boot drives all on SATA drives. Works great and I don’t waste precious PCIe lanes.

And if you want capacity…you can fit 5 NVMe drives in your average board, more if you sacrifice your x16 slot. With drive capacities available up to 32TB, that’s a lot of NVMe possible if you really want it. And I doubt SATA/SAS can compete in performance. Enterprise SATA/SAS vs NVMe is same pricing. Expensive part is the NAND, not the controller.

I see drives just as the market does…HDDs still going strong and remaining king of $/TB, SATA/SAS being phased out, NVMe for anything new that isn’t HDD.

Pretty much 100% spot on analysis, just want to add that within five years NVMe SSDs will reign supreme simply because HDD capacity is tapering off and this is due to multiple reasons:

  • New tech like HAMR / MAMR is still not out on the market, and is expected to carry a premium at the start ($30-$50 per TB). If critical sales volume is reached for mass production, this can still plummet to $5-$10 per TB though.
  • Even with HAMR, the theoretical max for the next 10 years is 120TB, while Samsung has announced E1.L drives of 256TB coming out in 2026 or so
  • 3.5" drives are just no longer space efficient enough when you can get 32TB E1.L NVMe drives that take up half the volume
  • 3.5" HDDs take three times the power budget per TB of an NVMe SSD setup of equivalent capacity
  • SSD price/TB is halving every 18 months right now, from $200 in 2020 to $50 in 2023 to (probably) sub-$20 in 2025.
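The halving trajectory in that last bullet can be written as a simple curve. This is purely illustrative, using the post’s own start price and cadence, not a forecast:

```python
def price_per_tb(year, start=200.0, start_year=2020, halving_years=1.5):
    """$/TB under a halve-every-18-months curve (illustrative assumption)."""
    return start * 0.5 ** ((year - start_year) / halving_years)

print(round(price_per_tb(2023), 1))  # matches the $50-in-2023 data point
print(round(price_per_tb(2025), 1))  # lands just under $20
```

The 2023 point falls exactly on the curve, and extrapolating the same rate to 2025 gives just under $20/TB, consistent with the "probably sub-$20" guess.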

In the enterprise space, usually the earliest adopter of HDD technology, even HAMR drives no longer make much sense except for specific niche use cases (like cold storage).

For consumers, SATA is already a legacy interface and has moved away from desktop computers almost entirely. Old SATA drives are still around, but most consumers are happy to throw out their old 1TB hard drive for a newfangled $80 2TB M.2 NVMe drive now. Some will keep their old 4TB and 8TB HDDs for a while longer, but it is just a matter of time before 8TB M.2 SSDs become affordable.

The only exception is consumer NAS, where HDDs are still a viable option, but here new devices like my favorite NAS solution, the Asustor Flashstor with 6-12 M.2 bays, are pushing the envelope and making all-SSD NAS an affordable option already, though not the best solution for everyone. At low capacities, an all-SSD 6x4TB Flashstor build costs ~$1.5k and provides 20TB of redundant storage. In comparison, a 4x12TB HDD Synology build costs ~$1.2k for 24TB of mirrored storage. All-SSD storage is now viable for consumers, but it does not cover high-capacity storage (think 100TB) yet, nor is it the cheapest option. Both are bound to change.
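Sanity-checking the usable $/TB of those two example builds (prices and capacities are the post’s rough figures, assumed, not current quotes):

```python
# Usable-capacity cost comparison of the two example NAS builds.
ssd_cost, ssd_usable_tb = 1500, 20   # 6x4TB all-SSD Flashstor, redundant
hdd_cost, hdd_usable_tb = 1200, 24   # 4x12TB HDD Synology, mirrored

ssd_per_tb = ssd_cost / ssd_usable_tb
hdd_per_tb = hdd_cost / hdd_usable_tb

print(f"All-SSD: ${ssd_per_tb:.0f}/TB usable")
print(f"HDD:     ${hdd_per_tb:.0f}/TB usable")
```

So the flash build currently runs about 1.5x the HDD build per usable TB, close enough that the gap keeps shrinking as NAND prices fall.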

HDDs are therefore being squeezed both from above and below. They will likely be around for another 10-15 years, but the writing is on the wall now and HDDs will most likely disappear from mainstream store shelves before 2030. You can still find floppy disks and burnable CDs, so I do not believe HDDs will disappear completely, but I do not think HDDs will ever get bigger than about 80TB either. After that, the R&D costs just do not make sense any more.

As a mainstream medium, HDDs, and subsequently SAS and SATA, are being phased out. But I think Pops says it best about the current HDD situation:

Short version:
I’d only go for SAS SSDs when you can get good surplus ones extremely cheaply and have a controller to handle them already.

It might be a slightly different answer in an enterprise context, but then you wouldn’t be asking here. If you want cheap, big and fast, you won’t buy a new SAS SSD to get it.

Longer version:
AFAIK the market has spoken and judged SAS SSDs a niche that only ever made it into SANs or other dedicated storage appliances. And out of these some might still have enough of a second life left, if space and power are lesser concerns.

Server SSD form factors still undergo a rapid transformation but if you still have 2.5" drive cages today, they tend to support all three, SATA, SAS and PCIe for U.2 form factor NVMe drives.

What’s less visible is that then you have to wire the backplane to a controller/switch that matches what you want to put into them.

With PCIe you tend to need switches if you have more than a handful of drives and nothing EPYC for the host. Those switches effectively come included in SATA and SAS controllers, but traditionally at much lower bandwidths (plus storage protocol overhead) than what NVMe can do with four PCIe lanes per drive.

The latest LSI offshoots support all three protocols, but you may pay a ton of money for those before you can connect your first drive. Ever so slightly exaggerated: the €100 that buys you 2TB of NVMe today is what you’d pay for an empty 2.5" drive caddy on an official server vendor price list.

In other words: NVMe is so cheap these days, it beats SATA.

By the time you’ve got your first bit of SAS going, you’ve spent the price of at least 8TB of NVMe without a single bit of storage.

A SAS drive can also be connected to two controllers, for redundancy. When one controller dies, the system could switch to the second one.


Don’t dual port NVMe SSDs offer the same functionality?

Not sure, but hot-plugging on PCIe seems to be flaky in general. Would they always occupy twice the lanes to make it work?
