Adding a bunch of M.2 NVMe drives to a system

Hi Everyone,

New to the forums here.

I have been thinking about upgrading/building a new home server. My next incarnation will probably be an all-flash system. SAS/SATA was my original thought, as you can get controllers with a bunch of ports (16/24) and add SAS expanders to increase the number of drives. However, the NGFF/M.2 form factor also seems like a great way to go, as you can fit a lot of drives in a relatively small space.

What I am wondering is, what solutions exist that might allow (for instance) 30 M.2 NVMe drives to be attached to a single-processor system? I wouldn’t need the full performance of NVMe per se, so connecting each drive via a PCIe x1 connection would be sufficient. Multiple drives are needed from a capacity point of view. (Shooting for around 100 TB.)
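To make the sizing concrete, here's the rough back-of-the-envelope math I'm working from (a quick Python sketch; the 4 TB per-drive capacity is just an example figure, not a specific product):

```python
# Rough sizing math behind the question (the 4 TB per-drive figure is
# just an example size, not a specific product).
target_tb = 100
drive_tb = 4              # example M.2 drive capacity in TB
lanes_per_drive = 1       # a x1 link per drive would be plenty for my use

drives_needed = -(-target_tb // drive_tb)   # ceiling division -> 25 drives
lanes_needed = drives_needed * lanes_per_drive
print(drives_needed, lanes_needed)          # 25 drives, 25 PCIe lanes
```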

Are there adapters that let you use a PCIe x16 slot to connect 24 or 30 NVMe drives? Or are there PCIe “expanders” that fan a single x4 connection (e.g. SFF-8613) out to four or eight x4 connections?

This is more of a thought experiment right now.

Thanks.
-Brian

The MB873MP-B (8-bay M.2 NVMe SSD PCIe 4.0 mobile rack enclosure for an external 5.25" drive bay; 8 x OCuLink SFF-8612, no Tri-Mode support) puts eight NVMe drives in a single 5.25" case bay.
This solution requires 8 OCuLink ports on the mobo or an add-in card.
If you put multiple drives into one PCIe slot (such as with a riser card), the mobo or the add-in adapter has to support bifurcation; otherwise your system might not see more than one drive per slot.
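As a rough illustration of that limitation, a quick Python sketch (a passive riser with no on-card PCIe switch is assumed; the slot widths and card layout are made-up examples):

```python
# Why bifurcation matters with a plain passive riser (no PCIe switch on
# the card).  Slot widths and card layouts here are made-up examples.

def drives_seen(slot_lanes, drives_on_card, lanes_per_drive=4, bifurcation=False):
    """Without bifurcation the slot stays a single link, so only the first
    drive enumerates; with bifurcation it splits into
    slot_lanes // lanes_per_drive links."""
    if not bifurcation:
        return min(1, drives_on_card)
    return min(drives_on_card, slot_lanes // lanes_per_drive)

# x16 slot with a 4-drive passive M.2 riser:
print(drives_seen(16, 4, bifurcation=False))  # 1 -> only one drive shows up
print(drives_seen(16, 4, bifurcation=True))   # 4 -> x4/x4/x4/x4
```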
The cost is prohibitive in almost all non-business use cases, but it is nice to dream.

Probably the cheapest way to accommodate that many M.2 drives is to get two Apex Storage X16 carrier cards, but they are around $1,800 USD each.

Have you considered SAS SSDs? They are getting much faster and can be easier to deal with. I think the current fastest dual-port SAS SSDs can transfer at about 4.6 GB/s each, and they are easier and, more importantly, more reliable to connect to hosts than NVMe storage.

Maybe the Adaptec 1200p-32i or Broadcom 9600W-16e, depending on requirements. There’s not a lot of love for Broadcom here (see the never-ending 9500-16i threads), but several other 9600 parts are PCIe 4.0 x8 HBAs supporting 32 internal NVMe drives.

HighPoint has several x16-to-eight-M.2 options like the 1508, if less aggressive scale-out per slot is OK. I’ve found very little info on HighPoint, but the 1508 costs less per drive than the Apex X21 and, unlike the Apex X16, is shipping.

Rather than staying within a 4 TB-per-drive limit, I’d also consider larger options like Micron’s data center lines.

If you’re going for density, M.2 is on par with SATA. After factoring in adaptors, cabling, and enclosures, you get roughly the same density for both. Only U.2 goes higher.

A single 15 mm U.2 SSD from Solidigm (the D5-P5336 61.44 TB) would get you more than halfway there, and it occupies the footprint of 2 × 7 mm SATA or 4 × M.2 2280 SSDs. It’s unfortunate that NAND prices went up so quickly. It was $3,600 in December, and went up to $4,600 just last month.

Still, I reckon it’ll get you over 100 TB cheaper than 24–30 M.2 SSDs would (with the cost of adaptors, cables, and/or enclosures factored in). But you’re going to be hurting for bandwidth if you’re counting on that, since two of them combined would only be good enough to hit PCIe 4.0 x8 speeds. With two SSDs, you’re also limited in terms of RAID redundancy options/efficiency, if you were considering a bucket full of M.2 with that in mind.
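To put rough numbers on that (a quick Python sketch; the ~2 GB/s per PCIe 4.0 lane and x4-per-drive figures are ballpark assumptions):

```python
# Sanity check on the two-big-U.2-drives option.  Assumes ~2 GB/s of
# usable bandwidth per PCIe 4.0 lane and a x4 link per U.2 drive.
drive_tb = 61.44        # Solidigm D5-P5336 capacity per drive, TB
drives = 2
gbps_per_lane = 2.0     # rough usable GB/s per PCIe 4.0 lane
lanes_per_drive = 4

capacity_tb = drives * drive_tb                            # ~122.9 TB, clears 100 TB
aggregate_gbps = drives * lanes_per_drive * gbps_per_lane  # ~16 GB/s, i.e. PCIe 4.0 x8 worth
print(f"{capacity_tb:.1f} TB total, ~{aggregate_gbps:.0f} GB/s aggregate")
```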

Yeah. We can infer, I think, from the OP that one lane per 4 TB is OK and that at least one x16 slot is available. That sort of implies an upper bound of 4 TB/lane * 16 lanes = 64 TB per x16 slot, or 128 TB across two.

For example, eight 12.8 TB D7-P5620s would be right next to the 100 TB target at 102.4 TB. If there are two x16 slots supporting x4/x4/x4/x4 bifurcation, it could be done with the existing mobo and a couple of redrivers. If not, an x16-to-8x-U.2 adapter seems easier than 24–32 NVMe drives, but not as easy as picking up four 30 TB drives (or maybe 3 x 30 + 15).
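Spelling that budget out (a quick Python sketch under the same assumptions: roughly 4 TB per lane, x4 per U.2 drive, x16 slots bifurcated x4/x4/x4/x4):

```python
# Lane/capacity budget, spelled out.  Assumes ~4 TB of flash per PCIe lane
# and x16 slots bifurcated x4/x4/x4/x4 (four x4 drives per slot).
tb_per_lane = 4
lanes_per_slot = 16

print(tb_per_lane * lanes_per_slot)        # 64  -> TB upper bound per x16 slot
print(tb_per_lane * lanes_per_slot * 2)    # 128 -> TB across two x16 slots

# Eight 12.8 TB D7-P5620s spread over two bifurcated x16 slots:
print(8 * 12.8)                            # 102.4 TB, right next to the 100 TB target
```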

Thanks everyone for the replies.

Unfortunately, a lot of the new super-high-capacity SSDs like the 61.44 TB Solidigm are a bit too expensive for me to justify. The Apex X16 Destroyer is neat, but again it adds a level of cost I was hoping to apply to the storage itself and not the storage controller.

I am probably going to try the SAS route. I picked up a 16-channel SAS HBA and a SAS expander. (These items are used and were pretty cheap off eBay.) I’m going to play with them for a bit and see how they work and what kind of performance I get. With this setup I can add a bit of storage at a time as needed.