4xM.2 in 2U form factor and M.2 NVMe Recommendations

This is going to have quite a bit of backstory, so there’s a TL;DR at the end.

I have had this project idea rolling around in my head for a while, but a few things have kept stopping me from pulling the trigger. I have a Proxmox server that’s up and running well, but given the ever-decreasing cost and ever-increasing capacities of SSDs, I’ve been wanting to replace the primary mechanical storage pool with entirely SSD-based storage. I’ve thought about just going with SATA SSDs, but I love the idea of stepping straight up to NVMe if possible.

This server runs a wide variety of personal and business services, ranging from high-capacity storage to high-speed networking, hardware pass-through, and VPN services. Reliability is important and I want it online and operating as often as possible.

The current primary storage pool arrangement is 8x WD Gold 2TB drives in RAID10. This has served me well for the years I’ve been running it, but I’d like to upgrade to SSDs. My first thought was SATA-based SSDs, but then I thought, why not go NVMe? Prices are always dropping on those. The only issue is that, with my server’s layout, in order to get the performance and redundancy I’m looking for I need four NVMe drives on a single PCIe card in a 2U form factor.

Well, just yesterday I stumbled across the GLOTRENDS 4 Bay NVMe Adapter and I was wondering if anyone has used it. If not, does anyone have any other 4xM.2 2U-compatible cards they use that they know work well? My motherboard supports bifurcation.

Although I have 8TB of usable mechanical storage, I’ve found myself hard pressed to ever utilize much more than 1.5TB, so my thought process right now is 4x2TB NVMe: 4TB usable in a RAID10 config.
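
For what it’s worth, here’s a minimal sketch of how that layout pencils out in ZFS terms (a RAID10-style pool is two striped mirror vdevs). The device names and pool name are assumptions for illustration, and the generated `zpool create` line is only illustrative, not a tested command:

```python
# Minimal sketch of a 4x2TB "RAID10" pool in ZFS terms: two 2-way mirror vdevs
# that ZFS stripes across. Device names, pool name, and sizes are assumptions.
drives = ["nvme0n1", "nvme1n1", "nvme2n1", "nvme3n1"]  # hypothetical devices
size_tb = 2

# Pair the drives into mirror vdevs; ZFS stripes writes across the vdevs.
mirrors = [drives[i:i + 2] for i in range(0, len(drives), 2)]
usable_tb = size_tb * len(mirrors)  # one drive per pair is redundancy

cmd = "zpool create tank " + " ".join(
    "mirror " + " ".join(pair) for pair in mirrors
)
print(f"raw: {size_tb * len(drives)} TB, usable: ~{usable_tb} TB")
print("illustrative command:", cmd)
# -> zpool create tank mirror nvme0n1 nvme1n1 mirror nvme2n1 nvme3n1
```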

However, around the time I first thought about doing something like this, Linus from LinusTechTips uploaded a video about his experience with RAIDing NVMe storage, and it wasn’t good:

I understand that half of his issue was that he used drives he really shouldn’t have been using for the application, but can anyone say whether what I’m looking to do here with four drives would have similar issues, or whether the array would be small enough that it’s a non-concern?

Additionally, I’m looking for drive recommendations. Enterprise-grade M.2 NVMe SSDs seem all but non-existent. Are there any particular desktop variants that may suit the application fine? Samsung seems to make some good ones.

TL;DR

  1. Would a simple RAID10 of four NVMe SSDs run into the same drive reset issue Linus experienced when he tried to deploy an all-NVMe storage server, or would it be very dependent on system hardware & kernel version?

  2. Has anyone used the GLOTRENDS 4 Bay NVMe Adapter and can say it works well, or can anyone recommend another low-profile (2U) 4xM.2 adapter? My motherboard supports bifurcation.

  3. Can anyone recommend a particular series of M.2 NVMe SSD for Proxmox/ZFS as the primary storage pool (non-cache)? Server-grade NVMe SSDs in the M.2 form factor seem all but non-existent. I’m looking for 2TB models.


I’m sure @wendell will chime in later, as he was close to the source, but from what I understand the main issue Linus had was numbers: he used too many drives for the system to cope with. Your planned upgrade should be fine. (notice the use of the word should here :stuck_out_tongue: )

As for enterprise SSDs, have a look:
https://business.kioxia.com/en-us/ssd.html

Probably not cheap.

I must suggest you rethink the RAID level you plan to use. RAID10 was “invented” to circumvent the write penalty RAID 6 introduces, but NVMe drives are so fast and current computing power so massive that IMO it’s becoming more and more irrelevant for virtually any use case, except those where very high write throughput is involved. If you insist on 4 drives, take the 4TB models and create a RAID 6 with them; otherwise just buy 2x 4TB in a RAID1 if speed is crucial. Remember, any RAID level with a 0 in it will eventually lead to data loss.

Fortunately, SSDs are much less prone to drive failures than spinning rust with its mechanical “single point of failure” architecture, but eventually the SSD cells will fail/degrade to a degree that causes data loss. (It’ll be a while, on average. But averages have peaks and valleys, and if your drive just fails quickly, you are out of luck while the average still holds! :roll_eyes: Feeling lucky?)
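
To put some rough numbers on that trade-off, here’s a quick back-of-the-envelope comparison of the layouts mentioned above, looking at capacity and worst-case fault tolerance only (performance and rebuild behaviour are a separate discussion). The drive counts and sizes are simply the ones being discussed in this thread:

```python
# Rough capacity/fault-tolerance comparison of the layouts discussed above.
# Drive counts and sizes are the ones mentioned in the thread.
def striped_mirrors(n_drives, size_tb):
    # n drives in 2-way mirror pairs: half the raw capacity is usable.
    # Survives one failure per pair, but not two failures in the same pair.
    return n_drives // 2 * size_tb, "1 per mirror pair (worst case 1)"

def raidz2(n_drives, size_tb):
    # RAIDZ2 (RAID6-like): two drives' worth of parity, any two may fail.
    return (n_drives - 2) * size_tb, "any 2"

def mirror(n_drives, size_tb):
    # Plain n-way mirror: one drive's worth usable, n-1 may fail.
    return size_tb, f"any {n_drives - 1}"

layouts = [
    ("4x2TB striped mirrors (RAID10)", striped_mirrors(4, 2)),
    ("4x4TB RAIDZ2 (RAID6-like)",      raidz2(4, 4)),
    ("2x4TB mirror (RAID1)",           mirror(2, 4)),
]
for name, (usable, tolerance) in layouts:
    print(f"{name:32s} ~{usable} TB usable, tolerates {tolerance} failure(s)")
```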

I always try to take advice from the internet with a grain of salt, but mixing others’ theories and experiences with my own research works out for me most of the time.
(notice the use of the word most here :smiley:)

Looks like the KIOXIA XD5 would be my only option, but for an NVMe SSD that write performance looks rough. I understand that datacenter SSDs trade off speed for features and durability, but… it is also quite expensive for the capacity I’m looking for.

I did just spot the Micron 7300 3.84TB. A little better performance. Still expensive, but not as bad as the XD5. Not sure if you’d categorize it as being any good, but I know Micron is in the server space.

I understand where you’re coming from. Quite frankly, I’m open to all suggestions in this endeavor. If I gave up on NVMe and just made a RAID10 with server-grade SATA SSDs like the Intel D3-S4510, the best I’d see is around 1,000~1,100MB/s reads/writes, and it’d cost me around $2,000. Meanwhile, just slap a pair of 3.84TB XD5s or Micron 7300s in RAID1 and I’m looking at similar write performance, far higher read performance, and anywhere around $1,500~$2,000. Additionally, it should bypass the drive reset issue entirely, if that would even have been a factor in my case at all.
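
As a sanity check on those write numbers, here’s a quick back-of-the-envelope calculation. The per-drive sequential write figures are assumptions for illustration (roughly SATA-class for the D3-S4510 and a conservative figure for a read-oriented datacenter NVMe like the XD5/7300 class), not datasheet values:

```python
# Rough sequential write comparison; per-drive figures below are assumptions
# for illustration, not measured or datasheet numbers.
def raid10_write(mirror_pairs, per_drive_mb_s):
    # Writes stripe across mirror pairs; each pair writes at single-drive speed.
    return mirror_pairs * per_drive_mb_s

def raid1_write(per_drive_mb_s):
    # A mirror writes to both drives in parallel, so it's single-drive speed.
    return per_drive_mb_s

SATA_SSD_WRITE = 510   # MB/s, assumed SATA-class sequential write
NVME_DC_WRITE = 1000   # MB/s, assumed read-oriented datacenter NVMe write

print("4x SATA SSD RAID10 write:", raid10_write(2, SATA_SSD_WRITE), "MB/s")
print("2x NVMe RAID1 write:     ", raid1_write(NVME_DC_WRITE), "MB/s")
# Reads are where the NVMe mirror pulls far ahead: both drives can serve
# reads at PCIe x4 speeds instead of being capped by the ~600 MB/s SATA link.
```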

As for whether or not I feel lucky: no, I never feel lucky. Every time I want to do something crazy with computers that nobody I know has ever tried, it fights me every single step of the way. :laughing:

I guess from here my questions have shifted:

  1. Do you (or anyone else) know of a good/reliable 2xM.2 (2280/22110) to PCIe card that fits in a 2U chassis? Again, my motherboard supports bifurcation.

  2. Aside from the XD5, how does the Micron 7300 stack up? Any other M.2 drive suggestions? I’m looking for 3.84/4TB models.

Micron has a name to lose in the server space, so you should be good.

As for the adapter, no need to pay more than you have to, so I looked some stuff up on AliExpress. Here’s a candidate:

Choose the right option; they have single, dual, and quad M.2 slot models available (at different prices!). No idea on performance, but from the images the dual-slot version is M-key, which is what you want, as that’s the indicator for true PCIe/NVMe; the B-key is for SATA drives. There may be other options, but keep the difference between the various M.2 keys in mind. (SATA is usually cheaper; there are many, many offerings of mixed boards with an M-key slot plus a second B-key slot.) Asus offers a fully compliant quad-slot (M-key) adapter, but at a price. No idea if they offer a dual-slot version; I didn’t look that up.

I have no experience with either the Kioxia or the Micron drives, so I can’t help you there.

I’m very far from an expert on any of this, but I have another server that uses Intel D3-S4510 server/enterprise SSDs. Using their data sheet as a point of reference, the Micron 7300 PRO looks to be very comparable. As drives targeted at the server space they have very similar characteristics.

(SATA) Intel D3-S4510:
64-layer TLC 3D NAND
9.9 PB written (endurance)
2,000,000 hours MTBF

(NVMe) Micron 7300 PRO:
96-layer TLC 3D NAND
9.8 PB written (endurance)
2,000,000 hours MTBF

So at least on paper it’s a suitable, less expensive option than KIOXIA that should do the job and do it well.
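
For context on what an endurance rating like that works out to day-to-day, here’s a rough conversion from total bytes written to drive writes per day (DWPD), using the figure quoted above for the 3.84TB Micron model and assuming a 5-year rating period (the 5-year window is an assumption; the actual warranty terms should be checked):

```python
# Convert a rated endurance (total petabytes written) into drive writes per
# day (DWPD). The 5-year rating period is an assumption, not a datasheet fact.
def dwpd(endurance_pb, capacity_tb, years=5):
    total_tb = endurance_pb * 1000          # PB -> TB
    per_day_tb = total_tb / (years * 365)   # average TB written per day
    return per_day_tb / capacity_tb         # full drive writes per day

# Using the 9.8 PBW figure above against the 3.84TB Micron 7300 PRO model:
print(f"Micron 7300 PRO 3.84TB @ 9.8 PBW: ~{dwpd(9.8, 3.84):.1f} DWPD")
```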

As for the PCIe adapter, my only issue with buying directly from China is that on average it’s a 30-day shipping period, and that’s assuming customs doesn’t decide to do a random package search on me as they did the last time I purchased something directly from outside the U.S. :roll_eyes: I was thinking more along the lines of Newegg, Amazon, or eBay.

Well, you’ve run into the ever-present dilemma: if you want it fast, you pay the jackpot. If you want it cheap, you pay in time.

The Asus adapter can be found on your preferred platform, but like I said, you’ll be paying considerably more than for a direct purchase from Asia. I don’t know if US customs has a threshold amount below which they won’t be interested, like various EU customs do. It might be worth finding that out and ensuring you stay below said threshold, if it exists. Best of luck!


At the office we have a 2.5-inch U.2 enclosure that has 4 M.2 slots inside that can be individually addressed by the system.

With an x4 or x16 connection?
Or does it have a PLX chip?

It’s x16 physical, x4 electrical.

Got a link?

What’s the item? I’m only aware of the 3.5" OWC shuttle which carries 4 M.2 drives, and the 2.5" QNAP QDA-U2MP that can carry 2 M.2 drives.

A long time ago I remember seeing an announcement for a 2.5" Kingston DCU1000 which as best I can tell never materialized on the market.

Sorry, I thought you were talking about the ghetto riser card; I think they have a PLX chip. @wendell, what were the specs of those Kioxia/Toshiba U.2 contraptions?

Yeah, just a PLX on an x4 card, not ideal for what they want to do… x16 would be better. There are some PLX bridge adapters on AliExpress, but the passive ones are better if your platform supports it.


Oh wow this took off like a rocket out of nowhere. :grin:

Sorry for my slow replies. I have a side project that’s eating 100% of my free time.

I had the thought of just going pure U.2, not even the adapter route as you have. I’d use an NVMe-compatible HBA with four SFF-8643 connectors, remove one of the backplanes on the front of the server, and feed the wires through.

Not ideal, but it’s an idea. I kind of prefer the M.2 drives mounted to a PCIe riser though. Unfortunately the goal is also to maximize throughput, and putting two or four NVMe SSDs on a shared x4 link would throttle them severely.
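
The lane math behind that is roughly as follows: a Gen3 lane is good for just under 1 GB/s after encoding overhead, so per-drive bandwidth is simply the uplink divided by however many drives are busy at once. This is a simplification that ignores protocol overhead and switch latency:

```python
# Ballpark per-drive bandwidth: four drives sharing one Gen3 x4 uplink (e.g.
# behind a switch on an x4 card) vs. a bifurcated x16 giving each drive its
# own x4. Simplified: ignores protocol overhead and assumes all drives busy.
PCIE_GEN3_LANE_GB_S = 0.985  # ~1 GB/s per Gen3 lane after 128b/130b encoding

def per_drive_bandwidth(uplink_lanes, drives):
    return uplink_lanes * PCIE_GEN3_LANE_GB_S / drives

print(f"4 drives behind a shared x4:  ~{per_drive_bandwidth(4, 4):.1f} GB/s each")
print(f"4 drives on a bifurcated x16: ~{per_drive_bandwidth(16, 4):.1f} GB/s each")
```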

The server is going to be going through a platform upgrade shortly and will be moving up to a Supermicro X10DRi-T motherboard. Whichever arrangement of SSDs I end up going with, it will be much better overall if each drive is transparent to the OS and has a dedicated x4 link.

My experience with PLX chips is non-existent, but I understand they don’t function like a RAID card; I assume the OS/ZFS would still see individual drives. But yes, the bandwidth would be my issue, and since I’m going to have ample PCIe lanes there’s really no reason for one. :smiley:

I am going to be hurting for PCIe slots though. You might think “6 slots? Oh wow, that’s a lot!” Not when it’s a virtualization server, it’s not. :laughing: Each slot is going to be very precious, so I really want to maximize my use of the x16 slots.
