Best SSD expansion card solution for M.2 NVMe drives

Hello, I'm searching for the best universal solution to replace my HDD archive and keep only my main files on SSDs, mainly 2TB-4TB M.2 NVMe drives, so I could get 8-16TB from an expansion card holding 4 extra M.2 NVMe drives. I don't need more than 1000 MB/s - 2000 MB/s. So I'm asking for your experience and advice on the best universal solution across platforms, like AMD AM5, 12th-13th Gen Intel, and Threadripper.

I've mainly been looking at these two cards, but the manuals aren't clear: they say that on some platforms only 1-2 of the drives will work, not all 4. Does that hold even if I don't need their full speed?
ASUS Hyper M.2 X16 Gen 4 Card
ASUS Hyper M.2 X16 Card V2

If someone has advice and can help, I'll be very happy to hear it. :roll_eyes:
Or is there another solution for my usage?

1 Like

Those ASUS adaptors need the CPU + motherboard + UEFI firmware to support PCIe bifurcation, because they simply connect 4 PCIe lanes to each M.2 slot. You can tell because there isn't a big chip (a PCIe switch) with a heatsink on it, and it costs less than $500 :slight_smile:

All AMD Ryzen, Threadripper and EPYC platforms support PCIe bifurcation and can use these cards to their full capability, as long as the motherboard has 16 lanes going to the slot you put them in - so check the manual of the motherboard you're getting.

Ryzen CPUs and boards have 24 lanes, so you might see 16 lanes going to one slot and 8 going to another, which can be configured to split the lanes over more slots. If you want to use 4 SSDs on one adaptor you need to put it in a slot that gets 16 lanes - usually where you'd put your GPU. You can always put the GPU in an 8-lane slot, which is usually plenty of PCIe bandwidth even for demanding games.
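To put rough numbers on that, here's a quick sketch in Python (the per-lane figures are the usual theoretical PCIe maxima after encoding overhead, not measured values - real SSDs land somewhat below them):

```python
# Rough per-drive bandwidth when an x16 slot is bifurcated into 4x x4.
# Per-lane rates are theoretical maxima after encoding overhead.
PER_LANE_GBPS = {"Gen3": 0.985, "Gen4": 1.969, "Gen5": 3.938}
LANES_PER_DRIVE = 4  # each M.2 slot on a bifurcated carrier gets x4

for gen, per_lane in PER_LANE_GBPS.items():
    per_drive = per_lane * LANES_PER_DRIVE
    total = per_lane * 16
    print(f"{gen}: ~{per_drive:.1f} GB/s per SSD, ~{total:.1f} GB/s across the x16 slot")
```

Even Gen3 x4 (~3.9 GB/s per drive) is well past the 1000-2000 MB/s you're after.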

For Threadripper (64 lanes, up to 128 on "Pro") and EPYC (128 lanes) you have more options, since you have up to 128 PCIe lanes: most motherboards have multiple 16-lane slots, and you could put such a card in any of them.

I haven't heard of or seen any Intel platform that can use these adaptors. Technically, VROC on Xeon SP (since Skylake) should support them, but I think there are UEFI restrictions (you need a VROC key, only Intel SSDs are supported, etc.).

There's another type of card with a PCIe switch chip, which lets the adaptor work on any platform, Intel included, with any number of PCIe lanes. The switch chip does the PCIe packet switching, so bifurcation support isn't needed. The OS uses the normal NVMe drivers and sees the SSDs directly. The only brand-name example of this type I know of is the discontinued Intel A2U44X25NVMEDK (with U.2, not M.2), but you can find many off-brand examples - no idea what their performance is like.
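Since the OS just sees ordinary NVMe controllers either way, you can check what enumerated with a few lines of Python on Linux. A minimal sketch against the standard NVMe sysfs layout (switch-attached SSDs show up exactly like CPU-attached ones):

```python
# List the NVMe controllers Linux sees, with model and PCI address.
import os

SYS_NVME = "/sys/class/nvme"

for ctrl in sorted(os.listdir(SYS_NVME)):
    base = os.path.join(SYS_NVME, ctrl)
    with open(os.path.join(base, "model")) as f:
        model = f.read().strip()
    # the "device" symlink points at the underlying PCIe function
    pci_addr = os.path.basename(os.path.realpath(os.path.join(base, "device")))
    print(f"{ctrl}: {model} @ {pci_addr}")
```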

There's also a third type of card with a PCIe NVMe RAID controller, which will work on any platform with any number of PCIe lanes, but which needs special OS drivers and requires configuring the SSDs as either JBOD or a RAID array. It gets complex and expensive there, especially when you need adaptors, because some of these cards (e.g. the HighPoint SSD75xx) have M.2 slots but some (e.g. the Broadcom eHBA 9600-16i) don't.

8 Likes

To reiterate: bifurcation is the ability of a motherboard to split the electrical lanes of a physical PCIe slot into multiple sections.
Most of the time you see a 16-lane PCIe slot split into 4x 4-lane slots. A special but relatively cheap PCIe card is required to take advantage of bifurcation.
Bifurcation is a feature of the motherboard UEFI, and as such you will have to research support for each motherboard you consider.
As a general rule: the cheaper the MB, the less likely bifurcation is supported. Intel does not support full x4/x4/x4/x4 bifurcation on their consumer platforms. You can find it on AMD boards - ASUS has a page that lists bifurcation support for their consumer MBs:
https://www.asus.com/us/support/FAQ/1037507/
Other MB manufacturers typically bury this information deeper in the technical specs of their boards. If you don't see the word "bifurcation" in the technical documentation, assume it is not supported.
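Beyond reading manuals, you can verify the end result empirically: boot Linux with the carrier card populated and read the negotiated link per drive. A sketch using the standard PCIe sysfs attributes (`current_link_speed`, `current_link_width`):

```python
# Print the negotiated PCIe link for every NVMe controller.
# Drives the board refused to bifurcate simply won't appear;
# drives that trained at x2/x1 show up with a narrower width.
import glob, os

def read_attr(pci_dev, name):
    with open(os.path.join(pci_dev, name)) as f:
        return f.read().strip()

for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
    pci_dev = os.path.realpath(os.path.join(ctrl, "device"))
    speed = read_attr(pci_dev, "current_link_speed")  # e.g. "8.0 GT/s PCIe"
    width = read_attr(pci_dev, "current_link_width")  # e.g. "4"
    print(f"{os.path.basename(ctrl)}: {speed}, x{width}")
```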

If bifurcation is not supported, any MB can use a more expensive card that turns a PCIe slot into 2, 4, or even 8 M.2 slots. This is technically enabled by a PLX chip, which acts as a switch, routing the traffic of multiple M.2 slots into an 8-lane or 16-lane PCIe slot on the motherboard. HighPoint offers a large selection of such adapters.
Top-of-the-line cards cost up to $1000; you may find (outdated) PCIe 3.0 variants that work for your setup for less than $300.
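The catch with switch cards is the shared uplink - the drives can oversubscribe it. Quick arithmetic for a hypothetical Gen3 card with four x4 M.2 slots behind an x8 uplink (a sketch; real switch efficiency varies):

```python
# Oversubscription on a hypothetical Gen3 switch card:
# four x4 M.2 slots behind a single x8 uplink.
GEN3_PER_LANE = 0.985  # GB/s, theoretical maximum per lane

uplink = 8 * GEN3_PER_LANE     # ~7.9 GB/s shared by all drives
one_drive = 4 * GEN3_PER_LANE  # ~3.9 GB/s when only one drive is busy
all_four = uplink / 4          # ~2.0 GB/s each if all four stream at once

print(f"uplink ~{uplink:.1f} GB/s | one drive ~{one_drive:.1f} GB/s | "
      f"all four ~{all_four:.1f} GB/s each")
```

Even the worst case (~2 GB/s per drive) is double the 1000 MB/s target in this thread, which is why an older Gen3 switch card is a sensible budget pick.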

5 Likes

Thank you for the information, but to put it more simply: what is the best and most practical solution if I just want more M.2 NVMe drives in the rig, at the kind of speed an external USB enclosure gives (around 1000 MB/s)? I just need that same speed, but internal, in the case, for 4-8 extra drives - and it should work on all platforms, on a budget.

You'd need an active PCIe switch NVMe HBA to be completely independent of motherboard compatibility (i.e., bifurcation support in the BIOS).

I can recommend the Delock 90504 if PCIe Gen3, i.e. 3,700 MB/s per SSD, is enough, or a Broadcom P411W-32P for PCIe Gen4 speeds (7,400 MB/s per SSD).

Then connect the M.2 or U.2 NVMe SSDs with adapters - PCIe Gen3 is pretty easy, Gen4 not so much.

The Broadcom doesn't like system sleep (S3); the Delock one works completely fine with updated firmware.

2 Likes

Good to hear. Are there other options if I need less speed, like only 1000 MB/s, from the NVMe drives?

What motherboard do you have?

" GigaBusterEXE" as mention I’m searching for universal platform solution for AMD AM4-AM5 , 12- 13Gen Intel platform, Threadripper.

So, a few options present themselves:

  1. Go with AM4 Ryzen. This works, except... most boards only have a single x16 slot, and it's only x16 if the second slot is empty - otherwise it drops to x8 or even x4, depending on the board. This means you could go with two dual-M.2 cards, bifurcate both slots into x4/x4, and presto.

  2. Go with AM5 Ryzen. One x16 PCIe 5.0 slot that is not beholden to other slots, but then you get x4 or even x2 PCIe 3.0/4.0 in the other slots. It's possible AM5 boards will have more M.2 slots by default, especially in the second generation - but that doesn't do much good now, does it? :frowning:

  3. Intel PCIe bifurcation is fucked on the LGA 1700 generation (600- and 700-series chipsets). An x8/x8 5.0 split is the best you will get, with a third 4.0 x4 from the chipset if you are lucky.

  4. Threadripper and/or Xeon. With 128 PCIe lanes you could fit, in theory, 32 drives (quick math in the sketch below) - waaaay more than you need. In practice that's a bit unrealistic, since you'd sacrifice all your lanes for it, so the upper limit is probably around 24 drives. However, these platforms cost a lot of money.
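Rough drive-count math for the options above (a sketch; the lane budgets are approximate usable CPU lanes and vary by SKU and board):

```python
# Ballpark: how many x4 M.2 drives fit per platform after parking a GPU on x8?
PLATFORM_LANES = {
    "AM4 Ryzen":        20,   # 16 PEG + 4 M.2 (another 4 feed the chipset)
    "AM5 Ryzen":        24,   # 28 total, 4 to the chipset
    "Intel LGA 1700":   20,   # x16 PEG + x4 M.2
    "Threadripper":     64,
    "Threadripper Pro": 128,
    "EPYC":             128,
}
GPU_LANES = 8  # assume the GPU is parked on x8

for name, lanes in PLATFORM_LANES.items():
    drives = max(0, lanes - GPU_LANES) // 4
    print(f"{name}: {lanes} usable lanes -> up to {drives} x4 NVMe drives beside a GPU")
```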

If you're looking for cheap but expandable, I would go for something like the Gigabyte Aorus X570S Master. That would let you fit 6 M.2 drives: a dual-M.2 card in the second x8 slot (with the GPU at x8), the three onboard M.2 slots, plus either an x4 PCIe card with one more drive or another M.2.

Pair it with a 5700G and you won't need a GPU, which lets you fit one more. It all depends on your needs though.

If you don’t care about the speed, and do care about the price, then a USB to NVMe adaptor for each SSD and a USB hub would probably be the best approach.

Just make sure they're based on the ASMedia ASM236x or Realtek RTL9210, not a JMicron JMS-series chip.
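If an enclosure doesn't advertise its bridge chip, the USB vendor ID usually gives it away. A Linux sysfs sketch (vendor IDs taken from the public USB ID database; double-check against your own hardware):

```python
# Spot USB-NVMe bridge chips by USB vendor ID on Linux.
import glob, os

BRIDGE_VENDORS = {
    "174c": "ASMedia (e.g. ASM236x)",
    "0bda": "Realtek (e.g. RTL9210)",
    "152d": "JMicron (e.g. JMS583)",
}

for vid_file in glob.glob("/sys/bus/usb/devices/*/idVendor"):
    with open(vid_file) as f:
        vid = f.read().strip()
    if vid in BRIDGE_VENDORS:
        prod_file = os.path.join(os.path.dirname(vid_file), "product")
        product = "?"
        if os.path.exists(prod_file):
            with open(prod_file) as f:
                product = f.read().strip()
        print(f"{BRIDGE_VENDORS[vid]}: {product} (vendor {vid})")
```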

Forgot that I had an Icy Box IB-1817MCT-C31 USB-C 10 Gb/s external enclosure for M.2 NVMe SSDs lying around, so here are some comparisons of a Samsung 980 PRO 2 TB running at native PCIe 4.0 x4, at PCIe 3.0 x4, and through the USB-NVMe bridge chipset.
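If anyone wants to reproduce a comparison like this without a benchmark suite, timing a big sequential read after evicting the page cache gets you in the ballpark. A Linux-only sketch (the script name is hypothetical; point it at any multi-GB file on the drive under test):

```python
# Crude sequential-read throughput check for comparing enclosures.
# Usage: python3 seqread.py /path/to/multi-gigabyte-file
import os, sys, time

path = sys.argv[1]
CHUNK = 8 * 1024 * 1024  # 8 MiB reads

fd = os.open(path, os.O_RDONLY)
# ask the kernel to drop this file's cached pages so we measure the drive
os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)

total = 0
start = time.monotonic()
while True:
    buf = os.read(fd, CHUNK)
    if not buf:
        break
    total += len(buf)
elapsed = time.monotonic() - start
os.close(fd)

print(f"{total / elapsed / 1e6:.0f} MB/s over {total / 1e9:.2f} GB")
```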

1 Like

This is not 100% accurate. The AM4 socket provides 24 lanes: 4 for the chipset, 4 for the primary M.2 slot, and 16 for PCI Express Graphics (PEG). The 16 PEG lanes can be bifurcated a number of ways, depending on the board, but the scenario you have described is not possible. If you put a passive x4/x4/x4/x4 M.2 carrier card in the first full-length expansion slot attached to the CPU, and a GPU in the second, either two slots of the M.2 carrier card or the GPU will be disabled (depending on the BIOS configuration).

To fully enable a passive x4/x4/x4/x4 carrier card (or two x4/x4 cards) using PEG lanes, you must either use an APU—which limits you to PCIe Gen3—or put the GPU in an expansion slot attached to the PCH, where it will suffer severe performance penalties. It’s probably fine for a server or 2D desktop, but it will hinder any applications that require heavy I/O or low response times like encoding/decoding, rendering, gaming, etc.

There are some options. I have an AM4 board (Asus ProArt X570) that provides two expansion slots and two M.2 slots attached to the CPU out of the box (in addition to a third expansion slot and a third M.2 slot attached to the PCH). It does this by dropping the first CPU-attached expansion slot down to x8 and the second CPU-attached expansion slot down to x4 when the second CPU-attached M.2 slot is populated. This enables up to four CPU-attached M.2 slots with a passive x4/x4 carrier card in the first slot—assuming you’re willing to live with a GPU running at x4 in the second slot, but at least it’s attached directly to the CPU, not through the PCH.
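To make that lane accounting concrete, here's the fully populated configuration summed up (a sketch of the arithmetic in the post above, not board documentation):

```python
# CPU lane budget check for the AM4 configuration described above.
CPU_LANES = 24  # AM4 socket total

allocation = {
    "chipset uplink":              4,
    "CPU M.2 slot #1":             4,
    "CPU M.2 slot #2":             4,  # populating this triggers the x8/x4 split
    "slot 1: x4/x4 carrier card":  8,  # two more M.2 drives
    "slot 2: GPU at x4":           4,
}

used = sum(allocation.values())
assert used == CPU_LANES, f"allocation uses {used} of {CPU_LANES} lanes"
print(f"4 CPU-attached M.2 drives + GPU, all {CPU_LANES} lanes accounted for")
```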

2 Likes

I stand corrected :slight_smile: I bought an EPYC purely because this was too complex for me to want to think about.

1 Like

Good info, but my head hurts from all the thinking =) about what happens on which platform. It would be best if it were simple: buy one platform and get clear information. Marketing makes no effort to make it easy for buyers.

Hi, does anyone have an update on the best NVMe external enclosures for 2023? Which are the most solid - they don't need to be the fastest - and are the Sabrent Thunderbolt versions better, or at least good quality for the money? Also, any ideas on the best dual-NVMe or dual-mSATA external enclosures, with or without mirroring features for redundancy? Thank you for any help.

Any ideas? Maybe something has changed for the coming year that is better? Have new controllers come out for better external NVMe drive use?

If a NAS is what you want, the Asustor Flashstor might be of interest - currently the cheapest 12-bay NAS on the market (though the drives cost a fortune). The CPU is a tad on the weak side: it is great at being a NAS and nothing more, so do not expect it to be fast at things like compressed filesystems.

There is now also the Apex Storage X21, which is still an x16 card but gives you up to 21 M.2 slots:

https://www.apexstoragedesign.com/apexstoragex21

And of course, if you don't mind a bit of jank, here are a few other options that allow you to convert x1 / x8 / x16 slots to NVMe:

Hope this helps :slight_smile:

Yes, I saw a good review. As for a portable solution, the Asustor is one I was thinking of investing in next year - I need to save some money for that, it's not cheap, and I still also need to buy SSDs for it.