PCIe adapter with four M.2 slots

Hello there,

I’m considering buying a PCIe-to-4×M.2 adapter for future expansion. My two M.2 slots are already filled, and although I have a few more SATA ports free, physical space is the problem.

My GPU is already on a vertical mount, so I thought - hey, why not go this route.

If you’re wondering how I filled up my PC: a combination of audio libraries (they take up a massive amount of space) and dodgy programming in MATLAB. Well, it finally caught up with me…

At any rate, is there something I should pay particular attention to? Is an x16 adapter better than an x8?
Should the adapter have a chip on it?
Can I mix and match M.2 sticks?

Unless you’re getting an expensive PCIe switch card, PCIe->M.2 cards require bifurcation support in your motherboard. You can check this by entering the BIOS and looking for an option to change the relevant slot to x4/x4/x4/x4 mode. Theoretically you could get an x8->x2/x2/x2/x2 card, but I’ve never heard of a motherboard supporting that particular bifurcation, so if you want four M.2s you’ll probably need to use an x16 slot.
And unless this is a workstation board, you should check your manual. PCIe lanes are limited on consumer platforms, so odds are that if you have two x16 slots, they share bandwidth, and having a GPU in one will limit the other to x8. Such motherboards usually have an extra x4 slot connected to the chipset, so in that case you should move your GPU to that slot and install the M.2 adapter card in the x16 slot (which you bifurcate to x4/x4/x4/x4).
The x4/x4/x4/x4 option is sometimes called “NVMe RAID”. This isn’t the same as motherboard RAID (which you shouldn’t use), but simply an annoying and confusing alternative term for bifurcation.
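As a back-of-the-envelope check on what each drive gets after bifurcation, here’s a quick sketch (the per-lane figures are approximate usable throughput after encoding overhead, not raw line rate):

```python
# Approximate usable bandwidth per PCIe lane, in GB/s (after 128b/130b encoding).
PER_LANE_GBPS = {3: 0.985, 4: 1.969}

def per_drive_bandwidth(gen: int, total_lanes: int, drives: int) -> float:
    """Bandwidth each drive gets when a slot is bifurcated evenly."""
    lanes_per_drive = total_lanes // drives
    return lanes_per_drive * PER_LANE_GBPS[gen]

# An x16 slot split x4/x4/x4/x4 still gives every drive a full x4 link:
print(per_drive_bandwidth(3, 16, 4))  # ~3.9 GB/s per drive at gen 3
print(per_drive_bandwidth(4, 16, 4))  # ~7.9 GB/s per drive at gen 4
```

The point is that bifurcation doesn’t divide one drive’s bandwidth; it hands each drive its own dedicated lanes, same as an M.2 slot on the board would.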


I’ve had a similar adapter in my PC, and though I could split it in the BIOS, it turned out to be even more limited due to the CPU’s PCIe lane limit. After a few benchmarks, it was even slower than a decent SATA SSD.


Amazing! This is exactly the kind of info I was looking for. I have the X570 Strix-E. This is what it says on the website:


And this is what it says in the manual:

Huh… I’m a bit lost, to be honest. And what’s up with PCIE_X1_1 and PCIE_X1_2? These two are not mentioned anywhere.

Actually, they are: PCIEx1_1 and PCIEx1_3. I assume this is a typo, and what they mean by PCIEx1_3 is actually PCIEx1_2?

I think I found something useful here: https://www.asus.com/support/FAQ/1037507/

It says 4/2, so I guess it can do x4x4x4x4?

Isn’t it PCIEX16_1? I bet it’s already occupied by the GPU. Your best bet is to use the 2nd slot, because the 3rd is limited by the second NVMe drive in M.2_2.

I think I’m catching on.

So the PCIEX16_1 would be used by the GPU (in x8 mode)

The PCIEX16_2 can host two M.2 drives (in x4x4)

And the PCIEX16_3 can’t be used since my second M.2 slot is filled. Otherwise, it could work in a single x4.
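If it helps, the lane sharing described above can be written down as a quick sanity check (lane counts taken from this discussion; verify against the manual before buying anything):

```python
# Modeling the slot/lane sharing described above: PCIEX16_1 and PCIEX16_2
# split the CPU's 16 lanes x8/x8 when both are populated, while PCIEX16_3
# shares its chipset x4 link with the M.2_2 slot.
cpu_lanes = {"PCIEX16_1 (GPU)": 8, "PCIEX16_2 (adapter)": 8}
assert sum(cpu_lanes.values()) == 16, "over the CPU lane budget"

# PCIEX16_2 bifurcated x4/x4 hosts two drives, each with its own x4 link:
drives_in_adapter = [cpu_lanes["PCIEX16_2 (adapter)"] // 2] * 2
print(drives_in_adapter)  # → [4, 4]

# PCIEX16_3 is dead as long as M.2_2 is occupied:
m2_2_populated = True
pciex16_3_usable = not m2_2_populated
print(pciex16_3_usable)  # → False
```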


and if you want 4x M.2 in the x8 slot, you need to buy something similar to this:

The 2×M.2 bifurcating adapter will cost you 30-40 USD; the above card with the PLX switch chip goes for 450 USD at MSRP…


There is also another approach to this problem. After finding myself many times in a situation where resources limit my fun, I’ve tried to weigh the pros and cons, and in some cases the benefit of having 2 or 3 extra drives can outweigh the cost of a reduced PCIe lane count. It’s off-topic, but I have a good example of my own: I run a Tesla P4 in a dual-GPU combo with a Radeon 6700 XT in one PC, even though the Tesla is limited to PCIe 3.0 x4 speed due to the B550 chipset. The conclusion is that there are plenty of scenarios where it’s still useful despite some performance reduction. Maximizing PCIe throughput is an area where people usually have no idea how much they really use.
You might find out that two drives at x4/x4 is all you really need.


Holy crap. Why is it so expensive?

This is probably the case. It seems more likely that the platform itself will become obsolete by the time I need to expand my storage beyond the extra 2×M.2 drives. Although, I am hoping the 5800X3D will keep it relevant for a few more years.

What @Susanna said:

The bifurcation cards are just that: they split the PCIe bus into multiple slots and map the lanes 1-to-1 (or 4-to-4 in this case). The costly cards have a PLX bridge that maps an x8 or x16 PCIe slot into multiple x4 ones, usually 16 lanes, but even more nowadays. That costs money, lots of money, adds complexity to the design (more lots of money), and the need for it is server/niche (even more lots of money).

… from the article:

The PLX PCIe bridge chip on the Quad x8 provides the wide platform support. 
The PLX chip isn't a cheap component now that Avago owns the technology;

Alternatively, if you still want 4x M.2 NVMe drives for storage, consider putting them in a different system (cheap, headless) and connecting via the LAN. You still need bifurcation in that system, but as it’s headless you don’t need a GPU, or just a basic one (like a GT 710) to plonk into a secondary PCIe x4 or even x1 slot. That leaves the x16 slot free. Use NFS (Linux) or SMB (Windows) for sharing with your PC. The downside is speed: even a PCIe gen 3 drive reaches 3.5 GB/s, while your network may struggle to get even 1 Gbit/s (which is about 125 MB/s). A decent HDD can reach 250 MB/s, and a SATA SSD 550 MB/s.
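To put those numbers side by side (all figures are approximate sequential ceilings from the post above; the 10 GbE line is my addition, in case a LAN upgrade is on the table):

```python
# Rough sequential-throughput ceilings, in MB/s.
transports = {
    "1 GbE network": 1_000 / 8,    # 1 Gbit/s ≈ 125 MB/s on the wire
    "decent HDD": 250,
    "SATA SSD": 550,
    "10 GbE network": 10_000 / 8,  # ≈ 1250 MB/s, if you upgrade the LAN
    "PCIe 3.0 x4 NVMe": 3_500,
}
for name, mbps in sorted(transports.items(), key=lambda kv: kv[1]):
    print(f"{name:>18}: {mbps:>6.0f} MB/s")
```

So over gigabit Ethernet, the NVMe drives in the remote box would be slower than a local hard disk; this route only makes sense if raw speed isn’t the priority.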

Fairly recently I got myself this one:
Currently residing in my fileserver (EPYC-based) with just 2x 1TB NVMe drives in it on a 4x4x4x4 bifurcated 16x slot. Will expand as and when budget allows.


I see. Interesting.

Well, I guess x4x4 should be enough - it’s definitely the most cost effective solution.

Now the question is, which one do I get?

Something like this: https://a.co/d/4mj6Tca ?

Does the PCIE 3 vs 4 in the name make any difference? The traces should be the same, right?

I considered this option, but it’s just too much of a hassle. I already have an SFF server (Cooler Master NR200P), but that one is way too crowded. (Managed to fit in a 5700G with a 280 mm AIO.)

It means the card is rated for PCIe 3.0 and may not work at 4.0 speeds. Not all traces carry a signal equally well. Just look at the struggles with DDR5 memory; it’s not just the memory controllers, it’s convincing the traces in the motherboard to carry the signal cleanly.
You can usually downgrade the speed manually in the BIOS, though. Running a PCIe 3.0 riser at 4.0 speeds can cause all kinds of hard-to-diagnose stability problems; it might work fine, but it also might not. That is, in essence, what these cards are: glorified riser cables in printed-circuit-board form.
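For context on why a gen 3-rated card can struggle at gen 4: the raw signaling rate doubles each generation, so the same copper has to carry twice the rate. A quick reference (these are the standard per-lane line rates, before encoding overhead):

```python
# PCIe raw signaling rate per lane, in GT/s, by generation.
rates = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0}

# A riser qualified for gen 3 only has to carry clean signals at 8 GT/s;
# forcing gen 4 asks the same traces to handle double that.
print(rates[4] / rates[3])  # → 2.0
```

That’s also why the BIOS downgrade trick works: locking the slot to gen 3 puts the link back inside what the card was designed for.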

Another option I’ve noticed cropping up recently is PCIe 3.0 to U.2 cards being fairly affordable on eBay right now. You can get U.2-to-M.2 adapter cables and mount the M.2 drives with double-sided foam tape somewhere convenient. It’s a more expensive option, probably approaching $100 for the card and cables, but it could get you your 4+ NVMe drives at PCIe 3.0 x8 bandwidth.
I’ve never used one, though.
Sun Oracle 7096186 7064634 NVME 8-Port PCI-Express Switch Card | eBay
Maybe someone else can elaborate on any pitfalls with using one of these.

Turns out the card I listed, and similar server cards, still require software support to function, and that support hasn’t materialized in the broader consumer markets.

How about this one:

Susanna is right. You might run into some problems when using a consumer motherboard’s bifurcation. After looking at the ASUS website, I am not confident that it will work like you expect.

I took a similar approach to hardware configuration for my NAS. I purchased a RIITOP 4-port M.2 NVMe card and installed 4x 2TB Crucial P3 NVMe drives, which are working fine with bifurcation set to x4/x4/x4/x4 on an ASRock EPYCD8-2T.

Depending on the type of case you have, maybe look at doing the following? A lot more flexible and expandable.

LSI 9400-16i
Icy Dock Cremax RD MB516SP-B 16BAY
and however many SSDs you want to populate

If you do the above, you will avoid any bifurcation headaches on that board. And yes, it may be a little more expensive, but I am much more confident that it will work for high-speed storage.


It would probably work fine in a 2×x4 configuration at gen 4 speeds. However, the next thing to check is whether your motherboard can run x8 and x4/x4 simultaneously. It may only allow the top slot to be bifurcated in x4/x4/x4/x4 mode, or it may require both slots to run x4/x4 instead of x8 and x4/x4.

The card is cheap enough and useful to have around, so it probably doesn’t hurt to give it a go.