Unless you’re getting an expensive PCIe switch card, PCIe->M.2 cards require bifurcation support in your motherboard. You can check this by entering the BIOS and looking for an option to change slot whatever to x4/x4/x4/x4 mode. Theoretically you could get an x8->x2/x2/x2/x2 card, but I’ve never heard of a motherboard supporting that particular bifurcation, so if you want four M.2s you’ll probably need to use an x16 slot.
And unless this is a workstation board, you should check your manual. PCIe lanes are limited on consumer platforms, so odds are if you have two x16 slots, they share lanes, and having a GPU in one will limit the other to x8. Such motherboards usually have an extra x4 slot connected to the chipset, so in that case you should move your GPU to that slot and install the M.2 adapter card in the x16 slot (which you bifurcate to x4/x4/x4/x4).
The x4/x4/x4/x4 option is sometimes called “NVMe RAID”. This isn’t the same as motherboard RAID (which you shouldn’t use); it’s simply an annoying and confusing alternative term for bifurcation.
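To put rough numbers on what x4/x4/x4/x4 buys each drive, here’s a back-of-envelope sketch (my own figures, assuming 128b/130b encoding as used by PCIe 3.0 and later; real-world throughput lands a bit lower):

```python
# Rough per-drive bandwidth after bifurcating an x16 slot into x4/x4/x4/x4.
# Assumes 128b/130b line encoding; controller and protocol overhead not included.
GT_PER_S = {3: 8.0, 4: 16.0, 5: 32.0}  # transfer rate per lane, by PCIe generation

def lane_gbps(gen):
    # GT/s * (128/130 payload fraction) / 8 bits per byte -> GB/s per lane
    return GT_PER_S[gen] * (128 / 130) / 8

def per_drive(gen, slot_lanes=16, drives=4):
    # Each drive gets an equal share of the slot's lanes (x4 here).
    return lane_gbps(gen) * (slot_lanes // drives)

print(f"PCIe 3.0 x4 per drive: {per_drive(3):.2f} GB/s")  # ~3.94
print(f"PCIe 4.0 x4 per drive: {per_drive(4):.2f} GB/s")  # ~7.88
```

So even after splitting, each drive keeps a full x4 link; bifurcation divides lanes, not per-lane speed.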
There is also another angle to this problem. After repeatedly being in situations where limited resources limited my fun, I tried to weigh the pros and cons, and in some cases the benefit of having 2 or 3 extra drives can outweigh the cost of running on fewer PCIe lanes. It’s slightly off topic, but I have a good example of my own: I run a Tesla P4 in a dual-GPU combo with a Radeon 6700 XT in one PC, even though the Tesla is limited to PCIe 3.0 x4 speed by the B550 chipset. The conclusion is that there are plenty of scenarios where it’s still useful even with some degree of performance reduction. Maximizing PCIe throughput is an area where people usually have no idea how much of it they actually use.
You might find out that two drives at x4/x4 is all you really need.
This is probably the case. It seems more likely that the platform itself will become obsolete before I need to expand my storage beyond the extra 2x M.2 drives. Although I am hoping the 5800X3D will keep it relevant for a few more years.
The bifurcation cards are just that: they split the PCIe bus into multiple slots and map the lanes one to one (or four to four in this case). The costly cards have a PLX bridge that maps an x8 or x16 PCIe slot into multiple x4 ones, usually 16 lanes but even more nowadays, and that costs money, lots of money. It also adds complexity to the design (more lots of money), and the need for it is server/niche (even more lots of money).
… from the article:
The PLX PCIe bridge chip on the Quad x8 provides the wide platform support.
The PLX chip isn't a cheap component now that Avago owns the technology;
Alternatively, if you still want 4x M.2 NVMe drives for storage, consider putting them in a different system (cheap, headless) and connecting via the LAN. You still need bifurcation in that system, but as it’s headless you don’t need a GPU, or just a basic one (like a GT 710) to plonk into a secondary PCIe x4 or even x1 slot. That leaves the x16 slot free. Use NFS (Linux) or SMB (Windows) for sharing with your PC. The downside is speed: even PCIe gen 3 reaches ~3.5 GB/s, while your network may struggle to exceed 1 Gbit/s (which is about 100 MB/s). For comparison, a decent HDD can reach 250 MB/s, and for a SATA SSD it’s 550 MB/s.
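To make that speed gap concrete, here’s a quick back-of-envelope comparison for moving 100 GB over each path (the link speeds are theoretical maxima I’m assuming; NFS/SMB and TCP overhead will shave off another chunk in practice):

```python
# How long does it take to move 100 GB over each transport?
# Raw link rates only; real shares over NFS/SMB will be slower.
SIZE_GB = 100

links_gbps = {
    "1 GbE":       1e9 / 8 / 1e9,   # 0.125 GB/s
    "10 GbE":      10e9 / 8 / 1e9,  # 1.25 GB/s
    "SATA SSD":    0.55,            # GB/s
    "PCIe 3.0 x4": 3.5,             # GB/s, the local NVMe figure from above
}

for name, rate in links_gbps.items():
    print(f"{name:>11}: {SIZE_GB / rate:7.1f} s")
```

On gigabit Ethernet the transfer takes over 13 minutes; locally over PCIe 3.0 x4 it’s under half a minute, so the network share only makes sense for bulk or cold storage.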
It means the card is rated for PCIe 3.0 and may not work at 4.0 speeds. Not all traces will carry a signal equally. Just look at the struggles with DDR5 memory: it’s not just the memory controllers, it’s convincing the traces in the motherboard to carry the signal cleanly.
You can usually downgrade the speed manually in the BIOS, but running a PCIe 3.0 riser at 4.0 speeds can cause all kinds of hard-to-diagnose stability problems. It might work fine, but it also might not. That is, in essence, what these cards are: glorified riser cables on printed circuit boards.
Another option I’ve noticed cropping up recently is PCIe 3.0 to U.2 cards, which are fairly affordable on eBay right now. You can get U.2 to M.2 adapter cables and mount the M.2 drives with double-sided foam tape somewhere convenient. It’s a more expensive option, probably approaching $100 for the card and cables, but it could get you your 4+ NVMe drives at PCIe 3.0 x8 bandwidth.
I’ve never used one, though. Sun Oracle 7096186 7064634 NVME 8-Port PCI-Express Switch Card | eBay
Maybe someone else can elaborate on any pitfalls with using one of these.
Turns out the card I listed, and similar server cards, still require software support to function, and that support hasn’t materialized in the broader consumer markets.
Susanna is right. You might run into some problems when using a consumer motherboard’s bifurcation. After looking at the ASUS website, I am not confident that it will work like you expect.
Took a similar approach to hardware configuration for my NAS. I purchased a RIITOP 4-port M.2 NVMe card and installed 4x 2TB Crucial P3 NVMe drives, which are working fine with bifurcation set to x4/x4/x4/x4 on an ASRock EPYCD8-2T.
Depending on the type of case you have, maybe look at doing the following? A lot more flexible and expandable.
Would probably work fine with an x4/x4 configuration at gen 4 speeds. However, the next thing to check is whether your motherboard can run x8 and x4/x4 simultaneously. It may only allow the top slot to bifurcate in x4/x4/x4/x4 mode, or it may require both slots to run x4/x4 instead of x8 and x4/x4.
The card is cheap enough and useful to have around, so it probably doesn’t hurt to give it a go.