PCIe M.2 Expansion card and Bifurcation

I have some questions about configuring this card: https://www.aliexpress.com/item/1005004011484959.html

I have 2x M.2 drives connected atm and had to set the PCIe lanes to x4/x4/x4/x4 to see the second drive on this expansion card.

The motherboard is an ASRock B650M and I have a 4090 in the LAST (bottom) PCIe slot, with this expansion card in the primary slot (this was due to space).

The issue atm is I notice the 4090 runs at PCIe 4.0 x4 and not x16. I tested some games and it seems OK, but it just seems wrong.
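
Here's roughly how I'm checking the negotiated link, for reference - a minimal sketch assuming the NVIDIA driver's nvidia-smi tool is on the PATH (GPU-Z shows the same info on Windows):

```python
# Quick check of the negotiated PCIe link via nvidia-smi (works on Windows and Linux).
import subprocess

out = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=name,pcie.link.gen.current,pcie.link.width.current,"
     "pcie.link.gen.max,pcie.link.width.max",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout.strip()

print(out)  # e.g. "NVIDIA GeForce RTX 4090, 4, 4, 4, 16" -> Gen 4 at x4, out of a possible x16
```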

Anyone know how to fix this? Is there a trick to using M.2 expansion cards? An exact config?

The BIOS is overcomplicated, with hundreds of options, so I have no idea if I can force the last slot to x16 and the first to remain x4.

Not sure exactly what my options are, since I don't understand the BIOS options. You can do A LOT with the PCIe slots, but it makes no sense to me.

This is a consumer platform board. Consumer boards only have a single electrical x16 slot (16 lanes). Some of the more expensive boards have the ability to bifurcate that into two separate slots with 8 lanes each (x8). The only slot that will work with the GPU at anything above x4 is the top slot, and the only slot that will work with a multi-M.2 expansion card is also the top slot. What you are trying to do requires HEDT to do properly. Unfortunately, X299 was the last proper HEDT before prices got wildly out of hand; both Intel and AMD seem interested in offering only lite workstation and professional workstation platforms now.

There is also the occasional 'odd' board that uses PCIe switches to provide more lanes. For example, Supermicro has at least Z490 and Z590 consumer boards with two pairs of x16+x0 or x8+x8 slots - but ultimately, time on the PCIe bus is shared between the devices, so you cannot use all of the bandwidth you would think is available on all four slots at the same time. For situations like yours, though, it would be just fine.
Basically - unfortunately - your hardware cannot do what you want it to.

Get a board with more PCIe lanes: Xeon W or Threadripper, or a full server board.

Consumer boards only have 16 lanes for the GPU slot and (depending on the board) maybe an x4 slot instead of an M.2. Everything else is a crappy chipset slot, and the chipset is overbooked more often than not already.

Effectively you have one x16 slot + some scraps.

Some boards offer to split the x16 into two x8 slots. But that’s the best you can get. Or run a 4090 on (hopefully not chipset) 4 lanes :joy:

Oh, I feel I should probably also add what bifurcation actually is. Bifurcation is taking a set of physical lanes - in this case x16 - and logically dividing them into multiple virtual slots. This can sometimes also involve physically switching where those lanes are connected, such as x16+x0 or x8+x8. When you change your setting from x16 to x4+x4+x4+x4, you are telling your motherboard to look for four separate 4-lane (x4) devices on the top physical x16 slot.
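
If you want to see what actually got negotiated after flipping that setting, here's a minimal sketch (assuming a Linux boot, where sysfs exposes the link attributes; HWiNFO-type tools report the same per-device link on Windows):

```python
# Minimal sketch: print each PCI device's negotiated link width/speed from Linux sysfs.
# After bifurcation, every x4 endpoint on the expansion card shows up as its own device here.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    try:
        width = (dev / "current_link_width").read_text().strip()
        speed = (dev / "current_link_speed").read_text().strip()
    except (FileNotFoundError, OSError):
        continue  # not every PCI function exposes link attributes
    print(f"{dev.name}: x{width} @ {speed}")
```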

Yeah, I may need to put the waterblock on sometime and move the card back to slot 1, as slot 4 seems to go to x4 by default.

I can then at least run PCIe 4.0 x8 on the main slot 1.

I’ll get around to it sometime.

Sadly the 4090 card blocks ALL SLOTS if put in slot 1, lol

I have also been looking at this issue.

On B650 boards you have 24 CPU lanes total, meaning you can only have the following configurations, while everything else is switched off (rough lane math after the list):

  • 1 GPU in the PCIe 4.0 x16 slot + 2x M.2 x4 + 2x SATA 6 Gbit
  • 1 GPU in the PCIe 4.0 x16 slot + 1 GPU in PCIe 3.0 x4 (or x1, or x2)
  • 1 GPU in the PCIe 4.0 x16 slot + 1x M.2 x4 + 4x SATA 6 Gbit
  • 1 GPU in the PCIe 4.0 x16 slot + 3-4x M.2 x4
  • 1 GPU in the PCIe 4.0 x16 slot + misc PCIe devices (add-in cards etc.) at x1, x1, x2 + 1x M.2 or 2x SATA
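
The rough lane math behind that list - a back-of-the-envelope sketch using the 24-lane figure above; exact routing is board-specific:

```python
# Back-of-the-envelope CPU lane budget for the configurations listed above.
cpu_lanes = 24          # general-purpose CPU lanes quoted above (board-specific)
gpu_slot  = 16          # the single x16 GPU slot
cpu_m2    = 2 * 4       # two CPU-attached M.2 sockets at x4 each

print(cpu_lanes - gpu_slot - cpu_m2)   # 0 -> nothing left for a CPU-attached multi-M.2 card
# The only way to feed such a card from the CPU is to bifurcate the x16 slot,
# which is exactly what drops the GPU to x8 or x4.
```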

20 CPU lanes total; the other 4 are dedicated to the chipset.
The main 20 lanes are usually allocated as four lanes to the top M.2 and all 16 to the top PCIe slot. Everything else is routed through the chipset.
On more expensive boards there are two physically x16-length slots, with the top slot capable of 16 lanes, or 8 lanes with the second 8 routed to the second x16-length slot (or the middle one, when there are additional x16-length slots that are only x4 or x1 electrical and chipset-routed). There are also very uncommon boards, such as the Z490 and Z590 boards with a PCIe switch, that allow the top and third slot to have 16 lanes each, or all four to have 8 lanes - but you cannot utilize the full bandwidth of all the devices at the same time, since the switch still only has 16 lanes to the CPU.

Sorry for the double, for some reason the first post was not showing up as a reply until after I deleted it and made this one.


The GPU should run at the full 16 lanes in the top slot no matter how much other stuff you put in. You haven't been very specific about which board you have, and while I'm sure some B650 boards with x8 + x8 may exist, I highly doubt that any micro-ATX boards do (the M in B650M).

To do what you want, you need a different board. If you just want to have a lot of space and do not care about the raw speed of that space, you can get one of those really expensive boards with something like five M.2 slots on it. Aside from the top one (and occasionally the ones that take lanes from the GPU), they are all connected to the chipset, so you can't really get more bandwidth than 4 lanes' worth even if you were to stripe (RAID 0) them all together.
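
To put a number on the striping point - a rough sketch, assuming a PCIe 4.0 x4 CPU-to-chipset uplink and the nominal ~1.97 GB/s per Gen 4 lane (real throughput lands a bit lower after overhead); the drive count and rated speed below are just example figures:

```python
# Why RAID 0 across chipset-attached M.2 slots doesn't scale: they share one uplink.
GEN4_PER_LANE_GBPS = 1.97
uplink = 4 * GEN4_PER_LANE_GBPS        # ~7.9 GB/s shared by everything behind the chipset

drives = 4                             # example: four Gen 4 M.2 drives striped together
per_drive = 7.0                        # example rated sequential read per drive, GB/s

print(f"on paper: {drives * per_drive:.1f} GB/s")   # 28.0 GB/s
print(f"ceiling:  {uplink:.1f} GB/s")                # ~7.9 GB/s through the x4 uplink
```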

It's an ASRock B650 PG Riptide.

I have 2x M.2 slots in use, and 2x M.2 drives IN the PCIe expansion card. Only one is Gen 4; the rest are Gen 3, I think.

Not much I can do, since the only place the card can go atm is the bottom slot, or I forgo my drives on the expansion card, which is not really viable since I actually use them all.

I can't even use a PCIe ribbon cable, as the damn 4090 blocks all the ports; kind of a dumb design by ASUS, really. Oh, and mounting the 4090 upright doesn't work, as it bumps into the CPU cooler, lol.

I COULD get a water cooler to solve that, but I believe it hits something else on top of that. Plus it would then be up against the glass, which means overheating. Basically I'm forced to go the waterblock route, and I do have the block and pump etc… just cbf atm.

Installing a DIY waterblock loop is not a fun process, takes forever to do properly.

Yes, good post.

I’ve had to choose between B650 and B650E.

In my opinion, B650 is the best value if you don’t need excessive onboard SATA or USB connectivity and only use 1 GPU.

In my case, I needed support for 2 GPUs in order to pass the first through and use the second one for the host.

My problem with B650 was that the second physical PCIe 4.0 (or 3.0) x16 slot is usually only wired as x1 or x2. So the question was whether a second GPU could even be operated in a second physical x16 slot that is only x1 or x2 electrically.

The answer is yes, for anyone else wondering.

Remember the good times of AM4?

Being able to do 8x4x4x on 2 16x slots?

Never forget.

AM4 does not offer this. AM4 only offers 20 lanes (plus 4 lanes used for the chipset). There can be a PCIe switch used to make more slots, but the bandwidth limit is still 16 lanes plus 4 more for the SSD.
AM3 wasn't an SoC and apparently had the ability to support far more varied configurations - but you were stuck with FX. Blech.

(screenshot of the PCIe slot specifications from the board's product page)

Right from the board's page. If you were trying to use 16 lanes on slot 4 and 4 lanes on slot 1, it's probably not there physically.

I think there is a miscommunication here. In any case, I am certainly not imagining having a GPU in x8 mode in one slot and an NVMe expansion card with 2 SSDs in another.

Yes, it does appear that way. I thought you meant two 16-lane electrical slots for some reason. Pretty high-end board, from a company that actually cares, to be able to pull off that config. Not many have dual 8-lane electrical slots, and of those that do, not many have that bifurcation option, from what I have heard.

Finally did the water block on the 4090, took the whole day since my case isn’t the easiest to work with (massive case but everything hits each other somehow).

Anyway, it looks like the expansion card does not work in bifurcation mode in the last slot, which is the only place for it.

Guess I’ll have to save up for a proper expansion card.

EDIT: I'll just get a second riser card for the single x1 slot; that should do the trick.

Is there a way to bifurcate an x4 slot into x1/x1/x1/x1?
A single PCIe 5.0 lane is equivalent in bandwidth to PCIe 3.0 x4.
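
The bandwidth math behind that claim, using the nominal per-lane rates (a sketch; it ignores protocol overhead):

```python
# Nominal per-lane throughput by PCIe generation, in GB/s (after 128b/130b line encoding).
per_lane = {3: 0.985, 4: 1.969, 5: 3.938}

print(per_lane[5] * 1)   # PCIe 5.0 x1 -> ~3.94 GB/s
print(per_lane[3] * 4)   # PCIe 3.0 x4 -> ~3.94 GB/s, same ballpark
```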

@The_Riddick

Bifurcation is possible only on the first (and only) x16 slot, if the board supports it at all. This is not a question of your specific installation configuration; it is a hard limit.

If you can get a carrier board with a PCIe switch (the expensive ones), then while avoiding the bifurcation limitations you will run into the bandwidth limits of the other slots.

You are, after all, connecting it to an x4 slot and dividing the bandwidth per drive even further. At least it's not connected via the chipset.

@jxdking

No board supports this (so far as I know; it might change in the future). It's either x8/x8 or x4/x4/x4/x4 for the primary x16 slot. On PCIe 5.0 boards, x8/x8 dominates for some reason.

Efficient ways to redistribute abundant PCIe 5.0 bandwidth to PCIe 4.0 and lower also do not exist. You can plug a PCIe 4.0 or 3.0 device into a PCIe 5.0 slot, but you gain nothing versus native.

There is only a single product that can do this: the Samsung 990 EVO drive. Its controller can operate in either PCIe 5.0 x2 or PCIe 4.0 x4 mode transparently, without losing performance. It might be the first sign of a future approach to otherwise useless PCIe 5.0 connectivity.

It's kinda sad that we have so much bandwidth, but in a completely useless implementation. Market segmentation for the win, I guess.

Yes.
On the CPU side, we need Intel and AMD to add support for bifurcating down to individual x1 lanes.
On the chipset side, the function is already there; the board vendors just need to implement it.

When trying to hook up a bunch of NVMe drives on a consumer motherboard, for example 12 M.2/U.2 drives, I don't really care about the individual bandwidth. They are running in parallel anyway, so the overall bandwidth is not an issue. I am a big fan of USB/Thunderbolt disk enclosures; the connectivity is practically limitless.

Yeah, figured it was a slot 1-only thing.

I did pick up a PCIe 3.0 x1 M.2 card, which should be fine for this Gen 3 NVMe, which btw is only a 2000 MB/s read speed drive.

From what I've read, the x1 port should be fine for these slower NVMe drives; I can put my much faster drives in the dedicated M.2 slots.
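
A quick sanity check on the numbers, with the same nominal per-lane figures used earlier in the thread (a sketch; real-world throughput lands a bit lower):

```python
# Will a PCIe 3.0 x1 link keep up with a ~2000 MB/s drive? Nominal numbers, pre-overhead.
GEN3_PER_LANE_MBPS = 985
link_cap = 1 * GEN3_PER_LANE_MBPS   # x1 link -> ~985 MB/s ceiling
drive = 2000                        # rated sequential read of the drive, MB/s

print(min(link_cap, drive))         # ~985 MB/s: sequential reads are roughly halved,
                                    # while small random I/O mostly won't notice the cap.
```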

If I get another M.2 drive in the future, I'll remove one of the smaller, older ones currently in use.