AM5 Motherboards and PCIe Lanes

With the release of B650 today, I took a peek at all the boards (https://www.asus.com/microsite/motherboard/AMD-AM5-X670-B650/b650/download/ASUS-B650-Series-Specs.pdf). My question is about the limiting of speeds and how that interacts with current-generation cards. PCIe 5.0 doubles the per-lane data rate compared to PCIe 4.0. If a PCIe 5.0 x16 slot operates at PCIe 5.0 x8, does that mean it will still be able to operate at PCIe 4.0 x16, since the total bandwidth is the same? Would it be better to wait until boards with better access to full-speed lanes are released? I have an LSI card and a 10Gb NIC to include in addition to a GPU, and the slot/lane assignments are deterring me from purchasing these first-gen boards.

No. A card supports a maximum per-lane rate (corresponding to a particular PCIe revision), and is electrically wired for a number of lanes (corresponding to number of traces/pins actually connected). The board slot also supports a maximum per-lane rate and is electrically wired for a number of lanes (the operating number).

The two simply use the maximum lane count (connected pins) and lane rate (PCIe revision) they have in common, which becomes the total bandwidth of the connection. That’s it; there’s no conversion or equivalence.

Being this simple is what allows for backwards and forwards compatibility among boards and cards. All of the equivalent-bandwidth discussions you see are people doing “what if” math and wishing things were wired differently.
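The negotiation described above can be sketched in a few lines. This is a simplified model, not real link-training code; the per-lane rates are approximate usable throughput (after encoding overhead), and the function name is mine:

```python
# Minimal sketch of PCIe link negotiation: the link trains to the
# highest revision and the widest lane count both ends have in common.
# Per-lane rates are approximate usable GB/s (post-encoding).
PER_LANE_GBPS = {1: 0.25, 2: 0.5, 3: 0.985, 4: 1.969, 5: 3.938}

def negotiated_link(card_gen, card_lanes, slot_gen, slot_lanes):
    gen = min(card_gen, slot_gen)        # common PCIe revision
    lanes = min(card_lanes, slot_lanes)  # common wired lane count
    return gen, lanes, lanes * PER_LANE_GBPS[gen]

# A PCIe 4.0 x16 card in a slot wired as 5.0 x8 links at 4.0 x8,
# i.e. half the card's maximum bandwidth -- there is no "conversion"
# of the unused Gen5 headroom into extra Gen4 lanes.
gen, lanes, gbps = negotiated_link(4, 16, 5, 8)
print(f"Gen{gen} x{lanes}, ~{gbps:.1f} GB/s")
```

This is why the x8 slot answers the original question with "no": the card's 16 lanes don't matter once only 8 are physically wired on the slot side.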

3 Likes

I am very disappointed with the motherboard manufacturers’ AM5 boards. Most of the AM5 motherboards only have two physical slots. I won’t be able to use my go-to motherboard, the ASRock X670E Taichi, because it only has room for a graphics card or an ethernet card, not both. Also, I can’t switch to the Intel platform because Linux still doesn’t support Intel’s modern hybrid cores well. So I guess I am stuck unless the AMD motherboard manufacturers get their collective brains out of their asses.

1 Like

You could try the Asus ProArt X670E Creator WiFi (Can’t link to asus:/Motherboards-Components/Motherboards/ProArt/ProArt-X670E-CREATOR-WIFI/techspec/). That already has 10GbE and Thunderbolt (USB4), and has the standard x16 or x8/x8 slots from the CPU plus an x16 physical/x4 electrical slot from the chipset. The added latency of the chipset might not be an issue, even if you do need to add another NIC.

I’d be willing to live with PCH-attached expansion slots at this point, since that would alleviate and potentially eliminate the need for additional PCIe switches, redrivers, etc. Just give me more expansion slots—not everybody needs five M.2 slots! Hopefully one of the mobo makers comes out with a “WS” AM5 board that opens up some alternative configurations at a lower price point than sWRX. Threadripper Pro can’t be the answer for everyone.

1 Like

@SWPadnos I looked at the motherboard you suggested, and it has the same problem as the ASRock X670E Taichi: not enough expansion slots for my use case. It seems like I am stuck on AM4 or upgrading to Threadripper Pro.

Well, it looks like when it comes to AM5, Threadripper Pro is the answer when you are looking for more than a gaming rig.

1 Like

Hi, I am trying to learn more for doing stuff like home servers and whatnot, but I don’t know much about the various PCIe expansion slot use cases beyond GPUs, additional networking for faster speeds, and adding extra M.2 drives. Can you tell me a bit about what you need the expansion slots for? Looking at it myself, that motherboard might be the right one for me, since USB4 seems to be rare on X670E motherboards and it’s an ecosystem I’m interested in.

Personally I have a second GPU and a USB PCIe card. My computer runs Linux on the host, and Windows in a VM. The second GPU and USB card are passed through to the VM - so only the VM can control them, not the Linux host. Then the Linux host has its own GPU.

This means I effectively have a Windows gaming machine and Linux workstation in one system.

Yeah, that’s kind of like what I was planning to do, though I wasn’t sure what would be a good GPU price, so I was thinking of maybe getting one much later. The plan was a Linux workstation (or maybe unRAID with a Linux VM for the workstation), with the Linux side using the iGPU and Windows handling the GTX 1070.

Didn’t realize about the USB PCIe thing.

The MSI MEG X670E Ace is so far the “cheapest” board with x8/x8/x4 from the CPU.
I wonder how good the onboard 10Gb/s network card is; I have my doubts that you will see 10Gb/s if you use one of the three M.2 slots attached to the chipset at the same time.
I don’t think we will see a board with, e.g., x8/x8/x8/x4 realized via a PLX switch.
I’ll wait until January; maybe Intel will bring something useful in Q1.

Sorry for the late reply, @ultraforce. I have gotten behind on my emails. I am also new to servers and virtualization, and my next build will be either a Proxmox virtualization server or ESXi. Right now, I am using VMware Workstation 16.02 and am hitting a wall because of the limitations of VMware Workstation.

There are two problems I need to overcome: the shortage of PCIe lanes and the shortage of physical PCIe slots. I was hoping AM5 would increase the PCIe lanes available on AMD’s non-Threadripper Pro and non-EPYC CPUs. On the other hand, I didn’t expect motherboard manufacturers to reduce the number of physical PCIe slots on their ATX and EATX motherboards.

You may ask why I care about PCIe lanes and physical slots: I want to use two physical graphics cards, an Intel X550 or X710, and an NVMe drive. My problem with the AM4 platform is that it is hard to find a motherboard that will break out the PCIe lanes for that much equipment. I would like to see eight lanes for each graphics card, four for the ethernet card, and four for the NVMe drive. The problem with the AM5 platform is not enough physical slots and PCIe lanes, or that AM5 boards won’t let me break out the PCIe lanes as I want.
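Interestingly, a back-of-envelope lane budget for that build just barely fits AM5’s CPU lanes, assuming roughly 24 usable general-purpose lanes from the CPU (the exact split varies by board, and the chipset uplink is separate) — which is exactly why it’s frustrating that no board routes them this way:

```python
# Hedged lane-budget sketch for the build described above.
# Assumption: ~24 usable general-purpose CPU lanes on AM5
# (the chipset downlink is counted separately).
CPU_LANES = 24

devices = {
    "GPU #1": 8,
    "GPU #2": 8,
    "Intel X550/X710 NIC": 4,
    "NVMe SSD": 4,
}

needed = sum(devices.values())
print(f"lanes needed: {needed} / {CPU_LANES} available")
print("fits" if needed <= CPU_LANES else "over budget")
```

The lanes exist on the CPU; the problem is that consumer boards rarely wire them into four usable slots with that split.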

I want to use an ethernet card instead of a motherboard NIC because most motherboard manufacturers don’t use NICs with features like Intel Virtualization Technology for Connectivity or SR-IOV.

It’s been a while since I posted here. My graphics card is stuck at PCIe 3.0 x8, which really blows. The only hope was that RDNA3 would support PCIe Gen 5 to get anything close to PCIe 4.0 x16 speeds. I was able to get my system up and running with a ROG Strix X670E-E Gaming WiFi board. It’s not the greatest, but when newer AM5 CPUs come out with more PCIe lanes, I’ll likely do a board swap and a CPU swap.

This situation won’t improve in the coming years. We might see 32 lanes instead of 28, but that won’t change the “one x16 or two x8 slots + chipset x1 crap”. The 4 lanes we get over AM4 are either locked into an M.2 slot (most common) or Thunderbolt. Some boards have a x4 CPU slot (instead of CPU M.2 slot), but there is no board with x8/x8 split slot AND CPU x4 slot.

It is deliberately designed so you can’t expand to a second GPU or more NVMe/10+G Networking. You need Workstation class board+CPU for that.

I really want to see the upcoming mid- to low-range Radeons come in a PCIe x8 format, so we can at least get a board with x8/x8 and have 8 free lanes, which could fuel two 5.0 NVMe drives, an HBA, or 10/25/40G networking. Or just take the hit on the GPU and run it via PCIe 4.0 x8.

TL;DR: “Consumers don’t want to expand, pls buy our Threadripper/Xeon” :slight_smile:

1 Like

But it is sinfully expensive for a “consumer” board.

:+1:

1 Like