Supermicro H13SAE-MF weird Bifurcation U.2 Problems

Hi Guys,

so long story short, I got a brand-new Supermicro H13SAE-MF board last week, plugged everything in, and it booted right up. Memtest and stress tests passed, so I wanted to check on my two Samsung PM9A3 U.2 3.84 TB drives, but they did not show up.

I tried changing the lane configuration for bifurcation support in the BIOS, but that didn't work out.
Called my technician at the distributor; he told me it's blocked by AMD since this is a consumer platform.
Talked to Supermicro, and they were like “lol, rtfm, fk newbs, k, bye” and sent me a schematic from their manual. Turns out they mixed some lanes with a PCIe switch, and the board doesn't support x4/x4/x4/x4 bifurcation.

// Clarification from me: they are not using a PCIe switch, they are muxing!

My bad. So I switched plans: installed two Samsung 980 Pro 1 TB drives in the M.2 slots for booting the OS, ditched my x4 bifurcation card, and bought two breakout cards (10Gtek) that adapt from PCIe x4 to SFF-8643, which I then connected to my SSDs.

The SSDs are now connected to the PCIe 5.0 slots that are routed directly to the CPU; the single x4 slot from the chipset is now occupied by my X520 network card.

Now the weird part: the SSDs randomly show up or are gone, and I can't figure out why.
I tried stress-testing the connection with benchmark tools and had no problems.
Both disks were detected at some point and reported the correct link (PCIe 4.0 x4).
So I swapped cables (two different manufacturers), but to no avail; the problem stayed.

So I switched PCIe slots; nothing changed.
I tested the disks with the x4 bifurcation card on my previous ASRock Rack board (B650) and had zero problems.

What changed from then to now?
I swapped out the ASRock Rack board for the Supermicro H13SAE-MF, swapped the base OS from Windows 10 (for testing) to Windows Server 2022 (for production use), and swapped the 10G x4 bifurcation card for the single breakout cards.

Did anyone experience similar issues? IPMI and BIOS are already updated, Windows is patched, and all drivers (Windows 11 base) from the AMD site were installed successfully.
When available, both disks (in software RAID 1) show good R/W speeds and don't have any problems. But after a reboot or some other random event they seem to time out and don't come back up until another reboot.
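In case it helps with debugging, this is a minimal sketch of the kind of watcher I'd run to timestamp the dropouts, so a disappearance can be correlated with reboots or other events. It's the Linux variant (e.g. from a live USB) and the paths and poll interval are just illustrative; on Windows you'd enumerate `\\.\PhysicalDriveN` or query WMI instead.

```python
# Minimal NVMe dropout watcher (illustrative sketch, Linux device paths).
# Polls for visible NVMe namespaces and timestamps every change.
import glob
import time
from datetime import datetime

def diff_devices(before, after):
    """Return (appeared, disappeared) between two device-name snapshots."""
    before, after = set(before), set(after)
    return sorted(after - before), sorted(before - after)

def snapshot():
    # NVMe namespaces show up as /dev/nvme0n1, /dev/nvme1n1, ...
    return glob.glob("/dev/nvme*n1")

def watch(interval_s=5):
    seen = snapshot()
    print(f"{datetime.now().isoformat()} start: {sorted(seen)}")
    while True:
        time.sleep(interval_s)
        now = snapshot()
        appeared, gone = diff_devices(seen, now)
        for dev in appeared:
            print(f"{datetime.now().isoformat()} APPEARED: {dev}")
        for dev in gone:
            print(f"{datetime.now().isoformat()} DROPPED:  {dev}")
        seen = now
```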

Thanks for reading.

I’m sorry to hear about your experiences.

While I can’t specifically help, I can attest that Supermicro’s tech support response is full of shit.

Here’s the block diagram from the manual:

  • There ain’t no PCIe Switch chip on that motherboard

  • What’s there is a PCIe mux chip that routes 8 lanes from the main x16 CPU PCIe slot to the second PCIe x8 slot. If both slots are occupied, each gets 8 lanes.

  • This PCIe muxing is a bog-standard design and has nothing to do with PCIe bifurcation.

  • I checked to make sure that I’m not spouting BS (I don’t have an AM5 system yet): an example is the ASUS ProArt B650-CREATOR, which also uses PCIe muxing between the “large” CPU PCIe slots and properly supports PCIe bifurcation (for reference: [Motherboard] Compatibility of PCIE bifurcation between Hyper M.2 series Cards and Add-On Graphic Cards | Official Support | ASUS Global )

  • It’s an arbitrary choice by Supermicro not to implement this feature in their BIOS; it has nothing to do with AMD or general technological limitations.


My first inquiry would be:

- Which specific adapter did you use to connect your U.2 SSDs to the motherboard? PCIe Gen4 signal integrity can still be very fragile here.


Hi Pleb, thanks for reaching out to help me.

First off, I was wrong about the PCIe switch; I mixed up the terms switching and muxing, my bad!

About your inquiry: I used a 10Gtek PCIe x4 to SFF-8643 adapter from Amazon (2 pcs).
I've now bought more adapters from StarTech and Glotrends.
StarTech rates theirs for PCIe 2.0 and 3.0; Glotrends says 3.0 and 4.0.
However, I don't know how good or bad signal integrity will be, so whether it's really “PCIe 4.0” capable can only be told by the test of time.

I would also be happy with having those running stably at PCIe 3.0, no problem with that, because it's still blazing fast. Do you know a way to “force” the link into a lower standard?
In the BIOS I didn't find an option to force the link to a previous gen (3.0).


I think I have tested both these brands some time ago and both were trash (I have a long PCIe adapter thread where I vomit out my personal thoughts about the “quality” in the industry).

The absolutely only (!) passive PCIe-to-U.2 parts that really are PCIe Gen4-capable (meaning absolutely no PCIe Bus Errors even after 24 h of full transfer load) are newer models from Delock:

That doesn’t mean other quality models don’t exist; I’ve just given up on looking for them. You don’t need a “test of time” to check how good adapters are regarding PCIe Gen4, just a motherboard with the PCIe Advanced Error Reporting (AER) option enabled in the BIOS. On AM4 systems this option was only functional for PCIe lanes coming directly from the CPU, not from an X570 chipset, for example. As mentioned, I’m still ignorant regarding AM5 platforms; the earliest I’ll check it out is Zen 5 (or whenever they upgrade the CPU-to-chipset interface to PCIe Gen5 x4).
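For the curious: on a recent Linux kernel those AER counters are also exposed directly in sysfs (`aer_dev_correctable` and friends), so a marginal adapter shows up as a climbing correctable-error count under load, no BIOS log digging needed. A rough sketch, assuming your kernel exposes these attributes:

```python
# Sketch: dump the nonzero per-device AER counters that Linux exposes in
# sysfs (aer_dev_correctable / aer_dev_nonfatal / aer_dev_fatal on recent
# kernels). Climbing correctable counts under sustained transfer load are
# the telltale sign of a marginal Gen4 adapter or cable.
from pathlib import Path

def parse_aer(text):
    """Parse an aer_dev_* file ('RxErr 0', 'BadTLP 3', ...) into a dict."""
    counters = {}
    for line in text.splitlines():
        name, _, value = line.rpartition(" ")
        if name and value.isdigit():
            counters[name.strip()] = int(value)
    return counters

def scan(kind="aer_dev_correctable"):
    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        f = dev / kind
        if f.exists():
            nonzero = {k: v for k, v in parse_aer(f.read_text()).items() if v}
            if nonzero:
                print(f"{dev.name}: {nonzero}")
```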

  • First off: if you’re only using a single PCIe x4 AIC adapter for a single U.2 SSD, you don’t need any PCIe bifurcation support from the BIOS. That only comes into play if you want to operate multiple x4 SSDs in a single slot (for example, 2 in an x8 or 3 to 4 in an x16 slot).

  • The option to manually specify the PCIe generation of a given slot is unfortunately optional for manufacturers; most ASUS motherboards have it, but an ASRock Rack X470D4U, for example, does not.
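On the missing BIOS option: even when the firmware gives you nothing, you can at least verify from a Linux live stick what the link actually trained at, via sysfs. (Capping the generation in software is in principle possible by writing the Target Link Speed field in the Link Control 2 register with setpci, but that's a more invasive experiment.) A small sketch; the device address is a placeholder you'd look up with lspci:

```python
# Read negotiated vs. maximum PCIe link speed/width from Linux sysfs.
# The BDF address below is a placeholder - look yours up with `lspci`.
from pathlib import Path

# Speed strings as printed by recent kernels, mapped to PCIe generations.
GEN_BY_SPEED = {
    "2.5 GT/s PCIe": 1,
    "5.0 GT/s PCIe": 2,
    "8.0 GT/s PCIe": 3,
    "16.0 GT/s PCIe": 4,
    "32.0 GT/s PCIe": 5,
}

def link_gen(speed_string):
    """Map a sysfs *_link_speed string to a PCIe generation (None if unknown)."""
    return GEN_BY_SPEED.get(speed_string.strip())

def report(bdf="0000:01:00.0"):  # placeholder address
    dev = Path("/sys/bus/pci/devices") / bdf
    cur = (dev / "current_link_speed").read_text()
    cap = (dev / "max_link_speed").read_text()
    width = (dev / "current_link_width").read_text().strip()
    print(f"{bdf}: running Gen{link_gen(cur)} x{width}, capable of Gen{link_gen(cap)}")
```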

Maybe you could ask Supermicro for a custom BIOS that exposes these options. They 100 % have such versions for development/testing purposes.