Supermicro H13SAE-MF weird Bifurcation U.2 Problems

Hi Guys,

So, long story short: I got a brand-new Supermicro H13SAE-MF board last week, plugged everything in, and it booted right up. Memtest and stress tests completed successfully, so I wanted to check on my two Samsung PM9A3 U.2 3.84 TB drives, but they did not show up.

I tried changing the lane settings for bifurcation support in the BIOS, but that didn't work out.
I called my technician at the distributor; he told me it's blocked by AMD since this is a consumer platform.
Then I talked to Supermicro, and they were like "lol, rtfm, fk newbs, k, bye" and sent me a schematic from their manual. Turns out they combine some lanes with a PCIe switch, and the board doesn't support x4/x4/x4/x4 bifurcation.

// Clarification from me: they are not using a PCIe switch, they are muxing!

My bad. So I switched plans: I installed two Samsung 980 Pro 1 TB drives in the M.2 slots for booting the OS, ditched my x4 bifurcation card, and bought two breakout cards (10Gtek) that adapt PCIe x4 to SFF-8643, which I then connected to my SSDs.

The SSDs are now connected to the PCIe 5.0 slots that are routed directly to the CPU; the single x4 slot from the chipset is now occupied by my X520 network card.

Now the weird part: the SSDs randomly show up or are gone, and I can't figure out why.
I tried stress-testing the connection with benchmark tools and had no problems.
Both disks were detected at some point and showed the correct link (PCIe 4.0 x4).
So I swapped cables (two different manufacturers), but to no avail; the problem stayed.
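For reference, on a Linux live stick you can confirm the negotiated link from the LnkSta line of `lspci -vv`. A minimal Python sketch over made-up sample output (the device address and text below are invented; on a real system you'd feed in the actual lspci output):

```python
import re

def parse_link_status(lspci_vv_text):
    """Map each PCIe device to its negotiated (speed GT/s, width) from lspci -vv."""
    links, device = {}, None
    for line in lspci_vv_text.splitlines():
        if line and not line[0].isspace():      # device headers start at column 0
            device = line.split()[0]
        m = re.search(r"LnkSta:\s*Speed\s+([\d.]+)GT/s.*Width\s+x(\d+)", line)
        if m and device:
            links[device] = (float(m.group(1)), int(m.group(2)))
    return links

# Made-up sample; a PCIe 4.0 x4 link negotiates 16 GT/s:
sample = """\
41:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd Device a80a
\tLnkCap:\tPort #0, Speed 16GT/s, Width x4
\tLnkSta:\tSpeed 16GT/s (ok), Width x4 (ok)
"""
print(parse_link_status(sample))  # {'41:00.0': (16.0, 4)}
```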

Then I switched the PCIe slots; nothing changed.
I tested the disks with the x4 bifurcation card on my previous ASRock Rack board (B650) and had zero problems.

What changed from then to now?
I swapped the ASRock Rack board for the Supermicro H13SAE-MF, swapped the base OS from Windows 10 (for testing) to Windows Server 2022 (for production use), and swapped the 10Gtek x4 bifurcation card for the single breakout cards.

Has anyone experienced similar issues? IPMI and BIOS are already updated, Windows is patched, and all drivers (Win 11 base) from the AMD site installed successfully.
When available, both disks (in software RAID 1) show good R/W speeds and don't have any problems. But after a reboot or some other random event they seem to time out and not come back up until yet another reboot.
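In case it helps anyone reproduce this: the pattern is easy to log with a dumb poller that diffs the visible disks between scans. A Python sketch with faked scan results (on a real box the scan callback would list /dev/nvme* on Linux or query Get-PhysicalDisk on Windows):

```python
import time

def watch_devices(scan, interval_s=5.0, rounds=3, log=print):
    """Poll scan() and log devices that appear or disappear between polls."""
    previous = set(scan())
    for _ in range(rounds):
        time.sleep(interval_s)
        current = set(scan())
        for dev in sorted(current - previous):
            log(f"appeared: {dev}")
        for dev in sorted(previous - current):
            log(f"disappeared: {dev}")
        previous = current
    return previous

# Faked scan results standing in for real device enumeration:
scans = iter([
    ["nvme0n1", "nvme1n1"],   # initial scan
    ["nvme0n1", "nvme1n1"],   # round 1: all good
    ["nvme0n1"],              # round 2: one SSD timed out
    ["nvme0n1", "nvme1n1"],   # round 3: it came back
])
events = []
watch_devices(lambda: next(scans), interval_s=0.0, rounds=3, log=events.append)
print(events)  # ['disappeared: nvme1n1', 'appeared: nvme1n1']
```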

Thanks for reading.

1 Like

I’m sorry to hear about your experiences.

While I can't help with the specific issue, I can attest that Supermicro's tech support response is full of shit.

Here’s the block diagram from the manual:

  • There ain’t no PCIe Switch chip on that motherboard

  • What's there is a PCIe mux chip that routes 8 lanes from the main x16 CPU PCIe slot to the second PCIe x8 slot. If both slots are occupied, both slots get 8 lanes each.

  • This PCIe muxing is a bog-standard design and has nothing to do with PCIe bifurcation.

  • I checked to make sure that I'm not spouting BS (I don't have an AM5 system yet): an example is the ASUS ProArt B650-CREATOR, which also uses PCIe muxing between the "large" CPU PCIe slots and properly supports PCIe bifurcation (for reference: [Motherboard] Compatibility of PCIE bifurcation between Hyper M.2 series Cards and Add-On Graphic Cards | Official Support | ASUS Global )

  • It's an arbitrary choice by Supermicro not to implement this feature in their BIOS; it has nothing to do with AMD or general technological limitations.

:frowning:

My first inquiry would be:

- Which specific adapter did you use to connect your U.2 SSDs to the motherboard? PCIe Gen4 signal integrity can still be very fragile here.

3 Likes

Hi Pleb, thanks for reaching out to help me.

First off, I was wrong about the PCIe switch; I mixed up the terms switching and muxing, my bad!

About your inquiry: I used a 10Gtek PCIe x4 to SFF-8643 adapter from Amazon (2 pcs).
I have now bought more adapters, from StarTech and GLOTRENDS.
StarTech says theirs is PCIe 2.0 and 3.0; GLOTRENDS says 3.0 and 4.0.
However, I don't know how good or bad the signal integrity will be, so whether it's really "PCIe 4.0" can only be told by the test of time.

I would also be happy with having those run stable at PCIe 3.0; no problem with that, because it's still blazing fast. Do you know a way to "force" them onto a lower standard?
In the BIOS I didn't find an option to force the link to a previous generation (3.0).

BR

I think I tested both of these brands some time ago and both were trash (I have a long PCIe adapter thread where I vomit out my personal thoughts about the "quality" in this industry).

The absolutely only (!) passive PCIe-to-U.2 parts that really are PCIe Gen4-capable (meaning absolutely no PCIe bus errors even after 24 h of full transfer load) are newer models from Delock:

That doesn't mean other quality models don't exist; I've just given up on looking for them. You don't need a "test of time" to check how good adapters are regarding PCIe Gen4, just a motherboard with the PCIe Advanced Error Reporting (AER) option enabled in the BIOS. On AM4 systems this option was only functional for PCIe lanes coming directly from the CPU, not from an X570 chipset, for example. As mentioned, I'm still ignorant regarding AM5 platforms; the earliest I'll check it out is Zen 5 (or whenever they upgrade the CPU-to-chipset interface to PCIe Gen5 x4).
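To make "no PCIe bus errors" measurable: with AER enabled, corrected errors land in the Linux kernel log, so you can simply tally them after a load test. A small Python sketch over made-up sample lines (the addresses are invented, but the format follows the usual Linux AER messages):

```python
import re
from collections import Counter

def count_aer_errors(dmesg_text):
    """Tally AER error lines per (device, severity) from kernel log text."""
    counts = Counter()
    for line in dmesg_text.splitlines():
        m = re.search(r"pcieport (\S+): AER: (Corrected|Uncorrected)", line)
        if m:
            counts[(m.group(1), m.group(2))] += 1
    return counts

# Made-up sample in the usual Linux AER log format (addresses invented):
sample = """\
pcieport 0000:40:01.1: AER: Corrected error received: 0000:41:00.0
pcieport 0000:40:01.1: AER: Corrected error received: 0000:41:00.0
"""
print(count_aer_errors(sample))  # a truly Gen4-capable adapter should show zero
```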

  • First off: if you're only using a single PCIe x4 AIC adapter per U.2 SSD, you don't need any PCIe bifurcation support from the BIOS. That only comes into play if you want to operate multiple x4 SSDs in a single slot (for example, 2 in an x8 slot or 3 to 4 in an x16 slot).

  • The option to manually specify the PCIe generation of a given slot is unfortunately optional for manufacturers; most ASUS motherboards have it, but an ASRock Rack X470D4U, for example, does not.

Maybe you could ask Supermicro for a custom BIOS that exposes these options. They 100 % have such versions for development/testing purposes.
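One Linux-only workaround I know of (nothing equivalent on Windows, and this is my own sketch, not anything Supermicro documents): the Target Link Speed field of the Link Control 2 register (offset 0x30 into the PCI Express capability, bits [3:0]) can be written with setpci on the root port above the SSD, followed by setting the Retrain Link bit (Link Control register, offset 0x10, bit 5). A Python helper that only builds the command strings; the device address is made up, and you'd want to double-check against the PCIe spec before poking registers as root:

```python
GEN_TO_FIELD = {1: 0x1, 2: 0x2, 3: 0x3, 4: 0x4, 5: 0x5}  # 2.5/5/8/16/32 GT/s

def force_gen_commands(bdf, gen):
    """Build setpci commands that cap the link of root port <bdf> at PCIe gen N."""
    field = GEN_TO_FIELD[gen]
    return [
        # Write only bits [3:0] (Target Link Speed) of Link Control 2,
        # using setpci's value:mask syntax to leave the other bits alone:
        f"setpci -s {bdf} CAP_EXP+30.w={field:x}:f",
        # Set bit 5 (Retrain Link) of Link Control so the change takes effect:
        f"setpci -s {bdf} CAP_EXP+10.w=20:20",
    ]

# e.g. cap a (made-up) root port at PCIe Gen3 / 8 GT/s:
for cmd in force_gen_commands("40:01.1", 3):
    print(cmd)
```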

3 Likes

Sorry to wake this thread, but this has now shown up in the BIOS:
CPU SLOT6 PCIe Bifurcation: Auto/x4x4x4x4

2 Likes

Yes, that's a bog-standard feature for current AMD CPUs, as long as the motherboard manufacturers don't mess up their BIOSes.

It means that the 16 main PCIe lanes from the AM5 CPU get split into four logical x4 links, one PCIe device each.

With this setting you can use

  • a single x16 PCIe AIC carrying 4 × x4 devices in the main x16 slot

  • two x8 PCIe AICs carrying 2 × x4 devices each

But here's a sign that Supermicro is messing around:

There should also be x4/x4_x8 and x8_x4/x4 entries in the same option menu, to allow bifurcating only one of the PCIe slots while still operating the other slot with a single PCIe device at x8.
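Naming aside, all these option strings just describe how the 16 CPU lanes get grouped. A throwaway Python sketch to make that concrete:

```python
import re

def parse_bifurcation(setting, total_lanes=16):
    """Split a bifurcation string like 'x8x4x4' into per-link lane widths."""
    widths = [int(w) for w in re.findall(r"x(\d+)", setting)]
    if sum(widths) != total_lanes:
        raise ValueError(f"{setting!r} does not account for all {total_lanes} lanes")
    return widths

print(parse_bifurcation("x4x4x4x4"))  # [4, 4, 4, 4] - the option the board has
print(parse_bifurcation("x4/x4_x8"))  # [4, 4, 8]    - one of the missing splits
```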

1 Like

Does this H13SAE-MF motherboard have any overclocking (CPU and/or RAM) support for Ryzen CPUs?

Supermicro doesn't seem to be against the idea, as they support it on the H13SRA-TF for Threadripper 7000.

Just wanted to nudge this one and see if there has been an update.

Unfortunately there is still no BIOS update to enable an x8x4x4 or x4x4x8 option; it's still only x4x4x4x4.
I contacted support to ask whether there is a plan to add the option, but all they would do is refer me back to sales for a custom BIOS request.

I am using the x4 slot for an X550-T2 NIC and the main PCIe slot for my HBA, and because one of the two onboard M.2 slots holds an Optane drive for the ZFS log, I would like to bifurcate the second slot to run dual M.2 drives there, but that's currently not possible :frowning:
How can we get the request for this BIOS option escalated? It seems basic?

Otherwise the board is pretty great inside the SilverStone CS382 case for a primary NAS, aside from the cooler compatibility issues caused by the odd layout.

Sorry for the delayed response, but with x4x4x4x4 you can also use PCIe bifurcation in the second (x8) CPU PCIe slot.

Why?

  • The mux chip on the motherboard detects that something is present in the secondary CPU PCIe slot. This automatically routes 8 of the CPU PCIe lanes to that secondary slot.

  • The BIOS setting tells the Ryzen 7000/9000 CPU IO die to configure its 16 main CPU lanes as x4-x4-x4-x4; the CPU itself doesn't care whether this all happens in the primary x16 CPU PCIe slot alone or whether the secondary CPU PCIe slot is also populated.

  • But this setting limits the maximum number of PCIe lanes available to a single regular PCIe AIC to x4. If the x8_x4/x4-style settings were present, x8 in one CPU PCIe slot and x4-x4 in the other would also be possible without potentially "wasting" 4 CPU PCIe lanes.
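To make the routing concrete, here's a toy Python model of my understanding. The assumption (mine, not from the manual) is simply that the mux hands the upper 8 of the 16 lanes to slot 2 whenever that slot is populated:

```python
def slot_links(bifurcation, slot2_populated):
    """Which logical links end up in each CPU slot under a given bifurcation."""
    groups = [int(w) for w in bifurcation.replace("x", " ").split()]
    assert sum(groups) == 16, "the two CPU slots share 16 lanes"
    if not slot2_populated:
        return groups, []               # slot 1 keeps all 16 lanes
    slot1, slot2, used = [], [], 0
    for g in groups:                    # lower 8 lanes stay on slot 1,
        (slot1 if used + g <= 8 else slot2).append(g)
        used += g                       # upper 8 are muxed over to slot 2
    return slot1, slot2

print(slot_links("x4x4x4x4", slot2_populated=True))   # ([4, 4], [4, 4])
print(slot_links("x4x4x4x4", slot2_populated=False))  # ([4, 4, 4, 4], [])
```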