Supermicro H13SAE-MF Build - Problems with U.2 Bifurcation

Aloha everyone,

so, long story short, I’m a bit unsure what to do with my newest server build and I can’t figure out what went wrong.

Problem: Only one of my two U.2 SSDs shows up in my system (port 2 of 4), and it only manages about 60 MB/s for regular reads/writes.

Side note: my disk-testing program indicates 2.15 GB/s when I run the TRIM command… I’m not sure what to make of that.

Parts List
Supermicro CSE-826 Chassis with BPN-SAS3-826-N4
Supermicro H13SAE-MF
AMD Ryzen 7950X
Dynatron A47
4x 32GB Crucial UDIMM Non-ECC 4800MHz
AMD Firepro W6400
Supermicro AOC-STG-I2S (X520-DA2)
Delock 90777 Bifurcation Card
2x Supermicro CBL-SAST-0590
2x Samsung PM9A1 512GB
2x Intel P4605 3.2TB U.2 Drive (99% Health) (About 197 TB written, 95 TB read)

My BIOS and BMC are on the newest revision. In the BIOS I enabled bifurcation on slot 6, and my Delock bifurcation card is sitting in that slot.
The PCIe cables (0590) are brand new straight from my distributor, as is the motherboard.
There are absolutely no other disks on the backplane, neither SAS nor SATA, just the two U.2 drives in 3.5" to 2.5" adapters, both fully slotted in.

I tried the following:

  • Checking the BIOS
  • Updating everything
  • Ran TRIM on the SSD that does show up
  • Swapped ports (it’s always port 2 electrically, counting from port 1, or port 11 mechanically, counting from 1)
  • Tested the speeds of my tools against the M.2 SSDs (above 3 GB/s)

Does anyone else have a tip or clue for me?
With my first H13SAE-MF build, I had to try a lot of different breakout cards, as bifurcation wasn’t supported yet. Having tried so many of them, I figured I’d go with Delock, my hope being that they are a reputable company.
They specifically list their card as PCIe 4.0 Compliant.

Currently I don’t have another U.2 drive to check against my two Intel disks, and I don’t have another bifurcation card.
I will order some online to rule out these possible faults.

Software:
EraseIT (erasure tool that we use to sanitize some disks) - crashes with init errors and shows 0 GB.
Miray HDShredder 7 (starts the erasure, reports correctly, SMART works, but only gets 60 MB/s)
Ubuntu 22.04 LTS

I would really appreciate your thoughts and ideas.

Thanks in Advance, Luke

Trim is done internally on the drive, so it’s not indicative of bus speed.

Do things work if you limit the PCIe slot to PCIe 3.0? You should be able to find the setting for this somewhere in the UEFI.
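Either way, it’s worth checking what link speed and width actually got negotiated. A minimal sketch for Linux that reads the PCIe link status of every NVMe controller from sysfs (these are standard kernel attributes, nothing specific to your board; adjust if your controllers enumerate differently):

```python
#!/usr/bin/env python3
# Print negotiated vs. maximum PCIe link speed/width for every NVMe controller.
import glob
import os

def read_attr(path):
    # sysfs attributes are single-line text files
    with open(path) as f:
        return f.read().strip()

for ctrl in sorted(glob.glob("/sys/class/nvme/nvme[0-9]*")):
    pci_dev = os.path.realpath(os.path.join(ctrl, "device"))  # e.g. .../0000:41:00.0
    cur = (read_attr(os.path.join(pci_dev, "current_link_speed")),
           read_attr(os.path.join(pci_dev, "current_link_width")))
    cap = (read_attr(os.path.join(pci_dev, "max_link_speed")),
           read_attr(os.path.join(pci_dev, "max_link_width")))
    print(f"{os.path.basename(ctrl)} @ {os.path.basename(pci_dev)}: "
          f"running {cur[0]} x{cur[1]}, capable of {cap[0]} x{cap[1]}")
```

If the card, cables or backplane are marginal, drives often train at a lower speed or width than they’re capable of, and that would show up right there.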

Those cables are 70 cm long it seems? And you have a backplane plus 3.5" to 2.5" adapters in there as well?


This backplane is extremely sensitive to how it’s connected to the mobo.
Both the placement of the NVMe drives and the cabling need to be in the correct order (according to the slot labeling), and the connectors need to match the order in which the mobo enumerates (sees) them.

Start by placing a single NVMe drive into the first NVMe slot and connect its cable to the mobo.
If it works, place the second drive into the slot labeled NVMe 2 (on the backplane) and attach its cable to the mobo.
If it doesn’t show up, try connecting the backplane to a different port on the mobo.
Once it does show up, continue with drive #3, … (the sketch below helps confirm from the OS which controller showed up where).

All of the above assumes that bifurcation is correctly enabled in UEFI, if necessary.
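To confirm from the OS side, after each step, which controller actually appeared and on which PCIe address, a rough sketch like this should do (it only reads standard sysfs attributes, no extra tools assumed):

```python
#!/usr/bin/env python3
# List detected NVMe controllers with PCI address, model and serial, so you can
# tell after each re-cabling step which drive showed up on which connector.
import glob
import os

def read_attr(path, default="?"):
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return default

controllers = sorted(glob.glob("/sys/class/nvme/nvme[0-9]*"))
if not controllers:
    print("no NVMe controllers detected")

for ctrl in controllers:
    pci_addr = os.path.basename(os.path.realpath(os.path.join(ctrl, "device")))
    model = read_attr(os.path.join(ctrl, "model"))
    serial = read_attr(os.path.join(ctrl, "serial"))
    print(f"{os.path.basename(ctrl)}: PCI {pci_addr}  model={model}  serial={serial}")
```

If one of the two cables never produces a controller here, no matter which backplane bay the drive sits in, that points at the cable or card port rather than the backplane.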


when in doubt with enterprise gear: reset it
it sucks, but that is the way of the enterprise world
but that will not help you here

there are only 3 pcie slots on the H13SAE-MotherFucker board, unless you are counting the M.2 slots.

at which point, we only get up to 5x pcie slots

Now let me rephrase,

the pcie slot (physical card slot, not counting M.2) to the right is x4
the center slot is x16, or x8 if the left slot is populated
the left slot is x8, or disabled if there is an x16 device in the center slot

your proposed config cannot operate properly

the primary limitation of AM5 compared to EPYC is PCIe lanes, and you have hit the limit.

Swap your (very nice) bifurcation card for a simple PCIe x8 to 2x U.2 card and your current component list will work, but you’ll lose your backplane. The alternative is a PCIe switch card in the x8 slot feeding the backplane. Though, you will not get very good bandwidth or results.

Only other option is pulling the GPU and running the bifurcation card at full tilt in the center x16 slot.

I have built on that board and can firmly say, you shoulda gone 9004/9005 EPYC if you want a GPU-accelerated server with a fully functional NVMe backplane.

In summary: Welcome back to the forum, good inclusion of details.
Time to build another server.


Typo? This drive doesn’t seem to exist; first hit on duckduckgo is this thread. :slight_smile:

Why not? He’s only connecting two cables, to two NVMe drives, for 8 lanes total. (And the AMD Firepro W6400 is PCIe 4.0 x4, so either x8x4x4 or x4x4x4x4 bifurcation should work.)

Looking at the manual for BPN-SAS3-826A-N4 (not sure if this is another typo or if I’m looking at the wrong backplane), the cables should be hooked up to JSM4 and JSM5. Drives should be in “SAS #8/NVMe #0” (label “J9”) and “SAS #9/NVMe #1” (label “J10”). No risk of confusion there! :sweat_smile:

Not sure what the jumpers JP8 and JP9 are about? (They control which NVMe slots are connected to “which CPU”, according to the manual.)

you forgot about the NIC

the bifurcation card is x16
its behavior cannot be guaranteed in an x8 slot

Theoretically, the Delock 90777 is a simple form factor adapter from PCIe to SFF-8643 and recabling can activate 2 of the ports on the card. But this is coloring pretty far outside the lines.


Ha, I really did! But the AMD Firepro W6400 is a 1-slot PCIe x4 card, and the “PCH SLOT7 PCIe 4.0 X4” looks like it might be open-ended. If the M.2 mounting screw post is removable, the GPU might fit there. Then put the NIC and the Delock 90777 in SLOT6 and SLOT4.

I would assume that the “dumb” 90777 adapter card would work in an x8 slot (obviously using only two connectors), but you’re right, it isn’t certain.

Edit: Nope, scratch that, then the 2x Samsung PM9A1 obviously won’t fit. Okay, I give up, haha. :slight_smile:

Edit2: Unless… Take a dremel to the graphics card?


Edit3: But the card has a 50 W TDP, so… might not do too well in an x4 slot!


So theoretically he’d put the bifurcation card in the leftmost slot, the GPU in the center slot, and the NIC in the right x4 slot…
That way the bifurcation card doesn’t accidentally take all 16 lanes from the center slot.

But we’re still operating under the assumption that the bifurcation card can function with only 8 lanes available and is merely an interface adapter without any logic.


I’m wondering: how did he manage to put all three PCIe cards plus the two M.2 drives in there in the first place?

Edit: Ah, the M.2 mounting post in the MB image is in the 22110 position. When moved to the 2280 position the x8 NIC should fit in “PCH SLOT7”. So that would explain it.

forgot about that
So it should be left to right:
bifurcation card
NIC
GPU

I reduced an Nvidia 1030 to PCIe x1 with a Dremel and used it in a Zen 3 system in a chipset slot. Sometimes the BIOS didn’t recognize the card as a GPU when booting, but basically it worked.

Good evening everybody,

thanks a lot to TryTwideMedia and homeserver78, you guys are really helping here. I appreciate it.

So to get going on all these ideas, I’ll work through them from top to bottom.

#2 Yes, the cables really are 70 cm long. The adapters I use are just dumb plastic spacers, so they bring the U.2 drives directly to the backplane.

#3 I will double-check the connections and whether I should use other ports. I assumed that these U.2 ports are just “dumb” and simply forward whatever signal they get to the keystone.

#4 I was referring to the backplane. It has 12 ports in total, of which only the last 4 are dual-personality U.2 ports. With the reference (port 2 of 4) I’m trying to explain which backplane port I’m using for these drives.

I must have totally missed the PCIe switching that Supermicro built into this board. You are right, I had read about it earlier, but it didn’t click. So my guess is that, since Supermicro only supports x4/x4/x4/x4 or x8/x8 bifurcation, the system overruled my x4/x4/x4/x4 because of my GPU and ran x8/x8 instead so the leftmost slot could be enabled (even though it only uses x4 because of the GPU).

The purpose of this server is to host our ERP system. As it’s badly written software (which I sadly can’t migrate off of in the next 12 months…), I need a CPU with serious single-core performance and a high boost frequency. Personally I might have gone with a nice H12SSL board and an EPYC, but 3.x GHz just doesn’t cut it.

#5 Yes, it’s a typo, sorry. In the past we used a lot of P3605s; the SSD here is actually a P4610. Also, you are right, it’s a BPN-SAS3-826A-N4.
I will redo the cabling on this backplane. Port 0 on my bifurcation card is currently connected to JSM6 and port 1 to JSM7.
As this backplane was previously used with an X11DPI, I will check what the jumpers say. I should probably set both of them to 2-3.

#7 I would prefer not to dremel anything.

#8 Well, the mobo only supports bifurcation on the middle slot, thanks Supermicro. So I can’t move the bifurcation card. It’s either the middle slot or nothing.

Generally speaking, I’m pretty much out of luck here, if I’m seeing this correctly.
I can either:

A) Call it defeat and just run a single U.2 drive. (Which I won’t, because of safety.)

B) Swap the board for some ASRock Rack board that can do bifurcation everywhere. (Which I won’t, because as you can see from my history, I’ve already fiddled with that.)

C) Throw out the NIC or the GPU, neither of which would really make me happy.

D) Throw out the PM9A1s and just buy some M.2 22110 drives with around 3 to 4 TB of storage for my main data pool, plus 2x 512 GB SATA drives for the OS. (Saves a whole PCIe slot.)
But then I could just use another chassis.

E) Get an LSI 9500 tri-mode adapter and throw it in instead of the bifurcation card; it will work in an x8 slot since it switches actively and isn’t dependent on the shitty bifurcation. But that’s another 500 € for something I didn’t intend to buy. (And it chugs a bit of power down the drain…)

So I’m f*!"§$ either way; now it’s time to find out which option is best.

Btw, this may solve my problem of only seeing one drive, but it doesn’t answer why that drive is so damn slow. I mean, at 60 MB/s I could just use an old HDD…

Any ideas on the speed problem? Do you think it could be signal integrity from the bifurcation card to the backplane?

With my previous build, I had to use a lot of different bifurcation cards and cables until it ran stably. The Samsung SSDs I used back then would just “drop” the connection sometimes and the pool went offline.

Have a great weekend and thanks so much for helping out!

The CPU only really sees one x16 slot. The bifurcation setting applies to “CPU SLOT4” and “CPU SLOT6” in combination. So an x4x4x4x4 setting should turn both these slots into x4x4 slots (if both are populated). At least that’s how I understand things.

So you should be able to put the NIC in “PCH SLOT7” (resulting in PCIe Gen2 x4 = 2 GB/s or 16 Gbit/s of total throughput) and the other two cards in the other two slots. It should work, in theory at least. So don’t give up just yet.

My guess is it’s a signal integrity issue, yes. The signal is going all the way from the CPU, through the connector to the card, through more connectors to and through the long cables, then through two more connectors on the backplane and drives. Did you try to limit the PCIe speed in the UEFI? The drives are already Gen3, but try Gen2 (and even Gen1) and see if you get more consistent results.
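One way to see whether the link is actually taking errors: recent kernels expose AER counters in sysfs. A rough sketch (the aer_dev_* files only exist if the platform exposes AER to the OS; if they’re missing, look for “PCIe Bus Error” lines in dmesg instead):

```python
#!/usr/bin/env python3
# Dump the PCIe AER error counters for each NVMe controller. A climbing
# correctable count usually points at marginal signalling (card, cables,
# connectors, backplane) rather than a broken drive.
import glob
import os

for ctrl in sorted(glob.glob("/sys/class/nvme/nvme[0-9]*")):
    pci_dev = os.path.realpath(os.path.join(ctrl, "device"))
    print(f"{os.path.basename(ctrl)} @ {os.path.basename(pci_dev)}")
    for kind in ("aer_dev_correctable", "aer_dev_nonfatal", "aer_dev_fatal"):
        path = os.path.join(pci_dev, kind)
        if not os.path.exists(path):
            print(f"  {kind}: not exposed (AER disabled or unsupported)")
            continue
        with open(path) as f:
            print(f"  {kind}:")
            for line in f:           # lines look like "RxErr 0", "BadTLP 0", ...
                print("   ", line.rstrip())
```

Check the counters, run a long sequential read, then check them again; if the correctable counts climb, signal integrity is the likely culprit, and the Gen2/Gen1 limit should make things noticeably more stable.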

Also try without anything in “CPU SLOT4” to begin with, to rule out issues with the card not working with the slot in x4x4 mode (as opposed to the full x4x4x4x4).
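And to take the erasure tools out of the equation entirely, a plain sequential read straight from the block device gives a number that only depends on the drive and the link. A rough sketch (read-only, but it needs root and you should double-check the device name; it uses buffered I/O, so it will slightly understate a fast drive but will clearly show a 60 MB/s problem):

```python
#!/usr/bin/env python3
# Rough sequential read throughput of a block device (read-only, needs root).
import sys
import time

dev_path = sys.argv[1] if len(sys.argv) > 1 else "/dev/nvme0n1"  # adjust to the U.2 drive
CHUNK = 8 * 1024 * 1024            # 8 MiB per read()
TARGET = 4 * 1024 * 1024 * 1024    # stop after ~4 GiB

done = 0
start = time.monotonic()
with open(dev_path, "rb", buffering=0) as dev:
    while done < TARGET:
        data = dev.read(CHUNK)
        if not data:               # hit the end of the device
            break
        done += len(data)
elapsed = time.monotonic() - start

print(f"{dev_path}: {done / 2**20:.0f} MiB in {elapsed:.1f} s "
      f"-> {done / 2**20 / elapsed:.0f} MiB/s")
```

Run it against the U.2 drive and against one of the M.2 drives and compare the numbers.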

Is this motherboard’s PCIe bifurcation support still limited to x4-x4-x4-x4…
…with x4-x4 (CPU Slot 6) + x8 (CPU Slot 4) or x8 (CPU Slot 6) + x4-x4 (CPU Slot 4) missing even in the latest BIOS version?

If so, could users that have an H13SAE-MF please contact Supermicro’s Tech Support and request these additional PCIe Bifurcation Options?

Reminders:

  • Yes, this platform can support these features with AM5 Zen 4 or Zen 5 CPUs (only AM5 APUs don’t support PCIe Bifurcation at all).

  • NO, AMD doesn’t block these options because it’s a “consumer chipset”. In fact, the chipset has absolutely nothing to do with this, since these PCIe lanes come directly from the CPU without touching any chipset, except for transparent PCIe mux chips. But the BIOS has to tell the CPU’s IO die what to expect PCIe-interface-wise; a “fully auto” setup with support for PCIe bifurcation does not exist (yet), to my knowledge.

I wanted to contact Supermicro about this since I’m also looking for an AM5 system with IPMI, and this motherboard is still the only option where the PCIe lanes are somewhat usable. BUT Supermicro rejected my email addresses, stating they don’t accept emails from those domains.

That’s also piece-of-shit company behavior, but if their motherboard at least doesn’t have arbitrary software limitations just due to incompetence, I’d overlook that. :frowning:

@wendell

You have had contact with Supermicro; even if this motherboard is just a small SOHO nugget and not some great EPYC or Xeon server, could you please ask them to fix this? I fear Supermicro’s public level-0 tech support throws these requests directly into the garbage.

I mean, they don’t even have to fix or develop anything; they should just stop cutting away features that are present on most retail AM5 consumer motherboards.

I’m thinking about this motherboard and will be trying to do something similar, but without the graphics card, as I only need this box to run TrueNAS:

Same Chassis: SM 825, but with the backplane updated to BPN-SAS3-825-TQ
Potential CPU: EPYC 4005 (cheapest)
HBA: LSI SAS3008 9300-8i IT mode
I would then like to add a NIC (10-25 Gb) into the other x16-sized slot (bifurcated to x8/x8)

Do y’all think this would work?

I contacted tech support in July 2024; they said I would need to contact sales for customised BIOS support. :upside_down_face:

In the latest BIOS version (2.5), the only option is still x4x4x4x4.