A Neverending Story: PCIe 3.0/4.0/5.0 Bifurcation, Adapters, Switches, HBAs, Cables, NVMe Backplanes, Risers & Extensions - The Good, the Bad & the Ugly

Hi,

I’m creating this draft of a topic to seek out feedback from other users.

If you have a question, please start your response with “Nickname-Question-#”, for example “aBav. Normie-Pleb-Question-1”.

This makes it easier to search for “-Question-” in the thread so that hopefully no one gets overlooked.

My motivation for this thread is my experience with many, many, MANY PCIe adapters and the headaches they’ve caused; at least now I think I’m beginning to recognize “patterns” that can be used to push for bug fixes.

(And yes, sometimes it really was crazy/absurd, devoid of any human-perceivable logic…)

Note: I don’t have much experience with multi-GPU configurations; I like stuffing as many regular PCIe AICs/NVMe drives into a single system as possible, using up every PCIe lane a motherboard has to offer.

Regards,
aBavarian Normie-Pleb


The old thread is very good for info.


Maybe; then I’ll change this thread to document the various concrete issues I’ve personally encountered.

Also, I’m specifically looking at “consumer-grade” stuff, i.e. AM4/X570/B550 Zen 2/Zen 3 configurations.


I’ll throw this here for reference

Motherboard: Tyan Tomcat HX S8030 (S8030GM2NE)

The following are known-good items for PCIe 3.0. I don’t currently have PCIe 4.0 drives, as the price/performance hasn’t been compelling enough.

  • ASUS Hyper M.2 x16 PCIe 4.0 x4 Expansion Card. Note: needs x4/x4/x4/x4 bifurcation set in the BIOS. Also note that the “1st” slot referenced in the BIOS is actually the one farthest from the CPU, with the 5th being the closest (a quick way to verify is sketched below).
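On Linux, a quick sanity check that the bifurcation setting took effect is to read each NVMe controller’s negotiated link out of sysfs. A minimal sketch, using only standard sysfs attributes (no vendor tooling assumed):

```python
#!/usr/bin/env python3
"""List every NVMe controller with its negotiated PCIe link.

After setting x4/x4/x4/x4 bifurcation for the Hyper M.2 card, each SSD
should show up here with a x4 width; a missing drive or a narrower link
usually means the BIOS slot setting (or the slot numbering) is off.
"""
import glob
import os

def read_attr(pci_dev, attr):
    # Attributes like current_link_speed are standard PCI sysfs files
    try:
        with open(os.path.join(pci_dev, attr)) as f:
            return f.read().strip()
    except OSError:
        return "n/a"

for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
    # /sys/class/nvme/nvmeX/device is a symlink to the PCI function
    pci_dev = os.path.realpath(os.path.join(ctrl, "device"))
    print(f"{os.path.basename(ctrl)} ({os.path.basename(pci_dev)}): "
          f"{read_attr(pci_dev, 'current_link_speed')} "
          f"x{read_attr(pci_dev, 'current_link_width')} "
          f"(max {read_attr(pci_dev, 'max_link_speed')} "
          f"x{read_attr(pci_dev, 'max_link_width')})")
```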

This Tyan board has some odd 8i slimline PCIe ports. The following seems to work well, though it’s currently fairly pricey:


While its build quality is certainly nice, I’ve experienced some issues with the ASUS Hyper M.2 x16 Gen4 AIC, too.

“Strange” is the best way to summarize the experiences I’ve had the pleasure of witnessing over the last 18 months with this topic :wink:

I’ve also got Delock Gen4 Slimline stuff around, basically anything a “normal” customer would buy hoping “that would be nice if it works as advertised”, only to have the joy of being left out in the rain by less-than-competent customer support whose go-to answer is “we don’t support the use of our product with a different product”.


Life has been running a bit of interference, but I was able to look into a thingy completely new to me:

Delock 90504 PCIe Gen3 x16 AIC with Broadcom PEX8749 PCIe switch chipset to connect up to eight x4 NVMe SSDs

  • Individual Gen3 NVMe SSDs perform close to their native speed, but the U.2 cables used influence the performance numbers. This is a bit tricky since the AIC doesn’t have any management software, drivers, or anything else to check for PCIe transmission errors (nothing in the Windows Event Log; NVMe SSDs connected with cables that aren’t up to the task can sometimes produce WHEA 17 errors - a sketch for pulling those out of the Event Log follows below)

  • Nice: Being able to properly hot-plug NVMe SSDs in Windows without the system crashing (drives can be ejected like USB thumb drives, for example)

  • Weird: The AIC doesn’t come with any drivers, yet connected NVMe SSDs show up in Windows as they should (also in tools like CrystalDiskInfo), but it also causes four “Base System Device” entries that Windows can’t find drivers for (Code 28) to appear in Device Manager, which looks a bit improper

Does anyone have an idea what to do about that?
(@wendell ?)
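In the meantime, for anyone who wants to watch for the WHEA 17 corrected-PCIe-error events mentioned above without clicking through Event Viewer, here’s a minimal sketch that shells out to the stock wevtutil tool (assumes a standard Windows install; the provider name and event ID are the usual WHEA-Logger ones):

```python
#!/usr/bin/env python3
"""Dump recent WHEA-Logger Event ID 17 entries (corrected PCIe errors).

Handy for spotting marginal U.2 cabling: a healthy link should produce
none of these under sustained load.
"""
import subprocess

# Stock XPath filter for the System log; WHEA-Logger Event ID 17 is a
# corrected PCI Express error.
QUERY = "*[System[Provider[@Name='Microsoft-Windows-WHEA-Logger'] and (EventID=17)]]"

result = subprocess.run(
    ["wevtutil", "qe", "System", f"/q:{QUERY}",
     "/c:20", "/rd:true", "/f:text"],   # newest 20 events, plain text
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip() or "No WHEA 17 events found.")
```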

The final destination for the AIC after more testing is a DIY AM4 NAS where it’s going to get eight CPU PCIe lanes for eight NVMe SSDs.


PLX chips are extremely picky about what firmware is loaded onto them. This adapter would work better for GPUs and NICs, but not really for NVMe. The chip is meant to be fully integrated onto a motherboard, like on the ASUS Sage boards. If you aren’t a big vendor like ASUS, you don’t really have the expertise to tweak the PLX chip.


Linus (LTT) did a thing on One Stop Systems who do this for a living:

Since the Delock 90504 seems to be working fine and those four missing-driver entries in Device Manager seem to be just “cosmetic”, I’m pretty sure that I’m keeping it - I want to look into testing it with TrueNAS and ESXi next.

The only other thingy that sparks my curiosity is the PCIe Gen4 Broadcom P411W-32P, but using Gen4 with DIY cables and backplanes is more of a PITA, and the currently intended use case doesn’t need the higher sequential speeds that would require Gen4.

Strange:

  • It seems that when the Delock 90504 is connected with 8 PCIe lanes, you can only connect four x4 NVMe SSDs (every other port)

  • Only when it gets 16 lanes can you use all eight ports for x4 NVMe SSDs :frowning:

I don’t know yet if I got a lemon or if this is expected behavior. I had previously thought that, since it has an active PCIe switch and no PCIe bifurcation is involved, it wouldn’t matter how many lanes the card itself gets for handling attached SSDs (other than reduced peak performance, of course).
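To see which downstream ports the switch actually exposes in a given slot, you can walk the PCI hierarchy below its upstream port in sysfs on Linux. A minimal sketch; SWITCH_UPSTREAM is a hypothetical placeholder to be replaced with the PEX8749 upstream-port address from lspci -t:

```python
#!/usr/bin/env python3
"""Walk the PCI hierarchy below a switch's upstream port.

SWITCH_UPSTREAM is a placeholder; look up the PEX8749 upstream port's
real address with `lspci -t` first. Every downstream port that has a
child function under it is a populated SSD slot.
"""
import os
import re

SWITCH_UPSTREAM = "0000:21:00.0"  # hypothetical address, adjust to your system

ADDR = re.compile(r"^[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-7]$")

def children(pci_addr):
    # Child functions appear as subdirectories named like 0000:22:01.0
    path = os.path.realpath(f"/sys/bus/pci/devices/{pci_addr}")
    return sorted(e for e in os.listdir(path) if ADDR.match(e))

for port in children(SWITCH_UPSTREAM):   # the switch's downstream ports
    devs = children(port)                # endpoints behind each port
    print(f"{port}: {', '.join(devs) if devs else '(empty)'}")
```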


Good news everyone, both issues…

  • Only up to four x4 NVMe SSDs show up when the 90504 gets an x8 instead of an x16 PCIe link

and

  • The four “phantom” 87D0 PCIe devices in Windows Device Manager for which drivers could not be installed (Code 28)

…have been fixed by a Broadcom firmware update that you get when you annoy Delock’s Level 2 support a bit.

Hope that they’ll leave this link active:
http://www.delock.de/download/Firmware%20Update90504.rar


Jolly-Question-1
Looking for M.2 → U.2 adapters to let me connect a P5800X to a motherboard without onboard U.2 support.

I don’t want to go PCIe → U.2, as I’m trying to build an SFF 12900K system for travel.

There are the official Intel ones, and there are 3rd-party ones like this one.

No idea on the quality of that one; it was just the first hit in an eBay search.

That’s the part that worries me - especially since the Optane is PCIe 4.0.
If I knew it’d work, I’d buy a mini-ITX Z690 board; otherwise I might play it safe and go with microATX.


Unfortunately I have not been able to get a single chain of passive adapters (1: M.2-to-SFF-8643 (U.2) adapter + 2: SFF-8643-to-SFF-8639 cable) to work without PCIe bus errors under full load (Gen4).

The Intel ones that consist of one part instead of two (M.2-to-SFF-8639, bundled with older U.2 905P Optane SSDs, for example) can only handle PCIe Gen3 without any errors.

To be fair, you do get PCIe Gen4 speeds, but I absolutely dislike knowing that the system is throwing errors/growing an error log during data transfers :frowning:

I also tried newer M.2-to-OCuLink adapters; they’re not really different errors-wise compared to the more common M.2-to-U.2 adapters.

Gen3 isn’t an issue, even when adding an additional U.2 SSD hot-swap bay like the ones from Icy Dock.
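For the Linux-side equivalent of those WHEA logs, recent kernels expose per-device AER error counters in sysfs. A minimal sketch that prints the totals for every NVMe controller (assumes a kernel new enough to have the aer_dev_* attributes):

```python
#!/usr/bin/env python3
"""Print the kernel's AER error totals for every NVMe controller.

A climbing TOTAL_ERR_COR count under load is the Linux-side symptom of
the marginal-cabling WHEA 17 errors seen on Windows.
"""
import glob
import os

for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
    pci_dev = os.path.realpath(os.path.join(ctrl, "device"))
    for counter in ("aer_dev_correctable", "aer_dev_nonfatal", "aer_dev_fatal"):
        path = os.path.join(pci_dev, counter)
        if not os.path.exists(path):
            continue  # kernel too old, or AER not active for this device
        with open(path) as f:
            # The "TOTAL_ERR_*" line sums up the individual counters
            total = [line.strip() for line in f if line.startswith("TOTAL")]
        print(f"{os.path.basename(ctrl)} {counter}: "
              f"{total[0] if total else '(no total line)'}")
```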

Hm. Have you tried M.2 → PCIe → U.2?
I’ll give it a shot with https://amzn.to/30GSiUF +
https://amzn.to/3qVPRbJ


Sorry for the delayed response!

No, I personally haven’t tried such “PCB” adapters yet, only ones with, for example, at least a 0.3 m cable attached to the M.2 end.

Would be grateful if you could share your findings here!

Be aware and cautious of the mechanical stress such a contraption would put on the M.2 slot (the weight of two U.2 SSDs plus the adapters themselves).

My at-home “dream” setup would be a front backplane for two to four U.2 SSDs that connects to the standard M.2 slots of consumer motherboards and handles PCIe Gen4 without any bus errors.

Icy Dock offers such backplanes (version 1 with SFF-8643, borderline false advertising since the cables you have to get separately are the issue; version 2 with OCuLink, which is not available anywhere even though it has been released for about a year).

@wendell

Since you were flaunting the ToughArmor MB699VP-B V2 in your last Icy Dock PCIe video, I wanted to ask whether you would recommend purchasing it (= code for: do OCuLink cables at least work better than the “repurposed” SFF-8643 ones I’ve had the pleasure of experimenting with so far?).

I remember you mentioning that the industry dislikes OCuLink’s connectors - was it just the variant without latches (found on some ASRock (Rack) motherboards), so that the current MB699VP-B should be fine?

I haven’t been able to source a MB699VP-B V2 regionally yet, but I would like to dream that I can finally conclude my journey of shitty PCIe Gen4 U.2 backplane issues (MB699VP-B “V1”).

My curiosity got the better of me, and since I hadn’t heard anything back from Wendell, I placed an order for a ToughArmor MB699VP-B V2 - hope it doesn’t take too long to ship.

Now the crucial question:

I’ve got M.2-to-SFF-8654 and M.2-to-SFF-8612 (OCuLink) adapters that are supposed to handle PCIe Gen4.

Does anybody have a source for 0.5 m SFF-8654-to-SFF-8611 (OCuLink) or SFF-8611-to-SFF-8611 (OCuLink) cables that claim to handle PCIe Gen4?


For SFF-8611-to-SFF-8611 (OCuLink):
