I’m creating this draft of a topic to seek out feedback from other users.
If you have a question, please start your response with “Nickname-Question-#”, an example would be “aBav. Normie-Pleb-Question-1”.
This makes it easier to search “-Question-” in the thread to hopefully not overlook anyone.
My motivation for this thread is my experience with many, many, MANY PCIe adapters and the headaches that came with them; at least now I think I’m beginning to recognize “patterns” that I can use to push for bug fixes.
(And yes, sometimes it really was crazy/absurd, devoid of any human-perceivable logic…)
Note: I don’t have much experience with multi-GPU configurations; I like stuffing as many regular PCIe AICs/NVMe drives into a single system as possible, using up every PCIe lane a motherboard has to offer.
The following are known good items for PCIe 3.0. I don’t currently have PCIe 4.0 drives, as the price/perf hasn’t been compelling enough.
ASUS Hyper M.2 x16 PCIe 4.0 x4 Expansion Card. Note: needs 4x4x4x4 bifurcation set in the BIOS; also note that the “1st” slot referenced in the BIOS is actually the one farthest away from the CPU, with the 5th one being the closest.
This Tyan board has some odd 8i Slimline PCIe ports. The following adapter seems to work well, though it’s currently fairly pricey:
Slimline SAS 8-Lane for Dual Port M.2/M.3 NF1 NVMe SSD Adapter. Note: appears to be PCIe 3.0 only, which is fine by me. Also, don’t confuse this with the versions that are supposedly compatible with whatever bullshit is needed for use with Broadcom/Intel HBAs and cables.
While its build quality is certainly nice, I have experienced some issues with the ASUS Hyper M.2 x16 Gen4 AIC, too.
“Strange” is the best way to summarize the experiences I’ve had the pleasure of witnessing with this topic within the last 18 months.
I also have Delock Gen4 Slimline stuff around, basically anything a “normal” customer would buy hoping “that would be nice if it works as advertised”, only to then have the joy of being left out in the rain by less-than-competent customer support whose go-to answer is “we don’t support the use of our product with a different product”.
Individual Gen3 NVMe SSDs perform close to their native speed, but the U.2 cables used influence the performance numbers. This is a bit tricky since the AIC doesn’t come with any management software, drivers or anything else to check for PCIe transmission errors (nothing from the AIC itself in the Windows Event Log; NVMe SSDs connected with cables that aren’t up to the task can sometimes produce WHEA 17 errors, see the sketch further down for how I check for those).
Nice: Being able to properly hot-plug NVMe SSDs in Windows without the system crashing (drives can be ejected like USB thumb drives, for example)
Weird: The AIC doesn’t have any drivers and connected NVMe SSDs show up in Windows as they should (also in tools like CrystalDiskInfo), but it also causes four “Base System Device” entries that Windows can’t find drivers for (Code 28) to show up in Windows’ Device Manager, which looks a bit improper…
Does anyone have an idea what to do about that?
(@wendell ?)
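Since the AIC has no management software of its own, the only place I know to check on Windows is the WHEA-Logger provider in the System event log. Here’s a minimal Python sketch I’d use as a quick sanity check after a benchmark run; it just wraps the built-in wevtutil tool, nothing about it is specific to this adapter, and the result count of 20 is an arbitrary choice:

```python
# Minimal sketch: list recent WHEA-Logger events (Event ID 17 is a
# "corrected hardware error", which is what marginal PCIe cabling tends
# to produce) using Windows' built-in wevtutil tool.
import subprocess

QUERY = "*[System[Provider[@Name='Microsoft-Windows-WHEA-Logger']]]"

result = subprocess.run(
    ["wevtutil", "qe", "System", f"/q:{QUERY}",
     "/c:20", "/rd:true", "/f:text"],   # newest 20 events, plain text
    capture_output=True, text=True,
)

if result.stdout.strip():
    print(result.stdout)
else:
    print("No WHEA-Logger events returned, cabling looks clean so far.")
```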
PLX chips are extremely picky about what firmware is loaded onto them. This adapter would work better for GPUs and NICs, but not really for NVMe. The chip is meant to be fully integrated onto a motherboard like the ASUS Sage boards. If you aren’t a big vendor like ASUS, you don’t really have the expertise to tweak the PLX chip.
Since the Delock 90504 seems to be working fine and those four missing-driver entries in Device Manager seem to be just “cosmetic”, I’m pretty sure I’m keeping it; I want to look into testing it with TrueNAS and ESXi next.
The only other thingy that would spark my curiosity would be the PCIe Gen4 Broadcom P411W-32P, but using Gen4 with DIY cables and backplanes is more of a PITA, and the currently intended use case doesn’t need the higher sequential speeds that would require Gen4.
It seems that when the Delock 90504 is connected with 8 PCIe lanes, you can only connect four x4 NVMe SSDs (every other port).
Only when it gets all 16 lanes can you use all eight ports for x4 NVMe SSDs.
I don’t know yet if I got a lemon or if this is expected behavior. I had previously thought that since it has an active PCIe switch and doesn’t rely on PCIe bifurcation, the number of upstream lanes it gets shouldn’t matter for handling attached SSDs (other than peak performance being reduced, of course).
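For anyone who wants to double-check what their 90504 actually negotiates: on Linux the current and maximum link speed/width of the switch’s upstream port can be read straight from sysfs. A minimal Python sketch, with the PCI address being just a placeholder you’d have to replace with the real one from lspci, could look like this:

```python
# Minimal sketch: compare negotiated vs. maximum PCIe link speed/width
# for one device via Linux sysfs. The address below is a placeholder;
# find the real one with `lspci` first.
from pathlib import Path

DEV = Path("/sys/bus/pci/devices/0000:03:00.0")  # placeholder address

for attr in ("current_link_speed", "max_link_speed",
             "current_link_width", "max_link_width"):
    print(f"{attr}: {(DEV / attr).read_text().strip()}")
```

That at least tells me whether the card trained at x8 or x16 upstream before I start blaming individual ports.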
That’s the part that worries me, especially since the Optane is PCIe 4.0.
If I knew it’d work, I’d buy a mini-ITX Z690 board; otherwise I might play it safe and go with micro-ATX.
Unfortunately, I have not been able to get a single chain of passive adapters (1: M.2-to-SFF-8643 (U.2) adapter + 2: SFF-8643-to-SFF-8639 cable) to work without resulting in PCIe bus errors under full load (Gen4).
The Intel ones that consist of one part instead of two (M.2-to-SFF-8639, bundled with older U.2 905P Optane SSDs, for example) can only handle PCIe Gen3 without any errors.
To be fair, you do get PCIe Gen4 speeds, but I absolutely dislike knowing that the system is throwing errors and growing an error log during data transfers (see the sketch a few lines further down for how I read those counters on Linux).
I also tried newer M.2-to-OCuLink adapters; they’re not really different errors-wise compared to the more common M.2-to-U.2 adapters.
Gen3 isn’t an issue even when adding an additional U.2 SSD hot-swap bay like the ones from IcyDock.
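To actually quantify the “growing error log” on Linux (on Windows I only have the WHEA events mentioned above), the kernel exposes per-device AER counters in sysfs. A minimal Python sketch, again with a placeholder PCI address and assuming a kernel new enough to provide the aer_dev_* statistics:

```python
# Minimal sketch: dump AER (Advanced Error Reporting) counters for one
# PCIe device on Linux. The address is a placeholder; the aer_dev_*
# files only exist on reasonably recent kernels and AER-capable devices.
from pathlib import Path

DEV = Path("/sys/bus/pci/devices/0000:04:00.0")  # placeholder address

for name in ("aer_dev_correctable", "aer_dev_nonfatal", "aer_dev_fatal"):
    counters = DEV / name
    if counters.exists():
        print(f"--- {name} ---")
        print(counters.read_text().strip())
    else:
        print(f"{name}: not exposed on this device/kernel")
```

If the correctable counter keeps climbing during a sustained transfer, the cabling is marginal even though the drives still appear to work fine.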
No, I personally haven’t tried such “PCB” adapters yet, only ones with at least 0.3 m of cable attached to the M.2 end, for example.
Would be grateful if you could share your findings here!
Be aware and cautious of the mechanical stress such a contraption would cause on the M.2 slot (weight of two U.2 SSDs and the adapters themselves).
My at-home “dream” setup would be a front backplane for two to four U.2 SSDs that connects to the standard M.2 slots of consumer motherboards and handles PCIe Gen4 without any bus errors.
IcyDock offers such backplanes (version 1 with SFF-8643, borderline false advertising since the cables you have to get separately are the issue; version 2 with OCuLink, which is not available anywhere even though it has been released for about a year).
Since you were flaunting the ToughArmor MB699VP-B V2 in your last Icy Dock PCIe video, I wanted to ask if you would recommend purchasing it (= code for: do OCuLink cables at least work better than the “repurposed” SFF-8643 ones I’ve had the pleasure of experimenting with so far?).
I remember you mentioning that the industry dislikes OCuLink’s connectors; was that just about the variant without latches (found on some ASRock (Rack) motherboards), so that the current MB699VP-B should be fine?
I haven’t been able to regionally source an MB699VP-B V2 yet, but I would like to dream that I can finally conclude my journey of shitty PCIe Gen4 U.2 backplane issues (MB699VP-B “V1”).
My curiosity got the better of me, and since I haven’t heard anything back from Wendell, I placed an order for a ToughArmor MB699VP-B V2; I hope it doesn’t take too long to ship.
Now the crucial question:
I’ve got M.2-to-SFF-8654 and M.2-to-SFF-8612 (OCuLink) adapters that are supposed to handle PCIe Gen4.
Does anybody have a source for 0.5 m SFF-8654-to-SFF-8611 (OCuLink) or SFF-8611-to-SFF-8611 (OCuLink) cables that claim to handle PCIe Gen4?