I’m creating this draft of a topic to seek out feedback from other users.
If you have a question, please start your response with “Nickname-Question-#”, an example would be “aBav. Normie-Pleb-Question-1”.
This makes it easier to search “-Question-” in the thread to hopefully not overlook anyone.
My motivation for this thread is my experience with many, many, MANY PCIe adapters and their accompanying headaches; at least now I think I’m beginning to recognize “patterns” that I can use to further push for bug fixes.
(And yes, sometimes it really was crazy/absurd, devoid of any human-perceivable logic…)
Note: I don’t have much experience with multi-GPU configurations; I like stuffing as many regular PCIe AICs/NVMe drives into a single system as possible, using up every PCIe lane a motherboard has to offer.
While its build quality is certainly nice, I’ve experienced some issues with the ASUS Hyper M.2 x16 Gen4 AIC, too.
“Strange” is the best way to summarize the experiences I’ve had the pleasure of witnessing within the last 18 months on this topic.
I’ve also got Delock Gen4 Slimline stuff around, basically anything a “normal” customer would buy hoping “that would be nice if it works as advertised”, only to then have the joy of being left out in the rain by less-than-competent customer support whose go-to answer is “we don’t support the use of our product with a different product”.
Individual Gen3 NVMe SSDs perform close to their native speed, but the U.2 cables used influence the performance numbers. This is a bit tricky since the AIC doesn’t have any management software, drivers or anything else to check for PCIe transmission errors (nothing shows up in the Windows Event Log; NVMe SSDs connected with cables that aren’t up to the task can sometimes produce WHEA 17 errors).
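For anyone who wants to check whether their cabling is silently throwing corrected errors: the WHEA 17 events mentioned above can be pulled out of the Windows Event Log with the built-in wevtutil tool. A minimal sketch (the event count limit of 20 is arbitrary):

```python
import platform
import subprocess

def query_whea_corrected_errors():
    """Query the Windows Event Log for WHEA-Logger event ID 17
    (corrected hardware errors, e.g. from flaky PCIe cabling) using
    the built-in wevtutil tool.
    Returns the raw event text, or None when not running on Windows."""
    if platform.system() != "Windows":
        return None
    # XPath filter: System log, WHEA-Logger provider, event ID 17
    xpath = ("*[System[Provider[@Name='Microsoft-Windows-WHEA-Logger']"
             " and EventID=17]]")
    result = subprocess.run(
        ["wevtutil", "qe", "System", f"/q:{xpath}", "/f:text", "/c:20"],
        capture_output=True, text=True)
    return result.stdout

events = query_whea_corrected_errors()
if events is None:
    print("Not running on Windows; nothing to query.")
elif events.strip():
    print(events)
else:
    print("No WHEA 17 (corrected PCIe) events logged.")
```

Running a sequential transfer while watching the count of these events grow (or not) is the closest thing to a cable pass/fail test I’ve found without vendor tooling.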
Nice: Being able to properly hot-plug NVMe SSDs in Windows without the system crashing (drives can be ejected like USB thumb drives, for example)
Weird: The AIC doesn’t have any drivers and connected NVMe SSDs show up in Windows as they should (also in tools like CrystalDiskInfo) but it also causes four “Base system devices” Windows doesn’t find the drivers for (Code 28) to show up in Windows’ Device Manager which looks a bit improper…
Does anyone have an idea what to do about that?
PLX chips are extremely picky about what firmware is loaded onto them. This adapter would work better for GPUs and NICs, but not really for NVMe. The chip is meant to be fully integrated onto a motherboard like the ASUS Sage boards. If you aren’t a big vendor like ASUS, you don’t really have that expertise to tweak the PLX chip.
Since the Delock 90504 seems to be working fine and those four missing-drivers entries in Device Manager seem to be just “cosmetic” I’m pretty sure that I’m keeping it - wanna look into testing it with TrueNAS and ESXi next.
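One upside of the planned TrueNAS testing: Linux exposes per-device PCIe AER error counters in sysfs, which partly makes up for the AIC having no management software of its own. A minimal sketch (the BDF address is a placeholder; look up the switch or the SSDs with `lspci`, and note the counters only exist when the kernel has AER support):

```python
from pathlib import Path

def read_aer_counters(bdf="0000:01:00.0"):
    """Read the per-device PCIe AER counters the Linux kernel exposes
    in sysfs. The default bdf is a placeholder address; substitute the
    device found via `lspci`. Returns {} when nothing is available."""
    dev = Path("/sys/bus/pci/devices") / bdf
    counters = {}
    for name in ("aer_dev_correctable", "aer_dev_nonfatal", "aer_dev_fatal"):
        node = dev / name
        if node.exists():
            counters[name] = node.read_text()
    return counters

found = read_aer_counters()
if found:
    for name, value in found.items():
        print(name, value.strip(), sep="\n")
else:
    print("No AER counters found; check the BDF address or kernel config.")
```

Checking these counters before and after a full-load transfer gives the same kind of “is the link actually clean?” answer that WHEA 17 events give on Windows.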
The only other thingy that would spark my curiosity is the PCIe Gen4 Broadcom P411W-32P, but using Gen4 with DIY cables and backplanes is more of a PITA, and the currently intended use case doesn’t need the higher sequential speeds that would require Gen4.
It seems that when the Delock 90504 is connected with 8 PCIe lanes you can only connect four x4 NVMe SSDs (every other port).
Only when it gets all 16 lanes can you use all eight ports for x4 NVMe SSDs.
I don’t know yet if I got a lemon or if this is expected behavior. I had previously thought that, since it has an active PCIe switch without PCIe bifurcation, it wouldn’t matter how many lanes the card itself gets for handling attached SSDs (other than peak performance being reduced, of course).
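Just to pin down the behavior I’m observing (this mapping is my own conclusion from testing, not anything from a Delock datasheet, so treat it as an assumption):

```python
def usable_ports(uplink_lanes, total_ports=8):
    """Observed Delock 90504 port availability vs. uplink width
    (my assumption from testing, not vendor-documented behavior):
    x16 uplink -> all eight ports accept x4 NVMe SSDs,
    x8 uplink  -> only every other port works."""
    if uplink_lanes >= 16:
        return list(range(total_ports))          # all ports usable
    if uplink_lanes >= 8:
        return list(range(0, total_ports, 2))    # every other port
    return []                                    # untested below x8

print(usable_ports(16))  # [0, 1, 2, 3, 4, 5, 6, 7]
print(usable_ports(8))   # [0, 2, 4, 6]
```

If others see the same pattern, that would point to by-design behavior (e.g. static port grouping on the switch) rather than a lemon.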
Unfortunately, I have not been able to get a single chain of passive adapters (1: M.2-to-SFF-8643 (U.2) + 2: SFF-8643-to-SFF-8639 cable) to work without PCIe bus errors under full load (Gen4).
The Intel ones that consist of one part instead of two (M.2-to-SFF-8639, bundled with older U.2 905P Optane SSDs for example) can only handle PCIe Gen3 without any errors.
To be fair, you get PCIe Gen4 speeds, but I absolutely dislike knowing that the system is throwing errors/growing an error log during data transfers.
I also tried newer M.2-to-OCuLink adapters; they’re not really different error-wise compared to the more common M.2-to-U.2 adapters.
Gen3 isn’t an issue even when adding an additional U.2 SSD hot-swap bay like the ones from IcyDock.
No, I personally haven’t tried such “PCB” adapters yet, only ones with at least 0.3 m of cable attached to the M.2 end, for example.
Would be grateful if you could share your findings here!
Be aware of and cautious about the mechanical stress such a contraption would put on the M.2 slot (the weight of two U.2 SSDs plus the adapters themselves).
My at-home “dream” setup would be a front backplane for two to four U.2 SSDs that connects to the standard M.2 slots of consumer motherboards and handles PCIe Gen4 without any bus errors.
IcyDock offers such backplanes (version 1 with SFF-8643, which is borderline false advertising since the cables you have to get separately are the issue; version 2 with OCuLink, which is not available anywhere even though it was released about a year ago).
Since you were flaunting the ToughArmor MB699VP-B V2 in your last Icy Dock PCIe video, I wanted to ask if you would recommend purchasing it (= code for: do OCuLink cables work at least better than the “repurposed” SFF-8643 ones I’ve had the pleasure of experimenting with so far?).
I remember you mentioning that the industry dislikes OCuLink’s connectors; was it just the variant without latches (as on some ASRock (Rack) motherboards), so that the current MB699VP-B should be fine?
I haven’t been able to source a MB699VP-B V2 locally yet, but I’d like to dream that I can finally conclude my journey of shitty PCIe Gen4 U.2 backplane issues (MB699VP-B “V1”).