A Neverending Story: PCIe 3.0/4.0/5.0 Bifurcation, Adapters, Switches, HBAs, Cables, NVMe Backplanes, Risers & Extensions - The Good, the Bad & the Ugly

If good SFF-8654-to-SFF-8611 cables turn up this might be an option:

https://www.delock.de/produkte/S_89030/merkmale.html?setLanguage=en

Somewhat strange development:

  • MicroSATACables’ support stated that they cannot supply an SFF-8654-to-SFF-8611 cable that can handle PCIe Gen4;

  • I personally am so used to manufacturers’ FAQs being useless that I had forgotten to check Icy Dock’s for the V2 OCuLink backplane, and they state that a MicroSATACables SFF-8654 cable is compatible:

Known compatible cables:
Micro SATA Cables RML42-0503

Great.

I think these guys test without enabling PCIe Advanced Error Reporting (AER) in the BIOS, or don’t know that only errors on the CPU’s PCIe lanes get properly reported…? *

:scream:

(*Regarding AM4/Ryzen 3000/5000 and X570)
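Side note for anyone who wants to check whether AER is actually reporting anything on their own box: below is a minimal sketch of how I’d dump the per-device error counters on a Linux live system. It assumes a reasonably recent kernel that exposes the aer_dev_* sysfs attributes; ports that don’t do native AER (chipset lanes, for example) simply won’t show them, which is exactly the trap mentioned above.

```python
#!/usr/bin/env python3
"""Dump the per-device AER error counters that Linux exposes via sysfs.

Sketch only: assumes a kernel new enough to provide the aer_dev_correctable /
aer_dev_nonfatal / aer_dev_fatal attributes. Devices whose upstream port does
not do native AER simply will not show them.
"""
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    totals = {}
    for kind in ("aer_dev_correctable", "aer_dev_nonfatal", "aer_dev_fatal"):
        attr = dev / kind
        if not attr.exists():
            continue
        # Each attribute lists "<ErrorName> <count>" lines plus a TOTAL_* summary line.
        for line in attr.read_text().splitlines():
            name, _, value = line.rpartition(" ")
            if name.startswith("TOTAL_"):
                totals[kind] = int(value)
    if totals:
        print(dev.name, totals)
```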

Yay, the thingy arrived:

Now the Gen4 OCuLink cables are the only thing missing; I hope they also arrive soon.

What remains is that I’m pretty pissed at Icy Dock:

  • Me last year: Dear Icy Dock support, you state on your website that the MB699VP-B (V1) is PCIe Gen4-compatible. I’ve tried 3 units and every single bay produces PCIe transmission errors when actually running a Gen4 NVMe SSD; there are no errors up to PCIe Gen3.

  • Icy Dock support: Hey, that’s your cables’ or your motherboard’s fault.

  • Me last year: Well, can you tell me specific SFF-8643-to-SFF-8643 cables that can handle PCIe Gen4?

  • Icy Dock support: Yeah sorry, there are so many different HBA manufacturers, we can’t give any recommendations for cables.

  • Me last year: What does that have to do with a recommendation for a standard, non-proprietary* SFF-8643-to-SFF-8643 cable? Why don’t you offer cables yourselves where you can ensure the quality?

*(For example with Broadcom HBA 9400 models you have to use proprietary Broadcom SFF-8643 cables if you want to connect them to NVMe drives)

  • No further response

I generally like Icy Dock’s products and have built many systems with them, but this leaves a bitter aftertaste in my mouth :rage:

The label on the box makes it pretty clear that Icy Dock themselves don’t really consider the MB699VP-B (V1) to be PCIe Gen4-capable. I’d say this is close to false advertising, and while most text I write has ironic or sarcastic undertones, I’m 100 % serial here…

@wendell
I would really appreciate it if you would hold their feet to the fire a bit in any upcoming content with Icy Dock products.


Funny you mention that: when I looked around their products, I also noticed the distinct omission of which cable to actually buy, and that they don’t provide any themselves. It’s definitely not an acceptable response.

…I wonder if there are any $15K Gen4 cable testers we could get Wendell to buy.

Yeah, I also left a comment on the latest DisplayPort-themed video asking about such an SFF cable tester, especially for PCIe Gen4.


GODDAMNIT!

PCIe Gen4 seems to be becoming the bane of my existence… :sob:

UPS brought the MicroSATACables OCU-1708-GEN4. Giddy as a schoolgirl, I went to test the MB699VP-B V2 for the very first time with my Samsung PM1733 U.2 SSDs and got the worst results I have ever seen:

More than 10,000 PCIe bus errors in Gen4 mode over the course of a single CrystalDiskMark run with default settings :frowning:
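For the record, this is roughly how I tally those errors on the Windows side while a benchmark is running. It’s only a sketch and assumes the corrected PCIe errors actually surface as Microsoft-Windows-WHEA-Logger events in the System log; whether they do depends on the platform/firmware and on AER being enabled.

```python
#!/usr/bin/env python3
"""Count WHEA hardware-error events logged during roughly the last benchmark run.

Sketch only: assumes corrected PCIe errors show up as
Microsoft-Windows-WHEA-Logger events in the Windows System log.
"""
import json
import subprocess

# Look back 15 minutes -- about one CrystalDiskMark run with default settings.
ps = (
    "Get-WinEvent -FilterHashtable @{"
    "LogName='System'; ProviderName='Microsoft-Windows-WHEA-Logger'; "
    "StartTime=(Get-Date).AddMinutes(-15)} -ErrorAction SilentlyContinue "
    "| Select-Object Id, TimeCreated, LevelDisplayName | ConvertTo-Json"
)
out = subprocess.run(
    ["powershell", "-NoProfile", "-Command", ps],
    capture_output=True, text=True, check=False,
).stdout.strip()

events = json.loads(out) if out else []
if isinstance(events, dict):  # a single event serializes as a bare object
    events = [events]
print(f"{len(events)} WHEA events logged in the last 15 minutes")
```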

@wendell

In your first Icy Dock PCIe video at around 3:06 you state that the blue-ish wires indicate PCIe Gen3…
…and of course if you check through the cable shroud of the MicroSATACables OCU-1708-GEN4 you can see blue wires.

Could you explain the reasoning behind your “blue is bad” comment a bit?

That’s just the stuff I’d ordered from AliExpress. I’ll take some pics at work tomorrow. The blue-ish cables were always bad;
the only ones that worked were silver.

That’s not hard and fast, just me trying to spend my way out of my problems.


Yeah, looks like I gave you bad advice, sorry about that.

I also got bitten by the same issue. I picked up some of their SlimSAS 8i to U.2 (SFF-8639) SSD dual-port adapters, and it turns out they generate multiple correctable PCIe bus errors every minute that show up in AER reporting, even at just PCIe 3.0 speeds. The errors increase when any drives are attached, so I’ll be returning them shortly.

For reference, I had some M.2 boards (PCIe 3.0) from them that work perfectly fine with no errors.
Motherboard: Tyan S8030 (S8030GM2NE)
Cables: https://www.amazon.com/gp/product/B082NT2ZV9/
Working M.2 adapters: https://www.amazon.com/gp/product/B084GD4GKR/
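If anyone wants to watch those correctable errors tick up over time, here is a rough polling sketch (again Linux-only, and it assumes the aer_dev_correctable sysfs counters exist for the ports involved):

```python
#!/usr/bin/env python3
"""Print the per-minute increase in correctable AER errors per PCIe device.

Sketch only: relies on the aer_dev_correctable sysfs attribute and its
TOTAL_ERR_COR summary line being available for the ports you care about.
"""
import time
from pathlib import Path

def totals() -> dict[str, int]:
    counts = {}
    for attr in Path("/sys/bus/pci/devices").glob("*/aer_dev_correctable"):
        for line in attr.read_text().splitlines():
            if line.startswith("TOTAL_ERR_COR"):
                counts[attr.parent.name] = int(line.split()[-1])
    return counts

previous = totals()
while True:
    time.sleep(60)
    current = totals()
    for dev, count in current.items():
        delta = count - previous.get(dev, count)
        if delta:
            print(f"{dev}: +{delta} correctable errors in the last minute")
    previous = current
```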


No worries; I had asked for sources, MicroSATACables’ item description looked good, and of course it was ultimately my decision to order from them.

@wendell

Have you had any configuration where you could use the MB699VP-B V2 without PCIe bus errors on a system where PCIe AER is confirmed to be working for the backplane’s PCIe lanes?
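In case it helps to compare notes, this is roughly how I convince myself that AER is even exposed for the path down to the backplane. It’s just a Linux sysfs sketch; the PCI address you pass in (e.g. one of the U.2 drives, found via lspci) is obviously specific to your own system.

```python
#!/usr/bin/env python3
"""Walk from one NVMe device up to its root port and report AER visibility.

Sketch only: assumes Linux sysfs. Pass the PCI address of one of the U.2
drives, e.g. "0000:21:00.0" (look it up with lspci).
"""
import sys
from pathlib import Path

addr = sys.argv[1]
node = Path("/sys/bus/pci/devices", addr).resolve()

# Every parent directory that looks like a PCI address (two colons) is an
# upstream bridge/switch/root port; stop once we reach the host bridge.
while node.name.count(":") == 2:
    has_aer = (node / "aer_dev_correctable").exists()
    print(f"{node.name}: AER counters {'present' if has_aer else 'NOT exposed'}")
    node = node.parent
```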

Since I don’t really know what avenues to pursue next, there have not been any new developments since the bad MicroSATACables OCuLink “Gen4” cables.

@wendell

As an ambitious layperson regarding electronics (I’ve got a separate soldering and hot-air station, flux, etc.), in the past I just followed instructions like Louis Rossmann’s videos to reflow bad SMDs, but I wouldn’t know how to design HF signal traces like the 8 GHz lines that are needed for PCIe Gen4.

The only Gen4 parts I could test without any issues are the most expensive Linkup PCIe Gen4 riser cables (with a 3060 and a 3090); no PCIe bus errors were reported at all.

Now I’m thinking:

Could you take the tested-good wiring of such a PCIe Gen4 riser cable, remove its ends, design an M-key M.2 PCB where the wires contact the motherboard’s M.2 slot pins directly on one end, and attach a “premium” OCuLink connector on the other end?

I’m still refusing to give up on connecting a PCIe Gen4-capable U.2 backplane to regular M.2 slots, so hopefully the signal integrity of a completely passive chain is good enough to avoid PCIe bus errors if you don’t have to use multiple adapters with the resulting signal losses from the plugged connections.

Or is this more of a pipe dream?

I wasn’t able to find an oscilloscope that could handle 8 GHz to manually check signal quality so it would basically be just trial and error.

Ironically, I’ve had the fewest PCIe bus errors so far with the first-gen MB699VP-B with SFF-8643 connectors, “old” SAS3 SFF-8643 cables, and an M.2-to-SFF-8643 adapter from Delock.

So I think we are pretty close to it working without PCIe bus errors if the chain between the devices has fewer links and uses higher-quality wires…

The next instrument of my suffering is on its way, a Broadcom P411W-32P
(Even though I’m kind of pissed at Broadcom)

  • A PCIe Gen4 x16-to-x32 (four SFF-8654 8i ports) switch add-on card without SAS or SATA functionality, so for use with NVMe SSDs only;

Will be my first try with PCIe Gen4 over SFF-8654 8i to dual SFF-8643 or SFF-8611 (OCuLink)…

:thinking:
Broadcom_P411W-32P-UG104_PDF_Manual.zip (419.1 KB)


You got a good source for cables? Mine hasn’t panned out.

I have the 12th-gen Supermicro SP3 boards and one ASRock with the low-profile PCIe x8. I need to order a full suite of cables to try various Icy Dock stuff.


@wendell

Unfortunately, no new cable experiences (everything is documented in this thread; ironically, everything with SFF-8611 OCuLink has been (far) worse than older SFF-8643 SAS HD cables).

For the Broadcom P411W-32P I just ordered two different generic kinds of cables for each type (SFF-8654 8i to dual SFF-8611 OCuLink and SFF-8654 8i to dual SFF-8643) from Amazon Germany to get a general “feel” for SFF-8654 8i; at least here I have an easy way to get 100 % of my money back if the cables are shitty.

I fear that the Broadcom switch chipset won’t report any PCIe errors to the host; this was the standard behavior when using NVMe SSDs with the Broadcom HBA 9400 and Broadcom’s absurd 1 m long proprietary U.2 cables.

I still want to use 5.25" 4 x 2.5" U.2 backplanes with “consumer” motherboards’ M.2 or regular PCIe slots, and I’m trying to learn how to build my own cables so I don’t have multiple adapted connections that reduce the signal quality.

I don’t get how people like Icy Dock or Micro SATA Cables aren’t ashamed of themselves. I honestly think they just plug a cable in, check whether the device is connected via PCIe Gen4, and that’s their entire “qualification process”.
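For contrast, this is roughly the bare minimum I’d expect from a qualification pass beyond “it shows Gen4 in the OS”: confirm the negotiated link speed/width for every drive and then watch the error counters under load. A Linux sysfs sketch; the attribute names assume a current kernel.

```python
#!/usr/bin/env python3
"""Show negotiated vs. maximum PCIe link speed/width for every NVMe controller.

Sketch only: a drive reporting 8.0 GT/s where 16.0 GT/s is the maximum has
been downtrained, which is already a hint that the cabling is marginal.
"""
from pathlib import Path

def read(dev: Path, attr: str) -> str:
    f = dev / attr
    return f.read_text().strip() if f.exists() else "n/a"

for ctrl in sorted(Path("/sys/class/nvme").iterdir()):
    pci_dev = (ctrl / "device").resolve()  # the controller's PCI function
    print(
        f"{ctrl.name} ({pci_dev.name}): "
        f"running {read(pci_dev, 'current_link_speed')} x{read(pci_dev, 'current_link_width')}, "
        f"max {read(pci_dev, 'max_link_speed')} x{read(pci_dev, 'max_link_width')}"
    )
```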


Also, I was hoping that maybe you’d do a follow-up video with the guys from Liqid showing the physical connectors/adapters/cables that enable them to run even PCIe Gen4 over longer distances.

I’m not married to the idea of using electrical copper wires for the cabling; if it is somehow “affordable” I wouldn’t mind getting suitable optical transceivers, even if the cable length is less than 1 m.

I HATE PCIe Bus errors :wink:

My prediction: you’ll just end up buying a 2U 24-bay U.2 EPYC server chassis, which would have saved you money and suffering had you done it from the start.

NVMe connectivity is such an expensive and proprietary shitshow.


Realistically speaking I think you are right.

Subjectively speaking, I just don’t want to accept that it all fails because of “shitty wires”.

While I already have non-RDMA 40 GbE, I mainly use Windows on the desktop systems that handle large files, and only the Windows “for Workstations” and “Server” editions support RDMA, so I would like to use swappable NVMe drives locally to not throw performance away :frowning:

Windows for Workstations, on the other hand, doesn’t support/accept settings that comply with the General Data Protection Regulation (EU), so I’m currently staying with Enterprise; turning Server into a desktop OS is a bit of a pain in the ass.
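(As an aside, if anyone wants to double-check whether SMB Direct/RDMA is actually in play on a given Windows box, here is a quick sketch that just shells out to the built-in SMB cmdlet; how the output looks on your NIC/driver/edition combination is your own adventure.)

```python
#!/usr/bin/env python3
"""List the SMB client's network interfaces and whether they are RDMA-capable.

Sketch only: shells out to the built-in Get-SmbClientNetworkInterface cmdlet;
whether RdmaCapable can ever be True depends on NIC, driver and Windows edition.
"""
import subprocess

ps = (
    "Get-SmbClientNetworkInterface "
    "| Select-Object FriendlyName, RdmaCapable, LinkSpeed "
    "| Format-Table -AutoSize | Out-String"
)
print(subprocess.run(
    ["powershell", "-NoProfile", "-Command", ps],
    capture_output=True, text=True, check=False,
).stdout)
```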

Would that chassis not still require cables?

Or is the problem the cards?

Because it seems the cables are the killer?

Edit: even if the chassis has connections between the mobo and the backplane, with no RAID card/HBA, it would still need cables, even if proprietary ones?

No idea about your physical situation, but I actually have my computer in the garage and use a combination of an optical DisplayPort cable, OM4 cable, and network-compatible Extron USB extenders to bring video and USB to my room; it’s worked flawlessly for almost 2 years now. I finally have the perfectly silent, non-room-heating desktop I’ve always strived for, with all the power I could ask for.

I do appreciate your documentation. Hopefully it saves others disappointment.

Those types of server chassis come with cables installed, as the compatibility and connection quality of the motherboard, cables and backplanes are verified/certified as a unit and product line. Hence the enterprise price delta over the raw parts cost.

It’s a half-humorous, half “but seriously, in 2022 this is basically the only fucking way to deal with this shit” suggestion that’s overkill. Unfortunately, NVMe connectivity is a half-baked enterprise affair that simply doesn’t have a clean ad-hoc solution at the prosumer level yet.


@Trooper_ish

It’s a bit of everything:

  • My core desire is to connect NVMe U.2 SSDs to motherboards that only have regular M.2 or PCIe slots and be able to easily swap the SSDs;

  • In combination with PCIe bifurcation, no additional logic like HBAs is generally required;

  • The length of PCIe traces is pretty tightly specified to perform optimally; while there were at least no issues with “quality” parts at PCIe Gen3, in my experience just adding a passive part to bridge the distance always leads to PCIe bus errors when doing the same at PCIe Gen4;

  • Irony: an old SAS3 cable (SFF-8643) that was bundled with an Intel SAS3 expander (in 2015?) leads to the fewest PCIe Gen4 bus errors when used with the SFF-8643 variant of the 4 x 2.5" Icy Dock U.2 backplane;

  • This observation tickles my tummy with the hope that these issues must somehow be solvable with “wire quality” and connectors with better high-frequency properties like SFF-8611 or SFF-8654.

@Log

Silent systems are also an interest of mine. I’m currently designing a multi-room watercooling solution for computers in two rooms that uses the heat (worst case about 3 kW) to help dry clothes in a third room where a couple of radiators are mounted to a wall.

While I also use a file server in another room, I “have” to use bare-metal Windows systems since I don’t have the energy to fight against goddamn DRM mechanisms that don’t like virtualization.
