A Neverending Story: PCIe 3.0/4.0/5.0 Bifurcation, Adapters, Switches, HBAs, Cables, NVMe Backplanes, Risers & Extensions - The Good, the Bad & the Ugly

It’s funny: they mention Broadcom only wanted to get back into consumer sales after Astera entered that market.

Hmm… I see I was wrong about PCIe 6.0 being just as easy physically (traces/cables).

FWIW, I saw numerous complaints of failures after a few weeks to a few months. Nobody mentioned temperatures, airflow, cable geometry, or workload rates, though. So I think that once you get those SSDs freed up, steady-state heatsink temperatures under the maximum transfer rate the ASM1166s can sustain will be the best data (hopefully 1.7 GB/s with three drives each, though I’m guessing that’s optimistic).

I’m quite curious what results you get, because it’d be really nice if this turned out to be a performant and reliable arrangement. Just a guess, but I have a feeling it might instead turn out to be a dedicated-fan-on-the-card plus workload-rate-limit kind of situation.

1 Like

Sorry for digging up your old post.
I just got this board and wanted to let you know about this thread:

where someone also shared the F09 BIOS (:

1 Like
  • Could finally start the max load test on the ASM1166 M.2-to-6xSATA Adapter with 6 relatively performant SATA SSDs.

  • Will add progress details after a day or so.

4 Likes

The test results are in; the ASM1166 chipset’s PCIe Gen3 x2 bottlenecking points are:

  • Unidirectional traffic from/to SATA drives, in total around 1,790 MB/s

  • Bidirectional traffic from/to SATA drives, in total around 3,000 MB/s

Refer to the dedicated review thread for more details: Short Review: Edging ASMedia 1166 PCIe Gen3 x2 to 6 x SATA HBA Chipset. It doesn't suck 👍
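For context, a back-of-the-envelope check (my own arithmetic, assuming only Gen3’s 128b/130b line coding and ignoring TLP/DLLP protocol overhead) puts those numbers right about where a Gen3 x2 uplink should top out:

```python
# Rough ceiling of a PCIe Gen3 x2 uplink (assumption: 128b/130b line coding
# only; TLP/DLLP protocol overhead is ignored, so real throughput is lower).
GEN3_GT_PER_S = 8.0      # transfers per second per lane, in GT/s
LANES = 2
ENCODING = 128 / 130     # Gen3 line-coding efficiency

one_way_mb_s = GEN3_GT_PER_S * LANES * ENCODING * 1000 / 8   # MB/s, decimal

print(f"Gen3 x2, one direction  : ~{one_way_mb_s:.0f} MB/s")       # ~1969 MB/s
print(f"Gen3 x2, both directions: ~{2 * one_way_mb_s:.0f} MB/s")   # ~3938 MB/s
```

So the ~1,790 MB/s unidirectional figure sits within roughly 10% of the line-rate ceiling, which points at the x2 uplink rather than the SATA side as the limiter; the bidirectional result suggests the chip itself runs out of steam before the link does.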

6 Likes

Our favorite company XD

3 Likes

I just spied something interesting amongst the websites I’ve been tracking: a new PCIe 5.0 switch to top them all? :hushed:

HighPoint Technologies Homepage on 4/12/2024

I can’t really tell from the image whether it’s MiniSAS or MCIO.


Someone is selling PCIe 5.0 riser cables as well.

LINKUP - AVA5 PCIE 5.0 Riser Cable


No PCIe 5.0 / Gen 5 news from the other usual suspects (Broadcom, Microchip / Adaptec, and Areca)

4 Likes

Details are live on HighPoint’s website now:

  • Four products differentiated by M.2 or MCIO and switch or RAID:
    1. Rocket 1608A - PCIe Gen5 x16 to 8-M.2x4 NVMe Switch AIC (pre-order price: $1,499.00)
    2. Rocket 1628A - PCIe Gen5 x16 to 4-MCIOx8 NVMe Switch Adapter (pre-order price: $1,499.00)
    3. Rocket 7608A - PCIe Gen5 x16 to 8-M.2x4 NVMe RAID AIC (pre-order price: $1,999.00)
    4. Rocket 7628A - PCIe Gen5 x16 to 4-MCIOx8 NVMe RAID Adapter (pre-order price: $1,999.00)
  • Common features:
    • Single-slot width
    • PCIe 5.0 × 16
      • Up to 56 GB/s
    • 8 NVMe ports
    • Hardware secure boot
    • Synthetic hierarchy



They seem like they’d be a good fit for a physical PCIe x16 slot that’s electrically PCIe 5.0 x8, driving 8 × PCIe 3.0 x4 SSDs. The additional information hidden in the pop-ups further down the page mentions the cards “can support as many as 32 NVMe devices via backplane connectivity via a single PCIe 5.0 x16 slot.” I hope there is a tool for direct connections like Adaptec offers…
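As a quick sanity check of that pairing (my own sketch, line rate only, with switch and protocol overhead ignored), a Gen5 x8 uplink carries roughly the same raw bandwidth as eight Gen3 x4 SSDs combined:

```python
# Raw line-rate comparison: PCIe 5.0 x8 uplink vs. 8x PCIe 3.0 x4 SSDs
# (assumption: 128b/130b coding only; switch and protocol overhead ignored).
def link_gb_s(gt_per_s: float, lanes: int) -> float:
    """Approximate one-direction bandwidth of a PCIe link in GB/s."""
    return gt_per_s * lanes * (128 / 130) / 8

gen5_x8_uplink = link_gb_s(32.0, 8)    # host-facing side of the card
gen3_x4_ssd = link_gb_s(8.0, 4)        # one downstream SSD

print(f"Gen5 x8 uplink : ~{gen5_x8_uplink:.1f} GB/s")              # ~31.5 GB/s
print(f"8x Gen3 x4 SSDs: ~{8 * gen3_x4_ssd:.1f} GB/s aggregate")   # ~31.5 GB/s
```

In other words, all eight downstream drives could run flat out without oversubscribing the uplink, which is exactly the attraction of putting a Gen5 switch in front of Gen3 drives.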

4 Likes

Digging through the page, it looks like it’s using a Broadcom chipset, specifically the PEX89048.

The HighPoint SSD7540 Rocket 1508 used the Broadcom PEX88048, so I bet their feature sets are fairly similar.

lol, it looks like they have a typo on the product page:

2 Likes

I have a warning for you. I’ve got this card, but mine seems to be broken.
Link to the item for anyone interested: https://aliexpress.com/item/1005006075676266.html
Model number: SU-EM5204(A3)-F50903


I’ve ordered the PCIe 3.0 x2 version (“16GTS PCI-E X4”).
This card looks interesting because it’s cheap and should in theory have low power consumption. It has a PCIe 3.0 x2 connection to the system and a PCIe 3.0 x1 link to each drive. It’s based on the ASMedia ASM2806 chip.
I loaded it with 4 Intel 660p SSDs but couldn’t reliably access the drives.
The NVMe devices fail shortly after boot and are no longer listed by nvme-cli.
Running lspci on this system locks it up.
I forgot to save the full dmesg, but I have a screenshot.

After some more testing on a second PC I managed to boot successfully once. Even then, only 3 out of 4 NVMe devices were visible in the system.
lspci output suggested that ASPM was active on the PCIe bridge and the SSDs.

2 Likes

Ironically, your remark about the typo leads by example. :joy:

3 Likes

I lament the human condition:

For me it hasn’t been long since I’ve had PCIe switch/Tri-Mode HBAs that don’t trigger complete system crashes (the Broadcom HBA 9500 was finally fixed last summer with firmware P28, and I got the Adaptec 1200P-32i just recently). But with Zen 5 I’m likely looking to move my main daily-driver system from Zen 3/PCIe Gen4 to PCIe Gen5, and it would be great to operate more PCIe Gen4 SSDs in a PCIe Gen5 x8 slot without them slowing each other down (at this time I only have a single Zen 4 7800X3D/ASUS ProArt X670E-CREATOR WIFI to experiment with).

But just giving Broadcom another chance…? :crazy_face:
(it’s their chipset, no matter who’s slapping their sticker on the box)

I’m looking into a few ways to make these weird, niche edge-case experiments more justifiable, maybe by creating proper YouTube videos with the monetization routes that entails. I mostly ended up doing these experiments because I wasn’t able to find other people who had already done them.

And the reason behind my focus on PCIe stuff on mainstream platforms is the absurd pricing of current HEDT/workstation platforms (and their still half-baked state; just look at the Threadripper threads). I imagine other people are similarly annoyed by that.

4 Likes

Can confirm. Even if revitalized, reliable HEDT existed at reasonable pricing, its power draw would be problematic for our building infrastructure, and, since most of our office space is shared, workstation (or server) operating noise levels aren’t viable either.

Another structural issue I keep hitting: in desktop form factors, dGPU size and airflow demands make using the other PCIe slots less than great. Moving lanes toward M.2 sockets is a logical response at the motherboard level, but there’s a distinct lack of options past that.

I feel like bifurcating an M.2 and plugging in some drives should be easy, but that wants a move away from SATA to U.2/U.3, which hasn’t happened with desktop parts. Siena-type HBA integration seems like it should happen more widely, too. It’ll probably be a long wait for either.

2 Likes

For a home lab setup in an urban setting (think high-rise apartment), forget about getting a good night’s sleep during long-running AI training workloads. :joy:

Bandwidth keeps going up, and motherboard vendors would be doing us a favor if they broke out those limited lanes into more slots. I don’t know if anything needs 16 or even 8 lanes of PCIe 5.0 all in one slot. Perhaps a quartet of quadruple-laned PCIe 5.0 slots―each one equivalent to PCIe 3.0 × 16―is all any consumer needs.

Probably the only sustainable way to do them, short of being already rich. Even a single cable doesn’t come cheap. :sweat_smile:


Scouring the market for what I can get in preparation for the PCIe 5.0 HBA…

Unbelievably, I could not find anyone else selling MCIO 8i cables with OCuLink 4i on the other end!

Interestingly, Serial Cables and some sellers on eBay have also started hawking PCIe 6.0-capable cables. So the claim that PCIe 6.0’s trace/cabling requirements aren’t much more challenging than its predecessor’s actually did hold some truth.

The upshot of the switch to PAM4 then is that by increasing the amount of data transmitted without increasing the frequency, the signal loss requirements won’t go up. PCIe 6.0 will have the same 36dB loss as PCIe 5.0, meaning that while trace lengths aren’t officially defined by the standard, a PCIe 6.0 link should be able to reach just as far as a PCIe 5.0 link. Which, coming from PCIe 5.0, is no doubt a relief to vendors and engineers alike.

Sauce: PCI Express 6.0 Specification Finalized: x16 Slots to Reach 128GBps

In terms of signal to noise reduction, we’re losing approximately 9 decibels (dB) of signal moving to PAM4. From an interval timing loss perspective, this equates to roughly a third. For pad-to-pad loss budget, the PCIe SIG has established 36 dB for PCIe 5.0 and 32 dB for PCIe 6.0 as targets. All this is calculated at the Nyquist frequency of 16 gigahertz (GHz).

The PCIe specification stipulates a root complex or CPU on a PCB, and it calls for approximately 12 to 14 inches of trace on the mother board and an additional 3-4 inches of trace on an add-in card equating to a total trace length of approximately 18 inches. These trace length targets are essentially the same for PCIe 5.0, as well as PCIe 6.0 systems. Designing a system to support the same trace length for PCIe 6.0 with the new PAM4 IO and higher data rate presents significant challenges in the design of the PCB and packages. Designing these boards with the same reach is going to have many implications, including tighter manufacturing tolerances, higher layer counts, etc.

Sauce: https://semiengineering.com/using-a-retimer-to-extend-reach-for-pcie-6-0-designs/
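The arithmetic behind those two excerpts, as I read them (my own back-of-the-envelope, not taken from either article): PAM4 carries 2 bits per symbol, so 64 GT/s PCIe 6.0 runs at the same 32 Gbaud symbol rate, and hence the same 16 GHz Nyquist frequency, as 32 GT/s NRZ PCIe 5.0. The price is squeezing four signal levels into the same voltage swing, which costs roughly 9.5 dB of SNR.

```python
import math

# PCIe 5.0 uses NRZ (1 bit/symbol); PCIe 6.0 uses PAM4 (2 bits/symbol).
# Sketch only: ideal signalling, FEC/FLIT overhead not considered.
gen5_baud = 32e9 / 1   # 32 Gbaud
gen6_baud = 64e9 / 2   # also 32 Gbaud -> same channel bandwidth demand

print(f"Gen5 Nyquist: {gen5_baud / 2 / 1e9:.0f} GHz")   # 16 GHz
print(f"Gen6 Nyquist: {gen6_baud / 2 / 1e9:.0f} GHz")   # 16 GHz

# The PAM4 eye amplitude is one third of the NRZ eye for the same swing:
print(f"PAM4 SNR penalty: ~{20 * math.log10(3):.1f} dB")   # ~9.5 dB
```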

1 Like

Here’s some interesting work from the Open Compute Project that the last posts reminded me of.
Unfortunately, it’s not for consumer/HEDT platforms.
Data Center Modular Hardware System: Server/DC-MHS - OpenCompute
https://www.youtube.com/watch?v=sM0ZC3vB-Gg

A different PCIe switch that might interest people with more specific needs: ATTO ENVM-S4FF-000 (:paperclip: TechSpecs-ExpressNVM.pdf)

Some stand-out features below:

  • PCIe 4.0 × 16 host connection
  • 8 ports × 4 lanes or 16 ports × 2 lanes
  • Pure PCIe―No SAS/SATA/trimode
    • They do have NVMe-specific management tools, so its purity (device agnosticism) might be called into question.

It’s not clear whether ATTO makes their own PCIe switches or packages one from Broadcom/Microchip. This should be an interesting vendor to keep an eye on nonetheless. If their current offering does what I think it does, I may just wait and opt for their PCIe 5.0 iteration of the above product instead of getting more of what Broadcom or HighPoint themselves are offering.


Also not sure when these (MCIO-2127-X4 and MCIO-2131-50) came out, but Micro SATA Cables is starting to release PCIe 5.0 adapters. They’re PCIe CEM-to-MCIO redriver adapters though, which doesn’t help those who want to free up the limited PCIe 5.0 lanes buried inside mainstream motherboards’ M.2 slots. PCIe 5.0 x4 slots are comparatively rare, limiting the adapters’ applicability.


Hmm… suspicious! M.2-to-MCIO at PCIe 5.0 speeds, but totally passive.

After experiencing passive PCIe 4.0 adapters, I’ll pass on these.

4 Likes

Since I also tried passive M.2-to-SFF-8639 adapters with cables from Delock (labelled for PCIe Gen4), where the cable wires are soldered directly to the M.2 PCB traces, and they also throw PCIe Bus Errors (though at least very few), I’ve come to the conclusion that ANYTHING with copper cables at Gen4 or faster always needs an active component (if you’re out to get a configuration with zero PCIe Bus Errors).

Even the best connector-plug junction (MCIO at this point in time) is worse for signal quality than a connection consisting of a single continuous wire.

4 Likes

Which is to be expected, I think, as practical motherboard trace lengths meeting PCIe loss budgets tend to be in the 22–35 cm range. Cables can be lower loss than traces, but after allowing for routing from the CPU socket to the MCIO connector, even a 20 cm cable likely adds a substantial signal-integrity challenge. PCIe 5.0 motherboards require redrivers to get CPU lanes to the lower slots, and it wouldn’t surprise me if constraining trace lengths was one reason AMD made the X670E two Promontory 21s. The PCIe 4.0 x16 extensions I know of all use redrivers.

However, I’d be cautious of assuming MCIO is a larger factor than differences in characteristic impedance among the CPU package, motherboard, cable, and device end PCB. Or the vias needed to connect microstrips to a connector. Connectors themselves can be pretty transparent if well designed. Delock’s direct soldering might be worse.

I think it’s worth noting that single-wire connections don’t exist in a practical sense. Even if it’s a BGA-to-BGA route, like a chipset PCIe lane to a soldered-down NIC, there’s still the metallization on both dies. CPU lanes have the processor metallization, package, socket, motherboard, at least one PCIe connector, device traces, and the other die’s metallization in the path.

If keeping the eye open on a 16 GT/s (PCIe 4.0) or 32 GT/s (PCIe 5.0) differential pair running through all of that were easy, we wouldn’t be seeing implementation challenges even in uncabled links.

2 Likes

Here’s a TDR plot of some PCIe 5.0 signaling from that Samtec article:

That huge spike in the 900–1000 picosecond range is the PCIe cable connector. The article was talking about using different-impedance cables to help out with signal integrity; it turns out the PCIe “standard” of 85 ohms is often a bad choice… which is coincidentally what a lot of lower-quality passive cables use.

higher frequency reflections are a greater function of the mating connector impedance than the PCB itself.
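To put rough numbers on why the connector dominates the TDR plot (an illustrative calculation of my own; the 92 Ω and 100 Ω values are hypothetical, not taken from the article): the reflection coefficient at an impedance step is Γ = (Z2 − Z1)/(Z2 + Z1), so even modest mismatches bounce a few percent of the incident signal back at every transition, and those reflections stack up across every connector in a cabled link.

```python
# Reflection coefficient at an impedance step (illustrative values only;
# the 92 and 100 ohm figures are hypothetical, not taken from the article).
def gamma(z1: float, z2: float) -> float:
    """Fraction of the incident wave reflected at a Z1 -> Z2 transition."""
    return (z2 - z1) / (z2 + z1)

for z1, z2 in [(85, 92), (85, 100), (92, 100)]:
    print(f"{z1} ohm -> {z2} ohm: {gamma(z1, z2) * 100:+.1f}% reflected")
```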

2 Likes