I have a warning for you. I’ve got this card but mine seems to be broken.
Link to the item for anyone interested https://aliexpress.com/item/1005006075676266.html
model number: SU-EM5204(A3)-F50903
I ordered the PCIe 3.0 x2 version (“16GTS PCI-E X4”).
This card looks interesting because it's cheap and should, in theory, have low power consumption. It has a PCIe 3.0 x2 connection to the system and a PCIe 3.0 x1 link to each drive. It's based on the ASMedia ASM2806 chip.
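For context, a quick back-of-envelope calculation of what that topology can do in theory (just the standard PCIe 3.0 spec rates with 128b/130b encoding, not measurements of this particular card):

```python
GT_PER_LANE = 8e9          # PCIe 3.0: 8 GT/s per lane
ENCODING = 128 / 130       # 128b/130b line-code efficiency

def link_gbps(lanes):
    """Theoretical payload bandwidth in GB/s for a PCIe 3.0 link."""
    return lanes * GT_PER_LANE * ENCODING / 8 / 1e9

uplink = link_gbps(2)      # x2 uplink from the ASM2806 to the host
per_drive = link_gbps(1)   # x1 link from the switch to each M.2 slot
print(f"uplink    (x2): {uplink:.2f} GB/s")            # ~1.97 GB/s, shared by all 4 drives
print(f"per drive (x1): {per_drive:.2f} GB/s")         # ~0.98 GB/s ceiling per SSD
print(f"4 drives combined: {4 * per_drive:.2f} GB/s")  # ~3.94 GB/s, so the uplink is ~2x oversubscribed
```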
I loaded it with 4 Intel 660p SSDs but couldn't reliably access the drives.
The NVMe devices fail shortly after boot and are no longer listed by nvme-cli.
Running lspci on this system locks it up.
I forgot to save the full dmesg, but I have a screenshot.
After some more testing on a second PC I managed to boot successfully once. Even then, only 3 out of 4 NVMe devices were visible in the system.
lspci output suggested that ASPM was active on the PCIe bridge and the SSDs.
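For anyone wanting to check the same thing on their own box, a minimal sketch (assuming a Linux system with pciutils installed; the filtering is just what I'd look for, nothing specific to this card):

```python
import subprocess

# Dump the ASPM-related link lines from `lspci -vv` per device, so you can see
# whether ASPM is advertised/enabled on the bridge and the SSDs.
out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout

device = None
for line in out.splitlines():
    if line and not line[0].isspace():
        device = line                  # device header, e.g. "03:00.0 Non-Volatile memory controller: ..."
    elif "ASPM" in line:
        print(device)
        print("   ", line.strip())     # LnkCap/LnkCtl lines mentioning ASPM
```

If ASPM itself turns out to be the trigger, booting with `pcie_aspm=off` on the kernel command line is the usual quick A/B test.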
EDIT:
update 1
Only one of the 4 M.2 ports is broken. Once I removed the NVMe drive from that one port, the card started working. I haven't stress tested it, but being able to boot the system is already a major improvement.
For me it hasn't been long since I've had PCIe Switch/Tri-Mode HBAs that don't trigger complete system crashes (the Broadcom HBA 9500 was finally fixed last summer with firmware P28, and I only got the Adaptec 1200P-32i recently). But with Zen 5 I'm likely looking to move my main daily-driver system from Zen 3/PCIe Gen4 to PCIe Gen5, and it would be great to operate more PCIe Gen4 SSDs in a PCIe Gen5 x8 slot without them slowing each other down (at the moment I only have a single Zen 4 7800X3D/ASUS ProArt X670E-CREATOR WIFI to experiment with).
But just giving Broadcom another chance…?
(it’s their chipset, no matter who’s slapping their sticker on the box)
I'm looking into a few ways to make these weird, niche edge-case experiments more justifiable, maybe by creating proper YouTube videos with the monetization routes that entails. I mostly ended up doing these experiments because I wasn't able to find anyone else who had already done them.
And the reason behind my focus on PCIe stuff on mainstream platforms is the absurd pricing of current HEDT/workstation platforms (and their still half-baked state; just look at the Threadripper threads). I could imagine other people are similarly annoyed by that.
Can confirm. Even if revitalized, reliable HEDT existed at reasonable pricing, its power draw would be problematic for our building infrastructure, and, since most of our office space is shared, workstation (or server) operating noise levels aren't viable.
Another structural issue I keep hitting: in desktop form factors, dGPU size and airflow demands make using the other PCIe slots unattractive. Moving lanes toward M.2 sockets is a logical response at the motherboard level, but there's a distinct lack of options beyond that.
I feel bifurcating an M.2 slot and plugging in some drives should be easy, but that requires a move away from SATA to U.2/U.3, which hasn't happened with desktop parts. Siena-style HBA integration also seems like it should happen more widely. It'll probably be a long wait for either.
For a home lab setup in an urban setting (think high-rise apartment), forget about getting a good night's sleep during long-running AI training workloads.
Bandwidth keeps going up, and motherboard vendors would be doing us a favor if they broke out those limited lanes into more slots. I don't know if anything needs 16 or even 8 lanes of PCIe 5.0 all in one slot. Perhaps a quartet of x4 PCIe 5.0 slots, each one equivalent in bandwidth to PCIe 3.0 x16, is all any consumer needs.
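Rough numbers behind that equivalence (spec transfer rates with 128b/130b encoding; protocol overhead ignored):

```python
# Per-lane transfer rates (GT/s); PCIe 3.0 and up use 128b/130b encoding.
RATES_GT = {"3.0": 8, "4.0": 16, "5.0": 32}

def bw_gbps(gen, lanes):
    return RATES_GT[gen] * (128 / 130) / 8 * lanes   # GB/s of payload bandwidth

print(f"PCIe 5.0 x4 : {bw_gbps('5.0', 4):.1f} GB/s")   # ~15.8 GB/s
print(f"PCIe 3.0 x16: {bw_gbps('3.0', 16):.1f} GB/s")  # ~15.8 GB/s, same ballpark
```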
Probably the only sustainable way to do them, short of being already rich. Even a single cable doesn’t come cheap.
Scouring the market for what I can get in preparation for the PCIe 5.0 HBA…
Unbelievably, I could not find anyone else selling MCIO 8i cables with OCuLink 4i on the other end!
Interestingly, Serial Cables and some sellers on eBay have also started hawking PCIe 6.0-capable cables. So the claim that PCIe 6.0's trace/cabling requirements aren't much more challenging than its predecessor's apparently holds some truth.
The upshot of the switch to PAM4 then is that by increasing the amount of data transmitted without increasing the frequency, the signal loss requirements won’t go up. PCIe 6.0 will have the same 36dB loss as PCIe 5.0, meaning that while trace lengths aren’t officially defined by the standard, a PCIe 6.0 link should be able to reach just as far as a PCIe 5.0 link. Which, coming from PCIe 5.0, is no doubt a relief to vendors and engineers alike.
In terms of signal to noise reduction, we’re losing approximately 9 decibels (dB) of signal moving to PAM4. From an interval timing loss perspective, this equates to roughly a third. For pad-to-pad loss budget, the PCIe SIG has established 36 dB for PCIe 5.0 and 32 dB for PCIe 6.0 as targets. All this is calculated at the Nyquist frequency of 16 gigahertz (GHz).
The PCIe specification stipulates a root complex or CPU on a PCB, and it calls for approximately 12 to 14 inches of trace on the motherboard and an additional 3-4 inches of trace on an add-in card, equating to a total trace length of approximately 18 inches. These trace length targets are essentially the same for PCIe 5.0 as well as PCIe 6.0 systems. Designing a system to support the same trace length for PCIe 6.0 with the new PAM4 IO and higher data rate presents significant challenges in the design of the PCB and packages. Designing these boards with the same reach is going to have many implications, including tighter manufacturing tolerances, higher layer counts, etc.
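A quick sanity check on the Nyquist/PAM4 numbers quoted above, using only the standard textbook relations (nothing taken from those slides beyond the figures already quoted):

```python
import math

# PCIe 5.0: NRZ,  32 GT/s -> 1 bit/symbol  -> 32 Gbaud -> Nyquist 16 GHz
# PCIe 6.0: PAM4, 64 GT/s -> 2 bits/symbol -> 32 Gbaud -> Nyquist 16 GHz
for gen, gts, bits_per_symbol in [("5.0 (NRZ) ", 32, 1), ("6.0 (PAM4)", 64, 2)]:
    baud = gts / bits_per_symbol
    print(f"PCIe {gen}: {baud:.0f} Gbaud, Nyquist = {baud / 2:.0f} GHz")

# PAM4 packs 4 levels into the same voltage swing, so each eye is ~1/3 the
# height of the NRZ eye -> roughly a 9.5 dB penalty, matching the ~9 dB figure above.
print(f"PAM4 eye-height penalty: {20 * math.log10(3):.1f} dB")
```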
They do have NVMe-specific management tools, so its purity (device agnosticism) might be called into question.
It's not clear whether ATTO makes their own PCIe switches or packages one from Broadcom/Microchip. They should be an interesting vendor to keep an eye on nonetheless. If their current offering does what I think it does, I may just wait and opt for the PCIe 5.0 iteration of the above product instead of getting more of what Broadcom or HighPoint themselves are offering.
Also not sure when these (MCIO-2127-X4 and MCIO-2131-50) came out, but Micro SATA Cables is starting to release PCIe 5.0 adapters. It's a PCIe CEM-to-MCIO redriver adapter though, which doesn't help those who want to free up the limited PCIe 5.0 lanes buried inside mainstream motherboard M.2 slots. PCIe 5.0 x4 slots are comparatively rare, limiting the adapter's applicability.
Hmm… suspicious! M.2-to-MCIO at PCIe 5.0 speeds, but totally passive.
Since I also tried passive M.2-to-SFF-8639 adapters with Delock cables (labelled for PCIe Gen4), where the cable wires are soldered directly to the M.2 PCB traces, and those still throw PCIe bus errors (though at least very few), I've come to the conclusion that ANYTHING with copper cables at Gen4 or faster always needs an active component (if you're out to get a configuration with zero PCIe bus errors).
Even the best connector-plug interface (MCIO, at this point in time) is worse for signal quality than a connection consisting of nothing but a continuous wire.
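For anyone else chasing down those bus errors, here's a minimal sketch of how I'd count them on Linux (assuming a reasonably recent kernel that exposes the per-device AER statistics in sysfs; the aer_dev_* attribute names are the generic ones, nothing adapter-specific):

```python
from pathlib import Path

# Walk every PCI device and print any non-zero AER counters. Devices (or
# kernels) without AER support simply won't have these sysfs files.
for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    for kind in ("aer_dev_correctable", "aer_dev_nonfatal", "aer_dev_fatal"):
        counters = dev / kind
        if not counters.exists():
            continue
        hits = [line for line in counters.read_text().splitlines()
                if line.split() and line.split()[-1].isdigit() and int(line.split()[-1]) > 0]
        if hits:
            print(f"{dev.name} ({kind}):")
            for line in hits:
                print("   ", line)
```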
Which is to be expected, I think, as practical motherboard trace lengths meeting PCIe loss budgets tend to be in the 22-35 cm range. Cables can be lower loss than traces, but after allowing for routing from the CPU socket to the MCIO connector, even a 20 cm cable is likely a substantial increase in signal integrity challenge. PCIe 5.0 motherboards require redrivers to get CPU lanes to the lower slots, and it wouldn't surprise me if constraining trace lengths was one reason AMD built the X670E out of two Promontory 21s. The PCIe 4.0 x16 extensions I know of all use redrivers.
However, I’d be cautious of assuming MCIO is a larger factor than differences in characteristic impedance among the CPU package, motherboard, cable, and device end PCB. Or the vias needed to connect microstrips to a connector. Connectors themselves can be pretty transparent if well designed. Delock’s direct soldering might be worse.
I think it's worth noting that single-wire connections don't exist in a practical sense. Even if it's a BGA-to-BGA route, like a chipset PCIe lane to a soldered-down NIC, there's still the metallization on both dies. CPU lanes have the processor metallization, package, socket, motherboard, at least one PCIe connector, device traces, and the other die's metallization.
If keeping the eye open on a 16 GT/s (PCIe 4.0) or 32 GT/s (PCIe 5.0) differential pair running through all of that were easy, we wouldn't be seeing implementation challenges even in uncabled links.
That huge spike in the 900-1000 picosecond range is the PCIe cable connector. The article was talking about using different-impedance cables to help out with signal integrity; it turns out the PCIe “standard” of 85 Ω is often a bad choice… which is coincidentally what a lot of lower-quality passive cables use.
±10%'s a typical impedance tolerance (for example, it’s what Amphenol specs for their MCIO connectors). So, in addition to 93 Ω cases, it’s commonly necessary to design also for 77 Ω off an 85 Ω target. Krooswyk’s slides don’t appear to parameter sweep these tolerances or consider link equalization (maybe it’s in the audio, didn’t see a transcript).
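For a bit of intuition on what that ±10% does at a single impedance step, here's the textbook reflection-coefficient arithmetic (this only captures the reflection at one discontinuity, not the insertion-loss effect the article is actually about):

```python
import math

Z_REF = 85.0   # PCIe-spec differential impedance target (ohms)

def reflection(z_actual, z_ref=Z_REF):
    """Reflection coefficient and return loss (dB) for one impedance step."""
    gamma = abs(z_actual - z_ref) / (z_actual + z_ref)
    return gamma, -20 * math.log10(gamma)

for z in (93.5, 76.5):   # 85 ohm +/- 10%, i.e. the "93" and "77" ohm cases above
    gamma, rl = reflection(z)
    print(f"{z:5.1f} ohm vs {Z_REF:.0f} ohm: gamma = {gamma:.3f}, return loss = {rl:.1f} dB")
```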
PCIe needs to work across millions (billions?) of parts combinations. I wouldn’t get too caught up in optimization for a specific case.
I think the point of the article was mostly that you’ll have less insertion loss if you spec a higher impedance cable, leading to better signal quality.
Hinting that PCIe devices connected via cabling would be better served with SAS-spec impedance instead of PCIe-spec impedance cables.
I’m not an EE but I’m having a fun time reading this article which is much more comprehensive on the problems with PCIe 5.0:
Historically ATTO has been a Broadcom partner for everything they’ve done in recent memory.
I'm not sure why we don't see more manufacturers using Microchip PCIe switches; they all seem to use Broadcom, with the sole exception being C-Payne.
At least with that, I could imagine motherboards in two generations or so featuring such connectors (instead of M.2 slots, for example), and a larger volume of all the necessary parts would mean prices coming down and (hopefully) reliable retail sources for regular end customers, not just OEMs.
I read CopprLink as a green light to go all in on MCIO.
As far as I can tell, MCIO (SFF-TA-1016) is internal CopprLink. That they’re investigating its extension to PCIe 7.0 speeds means it’s going to stick around in the market. I guess that’s good for economy of scale? ¯\_(ツ)_/¯
SFF-TA-1016 specifies connectors. CopprLink is a cable spec. I don’t have the ability to get through PCI-SIG’s paywall but I think it’s pretty safe to assume CopprLink will be (back) compatible with existing MCIO connectors, at least for PCIe 5.0 signaling. However, SFF-TA-1016 does state “Additional connector SI requirements and any cable assembly SI requirements are application specific and are out of the scope of this specification.” So maybe some fine print will emerge.
Given the reliability issues around PCIe 4.0 cabling, 5.0 redrivers, and 12V-2x6 I’m skeptical of PCI-SIG’s ability to get CopprLink right on the first try. So my attitude’s more wait and see. Given what I can actually get for transfer rates from application software (as opposed to benchmarks), if CopprLink being supposed to do 5.0 x4 means it does 4.0 x4 reliably I think I’d be ok with that.
Yes, Teilan et al. offer a considerably broader perspective than that particular talk by Krooswyk (his interconnect guidelines are broader but IMO still feel narrow), though it appears Teilan et al. are writing mainly on the basis of one complex 5.0 project they’ve implemented.
I've gone back and forth between electrical and software for a few decades, including about five years on signal integrity, and would suggest the single most important thing to recognize is that link quality is the integral of everything going on over the past few flight times across the connection. So the design unit for the past 25-30+ years has usually been the transistors on one chip all the way through the link to the transistors on the other end.
Hello guys, this topic is really interesting; you've done a lot of research here, and I appreciate it.
My question is: why doesn't this guy, ETA PRIME, have any problems with the M.2 → OCuLink → GPU setup, even at PCIe 4.0 speeds? It looks like he buys random stuff from China and it just works.