A Neverending Story: PCIe 3.0/4.0/5.0 Bifurcation, Adapters, Switches, HBAs, Cables, NVMe Backplanes, Risers & Extensions - The Good, the Bad & the Ugly

@aBav.Normie-Pleb
When I called them up, Samsung Memory support asked for the serial number of one drive to get started, and then for the last character (whether it was a number or a letter) of the other drives' serial numbers.
I was then told that the SSDs were sold as Retail/Consumer drives (I don't currently recall which word it was).

I am re-emailing them since their responses have stalled.

Info I sent them…

SSD Info:
983 DCT
Model #: MZ-QLB960N
Part #: MZQLB960N960HAJR
All of the serial numbers end in a letter.

Firmware Revisions:
EDA5202QB
EDA5302QB

Edit:
Update 2023-11-09: Samsung Support is extremely squirrelly about helping when there isn’t a vendor in the middle. One option was to do an RMA for a firmware update… Who wants to go first? Not I.
Apparently, private servers with firmware are made available to those who work with a vendor.

1 Like

Since I still don’t trust them, I basically have an Anti-Shinto Shrine made of my Broadcom HBAs:

  • 1 x HBA 9500-16i
  • 1 x P411W-32P
  • 2 x HBA 9400-8i8e

Every time I see them I get a bit angry, and from time to time I open Broadcom’s download website for these HBAs - if there is something new, I’ll try it out when I have some time (I ACTUALLY WANT to use these pricey HBAs for real-world stuff!). For the past few years it has just been a spiral of increasing anger towards Broadcom; I seem to always get the least motivated support staff, who straight up tell “untruths” to get rid of me.

No, CDI (CrystalDiskInfo) cannot see any drives connected to the HBA 9500-16i with P28 firmware + drivers :frowning:

What sucks: The same happens with the HBA 9400-8i8e, and in the past it actually worked (years ago - I got my first one around 2018), so there is a feature regression.

Since you seem to have contact with someone at Broadcom who actually cares and might be able to improve something, could you please ask them to port the BSoD crash fixes from the HBA 9500-16i’s P28 firmware and drivers to the HBA 9400-8i8e?
The latter displays the exact same behavior as the HBA 9500-16i on pre-P28 firmware, so the causes might be pretty similar.

What would be great: Ask them if it’s possible to change the HBAs’ general behavior from that of pseudo-RAID adapters with JBOD disks (which alter the drives connected to them) to an actually transparent HBA - the way it used to be with earlier generations, which was the whole point of deliberately choosing an HBA model instead of a RAID adapter.

Even without the BSoD crash, it still sucks that you basically cannot update a drive’s firmware, since the drive manufacturer’s software won’t detect/recognize a drive handled by these “great, modern” HBAs.

For the first time I have an Icy Dock ToughArmor MB699VP-B V3 for testing; previously I only had the V1 (MiniSAS HD SFF-8643) and the V2 (OCuLink SFF-8612 like the V3, but without compatibility with Tri-Mode HBAs - which shouldn’t matter with a pure PCIe switch HBA…).

ANOTHER HUGE UPDATE, I could identify a new pattern with the Broadcom P411W-32P:

  • ONLY WHEN USING the V3 backplane, the P411W-32P doesn’t crash the system with the usual Broadcom S3-Suspend-to-RAM-causes-the-system-to-always-crash-when-trying-to-wake-it-up-from-sleep-again bug.

  • This shouldn’t matter, since this HBA is a pure PCIe switch model that cannot handle SATA or SAS drives like the Tri-Mode models can.

When does the usual Broadcom S3-Suspend-to-RAM-causes-the-system-to-always-crash-when-trying-to-wake-it-up-from-sleep-again bug appear?

  • When just seating the P411W-32P in a system, even without any cables attached to it.

  • When directly connecting SSDs to the P411W-32P with cables, without any backplane in-between.

  • When using an Icy Dock ToughArmor MB699VP-B V1 or V2 backplane.

SAD:

  • The old P14.2 firmware package is still the latest version that can detect any drives connected to the P411W-32P. I had hoped that the Icy Dock ToughArmor MB699VP-B V3 revision of the backplane, which finally introduced compatibility with Tri-Mode HBAs, had also changed the backplane’s behavior to be “UBM-compatible” - something Broadcom demands before the P411W-32P with firmware newer than the mentioned P14.2 will detect any SSDs… :frowning:

I think that hints at an unfixable hardware issue (the mentioned crashes without the V3 backplane) and would explain why Broadcom killed its originally advertised feature of directly attaching SSDs to the P411W-32P without any backplane.

TLDR:

What’s still broken, even with an Icy Dock ToughArmor MB699VP-B V3 backplane without the usual Broadcom S3-Suspend-to-RAM-causes-the-system-to-always-crash-when-trying-to-wake-it-up-from-sleep-again bug?

  • No SSDs detectable with firmware versions newer than P14.2

  • A driver version mismatch with the installed firmware will always cause a BSoD while booting Windows

4 Likes

@aBav.Normie-Pleb
What are the model #s, part #s, and the last three digits of the serial numbers for your Samsung SSDs?

Thanks for confirming. Though I wonder if the issue is on CDI’s side, as other software like HDD Sentinel apparently works fine.
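
One way to narrow that down is to cross-check with smartmontools, which has its own controller pass-through logic. A rough sketch (device paths are placeholders and will differ per system):

    smartctl --scan                  # lists detected devices plus the -d type smartctl would use
    smartctl -x /dev/nvme0 -d nvme   # full health/SMART readout for one NVMe drive

If smartctl sees the drives but CDI doesn’t, the problem is likely on CDI’s side; if neither sees them, it points at the HBA firmware/driver stack.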

I don’t have special contacts tbh, I just go through the official support channel. I’m exchanging emails with 1st/2nd level support - always the same guy for me - and he basically just forwards things back and forth between me and the 3rd level engineering teams.

It takes a lot of patience and time. Just reply to them, do the tests they ask you to do, and send as many log files as you can. They should take care of it.

Actually, now would be a great opportunity for you to reach out to them and confirm P28 solved things (it can only be positive if multiple users reach out to them acknowledging they are/were impacted by that S3 bug).

Especially these days, while they presumably still have that Windows test machine they used to reproduce this issue! Jump on that opportunity fast to ask for back-porting and for testing more with different cards.

You have far more knowledge than me on these cards, I’m just a noob on that question.

But regarding upgrading disk firmware, it’s weird: I can confirm that earlier this year, when I initially posted here (around May), I upgraded my Seagate Exos X20 20TB disks’ firmware without any issues while they were connected to the HBA.

It was with SeaChest Lite / SeaChest / SeaTools - one of these.
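
For anyone wanting to replicate this: with the openSeaChest builds, a firmware download looks roughly like the following (a sketch from memory - the exact tool name, device handle and firmware file name here are placeholders and vary by version and OS):

    openSeaChest_Firmware --scan                                # find the drive's device handle
    openSeaChest_Firmware -d /dev/sg2 --downloadFW ExosX20.bin  # flash the new firmware file

The point being that this worked even with the disks sitting behind the HBA.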

1 Like

I am just reporting that these are the longest and least janky working Gen5 cables I’ve ever had.

Plug and play on this ASRock Genoa board. fio shows 11 gigabytes/sec, even with filesystem overhead!!


… I… I… Can’t believe this is working this easy!
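
For context, numbers like that usually come from a large-block sequential read. A rough sketch of such a fio run through the filesystem (file path, size and job parameters are my assumptions, not the exact command used above):

    fio --name=seqread --filename=/mnt/nvme/fio.dat --size=16G \
        --rw=read --bs=1M --iodepth=32 --ioengine=libaio --direct=1 \
        --runtime=30 --time_based --numjobs=4 --group_reporting

With --direct=1 the page cache is bypassed, so the reported ~11 GB/s is the cable/drive path doing the work, not RAM.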

5 Likes

MCIO 8i to 2x SFF-8639? Any special brand? They seem rather long… which makes this even better.

Is it MCIO or the cable doing god’s work? I need to source some 4x MCIO 8i cables (hard to find in Europe) when I get my hands on a Siena board.

And what’s nice about this working is that PCIe 6.0 is expected to have roughly the same reach as PCIe 5.0, since it doubles the data rate via PAM4 signaling rather than doubling the signal frequency - so the cabling should be a drop-in replacement when PCIe 6.0 comes out. If it works for PCIe 5.0, it should also work for PCIe 6.0. :grimacing:

The question is, what is backing the signal at the ends of the cables? Is there a retimer/redriver sharpening the signal?

1 Like

Don’t be such a tease and mention the cables’ manufacturer :slight_smile:

2 Likes

I guess we’ll see an upcoming video about a PCIe5 NVMe server soon. 240 GB/s of bandwidth, killing CPU, memory and networking in the process :wink:
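
Back-of-the-envelope, assuming that figure means 24 Gen5 U.2/U.3 drives: a PCIe 5.0 x4 link carries roughly 15.75 GB/s raw, and current Gen5 enterprise drives sustain about 10-14 GB/s in sequential reads, so 24 × ~10 GB/s ≈ 240 GB/s - more than a 400 GbE link (~50 GB/s) could ship out.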

Checking CM7-R pricing, I decided that PCIe4 is totally fine for me. The data sheet seems to indicate that IOPS doubled as well, and 4K writes are +50%. So you get a lot more than just bandwidth out of these NVMe controllers. I’m pretty sure we cross the 30W barrier with these drives.

The lower picture looks like a cable plugging into an on-board MCIO connector. Not sure what that “adapter” on the SSD side is.

Do the Kioxia U.2 form factor SSDs require airflow through the SSD enclosure itself (narrower side facing the airflow direction), or is it enough if there’s airflow over the enclosure (larger side facing the airflow direction) for proper cooling?

I see there are holes in the narrow side of the Kioxia CM7 SSD enclosure.

@LiKenun is your combination of OCuLink re-driver and cable from Micro SATA Cables still going strong and stable? I’ve been desperately searching for a place with the re-driver in stock. I don’t need that long of a cable, but I almost certainly need OCuLink for the low-profile connector.

It still is. :slightly_smiling_face:

But I do recall that there were several factors which determined whether it would work stably:

  • The cable model
    • The cable length
  • The presence or absence of a re-driver
  • The M.2 or PCIe slot on the motherboard
  • Whether I turned on the motherboard re-driver in the BIOS for that slot

Micro SATA Cables only sells OCuLink cables in 50 cm or 100 cm lengths. But I’ve found that 25 cm works fine with LINKUP’s cable.

I’ve got a 13 cm OCuLink cable from Supermicro that I haven’t been able to test yet - it’s so short that it doesn’t even reach the enclosure sitting within spitting distance of the OCuLink port.

2 Likes

Interested in learning more about the Gen5 cable manufacturer myself.

I’ve typically been using LinkReal stuff across the board, but I’m interested in what else is out there.

I’m mostly stuck on Gen3 and Gen4 in my systems right now anyway, but this is interesting.

I’m thinking of converting one horizontal row of my Inter-Tech IPC4424 (basically the same as the 24-bay Norco) from its SAS backplane to U.2/U.3 flash storage. The case has 6 backplanes, each connecting to 4 drives, so partial conversion is as easy as just removing one backplane, which exposes the rear of the drives in the bays.

I’m drooling over some U.2/U.3 SSDs on eBay, but I am still a bit unsure what would be the most reliable way of connecting the drives. I understand there are cables, many of which work unreliably, but are there any clear-cut good choices? I wouldn’t need long cables; I imagine 50 cm would do easily. Since I am connecting up to 4 drives, I would need two cables.

Depends on your board. You can get PCIe carrier cards for U.2 drives (most reliable because no cables are involved), a PCIe card with OCuLink or MiniSAS connectors, or use your on-board ports if present.

Otherwise there are cables connecting from an M.2 slot to SFF-8639.

Delock has a variety of products for this. I buy my stuff mostly from Reichelt in Germany, as they have the full Delock range and most other related stuff.

https://www.delock.de/produkte/G_1767_U-2---U-3.html?setLanguage=en

edit: The new connector this generation is MCIO. So if you have a new server board, MCIO cables are what you need.

1 Like

Thanks for the help!

I actually found myself on that Delock site you linked, so I was heading in the right direction. My board has one SFF-8654 8i port which I could also utilize, but I am still going to need some additional cards/adapters/cables.

I actually just realized that my SAS2008 card and HP SAS Expander might be redundant now. Can’t I just replace them with stuff like this?

The SAS2008 and the HP expander have served me well, but they just seem a bit redundant now. The parts above alone could give me connectivity for up to four backplanes with way higher bandwidth. Again, it’s not that I actually need it; it’s more about using more modern connectivity and removing excess PCIe cards.

The SSD upgrade is somewhere in the future, once I start to see signs of failing hard drives, but the connectivity upgrade would actually be more worthwhile in the near future.

These are SAS backplanes, not NVMe. You can’t plug an NVMe-speaking cable into a SATA/SAS backplane. Not compatible. You need an NVMe backplane or a Tri-Mode backplane, and those are basically unobtainium atm. Which is why people use Icy Dock U.2 products for their stuff. Or you buy a 10k Supermicro server.

For me, both options are too expensive, so I stick to internal 2.5" mounting. Icy Dock is a bit too dense, and the small fans are a bit too loud for my taste anyway.

1 Like

Ah got it, thanks!

I’m sure I won’t have the option of an NVMe backplane with the current case, so that’s out of the window. In my situation I would still need a Broadcom 9500 Tri-Mode adapter to be the “translator” between CPU and storage. Those things cost a pretty penny and still seem to be buggy according to this thread. And since I need 6x SFF-8087 to connect all the backplanes, I would need more than one card… That sounds expensive.

So it will still be easier to just drop the backplane out of the way and use the caddies without it. That way I can connect straight to the drives and not worry about incompatibility issues. The only downside I can think of is that I won’t get the pretty blinking lights in the front of the case for those caddies :frowning:

PCIe storage talks directly to the CPU; that’s why these drives are so fast. You don’t need an intermediary. Simple M.2 drives don’t need one either, and they work just fine.
Tri-Mode HBAs just allow you to plug SATA, SAS and NVMe into the same controller. They don’t change the backplane.
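
You can see that directly on any Linux box (a rough illustration using pciutils; bus numbers and names will differ): each NVMe drive shows up as its own PCIe endpoint, while SATA/SAS disks are invisible to lspci and only exist behind the HBA’s single controller entry.

    lspci -tv                 # tree view: every NVMe SSD is its own leaf device
    lspci -nn | grep -i nvme  # NVMe controllers listed as first-class PCIe devices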

No. The backplane only speaks SATA/SAS, not NVMe, so it’s dead weight as far as NVMe is concerned.

Regarding Inter-Tech cases… the 4U-4410 has 5.25" options, so you can put up to 18x NVMe in those bays via Icy Dock, with blinking lights.

I’m personally going with the 4U-40255 to get more flexibility in what I put in those 5.25" bays. 9x 5.25" allows for a lot of stuff. And I like to mix HDDs with NVMe, so that’s perfect for me. The plan is to get a 5x HDD backplane for three of the 5.25" slots and put NVMe on internal mounts until NVMe backplanes get more affordable.

1 Like