Why are the 8i and 16i Tri-Mode HBAs all x8?

Am I missing something here? I want to hook up four U.2 drives, but even cards like the 9500-16i or 9600-16i only come in PCIe x8.

Why are there HBA cards that can only give the equivalent of x2 to each drive? It makes sense with HDDs because they’re so slow, but I want the full performance from my SSDs.
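Back-of-the-envelope numbers for why the x8 uplink bothers me (a rough sketch; the per-lane figure is PCIe Gen 4’s 16 GT/s with 128b/130b encoding, before protocol overhead):

```python
# Rough PCIe Gen 4 payload bandwidth per lane: 16 GT/s with
# 128b/130b encoding ~= 1.97 GB/s, before protocol overhead.
GBPS_PER_GEN4_LANE = 16 * 128 / 130 / 8  # ~1.97 GB/s

host_lanes = 8       # the card's PCIe x8 uplink
drives = 4           # four U.2 SSDs
lanes_per_drive = 4  # each drive wants a full x4 link

drive_side = drives * lanes_per_drive * GBPS_PER_GEN4_LANE
host_side = host_lanes * GBPS_PER_GEN4_LANE

print(f"drive side can source ~{drive_side:.1f} GB/s")          # ~31.5 GB/s
print(f"x8 uplink can carry   ~{host_side:.1f} GB/s")           # ~15.8 GB/s
print(f"per drive, all busy:  ~{host_side / drives:.1f} GB/s")  # ~3.9 GB/s
```

So with all four drives busy, each one effectively sees a Gen 4 x2 link.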

Because Broadcom sucks.

You’d need an HBA from a manufacturer for real men:


Huh. For some reason I thought Adaptec had been bought by Broadcom.

I’m a bit surprised that there are no bifurcation HBA versions. Seems like that would be cheaper and easier for NVMe.

x16 Gen 4.

  • That thing is a fully-fledged hardware RAID controller and not an HBA

  • Its design is strange: it uses 16 PCIe lanes to the host but also only 16 lanes out to the target drives

  • At the same time it claims to support up to 32 NVMe drives, i.e. less than one PCIe lane per drive (quick math in the sketch after this list)

  • I’m sceptical whether it actually supports directly attached NVMe SSDs without an additional UBM backplane in between: on the PCIe switch P411W-32P (it “was” the perfect product on paper) Broadcom removed the ability to attach SSDs directly with a firmware update; you now HAVE to use a UBM backplane for the SSDs. Meanwhile the old firmware, where directly attached SSDs still work, is haunted by system-crashing bugs.
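Putting numbers on that lane mismatch (plain arithmetic, nothing vendor-specific assumed):

```python
# Claimed topology: 16 lanes to the host, 16 lanes to the drives,
# yet "up to 32 NVMe drives". A drive needs at least one lane, so
# 16 downstream lanes can't link 32 drives without active hardware
# (a switch/expander on a UBM backplane) fanning the lanes out.
downstream_lanes = 16
claimed_drives = 32

print(downstream_lanes / claimed_drives)  # 0.5 lanes per drive on paper
print(claimed_drives - downstream_lanes)  # 16 lanes short of even x1 each
```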


Good point. I zoomed past the HBA part.

How about this one:

9600W-16e

I hadn’t heard about a firmware change that prevented direct connection of NVMe drives with the 9670. I had heard that they won’t support it. I should have the cable today; I can run some tests on my 3.2TB CM7-V to see if it works at all.

  • That test would be greatly appreciated! The HBA version has the same design choice of only 16 PCIe lanes in and 16 PCIe lanes out; my gut feeling is telling me something is up with that.

  • Bugs and Broadcom’s tech support gaslighting me have cost me many uselessly frustrating hours, so I’ve become pretty biased against them, but that’s not a good mindset to have.

  • Especially the way they f’d me over with the P411W-32P (removing support for directly attached SSDs, a feature it was advertised with) has left me disappointed; at least the other issues with the HBA 9500 were finally addressed in a firmware update last year.

  • If you have the opportunity, please leave your experiences in this larger blog-style thread; even if everything works perfectly for you, that would be helpful data, too. The thread has evolved over the years into a general hub for practical experiences with everything around PCIe adapters.


The drive wasn’t detected at all. I emailed support to see if they will comment on the 9670 being crippled with respect to direct-connected NVMe. I’m not ruling out a wonky cable, though. I’ll look over the thread, thanks! This whole NVMe Gen 5 thing is frustrating. I’ve bought pretty much every cable, card, and adapter I can find trying to make it work on my ASUS Threadripper board.


Has anyone gone through all of the LSA configuration options to make sure there isn’t an option that needs to be turned on for the Broadcom controller to enable direct attached NVMe drives?

The June 2024 version of the LSA user guide explicitly mentions:

“The Managing PCIe Storage Lane Speed feature allows you to change the lane speed between a controller and an expander or between the controller and a drive that is directly connected to the controller.”

That’s a good question. I went and re-checked, and the only lane speed I can modify is for the drives that were detected (SAS drives in this case). This is with one NVMe drive on one SlimSAS connector and 4 SAS drives on the other SlimSAS connector.

  • I’d find the removal of the ability to connect NVMe SSDs directly absurd, but Broadcom’s tech support confirmed that this was their decision when we (another user with a P411W-32P and I) were in contact with them regarding the various issues.

  • The P411W-32P having to use an additional UBM backplane is especially absurd, since it’s not a Tri-Mode HBA but a PCIe switch for NVMe SSDs only.

  • After watching Broadcom’s behavior for a while, I can see this being some middle manager’s decision to “streamline the user experience”: every current or new product will have to use an active UBM backplane, even if you don’t want to.


What were the issues with the 9500 that were addressed? Is it finally a good option for direct-attaching U.2 drives?

Can you elaborate on what you mean by this?

“legacy server compatibility”

Bifurcation and switching. Before the days of EPYC and its 128 PCIe lanes, systems had far fewer lanes to work with. MCIO connectors are the way to go for big NVMe arrays now.
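Roughly, the trade-off looks like this (a toy comparison with made-up numbers, not modeled on any specific product):

```python
# Bifurcation statically splits a slot's lanes: no extra silicon,
# full speed per drive, but the drive count is capped by the slot.
slot_lanes = 16
bifurcated_drives = slot_lanes // 4  # x4x4x4x4 -> 4 drives, full x4 each

# A switch (or a SAS expander in the HDD days) adds active silicon
# to fan out far more ports that share the uplink -- the sane choice
# back when a whole CPU only had ~40 PCIe lanes to give.
switch_downstream_lanes = 32
switched_drives = switch_downstream_lanes // 4  # 8 drives sharing the x16 uplink

print(bifurcated_drives, switched_drives)  # 4 vs 8
```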

As an aside, a proper hardware RAID card (don’t do this) only needed to pass the data in and out with the logic handled by the RAID controller.

Regarding my Broadcom issues (HBA 9400-8i8e, HBA 9500-16i, P411W-32P), have a look at my PCIe Adapter blog thread:

  • The firmware bug that caused total system crashes when trying to use S3 sleep/suspend-to-RAM was the one fixed with firmware version P28 on the HBA 9500. The HBA 9400 is SOL since Broadcom declared it EOL, even though I reported that bug before then; firmware P24 is the latest the HBA 9400 was allowed to get.

  • How do I know it was a firmware bug? Broadcom’s tech support confirmed it in contact with another user here; their changelogs of course never mention anything about it. In my support tickets Broadcom claimed they tested my claims in their “testing lab” but couldn’t find any issues.

  • Still messed up: connected U.2 SSDs don’t get their SMART data passed through to Windows, and the drives get virtualized as SAS drives; they don’t show up as native NVMe SSDs, and manufacturer firmware update tools can’t detect them.
  • One NVMe SSD needs at least one PCIe lane to work. Broadcom stating that controllers with only 16 lanes going to the SSDs support up to 32 NVMe SSDs means there has to be additional active hardware involved in generating the at least 32 required PCIe lanes, not just cables; my guess would be active UBM backplanes (a shitty move for DIY users).

  • Controller designs with 32 PCIe lanes to the NVMe SSDs don’t need that (examples: Broadcom P411W-32P, Adaptec HBA 1200-up-32i). But Broadcom killed the ability to connect SSDs directly to the P411W-32P with a firmware update.

  • Currently I use the Adaptec HBA 1200-up-32i with 8 x4 NVMe SSDs in two 4 x U.2/U.3 Icy Dock OcuLink V3 backplanes and am happy with it (quick uplink math in the sketch below); not a single crash has happened with the Adaptec controller.

  • Adaptec isn’t perfect either, but Broadcom’s firmware bugs causing complete system crashes disqualified them for me.
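For reference, the uplink arithmetic on my setup (same rough ~1.97 GB/s per Gen 4 lane figure as above, assuming Gen 4 links end to end; real-world numbers land a bit lower):

```python
# 8 U.2 drives, each on a full x4 Gen 4 link, behind an x16 Gen 4 uplink.
GBPS_PER_GEN4_LANE = 16 * 128 / 130 / 8  # ~1.97 GB/s per lane

uplink = 16 * GBPS_PER_GEN4_LANE         # ~31.5 GB/s, shared
per_drive_link = 4 * GBPS_PER_GEN4_LANE  # ~7.9 GB/s each on the drive side

# One or two busy drives get their full x4 speed; only when all
# eight saturate at once does the shared uplink cap each at ~x2-worth.
print(f"single drive ceiling: ~{per_drive_link:.1f} GB/s")  # ~7.9 GB/s
print(f"all 8 busy: ~{uplink / 8:.1f} GB/s per drive")      # ~3.9 GB/s
```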

What kind of performance are you seeing on that setup? How are the drives configured? It seems like you’d only get x2 lanes per drive if they were in an array.