A Neverending Story: PCIe 3.0/4.0/5.0 Bifurcation, Adapters, Switches, HBAs, Cables, NVMe Backplanes, Risers & Extensions - The Good, the Bad & the Ugly

I must admit I’m slightly new to this, but what’s a UMB backplane, and why can’t I find anything about it on Google? :smiley:

A summary of working HBA + cables + firmware would be good; I’ve either missed it in my skimming of the past 180 messages, or it’s been an evolving conclusion.

Thank you for all the work you’ve done to get this far!

I saw this on wccftech:

21 M.2 SSDs via a Gen4 x16 link.


Has anyone seen a PCIe Gen4 active Switch U.2/U.3 HBA where the chipset is NOT from Broadcom?

I’ve got a 3258p-32i/e; it’s supposed to do x1-, x2-, x4-, or x8-wide NVMe connections, and it connects to the host via a Gen4 x16 link.


I was looking for a pure PCIe switch AIC; I only have V1/V2 Icy Dock U.2 backplanes, which unfortunately don’t work with Tri-Mode HBAs :frowning:

Tom’s Hardware quoted the AIC as having “an average read and write access latency of 79us and 52us, respectively.” Definitely not something to put Optanes on.

How would that compare to a normal M.2 slot?

It would be great for throughput.

I have been thinking about this card with 21× Transcend 220S 2 TB drives. This SSD is cheap (PCIe 3.0) and has great durability at 4.4 PBW per SSD. That would be 92.4 PBW (92,400 TBW) in total, making it suitable to act as swap for virtual machines and for scripts that write a lot of temporary data. I could really use it as a hard drive this way. Total capacity would be 42 TB, but I would probably start with a few drives and expand as I need more storage; ZFS already allows a RAID0 stripe to be expanded. Two HDDs that are mostly spun down would act as backup, and it would be great for power efficiency too.

These SSDs are 119 euros each, making the 42 TB array cost 2,499 euros. That’s doable over time, particularly since I can start with just a bunch of them and add more later. Not sure how much the card would cost, though.
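
Quick sanity check on my own numbers, using the per-drive specs quoted above:

```python
drives = 21
capacity_tb = 2      # per drive
endurance_pbw = 4.4  # per drive, vendor endurance rating
price_eur = 119      # per drive

print(drives * capacity_tb)             # 42 TB raw capacity
print(round(drives * endurance_pbw, 1)) # 92.4 PBW across all drives combined
print(drives * price_eur)               # 2499 EUR for the SSDs alone
```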

It probably has some PLX chip or similar to account for the downstream bandwidth exceeding the x16 upstream bandwidth? It looks very interesting to me.

If you want Optane, you would probably use it directly in the motherboard’s M.2 slots. Is Optane even that interesting here anyway? One of them as a separate ZFS log device (SLOG) would be great.

This card with cheap SSDs instead would be awesome for ultra-fast bulk storage. 32 GB/s, 42 TB, high durability, and limited cost could make it really attractive for enthusiasts like me.

I don’t think durability works the way your calculation assumes. The total durability will be determined by the min(durability of a single device), not the sum(durability of each device), especially if you plan on configuring all 21 M.2s in RAID0 (otherwise the config will not reach the expected 42 TB of storage).

But what makes you think that in a RAID0 configuration any one drive’s total bytes written would differ significantly from the other RAID members’?

Actually, this would apply to striping with ZFS, since ZFS “load balances” the writes, so drives that are faster get more writes than drives that are slower. But with identical drives this shouldn’t cause too much of an issue even with ZFS.

Scratch that comment; I was thinking of durability as “expected time to failure.”
You are correct that it is possible to write more data (TBW) to an array of disks than to any single drive (although I doubt you’ll get anywhere close to the numbers cited).

However, if you are concerned about durability and intend to use this array of drives as a single logical device, you’ll need to add redundancy in some form (RAID1, RAID5, RAIDZ1, …), which will reduce the usable capacity to less than the total capacity of the array.
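
To put numbers on that, here is a rough illustration of how redundancy eats into the proposed 21 × 2 TB pool (it ignores metadata and padding overhead):

```python
n, size_tb = 21, 2
print("stripe:", n * size_tb, "TB")        # 42 TB, no redundancy
print("raidz1:", (n - 1) * size_tb, "TB")  # 40 TB, one drive's worth of parity
print("raidz2:", (n - 2) * size_tb, "TB")  # 38 TB, two drives' worth of parity
```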

Sure, but the MTBF is 2 million hours, i.e. 228 years (!). So what are the concerns about durability? Perhaps that the specs are wrong; that is possible.
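
For what it’s worth, here is the arithmetic, plus what the same spec implies for the whole stripe if drive failures are independent:

```python
mtbf_hours = 2_000_000
hours_per_year = 24 * 365.25

print(round(mtbf_hours / hours_per_year))       # ~228 years for one drive
# Expected time to the *first* failure in a 21-drive stripe is roughly MTBF/N:
print(round(mtbf_hours / 21 / hours_per_year))  # ~11 years for the array
```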

Redundancy is not always needed. I intend to use two high-capacity hard drives (18–20 TB) in a stripe to act as backup, plus perhaps some online/offsite backup that I sync monthly. With ZFS you can do incremental backups nicely and efficiently, also remotely.

RAID-Z1/2/3 does not do well with random IOPS. Unlike RAID5, where multiple drives can serve independent I/Os in parallel, RAID-Z1/2/3 involves every drive in each I/O, so the array’s random IOPS is the same as a single drive’s, which is not the case for traditional RAID5.
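
A rough model of that rule of thumb (the per-drive IOPS number is purely illustrative, not a benchmark):

```python
def pool_random_iops(vdevs: int, iops_per_drive: int) -> int:
    # Each RAID-Z vdev delivers roughly one drive's worth of random IOPS,
    # so a pool scales with the number of vdevs, not the number of drives.
    return vdevs * iops_per_drive

drive_iops = 400_000                     # illustrative 4K random IOPS per NVMe SSD
print(pool_random_iops(1, drive_iops))   # one 21-wide RAID-Z vdev: ~400k
print(pool_random_iops(21, drive_iops))  # 21-drive stripe: ~8.4M, in theory
```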

Also, expansion of RAID-Z1/2/3 is still a work in progress (see the RAIDZ Expansion feature) and might still take a year. Expansion of a stripe has been available since ZFS version 1, I believe, and is considered very safe, since it does not shuffle any data around. It also works instantly (<1 sec).
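
For reference, this is all a stripe expansion amounts to (pool and device names are made up):

```python
# Growing a striped pool is a single top-level vdev addition; existing data
# is not rewritten, which is why it completes instantly.
import subprocess
subprocess.run(["zpool", "add", "fastpool", "/dev/nvme21n1"], check=True)
```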

I value any feedback you and others have on my plan. But… it is pretty cool, isn’t it? For not that much money you get ultra-enterprise-grade performance, durability, and capacity. The only downside is that the SSDs lack PLP (power-loss protection), so they can corrupt themselves on an improper shutdown. But since I would run RAID0, I will protect the data another way, using automated backups: every morning the hard drives that act as backup spin up and receive an incremental snapshot via a ZFS script, and then spin down again.
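
A minimal sketch of that morning job (pool and dataset names are hypothetical, and the HDD spin-up/spin-down handling is left out):

```python
#!/usr/bin/env python3
"""Daily incremental ZFS backup sketch: snapshot the SSD pool, send only
the changed blocks to the HDD backup pool. Run from cron each morning."""
import subprocess
from datetime import date

SRC = "fastpool/data"     # striped SSD pool (hypothetical name)
DST = "backuppool/data"   # HDD backup pool (hypothetical name)

# 1. Take today's snapshot on the source pool.
snap = f"{SRC}@{date.today():%Y-%m-%d}"
subprocess.run(["zfs", "snapshot", snap], check=True)

# 2. Find the newest snapshot already present on the backup side.
names = subprocess.run(
    ["zfs", "list", "-H", "-t", "snapshot", "-o", "name", "-s", "creation", DST],
    check=True, capture_output=True, text=True,
).stdout.split()
base = names[-1].split("@")[1] if names else None

# 3. Send an incremental stream against the base snapshot (full send if none).
send_cmd = ["zfs", "send"] + (["-i", f"@{base}"] if base else []) + [snap]
sender = subprocess.Popen(send_cmd, stdout=subprocess.PIPE)
subprocess.run(["zfs", "recv", "-F", DST], stdin=sender.stdout, check=True)
sender.stdout.close()
sender.wait()
```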

Super low-power, low noise; this seems the holy grail of enthusiasts?!? :heart_eyes:

Broadcom P411W-32P Summary for Direct Attached Devices

This is the latest working firmware:
https://docs.broadcom.com/docs/P411W-32P_4_1_2_1_HBA_signed_P14.2.fw.zip
Firmware newer than v4.1.2.1 requires a UBM (Universal Backplane Management) backplane before it will detect any attached storage.

Be careful about drivers: it’s perfectly fine to let Windows track them down for you.
Edit: Windows 11 appears to be a different process. Caution is advised!

These cables work at PCIe 4.0 speeds:

| Manufacturer | MPN | Cable Description | Adapter Connector | Backplane Connector |
| --- | --- | --- | --- | --- |
| Broadcom | 05-60001-00 | x8 8654 to 2x4 8612, AltWiring, 1 m | One x8 SFF-8654 Slimline | Two x4 SFF-8612 OCuLink |
| Broadcom | 05-60002-00 | x8 8654 to 2x4 8643 (W), SMC, 1 m | One x8 SFF-8654 Slimline | Two x4 SFF-8643 Mini-SAS HD |
| Broadcom | 05-60004-00 | x8 8654 to 2x4 8654, 9402, 1 m | One x8 SFF-8654 Slimline | Two x4 SFF-8654 Slimline |
| Broadcom | 05-60005-00 | x8 8654 to 2x U.2 Direct, 1 m | One x8 SFF-8654 Slimline | Two U.2 SFF-8639 |
| Broadcom | 05-60006-00 | x8 8654 to 8x U.3 Direct, 1 m | One x8 SFF-8654 Slimline | Eight U.3 SFF-8639 |
| Broadcom | 05-60007-00 | x8 8654 to 1x8 8654, 9402, 1 m | One x8 SFF-8654 Slimline | One x8 SFF-8654 (SlimSAS) |
| HighPoint | TS8I-8639-060 | x8 8654 to 2x U.2 Direct, 60 cm | One x8 SFF-8654 Slimline | Two U.2 SFF-8639 |
| Micro SATA Cables | OCU-1708-GEN4 | x4 8611 OCuLink to x4 8611 OCuLink, 50 cm | One x4 SFF-8611 OCuLink | One x4 SFF-8611 OCuLink |

Adapters and backplanes that work at PCIe 4.0 speeds:
Icy Dock EZConvert M.2 to 2.5" U.2 SSD Adapter MB705M2P-B
Icy Dock 4x U.2 backplanes MB699VP-B and MB699VP-B V2

SSD-wise… just pick one.
Or 32 of them…

Edited to remind folks not to update past a certain firmware version.


Got this baby, which turned out to be exciting and disappointing at the same time: it works, but it blocks my second PCIe 5.0 slot. It was claimed to be PCIe 4.0 signal-compatible. I got them this short because I also expect them to work with PCIe 5.0 at that length. Unfortunately, I have yet to get my hands on a PCIe 5.0 U.2 drive to test it on.


Perhaps I should get a different bracket that holds the drive upright instead of on its side.


@LiKenun
Could I get a model number or a link to that cable please? I’d really appreciate it.

Hi, can you give me a little help with my P411W-32P?
I purchased a P411W-32P a few days ago, but it BSODs in my already-installed Win11 system, and when I try to reinstall Win11, the installation also fails with SYSTEM_THREAD_EXCEPTION_NOT_HANDLED.
However, it works fine under Win10 (even the latest Win10 22H2), and only firmware 4.1.2.1 works; 4.1.3.1 can’t find the drives. (Under Win11, the latest firmware boots into the system normally but can’t find the drives, while 4.1.2.1 BSODs.)
My CPU is a 13900K, my motherboard is an ASUS Z790 EXTREME, and the P411W-32P can only run at x8 because of the graphics card.
I can’t think of any better way around this. Can it only run under Win10?

In reference to the prior post, this is the M.2 to U.2 adapter: PCIe 4.0 x4 U2 Interface SFF-8639 To M.2 Key-M M2 Adapter Riser Card Ribbon


@LiKenun

Thank you kindly! I really appreciate it. :star_struck:

I guess this’ll be an experiment to try out Windows 11 with my card.

As far as I have seen in this thread, you’re the only one to have tried this with Windows 11.
The known good working configuration is Windows 10, Linux, etc., with firmware up to 4.1.2.1 and no higher.
Firmware newer than 4.1.2.1 requires a UBM backplane before it will detect any attached storage. There are posts about this throughout the thread.

Update: I successfully got the P411W-32P working on Win11.

  1. Install another Win10 system to update (or downgrade) the firmware to 4.1.2.1.
  2. Remove the P411W-32P from the system.
  3. Delete all mpi2ses.sys, itsas35.sys, and itsas35i.sys files on the Win11 system.
    (Because I was using a 9500-16i before, many folders had these files; on a freshly installed system you should only need to delete the files in the DriverStore.)
  4. Reinstall the P411W-32P. The system will show a PEX88xxx device with error code 35; at this point, install the P16 driver (I’m not sure whether a newer driver works properly) and reboot, and the card can be used.

This happens because the Win11 DriverStore contains the P25/P18 drivers for the 95xx HBA/RAID cards; the P411W-32P matches on those drivers and then blue-screens.
The backplane I use is an Icy Dock MB699VP-B V3, and it works normally.
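
If someone wants to script the DriverStore check from step 3, here is a hedged sketch using pnputil (published names differ per machine, so the actual removal is left as a comment):

```python
# List DriverStore packages and flag the 9500-series (itsas35/mpi2ses)
# entries that the P411W-32P would otherwise bind to.
import subprocess

output = subprocess.run(
    ["pnputil", "/enum-drivers"], capture_output=True, text=True
).stdout
for block in output.split("\n\n"):
    if "itsas35" in block.lower() or "mpi2ses" in block.lower():
        print(block)
        # Remove with: pnputil /delete-driver oemNN.inf /uninstall
```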


Now, please try system sleep (suspend-to-RAM, S3), wake the system up again, and access and change files on a connected SSD :slight_smile: