U.2/3 PCIe card

Docs = documentation, or docks = docking cages?

I currently have a single U.2 PCIe adapter, but I would like another one to replace my mirrored SATA SSDs.
Unfortunately, it seems that more and more enterprise NVMe flash is no longer available in the M.2 form factor but only as U.2/3.

Edited it already, but too slow. Docks.

They do make M.2 to U.2 adapters. The main reason for the shift to U.2 is capacity: you just can't fit that much flash on M.2, even with higher-layer-count cells.

The dock system I have used internally for my U.2 drives that works OK is the EverCool Dual 5.25 in. Drive Bay to Triple 3.5 in. HDD Cooling Box. In the one that only holds two drives I just used the 2.5" cage; for the one with three installed I got 2.5"-to-3.5" adapters to fit all three.

Replaced the fan with a Noctua industrial and called it a day.

An already outdated photo, but this is my lab rack.

The cage looks cool; unfortunately it seems it is only available in the US but not in Europe.

My server chassis is a previous version of this: IPC 4U-4129L - Inter-Tech Elektronik Handels GmbH

Mine does not have front fans, but it has 16x 2.5" bays for SATA/SAS drives on each side, split into 4x 2.5" per Icy Dock cage.

Two cages are connected via a mini-SAS cable to the onboard SAS controller, and the other two cages are connected to an HBA card.

Unfortunately I haven't found any cages besides Icy Dock that look reliable as a replacement here in Europe. Most of the other stuff is plastic from China.


My homelab/work thing.


Don't forget about performance. M.2 has power limitations, whereas U.2/3 is much more generous. That's why you see better specs on U.2/3 models and why we don't see M.2 drives drawing 20 W.

M.2 is cheaper because you don't need a cable, and that's why we use it in consumer hardware.

And there is just no space for M.2 on a server board, and you can't hot-plug M.2. It's mainly used for boot drives, to have something better than eMMC or an SD card.


Yeah, I use M.2 drives in my 2U server for different stuff, but those are tied to the above-mentioned Chinese 4x M.2 card (I replaced the Samsung SSDs).

Regarding your motherboard:

  1. Some server motherboards have PCIe hot-plug options built into their BIOSes; can you check that on your Supermicro model? (A quick way to see what the platform actually advertises is sketched after this list.)

  2. If hot-plugging U.2/U.3 PCIe/NVMe SSDs is a critical feature, then you should go with dedicated HBAs with PCIe switch chipsets, even if your motherboard can supply enough x4 PCIe links to the SSDs directly from the CPU. The reason is that hot-plugging PCIe devices on CPU-direct lanes is quite "stressful" for the CPU, potentially leading to sudden stability issues or system crashes when swapping NVMe SSDs.
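
Before committing to pure trial and error, you can at least check what the platform advertises. A minimal sketch (Linux only, parsing `lspci -vv` output, which usually needs root; the parsing and interpretation are my own assumptions, not a Supermicro-specific check):

```python
import subprocess

# Rough check (Linux): list PCIe ports whose Slot Capabilities advertise
# hot-plug. "HotPlug+" / "Surprise+" in the SltCap line only tell you what
# the firmware exposes; they are no guarantee that hot-plug works reliably.
def slots_with_hotplug():
    out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout
    hotplug, current = [], None
    for line in out.splitlines():
        if line and not line[0].isspace():        # device header, e.g. "c1:00.0 ..."
            current = line.split()[0]
        elif "SltCap:" in line and "HotPlug+" in line:
            hotplug.append((current, "Surprise+" in line))
    return hotplug

if __name__ == "__main__":
    for addr, surprise in slots_with_hotplug():
        print(f"{addr}: hot-plug capable, surprise removal: {'yes' if surprise else 'no'}")
```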

HBAs with PCIe switches sit between the motherboard's PCIe lanes and the SSDs, generate their own PCIe lanes, and are designed to handle hot-plug events.

There are two kinds of PCIe Switch HBAs:

  1. A pure-blooded PCIe switch (only able to handle PCIe/NVMe SSDs), for example taking 16 PCIe lanes from the motherboard and creating 32 lanes for SSDs. These HBAs are completely transparent; SSDs show up the same way as they would when directly connected to the motherboard. Of course, these HBAs create a bottleneck if you exceed around 32 GB/s (with PCIe Gen4, for example) with more than four SSDs connected to the HBA (see the rough calculation after this list). On paper, the best model here is the Broadcom P411W-32P.

  2. "Tri-Mode" HBAs: These also function as a PCIe switch, but they can additionally handle SAS and SATA drives. Annoying: Current models like the Broadcom HBA 9500-16i don't just pass the drives through but add a layer of virtualization between the drives and the host, even though they aren't RAID adapters. Potential issues: Many SMART monitoring tools or SSD manufacturer firmware update tools can't recognize the SSDs connected to such an HBA :frowning:
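
To put a number on the bottleneck mentioned in point 1, here is a back-of-the-envelope calculation (illustrative only; the per-lane figure is the approximate PCIe Gen4 line rate before protocol overhead, and the lane counts are just the example from above):

```python
# Approximate PCIe Gen4 numbers: 16 GT/s per lane with 128b/130b encoding.
gbps_per_lane = 16 * (128 / 130) / 8       # ~1.97 GB/s usable per lane

uplink_lanes = 16                          # HBA uplink to the motherboard
ssd_lanes = 4                              # per U.2/U.3 NVMe SSD
uplink_bw = uplink_lanes * gbps_per_lane   # ~31.5 GB/s total
ssd_bw = ssd_lanes * gbps_per_lane         # ~7.9 GB/s per SSD

for n_ssds in (4, 6, 8):
    demand = n_ssds * ssd_bw
    verdict = "bottlenecked at the uplink" if demand > uplink_bw else "fits"
    print(f"{n_ssds} SSDs: peak demand ~{demand:.1f} GB/s "
          f"vs uplink ~{uplink_bw:.1f} GB/s -> {verdict}")
```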

Extra issue: Some backplanes can't handle "intelligent" Tri-Mode HBAs, so be sure to check the compatibility documentation.

Note regarding U.3: "U.3" backplanes connected to Tri-Mode HBAs can house PCIe/NVMe as well as SAS or SATA/AHCI SSDs thanks to the universal SFF-8639 connector on the SSDs.

Sometime next week I should get an Icy Dock ToughArmor MB699VP-B V3 for testing purposes; previously I only tested the V1 and V2. I also have Broadcom's P411W-32P and 9500-16i lying around; these guys are a very long story...

...BUT with your intended use case I don't see obvious issues. I do have a very heavy disdain for Broadcom due to the poor quality of their customer support and how they handle bugs. If you want to raise your blood pressure, visit my PCIe adapter thread:

TL;DR: If you want to use these HBAs with Windows, for example for workstation purposes, Broadcom HBAs beginning with the 9400 models royally suck because of Broadcom's bad firmware and driver quality.

Next week I could possibly test specific use cases for you, but I only have a 5950X/128 GB ECC system for testing, no EPYCs.


Thanks for your really detailed reply!

I already suspected that those PCIe cards have some kind of internal switch chip to handle multiple NVMe drives in parallel.

In general, a separate, dedicated U.2/3 cage would be totally sufficient for me, as I have a separate cage for SATA/SAS drives which is connected to the server mainboard.
I haven't found anything regarding a PCIe hot-plug feature, but trial and error would be the way to go, I guess.

I would be interested in your story about the Icy Dock ToughArmor MB699VP-B V3; maybe you could leave a quick note here once you have tested it?

Sorry to resurrect this thread, but I was curious about what you said about the 9500 series of HBAs adding their own virtualization, so that drives aren't passed through properly. Does that mean they aren't safe for use with TrueNAS?

Which HBAs would you recommend as an alternative? I was looking at the 9500 because it seemed to properly support ASPM and low-power states, unlike the older HBAs.

More on topic for this thread, do you have any suggestions on how to add U.2 / U.3 backplanes to an existing case, or do I need to buy one that already has one designed for it?

Yes, you understood my view of the Broadcom HBA 9500 correctly.

  • It seems that there aren't any "native" high-performance HBA chipsets (simple, not RAID-capable) anymore. It's the same complex RAID controller chipset that just handles the connected drives in JBOD-only mode.

  • No one seems to care about that nowadays, unlike about 10 years ago, when you would have been taken behind the shed and put down for considering this kind of controller for software-defined storage. I dislike this because it means that in 99 % of cases drive manufacturer firmware update tools have issues detecting drives handled by such a controller (see the smartctl scan sketch after this list).

  • I also don't like this situation since these HBAs seem to be plagued by bugs. The HBA 9500 models are currently at firmware version 31.0, and until version 28.0 the card reliably crashed a system when you tried to use S3 sleep (a feature officially supported by Broadcom). It literally took Broadcom years to fix this bug, which had already been present in the HBA 9400 line they designated EOL without fixing it, despite knowing about it for years.

  • Haven't had issues with the HBA 9500-16i with firmware 28 and newer.

  • Not joking: For small-scale SATA-only stuff I actually prefer these simple controllers; contrary to Broadcom, they've never failed me once. Before selecting one, I always check that I'm able to get firmware updates for that specific model.

  • For NVMe U.2 backplanes: You need a case with 5.25" bays to add backplanes like the Icy Dock ToughArmor MB699VP-B V3; be sure to get the V3 version if you ever want to use it with PCIe Gen4 or faster SSDs. Unfortunately, 5.25" bays have become increasingly rare as demand for optical disc drives has declined due to the proliferation of online streaming services.
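
Regarding the firmware/SMART tooling point above: a quick way to see whether your drives are even visible to standard tools behind such a controller is to enumerate what smartmontools can find. A minimal sketch (Python wrapping `smartctl --scan` and `smartctl -i`; the device-type handling is simplified and interpreting the output is up to you):

```python
import subprocess

# Enumerate the devices smartctl can see and print their identity blocks.
# Drives hidden behind a Tri-Mode HBA's virtualization layer tend to show up
# with a generic identity, need a special "-d" type, or not show up at all.
def scan_devices():
    out = subprocess.run(["smartctl", "--scan"], capture_output=True, text=True).stdout
    devices = []
    for line in out.splitlines():
        parts = line.split()               # e.g. "/dev/sda -d scsi # ..."
        if parts and parts[0].startswith("/dev/"):
            dtype = parts[2] if len(parts) > 2 and parts[1] == "-d" else "auto"
            devices.append((parts[0], dtype))
    return devices

if __name__ == "__main__":
    for dev, dtype in scan_devices():
        info = subprocess.run(["smartctl", "-i", "-d", dtype, dev],
                              capture_output=True, text=True).stdout
        print(f"=== {dev} (-d {dtype}) ===\n{info}")
```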


That's a really tragic situation regarding the LSI HBAs; I guess Broadcom is relying on LSI's good reputation. I'd have to reconsider the 9500 then; it may be worth getting the older ones instead, even with the higher power consumption.

Does the extra layer of virtualization only affect NVMe drives, or SATA as well?

I was thinking of going for an EPYC-based Siena system; I wonder if PCIe passthrough works well enough that it could remove the need for an HBA.

Have you used the ASMedia ASM1166 for intensive operations like ZFS rebuilds and the like? I think a lot of TrueNAS folks don't recommend these because they're unsure whether it survives those sustained loads.

The ToughArmor MB699VP-B V3 and other Icy Dock products look insanely expensive for what they are; the lowest price I was able to find for it online was $338. At that price point, I wonder if it makes sense to go for a rackmount chassis with U.2 support, e.g. some sort of hybrid 4U chassis with support for 4-8 SATA drives and 4-8 U.2.

Actually, I haven't checked that yet, since I've only been testing the 9500 with NVMe SSDs or SAS expanders; I will check it out.

Regarding the ASM1166: As detailed in the review I linked above, I tested the controller chipset with 100 % load over a week without any pauses; its PCIe Gen3 x2 interface was completely saturated, meaning there aren't any "more intense" workloads these chipsets could be doing.
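
For reference, the kind of sustained load I mean can be generated with something as simple as the following sketch (the target path and sizes are placeholders, and this is just an illustration of a continuous sequential-write loop, not the exact methodology from the review):

```python
import os, time

# Continuously rewrite a large file on a filesystem that sits on drives
# behind the controller under test, reporting cumulative throughput.
TARGET = "/mnt/asm1166-test/loadfile"   # hypothetical mount point
CHUNK = b"\xAA" * (8 * 1024 * 1024)     # 8 MiB writes
FILE_SIZE = 32 * 1024**3                # rewrite a 32 GiB file per pass

def write_pass():
    written = 0
    with open(TARGET, "wb") as f:
        while written < FILE_SIZE:
            f.write(CHUNK)
            written += len(CHUNK)
        f.flush()
        os.fsync(f.fileno())            # make sure data actually hits the drives
    return written

if __name__ == "__main__":
    start, total = time.time(), 0
    while True:                          # run until interrupted
        total += write_pass()
        elapsed = time.time() - start
        print(f"{total / 1024**3:.0f} GiB written, "
              f"avg {total / 1024**2 / elapsed:.0f} MiB/s")
```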
