I currently have one U.2 PCIe adapter, but I would like to add another one to replace my mirrored SATA SSDs.
Unfortunately it seems that more and more enterprise NVMe flash is no longer available in the M.2 form factor but only as U.2/U.3.
They do make M.2 to U.2 adapters. The main reason for the U.2 shift is capacity: you just can't fit that much flash on an M.2 card, even with higher-layer-count cells.
The dock system I have used internally for my U.2 drives that works okay is the EverCool Dual 5.25 in. Drive Bay to Triple 3.5 in. HDD Cooling Box. In the one that's only holding two drives I just used the 2.5" cage; for the one with three installed I got 2.5"-to-3.5" adapters to fit all three.
Replaced the fan with a Noctua industrial model and called it a day.
Don't forget about performance. M.2 has power limitations, whereas U.2/U.3 is much more generous. That's why you see better specs on U.2/U.3 models and why you don't see M.2 drives drawing 20 W.
M.2 is cheaper because you don't need a cable, and that's why we use them in consumer hardware.
And there is just no space for M.2 on a server board, and you can't hot-plug M.2. It's mainly used for boot drives, to have something better than eMMC or an SD card.
Yeah, I use M.2 drives in my 2U server for various things, but those are tied to the above-mentioned Chinese 4x M.2 card. (I replaced the Samsung SSDs.)
Some server motherboards have PCIe hot-plug options built into their BIOSes; can you check that on your Supermicro model?
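If the box is already running Linux, a quick way to check is to look at what the kernel exposes: `lspci -vv` shows `HotPlug+` in the Slot Capabilities of hot-plug capable ports, and slots that are owned by a hot-plug driver show up under `/sys/bus/pci/slots/`. Rough sketch, nothing more than reading sysfs (assumes a Linux host):

```python
#!/usr/bin/env python3
"""Rough check for hot-plug capable PCIe slots (Linux, sysfs only)."""
from pathlib import Path

SLOTS = Path("/sys/bus/pci/slots")

if not SLOTS.is_dir():
    raise SystemExit("Kernel exposes no PCI slot objects here.")

for slot in sorted(SLOTS.iterdir()):
    addr_file = slot / "address"
    address = addr_file.read_text().strip() if addr_file.exists() else "?"
    # The 'power' attribute only exists when a hot-plug driver (e.g. pciehp) owns the slot.
    hotplug = "yes" if (slot / "power").exists() else "no"
    print(f"slot {slot.name:>6}  address {address:<12}  hot-plug: {hotplug}")
```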
If hot-plugging U.2/U.3 PCIe/NVMe SSDs is a critical feature, then you should go with dedicated HBAs with PCIe switch chipsets, even if your motherboard can supply enough x4 PCIe links to the SSDs directly from the CPU. The reason is that hot-plugging PCIe devices there is quite 'stressful' for the CPU, potentially leading to sudden stability issues or system crashes when swapping NVMe SSDs.
HBAs with PCIe switches sit between the motherboard's PCIe lanes and the SSDs, generate their own PCIe lanes, and are designed to handle hot-plug events.
There are two kinds of PCIe Switch HBAs:
A pure-blooded PCIe switch (these can only handle PCIe/NVMe SSDs), for example taking 16 PCIe lanes from the motherboard and creating 32 lanes for SSDs. These HBAs are completely transparent: SSDs show up the same way as they would if directly connected to the motherboard. Of course, these HBAs create a bottleneck if you exceed around 32 GB/s (with PCIe Gen4, for example) with more than four SSDs connected to the HBA; see the quick calculation after this list. On paper, the best model here is the Broadcom P411W-32P.
'Tri-Mode' HBAs: These also function as a PCIe switch, but they can additionally handle SAS and SATA drives. Annoying: current models like the Broadcom HBA 9500-16i don't just pass the drives through but add a layer of virtualization between the drives and the host, even though they aren't RAID adapters. Potential issues: many SMART monitoring tools and SSD manufacturer firmware update tools can't recognize the SSDs connected to such an HBA.
Extra issue: some backplanes can't handle 'intelligent' Tri-Mode HBAs, so be sure to check the compatibility documentation.
Note regarding U.3: 'U.3' backplanes connected to Tri-Mode HBAs can be used to house PCIe/NVMe as well as SAS or SATA/AHCI SSDs due to the universal SFF-8639 connector on the SSDs.
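To put a number on the "around 32 GB/s with more than four SSDs" point above: a Gen4 x16 uplink gives roughly 31.5 GB/s usable, which is about what four Gen4 x4 SSDs can already push. Quick back-of-the-envelope calculation (only accounts for 128b/130b encoding and ignores the rest of the protocol overhead):

```python
# Rough PCIe bandwidth: GT/s per lane * lanes * 128/130 encoding, bits -> bytes.
def pcie_bw_gbs(gt_per_lane: float, lanes: int) -> float:
    return gt_per_lane * lanes * (128 / 130) / 8

uplink  = pcie_bw_gbs(16.0, 16)  # Gen4 x16 uplink from the HBA to the host
per_ssd = pcie_bw_gbs(16.0, 4)   # one Gen4 x4 SSD
print(f"Gen4 x16 uplink: ~{uplink:.1f} GB/s")
print(f"Gen4 x4 SSD:     ~{per_ssd:.1f} GB/s")
print(f"SSDs needed to saturate the uplink: ~{uplink / per_ssd:.0f}")
```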
Sometime next week I should get an Icy Dock ToughArmor MB699VP-B V3 for testing purposes; previously I've only tested the V1 and V2. I also have Broadcom's P411W-32P and 9500-16i lying around, and those are a very long story…
…BUT with your intended use case I don't see obvious issues. I do have a very heavy disdain for Broadcom due to the poor quality of their customer support and how they handle bugs. If you want to raise your blood pressure, visit my PCIe adapter thread:
TL;DR: If you want to use these HBAs with Windows, for example for workstation purposes, Broadcom HBAs beginning with the 9400 models royally suck because of Broadcom's bad firmware and driver quality.
Next week I could possibly test specific use cases for you, but I only have a 5950X/128 GB ECC system for testing, no EPYCs.
I already suspected that those PCIe cards have some kind of internal switch chip to handle multiple NVMe drives in parallel.
In general, a separate, dedicated U.2/U.3 cage would be totally sufficient for me, as I have a separate cage for SATA/SAS drives which is connected to the server mainboard.
I haven't found anything regarding a PCIe hot-plug feature, but trial and error would be the way to go, I guess.
I would be interested in your story about the Icy Dock ToughArmor MB699VP-B V3 - maybe you could leave a quick note here once you have tested it?
Sorry to resurrect this thread, but I was curious about what you said about the 9500 series of HBAs adding their own virtualization so that drives aren't passed through properly. Does that mean they aren't safe for use with TrueNAS?
Which HBAs would you recommend as an alternative? I was looking at the 9500 because it seemed to properly support ASPM and low power states, unlike the older HBAs.
More on topic for this thread, do you have any suggestions on how to add U.2 / U.3 backplanes to an existing case, or do I need to buy one that already has one designed for it?
Yes, you understood my view of the Broadcom HBA 9500 correctly.
It seems that there aren't any 'native' high-performance HBA (= simple, not RAID-capable) chipsets anymore. It's the same complex RAID controller chipset that's just handling the connected drives in JBOD-only mode.
No one seems to care about that nowadays, unlike about 10 years ago, when you'd be taken behind the shed and put down if you considered this kind of controller for software-defined storage. I dislike this since it means that in 99 % of cases drive manufacturer firmware update tools have issues detecting drives handled by such a controller.
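The difference is visible in how the drives end up being presented to the OS: on a direct x4 link or behind a pure PCIe switch an NVMe SSD shows up as a native `nvme*` device, whereas behind a Tri-Mode HBA it is typically exposed as a SCSI disk (`sd*`), which is exactly what confuses vendor tools. A quick sketch that just lists whole-disk block devices and the subsystem they are exposed through (Linux, sysfs only):

```python
#!/usr/bin/env python3
"""List whole-disk block devices and the kernel subsystem they are exposed through."""
from pathlib import Path

for dev in sorted(Path("/sys/class/block").iterdir()):
    if (dev / "partition").exists():
        continue  # skip partitions, only look at whole disks
    subsystem = dev / "device" / "subsystem"
    kind = subsystem.resolve().name if subsystem.exists() else "?"
    model_file = dev / "device" / "model"
    model = model_file.read_text().strip() if model_file.exists() else ""
    print(f"{dev.name:<12} subsystem: {kind:<8} model: {model}")
```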
I also don't like this situation since these HBAs seem to be plagued by bugs. The HBA 9500 models are currently at firmware version 31.0, and up to version 28.0 they reliably crashed the system when you tried to use S3 sleep (a feature officially supported by Broadcom). It literally took Broadcom years to fix this bug, which had already been present in the HBA 9400 line that they designated EOL without fixing it, despite knowing about it for years.
Haven't had issues with the HBA 9500-16i with firmware 28 and newer.
For NVMe U.2 backplanes: you need a case with 5.25" bays to add backplanes like the Icy Dock ToughArmor MB699VP-B V3; be sure to get the V3 version if you ever want to use it with PCIe Gen4 or faster SSDs. Unfortunately, 5.25" bays have become increasingly rare as the general public's demand for optical disc drives has declined due to the proliferation of online streaming services.
That's a really tragic situation regarding the LSI HBAs; I guess Broadcom is relying on LSI's good reputation. I'd have to reconsider the 9500 then; it may be worth getting the older ones instead, even with the higher power consumption.
Does the extra layer of virtualization only affect NVMe drives, or SATA as well?
I was thinking of going for an EPYC-based Siena system; I wonder if PCIe passthrough works well enough that it could remove the need for an HBA.
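If I try the passthrough route, I guess the first thing to check is the IOMMU grouping, since a drive can only be passed through together with everything else in its group. Something like this should show it (Linux, assumes the IOMMU is enabled in BIOS and kernel):

```python
#!/usr/bin/env python3
"""Print IOMMU groups to see which PCIe devices must be passed through together."""
from pathlib import Path
import subprocess

groups = Path("/sys/kernel/iommu_groups")
if not groups.is_dir():
    raise SystemExit("No IOMMU groups found - IOMMU disabled or unsupported.")

for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    print(f"IOMMU group {group.name}:")
    for dev in sorted((group / "devices").iterdir()):
        # lspci -s <address> prints a one-line description of that device.
        desc = subprocess.run(["lspci", "-s", dev.name],
                              capture_output=True, text=True).stdout.strip()
        print(f"  {dev.name}  {desc}")
```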
Have you used the ASMedia ASM1166 for intensive operations like ZFS rebuilds and the like? I think a lot of TrueNAS folks don't recommend these because they're unsure whether it survives those sustained loads.
The ToughArmor MB699VP-B V3 and other Icy Dock products look insanely expensive for what they are; the lowest price I was able to find online was $338. At that price point, I wonder if it makes sense to go for a rackmount chassis with U.2 support, e.g. some sort of hybrid 4U chassis with support for 4-8 SATA drives and 4-8 U.2 bays.
Actually I haven't checked that yet, since I've only been testing the 9500 with NVMe SSDs or SAS expanders; I will check it out.
Regarding the ASM1166: as detailed in the review I linked above, I tested the controller chipset with 100 % load for a week without any pauses, and its PCIe Gen3 x2 interface was completely saturated, meaning there aren't any 'more intense' workloads these chipsets could be doing.
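For context on why the x2 uplink is the ceiling: PCIe Gen3 x2 is roughly 2 GB/s usable, while the controller's six SATA ports could in theory deliver around 3.3 GB/s, so the PCIe side saturates long before the SATA side does. Rough numbers (128b/130b encoding only; the 550 MB/s per port is an assumed figure for a typical SATA SSD):

```python
# Ceiling of an ASM1166-style setup: PCIe Gen3 x2 uplink vs. six SATA ports.
def pcie_bw_gbs(gt_per_lane: float, lanes: int) -> float:
    return gt_per_lane * lanes * (128 / 130) / 8  # 128b/130b encoding, bits -> bytes

uplink = pcie_bw_gbs(8.0, 2)   # Gen3 = 8 GT/s per lane, x2 link
sata   = 6 * 0.55              # six ports * ~550 MB/s per SATA SSD (assumption)
print(f"PCIe Gen3 x2 uplink:   ~{uplink:.2f} GB/s")
print(f"6x SATA (theoretical): ~{sata:.2f} GB/s")
```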