LSI 9500-16i HBA IT-Mode/Equivalent Questions

I’m building a new server with an LSI 9500-16i HBA card. I previously used a pair of LSI 9207-8i cards that had to be flashed to IT mode, but my understanding is that the 9500 cards work only in HBA mode, and don’t have hardware RAID.

Still, I’m a bit confused. Looking at storcli64, the card is listed with the following properties:

  1. Secure boot enabled (hard secure)
  2. Enclosure: Virtual SES
  3. Support JBOD = NO, but Capabilities -> Enable JBOD = YES
  4. Each drive shows status as JBOD.
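
For anyone who wants to reproduce the above, these properties come from storcli64’s controller summary and drive listing; a minimal sketch, assuming the 9500 enumerates as controller 0:

# storcli64 /c0 show all | grep -i jbod
# storcli64 /c0/eall/sall show

The first command surfaces both the “Support JBOD” capability and the “Enable JBOD” setting; the second prints the per-drive state column where each drive shows as JBOD.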

That last one confuses me quite a bit. The card doesn’t support JBOD, but JBOD is enabled, and each drive identifies itself as being in JBOD mode? I don’t know what it’s trying to tell me there (unless storcli64 or the firmware just doesn’t know how to communicate its actual status?).

Why I care about this.
Even though the drives show up properly via lsblk, and I can see them and their serial numbers in smartctl, in Proxmox the drives are listed by WWN/SAS address, which I’ve never seen before.

lsblk lists the SAS Address as the logical unit ID.

Is this a problem? Or is this how these cards are supposed to behave? I’m very confused at this point, and want to make sure I don’t have issues when I pass the controller through to TrueNAS.
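
For reference, the passthrough I have in mind is the standard Proxmox PCI passthrough of the whole HBA; a rough sketch (the PCI address and the VM ID 100 are placeholders for my actual values):

# lspci | grep -i lsi
# qm set 100 --hostpci0 0000:01:00.0

The first command finds the HBA’s PCI address; qm set then attaches the whole device to the VM, so TrueNAS talks to the controller directly.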

I’d appreciate any advice. Thanks!

I also have a few Broadcom HBAs, including a 9500-16i, and this is presently considered “normal”, just another example of the enshittification you see in almost all parts of society.

  • LSI, with its original customer support and product quality, no longer exists. It’s Broadcom now.

  • HBAs beginning with the 9400 model line, somewhere around 2018, became “Tri-Mode” designs, meaning they can talk SAS, SATA, and even NVMe.

  • It seems these controller chipsets can no longer just pass a connected drive through natively to an operating system. They introduce an abstraction layer where performance is lost, and it can also break compatibility with standard SMART monitoring and SSD manufacturers’ firmware update tools. The default situation now is that you get a JBOD device, similar to the old situation where you wanted to use a single drive connected to a fully-fledged hardware RAID controller. (A quick way to check what the OS actually sees is sketched after this list.)

  • Please contact Broadcom’s support and offer the constructive criticism that you would like an HBA that acts like a “real HBA” and not just like a hardware RAID controller with drives in JBOD mode.
    (I already did that years ago but a single voice doesn’t help much)
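
A quick way to run that check is to see whether the OS still enumerates the drives as native SAS end devices; a rough sketch, assuming lsscsi and sg3_utils are installed and /dev/sdX stands in for one of the drives:

# lsscsi -t
# sg_inq /dev/sdX

With a well-behaved HBA, lsscsi’s transport column should show a sas:0x… address per drive, and sg_inq’s INQUIRY response should report the physical drive’s own vendor and model rather than a controller-invented device.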

These developments are the primary reason I looked into “cheap” ASMedia and, before that, JMB SATA HBA chipsets.

TrueNAS did not like the ASMedia SATA controller I tried to use with it. Hopefully, I don’t end up in a situation where I need to deal with that again.

The only oddity I’m seeing is that Proxmox chooses to identify them by their SAS addresses … which doesn’t really bother me if TrueNAS doesn’t care about it once I pass the drive through to the TrueNAS VM.
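
As a sanity check, those WWN-style names appear to be the same identifiers the kernel exposes under /dev/disk/by-id, so there’s a quick cross-check (the device name here is a placeholder):

# ls -l /dev/disk/by-id/ | grep sdX

For a SAS drive, I’d expect wwn-0x5000… and scsi-3… symlinks pointing at the same block device the other tools see.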

smartctl seems to be able to read drives on the HBA, including seeing individual serial numbers. :)

# smartctl -a /dev/sdf
smartctl 7.3 2022-02-28 r5338 [x86_64-linux-6.5.13-1-pve] (local build)
Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Vendor:               HGST
Product:              HUSMM1616ASS200
Revision:             A204
Compliance:           SPC-4
User Capacity:        1,600,321,314,816 bytes [1.60 TB]
Logical block size:   512 bytes
Physical block size:  4096 bytes
LU is resource provisioned, LBPRZ=1
Rotation Rate:        Solid State Device
Form Factor:          2.5 inches
Logical Unit id:      0x5000cca050b00f74
Serial number:        0SY3UMSA
Device type:          disk
Transport protocol:   SAS (SPL-4)
Local Time is:        Thu Mar 21 18:28:23 2024 CDT
SMART support is:     Available - device has SMART capability.
SMART support is:     Enabled
Temperature Warning:  Enabled

=== START OF READ SMART DATA SECTION ===
SMART Health Status: OK

Percentage used endurance indicator: 8%
Current Drive Temperature:     29 C
Drive Trip Temperature:        70 C

Accumulated power on time, hours:minutes 41424:54
Manufactured in week 21 of year 2016
Specified cycle count over device lifetime:  0
Accumulated start-stop cycles:  0
Specified load-unload count over device lifetime:  0
Accumulated load-unload cycles:  0
Elements in grown defect list: 0

Vendor (Seagate Cache) information
  Blocks sent to initiator = 195740704805224448

Error counter log:
           Errors Corrected by           Total   Correction     Gigabytes    Total
               ECC          rereads/    errors   algorithm      processed    uncorrected
           fast | delayed   rewrites  corrected  invocations   [10^9 bytes]  errors
read:          0        0         0         0          0    1770574.024           0
write:         0        0         0         0          0    2977427.984           0

Non-medium error count:        0

No Self-tests have been logged
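
Since that last line says no self-tests have been logged, I may as well start one; a minimal sketch using smartctl’s standard self-test flags:

# smartctl -t short /dev/sdf
# smartctl -l selftest /dev/sdf

The first command kicks off a background short self-test; the second reads back the result once it finishes.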

Luckily, hdparm can see the drives well enough to run a speed test, and I can’t really complain about the performance there for home use.

# hdparm -t /dev/sdf

/dev/sdf:
SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 18 00 00 00 00 20 00 00 c0 00 00 00 00 f8 21 00 00 00 00 00 00 00 00 00 00
 Timing buffered disk reads: 2910 MB in  3.00 seconds = 969.88 MB/sec
root@andromeda1:~# hdparm -t /dev/sdf

/dev/sdf:
SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 18 00 00 00 00 20 00 00 c0 00 00 00 00 f8 21 00 00 00 00 00 00 00 00 00 00
 Timing buffered disk reads: 2910 MB in  3.00 seconds = 969.57 MB/sec
root@andromeda1:~# hdparm -t /dev/sdf

/dev/sdf:
SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 18 00 00 00 00 20 00 00 c0 00 00 00 00 f8 21 00 00 00 00 00 00 00 00 00 00
 Timing buffered disk reads: 2916 MB in  3.00 seconds = 971.82 MB/sec
root@andromeda1:~# hdparm -t --direct /dev/sdf

/dev/sdf:
SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 18 00 00 00 00 20 00 00 c0 00 00 00 00 f8 21 00 00 00 00 00 00 00 00 00 00
 Timing O_DIRECT disk reads: 2568 MB in  3.00 seconds = 855.90 MB/sec
root@andromeda1:~# hdparm -t --direct /dev/sdf

/dev/sdf:
SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 18 00 00 00 00 20 00 00 c0 00 00 00 00 f8 21 00 00 00 00 00 00 00 00 00 00
 Timing O_DIRECT disk reads: 2572 MB in  3.00 seconds = 856.84 MB/sec
root@andromeda1:~# hdparm -t --direct /dev/sdf

/dev/sdf:
SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 18 00 00 00 00 20 00 00 c0 00 00 00 00 f8 21 00 00 00 00 00 00 00 00 00 00
 Timing O_DIRECT disk reads: 2538 MB in  3.00 seconds = 845.76 MB/sec

I’m hoping hdparm just doesn’t know how to talk to SAS drives correctly, and there’s not something more sinister going on with those SG_IO errors.
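
For what it’s worth, that sense buffer can be decoded with sg3_utils, and by my reading it is sense key 5 (Illegal Request) with ASC/ASCQ 20h/00h (Invalid Command Operation Code), i.e., the drive rejecting the ATA-only commands hdparm issues; a sketch, assuming sg_decode_sense is installed:

# sg_decode_sense 70 00 05 00 00 00 00 18 00 00 00 00 20 00 00 c0 00 00 00 00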

But you are using native SAS SSDs, not SATA, right (HUSMM1616ASS200)?

hdparm, at least years ago, was only meant for (S)ATA drives, not SAS.
The user-friendly-ish equivalent (that I know of) is smartmontools’ smartctl, though sg3_utils will let you do anything and everything (others more current with SAS drive management should have better suggestions).
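
As a concrete starting point, a few sg3_utils one-liners that speak SCSI natively (device name is a placeholder):

# sg_inq /dev/sdX
# sg_readcap /dev/sdX
# sg_logs -a /dev/sdX

That’s INQUIRY (vendor/product/revision), READ CAPACITY (block count and size), and a dump of all supported log pages, including the error counters smartctl summarizes.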

JBOD mode, even on LSI RAID (IR mode) controllers, has generally worked fine and will work for you. Modern SMART utilities have no problem passing through the controller.

All that said, I agree completely with @aBav.Normie-Pleb on the, ahem, complexification of “IT mode”. I see things like switching “personalities” with storcli/GUI, but who knows if that’s even possible with pure HBAs. I have a 9500-8i on the way to experiment myself.

Yes. They’re 12 Gbps SAS-3 SSDs.

> hdparm, at least years ago, was only meant for (S)ATA drives, not SAS.
> The user-friendly-ish equivalent (that I know of) is smartmontools’ smartctl, though sg3_utils will let you do anything and everything (others more current with SAS drive management should have better suggestions).

Yeah, I just used hdparm because it was installed and I know how to make it go. I don’t consider its reported numbers an authoritative indicator of how the drive(s) will actually perform in a ZFS pool; I just wanted to make sure it wasn’t giving me garbage read numbers.

As it is, one of those drives can nearly saturate a 10 Gbps NIC on read (10 Gbps is roughly 1.2 GB/s of payload, against the ~970 MB/s hdparm measured), and I have 16 of them. Even if the HBA introduces a 20 percent efficiency hit, my bottleneck will always be the 2x10 Gbps LACP’d NICs on the server. I’ll probably explore whether the disks are bottlenecked just so I know, but I’m not going to worry about it unless something goes badly enough wrong that the disks themselves become a bottleneck.
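
When I do check for a disk bottleneck, I’ll probably reach for fio rather than hdparm, since it issues plain block I/O instead of ATA commands; a rough sketch of a sequential read test (read-only as written; the device name is a placeholder):

# fio --name=seqread --filename=/dev/sdX --rw=read --bs=1M \
      --ioengine=libaio --direct=1 --iodepth=16 --runtime=30 --time_based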

The quoted error, from my limited research so far, is triggered by smartd, the smartmontools daemon, doing an automatic health check to feed the drive’s health status to Proxmox/TrueNAS/whatever. The HGST firmware handles it fine, but the HP firmware sends data back to smartd that translates into “success, but something weird happened,” which I’m interpreting as the HP firmware returning “something weird” that would make sense to an actual HP server/backplane.

So, this is probably fine, but I don’t like it: I’d rather my logs not be filled with meaningless warnings, so that actual warnings and errors are easier to see, especially when those meaningless warnings are elevated to the point that they spit out on my console and in dmesg.
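
If the noise really is smartd probing with the wrong protocol, pinning the device type in smartd.conf might quiet it; a speculative sketch (the directives are standard smartd.conf, but whether this silences these particular messages is my assumption):

# /etc/smartd.conf: force SCSI mode; monitor health plus error/self-test logs
/dev/sdf -d scsi -H -l error -l selftest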

> JBOD mode, even on LSI RAID (IR mode) controllers, has generally worked fine and will work for you. Modern SMART utilities have no problem passing through the controller.

I just upgraded the firmware/EFI/BIOS/SOC on the LSI 9500-16i (these are four separate things, though of course you’re only using EFI or BIOS, never both). Part of the new firmware (and the updated storcli64) was cleaning up some remnants of IR-mode features, which aren’t supported by the 9500. It’s an IT-mode-only card with no hardware RAID at all, and when you ask storcli for the list of supported commands, you’re asking for the list of IT commands. So, for better or worse, Broadcom considers this a card that only supports what they consider IT mode.
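
For anyone doing the same update, storcli can flash the image directly; a rough sketch, assuming controller 0 and with the package filename as a placeholder for whatever Broadcom ships:

# storcli64 /c0 show
# storcli64 /c0 download file=HBA9500_fw.bin

The first command records the current firmware/BIOS/EFI versions before you touch anything; the download command flashes the new image, and a reboot picks it up.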

> All that said, I agree completely with @aBav.Normie-Pleb on the, ahem, complexification of “IT mode”. I see things like switching “personalities” with storcli/GUI, but who knows if that’s even possible with pure HBAs. I have a 9500-8i on the way to experiment myself.

It’s definitely simpler on, e.g., an LSI 9207-8i. The 9500-16i is an actual PCIe 4.0 x8 card with enough bandwidth to support all my SSDs at full throttle (roughly 16 GB/s of PCIe bandwidth against 16 drives at about 1 GB/s each), which is the main reason I wanted it. I could also use it with NVMe drives later. As a secondary benefit, it runs cooler and uses less power than the previous generation.

The 9500 does not support personalities (show personality fails as an unsupported command) and only supports the IT-mode command set. I’ll be interested to see what someone more experienced makes of it. :)
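
For comparison when your 9500-8i arrives, this is the command that fails for me; a minimal sketch, assuming controller 0:

# storcli64 /c0 show personality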

I suspect the “JBOD not supported”/“JBOD enabled” contradiction might be a bug, as they are still working on both the firmware and storcli to get them to correctly represent what the card is doing. The last software update was in December, I think.

Please report back with your experiences. I’m willing to deal with quirky behavior if the card is actually stable and does what it’s supposed to do well enough for a homelab/home office NAS.
