Yes. They’re 12 Gbps SAS3 SSDs.
hdparm, at least years ago, was only meant for (S)ATA drives, not SAS.
The user-friendly-ish equivalent (that I know of) is smartmontools’ smartctl, though sg3_utils will let you do anything and everything (others more current with SAS drive management should have better suggestions).
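If it helps, a couple of starting points (device name is just an example; on SAS you can usually address /dev/sdX or the /dev/sgN passthrough device):

  # SMART/health summary for a SAS drive (smartmontools)
  smartctl -a /dev/sdb
  # INQUIRY data: vendor, model, firmware revision (sg3_utils)
  sg_inq /dev/sdb
  # Dump the drive's SCSI log pages, including error counters
  sg_logs -a /dev/sdb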
Yeah, I just used hdparm because it was installed and I know how to make it go. I don’t consider its reported numbers an authoritative indicator of how the drive(s) will actually perform in a ZFS pool; I just wanted to make sure it wasn’t giving me garbage read numbers.
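For the record, the quick-and-dirty check in question is just hdparm’s built-in read timing (device name is an example):

  # Cached vs. buffered sequential-read timing; a sanity check, not a benchmark
  hdparm -tT /dev/sdb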
As it is, one of those drives could easily saturate a 10 Gbps NIC on read, and I have 16 of them. Even if the HBA introduces a 20 percent efficiency hit, my bottleneck will always be the 2x10 Gbps LACP’d NICs on the server. I’ll probably explore whether the disks are bottlenecked just so I know, but I’m not going to worry about it unless something goes wrong badly enough that the disks themselves become the bottleneck.
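Back-of-envelope, with round numbers: a 10 GbE port tops out around 1.25 GB/s, so 2x10 GbE LACP’d is roughly 2.5 GB/s, while a single 12 Gbps SAS3 link is good for about 1.2 GB/s after encoding overhead. Sixteen SSDs each doing ~1 GB/s sequential reads is several times more than the network can take.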
The quoted error, from my limited research so far, is triggered by smartd, the smartmontools daemon, doing an automatic health check to feed the drive’s health status to Proxmox/TrueNAS/whatever. The HGST firmware handles it fine, but the HP firmware sends data back to smartd that translates into “success, but something weird happened.” My interpretation is that the HP firmware is returning a “something weird” that would make sense to an actual HP server/backplane.
So, this is probably fine, but I don’t like it: I’d rather my logs not be filled with meaningless warnings, so that actual warnings and errors are easier to see, especially when those meaningless ones are elevated to the point that they spit out on my console and in dmesg.
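If you want to reproduce by hand roughly what smartd is polling, something like this (device name is an example) shows the health self-assessment and the SCSI error counter logs:

  # Overall health self-assessment; this is the status smartd periodically reads
  smartctl -H /dev/sdb
  # SCSI error counter log pages, to check whether anything is actually accumulating
  smartctl -l error /dev/sdb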
JBOD mode, even on LSI RAID (IR-mode) controllers, has generally worked fine and will work for you. Modern SMART utilities have no problem passing through the controller.
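For example (device names and the megaraid slot number are placeholders): behind an IT-mode HBA the disks show up as plain SCSI devices, while on IR-mode cards smartctl can address the physical disk behind the controller:

  # IT mode / plain HBA: the disk is a normal SCSI device
  smartctl -a /dev/sdc
  # IR mode (megaraid): address physical disk N behind the RAID controller
  smartctl -a -d megaraid,4 /dev/sda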
I just upgraded the firmware/EFI/BIOS/SOC (these are four separate things, though of course you’re only using EFI or BIOS, never both) on the LSI 9500-16i. Part of the new firmware (and updated storcli64) was cleaning up some of the remnants of IR-mode features, which aren’t supported by the 9500. It’s an IT-mode-only card with no hardware RAID at all, and when you ask storcli for the list of supported commands, you’re asking for the list of IT commands, so for better or worse, Broadcom considers this a card that only supports what they consider IT mode.
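For anyone following along at home, the relevant queries look something like this (controller number is an example):

  # List the controllers storcli can see
  storcli64 show
  # Controller 0: model, firmware package, and supported features
  storcli64 /c0 show all
  # On cards that support personalities this reports the current one;
  # on the 9500 it fails as an unsupported command
  storcli64 /c0 show personality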
All that said, I agree completely with @aBav.Normie-Pleb on the, ahem, complexification of “IT mode”. I see things like switching “personalities” with storcli/GUI, but who knows if that’s even possible with pure HBAs. I have a 9500-8i on the way to experiment myself.
It’s definitely simpler on, e.g., an LSI 9207-8i. The 9500-16i is an actual PCIe 4.0 x8 card with enough bandwidth to support all my SSDs at full throttle, which is the main reason I wanted it; I could also use it with NVMe drives later. As a secondary benefit, it runs cooler and uses less power than the previous generation.
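Rough numbers, if my math is right: PCIe 4.0 x8 is about 16 GB/s of host bandwidth versus about 8 GB/s for a PCIe 3.0 x8 card like the 9207, so sixteen SAS3 SSDs at roughly 1 GB/s sequential each land right around what the 9500’s host link can carry, and about double what the older generation could.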
The 9500 does not support personalities (show personality fails as an unsupported command) and only supports the IT-mode command set. I’ll be interested to see what someone more experienced makes of it.
I suspect that “JBOD not supported”/“JBOD enabled” thing might be a bug, as they’re still working on both the firmware and storcli to get them to correctly represent what the card is doing. The last software update was in December, I think.
Please report back with your experiences. I’m willing to deal with quirky behavior if the card is actually stable and does what it’s supposed to do well enough for a homelab/home office NAS.