LSI 9305-16e HBA cannot see SATA disks

Hello everyone,

I’m starting to exhaust the available sources of information, as none of us can seem to determine the issue with this HBA so far. I’ve been on the Unraid forums, Discord, Reddit, etc., and so far no luck.

Current system:
-Dedicated server chassis with i9-12900k, 64GB DDR5
-2x SM JBODs connected to the server HBA via 4x SFF-8644 SAS cables (2 per JBOD)
-15 disks per JBOD, 30 total
-10 SATA disks
-20 SAS disks

I purchased this card second-hand and immediately flashed it to IT mode with the latest firmware (no BIOS), then ran it through lsiutil in the Unraid terminal to reconfigure the ports and confirm they are open (the annoying Dell card thing).

Now, after installing it in place of my old 9206-16e card, the new 9305-16e only sees the SAS drives for some reason.

If I switch the cables back to the 9206-16e (both cards are currently installed for troubleshooting, using the same SFF-8644 ports on each HBA), it sees everything no problem, but not on the 9305-16e. The SATA disks do not show up at boot, in System Devices, or in the array GUI.
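One thing that might help narrow it down: from the Unraid terminal you can check whether the SATA drives are missing at the SAS transport level or only further up. Rough sketch below (untested, assumes the stock mpt3sas sysfs layout; attribute paths can differ slightly by kernel):

```python
#!/usr/bin/env python3
# Sketch: list what the SAS transport layer enumerated behind the HBA(s)
# and which block devices the kernel actually created. Assumes the stock
# Linux/Unraid sysfs layout for the mpt3sas driver.
from pathlib import Path

def attr(dev, name):
    # Attribute location varies slightly between kernels, so try both spots.
    for p in (dev / name, dev / "device" / name):
        try:
            return p.read_text().strip()
        except OSError:
            pass
    return "?"

print("== End devices seen by the SAS transport layer ==")
for dev in sorted(Path("/sys/class/sas_device").glob("end_device-*")):
    # "ssp" = SAS drive; "stp"/"sata" = SATA drive tunneled through the expander
    print(f"{dev.name:22} addr={attr(dev, 'sas_address')} "
          f"protocols={attr(dev, 'target_port_protocols')}")

print("\n== Block devices the kernel created ==")
for blk in sorted(Path("/sys/block").glob("sd*")):
    d = blk / "device"
    print(f"{blk.name:6} {attr(d, 'vendor')} {attr(d, 'model')}")
```

If the SATA drives show up in the first list but not the second, the problem is between the transport layer and the block layer; if they're missing from both, the 9305 isn't negotiating SATA/STP with the JBOD at all.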

Current troubleshooting attempts:
-triple-checked documentation
=card is confirmed to support both SAS and SATA
-removed the SAS drives so only SATA drives were in the JBODs
=HBA then sees nothing at all
-swapped PCIe slots
=No change, slots confirmed working
-swapped SAS cables
=No change, cables confirmed working
-triple-checked FW, BIOS, and ports
=All check out good
-checked all disks individually
=All check out good

We’re kind of lost at this point as to what it could be: whether it’s a bad card that somehow only affects SATA, or some weird setting within lsiutil that is blocking the SATA drives.

Any help is much appreciated

Weird one. It should work no problem, especially since the 9206-16e HBA worked.

Have you tried swapping 1 or 2 SATA drives with SAS drives (between JBODs), just to see if they get recognized this way?

Maybe some weird block size that’s supported/enabled on the old HBA but not (yet) on the new?
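If you want to rule that out quickly, sector sizes are easy to read from sysfs (quick sketch, no extra packages needed). You’d have to run it while the drives are visible, e.g. with the cables on the 9206-16e, and compare the SATA drives against the SAS ones:

```python
#!/usr/bin/env python3
# Print logical/physical sector sizes for every disk the kernel can see:
# 512/512 = 512n, 512/4096 = 512e, 4096/4096 = 4Kn.
from pathlib import Path

def read(p):
    return p.read_text().strip() if p.exists() else "?"

for blk in sorted(Path("/sys/block").glob("sd*")):
    print(f"{blk.name:6} "
          f"logical={read(blk / 'queue' / 'logical_block_size'):>5}  "
          f"physical={read(blk / 'queue' / 'physical_block_size'):>5}  "
          f"{read(blk / 'device' / 'model')}")
```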

If you're still having this issue, send an email to theartofserver. He's a great guy and knows everything about these things.

I don’t recommend it for production use…

The 9305-16e is a SAS card first and foremost.

In RAID mode it can support both, but in IT mode you lose that functionality (depending on the firmware flashed).

You can try reflashing to another firmware to try and regain this functionality. But honestly, 9200-16e HBA cards are less than $100.

Not sure what you mean here. If it's Unraid, it's probably ZFS. If it's ZFS, then IT mode is the only way it should be done. There is no "production" factor here either, since IT mode firmware is produced and distributed by the device manufacturer themselves. Besides that, IT mode is an enterprise solution when combined with ZFS.

I have not heard of any standard 92xx or 93xx IT mode firmware disabling SATA support. Do you have any examples of this happening?

@Ezekial66
Forgot to ask - what JBOD are you using? Does it have a SAS expander, and if so, does it support SATA drives? Do you have a mixture of SAS and SATA drives?
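If you can't easily dig up the JBOD docs, the expander identity is visible from the host side too. Rough sketch (assumes the mpt3sas driver and a standard sysfs layout); the vendor/product strings are what you'd look up to confirm SATA/STP support:

```python
#!/usr/bin/env python3
# Sketch: identify the SAS expander(s) in the JBODs from the host, so you can
# look up whether that expander firmware supports SATA/STP tunneling.
from pathlib import Path

def attr(dev, name):
    # attribute location varies slightly between kernels, so try both spots
    for p in (dev / name, dev / "device" / name):
        try:
            return p.read_text().strip()
        except OSError:
            pass
    return "?"

for exp in sorted(Path("/sys/class/sas_expander").glob("expander-*")):
    print(exp.name,
          attr(exp, "vendor_id"),
          attr(exp, "product_id"),
          "rev", attr(exp, "product_rev"))
```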

You mentioned that you haven't flashed the BIOS to it. You should probably do that. Here is a guide with the commands, although it's for a different card. Additionally, are you using the ACM IT mode firmware? IIRC it means "active cable management" and should be used with external-port cards. Don't remember exactly why, sorry.
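For reference, the sequence on a 93xx card usually looks roughly like this (a hedged sketch, not the exact guide; the file names are placeholders for whatever ships in Broadcom's 9305-16e package, and the flags should be double-checked against Broadcom's readme before flashing anything):

```python
#!/usr/bin/env python3
# Rough sketch of the usual sas3flash sequence for adding the boot ROM
# alongside IT firmware on a 93xx card. File names below are placeholders.
# Flashing the wrong image can brick the card, so verify everything first.
import subprocess

CONTROLLER = "0"                     # index from `sas3flash -listall`
FIRMWARE   = "SAS9305_16e_IT.bin"    # placeholder IT firmware image
BIOS_ROM   = "mptsas3.rom"           # placeholder legacy boot ROM

def run(*args):
    print("+", " ".join(args))
    subprocess.run(args, check=True)

run("sas3flash", "-listall")                      # enumerate controllers
run("sas3flash", "-c", CONTROLLER, "-list")       # current FW/BIOS versions
run("sas3flash", "-c", CONTROLLER, "-o",          # advanced mode
    "-f", FIRMWARE, "-b", BIOS_ROM)               # flash FW + boot ROM
run("sas3flash", "-c", CONTROLLER, "-list")       # verify versions afterwards
```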

OP is not using a standard 92xx or 93xx card.

He's likely using a Dell 0vym4, based on the "annoying Dell card thing" he mentioned, and that card is known to have crossflashing issues.

Which leads us back to the earlier point: the "no production factor" claim is false. For production use you should use a dedicated HBA, not a RAID card nerfed into being an HBA by crossflashed firmware.

With the disclaimer:
ALL of my devops machines are running IT flashed RAID cards.
Half of my testing machines are running IT flashed RAID cards.
None of my staging machines have IT flashed RAID cards.
None of our production machines have IT flashed RAID cards.

They have a place, but not in prod.


Yep, I misunderstood, sorry. For sure, none of those MegaRAID or other general RAID cards should be used that way in a production environment if it can be avoided. Some of them are "lite" RAID cards with little to no cache, so prefer those if RAID cards are the only option, for less BS. The bigger the heatsink, the more frequent the issues. The worst ones I have seen are a variant of the 9200 with external Mini-SAS HD (SFF-8644) ports. I have never seen such failure rates lol. They're worthless for good reason.

I do still say, though, that if you are using ZFS, you should absolutely be using IT mode. There are big ol' warnings plastered all over the pool creator in Proxmox just to prevent people from using RAID cards with it haha.


I second that

We went 9300 series when they were still too expensive and have been happy.
Skipped the 9400s and now deploy 9500s.

So funny story

One of the big OEMs for turnkey NAS aimed at A/V and Mac users is deploying $50-250k servers with ZFS on top of RAID5.

They use a reskinned FreeNAS (not TrueNAS) to do so.

Recoveries from those things are a fuckin' monster, and I quote a $10k base recovery fee.

93xx series HBAs have a reputation for being rock solid and "just working", and I have dealt with them enough to agree with that sentiment. Haven't heard too much about the 9400s as far as SAS/SATA goes anywhere, but I have heard absolutely awful things about the NVMe support.

I do have a 9400-16i, and it had quite a few issues with SAS/SATA until I updated it and flashed both BIOS images onto it. I had to find that GitHub gist to figure out how to flash it from DOS; there are no good guides for it otherwise, and the Windows tool didn't even let me log in (a bug they have never fixed).

The NVMe support is said to be so abysmal that it isn't worth buying the proprietary cables to try it until I stumble upon one. Even then, I have a raw PCIe card, so why even use it unless I need more lanes than the board provides. I haven't heard much about the SAS/SATA side from other people, though, so it is hard to make up my mind on them. Perhaps I am an outlier with my single card.

How has the 9500 been for you with SAS/SATA as well as NVMe? Do you use it for NVMe? I bought a 9500-8i recently because there were some for $60 on eBay and I got a backplane I want to mess with, so I'd like some anecdotes on what I can expect from them lol.

I know you probably can't say, but just guessing, does the name share any similarities with a certain floating marine animal? The way you described it rang quite a few bells for me.

Hahahaha, what in the world? The warnings are highlighted in bold red on a billboard the size of the moon; why and how did this make it through lmao. Also, what does this kind of recovery involve? How would you go about recovering from that? New card, import pool?

I know who you're talking about, but they don't do that anymore.

It’s a fuckin nightmare

We do all the standard recovery things:
-bit-for-bit image each drive to a new, larger drive
-send dead drives to a clean room for further recovery
-export secure boot keys (there aren't any, they boot legacy with all hardware security disabled)
-export the RAID card configuration (the last one had a dead card)
-replace the card and import the configuration
-realize 4 drives on the dead RAID card port had their RAID configuration wiped and need it manually rebuilt
-start drinking when the rebuild fails because 4 of the 12 drives were dead and could only be partially recovered
-swap in enough good drives and try to rebuild
-repeat those last 2, tweaking the RAID array offsets, until the array begins rebuilding
-realize too much striping is missing to recover the full array
-try to rebuild the ZFS pool from TrueNAS
-recover 90 of the 118 TB of data
-receive the call that the clean room cannot recover any more data due to platter damage

Built them a new TrueNAS server with 20x 20TB WD enterprise drives and moved on with life.
That one is configured with 12 data drives, 4 parity drives, and 2 hot spares.
Used LSI 9500-16i cards with the drives distributed across ports.


Dear god, that sucks. I'd get too invested in it and lose my mind over that crap. I realize we're super off topic, but I wanna know more lmao. How does mixing the RAID card and ZFS cause such an issue here, and so many dead drives? Or is that just how awful it is to recover any pool built on RAID cards?

That's a RAID5 issue combined with SMART thresholds.
You lose 1 of 5 drives and everyone thinks you can just rebuild.
Reality is most drives in an array are from the same lot and have lived the same life,
so when you go to rebuild, another drive dies and you're left with a lost array.
Hence the bit-for-bit copy before anything else.
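To put rough numbers on why that second failure bites during the rebuild (illustrative assumptions only, not the client's actual array):

```python
#!/usr/bin/env python3
# Back-of-the-envelope: chance of hitting at least one unrecoverable read
# error (URE) while reading every surviving drive end to end during a RAID5
# rebuild. All numbers below are assumptions for illustration.
import math

ure_rate  = 1e-15   # enterprise spec-sheet rate: ~1 error per 1e15 bits read
drive_tb  = 20      # assumed drive size in TB
survivors = 11      # e.g. a 12-drive RAID5 rebuilding after losing one drive

bits_read = survivors * drive_tb * 1e12 * 8
p_ure = -math.expm1(bits_read * math.log1p(-ure_rate))  # 1 - (1-rate)^bits

print(f"bits read during rebuild: {bits_read:.2e}")
print(f"P(at least one URE)     : {p_ure:.1%}")
# With these numbers it's roughly an 80% chance of tripping a URE during the
# rebuild -- and that's before a second age-matched drive dies outright.
```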

It's another layer of abstraction to slog through.
We like to keep pools to 6 drives, as any of our workstations can have 6 drives connected simultaneously for easy imports to TrueNAS.

Some day I'll make a video of a recovery.
It's typically 20 minutes of interaction and 40 hours of "let her run".


Damn, I'd watch every bit of that you have; it sounds like something could be learned there. Thank you for the explanations, I really appreciate it. 🙂

