How many HBAs does one really need?

I’m looking at setting up a NAS and I’m a little confused by some of the literature around the HBAs that I’ve seen advertised.

Main Parts For the NAS:
EPYC 7282
Either ASRock Rack ROMED8-2T or Supermicro H12SSL-CT
AICYS RCK-410M case or similar
As many of the IcyDock 16x2.5" enclosures as the case will fit
Mellanox ConnectX-3 40G NIC

I might end up with as many as 80 drives and 20 SFF-8643 connectors to connect.

Some HBAs that I’ve seen claim to connect “up to 1024 SATA devices”. To reach those numbers, are they connecting the HBA to a bunch of SAS expanders and then out to the SATA drives, or are there some breakout cables that I’m not aware of?

Is there a performance difference between using SAS expanders or just a bunch of HBAs direct to the drive cages?

Short answer: yes, see SAS expander backplane performance effects - Thomas-Krenn-Wiki for example.

Without knowing the workload it’ll be near impossible to give you advice, but in general you want to use HBAs and avoid SAS expanders. However, due to hardware limitations (PCIe lanes, most likely, in this case) it may not be possible to connect every drive to a dedicated HBA.
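To get a feel for what direct attachment would take, here’s a rough back-of-the-envelope sketch. It assumes hypothetical “16i”-style HBAs (four SFF-8643 ports of four lanes each in a PCIe x8 slot); adjust the numbers for whatever card you actually pick, and whether that many x8 cards fit depends on your board’s slot layout.

```python
# Rough budget for attaching every backplane connector directly to HBAs.
# Assumed (not from this thread): "16i"-style HBAs with 4 SFF-8643 ports
# each, sitting in PCIe x8 slots.
PORTS_NEEDED       = 20   # SFF-8643 connectors on the drive cages
PORTS_PER_HBA      = 4
PCIE_LANES_PER_HBA = 8

hbas_needed = -(-PORTS_NEEDED // PORTS_PER_HBA)   # ceiling division -> 5
lanes_used  = hbas_needed * PCIE_LANES_PER_HBA    # 40 PCIe lanes

print(f"direct attach: {hbas_needed} HBAs in {hbas_needed} slots, "
      f"{lanes_used} PCIe lanes (plus a slot for the NIC)")
```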


This is primarily going to be a media server for just a couple of clients, with way-overkill hardware, but that’s pretty typical of my projects. I’m surprised that I can’t find any HBAs that use a PCIe x16 connector; they’re all x8, either Gen 3 or Gen 4.

I’d say you need two HBAs and four SAS3 expanders, since each expander will be limited to its x4 SAS3 connection to the HBA at 48 Gb/s. Go for a PCIe Gen4 HBA to be on the safe side.
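Putting the uplink math in one place (line rates only, ignoring encoding and protocol overhead; the two-port “8i” HBA is my assumption):

```python
# SAS3 uplink bandwidth from HBAs to expanders, at line rate.
SAS3_LANE_GBPS = 12
LANES_PER_PORT = 4    # one SFF-8643 connector
PORTS_PER_HBA  = 2    # assuming a typical "8i" HBA
HBAS           = 2
EXPANDERS      = 4    # one expander per HBA port

per_expander_uplink = SAS3_LANE_GBPS * LANES_PER_PORT             # 48 Gb/s
total_uplink        = per_expander_uplink * PORTS_PER_HBA * HBAS  # 192 Gb/s

print(f"{per_expander_uplink} Gb/s per expander uplink, "
      f"{total_uplink} Gb/s total across {EXPANDERS} expanders")
```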

I had been running up to 2 × 24 mechanical SATA drives on a single PCIe Gen3 x8 HBA, with each of its two SFF ports going to an Intel 36-port SAS3 expander.

I used two Lian Li cube cases with 24 3.5" bays each; currently only one remains, due to storage density increasing.

Unfortunately SATA drives don’t support multipath, meaning you can’t usefully connect two SFF ports of one HBA to a single SAS expander; for some reason that is only supposed to work when also using SAS (not SATA) drives. I don’t understand why, since the SAS chipset in the expander and the one in the HBA could just act like two Ethernet switches with an x4 or x8 LAG between them, without needing to involve the drives at all.


Would your recommendation change if I was using SSDs instead of HDDs?

If you remain on 40 GbE (how many ports?) I’d say it would be a waste to use more HBAs, since two HBAs connect with 192 Gb/s to their four SAS3 expanders handling the drives.

What might make sense would be using four HBAs for the four SAS3 expanders and cross-connecting each HBA to each SAS3 expander, so that with 36-port expanders 20 ports per expander would remain for drives (see the port-count sketch below). This way you get solid fail-over protection if one HBA dies.*

*Though I have personally never witnessed an HBA death that didn’t also crash the entire system.
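For the port-count sketch mentioned above, assuming the expander’s 36 ports are counted per SAS lane and each of the four HBAs contributes one x4 uplink to every expander:

```python
# Expander port budget for the cross-connected fail-over layout.
EXPANDER_LANES = 36
HBAS           = 4
UPLINK_LANES   = 4    # one x4 connection per HBA per expander

uplink_lanes_used = HBAS * UPLINK_LANES                 # 16 lanes for uplinks
drive_lanes_left  = EXPANDER_LANES - uplink_lanes_used  # 20 lanes for drives

print(f"{drive_lanes_left} lanes per expander left for drives")
```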

Does this kind of fail-over work when using SATA drives? Have never tried it at home.


The only devices on 40 GbE are the NAS and the primary client PC. In principle there could be some clients connected at 10 GbE, but more likely they would be either 1 or 2 Gb.

Then I don’t see the purpose of adding more HBAs if performance, not fail-over protection, is the only metric.

Also assuming the SSDs are SATA, not SAS, correct?

Note: SAS is full duplex, meaning 12 Gb/s in each direction per lane with SAS3; SATA is only half duplex, so roughly 3 Gb/s in each direction under a perfectly mixed load on a SATA 6 Gb/s link.
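In per-link numbers (line rate, perfectly mixed read/write; real workloads will land somewhere in between):

```python
# Per-link throughput each way under a perfectly mixed read/write load.
SAS3_LANE_GBPS = 12   # full duplex: 12 Gb/s in *and* 12 Gb/s out at once
SATA_LINK_GBPS = 6    # half duplex: reads and writes share the 6 Gb/s link

print(f"SAS3: {SAS3_LANE_GBPS} Gb/s each direction per lane")
print(f"SATA: ~{SATA_LINK_GBPS / 2:.0f} Gb/s each direction when mixed")
```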

Correct, I’ve looked at SAS SSDs and as expensive as an entire NAS of SSDs is going to be, SAS is like 2-4x the price per TB.


Then I’d wait a little while for maybe another person to confirm or challenge my statements and you should be good :upside_down_face:


Going by your question, you have some reading to do.
You know your system requirements and your hardware options,
so this link should help you decide if you need an HBA or something else.

It’s about a 10 minute read (likely less).


That denser-build Icy Dock is a 6x2.5in package, which will require thinner-height SSDs.
If you’re looking to use 2.5in HDDs, it would be Icy Dock’s 4x2.5in package.


Any reason why you’re going for 2.5" drives for what is essentially bulk storage? You can replace like 3-4 2.5" HDDs with one 3.5" drive, which would also simplify a lot of things.


I’m going for SSDs to reduce power consumption and noise mostly.

I don’t follow you here, so instead of getting like 5-6x 16-18TB HDDs (RAID-Z2/6) you’re going to use 20+ SSDs and/or 2.5" HDDs? Keep in mind that most “high capacity” 2.5" HDDs are SMR and/or might be 12.5mm in height.

Right, and like I said, I’m looking at SSDs, not HDDs. All the SSDs that I’m considering are 7mm, just fine for this enclosure.

Why are you even bothering with SATA in that case? Just go with NVMe, which is actually cheaper than “decent” consumer SATA SSDs.

Mostly because I’ve seen other people struggle with addressing a large number of NVMe devices on EPYC platforms. Also, I figured NVMe SSDs were still more expensive.

Okay, I’ll wish you the best of luck, as I don’t follow the purpose of this beyond “just because”, and I haven’t seen anyone advocate using SSDs for “low-performance” storage. Not even large vendors such as Netflix use this setup; see Netflix | Open Connect Appliances.

Keep in mind that the enterprise NVMe (U.2) SSDs that are quite a good deal on the used market draw quite a bit of power even when idling.

So I can understand the OP wanting to go with SATA SSDs here, since distributing maybe 80 Gb/s of network user IO over 80 drives means that each drive is mostly idling.
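As a rough sanity check of that (assuming the ~80 Gb/s of aggregate network IO is spread evenly, which is the best case; real access patterns will be lumpier):

```python
# Per-drive load if ~80 Gb/s of network IO were spread evenly over 80 drives.
NETWORK_GBPS = 80
DRIVES       = 80
SATA_GBPS    = 6

per_drive = NETWORK_GBPS / DRIVES   # 1 Gb/s, roughly 125 MB/s
print(f"~{per_drive:.0f} Gb/s (~{per_drive * 1000 / 8:.0f} MB/s) per drive, "
      f"vs. a {SATA_GBPS} Gb/s SATA link")
```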

I would maybe add 4 enterprise NVMe SSDs with power-loss protection as something like a RAID10 SLOG/L2ARC/cache, whatever; this way, going for “cheap” SATA SSDs for the bulk storage should be a bit less of an issue.