HBA->Backplane->Exos 2x18 drives configuration

Hi there, first post on the L1 forums. This is probably going to be a long one…

I’m building up a new NAS and planning to run ZFS on it, most likely TrueNAS but possibly Houston. The NAS will run in a Proxmox VM with the HBA (and possibly a NIC) passed through, but I don’t think the host software really matters much for my question.

Use case:

It will be a new, general file server for client backups, archival storage, and also for Proxmox VM zvols. It will take over as my main NAS and also act as a Proxmox node. The “old” NAS (8-bay Silverstone backplane with a Xeon E-2136) will be relegated to backup duty for the main NAS and will still serve as a Proxmox node.

Network infrastructure:

Building out a 10Gbit backbone for my home lab/home network. I plan on getting a multi port SFP+ managed switch but for now I’m using only a couple of connections over SFP+ DACs between dual SFP+ PCIe cards and the two SFP+ ports on my Mikrotik switch. The new storage server/NAS would have at the very least a dual SFP+ card in it. I’m open to suggestions here if I’m way under-provisioning. Do I want 40Gbit direct connections between the NAS/Proxmox boxes??

Hardware:

  • I bought 6 of the recertified SATA Exos 2X18 drives (all with 0 power-on hours, by the way!!). Will probably keep one as a hot spare and initially run a bunch of two-drive mirror vdevs, of course laying them out with the dual actuators in mind.

  • Will likely get some SATA SSDs to act as a separate pool for the zVols in the future, but for now, it will be all spinning rust on the array of new 2x18 drives.

  • I am planning to get a new chassis to house my demoted 5900X on an MSI X570 Unify motherboard. I’m debating between the HL15 (expensive) and a 3U/4U surplus Supermicro chassis. It appears that with the HL15 you get a “direct connect” to the drives, so it’s 4 physical drives per SAS cable. As I understand it, to properly populate and eventually use all 15 slots I’d need a 16i HBA card (or an 8i plus a SAS expander).

With the Supermicro chassis, there are a couple of options: one with a TQ direct-connect backplane and another with a SAS2 expander backplane.

So assuming I go with the less expensive option of Supermicro surplus, what’s the ideal topology? The motherboard has PCIe gen 4 slots so I shouldn’t be bandwidth limited by the slots themselves, but rather what I plug into said slots.

– Disclaimer – I’m omitting the “per second” for all of the bandwidth numbers below. It applies everywhere and is assumed. Think of it like making the Kessel Run in less than 12 Parsecs. The units matter but are sometimes not used correctly. :wink:

The scenarios, as I see them, are these:

  • A 9305-16i (or 9400-16i) HBA to the direct-connect “TQ” backplane, using all 16 SAS3 lanes, which presumably provides the least bottleneck. Looks like that would get me 60Gbit or so of bandwidth from the card to the system over its PCIe 3.0 x8 link. For sure, all drives would be connected at their full 6Gbit SATA bandwidth. Right? Or…

  • Go with the SAS2 backplane that has a single SAS2 expander chip onboard. That would theoretically need only one four-lane cable, but I’d be limited to 4x6Gbit = 24Gbit of bandwidth. May as well use two cables to double that to 48Gbit, and then I’d only need an 8i HBA.

I’ve searched a good bit and come up with conflicting answers regarding 6Gbit SATA drives through a SAS2 expander chip. One camp seems to say that yeah, you’ll get full bandwidth but another camp seems to suggest that the SATA bandwidth will be cut in half so I’d only get 3Gbit per drive after traversing the SAS2 expander. Seems strange to me but this is really my fundamental question here.

I realize that with only 5 or 6 of the 2x18 drives I’d only really need about 24Gbit of bandwidth to saturate the drives’ maximum possible throughput (roughly 500MB, or about 4Gbit, each). But I do want to allow for future expansion into the rest of the 16 (or more) drive bays in the chassis, and most likely that expansion would be SATA SSDs, which are even hungrier for bandwidth.
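
Back-of-envelope, just to sanity-check my own numbers (the ~500MB per drive figure is from the spec sheets; real-world will be lower):

```bash
# Rough demand vs. supply, all in Gbit (per second, per the disclaimer above)
echo "6 drives demand : $((6 * 500 * 8 / 1000)) Gbit"   # ~24 Gbit
echo "1x SAS2 cable   : $((4 * 6)) Gbit"                 # 4 lanes x 6 Gbit
echo "2x SAS2 cables  : $((2 * 4 * 6)) Gbit"             # 8 lanes x 6 Gbit
echo "16x SAS3 lanes  : $((16 * 12)) Gbit"               # direct-connect TQ option
echo "PCIe 3.0 x8 HBA : ~63 Gbit usable to the host"
```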

So how would you do it and with what SAS/HBA/backplane hardware? Trying to minimize costs, I’m open to surplus/server pull hardware but will likely avoid “new” clones from Asia.

The floor is open, and thanks in advance for any help/insights!!

Welcome!

Likely what is happening here is that people are getting confused about what the SAS2 dual-port speed is and assuming the only reason a SAS connection of the same “generation” as SATA was twice as fast is that it was dual-ported. That is not the case; SAS really is just faster.

I would be very worried about using dual-actuator drives with a backplane. I’m pretty sure the drives that enumerate as two LUNs (the SAS models) wouldn’t work on a backplane without explicit firmware support, but the SATA DA drives discriminate between actuators via LBA ranges, so I’m not sure of their compatibility with backplanes either.
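
For what it’s worth, my understanding is that the SATA 2X18s normally get split into two equal GPT partitions, one per actuator, and those partitions are what you hand to ZFS. A rough sketch (the device name is a placeholder, and the 50/50 split point is an assumption; verify the exact LBA boundary against Seagate’s documentation):

```bash
# Placeholder device; assumes the lower half of the LBA range maps to one
# actuator and the upper half to the other.
parted -s /dev/sdX mklabel gpt
parted -s /dev/sdX mkpart actuatorA 0% 50%
parted -s /dev/sdX mkpart actuatorB 50% 100%
lsblk /dev/sdX   # should now list sdX1 and sdX2, one per actuator
```
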
As a tangent, I’m making a 32 bay desktop case and I explicitly chose to directly attach all drives via cabling as opposed to a backplane because I wanted to upgrade to DA drives in the future.

Regarding HBAs, LSI/Broadcom has burnt up a significant portion of any goodwill they had. My confidence that they will release quality firmware/drivers in the future isn’t particularly high.

Another thing to keep in mind is cooling: the DA drives produce a good amount more heat than “normal” HDDs, so 16 all bunched up could be a cooling challenge.



Thanks for the reply!!

All very good points!! And duly noted.

I don’t think I’d add many/any more of the DA drives (or even any more rust) so the extra heat hopefully won’t become a concern. They’re more likely to get caddy-adapted 2.5" SSD neighbors that will be much cooler.

Regarding the expander vs. direct, I think you pushed me over the edge to just go direct to the HBA, and it seems like I’d then need a 16 lane HBA.

Sounds like you’re steering away from Broadcom/LSI, so what would you recommend in lieu of those? Ideally, I’d like to keep it to one PCIe slot so it’s gen3 or newer to handle all of the bandwidth.

I’ve personally switched over to Microchip/Adaptec adapters and am so far happy.
I’m also fond of Areca; even though they are based on an LSI chipset, they tend to do better with firmware.


My Adaptec ASR-71605 16-port SAS HBA arrived today, along with 4x breakout cables. All drives will be directly connected to the HBA via SAS-to-SATA breakout cables. We’ll see how it all goes.
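
Once it’s all cabled up I’ll probably sanity-check that every drive enumerates before touching ZFS, something like this (device names are placeholders, and smartctl may need a -d hint to talk through the Adaptec):

```bash
# Assumes the controller exposes the disks as plain /dev/sd* block devices.
lsscsi                                        # each 2x18 should show up once
for d in /dev/sd?; do
  smartctl -i "$d" | grep -E 'Device Model|Serial Number'
done
```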

My current thought is to set it up with 4 vdevs of mirrors, as was suggested in the 2x18 thread:

A0-B1 mirror
A1-B2 mirror
A2-B3 mirror
A3-B0 mirror

Purposely offset/shuffled so that both halves of any one physical drive never share a mirror; a single drive failure then only degrades two vdevs rather than destroying the pool. I might try tweaking the script from @John-S to create this mirrored layout, roughly like the sketch below.
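
Something like this, assuming each actuator half ends up as a GPT partition labelled by drive and half (pool name and paths are placeholders, not the actual script):

```bash
# Sketch only: An/Bn are the two actuator halves of physical drive n,
# exposed here as hypothetical GPT partition labels A0..A4 / B0..B4.
zpool create tank \
  mirror /dev/disk/by-partlabel/A0 /dev/disk/by-partlabel/B1 \
  mirror /dev/disk/by-partlabel/A1 /dev/disk/by-partlabel/B2 \
  mirror /dev/disk/by-partlabel/A2 /dev/disk/by-partlabel/B3 \
  mirror /dev/disk/by-partlabel/A3 /dev/disk/by-partlabel/B0
# Hot spare: both halves of the fifth drive.
zpool add tank spare /dev/disk/by-partlabel/A4 /dev/disk/by-partlabel/B4
```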

It should yield roughly 36TB of space that’s pretty fast. I’ll keep one drive as a hot spare and might use the other one as a “slow” 18TB spare to help deal with transferring tons of files.

Or should I go with 5 drives in RAIDZ2 and the sixth as a hot spare? I don’t really need that much space but…
