Hardware RAID, motherboard RAID, home server cheat codes, and adventures in limited-bandwidth HBA options?

Howdy people.

I am putting together a NAS enclosure. I’ve seen several of Wendel’s videos / rants on the perils of trusting hardware RAID devices, etc. I’ve also seen the videos on cheat codes / changing NVMe slots to utilize other types of devices.

For my upcoming project I have a bit of a problem: I need 8 SATA connections, and I have one empty M.2 / NVMe slot. The motherboard has 4 SATA connections, but I recall Wendel advising against using motherboard connections for RAID. I have seen M.2-to-6x-SATA adapters, but they don’t seem to be dedicated HBAs, and that would still require using 2 SATA connections on the motherboard. Is my best option an M.2-to-PCIe-x16-slot adapter with a dedicated LSI HBA card? It seems a bit messy cable-wise, but I was wondering what you would recommend reliability-wise.

I had considered stepping up to a server board (ASRock Rack) with something like OcuLink onboard, but that and the associated components would nearly triple my budget, or more. (A W680 board and ECC DDR5 RAM are super expensive, so I’d like to save that for an upgrade a few years down the line.)

Needs: 8 SATA connections; reliable operation with either TrueNAS or Unraid and ZFS
Has: one single PCIe 4.0 M.2 slot
Possibilities: adapter-Lego my way to an LSI HBA, or possibly an M.2-to-6x-SATA adapter?

Do you guys have any recommendations?

If you do decide on software RAID and only have an M.2 slot free, you could use an IO-M2F588-8I. It uses JMB585 chipsets, which aren’t as bad as some of the past budget SATA-to-PCIe bridge chips, but obviously not on par with hardware RAID or an HBA in terms of reliability.

What board and form factor are you considering? I assume you have a case, or are getting a case with a specific size and feature set?

Board is still up in the air. Form factor is probably set as ITX, though.

The main thing is that the only full-size PCIe slot on an ITX board is already dedicated to my ConnectX-7 card. I was looking at getting the new-ish Jonsbo N3, as it comes with a SAS-compatible backplane. The case is ITX only, unfortunately.

I may have a ROG B550 ITX to use, but I may go with something else that uses DDR4 and supports PCIe 4.0 for the time being (depends on the RMA process, lol). I could splurge and try to get an ASRock Rack board that might have more M.2 slots or even OcuLink, but they are rather pricey, and at that point I’d have to start considering moving to a newer DDR5 platform (which would add even more to the price, especially with ECC). Unfortunately, the only competitive ASRock Rack boards are Intel, as the AM5 platform boards only feature 1 M.2 slot and an x2-bandwidth OcuLink port. At the $500+ price point I was hoping to see a stronger showing from AMD / ASRock in terms of PCIe connectivity.

Anyways, I’m basically limited to one M.2 slot for 8 drives. The drives are SAS, but the backplane is SATA in (cable side) and SAS out (drive side), so that should be fine.

I hadn’t seen that IO-M2F588-8I JMB585 device before. It doesn’t look like all 8 drives are natively supported, though? I think it’s one chip that supports 4 drives directly, and a 2nd chip that takes 1 port and multiplies it out into 4 ports.

I guess my question still stands: would it not be better to use one of the ASM1166 M.2-to-6x-SATA adapter boards plus 2 of the motherboard’s onboard SATA connections, or alternatively to use an M.2-to-PCIe-x16 breakout and put a full-blown LSI card in there?

Hardware RAID really isn’t a thing anymore; software RAID is really good. If you have a RAID card, I’d just use it as an HBA.

I’m also currently looking for a solution that would give me 8 SATA ports from a single M.2 slot. Not looking for HW RAID, because ZFS.

Currently I’m considering the following three options:

  • ASM1166: only 6 ports, but I could live with that (I’d prefer 8); also nice and compact.
  • JMB585 + JMB575: the JMB585 is a 5-port controller where 1 port is connected to the JMB575, a 1-to-4 port multiplier, so 4 reasonably fast ports and 4 slow ones. Nice and compact, but I’m not a fan of that configuration.
  • M.2-to-PCIe-x4 adapter + a proper HBA card: messy cabling/mounting-wise, and more or less all of the adapters I found seem a bit ugly to me, but I could use a good server-grade LSI-based controller card I already have.
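For what it’s worth, the bandwidth trade-off between those options is easy to sketch. This is only a back-of-envelope comparison under nominal assumptions (~985 MB/s usable per PCIe 3.0 lane after 128b/130b encoding, 600 MB/s per SATA III link, and the usual x2 uplinks on the ASM1166 and JMB585); real-world numbers will be lower:

```python
# Back-of-envelope per-drive bandwidth when ALL drives stream at once.
# Assumed nominal figures, not measurements:
PCIE3_LANE_MBPS = 985      # usable throughput per PCIe 3.0 lane
SATA3_MBPS = 600           # per-drive ceiling on a SATA III link

def per_drive_mbps(host_lanes: int, drives: int) -> float:
    """Worst-case sequential bandwidth per drive behind a shared uplink."""
    uplink = host_lanes * PCIE3_LANE_MBPS
    return min(SATA3_MBPS, uplink / drives)

# ASM1166: PCIe 3.0 x2 uplink shared by 6 ports
print(per_drive_mbps(2, 6))                 # ~328 MB/s per drive

# JMB585 alone: PCIe 3.0 x2 uplink shared by 5 ports
print(per_drive_mbps(2, 5))                 # 394 MB/s per drive

# JMB585 + JMB575: the 4 multiplied ports share ONE 600 MB/s SATA link
print(min(SATA3_MBPS, SATA3_MBPS / 4))      # 150 MB/s per multiplied drive
```

So even the “slow” direct-attach ports are fine for spinning rust, but the four ports behind the JMB575 multiplier share a single SATA link between them.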

Thinking about it, is anyone aware of an adapter solution that can be mounted on standoffs above the motherboard, where the topmost card slot would be located, resulting in a half-height slot in a full-height chassis?

Actually, you should be able to fit a mini-DTX motherboard in that case and get 2 PCIe positions. I know ASRock still makes a decent number of mini-DTX boards, but they don’t label them appropriately, so they aren’t easy to find.

Oh, you are right; I didn’t realize it was using a port multiplier. I rescind my recommendation of it: port multipliers are nothing but trouble, even if they are supported in software.

Hardware RAID still has a pretty big niche, and the manufacturers are constantly producing new hardware to fill it. There are many customers that explicitly put HW RAID into a SOW; I even have one customer that put a program-wide moratorium on ZFS into a SOW, in blood.

In big companies, maybe. But for consumers or prosumers, nah.

Here’s what the mini-DTX-ish motherboard looks like in the N3:
