ZFS SATA/SAS controller

I have a Supermicro 2U X7QCE server. The onboard SATA/SAS controller has a maximum capacity of 2 TB per drive (6 drives), and I want more storage than the 12 TB that allows. I am running ESXi 5.5 to run VMs on the machine; one of the VMs is FreeNAS, and I plan on using ZFS for this machine's storage. My question is this: can I buy a PCIe SATA/SAS controller that doesn't have all the RAID features I won't need with ZFS, or do I need a RAID card? If so, can someone recommend a card for me? I am looking for it to support at least 6 drives at greater than 4 TB per drive. This is just for home use, so I don't need SAS capability; in fact I will likely run SATA drives because of the cost difference.

Yes, you can buy PCIe cards that can manage disks without doing hardware RAID. These are typically called HBAs. If you are comfortable doing some well-documented software flashing of the card, an H200 is a very cheap and reliable option, although IIRC it can only take up to 4 TB drives; I haven't tested higher personally. SAS cards can nearly always use SATA drives, so you don't have to worry about that.


So something like this? It says that it has already been flashed. I was hoping for something able to support larger-capacity drives; 4 TB per drive will work for me, but I was hoping for more. Would you recommend getting one that has already been flashed, or would it be better for me to get one and flash it myself? Also, this doesn't mention any cables. Will it come with them, or will I need to get them separately? Are they a standard connector, or is it something I will need to get from Dell?

Flashing isn’t that hard, but time is money.

That’s more than I’m used to paying; ~$30 on eBay. Like mutation said, pre-flashed just saves you time. It doesn’t look like it has the disk-size limit; it's a different card.

Ideally you DON’T WANT a RAID card, just ports. You do want to make sure that the ports get full throughput per port, otherwise you’ll be performance-limited, but the dumber the controller the better, so long as it doesn’t get in the way by trying to be clever.

Fancy RAID controllers try to lie to the host platform about whether or not writes have been committed to disk (when the data is only in the controller cache) for performance reasons, and ZFS doesn’t want that.

But yes, a PCIe SAS/SATA controller without fancy features is ideal; a RAID controller less so. If you do get a RAID controller, sometimes you need to jump through hoops to turn features off to get individual drives presented to the host.
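To see why that lying matters, here's a toy model in Python (purely illustrative, not real controller behavior): ZFS issues a cache flush and assumes data is on stable storage once the flush returns, so a controller that acks the flush while data is still in volatile cache breaks ZFS's consistency guarantees on power loss.

```python
class Controller:
    """Toy model of a disk controller with a volatile write cache."""

    def __init__(self, lies_about_flush):
        self.cache = []   # data still only in controller RAM
        self.disk = []    # data actually persisted
        self.lies_about_flush = lies_about_flush

    def write(self, block):
        self.cache.append(block)

    def flush(self):
        """Host (e.g. ZFS) requests a cache flush before committing metadata."""
        if self.lies_about_flush:
            return  # acks immediately; cache never drained to disk
        self.disk.extend(self.cache)
        self.cache.clear()

    def power_loss(self):
        self.cache.clear()  # volatile cache contents are gone


def on_disk_after_crash(controller):
    controller.write("txg-1")
    controller.flush()        # ZFS now assumes "txg-1" is on stable storage
    controller.power_loss()
    return controller.disk


print(on_disk_after_crash(Controller(lies_about_flush=False)))  # ['txg-1']
print(on_disk_after_crash(Controller(lies_about_flush=True)))   # [] -- data ZFS thought was safe is lost
```

A dumb HBA just passes the flush through to the drives, which is exactly what ZFS wants.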

This talk jogged my memory a bit, and I’m surprised I hadn’t thought of it sooner when it comes to the ZFS topic. Remember RaidCore? The gimmick was a hardware RAID card at reduced cost, because they developed their own software stack without depending on some proprietary firmware. Thing is, I don’t think their code was open source either. /shrug. Doesn’t matter; they were gobbled up by Broadcom and buried (at least I haven’t heard mention of them since).

Anyhow, my point is that back then there was a lot of marketing FUD against doing RAID in software, so vendors could push expensive hardware solutions. Come forward almost 20 years, and not only are people looking for HBAs without RAID functionality, but people are trusting their data redundancy to ZFS and other software solutions. With all the other “software-defined X” in the enterprise, things have certainly done a 180 over the years. Mostly a good thing, yeah?

On topic: RAID cards usually have horrid BIOSes anyway, so don’t pay the extra $$$. Though I suppose demand for HBAs might have made the price difference smaller than one would expect.

Hardware RAID still has its place, but these days that place is mostly inside storage appliances with commercial support and their own supporting OS, or for booting an OS on bare metal that does not support something competent like ZFS (e.g., Windows, VMware ESXi, etc.).

Hardware RAID has always had trade-offs: cross-compatibility with other RAID controllers (i.e., unless you can source a same-model RAID controller when yours fails, you’re potentially screwed due to the on-disk format), price, etc. But ZFS never used to exist.

Even without ZFS, though, CPUs have been fast enough for a decade or more that software RAID is as fast as or faster than low-cost hardware RAID setups.

IIRC basically anything with MMX support is capable of doing parity calculations really fast, and that stretches back to what, 1996? 1997? So since then, contemporary hardware paired with similar-age storage has been “good enough” for software RAID.
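The parity in question is just a byte-wise XOR across the data drives, which is why it's so cheap for any modern CPU. A minimal Python sketch of RAID 5-style parity and single-drive reconstruction (illustrative only; real implementations vectorize this):

```python
def parity(blocks):
    """Byte-wise XOR parity across equal-length data blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)


def rebuild(survivors, p):
    """Recover the single missing block: XOR of survivors and parity."""
    return parity(survivors + [p])


d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"   # data blocks on three drives
p = parity([d0, d1, d2])                 # parity block on a fourth drive

# Drive holding d1 dies; rebuild its contents from the rest.
assert rebuild([d0, d2], p) == d1
```

XOR is its own inverse, so losing any one block (including the parity block itself) is recoverable from the others.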

Obviously people who make RAID controllers are going to poo-poo open software solutions, but in some cases the skepticism is justified, as per above.

Anything I really care about from a data-integrity standpoint is on either an appliance (e.g., NetApp) or ZFS, but I do use hardware RAID controllers for OS boot so that a hard drive shitting the bed doesn’t stop my machine from being able to boot. E.g., ESXi lives on a hardware RAID mirror; all data is over the network.

Oh, look! It’s thro, my fellow contrarian. 20 years is more or less the time frame I’m talking of, mate. It’s also the time frame when we had things like software-based modems, sound cards, etc. Hell, later we even had a CPU that ran x86 via direct translation of the instruction set. Wherever there is cheap hardware with spare resources, companies will find a use for it. Anyway, my point was that the industry of course resisted the trend in the case of RAID. It’s good we have open, standards-based alternatives now, is all I was getting at.

I don’t doubt that RAID cards still have their use cases, but just the other day I was hearing about the trend of cloud-service guys not even bothering with filesystem/disk-level data redundancy, in favor of redundancy further up the proverbial software stack. To each their own for their needs.

Redundant boot disks? I guess you could. If you’re going to have another drive in there anyway, you could just have a hot spare that, in the worst case, you can remotely switch to; I imagine these management frameworks can do that for you. That way you’re not eating away at the MTBF of the redundant disk. Though the upside of mirroring is read speed, I guess. Again, depends on your needs.

But this has nothing to do with ZFS or HBA SAS controllers :wink:
