HP ProLiant RAID setup suggestions

Hello!

I have an HP ProLiant DL360 G7 server with a P410i RAID controller. Everything is updated to the latest firmware.

I want to run Proxmox with a ZFS RAIDZ1 configuration on this machine, but I'm not certain it will work. The goal is to have at least one disk of redundancy, so a failed drive can be hot-swapped.

The RAID controller does not offer an HBA mode, but I did find a tool on GitHub called "hpsahba" which supposedly forces it into HBA mode. However, I have not been successful in doing this.
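
For reference, the procedure as I understand it from the project's documentation is roughly the following (the /dev/sg0 device node is just an example, the exact flags should be double-checked against the hpsahba README, and enabling HBA mode reportedly wipes the existing array configuration):

    # find the SCSI generic device node for the P410i
    lsscsi -g

    # enable HBA mode on the controller (reportedly destroys any existing array config)
    hpsahba -E /dev/sg0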

The closest thing I can do is to set each individual disk up as its own RAID 0 array in the hardware controller, and then set up ZFS RAIDZ1 during the Proxmox install. Will this work fine, or will it cause issues, like file system errors or losing the drive hot-swap functionality?
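
To make the question concrete, what I have in mind is roughly the following (the controller slot, drive bay IDs and /dev/sdX names are just guesses for my box, and the Proxmox installer would normally do the ZFS part itself):

    # one single-drive RAID 0 logical drive per physical disk, via ssacli/hpssacli
    ssacli ctrl slot=0 create type=ld drives=1I:1:1 raid=0
    ssacli ctrl slot=0 create type=ld drives=1I:1:2 raid=0
    ssacli ctrl slot=0 create type=ld drives=1I:1:3 raid=0
    ssacli ctrl slot=0 create type=ld drives=1I:1:4 raid=0

    # then build a RAIDZ1 pool out of the resulting logical drives
    # (/dev/disk/by-id paths would be safer than sdX names)
    zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd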

If this is a bad idea, what would the recommended RAID configuration be?

It doesn't look like HBA cards are available for this server, and third-party controllers make the fans ramp up to unacceptable noise levels, so that isn't an option either.

Just toss your current controller and buy an HBA already flashed to IT mode from eBay… as I recall they are dirt cheap. Your system board might also have non-RAID SAS ports on it; explore the connectors and the diagram on the inside of the cover.

The P410i is an integrated controller, but I suppose it can be disabled in the BIOS.

Did a quick search, but didn’t find any P212 or P812 controllers on eBay with IT mode flashed. These are option cards for the system with internal SAS ports.

There seems to be conflicting information on these controllers and HBA mode. Some forum posts suggest it is possible (but boot partitions have to be placed on the internal SD card), while others say it is a no-go.

Reading multiple posts, it seems I need to use genuine HP cards, and they have to be for this generation. If I use anything else, the fans crank up to jet-taking-off noise levels.

Yes, HPE is a jerk like that, punishing the end user for trying to save a buck.

Not ideal, but will your controller let you set up each drive as a single-member RAID 0 array? I've read that is a workaround sometimes.

I have been reading around a lot. There is a lot of contradictory information on this topic, so it's very confusing.

From what I can tell, my best option is to use ZFS without its RAID features and just let the hardware controller handle that part.

Not optimal, but that’s what I have to work with.

There's no advantage to using ZFS as just a file system. If you're going to use hardware RAID, look at LVM, which offers many of the same organizational options that ZFS does but is compatible with hardware RAID backing storage.
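
For what it's worth, a minimal LVM layout on top of a hardware RAID logical drive looks something like this (the /dev/sda device and the volume names are just placeholders):

    # the hardware RAID array shows up as a single block device
    pvcreate /dev/sda
    vgcreate vg_data /dev/sda

    # carve out logical volumes, much like you would create ZFS datasets/zvols
    lvcreate -L 100G -n vm_images vg_data
    lvcreate -L 50G  -n backups   vg_data

    # snapshots work too, as long as free space is left in the volume group
    lvcreate -s -L 10G -n backups_snap /dev/vg_data/backups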


I’m not sure I’d go along with that.

Even if one passes individual RAID 0 drives through and uses them like partitions, one can still build ZFS pools out of them, get the dataset benefits (per-dataset compression, snapshotting, and sending), and still get the ARC.
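
For example, all of this still works whether the vdevs are raw disks or single-drive RAID 0 logical drives (the pool name, dataset layout and backup host are placeholders):

    # per-dataset compression
    zfs create -o compression=lz4 tank/vms
    zfs create -o compression=zstd tank/backups

    # snapshots and send/receive work as usual
    zfs snapshot tank/vms@before-upgrade
    zfs send tank/vms@before-upgrade | ssh backuphost zfs receive backup/vms

    # and the ARC is still in play
    arc_summary | head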

I would say ZFS would be slower than other file systems there, and with drives that are not passed through, one might lose some attributes, like true block-level access to the disks.
I presume one of the drawbacks is possible write amplification if the RAID controller presents differently sized sectors to the system?
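
One way to check for that (device names assumed) is to compare the sector sizes the logical drives report with the ashift the pool ends up with, and to force ashift=12 at pool creation if in doubt:

    # sector sizes the RAID 0 logical drives present to the OS
    lsblk -o NAME,PHY-SEC,LOG-SEC

    # force 4K-aligned allocation when creating the pool
    zpool create -o ashift=12 tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd

    # verify the ashift actually in use
    zdb -C tank | grep ashift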

I’m not sure of the actual losses one suffers from not passing whole disks through.
