Ok, got my RAID card, a Dell PERC H310. Before I flash it to IT mode, I have a question.
The rig is just a fileserver: 8GB RAM, a Core 2 Quad at 3.6GHz, a 240GB SSD for boot, and 5x spinning rust currently in software mdadm RAID6. With a slow system like this, is there any performance to gain from letting the RAID card do the RAID instead of using it as an HBA? Ultimately I may just run FreeNAS, but I like playing with server stuff in CentOS. It's a home storage jobbie, not critical for anything.
Guess I have 3 questions now that I think of it…
Second, the boot drive should still run off the motherboard, correct?
Third question: can I create a partition on my SSD for caching, or does the whole drive need to be utilized on the HBA?
The ultimate goal is to run RAID6 and be able to add drives to the data pool. With the card I got I'm stuck with RAID 0, 1, 5, or 10, because I didn't realize there's no advanced key option for RAID6 on the H310. So if it would just be better to let the RAID card do its thing, I would be OK with RAID5 for now.
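For what it's worth, growing a RAID6 onto extra drives is straightforward with mdadm. A rough sketch, assuming five data disks at /dev/sdb through /dev/sdf (all device names here are placeholders — substitute your own):

```shell
# Create a 5-disk RAID6 array (device names are placeholders):
mdadm --create /dev/md0 --level=6 --raid-devices=5 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Later, add a sixth disk and grow the array onto it:
mdadm --add /dev/md0 /dev/sdg
mdadm --grow /dev/md0 --raid-devices=6

# Watch the reshape progress:
cat /proc/mdstat
```

The reshape runs in the background and can take a long time on big disks, but the array stays usable while it runs.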
I’m all over the place, I know.
As compared to running it off the HBA? Yes, it should be on the motherboard.
I think Core 2 Quad boards are new enough to have SATA2, so the SSD won't be too bottlenecked by the slower SATA. SATA2 is fast enough to take advantage of the better random read/write of an SSD, and plenty fast enough to saturate a gigabit network.
Depends on what caching software you're planning to use. It's possible AFAIK with something like lvmcache or bcache. I think it's possible on ZFS too, although you might not be able to do it in the FreeNAS GUI.
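With lvmcache, a partition is all it takes — the SSD doesn't need to be dedicated. A minimal sketch, assuming the mdadm array is /dev/md0 and the spare SSD partition is /dev/sda2 (both placeholders), with a made-up VG name "storage":

```shell
# Put both the array and the SSD partition into one volume group:
pvcreate /dev/md0 /dev/sda2
vgcreate storage /dev/md0 /dev/sda2

# Data LV on the array, cache pool on the SSD partition:
lvcreate -n data -l 100%PVS storage /dev/md0
lvcreate --type cache-pool -n datacache -l 100%PVS storage /dev/sda2

# Attach the cache pool to the data LV (writethrough is the safer default):
lvconvert --type cache --cachepool storage/datacache storage/data
```

If the SSD dies, a writethrough cache can be detached without data loss, which matters when the cache device is also your boot drive.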
Bcache, more like based and cachepilled.
If you're going to use FreeNAS I would go IT mode on the H310 and use ZFS on the drives. The performance would be better, and ZFS is great once you figure out all the little gotchas.
Well, I was going to flash to HBA mode tonight, but the information out there is so convoluted it's frustrating me. P16 or P20, 9211-8i or SAS2008, Dell MegaRec or sas2flash… baaaaaaaaaaaaahhhhhh
I'm so bloody frustrated. I, of course, have one of those motherboards that won't boot with the card unless pins B5/B6 are covered, but it also won't report the adapter in MegaRec, even though the device works properly.
I have it in the top x16 slot because my Intel dual gigabit card is in the x8 slot. Should I switch slots and try it, or just try it in another machine? I've been out of IT for 15 years and this brings back those great 4am memories of shit that should work, not working.
IIRC you have to boot BIOS/CSM for MegaRec to work.
I haven't done one of those cards, but I have done the IBM M1015.
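For reference, the common H310/M1015 crossflash guides all follow roughly the same sequence. This is a sketch, not a definitive procedure — the exact firmware filenames and adapter index depend on which guide's package you downloaded, and a wrong step here can brick the card, so follow a full guide:

```shell
# From a DOS/FreeDOS boot, assuming adapter index 0:
megarec -writesbr 0 sbrempty.bin   # wipe the Dell SBR
megarec -cleanflash 0              # erase the existing flash
# Reboot, then flash the intermediate Dell HBA firmware:
sas2flsh -o -f 6GBPSAS.fw
# Reboot again, then flash the LSI 9211-8i IT firmware:
sas2flsh -o -f 2118it.bin
# Restore the SAS address printed on the card's sticker:
sas2flsh -o -sasadd 500xxxxxxxxxxxxx
```

Skipping the IT-mode BIOS ROM at the last step is fine (and common) if you never need to boot from a drive on the card.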
So after chasing my tail, I gave up and just set up RAID5. Come to find out, RHEL/CentOS dropped support for almost all LSI SAS2xxx controllers. Fine, back to Debian.
Now I can't get the goddamn array to format. From GParted it hangs and then can't find the device. Reboot, it's back; try from the command line, same thing, it just hangs. I don't know what I'm missing.
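When a format hangs like that, the kernel log usually says why (timeouts, resets, a dropped link). A few things worth checking, with placeholder device names:

```shell
# See what the kernel logged when the format hung:
dmesg | tail -n 30

# Check the array state and whether an initial resync is still running:
cat /proc/mdstat
mdadm --detail /dev/md0     # /dev/md0 is a placeholder

# SMART-check a member disk (sdb is a placeholder):
smartctl -a /dev/sdb
```

A single flaky disk or cable behind a controller can make the whole array device appear and disappear exactly like that.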
You might consider selling the card and getting an M1015, which I know for a fact works and is well documented.
I used this tutorial on two Dell H310s and it worked both times.
I don't remember the specifics though; I assume I just "monkey see, monkey do"-ed it and didn't worry much about it.
Instead of answering any of your questions, I will ask a new one: what if the RAID card fails?
That's the reason why I wouldn't run a home NAS on a RAID card. In an enterprise you would usually have spare parts, or at least another system you can use to recover the data, but at home you would either risk losing the data or have to buy a spare controller, at which point I would just keep on using mdadm. Even on this older system, I highly doubt it would be a bottleneck; the disks or the network would probably bottleneck the system before the CPU is fully utilized.
Note: the problem with RAID controllers is that there is no standard on-disk format. Each brand/model has its own layout, and even an identical controller can have difficulty reading the disks if it's on an older or different firmware version than the controller the array was originally initialized on. Offloading RAID to a dedicated controller makes sense in a datacenter, where you have a farm of identical servers with exactly the same hardware and even the same firmware versions, you have spare parts, and there is a blazing fast network plus already too much IO and too much work for the servers to handle. Not to mention the added value of hot-swapping a failed disk and having the controller automatically rebuild the array as fast as possible, with zero downtime. But at home, you don't usually have or need any of that.
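This is mdadm's big advantage: the array metadata lives on the disks themselves, so you can move the drives to any Linux box and reassemble. A quick sketch (sdb is a placeholder device name):

```shell
# Confirm a member disk's metadata is readable on the new machine:
mdadm --examine /dev/sdb

# Scan all disks and assemble any arrays found from their metadata:
mdadm --assemble --scan
```

No matching controller, firmware version, or vendor tooling required — any distribution with mdadm can bring the array back up.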
My work retired a server because the RAID card failed and we would have had to express-ship one over the Atlantic.
They do fail indeed … and it’s always ugly.
Well, oddly enough, setting the drives as non-RAID in the PERC H310 BIOS makes it act as an HBA. So for now, because I'm sick of messing with it and need my second point of backup back up, I just set up mdadm again.
Thanks for the input, folks. I'll revisit it once I nail down a newer rig.
This topic was automatically closed 273 days after the last reply. New replies are no longer allowed.