Avg PC User - looking for Enterprise Level RAID Card (hardware controlled)

I’m looking for a nice hardware RAID controller card, something that keeps my CPU/RAM from being taxed controlling and maintaining my RAID 1 configuration, which stores my more important files (family photos, videos, files I’m working on, etc.).

Currently I’m using a HighPoint RocketRAID 620 card. It offers a nice, robust WebUI with complete details on the RAID array and drive health/status. However, it’s PCIe 2.0 and has slower read/write speeds than I’d like. I’m also not sure what the largest supported drive size is (the documentation is lacking).

Any suggestions on a similar device that is PCIe 3.0 and handles up to 10TB drives? Preferably with better throughput too.

1 Like

If you already have the array built on the HighPoint card and you are trying to move it to a different card, you are going to have a very bad time. The biggest issue is that you can’t just swap your old RAID controller for a different card, because the on-disk format of the array is tied to the HighPoint card it was built on.

If you are able to back up your data and build a new array, we need a bit more information on what you plan to build. How many disks? What size per disk? Is it RAID1 only, or are there other options you would consider?

Once we know the number of disks and the overall size of the build, I’ll be able to give a decent recommendation.

1 Like

Negative. When I run out of storage and have to buy new drives, I’m going to get a new card at the same time (new HDDs, new RAID controller).

I’m currently using an LSI 9211-8i flashed to IT mode in my media server, and it’s more than enough for what I’m trying to do. I got it second-hand on eBay for $50. It’s not PCIe 3.0, but I’m still able to get good R/W speeds on my array. I’m passing the controller through and built a RAID5 array with mdadm.
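For anyone curious, building that kind of array is only a couple of commands; here’s a minimal sketch of driving mdadm from a script, assuming hypothetical passed-through disks at /dev/sdb through /dev/sdd (adjust the names for your own setup):

```python
# Minimal sketch: create a 3-disk RAID5 with mdadm and print its status.
# Device names (/dev/sdb, /dev/sdc, /dev/sdd) are placeholders; run as root.
import subprocess

DISKS = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]  # hypothetical member disks

def create_raid5(md_device="/dev/md0"):
    # mdadm writes its metadata onto the disks themselves, not a controller.
    subprocess.run(
        ["mdadm", "--create", md_device,
         "--level=5", f"--raid-devices={len(DISKS)}", *DISKS],
        check=True,
    )

def show_status():
    # /proc/mdstat gives a quick health/resync overview of all md arrays.
    with open("/proc/mdstat") as f:
        print(f.read())

if __name__ == "__main__":
    create_raid5()
    show_status()
```

Because the metadata lives on the disks themselves, the array can be reassembled on any Linux box, regardless of which HBA sits in front of it.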

If you are going with hardware RAID and absolutely need it to be PCIe 3.0, either a MegaRAID 9341 or a Dell PERC H830 Adapter will be your best bet. They are expensive, though. Both are 12Gb/s SAS controllers (SATA drives connect at 6Gb/s) and can handle 10TB drives.

RAID controller for HDDs, max 2Gb/s per drive?
1 HDD = max 2Gb/s if it is very new and very fast
PCIe 1.0 x1 = 2.5Gb/s
PCIe 2.0 x1 = 5.0Gb/s
PCIe 3.0 x1 = 8.0Gb/s

Normal, reasonable RAID controllers are x4 or x8 lanes, so even PCIe 1.0 would be more than sufficient for 4 drives, and an LSI 9211-8i is PCIe 2.0 x8, so also more than sufficient. While PCIe 3.0 is nice, it is not essential when you are using slow HDDs, and 12Gb/s SAS won’t make an appreciable difference either. Lots of SSDs would be a different story, but you seem to want capacity, which means HDDs.
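To put rough numbers on it, here’s a quick back-of-the-envelope sketch using the line rates above (raw figures; usable bandwidth is a bit lower after encoding overhead):

```python
# Back-of-the-envelope check: do HDDs come anywhere near saturating the slot?
HDD_GBPS = 2.0            # a very fast modern HDD, roughly 250 MB/s sustained
PCIE_GBPS_PER_LANE = {"1.0": 2.5, "2.0": 5.0, "3.0": 8.0}

def headroom(drives, gen, lanes):
    need = drives * HDD_GBPS                 # aggregate drive throughput
    have = PCIE_GBPS_PER_LANE[gen] * lanes   # slot line rate
    return need, have

for drives, gen, lanes in [(2, "2.0", 1), (4, "1.0", 4), (8, "2.0", 8)]:
    need, have = headroom(drives, gen, lanes)
    print(f"{drives} HDDs need ~{need:.0f} Gb/s; PCIe {gen} x{lanes} offers ~{have:.0f} Gb/s")

# Even a single PCIe 2.0 lane covers two mirrored HDDs, and an x8 card like
# the 9211-8i has well over twice the bandwidth eight spinning disks can use.
```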

3 Likes

Thanks samarium, I didn’t even take that into consideration.

I’m just not sure whether the HDDs or the chipset on the card is the bottleneck in terms of throughput.

Throughput aside, I’m more concerned about the future: given how my setup is configured, I’d like to just stay with a RAID 1 configuration, which means getting larger drives. I’m just not sure what the maximum supported drive size is on the RocketRAID 620 chipset.

Don’t run RAID1 for performance, run RAID10.
Also, RAID1 isn’t taxing on CPU/RAM; there’s nothing at all to calculate, so there’s not much point in a discrete controller.

IMO just use a soft raid.

2 Likes

What he said.

Even with parity-based RAID, any CPU more modern than, say, 1996 (IIRC, MMX support is what made parity calculations much faster on the CPU, so say a Pentium 166 MMX) is plenty to do software RAID with drives contemporary to that CPU.
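For the curious, the “parity” in question is just a byte-wise XOR across the data chunks, which is why it’s so cheap; a toy illustration (not how md actually stripes data, just the arithmetic):

```python
# Toy illustration of RAID5-style parity: parity = XOR of the data chunks,
# and any single missing chunk can be rebuilt by XOR-ing the rest together.
from functools import reduce

def xor_blocks(blocks):
    # XOR corresponding bytes of equal-length chunks.
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

d1, d2, d3 = b"family  ", b"photos  ", b"videos  "   # equal-size data chunks
parity = xor_blocks([d1, d2, d3])

# Simulate losing d2 (one "drive" fails) and rebuilding it from the rest.
rebuilt_d2 = xor_blocks([d1, d3, parity])
assert rebuilt_d2 == d2
print("rebuilt:", rebuilt_d2)
```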

Software RAID has the advantage that you’re not screwed when your RAID controller dies (and they do; I’ve had to replace a few in my time, and luckily I had spare decommissioned but still functional hardware or I would have been SOL), or when you want to migrate to new hardware without reformatting your drives.

If you’re running a hardware RAID controller, you’ve just moved the single point of failure from your drives to your RAID controller.

3 Likes

@_Simon @thro I beg to differ. Windows 10 x64 was ‘syncing’ my RAID 1 config on a weekly basis. It was doing it so often it actually mulched through two drives in under a year. Something in Windows’ software RAID management (Disk Management) was causing this to happen A LOT! It would also tax resources even though it wasn’t set as high priority in the background. Still, I would rather have something dedicated, designed specifically for this task.

RAID 10: I only have spare room for two HDDs, and this controller only has two SATA connections. My Dell T720 server has two RAID 10 configs, but those host VMs, and I use some of those VMs for game servers.

Given how my office system is built, I have room for only two HDDs, and I’m not looking for anything glorious, so RAID 1 (a simple mirror) is all I need. I’m not running VMs off the drives. I just want somewhat faster reads/writes when I work on a project and then store it. Especially at these drive sizes, if I have to do any work or replace one, I want to be able to get the files off the drives as quickly as possible.

If I were running Linux on the machine, I wouldn’t mind using software RAID, but it’s a ‘hell no’ to software RAID in Windows, especially after that last experience. I never had the problem in Windows 7; I upgraded to Windows 10 and started seeing issues immediately.
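For reference, on Linux that two-disk mirror plus failure notifications is only a few commands via mdadm; a sketch with hypothetical device names /dev/sdb and /dev/sdc and a placeholder email address:

```python
# Sketch: two-disk mirror with mdadm plus failure-notification monitoring.
# /dev/sdb and /dev/sdc are placeholders for the two data HDDs; run as root.
import subprocess

MIRROR_DISKS = ["/dev/sdb", "/dev/sdc"]

def create_mirror(md_device="/dev/md0"):
    subprocess.run(
        ["mdadm", "--create", md_device,
         "--level=1", "--raid-devices=2", *MIRROR_DISKS],
        check=True,
    )

def monitor(md_device="/dev/md0", email="admin@example.com"):
    # mdadm's monitor mode sends mail when a member drive fails or degrades.
    subprocess.run(
        ["mdadm", "--monitor", "--daemonise", f"--mail={email}", md_device],
        check=True,
    )

if __name__ == "__main__":
    create_mirror()
    monitor()
```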

The reason for that, I’d bet, is that you were not using RAID-rated drives.

It’s a known issue that drives such as the WD Black, WD Green, etc. do not support TLER and drop out of RAID arrays.

That’s not a software RAID issue; that’s a drive problem.
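As an aside, you can query (and on drives that support it, set) the SCT Error Recovery Control timeout that TLER refers to through smartctl; a sketch, assuming smartmontools is installed and a hypothetical /dev/sda:

```python
# Sketch: query / set SCT Error Recovery Control (the feature TLER refers to)
# via smartctl. 70 = 7.0 seconds; many desktop drives ignore or reject this.
import subprocess

def show_erc(device="/dev/sda"):          # placeholder device name
    subprocess.run(["smartctl", "-l", "scterc", device], check=False)

def set_erc(device="/dev/sda", deciseconds=70):
    # Sets both read and write recovery timeouts; the setting resets on a
    # power cycle on most drives, so RAID users typically reapply it at boot.
    subprocess.run(
        ["smartctl", "-l", f"scterc,{deciseconds},{deciseconds}", device],
        check=False,
    )

if __name__ == "__main__":
    show_erc()
    set_erc()
```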

2 Likes

I’ve not heard of this outside the enterprise level. I know my server has IBM Nearline SAS drives in it. However, I’m just using two 3TB drives on my desktop, nothing fancy.

I also found Windows’ built-in software RAID did terrible things when syncing.

I had a couple of new drives and set up a mirrored storage space, and within 6 months one drive had ~1TB written while the other was closer to 10TB! (Okay, maybe 8.3TB or something, but a heck of a difference for a mirror.)
I never investigated whether it was my inexperience, a problem with Storage Spaces, or because the space was formatted as ReFS, but it was pretty shocking.

Have you considered a simpler RAID setup? RAID10 works pretty well and doesn’t require much compute. It might end up cheaper to just buy an extra drive or two than to toss $$$ at a controller.

Also much better throughput than 5 or 6.


Edit: RAID1??? Dude, that’s not much more CPU/performance-intensive than running a single drive. It literally just does the same write twice.

You don’t need HW RAID for RAID1.

Most of the “software RAID sucks” hoopla came from 20-30 years ago, when CPUs were slower and didn’t have a bunch of idle cores, and it was centered on RAID levels that do parity calculations. There are cases where hardware RAID makes sense; this isn’t one of them.

1 Like

I run RAID 1 for redundancy (if one drive crashes, I’ve still got the data on the other).

I have since changed my setup and removed the RocketRAID controller. I’m using the top and bottom M.2 slots on my motherboard and using the chipset to run RAID1 on the two HDDs. This seems to max out the read/write performance of the drives and feels much faster. However, I don’t get any warning or notification of drive failure, nor any software reporting on drive health.
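One way to get failure warnings back without a controller is to poll SMART directly; smartmontools also runs on Windows, so a small scheduled script like this sketch (hypothetical device names, smartctl assumed installed) can fill the notification gap:

```python
# Sketch: poll SMART overall health with smartctl and flag anything not PASSED.
# Assumes smartmontools is installed; device names are placeholders.
import subprocess

DRIVES = ["/dev/sda", "/dev/sdb"]  # the two mirrored HDDs (hypothetical names)

def drive_healthy(device):
    # `smartctl -H` prints the drive's overall self-assessment result.
    out = subprocess.run(
        ["smartctl", "-H", device], capture_output=True, text=True
    ).stdout
    return "PASSED" in out or "OK" in out

for dev in DRIVES:
    status = "healthy" if drive_healthy(dev) else "ATTENTION NEEDED"
    print(f"{dev}: {status}")
```

Run it from Task Scheduler (or cron) and you at least get a nudge before a mirror member dies silently.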

It doesn’t exactly matter anymore, as my old Ryzen 7 1800X box now hosts an LSI MegaRAID controller and some SAS drives in a larger RAID 5 config for storage, so I have my regular setup on my rig for immediate storage and a larger storage array on the other PC.