Best most reliable SATA/RAID controller for consumer platforms

Mate, you’ve come through again! Listen, if you ever need anything, make sure you message me. I feel like I owe you a Christmas present, at minimum!


@twin_savage I have two last questions for you.

With the Adaptec 3254, if I’m running a hardware array and unplug the device from the platform, then plug it back in (or into a new platform): if I re-allocate and re-set the drives into the same array they were in before, will it recognise them and automatically catalogue the drives back into that array, or will it ask to format? I’ve had an old controller where, when you plugged it into a new platform or whatever, setting the same array settings on the same array inputs auto-recognised the array and it just worked.

EDIT: I’ve found a ‘low profile’ PCIe riser cable.
And second question: do you know of any PCIe riser cables that are bunched up into a single ‘ribbon’ or round cable?

And thank you again for showing me the way. Everything you have said I’ve listened to, and I really, really appreciate all of your help. You’re a gentleman.

Yes, moving the entire card from one system to another will let you transplant the entire volume without having to rebuild the array or reformat the data.

I might just be old school, but I always make sure I keep the order of the HDDs plugged into the RAID card (i.e. HDD “1” is plugged into port 1, etc.) when breaking it down and reassembling from scratch; this may no longer be necessary with modern RAID, but I haven’t bothered to test yet.

I’d be real careful with PCIe risers; there are a lot of crap-quality ones out there that will give you hard-to-track-down errors. That being said, I’ve had good luck with LINKUP-branded PCIe riser cables in the past.

Yeah, that’s cool. Is there a way to do a RAID 6 array, but instead of 2 redundancy drives you can do 3 or 4? Looking at my use case (big-file torrent box, my work project info, massive files…), 256 terabytes (16x 16TB) may not be enough… I might need to buy a second RAID card and make a second array.

But I do not want to create 2 arrays just to get 4 redundancy drives. I want 4 redundancy drives on a single array: instead of 2 RAID 6 stripes to get 4 parity drives total, I’d rather have one array with 4 drives allocated as redundant replacements. I’d rather a single RAID 6-style array (RAID 6, RAID 6.5, or RAID 8, I don’t know what it’s called…) with 4 drives of redundancy than 2 arrays to achieve 2 redundancy drives per array.

If my drives are 16TB each, I am more than happy to spare 64TB (4 drives) for RAID 6-level redundancy, but RAID 6 is 2 drives, not 4. Is there a way to allocate this?

Here at 2000Mbps (a really rough scaling of the performance loss across 16 drives; it could be more…), I’M STILL looking at fucking years of rebuild time. I guess the main attraction of a RAID 6 array is 2 replacement drives and not rebuilding the array at all. Anyway, can we create a RAID 6 array with 4 redundant/replacement drives?

The nested RAID levels you’re talking about are currently the best way to achieve more than 2-parity-drive redundancy.
In the future there may be hardware RAID 7.3 for a triple-parity setup, but going beyond that is really going to tank write performance. RAID 60 is probably your best bet even when RAID 7.3 comes out, just due to its speed and high reliability.
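To make the trade-off concrete, here’s a rough back-of-the-envelope sketch comparing a single RAID 6 array against RAID 60 for the 16x 16TB drives discussed above (drive counts and sizes from the posts; the formulas are just the standard RAID parity rules):

```python
# Capacity / fault-tolerance comparison for 16 x 16 TB drives.
# RAID 6 loses 2 drives per span to parity; RAID 60 stripes
# (RAID 0) across multiple RAID 6 spans.

DRIVES = 16
SIZE_TB = 16

def raid6_usable(n_drives, size_tb):
    """Usable capacity of one RAID 6 span (2 parity drives)."""
    return (n_drives - 2) * size_tb

def raid60_usable(n_drives, size_tb, spans=2):
    """RAID 60: stripe over `spans` equal RAID 6 spans."""
    per_span = n_drives // spans
    return spans * raid6_usable(per_span, size_tb)

print(raid6_usable(DRIVES, SIZE_TB))   # 224 TB usable, survives ANY 2 failures
print(raid60_usable(DRIVES, SIZE_TB))  # 192 TB usable, survives 2 failures PER span
```

Note the catch: RAID 60 dedicates 4 drives to parity in total, but it only *guarantees* survival of 2 simultaneous failures, since a third and fourth failure are only tolerated if they land in the other span.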

That’s hundreds of thousands of years before the probability of losing the entire array kicks in (because more than 2 HDDs failed).
The actual rebuild time of the array is going to be ~1 day.

Hardware RAID is going to rebuild the array (populate a fresh drive that is replacing a failed drive) as fast as the HDD can be completely filled, which at 200MB/s on a 16TB drive is about 22 hours.
(This is in contrast to certain other high-reliability software RAID setups that must effectively do random seeks for the entire rebuild process, running at a tenth to a hundredth of sequential speed while thrashing and stressing the remaining HDDs. I think this is why there is a lot of concern about rebuild times on the internet: people are calibrated to that particular software RAID resilver behavior.)
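The sequential-rebuild arithmetic above is easy to sanity-check (a sketch using the 200MB/s and 16TB figures from the post, assuming decimal units, i.e. 1 TB = 10^12 bytes):

```python
# A hardware RAID rebuild streams sequentially onto the replacement
# drive, so: fill time = drive capacity / sequential write speed.

DRIVE_TB = 16     # replacement drive size in TB (1 TB = 1e12 bytes)
SEQ_MB_S = 200    # sustained sequential write in MB/s (1 MB = 1e6 bytes)

seconds = DRIVE_TB * 1e12 / (SEQ_MB_S * 1e6)
hours = seconds / 3600
print(f"{hours:.1f} hours")  # ~22 hours, i.e. roughly a day

# A resilver doing mostly random I/O at, say, one tenth of sequential
# speed would take ~10x longer on the same drive:
days_random = hours * 10 / 24
print(f"{days_random:.1f} days at one tenth of sequential speed")
```

That tenfold slowdown is the optimistic end of the “tenth to a hundredth” range quoted above, which is where multi-week rebuild horror stories come from.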

OK, I get it now :smiley:

I don’t put too much stock in those RAID reliability calculators; they never account for realistic failure modes.

Yeah, that’s why I wanted to have 4 disks for rebuild. But even if those calcs are blown out or incorrect, I think I should be fine. I mean, 4 drives dying at once… I hope not :sweat_smile:

I’m changing my mind on my case choice. I might go with the Thermaltake W200; with its modularity and the ability to add further cavities, I could expand it into a network-accessible NAS-type system with a router built into the unit. I’m still undecided on the case. One thing I really liked about the Enthoo 2 was its vertical GPU support out of the box. I have to do some more research into this new case and its options.

This topic was automatically closed 273 days after the last reply. New replies are no longer allowed.