Most reliable SATA/RAID controller for consumer platforms

Hey gang,

What is the most reliable SATA controller available for consumer platforms (AM5, LGA1700, etc.) with a nice array of SATA ports, like 6+? And whether it has 6 or 10 ports or whatever, can you reach full throughput on each drive simultaneously over the PCIe interface if you populate all of the ports? AND will the controller handle the data correctly over a long period of time? In other words, if you populate all of the SATA connections and read/write through the controller constantly, will your data corrupt or error?

Also, is it possible to RAID between the motherboard controller and a PCIe SATA controller? I assume you would need to use OS software to pair the drives, but I'm more interested in whether modern-day BIOSes can address and assign drives into a RAID array across different controllers, for example 6 drives off the motherboard and 6 drives off a third-party controller in RAID 0 or RAID 6.

Is this just for testing out an idea, or are you planning on using it day to day? Because if it's for data backup/storage, I would suggest getting a consumer-grade NAS (Asustor / Synology / QNAP), as those will generally be more reliable and easier to set up.

Just my $0.02 for what it’s worth!

You're kinda right, but I need the build to also have somewhat of a conventional PC use case, like being able to put a graphics card into it to play a game or whatever. It will be for data storage, with a day-to-day use case also. If I was just looking for data backup I probably would buy a NAS array, but I'm pretty sure I'm going to need it to also do some serving, and gaming too.

I really like how NAS system connections don't rely on a third-party controller but are mostly part of the PCB, and ultimately this is the best-case scenario. But I need it in a consumer case, not a rack case or NAS box; something like the Phanteks Enthoo Pro 2 / server case.

As for reliability, I think as long as I have a good controller that's capable of doing the work, it should be fine; point a fan at it or something. I'm even thinking of using a PCIe riser cable and relocating it to the back of the case once I populate the case with drive caddies, then routing power and SATA and labeling each cable for crappy 'plug and play' convenience in the future.

I'm more interested in what kind of controller I can rely on, and was hoping someone with experience, or who is doing this particular scenario, can give me advice. Also, I've never attempted a RAID across different controllers at a BIOS level, so I am unsure if it is possible, or if there will be performance issues. I understand that if my controller fails, so does my array, and I'm fine with that :crazy_face: :upside_down_face:

For me, I have used SATA add-in cards, and they have been okay, but I prefer SAS cards where I can.

Others have had problems with SATA controllers, but they cost like $20-30 new for 4 ports, maybe $80 for 8, so they're pretty inexpensive.

SAS controllers might go for $70 used, but you need breakout cables, which might be another $30 or so.

I guess the prices are not so different, and I'd rather go used SAS than new SATA.

My main NAS has a 4-port SATA card, and it runs fine in a x1 PCIe slot. It probably does bottleneck, but that's fine for a NAS.

Can you use a SAS controller for SATA drives? I don't mind, whatever the cost; it's going to be a permanent solution, so I'd rather buy it once and buy it right.

If so, do you know the best SAS controllers?

SAS controllers can do SATA and SAS drives, as it's the same connection type physically. SATA controllers cannot do SAS, as SAS requires more than just the standard SATA controller chip to access the drives.

That’s my understanding at least!

I personally go with something like this:

2 sets of cables like these:

And a controller like this, which I flash to “IT” mode, but you can buy pre-flashed:

But this is an old model, so you could get a more modern combo for a bit more:

Cables like this, which have a different connector to the ones above

Have you considered getting a used server or an HEDT build, so you can do data storage / serving / light gaming off of it?

The server / servers I have in mind:

They would allow for adding a SATA/SAS controller along with a graphics card for gaming.

Just an option to keep in mind!

A ready-made, second-hand server would make a great machine.
And a tower version is more likely to be quieter.

Please make sure there are power connectors inside if you plan on adding a GPU.

And as I said, I personally have had fine luck with SATA PCIe add-in cards, but others might have horror stories; I could just be lucky.

A ready-made server is something I considered, but current and last-gen Threadripper is very expensive, and that's more the ballpark I'm looking at in terms of 3D application performance. Sort of why I'm thinking I can keep it light, build something AM5 for now, and throw in an add-in card.

That SAS controller you're showing off has really piqued my interest. I see it's for ZFS use, and I believe that ZFS is quite demanding. I think I should look into doing ZFS, but I don't even know where to start. I have to look into this.

I think I'm going to try to find a SAS controller add-in card and go that route. Is there any brand that stands out, or will anything do, I guess? I see you linked one that had been removed from a Dell server/machine. I guess that's good thinking, as their machines have to be sold as reliable.

A SAS controller in Australia is 600 bucks… that's 400 freedom dollars, man. I see why you linked a second-hand one.


^This one looks to be way better… and it's cheaper on eBay. Is LSI any good? EDIT: I just found this controller for less than a hundred, brand new… makes me wary…


I'm guessing LSI is good… because this brand does 36 ports, and it's 123 dollars…

Man, the problem now is I have NO experience with add-in cards, or their performance and reliability by brand. I should just shop servers/NASes, and if one of them uses a specific model of SAS controller, just copy that.

I have two AMD computers with the following RAID cards:

ASR-8405 Dell ADAPTEC ASR-8405 TXCMC SAS/SATA 12Gb/s RAID Controller 0TXCMC

LSI MegaRAID 9260, I think, but it's flashed as a ServerRAID M5015.

Each computer has four 6TB Hitachi SAS drives (HUS726T6TAL5201) in a RAID 5 array.

Both have been flawless so far for at least 2 years and both the drives and cards were used when I got them. The drives are in an enclosure and they do get warm sometimes but like I say, I’ve not had a problem so far.

I just checked and there are newer RAID cards similar to mine that are less than $50. Then of course you may have to buy the cable.

If you do get a SAS card, I would recommend getting one that says "flashed to IT mode" if you are using FreeNAS/ZFS, and doing the RAID in software.

I have flashed most of my cards, and it does work, but it can be a little tricky.

But you can use a card in IR or normal RAID mode; then you will need to use one of the same type of card, and you can't mix the array with drives connected to the motherboard SATA.

(In my opinion)

MAN! After you said that I immediately started looking at my drives, and I kind of already have 1/3 of the total array I want to create. I've already purchased 4 Exos X16 16TB Seagate drives… and now I'm seeing that they are available in SAS as well! fml (maybe I'm reading it wrong and my drives will take a SAS connector? but I doubt that). I'm thinking I should try to sell the drives at a loss and buy SAS variants, solely based on my 5 minutes of googling saying that a SAS array would be regarded as more reliable, and faster in every category. But I'm reading reviews, and SAS variants of the Exos X16 perform the same, even though the interface is double the bandwidth at 12 Gb/s. Is there a benefit to having double the bandwidth to the drive? I don't think so, because it would come down to the head count and RPM, right? Unless there is some benefit to SAS 12 Gb/s.

I'm not sure why I would want this drive in the SAS variant when its mechanical 7200 RPM and cache architecture are maxed out through SATA anyway. Is SAS more reliable in any way? fml…

Now that I've kind of got an idea of what kind of controller I want/need, my idea is to buy a 12-port SAS controller (or larger, idk…) and build an array on that with 12 Exos 16TB drives (or not… I might just look into slowly backing up the data to a "cloud" or some bullshit, idk yet. RAID 6 comes to mind, but I don't trust it). But like I stated above, the array is going to be more or less a permanent solution, with the platform possibly being upgraded over time: sell an old CPU, buy a new one, change mobos even, add more RAM, change a graphics card. Simple stuff.

Here's a big question: if I put in 12 drives with a throughput of ~250 MB/s each (the Exos, not even inching toward the 6 Gb/s (~550 MB/s) interface limit), will the controller be able to sustain full data throughput for long periods of time? I understand that the total throughput of a PCIe 3.0 x4 link is around 3.5 GB/s (and ~250 MB/s x 12 = 3 GB/s), so double that if the controller is PCIe 4.0, effectively maximizing/using the full width of the PCIe 3.0 bus. BUT will the controller's CPU and architecture be able to sustain the work long term? This is the thinking that brought me to question the brand/model of a SATA/SAS controller. I need something made of iron.

Fuck, man, I'm so keen to get the SAS variants of the drive and be done with this project for at least a decade, with a really nice controller that I never have to cry about.

What's the benefit of "flashing" the card? Are you changing its file structure / access protocol to something that better suits a volume instead of an array?

And I would imagine flashing a BIOS would be a 'community' offering. Is there good support for moving away from the consumer operation?

IT mode passes the drives through, so the computer can access and control the drives.

In RAID mode / IR mode / normal mode, the card controls the drives, and you might have to format a drive with the SAS card and pass that through to the OS, either on its own or as part of an array.
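
A quick illustration of what "passed through" means in practice, assuming a Linux host (the sysfs paths are standard, but treat this as a sketch rather than anything card-specific):

```python
# List the block devices the kernel sees directly. With an HBA in IT mode,
# each physical disk shows up here as its own /dev/sdX, exactly like a drive
# on the motherboard SATA ports.
from pathlib import Path

for dev in sorted(Path("/sys/block").iterdir()):
    model_file = dev / "device" / "model"
    if model_file.exists():                 # skips loop/ram pseudo-devices
        print(f"/dev/{dev.name}: {model_file.read_text().strip()}")

# In IR/RAID mode you would instead see one virtual disk per array,
# with the member drives hidden behind the card.
```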

Also, the SAS cards are sometimes PCIe 2.0 x8,

So the throughput may be limited to that.

That is still a lot for spinning rust; you would need SSDs to saturate it, I think.

PCIe 2.0 x8 is ~3.5 GB/s net. That's a lot for HDDs… 16 drives, no problem.
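
If you want to sanity-check that, here's a rough back-of-the-envelope sketch in Python; the per-drive figure and the usable-link bandwidths are approximations, not measured numbers:

```python
# Does N spinning drives' worth of sequential throughput saturate the PCIe link?
DRIVE_MBPS = 250          # approx. sustained sequential speed of one Exos X16 (MB/s)
NUM_DRIVES = 12

# Approximate usable bandwidth per link after protocol overhead (MB/s).
LINKS = {
    "PCIe 2.0 x8": 3500,
    "PCIe 3.0 x4": 3500,
    "PCIe 3.0 x8": 7000,
    "PCIe 4.0 x4": 7000,
}

aggregate = DRIVE_MBPS * NUM_DRIVES   # 3000 MB/s for 12 drives
for link, usable in LINKS.items():
    verdict = "fits" if aggregate <= usable else "bottlenecks"
    print(f"{link}: {aggregate} MB/s of disks vs ~{usable} MB/s usable -> {verdict}")
```

So even a PCIe 2.0 x8 card has headroom for 12 of those drives at full sequential speed; the link only becomes a problem once you start talking SSDs.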

God damn it, I'm out of my depth. What do you mean, SSDs to saturate it? Are you saying I'm going to need SSDs to act as a 'cache' so the data can be buffered before it works its way onto the drives? And if that is the case!!! How the hell do I assign SSDs as caches… please teach me more.

I understand this. I had a small-time 4x SATA card back on socket 1366, and I would have to boot into the card to create an array. But it auto-passed the drives through to the OS if I wasn't doing anything crazy. Is software RAID easier to work with? I always thought RAID on the card would be more reliable or better, but I'm seeing a lot of people praising ZFS, and ZFS is software…

I'm really happy with how much knowledge and maturity this thread has grown into, compared to the starting post.

You do not need SSDs.

I would not worry about 6-10 HDDs on a PCIe 2.0 x8 card, even plugged into an x4 slot.

If you use a RAID card in IR/normal RAID mode, you can use the on-card RAID to configure the array.
(Reboot into the card's BIOS, the same as your old SATA card.)

If you use IT mode, then you can use the OS to create & manage the array.
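
For the "OS creates & manages the array" route, here is a minimal sketch of what that looks like with ZFS (raidz2 is roughly ZFS's equivalent of RAID 6; the by-id paths below are made-up placeholders). Because the OS builds the pool out of plain block devices, it doesn't care whether a disk hangs off the motherboard SATA ports or off the HBA, which is also the practical answer to mixing controllers in one array:

```python
# Minimal sketch: build a RAID 6-style ZFS pool (raidz2) from disks that sit
# on different controllers. ZFS only sees block devices, not controllers.
import subprocess

# Placeholder /dev/disk/by-id paths -- use stable IDs rather than /dev/sdX so
# the pool survives moving drives between controllers or slots.
motherboard_sata = [
    "/dev/disk/by-id/ata-EXOS_X16_serial1",
    "/dev/disk/by-id/ata-EXOS_X16_serial2",
]
hba_it_mode = [
    "/dev/disk/by-id/ata-EXOS_X16_serial3",
    "/dev/disk/by-id/ata-EXOS_X16_serial4",
]

# zpool create <pool> raidz2 <disks...>  -> two-drive redundancy, like RAID 6.
subprocess.run(
    ["zpool", "create", "tank", "raidz2", *motherboard_sata, *hba_it_mode],
    check=True,
)
```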