Best, most reliable SATA/RAID controller for consumer platforms

cool, can you tell me more about SSD saturation anyway? would i create an array with software on an IT-flashed card, and in that software assign an SSD array as cache?


I'll rephrase

SAS SSDs can do a max throughput of like 550MB/s both ways at the same time. HDDs do about half that.

If PCIe 2.0 x8 can do 3500MB/s, then 16 HDDs running at the same time can go at almost full speed. It would only take about 7 SSDs running at once to make a 2.0 x8 link the bottleneck.
So a 2.0 x8 might not run 16 SSDs at full speed at the same time, but a 3.0 x8 would allow for more SSDs.
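
To put numbers on it, here's a quick back-of-envelope sketch using the ballpark figures from this thread (3500MB/s for a real-world PCIe 2.0 x8 link, 550MB/s per SAS SSD), not spec-sheet maximums:

```python
# Rough check: how many drives can run flat out before the HBA's PCIe
# uplink becomes the bottleneck. All numbers are the thread's ballparks.
def max_drives_at_full_speed(link_mb_s: float, per_drive_mb_s: float) -> int:
    """Largest whole number of drives the link can feed at full speed."""
    return int(link_mb_s // per_drive_mb_s)

pcie2_x8 = 3500   # MB/s, real-world PCIe 2.0 x8 as quoted above
sas_ssd  = 550    # MB/s per SAS SSD (6Gb/s era)
hdd      = 275    # MB/s per HDD, "about half that"

print(max_drives_at_full_speed(pcie2_x8, sas_ssd))  # 6, so ~7 SSDs saturate it
print(max_drives_at_full_speed(pcie2_x8, hdd))      # 12 at full tilt; 16 HDDs
                                                    # would each run a bit slower
```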

Caching can get complicated, and I would not worry about it till later.

If you use ZFS, it is quite straightforward to add. I am not sure about RAID card caching.
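
For what it's worth, adding an SSD read cache to an existing ZFS pool really is a one-liner. A hedged sketch, assuming a pool named "tank" and placeholder device paths (substitute your own):

```shell
# Hypothetical pool/device names -- adjust for your system.
# Add an SSD as L2ARC (read cache) to pool "tank":
zpool add tank cache /dev/disk/by-id/ssd-example
# Optionally add a mirrored SLOG for synchronous writes:
zpool add tank log mirror /dev/disk/by-id/ssd-a /dev/disk/by-id/ssd-b
# Confirm the cache/log vdevs show up in the pool layout:
zpool iostat -v tank
```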

sounds ez pz: buy a 12-port PCIe 4.0 x4 card, upgrade to SAS drives, move onto ZFS, add a small (10% maybe) SSD cache array in front of the Exos drives.

and lastly cross your fingers.


I'd recommend getting a QNAP or Synology NAS

haha, it sounds crazy hey. but if i get a NAS im going to be in the same boat: learning the crap, assigning stuff, idk.

ill look deeper into getting a NAS system. but i think doing this myself would give better overall performance, ease of use as well, more easily replaceable components, and a wider bus of inputs by going with AM5 rather than whatever cheapy NAS platform they chuck in. i can also upgrade it to double or more my current requirements if i do it myself. with a NAS im gonna pay out the nose and get the same result, except the server will also be lesser in performance and ill be trapped into storing data only.


HDDs do have their own hardware cache that can contribute to "burst" speed; for example I'm running 30 MG08 HDDs and each one has 512MB of DRAM cache on board, that is 15GB of cache that can be quickly filled at much faster speeds than what the HDDs' platters can sustain. The MG08s are SAS 12Gb/s and can take advantage of the extra bandwidth under certain workloads.
SAS does have some other benefits over SATA, mainly a deeper queue depth, the option to run dual port, increased signal integrity, and the fact that it is still actively being developed and newer, faster generations of it continue to come out; typically falling in between the newest and last generation of PCIe in speed per lane (but is actually reliable because it signals with higher voltages).

SAS SSDs are very fast; the current generation of SAS SSDs will do 4.5GB/s in dual port configuration. The next generation of SAS is coming fairly soon and will double that number.
You are probably talking about SAS SSDs connected to a slower LSI 2000-series controller, which would limit their speed to ~550MB/s, which is correct; I just wanted to make the distinction here for OP.

Keep in mind that if you buy an LSI 2000 series RAID/HBA card you will be limiting yourself to PCIe 2.0, so if you put the card in a PCIe 3.0 x4 slot, the card will only run at PCIe 2.0 x4 speeds, which translates to about 2GB/s; that would be a bottleneck for sequential sustained transfers on 12 HDDs.
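
For reference, the per-lane math works out roughly like this (approximate usable rates after encoding overhead; PCIe 1.0/2.0 use 8b/10b, 3.0+ use 128b/130b, so exact figures vary a little):

```python
# Approximate usable throughput per PCIe lane in MB/s, by generation.
PER_LANE_MB_S = {1: 250, 2: 500, 3: 985, 4: 1969}

def link_speed(gen: int, lanes: int) -> float:
    """Usable bandwidth of a PCIe link of the given generation and width."""
    return PER_LANE_MB_S[gen] * lanes

# A PCIe 2.0 card dropped into an x4 slot:
print(link_speed(2, 4))   # 2000 MB/s
# 12 HDDs streaming sequentially at ~270 MB/s each want:
print(12 * 270)           # 3240 MB/s -> the Gen2 x4 link is the bottleneck
```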

I'm personally not a fan of LSI based controllers anymore. The whole IT vs IR firmware flashing thing is a joke and a product of artificial market segmentation. "Unrestricted" RAID/HBA cards let you configure each port's behavior on the fly, which is absolutely necessary if you want to run LTO drives off of them or want mixed arrays.


Got any links for cards/cables?

This is the current generation of Adaptec cards, which is getting a little long in the tooth at this point; PCIe 5.0 can't come soon enough:

As for cables, Microchip seems to offer more variety than Broadcom does. The last picture in the above article lists all the cables available; Digi-Key should have them all if you search the associated part number.


Anywhere one can buy one of those?

Digi-Key is your best bet; Mouser probably has them too but their stock isn't usually as good.


This is great advice! but because im unfamiliar with this tech, i cant seem to identify the cables i would need for the Adaptec 2200 for example. are those ports for breakout cables to SAS plugs? the Adaptec does 32 drives, but i cant find the breakout cables to adapt it to 16 drives. or does this controller plug into something else?

edit: i think i get it, these cards would plug into direct-wire backplanes i would slot drives into. very very cool. i think the guy above who mentioned to get a NAS box or something is a pretty good idea. or a server style Storinator chassis i can populate with hardware and cards. just need the chassis and backplane PCBs and PSU.

You've picked an exciting time to get up to speed with HDD cabling; it has never been more complicated than it is now, not even in the SCSI days.

yes, or alternatively different varieties (U.2 or U.3) and link widths (x1, x2 or x4 PCIe lanes) of NVMe drives.
The port(s) on the back of a modern RAID/HBA card are for x8 SFF-8654 cables. The SFF-8654 cables can fan out to 8 SAS/SATA plugs and connect directly to the drives.
There are also SFF-8654 cables that plug directly into NVMe drives; and finally there are SFF-8654 cables for connecting to backplanes, but that doesn't seem very applicable here.

edit: I just realized there's a typo in the article's classification of the eight-NVMe-drive fanout cable; this chart is correct:


yeah cool, ill get to the bottom of this.

man, even Synology or QNAP boxes are expensive, blown way out of proportion…

i can build a rig into a Phanteks Enthoo Pro, with bays galore, for a FRACTION of the cost of a Synology/QNAP device. why is it so expensive?

even just a Storinator case/chassis with only a PSU and backplanes is wildly expensive.

do you guys know of a cost effective way to accomplish the chassis part? or should i just build into the Phanteks and make it nice.

Youā€™re definitely paying a premium for the software the prebuilt solutions run on.

Doesn't the Phanteks Enthoo Pro only have 6 HDD bays? If I was trying to do a cost-effective build with a bunch of HDDs, I think I'd just build a PC into a Fractal Define XL case. Instead I tried building a case from scratch, and it was most definitely not cost effective.


The Enthoo Pro 2 can fit like 12 HDDs in the cage area, and then whatever else you can fit around it. Probably another 4 next to the PSU, and u can probs stack 3-4 against the cage area on the motherboard side. Im going to build this box and I'm going to take pictures to show everyone. It's going to be epic.

FYI, with my config my disks are down to 6Gbps per disk. I believe the issue is my enclosure is 6Gbps. So everything is 12Gbps except for the enclosure.

Not sure where to look. I'm not a hardware guy. In Win 10 I can copy from my SSD to my array at 460-490MB/s. I benchmarked it a while back but don't remember the score.

I just wanted a large space with some redundancy (I ended up with 16.3TB formatted) and this was the cheapest way to do it. I let the controller decide what to do; I didn't tweak it. So a lot of the guys here might be able to get a lot more out of it.

Let us know how it goes!

I suspect that the SAS variants are more robust than consumer drives! Just guessing, but I must say the used ones I have came in working from the start.

Only advice I can give on the SAS part is that if you go for an enclosure, make sure it supports BOTH SAS and SATA. I think many out there support both. Mine supports both, but the SAS portion only runs at 6Gbps. No biggie to me.

I've decided to stay with SATA, because the drives will be easier to sell or repurpose when I replace them, with no added benefit to having an Exos 16TB in SAS. I've found a 3254UC16EXS for sale, and I'm snagging it; it's second hand but I think it will do the job. I'm gonna put it into the Phanteks Enthoo Pro 2 by creating a PCI device bay slot on the back of the case, and using a riser cable to route it back to the motherboard through the bottom where you put your power button cables. Like this I can upgrade the platform, but keep the disks and controller all plugged in and stationary.

There's a Reddit post of a guy who modded the Phanteks Enthoo Pro 2 with a disk stack all the way up that I'm going to replicate, and then further add more bays on the side for that extra "god damn" effect.

I've decided I'm gonna create a RAID 6 array through the controller, and just use Windows on the machine. I can't wait to start accumulating my gear and crying over the difficulty
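
As a sanity check on capacity: RAID 6 spends two drives' worth of space on parity and survives any two simultaneous drive failures. With, say, twelve hypothetical 16TB Exos drives (illustrative numbers, not from this thread):

```python
# RAID 6 usable capacity: (n - 2) drives' worth of space, since two
# drives' worth goes to distributed parity.
def raid6_usable_tb(num_drives: int, drive_tb: float) -> float:
    if num_drives < 4:
        raise ValueError("RAID 6 needs at least 4 drives")
    return (num_drives - 2) * drive_tb

print(raid6_usable_tb(12, 16))  # 160 TB raw (somewhat less after formatting)
```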


For the build i want to source some 48GB ECC UDIMMs, but i cannot for the life of me find any in stock. i live in australia, and im struggling to find them. IF worse comes to worst, im going to just buy some Kingston Server Premier DDR5-5600 DIMM CL46-45-45. its Hynix A-die; 4 DIMMs of this for 64GB ECC. but 2x 48GB ECC UDIMMs would really really be optimal for what i want to do honestly.

The hardest part for me was connecting and squeezing all the cables into my computer. I'm using a smaller, narrow full tower case, and I have an enclosure that uses three 5.25" bays to accommodate 5 disks. Was not easy.

If you get the correct cables for your card I think you're almost there. I don't have experience with the card you're looking at, but mine were just about plug 'n play. The case you'll be using looks huge, so your wiring is going to be easier.

As for the SATA disks, if you're getting them new, what you're doing sounds like the right way. I got my SAS disks used for cheap, so I doubt they'll be worth much on the resale market. I'd imagine about the same for SATA disks equally as old.