Is there any direct-attached RAID card that supports 8x U.2 NVMe RAID? All of my searching suggests no. "Hardware RAID is Dead and is a Bad Idea in 2022" also suggests no.
I just need a good way to let the people above me know they're trying to do something they probably shouldn't be. They bought this card, a MegaRAID 9560-8i, after seeing "Devices Supported SAS/SATA: 240, NVMe: 32" and thinking it could handle 32 U.2 NVMe drives. It looks like it can, but only at x1, and I'm struggling to see how to run them at x1 anyway.
From what I'm seeing, the x8 SFF-8654 port only splits into 2x x4 SFF-8654. Unless there's another way I'm overlooking, this isn't the right card for the job.
NVMe is 4 PCIe lanes per drive, so the max per x16 PCIe slot is 4 drives if you want full bandwidth. Anything past that requires a PCIe switch, which adds cost, and you're still limited to the bandwidth of 16 lanes back to the CPU.
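To put rough numbers on that, here's a quick back-of-the-envelope sketch, assuming PCIe 4.0 at roughly 2 GB/s of usable throughput per lane (exact figures vary with encoding and overhead):

```python
# Rough PCIe bandwidth math (PCIe 4.0, ~2 GB/s usable per lane -- approximate).
LANES_PER_NVME_DRIVE = 4
LANES_PER_SLOT = 16
GBPS_PER_LANE = 2.0  # approximate usable throughput per PCIe 4.0 lane

drives = 8
drive_bandwidth = drives * LANES_PER_NVME_DRIVE * GBPS_PER_LANE  # what the drives can push
slot_bandwidth = LANES_PER_SLOT * GBPS_PER_LANE                  # what one x16 slot can carry

print(f"{drives} drives want ~{drive_bandwidth:.0f} GB/s, "
      f"but a x16 slot tops out around {slot_bandwidth:.0f} GB/s")
# -> 8 drives want ~64 GB/s, but a x16 slot tops out around 32 GB/s
```

So even with a switch, eight drives behind one x16 slot are sharing roughly half of what they could deliver if attached directly to the CPU.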
A SmartRAID Ultra 3258p-32i would be the easiest solution, because you should be able to reuse the existing NVMe drive cages with x8 SlimSAS to x4 SlimSAS cables, but the cards are perpetually sold out; every time stock comes in, it's gone within days.
It'd only make sense to use a hardware RAID card on NVMe drives if you really, really cared about maximum performance (NVMe attached directly to the CPU is already pretty performant). If memory serves, hardware RAID was something like 50% faster than software RAID on a modern CPU; I can look for the benchmarks if anyone is interested.
A Gigabyte R182-Z93 should have PCIe 4.0 x16 risers in it, unless something is going on with the populated OCP slots?
It sounds like the hardware is already purchased… and you want to demonstrate that the hardware is a bad idea.
Can you attach the 32 drives without the card? If so, why not build it and demonstrate performance for the hardware RAID and then a software version? You don't say which OS you need, but ZFS works under Linux and BSD, and you can software-RAID drives in Windows.
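If you go that route, here's a minimal sketch of how the comparison could be scripted on Linux, assuming fio is installed and using hypothetical device paths for the hardware RAID virtual disk and an mdadm array built over the same drives (adjust everything for your system; it's a read-only sequential test, nothing destructive):

```python
# Hedged sketch: compare sequential-read throughput of a hardware RAID virtual
# disk vs. a software (mdadm) array using fio. Device paths are placeholders.
import json
import subprocess

DEVICES = {
    "hardware_raid": "/dev/sda",  # virtual disk exported by the RAID card (placeholder)
    "software_raid": "/dev/md0",  # mdadm array over the same NVMe drives (placeholder)
}

def run_fio(device: str) -> float:
    """Run a 30-second sequential read test and return throughput in MiB/s."""
    result = subprocess.run(
        [
            "fio",
            "--name=seqread",
            f"--filename={device}",
            "--rw=read",
            "--bs=1M",
            "--ioengine=libaio",
            "--iodepth=32",
            "--direct=1",
            "--runtime=30",
            "--time_based",
            "--group_reporting",
            "--output-format=json",
        ],
        capture_output=True,
        text=True,
        check=True,
    )
    data = json.loads(result.stdout)
    # fio reports bandwidth in KiB/s; convert to MiB/s for readability.
    return data["jobs"][0]["read"]["bw"] / 1024

if __name__ == "__main__":
    for label, dev in DEVICES.items():
        print(f"{label}: {run_fio(dev):.0f} MiB/s on {dev}")
```

Run it as root once with the drives behind the card and once with them attached directly and mdadm'd together, and you'll have numbers to put in front of management instead of opinions.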
More PCIe lanes to the drives will far outweigh any hardware optimisations done with fewer PCIe lanes…
Forget hardware RAID; you'll just add a bottleneck if you want to maximize performance. You're stuck with software RAID, a graid card (same bottleneck), or xiRAID. The latter seems promising, although I haven't tested it myself; it will be in the lab next week.
graid's favorable use case is contrived: mdraid will beat it in sequential reads, and hardware RAID will handily outperform it in everything but highly threaded sequential reads.
xiRAID is just a rebrand of RAIDIX due to geopolitics. It's been around for more than 15 years and has its place in the enterprise for those willing to pay for support to solve the edge cases that tend to crop up in use.
The MegaRAID 9560-8i can only handle two NVMe U.2 or U.3 drives. The 9560-16i can handle four. To handle eight drives you'd need a second 9560-16i, but then the RAID arrays would be separate, unless Broadcom supports some kind of RAID spanning across cards?
Still, it all depends on the desired use case. I just installed a system with the 9560-16i to hardware-RAID four NVMe drives; again, the use case required it. Hardware RAID is as "dead" as x86_64 is in the face of the ARM CPU uprising: we all hear how much better the alternative is, but who is actually using it? I use software RAID a lot (ZFS, LVM2, mdadm), but there are some use cases for hardware RAID no matter how loudly software RAID's superiority gets shouted from a soapbox. They still sell hardware RAID cards, right? It boils down to use case and requirements.
This is why they put it on their site: it's good for marketing. It's like the "up to 256 HDDs" claim for the older LSI series, which only holds with SAS expanders. So you'd need NVMe expanders (PCIe switches) to get to 32. Good luck finding that stuff. And with 32 NVMe drives on an x8 card… yeah, you might as well use SATA SSDs at that point.
Looks like a cheaper knockoff of the 32i Broadcom card. So there are cards out there for 8x NVMe. But CPUs run RAID code just fine, so why use controllers? The feature set on controllers is very limited.