I recently got two Kioxia CD8-R drives and am having trouble finding a PCIe x8 AIC that can support their speeds. I’ve spent a lot of time reading through various forums, but I’m even more confused now.
Could anyone recommend an AIC that can reliably handle both drives without any issues?
(Maybe this: Delock PCI Express 4.0 x8 to 2 x internal U.2 NVMe SFF-8639)
You need two things to make this happen, captain:
CPU with bifurcation support and at least PCIe 4.0 lanes
MoBo with bifurcation support and an open x8 PCIe 4.0 slot that's wired to the CPU
Both are common on server boards and rare in the consumer space.
Beyond that, anything supporting PCIe 4.0 breakout will work fine, as the hard stuff happens on the MoBo and CPU.
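If you want to double-check after installing, here's a minimal Python sketch (assuming Linux and that the drives enumerate as nvme0/nvme1) that prints the link speed and width each SSD actually negotiated; a CD8-R in a properly bifurcated Gen4 slot should show a 16 GT/s link at x4:

```python
#!/usr/bin/env python3
# Rough sketch: print the negotiated PCIe link speed/width of each NVMe drive.
# Assumes Linux; the paths below are standard sysfs attributes.
from pathlib import Path

for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    dev = ctrl / "device"  # the PCI device behind the NVMe controller
    try:
        speed = (dev / "current_link_speed").read_text().strip()
        width = (dev / "current_link_width").read_text().strip()
    except FileNotFoundError:
        continue  # fabrics/virtual controllers have no PCI link attributes
    # A Gen4 x4 drive should report a 16 GT/s link and a width of 4.
    print(f"{ctrl.name}: {speed}, x{width}")
```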
Same motherboard as me. It supports bifurcation, so the inexpensive card you referenced above will work. I used a similar model on that motherboard with an Intel U.2 PCIe 4.0 drive.
I later got the MCIO to U.2 adapter cable so that I wouldn't consume the slot.
My drive is PCIe 4.0 and worked without redrivers. The MCIO connectors are much closer to the CPU socket and the signal crosses fewer traces, so it's cleaner.
U.3 devices will work in U.2 slots. I was looking into that when I got my U.2 drive. U.3 has some hot-plug features that are absent on U.2; however, I am not using a backplane that would support hot-plugging. I don't think the hot-plug feature is a motherboard PCIe feature, but rather a SAS4 feature.
@MikeGrok, what read/write speeds are you getting with that cable? I'm using the exact same one, but I get unusually slow read speeds (750 MB/s) with my Micron 6500 ION U.3 drive connected to it, while the write speeds are fine.
I also have an SP5 AMD EPYC build, so a similar setup to OP's.
Be sure to enable PCIe AER (Advanced Error Reporting) in BIOS and check your operating system’s logs before and after doing a benchmark with the drives.
Have you also checked that the drives have the latest firmware version installed?
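For reference, this is roughly how I'd check both on Linux before and after the benchmark (a sketch assuming nvme-cli is installed and run as root; /dev/nvme0 is just a placeholder for whichever controller the CD8-R shows up as):

```python
#!/usr/bin/env python3
# Sketch: dump AER-related kernel messages and the drive's firmware revision.
# Assumes Linux with nvme-cli installed; /dev/nvme0 is a placeholder.
import subprocess

# Kernel messages mentioning AER (corrected/uncorrected PCIe errors land here).
log = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
aer_lines = [l for l in log.splitlines() if "AER" in l]
print("\n".join(aer_lines) or "no AER messages in the kernel log")

# Firmware revision as reported by the controller ("fr" field of Identify Controller).
idctrl = subprocess.run(["nvme", "id-ctrl", "/dev/nvme0"],
                        capture_output=True, text=True).stdout
print([l for l in idctrl.splitlines() if l.startswith("fr ")])
```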
My gut feeling says that the PCIe x8-to-2x U.2 adapter in use is trash; there is sooo much of it out there.
The only ones that worked in my tests at PCIe Gen4 without introducing any PCIe bus errors are the newer Delock models (model numbers 90xxx); the older ones had issues despite claiming to support PCIe Gen4.
Before buying anything else, check the PCIe AER settings mentioned above.
Since you're running an EPYC CPU and are using PCIe lanes coming directly from the CPU, PCIe AER should work properly and help you diagnose things (on AMD systems, PCIe AER doesn't work with PCIe lanes coming from the chipset instead of the CPU).
PCIe adapters can introduce signal issues that force the PCIe link to step in and retry transfers (basically the same as lost packets on a flaky network connection), which results in lower usable performance.
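One way to catch this without digging through logs: newer Linux kernels expose per-device AER counters in sysfs, so you can snapshot them before and after a benchmark run and see whether the correctable count climbs. A rough sketch, with a placeholder PCI address you'd replace with your SSD's (taken from lspci):

```python
#!/usr/bin/env python3
# Sketch: read the AER error counters of one PCIe device before/after a benchmark.
# 0000:41:00.0 is a placeholder PCI address -- replace with your SSD's (see lspci).
from pathlib import Path

dev = Path("/sys/bus/pci/devices/0000:41:00.0")
for counter in ("aer_dev_correctable", "aer_dev_nonfatal", "aer_dev_fatal"):
    f = dev / counter
    if f.exists():  # only present when AER is active for this device
        print(f"{counter}:\n{f.read_text()}")
# Correctable counts rising during a benchmark = link retries eating bandwidth.
```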
But it could also be that the SSD's firmware isn't optimized for this kind of sequential workload and only reaches maximum throughput with many transfers running in parallel.
If you don't have any PCIe bus errors in your log, I'd boot Windows and benchmark the drives with CrystalDiskMark to have a "clean" reference point.
Another user had a similar issue with enterprise gear, and it turned out to be a suboptimal configuration in Linux, not a PCIe adapter issue; the SSDs were completely fine under Windows.
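If you also want a comparable number on the Linux side, a plain sequential read with fio is roughly equivalent to CrystalDiskMark's SEQ1M test. A sketch (device path, block size, and queue depth are assumptions; --readonly keeps it from writing to the drive):

```python
#!/usr/bin/env python3
# Sketch: a sequential-read baseline with fio, roughly comparable to
# CrystalDiskMark's SEQ1M result. /dev/nvme0n1 is an assumption -- point it
# at the right namespace.
import subprocess

subprocess.run([
    "fio", "--name=seqread", "--filename=/dev/nvme0n1", "--readonly",
    "--rw=read", "--bs=1M", "--iodepth=8", "--ioengine=libaio",
    "--direct=1", "--runtime=30", "--time_based",
], check=True)
```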
I don’t see any reason why you couldn’t go with PCIe Gen4 NVMe SSDs.
These days I'd only get SAS SSDs IF I had to use a legacy backplane for some reason… Even PCIe Gen3 NVMe SSDs (32 Gb/s, or around 3,700 MB/s in practice) are faster than the fastest current SAS SSDs (24 Gb/s).
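For anyone who wants to sanity-check those numbers, here's the quick line-rate math (raw link rates only; real-world throughput is lower on both sides once protocol overhead is added):

```python
#!/usr/bin/env python3
# Quick sanity check of the link rates quoted above.

# PCIe Gen3 x4: 8 GT/s per lane with 128b/130b encoding
pcie_gen3_x4 = 4 * 8e9 * 128 / 130 / 8        # bytes per second
print(f"PCIe Gen3 x4 : {pcie_gen3_x4 / 1e9:.2f} GB/s")   # ~3.94 GB/s

# SAS-4 ("24G"): a single 24 Gb/s link, before encoding overhead
sas4_lane = 24e9 / 8
print(f"SAS-4 x1     : {sas4_lane / 1e9:.2f} GB/s")       # ~3.00 GB/s raw
```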
The Broadcom HBA 9500 is okay with the latest firmware version, but it is a Tri-Mode HBA, not just a PCIe switch.
Even the most current Tri-Mode HBAs lose a tiny bit of performance compared to NVMe SSDs connected directly to CPU PCIe lanes.
My personal reason for getting a PCIe switch (the Broadcom P411W-32P is where the suffering began) was to run four to eight U.2 NVMe SSDs with only 8 PCIe Gen4 lanes available, and to be able to swap SSDs without rebooting the system (Windows). I don't use all connected SSDs simultaneously, so I don't mind the bottleneck of the PCIe Gen4 x8 host interface.
I also like to use S3 sleep (suspend to RAM) during inactive periods to avoid unnecessary heat, power consumption, and noise. The Broadcom HBAs (9400-8i8e, 9500-16i, and P411W-32P) are the only pieces of hardware that have ever caused complete system crashes for me, and it turned out to be firmware bugs on Broadcom's side: the 9500-16i was fixed with firmware P28 and later, while the 9400-8i8e will never be fixed since Broadcom has moved it to EOL in the meantime.
The P411W-32P cannot be updated in my case since Broadcom killed the ability to connect SSDs directly to it: with the current firmware you have to use an active UBM backplane, and simple backplanes like the Icy Dock V3 won't do anymore.
That's what led me to the Adaptec HBA Ultra 1200p-32i as a replacement for the P411W-32P; the Adaptec has never crashed for me.