Kioxia CD8-R PCIe x8 AIC (Gen 4)

Hello,

I recently got two Kioxia CD8-R drives and am having trouble finding a PCIe x8 AIC that can support their speeds. I’ve spent a lot of time reading through various forums, but I’m even more confused now.

Could anyone recommend an AIC that can reliably handle both drives without any issues?

(Maybe this: Delock PCI Express 4.0 x8 to 2 x internal U.2 NVMe SFF-8639)

Thank you!

You need 2 things to make this happen, captain:
A CPU with bifurcation support and at least PCIe 4.0 lanes
A MoBo with bifurcation support and an open PCIe 4.0 x8 slot that’s wired to the CPU

Both are common on server boards and rare in the consumer space.

Beyond that, anything supporting PCIe 4.0 breakout will work fine, since the hard stuff happens on the MoBo and CPU.

should work fine
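
Once the card and drives are installed, each CD8-R behind a correctly bifurcated PCIe 4.0 x8 slot should show up as its own endpoint with a x4 link at 16 GT/s. A minimal sketch for checking that, assuming Linux with sysfs (it just walks every NVMe controller, nothing to adjust):

```python
#!/usr/bin/env python3
"""Minimal sketch: verify each NVMe drive trained at the expected PCIe link.

Assumes Linux with sysfs. Behind a correctly bifurcated PCIe 4.0 x8 slot,
each CD8-R should report a x4 link at 16.0 GT/s.
"""
from pathlib import Path

for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    pci_dev = (ctrl / "device").resolve()  # PCIe endpoint behind this controller
    speed = (pci_dev / "current_link_speed").read_text().strip()
    width = (pci_dev / "current_link_width").read_text().strip()
    max_speed = (pci_dev / "max_link_speed").read_text().strip()
    max_width = (pci_dev / "max_link_width").read_text().strip()
    print(f"{ctrl.name} ({pci_dev.name}): x{width} @ {speed} "
          f"(device max: x{max_width} @ {max_speed})")
```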

1 Like

Thank you!

I managed to order the one below; please let me know if it could be problematic.

GLOTRENDS PU21 Dual U.2 SSD to PCIe 4.0 X8 Adapter, Supports 2 x U.2 SSD

if it fits, should be fine

1 Like

How can I check if the hardware or software is correcting errors when communicating with the SSDs?

What motherboard will this go into? And which CPU are you using?

We need that info in order to know whether your board supports bifurcation. If your board doesn’t do bifurcation, we would recommend different hardware.

1 Like

Motherboard: Supermicro H13SSL-NT
CPU: AMD Epyc 9174F

Thank you!

Same motherboard as mine. It supports bifurcation, so the inexpensive card you referenced above will work. I used a similar model on that motherboard with an Intel U.2 PCIe 4.0 drive.

I later got an MCIO-to-U.2 adapter cable so that I wouldn’t consume the slot.

1 Like

Thank you so much for this! I’ve been looking for one of these but couldn’t find it in my area.

Does it have integrated re-drivers? (Or do you know of any similar options with re-drivers?)

Is the Molex connector safe?

What drives are you connecting via this cable? Do they work well in terms of error correction and performance?

Does the cable get hot?

Has anyone tried using the MCIO connector to U.3 with this card (or a similar card):

HighPoint Rocket 1628A? (I believe it is an HBA.)

My drive is PCIe 4.0 and worked without redrivers. The MCIO connectors are much closer to the CPU socket and don’t cross as many traces, so the signal is cleaner.

U.3 devices will work in U.2 slots. I was looking at that when I got my U.2 drive. U.3 has some hot-plug features that are absent on U.2; however, I am not using a backplane that would support hot plugging. I don’t think the hot-plug feature is a motherboard PCIe feature, but a SAS-4 feature.

2 Likes

@MikeGrok, what read/write speeds are you getting with that cable? I’m using the exact same one, but I’m getting unusually slow read speeds (750 MB/s) with my Micron 6500 ION U.3 drive connected through it; the write speeds are fine.

I also have an SP5 AMD EPYC build, so a similar setup to the OP’s.

1 Like

Thank you very much for sharing this!

I’m currently a bit confused about which setup would be the safest:

  1. The AIC in the PCIe slot.
  2. The MCIO cable.
  3. An HBA (Gen 4x16) with cables connected to the drives.

I just want to ensure the drives are in the best possible environment I can provide for them.

I have some stuff to do for the next week or so, but I can probably test after the 7th of October.

2 Likes

Thank you!

Be sure to enable PCIe AER (Advanced Error Reporting) in BIOS and check your operating system’s logs before and after doing a benchmark with the drives.

Have you also checked that the drives have the latest firmware version installed?
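
To make that concrete (and to answer the earlier question about how to tell whether errors are being corrected): with AER enabled, corrected and uncorrected link errors end up in the kernel log, and the controller firmware revision is exposed in sysfs. A minimal sketch, assuming Linux; the exact AER log wording varies between kernel versions:

```python
#!/usr/bin/env python3
"""Minimal sketch: snapshot NVMe firmware revisions and AER kernel messages.

Assumes Linux (dmesg may require root on some distros) and that PCIe AER is
enabled in the BIOS. Run once before and once after a benchmark and compare.
"""
import subprocess
from pathlib import Path

# Firmware revision of every NVMe controller, straight from sysfs
for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    model = (ctrl / "model").read_text().strip()
    firmware = (ctrl / "firmware_rev").read_text().strip()
    print(f"{ctrl.name}: {model} (firmware {firmware})")

# Kernel messages mentioning AER or corrected errors logged so far
dmesg = subprocess.run(["dmesg"], capture_output=True, text=True, check=True).stdout
aer_lines = [line for line in dmesg.splitlines()
             if "AER" in line or "Corrected error" in line or "BadTLP" in line]
print(f"\n{len(aer_lines)} AER-related kernel messages")
for line in aer_lines:
    print(line)
```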

My gut feeling says that the PCIe x8-to-2x-U.2 adapter being used is trash; there is sooo much of it out there.

The only ones that worked in my tests at PCIe Gen4 without introducing any PCIe bus errors are the newer Delock models (model numbers 90xxx); the older ones had issues despite saying they supported PCIe Gen4.

https://www.delock.de/produkte/suchen.html?setLanguage=en

2 Likes

Thank you very much!

Would something like this work? Item No. 90162:
https://www.delock.de/produkt/90162/merkmale.html?g=1767

Also, would an HBA + Cable be a better choice compared to an MCIO cable?

Before buying anything else, check the PCIe AER settings mentioned above.

Since you’re running an EPYC CPU and are using PCIe lanes coming directly from the CPU, PCIe AER should work properly and help you diagnose things (on AMD systems, PCIe AER doesn’t work with PCIe lanes coming from the chipset instead of the CPU).

PCIe adapters can introduce signal issues that force the PCIe link layer to step in and retry transfers (basically the same as lost packets on a flaky network connection), which results in lower usable performance.
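
If you’d rather look at counters than grep logs, newer kernels also expose per-device AER statistics in sysfs; correctable counts (BadTLP/BadDLLP) that climb during a benchmark are exactly those retried transfers. A minimal sketch, assuming Linux with AER enabled (the aer_dev_* files only exist when the kernel owns AER for the device):

```python
#!/usr/bin/env python3
"""Minimal sketch: dump per-device AER error counters for every NVMe endpoint.

Assumes Linux with PCIe AER enabled. Read the counters before and after a
benchmark; growing correctable counts mean the link is retrying transfers.
"""
from pathlib import Path

for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    pci_dev = (ctrl / "device").resolve()  # PCIe endpoint behind this controller
    print(f"--- {ctrl.name} ({pci_dev.name}) ---")
    for name in ("aer_dev_correctable", "aer_dev_nonfatal", "aer_dev_fatal"):
        counters = pci_dev / name
        if counters.exists():
            print(f"{name}:")
            print(counters.read_text().rstrip())
        else:
            print(f"{name}: not exposed (AER disabled or not owned by the kernel)")
```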

But it could also be that the SSD’s firmware isn’t optimized for this kind of sequential workload and only reaches maximum throughput when doing many transfers in parallel.

If you don’t have any PCIe bus errors in your log, I’d boot Windows and benchmark the drives with CrystalDiskMark to have a “clean” reference point.

Another user had a similar issue with enterprise gear, and it turned out to be a suboptimal configuration in Linux, not a PCIe adapter issue; the SSDs were completely fine under Windows.

PCIe NVMe HBAs open a whole other can of worms (refer to my other thread, which has evolved over the years and is still ongoing: A Neverending Story: PCIe 3.0/4.0/5.0 Bifurcation, Adapters, Switches, HBAs, Cables, NVMe Backplanes, Risers & Extensions - The Good, the Bad & the Ugly).

Cables are much harder on signal integrity than a quality PCB.

2 Likes

Thank you!

I read through the entire thread… now I feel like I’ve opened Pandora’s box by getting these SSDs. (Kioxia CD8-R)

Lesson learned: from now on, I’ll go for Kioxia PM7 - SAS.

How do you feel about the Adaptec 1200 HBA compared to the Broadcom 9500?

I also have a Broadcom 9600; will it perform better than the AIC if paired with the Icy Dock B-V3? (I got one of those as well.)

I’m sorry, I just realized I got my wires crossed; all my responses were directed at @TackoTooTallFall, who mentioned having performance issues.

@ALin

  • I don’t see any reason why you couldn’t go with PCIe Gen4 NVMe SSDs.

  • I’d only get SAS SSDs in the present IF I had to use a legacy backplane for some reason… Even PCIe Gen3 NVMe SSDs (32 Gb/s or around 3,700 MB/s) are faster than the fastest current SAS SSDs (24 Gb/s); see the quick math after this list.

  • The Broadcom HBA 9500 is okay with the latest firmware version, but it is a Tri-Mode HBA, not just a PCIe switch.

  • The most current Tri-Mode HBAs lose a tiny bit of performance compared to using NVMe SSDs directly connected to CPU PCIe.

  • My personal reason to get a PCIe switch (the Broadcom P411W-32P began the suffering) was to be able to use four to eight U.2 NVMe SSDs while only having 8 PCIe Gen4 lanes available, and to be able to swap SSDs without having to reboot the system (Windows). I don’t use all connected SSDs simultaneously, so I don’t mind the bottleneck from the PCIe Gen4 x8 host interface.

  • I also like to use S3 sleep (suspend to memory) during inactive periods to avoid unnecessary heat, power consumption, and noise. The Broadcom HBAs (9400-8i8e, 9500-16i, and P411W-32P) are the only pieces of hardware that ever caused complete system crashes for me, and it turned out to be firmware bugs by Broadcom: the 9500-16i got fixed with firmware P28 and later, while the 9400-8i8e will never be fixed since Broadcom has since moved it to EOL.

  • The P411W-32P cannot be updated in my circumstances since Broadcom killed the ability to connect SSDs directly to it; with the current firmware you have to use an active UBM backplane, and simple backplanes like the Icy Dock V3 won’t do anymore.

  • That’s what led me to the Adaptec HBA Ultra 1200p-32i as a replacement for the P411W-32P; the Adaptec has never crashed for me.
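
The quick math behind the SAS-vs-NVMe bullet above, if I have the line rates and encodings right (these are interface ceilings; real drives land somewhat below them):

```latex
\underbrace{4 \times 8\,\text{GT/s} \times \tfrac{128}{130}}_{\text{PCIe 3.0 x4}} \approx 31.5\,\text{Gb/s} \approx 3.9\,\text{GB/s}
\qquad
\underbrace{22.5\,\text{Gb/s} \times \tfrac{128}{150}}_{\text{SAS-4 ``24G'', one lane}} \approx 19.2\,\text{Gb/s} \approx 2.4\,\text{GB/s}
```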

2 Likes