U.3 is built on the U.2 spec and uses the same SFF-8639 connector. It is a ‘tri-mode’ standard, combining SAS, SATA and NVMe support into a single controller. U.3 can also support hot-swap between the different drives where firmware support is available. U.3 drives are still backward compatible with U.2, but U.2 drives are not compatible with U.3 hosts.
Thanks for your reply, I appreciate it very much.
I have another related question.
I’m still waiting for my Gen 5 U.2 cables, but I have a Gen 4 cable that works with the Optane P5800X.
I plugged the Gen 5 Kioxia CM7-R into the Gen 4 U.2 to M.2 cable adapter, and the drive doesn’t show up in the BIOS or in Windows Disk Management.
There’s no obvious reason why the drive shouldn’t be detected; PCIe is backwards compatible. I assume the system tries to negotiate Gen 5 speed, fails, and somehow gives up.
You may have a BIOS/UEFI setting that controls the link speed of your M.2 slot. Try setting it to Gen 3 or Gen 4 instead of Auto.
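If you can boot a Linux live USB for troubleshooting, a quick sanity check like the sketch below (my own, nothing official) reads the negotiated vs. maximum PCIe link speed and width for every NVMe controller the system did enumerate, straight from sysfs. That’s handy for catching a drive that trained at x1 or fell back to a lower generation rather than vanishing outright:

```python
# Rough sketch: list negotiated vs. maximum PCIe link for NVMe controllers via Linux sysfs.
# 8.0 GT/s = Gen 3, 16.0 GT/s = Gen 4, 32.0 GT/s = Gen 5.
from pathlib import Path

def read_attr(dev: Path, name: str) -> str:
    """Read one sysfs attribute, or 'n/a' if the kernel doesn't expose it."""
    p = dev / name
    return p.read_text().strip() if p.exists() else "n/a"

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    # PCI class 0x0108xx = non-volatile memory (NVMe) controller
    if not read_attr(dev, "class").startswith("0x0108"):
        continue
    print(f"{dev.name}: "
          f"x{read_attr(dev, 'current_link_width')} @ {read_attr(dev, 'current_link_speed')} "
          f"(max x{read_attr(dev, 'max_link_width')} @ {read_attr(dev, 'max_link_speed')})")
```

A drive that doesn’t show up here at all never made it through link training, which points back at the cable/redriver rather than the OS.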
My guess is cabling and compatibility with the redriver. I’m still compiling my results… but I also had no luck using the Serial Cables M.2 to MCIO redriver with the Serial Cables MCIO to U.3 cable. I even carefully read the manual for the redriver and tried several different configurations of the DIP switches.
My early suspicion… is that Kioxia drives are particularly finicky about the precise pinout a U.2/U.3 cable is using. I’ve had all kinds of weird issues, even with fancy Gen 5/Gen 6 cables… like a CM7-R syncing at Gen 5, but only with an x1 link? I did eventually get it working at x4 Gen 5 with the on-board MCIO port, but it was trial and error to get there.
For context, I basically purchased every U.2/U.3 cable available off Amazon, eBay, and the Serial Cables site. Extremely mixed results. I’m in for ~1.5K worth of cabling/adapters/redrivers at this point, so I figured that once all is said and done, the least I could do is a full write-up of my results… hopefully in the next week or two.
Oh, and because I’m a crazy person, I even grabbed the new Gen5 x16 to 4x MCIO-8X card from Broadcom (I mean… HighPoint Rocket 7628A). Just to eliminate the motherboard itself as a factor. Still experimenting with it.
tl;dr: As everyone else has already said on the forum, and Wendell on YouTube, Gen 5 is tricky AF to get working correctly with enterprise drives. The only properly validated solution is a real server chassis with a backplane. That said… it can be done? Just expect pain along the way.
I’m going so crazy. I watched the holy grail video and ordered the P5800X, but I ordered that cable you said, the PCI5-39MU3x4-EDSFF-0.5M, rather than the PCI4-39MU3x4-EDSFF-0.5M shown in the video. The cable arrived, and I’m not the most tech savvy, but look! It doesn’t have SATA power for the part that plugs into the P5800X!!! What do I do? Is there an adapter I can plop in that has a power plug?
I am also super interested to see if this new generation of cabling works for connecting something like Optane directly to the CPU on consumer motherboards. Personally, I’m kind of put off by the whole MCIO adapter + expensive cable approach. Can they not make M.2 to U.2 adapters similar to the PCIe 3.0 ones that worked perfectly fine?
Everything that is PCIe is directly connected to the CPU. That’s why we talk about PCIe lanes; it’s cables coming out of the CPU (overgeneralized). Good old SAS cables (also expensive) did the job so far.
We’re at the edge of what conventional cabling can do. That’s why you see fiber cables and really expensive wiring lately. We got around this in the past by limiting e.g. USB, DP and Thunderbolt to very short lengths so it still kinda works.
But in a PC, bandwidth is orders of magnitude higher… cheap cables don’t work anymore, and even 50 cm can be too long. We could strap everything to a board without cables (e.g. PCIe cards and M.2), but you can’t stack 32 drives on a board. And people need 400/800 Gbit with good PCIe signal integrity, not just some lousy 40 Gbit USB cable we already have more trouble with than we really want.
Yeah, the more bandwidth you want from cables, the more expensive they get. And connectors must be designed to minimize losses at these junction points. The chosen connector for PCIe 5 is MCIO. I’ve seen boards with 14x MCIO 8i, although most have 2-5. That’s a lot of on-board connectivity for “off-board” PCIe 5 devices.
We can do 12x 30TB NVMe on an ITX board with this, today.
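Back-of-envelope on how that pencils out (my own assumptions: an x4 link per U.2/U.3 drive and 8 lanes per MCIO 8i connector, not the spec of any particular board):

```python
# Lane budget for "12x 30TB NVMe on an ITX board" under the assumptions above.
drives = 12
tb_per_drive = 30
lanes_per_drive = 4        # each U.2/U.3 drive gets an x4 link
lanes_per_mcio_8i = 8      # one MCIO 8i connector carries 8 lanes

total_lanes = drives * lanes_per_drive
mcio_ports = total_lanes // lanes_per_mcio_8i
print(f"{drives} drives = {drives * tb_per_drive} TB raw, "
      f"{total_lanes} lanes = {mcio_ports}x MCIO 8i connectors")
# -> 12 drives = 360 TB raw, 48 lanes = 6x MCIO 8i connectors
```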
And price is also high because the volume isn’t there. SATA cables have been made by the billions. MCIO cables just don’t have that demand.