Questions about an NVMe RAID card

Hello guys,

I have a GPU server at work that I wanted to virtualize with ESXi. It’s a Gigabyte G292-Z42 and should support up to 4 NVMe U.2 drives.

Stock, I had a single PM9A3 2TB U.2 SSD connected to the motherboard with a SlimSAS 4i connector.

As ESXi requires hardware RAID, I bought a RAID card (9560-8i) and 3 PM9A3 drives to build a RAID 5 array.
I also bought an SFF-8654 8i to 2x SFF-8654 4i cable.

First, I think I made a mistake, because a single SFF-8654 4i connector supports only one NVMe disk, right? I’m new to NVMe RAID, but with SATA I can have more than 2 disks on my XXXX-8i cards.
But the real problem is that no disks are detected at all: faulty ‘new’ cable, or SSD backplane?

I understand that I should buy a 9560-16i RAID card to get more than 2 disks, but even with the current setup I should see 2 disks detected.
I can’t find anything in the server manual, nor for the SSD backplane (which is referenced as CBPG083).
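In case it helps, the commands I plan to use to check what the card actually sees are something like this (a rough sketch; it assumes Broadcom’s storcli64 utility is installed and the 9560-8i enumerates as controller 0):

    storcli64 show                 # list the controllers the tool can see
    storcli64 /c0 show             # controller 0 summary, including attached drives and arrays
    storcli64 /c0/eall/sall show   # every drive slot on every enclosure/backplane behind the card

If the NVMe drives don’t show up even there, that at least points at the cable or backplane rather than the array configuration.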

My best bet would be to buy a cable (SFF-8654 8i to 4x SFF-8639 / U.2) and remove the backplane, but then I’d have to find power somewhere, as the cable has Molex connectors :frowning:

Do you guys have any advice to get me on the right track?

Thanks a lot !

Hard to say what the issue is without knowing more about the backplane.
Does the backplane have four SFF-8654 4i connectors on it (one for each NVMe slot)? How are the remaining SATA/SAS bays connected to the motherboard?

It would be non-ideal to stop using the backplane in such a tightly integrated chassis, but if you could find power from somewhere, bypassing it might be possible as a last resort.

The current paradigm for hardware RAID/HBA cards is tri-mode: a card has a fixed number of lanes, each SATA HDD uses 1 lane, each SAS drive uses 1 or 2 lanes (1 being the most common), and each NVMe drive uses 1, 2, or 4 lanes, with 4 lanes being by far the most common (so common that I’m not even sure Broadcom supports connecting NVMe with 1 or 2 lanes).
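For example, on an 8-lane card like the 9560-8i the lane budget works out roughly like this (assuming the usual x4 link per NVMe drive):

    8 lanes ÷ 1 lane per SATA drive  = up to 8 SATA drives
    8 lanes ÷ 4 lanes per NVMe drive = only 2 NVMe drives at full width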

There was a thread 2-3 weeks ago where someone linked an official 8x breakout cable for the SlimSAS 8i (AFAIR) connector on a Broadcom card. So there are official Broadcom cables to use NVMe in a single-lane config.
I don’t see x1 NVMe as very useful to me, but I’m sure there are people out there who just need massive capacity where HDDs aren’t an option. I’m not convinced hooking up 15-30TB drives via x1 is a wise move (just get a board with more lanes?)… but it’s always good to have options.


Oh, that’s right, Broadcom does have an 8x breakout for U.3; I was thinking of 1 lane of U.2 in my head for some reason.

x1 NVMe on PCIe 4.0 is still 2 GB/s; that’s not bad considering most drives will perform under that threshold when doing anything other than sequential reads/writes.
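That 2 GB/s figure is just the raw link math (rough numbers, before protocol overhead):

    PCIe 4.0: 16 GT/s per lane × 128/130 encoding ÷ 8 bits per byte ≈ 1.97 GB/s per lane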

On the backplane I can see 8 connectors; I guess 4 for SATA and 4 for NVMe. Nothing is mentioned in the server manual. I took a look at another Gigabyte server manual (G293-Z41-AAP1), and they provide a picture of the backplane, which has a different reference but looks very similar. They talk about MCIO connectors (U_2_0, U_2_1, etc.). From what I can see around the web it could work, but I can also find MCIO x8 to SFF-8654 cables, which leads me to think they are different connectors, even though SFF-8654 fits perfectly in the backplane connectors.

Regarding speeds, I used to work with SATA RAID 5 arrays of standard SSDs and got around 2000/1000 MB/s (R/W) sequential speeds. Like you said, with an x1 lane I won’t be able to get more than 2 GB/s. But if I get the SlimSAS 8i SFF-8654 to 4x SFF-8639 cable, each drive will be x2, right? And if I get a 9560-16i with two 8i to 2x 4i cables, it will be x4?
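If I have the math right, the theoretical PCIe 4.0 link ceilings per drive would be roughly:

    x1 ≈ 2 GB/s    x2 ≈ 4 GB/s    x4 ≈ 8 GB/s    (link limits only, not what a drive will actually sustain)

So even x2 per drive would already be above the ~2000/1000 MB/s I was getting from SATA RAID 5.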

Speed and storage size (I will install 3x 1TB in RAID 5, so 2TB usable) are not that important; I will try to get the best I can with this hardware.

Thanks to both of you, I really appreciate your help.

Got a picture of the back of the backplane? I’m having a hard time visualizing it.
I think the G293-Z41’s backplane is different from the G292-Z42’s; the G293’s connectors are MCIO, but I’m pretty sure the G292’s aren’t.

Correct on both.

I just saw in another thread that @ipclevel couldn’t get U.3 NVMe drives to work on the Broadcom cards with the official x8 U.3 Broadcom breakout cable. Perhaps the Broadcom cards can’t do x1 NVMe after all.


I tried one drive at a time and two at a time, plugged into different connectors, and couldn’t get them recognized by the 9670-24i card. The cheapo Divilink cable (2x U.2) did work with my U.3 drives and the 9670 card.


I’ve long suspected Broadcom doesn’t support x1 NVMe in the firmware of the card itself because it’s such a rare use case.
I’ve got an Adaptec RAID card that I’m going to try x1 NVMe on with the U.3 breakout cable once I get a U.3 drive.


Please tag me once you test this, as I want to buy their Ultra 32i card if you can make it work.

Will do. I just ordered one of the M.2 to U.3 adapters; it looks like you can have it emulate U.2/U.3/Gen-Z/NVMe PCIe HDD. I’m not exactly sure what the differences between the modes are, but it’ll be good to test, because there is a severe lack of information about this on the internet.


The 9560 user guide says that x1 should be supported:

The backplane:

That support does seem pretty explicit. Perhaps we’ve all been using the wrong type of cables when trying to hook up NVMe drives at x1. Maybe the cables need to be U.2 cables instead of U.3 cables.


That backplane is much more complex than I thought it would be. Are you certain the cables are in the right SFF-8654 4i connectors to serve the bays you’re trying to use? It looks like there are at least 9 SFF-8654 4i ports on the backplane. I’d imagine there would be some silkscreen next to each SFF-8654 4i receptacle on the backplane indicating which bay it serves.

Well, you’re right about the backplane: 10x SFF-8654 but only 8 bays. I unmounted the backplane to have a closer look. I can’t see why there are 10 ports…

Today I tested SATA drives, and they work well with the two bottom-right SFF ports (one vertical, one horizontal). So SATA drives are detected, but not NVMe. I tried many configurations with no luck, using the same cable. I could see 3 SATA drives using my 2x 4i cable, so I guess 4 SATA drives would work.

I have also installed ESXi on a SATA RAID 1 array, attached one NVMe drive, and tried a dd command; I got 1.5 GB/s.
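For reference, a sequential dd test along these lines should give comparable numbers (GNU dd run from a Linux guest; the mount path and file name are just placeholders):

    # 10 GiB sequential write, bypassing the page cache
    dd if=/dev/zero of=/mnt/nvme/ddtest.bin bs=1M count=10240 oflag=direct conv=fsync
    # sequential read of the same file back
    dd if=/mnt/nvme/ddtest.bin of=/dev/null bs=1M iflag=direct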

Then I tested deploying a TrueNAS VM, attached my 3 NVMe drives to it, created a raidz1, and configured iSCSI to present the volume to ESXi. It works, but performance is bad, around 600 MB/s (and the TrueNAS VM consumes a lot of CPU during the speed test).
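The pool itself is just a plain raidz1 across the three drives; the CLI equivalent of what the TrueNAS UI builds is roughly this (pool name and device names are examples, assuming TrueNAS SCALE / Linux device naming):

    # three-drive raidz1: one drive's worth of capacity goes to parity
    zpool create tank raidz1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
    zpool status tank    # verify the vdev layout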

A quick solution would be to go with a SATA RAID configuration.

Also, I’m about to buy new servers; they should be Supermicro 1U servers with an NVMe backend and RAID. The RAID card proposed is an “AOC-S3908L-H8IR-16DD-O”.


Speculation cap on:
Likely this backplane is shared among multiple servers with different drive configurations. I’m betting there are eight NVMe-specific ports and 2 SATA-specific ports (each SATA-specific port carries 4 SATA connections).

NVMe in red and SATA in green:

And perhaps that purple chip takes care of demuxing the SATA connection when NVMe is active.

But back to your original goal: I don’t think I’ve ever seen an SFF-8654 8i to four/eight SFF-8654 4i (with reduced link width) cable, which is what would be required to get full connectivity out of your current card. I think if you wanted to hook up all the NVMe slots in the cage, you’d need something like this:

That RAID card is last generation (aka 12G); the current generation of tri-mode RAID adapters is 24G. It doesn’t look like Supermicro offers any 24G RAID cards yet, so you wouldn’t have much of a choice.

That makes sense regarding the port count. I won’t be able to put more time into this problem, so I will go with SATA RAID on this server. I will try NVMe RAID on the new servers; they should arrive in February 2024. I will post some information here if I have more success!!

Thanks again for your help :slight_smile: