I have a GPU server at work that I wanted to virtualize with ESXi. It’s a Gigabyte G292-Z42 and should support up to 4 NVMe U.2 drives.
Out of the box I had a single PM9A3 2TB U.2 SSD connected to the motherboard with a SlimLine 4i connector.
As ESXi requires hardware RAID, I bought a RAID card (9560-8i) and 3 PM9A3 drives to build a RAID 5 array.
I also bought an SFF-8654 8i to 2x SFF-8654 4i cable.
First, I think I made a mistake, because a single SFF-8654 4i connector supports only one NVMe disk? I’m new to NVMe RAID, but with SATA I can have more than 2 disks on my XXXX-8i cards.
But the bigger problem is that no disks are detected at all. Faulty ‘new’ cable? Or SSD backplane?
I understand that I should buy a 9560-16i RAID card to get more than 2 disks, but with the current setup I should still see 2 disks detected.
I can’t find anything in the server manual, nor about the SSD backplane (which is referenced as CBPG083).
My best bet would be to buy a cable (SFF-8654 8i to 4x SFF-8639/U.2) and remove the backplane, but then I’d have to take power from somewhere, since that cable uses Molex connectors.
Do you guys have any advice to get me on the right track?
Hard to say what the issue is without knowing more about the backplane.
Does the backplane have four SFF-8654 4i connectors on it (one for each NVME slot)? How are the remaining SATA/SAS bays connected to the motherboard?
It would be non-ideal to stop using the backplane in such a tightly integrated chassis, but if you could find power somewhere, bypassing it might be possible as a last resort.
The current paradigm for hardware RAID/HBA cards is tri-mode: a card has a certain number of lanes, each SATA HDD uses 1 lane, each SAS drive uses 1 or 2 lanes (1 being the most common), and each NVMe drive uses 1, 2, or 4 lanes, with 4 being by far the most common (so common that I’m not even sure Broadcom supports connecting NVMe with 1 or 2 lanes).
There was a thread 2-3 weeks ago where someone linked an official 8x breakout cable for the SlimSAS 8i (AFAIR) connector on a Broadcom card, so there are official Broadcom cables to run NVMe in a single-lane config.
I don’t see x1 NVMe as very useful to me, but I’m sure there are people out there who just need massive capacity where HDDs aren’t an option. I’m not convinced hooking up 15-30TB drives via x1 is a wise move (just get a board with more lanes?)… but it’s always good to have options.
Oh that’s right, Broadcom does have an 8x breakout for U.3; I was thinking of 1 lane of U.2 for some reason.
x1 NVMe on PCIe 4.0 is still ~2GB/s; that’s not bad considering most drives will perform under that threshold when doing anything other than sequential reads/writes.
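(That 2GB/s figure is just link-width arithmetic: PCIe 4.0 runs at 16 GT/s per lane, which after 128b/130b encoding is about 1.97 GB/s usable, so the theoretical ceilings scale roughly like this; quick sketch, ignoring protocol overhead:)

    # theoretical per-link ceilings at PCIe 4.0 (~1.97 GB/s usable per lane)
    for lanes in 1 2 4; do
      printf 'x%d: ~%.1f GB/s\n' "$lanes" "$(echo "$lanes * 1.97" | bc -l)"
    done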
On the backplane I can see 8 connectors; I guess 4 for SATA and 4 for NVMe. Nothing is mentioned in the server manual. I took a look at another Gigabyte server manual (G293-Z41-AAP1), and it provides a picture of a backplane which has a different part number but looks very similar. It refers to MCIO connectors (U_2_0, U_2_1, etc.). From what I can see around the web it could work, but I can also find MCIO x8 to SFF-8654 cables, which leads me to think the connectors are different even though SFF-8654 fits perfectly into the backplane connectors.
Regarding speeds, I used to work with SATA RAID 5 on standard SSDs and got around 2000/1000 MB/s sequential (R/W). Like you said, with an x1 lane I won’t be able to get more than 2GB/s. But if I get the SlimSAS 8i SFF-8654 to 4x SFF-8639 cable, it will be x2, right? And if I get a 9560-16i with two 8i to 2x 4i cables, it will be x4?
Speed and storage size (I will install 3x 1TB in RAID 5, so 2TB usable) are not that important; I will just try to get the best I can out of this hardware.
Thanks to both of you, I really appreciate your help.
Got a picture of the back of the backplane? I’m having a hard time visualizing it.
I think the G293-Z41’s backplane is different from the G292-Z42’s; the G293’s connectors are MCIO, but I’m pretty sure the G292’s aren’t.
I just saw in another thread that @ipclevel couldn’t get U.3 NVMe drives to work on the Broadcom cards with the official x8 U.3 Broadcom breakout cable. Perhaps the Broadcom cards can’t do x1 NVMe after all.
I tried one drive at a time and two at a time, plugged into different connectors, and couldn’t get them recognized by the 9670-24i card. The cheapo Divilink cable (2x U.2) did work with my U.3 drives and the 9670 card.
I’ve long suspected Broadcom didn’t support x1 NVMe in the firmware of the card itself because it’s such a rare use case.
I’ve got an Adaptec RAID card that I’m going to try x1 NVMe on with the U.3 breakout cable once I get a U.3 drive.
Will do, I just ordered one of the M.2 to U.3 adapters; it looks like you can have it emulate U.2/U.3/Gen-Z/NVME PCIe HDD. I’m not exactly sure what the differences between the modes are, but it’ll be good to test because there is a severe lack of information about this on the internet.
That support does seem pretty explicit. Perhaps we’ve all been using the wrong type of cables when trying to hook up NVMe drives at x1. Maybe the cables need to be U.2 cables instead of U.3 cables.
That backplane is much more complex than I thought it would be. Are you certain the right SFF-8654 4i connectors are in the right slots to support the bays you’re trying to use? It looks like there are at least 9 SFF-8654 4i ports on the backplane. I’d imagine there would be some silkscreen next to each SFF-8654 4i receptacle on the backplane indicating which bay it serves.
Well, you’re right about the backplane: 10x SFF-8654, but only 8 bays. I removed the backplane to have a closer look. I can’t see why there are 10 ports…
Today I tested SATA drives, and they work well with the two bottom-right SFF ports (one vertical, one horizontal). So SATA drives are detected, but not NVMe. I tried many configurations with no luck, using the same cable. I could see 3 SATA drives using my 2x 4i cable, so I guess 4 SATA drives would work.
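(Side note for anyone debugging the same detection issue: if Broadcom’s storcli utility is installed, the card’s own view of attached drives can be dumped roughly like this; the /c0 controller index is an assumption.)

    # summary of controller 0 and everything it currently sees
    storcli64 /c0 show
    # list physical drives per enclosure/slot
    storcli64 /c0/eall/sall show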
I also installed ESXi on a SATA RAID 1 array, attached one NVMe, and tried a dd command; I got 1.5GB/s.
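(For reference, a raw sequential read test of that sort looks roughly like this, assuming a Linux shell; /dev/nvme0n1 is just a placeholder for whatever name the drive gets.)

    # read ~8 GB straight off the device, bypassing the page cache
    dd if=/dev/nvme0n1 of=/dev/null bs=1M count=8192 iflag=direct status=progress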
Then I tested deploying a TrueNAS VM with my 3 NVMe drives attached, created a RAIDZ1, and configured iSCSI to present the volume back to ESXi. It works, but performance is bad, around 600MB/s (and the TrueNAS VM consumes a lot of CPU during the speed test).
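(The pool itself is just a three-disk RAIDZ1; TrueNAS builds it through the UI, but for reference the hand-built equivalent would be roughly this, with pool name and device paths as placeholders.)

    # three-way RAIDZ1 across the NVMe drives, then verify the layout
    zpool create tank raidz1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
    zpool status tank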
speculation cap on:
Likely this backplane is shared among multiple servers with different drive configurations. I’m betting there are eight NVMe-specific ports and two SATA-specific ports (each SATA-specific port carrying 4 SATA connections).
And perhaps that purple chip takes care of demuxing the SATA connections when NVMe is active.
But back to your original goal: I don’t think I’ve ever seen an SFF-8654 8i to four/eight SFF-8654 4i (reduced link width) cable, which is what would be required to give your current card full connectivity. I think if you wanted to hook up all the NVMe slots in the cage you’d need something like this:
That RAID card is last generation (aka 12G); the current generation of tri-mode RAID adapters is 24G. It doesn’t look like Supermicro offers any 24G RAID cards yet, so you wouldn’t have much of a choice.
That makes sense regarding the port count. I won’t be able to spend more time on this problem, so I will go with SATA RAID on this server. I will try NVMe RAID on the new servers, which should arrive in February 2024. I will post more information here if I have more success!
I just got my server and was thinking the same thing: can I move the two cables from the SAS/SATA ports to the top NVMe ports? Also, I guess the jumpers would be used to tell it whether to use the SATA/SAS or NVMe ports?
Has anyone gotten it to work? We need to find another image of this backplane with all U.2.
I found a configuration of a third-party server based on Gigabyte hardware with what looks like 4x NVMe. I don’t know if it uses the same backplane though.
There are a few versions on Amazon, but I’m not sure I would trust them. Anyway, this card was $19. You will also need this cable (1 or 2, depending on how far you want to populate the card).
I just tested some Gen4 NVMe drives through the original disk bays that the unit came with.
The manual says it’s PCIe4, but my tests reveal that it’s operating at PCIe3 speeds (8GT/s x4).
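(For reference, this is roughly how the negotiated link can be read on Linux; the 41:00.0 address is only an example, find the real ones with lspci | grep -i 'non-volatile'.)

    # advertised vs. negotiated link for one NVMe controller
    sudo lspci -s 41:00.0 -vv | grep -E 'LnkCap:|LnkSta:'
    # same info via sysfs
    cat /sys/bus/pci/devices/0000:41:00.0/current_link_speed
    cat /sys/bus/pci/devices/0000:41:00.0/current_link_width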
I’m running Debian and tried disabling ASPM for the PCIe links through grub. No change. I also removed one of the two NVMe drives to test only one at a time. On another forum, someone said booting from a different OS fixed a similar problem, so I tried that as well. Nothing.
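(For anyone following along, the grub change just amounts to adding pcie_aspm=off to the kernel command line, roughly like this on Debian:)

    # in /etc/default/grub, extend the kernel command line, e.g.
    #   GRUB_CMDLINE_LINUX_DEFAULT="quiet pcie_aspm=off"
    # then regenerate the grub config and reboot
    sudo update-grub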
It seems like either I was lied to, I’m missing something, or I’m going crazy. Is there a BIOS setting I forgot to change? Each NVMe does see 4 lanes, so at least the link width matches the manual.
I did verify I had the G292-Z20 model through BIOS.
If something is lying, it’s probably the manual; it looks like the disk bays run at Gen3. Maybe disabling the mentioned OCP switch will let you use the two SlimLine SATA connectors as NVMe? That way you could get 4 NVMe drives without any adapter, just by switching the cables.