System can't access more than 4 NVMe drives? [Solved-ish]

So I bought quite a few of the Optane M.2 sticks that have been on sale for the past few months: one pair of 32GB sticks that each operate in PCIe x2 mode, and a set of four 58GB sticks that run at full PCIe x4 mode. The 32GB sticks are on a QNAP QM2-2P-344 adapter card (I’ll refer to this as Set 1), while the second set is plugged into a QNAP QM2-4P-384 card (I’ll refer to this as Set 2).

I initially only had Set 1 in the system, in the PCIe 2.0 x4 slot at the very bottom of the board, and both sticks showed up and worked great. Alongside it I had an LSI 9207-8i SATA HBA in the top-most slot taking up eight PCIe 3.0 lanes, with 8 SATA SSDs plugged in and working just fine.

Today I swapped out the SATA HBA card for Set 2, and I’m faced with a weird situation.

  • If Set 2 is in the system (top x8 slot), then only the Optane drives from that set show up. The pair from Set 1 is nowhere to be found, neither in the BIOS nor in TrueNAS SCALE.
  • If I unplug Set 2, then Set 1 shows up fine. I tried moving Set 1 from the bottom PCIe 2.0 x4 slot to the middle PCIe 3.0 x8 slot on the board (where I normally have my 10Gbps NIC), but the issue persisted.
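For anyone hitting something similar, a quick way to narrow this down from the TrueNAS SCALE shell is to check whether the “missing” drives are invisible at the PCIe level or only to the storage layer (commands below are a sketch; exact output depends on your hardware):

```shell
# Enumerate every NVMe controller the kernel can see (PCI class 0108).
lspci -nn -d ::0108

# For each controller, check the negotiated link width/speed.
# A card that never trained a link won't appear here at all.
lspci -vv -d ::0108 | grep -E 'LnkCap:|LnkSta:'

# NVMe namespaces visible to the OS (nvme-cli ships with SCALE).
nvme list
```

If the Set 1 controllers don’t show up even in `lspci`, the slot isn’t getting lanes at all, which points at the CPU/board rather than at TrueNAS.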

Is there some weird limitation on how many NVMe drives a motherboard or CPU can have plugged in at any given time?
For reference, here are the specs:

  • AMD Ryzen 5700G APU
  • Asrock X470 Taichi (can confirm that it can run in x16+x0 or x8+x8 or x8+x4+x4 modes with the 5700G)

I’m not really sure what to check or change that could cause this. Any help would be greatly appreciated.

Solved-ish
tl;dr: Ryzen 4000G/5000G APUs do not support having more than 4 NVMe drives through PCIe slot adapters

I bet you would need to bifurcate in the following manner:

x4+x4+x4+x4

But it could be that this motherboard has a limitation in this regard, so you may not be able to do this.

Hopefully somebody with this model motherboard can give you some insight.

I’m also wondering if an X570 motherboard would support this when previous chipsets don’t?


So the thing with the QNAP cards is that they come with switch chips on them to handle the bifurcation. That’s why the Set 2 card is an x8 card yet all 4 drives are detected when the slot runs in x8 mode, and why Set 1 was working fine in an x4 slot prior to adding Set 2 to the system.
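If you want to confirm those onboard switch chips are what’s doing the fan-out, the PCI tree view makes it visible: the QNAP card should show up as a bridge with the Optane controllers as its children (the tree shape in the comment is illustrative; bus numbers will differ):

```shell
# Tree view of the PCI topology. Behind the QNAP card's slot you should
# see a switch/bridge device with NVMe controllers hanging off it, e.g.:
#   +-[02]-+-00.0  PCIe switch upstream port
#          |       +- Non-Volatile memory controller
#          |       +- Non-Volatile memory controller ...
lspci -tv
```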

Idk if an X570 board would behave any differently from what I already have (aside from PCIe 4.0 instead of 3.0), but my board also has 8 SATA ports which I need for all 8 of my HDDs; otherwise I’d need a SATA HBA, which would take up a slot and thus defeat the purpose of moving to another board to fix this issue.

There are some X570 boards that have 8 SATA ports and the right number of PCIe slots, but they’re also a $300 investment :sob:

Yeah, I’m not sure in this case - hopefully somebody has some more insight on this matter :wink:

I’ve no real experience with this specific sort of configuration, but I’ve done my fair share of gathering information with whatever limited knowledge and hardware (or software) was at hand.

So, just as a sanity check that Set 1 and Set 2 should function perfectly together under normal circumstances, I would try the following:

  • test the x16 adapter with 2 drives from Set 2 and 2 from Set 1
  • repeat with the other 2 drives from Set 2
  • verify all drives work on the adapter that supports only 2 drives
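One way to make that trial-and-error less tedious is to snapshot what the OS sees after every reseat, then `diff` the runs against each other (the output path is just an example):

```shell
#!/bin/sh
# Capture the current NVMe view to a timestamped file so each
# slot/drive combination can be compared later with diff(1).
OUT="/root/nvme-probe-$(date +%Y%m%d-%H%M%S).txt"
{
  echo "== PCIe NVMe controllers =="
  lspci -nn -d ::0108
  echo "== NVMe namespaces =="
  nvme list
} > "$OUT"
echo "saved $OUT"
```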

I know this seems really basic, like trying to rub sticks together, but you might be surprised how many times doing the stupid obvious things and writing down what happens has gotten me a step further. At least until a better idea springs to mind, or someone with more experience replies to your request for help, it’s something you can try.

The Cezanne CPUs (such as the Ryzen 5700G) come with odd restrictions on bifurcation, at least on ASUS boards - ASUS publishes a support note documenting separate bifurcation support for Cezanne CPUs. Look at their X470 motherboards for comparison.
Asrock’s documentation in this regard is quite limited, but I would assume that this is the key to your mystery.


That’s a good point - I went through that doc and saw that on their X470 boards with Ryzen 5000G APUs, the top slot is limited to 3 or 1 SSDs, the next slot allows 0 or 2 (depending on what the top slot is doing), and the bottom slot only allows one.

Even though this isn’t completely identical to my situation (mine always shows all 4 SSDs from Set 2 and none from Set 1), I’m led to believe it’s less a PCIe lane-allocation issue and more a limit on the number of PCIe lanes that can be used for NVMe, even when one of the PCIe slots hangs off the chipset.

Marking this as Solved-ish, and I’m just going to work around this limitation. Instead of keeping Set 1 in the system and replacing my SATA HBA with Set 2, I’m going to replace Set 1 with Set 2 and keep the HBA installed. With Set 2 containing 4 Optane NVMe drives, I’ll use those two pairs as mirrored metadata devices for my two HDD pools. The SATA HBA supports up to 8 SATA drives, into which I’ll put eight 1TB SSDs to act as read caches and possibly their own dedicated SATA SSD pool.
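That layout would look roughly like the following `zpool` commands - pool and device names here are hypothetical, and in practice you’d want `/dev/disk/by-id` paths so devices survive reordering across reboots:

```shell
# Hypothetical pool/device names; substitute your own.

# Mirrored special (metadata) vdev for each HDD pool:
zpool add pool1 special mirror /dev/nvme0n1 /dev/nvme1n1
zpool add pool2 special mirror /dev/nvme2n1 /dev/nvme3n1

# SATA SSDs as L2ARC read cache (cache devices hold no pool data,
# so they don't need redundancy):
zpool add pool1 cache /dev/sda /dev/sdb
```

One caveat worth knowing before committing: a special vdev becomes a permanent part of the pool on RAIDZ-based pools (top-level device removal only works on all-mirror pools), so losing it means losing the pool - hence the mirror.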

Thanks all for the assistance and providing info!

This topic was automatically closed 273 days after the last reply. New replies are no longer allowed.