Single PCIe x8 slot: Support 8 NVMe drives?

I’m thinking about upgrading my NAS to support NVMe drives. My chassis supports a total of 16 drives, 8 of which can be NVMe drives.

I am somewhat limited by the number of available PCIe lanes and PCIe slots (the NAS also uses PCIe lanes/slots for an HBA, a NIC, a GPU, and separate NVMe drives that I use for the Apps/VMs storage).

I’d like to support 10Gbps bandwidth to the NVMe drives (since the bottleneck is the 10Gbps networking, at this point).

As I understand it, a single PCIe Gen 4 lane supports about 16Gbps. So theoretically a single x8 slot could support the bandwidth I’d need for 8 drives, but the CPU/BIOS only supports bifurcating the x8 slot to x4/x4.
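
For anyone checking that math, here’s a minimal back-of-the-envelope sketch. The only assumption beyond the spec’s raw line rates is the 128b/130b encoding that PCIe Gen 3/4 use:

```python
# Back-of-the-envelope PCIe bandwidth, per lane and per slot.
# Gen 3/4 use 128b/130b encoding, so usable bandwidth = raw rate * 128/130.
GEN_RAW_GTPS = {3: 8.0, 4: 16.0}  # raw giga-transfers/second per lane

def lane_gbps(gen: int) -> float:
    """Effective Gbps for one PCIe lane of the given generation."""
    return GEN_RAW_GTPS[gen] * 128 / 130

print(f"Gen 4 x1: {lane_gbps(4):5.2f} Gbps")      # ~15.75 Gbps, already > 10 Gbps
print(f"Gen 4 x8: {8 * lane_gbps(4):5.1f} Gbps")  # ~126 Gbps for the whole slot
print(f"Gen 3 x8: {8 * lane_gbps(3):5.1f} Gbps")  # ~63 Gbps, still well over 10 Gbps
```

So on paper, one Gen 4 lane per drive already clears the 10Gbps network ceiling, before any protocol overhead.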

Is there hardware that can split that x8 slot to give one lane to each of the 8 drives (without relying on the motherboard/CPU for bifurcation)?

I understand there are implications beyond just read/write speed (the protocols themselves, among other things), but for the purpose of this discussion I’m just trying to suss out the hardware.


If the mainboard/BIOS doesn’t support it, you’d need to get a PCIe switch to get what you want.

Those are expensive :roll_eyes: Not to mention difficult to find as a ready-made card.

You could go the DIY route (s’cuse the pun :stuck_out_tongue: ), but that’s a lot of work (designing and prototyping a card, sourcing components, and so on), and for a reasonable price you’re basically limited to PCIe Gen 2 or at best Gen 3 speeds. Even then it’s not cheaper than a ready-made card anyway.

Depends. How big is your budget?

https://www.apexstoragedesign.com/apexstoragex21

It does bring down power consumption but costs $2.8k. Gotta start somewhere, amirite?

You could also upgrade your entire NAS to an Asustor Flashstor 12 Pro for $800, or start investing in E1.L form factor chassis. But no, there are no cheap alternatives available at the moment; your best bet is an existing server designed for flash storage, and those aren’t really cheap either.


Does that mean eight 2.5" drives, e.g. in U.2 format? Guessing you have a server chassis?

If my guess is correct, you can use an HBA

or

The latter one is much cheaper but may not offer as many drives on x8.

If you’re on a budget you can search e.g. eBay for older PCIe 3.0 models… At 10Gb/s that would not be a bottleneck at all.


Related question: if we have an x4 PCIe Gen 4 drive that’s advertised as capable of read rates exceeding x2 bandwidth, is it a fair assumption that it’ll be able to saturate the link when only operating in x1 or x2 mode?
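
To make the arithmetic concrete, here are the per-width link ceilings under the same 128b/130b assumption as above (the ~7 GB/s drive figure is purely hypothetical, just for illustration):

```python
# Per-width link ceilings for a PCIe Gen 4 device (protocol overhead ignored).
lane_gbps_gen4 = 16.0 * 128 / 130  # ~15.75 Gbps per Gen 4 lane

for width in (1, 2, 4):
    ceiling = width * lane_gbps_gen4
    print(f"x{width} ceiling: {ceiling:5.1f} Gbps (~{ceiling / 8:.2f} GB/s)")

# A hypothetical drive reading at ~7 GB/s internally exceeds both the x1
# (~1.97 GB/s) and x2 (~3.94 GB/s) ceilings, so in those modes the link,
# not the flash, is the limit: the drive saturates it.
```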

The server chassis uses U.2 as the interface between the drives and the backplane, and the backplane connects to the server via OCuLink.

Then look for an HBA with external ports. PCIe 3 should easily suffice to saturate 10Gbps.
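
Same back-of-the-envelope as earlier in the thread, for scale:

```python
# Aggregate ceiling of a PCIe 3.0 x8 HBA vs. a 10 Gbps network
# (128b/130b encoding, protocol overhead ignored):
gen3_lane_gbps = 8.0 * 128 / 130  # ~7.88 Gbps per Gen 3 lane
print(f"Gen 3 x8 HBA: {8 * gen3_lane_gbps:.0f} Gbps total, roughly 6x the 10Gbps network")
```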