Quad M.2 SSD Cards with a PLX chip - An option for getting your PCIe lanes back

This is a follow-up to a thread I posted a couple of weeks ago.

My complaint is that modern motherboards don’t have enough PCIe slots. For those of us who have networking cards, lots of M.2 drives, storage media, video cards, or other PCIe-hungry devices, what are we supposed to do? How are we supposed to give lanes to all the devices in our builds? Because like the rest of you, I don’t want to pay a huge premium for a modern HEDT platform just to get more PCIe lanes. So what is a PCIe enthusiast to do?

I found a solution. It’s not the best one, but it’s worth exploring:

I recently purchased this:

https://www.aliexpress.us/item/3256805724779576.html

Found a cheaper one on Amazon:

This is a PCIe Gen 3 x16 to quad M.2 adapter. But if you look at the spec page, this is actually just a PLX8747 PCIe switch chip on an expansion card:

In other words, all this card does is create more PCIe lanes where there were none before. And PCIe lanes don’t have to be used just for SSDs; they can be used for anything. So I purchased this expansion card and ran several tests on it. I bought this to be the guinea pig and answer several questions for the three other people out there who have the same specific problem:

All tests were done on a Gigabyte X99-UD4P Motherboard with a Xeon E5-2697 v4.

  1. Does this device require PCIe bifurcation to work?

No. One of the reasons I specifically bought this card is because no bifurcation is required as my motherboard doesn’t support bifurcation. Just plug it in and go. The PLX chip handles everything.

  2. Can you use things other than M.2 SSDs in this?

Absolutely. I have one of these M.2 to USB C front panel header adapters hooked up to the expansion card right now and it works great:

https://www.aliexpress.us/item/3256806770633853.html

  3. Can you boot from the SSD?

Yes, you can boot from this card. I have my Intel P4800X SSD connected to this card right now.

  4. Does drive information display correctly in the BIOS?

Yes. All drive information including the SSD displays in the BIOS and in Windows. TRIM and garbage collection both work on my other SSDs.

  5. Can you shove this card into a PCIe slot on a motherboard with fewer lanes than the card has?

Yes, this works.

I was skeptical this was going to work at first because this is a PCIe switch. Some quad M.2 cards require all the lanes present on the card to be connected to the motherboard. But not this one!

I taped off most of the PCIe lanes on the card except for enough contacts to effectively turn it into a “PCIe x1” expansion card. After trying this, my PC was still able to boot just fine and all other SSDs were visible. Obviously bandwidth took a massive hit, but it worked.
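For a rough sense of how big that bandwidth hit is, here’s a back-of-envelope sketch (assuming ideal Gen3 link rates with 128b/130b encoding and ignoring packet/protocol overhead, which shaves off a bit more in practice):

```python
# Approximate PCIe 3.0 bus bandwidth for a given lane count.
# Assumption: raw 8 GT/s per lane with 128b/130b encoding;
# real-world throughput is lower once protocol overhead is counted.
def pcie3_bandwidth_gbps(lanes):
    per_lane = 8e9 * (128 / 130) / 8 / 1e9  # ~0.985 GB/s per lane
    return lanes * per_lane

for lanes in (1, 4, 16):
    print(f"x{lanes}: {pcie3_bandwidth_gbps(lanes):.2f} GB/s")
# x1: 0.98 GB/s, x4: 3.94 GB/s, x16: 15.75 GB/s
```

So taping the card down to x1 leaves roughly 1/16th of the slot’s bandwidth, which matches the massive hit observed.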

So where would this be useful?

You could buy a cheaper B-series AM5 motherboard with two expansion slots, then shove this quad M.2 SSD card into one of the bottom expansion slots (usually PCIe x4). This effectively gives you access to four more M.2 slots, which can be used for anything from WiFi to extra SATA ports to even 10Gb Ethernet.

What are the downsides?

The biggest downside I see is speed. This is a PCIe Gen 3.0 device. That isn’t the biggest deal in the world right now, but it will be in the future as I/O becomes ever faster.

I think I may have just solved my extremely first-world problem. Hopefully this is useful to someone else.

14 Likes

Useful to know, yes. IMO PCIe speeds above gen 3 are only useful if you need to move a lot of data very fast. Most people don’t :dash:

[edit] The Amazon card is not available in my country, bummer :rage:

1 Like

If memory serves, PCIe Gen3 x4 is 4ish GB per second.
Closest network spec would be SFP28 at 25Gbit/s, which would not saturate that Gen3 x4 link.

There are some u.2 options as well like the Viking U20040-02. As long as you can keep it cool. :slight_smile:

You’re going to need to be very selective with your SSDs. Depending on where you stick the assemblage, there might be no clearance for heat spreaders and such.

checks ebay

What… exactly is this thing? Is it just a U.2 to quad m.2 ssd adapter? I’m assuming underneath the heatspreader is a PLX chip?

It seems your evaluation is correct. Needs a high airflow environment (i.e. servers!) to keep cool.

Meanwhile, I found a cheap quad M.2 card, claiming to have the PLX8747 PCIe switch chip on Aliexpress. I ordered one, which will be shipped after Lunar New Year, and then I’ll give it a test on an old AMD system with an actual PCIe x1 slot and a mining-era contraption to expand that x1 slot into x16. I’m not gonna post the link yet, as it might be bogus. Although the brand itself is a better reputed one I believe. We’ll see :person_facepalming:

Yup, 8 GT/s 128b/130b, so 985 MB/s bus capability per lane and 3.94 GB/s for an x4 link. Max data transfer rate after protocol overhead’s ~3.5 GB/s, so full rate 25 GbE’s about 90% utilization.

Also I don’t know of any drives with cache folding rates above 2.4 GB/s. So it’ll probably be a while still before 3.0 x4’s likely to constrain writes past pSLC.
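The arithmetic above, as a quick sketch (the ~3.5 GB/s usable figure is the estimate after protocol overhead; Ethernet framing overhead is ignored):

```python
# Gen3 lane: 8 GT/s with 128b/130b encoding -> bytes per second
lane = 8e9 * 128 / 130 / 8        # ~985 MB/s per lane
x4_bus = 4 * lane                 # ~3.94 GB/s bus capability
x4_usable = 3.5e9                 # rough payload rate after protocol overhead
eth_25g = 25e9 / 8                # line-rate 25 GbE -> 3.125 GB/s
print(f"x4 bus:     {x4_bus / 1e9:.2f} GB/s")
print(f"25GbE load: {eth_25g / x4_usable:.0%} of usable x4")  # ~89%
```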

1 Like

Nice post. I’ve also been ‘plagued’ by this first-world problem. I already have 4 M.2 SSDs and I hate having such fast storage I can’t use for anything.

For me it’s not about outright speed; M.2 drives are cheaper than 2.5" SSDs (in my country). So I don’t mind M.2 drives sharing PCIe bandwidth. It’s still faster than SATA. One problem might be compatibility across CPU platforms. But it would still be nice to see this work for people.

If you want 4 good things (10GbE, 2x USB-C 10 Gbps and 2 M.2 slots) on a single card that also doesn’t require bifurcation, I can recommend this card:

I use it myself and it’s a fine thing imho.
You can save some space with it and still get 2 M.2 PCIe slots that can be used.
I don’t have much in the way of M.2 cards other than storage, so I haven’t tested other cards.

4 Likes

Yeah, U.2 to quad M.2. However, each port can do PCIe x2, which makes it a little more interesting.

That might be useful if you are REALLY constrained for space. But I’m not, and a $120 quad M.2 adapter works just fine for a fraction of the price.

Just remember that a PLX doesn’t magically create bandwidth; it basically just virtualizes it. Your overall throughput is still limited by the number of lanes connecting the PLX to the CPU or southbridge, and a PLX adds some latency as well. So if you’re doing RAID0 across a bunch of NVMe drives behind a PLX with sequential workloads, don’t expect much of a boost.
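That cap is easy to model. A minimal sketch (the per-drive and per-lane figures below are illustrative Gen3 numbers, not measurements from this card):

```python
# Aggregate sequential throughput of N drives behind a PCIe switch is the
# smaller of (sum of drive speeds) and (switch uplink bandwidth).
def raid0_seq_throughput(n_drives, per_drive_gbs, uplink_lanes, lane_gbs=0.985):
    """Estimated RAID0 sequential throughput in GB/s behind a PCIe switch."""
    return min(n_drives * per_drive_gbs, uplink_lanes * lane_gbs)

# Four ~3.2 GB/s Gen3 drives: a x16 uplink leaves headroom...
print(raid0_seq_throughput(4, 3.2, 16))  # 12.8
# ...but a x4 uplink caps the whole array near single-drive speed.
print(raid0_seq_throughput(4, 3.2, 4))   # ~3.94
```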

1 Like

If you need bandwidth, more drives will solve this. I’m totally fine with PCIe 3.0. Most stuff I do rarely exceeds 2 GB/s, and I am either bottlenecked by network (WAN or LAN bandwidth) or by random I/O, which SATA could handle for bandwidth purposes.
Cases where you want >4 GB/s on a regular basis are rare…even 2x PCIe M.2 will saturate a 100GbE NIC on sequential load. And at that point, you need more lanes for NICs than for storage to avoid a network bottleneck.
NVMe is fast; the PCIe generation doesn’t really matter much. I certainly won’t pay a single Euro premium for PCIe 5 drives. And although running Gen4 drives at Gen3 feels wrong, it probably doesn’t matter, and neither does half the bandwidth on that 8x switch card. You won’t get 64 GB/s transfers with 8x Gen4 either…because CPU, software and other stuff will cap out before that.
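The 100GbE claim checks out on paper (assuming ~7 GB/s sequential reads per Gen4 x4 drive, line-rate 100GbE, and ignoring framing overhead):

```python
per_drive = 7.0       # GB/s, rough Gen4 x4 sequential read (assumption)
nic_100g = 100 / 8    # 100 Gbit/s -> 12.5 GB/s
print(2 * per_drive, nic_100g)    # 14.0 vs 12.5
print(2 * per_drive >= nic_100g)  # True: two drives can saturate the NIC
```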

M.2 and PCIe switches are there for one reason…to counteract the small capacity of M.2 by just using more drives to get capacity. It’s a form factor thing, not a technical reason. U.3 is just better…a single 16T drive instead of 8x gimped M.2. A shame there were never consumer U.3 drives around…I’m pretty sure there is a market for 8-32T (QLC) consumer drives. 32TB with a single M.2 → U.2 adapter, done.

1 Like

I’m buying refurbished 16TB HDDs for about 230-250 USD. A U.2/3 16TB SSD is about 3-3.5k USD. Mind, heavily used! :roll_eyes:

As long as large-capacity SSDs, whatever the form factor or connectivity, are waaayyy overpriced, just because they can, SATA remains the connection of choice for many home-labbers who need large storage. Unfortunately, SAS SSDs aren’t really cheaper than their NVMe-based brethren. :rage:

3 Likes

Something else to point out about this card is that it will let a PCIe Gen 3 x4 NVMe drive perform at full speed in a Gen 2 x8 slot (or x16). The bandwidth would be shared between two drives, of course.
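This works because the numbers nearly line up; a quick sketch (ideal encoding rates, protocol overhead ignored):

```python
# Per-lane byte rates for Gen2 (8b/10b) and Gen3 (128b/130b) encoding
gen2_lane = 5e9 * 8 / 10 / 8      # 0.5 GB/s per Gen2 lane
gen3_lane = 8e9 * 128 / 130 / 8   # ~0.985 GB/s per Gen3 lane
print(f"Gen2 x8: {8 * gen2_lane / 1e9:.2f} GB/s")  # 4.00
print(f"Gen3 x4: {4 * gen3_lane / 1e9:.2f} GB/s")  # 3.94
# The switch's Gen2 x8 host link slightly exceeds one Gen3 x4 drive link.
```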

Yes, consider the following though:

  • M.2 drives that are reasonably priced top out at 4TB.
  • PCIe hot-plug is not widely supported.

That leaves PCIe switches as the best option: hook them all up at boot, and then take a page out of the networking library, where oversubscribing to the n-th degree works out great because load spikes don’t overlap often enough to matter.


For the non-networking people and those who never checked the load on their edge switches: in office-switch-land, having a 48-port gigabit switch uplinked with only a single 10G port is entirely fine. There are two load spikes per day: in the morning when everyone boots up, and in the afternoon when everyone saves and goes home.
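For concreteness, that example is roughly a 4.8:1 oversubscription:

```python
edge = 48 * 1    # Gbit/s of total edge-port capacity (48 gigabit ports)
uplink = 10      # Gbit/s of uplink capacity
print(f"oversubscription: {edge / uplink:.1f}:1")  # 4.8:1
```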

1 Like

ASRock with their MCIO connectors already on enthusiast boards could maybe be nagged into figuring this out…

I like their approach with MCIO and on-board 25GbE. But there have been OCulink and SlimSAS ports on enthusiast boards before. This is only the PCIe Gen5 upgrade of that connector.

I bought a x16 PCIe card with 2x MCIO 8i yesterday for 50€. The expensive part is the breakout cables at 70€ each. That’s still 50€ per drive to get MCIO and a cable on top of the drive itself. M.2 is “free”. And people don’t want cables because of the window in the case and RGB…cables look bad :wink:

Otherwise, those nice PCIe carrier cards really are widely available now, even with 4x U.2 in a x16 slot.

And as long as people are buying high-capacity 4-8TB M.2 drives at a premium price, I doubt this will change. Even high prices on data center drives are still a better deal than buying an 8TB M.2 for 700+.

Otherwise I agree with overbooking bandwidth. It works fine in networking and still works fine with SAS and expanders. The only thing keeping me away from PCIe switches is the price atm. I’d rather get a board+CPU with all the lanes for that money.
The PCIe Gen 3 PLX cards seem to be a good price atm. I’ve seen them for 200-250ish for 8 drives in a x16 slot. PCIe 3.0 x16 is a LOT of bandwidth, guys!

2 Likes

For some odd reason, I can’t find where I purchased my adapter, but I am fairly certain it’s a DiliVing PLX 8747 chipset card with 4x SFF-8643 that I am adapting to 3 (currently) of the M.2 x4 to U.2 adapters from OWC. I have these running various NVMe 2TB and 4TB drives very well on my Plex server, especially as some SATA drives are slowly being phased out for SSD storage. I am using them in RAID, and this sort of solution is quite handy since I don’t need the full bandwidth of these drives even in RAID; they’re plenty fast to serve my data to my users.

I’m very happy with mine in the PCIe x8 slot on my mobo, and they’re very solid with no odd behavior in Windows or anything for drive visibility. Just my 2 cents since I saw someone complain about M.2’s limited sizes: U.2 options are indeed out there, without wasting all the lanes of the PCIe x16 slot altogether. My board is the Z690 Creator WiFi, so it has two x8 slots bifurcated in that way; one hosts the SAS controller and the other hosts this PLX card with U.2 on the other end.

LR-Link found on Amazon for $99 right now.
Sold as Generic brand but it works like a charm.
Link to PCIe card