PCIe x1 adapter for U.2 SSD; are these legit?

I have been using the standard PCIe x4 U.2 adapter for years without issue.

However, I am planning another system build, and the motherboard I am looking at only has x16 and x1 PCIe slots; I would prefer to save the x16 slots for x16 devices and use the x1 slots preferentially if possible.

So I saw this PCIe x1 U.2 adapter on Amazon.

Not many reviews.

My memory of how these adapters work in relation to the U.2 disk is a little hazy, so I am wondering: would these x1 adapters really work well with a U.2 NVMe disk such as the Intel P4600 series?

I don't actually care too much about speed on the SSD used here; I just want to use the U.2 I have lying around as a scratch disk for staging data in front of an HDD array.

Any thoughts or experiences with these adapters? It kinda seems too good to be true. If U.2 SSDs worked just fine over PCIe x1, why have we been using x4 adapters all this time?


Not to derail my own thread, but on a similar note, I cannot help but have the same question regarding PCIe SATA adapters:

As with the U.2 adapters, I had always used PCIe x4 varieties, but now I see a PCIe x1 version, so I guess the same question applies: does this really work reliably, and is there some reason you would need x4 instead of x1?

This would be for the same system, and I would plan to fill most of the SATA ports on this adapter as well. For good measure, it's worth mentioning this would all be attached to a B550 motherboard, so all these PCIe x1 slots would also be sharing bandwidth through the mobo chipset. Do any of these things really matter?

Transfer rate scaling is linear with the number of lanes used. If you’re fine with x1 throughput then x1 is fine. If not, that’s what x2 and x4 are for.
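
Rough numbers, if it helps. A quick sketch using the nominal per-lane rates from the spec with 128b/130b encoding (gen 3 and up); real-world throughput runs a bit lower than this, and the function name is just mine:

```python
# Nominal one-direction PCIe throughput: line rate minus encoding overhead,
# scaled linearly by lane count.
GT_PER_S = {"3.0": 8.0, "4.0": 16.0}   # giga-transfers per second, per lane
ENCODING = 128 / 130                    # 128b/130b encoding for gen 3 and newer

def throughput_gb_s(gen: str, lanes: int) -> float:
    """Nominal per-direction bandwidth in GB/s for a link."""
    return GT_PER_S[gen] * ENCODING / 8 * lanes  # /8 converts bits to bytes

for gen in GT_PER_S:
    for lanes in (1, 2, 4):
        print(f"PCIe {gen} x{lanes}: ~{throughput_gb_s(gen, lanes):.2f} GB/s")
```

So a 3.0 x1 link is roughly 1 GB/s each way, which is plenty for a scratch disk fronting spinning rust.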


thanks for the input

Are these lanes bidirectional? And if multiple devices are operating over x1 interfaces, and by extension through the motherboard chipset, are there risks of issues such as blocking if multiple devices are trying to do IO, or if a single device is trying to do input and output simultaneously?

Welp, guess I should have just read this: What Are PCIe Lanes and Why Do They Matter? – TechCult


That's a fairly awful writeup, really: many errors and confused terminology, but it's broadly in the right direction. While Wikipedia's not necessarily anything great, its PCIe entry is more accurate and mostly more concise in describing lanes, including answering your full-duplex question and covering some other important things like the electrical-mechanical distinction and open-ended slots.

If the chipset's upstream lanes are oversubscribed by a set of active devices, how could there not be bus contention? If the mobo you're considering is reasonably recent and decent, there should be a block diagram of the IO in the manual. I think some manufacturers still aren't doing that, but if it's one of those, you can probably get a pretty good idea from looking at ASRock's equivalents.

Test data on how chipsets handle bus contention is vanishingly rare in my experience, though.
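
Back-of-the-envelope is about the best you can do. Something like the sketch below, where the uplink figure assumes B550's usual PCIe 3.0 x4 to the CPU and the downstream entries are made-up placeholders you'd swap for whatever the block diagram actually shows:

```python
# Hypothetical B550-style topology: everything below the chipset shares
# one PCIe 3.0 x4 uplink (~3.94 GB/s nominal, per direction).
UPLINK_GB_S = 3.94

# Assumed downstream consumers; adjust to match your board's block diagram.
downstream = {
    "U.2 SSD on x1 adapter": 0.98,      # PCIe 3.0 x1
    "SATA HBA on x1 adapter": 0.98,     # PCIe 3.0 x1, shared by its ports
    "chipset SATA, 4 HDDs": 4 * 0.25,   # ~250 MB/s per spinning disk
    "2.5 GbE NIC": 0.31,
}

total = sum(downstream.values())
print(f"worst-case concurrent demand: {total:.2f} GB/s "
      f"vs {UPLINK_GB_S} GB/s uplink -> "
      f"{'oversubscribed' if total > UPLINK_GB_S else 'fits'}")
```

That says nothing about how gracefully the chipset arbitrates when everything bursts at once, which is exactly the part nobody publishes data on.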

I am planning to use the Asus B550 one here: ROG STRIX B550-F GAMING WIFI II | Gaming motherboards|ROG - Republic of Gamers|ROG Global. My current build is already using the mITX B550-I version of that board, but I want to transition to an ATX-sized setup for more disks.

I guess it will probably work just fine; it's mostly just a media server + download server. Probably over-thinking the fact that all the storage IO is gonna end up running through the chipset lol

Have you considered X670 boards with five M.2s as an alternative to spending US$180 on three x1 adapters? Might cost less, and generally it's two CPU x4s, an x4 off each Promontory, and something like an x2.

You need a cable to get U.2 working with M.2 slots, and the cables are more expensive than the cards. And cables for PCIe 4.0 are somewhat janky and can drop down to PCIe 3.0 if they're marginal.

More than 16 GT/s pretty well wants a redriver, though if the build focus here is IOPS, that's perhaps beside the point.

  • The more capable, current-gen PCIe 4.0 x4 and 5.0 x4 NGFF drives are both competitive with and tend to cost less than legacy Intel parts, meaning no U.2 cabling off of M.2s is necessary. Also, less of a thermal solution's needed than with most U.2s.
  • Pricing I can get on M.2-focused X670E mobos, which add a 5.0 x4, two 4.0 x4 drives with 4.0 x4 up, and a third drive that's at least 3.0 x2, is less than what Asus wants for a Strix B550 that's PCIe 3.0 x4 plus three x1 down and 3.0 x4 up.

So a good bit of the build budget might be reallocated to cover Zen 4 or 5 and DDR5's price premiums over Zen 3 and DDR4, potentially reducing overall parts cost. Power cost might usefully drop as well.

Probably works, given that it's being made, but I have concerns about how boot power might work with it. What in the world does “after initialization and software configuration” mean in this case haha

Thanks for the suggestions. This is not a completely new build; I'm transitioning a B550 mITX system to an ATX form factor, so I'm keeping a lot of the stuff the same: just a larger mobo, more drives, a bigger case, and extra PCIe devices.

Pretty sure it’s a Wikipedia error for x1 cards, which are limited to 10 W unless they’re standard height, full length, and get a Set_Slot_Power_Limit allowing 25-75 W. x4, x8, and x16 can’t pull over 25 W until receiving a Set_Slot_Power_Limit that lifts that to 75 W.

See sections 4.2 and 6.9 of the card electromechanical and base specs.
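
If you're curious what a given slot actually advertises, the limit lives in the Slot Capabilities register of the upstream port's PCIe capability (lspci -vv shows it already decoded under SltCap). A minimal decoder, assuming the standard bit layout from the base spec (Slot Power Limit Value in bits 14:7, Scale in bits 16:15); the function name is just mine:

```python
# Decode the Slot Power Limit from a raw PCIe Slot Capabilities dword,
# as found in the PCI Express capability structure of a root/downstream port.
SCALE = {0b00: 1.0, 0b01: 0.1, 0b10: 0.01, 0b11: 0.001}

def slot_power_limit_watts(slot_cap: int) -> float:
    """Slot Power Limit Value (bits 14:7) times Scale (bits 16:15)."""
    # (Values F0h and above have special encodings in newer spec
    # revisions; not handled in this sketch.)
    value = (slot_cap >> 7) & 0xFF
    scale = (slot_cap >> 15) & 0x3
    return value * SCALE[scale]

# Example: value 0x4B (75) with scale 00b -> a 75 W slot.
print(slot_power_limit_watts(0x4B << 7))  # 75.0
```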


Interesting. Thanks for pointing that out. What do the height and length of a PCIe card have to do with the slot power limits, though? And I had never heard that x4 can pull 75 W, that's pretty nuts


Presumably not all that much, but that's the way the 5.0 spec is written. Probably you'd have to ask the people who were in PCI-SIG at the time (PCIe 3.0, IIRC?) why they did it that way, though; if it's like 12VHPWR/12V-2x6, there may not be a particularly good reason.

I don't know about U.2 SSDs, but I have had some trouble getting an M.2 Optane 905P working out of a PCIe x2 M.2 slot. In my case, it turned out to be a power issue. I found an M.2 2280-to-22110 adapter that had an external power source, and that got it up and running.


Yeah, the only reason I am concerned about this is possible power limitation issues with x1. A given SSD should negotiate x1 without issue.