Consumer board, no bifurcation, four to six U.2 NVMe drives for a ZFS special vdev.
I’m struggling to figure this world out; is there a 4.0 x8 card that would do what I need? Do I need a tri-mode card? How many U.2 drives can go on each port of a tri-mode card? Just one?
Since you have no bifurcation, this can only be accomplished via a PCIe switch. To my knowledge there are no cards that handle this at the moment. The closest I could find was this:
I think this jank setup could work for a 4x NVMe drive setup, though bifurcation is required:
If you search “PCIe retimer” on Amazon, several products come up (probably all shipping from China) that should do what you want. I have no experience with these, so I can’t speak to how well they work, but one of those plus two SFF-8654 to 2x U.2 cables should give you four pretty-much-full-speed U.2 drives from a single x16 slot. Of course, YMMV, especially with PCIe Gen 4 and higher stuff.
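The lane math behind “pretty-much-full-speed” can be sanity-checked. A minimal sketch, using nominal per-lane PCIe throughput figures (approximate spec numbers after encoding overhead, not benchmarks):

```python
# Quick lane-budget math for a multi-drive slot (nominal numbers, not benchmarks).
# Per-lane throughput in GB/s after encoding overhead; values are approximate.
PER_LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938}

def lane_budget(gen, slot_lanes, drives, lanes_per_drive=4):
    """Lanes needed vs available, and each drive's share if all run at once."""
    needed = drives * lanes_per_drive
    share_lanes = min(lanes_per_drive, slot_lanes / drives)
    return needed, slot_lanes, share_lanes * PER_LANE_GBPS[gen]

# Four U.2 drives at x4 each out of one x16 slot: 16 lanes needed, 16 available,
# so every drive can run at its full x4 link rate simultaneously.
needed, avail, per_drive = lane_budget(4, 16, 4)
print(needed, avail, round(per_drive, 2))
```

With 4 x4 drives the x16 slot is exactly used up, which is why the setup above is close to full speed; a fifth or sixth drive would force sharing.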
SABRENT 4-Drive NVMe M.2 SSD to PCIe 3.0 x4 Adapter Card [EC-P3X4]
Ahem, these cards all require motherboard bifurcation to work. Sorry.
Two drawbacks:
I think you’ll struggle to find adapters with a PCIe switch that support PCIe Gen 4.
However, there are good options for PCIe Gen3:
https://www.newegg.com/p/14G-0609-001S1?Item=9SIB9R6KBS1957
https://www.newegg.com/linkreal-lrnv9547l-4i-pci-express/p/14G-0600-00033?Item=9SIBKFCK743578
Tips:
These are the products you’d be looking at to connect multiple NVMe drives without needing bifurcation support on the motherboard. They’re a lot more expensive than the cards that just use bifurcation because they require a PCIe switch chip:
The combination of these two requirements means you need an active controller.
Not if you only want to attach NVMe devices.
Depends on the adapter.
From your requirements as stated, the card below fits. The price, however, is steep!
Out of curiosity, why is that? Gen 4 is pretty damn standard these days, and most of the server stuff in storage or networking is Gen 2 or Gen 3 anyway; it just seems weird.
They exist just fine, but they come from only a few companies and are very expensive because they’re targeted solely at the high-end server segment. Only two or three companies in the world bother making PCIe switch chips, and since the only demand comes from a small segment, pricing stays high. Then, once chips are two generations old and vendors are just clearing out old stock, the price finally drops to where home users can afford them.
Let’s call it out by name: Broadcom bought the biggest independent manufacturer of PCIe switches (PLX) and jacked up the prices. That happened around the emergence of PCIe Gen 4 chips.
Yes, the chips and adapter cards exist, but not at prices that are feasible for home labs. For some reason, Gen 3 chips are still available at reasonable prices.
OK, so for now I think I’m going to use a 3.0 x4 IBM card with a breakout cable. It will give me SAS3 and enough ports to run some SSDs/SAS platters, plus I can throw it in an otherwise useless slot and save my x4 4.0 and x16 4.0 slots for later.
Nothing like chopping off 3 PCIe lanes to each NVMe in Gen3 mode.
They wouldn’t confirm any newer versions at the time that could use Gen 4, or a newer controller that bifurcates at least two lanes to each M.2 in a quad setup.
I use it as a jank ZFS special-vdev holder for an old Atom board in an 8-bay case. It works pretty well for me, because even x1 PCIe 3.0 is better than a SATA HDD in every way.
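That claim checks out on paper. A ballpark comparison, using nominal spec-sheet throughput numbers (not benchmarks; the HDD figure is an assumed round number for a fast 7200 rpm drive):

```python
# Ballpark check: nominal sequential throughput in MB/s (spec numbers, not benchmarks).
pcie3_x1  = 985   # PCIe 3.0, a single lane, after 128b/130b encoding
sata3_bus = 600   # SATA III bus ceiling
fast_hdd  = 250   # a quick 7200 rpm drive on its outer tracks, roughly

print(f"x1 Gen3 vs SATA bus: {pcie3_x1 / sata3_bus:.2f}x")
print(f"x1 Gen3 vs fast HDD: {pcie3_x1 / fast_hdd:.2f}x")
```

A single Gen 3 lane out-runs even the SATA bus itself, never mind the spinning disk behind it, and that is before counting NVMe’s latency and queue-depth advantages.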
I just came across these cards, which use x8 lanes, though they’re still Gen 3.0 and I’m not sure whether it’s just lane division or something better.
Quad PCIe NVMe M.2 SSD Adapter Card-PCI … (ASM2824 Switch) (3003K)
LRNV9524-4I PCIe x8 to 4-Port M.2 NVMe Controller Card
So, I’m all too familiar with these cards. I was looking for one to use in my PC while my server continued using a SanDisk SX350-6400.
It would have to be like the first Sabrent card. Prosumer motherboards have x4 lanes in the bottom slot, and if you used the x8 in the middle slot, you’d cut your GPU’s throughput in half. For some that isn’t a big deal, but for those like me, it’s a deal breaker.
So I ended up redoing my PC build: I went with a single Gen 4 4TB drive with caching in my X870E Nova’s main M.2 slot, and took my old 2x 2TB Gen 3 drives and put them in RAID 0 in two of the bottom slots. 8TB of raw storage is fine. My server is spec’d out to keep using the flash storage card, and I can format and retire my old 3.2TB Fusion ioScale drive to my friend’s new home server (which is almost an identical build to mine).
The only reason you’d need full x16 Gen 5 speed on the GPU is if you’re doing something like RAID 0 on two M.2 drives (also Gen 5, straight from the CPU), trying to break the limits of what a 9950X3D can actually do in terms of IOPS. For regular gaming, there’s no CPU in the consumer space that won’t bottleneck the 5090, so it will only ever saturate x8 PCIe lanes, i.e. no degradation at all. Maaaaybe if you’re trying to output to a 16K display at reasonable framerates (and 16K screens are pretty much the end of the road for where it makes sense; 32K brings essentially zero improvement to the displayed image) while streaming textures over the other 8 lanes… but even that is far-fetched.
Gaming and graphics are simply not demanding enough anymore to drive GPU development forward. x16 Gen 4 speeds I can be more on board with, but even then going to x8 is merely a 5%–10% degradation. Most people can live with going from 350 FPS to 315 FPS in CS:GO.
However, I have no idea of your specific use case or why you must have x16; all of the above is speculation. I just wanted to chime in that chasing “full lanes” is starting to look quite silly when even the fastest cards can’t properly utilize x8 Gen 5, much less x16… But then there’s always That One Guy™, ya?
That uses an ASMedia PCIe switch chip, so it’s exactly what OP wants, only in M.2 form factor instead of U.2; an M.2 to U.2 adapter cable would be needed, which is not a big deal. Though the HighPoint Rocket 1120 is still better, since it’s cheaper and supports an x16 slot, so you could have full bandwidth if you wanted.

The ASM2824 is 24 total lanes, so in that card it’s used as 8 lanes in and 16 lanes out. The way switch chips work is that they take your incoming lanes and give you extra outgoing lanes. So if, in your example, you had 4 in and 16 out, any one of the 4 end devices attached would be able to use full bandwidth (they aren’t constrained to a single lane each), but as soon as you started running two devices at full speed, your maximum bandwidth would bottleneck down to the 4 total lanes in. The benefits are that you don’t need motherboard bifurcation support, that any single drive can still run at full speed when used on its own, and that you can connect more drives than you have lanes for.
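That uplink-sharing behavior can be sketched numerically. This is an idealized model (equal sharing, no switch or protocol overhead, nominal per-lane rates), not a prediction of real-world numbers:

```python
# Idealized model of a PCIe switch's uplink sharing (no protocol overhead).
PER_LANE_GBPS = {3: 0.985, 4: 1.969}  # nominal GB/s per lane after encoding

def per_drive_bandwidth(gen, uplink_lanes, active_drives, lanes_per_drive=4):
    """Each busy drive runs at its own downstream link rate, capped by an
    equal share of the switch's uplink when several drives transfer at once."""
    drive_link = PER_LANE_GBPS[gen] * lanes_per_drive
    uplink_share = PER_LANE_GBPS[gen] * uplink_lanes / active_drives
    return min(drive_link, uplink_share)

# ASM2824-style card: Gen3 x8 uplink feeding four x4 M.2 slots.
print(per_drive_bandwidth(3, 8, 1))  # one drive alone: full x4 Gen3 link
print(per_drive_bandwidth(3, 8, 4))  # all four busy: about an x2 share each
```

The 4-in/16-out example above falls out the same way: `per_drive_bandwidth(3, 4, 2)` caps each of two busy drives at an x2 share, i.e. the x4 uplink is the bottleneck.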
It’s unfortunate ASMedia never made a Gen 4 PCIe switch chip, as their chips are priced much lower than Broadcom’s. Makes me wonder if PLX was licensing the IP out to ASMedia, and when Broadcom took over they stopped licensing it out or something.