Something really weird/interesting that I’ve been looking for is an x8 PCIe 3.0 to 4× M.2 (x2 each) card, or alternatively an x16 PCIe 3.0 to 8× M.2 (x2 each) card. Why something so strange and needlessly specific?
The older Optane M.2 32GB modules aren’t completely insane pricing-wise these days, and I’d love to play with the idea of a RAID 10 of either 4 or 8 of those drives together on all CPU lanes, as a sort of poor man’s P5800X.
We’d use more PCIe lanes (x16 3.0 vs x4 4.0, or roughly double the lanes for comparable bandwidth) and lose some of the latency benefit to software RAID overhead in certain scenarios, but capacity would be comparable and speeds should be in the same ballpark depending on workload (if we can make it work, of course).
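To sanity-check that lane math, here’s a quick back-of-the-envelope sketch in Python. The per-lane figures are the usual post-encoding approximations (~0.985 GB/s per Gen3 lane, ~1.969 GB/s per Gen4 lane), and `link_bandwidth` is just an illustrative helper name, not anything from a real tool:

```python
# Approximate usable throughput per lane after line-code overhead:
# PCIe 3.0 (8 GT/s, 128b/130b) and PCIe 4.0 (16 GT/s, 128b/130b).
GBPS_PER_LANE = {3: 0.985, 4: 1.969}  # GB/s per lane

def link_bandwidth(gen: int, lanes: int) -> float:
    """Rough usable bandwidth of a PCIe link in GB/s."""
    return GBPS_PER_LANE[gen] * lanes

x16_gen3 = link_bandwidth(3, 16)  # the hypothetical 8-drive carrier card
x4_gen4 = link_bandwidth(4, 4)    # a single x4 Gen4 drive like the P5800X

print(f"x16 Gen3: {x16_gen3:.1f} GB/s")           # ~15.8 GB/s
print(f"x4 Gen4:  {x4_gen4:.1f} GB/s")            # ~7.9 GB/s
print(f"ratio:    {x16_gen3 / x4_gen4:.2f}")      # ~2.00
```

So the x16 Gen3 slot offers roughly twice the lanes and roughly twice the raw bandwidth of a single x4 Gen4 link, consistent with the trade-off described above; whether the RAID actually saturates that depends on the drives and the RAID layer.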
Ideally I’d use the 58GB Optane 800P modules, but the form factor for those drives isn’t ideal and compatibility would be rough.
Has anyone tried anything like this and/or has any advice on the best way to go about this?
I’ve come across this card from Aplicata: https://www.aplicata.com/quattro-400/
Unfortunately it uses PLX chips instead of allowing direct lane access.
They also have the 410, which is x16 but likewise only supports 4 drives (though it does use system bifurcation, thankfully).
In an x16 form factor, the closest thing I’ve been able to find is this ASUS AI accelerator card for the Google Coral TPU M.2 modules (it also has fans and a 6-pin connector, but running those is optional in this case, and they could be disabled to run in a single-slot full-height config): ASUS IoT: Industrial Motherboard, Intelligent Edge Computer, Single Board Computer
It also uses a PCIe switch, unfortunately, but it does provide as much upstream bandwidth as downstream, presumably for systems that don’t support bifurcation.
Another option is this card from Amfeltec, which supports 6 devices on PCIe x16 in single-slot full height without any modifications: https://www.amfeltec.com/pci-express-gen-3-carrier-board-for-6-m2-or-ngsff-nf1-pcie-ssd-modules/