It’s a reasonable request, but uncommon in this particular form, which is why you’re not finding much.
At a basic level, PCI Express supports a single device per physical connection (board slot), which negotiates a link of x1, x2, x4, x8, or x16 lanes. Since this requires physical wiring, this is the simple baseline everything starts from, and only one device is possible.
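To put rough numbers on what those lane counts mean, here's a small sketch of usable link bandwidth by generation and width. The per-lane rates and encodings come from the PCIe specs; real-world throughput is lower still because of protocol overhead.

```python
# Approximate usable PCIe link bandwidth by generation and lane count.
# Gen 1/2 use 8b/10b encoding; Gen 3+ use 128b/130b.
RATES_GT = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0}   # raw rate, GT/s per lane
ENCODING = {1: 8/10, 2: 8/10, 3: 128/130, 4: 128/130, 5: 128/130}

def link_bandwidth_gbps(gen: int, lanes: int) -> float:
    """Usable bandwidth in GB/s for a link of `lanes` lanes at generation `gen`."""
    return RATES_GT[gen] * ENCODING[gen] / 8 * lanes

print(f"Gen3 x1:  {link_bandwidth_gbps(3, 1):.2f} GB/s")   # ~0.98 GB/s
print(f"Gen3 x4:  {link_bandwidth_gbps(3, 4):.2f} GB/s")   # ~3.94 GB/s
print(f"Gen4 x16: {link_bandwidth_gbps(4, 16):.2f} GB/s")  # ~31.51 GB/s
```

The point for this thread: every halving of lane count halves the ceiling, which matters a lot more for NVMe drives than it does for mining cards.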
Of course more complexity can always be added, so there are two main ways we get multiple devices onto a single board slot.
The first is for the PCIe controller powering the slot to support treating each group of lanes as belonging to an individual device, when paired with the appropriate card. Today that PCIe controller will be in either the CPU or the chipset. A bifurcation card (riser) handles power and control-signal redistribution, and physically routes each set of lanes to a new physical slot. The PCIe controller then has to be explicitly configured for the number of new slots. This is the lowest-cost option, and it's what that eBay item and things like the ASUS 4x NVMe card do.
For AMD desktops, that typically shows up in BIOS as the ability to change a block of x16 lanes to an x8/x8, x8/x4/x4, or x4/x4/x4/x4 configuration. That allows a riser card to plug into an x16 slot and provide two x8 slots or four x4 slots. In some cases it also allows plugging into an x8 slot and providing two x4 slots, though this depends on how the board created the x8 slot in the first place.
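The constraint those BIOS options encode can be sketched as a simple check: a bifurcation setting carves one slot's lanes into groups, each group must be a width the firmware supports (x4/x8/x16 on the typical AMD desktop options above), and the groups must sum to the slot's lane count. `valid_bifurcation` here is a hypothetical helper for illustration, not a real firmware API.

```python
# Widths typical AMD desktop firmware will bifurcate to, per the options above.
SUPPORTED_WIDTHS = {4, 8, 16}

def valid_bifurcation(slot_lanes: int, groups: list[int]) -> bool:
    """Hypothetical check: do these lane groups form a supported bifurcation?"""
    return (sum(groups) == slot_lanes
            and all(g in SUPPORTED_WIDTHS for g in groups))

print(valid_bifurcation(16, [8, 8]))        # True  (x8/x8)
print(valid_bifurcation(16, [8, 4, 4]))     # True  (x8/x4/x4)
print(valid_bifurcation(16, [2] * 8))       # False (x2 groups unsupported)
```

This is also why the 8-way x1 split you're after falls outside what stock firmware offers.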
As far as I am aware, AMD does not supply BIOS components that support any other configurations, so getting down to e.g. x2 or x1 would require a board manufacturer to create custom support, which is unlikely given the lack of a market for it. I also don't know whether the desktop AMD PCIe controllers support any finer-grained bifurcation; the EPYC I/O controller they're based on can only go down to x2 in general configuration, and 8 x1 is not possible.
I’m much less familiar with Intel’s current options, but my understanding is they’re even less flexible here.
The other main approach is to add a PCIe controller, a “bridge” in PCI terms, though you’ll often see them referred to by brand as PLX or PEX chips. These can support whatever the electrical engineer desires, and the design can either guarantee each device its own bandwidth or share it among many devices or slots. They’re also expensive, both the controllers themselves and the riser card designs required to support them, since it’s basically like designing the main board.
Because of the cost, these tend to show up only for specific uses (e.g. NVMe drive aggregation, or combo USB/network/storage cards), and not so much as general-purpose slot expansion, though that does exist too in both small and large forms.
The elephant in the room for all of this is physical: fast PCIe has very tight electrical tolerances, and maintaining them across connectors, boards, and cables is challenging, so chassis constraints come into play for anything low cost. The cryptocurrency-mining uses you see don’t need the speed at all, since communication over the PCIe bus is minimal, so they make do with much worse tolerances.
So you’re not likely to find practical x2 or x1 splits from bifurcation boards. There is a long-running thread here on the forums if you want to go down the bifurcation rabbit hole.
Can you describe more about your use case, though? NVMe on PCIe x1 is already bandwidth constrained, and I’m not clear what you mean by two cards sharing the same slot.
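To put a number on “bandwidth constrained”: a Gen3 lane carries 8 GT/s with 128b/130b encoding, about 0.985 GB/s usable, while a typical Gen3 x4 NVMe SSD can read at roughly 3.5 GB/s (an illustrative figure for common drives, not a spec value).

```python
# Back-of-envelope: Gen3 x1 link ceiling vs. a typical Gen3 x4 NVMe SSD.
gen3_lane_gbps = 8.0 * (128 / 130) / 8   # ~0.985 GB/s usable per lane
drive_read_gbps = 3.5                    # assumed typical Gen3 x4 SSD peak read

print(f"x1 link:    {gen3_lane_gbps:.2f} GB/s")
print(f"drive peak: {drive_read_gbps:.1f} GB/s  -> the link is the bottleneck")
```

So on x1 you'd be leaving roughly three quarters of such a drive's sequential throughput on the table before any protocol overhead.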