So... with these “dual PC” cases, can you plug the computers into each other via PCIe?

Seems like it’d be a handy way to cluster them, if it can work.

Some people have more than one computer on their desks for various reasons, e.g. a workstation plus a router/backup NAS.
Putting them in one case helps organize that desk space.

But as for connecting via PCIe: no, that's not how PCIe works. You could theoretically link them on the cheap with 40 Gbps NICs, but usually they'd just both connect to the network and share a case.

Sometimes they’d share a PSU.


Too bad… I’d imagine that having DMA connections between two systems like that could result in some interesting code. Oh well, thanks for answering me.

PCIe is basically transactional DMA over serial/differential pairs, plus a big spec on how things like device enumeration, power management, and device lifecycle are meant to look.

There are network protocols such as RoCE (pronounced "rocky") that provide remote DMA (RDMA) over the network. The idea is that your app/code gets some really slow memory allocated to it as one very big array, which it then accesses very carefully, kind of like mmap.
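To make the mmap analogy concrete, here's a rough local sketch in Python: an mmap'd file stands in for the "big, slow remote array", the way RDMA exposes a registered memory region. This is purely illustrative; real RoCE code would go through something like libibverbs, not these names.

```python
# Local analogy only: treat an mmap'd file as a "big, slow array" of
# remote memory, the way RDMA exposes a registered region to your app.
import mmap
import os
import struct
import tempfile

REGION_SIZE = 1 << 20  # 1 MiB stand-in for the "remote" region

# Back the region with a temp file (stand-in for memory on another host).
fd, path = tempfile.mkstemp()
os.ftruncate(fd, REGION_SIZE)
region = mmap.mmap(fd, REGION_SIZE)

# "RDMA write": put a 64-bit value at a chosen offset.
offset = 4096
region[offset:offset + 8] = struct.pack("<Q", 0xDEADBEEF)

# "RDMA read": fetch it back. In a real setup each access may cost
# tens of microseconds, so you access this memory sparingly and in bulk.
(value,) = struct.unpack("<Q", region[offset:offset + 8])
print(hex(value))

region.close()
os.close(fd)
os.unlink(path)
```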

In theory it saves latency, CPU, and RAM, and it allows further disaggregation of RAM caches away from compute in HPC environments, so you end up with a storage layer sitting between RAM and flash in price/performance terms. We're still typically talking tens of microseconds of latency for this remote RAM today.
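To put "tens of microseconds" in context, here's a back-of-the-envelope latency ladder. The figures are rough order-of-magnitude assumptions for illustration, not measurements:

```python
# Order-of-magnitude latency ladder (assumed figures, not benchmarks).
latencies_ns = {
    "local DRAM": 100,               # ~100 ns
    "remote RAM over RDMA": 20_000,  # ~tens of microseconds
    "NVMe flash": 100_000,           # ~100 microseconds
}

base = latencies_ns["local DRAM"]
for tier, ns in latencies_ns.items():
    print(f"{tier}: {ns} ns ({ns / base:.0f}x local DRAM)")
```

Even with rough numbers, remote RAM lands squarely between local DRAM and flash, which is exactly the "storage layer between RAM and flash" niche described above.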

It’s totally impractical for home use, as of today and likely anytime in the near future.

Not… quite? Though there are PCIe compute cards.

Then there’s this thing.

Buuuuut that thing is set up for a minicomputer setup. When I saw it, the first thing I thought of was putting it in a G5, but nope. It’s meant to go on a backplane with other hardware on the same backplane (i.e., a minicomputer).


But isn’t that what LAN is for? Even supercomputers are connected via LAN…

You can do Thunderbolt 3 file transfers, but not much else.

What you’re thinking of is PLX chips and risers. That’s a more effective way of expanding PCI-E than going through a completely different system. Even translating to Thunderbolt limits you to PCI-E 3.0 x4.

Or have one control box on top and a PCIe switch that feeds a whole bunch of compute cards.


The one I was working on used Omni-Path. Which, come to think of it, does have a networking layer, but I don’t think we were using it? I don’t know, maybe we were; it’s been a while.

This topic was automatically closed 273 days after the last reply. New replies are no longer allowed.