PCI-Express questions

I was wondering: if you literally connected two computers by the PCI-Express bus, how could you make them communicate in order to leverage the speed of the PCI-Express bus? This is for an experiment on campus.

Better to connect them via RAM slots

or just use really fast SSDs and link them on a 10 gigabit network

Also, isn't Thunderbolt basically just PCIe?

You could just buy like two ribbon cables and shove them into both slots, but the machines would have to be pretty close, and you wouldn't have any software to make them talk.

As far as I know, PCIe can't be linked together directly. You could use Thunderbolt, though.

Through PCI Express, yes, but you'll need a transfer protocol to make them talk. I don't know if Thunderbolt can do it straight away, because I remember that connecting two PCs with a USB cable is not something plug-and-play; it needs a device in between to help them communicate properly.

What a great question. The answer is yes, this kind of tech does exist in enterprise-class kit.

Have a look at InfiniBand adaptors and RDMA (Remote Direct Memory Access). This technology is now used in hyper-converged infrastructure: you can live-migrate virtual machines from one host to another without any shared storage, etc.

I had heard of an open-source RDMA project that might be worth a look for a university study/experiment: http://rdma.sourceforge.net/
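To give a flavour of what RDMA programming looks like on Linux, here is a minimal, heavily simplified client sketch using the librdmacm helper API. This is just an illustration, not the sourceforge project's own code; the address 192.168.1.2 and port 7471 are made-up placeholders, and a server that has posted a matching receive is assumed on the other end.

```c
/*
 * Minimal RDMA client sketch (librdmacm helper API).
 * "192.168.1.2" and port "7471" are placeholders, and a peer that has
 * posted a matching receive is assumed to be listening on that address.
 *
 * Build roughly as: gcc rdma_client.c -o rdma_client -lrdmacm -libverbs
 */
#include <stdio.h>
#include <string.h>
#include <rdma/rdma_cma.h>
#include <rdma/rdma_verbs.h>

int main(void)
{
	struct rdma_addrinfo hints, *res;
	struct ibv_qp_init_attr attr;
	struct rdma_cm_id *id;
	struct ibv_mr *mr;
	struct ibv_wc wc;
	char msg[] = "hello over the fabric";

	/* Resolve the remote RDMA endpoint (placeholder address/port). */
	memset(&hints, 0, sizeof hints);
	hints.ai_port_space = RDMA_PS_TCP;
	if (rdma_getaddrinfo("192.168.1.2", "7471", &hints, &res)) {
		perror("rdma_getaddrinfo");
		return 1;
	}

	/* Create a connected endpoint with a tiny queue pair. */
	memset(&attr, 0, sizeof attr);
	attr.cap.max_send_wr = attr.cap.max_recv_wr = 1;
	attr.cap.max_send_sge = attr.cap.max_recv_sge = 1;
	attr.sq_sig_all = 1;
	if (rdma_create_ep(&id, res, NULL, &attr)) {
		perror("rdma_create_ep");
		return 1;
	}

	/* Register the buffer so the adapter can DMA it directly. */
	mr = rdma_reg_msgs(id, msg, sizeof msg);
	if (!mr) {
		perror("rdma_reg_msgs");
		return 1;
	}

	if (rdma_connect(id, NULL)) {
		perror("rdma_connect");
		return 1;
	}

	/* Post one send and block until its completion is reported. */
	if (rdma_post_send(id, NULL, msg, sizeof msg, mr, 0) ||
	    rdma_get_send_comp(id, &wc) <= 0) {
		perror("send");
		return 1;
	}

	rdma_disconnect(id);
	rdma_dereg_mr(mr);
	rdma_destroy_ep(id);
	rdma_freeaddrinfo(res);
	return 0;
}
```

The point of the sketch is that once the connection is up, data moves between the two machines' memory with the network card doing the copying, which is exactly the "use the bus speed, skip the software stack" idea being asked about.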

InfiniBand is an interesting suggestion, but my implementation involves physically embedding multiple systems on the same board and connecting them via the northbridge and PCI-Express for high-speed communication. I am aware that Intel has QPI, but I am using Zen CPUs to do this for a specialized parallel CAD workload. In my testing so far, the higher clock speed makes up for the loss in IPC.

Basically, is it possible to communicate directly over PCI-Express without emulating Ethernet and carrying a bloated emulation layer like that? Is there a way to communicate directly without having your device pretend to be something else just to use an existing, widely supported standard?

You need an interface bus such as AXI or Ethernet; otherwise you will have bus contention between the two PCIe host controllers.

I don't recommend InfiniBand, but it's possible.

Possible?
Yes. I work at IBM on the Z series mainframes, which now use PCIe as the I/O standard, and during testing they make PCIe loop-backs to test the interfaces. So it's entirely possible.
Difficulty?
I have no idea; it would require you to write your own kernel drivers. Linux would probably be the best place to build this on.
You could use a board-to-board interconnect that's short enough that you don't need a re-timer chip (i.e. a chip that boosts the signal and "re-times" it). Another option would be to find something pre-made for transmitting PCIe over a long distance...
Like these:

These would theoretically just be PCIe re-timers, but it does appear there is another chip on the board, so it may be something Nvidia-specific. There are expansion cards that are just re-timers, but they are generally expensive.
For example, Magma makes PCIe expansion chassis, and they use similar cards:


This board is very likely just re-timers; you could look up the four chips' part numbers to be sure.
Problem is, even though this one card is cheap, it's the first I've seen, and good luck finding another under $500 or so...
Another problem... Here's the cable for it:

So for your case, you're likely going to want to get some Chinese PCIe expanders, the ones used for crypto mining...
Take off the male end and one cable on one side, and the male end on the other, then solder them together with the correct pin-out so everything lines up. Honestly, I have no idea exactly how you'd do that. After that you can get to writing Linux kernel-level drivers (a rough sketch of what that skeleton looks like is below).
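For a sense of scale, here's a bare-bones sketch of what such a Linux PCI driver skeleton could look like. The 0x1234/0x5678 vendor/device IDs are placeholders (you'd match whatever the far side actually enumerates as), and a real host-to-host link would still need DMA buffers, interrupt handling, and some protocol on top of the mapped BAR; this only claims the device and maps BAR 0.

```c
/*
 * Sketch of a minimal Linux PCI driver skeleton.
 * The 0x1234/0x5678 vendor/device IDs are placeholders, not real hardware.
 */
#include <linux/module.h>
#include <linux/pci.h>

static void __iomem *bar0;

static const struct pci_device_id demo_ids[] = {
	{ PCI_DEVICE(0x1234, 0x5678) },	/* placeholder IDs */
	{ 0 }
};
MODULE_DEVICE_TABLE(pci, demo_ids);

static int demo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	int ret;

	ret = pci_enable_device(pdev);
	if (ret)
		return ret;

	ret = pci_request_regions(pdev, "pcie-link-demo");
	if (ret) {
		pci_disable_device(pdev);
		return ret;
	}

	/* Map BAR 0 so the CPU can read/write the device's memory window. */
	bar0 = pci_iomap(pdev, 0, 0);
	if (!bar0) {
		pci_release_regions(pdev);
		pci_disable_device(pdev);
		return -ENOMEM;
	}

	pci_set_master(pdev);	/* allow the device to act as a bus master */
	dev_info(&pdev->dev, "pcie-link-demo: BAR0 mapped\n");
	return 0;
}

static void demo_remove(struct pci_dev *pdev)
{
	pci_iounmap(pdev, bar0);
	pci_release_regions(pdev);
	pci_disable_device(pdev);
}

static struct pci_driver demo_driver = {
	.name = "pcie-link-demo",
	.id_table = demo_ids,
	.probe = demo_probe,
	.remove = demo_remove,
};
module_pci_driver(demo_driver);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Sketch of a PCIe device driver skeleton");
```

The skeleton only gets you to the point where one host can see a window of the other side; everything above that (moving data, signalling, flow control) is the part you'd have to design yourself.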

Hope that answers all of your questions:)

Would the data throughput and latency be worth all of the trouble of doing this? Or could you have one of the two devices emulate a 10-gigabit Ethernet connection to the other and have the devices act as if they are connected over a network? My goal is to have a weak host system with powerful slaves connected to it, using their graphics and/or other horsepower to drive them.