Connecting two Xen VMs directly

OK, so I managed to find out how to create such a link, and the key options turned out to be driver_domain and backend in the vif specification.

https://xenbits.xen.org/docs/unstable/man/xl.cfg.5.html#Other-Options
https://xenbits.xen.org/docs/unstable/man/xl-network-configuration.5.html

As a proof of concept I made the following two PV domains:

name = "Test1"
type = "pv"
driver_domain = 1

memory = 2048
maxmem = 2048
vcpus = 2

kernel = "/mnt/arch/boot/x86_64/vmlinuz-linux"
ramdisk = "/mnt/arch/boot/x86_64/initramfs-linux.img"
extra = "archisobasedir=arch archisodevice=UUID=2024-01-01-16-44-54-00"

disk = [ 
    "file:/opt/xen/isos/archlinux-2024.01.01-x86_64.iso,hdc:cdrom,r",
]
vif = [
    "mac=00:16:3e:11:22:33,bridge=mgmt-lan-br",
]

name = "Test2"
type = "pv"

memory = 2048
maxmem = 2048
vcpus = 2

kernel = "/mnt/arch/boot/x86_64/vmlinuz-linux"
ramdisk = "/mnt/arch/boot/x86_64/initramfs-linux.img"
extra = "archisobasedir=arch archisodevice=UUID=2024-01-01-16-44-54-00"

disk = [ 
    "file:/opt/xen/isos/archlinux-2024.01.01-x86_64.iso,hdc:cdrom,r",
]
vif = [
    "mac=00:16:3e:22:33:44,bridge=mgmt-lan-br",
    "mac=00:16:3e:33:44:55,backend=Test1",
]
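
To bring the pair up, the backend domain has to be running before the domain whose vif points at it, so Test1 goes first. A minimal sketch, assuming the configs are saved as test1.cfg and test2.cfg (the file names are my own):

# Start the backend/driver domain first, then the frontend
xl create test1.cfg
xl create test2.cfg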

Directly after booting you can see the following state: the backend interface for the second vif (vif10.1 in my case) shows up inside Test1, and the frontend appears as an extra network device in Test2.
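
You can check this from dom0 and inside the guests; a quick sketch, assuming the frontend in Test2 came up as eth1 (names may differ):

# From dom0: list Test2's vifs; the BE column shows the backend domain
xl network-list Test2

# Inside Test1: the backend device for Test2's second vif
ip link show vif10.1

# Inside Test2: the frontend is a regular network device
ip link show eth1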

After bringing up vif10.1 and assigning IP addresses on both ends you can do a ping:
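
Something along these lines; the 10.0.0.0/24 addresses are just an example I picked:

# Inside Test1 (backend side)
ip link set vif10.1 up
ip addr add 10.0.0.1/24 dev vif10.1

# Inside Test2 (frontend side, again assuming eth1)
ip link set eth1 up
ip addr add 10.0.0.2/24 dev eth1
ping 10.0.0.1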


Perf is… not terrible, but honestly could be better: about 7 Gb/s in a basic iperf3 run. Increasing the MTU to 9000 did not change anything.
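
For reference, the test was nothing fancier than this (addresses as in the example above):

# Inside Test1: run the server
iperf3 -s

# Inside Test2: run the client against the backend's address
iperf3 -c 10.0.0.1

# The jumbo-frame attempt: raise the MTU on both ends
ip link set vif10.1 mtu 9000    # in Test1
ip link set eth1 mtu 9000       # in Test2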

With HVMs it's also possible, but the performance is noticeably worse, at ~3.3 Gb/s.
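
The vif line itself is the same for an HVM guest; my assumption is that you'd want type=vif there, so the guest uses the PV netfront path instead of the emulated ioemu NIC:

vif = [
    "mac=00:16:3e:33:44:55,backend=Test1,type=vif",
]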

For comparison, I ran the same test between the same HVM domains, but communicating through SR-IOV VFs on a 10G Chelsio NIC (T540-BT).
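
A rough sketch of that setup; the interface name, VF count and PCI address below are placeholders for my system:

# In dom0: create VFs on the Chelsio port
echo 2 > /sys/class/net/enp3s0f4/device/sriov_numvfs

# Make a VF assignable, then reference it in the HVM config
xl pci-assignable-add 0000:03:01.0

# In the domain config:
pci = [ "0000:03:01.0" ]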

The associated physical port is unused here. For a port that is in use, the speed is limited by the physical link speed: e.g. if I use a port connected to a 1G appliance, the VM-to-VM speed is also limited to 1 Gb/s, although there might be some way to tune that.