Have there been any attempts to get Looking Glass running under Xen (rather than KVM)?
I personally have a working GPU passthrough setup under Xen, which I don’t really want to change, so I gave it a try.
I had a brief look through how LG works on KVM, and there seem to be two ways it could be ported to Xen:
- Continue to use ivshmem (technically unsupported on Xen) with some hacks and manually map the shared memory into the host (see below for details). This would require minimal new code
- Use Xen grant tables (also available via the vchan library) to share memory in the Xen supported way. Would probably need to write Xen-specific drivers both for host and guest
Obviously continuing to use ivshmem would be the least effort, so it's what I've tried so far. While Xen doesn't allow you to manually add arguments to the qemu command line, it does give you access to the qemu monitor to add devices after the VM has been created:
```
xl qemu-monitor-command <guest> 'object_add memory-backend-ram,id=ivshmem,share=on,size=64M'
xl qemu-monitor-command <guest> 'device_add ivshmem-plain,memdev=ivshmem'
```
(This allocates memory in the guest, so you might need to use xl mem-max <guest> and xl mem-set Domain-0 to raise the guest memory limit and free up memory from Dom0 to move to the guest.)
Notice how the memory backend is a memory-backend-ram, rather than a memory-backend-file. This is because Xen doesn't support file-backed memory backends (qemu will return an error if you try). But this means Xen allocates the memory itself, with no direct way for the host to access it. We have to:
- Find where in guest memory the memory has been allocated to
- Map the memory into the host
Fortunately, there’s some more qemu monitor commands to use to find the memory in the guest:
```
# xl qemu-monitor-command <guest> 'info pci'
...snip...
  Bus  0, device   7, function 0:
    RAM controller: PCI device 1af4:1110
      PCI subsystem 1af4:1100
      BAR0: 32 bit memory at 0xf6312000 [0xf63120ff].
      BAR2: 64 bit prefetchable memory at 0xf0000000 [0xf3ffffff].
      id ""
```
Meaning the memory is at guest physical addresses [0xf0000000, 0xf3ffffff] (1af4:1110 is the ivshmem PCI ID, and BAR2 is where ivshmem exposes the shared memory). This address can change, so it needs to be looked up again every time the guest is booted and ivshmem added.
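Since the BAR2 address has to be looked up on every boot, the host side will need to parse it out of the monitor output. A rough sketch of that (my own helper, not part of LG; it naively assumes ivshmem is the only device with a 64-bit prefetchable BAR2, where a robust version would first locate the 1af4:1110 device block):

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Pull the guest-physical range out of the BAR2 line of 'info pci' output,
 * e.g. "BAR2: 64 bit prefetchable memory at 0xf0000000 [0xf3ffffff]."
 * Returns 0 on success, -1 if no matching line is found. */
static int find_ivshmem_bar2(const char *info, uint64_t *base, uint64_t *end)
{
    const char *line = strstr(info, "BAR2: 64 bit prefetchable memory at ");
    if (!line)
        return -1;
    if (sscanf(line, "BAR2: 64 bit prefetchable memory at 0x%" SCNx64
                     " [0x%" SCNx64 "].", base, end) != 2)
        return -1;
    return 0;
}
```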
There might be other ways to find the memory, like via the qemu QOM. I was browsing the QOM to find a "cleaner" way to get the memory info, rather than parsing info pci's output, and /objects/ivshmem/ivshmem looks promising:
```
# xl qemu-monitor-command <guest> 'qom-list /objects/ivshmem/ivshmem'
type (string)
container (link<qemu:memory-region>)
addr (uint64)
size (uint64)
priority (uint32)
```
qom-get isn't in any qemu release yet (it was only added a few days ago in commit 89cf4fe34f4afa671a2ab5d9430021ea12106274), so I'll have to compile qemu from source before I can test this out.
After getting this far and running Looking Glass on the guest, I was able to use another qemu monitor command to double check that LG is indeed writing to the memory segment:
```
# xl qemu-monitor-command <guest> 'xp/4cw 0xf0000000'
00000000f0000000: '[' '[' 'K' 'V' 'M' 'F' 'R' ']' ']' '\x00' '\x00' '\x00' '\x08' '\x00' '\x00' '\x00'
```
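Once the memory is mapped into the host, the same sanity check can be done in code. A trivial sketch (the helper name is mine; the "[[KVMFR]]" magic string is exactly the bytes the dump above shows at the start of BAR2):

```c
#include <stdbool.h>
#include <string.h>

/* Check that a mapped shared-memory region starts with the Looking Glass
 * "[[KVMFR]]" magic, matching the xp dump of the start of BAR2. */
static bool has_kvmfr_magic(const void *shm)
{
    static const char magic[] = "[[KVMFR]]";
    return memcmp(shm, magic, sizeof(magic) - 1) == 0;
}
```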
So now all that’s left is making Looking Glass map in memory from the guest. Fortunately Xen has a library that seems to allow just that: libxenforeignmemory.
This is as far as I’ve gotten, where I’m about to start modifying Looking Glass to map in the shared memory with libxenforeignmemory.
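libxenforeignmemory maps guest memory by guest frame number, so my plan is roughly the following (a completely untested sketch, assuming 4 KiB pages; map_ivshmem is my own name, with domid coming from xl domid <guest> and base/size from the BAR2 line of info pci):

```c
#include <stdint.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <xenforeignmemory.h>

/* Untested sketch: map the guest's ivshmem BAR2 region into this process.
 * 'domid' comes from `xl domid <guest>`, 'base' and 'size' from the BAR2
 * line of `info pci`. Assumes 4 KiB pages. */
static void *map_ivshmem(uint32_t domid, uint64_t base, size_t size)
{
    xenforeignmemory_handle *fmem = xenforeignmemory_open(NULL, 0);
    if (!fmem)
        return NULL;

    size_t pages = size / 4096;
    xen_pfn_t *pfns = calloc(pages, sizeof(*pfns));
    int *errs = calloc(pages, sizeof(*errs));
    if (!pfns || !errs) {
        free(pfns);
        free(errs);
        xenforeignmemory_close(fmem);
        return NULL;
    }
    for (size_t i = 0; i < pages; i++)
        pfns[i] = (xen_pfn_t)(base / 4096) + i;

    /* Returns NULL on total failure; otherwise errs[] holds per-page
     * errors for any frames that couldn't be mapped. */
    void *shm = xenforeignmemory_map(fmem, domid, PROT_READ | PROT_WRITE,
                                     pages, pfns, errs);
    free(pfns);
    free(errs);
    /* Note: 'fmem' must stay open while the mapping is in use, and the
     * mapping should eventually be released with xenforeignmemory_unmap();
     * real code would keep both around instead of leaking the handle. */
    return shm;
}
```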
As you can see, a lot of xl qemu-monitor-command invocations have been involved, and that command is explicitly unsupported by Xen, so this is all a bit of a big hack.
Do you know of any other efforts to port Looking Glass to Xen? If I do the porting using these big xl qemu-monitor-command hacks, do you think I'll be able to upstream the work to Looking Glass? (I don't think I'll ever be able to get ivshmem support upstreamed to Xen, since grant tables already exist.)
I'll keep this thread updated on my efforts, so hopefully other people might be able to make use of my (albeit hacky) work.