
Looking Glass on Xen

Have there been any attempts to get Looking Glass running under Xen (rather than KVM)?
I personally have a working GPU passthrough setup under Xen, which I don’t really want to change, so I gave it a try.

I had a brief look at how LG works on KVM, and there seem to be two ways it could be ported to Xen:

  • Continue to use ivshmem (technically unsupported on Xen) with some hacks, manually mapping the shared memory into the host (see below for details). This would require minimal new code
  • Use Xen grant tables (also available via the vchan library) to share memory in the Xen-supported way. This would probably require writing Xen-specific drivers for both host and guest

Obviously continuing to use ivshmem would be the least effort, so it’s what I’ve tried so far. While Xen doesn’t let you manually add arguments to the qemu command line, it does give you access to the qemu monitor, so devices can be added after the VM has been created:

xl qemu-monitor-command <guest> 'object_add memory-backend-ram,id=ivshmem,share=on,size=64M'
xl qemu-monitor-command <guest> 'device_add ivshmem-plain,memdev=ivshmem'

(This allocates memory for the guest, so you might need xl mem-max <guest> to raise the guest’s memory limit, and xl mem-set Domain-0 to free up memory from Dom0 to move to the guest.)

Notice how the memory backend is a memory-backend-ram, rather than a memory-backend-file. This is because Xen doesn’t support file-backed memory backends (qemu will return an error if you try). But this makes Xen allocate the memory, with no direct way for the host to access it. We have to:

  1. Find where in guest physical memory the memory has been allocated
  2. Map that memory into the host

Fortunately, there are more qemu monitor commands we can use to find the memory in the guest:

# xl qemu-monitor-command <guest> 'info pci'
...snip...
  Bus  0, device   7, function 0:
    RAM controller: PCI device 1af4:1110
      PCI subsystem 1af4:1100
      BAR0: 32 bit memory at 0xf6312000 [0xf63120ff].
      BAR2: 64 bit prefetchable memory at 0xf0000000 [0xf3ffffff].
      id ""

This means the shared memory lives at guest physical addresses [0xf0000000, 0xf3ffffff] (1af4:1110 is the ivshmem PCI ID, and BAR2 is where ivshmem exposes the shared memory). The address can change, so it needs to be looked up again every time the guest is booted and ivshmem is added.

There might be other ways to find the memory, like via the qemu QOM

I was browsing the QOM to find a “cleaner” way to get the memory info, rather than parsing info pci’s output, and /objects/ivshmem/ivshmem[0] looks promising:

# xl qemu-monitor-command <guest> 'qom-list /objects/ivshmem/ivshmem[0]'
type (string)
container (link<qemu:memory-region>)
addr (uint64)
size (uint64)
priority (uint32)

But unfortunately, qom-get isn’t in any qemu release yet (it was only added a few days ago in commit 89cf4fe34f4afa671a2ab5d9430021ea12106274), so I’ll have to compile qemu from source before I can test this out.

After getting this far and running Looking Glass in the guest, I was able to use another qemu monitor command to double-check that LG is indeed writing to the memory segment:

# xl qemu-monitor-command <guest> 'xp/4cw 0xf0000000'
00000000f0000000: '[' '[' 'K' 'V' 'M' 'F' 'R' ']' ']' '\x00' '\x00' '\x00' '\x08' '\x00' '\x00' '\x00'

So now all that’s left is making Looking Glass map in memory from the guest. Fortunately Xen has a library that seems to allow just that: libxenforeignmemory.

This is as far as I’ve gotten: I’m about to start modifying Looking Glass to map in the shared memory with libxenforeignmemory.
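
For the record, here’s roughly what I expect that mapping code to look like. This is an untested sketch: the domid is hypothetical (it would come from xl domid <guest>), the BAR2 base is the one found via info pci above, and the size matches the 64M given to memory-backend-ram:

/* Untested sketch: map the guest's ivshmem BAR2 into Dom0.
 * Build: gcc -o map_ivshmem map_ivshmem.c -lxenforeignmemory
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <sys/mman.h>
#include <xenforeignmemory.h>

#define PAGE_SHIFT 12

int main(void)
{
    uint32_t domid = 1;                 /* hypothetical; see `xl domid <guest>` */
    uint64_t bar2  = 0xf0000000ULL;     /* BAR2 base from `info pci` above      */
    size_t   pages = (64UL << 20) >> PAGE_SHIFT;

    xenforeignmemory_handle *fmem = xenforeignmemory_open(NULL, 0);
    if (!fmem) { perror("xenforeignmemory_open"); return 1; }

    /* xenforeignmemory_map() takes one guest frame number per page */
    xen_pfn_t *gfns = malloc(pages * sizeof(*gfns));
    for (size_t i = 0; i < pages; i++)
        gfns[i] = (bar2 >> PAGE_SHIFT) + i;

    void *map = xenforeignmemory_map(fmem, domid, PROT_READ | PROT_WRITE,
                                     pages, gfns, NULL);
    if (!map) { perror("xenforeignmemory_map"); return 1; }

    /* If this works, the KVMFR magic from the xp dump should be at offset 0 */
    printf("first bytes: %.9s\n", (const char *)map);

    xenforeignmemory_unmap(fmem, map, pages);
    xenforeignmemory_close(fmem);
    free(gfns);
    return 0;
}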

As you can see, a lot of xl qemu-monitor-command invocations have been involved, which Xen explicitly doesn’t support, so this is all a bit of a big hack.

Do you know of any other efforts to port Looking Glass to Xen? If I do the porting using these big xl qemu-monitor-command hacks, do you think I’ll be able to upstream the work to Looking Glass (I don’t think I’ll ever be able to get ivshmem support upstreamed to Xen, since grant tables already exist)?

I’ll keep this thread updated on my efforts, so hopefully other people will be able to make use of my (albeit hacky) work.


This is the first I’ve seen of this sort of attempt, but I’m not as clued in as gnif.

Looks like you really did your homework, and I’ve gotta say, this is really good. I’m looking forward to updates.

Unsupported just means that you likely won’t get the sign-off from upstream.

I suspect the number of Xen passthrough setups is far smaller than KVM’s. Though, I may be wrong.

You already are at the crux of the issue: you need to petition the Xen people for a method to obtain uncached access to this RAM segment; without it there is nothing more that can be done. LG doesn’t need to “support Xen”, it’s the other way around.


Just tried to map in the guest ivshmem region with libxenforeignmemory, but it’s failing with “Invalid argument”.

What’s weird is that it fails with “Invalid argument” for any PCI device memory (which is what ivshmem is), while mapping regular guest memory works just fine.
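
To narrow this down, one thing I plan to try (again an untested sketch, reusing fmem, domid, gfns and pages from the snippet above) is passing the optional per-page error array to xenforeignmemory_map, which should show exactly which frames get rejected:

/* Sketch: per-page error reporting; err entries are 0 for pages that
 * mapped successfully, nonzero (an errno value) for pages that failed. */
int *err = calloc(pages, sizeof(*err));
void *map = xenforeignmemory_map(fmem, domid, PROT_READ | PROT_WRITE,
                                 pages, gfns, err);
for (size_t i = 0; i < pages; i++)
    if (err[i])
        fprintf(stderr, "gfn 0x%lx: err %d\n",
                (unsigned long)gfns[i], err[i]);
free(err);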

I’m now asking the xen-devel mailing list if they know what’s going on, and in the meantime digging through the qemu source code to find out exactly how it creates a PCI device in the guest.