Running looking-glass-host in a Linux guest VM

Hi there,

I’m not allowed to post links to the documentation and GitHub repo, so I had to break the URLs below.

I have Looking Glass working great with Windows 10/11 guest VMs on my Linux host/hypervisor, but now I want to connect Linux guests to the same client.

On the hypervisor side, I use the same libvirt XML snippet to create the client’s backing store:

    <shmem name="looking-glass">
      <model type="ivshmem-plain"/>
      <size unit="M">32</size>
    </shmem>
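For reference, the sizing rule I followed (this is my reading of the Looking Glass install docs, so double-check the exact wording there): width × height × 4 bytes/pixel × 2 frames, converted to MiB, plus roughly 10 MiB of overhead, rounded up to the next power of two. A quick sanity check for 1920×1080:

    # Hedged shmem sizing sketch -- variable names are mine, and the
    # formula is my interpretation of the docs, not authoritative.
    w=1920 h=1080
    bytes=$((w * h * 4 * 2))          # two 32-bit frames
    mib=$((bytes / 1024 / 1024 + 10)) # integer MiB plus ~10 MiB overhead
    size=1
    while [ "$size" -lt "$mib" ]; do size=$((size * 2)); done
    echo "${size} MiB"                # 32 MiB, matching <size unit="M">32</size>
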

In the guest, I’ve compiled both the [kernel module] (ttps://looking-glass.io/docs/B5.0.1/module/#compiling-loading-manual) and [host binary] (ttps://looking-glass.io/docs/B5.0.1/build/#for-linux-on-linux) according to the instructions.

    $ ls -l /dev/kvmfr0
    crw-rw----+ 1 user kvm 243, 0 Mar 19 14:56 /dev/kvmfr0
    $ lspci -k -s 02:01.0
    02:01.0 RAM memory: Red Hat, Inc. Inter-VM shared memory (rev 01)
    	Subsystem: Red Hat, Inc. QEMU Virtual Machine
    	Kernel modules: kvmfr

But when I run the host binary, it never reports that the capture has started:

    $ ./build/looking-glass-host
    [I]   4596620845               app.c:520  | app_main                       | Looking for configuration file at: /home/user/looking-glass-host.ini
    [I]   4596620888               app.c:522  | app_main                       | Configuration file loaded
    [I]   4596620894               app.c:544  | app_main                       | Looking Glass Host (B5.0.1+)
    [I]   4596620972           cpuinfo.c:36   | lgDebugCPU                     | CPU Model: Intel Xeon Processor (Cooperlake)
    [I]   4596620978           cpuinfo.c:37   | lgDebugCPU                     | CPU: 4 cores, 8 threads
    [I]   4596620980           ivshmem.c:128  | ivshmemOpenDev                 | KVMFR Device     : /dev/kvmfr0
    [I]   4596621813               app.c:561  | app_main                       | IVSHMEM Size     : 32 MiB
    [I]   4596621820               app.c:562  | app_main                       | IVSHMEM Address  : 0x7F3876831000
    [I]   4596621821               app.c:563  | app_main                       | Max Pointer Size : 1024 KiB
    [I]   4596621822               app.c:564  | app_main                       | KVMFR Version    : 14
    [I]   4596622837               app.c:625  | app_main                       | Max Frame Size   : 14 MiB
    [I]   4596622847               app.c:650  | app_main                       | Trying           : XCB
    [I]   4596623084               xcb.c:107  | xcb_init                       | Frame Size       : 1920 x 1080
    [I]   4596623111               xcb.c:125  | xcb_init                       | Frame Data       : 0x7F3876048000
    [I]   4596623118               app.c:675  | app_main                       | Using            : XCB
    [I]   4596623120               app.c:676  | app_main                       | Capture Method   : Asynchronous

(This is the end of the log; the application has not [printed] (ttps://github.com/gnif/LookingGlass/blob/B5.0.1/host/src/app.c#L350) ==== [ Capture Start ] ====)

I ran it through gdb and found that it’s stuck in an infinite loop at [lgmpHostQueueHasSubs] (ttps://github.com/gnif/LookingGlass/blob/B5.0.1/host/src/app.c#L694-L695).

I’m not familiar with the Looking Glass protocol. How does the host queue get subscribers?
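My current mental model, pieced together from the linked source rather than any documentation (so treat it as an assumption): lgmpHostQueueHasSubs only becomes true once a looking-glass-client attaches to the same shared memory and subscribes to the frame queue, so with no client running against this VM the host spins forever. A toy re-creation of what I think the loop is doing:

    # Toy model only: "subs" stands in for lgmpHostQueueHasSubs(), which
    # (as far as I can tell) only a connecting client can ever flip.
    subs=0
    tries=0
    while [ "$subs" -eq 0 ] && [ "$tries" -lt 3 ]; do  # capped at 3 for the demo
      echo "no subscribers yet"
      tries=$((tries + 1))
    done
    # With no client attaching, the real loop never exits -- which would
    # match my log stopping before "==== [ Capture Start ] ===="
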

I’m not sure what else I can do to connect the kernel module in the guest to the IVSHMEM file on the hypervisor. I’m unsure whether I need to install the kernel module on the host as well, but then what’s the point of the IVSHMEM file in libvirt…

On a slightly unrelated note, the [source repository] (ttps://github.com/gnif/LookingGlass/blob/B5.0.1/module/Makefile#L15) has a make load target that expects a /dev/uio0 device, but my uio module doesn’t create one…?

    $ ls -l /dev/uio*
    ls: cannot access '/dev/uio*': No such file or directory

Thanks in advance,
-0xdc.

At this time we do not support the Linux host application, it’s in an unfinished and broken state. There is work progressing on it but don’t expect too much until we officially announce it.

Fair enough.

When support does come, is this the setup you would expect? Is there any way this setup can help, e.g. with testing?

Too early to say.