UPS drops very low if Kodi is running on the Linux host

I'm trying Looking Glass for the first time and really like it, but I have a little problem. If Kodi is running at the same time as the looking-glass-client, I get a really low UPS of about 4-8, but still 120 FPS. Without Kodi in the background I get about 12-24 UPS. It doesn't matter whether Kodi is actually doing any work or not. Why could this be?

I have a crazy setup, so this is more of a theoretical question than a real problem.

Setup:

Current Arch Linux on an Intel NUC with an i3-7100U CPU @ 2.40GHz and Intel HD 620 graphics.

The HD 620 is shared with the VM via Intel GVT-g, so both the Linux host and the Windows 10 guest are using the same graphics card.

This is the first instance of this we have seen, so what to expect is unknown. It does sound like the GPU is simply too slow to keep up with the workload.

The problem doesn't show up if I connect to the VM via the QXL or SPICE video driver, but then I'm missing many graphics functions inside the guest. But shouldn't the workload be the same or higher if I connect entirely via SPICE? Kodi isn't doing anything in the background other than drawing the GUI; can that really be so expensive for the GPU?

I fully understand that my hardware may be at its limit; I'm only interested in getting the maximum out of it.

Kodi doesn't 'draw the GUI', it renders it… the entire application uses the 3D pipeline.

QXL uses an entirely different method to transfer the data: because it emulates a GPU, it can very efficiently send those drawing commands and updates over a socket. The downside is, no 3D.

LG doesn't get this level of access: we capture each update, copy the entire frame into shared RAM, and then back out again into the GPU. Since you're splitting up a single physical GPU, the transfer from GPU to RAM and then back into GPU RAM again is likely the issue, especially as it's very likely saturating the GPU's bus.

Intel GPUs use system RAM, but thank you very much for the deeper explanation. I think I understand the problem a little better now, and I'm already happy with the situation: I can use 3D in a Windows VM as long as I close Kodi on Linux.

Thank you again for this nice software

Yes, which makes matters worse: now, instead of being able to do a DMA upload into GPU RAM, it has to do a CPU copy into the shared system RAM. LG has no way to know that the GPU uses shared RAM, and even if it did, it would have no way to accelerate it.

Have you tried using the dma-buf display method? You can output that through SPICE or VNC: https://wiki.archlinux.org/index.php/Intel_GVT-g If so, how does that compare?
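For reference, here is a minimal sketch of what that looks like with plain QEMU, assuming the GVT-g vGPU mdev already exists (the UUID, disk image, memory size and exact option spelling are placeholders and vary by setup and QEMU version; see the wiki page above for the full procedure):

```
# hypothetical UUID of an already-created GVT-g vGPU -- replace with your own
GVT_GUID=d2b97e0f-0000-0000-0000-000000000000

qemu-system-x86_64 \
  -enable-kvm -m 4G -cpu host \
  -device vfio-pci,sysfsdev=/sys/bus/mdev/devices/$GVT_GUID,display=on \
  -display gtk,gl=on \
  -drive file=win10.qcow2,if=virtio   # existing guest image assumed
```

With `display=on` QEMU can pick the guest framebuffer up as a dma-buf and present it directly (here via the GTK window with OpenGL), so there's no round trip through shared system RAM like with LG's capture path.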

I saw a guide somewhere that talked about how the final number in the mdev type is sort of like a priority number for the number of "time slices" that the vGPU gets. I wouldn't be surprised, though, if the host portion is able to barge in and take a lot for itself, starving the guest. You could try using a higher-priority mdev device… and conversely maybe a lower priority could help? It's all coming from the same bucket, though, so it may well not make any difference.
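If you want to experiment with that, the available types and what each one gets can be read out of sysfs, roughly like this (the PCI address and type names are examples and differ per hardware and kernel):

```
# list the vGPU types the host iGPU offers
ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/
# e.g. i915-GVTg_V5_1  i915-GVTg_V5_2  i915-GVTg_V5_4  i915-GVTg_V5_8

# each type's description shows its graphics memory sizes, max resolution
# and (on recent kernels) its scheduling weight
cat /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/description

# create a vGPU instance of a given type (this is the UUID you pass to QEMU)
echo "$(uuidgen)" > /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/create
```

I have no idea whether picking a different type actually changes how the host and guest share the render engine, so treat it as an experiment.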

I was going to be trying Looking Glass with GVT-g myself soon, so this thread is interesting to me.

Someone else seemed to have similar behavior to yours with Looking Glass here:

Thanks for the ideas. I haven't tested anything other than the default virtual GPU from QEMU and Looking Glass. Since I have too many projects, this isn't the highest priority on my list.

Actually, a new version of LG is in the works that creates a pixmap from a dmabuf. Considerable work has gone into the host and the kvmfr kernel module to allow for this, even with VM->VM feeds. Results are amazing so far.

It will be a while off still because, at the same time, I have decided to ditch SDL and have been writing a new agnostic layer that is better suited to our requirements. Again, however, the results are outstanding.