Looking Glass on a Laptop?

Having GPU-accelerated guests on the same laptop display that the host is running on seems like it would require two GPUs… Has anyone seen any clever workarounds? I'm especially curious whether any modern AMD laptop with integrated graphics can run Windows guests with GPU-passthrough-like performance.
Thanks!

This is not really an LG-related topic. LG by definition requires two GPUs, since it copies frames from one to the other. Depending on your laptop's architecture and whether it has two GPUs (e.g. an iGPU and a dGPU), you might be able to use LG if the dGPU can be routed to an external output in addition to the iGPU driving the laptop's display. The other idea folks have been playing with (which is not LG-related) is single GPU passthrough, where you basically shut down any GPU use on the host, detach the GPU (so the host becomes headless), and attach it to a VM, reversing the process when you shut down the VM. This is basically a bad way to implement dual-boot, and only very few use cases justify its downsides.
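For context, the single GPU passthrough dance described above is usually automated with libvirt hook scripts. A minimal sketch of the "prepare" side, assuming a dGPU at the placeholder PCI address 0000:01:00.0 (check yours with `lspci`):

```shell
#!/bin/sh
# Sketch of a libvirt "prepare" hook for single GPU passthrough.
# The PCI address below is a placeholder -- find yours with `lspci`.

# Stop the display manager so nothing on the host still holds the GPU
systemctl stop display-manager

# Unbind the GPU from its host driver so it can be handed to the VM
virsh nodedev-detach pci_0000_01_00_0
```

The matching "release" hook reverses this after VM shutdown (`virsh nodedev-reattach`, then restart the display manager), which is exactly the dual-boot-like round trip described above.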

To summarize, LG requires a dual GPU setup with a guest VM that is successfully running with a passthrough GPU. For any other ideas, or assistance in getting those prerequisites, you might want to visit the VFIO discord server as those are very typical discussions there.

I’m late, but I’ve done it with GVT-g no problem. It works fine, but I was surprised it worked at all.

How-to video! This works because Intel GPUs let you split the GPU into other virtual GPUs… it's GPUs all the way down.
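For anyone curious, GVT-g vGPUs are created through the kernel's mediated-device (mdev) sysfs interface. A rough sketch, assuming an iGPU at 0000:00:02.0, a kernel booted with `i915.enable_gvt=1`, and an example profile name that varies by hardware generation:

```shell
# List the vGPU profiles this iGPU supports (names vary by generation)
ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/

# Create a vGPU instance under one of those profiles with a fresh UUID
# (i915-GVTg_V5_4 is just an example profile name)
UUID=$(uuidgen)
echo "$UUID" > /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/create
```

The resulting mdev device can then be attached to the guest, e.g. via a libvirt `<hostdev type='mdev'>` entry referencing that UUID, giving the VM its own slice of the Intel GPU without a second physical card.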

AMD 5800H laptop with onboard graphics and an Nvidia discrete card. Keep the onboard graphics for the host, pass through the Nvidia card. HDMI dummy plug, LG, and away you go.
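For completeness, the usual way to reserve the Nvidia card for the guest in a setup like this is binding it to vfio-pci at boot. A sketch of the modprobe config, with placeholder vendor:device IDs (read the real ones for the GPU and its HDMI audio function from `lspci -nn`):

```shell
# /etc/modprobe.d/vfio.conf -- claim the dGPU and its HDMI audio
# function for vfio-pci before the nvidia/nouveau driver loads.
# The 10de:xxxx IDs are placeholders; substitute yours from `lspci -nn`.
options vfio-pci ids=10de:2520,10de:228e
softdep nvidia pre: vfio-pci
softdep nouveau pre: vfio-pci
```

With the card held by vfio-pci, the iGPU drives the laptop panel for the host, the dummy plug gives the passed-through card a display to render to, and LG copies the frames back.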

Curious how much AMD integrated graphics would bottleneck a dedicated GPU being passed through. I'm guessing yours is onboard, but I'm mostly curious about top-of-the-line eGPUs passed through via USB4 to a VM (while the host runs on integrated AMD graphics). If that wouldn't bottleneck the eGPU, that would be awesome!