I’ve successfully passed the discrete Nvidia GPU of an Optimus laptop (Lenovo Legion) through to a Windows VM, and I’m now trying to output the Linux Xorg session to a second external monitor. I was hoping the integrated Intel GPU would pick it up once I connected a DisplayPort monitor to the USB-C (Thunderbolt) port. Unfortunately, the VM picked it up instead (it showed up there as a second monitor).
Is there any way to “unbind” Nvidia from taking over that second external monitor? Or are the Thunderbolt ports hardwired to the Nvidia chip the same way the HDMI output usually is? I’ve looked at the output of udevadm monitor, dmesg, and journalctl, but none of them show any indication that a second device has been plugged in.
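For reference, this is roughly the kind of polling I fell back to instead of waiting for udev events (a minimal sketch, assuming the standard /sys/class/drm layout with connector names like card0-DP-1):

```python
#!/usr/bin/env python3
# Poll DRM connector state directly instead of waiting for a udev event,
# since no hotplug uevent was showing up here. Assumes the standard
# /sys/class/drm layout (connectors named like card0-DP-1, card0-HDMI-A-1).
import time
from pathlib import Path

def snapshot():
    # Map each connector to its reported state: 'connected'/'disconnected'.
    return {c.name: (c / "status").read_text().strip()
            for c in Path("/sys/class/drm").glob("card*-*")}

last = snapshot()
while True:
    time.sleep(1)
    now = snapshot()
    for name, status in now.items():
        if last.get(name) != status:
            print(f"{name}: {last.get(name)} -> {status}")
    last = now
```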
The KDE session does seem to react when the monitor is plugged into the USB-C port, though: it prompts me to select a display configuration (extend to left/right, unify, etc.). I’ve looked at the D-Bus messages with Wireshark, and EDID information is indeed being broadcast on the session bus by the kded5 process (the KDE daemon).
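For anyone curious, decoding such a blob is straightforward; a quick sketch along these lines (standard EDID 1.x layout assumed, and the path argument is just an example) recovers the vendor and product code:

```python
#!/usr/bin/env python3
# Decode the vendor and product code from a raw EDID blob, e.g. one captured
# from the D-Bus traffic or read from /sys/class/drm/<connector>/edid.
# Standard EDID 1.x layout assumed.
import sys
from pathlib import Path

edid = Path(sys.argv[1]).read_bytes()  # e.g. /sys/class/drm/card0-DP-1/edid
assert edid[:8] == b"\x00\xff\xff\xff\xff\xff\xff\x00", "not an EDID blob"

# Manufacturer ID: three 5-bit letters packed big-endian into bytes 8-9.
mfg = int.from_bytes(edid[8:10], "big")
letters = "".join(chr(((mfg >> s) & 0x1F) + ord("A") - 1) for s in (10, 5, 0))
product = int.from_bytes(edid[10:12], "little")  # product code is little-endian
print(f"manufacturer={letters} product=0x{product:04x}")
```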
I’m considering looking into eBPF tools to see whether probes on the DRM driver might give more information. Any hints on where to look, or on whether this is possible at all, would be very appreciated.
UPDATE: This might have been the wrong approach. These D-Bus messages seem to be triggered by generic sensor data, not by any of the graphics kernel subsystems:
www.reddit.com/r/linuxquestions/comments/tkenb6/where_does_kdes_plasma_system_monitor_get_sensor/
UPDATE 2: Another approach was to print the IOMMU groups. They show that all the Thunderbolt controllers are part of the Intel chipset, which seems to suggest that it should still be possible to tweak them after the GPU is passed to the VM…
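For reference, this is roughly the script I used (a small sketch over /sys/kernel/iommu_groups; no error handling):

```python
#!/usr/bin/env python3
# Print every IOMMU group and the PCI devices it contains: the Python
# equivalent of the usual shell one-liner used for VFIO passthrough checks.
# Requires the IOMMU to be enabled (e.g. intel_iommu=on on the kernel cmdline).
from pathlib import Path

groups = Path("/sys/kernel/iommu_groups")
for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    for dev in sorted((group / "devices").iterdir()):
        print(f"IOMMU group {group.name}: {dev.name}")
```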
What would be ideal is to output the VM through an HDMI/DP port on the laptop, which is most likely connected directly to the GPU.
But what you’re trying to do depends heavily on how the GPU is connected inside the laptop: if your laptop has a MUX switch, there’s a chance that you might be able to decouple the dGPU output from the iGPU output. If not, the video paths are hardwired and you’ll be stuck.
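One quick way to check which GPU each connector is actually wired to is to walk sysfs; a rough sketch, assuming the usual layout where connectors live under /sys/devices/pci…/&lt;pci-addr&gt;/drm/cardN/cardN-&lt;connector&gt;:

```python
#!/usr/bin/env python3
# Map each DRM connector to the PCI device that owns it, to see whether a
# given HDMI/DP output belongs to the iGPU or the dGPU. Assumes the usual
# sysfs layout: /sys/devices/pci.../<pci-addr>/drm/cardN/cardN-<connector>.
from pathlib import Path

for conn in sorted(Path("/sys/class/drm").glob("card*-*")):
    status = (conn / "status").read_text().strip()
    # Resolving the class symlink walks up to the owning PCI address,
    # e.g. 0000:00:02.0 (Intel iGPU) vs 0000:01:00.0 (Nvidia dGPU).
    pci_addr = conn.resolve().parents[2].name
    print(f"{conn.name:<20} {status:<12} GPU at {pci_addr}")
```

Note that once the dGPU is bound to vfio-pci it no longer has a DRM driver, so its connectors disappear from this listing entirely; what you see before passthrough is the interesting part.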
Sorry, I’ve edited the title to make it clearer. I meant that the iGPU would output to external monitor A while the VM with the dGPU outputs to external monitor B. I didn’t mean to combine them onto a single external monitor. Sorry for any confusion.
There is no way to do what you’re asking: your laptop is “muxless”, which means there is no way to re-wire things. The USB-C port is physically wired to the dGPU, and there is nothing you can do to change this.
Thanks for confirming, and thank you very much for creating Looking Glass; it’s such an amazing piece of technology. It’s a shame that VNC may be the only way to get the host iGPU framebuffer out of the laptop.
Maybe I will try x11spice instead, then; hopefully it will perform better than VNC. I was surprised how “choppy” video playback is over VNC compared to Windows’ RDP, for example. If you have any recommendations for host-to-guest remote desktop, please let me know.
One thing that didn’t make sense to me, though, is that the USB-C port still seems to work for other devices even after the GPU itself is passed through to the VM: you can connect a docking station to the USB-C port and an Ethernet connection, for example, will still be detected on the Linux host. I couldn’t understand why displays couldn’t be “relayed” in a similar way.
Maybe looking into this laptop’s schematics will explain some of this…
One more update: I’ve ordered a USB 3.0 to HDMI adapter. I’m not sure whether it will work, but I can’t think of any other way to get the iGPU output onto a second monitor. Maybe this will “fool” the motherboard into directing the stream to the iGPU?
It won’t work; there is no way to fool the physical PCB traces on the motherboard. I am sorry, but you’re chasing something your laptop physically cannot do.
One thing that didn’t make sense to me, though, is that the USB-C port still seems to work for other devices even after the GPU itself is passed through to the VM: you can connect a docking station to the USB-C port and an Ethernet connection, for example, will still be detected on the Linux host.
Just because it’s on the same connector doesn’t mean it’s wired to the same device. The HDMI component of the port is physically wired to the NVidia GPU.
When the port is configured for video, the high-speed data path is no longer used for USB but for DP Alt Mode (DisplayPort), which is wired not to the USB controller but to the GPU itself (usually through a digital switch; these pins are muxed). USB 2.0 is still available, though, so you can have a docking station with both video and USB at the same time, but the video still comes from the dGPU in this case.
Note the bottom components in blue: there is a mux chip here to switch between the USB 3.1 and alternate mode paths. The diagram says “To processor”, but it does not need to be the same processor; in laptops these can be (and usually are) wired to different devices.
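You can sometimes see this from the software side too: the kernel’s typec class (when the platform exposes it) lists the alternate modes per port, and DisplayPort alt mode registers under SVID 0xff01. A rough sketch, assuming that class driver is loaded:

```python
#!/usr/bin/env python3
# List USB Type-C ports/partners and any alternate modes the kernel's typec
# class exposes. DisplayPort alt mode uses SVID 0xff01; seeing it listed
# confirms video is being muxed onto the USB-C high-speed pins.
# Assumes the typec class driver is loaded; not all platforms expose it.
from pathlib import Path

typec = Path("/sys/class/typec")
if not typec.exists():
    raise SystemExit("no USB Type-C class devices exposed by this kernel")

for entry in sorted(typec.iterdir()):
    # Alternate modes appear as children named '<parent>.<index>'
    # with an 'svid' attribute, on both ports and attached partners.
    for alt in sorted(entry.glob(f"{entry.name}.*")):
        svid = (alt / "svid").read_text().strip()
        tag = "DisplayPort" if "ff01" in svid.lower() else "vendor"
        print(f"{entry.name}/{alt.name}: svid={svid} ({tag})")
```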
Thank you very much again for explaining this. It would have taken me ages to understand it otherwise.
I will start testing x11spice/OpenNX/xpra and x2go to see which one performs best. It’s easier to focus on fine-tuning their performance now that I know the physical links are out of the equation.
Sorry for digging up this old thread, but I’m wondering whether, instead of relying on hardware adapters, this could be “virtualized” on the iGPU, in the sense that Xorg could create a second “VIRTUAL” output that would then be “tunneled” via the USB-C port the iGPU has access to, so that an external monitor could read it.
Is this something you might have seen before? Any hints would be really appreciated. If this were possible, I assume Xorg could then release the dGPU, which could be freely shared between the host and the guest VMs.
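Something like this is what I have in mind; a quick check (assuming xrandr is installed, and that such an output would come from something like the intel driver’s VirtualHeads option) for whether any VIRTUAL output already exists:

```python
#!/usr/bin/env python3
# Check whether the running X server exposes any VIRTUAL outputs, which is
# what e.g. the xf86-video-intel driver creates with Option "VirtualHeads".
# Assumes xrandr is installed and an X session is running.
import subprocess

out = subprocess.run(["xrandr", "--query"], capture_output=True, text=True).stdout
virtuals = [ln.split()[0] for ln in out.splitlines() if ln.startswith("VIRTUAL")]
print("virtual outputs:", virtuals or "none (the driver may need VirtualHeads enabled)")
```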
Sorry for being annoying, but perhaps there is some software solution that could take the “virtual” buffer in RAM and cast it with something like a Chromecast to an external monitor, or emulate a display output on some port the iGPU has access to? Linux has the dma-buf framework and modesetting, so I thought that at least theoretically this could be possible in the kernel?
As gnif has already explained to you multiple times, this is not going to happen. A hardware solution is impossible on your platform, as the signals are trapped inside the laptop, and any software/emulation/network-casting approach will add so much latency that it goes against the prime motivation of this project: “Looking Glass aims to achieve the lowest possible latency”.
Closing this thread; it’s been answered multiple times. You do not have the required hardware to make this work, and there is no method available to you to resolve this.