So the 5700G has an iGPU that you can use for the host OS, but it also supports dGPUs. I’m wondering if anyone has done single-dGPU VFIO passthrough on a 5700G yet? How were IOMMU groups on past Ryzen APUs when a dGPU was installed? Anyone have a 5700G or a 3400G and can post some ls-iommu output with a dGPU installed?
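For anyone who wants to check, the usual snippet floating around the VFIO community (the various ls-iommu wrapper scripts are all variations on this) will dump the groups:

```bash
#!/bin/bash
# Print every IOMMU group and the PCI devices inside it.
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU Group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done
```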
Just making this thread to set my expectations for Zen 4, since we still don’t really know whether the IGP on Zen 4 steals PCIe lanes or hangs entirely off Infinity Fabric (which would be so great).
On my mobile rig (recent Lenovo Legion 5 with a 5800H, Radeon integrated graphics, and an NVIDIA 3060 Max-Q), the discrete GPU was fully isolated in its own IOMMU group on Linux once virtualization was enabled in the BIOS. The Radeon iGPU very much was not, and sat bundled with lots of other devices.
Yes, that’s the mobile CPU. I’m just curious about the actual 5700G, if anyone has investigated it yet. If the IGP doesn’t use up lanes, would the dGPU slot run at PCIe 3.0 x16, or is it still x8?
I would expect behavior similar to a 3400G. The lanes from CPU to dGPU would most probably be the same, as would the bandwidth bottlenecks.
That said, I think this is a goal worth pursuing. Imagine Looking Glass + dGPU + a highly optimized Windows/Linux image running only the game and nothing else in a VM. Such a Linux image would be a couple hundred megabytes, which is pretty much nothing next to a 20 GB game, and it would pretty much eliminate platform dependency. Heck, it would probably also simplify anti-cheat measures and other necessary evils. And if you have the dGPU and APU already available… why not?
Sometimes it’s fun to dream, but I doubt we’ll see that happen for many years…
So what you’re saying is there needs to be a Steam Deck: Docker edition? It spins up a local GeForce NOW-style instance, then uses Looking Glass instead of H.264?
I mean, think about it: if you can guarantee a game 8, 12, or 16 hardware threads solely for its own use, with minimal interference at the cost of maybe a 5% performance hit… would that not solve a lot of gaming issues?
The Linux kernel can be made extremely minimalistic, too. Heck, since most of the hardware is virtualized, it’s even feasible to switch to a new microkernel-based OS like Fuchsia, since drivers for real hardware would be few and far between. This is all pie-in-the-sky, but it’s what would be possible if dual-GPU systems are now becoming standard.
Another interesting idea is the two-GPUs-on-one-card offerings slowly popping up. Could those be used somehow? No idea, but it would be awesome!
Yeah, an iGPU and dGPU combo with the dGPU used for virtualization would be great. Problem is, the Steam client has recently been bloated by Chromium, so it would need a ton of dependencies.
Steam would launch a VM with a set number of cores/hardware threads and passthrough preconfigured. Only the game engine would launch in the VM, with perhaps a small wrapper library to talk to Steam; see the sketch below.
Obviously, this would take years for a game company to fine-tune properly, though the gains would be massive if it worked properly across both Windows and Linux hosts. Like I said, pie-in-the-sky.
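Just to make the idea concrete, here’s a rough sketch of what such a launcher might invoke under the hood. The QEMU flags are real, but the sizing and the 01:00.0/01:00.1 PCI addresses are placeholders for whatever the dGPU actually is on a given box:

```bash
# Hypothetical launcher: pin a fixed CPU slice to the VM and hand it the dGPU
# (GPU function + its HDMI audio function) via VFIO.
qemu-system-x86_64 \
    -enable-kvm \
    -machine q35 \
    -cpu host,topoext \
    -smp 8,sockets=1,cores=4,threads=2 \
    -m 16G \
    -device vfio-pci,host=01:00.0,multifunction=on \
    -device vfio-pci,host=01:00.1 \
    -display none \
    -drive file=game-image.qcow2,if=virtio
```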
Can you check the PCIe lane width for your dGPU? If it’s still x8, we may have to wait for Raphael for the full x16 lanes alongside an iGPU.
Also, Looking Glass is superior to Parsec. Lane width actually matters for Looking Glass at higher resolutions.
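For reference, on Linux you can read the negotiated width straight off the bus; assuming the dGPU sits at 01:00.0 (adjust to your system):

```bash
# Negotiated vs. maximum link width/speed for the card at 01:00.0.
sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'
# Or straight from sysfs:
cat /sys/bus/pci/devices/0000:01:00.0/current_link_width
cat /sys/bus/pci/devices/0000:01:00.0/max_link_width
```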
Have you tried USB passthrough by passing through an entire controller, like a NEC/Renesas-based uPD720202? (Once you get X570, or another chipset with better IOMMU separation.)
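If you want to try it without touching modprobe configs, the driver_override route works at runtime. A sketch, assuming the controller shows up at 0000:05:00.0 (find yours with lspci -nn; the uPD720202 reports as 1912:0015):

```bash
# Hand an entire xHCI controller to vfio-pci at runtime.
# 0000:05:00.0 is a placeholder address; substitute your controller's.
sudo modprobe vfio-pci
echo vfio-pci | sudo tee /sys/bus/pci/devices/0000:05:00.0/driver_override
echo 0000:05:00.0 | sudo tee /sys/bus/pci/devices/0000:05:00.0/driver/unbind
echo 0000:05:00.0 | sudo tee /sys/bus/pci/drivers/vfio-pci/bind
```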
Looking Glass does seem great, but in my case the machine I’m playing on sits in another room, hence the need for something like Parsec. I’m planning to try Moonlight as soon as I can get my hands on a supported NVIDIA GPU, though.
If you need something now, Steam Remote Play also works.
The PCIe 1.0 reading is only because the card is idle. You should measure link speed while the dGPU is actively rendering 3D. GPU-Z has an internal PCIe benchmark that wakes the GPU from Gen 1 up to Gen 3.
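On Linux you can watch the same thing happen without GPU-Z. Assuming the dGPU is at 0000:01:00.0:

```bash
# Watch the negotiated link speed renegotiate as the GPU wakes from idle.
watch -n1 cat /sys/bus/pci/devices/0000:01:00.0/current_link_speed
# Start any 3D load (glmark2, a game, etc.) in another terminal; the value
# should jump from 2.5 GT/s (Gen 1) to 8 GT/s (Gen 3) under load.
```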