My Hardware
Gigabyte X570 mobo
Corsair 64 GB RAM @ 3200
RTX 3080 Ti
Ryzen 9 5950X 16-core CPU
Fedora 34 with KVM and VFIO enabled and working
Windows 10 with Hyper-V enabled and working
My Hardware #2
Gigabyte 15P YD laptop
32 GB RAM @ 3200
RTX 3080 Mobile [16 GB VRAM model]
Core i7-11800H 8-core / 16-thread CPU
Fedora 34 with KVM and VFIO enabled and working
Windows 10 with Hyper-V enabled and working
Both machines came to the same conclusion: everything below also works on the laptop with no issues.
Can I pass the IVSHMEM device to a GPU-PV VM that is already inside a Windows VM?
I have a Linux host, a Windows 10 gaming VM, and a Windows 10 GPU-PV VM inside that gaming VM.
I know your first thought: why even???
GPUs are finally getting to the point where we can divide them into smaller chunks and have multiple users gaming on one GPU. The only issue is that the technology has been hidden from consumers ever since it was created. What changes that is Windows GPU-PV, GPU paravirtualization.
The reason I have a VM inside another VM is that, other than SR-IOV or expensive GRID licensing, there is nothing else I know of that offers this, and SR-IOV is very hard to get working.
Realistically I would not run a VM inside a VM on a Linux host, and I could just run a Windows host with GPU-PV and Parsec, but that setup would not offer seamless gaming the way Looking Glass does, VM → host or VM → VM.
My question is not whether I can do GPU-PV in a Windows VM; I have already achieved that. I want to use Looking Glass to get the frames from the GPU-PV VM straight into Linux WHILE also having a gamer on the first VM under Linux.
Another thing people may ask: how will you get a display into that GPU-PV VM in the first place?
There is already a solution for that: a driver that fakes a display and allows Parsec to run.
What I have tried:
- Linux host with a gaming VM (VM1) hosting the GPU-PV VM [working]
- Looking Glass from VM1 → host [working]
- Gaming on the GPU-PV VM by following the Easy-GPU-PV tutorial [working] (a rough sketch of what that sets up is after this list)
- Dismount-VMHostAssignableDevice -Force -LocationPath to pass the IVSHMEM device to the Hyper-V GPU-PV VM [did not work] (the full sequence I was following is also sketched after this list)
  Error: Devices can only be dismounted after they are disabled
- Disabling the mentioned device first [worked]
- Running the PowerShell dismount again: it got as far as disabling the device, but the dismount itself failed and reverted it [did not work]
  Error: The required virtualization driver (pcip.sys) failed to load.
- Trying again without the -Force option [did not work]
  Error: The specified device is not a PCI Express device.
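For context, here is a minimal sketch of the GPU-PV side, using the standard Hyper-V partitioning cmdlets that Easy-GPU-PV wraps. The VM name "GPUPV-Guest" and the MMIO sizes are just placeholders I picked, and Easy-GPU-PV also copies the host GPU driver files into the guest, which is not shown here:

```powershell
# Elevated PowerShell inside VM1 (the Windows gaming VM that acts as the GPU-PV host).
$vm = "GPUPV-Guest"   # placeholder name for the nested GPU-PV VM

# Give the nested VM a partition of the host GPU.
Add-VMGpuPartitionAdapter -VMName $vm

# Let the guest control cache types and reserve MMIO space for the partition
# (the sizes here are only example values).
Set-VM -VMName $vm -GuestControlledCacheTypes $true `
    -LowMemoryMappedIoSpace 1Gb -HighMemoryMappedIoSpace 32Gb
```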
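And for anyone who wants to reproduce the failed dismount attempts, I was working from the standard Discrete Device Assignment flow, roughly like this. The friendly-name filter and the VM name are placeholders; it is the Dismount/Add step that blows up for me with the errors listed above:

```powershell
# Elevated PowerShell inside VM1. The IVSHMEM device is the PCI device QEMU exposes
# to VM1; adjust the filter to however the device shows up on your system.
$dev = Get-PnpDevice -PresentOnly |
    Where-Object { $_.FriendlyName -like "*IVSHMEM*" } |
    Select-Object -First 1

# Devices can only be dismounted after they are disabled (the first error I hit).
Disable-PnpDevice -InstanceId $dev.InstanceId -Confirm:$false

# DDA wants a location path, not an instance ID.
$loc = (Get-PnpDeviceProperty -InstanceId $dev.InstanceId `
        -KeyName DEVPKEY_Device_LocationPaths).Data[0]

# Dismount from VM1 and assign to the nested VM ("GPUPV-Guest" is a placeholder).
Dismount-VMHostAssignableDevice -Force -LocationPath $loc
Add-VMAssignableDevice -LocationPath $loc -VMName "GPUPV-Guest"
```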
I know this is not the best place to ask this exact question, but it is kind of a mix of VFIO and Windows Hyper-V.
My main question: is it possible to add the IVSHMEM device to the GPU-PV VM?
It sounds like it should be possible, right? GPU passthrough is available for Hyper-V.
I am nowhere near knowledgeable enough to understand what is actually needed to do this.
Do we need a driver to spoof the device and tell Windows it can be passed through like a GPU?
The IVSHMEM device works for passing data from Windows to Linux; I am hoping that passing this device into the GPU-PV VM would let it be used in the same way.
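For reference, the IVSHMEM device I am talking about is just the shared-memory device in VM1's libvirt XML that Looking Glass already uses for the VM1 → host path, something like this (the 'looking-glass' name and the 32 MB size are the usual example values from the Looking Glass docs; the size depends on resolution):

```xml
<!-- inside <devices> of VM1's libvirt domain XML -->
<shmem name='looking-glass'>
  <model type='ivshmem-plain'/>
  <size unit='M'>32</size>
</shmem>
```

Inside VM1 that shows up as a PCI IVSHMEM device, and that PCI device is what I am trying to disable, dismount, and hand to the GPU-PV VM with DDA.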
The layering: host → VM1 → GPU-PV VM
What I want: GPU-PV VM → host (Looking Glass)
I look forward to your thoughts and help.
VM inception, LOL.