Pass IVSHMEM device to Hyper-V for GPU paravirtualization?

My hardware
Gigabyte X570 motherboard
Corsair 64 GB RAM @ 3200
RTX 3080 Ti
Ryzen 9 5950X 16-core CPU
Fedora 34 with KVM and VFIO enabled and working
Windows 10 with Hyper-V enabled and working

My hardware #2
Gigabyte 15P YD laptop
32 GB RAM @ 3200
RTX 3080 Mobile [16 GB VRAM model]
Core i7-11800H 8-core/16-thread CPU
Fedora 34 with KVM and VFIO enabled and working
Windows 10 with Hyper-V enabled and working

Both tests came to the same conclusion; it works on the laptop with no issues.

Can I pass the IVSHMEM device to a GPU-PV VM that is already inside a Windows VM?
I have a Linux host, a Win10 gaming VM, and a Win10 GPU-PV VM inside the Win10 gaming VM.
I know your first thought: why even???

GPUs are finally getting to the point where we can divide them into smaller chunks and have multiple users gaming on one GPU. The only issue is that the technology is, or was, hidden from consumers since it was created. The exception is Windows GPU-PV: GPU paravirtualization.

The reason I have a VM inside another VM is that, other than SR-IOV or expensive GRID licensing, there is nothing else I know of that offers this capability, and SR-IOV is very hard to get working.

In reality, I would not run a VM inside a VM under a Linux host; I could just run a Windows host with GPU-PV and Parsec. But that setup wouldn't offer seamless gaming the way Looking Glass does, whether VM → host or VM → VM.

My question is not whether I can do GPU-PV in a Windows VM; I have already achieved that. I want to use Looking Glass to get the frames from the GPU-PV VM straight into Linux WHILE also having a gamer on the first VM under Linux.

Another question people may ask: how will you get a display into that GPU-PV VM in the first place?
There is already a solution: a driver that fakes a display and allows Parsec to run.

What I have tried:

Linux host with a gaming VM hosting the GPU-PV VM
[working]

Looking Glass, VM1 → host
[working]

Gaming on the GPU-PV VM, following this tutorial:
Easy-GPU-PV
[working]

Passing the IVSHMEM device to the Hyper-V GPU-PV VM with
Dismount-VMHostAssignableDevice -Force -LocationPath
[did not work]
error: Devices can only be dismounted after they are disabled

Disabling the mentioned device
[worked]

Retrying the PowerShell dismount: it did disable the device, but the dismount itself failed, so it reverted
[did not work]
error: The required virtualization driver (pcip.sys) failed to load.

Trying without the -Force option
[did not work]
error: The specified device is not a PCI Express device.
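For anyone trying to reproduce this, the Hyper-V DDA (Discrete Device Assignment) sequence I was attempting looks roughly like this. This is a sketch, not a verified working script: `$instanceId` and the VM name `gpupv-vm` are placeholders I made up, and on my hardware the dismount step is exactly where it fails for the IVSHMEM device.

```powershell
# Placeholder: get the instance ID from Get-PnpDevice or Device Manager.
$instanceId = '<device instance ID>'

# Look up the device's location path (the value DDA cmdlets expect).
$locationPath = (Get-PnpDeviceProperty -KeyName DEVPKEY_Device_LocationPaths `
    -InstanceId $instanceId).Data[0]

# 1. Disable the device on the host first; otherwise you get
#    "Devices can only be dismounted after they are disabled".
Disable-PnpDevice -InstanceId $instanceId -Confirm:$false

# 2. Dismount it from the host. For the IVSHMEM device this is the
#    step that failed for me ("pcip.sys failed to load" with -Force,
#    "not a PCI Express device" without it).
Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath

# 3. Assign it to the Hyper-V VM (never reached in my tests).
Add-VMAssignableDevice -LocationPath $locationPath -VMName 'gpupv-vm'
```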

I know this is not the best place to ask this exact question,
but it's kind of a mix of VFIO and Windows Hyper-V.
My main question: is it possible to add the IVSHMEM device to the GPU-PV VM?
It sounds possible, right?
GPU passthrough is available for Hyper-V.
I'm nowhere near skilled enough to understand what is actually needed to do this.
Do we need a driver to spoof the device and tell Windows it can be passed like a GPU?

The IVSHMEM device works for passing data from Windows to Linux; I'm hoping that passing this device into the GPU-PV VM would let it be used the same way.

host
vm1
gpupv vm

gpupv -> host
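For context, on the Linux host side the IVSHMEM device that Looking Glass uses in the level-1 VM is just a shared-memory region declared in the libvirt domain XML, something like the fragment below. The 64 MB size is an assumption; the correct size depends on the guest's resolution.

```xml
<!-- libvirt domain XML fragment for the level-1 (gaming) VM -->
<shmem name='looking-glass'>
  <model type='ivshmem-plain'/>
  <size unit='M'>64</size>
</shmem>
```

My question is whether anything equivalent exists on the Hyper-V side to expose that same region to the nested GPU-PV guest.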

I look forward to your thoughts and help.
VM inception, LOL.

no.

Why?

You really have to ask? You’re trying to pass a virtual device to a nested VM, a device that is already emulated at level 1… how on earth do you expect MMIO to even function?

See libvf.io


Well, I really thought it was a good idea. Like I said, I know little about the inner workings of the technology. It is dumb for sure, and I wouldn't do it except for the GPU-PV part. I actually thought you would like the concept.

Looks nice I will check this out then.

Sorry, end of a long day and pretty tired. It's not something we would ever really want to do; nested virtualization performance is pretty bad for anything VFIO related, and the performance hit is huge. I also don't see how the MMIO (memory-mapped I/O) would even function with a virtual device like this: the IVSHMEM device is virtual, so there is nothing for the hardware IOMMU to map into the nested guest.


That's ok! :+1: I'm glad to learn anything I can. I try to follow up on things, and I suppose it was just an experiment. Thanks for the reply, too; it's really cool to speak to the creator of an amazing technology!


Thank you gnif, I had no idea this technology existed. This is exactly what I have been looking for. I read the page you sent me and watched some videos; I had no idea you also worked on this. This is what I was trying to do: I wanted GPU-PV without Windows or this whole setup! Amazing work.
