GPU Paravirtualization on Hyper-V with a Linux Guest

Hey Guys,

just a short question: is it possible to use a GPU passed through from Hyper-V in a Linux guest?

In the post “2-gamers-1-gpu-with-hyper-v-gpu-p-gpu-partitioning” you copy the driver files from the Windows host to the Windows guest.

I only want to use the GPU for compute purposes. So far the paravirtualized GPU shows up in the VM:

root@pve:~# lspci
c2a1:00:00.0 3D controller: Microsoft Corporation Device 008e
ee4e:00:00.0 3D controller: Microsoft Corporation Device 008e

Does anyone have an idea how to utilize this GPU? Installing the NVIDIA driver or the CUDA SDK does not work:

root@pve:~# nvidia-smi
NVIDIA-SMI has failed because it couldn’t communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.

Trying to access the GPU with the TensorFlow library also does not detect any valid GPU to use for processing.
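
For reference, this is roughly how I check whether TensorFlow sees a GPU (TensorFlow 2.x):

python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
# prints an empty list when TensorFlow cannot find a usable GPU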

I’m sorry, I’ve not encountered this problem before, but would it be possible to clarify:

1. Does the problem persist through a VM reboot?
2. Can you confirm this doesn't happen with a Windows VM (or a Linux desktop VM)?
3. Does the package manager fail to install the driver, or is the module not loading (output of lspci -k or modprobe; see the commands sketched below)?
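
A quick way to gather that, just a sketch of standard commands (adjust for your setup):

lspci -nnk                    # lists each PCI device together with the kernel driver (if any) bound to it
lsmod | grep -i nvidia        # shows whether an NVIDIA kernel module is loaded at all
dmesg | grep -i nvidia        # any driver messages from boot or module load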

@Sean_Dudley lspci -k results in the same output as before. A reboot also doesn’t change anything.

I was wondering myself whether it is a problem with paravirtualization and Hyper-V. I cannot confirm yet whether a full passthrough would work, but I would guess so, because there you release the GPU from the host system and attach it to the VM. In the guest you would then install the drivers as usual, and it shouldn't matter whether the guest is a Windows or a Linux OS.

I read something about the Windows Display Driver Model (WDDM), which makes this partial GPU usage possible in newer versions of Windows. The only thing I don't know yet is whether it is possible to split the GPU in half with a Linux machine.

Does this post match what you’ve tried?
https://mu0.cc/2020/08/25/hyperv-gpupv/

" TL; DR;

To enable GPU virtualization (GPU-PV) on arbitrary Hyper-V Windows 10 virtual machines, do the following:

  1. Ensure that both the host and the guest meet the system requirements, and use the generation 2 virtual machines.
  2. The following commands are to be done in PowerShell. Check that GPU-PV is available by executing the Get-VMPartitionableGpu. If more than one GPUs are available, the first one listed will be used. Currently, the only way to “select” GPU in use is by disabling other GPUs until the wanted GPU resides at the top of the output of the said command.
  3. Copy the appropriate GPU driver from host to guest. Copying has to be done manually. Installing the GPU driver from an installation package will not work.
  4. From the host, find out the correct driver package path. Open Windows Device Manager, open the GPU to be used for GPU-PV, go to the “Driver” tab, click “Driver Details”, and scroll down until you find a file with the following pattern: (filename.inf)_amd64_(random_hex_number)\(somefiles.dll).
  5. On host, go to C:\Windows\System32\DriverStore\FileRepository. There should be a folder with the same name as above.
  6. Copy that folder to guest under C:\Windows\System32\HostDriverStore\FileRepository. If the folder is not there, create it.
  7. Enable write-combining and enlarge MMIO address spaces on the guest (see here for details). The following example sets the 32-bit MMIO to 3 GB and above 32-bit MMIO to 30 GB.
Set-VM -VMName $vmname -GuestControlledCacheTypes $true -LowMemoryMappedIoSpace 3072MB -HighMemoryMappedIoSpace 30720MB
  8. Mark the virtual machine to use GPU-PV.
Set-VMGpuPartitionAdapter -VMName $vmname
  9. If everything works, the virtual machine should start with GPU acceleration. Enjoy! However, if it does not work, it is likely that you are partitioning a render only GPU. GPU-PV on such GPUs seems broken under Windows 10 version 2004 but seems to be fixed as well for the next version of Windows 10. See this section for details."

It seems like a Windows driver is a requirement here, which may outright exclude Linux guests from using GPU-PV. Not sure why a Windows guest wouldn’t work though.

Ok, this just confirms my finding. Do you think it is only a temporary limitation that Linux guests are excluded, or might Linux guests also support GPU-PV in the future?

I’m not sure I would hazard a guess, sorry.

Found this:

(it doesn’t seem very promising, but could be interesting to try[?])

For me there seem to be quite a lot of problems.

If I try to give an Ubuntu 22.04 VM a GPU partition, there is no output when I run lspci, so I guess something is not working with the sharing of the GPU partition there.

My Proxmox VM runs just fine when attaching and detaching the GPU partition, but following the steps from this tutorial did not yield any results.

How did I find the driver files to copy? Device Manager → Display Adapter → NVIDIA RTX 3070 Mobile → Properties → Driver → Driver Details → for me the folder is named nvrzi.inf_amd64_5ca0829c4e804b3f/*
In there I find nvidia-smi and several DLLs.
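
For the Linux side of the copy, this is roughly what I understand the tutorial to mean (the target directories under /usr/lib/wsl are my assumption based on how WSL lays things out, and /mnt/host-share is just a placeholder for however the files get into the VM):

# placeholder source path; the driver store folder and the WSL user-mode libraries
# (nvidia-smi, libcuda.so, ...) have to be transferred from the Windows host somehow
sudo mkdir -p /usr/lib/wsl/drivers /usr/lib/wsl/lib
sudo cp -r /mnt/host-share/nvrzi.inf_amd64_5ca0829c4e804b3f /usr/lib/wsl/drivers/
sudo cp /mnt/host-share/lib/* /usr/lib/wsl/lib/
echo /usr/lib/wsl/lib | sudo tee /etc/ld.so.conf.d/wsl.conf   # so the loader finds the copied libraries
sudo ldconfig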

uname -r gives me 6.2.16-3-pve, so I ran the WSL kernel script from the tutorial with the linux-msft-wsl-6.1.y branch instead of the 5.10 branch. The kernel module built without errors.
/dev/dxg is also available on my system.

If I run /usr/lib/wsl/lib/nvidia-smi, it throws an error that it can't communicate with the driver, plus “Failed to properly shut down NVML: Driver Not Loaded”.

Running dkms install dxgkrnl/f53bd0a62 (current git commit id) says that the module is already installed for kernel 6.2.16-3-pve.

A modprobe dxgkrnl/f53bd0a62 says that the module is not found in /lib/modules/6.2.16-3-pve (/kernel/drivers/).
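
As I understand it, modprobe only takes the bare module name; the name/version form is dkms syntax. So the check would be something like:

dkms status dxgkrnl        # which kernel versions the module was built and installed for
sudo modprobe dxgkrnl      # load by module name only, without the version suffix
lsmod | grep dxgkrnl       # verify the module is actually loaded
dmesg | tail -n 20         # any errors dxgkrnl printed while loading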

When I run glxgears the output is: Error: Couldn’t open display (null)
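
glxgears needs an X or Wayland display though, so on a headless VM this error is expected regardless of the GPU partition; for compute-only use, something like this is probably the more relevant check:

echo "$DISPLAY"              # empty on a headless VM, which is why glxgears bails out
ls -l /dev/dxg               # the paravirtual GPU device node from dxgkrnl
/usr/lib/wsl/lib/nvidia-smi  # the user-mode tools, once the kernel side actually works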
