I'm getting so tired (MxGPU, SR-IOV)

Can someone please tell me the easiest, most stable way to get a VMware Workstation-like setup on a Linux host OS, with a Windows guest, and have it be hardware GPU accelerated with native performance?

I'm getting so tired of searching and not finding.

If I had an MxGPU card (FirePro) and used KVM, would connecting through SPICE give me the same native performance? If so, why is Looking Glass needed? This is the area I'm not getting!

For native performance, consider running Windows on bare metal and Linux in a VM.

SR-IOV video cards are pretty rare in home lab setups, and they don’t game very well (see the Craft Computing and ServeTheHome videos on the subject).

Windows on bare metal is out of the question.

It depends on what you mean by easy. I would find it fairly easy with the right hardware, as I have some Linux experience.

With some hardware (e.g. bad IOMMU groups, or the AMD GPU reset bug), it would be hard or unstable. With some hardware, it would be impossible with the current software (e.g. no IOMMU at all, or some muxless laptops).
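As a quick sanity check before going any further, something like this (run on the host, assuming a stock sysfs layout and `lspci` installed) lists how your devices are split into IOMMU groups. No output at all usually means the IOMMU is missing, or disabled in firmware or on the kernel command line:

```shell
#!/bin/sh
# List every IOMMU group and the PCI devices it contains.
# A GPU sharing a group with unrelated devices is a sign of
# "bad IOMMU groups" for passthrough purposes.
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    lspci -nns "${d##*/}"
  done
done
```

Ideally you want the GPU (and its HDMI audio function) alone in a group; otherwise everything in that group has to be passed through together.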

For a user that has very little or no Linux experience, it would be more difficult.

Define this, please. Do you mean GPU performance? CPU? Memory? Disk? Network? The latency of the video?

The realm of GPU acceleration in VMs without spending boatloads of money is still a very niche thing. So you may be looking for something that does not exist, or at least not without significant work.

I think SPICE video has higher latency than Looking Glass.

Because most people don’t have $10k for a Radeon Pro V340 that probably only delivers something like Vega 56 performance at best. They also cannot, or do not want to, pay for an Nvidia vGPU card plus the software licensing.

They want to buy a normal consumer GPU and pass it through. To get a low-latency, good-looking video feed out of a GPU in a VM, you either have to use Looking Glass or plug it into a monitor directly.

What is the reason for this?


Go to the Proxmox wiki and look up PCI passthrough:
https://pve.proxmox.com/wiki/Pci_passthrough#Introduction

I followed those instructions with a basic Radeon HD 6670 GPU and they worked perfectly. You end up with a Windows 10 VM that owns the GPU when running. It’s not SR-IOV, so only one VM can use the GPU at a time. I expect you can have several VMs set to use the card, just don’t run them all at once. I was able to run Unigine Valley remotely over LAN, very cool.

My Proxmox box is a Ryzen 7 2700 on an Asus Prime board. This is important because the necessary features were not available on an MSI board running a Xeon E3-1220 v2.

The steps are as follows:
Add the AMD IOMMU kernel parameter to GRUB.
Find the GPU’s ID among the PCI devices.
Block the GPU from being used by Proxmox for its console.
Add the GPU to the config of the Windows 10 VM so it can use it.
Set the Windows VM to accept Remote Desktop; we don’t want to be using SPICE or VNC.
Make sure your desktop computer has an RDP client, I used Remmina.
Fire up your VM, making sure its display is set to none and the PCI device is set as the primary GPU.
You should see the screen of your Proxmox box show the Windows 10 desktop, a sign it’s working.
Fire up Remmina and RDP to the IP address of your Windows 10 VM. You should be presented with a Windows 10 login.
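A rough sketch of what those steps look like on the command line, run as root on the Proxmox host. The PCI address, device ID, and VM ID below are examples (1002:6758 is the HD 6670 in my case), so substitute your own; the wiki page has the authoritative details:

```shell
# 1. Enable the IOMMU via a GRUB kernel parameter, then regenerate GRUB.
#    In /etc/default/grub:
#    GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
#    (intel_iommu=on on Intel platforms)
update-grub

# 2. Find the GPU's PCI address and vendor:device ID:
lspci -nn | grep -i vga

# 3. Keep the host from claiming the GPU by binding it to vfio-pci,
#    then rebuild the initramfs so it applies at boot:
echo "options vfio-pci ids=1002:6758" > /etc/modprobe.d/vfio.conf
update-initramfs -u

# 4. Add the GPU to the Windows VM (VM ID 100 here) as the primary GPU:
qm set 100 -hostpci0 01:00,pcie=1,x-vga=1

# 5. Set the VM's emulated display to none so RDP (or the physical
#    output) is what you actually use:
qm set 100 -vga none
```

Reboot the host after the GRUB and vfio changes so the GPU comes up unclaimed.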

You will probably need to create a profile with the right screen resolution set.
Windows 10 will most likely have downloaded a driver for the GPU, but you can go to AMD and get the full version.

The instructions in the link will give you the details but this gives you the outline.
It really was not that hard, but starting with an AMD Ryzen and an AMD GPU made things a lot easier for me. I had no joy with the old board because it did not have an IOMMU.

EDIT: I’ve now got an HD 5970 running on two Windows 10 VMs at the same time, doing Unigine Valley at 50 FPS. Seems pretty stable.
