Intel Flex 140 error on passthrough

Hi all

So I have inherited an issue at work from the IT guy who left the company mid-project…

We have two HPE DL380 Gen10 servers running Windows Server 2022 with Hyper-V to host all the company's VMs. The Intel Flex 140 cards are installed with drivers, and they work perfectly in Windows Server 2022. When I go through the GPU passthrough method, the cards appear in the Windows 11 and 10 VMs.
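
For reference, here's roughly the assignment flow I followed, assuming the standard Discrete Device Assignment cmdlets (the wildcard device lookup is a guess at how to find the card; 'vm-gpu' is one of our VM names):

```powershell
# Find the Flex 140's PCI location path on the host
$dev = Get-PnpDevice -FriendlyName "*Flex 140*" | Select-Object -First 1
$locationPath = (Get-PnpDeviceProperty -InstanceId $dev.InstanceId `
    -KeyName DEVPKEY_Device_LocationPaths).Data[0]

# Disable the device on the host, then dismount it from the host OS
Disable-PnpDevice -InstanceId $dev.InstanceId -Confirm:$false
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force

# Assign it to the (powered-off) VM
Add-VMAssignableDevice -LocationPath $locationPath -VMName 'vm-gpu'
```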

When I installed the drivers on the first VM, the Intel installer software detected the GPU and installed the drivers, and the GPU showed up in Task Manager and Device Manager as an Intel Flex 140. Then I had to restart the VM, and we got a blue screen on boot; nothing I could do would make the VM boot.

The second and third VMs I installed the GPU on crashed during driver installation with the same blue screen.

Hyper-V reports the following error: ‘vm-gpu’ has encountered a fatal error. The guest operating system reported that it failed with the following error codes: ErrorCode0: 0x7E, ErrorCode1: 0xFFFFFFFFC0000005, ErrorCode2: 0xFFFFF80780791A2F, ErrorCode3: 0xFFFFF40C3E8A6548, ErrorCode4: 0xFFFFF40C3E8A5D80. PreOSId: 0. If the problem persists, contact Product Support for the guest operating system. (Virtual machine ID 8D5FD152-6A3D-4431-948F-D5E37528697E)

For what it's worth, 0x7E is SYSTEM_THREAD_EXCEPTION_NOT_HANDLED and 0xC0000005 is an access violation, so it looks like a driver is faulting during boot.

The GPUs can be unmounted from the VM, and then they are available for use by the host OS again.
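
For completeness, the pass-back side looks like this (same hypothetical $dev/$locationPath variables as above, VM powered off first):

```powershell
# Remove the GPU from the VM, then remount it on the host
Remove-VMAssignableDevice -LocationPath $locationPath -VMName 'vm-gpu'
Mount-VMHostAssignableDevice -LocationPath $locationPath
Enable-PnpDevice -InstanceId $dev.InstanceId -Confirm:$false
```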

Any help will be much appreciated.
Steve

SR-IOV on in the BIOS, and IOMMU enabled, not auto?
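
You can sanity-check what the host thinks before rebooting into the BIOS; a quick sketch (this reports SR-IOV support for networking, but IovSupportReasons flags the same platform prerequisites, like missing DMA remapping):

```powershell
# Does the Hyper-V host see SR-IOV as usable, and if not, why not?
Get-VMHost | Select-Object IovSupport, IovSupportReasons
```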

Was it working and then stopped, or is this a new config?

Hey Wendell

New config; I installed the cards fresh out of the box at the weekend. I will check the BIOS config tonight once everyone has finished work. The power profiles are set to “Virtualization High Power”, which should turn on all those features.

Cheers

Did you configure the GPU to pass back to the host, and does the host have drivers installed?

Additionally, did you soft restart the VM, or power it off and back on?

Did you configure the VM's automatic stop action to turn off, never save?
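
If not, that's this setting; a quick sketch using the VM name from your error log:

```powershell
# DDA VMs can't save state, so force a clean power-off instead of a save
Set-VM -Name 'vm-gpu' -AutomaticStopAction TurnOff

# Verify
Get-VM -Name 'vm-gpu' | Select-Object Name, AutomaticStopAction
```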

We experience this regularly in deployments. Our protocol is to install base drivers on both the host and the VMs; that way the GPU never reverts or drops out when passed back to the host.

Then configure the VM from PowerShell so it accepts the device and cleanly passes it back.
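
Concretely, that means setting the cache and MMIO properties before assigning the device; a sketch based on the starting values Microsoft's DDA docs suggest (your cards may need different MMIO sizes):

```powershell
# While the VM is off: let the guest control cache types and
# reserve enough 32-bit and 64-bit MMIO space for the GPU's BARs
Set-VM -Name 'vm-gpu' -GuestControlledCacheTypes $true `
    -LowMemoryMappedIoSpace 3GB -HighMemoryMappedIoSpace 33280MB
```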