I just watched a 4K video from my SR-IOV VM, and I'm currently making this post from said VM.
I have a half-decent 4K client solution for your vGPU virtual machines.
I use a combination of three solutions:
1. MolotovCherry's virtual-display-rs on GitHub
This creates a 'virtual display'. RDP does this automatically when it connects; any other client has to create one itself. This one has a nice GUI and persists across reboots. There are some performance issues that I will get into later. People with dedicated GPUs get the best results with 'dummy plugs' (that's going to be important later). There are lots of different solutions.
2. LizardByte's Sunshine on GitHub
A lot of people are probably familiar with this one, but in short it is a screen-capture server designed for game streaming, so it works pretty well for everything else too.
3. moonlight-qt on GitHub
Moonlight is said high-performance client for accessing your VMs remotely. With an Intel iGPU, borderless windowed mode is key.
I can get 4K video playback with this and not skip a beat, as long as the video is not 4K60. Anything above 40 fps at 4K is just too much for this setup. Some 4K gameplay is possible, just don't have the highest expectations with 5 out of 80 execution units of an iGPU.
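For what it's worth, moonlight-qt can also be launched from the command line, which makes it easy to pin the resolution/fps cap described above. A hypothetical invocation sketch: the host name "vmhost" and app name "Desktop" are placeholders, and the flag names should be verified against `moonlight-qt --help` on your build. The command is echoed rather than executed here, since it needs a reachable, paired Sunshine host.

```shell
# Hypothetical example: stream the "Desktop" app from a paired Sunshine host.
# "vmhost" is a placeholder; verify flag names with `moonlight-qt --help`.
host=vmhost
cmd="moonlight-qt stream $host Desktop --resolution 3840x2160 --fps 40 --display-mode borderless"
echo "$cmd"
```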
One tradeoff with this setup is that hardware acceleration for Sunshine/Moonlight doesn't work. This could be due to a feature regression in the Intel drivers, or a feature added in newer Intel drivers that requires changes to Moonlight; I don't know. I use software H.264 for the lowest CPU consumption.
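Forcing the software encoder on the Sunshine side is a one-line config change. A sketch of the relevant sunshine.conf entry; the key name and accepted values should be verified against the Sunshine documentation for your version:

```
# sunshine.conf -- force CPU encoding instead of the GPU encoder
encoder = software
```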
I suspect these performance limitations are to be blamed on the virtual display, for which Intel already has a solution designed and ready to go; somebody just has to test it out:
On GitHub: intel Display-Virtualization-for-Windows-OS
That repo describes passing the physical DisplayPorts on your device through to VMs, via customizations to QEMU.
Intel also provides a driver (iotg-display-virtualization-drivers.html) for use with Windows VMs.
A whitepaper about SR-IOV by a company called DFI led me down this rabbit hole, and I am now trying to figure out how to make a Miracast display work on a VM.
I'm not sure which reseller, if any, procurement is going through, but the vendor is Supermicro. I just gave them the list of specs I wanted and sent it off. I asked for a couple of tower versions too, and they came back asking if a different variant of said tower was acceptable, so Supermicro engineering is willing to do it, or so it seems.
Can you please expand on what you mean by SR-IOV being not fully baked on gen 10/11? I have two free hosts I want to try this on, but they are gen 11. What should I expect?
Did someone already test the Arc A750 or A770 teased in the video?
Over here it's 333 bucks, and for a new build it would be great to have, and better than needing patched/custom NVIDIA drivers.
Is this at all possible with an AMD RX 6400? I'm only looking to split it between 2 VMs. I have virgl working, but of course it only works when using virt-viewer and SPICE. I would much rather have a dedicated GPU for the VM so I can use the acceleration even when I'm not using a SPICE client.
I've attempted to build the i915 driver against Proxmox's 6.5.x PVE kernel for an Intel Arc A380, but with no love.
I have the A380 slotted into an older AsRock X99X LGA-2011 board with a Xeon E5-2660 v3 installed, and the latest BIOS does have an option for SR-IOV. I'm just looking to split the GPU for more than just Plex duties. Does it take more than BIOS support to make proper SR-IOV work end-to-end? I'd hate to e-waste a running system for just 1-2 features.
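Part of that question can be checked from the running system. A sketch, on the assumption that sysfs is the source of truth here: a PCI function only gets the `sriov_totalvfs`/`sriov_numvfs` nodes once the kernel driver has actually enabled the SR-IOV capability, so a BIOS toggle alone is not enough.

```shell
# List every PCI function whose driver has enabled SR-IOV.
# Prints nothing if no device is SR-IOV-ready end-to-end.
list_sriov_pfs() {   # $1 = sysfs devices directory (normally /sys/bus/pci/devices)
  for dev in "$1"/*; do
    if [ -e "$dev/sriov_totalvfs" ]; then
      echo "${dev##*/}: up to $(cat "$dev/sriov_totalvfs") VFs"
    fi
  done
}
list_sriov_pfs /sys/bus/pci/devices
```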
Hello all,
so I got an Intel Flex 140 to try and test in Proxmox,
and I only have AMD EPYC servers to use it in…
Compiling the driver went fine, and from the dmesg output it loads OK.
However, I am unable to enable the desired number of virtual functions by echoing a number to sriov_numvfs.
Maybe I am getting to the relevant sysfs folder incorrectly, not going through the right bridge…
lspci | grep Flex
87:00.0 Display controller: Intel Corporation Data Center GPU Flex 140 (rev 05)
8a:00.0 Display controller: Intel Corporation Data Center GPU Flex 140 (rev 05)
dmesg | grep i915
[ 0.000000] Command line: initrd=\EFI\proxmox\6.5.13-3-pve\initrd.img-6.5.13-3-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs initcall_blacklist=acpi_cpufreq_init amd_pstate=active amd_pstate.shared_mem=1 i915.enable_guc=3 i915.max_vfs=3
[ 0.212536] Kernel command line: initrd=\EFI\proxmox\6.5.13-3-pve\initrd.img-6.5.13-3-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs initcall_blacklist=acpi_cpufreq_init amd_pstate=active amd_pstate.shared_mem=1 i915.enable_guc=3 i915.max_vfs=3
[ 46.212175] i915 0000:87:00.0: Running in SR-IOV PF mode
[ 46.212613] i915 0000:87:00.0: [drm] GT count: 1, enabled: 1
[ 46.213448] i915 0000:87:00.0: [drm] VT-d active for gfx access
[ 46.213900] i915 0000:87:00.0: [drm] Using Transparent Hugepages
[ 46.214348] i915 0000:87:00.0: [drm] Local memory IO size: 0x0000000140000000
[ 46.214377] i915 0000:87:00.0: [drm] Local memory available: 0x000000013cc00000
[ 46.218172] i915 0000:87:00.0: [drm] GT0: HuC firmware i915/dg2_huc_7.10.14_gsc.bin (7.10.14) is recommended, but only i915/dg2_huc_7.10.14_gsc.bin (7.10.3) was found
[ 46.218230] i915 0000:87:00.0: [drm] GT0: Consider updating your linux-firmware pkg or downloading from https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git/tree/i915
[ 46.226597] i915 0000:87:00.0: [drm] GT0: GuC firmware i915/dg2_guc_70.19.2.bin version 70.20.0
[ 46.226980] i915 0000:87:00.0: [drm] GT0: HuC firmware i915/dg2_huc_7.10.14_gsc.bin version 7.10.3
[ 46.237504] i915 0000:87:00.0: [drm] GT0: GUC: submission enabled
[ 46.237507] i915 0000:87:00.0: [drm] GT0: GUC: SLPC enabled
[ 46.237729] i915 0000:87:00.0: [drm] GT0: GUC: RC enabled
[ 46.272826] i915 0000:87:00.0: GT0: local0 bcs'0.0 clear bandwidth:106100 MB/s
[ 46.285145] i915 0000:87:00.0: GT0: local0 bcs'0.0 swap bandwidth:2362 MB/s
[ 46.285569] i915 0000:87:00.0: 3 VFs could be associated with this PF
[ 46.286344] [drm] Initialized i915 1.6.0 20201103 for 0000:87:00.0 on minor 1
[ 46.320844] i915 0000:87:00.0: SPI access overridden by jumper
[ 46.341173] i915 0000:8a:00.0: Running in SR-IOV PF mode
[ 46.341667] i915 0000:8a:00.0: [drm] GT count: 1, enabled: 1
[ 46.342601] i915 0000:8a:00.0: [drm] VT-d active for gfx access
[ 46.342972] i915 0000:8a:00.0: [drm] Using Transparent Hugepages
[ 46.343335] i915 0000:8a:00.0: [drm] Local memory IO size: 0x0000000140000000
[ 46.343637] i915 0000:8a:00.0: [drm] Local memory available: 0x000000013cc00000
[ 46.345429] i915 0000:8a:00.0: [drm] GT0: HuC firmware i915/dg2_huc_7.10.14_gsc.bin (7.10.14) is recommended, but only i915/dg2_huc_7.10.14_gsc.bin (7.10.3) was found
[ 46.346015] i915 0000:8a:00.0: [drm] GT0: Consider updating your linux-firmware pkg or downloading from https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git/tree/i915
[ 46.353163] i915 0000:8a:00.0: [drm] GT0: GuC firmware i915/dg2_guc_70.19.2.bin version 70.20.0
[ 46.353487] i915 0000:8a:00.0: [drm] GT0: HuC firmware i915/dg2_huc_7.10.14_gsc.bin version 7.10.3
[ 46.365461] i915 0000:8a:00.0: [drm] GT0: GUC: submission enabled
[ 46.365781] i915 0000:8a:00.0: [drm] GT0: GUC: SLPC enabled
[ 46.366274] i915 0000:8a:00.0: [drm] GT0: GUC: RC enabled
[ 46.396210] i915 0000:8a:00.0: GT0: local0 bcs'0.0 clear bandwidth:105995 MB/s
[ 46.409447] i915 0000:8a:00.0: GT0: local0 bcs'0.0 swap bandwidth:2343 MB/s
[ 46.409789] i915 0000:8a:00.0: 3 VFs could be associated with this PF
[ 46.410556] [drm] Initialized i915 1.6.0 20201103 for 0000:8a:00.0 on minor 2
[ 46.411206] i915 0000:8a:00.0: SPI access overridden by jumper
[ 46.447512] Creating 4 MTD partitions on "i915.spi.34560":
[ 46.448217] 0x000000000000-0x000000001000 : "i915.spi.34560.DESCRIPTOR"
[ 46.450999] 0x000000001000-0x0000005f0000 : "i915.spi.34560.GSC"
[ 46.453700] 0x0000005f0000-0x0000007f0000 : "i915.spi.34560.OptionROM"
[ 46.456084] 0x0000007f0000-0x000000800000 : "i915.spi.34560.DAM"
[ 46.460264] mei i915.mei-gscfi.34560-46e0c1fb-a546-414f-9170-b7f46d57b4ad: Could not read FW version ret = -19
[ 46.460791] mei i915.mei-gscfi.34560-46e0c1fb-a546-414f-9170-b7f46d57b4ad: FW version command failed -5
[ 46.461127] Creating 4 MTD partitions on "i915.spi.35328":
[ 46.461697] 0x000000000000-0x000000001000 : "i915.spi.35328.DESCRIPTOR"
[ 46.463348] mei i915.mei-gscfi.35328-46e0c1fb-a546-414f-9170-b7f46d57b4ad: Could not read FW version ret = -19
[ 46.463783] mei i915.mei-gscfi.35328-46e0c1fb-a546-414f-9170-b7f46d57b4ad: FW version command failed -5
[ 46.464069] 0x000000001000-0x0000005f0000 : "i915.spi.35328.GSC"
[ 46.466585] 0x0000005f0000-0x0000007f0000 : "i915.spi.35328.OptionROM"
[ 46.469138] 0x0000007f0000-0x000000800000 : "i915.spi.35328.DAM"
[ 47.479826] i915 0000:87:00.0: [drm] GT0: HuC: authenticated!
[ 47.480477] mei_pxp i915.mei-gsc.34560-fbf6fcf1-96cf-4e2e-a6a6-1bab8cbe36b1: bound 0000:87:00.0 (ops i915_pxp_tee_component_ops [i915])
[ 47.510479] i915 0000:8a:00.0: [drm] GT0: HuC: authenticated!
[ 47.510980] mei_pxp i915.mei-gsc.35328-fbf6fcf1-96cf-4e2e-a6a6-1bab8cbe36b1: bound 0000:8a:00.0 (ops i915_pxp_tee_component_ops [i915])
echo 6 > /sys/devices/pci0000:80/0000:80:01.1/0000:81:00.0/0000:82:00.0/0000:83:00.0/0000:84:18.0/0000:88:00.0/0000:89:01.0/0000:8a:00.0/sriov_numvfs
-bash: echo: write error: No such file or directory
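The "No such file or directory" there usually means the node path itself doesn't exist as typed: sysfs also exposes every device directly under /sys/bus/pci/devices as a flat directory of symlinks, so the bridge chain never needs to be spelled out by hand. A sketch, using the PF address from the lspci output above; note also that the dmesg output says only 3 VFs can be associated with each PF, so writing 6 would be rejected even at the right path.

```shell
# Address the PF directly instead of hand-walking the bridge hierarchy.
pf=0000:8a:00.0                                 # PF address from lspci above
numvfs=/sys/bus/pci/devices/$pf/sriov_numvfs
echo "$numvfs"
# Then, as root on the host:
#   cat /sys/bus/pci/devices/$pf/sriov_totalvfs   # this PF reports 3
#   echo 3 > "$numvfs"                            # 6 exceeds what the PF supports
```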
I did indeed pull the most recent firmware blobs from the link you posted above, and also symlinked the latest version to the specific filename the card was looking for.
If you go back to my dmesg output above, you will note that the driver loads the symlinked filename and reports the actual version of the blob:
so dg2_guc_70.19.2.bin was actually 70.20.0, and dg2_huc_7.10.14_gsc.bin was actually 7.10.3, i.e. the HuC blob is older than what the card was looking for.
Double-check BIOS settings? You might try ReBAR on/off (it should not be off); SR-IOV should be on, and the IOMMU (VT-d/AMD-Vi) should be on. Confirm the BIOS looks okay? You might also, strange as this sounds, disable Thunderbolt if you have the option.
You might also have to use one of the other git repos that carry the slightly patched version of the i915 driver.