I applied the changes provided by @gnif by hand on the 3.1 release, and it got the job done: NVIDIA System Information reports an x8 PCIe 3.0 connection.
Then I tried QEMU from the master branch on GitHub; NVIDIA System Information reports x1 PCIe 1.1 operation or something like that, just like the unpatched QEMU. However, GPU-Z can run a basic test and determines that under load the card runs at x8 PCIe 3.0. Games run fine, though. Might be a power-saving thing, idk.
I’m looking for a configuration example with virt-manager for the master branch (compiled it just yesterday) showing how to configure the speed and width of the root ports. I tried the configuration mentioned by @nibbloid, but I think he applied some patches not yet available on GitHub; the VM wouldn’t boot, and a popup in virt-manager says the options are not supported.
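For what it’s worth, one way to try this with virt-manager/libvirt is to bypass libvirt’s own controller model and pass the root port straight through on the QEMU command line via a `<qemu:commandline>` block. This is only a sketch: the `x-speed`/`x-width` property names exist only in builds carrying the patch, and the `id`/`chassis`/`slot` values here are placeholders you would adapt to your own topology:

```xml
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <!-- ... rest of the domain definition ... -->
  <qemu:commandline>
    <!-- hypothetical example: a root port with an x8 Gen3 (8 GT/s) link -->
    <qemu:arg value='-device'/>
    <qemu:arg value='pcie-root-port,id=rp1,bus=pcie.0,chassis=1,slot=1,x-speed=8,x-width=8'/>
  </qemu:commandline>
</domain>
```

Note that the `xmlns:qemu` namespace declaration on `<domain>` is required, otherwise libvirt silently drops the whole block.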
To install AMD drivers in Windows 10 I use 2 monitors:
monitor #2 connected normally via DisplayPort to the VM (it is also connected to the host via HDMI)
monitor #1 connected via DVI-D to the Linux host
On the Linux host I connect to the VM’s Remote Desktop (i.e. over RDP) with Remmina (this makes Windows 10 log out on monitor #2). Leave monitor #2 on the logout screen.
I then install the AMD drivers over the RDP session. The RDP session normally drops at around 55-60% driver installation progress, and I just reconnect with RDP again to finish the installation.
This method will install the current 18.12.x / 19.1.1 drivers.
Almost a 500-point difference. In general, when actually using the OS I notice some slight sluggishness with the hypervisor flag off as well (it seems to negatively impact 2D performance). With it on there’s no perceptible difference from bare metal.
Apologies if this is hijacking the original topic of the thread, but I do feel it’s still related to an extent…
The PCIe patch got pushed to us UnRaid users not long ago in an RC for the next version.
We can see the correct slot info populated in the NVIDIA driver, and we’re seeing the expected performance improvement (which is great).
The UnRaid devs have asked the question as to WHY we are even using the Q35 machine type in the first place for a Windows VM with PCIe passthrough, as i440fx would be better suited and doesn’t suffer from the PCIe speed issue that Q35 had. They’re even considering removing the option in their GUI to select Q35 when creating a Windows VM because of this: https://forums.unraid.net/topic/77499-qemu-pcie-root-port-patch/?do=findComment&comment=724656
I’ve always been under the impression that the latency/performance improvements could only really be taken advantage of on the Q35 machine type…
So I guess my question is… why do you guys use Q35 over i440fx? Should we be using Q35 at all if all we’re doing is passing through a PCIe graphics card and NVMe? Is i440fx being developed and improved? Will we still see latency and performance improvements down the line using i440fx?
At the end of the day, I don’t care what machine type my VM uses, I just want the closest performance possible to bare metal!
This thread was never about how q35 is faster or lower latency; it’s about how those of us who do use it, whether for improved compatibility with passthrough or otherwise, can obtain optimal performance from a q35 system.
These features are only available on the q35 platform, and some of us require them either to get optimal performance out of our hardware, or even to get passthrough to work properly in the first instance.
The idea that UnRaid will remove Q35, or even warn against using a newer platform topology that the QEMU developers themselves are trying to push, is idiotic. Not only are they going to make it harder for their users to use UnRaid where Q35 is a requirement, it is going to make people shy away from an option that is completely valid and may, for their particular configuration, yield performance gains.
Also, we have evidence showing that when the driver detects a link speed of 0x, or that it’s not on PCIe at all (i.e. i440fx), it programs the SoC differently. The evidence comes both from benchmarks and from the AMDGPU source code in the Linux kernel, which has a TODO to implement the PCIe 3.0-specific configuration in: https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/amd/amdgpu/soc15.c#L446
It’s also in the complete opposite direction of the oVirt team who recently added Q35 options for BIOS Type (with options for Legacy BIOS, UEFI BIOS, and SecureBoot) to help provide more PCI passthrough options. Definitely a step in the right direction that they included this, it will help when I decide to add a GPU for my media server VM for transcoding 4K.
QEMU 3.1.90 monitor - type 'help' for more information
(qemu) qemu-system-x86_64: -device pcie-root-port,id=root_port1,chassis=0,slot=0,bus=pcie.0: Property '.speed' not found
Compiled the latest qemu from git, which seems to have this patch already applied. I’m using the following configure:
Generic PCIe root port link speed and width enhancements: Starting with the Q35 QEMU 4.0 machine type, generic pcie-root-port will default to the maximum PCIe link speed (16GT/s) and width (x32) provided by the PCIe 4.0 specification. Experimental options x-speed= and x-width= are provided for custom tuning, but it is expected that the default over-provisioning of bandwidth is optimal for the vast majority of use cases. Previous machine versions and ioh3420 root ports will continue to default to 2.5GT/x1 links.
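Concretely, the release note above corresponds to command lines like the following. This is a sketch, not a full invocation: the `...` stands for the rest of your VM’s options, and the `x-speed`/`x-width` values shown assume a host slot that is actually Gen3 x8:

```shell
# q35 4.0 machine type: generic root ports default to 16 GT/s x32,
# so nothing extra is needed for the common case
qemu-system-x86_64 -machine q35 \
  -device pcie-root-port,id=rp1,bus=pcie.0,chassis=1,slot=1 ...

# explicit tuning with the experimental properties
# (speed in GT/s, width in lanes; match your physical slot)
qemu-system-x86_64 -machine q35 \
  -device pcie-root-port,id=rp1,bus=pcie.0,chassis=1,slot=1,x-speed=8,x-width=8 ...
```

On older machine versions (q35-3.1 and earlier) and on ioh3420 root ports, the 2.5GT/x1 defaults still apply, which is why the unpatched behaviour persists there.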
Those arguments work, I built the 4.0.0-rc0 release
Now I am getting Windows throwing error 0xc0000225 whenever I try to boot (even from the ISO itself).
I get the error even without the compile options.
If I switch back to Debian’s packaged qemu and change back to the ioh3420 the VM boots fine, as well as the ISO.
I’m stumped. I don’t understand why the ISO can’t boot.
Edit: I have a feeling this is something to do with the default drivers in the new versions of qemu. I’m going to try adding my VM to Virtual Machine Manager and see what happens.
All due respect, asking a small question or sharing tips and tweaks is far from what “support” means, no one actually demands support.
I understand and respect your point of view, though.
Now, going back to the subject and gnif’s initial observation:
I see they released 4.0-rc1, which seems to have fixed some issues with Windows 10 (which I’m concerned with), so I’m able to experiment with the q35-4.0 machine type. They do over-provision the VM: a q35-3.1 machine had an x1 bus as reported by the NVIDIA control panel (check System Information), and a q35-4.0 machine now has an x16 bus. GPU-Z detects 8 lanes in either case, which reflects the actual configuration in my case (2 slots running x8/x8).
No, but you asked an off-topic question; this thread isn’t about why QEMU might be crashing. This forum is full of very smart people willing to help, but if cross-posting to off-topic threads is how people are going to behave, it’s going to drive them/us away.
The topic of this post and the discussion here is highly technical, and it’s hard enough to follow as it is, simply due to the evolving nature of this thread as the discovery process unfolded.
It is not “released”, it has been made available. RC stands for “Release Candidate”, which means it might be ready for release, but it needs testing and may contain serious bugs, etc.
Yes, they now default to the maximum configuration specified by PCIe 4.0, but you can (and really should) set it to what it really is by specifying the values via the x-speed and x-width parameters.
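For anyone wanting to match their real hardware, the accepted values map directly onto the PCIe generations. This is a sketch based on the patched `pcie-root-port` device; the `id`/`chassis`/`slot` values are placeholders, and you can confirm the property names against your own build with `qemu-system-x86_64 -device pcie-root-port,help`:

```shell
# Values accepted by the experimental link properties:
#   x-speed: 2_5 | 5 | 8 | 16        (GT/s, i.e. PCIe gen 1 / 2 / 3 / 4)
#   x-width: 1 | 2 | 4 | 8 | 16 | 32 (lane count)
# e.g. for a card sitting in a physical Gen3 x8 slot:
-device pcie-root-port,id=rp1,bus=pcie.0,chassis=1,slot=1,x-speed=8,x-width=8
```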
Just because GPU-Z sees it correctly doesn’t mean the driver is identifying the card the same way; it may be programming the GPU registers incorrectly. We already know that GPU-Z doesn’t see it the same as the “System Information” view in the NVIDIA control panel, which is where you get the true configuration of the card as the driver sees it.
This thread is very valuable for learning, thank you very much. I have passed through a 6500 XT.
I struggled with the same issue: the GPU-Z bus shows PCI instead of PCIe. Since I was using Ubuntu 18.04, I thought it belonged here. After updating to the latest Arch kernel, QEMU, and libvirt, it works at bare-metal performance.
I also tried the ioh/xio downstream and upstream PCIe switches on Ubuntu 18.04, replicating the structure both on QEMU 2.11 with libvirt and on QEMU 6.1 from the command line, and even then it did not work. I guess the trick is in the kernel.