I applied the changes provided by @gnif by hand on the 3.1 release, and it got the job done: NVIDIA System Information reports an x8 PCIe 3.0 connection.
Then I tried QEMU from the master branch on GitHub; NVIDIA System Information reports x1 PCIe 1.1 operation or something like that, just like unpatched QEMU. However, GPU-Z can run a basic load test and determines that under load the card runs at x8 PCIe 3.0. Games run fine, though. Might be a power-saving thing, I don't know.
I’m looking for a configuration example with virt-manager for the master branch (compiled it just yesterday) on how to configure the speed and width of the ports. I tried the configuration mentioned by @nibbloid, but I think he applied some patches not yet available on GitHub; the VM wouldn’t boot, and a popup in virt-manager says the options are not supported.
Almost a 500-point difference. In general, when actually using the OS I notice some slight sluggishness with hypervisor off as well (it seems to negatively impact 2D performance). With it on, there’s no perceptible difference from bare metal.
Apologies if this is hijacking the original topic of the thread, but I do feel it’s still related to an extent…
The PCIe patch got pushed to us UnRaid users not long ago in an RC for the next version.
We can see the correct slot info populated in the NVIDIA driver, and we are seeing the expected performance improvement (which is great).
I’ve always been under the impression that the latency/performance improvements could only really be taken advantage of by the Q35 machine type…
So I guess my question is… why do you guys use Q35 over i440fx? Should we be using Q35 at all if all we’re doing is passing through a PCIe graphics card and an NVMe drive? Is i440fx still being developed and improved? Will we still see latency and performance improvements down the line using i440fx?
At the end of the day, I don’t care what machine type my VM uses, I just want the closest performance possible to bare metal!
This thread was never about Q35 being faster or lower latency; it’s about how those of us who do use it, whether for improved passthrough compatibility or other reasons, can obtain optimal performance from a Q35 system.
These features are only available on the Q35 platform, and some of us require them either to get optimal performance out of our hardware, or even to get passthrough working properly in the first instance.
The idea that UnRaid will remove Q35, or even warn against using a newer platform topology that even the QEMU developers are trying to push, is idiotic. Not only are they going to make things harder for those of their users for whom Q35 is a requirement, they are going to make people shy away from an option that is completely valid and may, for their particular configuration, yield performance gains.
It’s also in the complete opposite direction of the oVirt team, who recently added Q35 options for BIOS type (with choices for Legacy BIOS, UEFI BIOS, and SecureBoot) to help provide more PCI passthrough options. Definitely a step in the right direction; it will help when I decide to add a GPU to my media server VM for 4K transcoding.
Generic PCIe root port link speed and width enhancements: Starting with the Q35 QEMU 4.0 machine type, generic pcie-root-port will default to the maximum PCIe link speed (16GT/s) and width (x32) provided by the PCIe 4.0 specification. Experimental options x-speed= and x-width= are provided for custom tuning, but it is expected that the default over-provisioning of bandwidth is optimal for the vast majority of use cases. Previous machine versions and ioh3420 root ports will continue to default to 2.5GT/x1 links.
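To make that concrete, here is a rough sketch of what the experimental tuning could look like on the raw QEMU command line. The IDs, chassis/slot numbers, and host PCI address below are made up for illustration; only `pcie-root-port` and the `x-speed=`/`x-width=` options come from the changelog text above, and the values assume a card that genuinely runs at PCIe 3.0 (8GT/s) x8:

```shell
# Hypothetical invocation: cap a generic root port at 8GT/s x8
# (PCIe 3.0 x8) instead of the 4.0 machine type's 16GT/s x32 default.
qemu-system-x86_64 \
  -machine q35 \
  -device pcie-root-port,id=rp1,bus=pcie.0,chassis=1,slot=1,x-speed=8,x-width=8 \
  -device vfio-pci,host=0000:01:00.0,bus=rp1
```

Since the options are marked experimental (the `x-` prefix), their names or behavior may change in later QEMU releases.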
With all due respect, asking a small question or sharing tips and tweaks is far from what “support” means; no one is actually demanding support.
I understand and respect your point of view, though.
Now, going back to the subject and gnif’s initial observation:
I see they released v4.0-rc1, which seems to have fixed some issues with Windows 10 (which I’m concerned with), so I’m able to experiment with the Q35 4.0 machine type. They do over-provision the VM: a q35-3.1 machine had an x1 bus as reported by the NVIDIA control panel (check System Information), while a q35-4.0 machine now has an x16 bus. GPU-Z detects 8 lanes in either case, which reflects the actual configuration in my case (two slots running x8/x8).
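For anyone wanting to try the same thing, selecting the versioned machine type in the libvirt domain XML looks something like this (a minimal fragment of the `<os>` element; the rest of the domain definition is omitted, and the exact machine name depends on what your QEMU build reports via `qemu-system-x86_64 -machine help`):

```xml
<os>
  <!-- Versioned Q35 machine type introduced with QEMU 4.0 -->
  <type arch='x86_64' machine='pc-q35-4.0'>hvm</type>
</os>
```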
No, but you asked an off-topic question; this thread isn’t about why QEMU might be crashing. This forum is full of very smart people willing to help, but if cross-posting to unrelated threads is how people are going to behave, it’s going to drive them/us away.
The topic of this post and the discussion here is highly technical, and it’s hard enough to follow as it is, simply due to the evolving nature of this thread as the discovery process unfolded.
It is not “released”; it has been made available. RC stands for “Release Candidate”, which means it might be ready for release, but it still needs testing and may contain serious bugs, etc.
Yes, they now default to the maximum configuration specified by PCIe 4.0, but you can (and really should) set them to what the link really is by specifying the values via the x-speed and x-width parameters.
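One hedged way to do that from virt-manager/libvirt, without editing every root port by hand, is a `<qemu:commandline>` override in the domain XML using `-global`, which applies a property to every generic `pcie-root-port` instance. This is a sketch, not a definitive recipe: it assumes your VM uses the generic `pcie-root-port` (not `ioh3420`), that the card really runs at PCIe 3.0 (8GT/s) x8, and it requires the `xmlns:qemu` namespace declaration on the `<domain>` element:

```xml
<domain xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0' type='kvm'>
  ...
  <qemu:commandline>
    <!-- Cap every generic root port at 8GT/s x8 (PCIe 3.0 x8). -->
    <qemu:arg value='-global'/>
    <qemu:arg value='pcie-root-port.x-speed=8'/>
    <qemu:arg value='-global'/>
    <qemu:arg value='pcie-root-port.x-width=8'/>
  </qemu:commandline>
</domain>
```

Adjust the values to match your actual slot configuration; since the `x-` options are experimental, verify against your QEMU version before relying on them.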
Just because GPU-Z sees it correctly doesn’t mean the driver is identifying the card the same way; it may be programming the GPU registers incorrectly. We already know that GPU-Z doesn’t see it the same as the “System Information” view in the NVIDIA control panel, which shows the true configuration of the card as the driver sees it.
This thread is very valuable for learning, thank you very much. I have passed through a 6500 XT.
I struggled with the same issue: GPU-Z shows the bus as PCI instead of PCIe. Since I am using Ubuntu 18.04, I thought it belonged here. After updating to the latest Arch kernel, QEMU, and libvirt, it works at bare-metal performance.
I have also used the ioh3420 root port with xio3130 downstream and upstream PCIe switches on Ubuntu 18.04, replicating the structure both on QEMU 2.11 with libvirt and on QEMU 6.1 via the command line, and even then it did not work. I guess the trick is in the kernel.