I am pleased to announce that those using Spice for mouse input now have a fix for the mouse freeze problem; see https://www.patreon.com/posts/18658431 for more information.
Excellent work! Thanks for solving this. That was the primary blocker to me using LG!
This is really neat, have you done any more experimenting with VM to VM stuff, and do you have any tips for getting it to work?
I currently use VM->VM daily, with the following configuration on my ThreadRipper platform:
Host: Debian, VGA: Old quadro something, just some junk and a ton of storage in a ZFS array.
- Workstation, Debian, VGA: AMD VEGA, USB hub passed through with KB & Mouse
- Games, Windows 10, VGA: GTX1080Ti, using LookingGlass from the Workstation
- Web Development, Debian, HTTP server, debugging environment, etc…
- Phone Server, Debian, Running Asterisk providing VOIP services
- GitLab Server
- Build Server, Debian, kernel builds, etc.
The host is configured to expose the NUMA topology to Linux (set memory interleaving to “Channel” in the BIOS). I then use numactl to launch QEMU and restrict each VM to one of the two dies, keeping memory allocations local to the VM.
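As a rough sketch of what that looks like (the node number, memory size, and disk path here are illustrative, not my exact command line):

```shell
# Bind the VM's vCPUs and memory allocations to NUMA node 0 (die 0),
# so the guest's RAM stays local to the die it runs on.
numactl --cpunodebind=0 --membind=0 \
    qemu-system-x86_64 -enable-kvm \
    -m 16G -smp 8 \
    -drive file=/path/to/workstation.qcow2,if=virtio
```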
I run Open vSwitch on the host so I can VLAN tag each VM as appropriate (i.e. Windows is on a “Guest” VLAN as I don’t trust it).
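For anyone unfamiliar with Open vSwitch, per-VM VLAN tagging boils down to something like the following (bridge name, tap device names, and VLAN numbers are all illustrative):

```shell
# Create the bridge once, then attach each VM's tap device with its VLAN tag.
ovs-vsctl add-br br0
ovs-vsctl add-port br0 tap-workstation tag=10   # trusted VLAN
ovs-vsctl add-port br0 tap-windows tag=20       # untrusted "Guest" VLAN
```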
The host kernel and QEMU have been patched to support HyperThreading on the Ryzen and ThreadRipper platforms, and the PulseAudio patch for Windows audio has been applied. For Linux guest audio I simply connect the guest to the host’s PulseAudio server over the local LAN.
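Networked PulseAudio is just a couple of lines on each side; a minimal sketch, assuming the host is 192.168.0.1 and the guests live on 192.168.0.0/24 (adjust the ACL and address to your LAN):

```shell
# On the host: accept PulseAudio connections from the guest subnet.
pactl load-module module-native-protocol-tcp auth-ip-acl=192.168.0.0/24

# In the Linux guest: point audio clients at the host's PulseAudio server.
export PULSE_SERVER=tcp:192.168.0.1
```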
LookingGlass has throughput problems from VM -> VM though; at present I cannot get enough bandwidth to support 1080p well. I will re-investigate this when I have some free time.
Can Looking Glass be used to control a VM from another VM?
What a joke, I was banned (not just warned, or had my post deleted/edited) from the AnandTech forums simply for posting to the official AMD discussion asking AMD what they are going to do about the outstanding issues with ThreadRipper, PCI bus problems, and VEGA FLR (Function Level Reset).
The reported reason for the ban is “Spam”
Please everyone, go and post these issues over here… I highly recommend you all make some noise about these problems as it appears it is being censored and ignored.
Threadripper & PCIe Bus Errors
Thanks a lot, I’ll have to give it a shot next week.
Well damn I was wanting to run my windows guest at 1440p. I’ll keep patreon money coming and see what you got going down the road. Appreciate the work you have done
Our guys at msi have made a lot of progress on this from what I’ve been told. Still waiting on some details tho
I’m hoping a JPEG XS encoder could be implemented as a non-free build of Looking Glass which you build from the source yourself. Would help a ton for memory bandwidth constrained situations to run 4K.
Correct, I have a partial implementation but it needs work (& time).
How is MSI involved in this, if I may ask?
Three-letter words cannot be used as a search term.
They set up a test system identical to ours, and when there is weirdness they look at it and document it. So far it has been a really awesome collaboration and they’ve put a lot of work into shoring up some odd behavior. Some actual engineer eyeballs have been looking into the issues.
I have to say that the PCIe reset bug is quite annoying on the Vega 56. Is there any solid solution to it, besides the Windows registry fix, which is not very reliable?
@gnif I don’t like reviving old things, but it seems no one has answered you yet.
Some SUSE folks posted a document, EPYC Performance, where they measured both bare-metal and VM (KVM/Xen) performance. Apparently, VMs don’t know the topology of the L3 cache and there’s no way to pass that information to the guest. This affects memcpy() as it can’t use all the compiler optimizations.
I appreciate the thread bump. It made me aware that the developer of Looking Glass is on Patreon, and I’m not pledged (yet).
Thanks for sharing. I’m very interested in this. Is this solvable by a GCC patch?
This is resolved, see the patches by Babu Moger:
I have worked with him to ensure these patches work and correctly report the topology of the TR and EPYC platforms, including the cache sizes.
I am talking to him as well, and I am running the 3.0.0-rc2 qemu branch (just saw you signed off on some patches!).
While the patch is amazing, it doesn’t work out of the box with -cpu host; you have to force “-cpu host,+topoext”, which is completely different behavior from Intel. For now I have mine patched to just enable topoext on all x86 CPUs.
Speaking of QEMU 3, there is a patch from Gerd Hoffmann improving HDA audio as well.
As gnif mentioned, Babu Moger from AMD fixed this with his series of patches, there are a few others he pushed besides the ones linked.
If you use qemu-3.0.0-rc2, which was tagged yesterday on git.qemu.org, it will contain the patches.
Emulating EPYC CPUs will work out of the box with SMT and L3 topology. If you are using -cpu host, you will have to change it to “-cpu host,+topoext”.
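To make the difference concrete, a minimal sketch (the -smp values and other arguments are illustrative):

```shell
# Emulated EPYC: SMT and L3 topology work out of the box.
qemu-system-x86_64 -enable-kvm -smp 8,cores=4,threads=2 -cpu EPYC

# Host passthrough on Ryzen/TR: topoext must be forced explicitly.
qemu-system-x86_64 -enable-kvm -smp 8,cores=4,threads=2 -cpu host,+topoext
```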
Libvirt requires an addition to the <cpu> element:

<cpu mode='host-passthrough' check='none'>
  <topology sockets='1' cores='2' threads='2'/>
  <feature policy='require' name='topoext'/>
</cpu>
This should be enough to have SMT enabled on AMD family 17h, and the cache topology will be applied as well. I am trying to see if this can be enabled automatically when using Ryzen/TR with -cpu host, but so far I only have a very “hacky” patch (I’m not sure how it behaves with Intel CPUs, as it enables topoext on all x86 CPUs).