In the future, computer operating systems and hardware will be smart enough to let apps run in an operating-system-agnostic way. To me this means a computer could run a Windows app, a Mac app, and a Linux app (or BeOS, or FreeBSD, or Plan 9, or Android, or anything, really…) side by side, with performance as if each were running on bare metal.
I've recently switched my main VM server from Xen to KVM because of its better support for PCI passthrough, and then learned that the motherboard has some serious firmware bugs, and without an HP service subscription I can't update its BIOS. But anyway, that's probably something for another thread. What I'm wondering here is why you manually redefined the VM rather than using virsh edit, which does syntax checking and will tell you if you did something wrong. It uses whatever editor you have set in $EDITOR and IMO works better because you don't have to go searching for the XML files. Virt-Manager still picks up the changes without any issues.
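For anyone following along, the whole workflow is a single command (using the win10 domain name from the video as an example):

```bash
# Opens the domain XML in whatever $EDITOR is set to, validates it on save,
# and redefines the domain in one step; if the XML is malformed, libvirt
# rejects it and offers to reopen the editor.
virsh edit win10
```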
Forgot to mention that I've been drooling over this Ryzen stuff. My 2700K is starting to show its age, but I don't have anywhere near the money for new parts. It seems that with these newer CPUs the focus is shifting from instructions per clock toward connectivity. The NVMe drives have me especially interested, as I deal with a lot of very large files and 500 MB/s is starting to feel slow.
Nice to see better support for PCIe passthrough, but wouldn't it be slightly more convenient to edit the domain XML with virsh edit win10 instead? When I did this on my brother's PC, I installed the virtio drivers during the Windows installation with the virtio-win ISO attached as a second virtual drive, which might be somewhat less fiddly than installing Windows on bare metal, booting into Linux, and giving that drive to the Windows VM. Otherwise it was a great, informative video.
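Something like this should attach the driver ISO before starting the install (the domain name and ISO path are examples):

```bash
# Expose the virtio-win driver ISO to the guest as a second CD-ROM so the
# Windows installer can load the virtio storage/network drivers from it.
virsh attach-disk win10 /var/lib/libvirt/images/virtio-win.iso hdb \
  --type cdrom --mode readonly --config
```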
There's one additional way to reroute sound from the VM. Don't add any virtual sound cards via KVM; instead, add a virtual dummy sound card driver to Windows itself, use Icecast (OGG) as the sound server, and then connect to it from the physical Linux side with your favorite software.
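Once the guest is streaming to Icecast, the host side is as simple as pointing a player at the mount point (the address and mount name here are made up):

```bash
# Play the guest's Icecast OGG stream on the Linux host; 192.168.122.x is
# libvirt's default NAT subnet, "vm.ogg" is an example mount point.
mpv http://192.168.122.100:8000/vm.ogg
```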
Exciting times. I definitely share your enthusiasm for the concept of Windows as an appliance in a container. It's not just gaming at issue here. Autodesk's "partnership" with Microsoft is an eternal bond that permeates engineering, keeping huge swathes of industry tied to Windows. That marriage also dictates that Windows runs the servers, pushing license fees through the roof. It is possible that SR-IOV can begin to change some of that.
It's actually pretty easy to "hotswap" a mouse and keyboard between host and guest. I used the QEMU option -qmp tcp:localhost:4444,server,nowait and added id="something" to my mouse and keyboard USB passthrough devices.
Then you can netcat commands into QEMU and grab and release the devices as necessary. Pack it into a script with a keybinding and you're good to go. How you get the command invoked while the devices are bound to the guest depends on your creativity; my solution was to stick an old keyboard in a drawer and use it basically only to hand control back to the host.
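A rough sketch of what that script could look like; the device IDs and the USB vendor/product numbers are examples and need to match your own -device usb-host,...,id=... entries:

```bash
#!/bin/sh
# Talk QMP to QEMU over the TCP socket opened with
# -qmp tcp:localhost:4444,server,nowait. qmp_capabilities must be sent
# first on every connection; -q1 keeps the connection open briefly for the
# replies (flag spelling varies between netcat flavors).
qmp() {
  printf '%s\n' '{"execute":"qmp_capabilities"}' "$@" | nc -q1 localhost 4444
}

case "$1" in
  release)  # hand keyboard and mouse back to the host
    qmp '{"execute":"device_del","arguments":{"id":"kbd0"}}' \
        '{"execute":"device_del","arguments":{"id":"mouse0"}}'
    ;;
  grab)     # give them to the guest again (example Logitech IDs)
    qmp '{"execute":"device_add","arguments":{"driver":"usb-host","vendorid":1133,"productid":49948,"id":"kbd0"}}' \
        '{"execute":"device_add","arguments":{"driver":"usb-host","vendorid":1133,"productid":49970,"id":"mouse0"}}'
    ;;
esac
```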
Took me ages to figure this one out, but it works just fine. After releasing the USB devices, Linux picks them up in a fraction of a second, but Windows takes about 30 seconds to notice once it gets the devices back. Maybe some hotplug interrupt isn't being raised or something.
Very, very exciting news. Thank you @wendell for your hard work on this! I can't wait for my holidays to give it a try on my own machine. Holidays not because of the time required to set this up, but because of the potential distraction of games and the like.
I would be highly interested in a comparison between native Windows 10 and Windows 10 VM performance under KVM on Ryzen. To my knowledge nobody has posted anything about that yet (CPU bench / GPU bench / IO bench).
Awesome video! I can't wait for my AM4 bracket to arrive; I want to play with things like this. However, my problem is that I only have a single 1070.
Is it possible to have a secondary graphics card (something like an NVIDIA 210) and, before I start the VM, switch from the 1070 to the 210 on X (much like the graphics switching on laptops), somehow tell Linux to "free up" the 1070, and then undo this when the VM halts, so I get the 1070 back for gaming and CUDA?
I have been away from Linux for the last five years, so I don't know whether something like this is possible or how the systems around it work.
I think it would be much harder with the nonfree NVIDIA drivers, because the driver has to support unbinding from the card without a reboot.
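From what I've read, virsh is supposed to be able to handle the rebinding, so maybe something like this would work (the PCI address is just an example; no idea how well the proprietary NVIDIA driver copes with it):

```bash
# Detach the 1070 from its host driver so the VM can take it over...
virsh nodedev-detach pci_0000_01_00_0
# ...start the VM, play, shut it down, then hand the card back to the host:
virsh nodedev-reattach pci_0000_01_00_0
```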
I have separate GPUs for the host and the VM: my Debian Stretch host gets a 970, and I pass a 1080 through to my Windows 7 guest. It's more expensive to have two cards, but it lets me play games on both at the same time, so it's as if I have two computers when a friend comes over.
Windows 10 does indeed kill my soul a little more every time I use it. I would love to use Linux 24/7 again, and I would consider going to Ryzen just for this.
I am thinking that my next CPU will be a Ryzen 1600.
(On my phone, so limited in linking stuff.) Are you sure about that? Take a look at the Arch Wiki; they document how to spoof the vendor ID so that it works:
Troubleshooting: "Error 43: Driver failed to load" on Nvidia GPUs passed to Windows VMs
They provide two ways, the first being suitable for newer versions of libvirt/QEMU; the one below it is for older systems where this functionality does not exist.
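If I remember the wiki right, the newer-libvirt variant is just two additions to the domain XML (via virsh edit; the vendor string can be anything non-empty):

```xml
<features>
  <hyperv>
    <!-- report a fake Hyper-V vendor ID so the NVIDIA driver
         doesn't detect it's running inside a VM -->
    <vendor_id state='on' value='whatever'/>
  </hyperv>
  <kvm>
    <!-- hide the KVM CPUID signature from the guest -->
    <hidden state='on'/>
  </kvm>
</features>
```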
A Ryzen 5 1600 or 1600X (6 cores), or any Ryzen 7 CPU, is an ideal pairing for this setup. Be sure to check out our motherboard reviews for the full rundown.
While I haven't actually tested running a VM yet, I can confirm that on the Asus Prime X370-Pro with the latest AGESA 1.0.0.6 UEFI, graphics cards show up in isolated IOMMU groups.
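For anyone who wants to check their own board, the usual loop (as on the Arch Wiki) lists every group and its devices; the GPU and its audio function should ideally be alone in theirs:

```bash
#!/bin/sh
# Print every IOMMU group and the devices it contains.
for d in /sys/kernel/iommu_groups/*/devices/*; do
  n=${d#*/iommu_groups/*}; n=${n%%/*}
  printf 'IOMMU group %s: ' "$n"
  lspci -nns "${d##*/}"
done
```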
@wendell Have you experimented with pinning different numbers of cores/threads to the VM? For example, instead of splitting them 50/50 between host and VM, a 25/75 split (2c/4t for the host, 6c/12t for the VM).
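To make the question concrete, something like this in the domain XML is what I mean, assuming lscpu -e shows thread siblings as consecutive pairs (which it typically does on Ryzen), so the host keeps cores 0-1 (CPUs 0-3):

```xml
<vcpu placement='static'>12</vcpu>
<cputune>
  <!-- guest gets cores 2-7, i.e. host CPUs 4-15 -->
  <vcpupin vcpu='0'  cpuset='4'/>
  <vcpupin vcpu='1'  cpuset='5'/>
  <vcpupin vcpu='2'  cpuset='6'/>
  <vcpupin vcpu='3'  cpuset='7'/>
  <vcpupin vcpu='4'  cpuset='8'/>
  <vcpupin vcpu='5'  cpuset='9'/>
  <vcpupin vcpu='6'  cpuset='10'/>
  <vcpupin vcpu='7'  cpuset='11'/>
  <vcpupin vcpu='8'  cpuset='12'/>
  <vcpupin vcpu='9'  cpuset='13'/>
  <vcpupin vcpu='10' cpuset='14'/>
  <vcpupin vcpu='11' cpuset='15'/>
</cputune>
```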
I reckon this would work just as well on Manjaro, right?