Skylake - Run Windows, and Windows Games, under Linux with UEFI and full video passthrough!
(update for this coming 2017!!!!!!!!!!111one!!)
The point of this writeup and these two videos is to show you a Linux box that will literally do everything your Windows machine does, but in a nice self-contained way. The reality here is a little less Hollywood than what you see in the video -- Arch Linux is, IMHO, not the best choice for a beginner. The Arch wiki is a stellar set of documentation, but it helps if you already know what you are doing. Fedora Rawhide is probably a more user-friendly basis. Still, if you have some Linux experience and have installed Arch successfully, you can probably manage this.
It is worth noting that your goal, if you do this, should be to live on Linux as much as possible. If you want to replicate the Windows (or Mac!) experience on Linux, you're going to be disappointed. Linux works differently, and you should set your expectations as such. You can be perfectly productive on Linux and many games on Steam are already available on Linux natively. You can also run Windows without passing through a graphics card, though you would not be gaming much in that case.
Finally, Wine is another viable alternative that is less "heavy" than what we're doing here. Wine lets you run Windows apps under Linux like native apps. Generally that is a better experience, but some games and apps are problematic under Wine. The Adobe suite is especially problematic there, but it will work fine (even with hardware acceleration) under this graphics-card-passthrough method.
For this video I used Arch Linux and Kernel 4.1. I would also recommend Fedora Rawhide. It should be possible to do this with Ubuntu or Debian as long as you update to Linux Kernel 4.1 or 4.2. Ubuntu 15.10 is likely to ship with a bleeding edge kernel, and if that happens I will probably update this post. Or perhaps our members can :)
Kernel 4.1 improves PCI passthrough capabilities and device support. Kernel 4.2/4.3 adds better support for the proprietary/open source driver-in-userspace merger going on right now with the ATI/AMD graphics driver.
This how-to is geared a bit toward Skylake and the Asus Z170 Deluxe. It should work on any board in the same family (Z170-A, etc.). It should work on other boards as well, but I haven't yet tested them. I plan to, if there is enough interest. It does depend a bit on the motherboard UEFI having proper configuration options for passthrough, and a functional IOMMU table.
Please note that at this time I am not aware of a working PCI passthrough setup for the onboard graphics, so this how-to does not cover passing the onboard Skylake graphics through to a VM.
This how-to is a guide on using a UEFI-based virtual machine (for fast boot times!) and VFIO for PCIe passthrough as opposed to the old ways.
The following links were very helpful for me in figuring this out:
- http://vfio.blogspot.com/
- https://wiki.archlinux.org/index.php/QEMU
- https://bbs.archlinux.org/viewtopic.php?id=162768 (reference only, skip to the end, somewhat outdated)
- https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF
At the end of this guide, you should be in a position to boot Windows in a VM in about 3 seconds.
What do you need on the hardware side of things? Ideally:
A UEFI-based graphics card. For our tests we'll be using the Asus Strix 390X. I can't recommend nVidia here because if their graphics driver detects that it is running in a virtual machine, it shuts down. This is old news and has been extensively documented, but it has not been fixed by nVidia. This is bad behavior and you should vote with your wallet.
You do not have to have a UEFI-based video card, but in the past one of the reasons I did not want to do a tutorial on this is that I had a lot of problems with different graphics cards when starting up and shutting down the VM. What happens is the Windows drivers leave the card in an undefined state, which can lock the whole machine the next time you boot the VM. UEFI is much more well-behaved.
You will want to have a monitor with multiple inputs. You could use a switchbox, but I do not recommend it. The reason is that you will attach both the onboard graphics and the add-in graphics adapter to the same display and toggle between inputs (at least as far as this tutorial goes). You could use two separate displays, if you want, but for this tutorial I am using the A399 4k monitor, which is glorious for both productivity and gaming.
Another use case for your Windows VM is Steam In-Home Streaming. In that case, you would not need to attach anything to your add-in graphics card once your virtual machine is set up. You can use Remote Desktop or VNC to remote into the Windows VM, and then Steam on Linux can connect to Steam on Windows and stream games from Windows to Linux. This is a bit of a performance hit, obviously, but for many games it is a fine pragmatic solution.
Remote Desktop seems to work fine with Adobe OpenCL acceleration, which means you can run Adobe apps through Remote Desktop on your virtual machine and get a near-native experience, again with no physical connection to the video card.
For me, I found it fine to toggle the display input between DisplayPort (Windows) and HDMI (Linux) on the monitor. Do note that I was not able to get 4k/60Hz working with Skylake, but it is likely that is resolved in kernel 4.2.
For control of the virtual machine during initial setup, I would recommend a second USB keyboard and mouse. You can map these USB peripherals directly through to the virtual machine. Just use lsusb to determine the device and port numbers and substitute those in the qemu command that will appear later. Once you have your VM set up, you can use Synergy to seamlessly pass mouse and keyboard control between the machines without having to do any USB mapping.
Synergy is a program you can install on Arch (and download and install on Windows) that uses network sockets to "pass" input devices between physical machines. In our case, we'll set it up so that when we go off the left side of the screen in Linux, our cursor appears on the right side of the Windows display. Scroll Lock locks the cursor to the display/machine in question. This is also a handy program if you want to, for example, set up your laptop next to your desktop but still use only one set of input devices.
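As a sketch of what that looks like on the Synergy server (Linux) side -- the screen names `linuxbox` and `windowsvm` here are placeholders, use your actual machine names:

```
# synergy.conf on the Linux (server) side -- screen names are placeholders
section: screens
    linuxbox:
    windowsvm:
end

section: links
    # moving off the LEFT edge of the Linux screen lands on the Windows VM,
    # and off the RIGHT edge of the Windows screen comes back to Linux
    linuxbox:
        left = windowsvm
    windowsvm:
        right = linuxbox
end
```

Start `synergys` with this config on Linux and point the Windows Synergy client at the Linux box's IP.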
If you have a USB KVM switch, that can also be configured to toggle one keyboard and mouse between different USB ports. Some USB KVM switches are problematic. I would not recommend this IOGEAR KVM because it renumbers the USB devices every time you switch inputs.
Here's our full hardware rundown:
- Intel i7 6700k Skylake CPU
- Corsair H110i GTX CPU Cooler
- Corsair Vengeance LPX DDR4-3200 16GB Memory Kit
- Corsair TX850 PSU
- Intel NVMe 750 SSD (400GB)
- Asus Z170 Deluxe
- Asus STRIX Radeon 390X AMD GPU
- Lian Li PC-A51 ATX Case (modded!)
--- Hardware Build Video Here:
The first thing to do is to go into your UEFI and enable VT-d, IOMMU and related options. There should be some coverage of this in the video. As we're really talking about Skylake here, the i5 and i7 K parts do have everything you need.
For my setup I gave the evo/lution Arch Linux installer a try, but it didn't know how to deal with my Intel NVMe SSD. So I ended up having to do all that manually to get through the Arch install. Whatever, no big deal.
The next thing to do in the UEFI is to make sure that you configure the system to boot from onboard graphics. This means that when you turn the computer on, there should be no output from your add-in graphics card (the Strix 390X in this case); instead, you should see the UEFI and boot screen by attaching to the onboard graphics. I found the Skylake HDMI a bit fiddly with the 4k display, so I used DisplayPort for this part. If you aren't running 4k, you aren't likely to encounter an issue there. The other issue you may encounter is that Skylake graphics support is off by default except in kernel 4.3+; you have to enable it on the kernel boot line. Read the rest of this section before proceeding; you may even have to wait to switch the default graphics order to onboard-first until after you set up your kernel boot line to support the Intel Skylake graphics (driver name: i915).
Once you boot up and the boot up screen/UEFI displays from the onboard graphics adapter, the next thing will be to configure Linux not to use the add-in graphics card either. The general way this works is that you blacklist a driver and/or assign an alternative Kernel module (driver) to the device you want to pass through to the VM by PCI ID.
This assumes that you have installed Arch and downloaded QEMU, libvirt and related virtualization packages. Perhaps some kind soul will supply that package list and I'll update this how-to with the commands here.
You can use lspci -tvnn to determine the PCI IDs you want to assign.
If this doesn't work, you will need to blacklist the radeon driver by adding it to the blacklist in /etc/modprobe.d -- this sucks, though, because if you have two Radeon graphics adapters you want to use, you really want to specify which card by PCI ID and bus location. That's doable, but I'm going to leave it out for now. This can always be updated later with help from the community.
```
# lspci -tvnn
-[0000:00]-+-00.0  Intel Corporation Sky Lake Host Bridge/DRAM Registers [8086:191f]
           +-01.0-+-00.0  Advanced Micro Devices, Inc. [AMD/ATI] Hawaii XT [Radeon R9 290X] [1002:67b0]
           |      \-00.1  Advanced Micro Devices, Inc. [AMD/ATI] Device [1002:aac8]
           +-02.0  Intel Corporation Sky Lake Integrated Graphics [8086:1912]
           +-14.0  Intel Corporation Sunrise Point-H USB 3.0 xHCI Controller [8086:a12f]
           +-16.0  Intel Corporation Sunrise Point-H CSME HECI #1 [8086:a13a]
           +-17.0  Intel Corporation Device [8086:a102]
           +-1b.0-----00.0  Intel Corporation PCIe Data Center SSD [8086:0953]
           +-1c.0-----00.0  ASMedia Technology Inc. Device [1b21:1242]
           +-1c.2-[04-0c]----00.0-[05-0c]--+-01.0---
           |                              +-02.0-----00.0  Broadcom Corporation BCM4360 802.11ac Wireless Network Adapter [14e4:43a0]
           |                              +-03.0-----00.0  ASMedia Technology Inc. ASM1062 Serial ATA Controller [1b21:0612]
           |                              +-04.0---
           |                              +-05.0-[0a]--
           |                              +-06.0-[0b]----00.0  Intel Corporation I211 Gigabit Network Connection [8086:1539]
           |                              \-07.0-[0c]--
           +-1c.4-[0d]----00.0  ASMedia Technology Inc. Device [1b21:1242]
           +-1c.6-[0e]----00.0  ASMedia Technology Inc. Device [1b21:1242]
           +-1d.0-[0f]--
           +-1f.0  Intel Corporation Sunrise Point-H LPC Controller [8086:a145]
           +-1f.2  Intel Corporation Sunrise Point-H PMC [8086:a121]
           +-1f.3  Intel Corporation Sunrise Point-H HD Audio [8086:a170]
           +-1f.4  Intel Corporation Sunrise Point-H SMBus [8086:a123]
           \-1f.6  Intel Corporation Ethernet Connection (2) I219-V [8086:15b8]
```
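If you'd rather not eyeball the tree, the `[vendor:device]` pairs follow a fixed four-hex-digit pattern, so a quick grep pulls them out. A sketch (the sample line below mirrors the Radeon entry in the tree above):

```shell
# extract the [vendor:device] ID from a line of lspci -nn style output;
# the bracketed marketing names don't match the xxxx:xxxx hex pattern,
# so only the real ID survives the grep
line='01.0  Advanced Micro Devices, Inc. [AMD/ATI] Hawaii XT [Radeon R9 290X] [1002:67b0]'
echo "$line" | grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]'
# prints: [1002:67b0]
```

On a real system you'd pipe `lspci -nn` straight into the grep instead of the sample line.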
One thing I really like about the Asus Z170 Deluxe is that it has a metric ton of USB 3.1 controllers. This is handy because you can pass through an entire PCI USB 3.1 controller to your virtual machine and have dedicated USB 3.1 ports mapped directly into the virtual machine.
Once you've got your PCI IDs, we need to add them to the boot config and blacklist the radeon driver. I am using systemd-boot on my system, but I figure most people are using GRUB. Fortunately the change is pretty simple -- you just need to add things to your kernel line in either case. Before we blacklist the Radeon driver, we need to make sure that Skylake graphics are working fine. That will take some kernel parameters unless you're rocking kernel 4.3+.
You will also need to add some parameters to your kernel for the IOMMU and to enable Skylake graphics. Weirdly, I had a lot of DRM (render manager, not rights management) errors reported by dmesg until I disabled the IOMMU for Skylake graphics. This should be okay since we don't intend to pass the Skylake graphics through to the VM, but it is something to be aware of.
Here is the kernel line that worked for me:
options root=PARTUUID=(don't change this part) rw i915.preliminary_hw_support=1 intel_iommu=on,igfx_off pcie_acs_override=downstream
I am not entirely sure I need pcie_acs_override=downstream, but I added it anyway.
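For GRUB users, the same options go on the kernel line via /etc/default/grub. A sketch (keep whatever options you already have and just append):

```
# /etc/default/grub -- append the passthrough options to the existing line
GRUB_CMDLINE_LINUX_DEFAULT="quiet i915.preliminary_hw_support=1 intel_iommu=on,igfx_off pcie_acs_override=downstream"

# then regenerate the config so it takes effect on next boot:
#   grub-mkconfig -o /boot/grub/grub.cfg
```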
Once those are set, and you are now booting from onboard graphics from the UEFI to the login prompt, we'll want to disable the Radeon driver:
echo "blacklist radeon" >> /etc/modprobe.d/blacklist.conf
Finally, we need to assign those PCI IDs to be used by the VFIO driver (instead of the now-blacklisted Radeon driver). You'll want to pass through the PCI IDs of both the Radeon graphics adapter and its HDMI audio.
The contents of my /etc/vfio-pci.cfg is
This corresponded to the 390X and its audio component in my system. Yours may vary.
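As a sketch of the format (these addresses come from the lspci tree earlier in this post and assume the vfio-bind helper script from the reference links -- yours may differ), the file holds one full PCI address per device:

```
# /etc/vfio-pci.cfg -- one PCI bus address per line:
# the GPU function and its HDMI audio function
0000:01:00.0
0000:01:00.1
```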
Now, you'll have to reboot once again and you can use lspci to verify that the vfio driver is attached to your add-in graphics card. If that's the case, that's great news.
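One quick way to check (the bus address 01:00.0 comes from my system; substitute your card's):

```
# which kernel driver is bound to the card?
lspci -k -s 01:00.0
# you want to see a line like:  Kernel driver in use: vfio-pci
```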
The next piece of the puzzle is that we want to boot our virtual machine in UEFI mode. QEMU/KVM doesn't do that out of the box with SeaBIOS, so we need to get a UEFI firmware image. Fortunately the Fedora folks have put together an awesome UEFI (OVMF). You'll want to grab and install it from here:
There are two UEFI files -- one that is the actual UEFI firmware, and one that stores UEFI variables. Copy the variable-store file to your home folder and use it from there.
Note: You also have to reconfigure some permissions so QEMU can access things, or run QEMU as root. Starting your QEMU VM as root is not ideal, but it may be helpful for testing. I intend to expand this section, or perhaps the community can help out here. It is the normal QEMU/KVM permissions setup, but it should be documented here. TODO.
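A sketch of the usual non-root approach (group name and paths are assumptions here -- distros differ, see the reference links): put your user in the kvm group and open the vfio device nodes up to that group via a udev rule:

```
# add your user to the kvm group (log out and back in afterwards)
usermod -aG kvm yourusername

# /etc/udev/rules.d/10-vfio.rules -- let the kvm group access vfio devices
SUBSYSTEM=="vfio", OWNER="root", GROUP="kvm"
```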
Once you've got it installed, you can try to boot up a test virtual machine. I did not use libvirt, an awesome interface for qemu, but you should, because it simplifies networking. Here is my script:
```
qemu-system-x86_64 \
    -serial none \
    -parallel none \
    -nodefaults \
    -nodefconfig \
    -enable-kvm \
    -name Windows \
    -cpu host,kvm=off,check \
    -smp sockets=1,cores=2,threads=1 \
    -m 8192 \
    -device ich9-usb-uhci3,id=uhci \
    -device usb-ehci,id=ehci \
    -device nec-usb-xhci,id=xhci \
    -rtc base=localtime \
    -nographic \
    -netdev tap,id=t0,ifname=tap0,script=no,downscript=no \
    -device e1000,netdev=t0,id=nic0 \
    -device vfio-pci,host=01:00.0,multifunction=on \
    -device vfio-pci,host=01:00.1 \
    -vga none \
    -device usb-host,bus=ehci.0,hostbus=5,hostport=1 \
    -device usb-host,bus=ehci.0,hostbus=5,hostport=1.1 \
    -device usb-host,bus=ehci.0,hostbus=5,hostport=1.2 \
    -device usb-host,bus=ehci.0,hostbus=5,hostport=1.* \
    -drive if=pflash,format=raw,readonly,file=/usr/share/ovmf/x64/ovmf_code_x64.bin \
    -drive if=pflash,format=raw,file=./Windows_ovmf_vars_x64.bin \
    -hda ./win10.qcow2
```
Note the OVMF code files for the UEFI. More information about that can be found in the reference links, but I'd like this how-to to be comprehensive. Will update this section later. Or perhaps the community can help out. TODO.
You can simplify the above qemu command down to just the if=pflash drive options for a bare UEFI test. With that, when you start qemu you should see your graphics card flicker to life and display the UEFI startup screen from the QEMU KVM.
If that worked, from here it is just a matter of putting in an OS Install USB and passing it through to the VM so it can boot off of it. You'll also have to create a permanent storage file for the "hard drive" in the VM or you can pass through a separate physical storage device.
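Creating that storage file is a single qemu-img command (the name and size here are just examples matching the script above):

```
# create a sparse 100G qcow2 image to act as the VM's "hard drive"
qemu-img create -f qcow2 win10.qcow2 100G
```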
This config passes through 8GB (8192MB) of memory and 2 of the Skylake CPU cores.
Benchmarks of this are coming but so far Gaming speed is 92-101% of native windows speed on the Z170 Deluxe with the 390X.
This is a work in progress. Mods, feel free to fix typos and other corrections. This will probably be turned into a wiki post at some point and I would love to improve this with feedback from the community. This is very much a 0.01 release. I will link the videos here when they are up shortly.
Damn, why did I get the 3570K? I don't even overclock. Definitely not going to be able to do this for my needs (Video work and gaming) until I upgrade / build a new rig.
What great timing, just happened to be browsing the forum about playing games on linux.
I'm running Linux on my laptop just to get used to it, once I feel versed enough in it I'm gonna go for the switch over to Linux on my main rig. I'm so glad this channel came out to help me through all this, thanks guys!
Ubuntu 15.10 is based on 4.1 kernel, just a heads up
For people who do not know how to install Arch, try Manjaro. I will try Wendell's method with Manjaro, see if it works the same, and then report back.
Awesome. you got me really excited for this :)
You talked a bit about compatibility with older hardware - can I run you by my system before getting on with this so I'm not too disappointed later? The thing I'm most concerned about is the graphics card, it's a little old cause I built this system on a budget. The only things I really need to virtualise are CAD, 3D modelling, and rendering packages, along with the Creative Suite, so this is kinda important.
ASUS Gene VII
Intel 4790K @4.7GHz stable overclock
16GB Mushkin ram
Gigabyte windforce GTX660
Various SSDs and HDDs, there's a 120GB Samsung drive with OS X sitting on it that I rarely use, so is prime for Linux testing.
All together inside a custom built case
Too bad you can't give the entire CPU to the VM, my old i5-2500 does allow virtualization but it was already really close to maxing out when I played GTA5 on Windows.
Oh well, I wasn't planning on using Arch anyway, still far too happy with Mint. Still thanks for the tutorial though, at least it shows that there is a way. Perhaps once someone has it running on Ubuntu, I'll consider upgrading the PC and installing the game on a VM.
nVidia cards can work as long as you are using a Titan card, or if you modify your GPU to identify to the VM as its professional counterpart: http://www.eevblog.com/forum/chat/hacking-nvidia-cards-into-their-professional-counterparts/ is the guide on how to do so. I've modified my spare 680 to a K10 and it works well.
My dream is to have a linux distro with the option to easily install windows or other os and pass through certain hardware to each vm, be able have it run in seamless mode or like wine where the apps appear like native linux apps in menus / on the desktop. Also good file system integration would be amazing, so a folder in the vm is mapped to a linux folder - like mounting with sshfs and you can get at all your files from linux.
So this isn't possible with an nVidia card, since the driver shuts down when it detects it's in a VM? (GTX 970 in my case)
Unless you disguise it as a Quadro card; if you do that, it'll pass through.
It is possible to do so with any nvidia card. The trick is to disable virtualization extensions in the shell script or .XML file for your VM. This will not let the nvidia card know that it is in a VM and will prevent the error it gets when starting the driver.
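If you're driving the VM with libvirt rather than a raw qemu script, the equivalent of the `kvm=off` trick in Wendell's command line is the kvm "hidden" feature in the domain XML (element names per libvirt's domain XML format) -- a sketch:

```xml
<!-- inside the <domain> definition: hide the KVM hypervisor signature
     so the Nvidia guest driver doesn't refuse to start -->
<features>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>
```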
There is a group called "kvm" or "virt" or "virtualization" in most RPM-distros. In Arch, you have to create that group I think (I didn't check though). Then you arrange the privileges for kvm/qemu through that group, instead of su-ing it.
The new version of virt-manager (that just came out) makes it a tad easier to bind devices.
As I pointed out a few times already, I used this system to run Windows for a few years, but to be honest, I don't do it any more, because I like to change hardware more often nowadays, and it's always a custom job; there is no way to give a how-to that will even work for 50% of users.

For a year or so now, I have used Windows for what it is: a console, like an Xbox, a game server, or a dedicated entertainment software server. I'm even moving away from that now, as the services that I liked to run with DRM are getting so evil in terms of ToS that I'm banning them from my house (e.g. Spotify, the new Office 365 with extra Volometrics software extensions, Adobe Cloud, etc...). Basically the only thing I still need Windows for is playing CS:GO in a competitive environment, and I have a dedicated machine for that -- but then I have always had that for CS, because it kinda comes with the territory: a dedicated lean and mean machine that does nothing but CS with the lowest latency and highest fps, to a point that is unrealistic for anything else.

A lot of things have changed in the recent past. Media now actually plays better and with less hassle on Linux than on Windows, especially now that DVD and BR playback in Windows is optional, and there is double DRM that doesn't play nice with more Apple/Linux-capable solutions like Bose SoundTouch, Onkyo AV receivers, and other network-centric AV solutions like Plex. Windows has become second choice in those fields, because it doesn't provide the best basic functionality and because it messes with the media data to the point of risking normal use of those files.
A lot of software that I used to use in Windows, like Adobe Photoshop and Adobe Lightroom, has not evolved and is full of bugs (Photoshop isn't as useful when it can't even render the brush in all sizes and positions on the working surface, and Lightroom after version 3.6 has been nerve-wrackingly slow, and both really suffer from the lack of modern RAW processing in ACR, especially with non-Bayer sensors). From a purely functional perspective, Windows has become second choice even for Microsoft software, for instance MS Office, which has not evolved at all beyond Office 2007 and is not even equipped with a decent touch interface at this point. And if you're going to use an office suite that doesn't have a decent touch interface anyway, you might as well use an office suite that uses OpenCL spreadsheet acceleration, overall works much faster, and is compatible with more file formats, without sacrificing functionality.
The world has changed a great deal in the software landscape over the last couple of years. Linux has grown a lot in user-centric functionality, the software ecosystem is larger than ever, and the pile of advantages over non-open-source solutions has become so big that it has thrown a deep dark shadow over the non-open-source solutions.
Anyone know what the test-bench type platform wendell builds on in "Wendell's Skylake PC Build - i7 6700k"?
Finally! Thanks for the video @wendell, been waiting a while for it. I did have this 90% working on my ol' socket 2011 rig but messed it up. I used Arch with virt-manager and Windows 7 for the VM, then went ahead and modified the XML file for the VM to use UEFI by accident (silly me!). After that the VM wouldn't boot and I couldn't even manage to edit the XML file with virsh -- ARGH. I'll probably take another shot at it soon.
```
<timer name='hypervclock' present='yes'/>
<spinlocks state='on' retries='8191'/>
```
From your XML.
Afterwards, reinstall the Nvidia driver.
There are brackets but Tek Syndicate makes the text invisible with them.
@wendell can you test this? I don't have a card to do so otherwise I would. I just scraped together what I could find.
Nice. Finally something worthwhile from the TekLinux channel.