Windows virtualisation for gaming on an Ubuntu host system

After being completely fed up with Microsoft's shit recently, I want to take the step of switching to Linux as my main OS.
Using VirtualBox on Windows, I am currently in the process of selecting, testing and discovering new programs that will replace my old Windows ones. (Feel free to suggest any interesting ones.)

As I am currently typing this on an Ubuntu GNOME 16.04 beta virtual machine, I have decided to use this distribution as the basis of this interesting journey...

But now for the fun and interesting part, where I could use some help:

As I own lots of Windows-exclusive Steam (and Origin) games, I still want to be able to play them while running Linux all the time.

My idea would be to use KVM to run a "small" Windows instance with 8 GB of RAM (out of 16 GB) and 4 cores (i7, so 8 "cores"), with a dedicated 128 GB SSD with low-level access to it, plus a 1 TB share on a (future) NAS or a dedicated drive. The graphics would be handled by an "old" GTX 660 Ti, passed through by KVM to allow low-level access.

The VM would need to handle gaming (BF3/4, Arma 3, PCars, etc.) and CAD work (and probably a bit of rendering) done in Autodesk Fusion 360.

Additionally I would like to have proper copy/paste functionality between host and guest system, similar to the way VirtualBox handles it.

By the time my setup changes to this proposed configuration, I'll be running one 21:9 1440p, two 16:9 1440p and one crappy 16:9 1080p screen, so I hope that any multi-screen work will still be possible while running the VM.

Is this feasible? And if so, any recommendations regarding this project?

I hope this was somewhat bearable to read, as my English probably isn't the best out here.


First off, I would like to say welcome to Linux! I would like to warn you that the journey ahead of you will sometimes cause you to stay up for hours on end in the middle of the night trying to figure out what the hell just broke. I would also like to say that it has been the most rewarding experience of my life, as I have learned so much from my trials and issues in Linux.

I personally use Ubuntu 16.04, because it is an easy transition into the rest of the community. Canonical (the company behind Ubuntu) has been doing a great job securing agreements with companies to get OEM (Dell, IBM, HP) driver support out of the box, as well as getting support from AMD and Nvidia to get graphics card drivers written for Debian-like systems. AMD does not have the greatest driver, they just flat out don't, but Nvidia has proprietary drivers that will yield almost the same performance as you can see on a Windows machine.

I encourage you to play around with everything during your stay in Linux, and if you come across a bug, please report it so that we can fix it and help make the software better for everyone to share in :). I personally use a Windows 10 VM that has 4 cores (i7-6700) and 16 GB (of 32 GB) of RAM, so that I can run multiple VMs at the same time in order to stress test systems and see how virtualization works.

Wendell (sp?), bless his soul, has made a video on this:

I plan on working with a group of 14 other CS majors at my college to turn the wrapper he talked about into a real thing, so that Linux users can game comfortably. We are discussing making it work so that when you run a game, it launches in a small Windows instance fixed to just that game; imagine a Windows system that is just full screen, with all of the non-gaming-mandatory services in that VM killed.


I actually can't believe that the only video from teklinux I didn't watch was this one...

Sadly the video and the forum thread mention a few problems:

1st:

"A UEFI-based graphics card. For our tests we'll be using the Asus Strix
390X. I can't recommend nVidia here because if their graphics driver
detects that it is running in a virtual machine, it shuts down. This is
old news and has been extensively documented, but it has not been fixed
by nVidia. This is bad behavior and you should vote with your wallet." (From the forum thread of this video)

Well... Shit?

A bit of googling revealed that this might be possible though:

https://bufferoverflow.io/gpu-passthrough/ (Based on Arch)

So I hope that incorporating that tutorial will somewhat help me with that issue...
(It works for you, so why wouldn't it work for me as well?)
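For what it's worth, guides like that one usually get around the Nvidia driver's VM check by hiding the hypervisor from the guest. With libvirt, that is a small addition to the domain XML; this is a sketch of the commonly used fragment (verify the exact element placement against your libvirt version's documentation):

```xml
<features>
  <acpi/>
  <apic/>
  <kvm>
    <!-- Hide the KVM signature so the Nvidia guest driver
         does not detect that it is running in a VM -->
    <hidden state='on'/>
  </kvm>
</features>
```

With plain QEMU, the equivalent is passing `kvm=off` in the `-cpu` option (e.g. `-cpu host,kvm=off`).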

2nd:

The VM seems to output only on the monitors connected to the graphics card, which requires some workaround to get keyboard and mouse working. (I hope I got that right.) Isn't it possible to control the VM simply by using the remote access thingy provided by my VM control software (Virtual Machine Manager)? Or would that cause way more latency than the method described in the video?

You can use Synergy to share a mouse and keyboard between different machines. There might be better ways of doing it, but that's what I use (although not for VMs).

But yeah, you have to use the video output from the graphics card; either that or something like Steam streaming. VNC, remote desktop, etc. won't cut it for games (I'm not even sure they work at all with 3D graphics).
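If you end up on plain QEMU anyway, there is also a way to share the physical keyboard and mouse without Synergy: newer QEMU builds (2.6 and later, if I remember right) can grab evdev devices directly and toggle them between host and guest by pressing both Ctrl keys together. A sketch, with placeholder device paths (pick your real ones from /dev/input/by-id/):

```shell
# Fragment of a qemu-system-x86_64 command line (QEMU 2.6+).
# Pressing Left-Ctrl + Right-Ctrl switches the devices between host and guest.
# The by-id paths below are placeholders; substitute your own devices.
qemu-system-x86_64 \
    ... \
    -object input-linux,id=kbd,evdev=/dev/input/by-id/usb-YOUR_KEYBOARD-event-kbd,grab_all=on,repeat=on \
    -object input-linux,id=mouse,evdev=/dev/input/by-id/usb-YOUR_MOUSE-event-mouse
```

The upside over USB passthrough is that you keep the devices usable on the host when the guest doesn't have them grabbed.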

I'm sorry, but I take issue with your statement: Canonical is behind Ubuntu, not Linux, and that's a huge difference. And, to be fair, Ubuntu has become lackluster in the stability department...

My apologies, I mistyped. No one group is behind GNU/Linux; it was started by Linus Torvalds and grew into what we have today. And yes, using Ubuntu is a grab bag for everyone; I am simply using it because it is an easy distro for people to move into, and I plan on porting this tool to other distros as well.

So far I have set up QEMU and I am testing with Windows XP on 4 different systems for maximum compatibility: an i5-6600 with 32 GB of DDR4 RAM and an MSI M3 Gaming motherboard; an HP Compaq with a proprietary mobo and an Intel Core 2 Duo E6550, and the same system again with an E8400; and an Intel i3-4130 with an ASUS B85M-G mobo and 8 GB of RAM. I am doing this to sample a rough cross-section of the user space. I plan on getting hold of an FX-4300, FX-6300 and FX-8350 soon to do some further testing for both teams.

I am currently using an Nvidia GTX 980 across all systems, as well as an Nvidia GeForce 7800 GS to see whether this is possible on extremely old cards. I am currently using Windows XP to test whether a legacy OS is able to function properly, and then I plan to move up to Windows 7 (Vista is dead). Then I plan on moving to Windows 10, because a majority of people are moving away from Windows 8.1.

To address the earlier question about using two outputs for the system: yes, in my testing right now I have to use two inputs and feed them into a simple HDMI switch for one of my monitors. I have the LG 25UM57, which allows me to split the screen in half down the middle to display both inputs at once.

I want to try to find a solution to nest the VM window inside the host window, while using the iGPU to drive the host OS and the video card for the guest. This will most likely add more latency to the host OS while running the guest, as more system calls will need to be made to have the iGPU process the images and then send the result over PCI to the video card to output the image. I don't think that would work properly, as the physical card will be passed to the VM. I want to find a solution to this issue, and I am open to any ideas, as I will test all of them and provide documentation on my findings, which will be attached to the Git page when the software begins development beyond in-depth testing.

I am pretty much guessing right now, but couldn't you use a low-latency capture card (point of failure right here) running a preview window in Linux?

It's funny you should post this; I am also working on getting Windows running as a VM on a host Ubuntu system. I was able to convert my physical Windows install to a VM image.

I used this guide; even though the pictures and some of the wording are old, you can still follow along just fine.

Physical to Virtual Machine by VMware vCenter Converter

This will make a VMware Workstation 11/12 image, which runs just fine once loaded into VirtualBox. This will invalidate your Windows activation; just call the activation number to get it working again (that's what I did).
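As a side note, if the converted image ever needs to move from VirtualBox to QEMU/KVM, qemu-img can convert the VMware disk format directly (the file names here are examples, not from the guide):

```shell
# Convert a VMware disk image (vmdk) to qcow2 for use with QEMU/KVM.
# File names are examples; substitute your own image.
qemu-img convert -f vmdk -O qcow2 windows.vmdk windows.qcow2

# Inspect the result (format, virtual size, allocated size)
qemu-img info windows.qcow2
```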

Things I haven't got working (or tried yet) are:
1. GPU passthrough
2. Multiple monitors to work full screen via the VM
3. My other HDDs, encrypted with Ubuntu
4. Haven't tested whether my 144 Hz monitor will run at 144 Hz via a VM

The point of what I am doing is to run a Windows VM within Ubuntu, but give Windows as much of my PC's power as I can. This is due to gaming; until I can play 100% of the games I play on Linux without issues (or at all), I am stuck with Windows. So when doing non-gaming-related things I plan on using Ubuntu, but when I want to game I'll fire up the VM and go ham.

I am still in the testing phase of this; if I cannot manage to get my 980 Ti to pass through, then I won't do this at all at home. However, at work, where I don't need a GPU, I might still do it. If nothing else, messing around with this has been very fun and a learning experience. One of my coworkers thinks that IF I get this working, I should do a long write-up on exactly how I managed it.

Of course after re-reading the OP, I don't think what I said had anything to do with it. But I already wrote this all, so I am not deleting it now.

EDIT: I think badges are working again, I got a badge for posting that link URL.

This pretty much sounds like something I want to do, but with a fresh install of Windows.

Feel free to post any further developments of your project, as it would definitely help me (and probably other users as well).

Well, if you want to do a fresh install then you have it even easier. Just set up a new VM, install Windows on it (the virtual disk), and allocate how much RAM and how many CPU cores you want to give it.

And as I make progress (should any be made), I'll either post it here or make a new post.

VirtualBox is no good for Windows gaming on Linux.

You can pass a GPU through to a VM with QEMU, and that works now, but you have to manage the display between the base Linux display and the VM's output on the GPU.

There is a software solution, which is Virgil 3D. It's still alpha, but I hold out hope that it will become a thing.


You mentioned low-level access to the disks... If your motherboard has more than one disk controller (mine has), what I did was pass through the controller and just connect the storage I wanted for the VM to it. This avoids a lot of the overhead, as the VM is basically accessing the disks as it would without any kind of virtualization. You just have to make sure that the controller is in an isolated IOMMU group, or in the same group the GPU is in.
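To check that grouping before committing to the passthrough, a quick loop over sysfs is enough; this only reads /sys, so it is safe to run as-is (it prints nothing if the IOMMU is not enabled):

```shell
#!/bin/sh
# List every IOMMU group and the PCI devices it contains.
# Requires the IOMMU to be enabled (intel_iommu=on / amd_iommu=on on the kernel command line).
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    [ -e "$dev" ] || continue          # skip if the glob matched nothing
    group=${dev%/devices/*}            # strip the trailing /devices/<addr>
    group=${group##*/}                 # keep only the group number
    printf 'IOMMU group %s: %s\n' "$group" "$(basename "$dev")"
done
```

If the controller (or GPU) shares a group with devices you are not passing through, you will need to move it to another slot or look into the ACS override patch.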
Also, for performance, it would help if you pinned the vCPUs to the specific cores you want to use (be careful with hyperthreading) and allowed the emulator threads (the ones that "emulate" the virtual hardware, like the NIC for example) to use the rest of the available cores. I am using libvirt in my setup:
<vcpu placement='static'>3</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='3'/>
  <vcpupin vcpu='1' cpuset='0'/>
  <vcpupin vcpu='2' cpuset='1-2'/>
  <emulatorpin cpuset='0'/>
  <vcpusched vcpus='2' scheduler='fifo' priority='99'/>
</cputune>

The value you put in cpuset represents the physical core, and the vcpu value is (obviously) the "virtual CPU" the guest sees.
In your case, the emulatorpin would cover the range of cores you did not dedicate to the VM, something like <emulatorpin cpuset='0-3'/> if you assigned the last four cores to the VM.
You can skip the vcpusched part if you add something like isolcpus="3,0" to grub's kernel arguments.
What that does is isolate the specified cores from the rest of the system's processes. So if you pin the range of cores you want in the libvirt XML file and add those cores to the isolcpus list, you get a system with minimal latency: you can be stressing the host (doing something like compiling the kernel) and gaming in the VM at the same time without noticing performance drops in either.
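In practice, the isolcpus change goes into /etc/default/grub; a sketch (the core numbers are examples, match them to your own pinning):

```shell
# /etc/default/grub -- isolate cores 0 and 3 from the host scheduler
# so only the pinned vCPU threads run on them.
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash isolcpus=0,3"

# Afterwards, regenerate the grub config and reboot:
#   sudo update-grub
```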
Also, I am not taking hyperthreading into account here, but the Arch wiki has some good info on that (as always ;D).


Aaaand my SSD just died, so I guess this project has to become reality now..

850 PRO 250 GB vs. 850 EVO 500 GB? Both are about the same price. (Speed doesn't really matter that much; reliability does.)

I'll update this thread (finally) as promised to report any progress.

If you want a more comprehensive guide, a few members of the community and I put together this one, which helps explain things: https://tekwiki.beylix.co.uk/index.php/VGA_Passthrough_with_UEFI%2BVirt-Manager


I am running Windows with a GTX 980 at the moment, so Nvidia works just great. Tested with GTA V and The Witcher 3, so no problems there. Ubuntu might not be the best choice for this; I would recommend Antergos (based on Arch) for you. The Arch documentation is really good. This might help: PCI passthrough via OVMF
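The first step in that guide is binding the guest GPU to vfio-pci at boot, so the host driver never claims it. A sketch of the Debian/Ubuntu flavour of that step; the vendor:device IDs are placeholders you must replace with the output of lspci for your own card:

```shell
# Find the GPU's vendor:device IDs; pass BOTH functions
# (the VGA controller and its HDMI audio device):
lspci -nn | grep -i nvidia

# /etc/modprobe.d/vfio.conf -- the IDs here are placeholders:
#   options vfio-pci ids=10de:xxxx,10de:yyyy

# Ensure vfio-pci is loaded early, then rebuild the initramfs
# (on Arch-based distros this is done via mkinitcpio instead):
#   echo vfio-pci | sudo tee -a /etc/initramfs-tools/modules
#   sudo update-initramfs -u
```

After a reboot, `lspci -k` should show `vfio-pci` as the kernel driver in use for the card.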

I am doing this without libvirt, so here is the script in case you are interested:
#!/bin/bash
export QEMU_AUDIO_DRV="pa"
qemu-system-x86_64 \
    -serial none \
    -parallel none \
    -nodefaults \
    -nodefconfig \
    -enable-kvm \
    -name Windows \
    -cpu host,kvm=off,check \
    -smp sockets=1,cores=4,threads=2 \
    -m 8192 \
    -soundhw hda \
    -device usb-ehci,id=ehci \
    -device nec-usb-xhci,id=xhci \
    -drive if=pflash,format=raw,readonly,file=/usr/share/ovmf/x64/ovmf_x64.bin \
    -device virtio-scsi-pci,id=scsi \
    -drive file=/dev/sdc,id=disk,format=raw,if=none \
    -device scsi-hd,drive=disk \
    -boot order=c \
    -rtc base=localtime \
    -nographic \
    -device vfio-pci,host=01:00.0,multifunction=on \
    -device vfio-pci,host=01:00.1 \
    -net nic -net user \
    -usb -usbdevice host:045e:028e

Having tested passing hardware through to a VM, it is not really worth it. Recently I spent a long stretch growing gray hairs over problems with drivers and whatnot. I did get the VM working, "somewhat", using QEMU; you won't get VirtualBox working with passthrough.
The CPU cores you use will experience significant overhead, so don't expect 1:1 CPU performance. And expect glitches in all the hardware you run emulated, e.g. keyboard, mouse, network card, etc.
After long hours I finally managed to pass my AMD APU's graphics hardware through to the VM, and it worked, actually quite well considering it was virtualized. BUT the caveat was fighting glitches and degraded performance. Eventually I just went with a dual-boot solution (which took me a whole 2 days to decide on): if I want to game, I reboot into native Windows 10, and for more or less anything else I boot Linux.
IMO, gaming through a VM is extremely nice if you have a massively specced computer with loads of cores to spare and you need to run multiple users on the same machine. Otherwise, just stick with dual boot.


Regarding the keyboard/mouse/copy/paste issue:

(I'd be willing to pay 10€ without hesitation if I'm getting a somewhat seamless experience)


Well, I am now thinking about buying an i7 3770 with 16GB RAM from a friend and using it as a Windows box..

I'm going to give you the other side of the coin from @Lauritzen. While I agree that Ubuntu isn't what I'd recommend for a host system either (it just lacks the support that other distros have), I've been running a KVM setup on Fedora for almost a year now, and while it's not been without its hurdles for me, it has been well worth the effort.

My KVM runs on an all-AMD system (8370 and R9 270s) and I run Win 7 as the guest system. Hardware passthrough has allowed me to use Linux as my daily driver and yet keep a Windows virtual machine that is capable of running most anything I like. While I have had a few issues, none have been host-system related, but rather driver related in the guest; most of this comes down to the AMD GPU and its known driver issues (Catalyst and Crimson).

But... like I said, for me it was well worth the effort. I have a very stable system (both host and guest), I have no heating problems (the CPU is water cooled), and I can run most any game (I finished Fallout 4 on this system). Don't get me wrong, for a lot of people dual booting is the way to go; it just wasn't how I wanted to roll... lol.

The only thing I'd like to say is that hardware passthrough isn't something you just do on a whim. It takes lots of hardware and resources to run two computer systems in one box, which is in essence what you are trying to accomplish. It really needs to be something you plan out when you're building a new system, because trying to cobble together enough hardware leads to compromises that will cause you to fail, or to end up with an unstable guest that is basically unusable for its intended purpose.

Hope this helps.

Maybe this will help. http://ubuntuforums.org/showthread.php?t=2266916

After @Lauritzen mentioned the troubles he had, I gave up... (for now at least)

I just bought an AMD Phenom X4 955 BE, a decent ASRock board, 6 GB of DDR3 (going to bump it to 12 GB), a decent power supply, and an old HD 5870 or something, which I swapped for my 660 Ti... all that for 170€ (excluding my 660 Ti, of course). When my main rig is up and running again, this will be my Linux/productivity machine, and my main rig will be reserved for gaming and CAD.