GTA V on Linux (Skylake Build + Hardware VM Passthrough)

Here are a few tips for the keyboard and mouse when doing something like this.

Invert the way you use Synergy. The problem is that if you game in the Windows VM, you get mouse latency, which is neither good nor pleasant. Your Linux machine, however, is probably used for things that are less sensitive to mouse latency. So a good way to avoid this problem is to make the Windows VM the Synergy server at boot and pass the mouse and keyboard USB ports through to the VM. When the VM is done booting, you get your mouse on both OSes: Windows gets the native mouse and keyboard, and Linux gets the Synergy-client ones.

Here is the script I am using to do this:

# Start trying to connect to the Synergy server on Windows;
# synergyc forks to the background and keeps retrying until it connects.
synergyc windows1-vm

# Start the VM; use whatever your command is here. Note the -usbdevice
# entries, which are my mouse and keyboard device IDs (from lsusb).
QEMU_ALSA_DAC_BUFFER_SIZE=512 ... bla bla bla \
-usb -usbdevice host:046d:c537 -usbdevice host:2516:0004

# Kill synergyc when the VM stops (i.e. when the qemu command returns)
pkill synergyc

Set up this way, you have native mouse and keyboard on Linux until you start your Windows VM; after that, input goes through the Synergy client/server. When you are done gaming and shut the VM down, the mouse and keyboard come back to Linux.


I've had 99% of this working for a while now, but I can't get the drivers for my R9 390 installed in Windows. Every version of Windows I've tried crashed during driver installation.

I made a post about this on the Arch Linux forums, but nobody there seemed to be able to help:
https://bbs.archlinux.org/viewtopic.php?id=201054

Maybe you guys can help me out with that.

Great! I will most likely actually try to do this in the coming week.

I currently use a Xeon E3 1231v3 with an ASUS P9D WS motherboard and a Sapphire R9 380 Nitro. Will I have issues with the Xeon not having on-board graphics? I currently run Ubuntu 15.04, but might make a switch to Arch or Mint (I don't know which kernel Mint is on currently).

https://bbs.archlinux.org/viewtopic.php?id=162768 suggests that you just need to use kvm=off to disguise being a virtual machine ;-)
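On the QEMU command line that trick corresponds to a CPU flag; a minimal sketch (the rest of the invocation is elided):

```shell
# kvm=off hides the KVM hypervisor signature from the guest,
# so the Nvidia driver doesn't refuse to load inside the VM
qemu-system-x86_64 -enable-kvm -cpu host,kvm=off ...
```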

Edit:
Eh, apparently the forum thread is outdated (big red caps at the top, which I ignored xD) and you need to do a bit more work now, nicely documented here:
http://vfio.blogspot.de/

EDIT2:
The specific part is here:

The GeForce card is nearly as easy, but we first need to work around some of the roadblocks Nvidia has put in place to prevent you from using the hardware you've purchased in the way that you desire (and by my reading conforms to the EULA for their software, but IANAL). For this step we again need to run virsh edit on the VM. Within the <features> section, remove everything between the <hyperv> tags, including the tags themselves. In their place add the following tags:

<kvm>
  <hidden state='on'/>
</kvm>

Additionally, within the <clock> tag, find the timer named hypervclock and remove the line containing this tag completely. Save and exit the edit session.
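For a libvirt VM, the quoted edit would end up looking roughly like this (the domain name win10 is hypothetical):

```shell
virsh edit win10

# In the editor, the <features> section should end up containing
# (with any <hyperv>...</hyperv> block removed):
#   <features>
#     <acpi/>
#     <apic/>
#     <kvm>
#       <hidden state='on'/>
#     </kvm>
#   </features>
#
# And in the <clock> section, delete the line:
#   <timer name='hypervclock' present='yes'/>
```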

Suppose so, afaik you at least need 2 GPUs.

This is also my problem, as I bought a Xeon for the money. I discovered HW passthrough through this topic on the bbs quite early.
Sadly I just don't have the money for a CPU with an HD GPU plus all the rest of the components needed to make that work properly.

Also @wendell, isn't using the HD GPU on Linux (i.e. the one not passed through) quite a performance hit for playing Linux-native games? Afaik you cannot just 'turn off' the hardware passthrough without rebooting. Nowadays I'm able to play most of my games on Linux (pretty much only waiting for Payday 2, which some say will release this month!), so this would be pretty sad, as rebooting into Windows doesn't really take much longer than rebooting my Linux. There's just the point left that you don't wanna run it on bare metal.
Or is this possible when one is using the open-source driver?
It should definitely be possible in GNU/Hurd tho :P

The best thing would be to have two GPUs (not SLI), one passed through and one for Linux :D

For now I just stick to occasionally rebooting into Windoze but I look forward to being able to ditch it completely (and not 'only' 90% of the time xD)

That sucks. I might pick up another 380 for CrossFire, but until then I guess I'll have to reboot to play The Witcher 3.

Thanks for the tutorial.

I have been using Linux as the main operating system on my main machine, a T420s, for almost 3 years. And to be honest, as a developer I'm really pleased with Linux and can't imagine going back to Windows.

Just started building a Skylake ITX box. Bought an Intel i7-6700K and a Radeon R9 380. I was planning to install Windows 10 on a separate SSD, because I haven't played any demanding games in a few years, since my laptop only has integrated graphics.

This could save me from the dual-booting headache. In the coming weeks, when I get my hands on a Z170 ITX motherboard, I'll definitely try this.

Is it possible to pull this off without UEFI support on my graphics card? If so, a section on non-UEFI cards would be nice.

Additionally, any ideas on how to prevent the radeon driver from claiming all the cards in your system on non-dracut distros? This would be useful to those of us running all-AMD systems.

It runs on Ubuntu.

Yes, it's certainly possible. The method Wendell used modifies the process to allow for UEFI. Natively, QEMU runs its own emulation under BIOS (which Wendell has disabled in his QEMU script). There are guides everywhere for this. What makes this tutorial special is that it's based on newer hardware and requires certain hoop-jumping to get Intel on-board graphics to work without going all screwy once you pass the GPU through to the VM (I don't believe this was mentioned anywhere). This is another reason why UEFI is essential in this case.


Why bother worrying about 3D performance on the Linux machine if you're passing through to a VM that can play everything? The point of this is to use Windows to game and Linux to be productive. If for some reason you need high 3D performance on Linux as well as Windows, then running 2 GPUs is likely the way to go.

Are you able to post your QEMU config?

Just wish this were possible on single GPU. Darn you, FX 6300.

@wendell What if I'm running a dual-Xeon setup (X79, Z9PE-D8 WS) with two R9 290s and no on-board Intel graphics,
where Linux should have one card and a Windows VM the other?
The computer has full VT support (VT-x, VT-d, I/O and whatever), but I would need to stop Linux from initializing the fglrx driver for one of the cards so that it could be passed through to the Windows VM.

I tried to get the "seamless" Windows integration through VirtualBox to work like you showed in that video, but couldn't figure out how to initialize only one card in Linux...
So I gave up on it, but it would be nice if it were possible. I did learn how to set up and use Slackware and Red Hat in my teens, but then forgot all about Linux for 15 years, and now I'm worse than useless!

@TheBibliofilus You can specify the PCI IDs for pci-stub to grab at boot time, passed to the kernel. In Arch this requires the pci-stub package; in other distros it may be called something else. It is the same method of grabbing the PCI ID as in the passthrough file, but it goes on the boot line of the bootloader instead, just like IOMMU. More info is available on the Arch wiki; search for PCI-stub.
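As a sketch on a GRUB-based distro, the boot line might look like this. The vendor:device pairs below are examples only; substitute the output of lspci -nn for the card you want to hide from the host:

```shell
# /etc/default/grub -- example IDs for an R9 290 GPU and its HDMI audio
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on pci-stub.ids=1002:67b1,1002:aac8"

# Then regenerate the GRUB config and reboot:
sudo update-grub                              # Debian/Ubuntu
# sudo grub-mkconfig -o /boot/grub/grub.cfg   # Arch
```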

I would love to be able to do this, but I'm running a 4770k on my desktop and a 3630QM on my laptop. They both have Nvidia GPUs (780Ti and dual 650Ms) so they may not be the most ideal anyway.

I can't see anything wrong with it. When I was using libvirt I wasn't getting blue screens, but I was getting a complete lock-up of the VM at the same point as you. The Windows machine was picking up a different driver too; I'm guessing that's the issue.

At that point, I ditched libvirt and went with a QEMU script set-up identical to Wendell's. Now I can't get my boot device to show up... one thing after another...

Sorry I can't be any help. I would recommend creating a thread in the Linux OS forum, as you're likely to get more help there.

You'll need to blacklist the GPU by its hardware ID (lspci -nn) through /etc/initramfs-tools/modules so it doesn't start up on boot. When you run your VM, you call on a config that you create to take over the hardware and pass it through to your VM.
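A minimal sketch of that on a Debian/Ubuntu system; the IDs are placeholders, so replace them with the vendor:device pair reported for your card:

```shell
# Find the vendor:device ID of the GPU to hide from the host
lspci -nn | grep -i vga

# Have pci-stub claim it before the graphics driver loads;
# /etc/initramfs-tools/modules accepts "module options" lines
echo "pci_stub ids=1002:67b1,1002:aac8" | sudo tee -a /etc/initramfs-tools/modules

# Rebuild the initramfs, then reboot
sudo update-initramfs -u
```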

IIRC VirtualBox isn't great for this kind of stuff. You should be using a UEFI environment in something like KVM. If you're doing hardware passthrough on your GPU and want seamless integration, you need Synergy. It's software that manages your peripheral devices at the application layer, so you're going to have some input delay. If you're a hardcore gamer like me, I think the best method for us is to use a physical KVM switch (like Wendell suggested). There's a post in this thread that should help.

I did this project and learned a few things along the way that helped me.
The i5-4690K does work, even though most K processors don't.

One way I found to get past Synergy's input lag (it's too much for playing ESEA-level games) is to pass the mouse through in hardware and run the Synergy server on Windows and the Synergy client on Linux. The Synergy client on Linux starts in the same bash script as QEMU.

You do inter-computer Synergy by using -redir tcp:1234::1234, where 1234 is the port you defined in the Synergy config. You then tell Synergy to target localhost:1234, and it passes through to the VM.
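Sketched out, using Synergy's default port 24800 in place of the 1234 placeholder (note that -redir is the legacy user-networking syntax; newer QEMU versions use hostfwd instead):

```shell
# Forward host port 24800 to the Synergy server inside the guest
# (legacy form; newer QEMU: -netdev user,hostfwd=tcp::24800-:24800)
qemu-system-x86_64 ... -redir tcp:24800::24800 &

# Point the Linux Synergy client at the forwarded port
synergyc localhost:24800

# Wait for the VM to exit, then stop the client
wait
pkill synergyc
```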

ALSO:
implementing the "PERFORMANCE IMPROVEMENTS" section at the bottom of this page really helped my performance.