GTA V on Linux (Skylake Build + Hardware VM Passthrough)

I've seen some videos that claim multi-GPU passthrough is possible. If you do try it, start with just one GPU first, then try to get both running. It should be simple enough, as all you're doing is telling the system to pass through two different PCIe IDs.
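Not from this thread, but roughly what that looks like if you bind by ID at boot with vfio-pci (the vendor:device IDs below are placeholders; read your own from lspci -nn):

# Hypothetical example: hand two GPUs (and their HDMI audio functions) to vfio-pci at boot.
# Add this to the kernel command line, e.g. GRUB_CMDLINE_LINUX in /etc/default/grub:
#   intel_iommu=on vfio-pci.ids=10de:13c2,10de:0fbb,10de:1187,10de:0e0a
# Check what IDs your cards actually report:
lspci -nn | grep -i nvidia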

I've heard it is possible with AMD CrossFire, but tricky or not possible with Nvidia.

What about Nvidia makes it not possible? Earlier in the topic, a way to pass a card through without the drivers realizing it's in a VM is described. So, like @navihawk said, it should just be a matter of passing through two PCIe IDs, right? Or is there something else that would prevent it?

I can't find any technical details at the moment. Apparently SLI is not as straightforward as passing both cards into the VM. This is a comment made on the vfio mailing list back in September by a developer:

AFAIK nobody has gotten SLI to work in a VM, so strike that out as a possibility unless someone is willing to invest development time on it.

*edit: Have things changed in the last few months? Possibly, but it's probably still too new to be worth putting money into new cards in any case.

DX12 and Vulkan should resolve the whole SLI issue, I would assume, since those APIs bypass that sort of tech and just use the hardware as raw resources. There's a good chance that GTA V and Fallout 4 will get DX12 patches at some point, which solves that problem for newer games. I would expect The Witcher 3 to get DX12 at some point as well, though that will probably come to Linux once Vulkan is out.

The only game I'm uncertain of is MGS V, but given it isn't a super graphically intensive game, I don't think anyone needs a dual-card setup for it.

Theoretically, yes. If you pass both GPUs through to the VM via their PCIe IDs and block the nouveau/nvidia drivers on the host, you should be good to go, unless your config lacks IOMMU, VT-d, or VT-x support.
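A rough sketch of the host-side prep that implies, assuming an Nvidia card and the usual modprobe.d layout (the file name is arbitrary):

# Keep the host drivers off the passthrough card so vfio-pci can claim it at boot.
echo -e "blacklist nouveau\nblacklist nvidia" | sudo tee /etc/modprobe.d/blacklist-passthrough.conf

# IOMMU groups only show up when VT-d/AMD-Vi is enabled in firmware and
# intel_iommu=on (or amd_iommu=on) is on the kernel command line.
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"; ls "$g/devices"
done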

At least in my case, I've been able to get VGA passthrough working, but I never ironed out the issue with the EDK II UEFI not detecting bootable drives (maybe this time it'll pick them up, since I've created a Windows 10 VM and already set it up). You can find my thread, which I recently brought back from the ether since I'm still working on it, covering my problems with the VM setup up to the point where I moved back to Ubuntu, here: https://forum.teksyndicate.com/t/virtual-machine-uefi-firmware-cant-load-isos/89874

I know Nvidia GPUs, at least from the 700 series on, can be passed through with no issue, given that you have a working VM and IOMMU operational.
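For what it's worth, the "don't let the driver notice the VM" trick mentioned earlier in the thread usually boils down to a couple of CPU flags on the QEMU command line; here's a minimal smoke test, not a full passthrough invocation (the hv_vendor_id value is arbitrary, and it needs a reasonably recent QEMU):

# Boot an empty guest with the KVM signature hidden, using the same -cpu flags you
# would add to a full passthrough command line to dodge the GeForce error 43.
qemu-system-x86_64 \
    -enable-kvm -m 1024 \
    -cpu host,kvm=off,hv_vendor_id=1234567890ab \
    -display none -monitor stdio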

Can somebody redo this with Ubuntu 15.10 and an Nvidia card? Oh, and with a bit more GUI and less CLI.

Look into virt-manager if you're looking for a nice, friendly way to manage your VMs.
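On Ubuntu 15.10 that's roughly the following (package names drift between releases, so treat this as a sketch for that era):

# Install KVM, libvirt, the virt-manager GUI, and OVMF for UEFI guests.
sudo apt-get install qemu-kvm libvirt-bin virt-manager ovmf
# Let your user talk to libvirt without sudo; log out and back in afterwards.
sudo usermod -aG libvirtd "$USER"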

@wendell Can you reorganize this into a step-by-step-ish guide, please? It'll make it way less overwhelming. Thanks.

@wendell Is there any way to do the passthrough with one GPU and no onboard graphics? I could definitely see picking up a second graphics card some day, but would almost prefer to go with an SLI config. I don't think I'd have enough PCIe slots left over to support Windows after an NVMe drive on Haswell-E. Maybe Skylake-E will up the PCIe count.

Other than not having passthrough, I almost have the VM recognizing three monitors. It can see three monitors, but they don't map to the actual displays right now.

It looks like I will be taking the jump into a new machine and putting Windows 10 into the Phantom Zone.

I was hoping to go with an X99 system, but it looks like I might have to go Skylake so I have the onboard graphics handy for the host and can pass through my 780. I might look around for a cheap PCIe graphics card.

That being said, are there any quirks doing this between Skylake and X99?

@wendell & everyone else: I found a convenience script on GitHub that binds a PCI device and all of its IOMMU group members to vfio-pci. All you have to do beforehand is enable IOMMU on your machine; afterwards, pass the device(s) through to your VM.
GitHub: Here
I found it via the vfio-users Red Hat mailing list; you might want to check that link for updates after a while: Here
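I haven't checked that exact script, but the general shape of such a bind helper is roughly this (the device address in the usage line is an example):

#!/bin/bash
# vfio-bind sketch: bind each PCI device given on the command line, plus every
# other device in its IOMMU group, to vfio-pci. Run as root.
# Usage: ./vfio-bind.sh 0000:01:00.0 [more addresses...]
modprobe vfio-pci
for dev in "$@"; do
    for d in /sys/bus/pci/devices/"$dev"/iommu_group/devices/*; do
        vendor=$(cat "$d/vendor")
        device=$(cat "$d/device")
        # Detach whatever driver currently owns the device, then register its
        # vendor:device pair with vfio-pci so it picks the device up.
        [ -e "$d/driver" ] && echo "${d##*/}" > "$d/driver/unbind"
        echo "$vendor $device" > /sys/bus/pci/drivers/vfio-pci/new_id
    done
done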


Adding this here for anyone who might get some help out of it.

This is my setup, but with a small addition: an EVGA 710 2 GB GPU in it.

Inspired by @wendell and his Arch Linux setup, I went on and took a similar approach.

Used Fedora 23 with a modified kernel (4.4.2-301.fc23.x86_64) and QEMU/KVM, took the vfio-pci route, and was able to pass the 980 Ti, mouse, keyboard, and USB headset through to the VM. I have the desk space for two keyboards, so no big deal there.
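For anyone wondering how the USB devices get handed over: one common way is per-device host passthrough on the QEMU command line, keyed on the vendor:product IDs that lsusb prints (the IDs below are placeholders, not mine):

# Find the "ID xxxx:yyyy" pairs for the keyboard, mouse, and headset.
lsusb
# Then append one usb-host entry per device to the QEMU command line, e.g.:
#   -usb -device usb-host,vendorid=0x046d,productid=0xc52b \
#        -device usb-host,vendorid=0x0d8c,productid=0x0014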

Two things for those who might try it.
First, under Fedora, if you are going to assign USB devices you should edit /etc/selinux/config and change enforcing to permissive or disabled (see the sketch just below this list). After a reboot you can check the SELinux status with /usr/sbin/getenforce, and it should reflect whatever change you made. If you don't do this, Fedora will not unbind your USB device and let KVM grab it.
Second, kernels 4.2 and 4.3 will not register hugepages correctly, so the system will halt. Kernel 4.1 should work fine, but I haven't tried it personally to verify.
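The SELinux change from the first point, spelled out as a sketch (setenforce only lasts until reboot; the config edit makes it stick):

sudo setenforce 0                                               # permissive for this boot
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
/usr/sbin/getenforce                                            # should now print "Permissive"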

Best of luck to all. :)


Dear Wendell,

This tutorial is great. If I could play games on Linux, there would be nothing stopping me from making it my primary OS, and I would really love that. One problem: I already have Windows installed with quite a lot of files and no backup drive capable of preserving them during the transfer. Is there a way to install Linux as a secondary OS, set everything up there, and then virtualize the existing Windows partitions from Linux?

Thank you, I really love your videos!

I truly would love to switch to Linux! But limitations like games not available for Linux prevent me. My i7-4770K does not support passthrough (no VT-d). Are there any other options available? How good is Wine these days?

I want something simpler than what was described. I just want Windows in the VM so I can play Minecraft on it (I have a cracked version that only works on Windows; yeah, I know, don't judge, I can't legally buy Minecraft here). I have a G3258, an Asus Z87 Sabertooth, and a GTX 780 (don't question it, there are reasons, I can explain if you want). Can someone give me some instructions, because I am a real noob? I have very limited Linux experience, even though I use three Linux distros for my everyday tasks. I am going to set this up on Linux Mint Xfce 17.3, if that matters. Thank you all in advance.

Can I do this on an i5-3570 and a Gigabyte GA-Z77X-D3H Mini-ITX? I'm also running openSUSE Leap.

Hi,

This thread, and especially the video, inspired me to take the plunge into a VGA passthrough Windows setup.
Thought I'd share my experience and hardware/software list.

I just assembled my new Skylake (i3-6320) desktop with an MSI Z170A GAMING PRO (MS-7984) motherboard.
I already had Arch Linux installed and had no issues booting it on the new hardware.
I was following guides on the Arch Linux forums, vfio.blogspot.co.uk, the Arch Wiki, and Google in general. After spending a day and half a night setting this up, I finally got everything working.

I had no major issues and did not need to deviate from the instructions.

One thing that held me up for a while: I forgot to pass the x-vga=on parameter with my video card options and was wondering why I got no output from the GPU while the VM was not booting and was using one core at 100%.

To pass the x-vga=on parameter I had to create a wrapper around /usr/sbin/qemu-system-x86_64, as described on http://vfio.blogspot.co.uk/.
My script (/usr/sbin/qemu-system-x86_64-vga) looks like this:

#!/bin/bash
# Rewrite the GPU's device argument to add x-vga=on, then hand everything to the real QEMU.
# 01:00.0 is my passthrough card's PCI address; change it to match yours.
exec /usr/sbin/qemu-system-x86_64 \
    `/bin/echo "$@" | /bin/sed 's|01:00.0|01:00.0,x-vga=on|g'`
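In case it saves someone a step: the wrapper also has to be executable, and if the guest is managed through libvirt, the domain's <emulator> element needs to point at the wrapper instead of the real binary (the domain name below is just an example):

chmod +x /usr/sbin/qemu-system-x86_64-vga
virsh edit win10    # change <emulator> to /usr/sbin/qemu-system-x86_64-vga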

I have not tested the VM properly, but it seems OK. I only assigned two vCPUs (one core and one thread) to the VM, so I'm not sure what performance I will get in games. I'm broke right now, so no better CPU in the foreseeable future.
Unigine seems to run OK; I will install Fallout 4 and see how it goes.

Other specs:
Motherboard BIOS: 1.80
RAM: 16 GB Kingston DDR4 @ 2133 (dual channel)
SSD: PNY CS1311 240 GB (EFI and Arch Linux root partitions; /var is moved to a HDD)
Image file: located on the SSD in my home folder, qcow2 format, 128 GiB
The 2 vCPUs are pinned to physical CPUs 1 and 3.
Host GPU: Intel HD 530 (integrated)
Guest (passthrough) GPU: Nvidia GTX 660 Ti 2 GB (Zotac, non-AMP, has no UEFI firmware)
Host OS: Arch, kernel 4.5.4-1-vfio (from the AUR) with the i915 VGA arbiter and ACS override patches, systemd-boot UEFI bootloader
Emulation and VM management: QEMU 2.5.1-1, libvirt 1.3.4-1
Guest OS: Windows 10 Pro (no license yet, for testing), legacy BIOS bootloader
Guest RAM: 6 GiB assigned, using 4096 x 2 MiB hugepages (6 GiB VM RAM + 2 GiB VGA RAM; see the sketch after this list)
Host/guest networking: libvirt default virtual network bridge (NAT); guest uses a virtio NIC
Sound: not configured yet, but Nvidia DisplayPort audio does work
Monitor: Asus VE278Q; host on the HDMI input, guest on the DisplayPort input
Mouse/keyboard sharing: Synergy (host as server, guest as client)
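The hugepage reservation mentioned above, as a sketch (4096 x 2 MiB = 8 GiB, matching the split in the list; adjust for your own RAM):

echo 4096 | sudo tee /proc/sys/vm/nr_hugepages   # reserve 8 GiB of 2 MiB hugepages
grep Huge /proc/meminfo                          # check HugePages_Total and Hugepagesize
# To make it permanent, put hugepages=4096 on the kernel command line
# or vm.nr_hugepages=4096 in a file under /etc/sysctl.d/.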

Wow, that's super inspiring and really makes me want to try out a dual-Xeon build. For me it would be perfect, as I need to use a CAD app that does not have a Linux version, and it is a sore spot in my workflow.
