GTA V on Linux (Skylake Build + Hardware VM Passthrough)

Yup, it is pretty sweet that the Windows storage container is compressed and error-checked by Linux's superior file systems.

Yes, the 4930K does support VT-x and VT-d, so CPU-wise you're good to go. The motherboard really shouldn't be an issue, and the CPU would be a great choice for virtualization, being a 6-core part with Hyper-Threading.

Why yes, you could do the "same thing" (ignoring the "gaming" part) with a traditional VM (VirtualBox, VMware, QEMU/KVM) and not have additional hardware sitting around (extra keyboard/mouse), though you can do that "extra input devices" stunt with them too if you so desire. I once set up a machine running Linux as the host, starting a Windows VM on a secondary screen with its own keyboard and mouse, though that was a complete edge case the guy wanted solved. That'll also evacuate Windows into a "jail" and have the VMDK/OVF container sitting on top of a superior Linux filesystem. Though... that's where you really can't game or use any of the programs that rely on GPU access (Photoshop, Premiere, or anything else wanting to make use of CUDA and/or OpenCL).

However, while the thing at hand might scratch the "itch" of being able to run a game that is Windows-exclusive, or to run any of the programs that need access to the GPU, I find the way to get it to work somewhat convoluted (for lack of a better word).

It's undeniably an insanely nice piece of technology and quite an improvement over current "mainstream" VM technology; I just think the implementation leaves something to be desired.

As for "doubling the price with two bare-metal systems": well, that entirely depends. If you assemble the same machine twice you, of course, double the cost, though that raises the question of whether you would really need the same hardware config for Linux as well (given that most games on Steam/GOG are for Windows, and with publishers now starting to embrace DX12 I don't think the list of games natively ported to Linux will expand greatly, so the beefier config will go to Windows anyway). However, since you need a second graphics board for the GPU-passthrough solution... depending on how many frames at what resolution you like to see fly by, you can also easily rid yourself of $500+ to drag the second "on demand" GPU along for the ride.

Don't get me wrong... I'd embrace a solution for running Linux and Windows in parallel with low-level hardware access from a VM, without much hassle, in an instant. This is a nice approach; to me it's just not there yet (it would be there when I no longer need a second GPU, additional input devices, or "totally stupid loop-back cabling" to make it a worthwhile experience).

Until that day comes I'd rather run either two separate boxes or a dual-boot setup (because no matter whether it's the old-school VM or the bleeding-edge PCIe-passthrough solution, you don't save on needing a Windows license anyway).

Fair enough, though I'm not quite sure what you mean by a loop-back cable? I have no such device on my system. I understand your feelings, as others have the same thoughts about needing on-the-fly hardware allocation. We're a long way off from that becoming a reality, as it's going to take more than just programming the Linux kernel to accomplish, and there is really not a lot of incentive for hardware manufacturers to embrace virtualization at the desktop level, because it would cut into their bottom line in lost profits.

According to Wendell's video it's highly recommended to use the KVM-switch approach to switch the keyboard/mouse between the host and the VM. If you abstract it a bit... that's just a loopback. Yes, technically speaking it does switch between the host and the USB root you passed to the VM... but since it goes back to the very same machine, it's nothing more than "looping it back". Reminds me of the way you cabled the very early 3D accelerator boards (i.e. 3dfx).

LOL... OK, I get the reference, but @wendell's way is only one choice. You could also use a software solution like Synergy, which is pretty well seamless, or do as I do and pass an entire USB controller through. There are many different roads to accomplish the same task; nothing is etched in stone other than needing the second GPU.
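For anyone curious about the "pass an entire USB controller" route: the controller is handed to the guest like any other PCI device. A minimal sketch of the libvirt domain XML, assuming the USB controller sits at the hypothetical PCI address 0000:00:14.0 (find yours with `lspci | grep -i usb`):

```xml
<!-- Hypothetical example: give the guest the host USB controller at 0000:00:14.0.
     Every device plugged into that controller then belongs to the VM. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x00' slot='0x14' function='0x0'/>
  </source>
</hostdev>
```

Keep in mind the controller must be in its own IOMMU group (or ACS-isolatable) for this to work cleanly.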

Is this viable for VR applications? If so, what would be the minimum hardware required for such a feat?

My NVIDIA GTX 970 with 3.5 GB of GDDR5 :) ...supports UEFI. I was able to modify the script in this post to boot into a test virtual machine, which dropped me into some kind of prompt, which I'm assuming means the PCI passthrough works.

I can start my windows VM without these lines:

<os>
 <loader readonly='yes' type='pflash'>/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
 <nvram template='/usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd'/>
</os>

And no window comes up on the host, and no output is displayed from the card; however, I can VNC into the machine. Once I add the lines above, though, the machine never seems to get into the OS.

@mchiron While I do see your point, and it is a good solution, one should keep in mind the consequences of this. One of the reasons many people choose to run Linux is that they do not trust Windows, especially with the new spying updates in Windows 10. By giving Windows full control of the mouse and, more importantly, the keyboard, the user is defeating the purpose of not using it in the first place, as Windows can now snarf everything you type, not only on the Windows machine but on your "trusted" Linux box. While this may not concern some, I leave this here as a caveat.

I tracked the problem down to the virt-manager config. The GUI was not using UEFI by default and the option was grayed out. I was able to add this to the qemu.conf file:

nvram = [
  "/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd:/usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd",
]

This allowed me to select the OVMF_VARS file during advanced setup and the install now works.
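One note for anyone following along: libvirt only reads qemu.conf when the daemon starts, so the new firmware entry won't appear in virt-manager until the daemon is restarted (service name may vary by distro):

```shell
# Restart libvirtd so it re-reads /etc/libvirt/qemu.conf
# and virt-manager can offer the new OVMF firmware entry.
sudo systemctl restart libvirtd
```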

Hopefully someone will find this useful!


Hey blanger, what keyboard are you using alongside your passthrough? I have the Corsair K70, and the board freezes when being allocated to the VM, even in BIOS mode. It's a huge detriment to my plans of running a Windows VM, as this keyboard otherwise works perfectly fine and purchasing a new brown-switch mechanical board is not really in the budget right now ;)

EDIT: I forgot I had installed the unofficial Linux driver for this keyboard, which works great, but it causes instability when the keyboard is passed through to a KVM, so I added a line to my QEMU script that disables the service. Ideally, when the Windows VM loads, it would open the Windows version of the driver and I'd get my fancy colours back.
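For reference, a sketch of what that "disable the service" line can look like in a QEMU start script. The service name `ckb-daemon` is an assumption (it's what the unofficial Corsair driver commonly installs as); substitute whatever your driver actually registers:

```shell
#!/bin/sh
# Hypothetical sketch: stop the unofficial keyboard driver's service
# before launching the VM, then bring it back once the VM exits.
# "ckb-daemon" is an assumed name; check yours with:
#   systemctl list-units | grep -i ckb
sudo systemctl stop ckb-daemon

# ... qemu-system-x86_64 invocation goes here ...

sudo systemctl start ckb-daemon
```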

As dumb as it sounds, I use the cheapest of cheap MS-branded USB keyboards; it's a generic less-than-$10 keyboard... lol. I use a Logitech G13 gamepad for games, which is passed through to the KVM with drivers loaded for it in Win7. It works really well for my usage, as I'm not too keen on using a full-size keyboard to game on, but it's just a matter of preference.

Hi, all

I have been trying to do KVM VGA passthrough off and on for a few months now with limited success.
Thanks to @wendell's video I was inspired to try again and finally got it to work.

OS: Fedora 22
OS NETWORKING: Open vSwitch

HOST BOARD: Asus Hero VII
HOST CPU: Intel(R) Core(TM) i7-4771 CPU @ 3.50GHz
HOST GPU: NVIDIA Corporation GM204 [GeForce GTX 980]

KVM GUEST: Windows 10

Thanks


It might just be me doing something stupid, but I have tried this many, many times and I keep getting stuck at the UEFI screen for the QEMU VM. No matter which version of the UEFI firmware I download and use, it will not boot.

You know, some others have reported this as well. Can you try the option that lets you boot into the boot menu?


Hi

I made a tutorial a while back on how to get this working on Ubuntu.

Say goodbye to dual booting!


This might be of some help to you...

I just booted up my machine and started my VM, and now I get no output at all. I tried recreating the VM, and every time I got no output. I tried without the GPU passed through, using the display in the window that opens with the VM, but nothing. I also created a test machine using the default BIOS firmware, and that booted and displayed perfectly, so I am convinced that I am either downloading the wrong UEFI file or not using it properly.
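To help narrow down whether it's the firmware file or the passthrough itself, a couple of quick checks may be worth running. The paths match the ones used earlier in this thread, and the PCI address 01:00.0 is a placeholder; adjust both for your system:

```shell
# Confirm the OVMF files actually exist where the domain XML points
ls -l /usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd \
      /usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd

# Confirm the guest GPU is bound to vfio-pci, not nouveau/nvidia.
# 01:00.0 is a hypothetical address; find yours with: lspci | grep -i vga
lspci -nnk -s 01:00.0
```

If `lspci -nnk` shows "Kernel driver in use: vfio-pci", the passthrough binding side is fine and the firmware file is the more likely culprit.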

Yes. Using an i7-3820 clocked at 4.2 GHz, a GTX 970 for the guest, and a GT 610 for the host. Got it to work with everything using an ASRock Extreme4-M motherboard. The options to enable VT-d and VT-x are there in the BIOS (at least with my revision).

I actually converted my running system into a VM using VMware vCenter Converter Standalone Client (while it was running), and then took the VMDK file and ran it as the hard drive for the VM under KVM. I highly recommend converting it to qcow2, as that enables snapshots.
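The VMDK-to-qcow2 conversion is a one-liner with qemu-img; the filenames here are placeholders:

```shell
# Convert the VMDK produced by the converter into qcow2 so that
# snapshots work under KVM. -p shows progress; filenames are hypothetical.
qemu-img convert -p -f vmdk -O qcow2 windows.vmdk windows.qcow2
```

Afterwards, just point the VM's disk at the new .qcow2 file instead of the .vmdk.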

I then mounted my hard drives bare-metal through the virtio driver (using the VirtIO drivers from Fedora), with a loopback RAID array for each drive. WARNING: do this with extreme caution! xD Found out the hard way, as I lost some files :$
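For anyone attempting the same, passing a raw block device to the guest over virtio looks roughly like this in the libvirt domain XML. The device path /dev/sdb is a placeholder, and getting it wrong is exactly how files get lost:

```xml
<!-- Hypothetical example: hand the whole of /dev/sdb to the guest as a
     virtio disk. Make absolutely sure the host is NOT mounting or
     otherwise using this device at the same time. -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/sdb'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```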

I've been looking into this a bit more, and my 770s are definitely a no-go, so I'd need to ditch one and get an ATI card instead.
My i5-2500 officially supports VT-x and my motherboard has VT-d enabled, but I'm reading that virtualisation is very hit-and-miss on Sandy Bridge. I'd basically have to upgrade my entire system just to be sure, but that isn't going to happen just yet.

I still have one main question regarding the "trust Windows on bare metal" comment that @wendell makes during the first minute of the video, though.
If you can disable the power delivery to the Linux and data drives so that Windows can't possibly see them, shouldn't that be enough to "trust Windows on bare metal", or is there another reason why you'd only want to contain it inside a VM?

-

Full info: I am thinking about soldering a two-way switch into my PC's wiring. Position 1 would give power only to the Windows SSD; position 2 would give power only to the other three SSDs (it's a four-drive system).
This would completely hide Windows from the Linux install and vice versa. I'd have my full-blown Linux PC for everything I need to do, and a minimalist Windows install for those games that don't run on Linux (Assetto Corsa, GTA 5, etc.). I'd need to power down the PC and flick the switch while it's off, but I can live with that really.

Unless I'm completely missing something, this should isolate Windows from the main system, meaning it should run on bare metal just fine without ever being able to peek into my data.
The only thing it could see is the NAS (to which I won't give it access), but it can see that from inside a VM too.