GTA V on Linux (Skylake Build + Hardware VM Passthrough)

Seems to me this isn't really going to take over until they can assign hardware to the Windows VM and back to the host Linux machine again (without a reboot). I wonder if that can ever be achieved, or is it a NEVER GONNA HAPPEN situation? Surely some wizardry can be done at some level?!

I've been running Linux for almost 10 years now. The first two years were rough, but nowadays we have support for Skylake almost on release. These last four years I've been running Linux exclusively, and I'm still amazed at the support we have now compared to back in the day. After I passed 20 I kinda fell away from gaming broadly, so I've been okay with HoN/Dota 2 and TF2 for entertainment when I'm not exhausted from work, and they work perfectly on Fedora and even on my hardened Gentoo after some PaX and SELinux exceptions on the binaries.

I've only used QEMU to look around at the work on Debian GNU/Hurd though. This video made the explorer in me kinda curious. I haven't seen Windows 10 yet, and it's always nice to be reminded why I left m$ :) I'll probably play around with setting this up, but I doubt I'm gonna keep it since I'm limited to a 128GB SSD, and Windows in my experience is one fat cow compared to my 15GB root. However, I'm really interested to see this new Windows era and the transformation into a cloud-based operating system. The moment m$ told the public that 10 is the last Windows they'll make, the only conclusion to be drawn is that they've now got all the pieces to transition to a cloud-based operating system over time, with or without the users' consent.

I already tried installing just the GPU driver, same result. In fact, the installer tries to install the driver before CCC, so the crash doesn't seem to be related to that.

I passed through a physical USB stick that had the Windows installer. It should show up in the boot menu or UEFI for sure. It was a parameter formatted like my qcow2 C drive above.
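Roughly, the two -drive entries looked something like this (the paths and the device node are placeholders, so treat it as a sketch rather than my exact command):

    # C drive backed by a qcow2 image (path is just an example)
    -drive file=/path/to/windows.qcow2,format=qcow2
    # the physical USB installer stick, passed through as a raw block device
    # (replace /dev/sdX with whatever the stick shows up as on the host)
    -drive file=/dev/sdX,format=raw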


Well, I think if Vulkan can pull off running an AMD and an Nvidia card together in SLI (or maybe the generation after Vulkan, depending on how much they accomplish), then what is being suggested is not impossible. It just takes the right developers and the right amount of funding to make it happen. I mean, this is software engineering; I doubt anything is impossible with the proper technology and know-how.

This may sound really bad, but I use a 5.25" hot-swap bay and just swap out SSDs with the OS I want.

So you're saying if I go buy a Titan X it will work?

Count me in for that tutorial!

Yes, it's likely the software you're using to create the bootable USB drive. Try out Rufus.

I believe kernel 4.3 should help clear up issues with Skylake graphics. It would be interesting to give it another go when some more stuff is refined.

So I seem to be stuck but I'm unsure why. I have followed the wiki page located here:

https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF#QEM_commands

I believe I have done everything listed on the page up to "QEMU commands". Now I'm unsure what to do from here. I used virt-manager to set up a working install of Windows 7, then used "virsh edit" to add the XML listed under "Create and configure VM for OVMF". However, once I do this my virtual machine no longer works. It does something, as I can see the CPU usage spike and then plateau out around 5%. I checked the GPU and there is no output. I installed a VNC server prior and I am unable to connect to the virtual machine. So what could I be missing here? Should I be getting any output from the graphics card prior to installing drivers? I'm thinking the virtual machine isn't booting properly?
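For reference, this is the sort of thing I've been using to poke at it (the domain name, log path and PCI address are placeholders for my actual ones):

    # is the domain actually running, or did it die right after starting?
    virsh list --all
    # QEMU's own error output for the domain
    sudo tail -n 50 /var/log/libvirt/qemu/win7.log
    # confirm the card is bound to vfio-pci and not still grabbed by the host driver
    lspci -nnk -s 01:00.0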

Any help would be great, thanks!

Anyone here know if this will work on LGA 2011 (a 4930K, for example), since it doesn't have an integrated GPU?

Not unless you have two dedicated graphics cards, because in the host OS you can't use a video card that is assigned to the guest OS.

It may be a problem with the UEFI. Try to boot up a VM from the CLI. You can eliminate most of the crap in my boot script, but keep the lines about pflash; that's the firmware file that boots the VM in UEFI mode, and it should init your graphics card. Are you sure you have a UEFI-capable graphics card? That might be the other thing.
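As a rough sketch of what I mean (the paths, PCI addresses, memory size and image name below are all placeholder values, so adjust them to your setup):

    # bare-bones UEFI (OVMF) boot with the GPU and its audio function passed through
    qemu-system-x86_64 \
      -enable-kvm \
      -m 8192 \
      -cpu host,kvm=off \
      -smp 4 \
      -drive if=pflash,format=raw,readonly=on,file=/usr/share/ovmf/x64/OVMF_CODE.fd \
      -drive if=pflash,format=raw,file=/path/to/my_VARS.fd \
      -device vfio-pci,host=01:00.0,multifunction=on \
      -device vfio-pci,host=01:00.1 \
      -drive file=/path/to/windows.qcow2,format=qcow2

If that gets output on the passed-through card, the problem is probably in the libvirt XML rather than the firmware.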


I have a BIOS-only board with a Phenom II BE 955; would I still be able to do this?

I seem to recall reading elsewhere that with hardware passthrough you not only need to dedicate a graphics card, but also some CPU cores, which you will no longer be able to use in the host OS. Is that correct?
Cities: Skylines is quite CPU-intensive, so permanently losing 2 cores in Linux because I need to reserve them for those few times I play GTA 5 in the Windows VM is a bit of an issue.

I've been considering going this route for a while, but I'm now thinking of an easier solution which involves putting a switch on the case connected to the SSD power cables. Position 1 gives power only to the Windows SSD, position 2 only to the other SSDs. I'd still have Windows running on bare metal, but at least there's no way it can see the other drives, read what's on them, or make any modifications to those drives or their firmware.

In most cases the CPU cores are passed through virtually using virt-manager. Some people will pin a CPU core (or cores) in their configuration, which ties that core to the VM and makes it unavailable to the host system. But like I said, if the cores, memory, etc. are passed through virtually, then when the VM is closed out/shut down the resources are returned to the host for use.

I would also add that people who "pin cores" normally do so because they keep the VM running all the time and want dedicated cores/threads always allocated in the manner that works best for their usage, at least that's my understanding.
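If you do want pinning, it can also be done on the fly with virsh; something like this (the domain name and core numbers are just placeholders for your own layout):

    # pin the VM's vCPUs to specific physical cores
    virsh vcpupin win10 0 2    # vCPU 0 -> host core 2
    virsh vcpupin win10 1 3    # vCPU 1 -> host core 3
    # keep QEMU's own emulator threads off the pinned cores
    virsh emulatorpin win10 0-1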


First of all, thanks @wendell for trying this out and wrapping it into a "how-to".

That being said... wow, I'm quite under-impressed by this solution, unless I failed to comprehend something or missed out on a bit.

So this approach would actually need two graphics boards - one that is "booted" by the host for rendering the UI of the host machine and a second, dormant, one - excluded from any kind of init by the host - to be "booted" on demand to render the VM. Apart from that, if we consider the KVM switcheroo (or additional mess of having a second keyboard/mouse/display around) I think there's a smarter solution that doesn't even require that degree of configuration pain...

Just get two HAF Stacker 9xx cases (to keep the footprint on your desk as small as possible and not make the "stack" too high), slap a somewhat beefy MiniATX board (remember, there are even X99 MiniATX boards around nowadays) into each of them, give each of them a dedicated graphics board suiting your 3D rendering "ooomph" needs, connect the ATX power button across them (so both turn on/off with one button), connect the mouse, keyboard and monitor to a KVM switch (assuming there are KVM switches supporting DisplayPort - hint: 4K display), and finally install one with Linux and the other with Windows.

Pro: Two dedicated systems that can spend all of their hardware resources on the task at hand.
Con: Well, it may suck up a wee bit more power - but at least you don't "drag" a rather useless second graphics board along (which, though not initialized for the better part of the time, does still chew some power).

That's the "somewhat pricier" solution to dual-booting Linux/Windows on a "need be" basis - but it comes with the already mentioned advantage that there are two real machines for each task - I'd personally go for that approach if I would need such a setup (I'm more than satisfied with dual-boot ... Windows for Steam/GOG/... gaming, Linux for getting serious daily work done).

Anyway - nice "toy", but I fail to see the practicality of it, as this would require a 2nd graphics board plus a "mess of cabling" to somewhat sort it out.

EDIT: Oops, sorry for the reply to CaptainChaos ... was actually meant as a reply to wendell - hand/mouse coordination failed me.

Correct, NUMA is really useful when you want smooth performance, since the VM always has dedicated cores instead of being scheduled around by the host.

@anon37371794 you don't have to have the VM running all the time, so you can just spin it down when it's not in use.
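For example, something as simple as this (the domain name is a placeholder):

    virsh shutdown win10   # ACPI shutdown; its cores and RAM go back to the host once it's off
    virsh start win10      # bring it back up when you actually need Windows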


Actually, you're going backwards by having two dedicated boxes, with really no advantage. The GPU you're dragging along is only sitting idle when the VM is off or shut down, and I can't speak for everyone who runs this type of system, but on mine the VM runs all the time: when Linux is booted up, the Windows KVM gets booted up right after, so both of them are always running. I have all the advantages of a Windows PC, all the connectivity, and a pretty well seamless transition from one to the other. They both share a keyboard, but I do use two mice, simply because I like to work in both OSs at the same time.

I can see where you might think two dedicated systems (boxes) would be more flexible, but in reality you're missing the point. Your solution would be much more expensive because you are doubling all the hardware, which doubles the cost; the only savings is on the mouse and keyboard because of the KVM switch... which can also be used for a VM/KVM setup, thus eliminating the second keyboard and mouse along with the second PC.

The idea here is to have two complete systems in one box, with the Windows OS contained in a VM for security, running on top of a more stable, less demanding Linux host. This type of system has so many advantages over the two-box bare-metal setup you're suggesting: take snapshots, for instance, or the Windows files being compressed, or not needing antivirus or firewall software on your Windows VM. The list of advantages of this type of setup is rather long.