A question about using VMs on Linux

Hello,
I have been using Linux in dual-boot configurations since 1999. Once VM clients like VMware and VirtualBox came along, I experimented with running Windows 7 in a VM with Linux as the host OS. I quickly found that these would not work for me: all of the PC's hardware was virtualized, the graphics were rather anemic, and you could not use them for games or any Windows app that needed a good GPU, so I gave up on those VMs.

Up until recently, whenever I built my PCs I always used Nvidia GPUs, because before AMD took over ATI, Nvidia had much better Linux drivers than ATI. Since I was an avid gamer, whenever a new driver was released I would download it from Nvidia and compile the new driver against the kernel. Back before systemd this was easy to do on systems using the traditional init system. Init was set up with different runlevels: runlevel 3 was just a text console with no X server running, and runlevel 5 was the runlevel at which the X server and the GPU driver were loaded. To install a new driver, all I had to do was open a terminal, log in as root, and type init 3. After hitting Enter the screen would go black as the X server was stopped, you were quickly dropped to a text-based console, and you could compile the new driver there. Once that finished, typing init 5 restarted the X server, now running the new Nvidia driver. I included all of this background so I could ask my question.
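For anyone who never used that old workflow, it was roughly this, from memory (the installer filename is just an example, use whatever version you downloaded):

```sh
# Drop to runlevel 3: text console only, X server stopped
init 3

# Build and install the new Nvidia kernel module
sh ./NVIDIA-Linux-x86_64-XXX.XX.run

# Return to runlevel 5: X server restarts with the new driver
init 5
```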

I am not a hardware expert and I am not a coder, just a lowly end user. However, I have had this question for a long time, and it may be a stupid question, but I have decided to ask it anyway. When the X server is not running, the GPU is free, which is what lets you compile a new driver for the system. So if you had Windows installed in a VM, could the VM's launcher run a script before starting the VM that killed the X server, so that once the VM was launched, Windows would have access to the GPU and could drive it with the Windows driver? If this is possible, then people who cannot afford expensive high-end Threadripper systems with lots of PCIe lanes for hardware passthrough could still use a Windows VM for gaming, or for the few programs they will always need that only run on Windows.
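Roughly, the kind of launcher I am imagining is a wrapper script like the one below. The exact commands are guesses on my part (I am assuming a systemd distro, a libvirt domain named win10 as a placeholder, and that stopping the display manager is enough to free the GPU):

```sh
#!/bin/sh
# Rough idea only: stop the graphical session, run the Windows VM,
# bring the desktop back when the VM powers off.

# Stop the X server (on systemd distros this is the display manager)
systemctl stop display-manager

# Start the Windows VM ("win10" is a placeholder domain name)
virsh start win10

# Wait until the guest is no longer running
while virsh domstate win10 | grep -q running; do
    sleep 5
done

# Restart the desktop
systemctl start display-manager
```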

Theoretically possible, but there's a bit more to it.

@gnif spent a bunch of time just trying to ensure that some PCIe cards can be reinitialized properly so a VM can be restarted at all. It sounded like there was a lot of underspecified behavior, which made it really hard to write code for; very much an uphill battle, and I'm not even sure that work got merged upstream in the end.

(Bear in mind that, normally, as a software developer you'd write code and tests to ensure you can make future changes without breaking pre-existing behavior. Sometimes you make errors, sometimes your tests are wrong, but as long as you discover the issues you can fix them and move on … most of this Linux chipset code doesn't have that luxury: there's an additional FUD factor surrounding every patch sent, and developers only test on real hardware in a small set of use cases, which just slows everything down … it might be a few years before detaching/reattaching GPUs is a thing.)

Yep, this does work for some people. It is a bit more complex to actually do in practice, and there are more things that can go wrong than with GPU passthrough using one card for the VM and one card for the host.
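For anyone curious what that looks like in practice, here is a rough, simplified sketch of the libvirt hook approach that most single-GPU passthrough guides build on. The VM name, PCI addresses, driver modules, and display-manager unit are placeholders for a hypothetical Nvidia setup; real setups usually need extra steps on top of this (unbinding the framebuffer and VT consoles, reset quirks, audio and input devices, etc.):

```sh
#!/bin/sh
# /etc/libvirt/hooks/qemu -- libvirtd runs this for every VM event as:
#   qemu <vm-name> <operation> <sub-operation>
# "win10" and the 0000:01:00.x PCI addresses below are placeholders.

VM="$1"
OP="$2"

# Only act on the passthrough VM; ignore every other guest
[ "$VM" = "win10" ] || exit 0

case "$OP" in
  prepare)
    # Stop the graphical session so nothing is holding the GPU
    systemctl stop display-manager

    # Unload the host Nvidia driver stack
    modprobe -r nvidia_drm nvidia_modeset nvidia_uvm nvidia

    # Detach the GPU and its HDMI audio function from the host
    virsh nodedev-detach pci_0000_01_00_0
    virsh nodedev-detach pci_0000_01_00_1

    # Make the card claimable by the VM via vfio
    modprobe vfio-pci
    ;;
  release)
    # Give the GPU back to the host once the VM has shut down
    virsh nodedev-reattach pci_0000_01_00_0
    virsh nodedev-reattach pci_0000_01_00_1
    modprobe -a nvidia nvidia_modeset nvidia_uvm nvidia_drm

    # Bring the desktop back
    systemctl start display-manager
    ;;
esac
```

The hook has to be executable, and because libvirtd calls it for every guest, the name check at the top is what keeps it from tearing down your desktop when some unrelated VM starts.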