I just built an Arch Linux system for compchem/compbiol workflows and Python development for a Master's project. I'm also planning to create a Windows VM for Windows-specific tools like Fusion 360, and maybe the odd game.
The system is small form factor: AMD Ryzen 9 7950X, 64GB DDR5, ASUS B650E-I, and an RTX 4000 Ada Generation SFF. PCPartPicker list in my bio.
I was planning on using the integrated GPU for Windows and the RTX 4000 SFF for Linux (the host), as I'll need the CUDA cores for my compchem/compbiol work as well as some of the Python development.
But it occurs to me some client apps could use the beefy GPU. I don't believe the Ada SFF can do GPU virtualization (vGPU) like some other Quadro-class cards — please correct me if I'm wrong.
So my question is: is it possible to pass a PCI device through to two different VMs, as long as only one is running at a time? If so, I could keep my host lightweight and do all my Linux work in a VM that has the RTX 4000 passed through. When I want to use Windows, I'd shut that VM down and boot the Windows VM with passthrough access to the same card.
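For what it's worth, the setup I'm describing seems possible in libvirt by putting the same `<hostdev>` entry in both domain XMLs. A sketch of what I think that would look like, assuming libvirt/QEMU and assuming the card sits at PCI address 0000:01:00.0 (the address and domain names here are placeholders — check yours with `lspci`):

```xml
<!-- Hypothetical <hostdev> entry, placed in BOTH domain XMLs
     (e.g. a "linux-work" domain and a "win11" domain).
     managed='yes' tells libvirt to detach the device from its
     host driver and bind vfio-pci when the VM starts, then
     restore it on shutdown. libvirt will refuse to start the
     second domain while the first one still holds the device. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

Switching would then just be `virsh shutdown linux-work` followed by `virsh start win11`. The obvious caveat is that the host can't touch the card while either VM holds it, and with `managed='yes'` the host's NVIDIA driver has to unbind cleanly on every switch.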
Sounds like I'm setting myself up for a frustrating time, though, if I expect to reliably switch VMs to go from using Fusion 360 on Windows with the RTX 4000 to working on my workflows in Linux with the same card.
I guess I'm probably doing myself a favour if I let the host keep the RTX 4000, do my workflow work on the host, and tolerate Windows apps using only the iGPU.
Unfortunately, with a small-form-factor build I only have the one PCIe slot for a GPU.