Evening! Long time lurker here, figured I’d toss this to you guys too.
I'm in the process of rebuilding my PC into a different case with a couple of upgrades (new 2TB SSD, a Mellanox ConnectX-3 HCA for IP over QDR InfiniBand, etc.). Relevant specs are as follows:
CPU: i7-6950X (VT-d)
GPU: 2x GeForce GTX 1080 Ti
RAM: 128GB DDR4
Storage: 1x 2TB WD SN750 M.2 locally, 8TB of spinning rust served over the QDR IB network
Motherboard: Asus X99-WS/IPMI
I've had to reinstall Windows 10 on this system twice in the last two months, and it's finally pissed me off enough that I've decided to switch: Arch Linux + Cinnamon as the host desktop, with Windows 7 running as a guest in whatever virtualization software I settle on, for gaming and misc compatibility.
That brings me to the actual question. I've never done PCIe passthrough like this before. I have two 1080 Tis: one for the Linux host and one to be passed through to the guest OS. My understanding is that, in the usual setup, the passed-through GPU gets claimed at boot (typically by vfio-pci) and is unavailable to the Linux host from then on. Is that correct? I'd obviously like to have both cards available for things like Blender renders, so is there any way to keep both GPUs usable by Linux and still pass one through (plus whatever else its IOMMU group drags along) to the guest, without reconfiguring boot parameters and rebooting every time I want to start the VM?
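For reference, the kind of thing I'm imagining (based on reading the Arch wiki and the kernel docs on driver_override/vfio-pci) is a little sysfs dance like the sketch below. The PCI addresses are placeholders for my system and I haven't tested any of this, so it's just my current understanding — please tell me if the whole approach is wrong.

```python
#!/usr/bin/env python3
# Rough sketch of what I think runtime rebinding would look like: move the
# second 1080 Ti (and its HDMI audio function) from the nvidia driver to
# vfio-pci before starting the VM, and back again after it shuts down.
# PCI addresses are placeholders -- totally untested.

import os
from pathlib import Path

PCI_DEVICES = Path("/sys/bus/pci/devices")
DRIVERS_PROBE = Path("/sys/bus/pci/drivers_probe")

# Hypothetical addresses: the GPU and its audio function usually sit in the
# same IOMMU group, so both have to be handed over together.
GUEST_GPU_FUNCTIONS = ["0000:03:00.0", "0000:03:00.1"]


def iommu_group(bdf: str) -> str:
    """Return the IOMMU group number of a device, to see what gets dragged along."""
    return os.path.basename(os.readlink(PCI_DEVICES / bdf / "iommu_group"))


def rebind(bdf: str, new_driver: str) -> None:
    """Detach a PCI function from its current driver and let `new_driver` claim it."""
    dev = PCI_DEVICES / bdf
    # Only the named driver may claim this device on the next probe.
    (dev / "driver_override").write_text(new_driver)
    if (dev / "driver").exists():
        (dev / "driver" / "unbind").write_text(bdf)  # release from current driver
    DRIVERS_PROBE.write_text(bdf)                    # ask the kernel to re-probe it


if __name__ == "__main__":
    for fn in GUEST_GPU_FUNCTIONS:
        print(f"{fn} -> IOMMU group {iommu_group(fn)}")
        # rebind(fn, "vfio-pci") before the VM starts; rebind(fn, "nvidia") afterwards
        rebind(fn, "vfio-pci")
```

The part I'm not sure about is whether the nvidia driver will actually let go of the card while X is running on the other GPU, and whether the X99-WS puts anything awkward in that IOMMU group. Has anyone done it this way rather than binding vfio-pci at boot?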
Let me know what you think… I'm open to thoughts/suggestions. I did find one other thread from four years ago that never really went anywhere, but maybe things have improved since then.