Dream setup for Linux

So, I got a new M.2 SSD and the Samsung migration tool seems to have borked my Windows install somehow. So I might finally make the transition to actually having my main OS be Linux (I’m already used to using Linux apps on my Chromebook anyway).
I’ve given it a go many times before, but this time I want to do it properly.

What my dream setup looks like is this (I don’t know if everything here is possible, but it seems like bits and pieces are slowly being developed, so it might work):
Host system is Linux (ideally Kubuntu; I like KDE and want an Ubuntu LTS base. Don’t judge xd).
EDIT: If need be, I can probably use Arch, I guess. I mainly just want KDE; I love KDE Plasma.
A VM running Windows.

I want the GPU to be available on the Linux host if I feel like gaming on Linux, but be easily available to the Windows VM when I’m playing Windows games or doing VR (Valve Index).
EDIT: I’m also planning to try some VR natively on Linux, as Valve supports that.
(Also, my new monitor is ‘G-Sync Compatible’ (FreeSync), so having that work would be great.)

And for this, I’d ideally not want to be switching display inputs on my monitor, so Looking Glass fixes that part, I guess.
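(For anyone reading along, here’s a minimal sketch of what the Looking Glass part involves, assuming a libvirt VM named “win10” and a roughly 1080p guest resolution; the name, size, and paths are just placeholders:)

```bash
# Looking Glass sketch: the Windows VM's framebuffer is shared with the host
# through an IVSHMEM device, so its output shows up in a window on Linux.

# 1. Shared-memory file that the host client and the guest agent both map.
#    32 MiB covers ~1080p; higher resolutions need more (see the LG docs).
sudo touch /dev/shm/looking-glass
sudo chown "$USER":kvm /dev/shm/looking-glass
sudo chmod 660 /dev/shm/looking-glass

# 2. IVSHMEM device for the VM -- paste inside <devices> via `virsh edit win10`:
cat <<'EOF'
<shmem name='looking-glass'>
  <model type='ivshmem-plain'/>
  <size unit='M'>32</size>
</shmem>
EOF

# 3. With the Looking Glass host application running inside Windows:
looking-glass-client
```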

Would super appreciate you guys’ help. (I can switch my 3700X and GTX 1080 to Level1Techs’ folding team instead of LTT’s if that helps :wink: my PC is on 24/7.)

This all sounds good, but what exactly are you asking?


Sorry, what I was asking was whether all of this is possible.

The main part I’m unclear about from the documentation I’ve read online is whether it’s possible to have the GPU in use on the Linux host, but then easily, without restarting anything, start the Windows VM and have the GPU fully accessible in there (including being able to do things like VR).

I don’t think you can hot-swap your GPU between your Linux installation and your Windows 10 VM. (Maybe somebody can prove me wrong, as I haven’t looked at this recently.)

However, if you’re only doing basic graphics tasks in Linux (web browsing, YouTube, etc.), you might be able to get a cheap GPU (or use integrated graphics) as the dedicated Linux GPU and then use PCI passthrough for the Windows virtual machine.

That said, gaming on Linux has been growing, so there’s also a good chance your games are supported without much fuss. I know there are a bunch of people on these forums who have done something similar and would be willing to help.

I’m also not sure where Linux currently stands on G-Sync, FreeSync, or high refresh rates, so that may cause some other hurdles for you down the road.


Yes you can, but only with NVIDIA, as AMD reset issues prevent this from working reliably.
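For the curious, a rough sketch of what that hand-off can look like with libvirt on an NVIDIA card. The PCI addresses, the display manager (sddm is KDE’s), and the VM name (“win10”) are placeholders; find your own with `lspci -nn | grep -i nvidia` and `virsh nodedev-list --cap pci`.

```bash
# --- before starting the Windows VM ---
# make sure nothing on the host is using the card (stop the display-manager
# session running on it, quit CUDA apps, etc.), then unload the NVIDIA stack
sudo systemctl stop sddm
sudo modprobe -r nvidia_drm nvidia_modeset nvidia_uvm nvidia

# detach the GPU and its HDMI-audio function from the host and hand them to vfio
sudo virsh nodedev-detach pci_0000_01_00_0
sudo virsh nodedev-detach pci_0000_01_00_1
sudo virsh start win10       # VM defined with both PCI functions passed through

# --- after shutting the VM down ---
sudo virsh nodedev-reattach pci_0000_01_00_1
sudo virsh nodedev-reattach pci_0000_01_00_0
sudo modprobe nvidia_drm     # pulls the rest of the NVIDIA modules back in
sudo systemctl start sddm
```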


@anon27052951 That was my reasoning. I want to game on Linux when the game is available or works well through Proton, but I still need my GPU available in the VM for pretty much all VR things and some games that don’t work well yet.
But it seems like it is possible with NVIDIA cards, as Gnif said, which is nice.

Is having a second GPU still required, or am I good with just my GTX 1080? (Got Ryzen, so integrated graphics.)

If you have integrated graphics, then I think you may have a solid shot at this goal!

Let me know how it goes for you!

3700X, so no integrated graphics.
I could put my old GTX 680 in there, I guess.

Sorry about bumping, but I just wanted to ask: how did your setup turn out, @Thibaultmol?
I’ve been thinking about a similar setup, specifically the hot-swapping part.

Hey, sorry, life’s been busy, so I haven’t actually set anything up yet. (Also because I haven’t really had the need to switch to Windows for gaming much. But now I’m getting into Apex Legends, and I’m reconsidering doing the setup if I have the time.)

I’d say nobody should ever give NVIDIA any money. If you already have one, work with it, but please don’t buy an NVIDIA consumer GPU to use with Linux.


For what it’s worth, I had this working (dual-booting the same Windows image both bare metal and in KVM from a raw physical device). I.e. I could boot Windows on the device bare metal, and I could also boot Linux and boot that same OS on the device inside a KVM. It wasn’t too tricky to set up; the keys were installing both the native and virtio drivers so Windows supports both hardware platforms (one being virtual), and extracting the system UUID from the native install and pasting it into the KVM config so as not to trigger Windows licensing invalidation. I think some good hints came from a Spaceinvader One unRAID YouTube guide.
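To make that concrete, here’s roughly what booting an existing bare-metal Windows disk inside KVM can look like with virt-install. Everything below is a placeholder (the VM name, sizes, and especially the /dev/disk/by-id path), a sketch rather than my exact setup:

```bash
sudo virt-install \
  --name win10-metal \
  --memory 16384 --vcpus 8 \
  --cpu host-passthrough \
  --boot uefi \
  --os-variant win10 \
  --disk path=/dev/disk/by-id/nvme-YOUR_SSD_HERE,bus=virtio \
  --disk path=/var/lib/libvirt/images/virtio-win.iso,device=cdrom \
  --import
# bus=virtio only works once the virtio storage driver is installed inside
# Windows (hence the virtio-win ISO); fall back to bus=sata until then.
# Copying the bare-metal machine's system UUID into the domain XML
# (`virsh edit win10-metal`, the <uuid> element) is what keeps Windows
# activation from tripping when the "hardware" changes.
```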

After a few months of dual-booting and getting Looking Glass set up and dialed in, I realized I hadn’t rebooted into bare metal in a long time, so I switched to full-time KVM. Then it became obvious that dedicating an entire device to it was wasteful, so after some experimenting I copied the drive into a sparse raw image, reformatted the drive to ZFS (compression and snapshots are so nice!), and moved the image onto a filesystem on that drive. The trick there is to occasionally run virt-sparsify on the images to reclaim free space. I think I’m getting something like 1.8x compression on the VMs, and they feel faster than bare metal (I haven’t benchmarked them though).
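Roughly, that disk-to-image move looks like this (pool, dataset, and device names are made up; the ~1.8x figure is just what `compressratio` reports for my data):

```bash
# ZFS dataset with lz4 compression for the VM images
sudo zfs create -o compression=lz4 tank/vms

# copy the physical Windows disk into a raw image; qemu-img skips zeroed
# blocks, so the result is sparse (do this with the VM shut down)
sudo qemu-img convert -p -O raw /dev/disk/by-id/nvme-OLD_SSD /tank/vms/win10.img

# check what compression is actually buying you
sudo zfs get compressratio tank/vms

# every so often, again with the VM shut down, reclaim space the guest has freed
sudo virt-sparsify --in-place /tank/vms/win10.img
```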

I also split the Windows image (copied, not cloned) from a baseline snapshot taken before installing games etc., so I have two passthrough VMs to separate work from play. I can only boot one at a time, but it’s a good idea for the work I do, a lot of which is under NDA. Then I have a few more non-passthrough VMs: a macOS VM, Kali, a Windows VM where I keep my ‘persona’ for login sites, etc. Some are routed through VPNs, some not.

I do all this with a single discrete GPU and use the Intel integrated graphics for the host. For me it works pretty well; all that’s really required is more RAM. I’d suggest 32 GB as a minimum, and the faster the better (for the Intel graphics). My system is basically an 8700K + 64 GB RAM + GTX 1080 + NVMe and a bunch of hard drives. I use syncoid/sanoid snapshot replication to a local RAID-Z2 array of old drives, and also rsync them offsite for backup.
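The snapshot/replication side is basically sanoid taking scheduled snapshots and syncoid shipping them to the backup pool; the dataset names below are placeholders, not my real layout:

```bash
# /etc/sanoid/sanoid.conf -- sanoid snapshots the VM dataset on a schedule
#   [tank/vms]
#     use_template = production
#   [template_production]
#     hourly = 24
#     daily = 14
#     monthly = 3
#     autosnap = yes
#     autoprune = yes

# syncoid then replicates those snapshots to the local Z2 pool of old drives
sudo syncoid --recursive tank/vms backupz2/vms
```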

Since then I’ve moved on to booting Linux and keeping everything on a full ZFS mirror vdev, using that 1 TB NVMe drive as an L2ARC (persistent read cache) and a used (disposable, since it gets hammered by writes) 500 GB SATA SSD as a SLOG (basically a write cache). For me this is pretty much the ideal setup.
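In zpool terms that layout is something like this (all device paths are placeholders):

```bash
sudo zpool create tank mirror \
  /dev/disk/by-id/ata-BIG_DISK_A /dev/disk/by-id/ata-BIG_DISK_B
sudo zpool add tank cache /dev/disk/by-id/nvme-FAST_NVME   # L2ARC read cache
sudo zpool add tank log   /dev/disk/by-id/ata-USED_SSD     # SLOG, absorbs sync writes
# persistent L2ARC (surviving reboots) needs OpenZFS 2.0 or newer
```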

The final hurdle I recently overcame was poor Samba performance from the VMs to the host, for some reason I never figured out. But switching to virtio-fs (which recently became available) bypassed that whole issue, and now I’m getting really close to bare-metal performance everywhere that matters. This is all on fairly vanilla Ubuntu 21.04.
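For reference, the virtio-fs share is just a host directory exported into the VM; “win10” and the paths below are placeholders, and the guest needs shared memory backing for it to work:

```bash
# paste via `virsh edit win10`:
cat <<'EOF'
<memoryBacking>
  <source type='memfd'/>
  <access mode='shared'/>
</memoryBacking>

<!-- inside <devices> -->
<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs'/>
  <source dir='/tank/shared'/>
  <target dir='hostshare'/>
</filesystem>
EOF

# a Linux guest mounts it with:
#   mount -t virtiofs hostshare /mnt/hostshare
# a Windows guest needs the virtio-win virtiofs driver plus WinFsp, after
# which the share shows up as a drive letter
```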

For me this is my dream setup. Anyone curious, feel free to ask questions.


Sounds like a nice setup, thanks for the write-up. Just to be clear, you never use your GTX on the Ubuntu host?


I’ve had the GTX enabled on the host before and it works, but occasionally it triggers an obscure bug where the primary monitor gets switched. Since I have the passthrough VM running virtually 100% of the time, to avoid the hassle I now just blacklist the device on the host and only use it in the VM.
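Concretely, the blacklisting is just binding the card to vfio-pci at boot so no host driver ever claims it. The IDs below should be the usual GTX 1080 GPU + HDMI-audio pair, but confirm yours with `lspci -nn` before copying anything:

```bash
echo "options vfio-pci ids=10de:1b80,10de:10f0" | sudo tee    /etc/modprobe.d/vfio.conf
echo "softdep nvidia pre: vfio-pci"             | sudo tee -a /etc/modprobe.d/vfio.conf
echo "blacklist nouveau"                        | sudo tee -a /etc/modprobe.d/vfio.conf
sudo update-initramfs -u        # Ubuntu/Debian; reboot afterwards
lspci -nnk -s 01:00.0           # should now show "Kernel driver in use: vfio-pci"
```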