Running multiple GPUs, Windows VM GPU passthrough

So my system is:
i7-5820K
16GB quad-channel DDR4-2666
GTX 970

I'm currently running Windows but want to switch to Linux as my primary OS. I would much rather run a Windows VM inside Linux than dual boot.

How practical is it to do this? I do have old GPUs lying around, like an AMD 5450 I could use for the Linux host's display output while passing through my GTX 970.

But how much of a hassle is it to hide and unhide your GPU from passthrough? Say I want to use my GTX 970 on Linux for a little while, then pass it through to a Windows VM for gaming. (I know you need to restart your PC, which isn't much of a problem.)

Would it be better to run, say, openSUSE and run a Windows VM inside that? Or run something like Proxmox with an openSUSE VM and a Windows VM (which would require three GPUs)?

And lastly, how do multiple monitors work? The main thing I was concerned about is running openSUSE with the 5450 + GTX 970 (unhidden, being used on Linux): would I still be able to display video through the 5450 but use the GTX 970 for GPU compute, rendering, or playing games in Linux?

I'm working on this as well. No luck as of yet; however, I think I'm almost there. I'm also working on a write-up, which I'll probably post in the next couple of weeks.

What distro are you using?

Sorry, I appear to have missed part of the question. If you want to run a Windows VM in Linux, it's easy. If you want to run a Windows VM with a physical GPU passed through, it's quite a hassle. I've been working on my system for nearly a month now and it's still not working quite properly.

It makes more sense to run openSUSE with a Windows VM inside it, but if you like the ease of management that Proxmox offers, have a look at installing Proxmox on top of Debian. That will give you the management from Proxmox and a full desktop environment from Debian.

Multi-monitor will be fine on both the VM and your host machine. The trick is setting up Synergy or getting a KVM switch so you can swap between the VM and the host quickly.

I was trying this recently with unRAID, using an older Core 2 Quad system that supported the necessary VT-d passthrough extensions. However, even after editing syslinux.cfg with pci-stub statements and editing the Windows 10 VM's XML file with the specific PCIe device bindings, the nearest I got was a partial boot followed by a black screen. That was with an AMD 6450 and an Nvidia GTX 670.

I've just got my 5820K CPU and motherboard back after an RMA, so I'm tempted to give it another go on the more recent hardware, but yeah, it's a pain having to run commands to identify the PCIe hardware addresses and then manually add lines to get them to work.
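For anyone trying the same thing, the identification part is basically just lspci, and the vendor:device IDs it prints are what go into syslinux.cfg. Something like this (the IDs, example output, and file path here are illustrative rather than copied from my actual config):

```
# List GPUs and their audio functions with PCI addresses and vendor:device IDs
lspci -nn | grep -Ei 'vga|audio'
# Example output (IDs are illustrative):
#   01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK104 [GeForce GTX 670] [10de:1189]
#   01:00.1 Audio device [0403]: NVIDIA Corporation GK104 HDMI Audio Controller [10de:0e0a]

# In unRAID's /boot/syslinux/syslinux.cfg, the stub IDs go on the append line, roughly:
#   append pci-stub.ids=10de:1189,10de:0e0a initrd=/bzroot
```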

You will need two GPUs. It's possible to use multiple configurations in GRUB, with one boot entry where one of the GPUs is blacklisted from the host and another where it's available to the host. It isn't the best situation, but it should be doable.
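As a minimal sketch, that second boot entry could look something like the following. The UUID and the vfio IDs are placeholders you'd fill in from blkid and lspci -nn, the kernel and initrd paths will vary by distro, and vfio-pci still has to load before the graphics driver for the IDs to actually take effect:

```
# /etc/grub.d/40_custom -- extra boot entry that hands the GTX 970 to vfio-pci
# so the host never initialises it (UUID and IDs below are placeholders)
menuentry 'Linux (GTX 970 reserved for VM passthrough)' {
    search --no-floppy --fs-uuid --set=root <root-partition-uuid>
    linux  /boot/vmlinuz root=UUID=<root-partition-uuid> intel_iommu=on vfio-pci.ids=10de:13c2,10de:0fbb
    initrd /boot/initrd
}

# Regenerate grub.cfg afterwards, e.g. on openSUSE:
#   grub2-mkconfig -o /boot/grub2/grub.cfg
```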

It depends on your intent: with every layer you add complexity, plus the need for more robust hardware and more system resources to share.

A typical KVM (VM) with hardware passthrough would have a dedicated GPU for the host Linux system and a dedicated GPU for the guest, and of course each would have its own monitor or monitors. The rest of the peripherals, like the keyboard and mouse, can be shared virtually between the host and guest, or shared physically using a software program like Synergy or a KVM switch. (Don't confuse the two uses of "KVM" when talking about hardware passthrough: the first refers to a Kernel-based Virtual Machine, the second to a KVM switch used to physically transfer a peripheral, i.e. keyboard, mouse, or monitor, between two computers.)
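If it helps to picture how the guest gets its dedicated GPU, here's a rough sketch of attaching a card to a libvirt guest once it's already bound to vfio-pci or pci-stub. The domain name and PCI address are placeholders, not taken from my actual setup:

```
# Rough sketch: attach the guest GPU (PCI address 01:00.0 is a placeholder)
# to an existing libvirt domain. Assumes the card is no longer held by the host driver.
cat > gpu-hostdev.xml <<'EOF'
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
EOF

# Add it to the guest's definition (takes effect on the next boot of the VM)
virsh attach-device win7-guest gpu-hostdev.xml --config
```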

I've been running a setup like this for around 10 months: a Fedora host system running Windows 7 in a KVM. If you have any more questions, just ask and I'll try to help you.

I had this "working" a couple of weeks ago. By that I mean it displayed content, but the kernel segfaulted once I tried to do anything 3D. I forget exactly what I did, but there was some setting for pcie_acs_override somewhere that I needed to enable (it was in the GUI, that's all I remember). That was with an R9 380 and a 660 Ti.
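For what it's worth, the ACS override option exists because of IOMMU grouping: everything in the GPU's group has to be handed over together. A quick sysfs walk like this (standard stuff, nothing unRAID-specific) shows which devices share a group with the card:

```
#!/bin/bash
# Print each IOMMU group and the devices in it; if the passthrough GPU shares a
# group with devices the host still needs, ACS override or a different slot is
# the usual workaround.
shopt -s nullglob
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${group##*/}:"
    for device in "$group"/devices/*; do
        echo "    $(lspci -nns "${device##*/}")"
    done
done
```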

Honestly though, I wouldn't call unRAID reliable enough for something like this. It's a nice system, but just not quite there, in my opinion. I think you'd be best off trying this under Arch. I'm getting close to having my Arch solution working, after which I'll post a guide.

Yeah, I've tried looking around and really haven't found a very clear answer or guide on doing this (some say it's not possible, some say it is).

I don't necessarily have the time to try to do it completely from scratch, especially since my Linux experience isn't quite there yet. The distro I've used and liked the most is openSUSE.
Not quite sure Arch would be up my alley just yet from the looks of it, but if there were good guides on such things, I'd be able to pick it up faster. (Honestly, I don't want to spend 2+ months trying to get it working XD)

I work a lot and like to play games in my free time when I can. I do love tinkering with hardware and software, but I really don't have the time or experience to do more hardcore things like this on my own. I really like Linux (I've been playing with it on and off since I first tried Ubuntu 6.10) and really want to start moving away from Windows as much as possible.

I was hoping Tek Linux would kick off more than it did. I'm a very fast learner but need a little bit of direction.

If you find any good guides on it, or end up making one yourself, I'd love to read it and try it.

Damn, 6.10? That's not bad.

My recommendation is to dual boot for now. PCIe passthrough for gaming is really just starting to become mainstream (in a very neckbeard way), and your best bet is probably to wait till it becomes a bit more stable and widely supported. It seems that the only thing right now that has a point and shoot solution for this would be unRAID, and I wouldn't trust unRAID with my data, because of the way its filesystem is structured.

Part of the issue with the lack of content for Tek Linux is that Wendell is not doing this as his job. I don't want to speak for him as far as priorities go, but from what he says in a few of his videos, he's got a full time job running a business. At the same time, he mentioned that he's gone so deep it's hard to see the surface anymore, making it difficult to know what content would be best tailored to new members of the Linux community.

A good explanation of the way I'm doing PCIe passthrough for Arch can be found here.

At some point I'm planning on writing a guide, but it not only takes time to write; I'm also still trying to finish gaining the knowledge first, since I still haven't successfully passed a device through to a VM. I'm running into the same time constraints you are. That said, I have every intention of creating a guide when I finish it up. The holiday season is over, but that means another semester of school has started, and I've got to make that my primary focus.

unRAID was a first experiment for me. Having the storage solution integrated was a bonus, and it matched my existing storage method, which was a pile of separate HDDs and an external USB 3.0 caddy. It just cut down on the disk swapping!

I have all my data on another Core 2 Duo box running XPEnology (Synology's DSM), which will do for now.

As for unRAID, I had to enable the PCIe ACS override (experimental) option to even see the hardware for passthrough. I may come back to that later. I'm not primarily a Linux user, so the somewhat GUI-led approach was more suitable for me.

I just bought a Zotac CI323 mini-PC, which will run Sophos UTM in a VM (currently Hyper-V + Win10) at the very least and ultimately be my wireless AP as well. I tried to install Proxmox after watching Wendell's video, but it didn't get very far in the boot process and bombed out, so maybe the hardware is a little too new?

So for now I'll carry on using my Core 2 Quad + Intel server board for experimenting with VT-d in Arch, Proxmox etc.

I really like the system, but the storage just feels way too flaky for me. I don't feel like I can trust it for mission critical stuff, and I really need to have a reliable system, as this rig does triple duty as a system for school work, playing games, and programming for work.

You should really look at btrfs arrays in Linux if you like that kind of system. It's similar, but (in my opinion) more reliable. Good to know you've got your data somewhere safe; I've really seen too much data loss.

How are you liking the Zotac? What video-out port did you use? I've found that sometimes Proxmox installs will die just after GRUB if you don't use the VGA port. What was the error? While the hardware isn't specifically on the compatibility list, it should do just fine; it's still good hardware and has all the needed extensions, minus VT-d. What storage are you using for the system?

Did it complain about some VGA setting? If so, try using a VGA cable, and make sure you plug it into the VGA port on the display, not a DVI or other converter. I've had this issue so many times it's been ingrained into my brain under the category of "things I wish I never had to deal with again."

As far as Arch goes, the Arch Wiki is probably your best resource for just about everything. A quick search for just about anything you want to do in Arch will turn up either a forum post or a wiki page. That's why I love Arch: the documentation is amazing.

If you're turned off by the text-based install, you can always install it through the Antergos installer, which is pretty much just a GUI installer for Arch where you can choose a bunch of options, such as which desktop environment you want, whether you want Steam support out of the box, and whether you want AUR support (the Arch User Repository, the greatest thing in existence). If you want to play with GPU passthrough, choose the LTS kernel (or use the linux-vfio kernel from the user repo).
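For the actual passthrough bits on Arch, the usual pattern is roughly the following (close to what the Arch Wiki describes for vfio). The IDs are placeholders for whatever lspci -nn reports for the guest card and its HDMI audio, and the file contents are abbreviated sketches rather than complete configs:

```
# /etc/modprobe.d/vfio.conf -- tell vfio-pci which devices to claim at boot
# (IDs are placeholders; use the vendor:device pairs from `lspci -nn`)
options vfio-pci ids=10de:13c2,10de:0fbb

# /etc/mkinitcpio.conf -- load the vfio modules early, before the GPU driver,
# by adding them to the existing MODULES line (newer mkinitcpio uses an array form)
MODULES="vfio vfio_iommu_type1 vfio_pci vfio_virqfd"

# Rebuild the initramfs for whichever kernel you're using (linux, linux-lts,
# or linux-vfio from the AUR), then reboot:
mkinitcpio -p linux
```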

Which C2Q? Those are nice chips. I actually have my Q6600 in a display case, because of all the fond memories of chasing 3.2GHz on it.

/slap head mode on!

Damn, I forgot the VGA trick; having VGA plus two NICs and USB 2.0 ports is one reason I went with the Zotac. I only received it yesterday and wanted to get Win10 up and running (which I did do via VGA at work!), as I'm familiar with that side of things. I literally just spent the last 20 minutes streaming Dying Light to the Zotac from my X99 gaming PC, even though both monitors are sat one above the other. That was a "just because I can" moment.

I have a couple of 256GB SSDs to swap between, so I can try again in a bit. One issue I'll need to address is that the SSDs don't touch the outer case, so they get rather hot. I've got an old Radeon full-cover water block to cut some copper chunks from; that should fill the gap!

I have a mini museum building up behind me: my recently superseded i5 3570K, a Q6600 @ 3GHz, and a Core 2 Quad and Core 2 Duo with Intel server boards (ECC, VT-x, VT-d, etc.). There's another Core 2 Duo with a desktop board, but that's not doing much. They're cast-off systems from work as machines get upgraded and written off, so I also get lots of 1.5/2TB drives and RAM.

We also dispose of server racks and 1U HP servers on occasion but they're just not practical for a home environment!

Anyway, I'll follow your advice and have a look at Arch, as I don't specifically need the unRAID storage functions, for now at least. I've just got a bit lazy about using the command line since my younger days. Time to get back into tinkering.

Glad to see you've got the thing up and running.

Wish I had spare SSDs to play with, but I'm currently limited to one SSD per PC. Keep 'em cool, and feel free to send some my way!