Type 1 Hypervisor, Ryzen, Linux, GPU sharing

Ryzen will have multi-IOMMU support in Linux kernel 4.10, so you may be able to get away with fewer GPUs.

I would use something cheaper (used) for the host/low-power VMs and a full-powered GPU for gaming/high-powered VMs.

First, let me say that until Ryzen is out in the wild we have a lot of unknowns that are, at this point, marketing talking points. It's a little premature to plan a build around yet-to-be-delivered parts... but that is just my opinion.

VMs (virtual machines) are different from KVMs (kernel-based virtual machines with hardware passthrough). Using software like VirtualBox, you can share one GPU between the host system and the guest VM, but you cannot game on such a virtual machine. You can, however, do a lot of things with it: everything that doesn't require direct access to hardware, which covers most of what people do, with the exception of some graphic design software, video editing, and of course most games.

To answer #1: depending on how robust your hardware is, you should in theory be able to run multiple VMs in software like VirtualBox concurrently, all using the same shared GPU. I'm sure there is a hardware saturation point; just where that point lies depends entirely on how much hardware you throw at the system, which isn't going to be cheap.
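For example, here's a minimal sketch of running two VirtualBox guests side by side from the command line, both rendering through the host's single GPU via VirtualBox's 3D acceleration (the VM names and resource sizes are made up for illustration):

```
# Create and register two guests (names are hypothetical)
VBoxManage createvm --name guest1 --ostype Ubuntu_64 --register
VBoxManage createvm --name guest2 --ostype Ubuntu_64 --register

# Give each guest CPU/RAM/VRAM and enable 3D acceleration;
# both guests share the host GPU through VirtualBox's driver
for vm in guest1 guest2; do
    VBoxManage modifyvm "$vm" --cpus 2 --memory 4096 --vram 128 --accelerate3d on
done

# Start both guests concurrently
VBoxManage startvm guest1 --type headless
VBoxManage startvm guest2 --type headless
```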

2 - To achieve a successful passthrough today (pre-Ryzen), you first of all need two GPUs: one for the host and one for the guest. The GPU you are passing through must be "blacklisted", or isolated, from the host system. Once it's blacklisted, the host really has no idea the device even exists, which leaves it free to be used by the guest without any conflict over who owns and controls it. It makes no difference what the device is (GPU, NIC, sound card, USB controller, etc.); for it to work properly in the guest, it has to be isolated from the host system.
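As a rough sketch of what that isolation looks like on a KVM/QEMU host using vfio-pci (the PCI IDs below are placeholders; use the vendor:device pair that `lspci -nn` reports for your second card):

```
# /etc/default/grub -- turn the IOMMU on at boot
# (amd_iommu=on on AMD hosts, intel_iommu=on on Intel hosts)
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"

# /etc/modprobe.d/vfio.conf -- claim the guest GPU and its HDMI audio
# function for vfio-pci before the host's own GPU driver can grab them.
# 10de:13c2,10de:0fbb are example IDs, not yours.
options vfio-pci ids=10de:13c2,10de:0fbb
softdep nouveau pre: vfio-pci

# Then rebuild the initramfs and reboot, e.g. on Debian/Ubuntu:
#   update-initramfs -u
```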

So, with that in mind, when you power off the KVM (kernel-based virtual machine), everything that is physically passed through by being blacklisted is dead and still not available for the host system to use. Items that are passed or shared virtually with the guest, like CPU cores, memory, etc., are returned to the host system when the KVM is shut down.

You have two things going on in a passthrough type of VM (KVM): devices that are blacklisted and removed from the host regardless of the state of the KVM (on or off), and devices or resources that are virtually given to the KVM while it is running and taken back when the KVM is shut off.
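You can see both halves in a single QEMU invocation; a hedged sketch, with 01:00.0 standing in for whatever address your guest GPU actually has:

```
# -m and -smp are virtual resources: the host gets them back at shutdown.
# The vfio-pci device (example address 01:00.0) is physically passed
# through: the host can't see it whether this guest is running or not.
qemu-system-x86_64 \
    -enable-kvm \
    -cpu host \
    -m 8G -smp 4 \
    -device vfio-pci,host=01:00.0 \
    -nographic
```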

It is possible to use various boot options to add or remove bindings/blacklisted items, but that is beyond the scope of what you're asking.
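For the curious, the runtime equivalent of rebinding looks roughly like this; the PCI address and IDs are examples, not values to copy verbatim:

```
# Detach the device from whatever host driver currently owns it
echo 0000:01:00.0 | sudo tee /sys/bus/pci/devices/0000:01:00.0/driver/unbind

# Teach vfio-pci the device's vendor:device pair; this usually triggers
# an automatic bind (if not, echo the address to .../vfio-pci/bind)
echo 10de 13c2 | sudo tee /sys/bus/pci/drivers/vfio-pci/new_id
```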

3 - A hard question, yet easy at the same time. If your host system is Linux, then KVM/QEMU would be my choice, but there are other options; for VMs without passthrough hardware, VirtualBox works very well. If Linux is your host system, then the distro you use is just as important as the hypervisor; not all Linux distros have the same abilities when it comes to virtual environments, especially when you're talking hardware passthrough.
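Whichever distro you land on, check that the kernel exposes sane IOMMU groups before committing to a passthrough build; devices that share a group generally have to be passed through together. The usual sanity check is a small loop like this:

```
#!/bin/bash
# Print every IOMMU group and the PCI devices inside it
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done
```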

If you're using Windows as your host system, I really can't help you... lol.

OK, it seems like I missed a few points; my bad.

When I say "hypervisor" I really mean a Type 1 hypervisor, not Type 2. With a Type 2 this would all be too easy, right? ;)


There's no issue booting a computer without a GPU.


How so? Would you mind giving an example or a link to an article?

So, sgtawesomesauce is releasing stuff specifically for this on Linux.

TL;DR: You only need one GPU. You won't need an iGPU or a secondary GPU. Edit: No-go on that; read sgtawesomesauce's response.


Well, you don't need a GPU for a computer to boot. Obviously you will need one if you want to connect a screen to it, but if you just want it to boot and do things (i.e., run as a server), it doesn't need a GPU.


I remember hearing a computer beep when booting because it didn't have a graphics card, and it refused to boot. I didn't try booting without the monitor plugged in, though.

Eh, when you say that, it's a bit misleading. I'm still not confident I can reproduce GPU hotplug in X11. It's too buggy currently to say it's doable.

You do need a GPU to boot from, but what happens to it once the machine is up is a different story.
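In principle the boot GPU can be taken away from the host once the machine is up and handed to a guest; a sketch of the usual sysfs dance on a UEFI system (the address is an example, and whether the host tolerates it depends on your driver):

```
# Release the boot framebuffer so the host console lets go of the GPU
echo efi-framebuffer.0 | sudo tee /sys/bus/platform/drivers/efi-framebuffer/unbind

# Unbind the GPU (example address) from its host driver, point it at
# vfio-pci, and reprobe so vfio-pci picks it up
echo 0000:01:00.0 | sudo tee /sys/bus/pci/devices/0000:01:00.0/driver/unbind
echo vfio-pci | sudo tee /sys/bus/pci/devices/0000:01:00.0/driver_override
echo 0000:01:00.0 | sudo tee /sys/bus/pci/drivers_probe
```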

@flyingdoggy Might be worth looking into IPMI.
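For anyone unfamiliar: IPMI gives you console and power control with no GPU in the box at all. With ipmitool it looks something like this (the BMC address and credentials are made up):

```
# Open a serial-over-LAN console to the BMC
ipmitool -I lanplus -H 10.0.0.50 -U admin -P secret sol activate

# Power-cycle the machine remotely
ipmitool -I lanplus -H 10.0.0.50 -U admin -P secret chassis power cycle
```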


IPMI will most likely, if not definitely, push this build into the server-grade hardware segment. That's not to say I don't like server hardware, but accessibility is a PITA.

When you say accessibility, do you mean the server, or the card necessary to have IPMI?


I mean accessibility to hardware. I'm not in the US, if that makes sense :( I can still get them; it's just inconvenient.

Also, this specific IPMI card is PCI, not PCIe as advertised... I'm not sure if there is a PCIe IPMI card out there, but I know few motherboards come with PCI slots nowadays.


Ah, makes sense. Getting something like that would have been nice. I'll see what I can do about boot GPU passthrough. I'll make this pretty high on the list.


AMD's multi-IOMMU and Intel's GVT-g are going to change that. Soon we will be able to use one GPU for many VMs.


So this is a combination of new CPU support plus an enhancement (kinda like new APIs?) to the IOMMU itself? I've been looking around and can't seem to find any current info on the development... got any links so I can educate myself?

https://01.org/igvt-g - Xen has support for it, and kernel 4.10 will as well. AMD's multi-IOMMU is still a little hazy, but I have been seeing chatter on the kernel mailing list.
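For reference, GVT-g hands out virtual GPUs through the kernel's mediated-device (mdev) interface; creating one looks roughly like this (the vGPU type name varies by hardware and kernel version, and 0000:00:02.0 is the usual address of Intel integrated graphics):

```
# List the vGPU types the host iGPU offers
ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/

# Create a vGPU instance by writing a fresh UUID into one of the types
uuidgen | sudo tee \
    /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/create
```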


Thank you... so this is going to work for traditional VMs, but it won't be a solution for physical hardware passthrough like I'm doing (I'm guessin' here). Or will it eliminate the need for passing through a GPU?

I'm kinda' basing the question on this:

"In addition to the SVM AVIC, AMD IOMMU also extends the AVIC capability
to allow I/O interrupts injection directly into the virtualized guest
local APIC without the need for hypervisor intervention."

Not trying to hijack the thread... this is relevant to the OP's plans.

Good to know the technology is getting there. I still wonder whether, say, a graphics card with 4 output ports could be shared among 4 VMs and let me connect them all to one KVM switch :D

My guess is that we will be able to do so if we're allowed to share a GPU among a bunch of VMs; it doesn't make sense if we can't.


I am not 100% sure, but if it's like what Intel has, you will be able to run bare-metal VMs and use one GPU to give 3D acceleration to multiple VMs, as well as do live migration with full 3D support.


I've made a simple diagram to make it a bit clearer. Basically, I was hoping to avoid multiple GPUs at the hardware level, which I know would work, but it's just not clean and, most importantly... not cool!

Hopefully with new passthrough technologies this can be made much easier.
