Type 1 Hypervisor, Ryzen, Linux, GPU sharing

So I am planning to build a Ryzen-based workstation, which will be used to virtualize a few desktop environments for different purposes - daily Windows for gaming & web browsing, work-from-home Windows, development Linux, experimental Linux, etc.

Problem is, Ryzen won't come with an iGPU, and I don't expect the motherboards to have onboard graphics - at least not in the consumer market, and let's assume I can't be bothered to wait for the server boards.

If I remember correctly, a computer won't boot without a GPU of some sort. Obviously I will be passing a graphics card through to the gaming VM, but what happens to the rest? I know installing several graphics cards would definitely solve the problem, but that's not the path I want to go down. At the same time, I know I need a few VMs to be "headful" (as opposed to "headless"), so I can use a graphical interface and switch between them. I will need to switch my mouse and keyboard anyway, so I might as well get a KVM switch to take care of the switching.

To summarize, here is a question list:

  • How can I get video output from the VMs as cheaply as possible? There may be a few VMs running simultaneously.
  • What happens to the GPU that is passed through to a VM, when the VM is powered off?
  • Which hypervisor do you think will be more suitable for my needs? I am after a Type 1 (or 0, if it's a thing now) hypervisor, not Type 2.

Update: got a simple diagram for the concept

Get a cheap GPU card... like an AMD Radeon 6350.

I'm assuming it's gonna be "a bunch of cheap GPU cards" instead of "a"? Otherwise, how can I use one GPU to output video signals from a bunch of VMs, and know which port is bound to which VM? I mean... to my knowledge this doesn't sound doable, but if it is, I'd like to know how.

Ryzen will have multi IOMMU support in Linux kernel 4.10. So you may be able to get away with fewer GPUs.

I would use something cheaper (used) for the host/low power VMs and a full powered GPU for gaming/high powered VMs.
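Either way, once the hardware is actually in hand, a quick way to see how finely the platform splits devices for passthrough is to walk the IOMMU groups in sysfs. This is just a generic sketch (nothing Ryzen-specific), assuming the IOMMU is enabled in firmware and on the kernel command line:

```bash
#!/bin/bash
# List every IOMMU group and the PCI devices inside it.
# Devices that share a group generally have to be passed through together.
shopt -s nullglob
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${group##*/}:"
    for dev in "$group"/devices/*; do
        echo -n "  "
        lspci -nns "${dev##*/}"
    done
done
```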

First let me say that until Ryzen is out in the wild we have a lot of unknowns that are, at this point, marketing talking points; it's a little premature to plan a build around yet-to-be-delivered parts... but that is just my opinion.

VMs (virtual machines) in software like VirtualBox are different from KVMs (kernel-based virtual machines with hardware passthrough). With VirtualBox you can share one GPU between the host system and the guest VM, but you cannot game on such a virtual machine. You can, however, do a lot of things, including everything that doesn't require direct access to hardware, which is most of what people do, with the exception of some graphic design software, video editing, and of course most games.

To answer #1: depending on how robust your hardware is, you should in theory be able to run multiple VMs concurrently in software like VirtualBox, all using the same shared GPU. I'm sure there is a hardware saturation point; just where that point sits depends entirely on how much hardware you throw at the system, which isn't really going to be cheap.
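As a rough illustration of that shared-GPU approach, here is what spinning up one such VM with VirtualBox's CLI looks like; the VM name and the memory/CPU/VRAM numbers are placeholders, not recommendations:

```bash
# Create and register a VM, give it a slice of the host's resources, and
# enable the shared/emulated graphics with 3D acceleration.
# "work-vm" is just a placeholder name.
VBoxManage createvm --name work-vm --ostype Windows10_64 --register
VBoxManage modifyvm work-vm --memory 4096 --cpus 2 --vram 128 --accelerate3d on
VBoxManage startvm work-vm --type gui
```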

2 - To achieve a successful passthrough today (pre-Ryzen), you will first of all need two GPUs (one for the host and one for the guest). The GPU you are passing through must be "blacklisted", i.e. isolated from the host system. Once it is blacklisted, the host system really has no idea the device exists, which leaves it free to be used by the guest without any conflict over who owns and controls it. It makes no difference what the device is (GPU, NIC, sound card, USB controller, etc.): for it to work properly in the guest, it has to be isolated from the host system.
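For the curious, the usual way to do that isolation on a KVM/QEMU host is to hand the card (and its HDMI audio function) to the vfio-pci stub driver. A minimal sketch, with made-up vendor:device IDs standing in for whatever lspci reports on the real card:

```bash
# Find the vendor:device IDs of the guest GPU and its audio function.
lspci -nn | grep -iE "vga|audio"

# /etc/modprobe.d/vfio.conf
# Have vfio-pci claim those IDs before the regular GPU driver loads.
# 1002:aaaa and 1002:bbbb are placeholders, not real IDs; adjust the
# driver name (amdgpu/radeon/nouveau) to whatever would normally claim the card.
options vfio-pci ids=1002:aaaa,1002:bbbb
softdep amdgpu pre: vfio-pci
```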

So with that in mind, when you power off the KVM (kernel-based virtual machine), everything that is physically passed through by being blacklisted stays dead to the host and is still not available for it to use. Items that are passed or shared virtually with the guest, like CPU cores, memory, etc., are returned to the host system when the KVM is shut down.

You have two things going on in a passthrough type of VM (KVM): devices that are blacklisted and removed from the host regardless of the state of the KVM (on or off), and devices or resources that are virtually given to the KVM while it is running and taken back when the KVM is off.

It is possible to use various boot options to add or remove bindings / blacklisted items, but that is beyond the scope of what you're asking.
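For reference, the boot-option route mostly comes down to enabling the IOMMU and (optionally) binding the card at boot via the kernel command line, instead of the modprobe.d file above. Another hedged sketch, again with placeholder IDs; the exact grub paths vary by distro:

```bash
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on iommu=pt vfio-pci.ids=1002:aaaa,1002:bbbb"

# Regenerate the grub config and reboot (path differs between distros):
sudo grub-mkconfig -o /boot/grub/grub.cfg
```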

3 - Hard question, but easy at the same time. If your host system is Linux, then KVM/QEMU would be my choice, but there are other options; for VMs without passthrough hardware, VirtualBox works very well. Here's a list of some choices... If Linux is your host system, then the distro you use is just as important as the hypervisor; not all Linux distros have the same abilities when it comes to virtual environments, especially when you're talking hardware passthrough.

If your using Windows as your host system I really can't help you.... lol.
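To give a feel for the KVM/QEMU route on Linux, this is roughly what handing the isolated GPU to a guest looks like at the QEMU level. The PCI addresses and disk image are placeholders, and in practice most people drive this through libvirt/virt-manager rather than raw QEMU:

```bash
# Boot a guest with the vfio-isolated GPU (01:00.0) and its audio
# function (01:00.1) passed through. Addresses and sizes are placeholders.
qemu-system-x86_64 \
    -enable-kvm \
    -machine q35 \
    -cpu host \
    -smp 4 -m 8G \
    -device vfio-pci,host=01:00.0,multifunction=on \
    -device vfio-pci,host=01:00.1 \
    -drive file=guest.qcow2,if=virtio
```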

OK it seems like I missed a few points, my bad.

When I said "hypervisor" I really meant a Type 1 hypervisor, not Type 2. With Type 2 this would all be too easy, right? ;)


There's no issue booting a computer without a GPU.


How so? Would you mind giving an example or a link to an article?

So, sgtawesomesauce is releasing stuff specifically for this on Linux.

TL;DR: You only need 1 GPU. You won't need an iGPU or a secondary GPU. Edit: No-go on that. Read sgtawesomesauce's response.


Well, you don't need a GPU for a computer to boot. Obviously you will need one if you want to connect a screen to it, but if you just want it to boot and do things (i.e. a server), it doesn't need a GPU.


I remember hearing a computer beep when booting because it didn't have a graphics card, and it refused to boot. I didn't try booting without the monitor plugged in, though.

Eh, when you say that, it's a bit misleading. I'm still not confident I can reproduce GPU hotplug in X11. It's too buggy currently to say it's doable.

You do need a GPU to boot with, but what happens to it once the machine is up is a different story.

@flyingdoggy Might be worth looking into IPMI.


IPMI will most likely, if not definitely, push this build into the server-grade hardware segment, which is not to say I don't like server gear, but accessibility is a PITA.

When you say accessibility, do you mean the server, or the card necessary to have IPMI?


I mean accessibility to the hardware. I'm not in the US, if that makes sense :( I can still get them, it's just inconvenient.

Also, this specific IPMI card is PCI, not PCI-E as advertised... I'm not sure if there are PCI-E IPMI cards out there, but I know few motherboards come with PCI slots nowadays.


Ah, makes sense. Getting something like that would have been nice. I'll see what I can do about boot GPU passthrough and put it pretty high on the list.


AMD's Multi IOMMU and Intel's GVT-g are going to change that. Soon we will be able to use one GPU for many VMs.


So this is a combination of new CPU support plus an enhancement (kinda like new APIs?) to IOMMU itself? I've been looking around and can't seem to find any current info on the development... got any links so I can educate myself?

https://01.org/igvt-g Xen has support for it, and kernel 4.10 will as well. AMD's Multi IOMMU is still a little hazy, but I have been seeing chatter on the kernel mailing list.


Thank you... so this is going to work for traditional VMs, but won't be a solution for physical hardware passthrough like I'm doing (I'm guessin' here), or will it eliminate the need for passing through a GPU?

I'm kinda basing the question on this:

"In addition to the SVM AVIC, AMD IOMMU also extends the AVIC capability
to allow I/O interrupts injection directly into the virtualized guest
local APIC without the need for hypervisor intervention."

Not trying to hijack the thread... this is relevant to the OP's plans.