I am building a workstation for my office to run virtual machines with GPU passthrough.
I am looking for a VM manager that is web based and provides access control. I just want users to open the portal and start the virtual machines they need, without being able to modify anything else.
I tried Cockpit, but it doesn't have any such provision. oVirt is complicated and outdated, and I have encountered major installation issues with it. Proxmox is my last option, but I'd rather use Fedora or Ubuntu Server since they use libvirt.
How did you determine it's outdated?
You could also look at VMware or Proxmox.
libvirt is just a wrapper for the QEMU command line interface, and so is Proxmox, so I'm not sure why you need to use libvirt specifically.
oVirt is not recommended for this type of installation.
For your use case, Proxmox is the best solution.
The CentOS kernel that oVirt Node is based on is 3.x, and I could never get the self-hosted engine up and running with oVirt 4.3.
I was going to try updating QEMU from 2.x to 4.x, but I gave up on that.
From what I have seen, libvirt seems to be better documented and has a much larger presence on forums everywhere.
So troubleshooting and optimisation are a bit easier, especially with things like CPU pinning and tuning.
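For reference, CPU pinning with libvirt is just a few lines of domain XML. This is a generic sketch; the vCPU count and host core numbers are arbitrary examples, not values from this thread:

```xml
<vcpu placement='static'>4</vcpu>
<cputune>
  <!-- pin each guest vCPU to a dedicated host core -->
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='3'/>
  <vcpupin vcpu='2' cpuset='4'/>
  <vcpupin vcpu='3' cpuset='5'/>
  <!-- keep the QEMU emulator threads off the pinned cores -->
  <emulatorpin cpuset='0-1'/>
</cputune>
```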
My only reason to use oVirt was its utilisation of libvirt. I'll try to deploy the whole system on Proxmox and see how it goes. Thanks for your input.
It sounds to me like oVirt would actually work really well for this purpose, because it has an admin portal and a user portal. I am running oVirt on a single server; it is not running oVirt Node, however. I installed CentOS 7 and then installed oVirt as a whole package, so it's running the web UI, engine, database, all the VMs, etc. I installed it back when it was still in the 3 series and it has been upgraded to the latest 4 series, so I'm not sure what a fresh install is like now, but even upgrades follow a similar path to a fresh install.

I haven't really had any issues with it, except once: I was running my firewall as a VM and went to take a disk snapshot before an upgrade, and it took the host "offline" because the network dropped. The snapshot was then stuck in a locked state and I couldn't delete it. I got into the database, manually updated the record to available or something like that, and then deleted the snapshot; it was fine after that.

I've been successful in passing through various devices, including hard drives, network cards, and even a GPU (though it is a Quadro, so no tweaks were needed — but adding hooks to apply those tweaks on VM start is possible).
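For anyone with a consumer GeForce card rather than a Quadro, the "tweaks" usually meant here are the well-known Code 43 workaround in the libvirt domain XML, hiding the hypervisor from the Nvidia driver. This is the standard snippet (the `vendor_id` value is arbitrary, up to 12 characters):

```xml
<features>
  <acpi/>
  <hyperv>
    <vendor_id state='on' value='whatever1234'/>
  </hyperv>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>
```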
Look at the AWS SDK for Ruby or Node.js and look at building your own. I hate to be that guy, but this is very possible with something like that. Just swap the AWS CloudFormation calls for virsh commands to make it KVM compatible.
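As a rough sketch of that swap in Python (the function names and the choice of virsh subcommands are my own assumptions about how you'd map the lifecycle, not something from the thread):

```python
import subprocess

def virsh_cmd(*args):
    """Build the argv for a virsh call (kept pure so it is easy to test)."""
    return ["virsh", *args]

def virsh(*args):
    """Run virsh and return its stdout; assumes libvirt's CLI tools are on PATH."""
    result = subprocess.run(virsh_cmd(*args), capture_output=True,
                            text=True, check=True)
    return result.stdout.strip()

# A rough mapping of the CloudFormation lifecycle onto virsh subcommands:
def create_vm(name):
    return virsh("start", name)      # roughly create-stack

def status_vm(name):
    return virsh("domstate", name)   # roughly describe-stacks

def destroy_vm(name):
    return virsh("shutdown", name)   # roughly delete-stack
```

The web form's backend would call these instead of the AWS SDK; everything else (the form, auth, status polling) stays the same shape.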
You'd have to play with a lot of ideas, get creative, and reverse engineer some things, but I've done something similar at a previous job. We used Ruby on Rails to build what was essentially a web form, and we used the AWS SDK to make CloudFormation calls. A developer would log in (via OAuth), use a dropdown to select their preferred software stack (Java 8, Java 11, .NET Core 2.1, .NET Core 2.2, PHP 7.2), and then click create. We had calls to CloudFormation to show when the creation was complete, spit out the IP, and allow them to delete (destroy the CF stack) when they were done.
Just have several images ready and use a switch statement to pick the image based on the selection.
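In the KVM version of that idea, the "switch statement" could be a lookup from the dropdown value to a prebuilt template VM, cloned with virt-clone. A minimal sketch — the stack keys and template names are placeholders I made up, not anything from the thread:

```python
# Hypothetical mapping from the form's dropdown value to a prebuilt template VM.
STACK_TEMPLATES = {
    "java8":    "tmpl-java8",
    "java11":   "tmpl-java11",
    "dotnet21": "tmpl-dotnetcore21",
    "dotnet22": "tmpl-dotnetcore22",
    "php72":    "tmpl-php72",
}

def clone_cmd(stack, new_name):
    """argv for virt-clone; raises KeyError on an unknown stack selection."""
    template = STACK_TEMPLATES[stack]  # the dict stands in for the switch statement
    return ["virt-clone", "--original", template,
            "--name", new_name, "--auto-clone"]
```

`--auto-clone` lets virt-clone pick storage paths for the copied disks, which keeps the sketch short; a real deployment would probably name them explicitly.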
I'm gonna be honest, this is kinda reinventing the wheel. oVirt or Proxmox will probably cover the requirements.
Proxmox natively supports the GPU passthrough scenario.
Their community is very cool, they offer enterprise support, and they do a pretty good job of leaving the rest of the system functional as a Debian Stable machine.
Their user control system integrates with existing auth systems, so if you're already running an LDAP server, or using Unix auth, it can be plugged in.
I really wish they’d modernize their UI, but it works really well otherwise.
It’s a bit dated, but doesn’t need to change. It works well and isn’t ugly.
I think it’s ugly (and a little distracting), but it’s not a major complaint. The CLI utilities are more my style anyway.
I couldn't get into the Proxmox UI and think it's kinda ugly, which is why I didn't stick with it, but I otherwise liked it and it worked well — just dated. I like the slight redesign of the oVirt UI: it feels and responds in a much more modern way, there are more animations and they're smooth, and pages load and refresh quickly. I think they still have some room for improvement, though. Sometimes you click on something and get a configuration window, sometimes you click on it and then click a button, and sometimes the config window you need is buried several windows deep. That's stuff that bled in from the old UI and could still be improved at some point.
One man’s “outdated” is another man’s “stable”.
Security updates are backported to the CentOS kernel, the same as with RHEL.
RHEL runs the same kernel version as CentOS and is good enough for very large-scale deployments, so I would suggest that the "outdated" CentOS kernel is likely just fine for your requirements, unless you are facing a specific hardware compatibility issue.