[Solved] Migrating from OpenNebula

Hello Biky. A year has passed since your migration.
Can you tell us the pros and cons? It would be really valuable to know.


Hi. There were definitely pros on the management side, like not having a weird db behind the scenes that bumps the VM to a new iteration every time you modify the slightest thing, like adding RAM or moving it to another host. That may be important in a cloud environment, which is what OpenNebula is designed for, but for us it was redundant information (KISS). No more headaches with the weird selection of VMs either (I mentioned that in previous posts, I believe). Another big plus for Proxmox is that ZFS comes as an option out of the box, unlike on CentOS (our OpenNebula ran on a CentOS base), which was helpful on servers that had no RAID cards. Nothing wrong with md, but ZFS is really neat. Installation on new servers was easy too (we've been buying refurbished servers, roughly one per year, to replace most of the older Intel servers and a few low-end Dells).

We also like that the VM configs don't have strange generated names; they keep their Proxmox number, and the configs are replicated on all the hosts, so we don't keep separate backups of them. It would be close to impossible to lose all 7 hosts at the same time, especially since their root disks are on RAID 1 / ZFS mirrors (the admins before us only had one drive for the root of the hosts, and I heard they had issues in the past, so probably a lesson learned for them). Even if the configs were lost, they would be easy to recreate, since the VM disk images are stored in folders named after the VMs on our NASes. On OpenNebula they were all stored in one folder on the NASes with randomly generated names, as mentioned above (it may have been a misconfiguration or something, but I haven't looked too deep into it).

Bonus points for Proxmox being somewhat decentralized: you can log into any host in the cluster and control all the VMs and hosts from there. OpenNebula had one centralized orchestrator on one of the hosts, and if that went down, we had to get it back up before we could control our hosts again. I found that Virt-Manager worked in parallel with Nebula, since it's just libvirt underneath, but doing so meant Nebula wouldn't update its db, which caused even more problems. The orchestrator never went down for us, thankfully, but I did try virt-manager to control the VMs more easily. Honestly, if I knew how to build manual clusters and HA groups and such in virt-manager, I'd rather use that than Nebula in a small data room, but I never looked into it since Proxmox offered way too much convenience.

We have restarted our Proxmox hosts quite a few times: we migrate all the VMs to the other hosts beforehand and switch the management web interface to another host while the first one is restarting, and it has been going smoothly.
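In case anyone wants to script the "empty the host before rebooting" part instead of clicking migrate in the web UI, it basically boils down to looping over `qm migrate`. Here is a rough sketch, not our actual script: the target node name is a placeholder, and VMs with local-only disks or passthrough hardware would need manual handling.

```python
#!/usr/bin/env python3
"""Rough sketch: live-migrate every running VM off this node before a reboot.

Run on the Proxmox node you want to empty. TARGET is a placeholder name.
"""
import subprocess

TARGET = "pve2"  # hypothetical name of the node that should receive the VMs

def running_vmids():
    # `qm list` prints a table: VMID NAME STATUS MEM(MB) BOOTDISK(GB) PID
    out = subprocess.run(["qm", "list"], capture_output=True, text=True, check=True)
    vmids = []
    for line in out.stdout.splitlines()[1:]:
        fields = line.split()
        if len(fields) >= 3 and fields[2] == "running":
            vmids.append(fields[0])
    return vmids

for vmid in running_vmids():
    print(f"migrating VM {vmid} to {TARGET} ...")
    # --online keeps the guest running during the migration
    subprocess.run(["qm", "migrate", vmid, TARGET, "--online"], check=True)
```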

Other than that, not much has changed. We haven't had problems with stability or anything. I'd still argue libvirt does a better job than Proxmox's qm (or whatever it's called); both use KVM and QEMU underneath, but Proxmox had some issues migrating a few VMs at one point. I don't recall exactly what the problem was, it was a while back; I just remember that after migration those VMs had no network access until we powered them off and back on, and a simple reboot of the VMs didn't fix it. We haven't had that problem since. Speaking of downsides, another issue was that on Proxmox 5.4, Debian 9 ships a version of snmpd that bugs out and stops working until you restart the whole server (restarting snmpd alone doesn't help). Not sure if it's been fixed in Debian 10 / Proxmox 6; we have yet to migrate to 6 (we'll probably begin soon, we've been waiting to see whether anything went wrong with other people's upgrades after corosync 3 was released for Proxmox 5.4). In the meantime we've been using SSH-based checks and they have been working fine. For 7 virtualization servers that isn't a big issue, and the NASes run CentOS, so snmpd works without problems there.
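To clarify what I mean by SSH-based checks, since we never posted ours: it's nothing fancy, basically running a command over ssh from the monitoring box and alerting on the output. A minimal sketch along those lines, with hostnames and the load threshold made up for illustration:

```python
#!/usr/bin/env python3
"""Minimal SSH-based health check (sketch only).

Assumes key-based ssh access from the monitoring box to each hypervisor;
the hostnames and threshold below are placeholders, not our real config.
"""
import subprocess

HOSTS = ["pve1.example.lan", "pve2.example.lan"]  # hypothetical names
LOAD_WARN = 8.0  # warn above this 1-minute load average

def check_load(host):
    # Read the 1-minute load average over ssh instead of SNMP.
    out = subprocess.run(
        ["ssh", "-o", "ConnectTimeout=5", host, "cat /proc/loadavg"],
        capture_output=True, text=True, timeout=15,
    )
    if out.returncode != 0:
        return False, f"{host}: ssh failed ({out.stderr.strip()})"
    load1 = float(out.stdout.split()[0])
    return load1 < LOAD_WARN, f"{host}: load1={load1:.2f}"

if __name__ == "__main__":
    for host in HOSTS:
        ok, msg = check_load(host)
        print(("OK   " if ok else "WARN ") + msg)
```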

That's pretty much all. We've been really happy with Proxmox and wouldn't go back to OpenNebula, but I would probably try it again if I (personally, not at this company) ever launch a VPS / VPC service, which is a long way off if it happens at all, especially since I recall Nebula having some cost controls built in. It would have been fun to give each team / project in our company a monthly budget and let them manage their own VMs and costs on our servers (we had really big issues with servers overloaded with VMs, because people wouldn't tell us whether to power off unused testing VMs, so they were left on and unused 24/7). But that would make our job as sysadmins slightly obsolete and force the teams to take care of their VMs instead of spending that time coding, basically transferring the responsibility. So we just did a huge inventory after all the VMs had been migrated, and we now have a schedule for asking people whether they still need the VMs marked as "on-demand" (some VMs do have to be on 24/7 for testing, stress-testing, bug hunting etc.). I do think the budget approach would have been more efficient from a server-usage perspective, but on the other hand it would have been less efficient on the programming side. OpenNebula was definitely more featureful, but that worked against us, since it complicated things in the administration page.

Not sure what else I should say, this was all that came to mind. Reading between the lines of some older comments: we don't have any other virtualization OS in production anymore, everything runs Proxmox now, even the old OpenNebula orchestrator (which was a pretty good server). We aren't using containers, just plain VMs, because that's how our software is intended to run (a combination of Java servers on Tomcat or JBoss with Oracle or PostgreSQL databases). I don't remember if I gave any perspective on our small "data center" (I call it a data room, we don't have a lot of racks): most of the VMs are used for testing our software, customized for different clients on different environments (server sizes and OSes), and just a few are used for production inside the company, like GitLab, our ticketing system, an internal mail server, and some automation VMs / software for building our product from source. Those are the ones that require HA; most of the other VMs wouldn't be a big loss if they went down for an hour or two. So we're by no means a big VPS provider, and it doesn't make sense for us to use OpenNebula.
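For the few production VMs that need HA, everything can be clicked together in the Proxmox web UI; the CLI equivalent with ha-manager looks roughly like the sketch below. The VMIDs, group name and node list are made up for illustration, not our real setup.

```python
#!/usr/bin/env python3
"""Sketch: register the always-on production VMs as HA resources via ha-manager.

All IDs and names below are placeholders.
"""
import subprocess

PRODUCTION_VMIDS = [101, 102, 103]   # e.g. GitLab, ticketing, mail (hypothetical IDs)
GROUP = "prod"                       # hypothetical HA group name
NODES = "pve1,pve2,pve3"             # nodes allowed to run these VMs

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create the HA group once, restricted to the listed nodes.
run(["ha-manager", "groupadd", GROUP, "--nodes", NODES])

# Register each VM as an HA resource that should always be started.
for vmid in PRODUCTION_VMIDS:
    run(["ha-manager", "add", f"vm:{vmid}", "--state", "started", "--group", GROUP])
```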

It might have been an option if we were going to offer our own cloud hosting to clients (we did host some VMs for clients in the past, but not anymore since the GDPR; we didn't even want to bother being data holders / processors, even with no access to the data itself), which is probably why OpenNebula was chosen in the first place. Now we either let clients run our software on their own infrastructure, as we primarily used to, or run it for them in a public cloud (mostly AWS) and offer SaaS to smaller clients. I probably won't stay much longer at this company; I'll probably go back to the US and find a job at a data center (or preferably at a smaller business) there. Sorry for the long rant.
