[Solved] Migrating from OpenNebula

Hello!

Before I get to the real question, I’ll introduce myself. I’m working as a newbie sysadmin at a small-to-medium company (around 70 people). I was hired a few months ago alongside another newbie sysadmin and an outsourced senior sysadmin. Our IT manager, who had been taking care of the infrastructure alone, left 3 months after we were hired, leaving us to deal with the mess left behind.

Our job is to migrate from OpenNebula to Proxmox VE. Given that they both use KVM under the hood, it should be pretty easy, and even if we have to recreate the VMs, that shouldn’t be too big of a deal. The problem is that the disks (.qcow2 or .raw, depending on how old the machine is) appear to be encrypted, and we have no idea how to decrypt them in order to recreate the VMs on a Proxmox VE server. We know which encrypted disk belongs to which VM; the problem is that we can’t recreate them in Proxmox VE. For example, the VM DBS-22 has the disks “DBS-22-os.raw” -> 9f36f2a95b2f930727b503ecccdb6a29 and “DBS-22-data1.raw” -> 558f8cedebc39ce11f7d22b9fa111b1c on the storage server Cube07: the encrypted or hashed string is what we see on the Cube storage, while the friendly name only appears in the template in OpenNebula.

Has anyone migrated from OpenNebula to another platform and run into similar issues? Does anybody have any idea how to attach the disks to another hypervisor without decrypting them?

Your help is greatly appreciated.

Shameless self-bump

I’m guessing people didn’t understand what I was saying (aside from my post sliding down the page). As an example, here’s what a template from OpenNebula looks like (the full VM template is longer, containing the CPU, RAM etc. allocated; if you’ve worked with OpenNebula, you probably know this already):
(screenshot: the VM template in OpenNebula)

And this is how it looks on the NFS storage

I know the location of each VM’s disks, but I don’t know how to recreate the VMs inside Proxmox VE. I believe the disk names are MD5 hashes (and the disks themselves may be encrypted), but I can’t know for sure. So, has anybody been through such a migration? Just attaching the existing disks to new VMs in Proxmox VE would tremendously reduce the time, as currently our only option is to image the VMs from OpenNebula and clone them onto new virtual disks in Proxmox.

Also, is there some other way to export the machines? Using Virt-Manager, I can see the VMs as they appear in KVM; isn’t there any way to export those machines as they are? (screenshot: the VMs listed in Virt-Manager)

I would really appreciate any and all support.

Quite possibly, but also, if you posted at a time when barely any members were around and your topic dropped off the front page, that reduces its visibility quite a lot.

It’s good that you included tags in your topic’s name; however, next time you need help, include the “helpdesk” tag.

You can follow tags on this forum, so people who follow Helpdesk get a notification every time there is a new thread with the helpdesk tag.

I included the tag in this topic so you don’t have to.

Sorry I can’t help more than that.

Thank you very much.

Are the disks actually encrypted or does OpenNebula just use human-unfriendly names? It may not be necessary to rename them at all.
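If you have shell access to the storage host, a quick way to check is to ask qemu-img and file what they think of one of those hash-named files (the path below is just an example built from what you posted, not a real location):

# an unencrypted image simply reports its format (raw / qcow2);
# an encrypted qcow2 shows "encrypted: yes", and a LUKS container is
# identified by file(1) as "LUKS encrypted file"
qemu-img info /path/to/Cube07/9f36f2a95b2f930727b503ecccdb6a29
file /path/to/Cube07/9f36f2a95b2f930727b503ecccdb6a29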

Check out what virsh dumpxml says about your VMs. This is ultimately the configuration that QEMU/KVM uses, so the XML plus the disk images are theoretically enough to run the VM on any QEMU/KVM host. However, I don’t know if there’s a way to directly import these in Proxmox and/or a sane way to side-load them.

Thanks @carnogaunt
I ran virsh dumpxml on a VM called one-1059 (OpenNebula names the underlying KVM domains however it wants in the back, while showing a different name in its web interface). Here is the dump:
<domain type='kvm' id='45'>
  <name>one-1059</name>
  <uuid>89167d36-7aba-4f69-b377-3211163edd8c</uuid>
  <metadata/>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <cputune>
    <shares>205</shares>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/lib/one//datastores/175/1059/disk.0'/>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/lib/one//datastores/175/1059/disk.1'/>
      <backingStore/>
      <target dev='vdb' bus='virtio'/>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/lib/one//datastores/175/1059/disk.2'/>
      <backingStore/>
      <target dev='vdc' bus='virtio'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/one//datastores/175/1059/disk.3'/>
      <backingStore/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0' model='piix3-uhci'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <interface type='bridge'>
      <mac address='MASKED'/>
      <source bridge='net12'/>
      <target dev='one-1059-0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    </input>
    <graphics type='vnc' port='6959' autoport='no' listen='0.0.0.0'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='cirrus' vram='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='none' model='none'/>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+9869:+9869</label>
    <imagelabel>+9869:+9869</imagelabel>
  </seclabel>
</domain>

Sorry for the wall of text. It seems the VMs exist as plain KVM domains. Should I just try to find a way to bring the VMs into Proxmox VE as they are (and, of course, modify the disk paths to point to the correct locations on the mounted NFS storage)?

It’s worth a try. If you were moving them to a bare libvirt hypervisor it would probably be as simple as adjusting the paths and running virsh define on that XML. However, I expect that Proxmox wants VMs to be created through its native interface so it can track some additional metadata, like OpenNebula. I would also expect that Proxmox would let you create VMs from the existing disk images if you can get them copied onto some storage that it can see. But I’m not very familiar with either of those platforms so I can’t really provide specifics.

TYVM.
I will try to import the disk images directly into similarly configured VMs in Proxmox (manually, as a test) and see how that goes. Even if it’s a manual import, it still saves a lot of time compared to cloning the disks (also, we’re kinda limited on storage).

As further information for anyone else reading: both the OpenNebula and Proxmox VE hosts have the same SAN mounted (like the Cube07 mentioned earlier).
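For reference, the rough sequence we plan to test, pieced together from the Proxmox docs (treat it as a sketch: the VM ID, storage names and path below are placeholders, not our real ones):

# create an empty VM shell in Proxmox with roughly the same hardware
qm create 105 --name DBS-22 --memory 4096 --cores 2 --net0 virtio,bridge=vmbr0

# import the existing image from the shared storage into a Proxmox storage;
# importdisk prints the volume ID of the imported disk when it finishes
qm importdisk 105 /mnt/pve/cube07/9f36f2a95b2f930727b503ecccdb6a29 local-lvm

# attach the imported volume (use the ID reported above) and boot from it
qm set 105 --virtio0 local-lvm:vm-105-disk-0 --bootdisk virtio0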

And why are you migrating to Proxmox? I’m curious, because we are planning to do the opposite: migrate from Proxmox to OpenNebula :slight_smile:

Oh, that’s rather interesting. There are a few reasons, mostly bugs and a lack of some features.

We’re on OpenNebula 5.6.1. When you select some VMs in the web interface (any of the checkboxes), even after you deselect them and it says only 1 box is selected, it may or may not still apply the action you chose (shutdown / terminate / undeploy etc.) to the other VMs that were previously selected. We have to go into each VM’s own page in the web interface and trigger actions only from there, to make sure no other VM is affected. That’s probably the only really risky bug we found (and it was pretty terrible: one DB was corrupted because it wasn’t stopped before the VM was terminated - thank god for backups).

Anyway, both my colleagues have worked with Proxmox. I went along with them because it’s KVM under the hood, so I figured the migration would be pretty easy (I had only used XenServer and Hyper-V before coming here, so migrating to XenServer, for example, would have been more of a hassle).

Another neat thing about Proxmox is the documentation, which seems to be everywhere, while OpenNebula’s own docs are pretty much the only good source of information. Also, OpenNebula is way harder to manage (it requires more fiddling with the terminal - and while none of us are afraid of the terminal, there is quite a lot to read up on, and management is putting a little pressure on us to improve the infrastructure and the workflow without spending too much money - I know, typical 21st-century management). Given our requirements, Proxmox’s live cloning seemed like a tremendously helpful feature (we get requests to clone VMs and reconfigure them as they are, but doing that in OpenNebula requires the VMs to be shut down, and downtime is not really acceptable).

Also, from what I remember (I didn’t study OpenNebula too much), if the main server that OpenNebula is configured on goes down (an unexpected crash, corruption or something), the VMs on the other hosts will be fine, but we don’t have a fallback / failover for the master server (the one the web interface runs on). That means we would have to recreate / import the machines on a reinstalled server or restore from an earlier backup. I might not be explaining this too well, so let me give an example:

  • Main OpenNebula server is Server1
  • Other hosts that have been added in OpenNebula are servers 2 to 11
  • If Server1 fails, the KVM VMs on servers 2 to 11 will keep running, but we won’t be able to manage them from the web interface until we recreate Server1 or restore it from a previous backup

I hope that does it.

From what I understand, if every host is running Proxmox and Server1 crashes, another server in the cluster takes over as the master.

Another pro of Proxmox’s live cloning is due to our poor infrastructure. We have some SANs without hot-swap (some older HP ProLiant MicroServers), and if one of the drives in a RAID fails, we have to shut down all the hosts running from said SAN, power it off, replace the bad HDD, then power the SAN and the VMs back on. This wouldn’t be a problem if we had hot-swap, but this way, if a drive starts failing, we can just live clone the VMs to another storage and run them from there with a lot less downtime.

^This is part of what I was getting at in the OP.

Anyway, there are more neat things about Proxmox; I’m a newbie to both OpenNebula and Proxmox, so I can’t remember / don’t know everything else. Depending on your infrastructure, OpenNebula might make sense. We are mainly migrating for some features and ease of administration.

Interesting… Our problem with Proxmox and LXC is with failover capability. Well, my setup is very different. We have mainly LXC containers and some KVM too, but those are used to run more containers with Kubernetes. :wink:

About your cluster problem with OpenNebula, it seems to be a configuration problem. I read the documentation and here is how to set it up:

Anyway, if OpenNebula doesn’t let me do cloning using snapshots with Ceph… that will be a problem. But I don’t think that will be the case, because of this:

Thanks for sharing your thoughts

Update:
We tried something with the “qm import” command in Proxmox VE (with the VM shut down, of course), with no luck: the VM doesn’t boot. I also tried Virt-Manager’s live migration option (from KVM host to KVM host), but when trying to connect to the Proxmox host, I got the error:

this remote host requires a version of netcat / nc which supports the -U option
Another thing I just discovered is that Proxmox doesn’t have the “virsh” command; I read online that Proxmox doesn’t use libvirt at all, it has its own tooling (“qm”).
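For anyone following along: from what I’ve read so far, Proxmox keeps each VM’s definition in a plain text file rather than libvirt XML, so the rough equivalent of the one-1059 domain above would look something like this (my own guessed mapping, not a file from our cluster; the VM ID and storage name are made up):

cat /etc/pve/qemu-server/105.conf

name: one-1059
memory: 4096
cores: 2
sockets: 1
net0: virtio=MASKED,bridge=vmbr0
virtio0: cube07-nfs:105/vm-105-disk-0.raw,cache=none
virtio1: cube07-nfs:105/vm-105-disk-1.raw,cache=none
virtio2: cube07-nfs:105/vm-105-disk-2.raw,cache=none
bootdisk: virtio0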

This is just an update (and a shameless self-bump, so maybe someone knowledgeable will respond).

At what point does the boot fail? Do you have an error message you can share?

“no bootable device found”
I know which drive is the OS drive, and it is the one selected as the first boot option. When powered on, it just loops into oblivion with “no bootable device found” (the boot option is set only on virtio0 = the OS image).

We managed to find the solution (my colleague did). The problem was that, because OpenNebula generates random image names, we didn’t import the disk with its correct format (qcow2 or raw), and that’s why the VM wouldn’t boot (oh, such a simple fix). Importing from OpenNebula to Proxmox works now. I guess the process is similar the other way around; however, OpenNebula manages its own image database, so there would presumably be a command to import the images first, then attach them to a VM template.
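For anyone hitting the same wall: the hash-style file names tell you nothing, so before importing you have to ask qemu-img what the image really is, then create/declare the Proxmox disk with that same format (the path below is just an example; the convert step is optional if you want to normalize everything to one format):

# the "file format:" line (raw vs qcow2) is what matters, not the file name
qemu-img info /var/lib/one/datastores/175/1059/disk.0

# optional: convert to raw instead of fighting with the original format
qemu-img convert -f qcow2 -O raw disk.0 vm-105-disk-0.raw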

I still don’t recommend OpenNebula. Maybe I’m just being ignorant of it and its documentation, but I had a better time with Virt-Manager than with OpenNebula, TBCH.

One more thing we encountered during these past few weeks: OpenNebula doesn’t support migrating disk images to another datastore (storage). You have to clone the disk, then delete the old images, because of the way OpenNebula manages its VM and image database. And the new image clones will have different image IDs than the old ones (obviously). There has been an open issue about this on GitHub since 2017; it’s low priority and left to the community (if I knew how to code, I’d gladly contribute, just so others won’t run into the same issues I did).
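The clone-then-delete workaround itself is short from the CLI, if I’m remembering the options correctly (the image ID, name and target datastore ID below are just examples):

# clone into the target datastore, then drop the original; the clone gets a
# brand-new image ID, so any template referencing the old ID must be updated
oneimage clone 123 DBS-22-os-moved --datastore 104
oneimage delete 123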

I’m still baffled by OpenNebula, but maybe I’m just being salty.

OK, this might become an OpenNebula failures thread. I don’t want to bash the project too much, but this is unworkable. I had a VM that was working fine and was powered off and undeployed. After a few weeks, that machine had to be powered on again, and guess what happened: the VM is stuck in “LCM_INIT”. It has enough resources to start, and it has had them ever since I first tried to power it on (the 6th of June).

I ran virsh list both on the host I wanted to power it on and on the OpenNebula main host, and the VM was not there (I got the VM’s name, the “deploy ID”, from the main host). So the VM didn’t start at all, and now it’s stuck.

I’m posting this just to let other people know about the problems we had with OpenNebula, so that they know what they’re getting into.

Maybe using zfs send/recv, or pvmove with LVM, etc. As for the VM/image IDs, you can edit the MySQL database directly. We use Ceph, so no problems there, but on my server at home I have OpenNebula with LVM and recently used pvmove to upgrade to NVMe storage.
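For the LVM case, the pvmove route is roughly this (device and VG names are just examples from my home box):

# add the new NVMe drive to the existing volume group
pvcreate /dev/nvme0n1
vgextend vg_images /dev/nvme0n1

# move all extents off the old disk while the logical volumes stay online
pvmove /dev/sdb1 /dev/nvme0n1

# finally remove the old disk from the volume group
vgreduce vg_images /dev/sdb1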

Anyway, I think CloudStack would be a better bet than Proxmox. It’s based on open standards like OpenNebula (virsh, etc.) that you are already familiar with.

I discovered a quick-and-dirty workaround for re-powering the stuck VMs in OpenNebula: I migrated the VM to another host in our infrastructure and got it started. I did this again with another VM, but it doesn’t always work. With one VM I tried this across 4 hosts and it didn’t work; with another, I moved it to a different host, then back to the original, and got it started.

In any case, I believe the terrible experience we had with OpenNebula wasn’t entirely the software’s fault (though the web interface treating unselected checkboxes as still checked is a terrible bug). We have now moved entirely to Proxmox and are quite happy with our cluster. Every host except the original OpenNebula orchestrator is running Proxmox.

I’m only a little familiar with KVM / libvirt (I used it for a while at home), but Proxmox’s qm toolkit isn’t that difficult. Most of the administration was done in the web interface anyway (or, on some older systems not part of the main infrastructure, in Virt-Manager). We are redoing a big part of the infrastructure in this company, both to make our lives easier and to ease the work of the IT sysadmins who will come after us.

In our case, the VMs didn’t use hardware passthrough; they were just disk images (qcow2 and raw), so it was all a matter of creating the new VMs in Proxmox with the same configuration and disk sizes (and, as I mentioned, the initial problem was that we didn’t select the right format for the disk images), then going to the terminal and overwriting the new disks by moving the OpenNebula disks on top of them. It wasn’t that hard, since Proxmox used the same NAS storage, so when creating a new machine we selected the same storage that the old machines were already running from.

Migrating each VM was just a matter of creating a new VM in Proxmox, powering off the one in OpenNebula, moving the disk, then powering on the VM in Proxmox. Each migration took 1 minute at most. We migrated around 270 VMs in 6 weeks, 1 host per week; the actual migration work took around 4 hours per week. We could have done it all in 1 week, but we had other things to do, and the VMs were needed / in use in the meantime.
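To make the recipe concrete, per VM it boiled down to roughly this (VM IDs, storage name and paths are placeholders; the exact target file name depends on how Proxmox lays out the chosen storage):

# 1. create the new VM in Proxmox with matching specs and an empty disk of the
#    same size and format, placed on the same shared NFS/SAN storage (web UI)

# 2. power off the VM on the OpenNebula side
onevm poweroff 1059

# 3. overwrite the empty Proxmox disk with the OpenNebula image
mv /var/lib/one/datastores/175/1059/disk.0 /mnt/pve/cube07/images/105/vm-105-disk-0.raw

# 4. boot it on the Proxmox side
qm start 105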
