The question is too ambiguous. What do you mean by “Cloud Platform?” Do you mean something to manage a cloud infrastructure, like AWS / Linode? Do you mean hypervisor software that supports clustering? Or maybe a “cloud for others, managed by one person” type of software?
Given your description, I will assume that by private cloud you mean a self-hosted infrastructure that one person can handle. It really depends, but the person managing it would likely have to be the one who set it up, or at least have watched someone set it up. At some point the infrastructure will need to grow, so that person needs to know how to set it up and expand on it.
Although I really don’t like it that much, Proxmox is spanking easy to set up and manage. All the complexity is hidden behind a minimalist interface that still gives you access to everything you need. Clustering servers is easy. Making HA groups and fencing is easy. Setting up shared network storage across all the nodes of a cluster is easy. Managing the network is easy.
As much as I don’t like some of Proxmox’s design decisions, for beginners it really is an easy platform to get behind. The only reason I don’t like it is that on the lower end, with just 2 to 4 nodes in a cluster, things can get pretty bad if you can’t resurrect a dead host to re-establish quorum. With 5 or more nodes, Proxmox becomes really good. I’m speaking from experience with my failed homelab and my previous workplace, where I managed a few nodes that are still going strong.
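For reference, this is roughly what dealing with lost quorum looks like from the shell with pvecm, Proxmox’s cluster tool. This is a rough sketch, not a runbook, and the node name is made up:

# check cluster health and vote count on any surviving node
pvecm status
# if a dead node cannot be brought back and the cluster has lost quorum,
# temporarily lower the expected vote count so you can manage VMs again
pvecm expected 1
# once the dead node is permanently gone, remove it from the cluster
pvecm delnode pve-node3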
Someone mentioned OpenStack, but that’s a hard platform to get into. OpenNebula is much easier to get into and is pretty easy to manage. Granted, we migrated from OpenNebula to Proxmox, because we inherited an infrastructure that was not properly set up, managing it was a royal PITA, and migrating was the easier option (since both use KVM, the migration was straightforward: we just created the templates and moved the disk images to their places).
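To give an idea of what “moved the disk images to their places” means in practice, here is a rough sketch of importing an existing KVM qcow2 image into a new Proxmox VM. The VM ID, image path and storage name (local-lvm) are illustrative, not what we actually used:

# create an empty VM shell on the Proxmox node
qm create 100 --name migrated-vm --memory 4096 --cores 2 --net0 virtio,bridge=vmbr0
# import the qcow2 disk copied over from the old hypervisor into Proxmox storage
qm importdisk 100 /root/old-vm-disk.qcow2 local-lvm
# attach the imported disk and make it the boot device
qm set 100 --scsi0 local-lvm:vm-100-disk-0 --boot order=scsi0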
Given some time, I might have figured out how it operated. I did figure out that the VM templates were all associated with completely randomly named disk images, with that mapping tracked in an SQLite database (it could have been MySQL or PostgreSQL, but the person who set it up used SQLite). The web management interface was called Sunstone. On the backend, it likely used SSH to reach the hypervisors (they didn’t have anything OpenNebula-related installed on them), and I am pretty sure it used libvirt to control the VMs, since I could also control them through virt-manager in parallel and OpenNebula would not freak out (at least for simple start and stop commands).
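You can poke at that kind of setup yourself, since plain libvirt domains are visible to the standard tools. Something along these lines, with the hypervisor hostname and domain name made up (OpenNebula conventionally names domains one-&lt;vmid&gt;, but treat that as an assumption here):

# list all domains on the hypervisor over SSH, the same way virt-manager connects
virsh -c qemu+ssh://root@hypervisor01/system list --all
# simple start/stop commands work in parallel with the management layer
virsh -c qemu+ssh://root@hypervisor01/system start one-42
virsh -c qemu+ssh://root@hypervisor01/system shutdown one-42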
At the beginning, I was pretty appalled by OpenNebula. Now, after a few years of not using it and reading more about it from time to time, it seems like a pretty decent platform for a self-hosted cloud. It even has something like a built-in “credits” feature that you can integrate with other platforms: people would either buy credits and run their cloud infrastructure on your hardware through the OpenNebula web interface, or they would just run their workloads and you would calculate the cost and send them an invoice. We never made use of that, or of many other OpenNebula features, which made the move to Proxmox make even more sense.
For a homelab and even a SOHO business, I would say skip Proxmox for now and use libvirt and virt-manager. Depending on the size, you can save a buck by not running HA and instead having a good backup and restore procedure in place. I have seen a VPS infrastructure running on this concept. They didn’t offer HA to their customers (actually, I’m not sure any VPS provider does), so when a host went down, some services would be affected, but they’d get everything back up and running within 1 to 4 hours and notify their customers. It was up to the customers to start their own services back up if they hadn’t set up auto-start scripts, daemons or process supervisors, a.k.a. service managers.
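If you go the plain libvirt route, the backup-and-restore and auto-start parts boil down to a handful of virsh commands. A minimal sketch, with the domain name and paths made up:

# save the VM definition so it can be recreated on a rebuilt host
virsh dumpxml webserver > /backup/webserver.xml
# back up the disk image while the VM is shut off (or use snapshots)
cp /var/lib/libvirt/images/webserver.qcow2 /backup/
# on the restored host: register the VM again, mark it to start at boot, and start it
virsh define /backup/webserver.xml
virsh autostart webserver
virsh start webserver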
While not necessarily easy, LXD has decent documentation and the tutorials are pretty straightforward to follow. It is CLI only, but setting up LXD containers to save some resources where possible is a nice touch if you are not going to use Proxmox. Proxmox already has a built-in system to manage LXC, although IMO it is much more limited than LXD, especially when it comes to distro choice. Proxmox only ships around 20 or so container templates, TurnKey Linux ones included, while LXD has:
lxc image list images: | wc -l
5727
Almost 6,000 images. Granted, some of them are near-duplicates, like Alpine 3.15 for i686, x86_64, armhf, armv7l, aarch64, s390x and ppc64le, Alpine Edge for the same architectures, and so on across many other distros.
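Picking one of those images and getting a container running is about two commands; for example (the container name is arbitrary):

# launch an Alpine Edge container from the public images: remote
lxc launch images:alpine/edge test-ct
# run a command inside it to confirm it is up
lxc exec test-ct -- cat /etc/os-release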
With all that said, this was just to answer your question. In the scenario you presented, I would tell the guys to go pound sand, because I don’t negotiate with terrorists / criminals.