Which Hypervisor would you suggest for my situation?

Howdy,
I’m a long-time user of VMware’s ESX/ESXi stack both personally and at work, and I currently use a VMUG Advantage license for my home lab. However, a host of factors have led me to think it’s time to stop renewing my VMUG membership this year and save the $200 by switching away (this was on my mind even before the Broadcom acquisition news, which only worsened things…). I’ve got 46 days left on my licenses to move if that’s the direction I do go, but I’ve not been paying much attention to the alternatives over the past couple of years, so I’m not sure which direction to take.
I’ll try not to be too rambly here, so I’ll get right to what I’m looking for.

Features I’m looking for:

  • Needs to run on both datacenter hardware and business-grade workstations. I have 48U worth of servers that I still play with, but I’ve been migrating some of my normal workloads to cheap OptiPlex 7070s and cast-off SFF PCs I get from work, for power efficiency and to quiet down the lab, which recently had to be moved to my basement.
  • Needs to support PCIe passthrough and know not to move VMs with passthrough enabled to a different host. I have an HBA passed through to my TrueNAS VM so that it can see my disk shelves directly and do its thing. I also have a VM that I pass a GPU to, and various other networking VMs that get their own quad-port NICs.
  • Ideally should have a centralized management portal, with bonus points if I can create restricted user accounts as well. I share some of my VMs with other folks who admin them; e.g., my D&D DM has admin on our virtual tabletop software and currently has access to the VM through vSphere.
  • Ability to move workloads between hosts. Although it’s not strictly necessary that this be automatic, it’s super handy for testing different hardware and moving workloads around as I add/remove hardware from the rack. Live migration isn’t something I’ve used lately, but it’s obviously a huge plus in some situations.
  • Ability to assign multiple vNICs to machines. This isn’t nearly as big of a deal, as I could solve not having multiple vNICs with passthrough, and I could always just physically cable my iSCSI network if needed, but it’s currently a virtual network on my host that serves my TrueNAS machine, so keeping my current setup would be a plus.
  • Ability to bring up a remote console via a browser. Not all of my VMs have easy-to-use remote control options, and I have a lot of different machines that I need to admin from. Just on my desk right now I have 5 different machines, so not needing to install any software to bring up a console is basically a necessity. If there’s a software console that works on all OSs (I use Windows, macOS, Linux, and FreeBSD regularly) I can make it work, but I’d prefer to just have a browser option.
  • Ability to monitor the host hardware. Although I keep my high-priority workloads on good, reasonably modern machines, most of my hardware is enterprise cast-offs and could die, or show signs of dying, at any time; being able to monitor for this is a big plus and makes my life much easier.

I know the last time I researched this a few years ago the community seemed to overwhelmingly love Proxmox, but, at least at the time, I wasn’t a huge fan. I’d certainly be willing to give it a shot if that’s the right answer, though.
I know Wendell did a video on XCP-NG semi-recently as well, but I’ve not looked into it much myself, and I never admin’ed our Citrix environment, so I don’t have any experience with Xen.
Being a Windows Admin, you’d think I’d like Hyper-V, but I’ve never had a good experience with it, and I have no desire to use it personally. We’re switching to it for work, but purely for the Azure stack, and I don’t plan to implement Azure monitoring and such for my home lab.
I’ve used Linux KVM in the past, but last I used it, it wasn’t particularly easy to administer multiple boxes.
I’ve not used bhyve yet, and if I recall correctly, last time I was doing hypervisor research it was still not recommended to use for any production workloads, but is it worth checking out?
Or should I just eat the $200 and get another year of ESXi/vSphere? Or is there something else out there that I’m not familiar with that would be exactly what I’m looking for? Hoping y’all can help me out here :slight_smile:
Thanks in advance!

EDIT:
Also, being able to automate the transition of my VMs from my current ESXi hosts to the new system would be a HUGE plus, obviously, but I’m willing to put in some work to do it manually if needed.

I think Proxmox does all of that except the moving workloads. They do allow clustering if you do that right from the start. You’d need 3 identical machines in your rack.

Proxmox has a couple of important features: ZFS by default, which is much faster for snapshots (they’re instant), and LXC containers, which let full-fledged Linux machines boot instantly on tiny resources. It’s Docker for normal people.
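For instance, snapshotting a guest on ZFS-backed storage is a single quick command per guest (the IDs and snapshot name here are just made-up examples):

```
# Snapshot a VM (ID 100) before an upgrade -- instant on ZFS-backed storage
qm snapshot 100 pre-upgrade

# Same idea for an LXC container (ID 101)
pct snapshot 101 pre-upgrade

# The underlying ZFS snapshots are visible from the host
zfs list -t snapshot
```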

2 Likes

I use Proxmox at home, albeit on a single host currently, and it seems like it would fit all those features. It’s just regular Linux, so you get all the hardware support that comes with it. XCP-NG is alright; I ran into a bug involving not being able to remove unused NFS shares that were no longer served, but it was otherwise fine. Again, it’s Linux, but it’s a bit more appliance-like and you’re discouraged from making changes to dom0 most of the time.

We have a Hyper-V failover cluster at work that is pretty solid using StarWind vSAN. I’m mostly neutral on it, since the MMCs are only okay. It could use some better interfaces without WAC or having to fork out cash for SCVMM.

2 Likes

I’ve started looking at Proxmox again, and it seems like either it’s much improved since I last looked at it, or my understanding of the concepts it uses is much better (or both), haha! I’ll definitely be putting it on a box to try it out again.

Do all machines in a cluster have to be exactly the same? That’s unfortunately going to make it mostly unusable in my environment. Even my systems that are the same model are usually different in CPU/memory config… I do have 3 originally identically spec’ed IBM/Lenovo x3550 M4s (I’ve swapped some memory around so that one could be used for high-memory workloads, but that could easily be fixed), but they’re not the most efficient machines in my rack, so I definitely wouldn’t want to run all 3 as my primary cluster, and I only have 2 IBM drive sleds for those, so I’d probably have to boot them over iSCSI (assuming Proxmox doesn’t have an issue with that) if I do try to cluster them… Might be a good project to play with, though.

Interesting. You were using an NFS share mounted on the hypervisor itself? How was performance on that? Or was that more for ISOs and such?

I was really hoping we could move to VMware vSAN at work and go for more of an HCI setup, but instead our head of infrastructure decided that the VMware tax isn’t worth it anymore and to move us to Hyper-V, with the potential to move to the Azure HCI stack eventually (won’t ever happen with the current team he has under him, and we recently [mostly] mutually agreed I wouldn’t work out well working under him, haha!)

It was for ISOs and some VMs but basically if I deleted the share off the NFS server first without unmounting the share from within XCP-NG, it wouldn’t let me unmount it without rebooting the hypervisor host.

I think the machines in the cluster have to be similar enough that the VMs can be moved without any tweaking. Obviously no hardware passthrough if you want to do that; all your devices have to work with the same drivers, so generally this will mean the virtual drivers, not the real hardware ones.

I suggest you buy some USB sticks for Proxmox in each machine and try clustering your hardware with a very simple storage drive in each machine. Disconnect the current storage and just run this as a test. You could buy 3 cheap hard drives as the storage, just for a test.
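A throwaway test cluster really is only a couple of commands per box if you go this route (the cluster name and IP below are placeholders):

```
# On the first test node
pvecm create testcluster

# On each additional node, point it at the first node's IP
pvecm add 192.168.1.10

# Check quorum / membership from any node
pvecm status
```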

Your few options are Proxmox, XCP-ng, Virt-Manager, oVirt, and maybe some plugin for Cockpit. That’s for a data center admin. If you want a self-hosted cloud, in which you can have multiple tenants running stuff in the background, there’s OpenStack, which is really heavy (I don’t really recommend it), but there’s also OpenNebula, its competitor, which is much more lightweight, more tightly integrated, and somewhat (?) easier to set up. Except for Virt-Manager, none of these require anything to be installed on workstations, other than a somewhat modern web browser.

Additional notes: except for XCP-ng, all of those run on QEMU+KVM, with Proxmox using its own tooling called qm, while all the rest use libvirt. The tooling doesn’t matter, it’s just CLI voodoo; they are compatible with each other, just a pain to migrate between qm and libvirt (ask me how I know).

Proxmox
By far the easiest to use and easiest to get into. By a long shot! You’ll probably already be familiar with the panel layout in the web interface. Easy to install, easy to configure, pretty intuitive, pretty powerful.

That is wrong.

No. In my home cluster, I had a Pentium G4560, a Lynnfield Xeon X3450, and a Celeron J3455, all with different specs. All you need is similarly configured servers if you want HA and fencing, so that, for example, a VM with 32 threads doesn’t land on a machine with just 8 cores / 16 threads total, or one with 64GB of RAM doesn’t land on a server with just 32GB. It’s not that it couldn’t happen, it’s just that it would fail. I think it would even tell you that you cannot do that, but I never tried it, as all the VMs I had in a production environment were small, with the biggest one having 8 threads and 32GB of RAM, and that was a rarity.

If you want an HA cluster, you don’t even need 3 servers, just 2 similarly configured servers and a witness / tiebreaker, which can even be an RPi running corosync. Keep in mind that if you only have 2 servers, you should only use about 40% of the resources on each, so that if all the VMs live-migrate to the other one, you’ll have enough resources for them. Don’t over-provision CPU and RAM.
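For reference, Proxmox ships this witness role as a “QDevice”; a rough sketch of wiring up a Pi as the tiebreaker looks like this (the IP is a placeholder, and the exact packages/steps may differ a bit by version):

```
# On the Raspberry Pi (Debian/Raspbian): install the external vote daemon
apt install corosync-qnetd

# On each Proxmox node: install the qdevice client
apt install corosync-qdevice

# From one cluster node: register the Pi as the external vote
pvecm qdevice setup 192.168.1.5

# Verify the extra vote shows up
pvecm status
```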

Proxmox 6 was kinda buggy; 5 and 7 were better. But still, out of around 10 or so servers, only 1 had weird network card bugs, renaming its interfaces to “renamed6” and “renamed7” instead of something sane like “enp4s1”, and the names changed after each reboot.

XCP-ng
Pretty ballin’. It uses Xen instead of KVM, but it’s still very compatible and fast; you won’t notice a difference between the two. I haven’t used Xen in years, though, and I never had an HA cluster or tried live migrating, so I don’t know how well that works. On KVM, live migration or HA failover is instant and you don’t even lose ping packets. I don’t know about Xen, but I’ll assume it’s similar, given that AWS runs on Xen, if I’m not mistaken.

I can’t really say much about XCP-ng in general, as I haven’t used it; I only used XenServer and XenCenter on a cluster made out of desktops back in 2016-2018. It worked for what I needed. XCP-ng should be on par feature-wise with Proxmox.

oVirt
I can say even less about oVirt. It also uses KVM, like Proxmox, and has lots of features; I don’t know how clustering works on it. Do your own research, I guess? The only problem with oVirt is that Red Hat kinda abandoned it? At least that’s what I remember reading on this forum. But it is still getting updates; the last one was in April. Big projects like this don’t get massive updates, mostly patches. But now it’s on life support and the community has to pick up the slack.

Virt-Manager
Not much to say. Easy to get into, hard to master. It has some advanced features via XML files. It can migrate VMs from one box to another, as long as the hosts aren’t running operating systems that are too different, like we used to: we couldn’t migrate between CentOS 6 and CentOS 7 (and that was when 6 was still supported). I don’t know if there are any HA and fencing options in it, but it is KVM, so the groundwork is there underneath… just that setting it up would be a royal PITA. It’s mostly a replacement for the horrible VirtualBox on desktops, but you can run headless hosts and connect to them via Virt-Manager, as it uses SSH in the backend.

You need to install a program if you want to use it: the program is Virt-Manager itself. It runs on any Linux distro, can run on Windows under WSL2, and I don’t know about macOS. It doesn’t run on Android AFAIK. It’s mostly used on desktops for VFIO / GPU passthrough builds for Windows VMs.
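To make the SSH bit concrete, connecting to a headless box and moving a guest from the CLI looks roughly like this (hostnames and the guest name are made up; live migration also needs shared or mirrored storage):

```
# Open Virt-Manager against a remote headless KVM host over SSH
virt-manager --connect qemu+ssh://root@kvmhost1/system

# Or do the same from the CLI with virsh
virsh -c qemu+ssh://root@kvmhost1/system list --all

# Live-migrate a running guest to another libvirt host
virsh -c qemu+ssh://root@kvmhost1/system \
  migrate --live myguest qemu+ssh://root@kvmhost2/system
```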

Cockpit
It’s a management interface for Linux servers. It has a plugin for KVM, so you can use this instead of Virt-Manager, but it’s still a bit of a DIY. Still, it gives you some control over the box you administer. It probably doesn’t have a lot of options, but it may get the job done. The same deal with HA probably applies: you’d need to DIY it.
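On a RHEL-ish box the KVM bit is just the cockpit-machines package (package names on Debian/Ubuntu differ slightly):

```
# Install Cockpit plus its KVM/libvirt plugin (RHEL/Alma/Rocky style)
dnf install cockpit cockpit-machines

# Enable the web interface, then browse to https://<host>:9090
systemctl enable --now cockpit.socket
```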

OpenStack
It’s a big, clunky, resource-hungry, DIY cloud infrastructure software. It’s most likely overkill for a homelab, and probably still overkill for a datacenter that doesn’t have multiple tenants or other businesses administering VMs. But it can do it. It’s the most widespread cloud infrastructure, with many corporations just rebranding it and adding a few touches here and there. Vendors include IBM and HPE, but OpenStack itself is open source; you don’t need to buy servers from those guys to use it, you just can’t use their own optimizations.

OpenNebula
I have used OpenNebula, and it is pretty cool. It is a DIY cloud option, but the only real thing you have to install is OpenNebula itself and either a DB server, or use SQLite. The guy who implemented it before me used SQLite, and the interface was fine for 3 users. It has nice user administration and many options. It can get tricky to set up, but it doesn’t have to be overly complicated for a data center environment. It even has payment / credit options for VMs, in case you want to do calculations on what VMs would cost if you had tenants in them. I never used that feature.

OpenNebula only needs to be installed on a single host or VM, preferably something that you don’t run VMs on, but you can run VMs on it too. The way it keeps track of VMs is pretty fascinating. Unfortunately, the disks in the backend can be hard to keep track of. I highly suggest using Postgres instead of SQLite, because you may need it if you detach a disk and don’t know which VM it came from; the DB contains all that information. Well, normally you wouldn’t do that, but we had a problem with the way it was set up in our infrastructure, and all the VMs had their disks in one folder, basically. It probably could have been fixed, but nobody knew how it was set up (the guy who did left the company 3 months after I came in, and the only other IT guys were my colleague, who came 1 month after me, and an outsourced dude).

I remember OpenNebula being very quirky and buggy, giving us a lot of headaches. For example, the web filter for VMs didn’t actually filter the VMs: when you hit “Select All” you would not select just what was shown, but the whole page including the hidden items, risking deleting many VMs at once. Such a pain… We always had to go inside the VM properties and then delete it or move it or modify it.

I do think we didn’t give it enough of a chance, but it would have needed a lot of rework to make it work for us. Moving to Proxmox was a good move.

I wouldn’t really recommend it for your needs, but it’s there if you need it. Better than OpenStack because of integration.


Honorable mention

While not exactly VMs (although it can run VMs too), LXD is an orchestrator for LXC, Linux containers. Not OCI containers (Docker, k8s, Podman), just LXC. If your workflow allows it, like if you have tons of databases, web servers, mail servers, DNS servers and so on, LXD can do wonders. It is CLI-only, but it is very easy to get into.

For compatibility reasons, it can launch images in VM mode, but there’s not much point. I find LXD to be way better than what Proxmox offers with their LXC controls (just like with qm, they use pct instead of the lxc- commands; LXD, confusingly enough, uses an lxc command without the dash, probably standing for “LXD client”). Also, Proxmox can’t launch containers it doesn’t know the OS of, so you are stuck with their offerings. They’re not too bad (you have CentOS / Stream / Rocky / Alma / RHEL, Ubuntu, Debian, Alpine, Gentoo, Arch, and many Turnkey Linux images), but it is a bit limited.
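To give a feel for the difference, here’s the same “give me a Debian container” on each (the container names are made up, and the exact image alias / template filename depend on what’s available on your setup):

```
# LXD: pull a Debian image from the public image server and start it
lxc launch images:debian/11 dns1
lxc exec dns1 -- bash

# Proxmox: refresh the template list, download one, then create from it
pveam update
pveam available
pveam download local debian-11-standard_11.3-1_amd64.tar.zst   # filename is an example
pct create 200 local:vztmpl/debian-11-standard_11.3-1_amd64.tar.zst \
    --hostname dns1 --memory 512 --rootfs local-lvm:8
pct start 200
pct enter 200
```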


Now for an even more opinionated view: I personally do not plan to run Proxmox again, due to how hard it is to recover a cluster’s health in a home environment. I had 2 servers die, so I couldn’t delete the cluster on the last one standing. I can put it manually in standalone mode, but it is still in a failed cluster state. I even tried creating Proxmox VMs and adding them to the cluster to get it back to health, but that didn’t work out, because you can’t modify the cluster while it’s failed. And if it’s standalone, you have some restrictions, and you always have to run some commands after each reboot to allow you to start VMs and such.
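(For anyone who hits the same wall, the command I mean is the quorum override, which a sole surviving node has to re-run after every reboot; roughly:)

```
# On the last node standing: pretend one vote is enough so the cluster
# filesystem goes read-write again and VMs/containers can be started
pvecm expected 1

# This does not survive a reboot, hence running it every single time
```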

And it’s a royal PITA. With more than 5 servers, Proxmox is fine. Nothing against Proxmox when it comes to having many servers, in fact, it makes things very easy. But it’s just not for me anymore, at least not for my homelab.

Moving to LXD on multiple ARM SBCs and Virt-Manager on a single x86 PC. bhyve sounds fun, but I don’t have much experience with BSDs outside of pfSense, OPNsense, a bit of FreeNAS before it changed to TrueNAS Core, and an OpenBSD VM from time to time. Planning an ARM OpenBSD router, because network devices shouldn’t run the same OS as the other ones.

I still recommend Proxmox to people, especially to VMware refugees. If you can buy a subscription from them to support their efforts, all the better, but you’re not required to.

Again, above are opinions. Uninformed opinions at that, I don’t know much about oVirt, Cockpit and XCP-ng, I should have not talked as much about them.

Regarding user administration, I think both Proxmox and XCP-ng offer some advanced options. OpenNebula definitely does. Proxmox should be easier to get into. You have Administrator, PVEAdmin, PVESDNAdmin, PVEPoolAdmin, PVEUserAdmin, PVEVMUser, and more roles to pick from.
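For the “D&D DM gets access to one VM” case, the Proxmox side is roughly this (the user name, realm, and VM ID are made up, and the acl subcommand spelling differs a bit between versions — older ones use aclmod):

```
# Create a user in the built-in PVE realm and set a password
pveum user add dm@pve
pveum passwd dm@pve

# Grant the PVEVMUser role (console, power on/off) on just that one VM
pveum acl modify /vms/150 --users dm@pve --roles PVEVMUser
```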

4 Likes

I’ve got plenty of storage drives laying around. When we retired our last set of blade servers, our network guy just sat them in the scrap pile without even bothering to wipe them, so I took all the drives out for myself :slight_smile:
I’ve already installed Proxmox on one machine and am evaluating it now. It seems pretty slick so far. It’s definitely simplified compared to what I’m used to, but that’s not necessarily a bad thing anymore. I was surprised at how easy it was to pass a physical SATA disk through to a VM without passing through the whole SATA controller… Definitely going well, haha!
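(For anyone following along, the whole-disk passthrough is basically one command pointing a virtual SCSI slot at the raw device; the VM ID and disk serial below are made up:)

```
# Find the stable by-id path for the physical disk
ls -l /dev/disk/by-id/

# Attach it to VM 105 as an extra SCSI disk (example serial number)
qm set 105 -scsi1 /dev/disk/by-id/ata-WDC_WD40EFRX_WX12D12345678
```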

2 Likes

Thanks for the breakdown here. I haven’t gotten to read through the whole thing, just came to post a quick update before bed, but I’ll definitely read through this thoroughly tomorrow.
I’ve already found one bug in Proxmox 7, which I installed on an OptiPlex 7070 for testing: it seems it’s not compatible with certain newer guest kernels; specifically, the 5.14 kernel that AlmaLinux 9 uses throws a panic as soon as you boot it up. I’m up and running with Alma 8 for now, though. Most of my “production” workloads are on RHEL8-based installs for now, so no big deal, but I was hoping to kill 2 birds with one stone here, haha.
Thanks again!

1 Like

Again, thank you for the breakdown here. I’ll have to do some experimenting with Proxmox’s clustering to see if it’s going to work out for me, but based on this, I expect it should, at least within each class of device. I suppose I could just have 2 or 3 separate clusters if I really needed to based on my hardware. Might be a good experiment to have the supervisors for one cluster be inside the HA pair of another cluster just to see how/if it breaks, haha!

I hadn’t thought about using Cockpit to manage KVM for some reason… It makes sense, but as you mention here, there’s a bit of a want for extra features I’d have to build out myself if that’s the route I take. Definitely something I should have thought of though, since half of the Linux distros I use want you to use it as the default administration option. I probably won’t go that route, at least this time, but it does give me some ideas of things to play with in the lab.

I will have to give XCP-NG a try as well. I also haven’t worked with Xen other than a handful of basic helpdesk-type stuff earlier in my career. I am already liking Proxmox at this point though, so even with the potential pitfalls, and the bug I found, I think I might just go ahead with rolling it out. Seems like it would be extremely easy to move between most of the options here, since it seems like most are just KVM under the hood.

I don’t have many workloads running in containers at the moment (I probably should containerize more of my workloads, but maintaining that long-term just seems like extra work to me, since a lot of it is specific to my lab/work environment), but even the few containers I do have, I typically deploy as a set of VMs that work as a Kubernetes cluster. Please feel free to let me know if I’m thinking of containers the wrong way, though, haha! Definitely not afraid of getting into the CLI in any of the OSs I work with. Even early in my career (like level 1 helpdesk tech), when someone at work needed something done on a CLI, I was the guy they asked if our Sr. Systems Architect wasn’t available, and he’s always busy, haha!

1 Like

LXC containers are just like normal VMs, except very small and fast to set up and load. You don’t even need to think of them as containers.

Interesting… Haven’t really looked at LXC, I’ll have to do some research. Thanks.

LXC or Lexy: Containers for people who prefer proper VMs.

I think the limitation is that they use the host’s Linux kernel, so you can’t do Windows this way. You can go mad and have loads of them; they’re very light on the system, but you can still allocate massive resources if you want.

1 Like

Yep. Two different beasts. Docker used to use LXC, but it still had a different management and deployment model.

As wayland mentioned, think of LXC as a VM. You can choose from the available options in Proxmox and mess around; it’s pretty easy to get started. But I prefer managing through LXD, as there are more OS options and easier administration tools.

LXC uses the host OS’s kernel. So if you are running a CentOS 7 VM, you’d have the CentOS 7 kernel, I think 3.10 or something ancient like that. But if you are running a CentOS 7 LXC, you’d be using your host’s kernel; for Proxmox, that would be 5.10-something-pve IIRC. You can run all kinds of OSes and all of them will run on the host kernel. One can have WireGuard, another can have NFS shares (although that’s a bit glitchy / buggy), another can have a DNS server that doesn’t use any of the kernel features, and so on.

Managing and updating LXC is just like managing a VM. You SSH into it, or open your web console and use your package manager to update it. It is literally, like wayland put it, containers for people who like VMs.

They have their limitations, like for example, in Proxmox there is no live migration yet and containers need some higher privileges to access certain kernel modules, which basically makes it no different than running software straight on your host, but with its location changed (kinda like chroot on steroids).

You can restrict a container’s resources, like you can for a VM. It’s the default in proxmox, on LXD you just share everything with everyone, like a big happy family, until someone eats the last cookie.

Migration on Proxmox works by shutting down the container, copying the data over to another host, then starting it (or maybe copying the data, then stopping and starting; I don’t remember). But it was like 5 seconds of downtime per migration. KVM does not have that limitation: the data is copied to another host and the RAM contents are brought over too, meaning no downtime.
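In command terms, that’s roughly the difference between these two (IDs and node names are placeholders):

```
# Container: restart-mode migration -- stop, copy, start on the target
pct migrate 200 node2 --restart

# VM: true live migration -- RAM contents are streamed over, no downtime
qm migrate 100 node2 --online
```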

With LXD, you can achieve the same thing with CRIU, I believe, but I haven’t gotten into that just yet. For my own needs, a bit of downtime is no problem as of now. Rather, I prefer setting up redundancy at the application or at least the network layer. For things like DNS, you can run 3 instances of bind9; on 2 of them you run keepalived, so if one container goes down, the other one will take over its IP address and serve things pretty fast. If both of them fail, or if the first one doesn’t go down but there’s a config issue and bind can’t start up or can’t return queries, DNS 3 will be the failover one. It will be slow, because DNS takes an awfully long time to time out requests before it tries querying the secondary DNS server.
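A minimal keepalived pairing for that bind9 setup might look like this on the primary (the interface name, router ID, and floating IP are all made up; the second container gets state BACKUP and a lower priority):

```
apt install keepalived

cat > /etc/keepalived/keepalived.conf <<'EOF'
vrrp_instance DNS_VIP {
    state MASTER            # BACKUP on the second container
    interface eth0
    virtual_router_id 51
    priority 150            # e.g. 100 on the backup
    advert_int 1
    virtual_ipaddress {
        192.168.1.53/24     # the floating DNS IP clients point at
    }
}
EOF

systemctl enable --now keepalived
```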

Same for databases: just do a cluster, and if one dies, the other one becomes active. HA can help with unreliable hardware, but I find that most of the time, from the biggest, baddest server to the cheapest SBCs, the software is usually what fails first, usually because of misconfigurations. It’s no wonder many VPSes can get away with running tenant VMs without HA: the chance of their servers going down is very low, and they have a sane recovery scheme in place that can get those puppies back up and running in no time, though the customer still has to start their own applications if those don’t start at server boot time.

1 Like

That’s the part that always confused me. I’ve never really seen enough benefit to using the host’s kernel over just installing those services on a host machine and configuring an out of memory daemon to restrain anything that gets out of control. Maybe I’ll play around with them on some of my more disposable hosts, but I’ve got enough storage and RAM that I’d rather just run full VMs for now, I think, especially if management of updates works the same between the two.

Again, I really appreciate the breakdowns!

Just to follow back up on this, I’ve settled on Proxmox for the time being. Working on converting all of my hosts over now.

TL;DR - It’s way better than I expected/remembered, and clustering has been great in testing. Also, I can’t go more than 2 sentences without getting sidetracked and needing parentheses.

The clustering works very well for everything I’ve thrown at it so far (and just to confirm, systems definitely do not need to be the same, or even necessarily similar, to be in the same cluster; they just need to be on the same network. I haven’t played with HA yet, but everything I’ve seen in my research says that shouldn’t be much of an issue either), and other than certain 5.x guest kernels not working, I’ve not run into any other issues in my testing.

I definitely like that it’s not afraid to just let you play around in the shell without jumping through hoops (especially coming from VMware, where it complains about just about anything you try to do outside of the GUI, and $Deity help you if you need support after even touching the SSH settings without getting a notarized letter from Pat Gelsinger himself… or whoever replaced him when he went to Intel), and that it’s just Debian under the hood. I’ve been dealing with RHEL-based distros a lot more in the past couple of years, so I’d probably be more comfortable if it were CentOS- or Alma-based, but I started with Debian/Ubuntu, so it’s been very straightforward so far.

It even let me try storing OS disks on an SMB share over a single shared 1GbE link (don’t do this, it’s not a good experience; use faster links and a better protocol if at all possible) and even did live migrations while doing so.
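(If anyone wants to repeat the experiment, adding the SMB/CIFS share as storage is one command, or a couple of clicks under Datacenter → Storage; the names and credentials below are placeholders:)

```
# Add a CIFS/SMB share as VM disk storage for the whole cluster
pvesm add cifs nas-smb --server 192.168.1.20 --share vmstore \
    --username labuser --password 'secret' --content images
```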

I’m surprised at how far it’s come since the last time I tried it (although, as I think I mentioned in a previous post, that was before I really had a lot of virtualization experience, so part of the problem could have been me) and that all of this functionality is free. Speaking of money, I’m not a big fan of the paid tiers only having per-CPU-socket options, as they seem pretty biased against labbers with a bunch of low-end hardware, but I might buy a single license just to throw a bit of money at the project and see what the “Enterprise repository” entitles you to. I definitely can’t afford to license my whole lab with my hobby money, but if it’s worth it, I might maintain a small separate cluster of licensed machines for my “production” workloads.

I’m still not really getting the point of the LXC containers unless you’re extremely resource constrained, but I noticed there are some pre-built templates for a bunch of Turnkey Linux packages that I’ll probably play around with once I get all of my VMs moved over from ESX (which unfortunately will probably take me right up until my ESX license expires at the rate I’m going, haha!)

Thanks again to all of you for the assistance!

2 Likes

I did that on a 2x 1Gbps LACP bond to CentOS NFS servers. It was a dedicated storage network, though. It worked great for about 30-40 VMs per host. When the storage is on a shared network location, live migration only moves the RAM contents, which happens over the management interface (the IP you set the cluster up on).

I don’t recommend people do their production builds with 1Gbps NICs; at least get 2.5Gbps if you can’t afford 10G. But in a homelab it shouldn’t be a big deal even with a single link, as there isn’t a lot of load saturating the link constantly, only bursty operations.

I can guarantee that you do NOT need 3 identical machines to do clustering, because I had a cluster of 2 machines with wildly different hardware. It might prevent or limit your use of HA features, though.

1 Like

A 2-node cluster doesn’t allow you to do HA because of the split-brain problem (nobody knows which node is actually offline without a witness or peer to confirm). But live migration is fine, in case of planned maintenance. In fact, that’s how we used to reboot the Proxmox servers: moving VMs around the hosts, rebooting one, moving them onto this one and the others, rebooting the next, until it was finished. It was a bit of a pain in the butt because of the different specs, since not all VMs would fit on one host, which is why we started with the large ones first; after that, those could take the influx of VMs from the smaller hosts.

To this day, I’m not exactly sure how I’m going to do maintenance on my LXD containers when I finally finish with the damn thing, because migration involves stopping them, at least by default, and I haven’t read about CRIU integration just yet.

1 Like

They had the same problem in Minority Report. They were using an unreliable system to detect pre-crime, so they set up a quorum of three psychics and took the best two out of three. Tom Cruise, the police investigator, wanted to see the minority report, which was the report from the psychic who did not agree that the suspect would commit the crime.

I think computer clusters are generally better because they are not trying to predict what human free will is going to do.