Type 1 Hypervisor

KVM and Hyper-V are sort of two peas in a pod.
Basically, both expose the CPU's hardware virtualization extensions (Intel VT-x / AMD-V) to a given VM, allowing for faster computation since the guest isn't running on an "emulated" CPU, but rather has near-direct access to the real one.
KVM can also pass a given piece of hardware through to a VM, e.g. a GPU, but as always the OS/driver requires exclusive access to the hardware, hence you need to detach the device from the host OS first.
All in all, KVM on its own is little more than a kernel module; for device emulation and actually running machines it leans on QEMU.
VMware is sort of a complete wrap-up of these technologies, bundling the hypervisor, device models, and management tooling into one product.

I've worked with Hyper-V on 2012 and I've worked with VMware.

VMware is by far the superior of the two, but its setup and cost can be prohibitive. Hyper-V on 2016 is coming close to being extremely competitive with VMware, with features such as dynamic disk sizing and host swap memory integration, but VMware is still the better of the two, particularly for its scalability and performance.

So really it comes down to: What do you plan to do with it? What's your budget? How skilled are your staff?

In the meantime, I voted VMWare.

I've worked with all three of them. For personal use, KVM has always been my first choice. At work, I've managed pools of thousands of VMs in both VMware and Hyper-V. VMware is much more mature. It is really superior to Hyper-V.

From a management perspective I think VMWare is the easiest to use and run (even large scale deployments). Hyper-V is also pretty intuitive, but I've only had very limited passing experience with it.

On my local machine I use KVM because it does what I need, nothing more nothing less. I've also had some passing experience with Xen and it seems to be pretty decent.

Most of them have made huge steps forward in eliminating noisy-neighbor problems, with the exception of networking. The schedulers otherwise seem to be pretty fair even in moderately overcommitted environments. I still wouldn't RECOMMEND overcommitting your hardware, but it can definitely be done with CPU and, in some circumstances, RAM.

One of the nice(ish) things you can do if you're doing this for an office is have OpenStack manage your environment; then you quickly stop caring what your hypervisor is (after a lengthy setup process). The barrier to entry with OpenStack has gotten lower in the last 3-4 years, but it's still there, and still something to be aware of if you decide to go that route.

I honestly don't think there are any features in any of them that make one a lot better than the others. I use vMotion on nearly a daily basis in my deployment, but you can do something similar with Hyper-V live migration and SMB/CIFS storage targets, and I think XenServer (from Citrix) has it as well, but I haven't touched it.

As of now, we keep a RAM-hogging SQL Server running (it's going to be going away sometime in the next few years), as well as another database that just got put in (to replace the SQL Server; we still need to migrate all/most of the data), and a few other things that aren't really resource intensive. They're all running some form of Windows Server (2008 R2 and newer)... As for budget, we have a quote, but it's a lot more than we had expected.

We run a larger (or do we?) VMware cluster at work: 21 hosts, ~300 VMs, 5 TB RAM, 50 TB storage provisioned... and managing all this is easy and just works. We just migrated from our old VMware 5.5 cluster to new hardware, new storage, and VMware 6, and it was just a matter of live migrating the VMs over, and that's all. We could do this during normal working hours and our users didn't even notice. This was the easiest migration I've ever done.

But VMware is expensive, and if money is a problem, maybe have a look at other offerings like Proxmox, XenServer, or OpenStack? I use Proxmox at home and it works fine. I don't use clustering or distributed storage, but the features are there.

Hyper-V is free for the base host; you just pay for the Windows licenses on top of that. That may be a consideration. In 2012, I know hosting a SQL Server on it was... sketchy, and not really recommended unless you had some really good replication happening.

I'm with seeker above. We're about 20 hosts below his count, but his experience is spot on. The management of VMware can't be matched, along with its usability. That's where the cost is; you're paying for the user experience, basically. The IBM machines we have our VMware sitting on are also beasts. Those jokers don't go down at all.

Back to Hyper-V: it's easy to manage as well, and 2016 allegedly fixes the bugs and usability issues that 2012 had, but I can't tell you that's true first-hand. Go download Hyper-V Manager, put some trial VMs on top of it, and see how you like it. If you have a site license, it'll literally cost nothing to put it on your network and give it a shot.

We already migrated the phone and web servers into Hyper-V on a GUI install of 2012 R2 a few months ago and haven't had any issues. It works, but it's apparent that it probably won't work too well running the database servers. I know we need to at least get everything moved over to a bare-bones Hyper-V Server, but I wanted to see if running VMware compared to Hyper-V was a night-and-day difference.

I work with VMware and it works great, but it is really expensive. If you are just looking to play around with it, you can get ESXi for free if you sign up for a VMware account. I built an ESXi server out of spare parts and it works great for testing. You just have to have a supported Gigabit Ethernet adapter. I picked one up on Amazon for $25.

VMware is definitely the most mature in terms of managing large virtualised estates, but KVM is technically very, very good. Hyper-V has improved massively over the past few iterations and can work out a lot cheaper to run, as you have to licence Windows anyway. Costs go up if your estate is of a size that you then need to start using the System Center suite to monitor and manage it.

The latest versions of Hyper-V now have improved RemoteFX support/performance, and Microsoft has finally enabled support for GPU pass-through. RDP compression performance has been enhanced, so getting native GPU performance streamed to your remote PC over an RDP session is supposed to be decent. I'm going to play with that as soon as I can.

I'm a little confused by some of the comments about SQL Server above. SQL Server is rock solid running on Server 2012 on VMware or Hyper-V and performs very well when everything is properly configured. You do, however, need to size VMs accordingly and set anti-affinity rules to keep very busy database servers apart - or accept that you might need to implement QoS rules.

Key points for SQL Server virtualisation:

  1. Set Advanced Configuration Options in SQL Server correctly - the default values are really terrible. Max Memory should be capped to leave the OS between 2-8 GB of RAM, depending on total RAM size and what else you are running in the VM.
  2. Don't thin provision memory or disk for SQL Server VMs. Make sure in Hyper-V you use VHDX disks, not the older VHD, and format the data disks with a 64 KB allocation unit size.
  3. Don't give SQL Server VMs too many CPUs. Start with 2 and add another 2 when you can see the VM needs them. Far too many people give the VM 8 or more CPUs because the vendor/developers say the DBs will need that many. Often they don't...
  4. Tell the developers to look carefully at their index choices and table statistics. I fixed many a performance problem by removing unneeded indexes and putting in a missing one, or just by updating the table stats and forcing plan recompilation...
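To make point 4 concrete, here's a rough T-SQL sketch. The table name `dbo.Orders` is just a placeholder for whatever busy table you suspect, and the DMV query is one common way (not the only way) to spot nonclustered indexes that cost writes but never serve reads:

```sql
-- Find nonclustered indexes that are maintained (writes) but never read.
-- Run in the database in question; these counters reset on instance restart.
SELECT  OBJECT_NAME(s.object_id) AS table_name,
        i.name                   AS index_name,
        s.user_updates           AS write_cost,
        s.user_seeks + s.user_scans + s.user_lookups AS total_reads
FROM    sys.dm_db_index_usage_stats AS s
JOIN    sys.indexes AS i
        ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE   s.database_id = DB_ID()
        AND i.type_desc = 'NONCLUSTERED'
        AND s.user_seeks + s.user_scans + s.user_lookups = 0
ORDER BY s.user_updates DESC;

-- Refresh statistics on a hypothetical busy table and force any cached
-- plans that reference it to recompile on next use.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
EXEC sp_recompile 'dbo.Orders';
```

Treat the DMV output as a hint, not a verdict - an index with zero reads since the last restart might still matter for a month-end job.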

If anyone has SQL Server infrastructure problems feel free to PM me. I'll point you in the right direction if I can.


I haven't used any of them in a professional environment, but I use an ESXi server at home for FreeNAS, PFSense, and a few other VMs. Here's a link to my old post about it: https://forum.level1techs.com/t/supermicro-dual-xeon-home-esxi-nas-build/103786

I should probably go update that thread with how the rig is set up now. ESXi hasn't given me any trouble; it just works for most everything I've tried. I've been toying with adding 10-gigabit Ethernet to the server and my desktop so I can get more speed from the NAS.

Not sure what you are using the hypervisor for. But if you are going for a server, I like XenServer a lot since it is free and open source, and you can have Citrix support if you want it.

You can get support for KVM from Red Hat as well.

If you are going for a homelab, SmartOS is my pick because it mixes a few virtualization technologies together and has top-notch support for ZFS pools. It's built on illumos and pairs zones with KVM for hardware virtualization.

If you just want to run VMs on a workstation KVM with whatever distribution you prefer is great.

If you want customer support and need the VM to run on Windows go VMware.

ESXi is too fussy for server use.

Hyper-V is far too limited to be truly useful unless you are locked into a Windows Server environment for some reason.

It's super easy to stop SQL Server from hogging memory. There is a Max Memory setting. By default it is set to roughly 2 petabytes. You probably don't have 2 petabytes of memory on your system. If you've got a dedicated machine (be it physical or virtual), set this to about 2 GB less than what the machine has, and you'll be fine. If another application shares the server with SQL Server, then account for the memory required by that application plus 2 GB, and limit SQL Server accordingly.

Also, I'm tossing in another vote for KVM. If you're not a Linux shop, you may consider Proxmox as your distro of choice. Were it not for NDAs, I would love to give a couple of examples of fairly sizeable companies that are using KVM on Ubuntu servers.

For personal use (which is mostly just testing *nix distros) I use KVM in combination with libvirt to make management easier. In my personal experience I've used VirtualBox, VMware Player, and KVM. So far KVM + libvirt has been the best experience for me personally.

That explains why it keeps growing to 50+ GB; we just thought it was the program it talks to, which has a bunch of other problems.

Yep, over time SQL Server will just keep taking more memory if it can use it and you have not capped it.

The T-SQL commands to cap it to 4 GB would be:

sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'max server memory', 4096;
GO
RECONFIGURE;
GO
sp_configure 'show advanced options', 0;
GO
RECONFIGURE;
GO

You can also do it from the server properties in SSMS.
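If you want to double-check that the cap took effect, one way is to query `sys.configurations` - `value` is what's configured and `value_in_use` is what's actually active (the RECONFIGURE step is what moves one into the other):

```sql
-- Confirm the configured vs. running value for max server memory (in MB).
SELECT name, value, value_in_use
FROM   sys.configurations
WHERE  name = N'max server memory (MB)';
```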

Just a quick update:

We decided to stick with Hyper-V. Thanks for all of the advice, everyone; there really isn't a better place (or any other that I could find) to find information like this. The price tag on VMware was way too much to buy it without finding this stuff out first, and it turned out that we didn't need what VMware had to offer over Hyper-V.

We realize that VMware has its place, but the benefits don't outweigh the cost in our network.

On another note, I loaded XenServer on my server at home and am happy with it so far. There are a few minor storage management quirks (there's a way around this, but I can't seem to get it working, oh well) that I'm not completely thrilled with, but it's not a deal breaker.

You forgot Xen.

That's good that you think we as a community were able to help :-)

If you haven't already, take a look at the Veeam.com website. They make tools that expand on Hyper-V's and VMware's high-availability/disaster recovery features, but they also have a useful collection of white papers on Windows, Hyper-V, and VMware.

If you also need anything on SQL Server's high availability capabilities, VMware recently published a white paper by one of the best non-Microsoft SQL Server experts there is: https://t.co/OMY2Cia6oH

...it still applies for Hyper-V, as it's managing HA at the SQL Server rather than hypervisor level, which is sometimes required for mission-critical workloads.

This community is a great help for just about everything, I like to call for help sparingly here since I don't contribute very much. I'll have to look at those tools, they seem like they could come in handy.

I would add it if I could, but I got a message saying I couldn't edit the poll myself. In my own research it wasn't mentioned that much (also, I haven't heard good things about Citrix), so I ended up overlooking it. Shame on me, I guess; it's been great to me so far.