Wanting to switch to a new VM server platform

Hello!! First off, I'm familiar with virtualization platforms such as Proxmox, Hyper-V, ESXi, and XenServer. I'm currently on the fence about switching from ESXi to an open hypervisor platform such as KVM or Proxmox, along with Open vSwitch. I like the feature set of ESXi, but paying for those features for home use isn't worth the cost at the moment. I'm also familiar with Ubuntu, Fedora, Mint, and FreeNAS. So I'll break down what I'm after. I already have hardware that supports SR-IOV on the NICs, full passthrough on all of my I/O devices, and iSCSI along with a SAN.

List of features that I use now and want in a possible new platform:

  • Fault Tolerance
  • Fail over
  • LACP
  • PAgP
  • HA for Expansion
  • SR-IOV for GPU
  • SR-IOV for NICs
  • Full pass-through
  • Support for HBA and RAID cards (I mainly use Adaptec)
  • OpenStack (Open vSwitch) support
  • Clustering
  • Live migration
  • Snapshots
  • Cloning
  • Resource scheduling
  • Web GUI along with CLI
  • SSH, VNC the like
  • Use of management tooling like PowerShell for cluster control
  • Use of management server that controls a cluster (like vCenter server)
  • Ability to deploy Windows and Linux guest OSes, etc.

I think that covers the main things. I've been doing research and I like the idea of using a headless host OS like CentOS or Ubuntu Server as the hypervisor, though I'm not sure yet about setting up KVM with fencing across multiple hosts. I also liked Open vSwitch when I played with it in Proxmox, and I like the Nexus 1000v as well from what I've seen. I know the 1000v probably won't be supported, and that's OK; I'm leaning more toward Open vSwitch anyway. Any suggestions that fit my list would be awesome!! It's been a while since I've visited, and I have returned from my adventures!! :smiley::sunglasses:
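For what it's worth, from what I've read so far, standing up a basic headless KVM host looks roughly like this (assuming CentOS 7; package names differ a bit on Ubuntu Server, and the ISO path and guest settings below are just placeholders):

```
# Install KVM, libvirt and the CLI install tool, then start the daemon
sudo yum install -y qemu-kvm libvirt virt-install bridge-utils
sudo systemctl enable --now libvirtd

# Sanity-check that the CPU extensions and kvm modules are present
egrep -c '(vmx|svm)' /proc/cpuinfo
lsmod | grep kvm

# Create a test guest entirely from the CLI (no GUI needed on a headless box)
sudo virt-install \
  --name testvm --memory 2048 --vcpus 2 \
  --disk size=20 \
  --cdrom /var/lib/libvirt/images/some-installer.iso \
  --os-variant generic \
  --graphics vnc
```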


Have you looked at oVirt?

Not sure how many boxes it ticks for you.

I know for sure that Proxmox fully supports all of this. My PowerEdge T610 is using a PERC H200 flashed to IT mode, but it would work just fine as a regular RAID controller as well. Proxmox is designed to operate in a cluster and provides failover and fault tolerance, but I can't personally vouch for that since I'm only running a single node. I'm fairly sure Proxmox can be configured in a master/slave node arrangement, but again, I haven't looked into it because I'm a single-node user (for now). Proxmox also has some really interesting things it can do with the virtual bridge it uses for NICs. According to the documentation you can do NIC teaming without a switch that supports LACP or any of that jazz.
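For example, the docs show a bond plus Linux bridge in /etc/network/interfaces along these lines (interface names and addresses here are just examples); balance-alb is one of the switch-independent modes, so the physical switch doesn't need LACP:

```
# /etc/network/interfaces on a Proxmox node (sketch)
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode balance-alb    # switch-independent, no LACP required
    bond-miimon 100

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge-ports bond0       # VMs attach their virtual NICs to vmbr0
    bridge-stp off
    bridge-fd 0
```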

This isn't going to be ready on Linux for a bit. It's coming, but it will likely have performance penalties at first. I imagine it won't make it into things like oVirt and Proxmox until 2018 at least (depending on when it lands in mainline Linux).

OpenStack and Open vSwitch are two entirely different things. Open vSwitch is software for building virtual network switches and the like, while OpenStack is a large collection of software (optionally including Open vSwitch) that, used together, provides something similar to Azure, AWS, or Rackspace "cloud" offerings.
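To give a feel for the scope difference, everything Open vSwitch does sits at the level of a single host's virtual switch, e.g. (bridge, port, and VLAN values below are made up):

```
# Build a virtual switch, add a physical uplink and a VM-facing port
ovs-vsctl add-br ovsbr0
ovs-vsctl add-port ovsbr0 eno1
ovs-vsctl add-port ovsbr0 vm1-tap tag=20   # access port on VLAN 20
ovs-vsctl show                             # inspect the resulting layout
```

OpenStack then layers scheduling, storage, identity, APIs, and so on across many such hosts.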

Proxmox supports everything, short of easy SR-IOV and vfio passthrough.
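Passthrough does work, it's just a manual process on the host rather than a checkbox; the usual outline is something like this (PCI address and VMID below are only examples):

```
# 1. Enable the IOMMU on the kernel command line, then update-grub and reboot
#    /etc/default/grub:  GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

# 2. Load the vfio modules at boot
printf 'vfio\nvfio_iommu_type1\nvfio_pci\n' >> /etc/modules

# 3. Find the device and hand it to a guest by editing its config
lspci -nn | grep -i ethernet
echo 'hostpci0: 02:00.0' >> /etc/pve/qemu-server/100.conf
```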

FYI: https://forum.proxmox.com/threads/nexus-1000v-on-proxmox.23681/


I only briefly played with Proxmox, back when it was version 1.x.

Yeah, I meant to separate those, thanks for the information. That's actually really cool that the 1000v is supported in Proxmox.

I wouldn't say "supported" but it should work.

Thanks! That actually is really helpful.

In terms of pricing, VMUG Advantage gets you all the things for a home lab environment at an extremely affordable price. I am torn between Proxmox and ESXi.

I have been using VMware Workstation 12 for a while now, and moving to ESXi is very appealing. I haven't tested Proxmox enough to know how well/transparently I'll be able to test hardware devices from my VMs. VMware does a nice job of letting me expose the hardware directly to my test VMs.

If you like ESXi then you will like Proxmox, especially the containers if you get to play with them. One cool thing about it: if you set up a cluster, you can pick a master and use its built-in web GUI to administer all the nodes in the cluster, so you don't even need client-side management software. I was super stoked about that feature when I first came across it some years ago.
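If I remember right, the cluster setup itself is only a couple of commands, and after that every node shows up in one web GUI (cluster name and IP below are just examples):

```
# On the first node
pvecm create homelab

# On each additional node, join using the first node's address
pvecm add 192.168.1.10

# Check membership and quorum from any node
pvecm status
```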

Honestly, I can live without SR-IOV for the GPU in my setup. Would something along the lines of CentOS with KVM support SR-IOV for NICs? I mainly use Intel I350-T4 adapters for advanced traffic along with my SAN needs, and the hardware supports it. What about clustering of nodes with the above-mentioned OS?? I have read some on the process of installing KVM and briefly read over setting up nodes in a cluster, so I am really trying to figure out more of the details like host performance, I/O performance, etc.
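From what I've pieced together so far, SR-IOV on the I350 with plain KVM/libvirt looks roughly like this, so correct me if I'm off (interface name and VF count are just examples):

```
# Carve out virtual functions on one port of the I350 (igb driver)
echo 4 > /sys/class/net/eno1/device/sriov_numvfs
lspci -nn | grep -i 'virtual function'

# Or make it persistent with a module option instead:
#   /etc/modprobe.d/igb.conf ->  options igb max_vfs=4

# Each VF can then be given to a guest as a PCI hostdev, e.g. with
# 'virsh attach-device <domain> vf.xml', where vf.xml points at the
# VF's PCI address (<hostdev mode='subsystem' type='pci' managed='yes'>)
```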