Condensing Multiple Machines - Sanity Checking my Plans

Okay guys, so I'm planning to condense 3 of my machines into my primary rig using the free version of ESXi. I understand the basics of how it all works, but I'm still super new to virtualization, so I want to make sure I've got the right idea.

Currently, my daily work/internet/gaming machine runs Windows 10, booting from a single SSD. Specs:
3930k, Rampage IV Extreme, 4x4gb Corsair Dominator, 2x GTX 780

My media server is openmediavault running on an old LGA775 setup, with 6 x 3tb drives in a software Raid 6. Drives are connected with a Dell Perc H310 flashed to LSI IT mode.

The third system I'd like to run is a pfSense router, which I haven't built yet, so there's nothing to transfer there.

So my main question is: will I be able to move each of these systems into a virtual machine without causing any harm to either OS? If I understand correctly, I would disconnect the boot SSD, install 2 old hard drives as the ESXi boot drive, and install ESXi there. Do I put the drives in RAID 1 on the chipset, or within ESXi?

Then once ESXi is installed, I should somehow be able to use the existing SSD to create a VM, and it should basically behave like it did before? And I would be able to reinstall OpenMediaVault into a VM and recover the RAID 6 pool?

I can't answer you directly, as I abandoned the idea myself and never had a real need to move physical machines into a virtualized environment. But I do run a few VMs in my home lab, so I can share some of my experiences.

I see the following issues:

RDM - Raw Device Mapping

I think I was looking into doing something like that with a SATA drive on the Intel controller, and it was not possible.

It is generally not recommended for virtualization anyway.

"my daily work/internet/gaming"

I kind of don't see this working unless you have a thin client PC that lets you either RDP into the Windows VM or act as a VNC client for ESXi (I actually never tried a simple VNC client with ESXi). You would also need an OS outside of ESXi with a browser to manage the ESXi host.

While theoretically possible, ESXi is not really the best option for home use in the way you might have seen in LTT videos (link at the end). The ESXi ecosystem simply aims at data-center hardware compatibility and scale.

Regardless, you can probably forget about gaming with the GTX 780s in VMs. NVIDIA graphics cards, unlike AMD's, don't support being passed to VMs except for the Quadro cards. Also, if I remember correctly, the 3930K is part of the first consumer CPU line that included support for that, but only in later steppings, so I would assume yours doesn't support it either.

(and you haven't mentioned anything about new hardware you plan to build)

"moving existing installed OSes into VM."

I remember that VMware even had a tool to do this with Windows (VMware Converter, I think). But I don't recommend it, especially for your daily rig. Not that I've used the tool, but I remember it was aimed at office PCs. What the system would see is that you moved the HDD to completely new hardware.
I had some success with Ubuntu installed in a VM (under VMware Workstation) and then running it on real hardware, but after three weeks I stopped playing with moving it back and forth.

ESXi hardware compatibility.

Even if you don't want to pass through (or map) a real piece of hardware to a specific VM, you still need to check the hardware compatibility list. If I remember correctly, ESXi is based on a Unix-like system and simply inherited a short hardware compatibility list for the case where devices are not passed directly to VMs. It's simply a matter of drivers only being available for the devices most commonly used in servers/data centers.

Examples of what is supported:
- Intel NICs
- Intel SATA controllers (but never in any RAID configuration; the chipset "RAID" is always just software RAID done by the Intel drivers in Windows)
- LSI controllers

What is not supported and is common on home motherboards:
- any other NIC
- any other SATA controller (the ones that supplement the ports from the Intel chipset)

It is possible that on some forum you will find a driver for a NIC or SATA controller. But my experience is that there is maybe a 15% chance that you find it, and then a 25% chance that it will work on your particular model.

"Do I put the drives in RAID 1 on the chipset, or within ESXi?"

You will have these options:
1. If it's an LSI controller, then maybe you will be able to RDM that RAID into a VM. But RDM is really intended for exceptional cases.
2. Keep the hardware RAID 1 and create a virtual-disk datastore on it (that is probably the best option, but you lose the existing data).
3. Dismantle the RAID, create a datastore on each drive, and build a software RAID inside the VM (you can do it, but it's generally not the best way).
4. I never actually played with VMDK-level possibilities for any RAID-like configuration, if there are any.

Virtualization simply gives you more layers to deal with when it comes to RAID-like storage configurations.
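
For reference, if option 1 did work out, the RDM mapping file on ESXi is created with vmkfstools. A rough sketch, wrapped in Python only so there is something concrete to show; the disk identifier and the datastore path are placeholders you would replace with your own:

```python
# Rough sketch only: create a physical-mode RDM pointer for one disk on an ESXi host.
# The disk identifier and datastore path below are made up -- list the real devices
# with `ls /vmfs/devices/disks/` and substitute your own.
import subprocess

PHYSICAL_DISK = "/vmfs/devices/disks/naa.EXAMPLEDISKID"       # the raw disk to map
RDM_POINTER = "/vmfs/volumes/datastore1/omv/omv-rdm.vmdk"     # pointer file kept on a datastore

# -z creates a pass-through (physical compatibility) RDM; the VM then uses the
# pointer vmdk as an ordinary disk while the I/O goes to the real device.
subprocess.run(["vmkfstools", "-z", PHYSICAL_DISK, RDM_POINTER], check=True)
```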

"install 2 old hard drives as the ESXi boot drive"

You can actually install ESXi on a USB pendrive. ESXi doesn't do many reads or writes on its own partitions.

"And I would be able to reinstall OpenMediaVault into a VM and recover the RAID 6 pool?"

Maybe, if the RAID is on the LSI controller and RDM works with it.

LTT video:

Try https://www.qubes-os.org/intro/ instead of ESXi. ESXi is more for standalone/clustered VM management; Qubes is friendlier for desktop-style use.

So what I'm gathering is that it might be possible to virtualize all 3 machines onto one system like I want, but not using ESXi. I've watched all of the LTT videos about Unraid and that seems like basically what I want, but I'm too broke to be paying for software right now.

And to clarify, the RAID 6 I was talking about is a software RAID running on OpenMediaVault.

So is there another virtualization platform that is free and would do what I need? And what's bad about virtualizing pfSense?

As for the platforms:

  • Unraid uses, I think, the Xen hypervisor (or possibly KVM these days).

  • On this forum I've seen Proxmox mentioned a few times (also with regard to pfSense).

My further thoughts on your initiative:

My perspective on what you are trying to achieve is that your goal is not exactly in line with the intended purpose of virtualization. I'm not an expert; my experience is mostly around VMware solutions and VirtualBox, and mainly homelab (although I participated in creating third-party software for monitoring ESXi hosts). While it might be easier to do on one platform than on another, most of them support the features you would need; it's just that, in general, the robustness and feasibility are not there yet for the home lab.

The other thing is that, in your particular case (regardless of virtualization platform):

  • Virtualizing NAS servers is not generally recommended. A NAS, or any storage solution, is more typically a service used by hypervisors as storage for VMs.
  • As for pfSense, I can only guess, but it is probably a matter of performance and/or the fact that the hypervisor sits above and around its VMs. So, for example, pfSense might protect your VMs (if you configure the virtual network correctly), but what will be protecting your hypervisor?
  • Raw device mapping for storage: the main purpose of virtualization is to abstract away from the physical resources as much as possible, and with storage RDM you go completely against that. E.g., in virtualization, the ability to move VMs between servers and datastores matters more than connecting HDDs directly to a VM.
  • Pass-through support in consumer devices is something I would consider new and not mainstream at all, and it probably only exists because it is cheaper (software- and hardware-wise) to use similar (if not the same) chips in the consumer and "pro" versions of the cards.

And the feature you will need the most (regardless of virtualization platform):

  • Using a VM as a daily machine with directly connected peripherals (GPU, mouse, keyboard, audio), while possible, is a scenario that, with today's hardware and software, is in my opinion just asking for major problems (starting with simple things like the VM not auto-starting because of something).

agree it's dumb this problem exists, but it's not a show-stopper by any means. it's a single-line fix. kvm=off. and then nvidia cards suddenly work perfectly.
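
for reference, in libvirt terms that kvm=off switch is the hidden-KVM feature in the domain XML. rough sketch of flipping it with the python bindings (the domain name "win10" is made up; `virsh edit` does the same thing by hand):

```python
# rough sketch, assuming libvirt/KVM and a domain called "win10" (placeholder).
# adds <kvm><hidden state='on'/></kvm> under <features>, which hides the hypervisor
# from the nvidia driver -- same effect as "-cpu ...,kvm=off" in plain qemu.
import xml.etree.ElementTree as ET
import libvirt  # pip install libvirt-python

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("win10")

root = ET.fromstring(dom.XMLDesc(libvirt.VIR_DOMAIN_XML_INACTIVE))
features = root.find("features")
if features is None:
    features = ET.SubElement(root, "features")
kvm = features.find("kvm")
if kvm is None:
    kvm = ET.SubElement(features, "kvm")
if kvm.find("hidden") is None:
    ET.SubElement(kvm, "hidden", {"state": "on"})

conn.defineXML(ET.tostring(root, encoding="unicode"))  # persists; applies on next VM boot
conn.close()
```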

media server should probably stay on its own hardware.

pfsense should almost certainly stay on its own hardware.

you can virtualize just this, if you like. :slight_smile:
makes snapshotting + recovery nice.
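
e.g. snapshot before you poke at it, roll back if it breaks (vm/snapshot names are placeholders, and internal snapshots need the vm's disk to be qcow2):

```python
# snapshot-before-upgrade sketch using virsh; "pfsense" / "pre-upgrade" are example names
import subprocess

subprocess.run(["virsh", "snapshot-create-as", "pfsense", "pre-upgrade",
                "--description", "known-good state"], check=True)

# if things go sideways, roll the whole VM back:
# subprocess.run(["virsh", "snapshot-revert", "pfsense", "pre-upgrade"], check=True)
```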

you probably won't get both cards passed through, though. and not in SLI.

Jeremy isn't going to be able to pass through his graphics cards because he has two graphics cards that are the same; you need two graphics cards, but they can't be the same. He might be able to if his CPU had built-in graphics.

I have never used ESXi; I use Ubuntu Server with KVM/QEMU for all my VMs. I have had VMs with drives that I wanted to reconfigure, so I set up a dummy VM and then attached the drives after the fact using libvirt and virt-manager; if you are using compatible OSes it should not be an issue. Basically, what I would do is use one drive for the host with Ubuntu Server, put XFCE or any desktop environment on it after you install, then use GParted to make sure you have all the drives you need installed and set up in your fstab file so they will all mount on boot. You have to make sure you work with the correct drivers for Windows VMs, but alas, I have not really played around with Windows in KVM much, so I won't be much help there.
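
For the "dummy VM first, attach the real drives afterwards" part, this is roughly what it looks like with the libvirt Python bindings. The VM name and the /dev/disk/by-id path are made up; virt-manager's "Add Hardware" dialog does the same thing through the GUI:

```python
# Sketch: attach an existing physical drive to an already-defined VM as a raw block device.
# "dummy-vm" and the by-id path are placeholders.
import libvirt  # pip install libvirt-python

DISK_XML = """
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/disk/by-id/ata-EXAMPLE_DRIVE_SERIAL'/>
  <target dev='vdb' bus='virtio'/>
</disk>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("dummy-vm")

# add to the persistent config, and to the running VM too if it happens to be up
flags = libvirt.VIR_DOMAIN_AFFECT_CONFIG
if dom.isActive():
    flags |= libvirt.VIR_DOMAIN_AFFECT_LIVE
dom.attachDeviceFlags(DISK_XML, flags)
conn.close()
```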

This should not be an issue if you're using KVM: you pass through a network card to the pfSense VM for the WAN, and then you connect everything else to the LAN side using a host bridge. If you pass the card to pfSense, the host doesn't even know it's there, so everything coming from outside goes straight to the VM running pfSense.
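
Roughly, the wiring looks like this on a libvirt/KVM host. The PCI address, bridge name, and VM name are placeholders (find the NIC's address with lspci), and it assumes VT-d/IOMMU is enabled so the card can be handed to vfio:

```python
# Sketch: give the pfSense VM the physical WAN NIC (PCI passthrough) plus a virtio
# NIC on the host bridge for the LAN side. All names/addresses are placeholders.
import subprocess
import tempfile

WAN_NIC_XML = """
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
</hostdev>
"""

with tempfile.NamedTemporaryFile("w", suffix=".xml", delete=False) as f:
    f.write(WAN_NIC_XML)
    wan_xml = f.name

# WAN: pass the physical NIC through -- the host loses it, pfSense owns it
subprocess.run(["virsh", "attach-device", "pfsense", wan_xml, "--config"], check=True)

# LAN: virtio NIC on the host bridge that the host and the other VMs sit on
subprocess.run(["virsh", "attach-interface", "pfsense", "bridge", "br0",
                "--model", "virtio", "--config"], check=True)
```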

In a production environment you would not use a VM for a NAS, but for a home media server you can basically just set up a Samba server in a VM, add minidlna, and share the folders you want everyone to have access to. Or just install OpenMediaVault and pass the drives through from the host; you have to first create a dummy VM and then switch out the drive devices like I said above. As for the RAID 6, you will need a compatible OS on the host so it will recognize the array; OpenMediaVault is Debian-based, so either Debian or Ubuntu should let you pass the RAID array through to a VM.
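
A sketch of that last part, assuming a Debian/Ubuntu host and a VM named "omv" (both placeholders): reassemble the existing mdadm RAID 6 on the host, check that it came up, then hand the md device to the VM as a block disk.

```python
# Sketch: recover the existing mdadm RAID 6 on the host and attach it to the OMV VM.
# "omv" and /dev/md0 are placeholders -- check /proc/mdstat for the real array name.
import subprocess

# scan the attached disks and reassemble any existing md arrays
subprocess.run(["mdadm", "--assemble", "--scan"], check=True)
print(open("/proc/mdstat").read())  # confirm the RAID 6 came up, e.g. as /dev/md0

MD_DISK_XML = """
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/md0'/>
  <target dev='vdb' bus='virtio'/>
</disk>
"""
with open("/tmp/md0-disk.xml", "w") as f:
    f.write(MD_DISK_XML)

subprocess.run(["virsh", "attach-device", "omv", "/tmp/md0-disk.xml", "--config"], check=True)
```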

For a home environment, running all your extras on a single machine is a good use of hardware. pfSense runs fine virtualized, as does a NAS with a media server; home loads are not that taxing on hardware. If you have an 8-thread machine, you can just assign 4 threads to the VMs running pfSense and the NAS and then all 8 threads to your Windows VM; if you use KVM, it will bounce the load around to whichever thread is available. pfSense for home only needs about a gig of RAM, and a NAS/media server can get by with 1.5 gigs.
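
To put numbers on that, here is a rough sketch of setting the vCPU/RAM split with virsh. The VM names and sizes are placeholders, and it only touches the persistent config, so do it while the VMs are shut off:

```python
# Sketch of the thread/RAM split described above. Over-committing vCPUs is fine on
# KVM: the host scheduler just bounces the load around.
import subprocess

SIZES = {            # name: (vcpus, memory in KiB)
    "pfsense": (2, 1048576),    # ~1 GB is plenty for a home router
    "omv":     (4, 1572864),    # NAS/media server, ~1.5 GB
    "win10":   (8, 12582912),   # daily/gaming VM gets most of the RAM (~12 GB)
}

def virsh(*args):
    subprocess.run(["virsh", *args], check=True)

for name, (vcpus, mem_kib) in SIZES.items():
    virsh("setvcpus", name, str(vcpus), "--config", "--maximum")
    virsh("setvcpus", name, str(vcpus), "--config")
    virsh("setmaxmem", name, str(mem_kib), "--config")
    virsh("setmem", name, str(mem_kib), "--config")
```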

I currently use an AMD 860K with 8 GB of RAM and 3 network adapters on an 88X motherboard. It hosts a pfSense VM, a NAS/media server/VPN server VM (all in one VM), a Nextcloud VM, and an OnlyOffice VM. It rarely exceeds 20% CPU usage and uses about 50 watts of power. I have had no problems with pfSense or my NAS/media server installs, except the problems I created for myself when testing something or trying to make something do what I wanted.

not true. you just can't blacklist them by model number.

this is how i passed through one gtx 760, and left the other for the host. it's not difficult at all.
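
for reference, the usual trick with two identical cards is to grab just one of them by PCI address with driver_override instead of blacklisting the driver. rough sketch (addresses are placeholders -- check lspci; needs root and the vfio-pci module loaded):

```python
# bind one specific card (video + its HDMI audio function) to vfio-pci by PCI address,
# leaving the identical second card on the normal driver for the host.
GPU_FUNCTIONS = ["0000:02:00.0", "0000:02:00.1"]  # placeholders for the card to pass through

for addr in GPU_FUNCTIONS:
    # tell the kernel this device should only ever match the vfio-pci driver
    with open(f"/sys/bus/pci/devices/{addr}/driver_override", "w") as f:
        f.write("vfio-pci")
    # unbind from whatever driver currently owns it (nouveau/nvidia), if any
    try:
        with open(f"/sys/bus/pci/devices/{addr}/driver/unbind", "w") as f:
            f.write(addr)
    except FileNotFoundError:
        pass  # not currently bound to anything
    # re-probe the device, which now lands on vfio-pci
    with open("/sys/bus/pci/drivers_probe", "w") as f:
        f.write(addr)
```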

+1

for KVM+QEMU, especially since you're looking at only a couple of VMs which are intended for very specific purposes and will be fairly stable (not creating new ones all the time).

As you mentioned, virtualizing the extras within one machine makes sense.

But if you add on top of this that this one machine must also virtualize your "daily work/internet/gaming machine", then in my opinion, while still doable, it brings more problems than benefits.