I cannot answer you directly, as I actually abandoned the idea and never had any real need to move hardware machines into a virtualized environment. But I run a few VMs in my home lab, so I can share some of my experience.
I see the following issues:
RDM - Raw Device Mapping
I think I was looking into doing something like that with a SATA drive on an Intel controller, and it was not possible.
It is not a recommended approach for virtualization anyway.
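For reference, on hardware where RDM is supported, the mapping file is created from the ESXi shell with vmkfstools. This is only a sketch: the disk identifier and datastore path below are placeholders, not values from the original question.

```shell
# List the raw devices the host can see; pick the disk to map.
ls /vmfs/devices/disks/

# Create an RDM pointer file on an existing datastore.
# -z = physical compatibility mode (VM talks to the raw disk almost directly);
# -r = virtual compatibility mode (allows VM-level snapshots) instead.
vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_ID \
    /vmfs/volumes/datastore1/rdm/example-rdm.vmdk
```

The resulting .vmdk is then attached to the VM like any other virtual disk.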
"my daily work/internet/gaming"
I kind of do not see this working unless you have a thin-client PC that allows you to either RDP into the Windows OS or run a VNC client against ESXi (I have actually never tried a plain VNC client with ESXi), plus an OS outside of ESXi with a browser to manage ESXi.
While theoretically possible, ESXi is not really the best option for home use in the way you might have seen in LTT videos (link at the end). The ESXi ecosystem simply aims at data-center hardware compatibility and scale.
Regardless, you can probably forget about gaming on a GTX 780 in a VM. Unlike AMD, NVIDIA graphics cards do not support virtualization except for the Quadro line. Also, if I remember correctly, the 3930K is part of the first consumer CPU line that included support for that, but only in later steppings - so I would assume it does not support it either.
(and you have not mentioned anything about the new hardware you plan to build)
"moving existing installed OSes into VM."
I remember that VMware even had a tool to do this with Windows (VMware Converter, if I remember right). But I do not recommend it, especially for your daily rig. Not that I have used this tool, but I remember it was aimed at office PCs. From the system's point of view, it would look as if you had moved the HDD to completely new hardware.
I had some success with Ubuntu installed into a VM (under VMware Workstation) and then running it on real hardware. But after three weeks I stopped playing with moving it back and forth.
ESXi hardware compatibility.
Even if you do not want to pass through (or map) a real piece of hardware to a specific VM, you still need to check the hardware compatibility list. If I remember correctly, ESXi is based on some Unix-like distribution and simply inherited a short hardware compatibility list even when not passing devices directly to VMs. It is simply a matter of drivers being available only for the devices most commonly used in servers/data centers.
For example, what is supported:
- Intel NICs
- Intel SATA controllers (but never in any RAID configuration - that is always exclusively a software RAID done by Intel's drivers for Windows).
- LSI
What is not supported and is common on home motherboards:
- any other NIC
- any other SATA controller (these often supplement the ports from the Intel chipset).
It is possible that on some forum you will find a driver for your NIC or SATA controller. But my experience is that there is maybe a 15% chance that you find one, and then a 25% chance that it will work on your particular model.
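One way to check your hardware in advance is to boot any live Linux USB and read the PCI vendor:device IDs, which is what the VMware compatibility lists and community driver packs are keyed on. The grep patterns below are just examples of how to filter the output.

```shell
# Show network and storage controllers with their numeric [vendor:device] IDs.
lspci -nn | grep -i -E 'ethernet|network'
lspci -nn | grep -i -E 'sata|raid'
```

You can then search the VMware HCL (or forum driver threads) for the exact IDs instead of guessing from the marketing name.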
"Do I put the drives in Raid 1 on the chipset, or within ESXI?"
You will have options:
1. If it is an LSI controller, then maybe you will be able to RDM that RAID into a VM. But RDM is really intended for exceptional cases.
2. Keep the hardware RAID 1 and create a virtual-disk datastore on it (that is probably the best option, but you lose the data).
3. Dismantle the RAID, create a datastore on each drive, and build a software RAID inside the VM (you can do it, but it is generally not the best way).
4. I never actually played with VMDK possibilities for any RAID-like configuration, if there are any.
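For option 3, the in-guest software RAID on a Linux VM would typically be mdadm. A minimal sketch, assuming two virtual disks (each backed by a datastore on a different physical drive) show up as /dev/sdb and /dev/sdc - those device names are examples only:

```shell
# Inside the Linux guest, as root: mirror the two virtual disks as RAID 1.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Put a filesystem on the array.
mkfs.ext4 /dev/md0

# Persist the array definition so it assembles on boot
# (path is the Debian/Ubuntu convention; other distros differ).
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```

Note that the mirror only protects you if ESXi really places each virtual disk on a different physical drive.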
Virtualization simply gives you more layers to choose from for a RAID-like storage configuration.
"Install 2 old hard drives as the ESXI boot drive"
You can actually install ESXi on a pendrive. ESXi does not do many reads or writes on its own partitions.
And I would be able to reinstall openmediavault into a VM and recover the Raid 6 pool?
Maybe, if it is a RAID on the LSI controller and RDM works with it.
LTT video: