I plan on building an ESXi host for my home. The machine will host my firewall appliance, file server (main concern), and any other VMs/OSes I want to kick around.
From what I can tell, these are my only real options to present a large volume of local storage to the File Server VM:
- RAID Controller passthrough to VM
- RAID Controller presented to ESXi as VMFS and then RDM the volume to the VM
- Make multiple VMDKs and jury-rig some kind of software solution to handle all of the VMDKs
I plan on using FreeNAS on the file storage VM (NAS + ownCloud plugin + Plex server make it the most appealing).
Does anybody have any thoughts on the best approach to "giving a RAID array to a VM"?
I used to have my fileserver hosted on ESXi. I gave up on that pretty quickly though. If I remember correctly I created one large virtual disk on the RAID array. So I had the physical controller "managed" by the ESXi host and just passed it as one virtual disk to the VM. That worked okay but I believe there to be better solutions.
Yeah, I wanted to avoid giving the VM a big VMDK file; especially since the VMware documentation center says VMDKs are limited to 2 TB and I'll be putting a fair bit more than that into the RAID.
No need to bother with a RAID controller if the machine has enough SATA/SAS connectors for your disks. You should be able to pass the disks through to the VMs if you want to - and then the FreeNAS VM will manage them in a ZFS pool (or pools).
I'd prefer not to let FreeNAS (or any virtualized OS for that matter) handle the RAID in the case that I lose the VM or some weird shit happens. From what (I think) I know about RDM, I could nuke the VM and the whole RAID could be moved from VM to VM as if it was a VMDK.
IMO you would be more likely to lose data through a RAID controller problem than a ZFS problem. A ZFS pool can always be attached/imported into a new server. If you ever needed to create a new FreeNAS VM, you could just give it the disks and import the ZFS pool.
Of course, you would only pass in the disks you want to use for FreeNAS; the others you would use for other VMs.
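For what it's worth, moving a pool between systems really is just an export/import. A rough sketch from the shell, assuming a pool named "tank" (placeholder name):

```shell
# On the old FreeNAS VM (if it still boots), cleanly export the pool:
zpool export tank

# On the replacement VM, after giving it the same disks:
zpool import            # scans attached disks and lists importable pools
zpool import tank       # imports the pool; its datasets mount automatically
zpool status tank       # verify all member disks show as ONLINE
```

If the old VM died without a clean export, `zpool import -f tank` forces the import on the new host.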
Another option would be to just use a base OS that supports ZFS and run a hypervisor on top of that - bhyve (or just jails) on FreeBSD, or KVM (or just containers) on Ubuntu, for example. Lots of options; you could even do it with a Microsoft solution using Hyper-V and a ReFS storage space...
EDIT - for grammar/clarity.
Yeah, that's what I was thinking - however, if that happens, I can always pop in a new controller. I've not played with ZFS much, so I'm not familiar with how nicely it plays with being moved around.
I'd like to stick with ESXi - that's what I work with and trust, plus I have a license or two.
I was mistaken. I thought about my solution and realized that I had in fact found another way after facing exactly the same problem as you do. Giving the VM one huge virtual disk was my first idea, but I settled for another solution in the end!
What I did was create the RAID at the hardware controller level and let ESXi manage the controller. I then passed the volume through to the VM running my storage server - not the hardware controller itself, but the volume on it. That means the VM handles the filesystem for your storage while the hypervisor handles the block-level operations. All your files end up directly on the physical hard drives, but the storage can still be managed inside the VM.
If I explained that poorly I will try to word it better again!
By "passed through" do you mean you used RDM to map the array using a VMDK file (it's really just a mapping file)? Or did you use another method that I'm not aware of?
No, there is no virtual hard drive involved. The VM directly manages the filesystem on the volume, and the writes go to the array on the RAID controller. Some years have passed since I implemented that solution, so please don't ask me where to click in vSphere :D You essentially add the volume on the RAID controller as a device to the virtual machine. I sadly cannot walk you through the setup process, but try to figure out how to add SCSI devices to the virtual machine. The way this is done is very similar to adding a volume as a resource.
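What's described here sounds like a raw device mapping (RDM). A rough sketch of creating one from the ESXi shell - the `naa.*` identifier and the datastore path below are placeholders you'd replace with your own:

```shell
# List local block devices to find the RAID volume's naa.* identifier:
ls /vmfs/devices/disks/

# Create a physical-mode RDM mapping file on an existing datastore
# (placeholder device ID and paths - substitute your own):
vmkfstools -z /vmfs/devices/disks/naa.600508b1001c0123456789abcdef0123 \
    /vmfs/volumes/datastore1/fileserver/raid-rdm.vmdk

# Then attach raid-rdm.vmdk to the VM as an existing hard disk in vSphere.
# Using -r instead of -z creates a virtual-mode RDM, which allows VM
# snapshots but hides the device's physical SCSI characteristics.
```

The mapping file itself is tiny; the guest's I/O goes straight to the raw volume, which is why the mapping can be re-pointed at a new VM later.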
At the moment I don't have access to a machine where I could set up a similar environment, so you'll need to figure out the rest of the process yourself, sorry. But that's part of the experience, right?