Hypervisor and a NAS on a single physical server, with a mix of software RAID for the OS and hardware RAID for storage?

Okay, I’m absolutely losing my mind over how to set up this insane monstrosity without funneling even more cash into it, so bear with me please.

I currently have an old server with a power-hungry Haswell CPU and failing SATA slots, so I bought a new Dell PowerEdge T140 to replace it for super cheap (350e open box with Dell’s extended warranty). It has OOB management and an integrated PERC S140 for hardware RAID, and it lets me do work-related experiments without having to beg finance to procure more lab capacity.

The old server runs Windows Server 2016 and acts as a file server, Hyper-V host and Plex server, with 4x2TB of spinning rust, a 150GB SSD for the OS and a 240GB write-intensive SSD for VMs (a regular SSD somehow got shredded in a few months). I want to migrate the data and the VMs onto the new server without migrating the whole OS, so I can move away from Windows.

The aim is to have the VMs and OS stored on a software RAID 1 volume (due to the use of SATA PCIe cards for boot) and hardware RAID 5 for the storage drives using the PERC S140.
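For the OS mirror, I’m picturing something along these lines (a rough sketch, assuming a Linux-based hypervisor ends up on the box; device names are examples):

```
# mirror the two boot SSDs with mdadm (device names are examples)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext4 /dev/md0
```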

What even is the correct way of achieving this?

Here’s what my research turned up, and the options that won’t work, before anyone suggests anything:

  • Everyone is just screaming not to virtualise TrueNAS because of ZFS.
  • When it comes to hypervisors, I have the most experience with XCP-ng and Hyper-V.
  • Anything ZFS-related requires direct access to the drives, and with the PERC S140 you cannot pass the controller through to a VM or flash it into IT/HBA mode.
  • Proxmox lists only pre-12G RAID controllers as compatible.
  • Do I just YOLO it with no redundancy for the OS and keep Veeam recovery media at hand?

The main reason I don’t want to funnel more hardware into this thing is that Brexit has made it stupidly expensive to get anything shipped into Ireland from official sources and refurb places without being bankrupted by customs.

Honestly I can’t tell if this is a “This shouldn’t exist, you shouldn’t make it exist” scenario or not.

My first “real” server was a Dell R710 running ESXi 6.5 with FreeNAS, pfSense, and Windows all in VMs. I had everything on one box and it worked just fine. The only thing that sucked was that if I needed to do anything at all to the server, everything went down with it. Plus, it being on the edge of my network meant I had to be more careful with virtual network changes and (it being an older platform) with keeping things updated.

The only thing I would suggest is an HBA to handle all of your drive needs. You can pass it through to a TrueNAS VM and it won’t know the difference. Your OS can boot from USB, and your VMs’ virtual boot devices should live on fast, low-latency storage. The way I tend to set it up on Proxmox is an SSD for the VM boot disks and the host OS, then a ZFS array for VM data storage and backups.
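For what it’s worth, the passthrough side of that is only a couple of steps on Proxmox/QEMU (a rough sketch; the PCI address and VM ID are examples, and VT-d/IOMMU has to be enabled in the BIOS first):

```
# enable the IOMMU on the kernel command line (Intel shown; use amd_iommu=on for AMD)
# in /etc/default/grub:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
update-grub && reboot

# find the HBA's PCI address
lspci | grep -i sas

# hand the whole controller to the TrueNAS VM (VM ID 100 is an example)
qm set 100 -hostpci0 0000:03:00.0
```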

@thunderysteak There are three main reasons why the advice not to virtualise FreeNAS / TrueNAS is so common.

  1. It’s very common for the person asking whether they can virtualise TrueNAS to lack a basic understanding of the best practices for virtualising an OS that leverages ZFS, and, rightly or wrongly, it’s easier and less time-consuming to tell a newbie not to do it than to explain all the caveats they could shoot themselves in the foot with.
  2. The point of ZFS is that it’s software based. For it to function properly, it needs direct, unfettered access to the disks. In other words, you need to pass through the storage controller / HBA / RAID card (which should be flashed to IT mode).
  3. It seems like a lot of users don’t understand points 1 and 2, but they’ve read the “don’t virtualise TrueNAS” company line, and they repeat it because that’s what they’ve read in the past.

That may well make me sound like a neckbeard but I don’t say it flippantly.

The biggest issue with virtualising any OS that uses ZFS is passing through the disks. Then there are lesser issues with FreeNAS / TrueNAS that are usually specific to your hypervisor of choice. When I was looking into it years ago, there were some issues related to shutting down a FreeNAS VM cleanly on Proxmox(?)… if memory serves.

I also use Proxmox: an SSD-based pool for boot, LXC containers, and VMs, and a spinning-rust ZFS storage pool whose datasets are shared with / accessed by my LXC containers via LXC bind mounts.

@thunderysteak if you’re familiar with TrueNAS jails and how you mount or share storage to them, bind mounts are similar… a slightly steeper learning curve, and done mostly by editing config files (although some functionality is available via the GUI), but they function in a very similar way.
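For reference, a bind mount is a single line in the container’s config (a sketch; the container ID, dataset path, and mount point are all examples):

```
# /etc/pve/lxc/101.conf — expose the host path /tank/media inside the container
mp0: /tank/media,mp=/mnt/media
```

The same thing can be done from the CLI with `pct set 101 -mp0 /tank/media,mp=/mnt/media`.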

What do you mean exactly when you say the S140 can’t be passed through?

Also, have you tried putting the controller into SATA mode via the BIOS? If you did that, then a setup like mine might be an option for you.

https://www.dell.com/support/manuals/en-uk/poweredge-r840/perc_s140_ug_pub/entering-the-bios-configuration-utility?guid=guid-6647eb17-84af-4f9c-9778-75a0d2df9ac9&lang=en-us

Last time I tried ZFS in a virtual machine it worked fine, although it was a single drive which I exported directly as a virtio disk to the VM.
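If anyone wants to replicate that on Proxmox/QEMU, passing a raw disk in as a virtio device is one command (a sketch; the VM ID and the by-id path are examples — use /dev/disk/by-id/ so the mapping survives reboots):

```
# attach the whole physical disk to VM 101 as a virtio block device
qm set 101 -virtio1 /dev/disk/by-id/ata-EXAMPLE_SERIAL
```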

I use FreeBSD and ZFS everywhere I can, and I avoid using VMs. The FreeBSD package collection is mostly on par with Linux, especially for server software, and you can run pretty much everything with jails and VNET with good security and no virtualisation overhead. Exporting datasets to a jail allows that jail to work on the dataset as if it were a native pool.
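Delegating a dataset to a jail looks roughly like this (a sketch; the pool, dataset, and jail names are examples):

```
# mark the dataset as manageable from inside a jail
zfs set jailed=on tank/webdata

# hand it to the running jail
zfs jail webjail tank/webdata
```

The jail itself needs `allow.mount`, `allow.mount.zfs`, and `enforce_statfs=1` in jail.conf before it can mount anything.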

This is basically the issue I’m running into: there’s no proper guide on how to do it, but I know enough to know that the guides that do exist are wrong. Even Lawrence made a guide on how to do it with XCP-ng, but it’s still not a proper disk passthrough, which causes the hypervisor to choke.

The PERC S140 is not a PCIe device/add-in card but a chipset-based RAID controller. When doing my research, people kept suggesting passing the HBA into the VM to prevent issues with ZFS.

I actually didn’t think about that, as at work I always do everything through the PERC.

So, about that… there are some quirks that have been a thing since 2013.


This was one hell of a headache to get set up, and I feel another forum post coming with a guide on how to make this a thing.

Now, how do you properly benchmark the performance of TrueNAS? I’m pretty sure I’m not doing a proper benchmark, as the read speeds are weird.
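Weird (usually impossibly fast) read speeds are almost always the ARC serving reads from RAM. A rough way to hit the actual disks is fio from the TrueNAS shell with a working set well above the installed RAM (a sketch; the dataset path and sizes are examples):

```
# sequential read test; --size should comfortably exceed RAM so the ARC can't hide the disks
fio --name=seqread --directory=/mnt/tank/bench --size=64G \
    --rw=read --bs=1M --ioengine=posixaio --runtime=60 --time_based --group_reporting
```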

(attached: screenshot of benchmark results)