
NAS in a VM worth it? / trade offs

I finally got a Windows 10 VM with GPU passthrough working on top of a Manjaro install. But now I am unemployed and need another project without really having a lot of spare cash to throw at it. I have been toying with the idea of getting a cheap IBM M1015 HBA and creating a NAS VM with a simple mirrored array inside my single desktop. I already have a standalone Synology NAS, so this would be for non-critical / redundant backups, but I would ideally want it to have use beyond a simple learning project.

So I guess the questions are:
What are the tradeoffs for a VM NAS vs hardware NAS?
Is data integrity any more of a concern on a VM?

System it would be going into:

Dell T5810
Xeon E5-2690v3 (12 core 24 thread)
32GB of ECC 2133 RAM
512GB SSD for (rarely used) Windows 10 install
1TB SSD for Manjaro / Storage
AMD RX470 4GB (Manjaro)
Nvidia 1060 6GB (Windows 10 VFIO VM)
825W power supply

The NAS VM (FreeNAS? / Debian? / OpenMediaVault?) would add an IBM M1015 HBA passed through, with a couple of SATA HDDs attached.

Any thoughts?

Unless you are eager to learn BSD, I would suggest OpenMediaVault and/or Debian. OpenMediaVault is a GUI on top of Debian, so the choice between them is more a matter of CLI vs GUI management.

They are much more flexible than FreeNAS, both in filesystems and in plugins.

Not really, if you are passing an HBA through to the VM. Maybe accidentally shutting off the VM would be an extra failure point?

I recently built a machine that runs Gentoo (though any Linux/BSD would work) to use as a NAS/SAN. I personally prefer ZFS, so anything *NIX-based would work to host it. There’s little reason to put it in a VM, though, if you have a fairly secure *NIX on the outside and it’s just a small personal backup, IMHO; skipping the VM also avoids any extra performance loss. I used to host my array on my desktop until I moved it into a dedicated NAS.

Thanks for all the great ideas. So if it comes down to hosting locally in Manjaro vs. in a VM: does one or the other make migration to a separate box easier when that eventually happens? Having done a cursory pass on the forums / Google, it seems that both RAID and ZFS offer ways to pull an array out of one box and put it in a new one?

Hardware RAID can be a nightmare. I can only speak for ZFS, but my array has traveled between four machines. Just run the export, move the disks to the new machine, and import the pool. Done.
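For anyone who hasn’t done it, the whole migration really is just two commands. A minimal sketch, assuming a pool named `tank` (a placeholder; substitute your own pool name):

```shell
# On the old machine: flush everything and cleanly detach the pool
sudo zpool export tank

# Physically move the disks, then on the new machine list importable pools
sudo zpool import

# Import by name; -d points the scan at stable by-id device paths
sudo zpool import -d /dev/disk/by-id tank
```

Scanning `/dev/disk/by-id` instead of `/dev/sdX` matters because drive letters almost always shuffle when disks move between machines or controllers.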


Been running FreeNAS in a VM with a Dell PERC H200 passed through for a year and a half now, and it works just great. Note that current versions of FreeNAS require 8GB of RAM just to install as part of their prerequisite check, but I believe you can reduce this once it’s installed. I will say that it depends on whether you just love FreeNAS, or there’s a Jail you really want to run. For example, mine is a VM so I could consolidate my hardware to a single box, and it runs all the HTPC jails (Sonarr, Radarr, Jackett, and Plex). If (or rather, when) I feel like separating my FreeNAS VM back out to a physical box, I can export the FreeNAS config, migrate the drives, and import the config once I’ve installed FreeNAS on the new physical box (I may need to reconfigure the NICs).

If you want ZFS and basically jails, a simple option is just to install any Linux distro with ZFS support and use something like Docker. I used Proxmox for a while for its ZFS support and separated things into VMs, LXCs, and later Docker, until I moved my pfSense to dedicated hardware. Now I use Gentoo for my storage (ZFS) and Docker containers.
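The Docker-instead-of-jails approach boils down to bind-mounting ZFS datasets into containers. A hedged sketch using Jellyfin as the example service; the `/tank/...` paths are placeholders for your own datasets:

```shell
# Run a media server container with its config and media living on ZFS.
# /tank/appdata/jellyfin and /tank/media are example dataset mount points.
docker run -d \
  --name jellyfin \
  -p 8096:8096 \
  -v /tank/appdata/jellyfin:/config \
  -v /tank/media:/media:ro \
  --restart unless-stopped \
  jellyfin/jellyfin
```

Because all the state lives under `/tank`, the container itself is disposable: you can destroy and re-create it, or move the pool to another box and re-run the same command there.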


This is all great info. Looks like a VM with ZFS is in my future. While running ZFS natively sounds interesting, I really like my current base Manjaro setup, and from my cursory reading the rolling updates can cause problems, since ZFS on Linux lags slightly behind kernel updates. Besides, setting up VMs lets me export / import the ZFS pool across several systems. I will be sure to give an update once this thing gets off the ground.

Manjaro does allow manual selection of kernels, though there’s very little overhead in going to a VM. Make sure you pass the hard drives through as physical devices. That makes the pool the most portable, in my experience, unless things have changed since I started with ZFS. Another quick note: you can port a ZFS pool from BSD to Linux (ZoL), but not easily the other way.
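For a libvirt/KVM setup, one way to hand the VM a whole physical disk (short of passing through the entire HBA) is `virsh attach-disk` with a stable by-id path. A sketch with placeholder names; `nasvm` and the disk ID are assumptions, not anything from your box:

```shell
# Attach the raw physical disk to the guest "nasvm" as /dev/vdb.
# Use the by-id path so the mapping survives reboots and re-enumeration.
# --persistent writes the change into the domain XML as well.
virsh attach-disk nasvm \
  /dev/disk/by-id/ata-WDC_WD10EXAMPLE \
  vdb --persistent
```

This way ZFS in the guest talks to the real device, so it sees real sector sizes and real write errors, instead of whatever a virtual disk file on a datastore reports back.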

That’s good to know. Parts have been ordered, so we will see how it goes.


I’ll put the note out now about setting a proper ashift value. I normally use the Arch wiki for my ZFS docs, but I thought I’d warn you early: if ashift is set wrong, it’s not something you can ever change without destroying the array.
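Concretely, ashift is the base-2 logarithm of the sector size ZFS assumes for a vdev (9 = 512 B, 12 = 4 KiB), and it is fixed at vdev creation. A sketch with placeholder pool and disk names:

```shell
# Create a mirrored pool, forcing 4 KiB sectors (ashift=12), which is the
# safe choice for modern drives that may lie about having 512 B sectors.
# "tank" and the by-id paths are placeholders for your own hardware.
sudo zpool create -o ashift=12 tank mirror \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2

# Verify what the pool actually ended up with
sudo zdb -C tank | grep ashift
```

Getting this wrong in the 512 B direction silently costs write performance on 4K drives, and the only fix is to destroy the pool and re-create it, hence the warning.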

If it’s a dedicated machine, I’m not sure it’s worth virtualizing at all. Just keep a current disk image of the boot drive.

I can see that for an all-in-one desktop, having a headless OS that boots your NAS and desktop OS separately could be cool.

This is basically how I’ve set up my VMware all-in-one box. It autostarts my VMs, so once I get access to my desktop, all my other services are already running.

With parts ordered, here is the plan…
- Bump the host system to 64GB of RAM
- Add the LSI HBA to the system, to be passed to the VM(s)
- Add a silly little 4-bay 2.5" hot-swappable drive cage to a front 5.25" bay (who needs a dedicated DVD burner anymore?). I am using 2.5" drives for space reasons, and I have free ones to play with from old laptop upgrades (yes, they will be slow; too slow? TBD)
- Find / test a distro that does all the things (FreeNAS, OpenMediaVault, Ubuntu…)
- Play around with Jails / Docker
- Profit?

BTW, what are people’s favorite / fun Docker containers?

Not sure what you mean by ‘fun Docker container’ in that context. But I run temporary databases in Docker, a Plex server, and JRiver Media Center (made that one myself, so I’d say it’s my favorite, ofc :P). And that is pretty much it. I ran HandBrake for a while, but I’m through with ripping our collection of physical media, so there was not really any use for it anymore.

There are a lot of great Docker containers! :wink:
For me it’s Jellyfin, SABnzbd, Radarr, Sonarr, Tvheadend, Nextcloud, bitwarden_rs, a Minecraft modpack server (if I’m bored), jlesage/firefox, Duplicati (freaking life saver), and Traefik (https://github.com/jlesage/docker-nginx-proxy-manager is a good and really easy alternative)

A NAS inside a VM is a learning exercise only, IMHO.

You want the NAS to have direct access to the hardware in the real world to do drive fault tolerance.

If you put it inside a VM, you are at the mercy of hardware failure on the underlying datastore. Unless THAT is RAID, a failure means you’re SOL.

And if it IS RAID, then why a NAS? Just for sharing out? Hmmm. If it is for learning, though, go nuts. You can easily play with removing virtual disks etc. to observe what happens. :+1:

No need to run a NAS in a VM; just use one of the Manjaro ZFS kernels:

* linux-lts-zfs 
* linux-latest-zfs
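Setting that up is a hedged sketch like the following, using the `linux-latest-zfs` package named above (the exact package names depend on your Manjaro branch, so check your repos first):

```shell
# Install the kernel plus matching prebuilt ZFS module, then reboot
# into the new kernel before loading the module.
sudo pacman -S linux-latest-zfs

# After rebooting into that kernel, load ZFS and check for pools
sudo modprobe zfs
zpool status
```

Since the ZFS module is built and shipped alongside the kernel package, rolling kernel updates can’t leave you with a kernel that the out-of-tree module doesn’t support yet, which addresses the lag concern raised earlier in the thread.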

I’ve been running natively encrypted ZFS for 2 years with no problems.


ZFS doesn’t care about your hardware in any way, shape, or form, and ZFS is what they plan to use. I’ve moved an array, with no thought or issues, from one machine to another that shared no hardware or software except the drives themselves, and this is expected behavior. Traditional RAID is a nightmare, and if they were using traditional RAID you would be correct, but this is ZFS RAID.

That is in no way what I am referring to.

What I meant is that if you run ZFS in a VM, ZFS only knows what the hypervisor tells it.

If the underlying virtual disks are stored on a datastore (as opposed to physical pass through) then ZFS can’t do jack shit for actual fault tolerance or data integrity.

If you’re doing raw disk mapping (i.e., passing physical disks to the VM), then things are different, but in that case I’d still suggest using a physical box if you can.