NAS + Homelab Virtualization build... Suggestions?

How goes it Level1 forums?

I am planning a mostly consumer grade NAS/virtualization build, and I’m a little stuck as to which direction I want to go.

Specs overview:
Ryzen 2700X (for now… might upgrade to a 3900X/3950X at some point)
AsRock Rack X470D4U
2x 16GB Samsung non-ECC RAM (they’re on the QVL)
6x 4TB WD Red HDD (no HBA card as of now, might buy one if needed)
pcpartpicker[.]com/list/mjMjvW

I originally planned to just run FreeNAS with the 6 WD reds in a RAIDZ2 volume, but I keep hearing that the virtualization on FreeNAS can leave a little to be desired. I’ve read/heard a little about virtualizing FreeNAS under proxmox by passing the disks through, but tbh that makes me a little nervous about what would happen if anything went wrong.

At this point I’m kind of leaning towards installing proxmox and just setting the NFS/samba shares up manually. After all, isn’t proxmox just a specialized version of debian (This isn’t rhetorical, I don’t know that much about proxmox)? Are there any downsides to this approach? Other than not having the GUI, that is…

My main predicament is that I would really love to have a true type 1 hypervisor for homelab purposes, but I also want my NAS to be as dependable as possible.

Any suggestions???

It works fine, just not the easiest, as most people don't have tons of BSD experience.

Don't worry about that, the disks can always be moved to another system that supports ZFS. Since you aren't importing an existing pool it will be even easier. You can also just use ZFS on proxmox and cut out the freenas VM.

That's about it, to be honest.

If you have time, make a proxmox box; if not, just do freenas.
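
If you do end up putting ZFS straight on proxmox, creating the pool is basically a one-liner. A rough sketch for the six Reds; the /dev/disk/by-id names below are placeholders, so substitute your actual drive IDs and triple-check before running anything destructive:

zpool create -o ashift=12 tank raidz2 \
  /dev/disk/by-id/ata-WDC_WD40EFRX_DISK1 /dev/disk/by-id/ata-WDC_WD40EFRX_DISK2 \
  /dev/disk/by-id/ata-WDC_WD40EFRX_DISK3 /dev/disk/by-id/ata-WDC_WD40EFRX_DISK4 \
  /dev/disk/by-id/ata-WDC_WD40EFRX_DISK5 /dev/disk/by-id/ata-WDC_WD40EFRX_DISK6
zfs create -o compression=lz4 tank/storage   # a dataset for the NAS side of things

Using /dev/disk/by-id instead of /dev/sdX keeps the pool stable even if the kernel shuffles drive letters around.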

That’s good to hear.

Also, really good to hear…

I was still on the fence between using proxmox or not, but I think I may give it a go. I’m fairly confident in my linux sysadmin abilities so this may be the best fit for me.

Thanks for the advice and the quick reply @mutation666!

PS - Will the ZFS ARC work the same way in linux as it does on FreeNAS? The main reason I’m asking is that I keep hearing that you want 1GB of RAM per 1TB of disk space. Is this also true for ZoL?

It's suggested, but by no means required.

It should be the same guideline. Just shoot for as much as you can afford. Also, don't worry too much about RAM being on the QVL; it's not that big of a deal in my experience. I would shoot for ECC if you can afford it (not that it's truly needed, but better to have another safety net than not). If you need a suggestion: ECC 32GB sticks at 2666 are roughly $250, so putting 2 in would give you plenty for VMs and the file system.

Proxmox is a hypervisor with ZFS and not built as a NAS OS… so all those… NASy… features would have to be managed through the CLI. I went with Proxmox because I haven't had the best luck with bhyve and I know a lot more about Debian than I do FreeBSD. Though I still use pfSense, it is mostly through the webUI for now.

I still have a FreeNAS VM I use. But I am going to use Proxmox and containers so I can ditch VMware, still have my ZFS NAS, run VMs, and have Open vSwitch.
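
For anyone curious what "managed through the CLI" actually looks like, here's a minimal sketch of sharing a dataset straight from Proxmox/Debian. The pool/dataset (tank/storage) and the user name are hypothetical, so adjust to taste:

apt install samba nfs-kernel-server
zfs set sharenfs="rw=@192.168.1.0/24" tank/storage   # ZFS manages the NFS export itself
# Samba wants a share block appended to /etc/samba/smb.conf:
[storage]
    path = /tank/storage
    read only = no
    valid users = youruser
# then: smbpasswd -a youruser && systemctl restart smbd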

Yeah, I have no problem rolling my own NAS functionality, and the more I stew on it the more I am leaning towards a proxmox box with a couple of smallish SSDs and a big RAIDZ2 vdev for long-term storage/backups.

+1 on the pfSense… I'm loving mine so far. I haven't really done anything fancy with it yet. Just set up a remote access VPN and pushed out a couple VLANs for my guest wifi / IoT network and another one for my lab network.

Is this what the FreeBSD/FreeNAS virtualization is based on?

You hear about how the FreeNAS name is going away as of version 12.0? It’s gonna be TrueNAS only now… separate free and enterprise versions from what I hear

That is the suggestion for an enterprise type of build, with many users hitting the array. You can ignore that nonsense.

Until you get around the 100TB mark, you don't need to worry about having more than 8-16GB of RAM available to keep ZFS running. ZFS has to keep track of some things in memory, and the larger the array, the more it has to track, but it's not very much. Everything past that is just a nice RAM cache. ZFS will use half your RAM by default, so if you want it to use more, you have to change that.
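
If you want to see what the ARC is actually doing on Linux, the stats are right there in /proc, and arc_summary ships with zfsutils-linux on Debian/Proxmox (as far as I know):

arc_summary | less                                      # human-readable ARC report
grep -E '^(size|c_max) ' /proc/spl/kstat/zfs/arcstats   # current ARC size and its cap, in bytes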

Never use deduplication; it requires ZFS to keep track of orders of magnitude more information. Then yes, you will absolutely NEED tons of RAM, and if you ever don't have enough you're in for a very bad time.

ECC is great and wonderful, and not that expensive. It's also not mandatory or "needed" any more than it is for any other filesystem. All it does is close off one more potential vector for data corruption.

RAIDZ2 and HDDs are great for bulk storage, but it's not going to be fun if you want to stick VMs on that, due to lousy IOPS and write latency.

You want a pool made of mirrored SSDs for VMs, which you can then back up to your RAIDZ2 pool.
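
Something along these lines, as a rough sketch - the pool/dataset names and the SSD device IDs here are made up:

zpool create -o ashift=12 fastpool mirror /dev/disk/by-id/ssd-1 /dev/disk/by-id/ssd-2
zfs create fastpool/vms                   # VM storage on the mirrored SSDs
zfs create tank/backups                   # landing spot on the bulk RAIDZ2 pool
zfs snapshot fastpool/vms@nightly         # snapshot, then replicate it over
zfs send fastpool/vms@nightly | zfs receive tank/backups/vms
# later runs can send incrementals: zfs send -i @nightly fastpool/vms@nightly2 | zfs receive tank/backups/vms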

ZFS tuning performance considerations for data storage
Here's what I do for my openmediavault (debian based, just like proxmox) pool meant for bulk storage. This is not intended for VMs or for BSD-based systems. You've been warned.

  • zfs set acltype=posixacl yourpool/yourstoragedataset //Mo’posix mo’better, stores attributes in a more efficient manner. Possible OS portability issues?

  • zfs set compression=lz4 yourpool/yourstoragedataset //If this wasn’t default already, something is very wrong.

  • zfs set xattr=sa yourpool/yourstoragedataset //This would be the default if FreeBSD and illumos supported it, but they don't. Definite OS portability issues.

  • zfs set atime=off yourpool/yourstoragedataset //Don't write to the pool every time you even look at something

  • zfs set relatime=off yourpool/yourstoragedataset //Don't write to the pool every time you even look at something

  • zfs set recordsize=1M yourpool/yourstoragedataset //Or even 4M. Results in fewer IOPS needed to read data, and thus can feel snappier, as long as you aren't constantly reading/writing only a small part of a file.

Ashift (set on pool creation only) should be either 12 or 13. Note that some SSDs have firmware that is optimized for 4K block size operations, and may perform better set to 12 even if they technically use 8K block sizes. Only testing can tell you which it is.
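
If you want to sanity-check that, something like this works (tank is a placeholder; zdb needs the pool imported):

lsblk -o NAME,MODEL,PHY-SEC,LOG-SEC   # what the drives report as physical/logical sector sizes
zdb -C tank | grep ashift             # what ashift the existing vdevs were actually created with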

Create /etc/modprobe.d/zfs.conf and then add:
# max ARC RAM usage, 16 GiB converted to bytes
options zfs zfs_arc_max=17179869184
# leave prefetch enabled (0 = enabled); good for spinning disks with sequential data
options zfs zfs_prefetch_disable=0
# wait time in seconds before flushing data to disks
options zfs zfs_txg_timeout=10
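
Those only take effect when the module loads, but most of them can also be poked live through /sys, and on a root-on-ZFS Proxmox/Debian install you probably want to regenerate the initramfs so the .conf survives a reboot - at least that's my understanding:

echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max   # apply the 16 GiB ARC cap right now
cat /sys/module/zfs/parameters/zfs_txg_timeout              # read back any current value
update-initramfs -u                                         # bake the modprobe.d options into the initramfs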

ZFS tuning performance considerations for VM/database storage
For VMs or databases, I don't really know a whole lot. In practice I just use a barely functional VM on my main system that I turn on every few months for some basic compatibility needs.

What I do know is that, in regards to ZFS, you want to match the recordsize to the workload's typical I/O size (for a database, that usually means its page size).
You likely don't care, but I thought I'd mention that database tables and logfiles are very different workloads. Logfiles can use a recordsize of anywhere from the default (128K) up to 1M.

A VM via KVM using the qcow2 format should sit on a ZFS dataset with a recordsize of 8K.
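
In practice that just means giving the images their own dataset, something like this (names are hypothetical):

zfs create -o recordsize=8K -o compression=lz4 fastpool/qcow2   # dataset for qcow2 images
qemu-img create -f qcow2 /fastpool/qcow2/testvm.qcow2 32G       # image lives on that dataset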

Note that ZFS is not perfect. Out of the box it's "ok" at most things, but you really need to clarify your use case and have a proper hardware topology and tuning to support that use case. There are also some ongoing performance issues that are being worked on slowly, but those mostly show up only if you are playing around with a load of NVMe drives.

There are also fun ways to speed things up, such as the following (rough example commands below):

  • SLOG (often mistakenly called a ZIL; the ZIL is ALWAYS present) for speeding up those sync writes.
  • L2ARC, for when your workload is larger than your maxed-out RAM, but smaller than some SSDs.
  • Allocation classes. Use faster storage to hold dedup tables, metadata and/or small I/O instead of sending it to the slow storage.

But in most cases you are better off just sticking your VM on your fast storage to begin with, rather than try to accelerate slow storage.
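
If you do go down that road anyway, bolting those onto an existing pool is straightforward. A sketch with placeholder device names (allocation classes need a reasonably recent ZFS, 0.8+, and the special vdev should be mirrored, since losing it loses the pool):

zpool add tank log mirror /dev/disk/by-id/slog-1 /dev/disk/by-id/slog-2     # SLOG for sync writes
zpool add tank cache /dev/disk/by-id/l2arc-1                                # L2ARC
zpool add tank special mirror /dev/disk/by-id/meta-1 /dev/disk/by-id/meta-2 # allocation class for metadata
zfs set special_small_blocks=32K tank                                       # also send small blocks (<=32K) to the special vdev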

I guess I didn’t mention it earlier but I kind of planned on using a couple SSDs for running the VMs/containers. Not sure if I am going to mirror them yet or not… but I am going to use the ZFS pool strictly for long term storage/NAS-y type stuff.

I’m actually fleshing out my plan now. Finally got all my parts in. Got tired of waiting on a delayed 2700x so I just bit the bullet and bought a 3900x for the extra threads.

Thanks for all the great info dude!

EDIT: This 3900x runs WAY cooler than the 3700x I've had for a while. Which is awesome, but makes me wonder WTF is going on there…? I guess the 3900x having the extra chiplet could be partly to blame.

I currently have a FreeNAS metal install that I want to move to Proxmox. It's a daunting task; like the OP, I want to keep my current ZFS pool from FreeNAS. Should/Can I virtualize my current metal install as a Proxmox VM? (And if so, what's the safest/best route to do this?) Or should I create the FreeNAS VM from scratch and import my current ZFS pool into that? Will I run into trouble with my datasets either way?

TBH, I'm kind of thinking you should try importing your FreeNAS ZFS pool directly in Proxmox and ditch FreeNAS entirely - whatever services you used to run on FreeNAS you'd now set up from scratch again in Proxmox, either in containers or VMs.
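
The pool import side is usually painless - something along these lines, with tank as a placeholder (GELI-encrypted FreeNAS pools are the big exception, since Linux can't read GELI):

zpool import              # list pools ZFS can see on the attached disks
zpool import -f tank      # -f because the pool was last active on another system (the FreeNAS box)
zfs list -r tank          # make sure all the datasets came along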
