ZFS Proxmox move - Suggestions please

Hi

I’ve recently shifted my Ubuntu server over to Proxmox, where I separated out its functions into different VMs/containers.

Currently, the last bit remaining of this Ubuntu server (I currently have the disks just passed through) is the NAS part.

It does the following:

Holds the zpool
Auto snapshots via zfs-auto-snapshot
Auto removes snapshots via zfs-prune-snapshots
NFS server
SMB server
Runs syncoid, which I use to zfs send to my other Linux desktop for backup (see the sketch after this list)
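
On the Ubuntu box this is all just cron: zfs-auto-snapshot drops its own entries into /etc/cron.hourly and friends, and the send side is a one-line job along these lines (pool and host names are placeholders, not my real ones):

    # /etc/cron.d/syncoid-backup -- nightly push of the pool to the desktop over SSH
    0 2 * * * root /usr/sbin/syncoid --recursive tank root@desktop:backup/tank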

I really don’t know my best options here, so I was just after some suggestions.

It is my understanding that TrueNAS can achieve most of this (I am unsure about the syncoid zfs sending part; ideally I wouldn’t have to change my scripting too much from the backup-to-Linux-desktop perspective).

I also believe there is additional overhead from it being its own BSD VM (go for TrueNAS Scale instead?), though I guess this is minimal. I do like the idea of being able to manage things through a GUI.

Any suggestions welcome

There are a baker’s dozen ways to do this. Depending on goals, yes, TrueNAS Core is a valid option. TrueNAS Core bound to AD in a VM with HBAs passed through is actually what I use myself, but I need the AD connectivity, and the BSD kernel with ZFS uses less RAM.

TrueNAS Scale would also be fine, especially if you have a small number of users and shares.

Either way, it is best practice to pass through a disk controller, not individual disks, to the system containing your NAS OS.
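
For what it’s worth, once IOMMU is enabled, handing a whole controller to a VM is a one-liner; a sketch, with the VM ID and PCI address as placeholders for whatever your box actually reports:

    # find the controller's PCI address
    lspci | grep -i -e sata -e raid
    # pass the whole controller through to VM 100 (hypothetical ID)
    qm set 100 --hostpci0 0000:01:00.0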

The last real option is using Proxmox itself as the NAS. This is only recommended in situations where you have VERY simple requirements and low user counts. But Proxmox is Debian, and can fairly easily have a Samba configuration on a ZFS pool directly.
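
A rough sketch of that minimal setup (pool, dataset, share, and user names are placeholders):

    # on the Proxmox host
    apt install samba
    zfs create tank/share

    # append a share to /etc/samba/smb.conf, then: systemctl restart smbd
    [share]
        path = /tank/share
        read only = no
        valid users = youruser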

You could also run SMB in an LXC (privileged) container and mount a ZFS dataset. But personally I also use a TrueNAS VM with passed-through SATA controllers. On my storage server I use SMB shares as VM storage (even if some sysadmins are flinching at that).

What’s the reasoning behind this? Until I get everything sorted, I currently have a combination of SATA and PCIe NVMe drives, with Proxmox booting from a SATA drive as well. (This will change once I am sorted, so I guess I can pass through the disks now and the controllers later.)

It sounds like TrueNAS Core is probably the best bet; however, am I able to use syncoid with it? Or something similar to send backups to my Linux desktop?

I tried (admittedly not that hard) to use TrueNAS Scale, and this is what ended up stopping me. I, like you, started using ZFS with Ubuntu Server, so I have all of my snapshotting/syncing set up using scripts and cron jobs. TrueNAS (Scale, at least) very, very much does not let you use scripting and cron jobs, based on my limited testing. Shell access is restricted, and you definitely cannot install other software like Sanoid/Syncoid. I just ended up using the built-in ZFS functionality in Proxmox and then passing ZFS mounts to an LXC container that runs Samba. This allowed me to reuse all of my scripting (in addition to my smb.conf) from my old Ubuntu install and get up and running quickly.

ZFS needs direct access to the controller/disks, without any abstraction layer in between, in order to function properly; its integrity guarantees depend on its view of the physical disks and their write flushes being accurate. That’s one reason you don’t use it on top of hardware RAID, for example. Someone else might give a better technical explanation.

I’m not against using TrueNAS if it has alternatives that let me send the pool and snapshots to my Linux desktop for backup; I’m just not sure this is an option.

The automated snapshot/expiry appears to be covered by the GUI.

It has snapshot sync functionality built into the GUI, and I’m pretty sure it’s just a GUI wrapper for zfs send/recv, so it should work to send to any other ZFS system as long as the ZFS versions aren’t too far apart.

I’ve not tried it, but I’m pretty sure that’s how it works.
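
For reference, what it would be wrapping is something like this (dataset and host names made up):

    # initial full replication
    zfs send tank/data@snap1 | ssh desktop zfs recv backup/data
    # later sends only need the delta between two snapshots
    zfs send -i tank/data@snap1 tank/data@snap2 | ssh desktop zfs recv backup/data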

Mmm… to be honest, I struggled with the generic zfs send until someone on this forum recommended syncoid, lol

For what it’s worth, I ended up setting up a ZFS pool in Proxmox and then making an OpenMediaVault VM and giving it most of the capacity of that pool. Super easy, and it does, I think, all of what you want, with the exception of the snapshotting, which you could do just in Proxmox.

Since it’s a Debian-based distro, it’s not that different from going with Scale.

Oh yeah, syncoid is great, especially for incremental stuff. For a simple single snapshot send/receive it’s easy enough to use the built-in tools, but once you get into managing incremental sends I’m extremely happy that syncoid exists.
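
It collapses all of that full-then-incremental bookkeeping into one command you can just rerun from cron; hypothetical names again:

    syncoid tank/data root@desktop:backup/data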

Presumably you added your scripting to Proxmox, as you can’t run syncoid/auto-snapshot from a mount?

I think if I was starting from scratch, TrueNAS would be a pretty compelling tool. But I already have my scripts and conf files, and I’m a grumpy old man set in my ways. So the fact that there’s more or less just one way to use TrueNAS (GUI only, please) didn’t sit that well with me.

Yes, I see that wasn’t clear. Sanoid/Syncoid are installed on my Proxmox host, and my sync cron jobs run on the host. The smb.conf file is in the Samba LXC.
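
In case it helps, the host side is roughly this shape (dataset names and retention numbers here are placeholders rather than my actual config):

    # /etc/sanoid/sanoid.conf
    [tank/share]
        use_template = production

    [template_production]
        hourly = 36
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes

    # root crontab: sanoid takes/prunes snapshots, syncoid replicates nightly
    */15 * * * * /usr/sbin/sanoid --cron
    0 3 * * * /usr/sbin/syncoid --recursive tank root@desktop:backup/tank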

Makes sense. This would offer the advantage of not having to spin up another VM/container first, as my other VMs are based on raw files on that zpool.

Is running things natively on Proxmox a good idea, in terms of memory management etc.?

I think virtualization purists would suggest that you shouldn’t have anything running on the host, since hosts are supposed to be more or less interchangeable and replaceable. It means that if your Proxmox host shits the bed, it’s one more thing you have to set up again. But since ZFS is native to Proxmox, and all of the settings for ZFS live in the pool/datasets, there’s very little additional setup within Proxmox itself. Proxmox even has automatically scheduled monthly scrubs on by default. So from the standpoint of simply running your ZFS pool in Proxmox, there’s very little downside.

In fact, I’d say there’s a decent upside from a memory management standpoint: you can let ZFS use as much of your host’s RAM for ARC as it wants, and to the extent your VMs need additional RAM, ZFS will evict things from ARC and shrink it. If you run TrueNAS in a VM, you need to allocate the VM’s memory ahead of time, and you can’t unallocate it without shutting down the VM.

For example, I have 128GB of RAM on my primary Proxmox node. Given the ZFS default of using up to 50% of system memory for ARC (I think; someone can correct me if I’m wrong), that means I can have 64GB of ARC for my ZFS pool and still have 64GB of RAM available for my host and VMs. If I instead run a TrueNAS VM and give it 64GB of RAM, I still have the same 64GB of RAM available for my host and other VMs, but now my ZFS pool only has up to 32GB of ARC available (50% of the VM’s 64GB). This is, of course, just an example, and you can manually tune your ARC differently. I think I may also have read that TrueNAS Core handles ARC a little more efficiently and may use a larger percentage of system RAM than the 50% that I’m remembering.
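
And if you do want to cap ARC on the host rather than live with the default, it’s a single module parameter; a sketch using 16GiB as an arbitrary value:

    # /etc/modprobe.d/zfs.conf (then update-initramfs -u and reboot)
    options zfs zfs_arc_max=17179869184

    # or change it on the fly:
    echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max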

Negatives of using Proxmox as your NAS (the Forbidden NAS, heh) are that you’ll need to manually install, maintain, and back up your sanoid/syncoid configurations and scripts. But if you use Proxmox as only a VM and ZFS host, that’s pretty much all you’re doing that’s different from a vanilla Proxmox install. So if you need to blow it away and start over, there’s not a ton that you need to do once you reinstall Proxmox.

Thanks for your clear answer. I think this is the solution I am going with. I will just adjust my ARC as necessary

This setup requires a privileged container for the mount, right?

Bind mounts are easiest to work with in a privileged container, and that’s how I run my sharing LXC. If you’re really comfortable with LXC configuration, you can map user IDs and then bind mount directories in an unprivileged container. Getting my sharing LXC set up this way (the “proper” way) is on my list of things to do. Unfortunately, it’s very far down the list and has been for some time. I’m sure I’ll get to it eventually.
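
In the privileged case the bind mount is a single line in the container’s config; the unprivileged (“proper”) variant adds the uid/gid mapping on top. A sketch, where the container ID, paths, and uid/gid 1000 are all placeholder choices:

    # /etc/pve/lxc/101.conf -- privileged container
    mp0: /tank/share,mp=/mnt/share

    # unprivileged variant: same mp0 line, plus map uid/gid 1000 straight through
    lxc.idmap: u 0 100000 1000
    lxc.idmap: g 0 100000 1000
    lxc.idmap: u 1000 1000 1
    lxc.idmap: g 1000 1000 1
    lxc.idmap: u 1001 101001 64535
    lxc.idmap: g 1001 101001 64535

    # and permit root to delegate that id, in /etc/subuid and /etc/subgid:
    root:1000:1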

Do you have an example of the config you used to pass through this bind mount? I think I am on the wrong track with

    mp0: /NAS,mp=/mnt/NAS