Hello. New forum member and new homelabber here so apologies in advance if this is a well worn topic.
I started my homelabbing journey over the summer and have reached a point where I need to step up my storage game.
After debating DIY and off-the-shelf NAS options, I realized I have a GMKtec NucBox M7 (AMD Ryzen 7 PRO 6850H, 8C/16T up to 4.7GHz, dual 2.5G NICs, 32GB DDR5 RAM, 512GB PCIe SSD, dual USB4) sitting around underutilized that should be more than capable of the task from a compute standpoint. Currently it’s running Proxmox with only experimental VMs that I spin up as needed, no “in production” services.
I’m considering purchasing a Terramaster D4-320 External DAS, connecting it to the M7 via USB and running TrueNAS. This seems to be the cheapest and most straightforward approach to get where I want to go without buying a bunch of new hardware.
The primary concern is obviously managing a ZFS pool over USB. I know it’s not “ideal,” but am I crazy for even considering it if this is going to be the primary network storage for my home? (family pictures, personal files, system backups, media server storage, security cam footage, etc.)
Secondly, I love the flexibility/portability of keeping my hosts virtual. Would running TrueNAS as a VM on the existing Proxmox install, so I can still use my experimental VMs when needed, still be an option? Or should I stop considering this mini PC a Proxmox node and install TrueNAS bare metal instead?
Looking for any and all advice or suggestions from users with experience on either of these topics. Thanks in advance.
TrueNAS in a Proxmox VM is totally doable and fine. I run two instances on two servers; each instance has a dedicated HBA passed through completely to the TrueNAS VM.
TrueNAS drives over USB is, in my opinion, a recipe for disaster and data loss.
It is totally doable and fine, but the truth is that Proxmox is Debian and has native ZFS support.
Why not run the storage bare metal and skip the complexity and reliability issues of passthrough?
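For what it’s worth, getting a pool going on the host is only a couple of commands. This is just a rough sketch; the pool name “tank”, the RAIDZ1 layout, and the disk IDs are placeholders, not a recommendation for your particular drives:

```
# Create the pool straight on the Proxmox host (names/IDs are placeholders)
# Addressing disks by-id keeps the pool stable if device letters shuffle
zpool create -o ashift=12 tank raidz1 \
  /dev/disk/by-id/ata-DISK1 \
  /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3 \
  /dev/disk/by-id/ata-DISK4

# A few datasets for the usual suspects, with lightweight compression
zfs set compression=lz4 tank
zfs create tank/media
zfs create tank/backups
```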
I used to run a VM with FreeNAS (the BSD-based predecessor to TrueNAS Core) with two passed-through LSI HBAs back in 2014 when I was running VMware ESXi, but when I switched from ESXi to Proxmox in 2016, I just exported my pools in FreeNAS and imported them in native ZFS on Linux in Debian.
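The migration itself was basically one command on each side, something like this (the pool name is just a placeholder):

```
# In the FreeNAS/TrueNAS shell, before powering it off
zpool export tank

# On the Proxmox host, once it can see the disks
zpool import          # lists importable pools
zpool import tank     # add -f if the pool was last used by another system
zpool status tank     # confirm every vdev shows ONLINE
```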
It just seemed like less of a risk than dealing with PCIe passthrough which can be flaky on occasion.
It also winds up being more efficient and more performant to have the local VMs and LXC containers that need access to the storage pool just use it as a local file system, rather than going through emulated network connections and the overhead of NFS/SMB.
It has worked very well for me.
Trusted VMs get local direct access to ZFS, as do trusted LXC containers (which wind up being most of my guests). I have one untrusted VM that needs storage access, so for that one I just created a large zvol (an emulated block device on ZFS) and pass the emulated device through for direct access by that VM, so it doesn’t have access to anything it shouldn’t. (Most of this can be configured in the Proxmox GUI if you are not a fan of the Linux command line like I am.)
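Roughly, that looks like this on the command line (the container/VM IDs, dataset names, and the 500G size below are just placeholders):

```
# Bind-mount a dataset into a trusted LXC container (CTID 101 is a placeholder)
pct set 101 -mp0 /tank/media,mp=/mnt/media

# For the untrusted VM: carve out a zvol and hand it over as a raw disk,
# so that guest only ever sees this block device, not the rest of the pool
zfs create -V 500G tank/vm-untrusted
qm set 200 -scsi1 /dev/zvol/tank/vm-untrusted
```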
For clients and other devices that are not running on the Proxmox host, I export my NFS shares directly from Proxmox, and for devices that use SMB, I wound up creating a Linux container that has local access to the ZFS pool and shares certain folders on it using Samba.
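The sharing side is nothing exotic either; the subnet, share name, and user below are placeholders:

```
# NFS exported straight from the Proxmox host (nfs-kernel-server installed)
# /etc/exports
/tank/media  192.168.1.0/24(rw,sync,no_subtree_check)
# then reload the exports:  exportfs -ra

# SMB from an LXC container that has the dataset bind-mounted at /mnt/media
# /etc/samba/smb.conf (fragment)
[media]
    path = /mnt/media
    read only = no
    valid users = myuser
```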
It has been running this way now for 9 years through multiple system and drive upgrades and some failed drive replacements and it has been absolutely bulletproof.
For me it’s hardware consolidation. Been running fine for the past 5 or so years.
*edited to add- There are plenty of cores on the Epyc CPUs I run, and the ECC RAM is already populated in there. It makes sense for me to get as close to full utilization of the hardware I already have and am powering as I can, rather than add more to my seemingly always-full rack. The only NFS shares are media for streaming, so overhead isn’t a concern there. The LXCs and VMs have a different dedicated HBA that serves the solid-state storage array for Proxmox and all its ZFS needs.
I’m not saying “add more hardware”. I agree that would be a silly thing to do.
I’m saying, run the storage bare metal on the host, the same Proxmox box you are running the TrueNAS VMs on. Proxmox comes with ZFS pre-installed, and even has some basic GUI functions for managing it if you want to do it that way (personally I prefer to manage it from the CLI, though).
In my setup there is bare metal SSD storage for Proxmox on each Proxmox server, totally separate from the storage for the TrueNAS instances.
edited to add - TrueNAS doesn’t like sharing drives, so I have an HBA that is passed completely to TrueNAS so it thinks the drives are bare metal, and it runs fine. This is the most reliable way I have found to run Proxmox and TrueNAS on the same box with the H12SSL-i motherboard while utilizing the backplane of the 828 chassis. I have no desire to run Proxmox as a NAS, nor do I want to run TrueNAS as a virtualization server. This lets me run both the way I want within the confines of the hardware and chassis I have. Fits my uses perfectly. OP asked if you can virtualize TrueNAS. I answered with a yes, as it’s what I do.
Backplane connections, and not having the drives pass through Proxmox before they hit the TrueNAS instance, are the limiting factors here. If only one HBA were used, all the drives would have to pass from Proxmox through to TrueNAS, which can (and used to) cause issues with TrueNAS. Running two HBAs allows one to be passed directly to TrueNAS, so Proxmox never sees the drives, and that removes the corruption issues TrueNAS faced.
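For reference, the passthrough itself is only a couple of commands on the Proxmox side (the PCI address and VMID below are placeholders; IOMMU has to be enabled in the BIOS/UEFI and the kernel, and pcie=1 wants the q35 machine type):

```
# Find the HBA's PCI address
lspci -nn | grep -i -e LSI -e SAS

# Hand the whole controller to the TrueNAS VM (VMID 105 is a placeholder)
qm set 105 -hostpci0 0000:03:00.0,pcie=1
```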
As I mentioned previously, I did virtualize FreeNAS (the TrueNAS predecessor) back in 2014 when I was running ESXi, but as soon as I switched to Proxmox in 2016 and the host was essentially just a Linux box I could run stuff bare metal on, I ran all of my storage bare metal on the host.
Thank you both for your insight. The only reason I’m looking at this DAS in particular is that it seems to present and connect each drive individually, rather than splitting multiple drives across the same connection. Obviously the interface itself still poses a risk, but I’ve heard stories of people running this setup successfully. Just kind of curious how stable it might be.
I’m in the “storage directly on Proxmox host” camp. In my case, it’s a BTRFS pool with 8 drives. I NFS export some portions directly and also run Samba in a container. Another container also serves DLNA.
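If you go the same route, the pool creation itself is about as simple as it gets; the device names below are just examples:

```
# Multi-device BTRFS pool, raid1 for both data and metadata (example devices)
mkfs.btrfs -L tank -d raid1 -m raid1 /dev/sda /dev/sdb /dev/sdc /dev/sdd

mount /dev/sda /mnt/tank
btrfs filesystem usage /mnt/tank   # shows allocation across the devices
```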
I’ve spent a few months trying to like TrueNAS, but eventually I gave up on it and re-purposed the hardware as Proxmox Backup Server. The GUI just got in the way of things that are much easier using the command line.
I have not had good experiences with Proxmox 9 and hardware passthrough. It’s an unnecessary level of complexity for something as important as the main NAS storage.
I run my TrueNAS SCALE in a VM with an HBA passed through without issue in Proxmox, and I love how it makes backups so much easier. It makes restoring/migrating really easy too, as long as your HBA still works or you have another. It is possible to pass through onboard SATA controllers, but that can sometimes be tricky, so I am not sure how USB would fare. If you have a PCIe USB controller where you can pass through the entire controller, I could see that being usable.
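If it helps, these are roughly the two ways I’d try it (the IDs and VMID below are placeholders, and I haven’t tested this with the D4-320):

```
# Option 1: pass a single USB device through by vendor:product id
lsusb
qm set 105 -usb0 host=174c:55aa     # id is a placeholder

# Option 2: pass the whole PCIe USB controller so the guest owns the bus
lspci -nn | grep -i usb
qm set 105 -hostpci1 0000:04:00.3   # address is a placeholder
```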