TrueNAS SCALE - any reason not to manually run apps in the native copy of Docker?

In this guide, Wendell installed Debian in a VM on TrueNAS SCALE in order to run apps in Docker, because the current state of apps on TrueNAS SCALE through the native web interface is… not very functional.
Unfortunately, he didn't really give a reason for using a VM as opposed to just running the apps in Docker directly on the host through the CLI. The added complexity (and overhead) of running a VM and then connecting to storage over NFS (even if it's just routed locally) seems a bit unnecessary.

Does anyone here know of any "gotchas" with manually setting up Portainer directly on TrueNAS and using that to deploy apps like Nextcloud (basically bypassing the TrueNAS web UI for apps)? A rough sketch of what I mean is below.
I'm mainly interested in TrueNAS SCALE for Docker and the ZFS boot pool. Otherwise I may as well just run TrueNAS Core on top of XCP-ng and pass through the storage controller.
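For the record, the kind of manual setup I have in mind is nothing fancy. Here's a minimal sketch using the Docker SDK for Python (docker-py), assuming the Portainer CE image and the usual socket mount; the container name, port mapping and volume name are just placeholders, not anything from Wendell's guide:

```python
# Minimal sketch: start Portainer CE against the host's Docker daemon.
# Assumes docker-py is installed (pip install docker) and the script runs
# on the TrueNAS host with access to /var/run/docker.sock.
import docker

client = docker.from_env()

portainer = client.containers.run(
    "portainer/portainer-ce:latest",   # assumed image tag
    name="portainer",
    detach=True,
    restart_policy={"Name": "always"},
    ports={"9443/tcp": 9443},          # Portainer's HTTPS web UI
    volumes={
        "/var/run/docker.sock": {"bind": "/var/run/docker.sock", "mode": "rw"},
        "portainer_data": {"bind": "/data", "mode": "rw"},  # named volume for Portainer state
    },
)
print(f"Started {portainer.name} ({portainer.short_id})")
```

From there, everything else (Nextcloud and friends) would be deployed through Portainer itself rather than through the SCALE apps UI.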

Edit:
Ok, so apparently Wendell DID give his reasons, just not in the YouTube video about the guide (I had only watched the video when I posted this). I would still rather not take on the downsides of running a VM, such as less-than-seamless memory sharing, and if I'm going to have to live with those anyway I might just go for a dedicated hypervisor…
Time to install SCALE and play around with it before deciding, I guess.


If you install your Docker containers directly on TrueNAS, you've created a system dependency: your containers now depend on the TrueNAS install. This means your containers could break during a TrueNAS upgrade or configuration change. It also adds steps when moving your workload to a new machine, since you then need to reinstall and migrate your Docker containers. In other words, it adds friction to common maintenance tasks.

A VM doesn't care what version of TrueNAS you're using, so upgrades are much easier. Moving a VM between hosts is easier than reinstalling and migrating Docker containers. It also gives you the option of using VM High Availability in the future with no reconfiguration of your containers. This removes the system dependency and makes your infrastructure easier to manage and more flexible.

Having said all of the above, deploying Docker containers in a VM does add more layers that you need to maintain. It also adds the inefficiencies you mentioned (RAM). In a homelab environment you might prefer the simplicity of fewer layers; just understand that it makes upgrades and host moves more complex. Do what makes sense for you and your infrastructure.

One reason to manually juggle Docker containers is the ability to bypass this Kubernetes crap, which keeps a CPU core pegged at 100%.

My MariaDB, InfluxDB and AdGuard Docker containers survived upgrades just "fine". It does mean you have to back up and restore the daemon.json on every upgrade, but that's about it.
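In case it helps anyone, that backup step is trivial to script. A rough sketch in Python, copying /etc/docker/daemon.json out to a dataset before an upgrade and back afterwards; /mnt/tank/backups is just an example path, point it at one of your own datasets:

```python
# Rough sketch: save /etc/docker/daemon.json to a ZFS dataset before a
# SCALE upgrade, and copy it back into place afterwards.
# /mnt/tank/backups/docker is a placeholder path, not a real default.
import shutil
from pathlib import Path

DAEMON_JSON = Path("/etc/docker/daemon.json")
BACKUP_DIR = Path("/mnt/tank/backups/docker")   # example dataset path


def backup() -> Path:
    """Copy daemon.json into the backup dataset (run before the upgrade)."""
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    dest = BACKUP_DIR / "daemon.json"
    shutil.copy2(DAEMON_JSON, dest)
    return dest


def restore() -> None:
    """Copy the saved daemon.json back into place (run after the upgrade)."""
    shutil.copy2(BACKUP_DIR / "daemon.json", DAEMON_JSON)


if __name__ == "__main__":
    print(f"Backed up to {backup()}")
```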