My issue with doing this is mainly that TrueNAS no longer supports installing apps on encrypted volumes due to problems with k8s (k3s?) and pool migrations; this is discussed on reddit.
There is also a reply there suggesting a work-around to encrypt the storage, but there isn't much follow-up on how that method works out. Has anyone tried it? I also don't need pool migration to work in my case, so hopefully this would be safe.
A second option I was looking into was to mount the container's data on an encrypted volume, but without much experience here it looks like the application already manages its own storage,
and I'm not really sure whether that can be overridden/replaced in a scalable way that won't break down the line, e.g. if I were to put /bitnami/mariadb on an encrypted volume… (see the sketch below for roughly what I mean).
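Just a sketch of the idea, not something I have tried: the pool name tank and the dataset names are made up, and I don't know yet whether the chart tolerates having its storage pointed at a host path.

```
# Passphrase-encrypted dataset intended to hold the app's data
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase tank/appdata-enc
zfs create tank/appdata-enc/mariadb

# Then, in the app's storage settings, point the data volume (whatever
# ends up at /bitnami/mariadb inside the container) at
# /mnt/tank/appdata-enc/mariadb as a host path.
```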
So if someone knows what the most sane approach here would be, please chime in.
So I tried the reddit approach linked above: create an encrypted dataset, add child datasets with the same structure, rsync everything over, remove the unencrypted ix-applications, and rename the encrypted dataset to ix-applications.
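From memory, the commands involved were roughly these. Just a sketch: tank is a placeholder pool name, and it assumes the apps (and ideally the k3s service) are stopped while copying and renaming, so nothing changes underneath the rsync.

```
# New passphrase-encrypted dataset next to the existing one
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase tank/ix-applications-enc

# Recreate the same child-dataset layout under the encrypted dataset
zfs list -r -H -o name tank/ix-applications | tail -n +2 \
  | sed 's|^tank/ix-applications|tank/ix-applications-enc|' \
  | xargs -n1 zfs create -p

# Copy the data across, preserving hardlinks, ACLs and xattrs
rsync -aHAX /mnt/tank/ix-applications/ /mnt/tank/ix-applications-enc/

# Swap them: move the unencrypted dataset aside (or destroy it once
# you're confident), then rename the encrypted copy into place
zfs rename tank/ix-applications tank/ix-applications-old
zfs rename tank/ix-applications-enc tank/ix-applications
```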
The encryption is set up with a passphrase, so after a reboot the dataset is locked. In the GUI the applications show as running, but they are not, and data such as containers and history is missing. There is also an error message in the GUI notifications under the bell icon:
Failed to sync TRUENAS catalog: [EFAULT] Failed to clone 'https://github.com/truenas/charts.git' repository at '/var/run/middleware/ix-applications/catalogs/github_com_truenas_charts_git_master' destination: [EFAULT] Failed to clone 'https://github.com/truenas/charts.git' repository at '/var/run/middleware/ix-applications/catalogs/github_com_truenas_charts_git_master' destination: Cloning into '/var/run/middleware/ix-...
2024-04-07 00:20:20 (America/Los_Angeles)
Upon unlocking the dataset with the passphrase, the containers come back online and work as normal (in my case mariadb and phpmyadmin). I unlocked the dataset a minute or two after boot; not sure whether that matters if there is a timeout on starting the pods, just a thought.
Yeah, so this is how the pods work. If they continue to fail, they'll just use a progressively longer backoff to keep retrying the mount (at least, that's standard Kubernetes behavior; iX might have done some other stupid things with their distro).
Can't say why the GUI reports them as operational when they're down, but I'm glad it's all working out for you now.
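If you want to watch that happen, something like this should show it from a shell; just a sketch, assuming SCALE's bundled k3s, and ix-mariadb / the pod name are only examples:

```
# Pods stuck on the locked dataset sit in Pending/ContainerCreating
# (or CrashLoopBackOff) until the storage comes back
k3s kubectl get pods -A

# The events on a stuck pod show the mount failures and the retry backoff
k3s kubectl describe pod -n ix-mariadb <mariadb-pod-name>
```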