I added a way to get persistence between updates for the sources list and the Docker daemon config by putting them in your storage and symlinking to them.
Here are the instructions I also put in the guide:
Edit all the lines under "Vars you need to change": point the docker_dataset var to the dataset you created in stage 1.
As for the other vars, I recommend creating a TrueNAS folder on your storage for everything related to TrueNAS operations, then pointing them to files there.
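For example, with a hypothetical pool called tank and a TrueNAS folder on it, the vars could look something like this (substitute your own pool and dataset names):
# Example values only, adjust to your own storage layout
docker_dataset="/mnt/tank/docker"                    # the dataset created in stage 1
new_docker_daemon="/mnt/tank/TrueNAS/daemon.json"    # docker daemon config kept on storage
new_apt_sources="/mnt/tank/TrueNAS/aptsources.list"  # apt sources list kept on storage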
Another thing I created to help with updating without breaking stuff is the apps and services script:
#!/usr/bin/env bash
# File to save installed packages
file="installed_packages.txt"
if [[ ! -f "$file" ]]; then
    echo "File does not exist, taking first backup"
    # Save installed packages if the file doesn't exist
    dpkg --get-selections | grep -v deinstall >"$file"
else
    echo "File exists, restoring"
    # Check for missing packages and install them
    comm -23 <(sort "$file") <(dpkg --get-selections | grep -v deinstall | sort) | while read -r p; do
        package=$(echo "$p" | awk '{print $1}')
        echo "$package is not installed, installing..."
        apt-get install "$package" -y
    done
    # Update the list of installed packages
    dpkg --get-selections | grep -v deinstall >"$file"
fi
echo "Services"
# Function to extract service names and status from systemctl output
extract_service_info() {
awk '{ print $1,$3 }'
}
# Get the list of all running services before restart
systemctl --type=service --state=running --no-pager | extract_service_info >services_before.txt
# Get the list of all running services after restart
systemctl --type=service --state=running --no-pager | extract_service_info >services_after.txt
# Check each service that was running before the restart
while read -r service status; do
# If the service is not currently running, try to start it
if ! grep -q "^$service" services_after.txt; then
echo "Service $service is not running, trying to start it..."
sudo systemctl start "$service"
fi
done <services_before.txt
# Check for any services that were present before the restart but are not found after the restart
while read -r service status; do
if ! grep -q "^$service" services_after.txt; then
echo "Error: Service $service was present before the restart but is not found after the restart."
fi
done <services_before.txt
Let me know if there are any issues with it before I add it to the main guide.
Yeah, you're right about me doing things as multiple users. This is great advice and something I should be paying more attention to.
Unfortunately, this didn't fix the issue I've been having. Instead, starting from a new docker_dataset did the trick. I'm not sure what the root of the problem was, but I'm assuming the old dataset either got corrupted at some point or became incompatible with something that was updated. Not sure.
Anyway, thanks for all your help @Scepterus and Happy New Year!
By the way, I don't exactly remember what the advantages are of setting up my apps with this native form of Docker instead of using the TrueNAS SCALE apps. My impression is that Kubernetes, which the TrueNAS SCALE apps are based on, is overkill or something? I'm not quite sure what the actual/practical repercussions of using the apps are.
In any case, if I decide one day to move back to doing things the standard way (because, for example, I understand we're not supposed to be using TrueNAS SCALE like a Linux machine), can I continue from where I left off? That is, can I just plug the appdata from the various services I self-host (e.g. Jellyfin, NextCloud) into their corresponding TrueNAS SCALE apps and continue from there (i.e. no need to reconfigure anything)?
You probably had SMB shares on that dataset, or set it up differently from what the guide suggests. I think there's a warning there about SMB locking files.
They are numerous, but mostly it is performance. If you are using a single machine, there's no advantage to doing it the Kubernetes way. Plus, you are not limited to the apps in the catalog; you can add whatever Docker container you want manually.
That is some BS people are saying. As long as you don't go to TrueNAS for support after you modify it, you can do whatever you wish with YOUR TrueNAS.
You'll need to do the first-time setup again, but if you mount your settings on every container, they should come back, assuming you mount them correctly in the app. You might run into a version difference between the latest Docker version of that container and what's in the app, though.
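For example, a rough sketch of what mounting the settings looks like with plain docker run (the paths here are placeholders, adjust them to your own appdata layout):
# Hypothetical example: keep Jellyfin's config on the pool so it survives re-creating the container
docker run -d \
    --name jellyfin \
    -v /mnt/tank/appdata/jellyfin:/config \
    -v /mnt/tank/media:/media:ro \
    -p 8096:8096 \
    jellyfin/jellyfin
As long as that /config mount keeps pointing at the same appdata folder, you can tear the container down and bring it back without losing your settings.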
Generally speaking, the apps are limited and underperforming. I trust you can manage to get things working with this method, and you'll be fine.
P.S. I updated the guide, scripts, and compose files, so you might want to update your setup accordingly.
I'm new here and also new to TrueNAS, Docker, and everything related to them.
I welcome this guide and the work that has gone into it and thank everyone involved in advance.
I am happy to follow the guide with the aim of getting Paperless-ngx to run, so that I can also add new options to the Paperless config, for example separating pages by barcodes etc., all things that are specified in the YAML config. So far I've tried the native TrueNAS app for Paperless, which basically works, but I haven't managed to figure out how or where to add these additional options.
As a test I had Docker running under Windows 10 with Paperless on top of it, to try different options in the config and get a taste of Paperless, and it worked right away without any problems.
I'm curious to see if I can do the same under Docker on TrueNAS SCALE Cobia.
It's a pity that such things are not possible directly via the TrueNAS SCALE GUI. Basically, I think the system is very good - except for the app problem.
I'll be happy to keep you up to date on the success/failure. I'm very curious to see what happens.
Greetings and thanks for the step-by-step guide.
Franky
P.S.: Sorry for my English, I hope it's easy to read; I translated it from German with DeepL.
If you use the same docker compose file you used on Windows, you should get the same results. Docker is meant to be agnostic to the host system. The only difference is the way you specify paths for mounts.
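For example, the same bind mount written for each system (the host paths here are just placeholders):
# On Docker Desktop for Windows, the host path is written Windows-style:
docker run --rm -v "C:\paperless\consume:/data" alpine ls /data
# On TrueNAS SCALE, you point at the dataset's mount path instead:
docker run --rm -v "/mnt/tank/paperless/consume:/data" alpine ls /data
Everything inside the container stays the same; only the host side of the mount changes.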
You're right, of course - BUT - I'm not an expert in Linux, Docker, how to handle the whole thing on the CLI, or how to package it all in an update-proof way. I couldn't get the instructions to work, and the Docker-Compose app was removed from the TrueCharts catalog. So if you can give me more detailed information on this, I would be very grateful, especially with regard to update safety across new TrueNAS versions. I will reply shortly with more info about my system, setup, and my problems.
Best regards and thanks for your encouraging reply
Franky
Hm, maybe almost. The only thing that is different: I still have one TrueNAS app installed and running - maybe this might be causing the problem? Is it essential to remove all apps and unset the pool?
So now I have deleted all TrueNAS apps and unset the pool.
I put in the script, but after restarting nothing happens; no log file is written to /tmp.
When I start the file by hand, this is the output:
And this is my script:
#!/usr/bin/env bash
# Enable docker and docker-compose on TrueNAS SCALE (no Kubernetes)
# Follow the guide in this post before using this script:
# https://forum.level1techs.com/t/truenas-scale-native-docker-vm-access-to-host-guide/190882
# This script is a hack! Use it at your own risk!!
# Edit all the vars under:
# Vars you need to change
# to point to your storage
#
# Schedule this script to run via System Settings -> Advanced -> Init/Shutdown Scripts
# Click Add -> Type: Script and choose this script -> When: choose to run as Post Init
exec >/tmp/enable-docker.log
# Vars you need to change:
# set a path to your docker dataset
docker_dataset="/mnt/SSD-Pool/Applications/Docker"
# set the docker_daemon path on your storage for it to survive upgrades
new_docker_daemon="/mnt/SSD-Pool/Applications/Docker/TrueNAS/daemon.json"
# set the apt sources path on your storage for it to persist
new_apt_sources="/mnt/SSD-Pool/Applications/Docker/TrueNAS/aptsources.list"
echo "Ā§Ā§ Starting script! Ā§Ā§"
echo "Ā§Ā§ Checking apt and dpkg Ā§Ā§"
for file in /bin/apt*; do
if [[ ! -x "$file" ]]; then
echo " Ā§Ā§ $file not executable, fixing... Ā§Ā§"
chmod +x "$file"
else
echo "Ā§Ā§ $file is already executable Ā§Ā§"
fi
done
for file in /bin/dpkg*; do
if [[ ! -x "$file" ]]; then
echo "Ā§Ā§ $file not executable, fixing... Ā§Ā§"
chmod +x "$file"
else
echo "Ā§Ā§ $file is already executable Ā§Ā§"
fi
done
echo "All files in /bin/apt* are executable"
echo "Ā§Ā§ apt update Ā§Ā§"
sudo apt update &>/dev/null
echo "Ā§Ā§ Linking apt sources to your storage for persistance Ā§Ā§"
echo "Ā§Ā§ Please note that with this you'll have to update the links manually on your storage when there's an update Ā§Ā§"
aptsources="/etc/apt/sources.list"
if [[ -f "$aptsources" ]] && [[ ! -L "$aptsources" ]]; then
mv "$aptsources" "$aptsources".old
fi
if [[ ! -f "$new_apt_sources" ]]; then
touch "$new_apt_sources"
fi
if [[ ! -f "$aptsources" ]]; then
ln -s "$new_apt_sources" "$aptsources"
fi
echo "Ā§Ā§ Fix the trust.gpg warnings Ā§Ā§"
# Create a directory for the new keyrings if it doesn't exist
sudo mkdir -p /etc/apt/trusted.gpg.d
# Find all keys in the old keyring
for key in $(gpg --no-default-keyring --keyring /etc/apt/trusted.gpg --list-keys --with-colons | awk -F: '/^pub:/ { print $5 }'); do
echo "Processing key: $key"
# Export each key to a new keyring file in the trusted.gpg.d directory
gpg --no-default-keyring --keyring /etc/apt/trusted.gpg --export --armor "$key" >/etc/apt/trusted.gpg.d/"$key".asc
done
# Backup the old keyring
mv /etc/apt/trusted.gpg /etc/apt/trusted.gpg.backup
# Docker checks
echo "§§ Docker Checks §§"
sudo apt install -y ca-certificates curl gnupg lsb-release &>/dev/null
if [[ ! -f /etc/apt/keyrings/docker.gpg ]]; then
    echo "§§ Missing Keyrings §§"
    sudo mkdir -m 755 -p /etc/apt/keyrings
    curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
    sudo chmod 755 /etc/apt/keyrings/docker.gpg
else
    echo "§§ Keyrings Exist §§"
fi
if [[ ! -s /etc/apt/sources.list.d/docker.list ]]; then
    echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list >/dev/null
    sudo apt update &>/dev/null
else
    echo "§§ Docker List: §§"
    cat /etc/apt/sources.list.d/docker.list
fi
Docker=$(which docker)
DockerV=$(docker --version 2>/dev/null)
DCRCHK=$(sudo apt list --installed 2>/dev/null | grep docker)
if [[ -z "$Docker" ]] || [[ -z "$DCRCHK" ]]; then
    echo "Docker executable not found"
    sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin &>/dev/null
    Docker=$(which docker)
    DockerV=$(docker --version)
fi
sudo chmod +x /usr/bin/docker*
sudo install -d -m 755 -- /etc/docker
if [[ ! -f /etc/docker.env ]]; then
    touch /etc/docker.env
fi
. ~/.bashrc
echo "§§ Which Docker: $Docker §§"
## set the Docker storage-driver
echo "§§ Docker storage-driver §§"
version="$(cut -c 1-5 </etc/version | tr -d .)"
if ! [[ "${version}" =~ ^[0-9]+$ ]]; then
    echo "version is not an integer: ${version}"
    exit 1
elif [[ "${version}" -le 2204 ]]; then
    storage_driver="zfs"
elif [[ "${version}" -ge 2212 ]]; then
    storage_driver="overlay2"
fi
## HEREDOC: docker/daemon.json
echo "§§ Docker daemon.json §§"
read -r -d '' JSON <<END_JSON
{
    "data-root": "${docker_dataset}",
    "storage-driver": "${storage_driver}",
    "exec-opts": [
        "native.cgroupdriver=cgroupfs"
    ]
}
END_JSON
## path to the docker daemon file
docker_daemon="/etc/docker/daemon.json"
if [[ ${EUID} -ne 0 ]]; then
    echo "§§ Please run this script as root or using sudo §§"
elif [[ "$(systemctl is-enabled k3s)" == "enabled" ]] || [[ "$(systemctl is-active k3s)" == "active" ]]; then
    echo "§§ You cannot use this script while k3s is enabled or active §§"
elif ! zfs list "$docker_dataset" &>/dev/null; then
    echo "§§ Dataset not found: $docker_dataset §§"
else
    echo "§§ Checking file: ${docker_daemon} §§"
    if [[ -f "$docker_daemon" ]] && [[ ! -L "$docker_daemon" ]]; then
        rm -f "$docker_daemon"
    fi
    if [[ ! -f "$new_docker_daemon" ]]; then
        touch "$new_docker_daemon"
    fi
    if [[ ! -f "$docker_daemon" ]]; then
        ln -s "$new_docker_daemon" "$docker_daemon"
    fi
    # Read the current JSON from the file
    current_json=$(cat "$docker_daemon" 2>/dev/null)
    echo "§§ $current_json §§"
    # Check if current_json is empty and if so, set it to an empty JSON object
    if [[ -z "$current_json" ]]; then
        current_json="{}"
    fi
    # Merge the current JSON with the new JSON
    merged_json=$(echo "$current_json" | jq --argjson add "$JSON" '. * $add')
    echo "§§ $merged_json §§"
    # Check if the merged JSON is different from the current JSON
    if [[ "$merged_json" != "$current_json" ]] || [[ -z "$current_json" ]]; then
        echo "§§ Updating file: $docker_daemon §§"
        echo "$merged_json" | tee "$docker_daemon" >/dev/null
        if [[ "$(systemctl is-active docker)" == "active" ]]; then
            echo "§§ Restarting Docker §§"
            systemctl restart docker
        elif [[ "$(systemctl is-enabled docker)" != "enabled" ]]; then
            echo "§§ Enabling and starting Docker §§"
            systemctl enable --now docker
        fi
    fi
fi
echo "§§ Which Docker: $Docker §§"
echo "§§ Docker Version: $DockerV §§"
echo "§§ Script Finished! §§"
Stage 5 - Enabling VM host share access
For the last part, if you plan to use VMs and need them to access your host machine, we'll create a bridge in TrueNAS so the VMs can reach the host.
Hm, maybe I only need this step if I would like to use VMs, and it's not necessary for Docker?
That part has nothing to do with Portainer.
If you want to run Docker in a VM, you should have followed Wendell's guide, but that has a fraction of the performance of running it natively on TrueNAS.
Either way, good luck.
Ty, I guessed as much but wasn't sure.
I'm making good progress.
Thank you so much for this awesome and, yes, detailed guide. After removing those strange CR/LF characters from my script, it works right out of the box.
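In case anyone else runs into it: those CR characters sneak in when the script is edited on Windows, and something like this strips them again (dos2unix works too, if it is installed):
# Remove Windows line endings in place; adjust the filename to wherever you saved the script
sed -i 's/\r$//' enable-docker.sh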
A small note on Paperless - it's off-topic.
Paperless is now running, as well as your Watchtower and Portainer.
I had a little issue getting the folders for paperless-ngx to work and be shared via SMB, but now it works super fine, and the automatic page split and ASN barcode recognition work fine as well.
Maybe change the script so that the links to aptsources and the docker daemon file are updated if their paths are changed in the script after the first run.
I'm not new to Linux, but I am new to TrueNAS. That evening there was a lot of work, and I was really glad to find this post here - but I didn't take enough time to fully comprehend what was going on with that extra TrueNAS folder you mention above. Because I didn't want to create additional subfolders, I just used …/Docker/ to hold those two files.
When I had more time, I eventually decided to move them to another folder - so I moved them, changed their paths in the script, and just went ahead with my work, believing that on the next boot (I didn't disable the script, I guess that's OK?!) everything would be handled accordingly. Of course I didn't check myself, so it took me another day to realise by chance that those links pointed to nowhere.
Of course it's my fault alone.
I think it would be a nice addition to the script for more inexperienced users, or those like me who sometimes just don't take enough time and simply trust.
No no, that's right, you did that. I got the aptsources.old too.
What I meant was…
At first I configured the script to use the following paths:
docker_dataset="/mnt/p0/docker"
new_docker_daemon="/mnt/p0/docker/daemon.json"
new_apt_sources="/mnt/p0/docker/aptsources.list"
because I didn't want to create more subdirectories. Later I changed my mind and ran: sudo mv /mnt/p0/docker/daemon.json /mnt/p0/docker/TrueNAS/daemon.json
…
and changed the paths in the script, thinking that the symlinks the script created in …/docker/
if [[ ! -f "$aptsources" ]]; then
    ln -s "$new_apt_sources" "$aptsources"
fi
if [[ ! -f "$docker_daemon" ]]; then
    ln -s "$new_docker_daemon" "$docker_daemon"
fi
would get updated on the next reboot, when the script runs again.
The script is executed on every reboot, isn't it?
But the symlinks still pointed to the old locations, so I had to update them manually.
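Something like this in the script would cover it, I think - re-creating the link whenever its target no longer matches the path set in the vars (just a sketch using the existing $aptsources / $new_apt_sources vars; the same would apply to the daemon.json link):
# Sketch: if the symlink exists but points somewhere else, re-create it to match the configured path
if [[ -L "$aptsources" ]] && [[ "$(readlink -f "$aptsources")" != "$(readlink -f "$new_apt_sources")" ]]; then
    echo "§§ $aptsources points to an old location, re-linking §§"
    ln -sfn "$new_apt_sources" "$aptsources"
fi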