TrueNAS Scale Native Docker & VM access to host [Guide]

I know this; I've been following it since 2022.
It's not just apt that they've blocked now.
Try it yourself: install Dragonfish in a VM and run the script without running "install-dev-tools" first, and you will understand.
Anyway, the solution is very simple: just run the "install-dev-tools" command at the beginning of the script.
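
Concretely, that means something like this at the top (a minimal sketch; the full script further down does exactly this):

#!/usr/bin/env bash
# unlock apt/dpkg and the developer toolchain before anything else (Dragonfish)
install-dev-tools
# ...rest of the enable-docker script...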

Can you share the log output from the script?

Of course, it’s right below:

enable-docker.log

§§ Starting script! §§
§§ Checking apt and dpkg §§
§§ /bin/apt not executable, fixing… §§
§§ /bin/apt-cache not executable, fixing… §§
§§ /bin/apt-cdrom not executable, fixing… §§
§§ /bin/apt-config not executable, fixing… §§
§§ /bin/apt-extracttemplates not executable, fixing… §§
§§ /bin/apt-ftparchive not executable, fixing… §§
§§ /bin/apt-get not executable, fixing… §§
§§ /bin/apt-key not executable, fixing… §§
§§ /bin/apt-mark not executable, fixing… §§
§§ /bin/apt-sortpkgs not executable, fixing… §§
§§ /bin/dpkg not executable, fixing… §§
§§ /bin/dpkg-deb is already executable §§
§§ /bin/dpkg-divert is already executable §§
§§ /bin/dpkg-maintscript-helper is already executable §§
§§ /bin/dpkg-query is already executable §§
§§ /bin/dpkg-realpath is already executable §§
§§ /bin/dpkg-split is already executable §§
§§ /bin/dpkg-statoverride is already executable §§
§§ /bin/dpkg-trigger is already executable §§
All files in /bin/apt* are executable
§§ apt update §§
§§ Linking apt sources to your storage for persistence §§
§§ Please note that with this you’ll have to update the links manually on your storage when there’s an update §§
§§ Fix the trust.gpg warnings §§
§§ Docker Checks §§
§§ Keyrings Exist §§
Docker executable not found
§§ Which Docker: §§
§§ Docker storage-driver §§
§§ Docker daemon.json §§
§§ Checking file: /etc/docker/daemon.json §§
§§ §§
§§ {
"data-root": "/mnt/ssd/Docker",
"storage-driver": "overlay2",
"exec-opts": [
"native.cgroupdriver=cgroupfs"
]
} §§
§§ Updating file: /etc/docker/daemon.json §§
§§ Enabling and starting Docker §§
§§ Which Docker: §§
§§ Docker Version: §§
§§ Script Finished! §§

Even though the log states that all files are executable, if you go to the shell and run apt, you will receive the following message:

Package management tools are disabled on TrueNAS appliances.

Attempting to update SCALE with apt or methods other than the SCALE web interface can result in a nonfunctional system.

Another thing I discovered: parts of the filesystem appear to be mounted read-only; /tmp is, for sure.
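
If you want to check for yourself, findmnt (part of util-linux, already on SCALE) shows the options of the mount containing a given path; look for "ro":

findmnt -T /tmp -no TARGET,OPTIONS   # -T resolves the mount that contains the path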

They are trying their best to ensure that no changes can be made to TrueNAS.
That's bad for me, because I'm only using it because I can easily make the changes I'd like.
With each release, it gets more complicated.

Yep, they also did this when removing Docker in Cobia.
Do you know what happens when you run that command and it's already enabled?
I will also test it on Cobia to make sure it won't break the script.

I know; it only does the checks, and if it has already been run once it finishes very quickly, just confirming that everything is correct.

The first time it runs, it really unlocks everything, and that takes a while.

So here I put it at the beginning of the script, right after the variables, and everything is fine.

I've tested this on Cobia, and it looks like it behaves the same. It does make some lines redundant, like apt update, which install-dev-tools already runs.

I’ll dig into this deeper later and modify the script accordingly. Let’s hope this is the last thing they do to try and lock us in.

Another thing I discovered:
The NVIDIA driver only activates the CUDA components, such as the nvidia-uvm module, when we activate Apps.

When Apps are disabled, I can't make the switch to Docker, even though the GPU passthrough shows up as correct.

ffmpeg displays an error due to the lack of nvidia-uvm and nvidia-uvm-tools, which are only active when Apps are turned on.
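
You can verify that state directly; assuming the stock module and device names, something like:

lsmod | grep nvidia_uvm                       # is the kernel module loaded?
ls -l /dev/nvidia-uvm /dev/nvidia-uvm-tools   # do the device nodes exist?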

I’m almost giving in and using the native k8s Apps :frowning:

Or maybe I could give Unraid or even Proxmox a try, just thoughts :man_shrugging:t2:

Off Topic:
I just saw that Jip-Hop, from the TrueNAS forum, continued his work on jails, and even iX itself featured his project in the documentation for the new Dragonfish on 01/26.

What do you think of this, Scepterus?
From what I understand of jailmaker, could you adapt your script to work within a jail?

Maybe that would be the best option as iX blocks more and more things?

Are you sure your driver and kernel versions match? Did you try the CUDA test container? If so, what did it output when you ran it?
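
For reference, the usual smoke test is to run nvidia-smi inside a CUDA container; the image tag here is only an example:

docker run --rm --gpus all nvidia/cuda:12.3.1-base-ubuntu22.04 nvidia-smi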

Saw that as well, and there's a reason I stick to this solution: it is simpler and probably performs better, but it's more for people who are just starting out or who just want basic Docker and nothing fancy.

Haven't looked into that, but it's a shell script, so it should work; you might have to change the way paths are referenced in it.

I do not think they have anything more to break. And the fact that you found that command means they are aware of us and don't mind us continuing; they just want to mitigate the flood of support tickets from people who do not know what they are doing.

You can always use Wendell's solution of a VM, but you lose performance, or you can run Proxmox with TrueNAS inside to manage the storage alone.

But we got through this before; we'll figure this one out as well. Where there's a will, there's a way.

Yes, I’m sure the drivers match.

And I just discovered what iX did: they moved the module from the /lib/modules/$(uname -r)/ folder to the /lib/modules/$(uname -r)/updates/dkms/ folder and renamed it to nvidia-current-uvm.ko.

And when we activate the Apps, the module is moved back to the /lib/modules/$(uname -r)/ folder and renamed to nvidia-uvm.ko.

I’m creating a script to solve this, but basically what needs to be done is:


cp /lib/modules/$(uname -r)/updates/dkms/nvidia-current-uvm.ko /lib/modules/$(uname -r)/nvidia-uvm.ko

Update the module list with:
depmod -a

And then load the nvidia kernel modules with the command:
nvidia-container-cli -k -d /dev/tty info
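
Putting the three steps together, a minimal sketch (paths as observed on Dragonfish, run as root):

KERNEL="$(uname -r)"
# put the renamed module back where the driver expects it
cp "/lib/modules/$KERNEL/updates/dkms/nvidia-current-uvm.ko" \
   "/lib/modules/$KERNEL/nvidia-uvm.ko"
depmod -a                                 # rebuild the module dependency list
nvidia-container-cli -k -d /dev/tty info  # triggers loading of the nvidia modules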

Okay, resolved.

With the install-dev-tools command, plus the procedures above, we have Docker back, fully functional and with GPU passthrough to the containers.

Tested and working.

For me it's for two reasons. The first is performance: I tested it, and Docker is natively faster; not only container processing, but container startup is also better.

The second reason is simplicity: managing and organizing is much easier, mainly because I can use a Compose file and bring up an entire container stack at once.
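
For example, a hypothetical two-service stack (service names and images are placeholders), written and brought up in one go:

cat > docker-compose.yml <<'EOF'
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
EOF
docker compose up -d   # starts the entire stack at once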

I understand, but I don’t think it will be necessary yet :smiley:

I hope you are correct :raised_hands:

Honestly?
I don't like the solution Wendell came up with; before him I was already doing something similar with CoreOS.
Here I use XCP-ng on another machine just for bare-metal virtualization, so I think the Proxmox route is out of the question.
And yes, in the open-source world you really just need to have the will; everything has been solvable so far.
Let's hope nothing else comes from iX before the final release of Dragonfish :rofl: :joy:

Won’t it be easier to just link these? Also, if they go to those lengths, we could start thinking about just installing the nvidia driver with the full kernel ourselves.

I've seen it done by a few YouTubers who cover NAS software; I think Craft Computing and Lawrence Systems.

I agree, let’s hope nothing else breaks.

Are you talking about using a symbolic link instead of a copy?
That really does seem like a better option; I'll do it.
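
That is, replacing the earlier cp with something like:

KERNEL="$(uname -r)"
ln -s "/lib/modules/$KERNEL/updates/dkms/nvidia-current-uvm.ko" \
      "/lib/modules/$KERNEL/nvidia-uvm.ko"
depmod -a   # refresh the module list so the link is picked up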

As for installing the nvidia driver with the full kernel, I don’t think it will be necessary, at least not yet :smiley:

From what I saw, they even shipped additional packages that previously needed to be installed manually, such as "nvidia-container-toolkit" and "nvidia-container-toolkit-base".
Everything is already there. I don't know if this happened because of the changes they made to the k8s Apps or whatever, but for me it was a positive point.
The problem with installing new drivers is that they locked the system folders. Even with "install-dev-tools" some folders stay locked, like /boot; if you test the "update-initramfs -u" command you will receive this message:

root@truenas[~]# update-initramfs -u
ln: failed to create hard link '/boot/initrd.img-6.6.16-production+truenas.dpkg-bak': Read-only file system
cp: cannot create regular file '/boot/initrd.img-6.6.16-production+truenas.dpkg-bak': Read-only file system
root@truenas[~]#

Amen :raised_hands:

I think this would give us a bit more freedom. Right now we're limited to the kernel that iX Systems ships with the current version of TrueNAS; if we removed that and installed the drivers and kernel manually, we wouldn't have to rely on them and could always have the latest driver.

Why do you need this?

I don't need it; it would only be necessary if we wanted to update the NVIDIA driver, and since we don't need that yet, it's fine.

I finished making all the changes and checks needed for this to work correctly with Dragonfish.

The longest part was the checking, but I tried to cover all the possibilities.

I'm posting the complete script with my changes. They are in two blocks, one at the beginning and one at the end of the script, between the "Add By Finallf" comments.
Everything else in the script is exactly the same as the forum version.

I can see that certain things became redundant after the "install-dev-tools" command and will need updating; I think only the file-checking part, but it needs to be tested.

Scepterus, feel free to add this to your script and remove the "Add By Finallf" markers, as well as make modifications and updates as you see fit.

Enable Docker Script - Updated for Dragonfish + Nvidia runtime
#!/usr/bin/env bash

# Enable docker and docker-compose on TrueNAS SCALE (no Kubernetes)
# Follow the guide in this post before using this script:
# https://forum.level1techs.com/t/truenas-scale-native-docker-vm-access-to-host-guide/190882
# This script is a hack! Use it at your own risk!!
# Edit all the vars under:
# Vars you need to change
# to point to your storage
#
# Schedule this script to run via System Settings -> Advanced -> Init/Shutdown Scripts
# Click Add -> Type: Script and choose this script -> When: choose to run as Post Init
exec >/tmp/enable-docker.log
# Vars you need to change:
# set a path to your docker dataset
docker_dataset="/path/to/Docker"
# set the docker_daemon path on your storage for it to survive upgrades
new_docker_daemon="/path/to/TrueNAS/daemon.json"
# apt sources persist
new_apt_sources="/path/to/TrueNAS/aptsources.list"

echo "§§ Starting script! §§"

## ↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓ Add By Finallf ↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓
echo "§§ Activating Developer Mode... §§"
install-dev-tools
echo "§§ Developer Mode Enabled. §§"

KERNEL=$(uname -r)
DUNAME=/lib/modules/"$KERNEL"/
DIDKMS=/lib/modules/"$KERNEL"/updates/dkms/

echo "§§ Checking if nvidia modules are started... §§"
if [[ ! -f "$DUNAME"nvidia-uvm.ko ]]; then
  echo "§§ nvidia-uvm.ko does not exist, creating module with symbolic link... §§"
  ln -s "$DIDKMS"nvidia-current-uvm.ko "$DUNAME"nvidia-uvm.ko
  echo "§§ Updating the module list... §§"
  depmod -a
  echo "§§ Starting Nvidia modules... §§"
  nvidia-container-cli -k -d /dev/tty info
else
  echo "§§ nvidia-uvm.ko exists, checking if it is started... §§"
  # /dev/nvidia-uvm is a device node, so test existence with -e rather than -f (regular file)
  if [[ ! -e /dev/nvidia-uvm ]]; then
    echo "§§ nvidia-uvm not started, starting... §§"
    nvidia-container-cli -k -d /dev/tty info
  else
    echo "§§ nvidia-uvm is already started. §§"
  fi
fi
## ↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑ Add By Finallf ↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑

echo "§§ Checking apt and dpkg §§"
for file in /bin/apt*; do
  if [[ ! -x "$file" ]]; then
    echo " §§ $file not executable, fixing... §§"
    chmod +x "$file"
  else
    echo "§§ $file is already executable §§"
  fi
done

for file in /bin/dpkg*; do
  if [[ ! -x "$file" ]]; then
    echo "§§ $file not executable, fixing... §§"
    chmod +x "$file"
  else
    echo "§§ $file is already executable §§"
  fi
done
echo "All files in /bin/apt* are executable"

echo "§§ apt update §§"
sudo apt update &>/dev/null
echo "§§ Linking apt sources to your storage for persistance §§"
echo "§§ Please note that with this you'll have to update the links manually on your storage when there's an update §§"
aptsources="/etc/apt/sources.list"
if [[ -f "$aptsources" ]] && [[ ! -L "$aptsources" ]]; then
  cp "$aptsources" "$new_apt_sources"
  mv "$aptsources" "$aptsources".old
fi
if [[ ! -f "$new_apt_sources" ]]; then
  touch "$new_apt_sources"
fi
if [[ ! -f "$aptsources" ]]; then
  ln -s "$new_apt_sources" "$aptsources"
fi
echo "§§ Fix the trust.gpg warnings §§"
# Create a directory for the new keyrings if it doesn't exist
sudo mkdir -p /etc/apt/trusted.gpg.d

# Find all keys in the old keyring
for key in $(gpg --no-default-keyring --keyring /etc/apt/trusted.gpg --list-keys --with-colons | awk -F: '/^pub:/ { print $5 }'); do
  echo "Processing key: $key"
  # Export each key to a new keyring file in the trusted.gpg.d directory
  gpg --no-default-keyring --keyring /etc/apt/trusted.gpg --export --armor "$key" >/etc/apt/trusted.gpg.d/"$key".asc
done

# Backup the old keyring
mv /etc/apt/trusted.gpg /etc/apt/trusted.gpg.backup

#Docker Checks
echo "§§ Docker Checks §§"
sudo apt install -y ca-certificates curl gnupg lsb-release &>/dev/null

if [[ ! -f /etc/apt/keyrings/docker.gpg ]]; then
  echo "§§ Missing Keyrings §§"
  sudo mkdir -m 755 -p /etc/apt/keyrings
  curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
  sudo chmod 755 /etc/apt/keyrings/docker.gpg
else
  echo "§§ Keyrings Exist §§"
fi
if ! grep -q "https://download.docker.com/linux/debian" /etc/apt/sources.list; then
  echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee -a /etc/apt/sources.list >/dev/null
  sudo apt update &>/dev/null
else
  echo "§§ Docker List: §§"
  cat /etc/apt/sources.list
fi

Docker=$(which docker)
DockerV=$(docker --version 2>/dev/null)
DCRCHK=$(sudo apt list --installed 2>/dev/null | grep docker)
if [[ -z "$Docker" ]] || [[ -z "$DCRCHK" ]]; then
  echo "Docker executable not found"
  sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin &>/dev/null
  # re-read these now that docker has just been installed, so the echoes below aren't empty
  Docker=$(which docker)
  DockerV=$(docker --version)
fi
sudo chmod +x /usr/bin/docker*
sudo install -d -m 755 -- /etc/docker
if [[ ! -f /etc/docker.env ]]; then
  touch /etc/docker.env
fi
. ~/.bashrc
echo "§§ Which Docker: $Docker §§"

## set the Docker storage-driver
echo "§§ Docker storage-driver §§"
version="$(cut -c 1-5 </etc/version | tr -d .)"

if ! [[ "${version}" =~ ^[0-9]+$ ]]; then
  echo "version is not an integer: ${version}"
  exit 1
elif [[ "${version}" -le 2204 ]]; then
  storage_driver="zfs"
elif [[ "${version}" -ge 2212 ]]; then
  storage_driver="overlay2"
fi

## HEREDOC: docker/daemon.json
echo "§§ Docker daemon.json §§"
read -r -d '' JSON <<END_JSON
{
  "data-root": "${docker_dataset}",
  "storage-driver": "${storage_driver}",
  "exec-opts": [
    "native.cgroupdriver=cgroupfs"
  ]
}
END_JSON

## path to docker daemon file
docker_daemon="/etc/docker/daemon.json"

if [[ ${EUID} -ne 0 ]]; then
  echo "§§ Please run this script as root or using sudo §§"
elif [[ "$(systemctl is-enabled k3s)" == "enabled" ]] || [[ "$(systemctl is-active k3s)" == "active" ]]; then
  echo "§§ You can not use this script while k3s is enabled or active §§"
elif ! zfs list "$docker_dataset" &>/dev/null; then
  echo "§§ Dataset not found: $docker_dataset §§"
else
  echo "§§ Checking file: ${docker_daemon} §§"
  if [[ -f "$docker_daemon" ]] && [[ ! -L "$docker_daemon" ]]; then
    rm -rf "$docker_daemon"
  fi
  if [[ ! -f "$new_docker_daemon" ]]; then
    touch "$new_docker_daemon"
  fi
  if [[ ! -f "$docker_daemon" ]]; then
    ln -s "$new_docker_daemon" "$docker_daemon"
  fi

  # Read the current JSON from the file
  current_json=$(cat "$docker_daemon" 2>/dev/null)
  echo "§§ $current_json §§"

  # Check if current_json is empty and if so, set it to an empty JSON object
  if [[ -z "$current_json" ]]; then
    current_json="{}"
  fi

  # Merge the current JSON with the new JSON
  merged_json=$(echo "$current_json" | jq --argjson add "$JSON" '. * $add')
  echo "§§ $merged_json §§"
  # Check if the merged JSON is different from the current JSON
  if [[ "$merged_json" != "$current_json" ]] || [[ -z "$current_json" ]]; then
    echo "§§ Updating file: $docker_daemon §§"
    echo "$merged_json" | tee "$docker_daemon" >/dev/null
    if [[ "$(systemctl is-active docker)" == "active" ]]; then
      echo "§§ Restarting Docker §§"
      systemctl restart docker
    elif [[ "$(systemctl is-enabled docker)" != "enabled" ]]; then
      echo "§§ Enable and starting Docker §§"
      systemctl enable --now docker
    fi
  fi
fi

echo "§§ Which Docker: $Docker §§"
echo "§§ Docker Version: $DockerV §§"

## ↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓ Add By Finallf ↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓
## Configure NVIDIA runtime in docker
echo "§§ Configuring Nvidia runtime in docker... §§"
RUNCHK=$(grep "nvidia-container-runtime" /etc/docker/daemon.json)

if [[ -z "$RUNCHK" ]]; then
  nvidia-ctk runtime configure --runtime=docker
  echo "§§ Nvidia runtime configured. §§"
  echo "§§ Restarting Docker §§"
  systemctl restart docker
else
  echo "§§ NVIDIA runtime is already configured. §§"
fi
## ↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑ Add By Finallf ↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑

echo "§§ Script Finished! §§"

Most of the changes are there to enable the NVIDIA runtime in Docker.
For those who don't use a GPU in containers, the only thing that needs to be done is to add this line:

sudo install-dev-tools

right at the beginning of the script.


Thanks, I'll take a look. I already added the dev tools to my script, but without sudo, as it is already running as root.

Since the script is supposed to run on startup, it will run as root anyway. Also, it is best practice to save the username to a var and use that var in double quotes.

find commands are slow; you want to avoid them wherever possible. You can do the same with a simple grep of the file, or a cat | grep.

Always use double brackets; that's more efficient in bash. Also, no need for sudo here either.
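
For example, the NVIDIA-runtime check at the end of the script could drop the temporary variable entirely; a sketch:

# grep -q exits 0 when the pattern is found, so no variable is needed
if ! grep -q "nvidia-container-runtime" /etc/docker/daemon.json; then
  nvidia-ctk runtime configure --runtime=docker
  systemctl restart docker
fi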

I didn't understand this part. Username?
There is nothing in the script that uses a username.
If you're talking about the part that has uname -r, that is something different:

Regarding the other advice, I will adopt it right now :smiley:

I just edited the post with these changes.

Yeah, sorry, for some reason my brain saw whoami. Either way, my suggestion still stands: define it as a var and use it in double quotes.
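
A minimal sketch of that pattern, matching what the script already does:

KERNEL="$(uname -r)"         # capture the value once
echo "/lib/modules/$KERNEL/" # then always expand it inside double quotes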


TrueNAS Scale - Setting up Sandboxes with Jailmaker (youtube.com)

Seems relevant. It seems to be the pseudo-supported method for doing this in TrueNAS going forward.

Sandboxes (Jail-like Containers) | TrueNAS Documentation Hub
Jip-Hop/jailmaker: Persistent Linux ‘jails’ on TrueNAS SCALE to install software (docker-compose, portainer, podman, etc.) with full access to all files via bind mounts thanks to systemd-nspawn! (github.com)

This was discussed before; I still maintain that the method here is more beginner-friendly.
