TrueNAS Scale Native Docker & VM access to host [Guide]

Hello everyone, I decided to write this guide, which is an amalgamation of all the solutions found in this post by Wendell:


This guide is based on my personal experience with 2 TrueNAS systems; follow it at your own discretion. I will try to help if I can, and update this guide if I missed something.


I want to thank the following people:
@wendell for the original guide, and for his help with everything.
A user by the name of Jip-Hop on GitHub, who wrote the guide on connecting the bridge so that VMs do not lose internet access, and who created the original script.
A user by the name of tprelog on GitHub for adapting the Script.
@talung for showing me that the basic Kubernetes implementation is flawed and inefficient, and for his changes to the script. (See comments)

Main issues:

The first issue this guide aims to solve is that TrueNAS Scale uses k3s to run its apps, which is neither very performant nor practical for most of us with just one NAS.

The second issue is that VMs on TrueNAS do not see the host. Wendell showed a working solution in his video, but it interferes with the solution to the first issue.

The reason I put the solution for VMs here is that when you implement the script for running Docker natively, the VMs lose access to the internet with the solution Wendell showed.
So after a little back and forth with the maintainer of the script, I was pointed to Jip-Hop’s post with the solution to have both.

Overview:

The solution will look like so:
1. Native Docker and Docker Compose
2. Portainer to manage all other Dockers easily
3. Optional settings and recommendations
4. Virtual bridge to allow VMs to see TrueNAS host shares

Let’s get started!

Table of Contents

Stage 1—Create the Dataset

Stage 2—Making Sure everything is ready for Docker

Stage 3—Getting Docker to run Natively

Stage 4—Portainer

Stage 4a—Optional things in Portainer

Stage 5—Enabling VM host share access

First off, if you have dockers and apps in TrueNAS, be sure to take note of their volumes and docker-compose files if relevant, as you’ll have to recreate them.

Stage 1—Create the Dataset

This is obvious, but it has some pitfalls that users new to TrueNAS and Linux in general might not be aware of.
We’ll need to create a Dataset in the pool you use for operations, not just cold storage. So if you have an SSD pool, or a more powerful pool dedicated to operations, choose that one.
In order to do that, in TrueNAS go to Storage and find the pool you will be using.

Click on the 3 dots to the right of the first line in that pool and click on Add Dataset.

Now you’ll see the creation screen. Give it a name, and change nothing else. It’s critical that you don’t change the share type to SMB; leave it as Generic. SMB breaks chmod, which is crucial for dockers.
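If you want to verify the dataset behaves (a Generic dataset honors chmod; an SMB-type one will not), a quick probe like this works. The mktemp directory below is just a stand-in path; point it at your dataset’s mountpoint instead:

```shell
# Probe that chmod works on the new dataset. Replace the mktemp stand-in
# with your dataset's real mountpoint, e.g. /mnt/<pool>/<dataset>.
testdir=$(mktemp -d)
touch "$testdir/probe"
chmod 700 "$testdir/probe"
stat -c '%a' "$testdir/probe"   # prints 700 on a Generic dataset
```

If the mode does not come back as 700, the dataset’s share type is likely wrong.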

Now that you have the Dataset, we’ll prepare the groundwork for the move to docker.

Stage 2—Making Sure everything is ready for Docker

First things first, we’ll have to make sure the physical interface has an IP address, and that you can reach TrueNAS on that IP. We do not want to be locked out of TrueNAS and have to do these things physically at the server.
So go to Network > locate the active physical connection, and click on it.
Now make sure DHCP is off, and add an alias with the IP TrueNAS is responding to and showing the web portal on.
Test and Save the settings, make sure nothing broke, and if you came here from Wendell’s guide, continue on to delete the br interface.

Don’t forget to exclude this IP in your DHCP server, or assign TrueNAS an IP outside the DHCP pool, so that other devices will not get this IP assigned.
To set a DNS server, pick your DNS of choice and set its IP in TrueNAS > Network > Global Configuration > Nameserver.

I’ll start off by clearing up things for people who used or set something in the TrueNAS apps, or tried Wendell’s solution with the Bridge.

If you are starting from a fresh TrueNAS, skip to Stage 3.

We’ll have to go to TrueNAS > Apps and stop and remove every app you may have created. Once that’s done, we’ll go to the Apps settings

and click on Advanced Settings; there we’ll make sure the IP is not set to a bridge, but to the physical interface.

Once that’s done, go back to Settings and click on Unset Pool. Give it time, and when it’s done we can move to Stage 3.
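Before moving on, you can confirm k3s is really out of the picture, since the Stage 3 script refuses to run while k3s is enabled or active. A guarded check (it degrades to "unknown" on a machine without k3s or systemd):

```shell
# Confirm k3s is stopped before enabling native Docker.
# "inactive" or "unknown" is what we want to see here.
state=$({ systemctl is-active k3s || true; } 2>/dev/null)
echo "k3s state: ${state:-unknown}"
```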

Stage 3—Getting Docker to run Natively

We’ll create a file somewhere that’s accessible to you; if you want, you can do it from the TrueNAS shell or from a share.

Enable Docker Script
#!/usr/bin/env bash

# Enable docker and docker-compose on TrueNAS SCALE (no Kubernetes)
# Follow the guide in this post before using this script.
# This script is a hack! Use it at your own risk!!
# Edit the docker_dataset variable below, set a path to the Docker dataset you created
# Schedule this script to run via System Settings -> Advanced -> Init/Shutdown Scripts
# Click Add -> Type: Script and choose this script -> When: choose to run as Post Init
exec >/tmp/enable-docker.log 2>&1

## set a path to your docker dataset
docker_dataset="/mnt/yourpool/docker"

## path to docker daemon file
docker_daemon="/etc/docker/daemon.json"

echo "§§ Starting script! §§"

echo "§§ Checking apt and dpkg §§"
for file in /bin/apt*; do
  if [ ! -x "$file" ]; then
    echo "§§ $file not executable, fixing... §§"
    chmod +x "$file"
  else
    echo "§§ $file is already executable §§"
  fi
done

for file in /bin/dpkg*; do
  if [ ! -x "$file" ]; then
    echo "§§ $file not executable, fixing... §§"
    chmod +x "$file"
  else
    echo "§§ $file is already executable §§"
  fi
done
echo "All files in /bin/apt* and /bin/dpkg* are executable"

echo "§§ apt update §§"
sudo apt update

# Docker checks
echo "§§ Docker Checks §§"
sudo apt-get install -y ca-certificates curl gnupg lsb-release

if [ ! -d /etc/apt/keyrings ]; then
  sudo mkdir -m 0755 -p /etc/apt/keyrings
  curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
  sudo chmod a+r /etc/apt/keyrings/docker.gpg
fi
if [ ! "$(cat /etc/apt/sources.list.d/docker.list 2>/dev/null)" ]; then
  echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list >/dev/null
  sudo apt update
fi

Docker=$(which docker)
DCRCHK=$(sudo apt list --installed 2>/dev/null | grep docker)
if [ -z "$Docker" ] || [ -z "$DCRCHK" ]; then
  echo "Docker executable not found"
  sudo apt-get install -y docker-ce docker-ce-cli docker-buildx-plugin docker-compose-plugin
fi
[ -f /usr/bin/docker-compose ] && sudo chmod +x /usr/bin/docker-compose
sudo install -d -m 755 -- /etc/docker
if [ ! -f /etc/docker.env ]; then
  touch /etc/docker.env
fi
echo "§§ Which Docker: $Docker §§"

## set the Docker storage-driver based on the SCALE version
echo "§§ Docker storage-driver §§"
version="$(cut -c 1-5 </etc/version | tr -d .)"

if ! [[ "${version}" =~ ^[0-9]+$ ]]; then
  echo "version is not an integer: ${version}"
  exit 1
elif [ "${version}" -le 2204 ]; then
  storage_driver="zfs"
elif [ "${version}" -ge 2212 ]; then
  storage_driver="overlay2"
fi

## HEREDOC: docker/daemon.json
echo "§§ Docker daemon.json §§"
read -r -d '' JSON <<END_JSON
{
  "data-root": "${docker_dataset}",
  "storage-driver": "${storage_driver}",
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
END_JSON

if [ ${EUID} -ne 0 ]; then
  echo "§§ Please run this script as root or using sudo §§"
elif [ "$(systemctl is-enabled k3s)" == "enabled" ] || [ "$(systemctl is-active k3s)" == "active" ]; then
  echo "§§ You can not use this script while k3s is enabled or active §§"
elif ! zfs list "${docker_dataset}" &>/dev/null; then
  echo "§§ Dataset not found: ${docker_dataset} §§"
else
  echo "§§ Checking file: ${docker_daemon} §§"
  if test "${JSON}" != "$(cat ${docker_daemon} 2>/dev/null)"; then
    echo "§§ Updating file: ${docker_daemon} §§"
    jq -n "${JSON}" >${docker_daemon}
    if [ "$(systemctl is-active docker)" == "active" ]; then
      echo "§§ Restarting Docker §§"
      systemctl restart docker
    elif [ "$(systemctl is-enabled docker)" != "enabled" ]; then
      echo "§§ Enable and starting Docker §§"
      systemctl enable --now docker
    fi
  fi
fi
echo "§§ Script Finished! §§"

Create the file on your TrueNAS and give it a name, e.g. enable-docker.sh. Edit the docker_dataset line near the top to point to the Dataset you created in Stage 1.

When you’re done, go to System Settings > Advanced > Init/Shutdown Scripts. There choose Script instead of Command, point to the script you just created, give it a name, and tell it to run Post Init.

Reboot the system and wait; this might take a while. You can check the progress with cat /tmp/enable-docker.log or tail -f /tmp/enable-docker.log to see where it is. Wait until you see "§§ Script Finished! §§".

Now you’re set from the Docker side of things; you can use docker and docker compose commands freely. However, if you wish to manage this with Portainer and add a few extra things, please continue reading.
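A quick way to confirm the engine and the compose plugin both came up after the reboot, guarded so it reports rather than errors on a machine where Docker is not (yet) installed:

```shell
# Check both the Docker engine and the compose plugin answer.
for cmd in "docker --version" "docker compose version"; do
  if out=$($cmd 2>/dev/null); then
    echo "$out"
  else
    echo "not available yet: $cmd"
  fi
done
```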

Stage 4—Portainer

Now that we have docker compose, we can create Portainer.
Create a docker-compose.yml somewhere in your pool, and paste this inside:

name: portainer
services:
  portainer:
    container_name: portainer
    image: portainer/portainer-ce:latest
    mem_limit: "2147483648"
    networks:
      default: null
    pids_limit: 4096
    ports:
      - mode: ingress
        target: 8000
        published: "8000"
        protocol: tcp
      - mode: ingress
        target: 9443
        published: "29443"
        protocol: tcp
    read_only: true
    restart: unless-stopped
    volumes:
      - type: bind
        source: /var/run/docker.sock
        target: /var/run/docker.sock
        bind:
          create_host_path: true
      - type: bind
        source: /mnt/OPS/Docker/Portainer/data
        target: /data
        bind:
          create_host_path: true
      - type: bind
        source: /etc/localtime
        target: /etc/localtime
        read_only: true
networks:
  default:
    name: portainer_default

Save, and run docker compose up -d.

You should have access to Portainer now on port 29443 of the IP you set for TrueNAS.
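To check Portainer is answering without opening a browser, a curl probe works. NAS_IP below is a placeholder for the alias IP you set in Stage 2, and -k is needed because Portainer ships a self-signed certificate:

```shell
# Probe the published Portainer port (9443 in the container, 29443 on the host).
NAS_IP=192.168.1.10   # placeholder: your TrueNAS alias IP from Stage 2
curl -sk --connect-timeout 3 -o /dev/null -w '%{http_code}\n' "https://${NAS_IP}:29443" || echo "no answer"
```

A 200 or 307 means Portainer is up; 000 or "no answer" means the container is not listening yet.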

Stage 4a—Optional things in Portainer

In Portainer, I have a few things that I prefer to set for ease of use and maintenance.

  1. Go to Environments>local>Environment Details, and put the IP of the server from Stage 2, then save. With this, you can easily launch containers directly from Portainer.

  2. You can also set Portainer to not ask for a username and password every 8 hours if you are running securely. You can do that by going to Settings>Authentication and setting it to what you feel comfortable with.

  3. I like to set up Watchtower to make sure the images and containers are always up-to-date, and I do not need to do this manually. In Portainer create a stack and put this inside:

name: watchtower
services:
  watchtower:
    container_name: watchtower
    image: containrrr/watchtower
    command:
      - -s
      - "0 30 0 * * *"
      - --cleanup
    volumes:
      - type: bind
        source: /etc/localtime
        target: /etc/localtime
        read_only: true
        bind:
          create_host_path: true
      - type: bind
        source: /var/run/docker.sock
        target: /var/run/docker.sock
        bind:
          create_host_path: true
    restart: unless-stopped
networks:
  default:
    name: watchtower_default

Watchtower runs 24 hours after it’s created by default, and every 24 hours after that. But in my compose, I set it to run at half past midnight; you can change that in this line: - "0 30 0 * * *"
That is a cron expression, so if you do not know how the cron format works, an online cron generator can help.
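For reference, Watchtower takes a 6-field cron expression with a leading seconds field, unlike the classic 5-field crontab. A tiny sketch that splits the schedule used above into its fields:

```shell
# "0 30 0 * * *" read left to right:
#   second 0, minute 30, hour 0, any day of month, any month, any day of week
#   -> every day at 00:30:00
set -f                      # keep the * fields from glob-expanding
sched="0 30 0 * * *"
set -- $sched
echo "sec=$1 min=$2 hour=$3 dom=$4 month=$5 dow=$6"
# prints: sec=0 min=30 hour=0 dom=* month=* dow=*
```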

Stage 5—Enabling VM host share access

For the last part, if you plan to use VMs and need them to access your host machine, we’ll create a bridge in TrueNAS to enable the VMs to access the host.

This is explained in Wendell’s video, and has not been fixed since: VMs cannot reach the host through the physical network card, so we go around it with a bridge.

Go to Network > Interfaces > Add and name it br0. Next, make sure DHCP is off, and give it an alias with an IP that’s outside your real network’s segment, i.e. a separate private subnet reserved just for this bridge.
Test and save your settings; if all goes well, we can move to the VMs.

In Virtualization, edit your VM and add another NIC to it by clicking on the VM you want, then Devices > Add. In this NIC, it’s important that you choose to attach it to br0, then save.

Start your VM, and set the IP of the second NIC manually: give it another address in the bridge’s subnet, with the matching subnet mask.
Once you save that, you should see the shares on the bridge IP.
If not, go to the hosts file of your OS, and add a line mapping the bridge IP to NameOfYourTrueNASServer.

Save, and it should go there directly from now on.
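The hosts-file line can be sketched like this. The bridge IP and server name here are placeholders, and the real file is /etc/hosts on Linux/macOS (C:\Windows\System32\drivers\etc\hosts on Windows); a temp file stands in for it below:

```shell
# Sketch of the hosts-file entry mapping the bridge IP to the TrueNAS name.
hosts_file=$(mktemp)        # stand-in for your OS's real hosts file
printf '10.0.0.1\tNameOfYourTrueNASServer\n' >> "$hosts_file"
grep 'NameOfYourTrueNASServer' "$hosts_file"
```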

That’s it, I hope this helps you, and saves you the time I had to put into these issues.


Just a correction, I did not write the script. The original script was by Jip-Hop on GitHub and then adapted by tprelog. I used tprelog’s version and posted minor changes and information in the other thread.

Can you please correct the post. I deserve no credit for this.

OK thanks for the correction, could you post the changes you made and explain what they change, so I could add it to the Guide?

Essentially, it was just adding the nvidia passthrough in the daemon.json file. It is in the GitHub comments.

## HEREDOC: docker/daemon.json
read -r -d '' JSON << END_JSON
{
  "data-root": "${docker_dataset}",
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
END_JSON

But I believe the script has gone through a number of revisions since I first started using it, and I am running an older version. It works, so I haven’t felt the need to update.
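If you add the runtimes block, Docker should list the nvidia runtime once restarted. A guarded check (it assumes nothing beyond the docker CLI and just prints a message where Docker isn’t running):

```shell
# List the runtimes Docker knows about; "nvidia" should appear among them
# after the daemon.json change and a docker restart.
docker info --format '{{json .Runtimes}}' 2>/dev/null || echo "docker not reachable here"
```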


OK, great! It’s important to have these here in case someone needs them.
I now have to figure out how to deal with DHCP on that bridge, and with importing a Postgres db from a snapshot for one docker that got wiped when I transferred.

I hope this guide will help people, and save them the trouble I went through on 2 systems to get to this state.

Thank you Scepterus. I have some questions though.

In Wendell’s other thread you refer to, we are using and managing Docker containers inside the VM.

Here we have a VM, but we are using Docker natively on TrueNAS and optionally updating it with apt enabled, correct?

I ran into an issue where I wanted to utilize Intel Quick Sync within a Docker container inside my VM, and that doesn’t seem possible there, but it does here.

What is the purpose of the VM?

How do you manage your Docker containers, data, config, yaml files on here for backup as well as ensuring they survive TrueNAS updates? Do you have a script? Do you rsync the entire folder with the data? Snapshots? All 3?


Firstly, you’re welcome.
Secondly, I’ll answer your question in the order you asked them.

  1. Yes, the VM has nothing to do with Docker though, I may not have been very clear on why VMs are here in my original post.
    The reason I put the solution for VMs here, is because when you implement the script for running Docker natively, they lose access to the internet with the solution Wendell showed.
    So a little back and forth with the creator of the script, and I was pointed to Jip-Hop’s post with the solution to have both.

  2. Regarding Quick Sync, did you pass the Intel GPU to the VM when you created it?

  3. I hope answer #1 covers this question as well.

  4. Well, the use of volumes in the Dockers, as well as using a docker compose file, means you keep everything even if the docker container is removed.
    Assuming that happens, you only need to run docker compose up -d in the correct folder, and you should be back to where you left off.

  5. I have snapshots of the entire OPS data set, which is what saved me, probably during the implementation of this. But, the volumes are mounted on a dataset, so updates and crashes and the like should not affect it.

I hope I answered all your questions, if not feel free to ask, so they may help others that come here.

  1. Sounds good.
  2. It does not look as though you can pass an iGPU to a VM currently with TrueNAS SCALE, only external GPUs after the 1st GPU, which is used by TrueNAS itself.
  3. Got it
  4. & if I am running docker-compose and the config folders are on a share I’ll be good to go
  5. I was also planning on some replication to another TrueNAS box
  1. Well that was my 2 cents, you may be able to search how to change the classification of the iGPU to a dGPU in the shell.
  2. Pretty much, I do not see a reason why not, then again this hasn’t been tested through an update. Worst case scenario, you’ll have to repeat a few steps from this guide.
  3. With rsync you should be able to replicate the share to the other machine, as for the config, that’s a bit more complicated as far as I know.

So I recreated my Jellyfin instance in native Docker… but I am getting permission denied with my library…
I’m confused what I should be running my environment as PUID PGID
or should I create a new data pool for docker containers running natively?

Is it under a dataset with SMB share? Even if not, is the share type on that dataset set to SMB instead of generic? That was mostly the reason I ran into permission denied.

Like I said in the guide, you need a specific dataset just for Docker, it can’t be on an already existing one without some issues.

There’s an update coming to TrueNAS that will bring big performance gains to dockers, but it will probably necessitate a recreation of the docker dataset after the update.


I created a new user “jellyfin” and a group “media” that I pass to the docker container using the command line parameter -u <uid>:<gid>.
You add yourself/wife/kids to the media group (usermod -aG media jode) and change the owner/group of the files that jellyfin is serving:
chown -R jellyfin:media /opt/media
To make sure that future files get the right permissions and group when added to your media folder, you set the setgid bit on the folders:
chmod -R 6775 /opt/media

Full command line to start jellyfin:
docker run -d --name jellyfin -u 1007:1005 -v /var/lib/jellyfin:/config -v /var/lib/jellyfin/cache:/cache -v /opt/media:/media --net=host --restart=unless-stopped --device /dev/dri/renderD128:/dev/dri/renderD128 --device /dev/dri/card0:/dev/dri/card0 jellyfin/jellyfin
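As a side note on the 6775 mode above: the part that matters for shared media is the setgid bit (the leading 2 in 2775/6775), which makes new files inherit the directory’s group. A throwaway demonstration that doesn’t touch the real /opt/media:

```shell
# Demonstrate setting the setgid bit on a scratch directory.
demo=$(mktemp -d)
chmod 2775 "$demo"          # 2 = setgid: new files inherit this dir's group
stat -c '%a' "$demo"        # prints 2775
```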


It’s there; if you want, I could take the screenshot you uploaded and add it to that section.

That’s why I wrote the guide in a way that uses portainer and docker compose, that way you’ll only lose the container, but the volumes with the docker data are persistent on the share. So after an upgrade, you just bring up portainer, and recreate the stacks if needed. And you’re up and running and lost nothing but a bit of time.


Hi @Scepterus. Currently waiting for all parts to arrive to build my first ever PC and NAS, so I’m following your write-up and wendell’s TrueNAS Scale: Ultimate Home Setup incl. Tailscale guide with great interest. Thanks for crystalizing the lessons from your adventures into this guide.

In the midst of my research, I found out that TrueCharts seem to now support Docker-Compose in a way that avoids some of the pitfalls others have mentioned, and which you say your set-up will avoid. I would link to the blog post but it seems like I’m not allowed to include links in this post. If you visit the TrueCharts Blog website, you’ll find it as the most recent article.

  • It’s fully backed by TrueNAS SCALE Applications, so it will survive updates.
  • There is a GUI option to input your Docker-Compose file, that will survive reboots.

From what I can tell, this was released today.

Would you change anything in your setup given this piece of news?

I’m actually not sure what TrueCharts supports is equivalent to what you’re doing. Perhaps it’s complementary? I apologize in advance if my questions don’t make sense. I have zero experience working with TrueNAS and Docker.

Looking forward to your response.

Hello, and thank you for your feedback.
If you look in the original post by Wendell, somewhere in the later posts, you should see that I started this journey by going to truecharts. However, I was quickly informed that the performance on any kub based docker is abysmal compared even to wendell’s method.

There was already a method for docker-compose back when I tested TrueCharts; it was not great, but it worked. Still, I find this way much better, both in that I need to babysit it less, and that it does not break when I try to update something in it.

Updating or restarting those apps took ages compared to portainer stacks, which take 15 seconds.

Hope this answered your question.

On another note, to everyone on this topic: in the next TrueNAS update they are changing the Docker storage driver to overlay2, which will break current dockers. I will update the guide soon with the solution from the GitHub Gist mentioned.


Hi @Scepterus. Thanks for clarifying!

One more question: if I were to follow your setup, does that mean that I will install all my apps (e.g. Jellyfin, Nextcloud) via docker on the command line?

Very much looking forward to the updated overlay2 guide!

Sorry for not seeing your reply sooner. No, if you follow the guide fully, you’ll use Portainer, which is a web GUI for managing Dockers. Wendell shows it in the original video.


I’ve upgraded to Bluefin (22.12) and will write up the things you need to do to migrate to it. There’s a major change in Bluefin: they switched the Docker storage driver from ZFS to overlay2, which should improve performance and save you those tens of folders that are created every time a container updates.

  1. Backup everything externally, configs of containers, the config of Portainer, everything you run.
  2. Stop all dockers, including Portainer
  3. in the shell, run docker images, then remove each image with docker image rm <id>, taking the id from the images command. Repeat for each image until there are no images left.
  4. update the script to enable docker with the new script in the OP.
    As you did before, replace the line to point to your docker dataset.
  5. upgrade and reboot.
  6. Run apt update, and then run apt upgrade.
  7. Run apt install docker-compose-plugin to get docker compose back.
  8. Bring up portainer and restart the stacks. Some stacks will lose their config if you did not mount their config to your storage, so be prepared.
  9. enjoy your dockers!
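Step 3 above can also be scripted in one go; this sketch removes every image id at once and is guarded so it does nothing harmful on a box without Docker:

```shell
# Remove all Docker images in one pass (step 3 of the migration).
if command -v docker >/dev/null 2>&1; then
  ids=$(docker images -q | sort -u)
  if [ -n "$ids" ]; then
    docker image rm $ids
  else
    echo "no images to remove"
  fi
else
  echo "docker not installed on this machine"
fi
```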

Also, if anyone is interested in increasing the time before reauthentication on the TrueNAS webgui, here’s the command:

sed -i 's/auth.generate_token",\[300/auth.generate_token",\[129600/g'  /usr/share/truenas/webui/*.js

Change 129600 to how long you want, in seconds.
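For reference, 129600 seconds is 36 hours; you can derive your own value the same way before editing the sed command:

```shell
# Convert a desired session length in hours to the seconds value for the sed command.
hours=36
echo "$(( hours * 3600 )) seconds"   # prints: 129600 seconds
```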


Hi @Scepterus, really great guide! I was able to follow all of it so far, making it to Stage 5. I am new to both truenas and docker, so I was very happy to have both of them up and running by the end of that. I, however, did ignore some errors and skip some steps to get to that point, which I hope you can help me resolve in this post.

There are 2 and it seems like they are both networking-related.

I realized in Stage 4 that I wasn’t able to connect to Docker’s servers in that curl command. I also wasn’t able to ping hostnames with ping -c 3, but could ping Google’s primary DNS IP, suggesting to me that something went wrong with resolving hostnames. I undid Stage 2 by enabling DHCP again, and was able to download and get docker, docker compose and portainer running.

I want to go back to this step and do it properly. Was it expected that I lost internet access? Perhaps this has something to do with also not deleting the br interface?

What should I do exactly to delete the br interface? Is this something I need to go back to Wendell’s guide to find out?

Great guide once again and Happy New Year!