TrueNAS Scale Native Docker & VM access to host [Guide]

Hello everyone, I decided to write this guide, which is an amalgamation of all the solutions found in Wendell’s original post.

Disclaimer:
This guide is based on my personal experience on 2 TrueNAS systems, follow at your own discretion. I will try to help if I can, and update this guide if I missed something.

Thanks:
I want to thank the following people:
@wendell for the original guide, and for his help with everything.
A user by the name of Jip-Hop on GitHub, who wrote the guide on connecting the bridge so VMs don’t lose internet access, and who created the original script.
A user by the name of tprelog on GitHub for adapting the script.
@talung for showing me that the basic Kubernetes implementation is flawed and inefficient, and for changes to the script (see comments).

Basic issues:
The first issue this aims to solve is the fact that TrueNAS Scale uses k3s to run its apps, which is neither very performant nor practical for most of us with just one NAS.

The second issue is that VMs on TrueNAS do not see the host. Wendell showed a working solution in his video, but it interferes with the solution to the first issue.

The reason I put the solution for VMs here is that when you implement the script for running Docker natively, VMs lose access to the internet with the solution Wendell showed.
After a little back and forth with the maintainer of the script, I was pointed to Jip-Hop’s post with the solution that allows both.

The overhead description:
The solution will look like so:
1. Native Docker and Docker Compose
2. Portainer to manage all other Dockers easily
3. Optional settings and recommendations
4. Virtual bridge to allow VMs to see TrueNAS host shares

Let’s get started!

First off, if you already have Docker containers or apps in TrueNAS, be sure to take note of their volumes and docker-compose files if relevant, as you’ll have to recreate them.

Stage 1—Create the Dataset:
This is obvious, but it has some pitfalls that users new to TrueNAS, and Linux in general, might not be aware of.
We’ll need to create a Dataset in a pool you use for operations, not just cold storage. So if you have an SSD pool or a more powerful pool that is dedicated to operations, choose that one.
In order to do that, in TrueNAS go to Storage and find the pool you will be using.


Click on the 3 dots to the right of the first line in that pool and click on Add Dataset.

Now you’ll see the creation screen. Give it a name, and change nothing else. It’s critical that you don’t set the Share Type to SMB; leave it as Generic. The SMB share type breaks chmod, which Docker relies on.
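
If you want to double-check from the shell that the dataset didn’t pick up an SMB-style ACL, a quick look at the acltype property helps (the pool/dataset name here is just an example; a Generic dataset typically reports posix, while the SMB preset sets nfsv4):

# Check the ACL type on the Docker dataset
zfs get acltype OpsPool/docker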

Now that you have the Dataset, we’ll prepare the groundwork for the move to docker.

Stage 2—Making Sure everything is ready for Docker
First things first, we’ll have to make sure the physical interface has a static IP address, and that you can reach TrueNAS on that IP. We do not want to be locked out of TrueNAS and have to do these things physically on the server.
So go to Network, locate the active physical connection, and click on it.
Now, make sure DHCP is off, and add an alias in the following format:
192.168.0.2/24
where 192.168.0.2 is the IP TrueNAS is responding to and shows the web portal on.
Test and save the settings, make sure nothing broke, and then continue with removing the bridge (br) interface if one exists.
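
If you’d like to verify the alias from the shell as well, something like this works (enp3s0 is an example interface name, swap in your own):

# One-line summary of the addresses on the physical interface
ip -br addr show enp3s0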

I’ll start off by clearing things up for people who used or set something in the TrueNAS apps, or tried Wendell’s bridge solution. If you are starting from a fresh TrueNAS, skip ahead.

We’ll have to go to TrueNAS > Apps and stop and remove every app you may have created. Once that’s done, go to the Apps Settings menu, click on Advanced Settings, and make sure the IP is set to the physical interface, not to a bridge.

Once that’s done, go back to that same Settings menu and click on Unset Pool. Give it time, and when it’s done we can move on to Stage 3.

Stage 3—Getting Docker to run Natively
We’ll create a file somewhere that’s accessible to you; you can do it from the TrueNAS shell or from a share.
Go here: Use docker-compose on TrueNAS SCALE without Kubernetes · GitHub
and copy the script.
Create the file on your TrueNAS; let’s call it enable-docker.sh. Edit line 20 of the script to point to the Dataset you created in Stage 1, and run it.
If it complains about bash not being recognized, remove " env" from the first line of the script (the shebang), so it calls bash directly.
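
For reference, the line you’re editing sets the dataset path that Docker’s data-root will live on. Depending on the script revision, it will look something like this (the variable name comes from the script; the path is an example):

# Line ~20 of enable-docker.sh: point this at the dataset from Stage 1
docker_dataset="/mnt/OpsPool/docker"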

If all goes well, systemctl status docker.service should show the service as active.
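
A quick smoke test, assuming nothing beyond a working Docker install:

# Confirm the daemon is active
systemctl status docker.service
# Run a throwaway container to prove pulls and execution work end to end
docker run --rm hello-world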

When you’re done, go to System Settings > Advanced > Init/Shutdown Scripts. There, choose Script instead of Command, point it to the script you just created, give it a name, and set it to run at Post Init.

Now you’re set on the Docker side of things; you can use docker and docker compose commands freely. However, if you wish to manage things with Portainer and add a few extras, please continue reading.

Stage 4—Optional Upgrades
I personally am not a fan of being limited to the versions of Docker and Docker Compose that ship with TrueNAS, so I updated all of those, and I’ll describe how:

  1. Enable apt. Apt is present on TrueNAS Scale but disabled; run chmod +x /bin/apt* to re-enable it.
  2. Go here: Install Docker Engine on Ubuntu | Docker Documentation and follow only the “Set up the repository” steps.
  3. Run apt update && apt list --upgradable | grep docker, then for each package in the result run apt install <package name> (see the sketch below).
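
A minimal sketch of step 3 as a one-liner, assuming the repository from step 2 is already configured:

apt update
# List upgradable Docker packages, keep only the package names, and install them
apt list --upgradable 2>/dev/null | grep docker | cut -d/ -f1 | xargs -r apt install -y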

Stage 5—Portainer
Now that we have docker compose (or docker-compose if you skipped Stage 4), we can create Portainer.
Create a docker-compose.yml somewhere in your pool, and paste this inside:

version: '3.9'

services:
  portainer-ce:
    image: portainer/portainer-ce:latest
    container_name: portainer
    restart: unless-stopped
    read_only: true
    pids_limit: 4096
    mem_limit: 2g
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /mnt/Path/To/Stage1/Dataset/Portainer/data:/data:rw
    ports:
      - '8000:8000'
      - '9443:9443'

Save, and run docker compose up -d or docker-compose up -d depending on your choice in Stage 4.

You should have access to Portainer now on port 9443 of the IP you set for TrueNAS.
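
If you want to sanity-check it without a browser, a curl call along these lines should return Portainer’s login page (192.168.0.2 is the example IP from Stage 2; -k skips the self-signed certificate check):

curl -k https://192.168.0.2:9443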

Stage 5a—Optional things in Portainer
In Portainer, I have a few things that I prefer to set for ease of use and maintenance.

  1. Go to Environments > local > Environment Details, and put in the IP of the server (the alias you set in Stage 2), then save. With this, you can easily launch containers directly from Portainer.
  2. You can also stop Portainer from asking for a username and password every 8 hours, if your instance is only reachable on a trusted network. You can do that by going to Settings > Authentication and setting the session timeout to what you feel comfortable with.
  3. I like to set up Watchtower to keep the images and containers up-to-date, so I do not need to do it manually. In Portainer, create a stack and put this inside:
name: watchtower
services:
  watchtower:
    container_name: watchtower
    image: containrrr/watchtower
    networks:
      default: null
    volumes:
      - type: bind
        source: /var/run/docker.sock
        target: /var/run/docker.sock
        bind:
          create_host_path: true
networks:
  default:
    name: watchtower_default

By default, Watchtower does its first run 24 hours after it’s created, and runs every 24 hours after that.
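
If you’d rather control the interval yourself, Watchtower reads it from an environment variable; a sketch of the addition under the watchtower service in the stack above (86400 seconds = 24 hours, adjust to taste):

    environment:
      # Poll interval in seconds between update checks
      - WATCHTOWER_POLL_INTERVAL=86400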

Stage 6—Enabling VM host share access
For the last part, we’ll create a bridge in TrueNAS to enable the VMs to access the host.
Go to Network > Interfaces > Add and name it br0. Next, make sure DHCP is off, and give it an alias with an IP outside your real network’s segment. E.g., if your network is 192.168.0.0/24, you can give this bridge 192.168.254.1/24.
Test and save your settings; if all goes well, we can move on to the VMs.

In Virtualization, edit your VM and add another NIC to it by clicking on the VM you want, then Devices > Add. For this NIC, it’s important that you choose to attach it to br0; then save.

Start your VM, and you’ll be able to set the IP of the second NIC manually. You’ll have to set it as follows:
IP: 192.168.254.2
Subnet Mask: 255.255.255.0
Gateway: 192.168.254.1
Once you save that, you should see the shares on the IP 192.168.254.1.
If not, go to the hosts file of your OS, and add the following line:

192.168.254.1                  NameOfYourTrueNASServer

Save, and it should go there directly from now on.
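
On a Linux guest, a quick non-persistent way to test the second NIC before making the config permanent (eth1 is an example device name; these settings are lost on reboot):

# Assign the bridge-side address to the second NIC for this boot only
ip addr add 192.168.254.2/24 dev eth1
ip link set eth1 up
# The TrueNAS host should now answer on the bridge
ping -c 3 192.168.254.1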

That’s it, I hope this helps you, and saves you the time I had to put into these issues.

Just a correction: I did not write the script. The original script was by Jip-Hop on GitHub and then adapted by tprelog. I used tprelog’s version and posted minor changes and information in the other thread.

Can you please correct the post? I deserve no credit for this.
Thanks
Talung

OK, thanks for the correction. Could you post the changes you made and explain what they do, so I can add them to the guide?

Essentially it was just adding the NVIDIA runtime passthrough to the daemon.json file. It is in the GitHub comments.

## HEREDOC: docker/daemon.json
read -r -d '' JSON << END_JSON
{
  "data-root": "${docker_dataset}",
  "exec-opts": [
    "native.cgroupdriver=cgroupfs"
  ],
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
END_JSON

But I believe the script has gone through a number of revisions since I first started using it, and I am running an older version of it. It works, so I haven’t felt the need to update.

OK, great! It’s important to have these here in case someone needs them.
I now have to figure out how to deal with DHCP on that bridge, and how to import a Postgres DB from a snapshot for one Docker container that got wiped when I transferred.

I hope this guide will help people, and save them the trouble I went through on 2 systems to get to this state.

Thank you Scepterus. I have some questions though.

In Wendell’s other thread you refer to, we are using and managing Docker containers inside the VM.

Here we have a VM, but we are using Docker natively on TrueNAS and optionally updating it with apt enabled, correct?

I ran into an issue where I wanted to utilize Intel Quick Sync from a Docker container inside my VM, and that doesn’t seem possible there, but it does work here.

What is the purpose of the VM?

How do you manage your Docker containers, data, config, yaml files on here for backup as well as ensuring they survive TrueNAS updates? Do you have a script? Do you rsync the entire folder with the data? Snapshots? All 3?

Firstly, you’re welcome.
Secondly, I’ll answer your questions in the order you asked them.

  1. Yes. The VM has nothing to do with Docker, though; I may not have been very clear on why VMs are in my original post.
    The reason I put the solution for VMs here, is because when you implement the script for running Docker natively, they lose access to the internet with the solution Wendell showed.
    So a little back and forth with the creator of the script, and I was pointed to Jip-Hop’s post with the solution to have both.

  2. Regarding Quick Sync, did you pass the Intel GPU to the VM when you created it?

  3. I hope answer #1 covers this question as well.

  4. Well, using volumes in your containers, together with a docker compose file, means you keep everything even if the container is removed.
    Assuming that happens, you only need to run docker compose up -d in the correct folder, and you should be back to where you left off.

  5. I have snapshots of the entire ops Dataset, which is what saved me during the implementation of this. But the volumes are mounted on a dataset, so updates, crashes, and the like should not affect them.

I hope I answered all your questions, if not feel free to ask, so they may help others that come here.

  1. Sounds good.
  2. It does not look as though you can pass an iGPU to a VM currently on TrueNAS SCALE, only additional GPUs beyond the first one, which TrueNAS uses itself.
  3. Got it
  4. & if I am running docker-compose and the config folders are on a share I’ll be good to go
  5. I was also planning on some replication to another TrueNAS box
  1. Well, that was my 2 cents; you may be able to search for how to change the classification of the iGPU to a dGPU in the shell.
  2. Pretty much, I do not see a reason why not, then again this hasn’t been tested through an update. Worst case scenario, you’ll have to repeat a few steps from this guide.
  3. With rsync you should be able to replicate the share to the other machine (a minimal sketch follows below); as for the config, that’s a bit more complicated as far as I know.
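
A minimal rsync sketch for mirroring the Docker dataset to a second box (the hostname and paths are examples; for whole datasets, TrueNAS’s built-in ZFS replication tasks are the more idiomatic route):

# Mirror the Docker dataset over SSH, preserving permissions and deleting stale files
rsync -avh --delete /mnt/OpsPool/docker/ root@backup-nas:/mnt/BackupPool/docker/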

So I recreated my Jellyfin instance in native Docker… but I am getting permission denied on my library…
I’m confused about which PUID/PGID I should be running my environment as,
or should I create a new data pool for Docker containers running natively?

Is it under a dataset with an SMB share? Even if not, is the Share Type on that dataset set to SMB instead of Generic? That was the main reason I ran into permission denied.

Like I said in the guide, you need a specific Dataset just for Docker; it can’t live on an already existing one without some issues.

There’s an update coming to TrueNAS that will bring big performance gains for Docker, but it will probably necessitate recreating the Docker Dataset after the update.

I created a new user “jellyfin” and a group “media” that I pass to the Docker container using the command-line parameter -u <uid>:<gid>.
Add yourself/wife/kids to the media group (usermod -aG media jode) and change the owner/group of the files that Jellyfin is serving:
chown -R jellyfin:media /opt/media
To make sure that future files have the right permissions and ownership when added to your media folder, set the setuid/setgid bits on the folders:
chmod 6775 /opt/media

Full command line to start jellyfin:
docker run -d --name jellyfin -u 1007:1005 -v /var/lib/jellyfin:/config -v /var/lib/jellyfin/cache:/cache -v /opt/media:/media --net=host --restart=unless-stopped --device /dev/dri/renderD128:/dev/dri/renderD128 --device /dev/dri/card0:/dev/dri/card0 jellyfin/jellyfin

Hi @Scepterus - Nice guide!

I would add, for anyone who first tried using any SCALE apps or TrueCharts: be sure to Unset Pool to disable k3s and prevent it from restarting after you reboot. The enable-docker script (by design) will not work if k3s is enabled and/or running.

I mention that because it has already confused a few people and they thought the script was not working.
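
If you’re unsure whether k3s is actually stopped, a check along these lines should tell you (assuming the service is named k3s, as it is on current SCALE releases):

# Should report inactive (dead) once the pool is unset
systemctl status k3s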

I would also point out, regarding Stage 4—Optional Upgrades, that those upgrades and any extra packages you install on the host will be lost with every TrueNAS upgrade.

It’s there; if you want, I can take the screenshot you uploaded and add it to that section.

That’s why I wrote the guide around Portainer and docker compose: that way you only lose the containers, while the volumes with the Docker data persist on the share. So after an upgrade, you just bring up Portainer and recreate the stacks if needed, and you’re up and running, having lost nothing but a bit of time.
