TrueNAS Scale: Ultimate Home Setup incl. Tailscale

Hello all! I’m a newcomer here and want to start by crediting Wendell’s well-crafted guide and excellent walkthrough.

After walking through the guide, I realized I may have found a way to do this using OPNsense with Tailscale, with the TrueNAS VMs simply running Tailscale.

First and foremost, I claim no responsibility; these instructions are provided as-is, use them at your own discretion. But in the spirit of helping others, I’ll share what has worked for me to make use of my TrueNAS box.

The result is that the VMs can reach the TrueNAS box, with no modifications needed to TrueNAS itself.

My gateway is an OPNsense box with Tailscale installed. There isn’t a plugin for this, but you can install it via the command line. Once that’s done, you set up Tailscale and advertise the subnet. (An exit node is optional.)

The Tailscale KB (KB 1097) provides a few commands that must be run as part of the installation. I’ll repost the commands to run on your OPNsense firewall below for convenience, but please refer to the Tailscale KB for the full guide. I’m sorry this forum will not let me post the URL; you can Google “Tailscale OPNsense” or the commands below.
Perform the following steps as root on the OPNsense firewall (you will need to enable SSH and connect to the box, e.g. with PuTTY):

# opnsense-code ports
# cd /usr/ports/security/tailscale
# make install
# service tailscaled enable
# service tailscaled start
# tailscale up

You also have to allow NAT-PMP port mapping under Services; there are further steps to follow in the same Tailscale KB (KB 1097).
The SSH console gives you a link to “activate” this device on Tailscale. Do so, then verify the device has been added in your Tailscale admin portal.

Once all the commands from the article have been run, the last step is to advertise your subnet. In this example, just replace the subnet with yours; the command in the OPNsense console is:

sudo tailscale up --advertise-routes=10.0.0.0/24

Now check your Tailscale admin portal to verify the subnet route is toggled on, if it isn’t already from the advertised route.

Then install your TrueNAS VMs as normal, and install Tailscale on them as normal.
For my Windows VM, I could hit my local NAS IP immediately after installing Tailscale. For Ubuntu, I had to run sudo tailscale up --accept-routes.

I tested Plex access from my Windows 10 VM with no need to forward ports at all. I hope this helps others; I’m happy to share what I can. Thanks!


Hello everyone, I get this error when I try to start the Docker container:

docker: Error response from daemon: error while creating mount source path ‘/nfs/portainer_data’: mkdir /nfs: read-only file system.

The frustrating thing is that I can touch /nfs/hello_world and the file is created, but when I run the docker command to create the Portainer container, this error appears.

If anyone can help, that would be great. Thanks in advance!
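One hedged angle on this: the mkdir in that error happens in the Docker daemon’s context, so your shell being able to touch files there doesn’t prove dockerd can write (the daemon may see a different view of the mount, e.g. if it runs in another mount namespace). Two quick checks, using the path from the error above:

```shell
# Pre-create the mount source so dockerd never needs to mkdir it
# (works only if the path is writable from your shell, as the touch suggests):
mkdir -p /nfs/portainer_data

# Then inspect the mount options of the filesystem backing it; "ro" in the
# output would explain the daemon-side failure:
findmnt -no OPTIONS --target /nfs/portainer_data
```

If findmnt reports the path as read-only, the fix is on the mount side rather than in the docker command.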

I found this too…

Does anyone know why we have to do this?
It doesn’t seem like anyone else is having this issue, so we must be doing something different :frowning:

If any of you TrueNAS/Linux folks could help me out with my problem detailed in the thread below, it would be appreciated:

Great guide @wendell , it helped me both with my work setup and at home.
However, I found a better way to run Docker containers that uses fewer resources, and I’ll detail it below.

First off, you’ll need to add TrueCharts as a catalog in the TrueNAS apps section, the guide can be found here:
TrueCharts Guide

It will take a while to sync, so don’t worry about that or the spike in usage as it verifies the new catalog.

After that, you’ll have the option to install Portainer directly from TrueNAS Apps. DON’T. Instead, create a Docker Compose file from the command Wendell showed in the video.
Here’s an example one:

version: '3.3'
services:
  portainer-ce:
    ports:
      - '8000:8000'
      - '9443:9443'
    container_name: portainer
    restart: always
    volumes:
      - '/var/run/docker.sock:/var/run/docker.sock'
      - 'portainer_data:/data'
    image: 'portainer/portainer-ce:latest'

Change portainer_data to the path to your storage, and save this file as a .yml in a location on your NAS.
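Note that as written, portainer_data is a Docker named volume rather than a filesystem path. To point it at a dataset on your pool instead, the volumes entry becomes a bind mount; a sketch (the dataset path below is hypothetical, use your own pool layout):

```yaml
services:
  portainer-ce:
    volumes:
      - '/var/run/docker.sock:/var/run/docker.sock'
      # host path on your pool : path inside the container
      - '/mnt/tank/appdata/portainer:/data'
```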

Next, go to Apps and search for docker-compose, hit install, and give it a name. I chose the very creative name of “portainer”. Click next, next, next until you reach the compose path, where you’ll have to manually enter the path to the file you just created.

Finish the setup and wait for it to deploy. From there you’ll be able to use the rest of Wendell’s guide, only outside a VM, which saves you the VM part of the overhead.

Finish by deploying the app (pretty straightforward, next-next-next on most of the steps) and setting up a password for Portainer; for the rest, just continue with Wendell’s guide in the first post.

EDIT: I may have misspoken; the TrueChart for Portainer only supports Kubernetes as the local environment. I am testing whether the docker-compose TrueChart can use Docker instead; will update in a new reply.

EDIT2: Updated the flow to the correct one in this post.

Update: I’ve updated my last post to use the correct method; it works using the docker-compose chart.

Does this survive a reboot? I was close to doing it this way, but the Docker config file in /etc gets clobbered on a reboot. daemon.json, IIRC? Otherwise compose doesn’t work, but Portainer doesn’t realize it doesn’t work.

@wendell Yes, that does survive a reboot. However, you are still running Portainer under their system. I was often getting hangs and issues that required restarting the Portainer container, which meant restarting all the containers running under it.

After your initial VM setup, I went to use this method, but found it as flaky as doing things directly under Kubernetes. The only methods I have found that are speedy and avoid the Kubernetes mess are the VM container, or disabling Kubernetes entirely and using Docker natively on the TrueNAS box.

I believe I explained it further up, but here is the link to the script again: GitHub script to run Docker natively on TrueNAS

This has survived TrueNAS upgrades and, IMHO, is the best way of running Docker on TrueNAS. I have had none of the issues I hit with the other methods.


Just came back from a reboot, and I can confirm that with this method all the stacks remain and come up with the machine.

Using the method I described (after the edit) is not using Kubernetes, but native Docker. You can see this and manage the containers natively from the TrueNAS shell with docker commands. So far it has been working fine for me; then again, I am not running intensive containers (yet), so YMMV.


This is The Way™, IMHO. I can get behind what the TrueCharts people are doing, but it has to be frustrating that TrueNAS is at the helm versus what they’re trying to do with TrueCharts.

Also, it’s still the case (I think? correct me) that the pure Docker way is the only way to serve DNS on port 53. You can’t use the networking stack as-is, even with TrueCharts, to open up port 53 for, say, Pi-hole. Portainer plus native Docker lets you, if you assign port 53 to an unused IP on your network (but you also need a different bridge for that).

As a best practice, it is also worth setting up the bridge Ethernet interface so everything can talk to everything else, too.

> Using the method I described (after the edit) is not using Kubernetes, but native Docker. You can see this and manage the containers natively from the TrueNAS shell with docker commands. So far it has been working fine for me; then again, I am not running intensive containers (yet), so YMMV.

I ran this method for a few months, and the TrueCharts docker-compose chart IS a Kubernetes app, meaning that Portainer runs UNDER Kubernetes. Things inside Portainer might be Docker, but the entire thing is controlled by Kubernetes and subject to its whims. You may or may not believe me, but those are the facts.

When TrueNAS decides to “load balance” and do other things built into Kubernetes, the docker-compose chart gets affected. Same thing after a reboot: when all your apps are gone, the docker-compose chart and everything under it is gone too.

To prove this: with your containers running, stop the Portainer docker-compose app in TrueNAS. The containers within should still be running… are they?

The only ways to get native Docker running are through the command line, which is riddled with issues, or by stopping Kubernetes entirely (removing its pool assignment) and then going to the command line and changing daemon.json.

Again, let me reiterate: this TrueCharts docker-compose method is a hell of a lot easier to use than their other charts, but it is still using their Kubernetes system and subject to its whims.

EDIT: I haven’t checked this method since the last update or two… it just occurred to me that it might have changed, but I seriously doubt it. The docker-compose chart is a KUBERNETES app.


Yep, you are right. I just tried to update the docker-compose chart/app, and it bricked every container that Portainer had launched. Thank God they have rollback with snapshots; I was able to return to a working state rather quickly (although Portainer now says the stacks have limited control).

I do wonder now if there’s a comparison of resource usage between the three methods: Kubernetes, Docker in a VM, and native Docker. I just built my NAS and only have 16 GB of RAM (non-ECC, but I’m planning to order at least 64 GB of [slower] ECC memory), and I am constantly running into low-memory restrictions when running one VM with 4 GB of RAM.

I haven’t tested it yet, but both TrueNAS and TrueCharts have a Pi-hole app you can just install and run, so I’m guessing it is possible in a way? I will try to deploy Pi-hole on my work NAS soon and will update you with the results.

Yeah, the bridge is still there. Also, the only way I got Portainer to run was to have it inside a private network I defined in the compose file.

Also, I will mention they have a Portainer TrueChart you can install, but that would limit you to using a Kubernetes environment inside Portainer, and from what I read, you cannot add another environment or change your local one once Portainer is up.

Yup, I learned this the hard way too. That’s why I was so insistent on letting you know, lol.

To compare the various methods, I ran Geekbench in three scenarios: in the VM, outside the VM in Portainer as you set it up, and in the native Docker setup I currently have running. These are the results.

## Docker in VM
https://browser.geekbench.com/v5/cpu/16708564
1 processor, 16 threads
Single-core: 503
Multi-core: 5416

## Docker outside VM
https://browser.geekbench.com/v5/cpu/16708864
2 processors, 32 threads
Single-core: 542
Multi-core: 1232

## New Docker setup, no Kubernetes
https://browser.geekbench.com/v5/cpu/17034640
2 processors, 32 threads
Single-core: 610
Multi-core: 9263

Something about Kubernetes is sucking the life out of the processes. Hope that helps.


I appreciate that.

So just to be clear: the “Docker outside” result is using TrueCharts’ docker-compose app to install Portainer, and that Portainer to run a container, correct?

One question, though: do you know what the difference was in system resources? How much RAM did each method use? Was there a difference?

It does, a lot. If I could bother you one last time, I would ask how exactly you nuked all the Kubernetes stuff and installed Docker natively. I enabled docker-compose and tried to run the same compose file for Portainer, but it did not launch natively.

I think a short guide would benefit everyone here who uses TrueNAS SCALE, because with the results you’ve shown, the difference is very noticeable.

Check some of my previous posts… they have a link to the script on GitHub, which pretty much explains everything.

Essentially, unsetting the Kubernetes “pool” stops it from running. The next step is a script that forces the daemon.json file back into place after every restart.
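For context, daemon.json is the standard dockerd config file. A hedged sketch of the kind of thing it might contain in this setup (data-root and storage-driver are real dockerd options, but the path is hypothetical and the script’s actual contents may differ):

```json
{
  "data-root": "/mnt/tank/docker",
  "storage-driver": "overlay2"
}
```

Pointing data-root at a dataset on a pool is what would let Docker’s images and containers live on persistent storage once Kubernetes no longer owns the pool.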

I actually have an issue with Kubernetes on my work NAS: unsetting the pool does not release the files in ix-software, and when I try to delete them it says “device or resource busy”. I’ll look into your script and maybe check it today. Thanks!

Just tested this yesterday; I actually had to delete the dockers’ dataset to unlock the process. But I have a few questions on this new flow.

  1. I’ve run into issues where either the Docker version or the docker-compose version is too old and missing features, and thus not launching my .yml file.
     Is there a safe way to update both, and does it help?

  2. I’ve run into an issue where one container failed to load (on the host network), with the log showing “cannot resolve github.com”. Testing from the TrueNAS shell got a response from github.com, so it’s not a network issue for the entire NAS.
     Is there a step I missed, or is this just a one-off container that is broken somehow?
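On question 2: since that container uses host networking, it inherits the host’s DNS configuration, so a hedged sanity check from the TrueNAS shell is to look at what the container would inherit and resolve the failing name through the system resolver:

```shell
# The resolver config a host-network container inherits:
cat /etc/resolv.conf

# Resolve the failing name through the system resolver
# (printed as a note rather than failing hard, since this needs network access):
getent hosts github.com || echo "resolution failed on the host too"
```

If both succeed on the host, the failure is more likely inside the image itself (e.g. a minimal image missing resolver pieces) than in the NAS network.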

Probably better to ask in the GitHub thread, to the people who actually wrote it. I have been using it for a few months with no issues myself. I did @wendell's bridge and then switched to using that script after trying the other methods.

As for docker-compose, I don’t use it and have never used it with that system. I tend to use Stacks in Portainer, so I’ve really found no need for it.

Not sure why you would get an error reaching github.com, unless the network for that container is blocked in some way.

Whether right or wrong, I tend to have my network stacks set up either like this:

version: "2.1"

services:
  dropbox:
    image: otherguy/dropbox:latest
    #image: talung/dropbox:latest
    container_name: dropbox
    environment:
      - DROPBOX_UID=1000
      - DROPBOX_GID=1000
      - TZ=Asia/Bangkok
      #- DROPBOX_SKIP_UPDATE=true
    volumes:
      - /mnt/pond/appdata/dropbox:/opt/dropbox/.dropbox
      - /mnt/lake/cloud/dropbox:/opt/dropbox/Dropbox
    restart: unless-stopped
    network_mode: host

Or on a self-made private network like this:

version: "2.3"
services:
  emby:
    image: emby/embyserver
    container_name: emby
    runtime: nvidia # Expose NVIDIA GPUs
    #network_mode: host # Enable DLNA and Wake-on-Lan
    environment:
      - UID=1000 # The UID to run emby as (default: 2)
      - GID=1000 # The GID to run emby as (default 2)
      - GIDLIST=107,44 # A comma-separated list of additional GIDs to run emby as (default: 2)
    volumes:
      - /mnt/pond/appdata/emby:/config # Configuration directory
      - /mnt/lake/media/tv:/mnt/tv # Media directory
      - /mnt/lake/media/movies:/mnt/movies # Media directory
    ports:
      - 8096:8096 # HTTP port
      - 8920:8920 # HTTPS port
    devices:
      - /dev/dri:/dev/dri # VAAPI/NVDEC/NVENC render nodes
      #- /dev/vchiq:/dev/vchiq # MMAL/OMX on Raspberry Pi
    restart: unless-stopped
    networks:
      - privatenetwork

networks:
  privatenetwork:
    external: true

With “privatenetwork” set up as a bridge:
IPv4 subnet: 192.168.50.0/24
IPv4 gateway: 192.168.50.1
IPv4 IP range: 192.168.50.1/25
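Alternatively, instead of pre-creating the network and marking it external, the same bridge can be declared directly in the compose file. A sketch using the values above:

```yaml
networks:
  privatenetwork:
    driver: bridge
    ipam:
      config:
        - subnet: 192.168.50.0/24
          gateway: 192.168.50.1
          ip_range: 192.168.50.1/25
```

Compose then creates and owns the network for that stack, rather than attaching to one made by hand.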

Maybe that will help.

Thanks for the input. I prefer to use Portainer as well, but I run Portainer from docker-compose and had to give up on certain features because of its older version.

As for the container that got blocked: it is also set up with network_mode: host, so no idea why. I will check with the developers.

Actually, I noticed something strange that may be related to this method. Up until now, the virtual machines on my NAS connected to the network seamlessly; since then, they can’t seem to reach my network, and nothing has changed in the network config.

Could this be in some way connected? If so, is there a fix?