TrueNAS Scale: Ultimate Home Setup incl. Tailscale

Using the method I described (after the edit) does not use Kubernetes, but native Docker. You can see and manage the containers natively from the TrueNAS shell with docker commands. So far it has been working fine for me, but then again I am not running intensive containers (yet), so YMMV.

I ran this method for a few months, and the TrueCharts docker-compose chart IS a Kubernetes app, meaning that Portainer runs UNDER Kubernetes. Things inside Portainer might be Docker, but the entire thing is controlled by Kubernetes and subject to its whims. You may or may not believe me, but those are the facts.

When TrueNAS decides to “load balance” or do other things built into Kubernetes, the docker-compose chart gets affected. Same thing after a reboot: when all your apps are gone, the docker-compose chart and everything under it is gone too.

To prove this: have your containers running, then stop the Portainer docker-compose app in TrueNAS. The containers within should still be running… are they?

The only way to get native Docker running is through the command line, which is riddled with issues, or by stopping Kubernetes entirely (removing its pool assignment) and then going to the command line and changing daemon.json.

Again, let me reiterate: the TrueCharts docker-compose method is a hell of a lot easier to use than their regular TrueCharts apps, but it is still running on their Kubernetes system and subject to its whims.

EDIT: I haven’t checked this method since the last update or two… so it just occurred to me that it might have changed, but I seriously doubt it. The Docker-Compose chart is a KUBERNETES app.


Yep, you are right. I just tried to update the docker-compose chart/app, and it bricked every container that Portainer had launched. Thank God they have rollback with snapshots; I was able to return to a working state rather quickly (although Portainer now says the stacks have limited control).

I do wonder now if there’s a comparison of resource usage between the three methods: Kubernetes, Docker in a VM, and native Docker. I just built my NAS and only have 16 GB of RAM (non-ECC, but I’m planning to order at least 64 GB of [slower] ECC memory), and I am constantly running into low-memory restrictions when running one VM with 4 GB of RAM.

Haven’t tested it yet, but both TrueNAS and TrueCharts have a Pi-hole app you can just install and run, so I’m guessing it is possible in a way? I will try to deploy Pi-hole on my worknas soon and will surely update you with the results.

Yeah, the bridge is still there, and the only way I got Portainer to run was to have it inside a private network I defined in the compose file.

Also, I will mention that they have a Portainer TrueChart you can install, but that would limit you to using a Kubernetes environment inside Portainer. And from what I read, you cannot add another environment or change your local one once Portainer is up.

Yup, I learnt about this the hard way too. That’s why I was so insistent on letting you know. lol

To compare the various methods, I ran Geekbench in three scenarios: in the VM, outside the VM in Portainer as you set it up, and in the native Docker setup I currently have running. These are the results.

## Docker in VM
https://browser.geekbench.com/v5/cpu/16708564
1 Processor, 16 threads
Single Core: 503
MultiCore: 5416

## Docker outside
https://browser.geekbench.com/v5/cpu/16708864
2 Processor, 32 threads
Single Core: 542
MultiCore: 1232

## New docker setup, no kubernetes
https://browser.geekbench.com/v5/cpu/17034640
2 Processor, 32 threads
Single Core: 610
MultiCore: 9263

Something about Kubernetes is sucking the life out of the processes. Hope that helps.


I appreciate that.

So just to be clear, the “Docker outside” result is using TrueCharts’ docker-compose app to install Portainer, and that Portainer to run the container, correct?

One question though: do you know what the difference was in system resources? How much RAM did each method use? Was there a difference?

It does, a lot. If I could bother you one last time: how exactly did you nuke all the Kubernetes stuff and install Docker natively? I enabled docker-compose and tried to run the same compose file for Portainer, but it did not launch natively.

I think a short guide would benefit everyone here who uses TrueNAS SCALE, because with the results you’ve shown, the difference is very noticeable.

Check some of my previous posts… they have a link to a script on GitHub that pretty much explains everything.

Essentially, unsetting the Kubernetes “pool” stops it from running. The next step is a script that forces the daemon.json file back in place after every restart.
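I won’t reproduce the script here (see the GitHub link), but the daemon.json part of the idea is small. A minimal sketch of what such a file might contain, assuming you want Docker’s state on one of your pools — the data-root path below is just an example, not the script’s actual value; the reason a script is needed at all is that TrueNAS SCALE rebuilds /etc on every boot, so a post-init task has to rewrite the file each time:

```json
{
  "data-root": "/mnt/tank/docker",
  "storage-driver": "overlay2"
}
```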

I actually have an issue with Kubernetes on my worknas: unsetting the pool does not release the files in ix-software, and when I try to delete them, I get “device or resource busy.” I’ll look into your script and maybe check it today. Thanks!

Just tested this yesterday; I had to actually delete the containers’ dataset in order to unlock the process, but I have a few questions on this new flow.

  1. I’ve run into issues where either the docker version or the docker-compose version is too old and missing features, and thus not launching my yml file.
    Is there a safe way to update both, and does it help?

  2. I’ve run into an issue where one container failed to load (on the host network), with the log showing “cannot resolve github.com”. Testing from the TrueNAS shell I got a response from github.com, so it’s not a network issue for the entire NAS.
    Is there a step I missed? Or is this just a one-off container that is broken somehow?

Probably better to ask in the GitHub thread, to the guys who actually wrote it. I have been using it for a few months with no issues myself. I did the @wendell bridge and then switched to using that script after trying the other methods.

As for docker-compose, I don’t use it and have never used it with that system. I tend to use Stacks in Portainer, so I have really found no need for it.

Not sure why you would get an error reaching github.com, unless the network for that container is blocked in some way.

Whether right or wrong, I tend to have my stacks set up either on the host network like this:

version: "2.1"

services:
  dropbox:
    image: otherguy/dropbox:latest
    #image: talung/dropbox:latest
    container_name: dropbox    
    environment:
      - DROPBOX_UID=1000
      - DROPBOX_GID=1000	  
      - TZ=Asia/Bangkok
      #- DROPBOX_SKIP_UPDATE=true
    volumes:
      - /mnt/pond/appdata/dropbox:/opt/dropbox/.dropbox
      - /mnt/lake/cloud/dropbox:/opt/dropbox/Dropbox    
    restart: unless-stopped
    network_mode: host

Or on a self made private network like this:

version: "2.3"
services:
  emby:
    image: emby/embyserver
    container_name: emby
    runtime: nvidia # Expose NVIDIA GPUs
    #network_mode: host # Enable DLNA and Wake-on-Lan
    environment:
      - UID=1000 # The UID to run emby as (default: 2)
      - GID=1000 # The GID to run emby as (default 2)
      - GIDLIST=107,44 # A comma-separated list of additional GIDs to run emby as (default: 2)
    volumes:
      - /mnt/pond/appdata/emby:/config # Configuration directory
      - /mnt/lake/media/tv:/mnt/tv # Media directory
      - /mnt/lake/media/movies:/mnt/movies # Media directory
    ports:
      - 8096:8096 # HTTP port
      - 8920:8920 # HTTPS port
    devices:
      - /dev/dri:/dev/dri # VAAPI/NVDEC/NVENC render nodes
      #- /dev/vchiq:/dev/vchiq # MMAL/OMX on Raspberry Pi
    restart: unless-stopped
    networks:
      - privatenetwork

networks:
  privatenetwork:
    external: true

With “privatenetwork” set up as a bridge:
IPV4 Subnet - 192.168.50.0/24
IPV4 Gateway - 192.168.50.1
IPV4 IP Range - 192.168.50.1/25
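For reference, the same network can be declared inside the compose file instead of being created externally and marked `external: true`; whether compose or TrueNAS should own the network is a design choice. A sketch using the values above:

```yaml
# Hypothetical alternative: let compose create the network itself.
# Values copied from the bridge settings listed above.
networks:
  privatenetwork:
    driver: bridge
    ipam:
      config:
        - subnet: 192.168.50.0/24
          gateway: 192.168.50.1
          ip_range: 192.168.50.1/25
```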

Maybe that will help.

Thanks for the input. I prefer to use Portainer as well, but I run Portainer from docker-compose and had to give up on certain features because of its older version.

As for the container that got blocked, it is also set up with network_mode: host, so no idea why. I will check with the developers.

Actually, I noticed something strange that may be related to this method. Up until now, virtual machines on my NAS connected to the network seamlessly; since then, they can’t seem to reach my network, and nothing changed in the network config.

Could this be in some way connected? If so, is there a fix?

Not related to that method. I have zero issues with my network and am running the bridge as setup by Wendell in the original post.

FYI, I am only using a single network interface on the box.

So I’ve been fighting with my Truenas Scale system for a few weeks now. I have it up and running reasonably well as just an SMB share, but now I’m trying to do other things and it is doing my head in.

My network is set up as follows: my PC connects via Ethernet to my router, the router connects over Wi-Fi to a wireless bridge, and my TrueNAS machine is connected to that bridge via Ethernet.

Any apps I install (TrueCharts or official versions) result in both the TrueNAS and the app’s interface disconnecting, lagging like crazy, and becoming completely unusable (I only tried to install Pi-hole and Plex, but it was the same for both).

In order to get the TrueNAS web interface to be accessible on a static IP, I had to set an alias on the network adapter (192.168.1.106/24, in case that’s important), but with nothing else installed, it’s working pretty well.

I’m using this guide to see if installing new things this way will still mess with the interface, and I have a Debian VM up with Docker installed. The Debian shell is rock solid and the TrueNAS interface has been solid, so problem solved, right?
Then I tried to add the bridge for the VM to see the TrueNAS storage. I went to update the network interface and used the indicated IP for the alias (the IP of my TrueNAS machine, not .1.1), and it throws an error saying that 192.168.1.0/24 is already part of an alias. Weird, as I used .1.106, not .1.0.
I thought maybe I was misunderstanding, and put in the IP of the Debian machine that I set up during installation, .1.107/24. Now the error changed to: 192.168.1.1/36 is already part of an alias.

I have next to no networking knowledge, and I’m completely stuck here. Could anyone point me in the direction of something to try? Is there something I messed up in the initial install that makes the network require an alias to be reachable? Maybe the alias should be an IP other than the TrueNAS box?

Let me know what information would be required to try and narrow it down…

Thanks,

Yeah, it was a long shot, since I went over the code and it did not touch the firewall or anything. Really strange that all VMs lost network access right after this.

That is not a valid subnet prefix. The maximum for IPv4 is /32, which is a single host.
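To spell out the arithmetic: the number after the slash says how many of the 32 IPv4 address bits name the network, and the remaining bits name hosts. A quick sketch:

```shell
# /24 leaves 32-24 = 8 host bits -> 2^8 = 256 addresses in the subnet.
echo $(( 1 << (32 - 24) ))
# /32 leaves 0 host bits -> exactly 1 address, i.e. a single host.
echo $(( 1 << (32 - 32) ))
# A "/36" would need 36 network bits, but IPv4 only has 32,
# so it can never be valid -- that error text was garbage output.
```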

It is weird, but did you set up the bridge from the CLI or the web interface? If you followed the video, you’d know only the CLI works well for this. (I found that out the hard way before Wendell posted his video or this thread.)

I went to double check it just now, and it only showed /24’s this time, so I’m not sure how I managed to do it last time, or which number it was.

I set up the bridge from the truenas box itself, not the web interface.

Yeah, it’s really strange that aliases need a /24; that’s a whole class C-sized subnet, not a specific host. That’s also the way it’s configured on my machine, and it works.

I wonder what the logic behind it is.

Would adding more physical NICs solve my problem, or would the bridge still say the network is already used in an alias? I just don’t know enough about networking to understand the problem well enough to know where to start troubleshooting…

I am going to write a new post with the new way to get access to the host from VMs and have native Docker. It will be a mix of solutions from @talung and one other person from GitHub.

There is one issue that I’ve yet to solve: how to have the new bridge (which will have no physical members) hand out IP addresses to all the NICs attached to it.


Has anyone successfully setup TrueNAS SCALE → Debian VM → Jellyfin Container WITH Intel Quick Sync?
I can’t get /dev/dri/renderD128 to show on the Debian VM.

I’ll just post a link to the guide I created from the result of the comments on this topic in case anyone is interested.
https://forum.level1techs.com/t/truenas-scale-native-docker-vm-access-to-host-guide/190882/6


Did you figure this out? I’m running into the same issue: “database is locked” when using an NFS share. I used

rw,async,noatime,nfsvers=4,rsize=8192,wsize=8192,hard,tcp,timeo=14
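For context, here is the kind of /etc/fstab entry those options would sit in — the server address and paths below are hypothetical, only the option string comes from this post:

```
# <server>:<export>       <mountpoint>   <type>  <options>                                                            <dump> <pass>
192.168.1.50:/mnt/tank/db  /mnt/db        nfs     rw,async,noatime,nfsvers=4,rsize=8192,wsize=8192,hard,tcp,timeo=14   0      0
```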

Edit: No solution, so I switched to a zvol. It’s not so bad, since you can mount them uniquely by PARTUUID.