TrueNAS Scale: Ultimate Home Setup incl. Tailscale

@wendell, I would like to get your thoughts on using portainer to manage the NFS / SMB shares that are passed into a container.

I’ve been putting my containers in using a compose.yaml via a stack in Portainer. I previously mounted a couple of shares manually and added them to fstab (same as you), but I have since started just adding the NFS or SMB share as a volume in the compose.yaml file in the stack.

volumes:
  truenas-movies-smb:
    driver: local
    driver_opts:
      type: cifs
      o: username=guest,password=****,rw,uid=1000,gid=1000
      device: '\\192.168.1.5\storage\movies'
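
For the NFS shares it is much the same; a rough sketch (the server address, export path, and mount options here are just examples, adjust for your own setup):

volumes:
  truenas-movies-nfs:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.5,rw,nfsvers=4
      device: :/mnt/storage/movies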

My question is: have you tested the differences? Will Portainer manage share disconnects and reconnects?

I love the Portainer interface, and I like having everything needed to make my containers work in one place, without needing the host to have extra configuration set up.

Hi there,

I have an issue setting up the VM, one that I have had for a long time on TrueNAS with VMs.

After I go through with the installation and it asks for the reboot, I shut it down to remove the installation media, but after rebooting it just won’t start.

Any help would be appreciated.
Thanks


Hi there. Thanks Wendell for this guide. The bridge fix was gold; it solved a ton of frustration for me.

I have got Portainer up and running and most stuff seems to be working. However, I am installing PhantomBot and it insists on creating a volume. The default location for Portainer volumes appears to be "/var/lib/docker/volumes/", and I have been struggling to find a way to change this.

Is there a way? Do you know da wai?
Thanks

I had this issue as well. I’m trying to look it up, but my memory is that you have to change the location of a boot file. This might be it…

TrueNAS Scale virtualization and Debian UEFI boot loader issue | TrueNAS Community


I found da wai: it’s in the docker compose file, at the end where you set the volumes. Just FYI for anybody that finds this, or for me when I forget. lol

volumes:
  PhantomBot_data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /mnt/pond/compose/phantombot
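
For completeness, the service side then just references that named volume; something like this (the image name and container path are guesses, check the PhantomBot docs for the real ones):

services:
  phantombot:
    image: ghcr.io/phantombot/phantombot    # image name is a placeholder
    volumes:
      - PhantomBot_data:/opt/phantombot-data    # container path is a placeholder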

First off thanks so much for the guide! I’ve used docker on unraid for years without really understanding it, I learned a ton going through this.

In the video you mention creating individual datasets for dockers so that snapshots would be possible. I am a little confused about how to go about setting that up. Following the guide, all my docker data is currently in /NFSdocker/nfsdckr, but the /nfsdckr folder is not a dataset.

Tell the NFS share system to treat the “root” user on the client as root on this system. Map user and Map root should both be set to root.

I didn’t really understand this step. I set Maproot User and Maproot Group both to root on the NFS share /NFSdocker. So why change the mount point and use the home directory /nfsdckr? Would it make sense to just create child datasets under /NFSdocker for each container and keep the original mount?

Warning to those who intend to run Sonarr/Radarr/etc. with this setup: the database is NOT happy being on an NFS share. You will end up with a flood of "Database is locked" errors. There are a bunch of threads about it; wish I had known in advance.

Not sure whether to just store it on the VM or start working toward a different solution.
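
If I end up just storing it on the VM, it would look something like this: keep the config/database directory on a local bind mount and only point the media library at the NFS share, since SQLite does not cope well with NFS file locking (the image and paths below are placeholders, not from the guide):

services:
  sonarr:
    image: lscr.io/linuxserver/sonarr
    volumes:
      - /opt/sonarr/config:/config    # SQLite database stays on local VM storage
      - /nfs/media/tv:/tv             # media library can stay on the NFS share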


Hi!

I managed to create everything following the guide, with just some NFS shenanigans that were fixed easily by messing with TrueNAS settings.

I managed to connect to other containers on the same machine, but when I try to access the rest of the network (my router, for example), the connection shows up in the Tailscale container log, but it doesn’t load on my phone.
Does anyone know what would be needed?
I don’t have any firewall set up on my VM system (I did it with Ubuntu Server 22 though).

EDIT: found it, I just needed to add

iptables -t nat -A POSTROUTING -j MASQUERADE

EDIT 2: doing this works fine for external devices, but my VM loses the ability to resolve DNS names. I can still ping 1.1.1.1, though.

EDIT 3: Wow, this took a while to figure out. You actually need to exclude the loopback interface from the masquerade, so…

iptables -t nat -A POSTROUTING ! -o lo -j MASQUERADE

Now I have the exit node working: I get the external IP of my Tailscale VM at home even while on LTE if I choose to use the exit node in iOS, and I also get access to the rest of the LAN and to the rest of the containers.
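
One thing to keep in mind: a raw iptables rule like this is lost on reboot. One way to persist it on a Debian/Ubuntu VM (assuming you install the iptables-persistent package) is roughly:

apt install iptables-persistent
iptables -t nat -A POSTROUTING ! -o lo -j MASQUERADE
netfilter-persistent save    # writes the current rules to /etc/iptables/rules.v4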


Hi! New poster here. I tried following this guide, but the Nextcloud container doesn’t have the correct permissions for a subdirectory in the NFS share. The db container doesn’t seem to have this issue, as it was able to create /nfs/nextcloud/database/. I tried to use mkdir in /nfs/nextcloud/ as the root user of the VM, but that also resulted in a permission denied error.

I’m honestly quite new to this NFS and docker stuff, so I have no idea where to look for any potential causes.

Edit 1

Here is the exact error that Portainer gives me when the stack is first deployed:

Deployment error
failed to deploy a stack: Container nextcloud-db-1 Creating Container nextcloud-db-1 Created Container nextcloud-app-1 Creating Container nextcloud-app-1 Created Container nextcloud-db-1 Starting Container nextcloud-db-1 Started Container nextcloud-app-1 Starting Error response from daemon: error while creating mount source path '/nfs/nextcloud/data': mkdir /nfs/nextcloud/data: permission denied 

Edit 2 - Solved

Using the TrueNAS console in the GUI, I was able to figure out that ownership of /nfs/nextcloud had been given to the TrueNAS root user. I used chown to transfer ownership to the VM user and the apps group. This allowed the stack to deploy correctly. I don’t have any idea why ownership was given to root in the first place, though.
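
Roughly, something like this from the TrueNAS shell (the user, group, and dataset path are placeholders; substitute your own):

chown -R nfsdckr:apps /mnt/SmallNetapp/VMs/DockerData/nfsdckr/nextcloud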

Edit 3

I guess it’s not going to be that simple. It seems like none of the files created by the container are getting ownership/permissions set correctly, and after looking at the logs it became apparent to me that this issue is also affecting the db container. I still have no idea what is causing this, so I might retry this from scratch. Wish me luck.

So I didn’t start from complete scratch. I nuked the NFS share, dataset, and user in TrueNAS. I also got rid of the Portainer instance and the /nfs mount on the VM. I then tried to follow the guide as best I could to recreate them. I ended up getting the same results.

What the directories look like

From the TrueNAS GUI shell:

 root@truenas[/mnt/SmallNetapp/VMs/DockerData/nfsdckr]# ls -la
 total 79
 drwxrwxr-x 4 nfsdckr nfsdckr    8 Aug 16 22:09 .
 drwxr-xr-x 3 root    root       3 Aug 16 21:50 ..
 -rw-r--r-- 1 nfsdckr nfsdckr  220 Aug 16 21:50 .bash_logout
 -rw-r--r-- 1 nfsdckr nfsdckr 3526 Aug 16 21:50 .bashrc
 -rw-r--r-- 1 nfsdckr nfsdckr  807 Aug 16 21:50 .profile
 drwxr-xr-x 3 root    root       3 Aug 16 22:09 nextcloud
 drwxrwxrwx 7 nfsdckr nfsdckr   10 Aug 16 21:58 portainer_data
 -rw-rw-rw- 1 nfsdckr nfsdckr    0 Aug 16 21:54 testfile

and from the docker VM:

 root@DockerVM:/nfs# ls -la
 total 82
 drwxrwxr-x  4 1001 1003    8 Aug 16 23:09 .
 drwxr-xr-x 20 root root 4096 Aug 16 11:49 ..
 -rw-r--r--  1 1001 1003  220 Aug 16 22:50 .bash_logout
 -rw-r--r--  1 1001 1003 3526 Aug 16 22:50 .bashrc
 drwxr-xr-x  3 root root    3 Aug 16 23:09 nextcloud
 drwxrwxrwx  7 1001 1003   10 Aug 16 22:58 portainer_data
 -rw-r--r--  1 1001 1003  807 Aug 16 22:50 .profile
 -rw-rw-rw-  1 1001 1003    0 Aug 16 22:54 testfile 

Why?!

So the files/dirs that I manually created through the VM, such as “testfile” and “portainer_data”, have the correct ownership settings applied to them, but the files/dirs created by the docker instances launched through Portainer do not. Interestingly, all the files created by docker inside the “portainer_data” dir have the correct ownership. I’m going to try manually creating the nextcloud dir before I build the stack in Portainer to see if that changes anything.

Try putting no_root_squash in your /etc/fstab mount options. That would allow you to use chmod in case any of your containers need it.

And create the folders first with your Debian user before running any docker container.
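
For context, no_root_squash is an export-side option: on TrueNAS it corresponds to the Maproot settings on the share, and on a plain Linux NFS server it would go in /etc/exports, roughly like this (the path and subnet are placeholders):

/mnt/pool/dockerdata  192.168.1.0/24(rw,no_root_squash,no_subtree_check)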


Adding “no_root_squash” to fstab and rebooting the VM fixed it. Thank you, I really appreciate it.


I’m having this “shell” trouble after install as well. Any help would be appreciated.

I am worried about resource stealing from the main system. I am using an 8-core Xeon with 128 GB of RAM, and I don’t mind allocating a fixed amount of RAM, but I worry about fixed CPU cores. Does the VM hypervisor in TrueNAS Scale allow for flexible CPU utilization (use all 8 cores when Plex is going ham, but give them back to the main OS when serving SMB)? I’m considering switching over from Core, but I don’t love the idea of the somewhat janky setup of not using the built-in systems.


Type this at the UEFI shell prompt:
FS0:
edit startup.nsh

Inside the editor, type this:
FS0:
cd EFI
cd debian
grubx64.efi

Press Ctrl+S and then Return to save.
Press Ctrl+X to exit the editor.

Then type reset to restart and see if it works.


Is it possible to create a bridge (Edit: specifically in TrueNAS) without any physical members? I recall doing something similar in the past where network interfaces were exclusively managed virtually (e.g. pfSense) and the host had a virtual interface to the management instance.

I keep running into an issue where, after restarting the Debian VM, Portainer wants me to create a new admin or restore from backup. Looking at the logs, I see:

time="2022-08-21T21:28:50Z" level=info msg="2022/08/21 21:28:50 http: TLS handshake error from 10.10.1.155:57236: remote error: tls: unknown certificate"
time="2022-08-21T21:28:51Z" level=info msg="2022/08/21 21:28:51 http error: A valid authorisation token is missing (err=Unauthorized) (code=401)"
time="2022-08-21T21:28:51Z" level=info msg="2022/08/21 21:28:51 http error: No administrator account found inside the database (err=object not found inside the database) (code=404)"

And if I proceed with creating a new admin, I get the error, “This stack was created outside of Portainer. Control over this stack is limited.”

How’s the access performance when using a ZFS storage pool to game in a VFIO VM? I’m considering this, but if the performance can’t be improved, I may switch to Btrfs and unRAID.

@Grits69FordF100 & @sssetheliss

Mine did this too.

The problem is that Debian doesn’t put the EFI boot file where the VM UEFI expects it.

What worked for me:

  • Type EXIT at the Shell> _ prompt.

That will drop you to a BIOS/UEFI setup menu.


  • Navigate through Boot Maintenance Manager > Boot From File > <YourBootDriveGUID> > <EFI> > <debian> > grubx64.efi

You will probably want to add this as a boot option and configure the UEFI to use it automatically, but if you have the same issue I did, that will be a waste of time as the VM will just ignore all your changes even though you save them.

To avoid having to repeat the steps above at every boot…

Once Debian is booted, open a shell and, using root privileges, run:

# create the fallback boot directory the VM firmware looks for
mkdir -p /boot/efi/EFI/BOOT
# copy Debian's GRUB binary to the default removable-media boot path
cp /boot/efi/EFI/debian/grubx64.efi /boot/efi/EFI/BOOT/bootx64.efi
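
An alternative that should achieve the same result (assuming the grub-efi-amd64 package is installed) is letting GRUB install itself to the removable-media path:

# installs GRUB to EFI/BOOT/BOOTX64.EFI on the EFI system partition
grub-install --removable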

@wendell just wanted to give you another shoutout and thank you for this setup.

Just rebooted my TrueNAS and all the apps are GONE! I’ve been struggling for the last hour trying to find them and get them back. I was using some in TrueNAS but most in the VM, as I assumed the TrueNAS ones would come up quicker than the VM ones, e.g. PiHole.

The VM apps were not affected and are running 100%… I still can’t get the TrueNAS ones back. Luckily the ones I had were keeping persistent data away from the ix-systems stuff, so nothing was lost, but I now trust that system about as much as I trust Facebook to keep my data private.
