TrueNAS Scale: Ultimate Home Setup incl. Tailscale

yeah… see if you have the bug where there are 60,000 snapshots now.

It really is baffling. Install docker, install portainer, DONE.

The VM + NFS mount is as close as you can come without otherwise disturbing the pond and going majorly off-script for TrueNAS. They’ll figure it out for themselves. Eventually.

1 Like

my hard drives were thrashing… how would I check for that bug now? Still noobish with TrueNAS, used to Proxmox.

Just run a: zfs list -t snapshot
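If you just want the count rather than the full listing, piping it through wc should do the trick:

zfs list -t snapshot | wc -l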

And it’s definitely less than 60k. So I don’t appear to have that bug. Must be something else… the forums recommend boot, reboot, and reboot again till it comes back… sounds like a Windows solution :smiley:

Edit: That would make a great fork: TrueNAS Scale Home… everything as-is, except their app crap replaced with Portainer and standard Docker. I understand the “enterprise solution”, but I think the majority of users run it as a home system.

PS… even my cats can’t find the Apps

7 Likes

BTW, I have still been trying to get Docker working without Kubernetes on the bare metal, without needing the VM, and I have found a solution. Might be worth looking at. Running it atm without any issues and it survives a reboot. Apparently it also survives upgrades.

My reason for this is that things like Dropbox hate shared drives, so they can’t be used inside the VM, which is annoying as hell. Also, sometimes the VM starts before the shares initialize, so I need to reboot the VM… also annoying as hell.

Here is the script I found. You have to unset your apps pool, as the script and the built-in apps won’t run together.

3 Likes

Can’t touch NFS mount, help please?

Let me back up a bit. This has been great trying to learn how to do this whole project on my own, but I keep running into problems I can’t resolve, and most of them seem to be permissions related. But I’m now days and days in and just stuck.

I was able to make full progress until much later in the guide, at the Portainer run step, when I got this message:

docker: Error response from daemon: error while creating mount source path ‘/nfs/portainer_data’: mkdir /nfs: read-only file system.

At that point, feeling kinda lost about some of the previous steps, I decided to wipe the VM and start the steps again. That’s worked for me in the past when I get stuck: stop, go back a few steps, try again, learn. But not this time.

Now I get to the point of mounting the NFS share onto the VM and hit permissions problems again.

Thanks in advance all!

It now worked, and other than black voodoo magic with fairy dust, I have no idea why.

Does anyone know of any good (video) tutorials on how permissions work in TrueNAS Scale? Most content out there is either written as documentation for experienced Linux users, or is so specific to answering one question on some forum that it never shares the why behind the answer.

And if not… sounds like a suggestion for a future video topic? :slight_smile:


2 Likes

@Yummyfudge, did you do a Google advanced video search? I just did a quick search and found many videos on the topic you are asking about.

I was running into the same issue with automounting the NFS share.

Discovered an alternative method below that works great.

systemctl enable systemd-networkd-wait-online.service

Run this on the VM and restart it. The NFS share should then mount automatically.
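One caveat: this only helps if the VM’s network interface is actually managed by systemd-networkd; a stock Debian install may be using ifupdown/dhclient instead. A quick way to check (just a sanity check, not part of the guide):

networkctl list

If the interface shows up as “unmanaged”, the wait-online unit has nothing to wait on and the mount can still race the network at boot.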

5 Likes

It would definitely be interesting to know whether this solution is superior to the VM one. I have recently set up my TrueNAS system and was wondering whether to go the apps or the VM route. Your experience helped me make up my mind in that regard, but now this nice scripty comes along :eyes:

Anyway, @wendell’s opinion would be appreciated as well.

Edit: I assume this script would make the VM part of this guide redundant, while still letting users save their applications’ data on a dedicated NFS share as mentioned in the guide.

Wendell’s big brain energy would definitely be welcome. I have been using this without issue so far. I even got NVIDIA passthrough to work by updating the daemon file in the script. For me this runs perfectly and I have had zero issues. It is pretty much the same as running Docker on any other bare-metal machine. The bundled docker-compose is out of date, but I don’t use that, as I do everything through Portainer stacks.

As for NFS shares… you don’t need them, as you can point containers directly at the datasets themselves. That cuts out a lot of traffic and overhead, since you are using the ZFS file system directly.

TL;DR: Disables Kubernetes, with all the annoyances it brings with it, and enables the base Docker experience. Working awesomely for me so far.
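If anyone wants to sanity-check the NVIDIA runtime after editing the daemon file, something along these lines should work (the CUDA image tag is just an example, pick one that matches your driver):

docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

If that prints the usual nvidia-smi table, the container runtime can see the GPU.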

1 Like


So, instead of

docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v /nfs/portainer_data:/data portainer/portainer-ce:latest

I would run this:

docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v /mnt/tank/dockerstuff/portainer_data:/data portainer/portainer-ce:latest

Yup, pretty much. Just be aware it creates the Portainer mounts on the base system, so you might see a bunch of them floating about. After that, all the Docker downloads should go into the selected folder.

This was my command:
docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v /mnt/pond/appdata/portainer:/data portainer/portainer-ce:latest

and adjusted the daemon.json heredoc in the script to look like this:

## set a path to your docker dataset
docker_dataset='/mnt/pond/dockerset'

## HEREDOC: docker/daemon.json
read -r -d '' JSON << END_JSON
{
  "data-root": "${docker_dataset}",
  "exec-opts": [
    "native.cgroupdriver=cgroupfs"
  ],
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
      }
    }
}
END_JSON
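After restarting Docker, a quick way to confirm the data-root and the nvidia runtime were actually picked up (just a sanity check):

docker info | grep -E 'Docker Root Dir|Runtimes'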

My resulting folder structure looks like this:

As you can see, after the Portainer “volumes”, the rest goes to my “dockerset” dataset.

But please do this at your own risk, and follow what was in that link. All I can say is the annoying “it worked for me”, so good luck :smiley:

EDIT: I can’t type :frowning:

2 Likes

I did a bit of digging and found the following two Jira issues:
https://ixsystems.atlassian.net/browse/NAS-115010
https://ixsystems.atlassian.net/browse/NAS-114665

Both mention the possibility of using Docker either directly or through a VM. Note, though, that the second issue concerns itself with the possibility of Docker being removed further down the road. That would break the script solution, but would have no impact on the VM one.

Since I am sure that Wendell did some digging around before releasing the video, it is likely that he stumbled over those threads and chose the VM path in order to guarantee compatibility with future updates, which the script cannot ensure.

Edit: Also, a kind of related question. If the host has 4c/4t, how many cores and threads should I give the guest Debian VM? As far as I can remember, Wendell said in the video that even 2c/2t is enough, but should I give it 3c/3t, or should I split it evenly between host and guest? I assume the host will not be able to use more than its dedicated cores, but can the host use cores that the VM is not using?

The first one was about docker-compose, which I don’t use at all, so that being removed is meaningless to me.

As for “docker” being completely removed, I highly doubt it. If they do remove it, I will cross that bridge when I come to it and move things into a VM. At the moment I am going with the least resource-intensive option.

Since all my data is on a dataset, and I have all the “stacks” set up, moving it onto a VM would be simple. Things like Dropbox would become an issue and annoyance again, but everything else should work fine with minor tweaks.

If in doubt, definitely stick to the “safe” method. As for their current Kubernetes/iX apps system, I think it is a load of poop and would avoid using it as much as possible. I’ve only had the box running for a few weeks and have already been burned by it multiple times.

As for the resources for your VM, it really depends on what you are running it on and what you are running in it. Nothing is stopping you from shutting down the VM, adding more resources, and firing it up again. Hell, you can just clone it and screw around as much as you want without screwing up your working system. VMs are great for that.

1 Like

Ah, I got them mixed up apparently. Thanks for the clarification :+1:

Yeah, one could possibly just import the data again and proceed.

That’s definitely true though! I will probably go ahead with the docker solution in the coming days unless there is a clear argument against it in this thread (or similar ones).

1 Like

I figured I might just go ahead and install docker and portainer with the script and guide provided, but instead of Nextcloud I would prefer to use Seafile.

Thus, it should be easy peasy to make use of Portainer’s stacks and adjust the provided compose file like so:

version: '2.0'
services:
  db:
    image: mariadb:10.5
    container_name: seafile-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=wow_sosecure_wow
      - MYSQL_LOG_CONSOLE=true
    volumes:
      - /mnt/tank/apps/seafile_mysql:/var/lib/mysql  # Required, specifies the path to MySQL data persistent store.
    networks:
      - seafile-net

  memcached:
    image: memcached:1.6
    container_name: seafile-memcached
    entrypoint: memcached -m 256
    networks:
      - seafile-net
          
  seafile:
    image: seafileltd/seafile-mc:latest
    container_name: seafile
    ports:
      - "80:80"
#     - "443:443"  # If https is enabled, cancel the comment.
    volumes:
      - /mnt/tank/apps/seafile_data:/shared
    environment:
      - DB_HOST=db
      - DB_ROOT_PASSWD=wow_sosecure_wow
      - TIME_ZONE=Etc/UTC
      - SEAFILE_ADMIN_EMAIL=me@example.com # Specifies Seafile admin user, default is 'me@example.com'.
      - SEAFILE_ADMIN_PASSWORD=asecret     # Specifies Seafile admin password, default is 'asecret'.
      - SEAFILE_SERVER_LETSENCRYPT=false   # Whether to use https or not.
      - SEAFILE_SERVER_HOSTNAME=docs.seafile.com # Specifies your host name if https is enabled.
    depends_on:
      - db
      - memcached
    networks:
      - seafile-net

networks:
  seafile-net:

Now though, I’m curious about the following things related to my setup:

  1. What does networks mean? Is it used to let Docker know that these containers belong together and/or are allowed to communicate with each other?

  2. Given this runs natively and not in a VM, my TrueNAS machine has one IP (assume 192.168.1.1 like in Wendell’s example) and the proper way to connect to it is either truenas.local or 192.168.1.1. Given that I love encryption, I also use HTTPS, which means I connect to TrueNAS via https://192.168.1.1:443 and to Portainer with https://192.168.1.1:9443. Thus, the ports are the distinguishing factor. As a result, both ports 80 and 443 are already taken and I need to use different ones for Seafile. So, I could simply use 81 and 444 (assuming I will neither be using TOR, nor SNPP). However, I also configure a SEAFILE_SERVER_HOSTNAME and set SEAFILE_SERVER_LETSENCRYPT, which seems like it would mess with my setup. I think if I set SEAFILE_SERVER_HOSTNAME to truenas.local it will lead to issues with TrueNAS’ GUI, whereas using a different name might lead to other problems.

Any help/information on how to proceed would be appreciated :blush:

Edit 1: Would it be possible to just do something like this:

ports:
     - "192.168.1.2:80"
     - "192.168.1.2:443"

However, I will probably change my IP addresses to something different in the future; can I just change the stack’s configuration then without having to re-install anything?

Edit 2: It seems like my proposed solution above does not work; the expected format is <MACHINE_PORT>:<CONTAINER_PORT> (or <IP>:<MACHINE_PORT>:<CONTAINER_PORT> when binding to a specific address). I have also discovered that I might just get away with the following network section:

...
networks:
  seafile-net:
    driver: bridge

However, I am not familiar enough with Docker to be sure, so I’m looking through the respective manuals. Thus, my initial question still stands: what is the best practice for having TrueNAS and Seafile use ports 80 and 443 without getting in each other’s way?
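Edit 3: To make sure I have the mechanics straight, only the left-hand (machine) side of the mapping needs to change; the container side stays 80/443. A minimal sketch using nginx just to illustrate the remapping (Seafile’s ports section would use the same "81:80" / "444:443" values):

docker run -d --rm -p 81:80 --name port-test nginx:alpine
curl http://192.168.1.1:81   # TrueNAS IP from Wendell's example

So Seafile would end up reachable on http://truenas.local:81 while the TrueNAS GUI keeps 80/443 to itself.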

1 Like

I give up. I have Seafile running now, HTTP only, on a different port, but thus far I have not been able to configure it as I intended.

Trying to go through this but currently getting stuck at:

I’ve tried replacing 192.168.1.1 with the IP of my TrueNAS machine (which is what 192.168.1.1 was in the walkthrough) and with the IP of my Debian VM. In either case I still get:

mount.nfs: failed to apply fstab option

The part I think I may have gone wrong is:

Maybe I’m misreading this, but Docker was installed on the Debian VM, and at this stage we haven’t started any VMs with Docker, so what is the “Docker VM” that is being referred to? The IP of the Debian VM, or something else?

Any thoughts or idea of troubleshooting things I can do would be appreciated.

Edit 9/7/22: Problem solved, the issue was twofold: I was missing a / in the command, and I wasn’t running it as sudo.
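For anyone else landing here, the working command ended up shaped roughly like this (192.168.1.1 is the guide’s placeholder IP and the export path here is made up, substitute your own):

sudo mount -t nfs 192.168.1.1:/mnt/tank/docker /nfs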

Can anyone elaborate on Wendell’s choice to use the NFSv4 ACL type when configuring the NFS dataset? The SCALE docs seem to suggest that it’s only required if you want interoperability with ZFS systems created or used outside of the Linux world. That sort of makes sense to me as a default, but I don’t want to make permissions more complicated than they need to be, and my homelab and requirements are entirely Linux-based, so I’m considering using POSIX instead.

If the NFSv4 ACL type is actually a hard requirement for Docker storage, then my next question would be whether it’s specifically a Docker-only requirement, and whether child datasets of the Docker NFS dataset should also be set to NFSv4, or whether I can use POSIX for those. Typically I would have a child dataset for the persistent storage of each Docker container.

EDIT: Also, as an aside, I had trouble getting the NFS mounts on the Docker VM to mount reliably on boot. There was a systemctl command mentioned elsewhere in this thread which seemed to help, but it ended up not totally solving my problem. I added the bg option to each NFS share entry in /etc/fstab, which completely solved it. The description of this option:

Specify bg for mounting directories that are not necessary for the client to boot or operate correctly. Background mounts that fail are re-tried in the background, allowing the mount process to consider the mount complete and go on to the next one. If you have two machines configured to mount directories from each other, configure the mounts on one of the machines as background mounts. That way, if both systems try to boot at once, they will not become deadlocked, each waiting to mount directories from the other. The bg option cannot be used with automounted directories.
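For reference, the resulting /etc/fstab entry looks something like this (the server IP and export path are placeholders, and the _netdev option is my own addition rather than something from the guide):

192.168.1.1:/mnt/tank/docker  /nfs  nfs  defaults,_netdev,bg  0  0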

EDIT AGAIN: Alright, well, I spoke too soon, it’s still not working reliably. Relevant syslog section:

Sep  8 18:19:01 docker systemd-timesyncd[421]: Network configuration changed, trying to establish connection.
Sep  8 18:19:01 docker systemd[1]: Finished Wait for Network to be Configured.
Sep  8 18:19:01 docker systemd[1]: Reached target Network is Online.
Sep  8 18:19:01 docker systemd[1]: Mounting /nfs...
Sep  8 18:19:01 docker systemd[1]: Starting Docker Application Container Engine...
Sep  8 18:19:01 docker dhclient[467]: DHCPDISCOVER on ens3 to 255.255.255.255 port 67 interval 4
Sep  8 18:19:01 docker sh[467]: DHCPDISCOVER on ens3 to 255.255.255.255 port 67 interval 4
Sep  8 18:19:01 docker kernel: [   14.875753] FS-Cache: Loaded
Sep  8 18:19:01 docker kernel: [   14.961827] FS-Cache: Netfs 'nfs' registered for caching
Sep  8 18:19:01 docker kernel: [   14.975692] Key type dns_resolver registered
Sep  8 18:19:02 docker kernel: [   15.147869] NFS: Registering the id_resolver key type
Sep  8 18:19:02 docker kernel: [   15.147875] Key type id_resolver registered
Sep  8 18:19:02 docker kernel: [   15.147876] Key type id_legacy registered
Sep  8 18:19:02 docker mount[504]: mount.nfs: Network is unreachable
Sep  8 18:19:02 docker systemd[1]: nfs.mount: Mount process exited, code=exited, status=32/n/a
Sep  8 18:19:02 docker systemd[1]: nfs.mount: Failed with result 'exit-code'.
Sep  8 18:19:02 docker systemd[1]: Failed to mount /nfs.

It seems to think it’s connected already, then proceeds with the NFS mount, and it still fails with “network unreachable”. I don’t have any issues mounting from a shell after boot completes, so… I’m not sure what the deal is, other than it somehow not waiting long enough.
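One thing I may try next (an assumption on my part, not something from the guide) is letting systemd mount the share lazily on first access instead of at boot, which sidesteps the race entirely:

192.168.1.1:/mnt/tank/docker  /nfs  nfs  defaults,_netdev,noauto,x-systemd.automount,x-systemd.mount-timeout=30  0  0

With an automount, the first process that touches /nfs (Docker included) just blocks until the mount succeeds, so the boot ordering stops mattering.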

ANOTHER EDIT: Alright, I’ve either had a good string of lucky restarts with functioning mounts, or disabling IPv6 on the entire Debian VM (for another, unrelated reason) has fixed the problem.

I am new to setting up a homelab. Whenever I try to mount, I get an NFS error.

mount.nfs: requested NFS version or transport protocol is not supported.

I’ve installed nfs-common and nfs-kernel-server. I’ve run systemctl status for nfs-server and nfs-mountd.service and both are active and running. I’m not sure how to proceed and mount the drive.
Thanks for the help.
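Edit: a couple of things I plan to check, based on some searching (192.168.1.1 and the export path are just placeholders for my own TrueNAS box): see which NFS versions the server actually advertises, and try pinning the version on the client.

rpcinfo -p 192.168.1.1
sudo mount -t nfs -o nfsvers=4 192.168.1.1:/mnt/tank/docker /nfs

If version 4 isn’t listed, the NFSv4 toggle in the TrueNAS NFS service settings is probably worth a look.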

What did you end up doing? I’m considering adding another ZVOL and mounting that as a disk in the VM.

EDIT: Guess I found the downside of using a ZVOL; getting data in and out of it is a bit more annoying.