ZeroTier in LXC (Proxmox)


I’ve been experimenting with ZeroTier recently and got it installed on my Proxmox host. However, right now all the containers I made for running some web app, like the Transmission GUI or a network share, have to be accessed via their LAN IPs. That’s fine as long as I’m at home, but to reach them from my laptop when I’m somewhere else I would need a dedicated VM running some kind of HTTP proxy or similar.

Is there any way I could install ZeroTier in such a container without risking problems? (I don’t want to discover one day that the nginx container has joined the dark side and done something bad to the host OS.)


I haven’t tried setting up ZeroTier in a container, but I have set it up in a VM and it works great. If you want, you can configure ZeroTier flow rules with some restrictions:

# Whitelist only IPv4 (/ARP) and IPv6 traffic and allow only ZeroTier-assigned IP addresses
drop                      # drop cannot be overridden by capabilities
  not ethertype ipv4      # frame is not ipv4
  and not ethertype arp   # AND is not ARP
  and not ethertype ipv6  # AND is not ipv6
#  or not chr ipauth      # OR IP addresses are not authenticated (1.2.0+ only!)
;

# Allow the following TCP ports by allowing all TCP packets (including SYN/!ACK) to these ports
accept
  ipprotocol tcp
  and dport 22            # SSH
  or dport 3389           # RDP
  or dport 80             # HTTP
  or dport 443            # HTTPS
  or dport 8006           # Proxmox Web UI
  or dport 5985           # PowerShell Remoting HTTP
  or dport 5986           # PowerShell Remoting HTTPS
  or dport 2049           # NFSv4
  or dport 445            # SMB
;

# Drop TCP SYN,!ACK packets (new connections) not explicitly whitelisted above
break                     # break can be overridden by a capability
  chr tcp_syn             # TCP SYN (TCP flags will never match non-TCP packets)
  and not chr tcp_ack     # AND not TCP ACK
;

# Accept other packets
accept;

I’m already running ZeroTier in VMs and there’s no problem with that at all, but I don’t know how I could get it working in an LXC container; my guess is that there’s a problem with setting up the TUN/TAP network device.
I want to run it in containers because I really like the disk, RAM and CPU savings I get.

Also, since posting the question I’ve kind of accepted my fate: everything that needs a ZeroTier address just gets a VM… :/

Hi, I made an account just to help you.
I have a few working LXC containers with ZeroTier.

Essentially, you need to give the LXC container permission to create TUN devices. My LXC containers are on Proxmox, so my instructions are based on that.

Edit your container config in Proxmox (/etc/pve/lxc/XXX.conf) and add the following lines:

lxc.cgroup.devices.allow = c 10:200 rwm
lxc.hook.autodev = sh -c "modprobe tun; cd ${LXC_ROOTFS_MOUNT}/dev; mkdir net; mknod net/tun c 10 200; chmod 0666 net/tun"

The first line grants the device permissions; the second loads the module and creates the device nodes.
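To sanity-check the hook (a sketch; 110 stands in for your container ID, and the install command is ZeroTier’s usual one-liner), restart the container and look for the device node:

```shell
# Restart the container so the autodev hook runs (110 is an example CTID)
pct stop 110 && pct start 110

# The TUN device should now exist inside the container
pct exec 110 -- ls -l /dev/net/tun

# Then install ZeroTier inside the container the usual way
pct exec 110 -- bash -c "curl -s https://install.zerotier.com | bash"
```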

Please see here for further / the same details. I can’t add links, so you will have to piece the link together yourself.

Unfortunately it doesn’t work in my case:

root@keksmoks:/etc/pve/lxc# pct start 110
Job for pve-container@110.service failed because the control process exited with error code.
See "systemctl status pve-container@110.service" and "journalctl -xe" for details.
command 'systemctl start pve-container@110' failed: exit code 1
root@keksmoks:/etc/pve/lxc# systemctl status pve-container@110.service
● pve-container@110.service - PVE LXC Container: 110
   Loaded: loaded (/lib/systemd/system/pve-container@.service; static; vendor preset: enabled)
   Active: failed (Result: exit-code) since Sun 2020-05-03 04:14:34 CEST; 15s ago
     Docs: man:lxc-start
  Process: 2444 ExecStart=/usr/bin/lxc-start -n 110 (code=exited, status=1/FAILURE)

May 03 04:14:33 keksmoks systemd[1]: Starting PVE LXC Container: 110...
May 03 04:14:34 keksmoks lxc-start[2444]: lxc-start: 110: lxccontainer.c: wait_on_daemonized_start: 874 Received container state "ABORTING" instead of "RUNNING"
May 03 04:14:34 keksmoks lxc-start[2444]: lxc-start: 110: tools/lxc_start.c: main: 329 The container failed to start
May 03 04:14:34 keksmoks lxc-start[2444]: lxc-start: 110: tools/lxc_start.c: main: 332 To get more details, run the container in foreground mode
May 03 04:14:34 keksmoks lxc-start[2444]: lxc-start: 110: tools/lxc_start.c: main: 335 Additional information can be obtained by setting the --logfile and --logpriority o
May 03 04:14:34 keksmoks systemd[1]: pve-container@110.service: Control process exited, code=exited, status=1/FAILURE
May 03 04:14:34 keksmoks systemd[1]: pve-container@110.service: Killing process 2455 (lxc-start) with signal SIGKILL.
May 03 04:14:34 keksmoks systemd[1]: pve-container@110.service: Killing process 2582 (apparmor_parser) with signal SIGKILL.
May 03 04:14:34 keksmoks systemd[1]: pve-container@110.service: Failed with result 'exit-code'.
May 03 04:14:34 keksmoks systemd[1]: Failed to start PVE LXC Container: 110.

May 03 04:14:34 keksmoks lxc-start[2444]: lxc-start: 110: tools/lxc_start.c: main: 332 To get more details, run the container in foreground mode

Try and get these additional details. Post your container config as well as the extended info.

I don’t remember exactly how to run it in foreground mode; search Google for the instructions if you don’t know.
But as for running ZT in Proxmox containers: it works, and it is a solvable problem.
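For reference, a sketch of the foreground run with logging, using the standard lxc-start flags (110 is the CTID from the error above; the log path is just an example):

```shell
# Start CT 110 in the foreground with debug logging written to a file,
# so the real reason for the "ABORTING" state shows up
lxc-start -n 110 -F -l DEBUG -o /tmp/lxc-110.log
```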

It might be a typo, or it might need some additional permissions; I know it’s solvable.

I tried to respond earlier, but I couldn’t find the website where I had answered you previously, and I couldn’t find my sign-up mail.

OK, a little update. I did some testing (unrelated, but I needed to set up ZT on a container anyway), so I checked the details.

Your container must be “privileged”
The default for containers is unprivileged.

The Proxmox way to convert an unprivileged container to a privileged one is to back it up and restore it, making sure the checkbox is in the correct state.
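On the command line, the same conversion looks roughly like this (a sketch; the CTID, storage name and dump filename are assumptions, so adjust for your setup):

```shell
# Back up CT 110 while it is stopped, so the archive is consistent
vzdump 110 --mode stop --storage local

# Restore over the old container as privileged:
# --unprivileged 0 makes it privileged, --force overwrites the existing CT 110
pct restore 110 /var/lib/vz/dump/vzdump-lxc-110-*.tar.zst --unprivileged 0 --force
```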

When my container was not privileged, it would crash / fail to run when started.
I suggest you read about the implications of changing this setting, but as long as these containers are under your control, it’s generally not a big deal to give them higher privileges.

If you must keep them unprivileged, read here for a solution:

From what I’ve read on the Proxmox forums, they are considering adding an easier facility for using TUN and related devices, similar to what they have done for NFS/Samba. Good luck.


@effgee I just created a login to say thanks! Your solution worked at once. Many thanks! //Daniel

Hey, thanks for the tips; however, I’m using unprivileged containers for everything.

I could switch to privileged ones, since it’s just a home server that only I’m using (like you say), but I don’t want to reconfigure everything now. I know that converting to a privileged container is just a simple backup and restore, but I just got everything working with some duct-tape-grade workarounds.

For anyone curious:
I’m just assigning extra ZeroTier IPs to a KVM VM that sits on the same LAN as the target containers; it simply NATs all the traffic from the ZeroTier IPs to the respective LAN IPs with iptables. However, I often run into a CPU bottleneck, which I could probably tune in the KVM’s CPU settings.
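For anyone wanting to replicate that workaround, the per-container NAT on the VM looks roughly like this (the addresses are made-up examples, one pair per container):

```shell
ZT_IP=10.147.17.50    # extra ZeroTier-managed IP assigned to the VM
LAN_IP=192.168.1.60   # LAN IP of the target container

# Let the VM forward packets at all
sysctl -w net.ipv4.ip_forward=1

# Rewrite the destination of anything arriving on the ZeroTier IP...
iptables -t nat -A PREROUTING -d "$ZT_IP" -j DNAT --to-destination "$LAN_IP"
# ...and masquerade it so the container's replies come back through the VM
iptables -t nat -A POSTROUTING -d "$LAN_IP" -j MASQUERADE
```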

I’ll mark your solution as the accepted one, since it will more than likely be the right one for anyone else.

Sorry I very infrequently come to these forums and my notifications don’t seem to work.

Lots of duct tape means lots of work if it crashes and the backups are bad. Definitely test your backup/restore procedure. :blush:

Although yes, in some locations I use a ZT dedicated container, running simple NAT or a simple bridge into the remote network. Also valid.

Additionally, the most recent Proxmox has experimental support for creating devices as part of the container’s advanced options. It is “experimental”, and I don’t know whether it works with unprivileged containers yet; I’ve just been too busy to check.

@Daniel_Sjodin Happy to hear it!!!

Dropping this here in case anyone else runs across this post like I did.
I recently upgraded to Proxmox 7 and my ZeroTier stopped working. During the upgrade there was a warning about cgroup, which, I had forgotten until now, was the magic ticket for LXC + ZeroTier. On Proxmox 7 this is now `cgroup2`, so the above config changes work perfectly if you make that swap.

cgroup changes to cgroup2

lxc.cgroup2.devices.allow = c 10:200 rwm
lxc.hook.autodev = sh -c "modprobe tun; cd ${LXC_ROOTFS_MOUNT}/dev; mkdir net; mknod net/tun c 10 200; chmod 0666 net/tun"