TrueNAS Scale: unable to curl page from Docker container on same server

Hi all,

A couple of months ago I set up my TrueNAS Scale server and got Docker up and running following the excellent guide I found on this very forum.

Here’s what my infrastructure looks like:

  • TrueNAS Scale server running Docker containers in a proxy network
  • Nginx Proxy Manager running in Docker on the same proxy network
  • Unifi Express gateway, server has a static IP address both in Unifi and on its physical NIC (10.0.0.7)
  • no firewall rules on Unifi currently - for testing purposes
  • port forwarding enabled in Unifi, directing port 443 to 10.0.0.7:443, where Nginx Proxy Manager is listening
  • Authelia in Docker - auth.mydomain.com, internally on the proxy Docker network

A couple of days ago I noticed this strange behaviour: if I ping 111.111.111.111 (imagine this is my public IP address) from the TrueNAS server or from any Docker container, it works fine, without any packet loss. However, if I run curl auth.mydomain.com/.well-known/openid-configuration, the command terminates with a connection timeout. In a browser, on the other hand, the same page loads fine, and I can curl it from my laptop, which is on the same Unifi network and subnet.

Additionally, all the commands – ping, curl, traceroute, telnet – resolve the domain to the correct public IP address. Nevertheless, the server cannot… talk to itself.

Please help me figure out what could be causing this.

Thank you!

curl auth.mydomain.com/.well-known/openid-configuration works from a laptop but not from … ?

Add https:// in front and try curl -v to see if SSL negotiation happens, or if perhaps port forwarding isn’t working after all.

compare this to curl -v from a laptop.

My guess is that DNS settings differ between the laptop and the various containers, or that port forwarding isn’t set up correctly and you only have a DNAT rule, and not an SNAT rule for packets coming in from the local network.
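To separate the DNS question from the routing question, curl can be pinned straight to the LAN address; a diagnostic sketch (the 4443 backend port is taken from later in the thread and may need adjusting):

```shell
# What does each resolver say? (assumes dig is available)
dig +short auth.mydomain.com               # system resolver
dig +short auth.mydomain.com @10.0.0.7     # AdGuard Home on the server

# Bypass DNS and NAT entirely: map the public name/port straight to the
# nginx backend on the LAN (adjust 4443 to whatever NPM listens on)
curl -v --connect-to auth.mydomain.com:443:10.0.0.7:4443 \
    https://auth.mydomain.com/.well-known/openid-configuration
```

If the pinned request succeeds while the plain one times out, the problem is name resolution or hairpin NAT rather than nginx itself.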

Yes, curl works from my laptop, but not when I attempt it in an ssh session on my server.

I honestly don’t know the difference between a DNAT and an SNAT rule. Port forwarding is set up in the most basic way: in Unifi Network > Settings > Security > Port Forwarding I’ve put

From: Any
Port: 443
Forward IP: 10.0.0.7 # my server's internal IP
Forward port: 4443 # Nginx Proxy Manager is listening on this port
Protocol: TCP
Logging: Enabled

What else do I need?
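Before digging into NAT rules, it may be worth confirming that something is actually listening on the forward port; a quick sketch, assuming shell access on the TrueNAS host (the 4443 port is from your settings above):

```shell
# Is anything bound to 443/4443 on the host?
ss -tlnp | grep -E ':(443|4443)\b'

# Talk to Nginx Proxy Manager directly by LAN IP, skipping DNS and the
# router entirely (-k because the cert won't match the bare IP)
curl -vk https://10.0.0.7:4443/
```

If the direct curl works, the container side is fine and the timeout is happening at the gateway.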

Here are the results of curl on the server:

root@truenas[~]# curl -v https://auth.mydomain.com/.well-known/openid-configuration
*   Trying 111.111.111.111:443...
* connect to 111.111.111.111 port 443 failed: Connection timed out
* Failed to connect to auth.mydomain.com 443 after 129373 ms: Couldn't connect to server
* Closing connection 0
curl: (28) Failed to connect to auth.mydomain.com port 443 after 129373 ms: Couldn't connect to server

There is AdGuard Home running on the server, and it is set up as the local network’s DNS (via DHCP), so it is the DNS picked up by my laptop, for instance. The TrueNAS server itself, on the other hand, explicitly uses Cloudflare’s 1.1.1.1 DNS.

You probably already tried this, but just in case: are you able to ping your private IP from all the same sources?


Some people call this “hairpin NAT”…

Let’s say you only have a DNAT rule to begin with, no hairpin NAT.

  • TCP SYN packet goes out to 111.111.111.111:443
  • Firewall catches it and rewrites the destination address (DNAT) to 10.0.0.7:8443
  • Firewall remembers this connection for later replies
  • Packet gets routed to nginx
  • nginx sends back a TCP SYN-ACK packet with destination address and port equaling the source address and port of the original packet.

If the SYN-ACK reply routes back through the same device and through the same firewall rules, the DNAT connection-tracking entries will match it to the previous connection, and the firewall will rewrite the source IP:port of the reply back to 111.111.111.111:443.

That way the client opening a connection is always talking to 111.111.111.111:443, and when the reply packets come back there are no issues.

For example, if you’re port forwarding from the internet, it’s all good: packets go through your router in both directions.

However, if you’re on the local network, let’s say at 10.0.0.33, and the reply doesn’t go through that same firewall,

then this SYN-ACK reply will have a destination address of 10.0.0.33 and a source address of 10.0.0.7:8443. When the client host at 10.0.0.33 receives it, it’s going to think “WTF, I’m trying to talk to 111.111.111.111:443, what’s this junk? Drop it, keep waiting for 111…” – and that reply will never come.


So, in hairpin NAT, you also rewrite the source address, forcing your nginx to reply to the firewall even when it has a more direct route to the client. It’s an SNAT rule because it’s the source address of the first packet in a connection that gets rewritten; with DNAT it’s the destination address.

That means your nginx proxy will see all those connections as originating from the firewall itself, will reply to it, and the firewall’s address is the client IP that’ll show up in all the logs.
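I can’t give the Unifi UI steps, but in iptables terms (which most home gateways use underneath) the pair of rules described above would look roughly like this; the addresses and the 4443 port are taken from the thread, and the exact chains Unifi uses are an assumption:

```shell
# DNAT: anything aimed at the public IP on 443 goes to nginx on the LAN
iptables -t nat -A PREROUTING -d 111.111.111.111 -p tcp --dport 443 \
    -j DNAT --to-destination 10.0.0.7:4443

# Hairpin SNAT: when the client is also on the LAN, rewrite the source too,
# so nginx replies via the firewall instead of directly to the client
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -d 10.0.0.7 -p tcp --dport 4443 \
    -j MASQUERADE
```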

For completeness: in enterprise/datacenter networks there is a different technique for load balancing incoming connections, called DSR (Direct Server Return), where client packets are wrapped with IPIP or GRE headers that get stripped by the server’s networking stack, so the server knows where to reply. It’s a lot of work to get right – though it doubles the bandwidth you get per unit of expensive routing silicon – and home networkers don’t do it.


Another alternative to hairpin NAT, used both in homes and in the enterprise, is DNS, but then the port numbers need to match.

On the Internet, auth.mydomain.com resolves to 111.111.111.111; at home, internally, auth.mydomain.com would resolve to 10.0.0.7. There’d be no NAT involved at home.

But as I mentioned, you need 10.0.0.7 listening on 443 for the URL to look the same.


I don’t want to be cheeky, but would you also happen to know how to accomplish this with a Unifi device/firewall? :blush:

The thing about the DNS solution is that I do have a local DNS server – AdGuard Home – where I could add a local A record, but because it runs on the same server, I opted to use Cloudflare for the server itself.

EDIT: I successfully tested a somewhat crude solution, namely adding a line to the server’s /etc/hosts file like this: 10.0.0.7 auth.mydomain.com, and then bind-mounting it in each Docker container’s volumes section. I don’t know if it’s ideal, so if someone can recommend a better approach, I’d be grateful.

Hi, I don’t have any unifi routers/firewalls/gateways on hand.

I also use Adguard Home for my local DNS stuff, and there I have a bunch of things under Filters > DNS rewrites.

How are you using containers on TrueNAS Scale? Which exact guide did you follow? Docker, for example, lets you override DNS and hosts entries through the command line or, if you use Docker Compose, through the config file.
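For instance, with Docker Compose the hosts override mentioned above can be expressed per service, without bind-mounting /etc/hosts; a sketch, with a hypothetical service name:

```yaml
services:
  authelia:                            # hypothetical service name
    image: authelia/authelia
    extra_hosts:
      - "auth.mydomain.com:10.0.0.7"   # pin the public name to the LAN IP
    dns:
      - 10.0.0.7                       # optional: use AdGuard Home as resolver
```

Compose writes extra_hosts entries into the container’s /etc/hosts at startup, so each container gets the override without any shared bind mount.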

For some reason, I can’t add links to my posts, otherwise I would have included it in my OP. I used the guide called “TrueNAS Scale Native Docker & VM access to host [Guide]” available on this very forum.

Since I have another, separate machine on my network, I thought I could install AdGuard on it, sync the two instances, and set the second instance as the DNS in TrueNAS – this way it won’t have to rely on itself, so to speak, in the event of Docker failing or not having started yet. What do you think? Or is there a better way?

Can you access the TrueNAS web UI, or SSH, without working DNS if AdGuard Home doesn’t start?

e.g. it looks like you should be able to install the “Tailscale App” on TrueNAS Scale and use that to get in and fix whatever needs to be fixed.