
2 docker containers communicating using domain names (Nextcloud + Onlyoffice)



I have a server with multiple web servers in separate Docker containers and an nginx reverse proxy. It sits behind a NAT with one public IP, which forwards connections on ports 80 and 443 to my server so nginx can proxy them according to the domain. This part works fine.

However, I need 2 containers to communicate with each other using the public domain name (Nextcloud requires OnlyOffice's full domain name for some reason), and when I curl/wget from the server to the URL of one of the containers, I get connection refused.
As I understand it, the packets have the server's IP as both source and destination, so the response is sent to localhost, and a hairpin NAT rule on my router didn't solve the issue.

So, how should I fix this?
Thanks everyone in advance.



Can you traceroute the fqdn to confirm it’s taking the path you expect?

Is there a firewall blocking private subnets anywhere in the mix?



If all you want is for requests originating from your Nextcloud container to resolve OnlyOffice's FQDN to the local IP of the OnlyOffice container, then just append an entry to the Nextcloud container's hosts file (IP and domain here are placeholders):

 root@nextcloud:~# echo "10.0.0.whatever onlyoffice.example.com" >> /etc/hosts


Yes, I use this method for some other containers at work, but those have uptimes of months and are monitored by a large team 24/7, so re-applying the workaround after a rare restart is easy. At home, though, I want Watchtower to update containers automatically, so I would have to write a script that watches for restarts and re-runs the command inside each container.

I'd like to avoid that if possible and have a persistent config, but thank you for the answer.

Thought about it again: I can simply make /etc/ persistent by mounting it from outside the container, so the hosts file will stay the same between restarts. Pretty obvious, but thank you for the tip.
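An alternative to mounting /etc/ out of the container: Docker Compose has an `extra_hosts` option that writes the entry into the container's /etc/hosts on every start, so it survives restarts and Watchtower recreations without a bind mount. A minimal sketch, assuming hypothetical service names, domain, and IP:

```yaml
services:
  nextcloud:
    image: nextcloud
    extra_hosts:
      # Map OnlyOffice's public FQDN to its internal IP
      # (both values are placeholders for illustration)
      - "onlyoffice.example.com:10.0.0.5"
```

Because `extra_hosts` is part of the container's configuration, Watchtower re-applies it automatically when it recreates the container with a new image.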



Traceroute gives me one hop: my public IP. So I think that proves the packet passes through my NAT and reaches the server, the server responds to localhost, and the connection breaks.
The firewall is not involved here; I even tried with all drop rules turned off.



I’d turn hairpin off if you haven’t already. You shouldn’t need it if it’s passing through the public ip.

My guess is it’s a port and/or proxy issue. Can you try to find the connection attempt in the logs?

How’s the proxy setup and how is it handling the certificate(s)?



This doesn't make sense to me. Are you seeing that in the logs? Once the packets exit your NAT, the private source IP should be translated to your public IP.



No, but this is basically the problem the concept of "hairpin NAT" solves; the only difference in this case is that both source and destination run on the same host, not just in the same subnet.

cburn11's advice helped me solve this: both hosts now communicate happily, resolving their FQDNs to Docker-internal IPs. I'll mark his answer as the solution since I've implemented and tested it.
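For anyone landing here later: another way to make containers resolve their public FQDNs to Docker-internal IPs, without hard-coding an IP at all, is a network alias on a shared user-defined network. Docker's embedded DNS then answers for the alias. A sketch with hypothetical names:

```yaml
networks:
  web:

services:
  onlyoffice:
    image: onlyoffice/documentserver
    networks:
      web:
        # Other containers on the "web" network can now reach
        # this container by its public FQDN (placeholder domain)
        aliases:
          - onlyoffice.example.com

  nextcloud:
    image: nextcloud
    networks:
      - web
```

Unlike a hosts-file entry, the alias keeps working even if the container's internal IP changes after a recreation.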



The containers are jwilder/nginx-proxy:alpine + jrcs/letsencrypt-nginx-proxy-companion.
The proxy watches for containers with a specific environment variable set to their FQDN and routes requests for that FQDN to the matching container. The Let's Encrypt container watches for a similar variable and passes the FQDN to certbot to request/renew certificates.
So all FQDNs point to my external IP, my router forwards all requests on ports 80 and 443 to my server's internal IP, and the proxy picks them up and forwards them to the container with the matching FQDN.
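For concreteness, a minimal Compose sketch of that setup (domains are placeholders; `VIRTUAL_HOST` and `LETSENCRYPT_HOST` are the variables those two images watch for):

```yaml
volumes:
  certs:

services:
  nginx-proxy:
    image: jwilder/nginx-proxy:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # The proxy watches the Docker API for containers with VIRTUAL_HOST set
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - certs:/etc/nginx/certs

  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - certs:/etc/nginx/certs
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy

  nextcloud:
    image: nextcloud
    environment:
      # Placeholder domain; the proxy routes this FQDN here,
      # the companion requests a certificate for it
      - VIRTUAL_HOST=cloud.example.com
      - LETSENCRYPT_HOST=cloud.example.com
```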



I'm using the snippet below to do what you need, for all containers. You can also use Docker container names via configuration options in, for instance, ownCloud; at least when I ran it that way I had no problems. I had this in my vhost and site files. The resolver line lets nginx use dockerd's internal DNS, and the set $upstream line prevents nginx from throwing an error at startup if a container is not up. The last line you know.

resolver 127.0.0.11 valid=30s;
set $upstream_bitbucket bitbucket;
proxy_pass http://$upstream_bitbucket:7990;
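In context, those three lines might sit inside a server block like this (server name and port are illustrative; 127.0.0.11 is Docker's embedded DNS resolver):

```nginx
server {
    listen 443 ssl;
    server_name bitbucket.example.com;

    # Use Docker's embedded DNS and re-resolve every 30s,
    # so container restarts with new IPs are picked up
    resolver 127.0.0.11 valid=30s;

    location / {
        # Resolving through a variable defers the DNS lookup to request
        # time, so nginx starts even if the container is currently down
        set $upstream_bitbucket bitbucket;
        proxy_pass http://$upstream_bitbucket:7990;
    }
}
```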