Help needed - docker container inaccessible via reverse proxy

Hi all –

I’ve been trying to set this up for a solid week, and am wholly out of ideas (and searching has thus far turned up nothing).

This should be simple: a Docker container running on a Debian VPS, with nginx setting up a reverse proxy so I can use a subdomain.

The container basically works, and is accessible via http://[domain]:[port]. However, going to the subdomain results in a 504 Gateway Timeout. From the VPS, running curl -vvv localhost:[port] connects and sends the initial GET request, after which it just hangs for a while before failing with Connection reset by peer.
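
Roughly, that exchange looks like this:

$ curl -vvv http://localhost:[port]
# ...connects and sends GET / HTTP/1.1, then hangs until:
# curl: (56) Recv failure: Connection reset by peer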

I’ve tried a different Docker image with the same result. Running curl on localhost from within the Docker container doesn’t show any errors. The nginx error logs show a 111: Connection refused message when trying to access the subdomain from the web, but that’s it.

The main nginx server block, i.e. not the one for the subdomain, works fine.

Here’s the nginx config for the subdomain:

server {

    index index.html index.htm index.nginx-debian.html;
    server_name [subdomain.domain.com]; # managed by Certbot

        location / {
                proxy_pass http://localhost:5874;
        }


    listen [::]:443 ssl; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/[subdomain.domain.com]/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/[subdomain.domain.com]/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

}

server {
    if ($host = [subdomain.domain.com]) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

        listen 80 ;
        listen [::]:80 ;
    server_name [subdomain.domain.com];
    return 404; # managed by Certbot

}

I’m really stumped. At this point I don’t even know where else to look to figure out what’s wrong.

I’m exceedingly grateful for any suggestions y’all may have!

1 Like

That sounds to me like the issue might be with Docker. Try something simple like this:

$ docker run -d --name testing-localhost --rm -p 8987:80 traefik/whoami --verbose
$ curl http://localhost:8987
Hostname: 403b256fde51
IP: 127.0.0.1
IP: ::1
IP: 172.17.0.2
RemoteAddr: 172.17.0.1:61094
GET / HTTP/1.1
Host: localhost:8987
User-Agent: curl/8.11.1
Accept: */*
$ docker logs testing-localhost
2024/12/25 20:53:26 Starting up on port 80
2024/12/25 20:53:26 172.17.0.1:61094 - - [25/Dec/2024:20:53:26 +0000] "GET / HTTP/1.1" - -
$ docker stop testing-localhost

Does this work?

Can you share the compose-file?

Is there another reverse proxy inside the docker stack?

Edit, also:

  • Do you run a firewall on the host?
  • Which distribution are you running on?
  • Is SE Linux enabled?
1 Like

Thanks very much for the reply!

For your example (the test image with traefik), docker ps reports that it’s up, but curl localhost:8987 sits there for a while, followed by curl: (56) Recv failure: Connection reset by peer.

docker logs, meanwhile, just says Starting up on port 80 with nothing after that.

The compose file is as follows. It’s from the web app (I can’t link it due to being new here), but with a change in the port and an actual key for JWT_SECRET_KEY:

version: "3"
services:
  front:
    image: tombursch/kitchenowl-web:latest
    restart: unless-stopped
    # environment:
    #   - BACK_URL=back:5000 # Change this if you rename the containers
    ports:
      - "8080:80"
    depends_on:
      - back
  back:
    image: tombursch/kitchenowl-backend:latest
    restart: unless-stopped
    environment:
      - JWT_SECRET_KEY=PLEASE_CHANGE_ME
    volumes:
      - kitchenowl_data:/data

volumes:
  kitchenowl_data:

I’ve also tried the “all-in-one” compose file from that site, but it didn’t work, either.

As to whether there’s another reverse proxy inside the docker stack: maybe? Checking the logs of the front service (from the above compose file), there are references to some nginx configs, but with paths that don’t match those for the “main” nginx instance (i.e. the one on my VPS, not the one within the container).
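
To check those, I’m running something like:

$ docker compose logs front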

Here’s the log for that container:

/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/01-kitchenowl-customization.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf differs from the packaged version
/docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/default.conf.template to /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2024/12/26 03:38:36 [notice] 1#1: using the "epoll" event method
2024/12/26 03:38:36 [notice] 1#1: nginx/1.26.2
2024/12/26 03:38:36 [notice] 1#1: built by gcc 13.2.1 20240309 (Alpine 13.2.1_git20240309)
2024/12/26 03:38:36 [notice] 1#1: OS: Linux 6.1.0-28-amd64
2024/12/26 03:38:36 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2024/12/26 03:38:36 [notice] 1#1: start worker processes
2024/12/26 03:38:36 [notice] 1#1: start worker process 37

As I mentioned, the only oddity is some of those paths for nginx; there is no /etc/nginx/templates on my VPS, for example.

For your other questions:

  • Firewall is iptables. I’ve got a rule set for port 80, and have tried stopping iptables entirely, but that doesn’t change anything
  • Distro is Debian, kernel version 6.1.119-1
  • No SE Linux as far as I can tell (for example, running sestatus results in “command not found”)

Ok, that’s the problem we have to focus on.

Have you tried 127.0.0.1 instead of localhost?

$ curl http://127.0.0.1:8987

Next steps would be to look at it with tcpdump:

$ sudo apt-get install tcpdump
$ docker run -d --name testing-localhost --rm -p 8987:80 traefik/whoami --verbose
$ sudo tcpdump port 8987 & 
$ curl http://127.0.0.1:8987

And try it without docker:

$ mkdir test
$ cd test
$ echo 'test' > index.html
$ python3 -m http.server 8765 & 
Serving HTTP on :: port 8765 (http://[::]:8765/) ...
$ curl http://127.0.0.1:8765
::ffff:127.0.0.1 - - [26/Dec/2024 09:52:31] "GET / HTTP/1.1" 200 -
test
2 Likes
  1. curl 127.0.0.1:8987 gives the same result

  2. With the test container up and running, tcpdump gives:

[1] 84730
listening on ens3, link-type EN10MB (Ethernet), snapshot length 262144 bytes

Running curl again does the same as before: nothing happens for a while until it times out (Connection reset by peer). tcpdump quits at the same time with the same message. (I ran it with the -vv flag, btw.)

  3. The test without docker works properly.
2 Likes

What happens if you try it with the docker IP? For instance, to get to my mariadb I use 172.17.0.3

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container_name_or_id>

or something like that, to get that IP
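
For example, with the whoami test container from the earlier posts (container name and internal port taken from those):

$ IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' testing-localhost)
$ curl http://$IP:80   # the container's internal port (80), not the published one (8987)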

2 Likes

Since the Python test server works but Docker containers don’t respond, let’s dig deeper into Docker’s networking configuration. Please check:

  • Docker network setup:
docker network ls
ip a show docker0
  • Docker iptables rules:
sudo iptables -t nat -L -n -v | grep DOCKER
sudo iptables -L -n -v | grep DOCKER
  • Docker service status and recent logs:
systemctl status docker
journalctl -u docker --since "1 hour ago"
  • Given that local networking works outside Docker but fails through Docker’s bridge, let’s try bypassing Docker’s network stack by running a container with host networking, then test again with curl:
docker run -d --network host --name testing-localhost --rm traefik/whoami --verbose --port 8987
curl http://127.0.0.1:8987
  • If that still fails, we should examine network connectivity more closely:
netstat -tulpn | grep LISTEN
tcpdump -i any -nn -vv "port 8987"
  • Finally, check the Docker daemon configuration:
cat /etc/docker/daemon.json

Based on what we’ve seen so far, this could be:

  • A Docker networking configuration issue
  • A system-level restriction affecting the Docker network stack
  • A possible iptables rule specifically affecting the Docker bridge network

Did you follow this guide to install docker?

Or was it preinstalled in the VPS Debian image? This is a real head-scratcher; I’ve never seen anything like it. Was this a fresh installation?

2 Likes

To start, your link to the install docs made me wonder. When I first installed docker, it was via the cache repository my hosting service uses. I tried installing it from the official repository (per the instructions you linked), but unfortunately I still get a 504 Gateway Timeout on the subdomain address with the container running (and curl localhost:8080 still times out).

All of the info below is with this “new” installation of docker and after a restart of the VPS to be sure.

With the container I’ve been fighting with running, here’s what I get from docker network ls:

NETWORK ID     NAME               DRIVER    SCOPE
5cd814282674   bridge             bridge    local
5c52b9d69451   host               host      local
a769213e182e   none               null      local
1da4b4e580fa   [username]_default   bridge    local

ip a show docker0:

3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:9c:4b:e0:8b brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:9cff:fe4b:e08b/64 scope link
       valid_lft forever preferred_lft forever

iptables [ ] (first command in your post):

  398 18963 DOCKER     0    --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL
    0     0 DOCKER     0    --  *      *       0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL
Chain DOCKER (2 references)

Second iptables command:

   99 66117 DOCKER-USER  0    --  *      *       0.0.0.0/0            0.0.0.0/0
   99 66117 DOCKER-ISOLATION-STAGE-1  0    --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 DOCKER     0    --  *      br-1da4b4e580fa  0.0.0.0/0            0.0.0.0/0
    0     0 DOCKER     0    --  *      docker0  0.0.0.0/0            0.0.0.0/0
Chain DOCKER (2 references)
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
   48  5001 DOCKER-ISOLATION-STAGE-2  0    --  br-1da4b4e580fa !br-1da4b4e580fa  0.0.0.0/0            0.0.0.0/0
    0     0 DOCKER-ISOLATION-STAGE-2  0    --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
Chain DOCKER-USER (1 references)

Checking systemctl status shows docker.service as active/running.

Next, running the “testing” container, I do now get a valid response from curl http://127.0.0.1:8987:

Hostname: [hostname]
IP: 127.0.0.1
IP: ::1
IP: [server's public IPv4 address]
IP: [ipv6 address, not sure if it's public or not]
IP: 172.17.0.1
IP: [ipv6 address, not sure if it's public or not]
IP: 172.18.0.1
IP: 172.19.0.1
IP: 172.18.0.1
IP: [ipv6 address, not sure if it's public or not]
RemoteAddr: 127.0.0.1:43992
GET / HTTP/1.1
Host: 127.0.0.1:8987
User-Agent: curl/7.88.1
Accept: */*

netstat [ ]:

tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      7261/nginx: master
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      7261/nginx: master
tcp        0      0 0.0.0.0:8765            0.0.0.0:*               LISTEN      84965/python3
tcp        0      0 0.0.0.0:2740            0.0.0.0:*               LISTEN      8829/sshd: /usr/sbi
tcp6       0      0 :::8987                 :::*                    LISTEN      88655/whoami
tcp6       0      0 :::443                  :::*                    LISTEN      7261/nginx: master
tcp6       0      0 :::80                   :::*                    LISTEN      7261/nginx: master
tcp6       0      0 :::2740                 :::*                    LISTEN      8829/sshd: /usr/sbi

tcpdump [ ]:

tcpdump: data link type LINUX_SLL2
tcpdump: listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
^C
0 packets captured
2 packets received by filter
0 packets dropped by kernel

There is apparently no /etc/docker/daemon.json file.

2 Likes

Hi again mietzen. Hope you had a good holiday, and I appreciate the assistance you’ve given so far. I just wanted to check in and see whether the most recent information suggests anything.

UPDATE

In a fit of desperation, I nuked the VPS and started over. And of course now the docker image is working perfectly out of the box.

I cannot thank you enough for working on this. I still haven’t the first idea why it didn’t work (or what changed). There are only two things I did differently:

  • used ufw rather than iptables directly
  • changed the ports the docker container is pointing to (from [whatever]:80 to [whatever]:[whatever])

¯\_(ツ)_/¯

3 Likes

Hey, sorry I was ill after Christmas and had to recover. Glad to hear that you got it working. Be careful with ufw and docker:

2 Likes

No problem, and I hope you’ve recovered!

Reading that, I’m confused about something. Don’t I want the container to be accessible from the public web? That’s the whole point of a web app, after all…

1 Like

Thanks, I’m better now :)

Yes, but I have always viewed the firewall as the last line of defense, expecting that only the rules defined there apply. So seeing Docker punch additional holes in it felt counterintuitive, to say the least.

The worst part, in my opinion, is the standard -p 8080:80 option. This publishes the port on *all* interfaces, effectively punching holes in the firewall everywhere. One could argue that this is a case of RTFM, since you can bind the forwarded port to a specific interface with -p 192.168.1.123:8080:80. However, I would still argue that it is unexpected behavior for an application to override the firewall in this manner.
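
To illustrate (a minimal sketch; the image and ports are just placeholders):

# Published on ALL interfaces via Docker's own iptables rules,
# so reachable from outside even if ufw only allows 22, 80, and 443:
$ docker run -d --rm -p 8080:80 nginx

# Bound to loopback only: reachable by a local reverse proxy, but not from outside:
$ docker run -d --rm -p 127.0.0.1:8080:80 nginx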

I discovered this years ago when I had only allowed ports 22, 80, and 443 in UFW, yet I could still access the Traefik dashboard on port 8080 from the web. You could argue that this was poor implementation on my part (not using the secure dashboard, not binding it to 127.0.0.1, etc.), but mistakes like this happen, especially when you think you have a firewall securing your server. In my opinion, it is like setting your firewall up correctly, then installing e.g. postgres: since the default config does not bind it to a specific interface, it opens its ports on all interfaces, disregarding all firewall rules.

TL;DR: There are edge cases where you want to expose a service only to specific interfaces, not all. Most (if not all) of these cases can be addressed by binding to specific interfaces or using a reverse proxy to manage access rules. Nevertheless, it is unexpected for an application to disregard firewall rules.

1 Like

I’d definitely agree with the “unexpected” part. In my case, though, especially since I am using a reverse proxy, I’m still unclear to what extent this is actually a security risk or vulnerability.

I’m of course also nervous about making changes when things are finally working…

As long as you only publish ports 80 and 443 in your compose file and manage all access via a reverse proxy, you are fine.
Just keep in mind that any new or additional compose file or docker run command might open further ports, so review them carefully.
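
One quick way to review what is actually published on the host (a sketch; the format string uses docker ps’s standard Go templating):

$ docker ps --format '{{.Names}}: {{.Ports}}'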

1 Like

Hmm…I definitely have the docker container set to 8080:8080, and I was wondering if that’s part of what worked. But again, isn’t having that open to the web a requirement for it working? Also, wouldn’t I specifically have to use a port other than 80 with a subdomain (assuming I want the domain root, i.e. not the subdomain, to work as well)?

1 Like

Sorry, I don’t fully understand what you mean. Could you post your compose file?

Are you running the reverse proxy on bare metal or in Docker?

1 Like