Multiple docker-compose projects on different subdomains

Hey there,

I am currently trying to set up Jitsi and Nextcloud on my own VPS, each on a separate subdomain. I have both of them running as docker-compose projects and both work fine by themselves. The issue is that they both have a web interface and listen on the standard HTTP(S) ports using NGINX, which means I cannot run them at the same time without changing the configuration. I have added an NGINX service on the host (aside from the 2 NGINX instances in the docker-compose projects) to handle the subdomains for each. My idea was to define a server block for each subdomain and forward the traffic to different ports on the upstream NGINX containers, but for some reason I cannot get this to work properly. I am still playing around with the configs, but does this setup make sense?

I am also uncertain how to handle SSL. Currently each of the NGINX instances in the projects is responsible for handling it, but I might have to change this so that my new NGINX service handles it instead.

This is the current config for my NGINX service. I was trying to forward traffic on port 80 to 8000 and 443 to 8443. I have also changed the ports that the meet project's NGINX container listens on, but with no luck.

server {
    if ($host = nextcloud.domain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name nextcloud.domain.com;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}

server {
    listen 443 ssl;
    server_name nextcloud.domain.com;

    location / {
        proxy_pass http://localhost:8443;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    ssl_certificate /etc/letsencrypt/live/meet.domain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/meet.domain.com/privkey.pem; # managed by Certbot
}

server {
    server_name meet.domain.com;

    location / {
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/meet.domain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/meet.domain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}


server {
    if ($host = meet.domain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name meet.domain.com;
    return 404; # managed by Certbot
}

Any help/ideas would be appreciated. Cheers!

Vicious

Unless you have some specific reason not to, I would suggest letting the host nginx handle all of the SSL stuff and proxy passing via plain HTTP; since it is all on a single machine, there is no need to encrypt that leg. You could then point the proxy_pass in the SSL nextcloud block at port 8080.

You have the wrong ssl certificate for the nextcloud instance, unless the meet.domain.com certificate also has nextcloud.domain.com on it.

You should be able to remove the proxy pass from the nextcloud non-SSL block and just return a 404 like the meet nginx block does.
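Putting those three suggestions together, a rough sketch of what the nextcloud part could look like. This assumes the container serves plain HTTP on port 8080, and that you have issued a certificate for nextcloud.domain.com (the live/ path below only exists once you have):

server {
    listen 443 ssl;
    server_name nextcloud.domain.com;

    # Certificate for nextcloud.domain.com, not the meet.domain.com one
    ssl_certificate /etc/letsencrypt/live/nextcloud.domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/nextcloud.domain.com/privkey.pem;

    location / {
        # TLS terminates here; talk plain HTTP to the container
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}

server {
    listen 80;
    server_name nextcloud.domain.com;

    if ($host = nextcloud.domain.com) {
        return 301 https://$host$request_uri;
    }
    return 404;
}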

1 Like

I was doing something similar with nginx this last weekend - basically I have a single public/routable IPv4 at home, and I have multiple hosts that are doing their own ACME certs.

So my router, completely agnostic of TCP stream context, does DNAT from my public IPv4 to one of the hosts on port 8443. There, nginx is set up to listen on 8443 using ssl_preread, which lets nginx multiplex to the correct backend based on the SNI field.

...
# Goes inside the stream {} context, not http {}
map $ssl_preread_server_name $name {
    backend1.example.com    backend1;
    default                 backend0;
}

upstream backend0 {
    server 192.168.0.10:443;
}

upstream backend1 {
    server 192.168.0.11:443;
}

server {
    listen      8443;
    proxy_pass  $name;
    ssl_preread on;
}
...

It’s one of the examples from: nginx 1.15.2, ssl_preread_protocol, multiplex HTTPS and SSH on the same port - Raymii.org
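If you want to sanity-check which backend a given name lands on, openssl lets you set the SNI by hand (the hostnames are the placeholder values from the config above, and <public-ip> stands in for the real address); each connection should present the certificate of the backend it was routed to:

openssl s_client -connect <public-ip>:8443 -servername backend1.example.com
openssl s_client -connect <public-ip>:8443 -servername anything.else.example

The first should reach 192.168.0.11, the second falls through to the default backend0.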

  • I don’t have a clean setup for the /.well-known/acme-challenge stuff (just regular forwarding)
  • I only have a single HTTP host talking QUIC over UDP port 443 and multiplexing it to various other containers; it’s not using nginx, and I don’t know if QUIC is multiplexable as easily.
1 Like

I managed to get it to work by just proxy passing via HTTP. I only had some issues with my docker nginx trying to upgrade the connection again, which ended in a redirect loop. It is now working as expected. Thanks for your help!
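For anyone who hits the same loop: it usually happens because the backend cannot see that TLS was already terminated at the front proxy, so it keeps redirecting to https://. A sketch of the common fix, assuming the host nginx from the configs above:

location / {
    proxy_pass http://localhost:8080;
    proxy_set_header Host $host;
    # Tell the backend the original request was already HTTPS,
    # so it does not issue another https:// redirect
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

For Nextcloud specifically there is also 'overwriteprotocol' => 'https' in config.php, which serves the same purpose.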

1 Like

This is very interesting.

I’ve been doing simple proxying to unencrypted HTTP backends. Since the backends are docker containers on the same host, that is fine. But I have been wondering how to handle proxying to other hosts, and was thinking of using a locally signed certificate for those proxied backends; that doesn't always work too well, though, if the backends do some checking of the communication from the browser.

But how does this not perform a MITM? Do those proxied backends then really do their own ACME challenges? How does this work?

How about running it through a wireguard tunnel?

Now that really is an interesting idea…

This works because the hostname in classic TLS 1.2/1.3 is unencrypted. The SNI field in the first message the client sends when establishing the TLS connection (the ClientHello) is sent in clear text. It enables nginx to identify the host, and it enables simple webserver software to pick the right certificate and crypto parameters for the connection.

In the config above, nginx doesn't actually decrypt traffic or terminate the TLS session; it doesn't need to. It does, however, terminate the TCP connection, and just peeks into the ClientHello TLS record to know down which connection to forward it, and it forwards it unaltered…
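You can watch this happen by logging the peeked name; a minimal sketch using the stream module's standard log directives (the log path is made up):

# Inside the stream {} context, next to the map above
log_format sni '$remote_addr asked for "$ssl_preread_server_name" -> $name';

server {
    listen      8443;
    ssl_preread on;              # peek at the ClientHello, never decrypt
    proxy_pass  $name;
    access_log  /var/log/nginx/sni.log sni;
}

The forwarded bytes are still the client's original handshake, so only the client and the chosen backend ever hold the session keys; the proxy in the middle cannot read the traffic.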

…well, if you additionally enable proxy_protocol, then it prepends the source IP and port in plaintext right before the ClientHello, so the downstream server has that info too. Don't do that if the server doesn't expect it.
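A sketch of both ends of that, reusing the assumed addresses from above; the backend must be told to expect the PROXY protocol header, otherwise the TLS handshake will fail on the leading plaintext:

# Front (stream context): prepend the PROXY protocol header
server {
    listen         8443;
    ssl_preread    on;
    proxy_pass     $name;
    proxy_protocol on;
}

# Backend nginx (http context): accept it and recover the client address
server {
    listen 443 ssl proxy_protocol;
    set_real_ip_from 192.168.0.0/24;   # the front proxy's network
    real_ip_header   proxy_protocol;
    ...
}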


ESNI and ECH might break this in the future.

ESNI (Encrypted SNI, i.e. Encrypted Server Name Indication) and ECH (Encrypted ClientHello, the ClientHello being the first thing the client sends in TLS).

ESNI is complicated to deploy (the client needs to encrypt the server name with the server's public key, which it has to get from DNS by querying with that very server name)… so you leak the server name anyway unless you're using DNS over TLS or DNS over HTTPS, in which case you're only leaking the fact that you're using DNS.

ECH is different and even more complicated; it also requires DNS changes to let clients take advantage of it.

Cloudflare has good blog posts on these.

1 Like