
HAProxy-WI -- Run lots of public services on your home server

What on earth is this?

Note: This how-to assumes you are familiar with the underlying technologies at least a bit: Comfortable on the Linux CLI, familiar with installing packages on your distro, familiar with SSH and key-based authentication. At least a vague notion about nginx, web services, port forwarding and network address translation (NAT).

If you’re a tinkerer chances are you’ve forwarded a port from your home router to some machine inside your network.

In an effort to make things more newbie-friendly, a protocol called UPnP was created, wherein a device, like an Xbox, could tell your router to forward it an inbound connection. (Sometimes you see complaints about “strict NAT” not permitting inbound connections with these devices…)

What I want to do is set up something like:

and offer services on them. I know many of our community run things like NextCloud in the cloud, but I want to run these on my home internet connection. The main problem is that I don’t want DNS resolution to lead directly to my home IP. I also want an extra layer of filtering and protection beyond what I would get with my home firewall.

Why my home connection? While bandwidth and latency are not as good as “the cloud,” the compute and storage costs are much, much lower. As in, just-barely-more-than-the-cost-of-electricity lower. It’s a sunk cost that I’ve got all these old drives and computers lying around – they can run my infrastructure.

In truth, 5-10 megabits of upload is a sort of lower bound to do “OK” with these services, but at 25-100 megabits of upload you’ll be hard-pressed to tell any difference from a cloud setup. If you’re a data hoarder, then this is a great setup.

It means the ideal setup is to have these DNS entries, and services, terminate on something in the cloud (such as a Linode machine) and then proxy, or forward, connections to the service and port on my home machine.

Introducing HAProxy
The Reliable, High Performance TCP/HTTP Load Balancer

Yes, most people will load balance to just a single node on their home/homelab connection. Or maybe not? haha. The Dramble.

HAProxy is in every major distro. You can even configure it to work with SSL/HTTPS and the configuration is pretty straightforward.

Just forward Port 80/443 from your public IP to the internal machine running HAProxy, and then HAProxy can forward to any number of internal machines that you configure.

For email, you can use the “TCP” forwarding mode of HAProxy and forward port 25 to a non-standard inbound port on your home machine. (Almost all ISPs block inbound port 25 – and for good reason: most people do not know how to securely maintain an email server!)
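As a sketch, a TCP-mode mail forward in haproxy.cfg might look like the following; the backend name, home IP and non-standard port here are hypothetical examples, not values from this guide:

```
# cloud box accepts mail on port 25 and relays the raw TCP stream home
frontend smtp_in
    mode tcp
    bind :25
    default_backend home_smtp

backend home_smtp
    mode tcp
    # hypothetical home IP and non-standard port
    server home1 check
```

Because this is TCP mode, HAProxy never inspects the mail itself; it just shovels bytes to the home machine.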

Too complicated? – Need GUI?

Enter HAProxy-WI

One person’s (apparently) labor of love – a reasonable GUI to manage the madness. While it is open source, if you want prebuilt binaries or a Docker image, you are supposed to subscribe.

I am not sure how I feel about this, but I did talk the author into adding a lower-cost subscription tier for home/personal use.

I am a believer in supporting creators, so I subscribed to try it out and I used the docker image, as well as the source on github. Both seemed to work more or less equivalently.

I would feel better about it if it were formally audited. I would also feel better if, rather than using prebuilt binaries, I built my solution directly from GitHub in an automated way.

( Any precocious forum users want to create a docker-compose yaml that builds an appliance from GitHub? :smiley: )

This project is a little more sophisticated than just a front-end for HAProxy, too. HAProxy itself, which is fully free (libre) and open source, is a proxy intended to serve as a single endpoint that forwards requests to a pool of many servers behind it.

HAProxy-WI gives us a nice GUI for that, but also for NGINX (a nice and sophisticated web server) and Keepalived. It is fairly straightforward to manage these processes directly from the command line – and for home users these services mostly work fine on a low-power device, like a Raspberry Pi (for tens to hundreds of megabits of bandwidth, anyway).

Because most home internet connections only have a single IP, HAProxy can be “the” server running services such as HTTP, HTTPS, etc. directly on that IP and then, through the magic of packet sniffing (or, in the case of HTTPS, SNI), figure out which internal server you intended to connect to.
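For cases where you don’t want HAProxy to terminate the TLS at all, the SNI trick can be sketched in TCP mode. The hostnames and backend names below are hypothetical illustrations:

```
# route raw TLS connections by SNI without decrypting them
frontend tls_in
    mode tcp
    bind :443
    # wait briefly so the TLS ClientHello (which carries the SNI) arrives
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_sni -m found }
    use_backend plex_tcp if { req_ssl_sni -i plex.example.home }
    default_backend web_tcp
```

The inspect-delay is required: without it HAProxy would route the connection before the ClientHello has been read.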

You do NOT need this gui to use the knowledge in this little how-to, but it is a quality of life improvement.

More Complex Setups

Some ISPs filter inbound connections on HTTP/HTTPS. That’s okay. You can run haproxy-wi on a very small instance on Linode, for example, and then forward inbound ports 80/443 on your Linode machine to port 52135 on your home internet connection. No ISP is filtering port 52135. And because of the way the proxy system works, the web clients hitting your website don’t know the traffic is really coming from your home computer on a non-standard port.

Linode IP (HAProxy) > (tcp forwarding) > Home IP (public) > (router port forward) > internal IP:8444

While it is possible to skip the “cloud endpoint” and run this on your home internet connection, I think I should probably write that up separately.

This is a great way to have a giant media collection “online” but not have it directly on the internet.

(And, not with HAProxy-WI but by doing it manually, you can layer on filtering and an intrusion detection system, like Snort, to help protect your home network that much more.)

Furthermore, you can restrict inbound traffic to your home IP only from your Linode machine (or other server on the internet) in case your ISP starts snooping on your traffic.
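On a firewalld-based system, that restriction might look like the following; the source address and port here are hypothetical placeholders for your Linode’s IP and your forwarded port:

```
# permit the forwarded port only from the cloud proxy's address
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="" port port="52135" protocol="tcp" accept'
firewall-cmd --reload
```

Anyone else probing that port from the internet then gets nothing at all.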

Getting Started with HAProxy-WI

It’s not time for that yet! To use HAProxy-WI, you need working HAProxy and (ideally) Nginx!

The first step is setting up a VM which has a public IP address (ideally) and getting your DNS set up. Since HAProxy-WI is just a front-end, you also need to go ahead and install HAProxy and NGINX (the web server) on your cloud host.

If you want a setup like mine, just set up a DNS wildcard A/AAAA record like


that has your linode public IP(s) – 46.xx.yy.zz / [::] etc for example.

Once you’ve done that, I’d recommend also setting up Docker, and then running haproxy-wi as a Docker container.

**Note:** where the instructions say to run it on port 443, I suggest instead running it on another port such as 8443 if you are planning to use this same IP address for your wildcard domain.

Linode/Cloud IP:443 – the HAProxy service itself
Linode/Cloud IP:8443 – the HAProxy-WI gui

You see, what HAProxy-WI does is connect, via SSH, to the server you set up, and then it updates the haproxy and nginx configs.

I found it convenient to just run Haproxy-WI in a docker container and have it connect back to localhost via ssh.

Ideally, you set up a limited user that has access to update the haproxy and nginx configs. It isn’t recommended to use the root user, because any vulnerability in HAProxy-WI would expose a key that could be used to connect back to your gateway machine as root. Not good.
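One way to sketch such a limited account: give the user write access to the config files, and sudo rights for only the reload commands. The username and the exact command list below are my assumptions, not something HAProxy-WI mandates:

```
# /etc/sudoers.d/haproxy-wi  (hypothetical user "haproxy-wi")
# only these two commands may be run as root, nothing else
haproxy-wi ALL=(root) NOPASSWD: /usr/bin/systemctl reload haproxy, /usr/bin/systemctl reload nginx
```

Remember to `chmod 440` any file you drop into /etc/sudoers.d and validate it with `visudo -c`.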

I am not going to cover that in this how-to. But if you’re a total newb and lost, maybe I can cover that in another how-to. Speak up and let us know.

For the purposes of “set it and forget it,” I would recommend manually starting and stopping the Docker container only when you need it. You lose the monitoring capabilities a bit, but this is safer.

Add a server in HAProxy-WI –

If this is the same setup I’ve been describing, just enter localhost and setup your ssh keys. Probably port 22 as well.

If this is a different machine on the internet, enter its IP and SSH credentials.

The idea is that HAProxy-WI will connect via SSH to your machine running haproxy and nginx, and configure it for you, via the gui.

Navigate to Admin Area, Servers and add a server. This is the server on which you have already installed haproxy and nginx, probably through apt, dnf or your distro’s package manager.

If that is NOT the case, then you will need to forward all the ports you set on your router to the IP Address of your HAProxy-WI machine (or docker container) here, and then use your internal IP address.

For example:

Public IP (your router internal/external address) ->

HAProxy IP

Fancy Pants PiHole IP

Your client computer

You would configure your router to forward port 52135 (tcp and udp, ideally, or at least tcp) to and then in the HAProxy WI gui, set the IP to and the port to 22.

Once you do that, you can load the config and set up an HTTP load balancer via the haproxy-wi gui.

Be aware that, by default, there are 3 users created on HAProxy-WI. You’ll want to disable the ones you don’t use and set a secure password on the ones you do:

PFSense/ OPNSense Users take note

pfSense has a reasonable and perfectly functional built-in HAProxy GUI. It works fine. If you are using pfSense for your router, just use that. It doesn’t have as many bells and whistles as HAProxy-WI, but it does the job.

Configure HAProxy

Ideally configure via HAProxy-WI, but you can do it via the CLI no problem.

The idea is to create a front-end that looks at the incoming hostnames and routes them to the appropriate backend.

In our example, we have plex, nextcloud and a wiki running on different nonstandard ports. will proxy to port 58212, and will proxy to port 58213, and so on.

In the haproxy config for each of these backends, you can elect to use encrypted or unencrypted HTTP connections to the internal server; add the `ssl` keyword to the server line to use SSL.
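For example, two hypothetical backends – one plain, one SSL. The backend names, addresses and ports here are illustrative assumptions:

```
# unencrypted hop to the internal server
backend wiki
    mode http
    server wiki1 check

# encrypted hop; "verify none" tolerates a self-signed cert
backend nextcloud
    mode http
    server nc1 check ssl verify none
```

Use the unencrypted form only on a network segment you trust.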

Plex is a little special and requires a few more options to work properly with HAProxy:

#/etc/haproxy/haproxy.cfg snippet
frontend  main
    mode http
    bind :::443 v4v6 ssl crt /etc/letsencrypt/live/
    acl url_static       path_beg       -i /static /images /javascript /stylesheets
    acl url_static       path_end       -i .jpg .gif .png .css .js

    acl host_plex hdr(host) -i

    acl root_dir path_reg ^$|^/$
    acl no_plex_header req.hdr_cnt(X-Plex-Device-Name) -i 0

    acl host_radarr hdr(host) -i
    acl host_sonar hdr(host) -i
    acl host_sabnzb hdr(host) -i
    acl host_synology hdr(host) -i
        acl host_nc hdr(host) -i
    use_backend synology if host_synology
    use_backend sonar if host_sonar
    use_backend sabnzb if host_sabnzb
    use_backend radarr if host_radarr
        use_backend nc if host_nc

    redirect location code 302 if no_plex_header root_dir host_plex
    use_backend plex_any if host_plex

    use_backend static          if url_static
    default_backend             app

So this is an HAProxy frontend directive. We will have only one of those, because we have only one public IP and it is listening on port 443.

[ Note that I am using letsencrypt here (see below for letsencrypt if you are lost and don’t know how to get your free letsencrypt SSL certs) ]

I am telling HAProxy to listen on IPv4 and IPv6 port 443 (SSL) and giving the path to my Let's Encrypt key+certificate. Depending on the hostname, HAProxy will route your HTTP request to a specific backend, in the HAProxy vernacular. Each backend is defined as a host and port that HAProxy will connect to.

So we end up with an haproxy config something like



# The ssl verify none here means to use ssl, but ignore if the cert
# is self-signed. 
backend plex
    balance roundrobin
    server check ssl verify none

# This is not an ssl connection. Bad idea for security!
backend synology
    mode http
    balance roundrobin
    server synology check

# This is an nginx instance running on port 81:
backend app
    mode http
    balance     roundrobin
    server  app1 check

# and so on, with many more backends defined. 
# One for each of the front-ends defined above. 
# you will NOT be able to copy-paste this config.
# think it through. 

to service all our inbound hostnames that all map to this one IP address.

(haproxy also supports tcp connection proxying! but things like our acls for filtering by host won’t work in that case…)

On the nginx side it is very simple:

 server {
        listen       80;
        return 302 https://$host$request_uri;
 }

 server {
        listen       81 default_server;
        listen       [::]:81 default_server;
        return 404;
 }

HAProxy gets the inbound connection on port 80 and, by default, forwards it to this local nginx server. It just serves static content. So anyone going to this server who doesn’t know the magic words or URLs will not be able to use the proxy to get their traffic forwarded. This layer is a bit security-by-obscurity, but it is mostly effective at preventing bots from scanning for things like Plex or Nextcloud installations, or from those services being indexed.

Plex Is special!?

Note the Plex config also. Plex is a bit weird to proxy/forward, so I had to add some extra rules. Those rules should be copy-pastable if you desire a similar setup with HAProxy and Plex Media Server.

The haproxy documentation is quite good as well, and has lots of examples for various use-cases.

Your Local Firewall

On your local firewall, at your home connection, you can restrict inbound connections so that only your Linode (haproxy machine) is permitted to access these non-standard ports. It’ll keep snoops away, and if your ISP notices traffic to those ports they won’t be able to connect themselves to see what it is.

For troubleshooting, you can also temporarily permit inbound connections from anywhere on mappings like port 58212. For example, if that mapped to your Pi-hole, then in the browser you should be able to load:

http(s):// and see the page content, even before we add HAProxy to the mix.

Let’s Encrypt!?

It IS possible to configure the HAProxy host to support Let's Encrypt. This is largely outside the purview of HAProxy-WI, though HAProxy-WI can manage certs in a more traditional way for you.

I can’t give you a clear-cut recipe or how-to, yet, but I’m working on it.

Here’s a rundown so you can DIY it if you’re handy at the CLI:

  1. Set up your DNS forwarding/wildcard (e.g.

  2. Set up nginx to listen on port 80, and have it redirect to https (http is port 80, https is port 443):

    server {
        listen 80;
        return 302 https://$host$request_uri;
    }

On your haproxy machine, use certbot in standalone mode: stop nginx and haproxy, then run certbot to get your cert for all the domains you’re actually using (a wildcard cert is also possible with DNS challenges). Then cat the certificate chain and private key into a new file. You can specify that file in the haproxy SSL configuration and have haproxy listen on port 443.
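The “cat the chain and key together” step can be sketched as a tiny helper; the function name and path layout below are my own placeholders, not anything certbot provides:

```shell
# combine certbot's fullchain.pem and privkey.pem into the single
# pem file that haproxy's "crt" bind option expects
make_haproxy_pem() {
    live_dir=$1   # e.g. the /etc/letsencrypt/live/<your-domain> directory
    out=$2        # the path referenced by "bind ... ssl crt <out>"
    cat "$live_dir/fullchain.pem" "$live_dir/privkey.pem" > "$out"
}
```

Keep the resulting file root-readable only, since it contains the private key.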

Here’s the renew script I use, which stops haproxy and nginx:


# - Congratulations! Your certificate and chain have been saved at:
#   /etc/letsencrypt/live/
#   Your key file has been saved at:
#   /etc/letsencrypt/live/

service nginx stop 

service haproxy stop

certbot --expand --authenticator standalone certonly \
-d \
-d \
-d \
-d \
-d \
-d

# didn't work?
# --pre-hook "service nginx stop && service haproxy stop" --post-hook "service nginx start && service haproxy start"

sleep 2

# HAProxy needs the cert *and* the key in one file. This cat command will fix you up. Be sure the paths match what you've set in your HAProxy config.
cat /etc/letsencrypt/live/ /etc/letsencrypt/live/ > /etc/letsencrypt/live/

sleep 2

service nginx start
service haproxy start

Just add this to /root/ and run it weekly. Be sure to adjust the paths and filenames to match what certbot is actually updating. Don’t be tempted to use the nginx plugin for certbot, as it’ll wreck things in this setup.
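A weekly run can be wired up with cron; the script path below is a hypothetical placeholder for wherever you actually saved your renew script:

```
# /etc/crontab entry: run the renew script Sundays at 03:00
0 3 * * 0 root /root/
```

Running it well before the 90-day Let's Encrypt expiry gives you several retries if a renewal fails.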

With haproxy listening on port 443, and the app front end configured properly, you can now route things based on those ACL directives in the HAProxy config.

At this point, the HAProxy-WI gui should reflect the work you’ve done here, if you want to go that route. It has a gui text-file editor for setup you’ve done beyond the initial gui config.

It is possible to accomplish this same thing entirely without nginx: you can just have HAProxy do the redirects. Personally, I like to also set up an nginx site on this same machine so that I can offer a flat html website.
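If you do skip nginx, the HTTP-to-HTTPS redirect can be a minimal haproxy frontend; the frontend name here is an arbitrary assumption:

```
# answer every plain-http request with a redirect to https
frontend http_in
    mode http
    bind :80
    redirect scheme https code 302
```

With this in place nothing is ever served over plain HTTP; clients are simply bounced to port 443.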

Anyone going to this IP address with the base domain (or an unused subdomain) will see the placeholder page. Only when you go to the right subdomain will you get the Plex media server. Note that the Let's Encrypt SSL certificate will “give away” all the subdomains you are using, since every cert requested via LE is published in public Certificate Transparency logs, and since all the domains are listed on the cert itself.

You can layer on HTTP authorization and even client-side certificate authentication for real security, but that’s perhaps best left to a future how-to.


Don’t forget, if you’re setting this up on CentOS or Fedora, that SELinux makes things pretty locked down by default:

You may need to allow haproxy/nginx to make outbound connections. This is disabled by default.
You may need to allow http/nginx to listen on non-standard ports. Also disabled by default.
You may need to run firewall-cmd to permit inbound traffic/open ports.

# by default haproxy can't make outbound proxy connections, for security.
setsebool -P haproxy_connect_any 1

# by default port 81 isn't allowed for nginx to listen on
semanage port -a -t http_port_t -p tcp 81

Wrapping Up

See, the thing is, ALL the inbound traffic goes through HAProxy. It decides which machine to send the traffic to based on the hostname you’re trying to connect to, which is a relatively new trick in networking. This is, of course, unnecessary if everything can have its own IP address, but most people don’t have that type of setup, except maybe with IPv6.


And here I am just barely getting a VPN on my router running so I can connect back to home. I’ve got a lot of work to do.




8000% increase in bot activity after creating the dns entry on lol


What about dynamic IP addresses… Aren’t they still really common?

Honestly I think TOR authenticated hidden service(s) are the way to go, though it doesn’t look like HAProxy can do Socks OR IPv6 OR Domain for server, so its use is needlessly handicapped.

Instead of listening on random localhost ports (81), use Domain. For software that doesn’t support Domain, support should be added, but for most cases the support in sshd or socat and even netcat is enough.

This works great with dynamic IPs, especially with dynamic dns. Just use dns entries in haproxy and, as soon as your ddns is updated, haproxy works too. And you can have a fixed domain name even with a dynamic IP when combined with a ddns service. It’s possible to script this with a bash script and not even need ddns.
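For the curious: haproxy (1.8+) can re-resolve those dns entries at runtime via a “resolvers” section. The section name, nameserver address, hostname and hold times below are illustrative assumptions:

```
resolvers homedns
    nameserver dns1
    resolve_retries 3
    timeout resolve 1s
    hold valid 10s

backend home
    # re-resolve the ddns hostname using the resolvers section above
    server home1 resolvers homedns check
```

Without a resolvers section, haproxy only resolves hostnames once, at startup.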

Ipv6 works with haproxy.


I googled “list all dns entries”, but I only found sites where you have to input a domain name to get the IP address or input the IP to get the domain.

So, if they have to already know the domain name or IP address, then how do they find it?

ddns has its own problems. I only looked at haproxy a bit, but I didn’t see any configuration for when to perform the lookups. For example, should new connections be held up until after DNS lookups, or should a DNS lookup be performed at some interval, or only after a failure (timeout!!)? There are also issues related to the ddns client’s ability to change the record in a timely manner, on top of DNS caching and the ignoring of TTLs. The ddns service is also a point of failure; the service could be down when an address change occurs.

In contrast, TOR offers redundancy in its P2P network. The Rendezvous Point is expected to change, unlike DNS entries, which are expected to be mostly static. I couldn’t find any documentation on how TOR detects stale TCP connections. I can only imagine they use an echo request on idle connections… In any event, TOR actively maintains a connection to the network, instead of passively maintaining a ddns record.

I like this concept for both personal VPS and devices and home network opened to the internet.

Fedora version coming soon in this post.


To avoid downtime when certbot is renewing the certificate, you can use webroot via Nginx or standalone via HAProxy.
HAProxy config

frontend http
  bind *:80
  acl a_letsencrypt path_beg -i /.well-known/acme-challenge
  use_backend b_letsencrypt if a_letsencrypt
  default_backend app

backend b_letsencrypt
  mode http
  server certbot check

And call certbot with --standalone and --http-01-port:
certbot certonly -d domain1.fqdn -d domain2.fqdn --agree-tos -n --standalone --http-01-port 8080 --deploy-hook /path/to/deploy/

Nginx config

server {
    listen 81 default_server;
    listen [::]:81 default_server;
    server_name    test;

    location ^~ /.well-known/acme-challenge/ {
        default_type "text/plain";
        root /path/to/webroot;
    }

    location / {
        return 302 https://$host$request_uri;
    }
}
And call certbot with --webroot and --webroot-path:
certbot certonly -d domain1.fqdn -d domain2.fqdn --agree-tos -n --webroot --webroot-path /path/to/webroot --deploy-hook /path/to/deploy/

With the two methods you can use the following script to deploy the certificate and reload HAProxy:

 cat $RENEWED_LINEAGE/fullchain.pem $RENEWED_LINEAGE/privkey.pem > $RENEWED_LINEAGE/fullcert.pem
 haproxy -c -f /etc/haproxy/haproxy.cfg
 if [ $? -gt 0 ]; then
     exit 1
 fi
 systemctl reload haproxy
For IPv4, all the addresses are already known and it doesn’t take long to walk them. This is another matter with v6, but I wouldn’t rely on an address being unknown. DNS is unencrypted, so it’s possible names are sniffed. Typically DNS zones are secured against someone wanting to list out all the names, but secondary DNS servers can be configured to XFER zones from the primary.

Thank you for the post! Been considering something like this, but did not know about HAProxy!

I do have a few questions/confirmations about this approach in general:

  1. This does add an additional network bottleneck, so you’re limited by both your home’s and the VPS’s bandwidth, correct?
  2. If I’m on my home network and want to connect to my services locally with the same URL, but not have to leave my network, how would one go about that (e.g. I could go to on my home network and keep all the traffic local for faster download speeds)? I imagine there would be a local DNS server to redirect just those domains?
    • In that scenario, how would acquiring and using SSL certs work?
  3. From a security perspective, what about this approach would be different from having the VPS use a VPN to connect to a home network with something to divvy up traffic there?

Thank you for the answers in advance!

Hi, I was curious what you were doing for SSL, and I noticed that you use certbot.

I also use a wildcard cert for my proxy, but I’ve found to be much nicer to use, supporting a lot more DNS services. It’s a POSIX-compliant shell script that does it all :slight_smile:

I usually do my VPS stuff as ansible roles/playbooks, because I’ll fail to properly document what I did otherwise :stuck_out_tongue:, and I use this role for

Hey Wendell, when you have to configure and coordinate multiple different online services, including DNS and SSL, and the VPS, then simply bundling together haproxy with nginx is not enough to simplify all of those other steps. Why? Because DevOps, that’s why. So what I believe would be a more appropriate layer of abstraction is a modern tool such as Terraform or Pulumi, where you can actually codify and template those additional external steps between those foundational network services.

I don’t necessarily believe it makes the job easier or less work, because you still have to come up with a representation of the same work in terraform. But it does make it easier to redo over again. Or, for example, for clients. Or if you wanted to sandbox test within your local lan (or use ‘localstack’), etc. Or swap linode out for a different VPS, etc. Because the tasks can be templated and swapped over as modules of code.

So your haproxy gateway setup here would be such a cool and practical ‘real world project’ for somebody who wanted to learn a devops tool such as terraform. Because it deals with only 1 gateway VPS machine and a small number of docker containers, yet it is a cheap, legitimate use for a VPS server and includes both SSL and DNS.

Great how-to guides. I’m a huge fan of Plex. I’ve tried other solutions such as Kodi and Jellyfin. While I like them, they all seem to be lacking some needed features such as hardware acceleration or simple library importation.

While I appreciate this guide, is there anything comparable for UDP? Right now I have what I believe to be an amateur setup where I have a VPS connected to a local machine via VPN, routing specific ports accordingly with iptables, but I would love a better way of accomplishing this.



After some tinkering I was able to get this to work.

There are several bits regarding SELinux which need refinement.

My setup is Cloudflare (name server) -> Linode -> Router

I currently run a bunch of stuff behind a nginx reverse proxy on my home network so this attempt was to remove the dns pointing to my home IP.

So this is how I spent my Saturday morning.


Unfortunately, @wendell forgot to mention that in order for HAProxy to parse the full cert correctly, one must combine the fullchain.pem and privkey.pem into a new file and reference that in the HAProxy config.

After you obtain your keys from letsencrypt with the above methods, cd into /etc/letsencrypt/live/ and concatenate these two files, naming the result something nice.

cat fullchain.pem privkey.pem >
# this makes /etc/letsencrypt/live/


Note: I am using port 8080 instead of port 81 like in this tutorial. Just copy and paste this and it should work:

semanage import <<EOF
boolean -D
login -D
interface -D
user -D
port -D
node -D
fcontext -D
module -D
ibendport -D
ibpkey -D
permissive -D
boolean -m -1 cluster_use_execmem
boolean -m -1 haproxy_connect_any
port -a -t http_port_t -r 's0' -p tcp 8080
EOF

For ansible users:

- name: Allow cluster to use execmem
  seboolean:
    name: cluster_use_execmem
    state: yes
    persistent: yes

- name: Allow haproxy to connect any
  seboolean:
    name: haproxy_connect_any
    state: yes
    persistent: yes

- name: Set up port customizations
  shell: |
    semanage port -D
    semanage port -a -t http_port_t -r 's0' -p tcp 8080

Config files


# Example configuration for a possible web application.  See the
# full configuration options online.

# Global settings
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #    local2.*                       /var/log/haproxy.log
    log local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/
    maxconn     4000
    user        haproxy
    group       haproxy

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

    # utilize system-wide crypto-policies
    ssl-default-bind-ciphers PROFILE=SYSTEM
    ssl-default-server-ciphers PROFILE=SYSTEM

# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

# main frontend which proxys to the backends
frontend main
    mode http
    bind :::443 v4v6 ssl crt /etc/letsencrypt/live/
    acl url_static       path_beg       -i /static /images /javascript /stylesheets
    acl url_static       path_end       -i .jpg .gif .png .css .js

    # custom service backends
    acl root_dir path_reg ^$|^/$
    acl host_tv hdr(host)               -i
    acl host_git hdr(host)              -i
    acl host_resume hdr(host)           -i
    acl host_cockpit hdr(host)          -i
    acl host_cloud hdr(host)            -i
    acl host_chat hdr(host)             -i
    acl host_dev-tsp hdr(host)          -i
    acl host_prod-tsp hdr(host)         -i
    acl host_metrics-mc2 hdr(host)      -i

    use_backend static          if url_static
    default_backend             app

# static backend for serving up images, stylesheets and such
backend static
    balance     roundrobin
    server      static check

# round robin balancing between the various backends

# nginx backend
backend app
    mode http
    balance     roundrobin
    server  app1 check

# jellyfin
backend tv
    balance roundrobin
    server tv check ssl verify none

# gitlab
backend git
    balance roundrobin
    server git check ssl verify none

# mattermost
backend chat
    balance roundrobin
    server chat check ssl verify none

# nextcloud
backend cloud
    balance roundrobin
    server cloud check ssl verify none

# static resume site
backend resume
    balance roundrobin
    server resume check ssl verify none

# cockpit management
backend cockpit
    balance roundrobin
    server cockpit check ssl verify none

# minecraft server metrics
backend metrics-mc2
    balance roundrobin
    server metrics-mc2 check ssl verify none

# dev
backend dev-tsp
    balance roundrobin
    server dev-tsp check ssl verify none

# prod
backend prod-tsp
    balance roundrobin
    server prod-tsp check ssl verify none


# For more information on configuration, see:
#   * Official English Documentation:
#   * Official Russian Documentation:

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    #keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    # Timeouts
    client_body_timeout 12;
    client_header_timeout 12;
    keepalive_timeout 15;
    send_timeout 10;

    # Buffer sizes
    client_body_buffer_size 10K;
    client_header_buffer_size 1k;
    client_max_body_size 8m;
    large_client_header_buffers 2 1k;

    # GZip Compression
    gzip             on;
    gzip_vary        on;
    gzip_comp_level  5;
    gzip_min_length  1024;
    gzip_proxied     expired no-cache no-store private auth;
    gzip_types       text/plain application/x-javascript application/javascript text/javascript text/xml text/css application/xml;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen       80 default_server;
        listen       [::]:80 default_server;
        server_name  _;
        root         /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
        }

        error_page 404 /404.html;
            location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
            location = /50x.html {
        }
    }

# Settings for a TLS enabled server.
#    server {
#        listen       443 ssl http2 default_server;
#        listen       [::]:443 ssl http2 default_server;
#        server_name  _;
#        root         /usr/share/nginx/html;
#        ssl_certificate "/etc/pki/nginx/server.crt";
#        ssl_certificate_key "/etc/pki/nginx/private/server.key";
#        ssl_session_cache shared:SSL:1m;
#        ssl_session_timeout  10m;
#        ssl_ciphers PROFILE=SYSTEM;
#        ssl_prefer_server_ciphers on;
#        # Load configuration files for the default server block.
#        include /etc/nginx/default.d/*.conf;
#        location / {
#        }
#        error_page 404 /404.html;
#            location = /40x.html {
#        }
#        error_page 500 502 503 504 /50x.html;
#            location = /50x.html {
#        }
#    }
}



server {
    if ($host = {
        return 302 https://$host$request_uri;
    }

    listen 8080;
    listen [::]:8080;

    location ~ /.well-known {
        allow all;
    }

    return 404;
}

That’s actually in the script example I shared, but I didn’t include a comment saying what it’s doing. I’ll comment it in a bit, good catch.

I might rework the tutorial a bit to use an haproxy ACL to redirect the well-known URLs transparently. The reason I didn’t do that is that you don’t initially have the SSL cert, so haproxy wouldn’t start. You have to at least initially fetch the certs standalone before starting haproxy, or juggle an initial config and then a running config.

Great work. This is what the level1 community is all about. This right here. Yass.


Ahh thanks! Glad I could help. :smiley:

I’ve been using HAProxy on my pfSense for SSL offloading with Let’s Encrypt, but found a very weird congestion problem.

Screenshot 2020-04-13 17.47.10

I got this kind of TCP congestion when I use HAProxy. I thought it would be my ISP, but between me and my ISP I’ve got full bandwidth, and between VLANs too…

So I started testing with other users of HAProxy, and they were getting the same congestion issues (pfSense and standalone), while with NGINX as the proxy they were not getting this issue.

Connections with 600/600 Mbits.

iPerf runs without HAProxy go 600 Mbit full duplex (1.3 Gbit total).