HAProxy-WI -- Run lots of public services on your home server

In short, they don’t have to know either.

There are many ways to do it. Possibly the easiest, and probably the most widely used, is to subscribe to a service like the “Newly Registered Whois Database” list on whoisdatacenter — which is why you get a million calls from people wanting to build you a website after registering a domain without whois protection. It’s not free: one tier charges $20/month, another was $95. There are ways to get the same information for free, since it’s public information, but people are willing to pay for someone else to do the legwork.

From there, getting the DNS records of the domains is easy, really, and can be scripted: download the list of new domains > query the DNS records for each site with something like nslookup (now you have a list of all the DNS entries for all the newly registered domains) > run nmap on the resulting IPs, then attack, scan for vulnerabilities, exploits, whatever you want. After all, a site that’s still being set up is most likely not yet hardened and may have an exploitable vulnerability. That’s why I don’t put the IP of my VPS or whatever in my DNS until it’s sufficiently hardened.
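A rough sketch of that pipeline in bash (new-domains.txt is a hypothetical placeholder for whatever list source you use):

#!/usr/bin/env bash
# Resolve each newly registered domain and note its address.
# new-domains.txt is a hypothetical input file, one domain per line.
while read -r domain; do
    # nslookup works too; the final line is the A record even behind a CNAME chain
    ip=$(dig +short A "$domain" | tail -n 1)
    [ -z "$ip" ] && continue
    echo "$domain -> $ip" >> resolved.txt
    # from here an attacker would scan the host, e.g.:
    # nmap -sV "$ip"
done < new-domains.txt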

Whoisdatacenter just does that, but with the whois records, and checks when they were registered.

I found a script on sans.edu that tracks newly registered domain names, if you’re curious. It’s way too easy. The script is very short and, if you’re familiar with bash scripting, quite easy to follow, so it’s worth a look.

I can totally understand not finding any of that, as the search keywords aren’t what you would think unless you know how they’re finding the DNS entries. You need the domain first; then you get the DNS entries for that domain, which you can do with nslookup. The first part is a little harder, but with that script, or a subscription, that’s easily done too. The keywords I used were “new registered domain names list”.

Hope that was helpful/informative. :slight_smile:

1 Like

Hi,

If anyone wants Terraform examples, I have some for DigitalOcean, Vultr, AWS, and Linode. The first three providers are the most complete, though the AWS one was made by a colleague. There is other stuff in the repo too; feel free to use it.

repo devstuff terraform

1 Like

You can also redirect http to https via HAproxy:

frontend http-in
    bind *:80
    mode http
    http-request redirect scheme https
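A common variant, if the same frontend also terminates HTTPS, guards the redirect so already-encrypted requests aren’t bounced and makes it permanent:

    http-request redirect scheme https code 301 unless { ssl_fc }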

If HAProxy fails to start after a reboot (e.g. with Ubuntu), you can try editing the service config file:

nano /lib/systemd/system/haproxy.service

and change the [Unit] section to:

[Unit]
After=network-online.target

Overrides should go in their own directory.

In this case, /etc/systemd/system/haproxy.service.d/override.conf

One should never directly edit service files.
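For example, systemctl edit haproxy drops an override into that directory for you. A minimal override.conf (note that Wants= is what actually pulls the target in; After= alone only orders against it):

[Unit]
Wants=network-online.target
After=network-online.target

If you create the file by hand instead, run systemctl daemon-reload afterwards.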

Scratching my head here concerning dynamic DNS. If I have my nice Let’s Encrypt cert configured for a beautiful *.mycollhomelab.com and set my backend as app.mycollhomelab.com forwarding to bad.dyndns_url.com, will it not screw up the cert?

Just curious how it will behave…

No, the cert’s fine, because the client never sees the SSL handshake between the proxy and your dynamic DNS host. The client’s TLS session terminates at the proxy; the hop to your backend is a separate connection.
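A minimal sketch of that split in HAProxy config, using the hostnames from the question above (the cert path and the verify policy are placeholders):

frontend https-in
    mode http
    bind *:443 ssl crt /etc/haproxy/certs/mycollhomelab.pem
    use_backend home-app if { req.hdr(host) -i app.mycollhomelab.com }

backend home-app
    mode http
    # separate TLS hop to the dynamic DNS host; the client never sees it.
    # note: HAProxy resolves this name at startup unless you add a resolvers section
    server home bad.dyndns_url.com:443 ssl verify none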

1 Like

Has no one experienced throughput problems with HAProxy?

Git cloning via :22 seems to be broken when hosting GitLab behind HAProxy, though.

So I’ve been trying to use a method where it detects SSH traffic over port 443 and uses a different backend; otherwise, traffic goes to the web backends.

However, I keep getting key exchange failures.

debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Connecting to git.example.com port 443.
debug1: Connection established.
...
debug1: Local version string SSH-2.0-OpenSSH_8.1
kex_exchange_identification: Connection closed by remote host

Maybe it will be easier just to change what SSH port I’m running and forward 22 with HAProxy instead.

It’ll work only with TCP forwarding. HTTP forwarding depends on reading the HTTP headers, but that bombs out when SSH is in play.

Git over SSH is different from Git over HTTP(S) as well.

Yeah I tried to do this:

frontend main
    mode tcp
    bind :::443 v4v6 ssl crt /etc/letsencrypt/live/example.com/fullcert.pem
    tcp-request inspect-delay 5s
    tcp-request content accept if HTTP

    # set https forward
    #acl https ssl_fc
    #http-request set-header X-Forwarded-Protocol https if https
    
    #acl url_static       path_beg       -i /static /images /javascript /stylesheets
    #acl url_static       path_end       -i .jpg .gif .png .css .js

    # Add a X-Forwarded-For containing the client IP address if none were already present
    #acl h_xff_exists req.hdr(X-Forwarded-For) -m found
    #http-request add-header X-Forwarded-For %[src] unless h_xff_exists

...

# gitlab alt-ssh
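    # 5353482d322e30 is hex for the ASCII string "SSH-2.0"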
    acl client_attempts_ssh payload(0,7) -m bin 5353482d322e30
    use_backend gitssh if !HTTP client_attempts_ssh

To my dismay though I can’t seem to figure it out.

They aren’t handling HTTPS and SSH on the same endpoint. You’d have to create an alternate DNS entry and bind it to an alternate IP.

Then use tcp forwarding to forward 443 to 22.

altssh.yourdomain.com port 443 to 22.

In that case, you’re better off using a separate host which forwards port 443 to port 22 of your GitLab instance. You can do this with HAProxy or any other load balancer, or even with iptables.
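A minimal sketch of that dedicated forwarder in HAProxy (the bind and server addresses are placeholders):

frontend altssh
    mode tcp
    # the second public IP / whatever altssh.yourdomain.com resolves to
    bind 203.0.113.2:443
    default_backend gitlab-ssh

backend gitlab-ssh
    mode tcp
    # your GitLab host's sshd
    server gitlab 10.0.0.5:22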

Right? The inspect ACL rules are… problematic… as you are discovering. Way easier to just bind that to a different public IP if at all possible.

Yeah haha, what I tried to do was get it all done under a single Linode instance.

I haven’t given up hope yet though.

Do you think setting SSH to a different high random port, then hijacking port 22 with HAProxy and forwarding it in TCP mode, will work?

Linode will give you another public IPv4 for $1/mo :slight_smile: same instance.

Yes, that’ll work fine.
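A sketch of that arrangement (port 2222 is a placeholder; pick your own):

# /etc/ssh/sshd_config on the box: move the real sshd to the high port
Port 2222

# haproxy.cfg: HAProxy takes over port 22 and forwards the raw TCP stream
frontend ssh-in
    mode tcp
    bind *:22
    default_backend real-sshd

backend real-sshd
    mode tcp
    server local 127.0.0.1:2222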

1 Like

Oh sweet, if they’ll do that then I’ll finally be able to get GitLab Pages to work!

This is how my pfSense firewall currently looks:

web request -> linode -> specific header -> haproxy -> pfsense -> specific port -> service

That it did!

I need to update my original post. Maybe make my own wiki thread.

1 Like

:thinking: Before I even attempt this, couldn’t I tunnel it all under WireGuard?

Yes you could, but if we add a web application firewall later on, this setup will be super convenient.

Yes, a tunnel works. I use an OpenVPN server on my Linode VPS, and my Synology NAS can import OpenVPN config files by default. Now I don’t have to open a port or set up dynamic DNS for my home router.

Yeah, really it would make more sense to set up a VPN tunnel from the Linode instance. But you don’t want that public VPS to see anything on your local LAN. What you want is the tunnel terminating into a VLAN subnet, where it can only see and access the endpoints of those specific services you are choosing to make public. You would then do all this HAProxy forwarding stuff on that VLAN, rather than over the WAN. I think this is more secure, but of course you still have to pay a lot for a proper * wildcard SSL cert. There is also the question of VPN performance when piping multiple services over the same single tunnel, but that aspect depends on a lot of different factors and what you are actually doing with it.

Personally I would use a Docker network for the local VLAN, and some containers. The problem, though, is that the Docker network driver is not designed with VPNs in mind: on container start, Docker connects the networking before the container has had the opportunity to set up its routing table for the VPN. This results in a small time window when packet leakage can occur. I think this could be solved easily in the upstream Docker networking driver, but as far as I am aware nobody else has thought about it or discussed it with the Docker folks yet. (Last time I checked was a few months ago; it is unlikely anything has changed much since then.) A bit of a missed opportunity, I suppose? I could have raised an issue on GitHub about this earlier, but I did not seriously expect them to add the missing functionality to the Docker driver, and I don’t see how anything is going to change until that happens.

BTW, it was also mentioned by Wendell that you might want to take additional steps to hide or obfuscate services from sniffers, for example by throwing up basic HTTP auth (an htpasswd file) in nginx.
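A minimal nginx sketch of that (the upstream address and file paths are placeholders):

# inside a server block: require a login before anything gets proxied
location / {
    auth_basic           "Restricted";
    # create the user file with: htpasswd -c /etc/nginx/.htpasswd someuser
    auth_basic_user_file /etc/nginx/.htpasswd;
    proxy_pass           http://127.0.0.1:8080;
}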

I just remembered and wanted to mention that you can instead put Kong in front of HAProxy (or behind it, or wherever the initial redirect happens).

What is great about Kong is that it can secure any arbitrary HTTPS web service or API, and has many plugins for that. Plus it can also do other useful things such as rate limiting.

Kong has integrations (support modules) in Terraform and Pulumi.

You can see a simple example of what Kong is on their landing page. However, as mentioned, there may be more functionality than that if you go and search for its plugins.

The other claim to fame of Kong is that it is very low latency, or “lowest latency”, whatever that means. So all in all I thought it was worth mentioning, since perhaps there are some gaps / missing bits which HAProxy all by itself cannot do.

1 Like