Lancache DNS failover using Nginx

Because of high energy prices in Europe I don’t keep my storage server on at all times. I wanted to use lancache, but since it has to act as the DNS server for my clients, I would lose DNS whenever the server was off. So I wanted the following setup:

Effectively, DNS will fail over to the upstream pihole if the lancache is not available.

I spent some time looking for a solution. Nginx Plus can do this kind of failover, but it costs thousands per year, so it's not really an option for my homelab. It turns out you can compile nginx with Lua support, which lets you implement this functionality yourself.

The easiest way to do this is to install OpenResty: https://openresty.org/en/installation.html

Once that is done, configure the service to start automatically. We can use the fact that lancache monolithic serves cached files on port 80 to test whether it is running: Nginx makes a socket connection to that port, and if it succeeds we use the IP of our lancache DNS; if not, the failover DNS.

If your lancache DNS and fileserver are on different IP addresses you will need to edit the config to reflect this.

The stream block of your nginx.conf will need to contain the following, substituting your lancache server and failover DNS IP addresses for 10.10.10.2 and 10.10.10.3 respectively. For me, nginx.conf is found at /usr/local/openresty/nginx/conf

stream {
    upstream dns {
        # Placeholder; the real peer is picked in the balancer_by_lua_block below.
        server 10.10.10.2:53;

        balancer_by_lua_block {
            local balancer = require "ngx.balancer"
            -- LuaSocket is used because the cosocket API is not available
            -- in the balancer_by_lua* context.
            local socket = require "socket"

            local host = "10.10.10.2" -- Your lancache server
            local port = 53
            local timeout = 0.1 -- seconds

            -- Probe port 80, where lancache monolithic serves cached files,
            -- to check whether the lancache box is up.
            local test_connection = socket.tcp()
            test_connection:settimeout(timeout)
            local connect_ok, connect_err = test_connection:connect(host, 80)
            test_connection:close()

            if not connect_ok then
                host = "10.10.10.3" -- Your failover server
            end

            local ok, err = balancer.set_current_peer(host, port)
            if not ok then
                ngx.log(ngx.ERR, "failed to set DNS peer: ", err)
            end
        }
    }

    server {
        listen     53 udp;
        proxy_pass dns;
    }
}
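
Note that the server block above only listens on UDP. DNS clients fall back to TCP for responses that don't fit in a UDP packet, so if you want to cover that case as well, a second server block inside the same stream {} should work (an untested sketch, reusing the same upstream):

    # optional: also proxy DNS over TCP
    server {
        listen     53;
        proxy_pass dns;
    }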

Restart the OpenResty service and point your clients' DNS at the OpenResty server.

You may need to increase the number of files or connections that Nginx can open by adjusting the worker_rlimit_nofile or worker_connections parameters.
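
For example, in the main context of nginx.conf (the numbers below are placeholders, tune them for the number of clients you expect):

worker_rlimit_nofile 8192;

events {
    worker_connections 4096;
}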

This method introduces some additional latency, but in my experience it hasn't been noticeable.


Ok, but what is your dns balancer? Will this machine still consume electricity?

I don't use LC, only pihole, but to prevent DNS loss I have two machines (Odroid HC, ZeroPi) running pihole as master and slave. I don't use any special solution either; I just point clients at both DNS servers, and if one is off, NS2 takes over its role.

Instead of creating more complicated solutions, I would do it very crudely.
LC as NS1 and PH as NS2. NS2 pings NS1; if it gets no response for x amount of time, it enables a firewall rule allowing clients to reach its DNS port. While NS1 is running, NS2 blocks DNS access.
In this brute-force way, all DNS traffic goes to NS1 (NS2 is unavailable), and when NS1 is off, NS2 becomes available.
You just configure clients with both DNS servers. The OS will always ask NS1 first, and if it doesn't answer, it will immediately try NS2.

Why block NS2 while NS1 is available? So that all queries and responses go through NS1; NS2 only plays a fallback role in this model.


As long as we’re only talking about dns itself… LC can still use NS2 as upstream.

In this way, we have a fairly simple and quite native solution that gives us continuous access to DNS regardless of the state of the LC, and it does not require any additional software or hardware along the way.
If the goal is to limit active machines as well as their loads in order to save energy, such a solution seems to make sense. :wink:

If you're using the docker containers, you could put the pihole and lancache-dns containers on your always-on machine and only put the lancachenet/monolithic cache on your storage host. The caveat there is that downloads for anything it caches will be broken while it's off (obviously), but it may also affect authentication or other traffic to those services. Whether or not you're sinkholing traffic into a box that isn't on will depend on which service it is and what's in the cache domains list (https://github.com/uklans/cache-domains).

Somewhat tangential, but you can put pihole in front of lancache if you care about which clients are doing what. The lancache just needs to be the only resolver that the pihole uses for upstream, and the lancache would be the one making the final call out to your upstream resolver of choice.

I would suggest using dnsdist [1] instead of nginx for the DNS load balancing, though, as it is intended for exactly this and actually understands the protocol (it shares its code base with PowerDNS), while nginx/openresty does not.

Apart from following the quick start guide, one would only need to set an order or weight [2] for whichever of your two servers you want preferred; when lancache is down, the other will automatically be used instead.
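
For illustration, a minimal dnsdist config sketch reusing the 10.10.10.2 / 10.10.10.3 addresses from the original post (the listen address and ACL below are assumptions to adjust for your network). The firstAvailable policy always picks the lowest-order server that is passing its health checks:

-- dnsdist.conf (minimal sketch)
setLocal("0.0.0.0:53")                         -- where dnsdist itself listens
setACL({"10.10.10.0/24"})                      -- clients allowed to query (assumed range)

newServer({address="10.10.10.2:53", order=1})  -- lancache DNS, preferred
newServer({address="10.10.10.3:53", order=2})  -- pihole, used when lancache is down

setServerPolicy(firstAvailable)                -- lowest order that is up wins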

One thing that takes some getting used to is that its configuration language is Lua, so comments are double-dashed rather than hashed, as can be seen in an example config. [3]

A docker version is available as well. [4]

It can be configured to not consume many resources, so it should be able to co-exist nicely with pi-hole on a Raspberry Pi or other power-efficient hardware.

[1] https://dnsdist.org/
[2] https://dnsdist.org/reference/config.html?#newServer
[3] https://github.com/PowerDNS/pdns/blob/master/pdns/dnsdistconf.lua
[4] https://hub.docker.com/r/powerdns/dnsdist-master