Let’s build a DNS server - Lancache + Pihole on a single Ubuntu server

Background:

  • I’ve been trying to find a guide on getting a DNS server set up with both Lancache and Pihole running on the same Linux host, but kept finding the inefficient solution of ‘make two Linux hosts and run one on each’, which I knew shouldn’t be needed. I found what I needed in Wendell’s Forbidden Router post, but figured it may be helpful to post how I set up my server:

The Setup:

  • I set up an Ubuntu 20.04 VM with 4 cores, 8 GB RAM, a 32 GB OS drive, and a 500 GB secondary drive for the Lancache. Feel free to scale this as needed for the size of the cache you wish to implement.
  • Performed a basic Ubuntu install. The only things of note are setting the static IP to the IP I want for the Lancache DNS and installing the OpenSSH server. Once finished, SSH into the VM.

Prerequisites and Dependencies:

  • Double-check there are no pending OS upgrades, install the docker-ce dependency packages, add the Docker repository, and install docker-ce.

sudo su
apt update && apt upgrade -y &&
apt install apt-transport-https ca-certificates curl software-properties-common -y &&
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - &&
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable" -y &&
apt install docker-ce -y

  • Update the netplan interface config to add our second IP for Pihole. Find the line that reads ‘- YOUR_IP_ADDRESS’, add a new line directly below it, indent it to the same depth (YAML is whitespace-sensitive), and add the second IP the same way as the first; a sketch of the finished addresses block follows the commands below. Test ping to verify the second IP is reachable.

nano /etc/netplan/00-installer-config.yaml
netplan apply
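
  • For reference, once the edit is done the addresses block of the netplan file should look roughly like the commented sketch below. This is only an illustration: the interface name ens160 and the /24 prefix length are assumptions, so match whatever is already in your file.

cat /etc/netplan/00-installer-config.yaml
# network:
#   ethernets:
#     ens160:                 # your interface name will differ
#       addresses:
#         - FIRST_IP/24       # the static IP set during the install (Lancache)
#         - PIHOLE_IP/24      # the second IP added here (Pihole)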

  • Next we need to set up our secondary drive. If you’re doing a RAID array, DigitalOcean has a decent writeup on getting an mdadm array set up. In addition to setting up the drive, I also created the docker folders and permission sets for the containers to use. I ended up keeping the log files on the OS disk, but feel free to put them on the secondary disk instead. Just make sure your drive mounts and persists through reboots, and cap your maximum Lancache size below the secondary disk’s actual size.

lsblk # Identify your secondary drive. In my case it was /dev/sdb

parted /dev/sdb mklabel gpt &&
parted /dev/sdb mkpart primary ext4 0% 100% &&
mkfs -t ext4 /dev/sdb1 &&
parted /dev/sdb name 1 docker &&
mkdir -p /docker/lancache/logs &&
chmod -R 775 /docker/lancache/logs &&
mkdir -p /lancache &&
chmod -R 775 /lancache &&
mkdir -p /docker/pihole &&
chmod -R 775 /docker/pihole &&
echo "/dev/sdb1 /lancache ext4 defaults 0 0" >> /etc/fstab &&
mount -a &&
df -h

  • Next, we need to make sure that port 53 doesn’t get bound by Ubuntu, which it does by default. To free it up, we edit the resolved.conf that systemd-resolved uses and disable the DNSStubListener. Note: while this step SHOULDN’T be necessary since we’re binding our containers to specific IPs, I recommend doing it anyway to prevent any issues (a quick check that the port is actually free is shown after the commands below).

echo "DNSStubListener=no" >> /etc/systemd/resolved.conf

rm /etc/resolv.conf &&
ln -s /var/run/systemd/resolve/resolv.conf /etc/resolv.conf &&
service systemd-resolved restart &&
nslookup forum.level1techs.com #Verify DNS queries still resolve
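
  • If you want to double-check that port 53 is actually free before the containers try to claim it, list the current listeners. No output from the grep means nothing is bound to port 53 anymore.

ss -tulpn | grep ':53 '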

Docker install and setup:

  • Run the following to add your user to the docker group and set up the Docker service to start now and auto-start on reboot (a quick sanity check follows the commands).

usermod -aG docker YOUR_USERNAME &&
systemctl start docker && systemctl enable docker
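
  • Note that the group change won’t apply to your current shell until you log out and back in (or run newgrp docker). A quick sanity check afterward, using the same YOUR_USERNAME placeholder as above:

id YOUR_USERNAME | grep docker                       # should list the docker group
systemctl is-enabled docker && systemctl is-active docker
docker run --rm hello-world                          # optional end-to-end smoke test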

  • Now for the actual docker run commands. If you make a mistake, you can run docker stop CONTAINER_NAME && docker rm CONTAINER_NAME to stop and remove the container. For my run commands below, CACHE_DISK_SIZE caps the cache at 100000m (roughly 100 GB; scale this to just under the size of your cache disk), and CACHE_INDEX_SIZE is set to 250m, in line with the monolithic documentation’s recommendation of 250 MB of index per 1 TB of cache. FIRST_IP is the IP that was set during the initial Ubuntu install, PIHOLE_IP is the IP that was added after the install, and I specified a larger-than-default CACHE_SLICE_SIZE of 8m, as this gave better performance with Battle.net and Origin downloads with no noticeable difference on Steam, but your mileage may vary.

docker run -d \
  --name lancache-dns \
  --restart unless-stopped \
  -p FIRST_IP:53:53/udp \
  -e USE_GENERIC_CACHE=true -e LANCACHE_IP=FIRST_IP \
  -e UPSTREAM_DNS=PIHOLE_IP -e TZ=America/New_York \
  lancachenet/lancache-dns:latest &&

docker run -d \
  --name lancache \
  --restart unless-stopped \
  -v /lancache:/data/cache -v /docker/lancache/logs:/data/logs \
  -p FIRST_IP:80:80 -p FIRST_IP:443:443 \
  -e CACHE_INDEX_SIZE=250m -e CACHE_MAX_AGE=3650d -e CACHE_DISK_SIZE=100000m -e CACHE_SLICE_SIZE=8m \
  -e TZ=America/New_York \
  lancachenet/monolithic:latest

  • Next we’ll set up our upstream DNS server, Pihole.

docker run -d \
  --name pihole \
  -p PIHOLE_IP:53:53/tcp -p PIHOLE_IP:53:53/udp \
  -p PIHOLE_IP:80:80 \
  -e TZ=America/New_York \
  -v /docker/pihole:/etc/pihole \
  -v /docker/pihole/etc-dnsmasq.d:/etc/dnsmasq.d \
  --dns=1.0.0.1 \
  --restart=unless-stopped \
  --hostname Level1DNS \
  -e WEBPASSWORD=YOUR_PASSWORD_HERE \
  -e VIRTUAL_HOST="Level1DNS" \
  -e PROXY_LOCATION="Level1DNS" \
  -e ServerIP="127.0.0.1" \
  pihole/pihole:latest

Test and Verify:

  • With all the containers up and running, the only thing left is to verify it’s all working as expected and persists through reboots.
  • First, run a quick docker ps to verify all 3 containers are running. Then reboot and verify they all come back up.
  • Next, run tail -f /docker/lancache/logs/access.log to check the HIT/MISS logs for our cache, and open http://PIHOLE_IP/admin/index.php to check our Pihole dashboard.
  • Set your client of choice to point at FIRST_IP and try downloading a game. The tail -f /docker/lancache/logs/access.log should show activity, and refreshing http://PIHOLE_IP/admin/index.php should show a new client along with an increase in total queries. A few command-line checks are sketched below.
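
  • For a quick command-line check of the DNS split, something like the following should work. lancache.steamcontent.com is just one example of a domain on the default uklans cache lists; any cached domain will behave the same way.

nslookup lancache.steamcontent.com FIRST_IP   # cached domain: should answer with FIRST_IP
nslookup forum.level1techs.com FIRST_IP       # non-cached domain: forwarded to Pihole, resolves normally
nslookup forum.level1techs.com PIHOLE_IP      # Pihole should answer directly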

In hindsight I should probably just set up a run script to build this out automagically, but I’m not that good of a scripter at the moment.


Is this in ESXi? How is the NIC situation handled?

I tried this on Windows Server and DNS LOSES ITS MIND. It somehow causes a loop of some sort until the router CPU maxes out and hard locks.

I was thinking of ditching Windows Server and moving to Ubuntu Server or something, but that would be a pain and I would lose some things.

Also, is it possible to easily add Linux or Ubuntu repos to Lancache?


Pihole is “just” a dhcp and dns server all in one, with a UI.

Lancache is “just” an nginx server that has some caching, and a dns server.

Some folks use squid instead of nginx, same principle applies.


Nginx needs to be able to resolve DNS names on the actual Internet in order to fetch the content it caches and serves to its clients.

The clients would be resolving DNS in some way that points them at nginx for these domains.

You might end up with things not working if nginx points to itself.

docker run supports a --dns flag that you can set to e.g. 8.8.8.8 or 1.1.1.1 on the lancache nginx container, while everything else can just use Pihole, which points clients at nginx whenever they run into a cacheable domain.
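
As a rough sketch of what that looks like, reusing the placeholders and image from the guide above (the remaining -v/-e options are omitted for brevity):

# nginx resolves upstream CDN names via 1.1.1.1 instead of via Pihole, so it never points at itself
docker run -d --name lancache --dns 1.1.1.1 \
  -p FIRST_IP:80:80 -p FIRST_IP:443:443 \
  lancachenet/monolithic:latest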

Yes, but it’s not that simple with a hypervisor, because the NIC is being shared with the host and there are different ways you can set the NIC up. Some even run multiple NICs for that reason. It’s not a simple flat network in that case. ESXi, from what I can see, has even more options than Hyper-V offers.

How are you running docker: in an Ubuntu VM, or on a Windows server directly-ish with WSL2? And how are you running pihole?

It’s been a while since I touched Windows, but as far as I remember if you just run docker directly, the setup is similar to how networking behaves on Linux by default. That is, docker will create its default bridge network (a vSwitch on Windows), it’ll grab the first available subnet from the 172.16.0.0/12 private range (typically 172.17.0.0/16) for stuff on that switch, and will enable NAT from the host.
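
If you’re curious what Docker actually picked, inspecting the default bridge network shows the subnet it is using (the exact value will vary by setup):

docker network inspect bridge -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}'   # e.g. 172.17.0.0/16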

If this is confusing, maybe you should look at the lancache nginx configs and just run nginx on Windows directly. Nginx doesn’t have to use the system DNS resolver; you can set the upstream DNS in its config to whatever you like, and IIRC lancache does exactly that.
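
One way to confirm where that upstream DNS ends up is to grep for nginx’s resolver directive in the generated config inside a running lancache container (the /etc/nginx path matches the stock image layout linked below, but may differ between versions):

docker exec lancache grep -R 'resolver' /etc/nginx/   # the resolver directive is what nginx uses for upstream lookups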


Also, is it possible to easily add Linux or Ubuntu repos to Lancache?

Nginx caching works; I use it with Debian, Alpine, and Arch, but I control the repo mirror configs on the hosts, and I don’t use the lancache configs, just my own hand-rolled, simpler ones.

I don’t know how well it would work with the stock lancache nginx configs, because I think lancache will cache each mirror’s contents separately; ideally these would be deduped. Also, it doesn’t work with https, but most distros don’t rely on https by default and you can disable it with no harm (packages are usually signed with gpg anyway).
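
As a quick illustration on a Debian/Ubuntu client, you can list the configured repo lines; the stock mirrors are plain http, which is exactly what lets an intercepting nginx cache them:

grep -rh '^deb ' /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null   # http:// lines are cacheable, https:// lines are not interceptable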


Look at these scripts as a starting point: GitHub - uklans/cache-domains: Domain Names required for LAN Content Cache DNS spoofing

They’re not pretty, but the overall approach is sound: basically “here’s a domain list, build me a config for my DNS system of choice to hijack them to my IP of choice”.

And read through lancache nginx configs around proxy_cache here: monolithic/overlay/etc/nginx/sites-available/cache.conf.d at master · lancachenet/monolithic · GitHub


WSL2 has terrible I/O; you would not want to run Lancache from it.

I run it in Docker on an Ubuntu VM in Hyper-V on a Windows Server 2019 machine with a 10G Intel X550 NIC. I don’t remember which NIC mode Hyper-V is set up with.

Also I’m responding on mobile. May need to edit reply a few times

There is a reason I couldn’t run Docker on Windows directly: either poor performance, or I wasn’t able to store the Docker VM on my secondary drive, so I was forced to use an Ubuntu VM.

I will also be ditching my DD-WRT router soon and am going full UniFi. I may also replace the Windows Server install with Ubuntu Server and virtualize Windows Server. Damn Blue Iris is still Windows-only, as is the RAID manager software for my RAID card; it’s easier than rebooting and going into the BIOS to check up on the RAID health. And it is faster than ZFS and Ceph, at least for the cost, size, and storage amount.

Though I don’t think it’s enough for Lancache, I’m looking at a 1.5TB Optane.

I thought it was lame that Lancache didn’t have a Windows version! However, I have one gripe with Server 2019: it does not like passwordless SMB shares, while Linux/Unraid has no qualms.

Never got far enough for Pihole since I never got Lancache to work right.