Hello, I'm a long-time Linux user who recently got into self-hosting. I'm making this thread mostly to share the random things I've discovered and to get helpful feedback and tricks from a community I've often lurked in.
This is my current setup. I can provide a more granular description at a later date. Documentation is not my strongest suit, and with these posts I'll try to improve it.
I have a 5th node that I'm still setting up. It will be dubbed nixbox and serve as my test environment. I will probably also use it as a Nix remote builder for lower-powered nodes like nixpi and nixer.
Disk Setup
Most of my nodes here use ZFS on mirrors. Nixer, for instance, uses a mirror on a pair of 8TB drives, while Nixdesk uses a mirror of 2x 1TB NVMe.
Nixpad uses ZFS on a single disk, mostly for migration purposes. I rarely use this laptop; it's mostly for when I'm on the go. An hour before a trip I'll send over the project dataset and be on my way.
Services
I don't have a lot of services running on these. I have:
- Immich
- Arr stack for ISOs :^)
- Navidrome for music
- Unbound for local DNS
- DNSCrypt for bad times: ISP and national interference. Third-world problems
- Chrony with a USB GPS module for time sync
- Syncthing for personal data
- Home Assistant
- Searx-ng
- Prometheus, Node Exporter, and Grafana as the monitoring stack (rough sketch after this list)
- Netdata on new nodes for quick insight early on
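For the monitoring stack, this is roughly the shape of the config on one node. A minimal sketch: the scrape target and the Grafana bind address are assumptions, adjust to taste.

services.prometheus = {
  enable = true;
  exporters.node.enable = true; # node_exporter on :9100 by default
  scrapeConfigs = [{
    job_name = "node";
    static_configs = [{ targets = [ "localhost:9100" ]; }]; # add the other nodes here
  }];
};
services.grafana = {
  enable = true;
  settings.server.http_addr = "127.0.0.1"; # assumption: reached via a reverse proxy
};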
I am currently investigating a few things:
- Zrepl for ZFS snapshots and backups
- Diskless setup for nixpi and nixbox
- K3s. I am comfortable with Docker Swarm, but I really want to understand the hype around k8s
- Better time management. I really want a reliable stratum 0 clock on the network. I have no particular reason for this other than sheer fun. I could also add it to the NTP Pool if it's acceptable.
Zrepl
I have gone through the basic documentation and will proceed with the recommended TLS config. My reason for moving from my trusty syncoid setup is better speeds: SSH is said to be slow, and over my network I've noticed transfers reach less than half of the possible disk speed. I've added some Mellanox cards with 10G modules to see how far I can push it.
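For anyone curious, this is roughly the push job I'm starting from, via the NixOS zrepl module. A minimal sketch: the hostnames, cert paths, and retention grid are assumptions, and the receiving box needs a matching sink job.

services.zrepl = {
  enable = true;
  settings.jobs = [{
    name = "push-home";
    type = "push";
    connect = {
      type = "tls";
      address = "nixer:8888"; # assumption: nixer runs the sink job
      ca = "/etc/zrepl/ca.crt";
      cert = "/etc/zrepl/nixdesk.crt";
      key = "/etc/zrepl/nixdesk.key";
      server_cn = "nixer";
    };
    filesystems."zpool/home<" = true; # this dataset and everything below it
    snapshotting = {
      type = "periodic";
      prefix = "zrepl_";
      interval = "15m";
    };
    pruning = {
      keep_sender = [
        { type = "not_replicated"; }
        { type = "last_n"; count = 10; }
      ];
      keep_receiver = [
        { type = "grid"; grid = "1x1h(keep=all) | 24x1h | 30x1d"; regex = "^zrepl_.*"; }
      ];
    };
  }];
};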
Diskless Setup
I don't have a lot of hardware options around these parts. 10-year-old hardware is still easily in the $300 range, and drives are equally expensive. I would like to invest in a solid NAS solution and push the other nodes as far as I can. Going diskless also makes bargaining on sales a little easier if I can get sellers to leave the drives out.
I have found a seller with some Optane drives that I plan to use as a mirrored SLOG, and some cheap SSDs for L2ARC.
Still figuring out the software side of things. The current candidate setup is dnsmasq for PXE booting plus ZFS zvols as the backing disks (sketch below). I have looked into iSCSI for exporting the zvols, but I just cannot figure it out on NixOS. Any help here would be great.
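For the dnsmasq half, this is about what I have in mind. A minimal sketch: the interface, address range, and boot image name are assumptions, and the zvol-over-iSCSI export is exactly the part I haven't cracked.

services.dnsmasq = {
  enable = true;
  settings = {
    interface = "lan0"; # assumption: the LAN-facing interface
    dhcp-range = "192.168.1.100,192.168.1.200,12h";
    enable-tftp = true;
    tftp-root = "/srv/tftp";
    dhcp-boot = "netboot.xyz.kpxe"; # assumption: an iPXE-style boot image
  };
};
# Backing store per diskless node would be a zvol, e.g.:
#   zfs create -V 32G zpool/netboot/nixpi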
K3s
I don't quite understand the appeal and will be looking into it to finally get it. As my nodes increase in number I will probably need something like this for high availability, but my current favorite is still Docker Swarm.
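When I get to it, the NixOS module makes the first experiment pretty small. A sketch; the server hostname and token path are assumptions.

# On the control-plane node
services.k3s = {
  enable = true;
  role = "server";
};

# On each agent node
services.k3s = {
  enable = true;
  role = "agent";
  serverAddr = "https://nixer:6443"; # assumption: nixer hosts the server
  tokenFile = "/etc/k3s/token"; # copied from /var/lib/rancher/k3s/server/node-token
};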
NTP Server
I was gifted a u-blox 7 USB receiver. I thought the best way to use it was to incorporate it into a drone I've been building, but it wasn't quite good enough there. The next best use case is as a reference clock. It is currently connected to nixer and used when network time sources are down. With no PPS it's not the best, but it gets the job done.
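For reference, the receiver feeds chrony through gpsd's shared-memory driver, roughly like this. A sketch: the device path and the offset/delay numbers are assumptions you have to tune.

services.gpsd = {
  enable = true;
  devices = [ "/dev/ttyACM0" ]; # assumption: where the u-blox 7 enumerates
};
services.chrony = {
  enable = true;
  extraConfig = ''
    # gpsd publishes fixes on SHM segment 0; NMEA-only, so expect ~100ms of jitter
    refclock SHM 0 refid GPS precision 1e-1 offset 0.2 delay 0.2
  '';
};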
I have some Meshtastic nodes. I plan to strap one to a water tower around here and use a local node, connected over serial, to pull in NMEA packets and improve the fix. Will update here when I make some progress.
Stratum 0 Clock
Progress on this has been okay; I work on it between family and rest. I got some Heltec Wireless Trackers early this year for an IoT project. I later learned of the Meshtastic project and tried it out. It's pretty neat, but I am probably the only node in my entire country.
The idea was to have one of these on a water tower getting good GPS fixes, transmitted down to a module connected to a server over USB. Meshtastic allows you to log NMEA packets straight to serial.
I thought it'd be straightforward after this but... I cannot stop device operations logs from appearing in the stream. Attempts to use it as-is have crashed chronyd. I'm currently working on a service that reads the port, filters out the unnecessary fluff, and presents the result as a clean device. The code is clumsy at best and I'm still ironing it out; any help would be appreciated. I wrote this piece with the help of GPT but it didn't quite work out. Will give it another try tomorrow.
# services.nix
{ pkgs, ... }:
let
  # Keep only NMEA sentences (lines starting with "$"); everything else is
  # Meshtastic log chatter. The "[$]" class avoids a multi-layer escaping headache.
  nmeaFilter = pkgs.writeShellScript "nmea-filter" ''
    exec ${pkgs.socat}/bin/socat -d -d \
      PTY,link=/dev/ttyNMEA,raw,echo=0,mode=660,group=dialout \
      SYSTEM:"${pkgs.coreutils}/bin/stdbuf -oL ${pkgs.gnugrep}/bin/grep --line-buffered '^[$]' /dev/ttyACM0"
  '';
in
{
  systemd.services.nmea-filter = {
    description = "Filtered NMEA serial from Meshtastic device";
    after = [ "dev-ttyACM0.device" ]; # Wait for the hardware
    bindsTo = [ "dev-ttyACM0.device" ]; # Stop if the device disappears
    wantedBy = [ "multi-user.target" ];
    serviceConfig = {
      Type = "simple";
      # socat publishes the filtered stream as a PTY at /dev/ttyNMEA;
      # SYSTEM: runs the grep through a shell so the invocation is handled correctly
      ExecStart = nmeaFilter;
      Restart = "always";
      RestartSec = "5s";
    };
  };
}
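Once that PTY is up, the plan is to point gpsd at /dev/ttyNMEA instead of the raw device and keep the same SHM refclock in chrony as in the sketch further up, so chronyd never sees the device chatter.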
I still need to figure out how to share NMEA streams from remote nodes (which have much better sky visibility) with the local ones. It's probably simpler to stick to the single local node over USB, but I'm curious about the benefits of multiple GPS streams.
ZFS & NixOS Badness
I recently had to write a config for a multi-user system. I was too lazy to write the explicit config and came up with this masterpiece to get the thing going; thought I'd share the terrible snippet. It creates a home dataset per user on the pool and ensures it's mounted. There are probably better ways to do this, but for now this will do.
{ lib, pkgs, ... }:
let
  users = [
    {
      username = "user";
      # ... other user details
    }
    # ... other users
  ];
  usersWithDatasets = map (
    user:
    let
      username = user.username;
    in
    {
      inherit username;
      dataset = "zpool/home/${username}";
      mount = "/home/${username}";
    }
  ) users;
  zfsCreateUserDatasets = pkgs.writeScriptBin "zfs-user-datasets" ''
    #!${pkgs.python312}/bin/python3
    import subprocess
    import pwd
    import os
    import json

    users = json.loads('${builtins.toJSON usersWithDatasets}')

    for u in users:
        ds = u["dataset"]
        mp = u["mount"]
        # Create dataset if missing
        try:
            subprocess.run(
                ["${pkgs.zfs}/bin/zfs", "list", ds],
                check=True,
                stdout=subprocess.DEVNULL,
                stderr=subprocess.DEVNULL,
            )
        except subprocess.CalledProcessError:
            print(f"Creating ZFS dataset {ds}")
            subprocess.run(
                ["${pkgs.zfs}/bin/zfs", "create",
                 "-o", "mountpoint=legacy",
                 "-o", "quota=150G",
                 ds],
                check=True,
            )
        # Ensure the mountpoint exists and is owned by the user
        pw = pwd.getpwnam(u["username"])
        os.makedirs(mp, exist_ok=True)
        os.chown(mp, pw.pw_uid, pw.pw_gid)
        os.chmod(mp, 0o700)
  '';
in
{
  environment.systemPackages = with pkgs; [
    python312
    zfsCreateUserDatasets
  ];

  users.users = builtins.listToAttrs (
    map (u: {
      name = u.username;
      value = {
        isNormalUser = true;
        home = "/home/${u.username}";
        createHome = false; # The service creates and mounts these dirs
      };
    }) usersWithDatasets
  );

  systemd.services.zfs-user-datasets = {
    description = "Create ZFS datasets for users";
    wantedBy = [ "multi-user.target" ];
    after = [
      "zfs-import.target"
      "zfs-mount.service"
    ];
    wants = [ "zfs-import.target" ];
    serviceConfig = {
      Type = "oneshot";
      ExecStart = "${zfsCreateUserDatasets}/bin/zfs-user-datasets";
      RemainAfterExit = true;
    };
  };

  fileSystems = lib.listToAttrs (
    map (u: {
      name = "/home/${u.username}";
      value = {
        device = "zpool/home/${u.username}";
        fsType = "zfs";
        # nofail so the first boot doesn't hang before the datasets exist
        options = [ "defaults" "nofail" ];
      };
    }) usersWithDatasets
  );
}