ThatGuyB's rants

Being a fallible human (hello fellow hoomans), I sometimes forget to update my system. I do tend to believe that, due to using minimalist software, I'm at less risk of vulnerabilities, and with most malware being made for x86, running an ARM platform makes me slightly less vulnerable still.

Still, I want to at least have a way to check for updates. My desktop consists of Sway running on Void Linux, and as far as I know there is no built-in way to automatically check for updates and get a notification. So back to the minimalist way, using whatever tools we have at our disposal. To the drawing board!

I have previously mentioned somewhere that I was doing a simple crontab check every 6 hours, outputting a number into a text file; if it's 0, Sway will say "no updates," otherwise "updates-available." That worked for just a little while: I eventually noticed I hadn't updated in ages, so I couldn't depend on the status bar to always be accurate. And a feature that works only sometimes is worse than a feature that doesn't work at all, because it creates a dependence on said feature and a false sense of security.

So, here’s my simple solution to a pretty boring problem. Keep in mind that keeping the solution simple is a hard thing to do.

doas crontab -l -u root

0 6,12,18 * * * /usr/bin/sh /path/to/update-check.sh

cat /path/to/update-check.sh

#!/bin/sh
# Sync the repo index (needs root), then count how many packages would be updated
/usr/bin/xbps-install -S
/usr/bin/xbps-install -Su --dry-run | /usr/bin/wc -l > /path/to/update-check-output.txt

So, up to this point, the root user's crontab runs the update-check.sh script three times a day, 6 hours apart (at 6, 12 and 18; not worth doing an extra overnight check when I'm likely not there to see it). xbps-install -S requires root privileges to update the internal package database. Technically I could move the --dry-run command to my own user, as that doesn't require root once the repos have been synced, but I'm keeping it like this for the sake of simplicity.
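
For reference, a rough sketch of what that split would look like (untested, with the same made-up paths as above): root's crontab only syncs the repos, and an entry in my own user's crontab does the counting a few minutes later.

# root's crontab: only sync the repository index (needs root)
0 6,12,18 * * * /usr/bin/xbps-install -S

# my user's crontab: count pending updates a few minutes after the sync
5 6,12,18 * * * /usr/bin/xbps-install -Su --dry-run | /usr/bin/wc -l > /path/to/update-check-output.txt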

Thus far, either 0 or another number is getting written to ~/tmp/update-check-output.txt. If it’s 0, no updates, otherwise, updates are available.

grep status_command ~/.sway/config

status_command while ~/.sway/status.sh ; do sleep 1; done

cat ~/.sway/status.sh

#!/bin/dash
# The Sway configuration file in ~/.sway/config calls this script.
# You should see changes to the status bar after saving this script.
# If not, do "killall swaybar" and $mod+Shift+c to reload the configuration.

# Produces "21 days", for example
VAR_UPTIME_FORMATTED="$(uptime | cut -d ',' -f1  | cut -d ' ' -f4,5) ↑"

# The abbreviated weekday (e.g., "Sat"), followed by the ISO-formatted date
# like 2018-10-06 and the time (e.g., 14:01)
#(date "+%a %F %H:%M")
VAR_DATE_FORMATTED=$(date +'%a %Y-%m-%d %H:%M:%S')

# Get the Linux version but remove the "-1-ARCH" part
VAR_LINUX_VER="$(uname -r | cut -d '-' -f1) 🐧"

VAR_UPDATES_AVAILABLE=$(cat ~/tmp/update-check-output.txt)
[ ${VAR_UPDATES_AVAILABLE} -eq 0 ] && VAR_UPDATE_INFO="up-to-date ✅" || VAR_UPDATE_INFO="updates-available 🔄"

# Returns the battery status: "Full", "Discharging", or "Charging".
#VAR_BATTERY_STATUS=$(cat /sys/class/power_supply/BAT0/status)

# Volume
VAR_VOLUME=$(pamixer --get-volume)
if [ ${VAR_VOLUME} -eq 0 ]
then
        VAR_VOL_INFO="${VAR_VOLUME} 🔇"
elif [ ${VAR_VOLUME} -gt 0 ] && [ ${VAR_VOLUME} -lt 40 ]
then
        VAR_VOL_INFO="${VAR_VOLUME} 🔈"
elif [ ${VAR_VOLUME} -ge 40 ] && [ ${VAR_VOLUME} -lt 75 ]
then
        VAR_VOL_INFO="${VAR_VOLUME} 🔉"
elif [ ${VAR_VOLUME} -ge 75 ]
then
        VAR_VOL_INFO="${VAR_VOLUME} 🔊"
fi

VAR_CPU_THERMALS=$(cat /sys/class/thermal/thermal_zone0/temp)
VAR_CPU_TEMP="$((VAR_CPU_THERMALS/1000)).$(((VAR_CPU_THERMALS%1000)/10))'C 🔥"

echo cpu-temp=${VAR_CPU_TEMP} ${VAR_UPTIME_FORMATTED} ${VAR_LINUX_VER} ${VAR_UPDATE_INFO} vol=${VAR_VOL_INFO} ${VAR_DATE_FORMATTED}

The end result makes the sway status bar look something like this:

Keep in mind that .sway/status.sh runs every second, so the script ought to be able to execute and finish in under a second:

time ./.sway/status.sh > /dev/null

    0m00.02s real     0m00.02s user     0m00.00s system

I prefer keeping it this way, but in theory I could have a program or script run whenever I log in, check whether Sway is running and whether the file has been modified, and send a desktop notification instead of changing my status bar, in order to remove clutter. For those who would prefer such a thing, here is simple solution number 2.

So, we keep the root user's crontab in place, and the update-check.sh script, but instead of having status_command poll the file every second, we can start entr from our user's shell rc file (usually .bashrc).

So, at the end of .bashrc, add:

echo /path/to/update-check-output.txt | entr -p /path/to/special-sauce-script.sh &

entr is a neat little tool that watches files and runs a command whenever they change. We only want it running while our user is logged in, so it's not worth putting it in crontab with @reboot. Besides, we still need to write the special-sauce script itself, which should look like this:

#!/bin/sh
# Only notify if sway is actually running and the pending-update count is non-zero.
# The [s]way trick stops grep from matching its own process in the ps output.
[ $(ps -ef | grep -c '[s]way') -gt 0 ] && [ $(cat /path/to/update-check-output.txt) -gt 0 ] && notify-send "Updates are available 🔄"

For desktop notifications on Sway, you will need to install mako and, if I'm not mistaken, have dbus running. So, entr sits in the background and watches whether the file we echoed into it gets modified (entr looks at the file itself, not its contents). When the file changes, the script that entr executes checks whether Sway is running and whether the number in the file is bigger than 0, and if so runs notify-send (a desktop notification), which mako picks up through dbus and shows you something like this:

(screenshot of the mako notification popup)


That's basically it: an easy way to check for updates automatically and get a notification. Don't forget, whenever you update your system, to echo 0 > update-check-output.txt, so that the Sway status bar changes back. For the desktop notification it doesn't matter, because you only get a notification when the file changes.
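
If you tend to forget that step (like me), one option is a tiny wrapper; a rough sketch, untested, with the same made-up path as above:

#!/bin/sh
# update-and-reset.sh: update the system, then clear the pending-updates counter
# (assumes doas is set up for your user and the output file is writable by you)
doas xbps-install -Su && echo 0 > /path/to/update-check-output.txt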

Keep in mind I have not tested the .bashrc part, I just ran entr in a terminal, so YMMV. Do report back whether it works or not, for others to know.

For people who follow me for Void tips and tricks: I have known about the xcheckrestart utility from the xtools package for a long time, but always forgot to use it. Here's the deal: Void doesn't automatically restart services or programs once you update them via the package manager; it lets you update, then schedule a restart at any point you want.

This has the advantage of saving an admin time: instead of blocking out a long maintenance window for the whole update, an admin can update the software whenever and only schedule a 10 minute downtime afterwards to restart services. And to discover which running services are still using the older versions, you use xcheckrestart.

I just updated my grafana (7.1.5 → 8.3.3), prometheus (2.28.1 → 2.33.1), openssh and openntpd servers and thought it was a good time to use this tool. I knew grafana, prometheus and openssh were being updated, but I didn't know openntpd got updated too, which was interesting. Following that, I just used doas sv restart grafana (and the same for the rest of the services) and was up and running again. Rerunning xcheckrestart only showed sshd needing a restart, but that's because I was still in the session; after disconnecting and reconnecting, xcheckrestart did not show any other program needing a restart other than udev (eudev), and that requires a full OS restart, so I'll leave it for now.
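
For anyone who wants the short version, the whole post-update routine boiled down to something like this (the service names are just the ones on my box; adjust to taste):

doas xbps-install -Su          # update everything
doas xcheckrestart             # list processes still running old / deleted binaries
doas sv restart grafana        # restart whatever it flagged (runit services on Void)
doas sv restart prometheus
doas sv restart openntpd
doas sv restart sshd
doas xcheckrestart             # run it again to confirm nothing else is left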

xcheckrestart was run as root (with doas) because those services were running as root. The tool can, and is recommended to, be run as an unprivileged user, but then it won't show the processes running as root that need to be restarted. xcheckrestart is also useful when, for example, you update firefox while it is still running and you don't know whether you need to close and relaunch it. So after an update, you can run xcheckrestart as your user to see if you need to close and reopen any programs, or as root to check whether any services need to be restarted.

Pretty nifty tool. I'm not sure how Ubuntu, Debian, OpenSUSE or Fedora handle this, other than restarting the server just to be sure, or the admin taking note of which pieces of software are getting updated and doing a manual restart. Of course, some software out there may restart automatically through scripts run by the package manager.

I do remember GitLab restarting automatically after a yum update, but needing a gitlab-ctl reconfigure to apply the changes afterwards. Speaking of GitLab: back when I was a sysadmin, running mostly CentOS and OEL, whenever I had to update some critical infrastructure like GitLab or Jira, I had to take my time to schedule the maintenance window, update the software via the package manager or by downloading the binaries, then restart the service. Seems like my home servers are way better at maintenance windows and saving my time than my production ones were, lol.

I mean, if I were running Void in production, I would just do updates whenever I wanted, use xcheckrestart to see whether any services need a restart, then schedule a maintenance window when I could remote into the servers and restart them. It would probably take 10 minutes instead of 30 minutes to 1 hour, if everything went smoothly, that is. But then I'd be running Void in production, so… you get the point.

While I am comfortable running Void on my personal servers and on basically anything, I probably wouldn't be so willy-nilly about doing it on a production server at a company I work for. I doubt it would be too bad, but I wouldn't like to take my chances. I would, however, run Alpine in production in a heartbeat if I had the opportunity, just because Alpine is so much smaller than Void. Then again, the only reason I ran CentOS, Ubuntu or Debian was that most software I needed was either in the main repo or EPEL, or adding a repo was really easy, so I could update everything at once via the package manager. Void has a wide selection of programs right in its main repo, but if something isn't available in there, you're basically out of luck: you have to either compile it manually, or use xbps-src and write your own templates to compile, which defeats the purpose. Alpine has an even smaller software selection.

1 Like

Oh hey, you finally decided to consolidate your info dumps :rofl:

2 Likes

Yeah, the second comment contains the links to my easy-to-follow (not so easy to follow) guides.

2 Likes

What did Kirby swallow to turn into this?

Also, I am a big proponent of Hydrogen technologies. Definitely we need to have a multi pronged attack to become energy sustainable but the quickest and easiest transition for combustion vehicles is to go hydrogen to bridge the gap until we can make non-ICE vehicles affordable. We have the technology for hydrogen ICE now. Toyota and Hyundai are the new pioneers in this field. Honda used to be at the front but they are trailing far in third place on novel approaches to Hydrogen technologies at large.

2 Likes

Obviously the autistic screeching guy.


3 Likes

I just wanted to see what you would post next… it worked.

1 Like

I’ll try to keep it short today. I was looking through some posts and wanted to rant about computing efficiency and availability, more specifically small clusters vs single beefier box.

I'm not sure what the scope of other people's homelabs is (as in, size / plan / number of always-running services), but if you aren't going for high availability, a single box will always win in terms of compute efficiency, price, space used and extra hardware that needs to be added to the setup.

A cluster always requires adding stuff and sacrificing something to gain availability. A multi-server setup needs multiple servers, multiple OS drives, switches with more ports and potentially more switches, which means higher switch costs if you want a good pipe between them, and so on. Then, if you go for the low-power option, 3 NUCs will be less compute efficient (compute per watt) than a single beefier Ryzen build. You are also wasting OS install space on the SSDs you already have, instead of using that space for VMs; in a single box, those drives could be dedicated to just virtualization / containerization.

And in high-availability scenarios, you can also only allocate around 50-60% of each server's resources to the running VMs, so that if one server comes crashing down, its VMs can be started / moved onto another host.
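
A quick back-of-the-envelope example with made-up numbers: with two hosts of 64 GB RAM each (128 GB total), surviving the loss of either host means all the VMs together have to fit into 64 GB, so you can only really commit about 50% of the cluster's memory; with three equal hosts it's roughly 66%, and in general about (N-1)/N of the total for N hosts.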

And yes, I'm the crazy one who proposes high availability to people with homelabs, but I want to get the message across somehow that one box is always more efficient with resources, build budget and running costs. Kind of like informed consent / knowing your requirements. Of course, HA doesn't mean backup, so always back up your data even if you have an HA cluster.

Running all your home services (DNS, file server, video streaming, VPN and so on) on a single box makes everything more manageable and easier to segregate, lets you push your hardware to potentially 100% of its resources, and saves you a buck if you allocate enough resources for your services from the get-go.

One thing that people don't usually take into consideration about HA is that it doesn't protect you from software failure or data corruption. If your web server comes crashing down but the OS it is running on is still fine, the VM will keep running on the same host. If the host goes down, the VM will be moved to another host, but the web server will still be in an unusable state.

While I've been a proponent of HA for a while, I've realized that maybe we should go back to our roots, back when virtualization wasn't a thing, and start thinking about software that is resilient to failures. DNS, while slowing things down tremendously when the primary server goes down, still lets you use the secondary one if the first DNS query times out. Web servers can present the same information but be load-balanced, so if one of them goes down, the other can take the whole load (or the load can be shared among more, so that no single server gets overloaded). A lot of databases have clustering options, or at the very least active-standby modes. A lot of software is not resilient and can only run in single mode, so HA does help in those situations. But from an efficiency standpoint, running things basically twice doesn't make much sense.
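
To illustrate just the DNS part of that, this is roughly all the client-side resilience you get for free; a sketch of /etc/resolv.conf with made-up addresses:

# primary DNS
nameserver 192.168.1.53
# secondary DNS, only tried after the first one times out
nameserver 192.168.1.54
# shorten the painful failover delay a bit
options timeout:2 attempts:2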

Going back to single-host setups: this takes away your option to run HA services, but it makes you think about how you would solve problems if the software inside your VMs somehow catastrophically fails. Automatic restoration to a known-good point through snapshots may be an option.

2 Likes

In other news, I can't seem to get my RockPro64 to boot: as soon as I plug in the power, the LAN LEDs turn on, along with the green LED near the power button, but the power button does nothing. I haven't had time to try Armbian or Manjaro to see if my board is functional, but the OpenBSD miniroot70.img doesn't seem to work, which is frustrating. And getting either Alpine or Void to work on the Odroid N2+ has been quite the challenge for me. The last thing left to try is installing it through a chroot on an already running Manjaro install; I guess I'll give that a try once I stop being lazy.

2 Likes

There’s ways around that.

But what if I have old kits of RAM or other stuff laying around and it's the cheapest option for setting something up?

And if I only have ITX motherboards with 2 slots and I need 100 GB of RAM, then two systems seem like a good alternative to buying yet more RAM (a 4x32 kit at minimum).

I mean, one system is nice, but if it's easier for me to cobble two or more together, I'd do it.

It’s a homelab … I don’t need HA

3 Likes

I know netboot is a thing, and OSes running from RAM aren't exactly new. But even for someone who is capable of that, the other limitations still apply.

I got a Xeon X3450 server completely for free, with case, 500W gold-rated PSU, 11 HDDs, 1 SSD, mobo and 24 GB of (functional) ECC RAM (it had 32, but one stick was keeping the server from booting; I tested it by itself in different slots). Not one dime spent on acquiring it. I ran it 24/7 for a year or a bit more. The cost of running that thing was so bad that I could have built a more capable Ryzen 3300X server for less and paid a few months of the power bills with the difference; that's how bad it was. And this time I'm not exaggerating: I was spending $70 to $120 on electricity just for the server (that's by how much my power bill increased after getting it). My power bill more than doubled, almost tripled (and I was pretty frugal with electricity).

Depending on how old your components are, it may still be worth building around them. 1st gen Ryzen and 6th gen Intel are still efficient enough, so getting them cheap may offset the power bill costs. But there's really a balance to that.

It may, however, be worth buying older platforms that sipped power to begin with, like older Celerons with Atom cores (the J1900 comes to mind). But obviously you give up ECC and other niceties (like PCI-E lanes and other I/O, such as SATA ports).

I'd like to have a form of HA, especially for my home VPN, so I can access data from my home servers wherever I go. Overkill? I guess. But it depends on what you run: I wouldn't be running more than 1 git server, for example; good working backups would be all I'd need there. My plan for an HA cluster on SBCs (and the networking part too) is mostly for potential production environments at home, things like Nextcloud or your own chat server that people would depend on and wouldn't want to ever be down.

2 Likes

I can really see where you're coming from here. I was really happy with a Ryzen server I had built, so much so that I use it for my home services. The Ryzen 2700 works great and I don't think I've ever gone above 30% usage. I have isolated my DNS and firewall services. I purpose-built the Ryzen server for 24/7 running: platinum PSU, lower-power 65W processor version, etc. On average it really draws less than a 75W bulb. :slight_smile:

It's been a great learning tool. I can see the limits of HA as I have explored it. It seemed I might need at least two identical machines and then possibly a Pi running as a third authority for the HA cluster. I invested in a managed 10G switch for servers and backup speeds etc., then a second managed switch for a trunk to another location in the home to isolate the wired backhauls for the mesh wifi networks. I had a lot of the devices for this setup, but it was a bit of an investment. I think if I had stayed with the one-box lab I could do a lot. I'm still learning a lot.

Either way, HA, while nice, is probably a bit overkill for my smaller lab. :slight_smile: Just my thoughts as a novice with servers and networking services. I do think disaster recovery is probably more important than HA for me at this point, as I have no web-based services (web pages, VPN tunnels, proxy servers etc.). I wouldn't feel comfortable doing those till I had a bit more knowledge.

EDIT: The Ryzen with the ASRock Rack X470 mobo is what I am comparing to the Dell T320 (420) conversion tower server I have. I have to admit the much larger (5U) tower platform (which uses one 120mm fan-and-shroud setup and larger heatsinks) is MUCH quieter than 2U-3U sized servers (which rely on multiple smaller fans at much higher speeds and smaller heatsinks). However, the older Intel Xeon E5-2470 v2 2.40GHz 10-core LGA 1356 parts are 95W each (190W with dual CPUs) for much lower speeds and more wasted heat. Also, as a note, this is a very unusual chipset that was one of the last generations to still use DDR3 ECC RAM, which was a cost factor for me (192GB was pricey, BUT nowhere near what that would be in DDR4 ECC). The Dell T320 uses dual platinum PSUs of 450W, 750W or 1100W, depending on GPU, CPU and PCIe needs. That's a pretty beefy power budget. I also heavily modified mine from the default cooling configuration, with active CPU cooling, a fan dedicated to fresh intake, a 140mm on the disk cage and a 120mm x 25mm Noctua in place of the thicker, higher-speed 38mm stock fan.

I think it really depends on if you need that redundancy and those HA functions. I can see it for people who are in the field and need a FULL HA structure for testing and for work-related usage. People at home can create a much smaller system that's much more efficient on smaller platforms. My Raspberry Pi 4 with 8GB of RAM is way overkill for just Pi-hole. The Protectli with 4 cores for a firewall is a bit overkill as well, but then again, I go big or go home…lol. I'm also investing in my time and developing my skills beyond just home use.

Sorry for the added rant lol.

2 Likes

Today I'm ranting about how the concept of Unix and Unix-like is dumb. That is not to say that the Unix philosophy is bad or anything, just that the concept of "Unix" when talking about operating system administrators / "experts" is not just irrelevant, but also stupid.

When you encounter "Unix admins," a broad term used to describe Linux, BSD and other Unix-like OS sysadmins, you don't get much, if anything, from the words themselves. The original Unix arguably doesn't exist anymore, and while there are projects that are direct descendants of Unix, today they are very different from what you would have seen in the old days.

So, going back to someone who is called, or identifies as, a "Unix admin": this label won't tell you much about their actual knowledge of the matter in question, Unix-like OSes. That is because all the Unix-like OSes have diverged greatly from one another. You have some general ideas in common, like /dev containing devices, /var/log containing general logs, /bin or /usr/bin containing binaries etc., plus the tools that are used, like ls or cat or vi. But that's where the similarities end.

Ask a Linux expert to investigate an issue on an AIX system and you will see him struggle to find where to look and what commands to use. He may be able to navigate through the OS, but he won't be able to do jack s**t most of the time. Oh, and good luck finding AIX manuals through web search engines. And if said Linux admin only has experience managing systemd-only systems, or has very limited experience with OSes using Upstart, oh boy, grab some popcorn, because you're in for a lot of fun.

There is no journalctl there, and you won't get far with /var/log/messages either: the error log is a binary file, and you have to use errpt to generate an error report from it. A Linux admin will not know that. Yet all of them are categorized as "Unix admins," which, again, tells you nothing about what they actually know. HP-UX is even worse: the log location there is /var/adm/syslog/syslog.log. Look over the HPE community forums for all the confused Linux admins trying to manage HP-UX.
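
For the curious, the AIX / HP-UX equivalents of "just tail the log" look roughly like this (from memory, so double-check before relying on it):

errpt                                # AIX: one-line summary of logged errors
errpt -a                             # AIX: full detailed error report
tail -f /var/adm/syslog/syslog.log   # HP-UX: the plain-text syslog mentioned above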

Solaris? It's still alive, and man, is it another kind of beast compared to Linux and the BSDs. Tell a Linux or AIX admin to manage Solaris Zones and they will not know where to even begin looking. FreeBSD admins might have a general idea, because the concepts of Jails and Zones are somewhat similar. Still, Solaris is different from FreeBSD and I don't know how much of the knowledge would transfer between the two, because the tools used are completely different, even if the core concepts are the same.
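
Even where the core concepts are similar, the tooling isn't; a rough side-by-side, with a made-up zone name:

zoneadm list -cv     # Solaris: list configured zones and their state
zlogin myzone        # Solaris: get a shell inside a zone

jls                  # FreeBSD: list running jails
jexec 1 /bin/sh      # FreeBSD: get a shell inside the jail with JID 1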

Changing network settings is yet another one of those areas where the "Unix admin" label is absolutely useless. Heck, even within Linux, changing network settings differs quite a bit between distros:

  • /etc/network/interfaces for Debian and Alpine
  • ip commands for Void
  • netplan for Ubuntu
  • /etc/sysconfig/network-scripts for RHEL family
  • wicked or YaST (still wicked) in OpenSUSE
  • NetworkManager, usually in RHEL family, but it’s an abstraction of the other tools, kinda like YaST is for wicked
  • ConnMan - same as NM
  • systemd-networkd - I believe Arch uses it

You get the idea. These tools don't translate that well between one another, let alone between the completely different OSes that fall under the general term Unix or Unix-like.
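
Just to show how little carries over, here is the same made-up static address (192.168.1.10/24, gateway 192.168.1.1, interface eth0) expressed three ways; treat these as rough sketches rather than copy-paste material:

# Void: plain ip commands (e.g. from a startup script)
ip link set dev eth0 up
ip addr add 192.168.1.10/24 dev eth0
ip route add default via 192.168.1.1

# Debian / Alpine: /etc/network/interfaces
#   auto eth0
#   iface eth0 inet static
#       address 192.168.1.10
#       netmask 255.255.255.0
#       gateway 192.168.1.1

# RHEL family: /etc/sysconfig/network-scripts/ifcfg-eth0
#   DEVICE=eth0
#   BOOTPROTO=none
#   ONBOOT=yes
#   IPADDR=192.168.1.10
#   PREFIX=24
#   GATEWAY=192.168.1.1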

I have a great appreciation for the Unix philosophy of keeping things simple and doing one thing well. I try to follow that as much as I can in my digital life, and I'm also trying to apply the same concepts to everyday life, because simple things are easier to make and maintain. The efficiency you gain from simplicity, IMO, greatly surpasses any benefit you would get from complexity. Of course, sometimes you can't have simple things: imagine stripping parts off a bike to make it simpler; you would probably end up with a unicycle, which is indeed a far simpler design than a bike and arguably easier to maintain, but you no longer have something you can call a bike.

But the issue I have is with the divergence between all the Unix-like OSes (Linux, the BSDs, AIX, HP-UX, Solaris, MINIX, Redox, macOS / Darwin) and the term "Unix admin" being used to describe someone who "knows them all." I hate this term, and not because of some irrational aversion, like some people hate the word "moist," but because it doesn't give an accurate description of someone's knowledge. There could be someone out there who has managed and has experience with all of these, or at least most of them, but the label is so often used wrongly that I believe the expression should be dropped from most people's vocabulary, even in the cases where it accurately describes someone, simply because its misuse in the industry is beyond ridiculous. It would be better to just use Linux admin, AIX admin, BSD admin, HP-UX admin etc., even if there are big differences between the different flavors and distributions of each of these (at least the open ones).

For example, FreeBSD, DragonFly BSD, NetBSD and OpenBSD are similar in their administration but still have their own quirks, though probably not as different from each other as, say, RHEL, Debian, NixOS and OpenSUSE are. But those labels, even if they are a bit more abstract, are far better suited for describing things than "Unix," because the differences within each group aren't that big and the tooling is not too different, if different at all (like df in Linux vs the one in BSD or AIX, or fdisk in Linux vs OpenBSD, or le cat -v).

Despite my autistic screeching, I doubt anyone in the industry will stop misusing the term, even though "BSD, Linux and Solaris admin" is not that much harder to say than "Unix admin"; it rolls off the tongue quite well and describes exactly what one does, as opposed to the abstraction that has been removed from my vocabulary.

2 Likes

This is an important note to self.

Today I realized that people are doing setup tutorials for all kinds of services, from Docker images like Vaultwarden to manual instance creation like Nextcloud, and nobody is telling people how to back up their freshly set up services. Not only that, but basically nobody is showing people how to restore from their backups either.

I just finished making a Nextcloud backup and restore tutorial for someone on another forum, and at that moment it struck me: none of the tutorials I have watched and, even more shameful, none of the tutorials I have made had any backup and restore guides in them.

Of course it's more work, writing additional backup and restore steps, but people forget about backups, and when the worst happens they are s*** out of luck. I have decided that I will be writing those additional steps into my own articles, even if it just means "crontab an rsync -va of this folder to another host from time to time." That would be pretty easy, especially for things like wireguard or nginx / apache / haproxy, as those only have configs to back up.
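
For those config-only services, the whole "backup chapter" could honestly be a single line in root's crontab, something like this sketch (host and paths made up, and it assumes passwordless ssh keys to the backup host):

# push the wireguard configs to another host every night at 02:30
30 2 * * * rsync -va /etc/wireguard/ backup-host:/backups/wireguard/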

But things like Nextcloud (for which, as I mentioned, I just did a backup and restore tutorial based on someone else's setup guide) or GitLab, which have a DB to back up and restore, aren't as straightforward. Well, GitLab has its own backup utility (written in Ruby or as a Chef cookbook, I don't remember) that makes backups a single command, and you can symlink or bind mount (or simply mount) a remote location into GitLab's default backup directory.
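
From what I remember of the Omnibus package, it looks something like this (double-check against the GitLab docs, my memory of the commands and paths may be off):

gitlab-backup create                 # newer Omnibus versions
gitlab-rake gitlab:backup:create     # the older rake task it replaced

# the tarballs land in the default backup directory, which you can point at remote storage:
mount --bind /mnt/remote-backups /var/opt/gitlab/backups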

I will have to go back to my previous tutorials and update them… Some should be pretty easy: the road-warrior VPN router basically never changes, so you only need to copy the 2 scripts somewhere and you're done. Or even better, keep my tutorial bookmarked, so you don't even need a backup (unless this forum goes down, or I delete it).

It really boggles my mind that there are so many techie people spreading the information and helping people be their own internet landlords, but none of the ones I watched thought about showing how to back up the services they show how to set up.

It feels like I'm alone in this; I need to super-spread the word. At the very least, tutorials for services with DBs and other non-intuitive backup procedures need to include a backup and restore section.

2 Likes

Thanks for working on this. As someone who has been in charge of backups in enterprise environments for ages: there is no sense in backing up if you don't know how to restore things, especially if your configuration information is not 100%.

2 Likes

just run raid

4 Likes

Why do I need to back up when I run RAID 0. RAID is bettar than a backup! j/k

3 Likes

Awesome! I myself had this on my checklist. After a debacle with my Pi-hole messing up all my internet access due to a bug in the new version of Pi-hole (9.0ftl; it could not flush logs or device records), you had helped me with this. It is something I need help with for sure, especially if there are good ways to do remote backups and/or backups to network devices such as a NAS.

I have been making backup images of my LXC containers on Proxmox and tried to set up snapshots (the storage format of the LXC won't support this for some reason, but the LXC is small enough that I don't care), but I need to automate how many backups to keep, deleting older versions to cut down on storage bloat and be efficient with my limited home lab resources.

I look forward to reading your guide as always @ThatGuyB

I do not have a cloud backup, but I do incremental backups to USB drives (one HDD, one NVMe); one is kept in a fire safe and one at my folks' place in my dad's gun safe… I like my data lol

1 Like

I don't have a Pi-hole server; maybe I should make one just to see what needs to be backed up. I have worked with BackupPC in the past, with mixed results, and I have heard people have mixed feelings about Bacula. I would like to give Bacula a try myself though, although I would guess it may be a pretty heavy piece of software to run.

Depending on the type of service you are running, backups can be made with just a few rsync scripts: one that runs twice a month and backs up everything, and one that runs every day (except when the full runs) and only grabs the files modified in the past day (incremental) or since the last full (differential). Restoring would then just be a matter of installing the software, stopping the service, doing the rsync in reverse (first the full, then the rest), then starting the service.
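
As a sketch of that scheme (untested, with made-up paths and host; the daily script here is the incremental variant):

#!/bin/sh
# full.sh: runs on the 1st and 15th, copies everything into the "full" directory
rsync -va --delete /srv/myservice/ backup-host:/backups/myservice/full/

#!/bin/sh
# incr.sh: runs daily, only sends files modified in the last day into a dated directory
cd /srv/myservice && find . -type f -mtime -1 -print0 \
        | rsync -va --from0 --files-from=- . backup-host:/backups/myservice/incr-$(date +%F)/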

Or, it could be something more complicated, like if you are running NextCloud.

In any case, I have a passion for keeping things simple, so I tend towards rsync or copying files automatically and maybe compressing the previous backups, but I'm not sure that will cut it. I can think of situations where, if the scripts aren't properly set up to verify the last backup time, things could go pretty badly, and I would rather have actual backup solutions do that work.

A bonus thing to think about is the trust you put in your infrastructure: would you like the backup server to connect to the clients and pull the backups, or would you like the clients to push the files to the server? Each has advantages and disadvantages depending on your workload and security concerns. For home use it shouldn't matter that much though, and I'm concentrating on self-hosters, not on businesses.

1 Like

I was hoping to explore some of the tools that are part of ZFS. I know there are ways to do the backup, then snapshots… I just haven't messed with it yet because the second server I would like to run, with a decent set of drives, can't be on at the same time as the rest of the gear in my office lol… sooooooo that's a bummer.

I also have no other device that can run the six 8 TB drives. The home server has five 6 TB drives in raidz2. The only thing I can back up to is a WD NAS with two 6 TB drives in RAID 0, with very limited functionality at the moment because the firmware is no longer supported :frowning: and it's not a Linux device, so I can't have a program running on both ends for sync.

I was able to have it mounted and monitored by Proxmox locally… I just need to see how I can leverage that to do a backup routine, if I can. I've been reading the Proxmox manual, but I haven't gotten deep into it yet. I wanna try first lol.

I was going to "MacGyver" an old Cougar case with the drives and an ITX X470 board… BUT I have to research whether it will even boot headless with a Ryzen 2700 CPU and no GPU. I can use my cheap GT 710 for the initial setup, but I know some boards won't POST without a GPU.

I dunno, got a million ideas…

3 Likes