[Build Log] Apartment Network and Security PLUS RANTS

https://wiki.archlinux.org/title/Network_UPS_Tools

It exposes the local UPS state over the network.

There are Windows clients too, … or you can roll your own (e.g. a small Python server that starts at power-on, listens on UDP and runs a shutdown subprocess when it sees an “ac power lost” message).

2 Likes

Well, the breaker having issues did bring about a few good things to learn. I use my Raspberry Pi 4 in the Argon ONE case with the fan. I like the sleek look of the case, and I could overclock the Pi if I wanted without worrying about overheating it. I really have no need to, as all it does is run Pi-hole and Unbound to create a recursive DNS resolver with ad-blocking features (link to the page I followed - if you see any errors or problems with this setup PLEASE let me know).

When the breaker cut out, I noticed my DNS was failing because the Pi was off and had to be turned back on manually. A little Googling later, I found some scripts that should have enabled automatic power-on… that option was a fail. So I looked at the hardware just to be sure.

The Argon ONE has an adapted power button and a side card, so I took a look inside.

It was that simple: there was a jumper on the card where 1-2 was power-on via the button only, and 2-3 was always-on! BAM, problem solved.

Hopefully a temporary problem as I look for a UPS that will be suitable for me… (IE once wife approves purchase LOL)

I think the next step is either coding (using Python), a cron job, or something to set up automatic updates, upgrades (bad idea?) and a reboot for my Pi with Pi-hole and Unbound. I hope what I learn here will transfer to my Ryzen server with Proxmox, which may get an upgrade from the 2700 to the 3700X because I now have the 3900XT working after fixing the pins. Still rocking Project Blackout for games at the moment because I'm waiting on maintenance to do the breaker. I started to do it myself, and the wife came home and freaked out thinking I would kill myself, so I'm leaving it to them. :frowning:

2 Likes

… do automatic backups first.

Then, instead of upgrading in place, make a script to restore from a backup on top of the latest version of the software.

2 Likes

That sounds like an amazing plan… any ideas where I could go to learn the scripting for such a process? Could backups be saved locally on the current machine, or do I need another machine up and running for such a procedure? Would I also need to clear out the oldest backup each time so I don't run out of space, or would I always just have the one working backup as a fallback?

SMTP could be made to work, but I doubt you'd want to mail your PCs to shut down. I think it was supposed to be SNMP. However, you could have the UPS connected to a "management server," something that doesn't consume a lot of power (the Pi-hole box seems like a good fit), and when you get a "no power" signal from the UPS for more than a minute, you kick off a shutdown script, so the Pi starts SSHing into everything and powers them off (an SSH server is available for Windows too). This seems like the easiest thing to do, frankly. You just have to find a decent UPS that doesn't use horrible software/clients or special-sauce drivers. Basic is best (like apcupsd, but I kinda avoid APC nowadays - you can get lucky with some consumer APC UPSes though).
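
A rough sketch of that fan-out (purely illustrative: it assumes key-based SSH to every box, and the user/host names are placeholders, not anything from this thread):

#!/bin/sh
# Hypothetical "power is out, shut everything down" script run from the management server.
# Assumes passwordless (key-based) SSH to each target; replace the list with your own boxes.
HOSTS="user@proxmox-box user@nas admin@win10-pc"

for host in $HOSTS; do
    case "$host" in
        admin@win10-*)
            # Windows target with the OpenSSH server enabled
            ssh "$host" 'shutdown /s /t 30'
            ;;
        *)
            # Linux targets (the remote user needs passwordless sudo for shutdown)
            ssh "$host" 'sudo shutdown -h +1'
            ;;
    esac
done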

Use crontab + rsync, or if you're not comfortable with that, BackupPC for backups. As for scripting automatic updates, backups come first. If you really want them, find a way to keep your logs / stdout (the output of the update command).
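
For the update part, a minimal sketch of "keep the output" (assuming Raspbian's apt and the stock Pi-hole installer, which ships its own pihole -up updater; the paths are placeholders):

#!/bin/sh
# Hypothetical unattended update script - everything it prints is kept for later review.
LOG=/var/log/auto-update-$(date +'%Y%m%d').log

{
    apt-get update
    apt-get -y dist-upgrade
    pihole -up    # Pi-hole updates itself outside of apt
} >> "${LOG}" 2>&1

A root crontab entry pointing at this (like the backup examples further down) would take care of the scheduling.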

Search the web for "how to xyz (in) terminal" and the best results are usually Stack Overflow, the Unix & Linux Stack Exchange, the Ubuntu forums, the Arch wiki (even for other distros) and other related Stack Exchange sites (server, sysadmin, etc.).

That's gonna be a little hard for things like Pi-hole on the Pi (unless you've got multiple Pis to be your staging servers), so backups are still the best way, then crontab-ing auto-updates and restoring if something goes wrong.

Yes, but it's really not recommended (it becomes a pain if the OS goes poof - you'd have to get the backups onto another PC first, then restore the OS and copy the backup back over).

That depends on how you set the backup policy. You can do one full backup once, then incremental / differential constantly, but if you discover your backup is broken, you're in trouble. You can also do a full backup weekly or monthly (depending on how often your system changes configuration or updates), then incremental / differential daily or weekly (or hourly, or every couple of days, or whatever), and have a retention policy of 3 previous full backups with all of their inc / diff backups, deleting everything that is older (or overwriting it). I find this to be the most sane approach. I suggest weekly full backups; 3 weeks is long enough, and if you customize your system often, it's gonna be enough of a chore to recreate your stuff (well, basically 2 weeks actually, with the incremental backups).

2 Likes

Thank you sir. I'll do my best to Google my way through. Crontab and rsync seem to be the way to go; I just need to find a good guide on one of the pages you mentioned. Shouldn't be too hard to decipher… it keeps getting easier the more I do and read. I just want to make sure I understand what I am doing and why, and not just "script-kiddie" everything.

Backup on the same machine isn't ideal, but it's necessary till my power issue gets resolved. ALSO, it lets me practice the theory and implementation of backing up and updating. I assume the script will change to a remote location once that's possible. It would be even nicer to use ZFS snapshots of changes to save space once I have a full image backup (not that it's a huge concern at the moment). I just need to keep Raspbian, Pi-hole and Unbound updated. Is Unbound updated as a Raspbian package? If not, I'll need to update that manually as well for now.

I picked the Pi because it's the easiest thing for me to lose and try to recreate without too much time lost, following my documentation. Also, if the service gets dropped, I just change the DNS on my machines to the backup DNS server without much hassle.

2 Likes

Turns out docker pihole does part of the job for you.

What you’re left with is:

  1. Figuring out how to make a copy of the persistent data elsewhere (e.g. to another ssh host somewhere, or to a cloud file locker maybe using rclone).

  2. Figuring out how to check if an update is available, pull the image, back up, stop the old container/start the new one, and health check/verify it's working (e.g. if you only use it for DNS, just call dig or nslookup in a loop until it starts answering or too much time has passed) …
    … and, ideally, if it's not back up within 15 s, bring back the old container version and exit with a failure (see the sketch after this list).

  3. You can save the upgrade script (probably 50 lines of shell script) on GitHub somewhere.
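
For step 2, a minimal sketch of that loop - assuming the official pihole/pihole image, a bind-mounted /etc/pihole for the persistent data, and a container simply named pihole (the docker run flags are illustrative, not a complete Pi-hole config):

#!/bin/sh
# Hypothetical upgrade flow: pull, swap containers, health-check DNS, roll back on failure.
docker pull pihole/pihole:latest

# Keep the old container around (renamed) so we can roll back if the new one is broken.
docker stop pihole
docker rename pihole pihole-old

docker run -d --name pihole \
    -p 53:53/tcp -p 53:53/udp \
    -v /srv/pihole/etc-pihole:/etc/pihole \
    pihole/pihole:latest

# Health check: give DNS about 15 seconds to start answering.
i=0
while [ $i -lt 15 ]; do
    if dig +short +time=1 +tries=1 example.com @127.0.0.1 > /dev/null 2>&1; then
        docker rm pihole-old        # new container works, drop the old one
        exit 0
    fi
    i=$((i + 1))
    sleep 1
done

# Still not answering - roll back to the previous version.
docker rm -f pihole
docker rename pihole-old pihole
docker start pihole
exit 1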

2 Likes

Well, I'm operating on a Pi. I have the option of a Windows 10 PC that I always have running, or I can boot Linux on it (Ubuntu, I think). There's also the WD NAS I currently have set up that I should be able to save to as well.

I’m currently looking at this guide for some assistance. Good resource? @risk @ThatGuyB ?

1 Like

The solution with compressing into an archive would make it basically a full backup each time. It has its uses if you only want to back up a few critical files and folders, like just /etc, the folder where Pi-Hole is installed (if it's a docker container, then its mount-point folder) and your /home/user, while excluding /home/user/{Downloads, .local, .cache, etc.}. But if you want to do a full system backup, you shouldn't be archiving.

Personally, where space is not a concern, I just back up everything excluding /dev, /proc, /run and /sys. So a script for that would look like:

  • bkp.sh:
#!/bin/bash
# bash (not plain sh) is needed for the brace expansion in the --exclude list below.

# Log file named after today's date; the /path/to/backup directory must already exist.
LOGFILE=/path/to/backup/log-$(date +'%Y%m%d').log
touch "${LOGFILE}"
# Send the whole filesystem (minus pseudo-filesystems and mounts) to the backup host.
rsync -varoglAX / --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} backup-user@your-backup-server:/path/to/backup/folder >> "${LOGFILE}" 2>&1
  • crontab (run every day at 2am) - note: this is the root user's crontab, or you can use sudo on your user and make the command run w/o a password
0 2 * * * /bin/sh -c /path/to/bkp.sh

You can also do the above from the server, i.e. rsync in reverse (pull from user@host:/ into /path/to/backup/folder). On the server side, you would need to have:

  • archive.sh:
#!/bin/sh
# Archive last week's rsync'ed backup into a dated tar.gz, then remove the folder.
FILENAME=/path/to/archive/location/pi-backup-$(date +'%Y%m%d').tar.gz
BACKUP=/path/to/the/folder/where/pi/sends/backup/via/rsync
# -c create, -z gzip, -v verbose; -f takes the archive name as its argument
tar -czvf "${FILENAME}" "${BACKUP}" && [ -d "${BACKUP}" ] && rm -rf "${BACKUP}"

I haven't tested this script (dang, how I wish my Pi were working right now), but in theory it should make a tar.gz archive of your folder (I don't remember if archiving deletes the folder; I doubt it) and, if successful, check that the folder exists and then delete it.

  • crontab (run every Sunday at 9pm) - can be the backup user's crontab, so long as that user can access (read and write) all the files and folders in the script:
0 21 * * 0 /bin/sh -c /path/to/archive.sh

I think from 9pm to 2am (until the next backup starts) should be enough time to archive and delete the files, so that a new full backup can be made.

  • crontab (every Monday at 6am) - same as above, just add it on another line, this is used to delete the 4th oldest full backup:
0 6 * * 1 /bin/find /path/to/archive/location/ -name "*.tar.gz" -mtime +15 -delete

This command finds any files ending in .tar.gz in the folder /path/to/archive/location/ and, if they are older than 15 days, deletes them. So you should have 2 archives (one from 2 weeks ago and one from last week) alongside your current backup in your /path/to/the/folder/where/pi/sends/backup/via/rsync.

A lot can be improved here. For example, you could split stderr and stdout into 2 separate files and, if the error log is NOT empty, have it send you an alert or an email or something. Also, again, I have not tested these scripts; they are similar to what I have done before, but I just wrote them now. Test them with a few unimportant files first to see if / how they work. I also suggest you read and understand what these commands do (running arbitrary code found on the internet, lmao). Also, old log files will accumulate. They won't use a lot of space, but it's still good hygiene to clean old log files up, so you could use the same find command to delete backup logs older than 30 days (so you keep the last 4-5 logs intact).
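
As a sketch of that stdout/stderr split (the mail command and address are assumptions - swap in whatever alerting you actually have):

#!/bin/sh
# Hypothetical variant of bkp.sh that separates output from errors and alerts on errors.
OUTLOG=/path/to/backup/out-$(date +'%Y%m%d').log
ERRLOG=/path/to/backup/err-$(date +'%Y%m%d').log

# Same rsync as before (excludes omitted here for brevity).
rsync -varoglAX / backup-user@your-backup-server:/path/to/backup/folder \
    > "${OUTLOG}" 2> "${ERRLOG}"

# If anything landed in the error log, send it somewhere you will actually see it.
if [ -s "${ERRLOG}" ]; then
    mail -s "Pi backup errors $(date +'%Y%m%d')" you@example.com < "${ERRLOG}"
fi

# Housekeeping: drop logs older than 30 days.
find /path/to/backup/ -name "*.log" -mtime +30 -delete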

So, in summary, this should make a full backup of the Pi, including the OS (adapt the rsync command to your needs), then rsync will just send incremental files (only the files that changed) for a week, until the server archives that whole week of backups. After the folder is archived, a new full backup will be made. The server will only keep 2 archives and delete older ones (again, if you want more, adapt the find command to your needs, at the cost of storage usage).

2 Likes

Sweet, yes… as I said, I am going to read the code and try to understand it before using it, for sure. Like I said, I don't like the idea of being a script kiddie. I'm doing this to learn, but without formal classes it seems like a lot to just jump into and expect to be productive.
I was OK with VMs - I taught myself how to pass through a video card and modify some files to read hardware info correctly on my X470 board… if there's a problem I can usually find a solution. Just with something this specific it seems harder to find a solid, reliable answer. I'm still making my way through my texts, but I'm nowhere near far enough for the types of things I would like to learn to do.
So it's baby steps, but I'm learning more each day. I should have some time soon to really sit down and dig into scripts and their implementation.
I also signed up for a free GitHub account to have a place to consolidate my work that's not kept locally. I'm guessing sanitizing the scripts is the safest way, so machine names and file/folder structures aren't exposed - or is that too paranoid?

1 Like

GitHub lets you keep a few private repos - start there.

It's OK to keep everything in one repo. I suspect you'll mostly have plain-text files in Markdown - your own notes… and shell scripts.

For example, Pi-hole: think about how you'd set it up from the command line, non-interactively… Even if you can only do parts of that and you still need a human to do a thing, you can make a shell script, or a Markdown doc with some shell command snippets inside it.
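
As a sketch of that for Pi-hole specifically - assuming the upstream installer still accepts --unattended when /etc/pihole/setupVars.conf is already in place (worth checking the current docs before relying on it):

#!/bin/sh
# Hypothetical non-interactive (re)install: restore the answers file from a backup,
# then let the installer read it instead of asking questions.
sudo mkdir -p /etc/pihole
sudo cp /path/to/backup/etc/pihole/setupVars.conf /etc/pihole/

curl -sSL https://install.pi-hole.net | sudo bash /dev/stdin --unattended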

2 Likes

They opened it up to unlimited private repos a while ago, but yeah, you can probably just keep everything in one. It's also super easy to self-host your own repos and then use GitHub as an off-site backup. Just create an additional remote for the repo and you're all set.

https://git-scm.com/book/en/v2/Git-Basics-Working-with-Remotes

If you've got a workstation of some kind, a separate physical machine for your local repos, and then some private repos on GitHub, that takes care of the 3-2-1 rule for backups pretty easily.
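
Adding that extra remote is a couple of commands (the remote name, repo path and URL are placeholders):

# "origin" lives on your own box/NAS; GitHub is just a second remote for the off-site copy.
git remote add github git@github.com:yourname/pi-scripts.git

# Push the current branch to both.
git push origin main
git push github main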

2 Likes

Yeah, I'll be parsing this for days… I still can't get rsync to work with a Windows machine (shared folder - I can SSH into the Windows machine from the Pi though) or with my WD My Cloud, which is the current local network backup till they change the breaker.

Here are a few errors I received. The first had to do with the log file, @ThatGuyB:

pi@filter:/usr/local/bin $ ./backupremote.sh
touch: cannot touch '/path/to/backup/log-20210914.log': No such file or directory
./backupremote.sh: 5: ./backupremote.sh: cannot create /path/to/backup/log-20210914.log: Directory nonexistent

Sorry, I had to ditch that part because I couldn't figure it out. I wasn't sure if I had to locate or create said folder for it to populate correctly…
*** EDIT - Actually, seeing it now and really reading the error, it seems I just needed to define a location… oops, will try more later… or am I wrong in how I read that error?

Then I had this issue:

Unable to negotiate with 192.168.50.215 port 22: no matching host key type found. 
Their offer: ssh-dss 
rsync: connection unexpectedly closed (0 bytes received so far) [sender] 
rsync error: unexplained error (code 255) at io.c(235) [sender=3.1.3]

Turns out WD My Cloud security won't allow rsync to communicate… I was reading a bit on it and there are some lengthy workarounds and/or scripts I don't want to "Just Run" lol

So I finally attempted to rsync to a share I had made on my Windows 10 machine… after some work ensuring the SSH agent and server were working correctly in PowerShell, I made a bit more progress. Now, after entering the password for access, I get:

'rsync' is not recognized as an internal or external command, operable program or batch file.
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(235) [sender=3.1.3]

So I am making progress and I am learning, just no success yet. This, I'm sure (or at least I hope!!!), will be much easier when I go Linux to Linux… unless I start to use TrueNAS and that gives me fits lol.

Thanks for all the help so far guys and gals.

Also, I looked for "best practices" on where to save scripts and didn't really find a good answer. If I plan to use said script with crontab, is there a particular location I should keep it in? Or does it need to be added to the "PATH" (don't know what that is for sure yet)…

1 Like

TL;DR… You can go far if you just stick them wherever; home dirs can work just fine - it's a good start. Don't put stuff where the system keeps things if you don't want to manage your stuff the same way distro maintainers do.

Official best practice is to install your own stuff into /usr/local/bin and /usr/local/lib … but once it’s there you’re not supposed to hack on it in place.

I'm kind of a default wallpaper kind of guy, unless the default is too distracting… I'm also a filthy lazy developer when it comes to my own personal stuff; I keep my stuff in ~/projects/foobar, as the pinnacle of my organization efforts. Not all things in there are git repos, but as a developer, a lot of them are. Some are just clones of other people's repos - stuff I work on/play with - and some are old SketchUp and Fusion 360 projects that I didn't have anywhere better to put.

With desktop-class machines, where I bother to have a home directory, install creature-comfort software and have a regular user account… there are those project dirs as above. However, if it's a nanopi machine whose whole purpose in life is to be a "smart 18TB backup hard drive", I keep stuff all over /root, but I never (well, more like rarely) hack on stuff there - that whole OS is disposable to me, despite being Debian and being capable of more. If my microSD card in it died tomorrow, I'd curse, grab a spare from a drawer, and be back up and running in 30 minutes with junk all over /root (and LUKS rolling and re-encrypting stuff for the next week, probably). I'm kind of expecting it to die.

With complex stuff, best practice is to write a Makefile to make install from your source code repository in ~/projects/foobar to /usr/local/bin/foobar. I do this with Go and C binaries and shell scripts. For the purpose-centric nanopi adventure, I have a make deploytar and make deploy on my desktop: one puts together a tarfile that can unpack into /usr/local, and the other calls into ssh to install the binaries and a few blobs remotely… This approach was a bad idea; I should be making a Debian package instead of a tar file. Maybe I'll do that at some point.
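
For a single shell script, the install step that such a Makefile target ends up running is basically just this (paths are the conventional ones; adjust to taste):

# What "make install" typically boils down to for one script:
# copy it into /usr/local/bin with sane permissions, dropping the extension.
install -m 0755 ~/projects/foobar/foobar.sh /usr/local/bin/foobar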

With Python most scripts I have, most are very light on dependencies, so I just live with them breaking as I install stuff on their first use. There’s pip and venv and all that, but meh, my stuff is simple.

Some folks say "oh, you should build your own" distro-specific package or metapackage, but with backups and git I have enough comfort for my own tiny management scripts.

I think folks who distro hop or reinstall stuff often on different hardware might have better reinstallation scripts and practices than I do. I mostly use my Linux boxes on the command line over ssh, and it's Arch and Debian (testing) that are rolling release; I tend to use my hardware until it dies or I run out of physical space. I set up new stuff maybe once a year and it's always a different kind of machine. I have stuff in ~/local/bin/… like my homegit wrapper, which helps me sync my dotfiles (bashrc, tmux.conf, .vimrc, .gitconfig) and a handful of things within ~/local/bin itself. This helps me keep the command line experience mostly the same across the 3 (sometimes 4) places where I have project dirs for development-like stuff.

I have systemd timers running as myself and running as root, reading stuff from project dirs and invoking things in /root and all that; it works, it's not ideal, it's a lazy approach.

3 Likes

Tried writing this last night, my kb/touchpad battery died, I had no spares, had to wait a lot to recharge.

I never had a Linux course and everywhere I went (school, work, libraries etc.) I had to use Windows. So all the Linux stuff I knew I learned from web searching how to fix issues or how to do stuff. :slight_smile:

Probably too paranoid. I mean, there's a whole thread on how people name their PCs and VMs (naming conventions), not to mention the NeoFlex thread where your hostname is visible. Same goes for backup locations; they should be fine as long as you don't put credentials in them (plaintext passwords or private keys).

Last time I checked, you had to pay for that. But it was a long time ago.

wut?

Rsync won't work the usual way. To use rsync on Windows, you have to install Cygwin and win-rsync or whatever it was called. I don't remember how, but you'd use a Cygwin config file for rsync to map the "C:" drive as /cygdrive/c (or any path to any folder under Cygwin). I only did it once, when I tried using rsync-bkppc (a custom rsync made by BackupPC that sucks ass) and it wasn't working with Windows (for that matter, that rsync-bkppc didn't work with some Linux servers either). I cannot seem to find the how-to for Windows rsync… but I recall that win-rsync worked with a normal Linux rsync.

OK, I'll have to add some more checks to make sure the directory also exists. The folder needs to exist first, but it's good hygiene to also check for it in the script (because sometimes paths may be remote mounts that occasionally disconnect, and you don't want to back up to the local HDD).
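
Something like this at the top of bkp.sh would cover it (the path is still the placeholder from the earlier script):

# Bail out early if the backup destination isn't there (e.g. the share didn't mount),
# so the script never silently writes to the local disk instead.
BACKUPDIR=/path/to/backup
if [ ! -d "${BACKUPDIR}" ]; then
    echo "Backup directory ${BACKUPDIR} is missing - is the share mounted?" >&2
    exit 1
fi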

Yes, the /path/to/backup can be something like /mnt/wd/backups or /mnt/samba/share/backups or /media/user/1349881/usb-folder, etc. You only have to change it in the first variable definition (i.e. BACKUP="your path").

That WD garbage uses old SSH algorithms. This is why I abhor commercial home NAS boxes and routers: they never receive updates. That's why I want to build my own, or find one that just runs a barebones Linux distro, so I can update it basically indefinitely.

In any case, ssh-dss is deprecated for security reasons, so modern SSH clients won't talk to old SSH servers by default. To solve that, all you need to do is force the old algorithm:

ssh -oHostKeyAlgorithms=+ssh-dss user@hostname

Or if you want rsync, you do the same: you add " -e 'ssh -oHostKeyAlgorithms=+ssh-dss' " after rsync, followed by your parameters (-varoglAX or whatever else, man rsync), so the rsync command from the script would look like:

rsync -varoglAX -e 'ssh -oHostKeyAlgorithms=+ssh-dss' / --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} backup-user@your-backup-server:/path/to/backup/folder >> ${LOGFILE} 2>&1

This might be helpful.

And yeah, I recall that the rsync I tried was from msys2.

Depends on the individual. I prefer to keep my "production" scripts in /opt/scripts/ or something similar. Some people prefer the bin folder, alongside other programs. Historically I kept all the scripts that I hacked on in ~/Documents/scripts/, though that might change a little once I start using git more.

Check whether Alpine is available for the nanopi and run it in diskless mode, then rsync your /var to the SD card every 10 minutes. It should extend the card's lifespan. Otherwise, you can do the same with Debian: move your /var into a tmpfs, then sync it the same way every 10 minutes. OR you can do what I did with Void on my Pi 4: use a 2GB SD card for /boot, then install everything else (/, swap, /home, etc.) on a USB SSD. Normally the Pi 4 should be able to boot from USB, but mine doesn't (yes, I used Raspbian to update to the latest stuff; that didn't do anything).
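
As a sketch, the cron entry for that 10-minute /var sync could be as simple as (the SD card mount point is a placeholder):

# root's crontab: copy the RAM-backed /var to the SD card every 10 minutes
*/10 * * * * rsync -a --delete /var/ /mnt/sdcard/var-backup/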

For any custom programs that are not maintained by your package manager, extract them to /opt/bin. I believe that's what the documentation says. This is what I have done with Oracle Java, headless Chrome and other proprietary garbage.

Scripts don’t need to be built for any distro, they run everywhere. Scripts are more like a recipe calling on ingredients, you can put the recipe in any kitchen and it will only work if the kitchen you are in has all the ingredients (ignoring the difference in ingredients between Linux, *BSD and other *nix for the sake of simplicity, not all core-utils work the same).

I don’t distro hop, but my adventure to minimalism has resulted in me having a portable setup. I can switch setups with just rsync’ing my /home and running my xbps-install script (there may be some variations or unavailable software between x86 and ARM though, I encountered some, like libreoffice not being available on the aarch64 repo).

Edit 2: the original -oHostKeyAlgorithms=+ssh-dss was right, the command was wrong

4 Likes

A year or two ago there used to be a limit on the number of private repos github let you have with a free account, but since then they’ve removed that.

3 Likes

I had seen this in my trial with the WD drive, and I reattempted it, but I got the same error message…

rsync: Failed to exec yAlgorithms=+ssh-dss: No such file or directory (2)
rsync error: error in IPC code (code 14) at pipe.c(85) [sender=3.1.3]
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in IPC code (code 14) at io.c(235) [sender=3.1.3]

From this error it seems I need to install a package for the “-oHostKeyAlgorithms=+ssh-dss” portion of the command to work properly if I understand correctly?

I’m just putzing around this am… I did however find a way to SSH into my WD drive as root…LOL Oh boy I can really be dangerous now LOL jk

1 Like

No, -oKexAlgorithms is part of the OpenSSH client; it's just a parameter. Also, it seems you are doing something wrong if the error says:

Because it should be -oKexAlgorithms=+ssh-dss

OH. My bad, I copy pasted something wrong. The command is -oKexAlgorithms=+ssh-dss. I’ve edited my previous post.

2 Likes

I gave that a shot and got something else…

rsync: Failed to exec xAlgorithms=+ssh-dss: No such file or directory (2)
rsync error: error in IPC code (code 14) at pipe.c(85) [sender=3.1.3]
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in IPC code (code 14) at io.c(235) [sender=3.1.3]

HOWEVER… I did almost get a copy to work running rsync from my Ubuntu distro to a local folder. I did have to use a modified command. I don't have it, as I rebooted back to the Windows machine to try rsync to the WD My Cloud.

It's not a HUGE rush; this can take me weeks. It's my first script… and my first time using rsync EVER, lol. I'm reading about it now and what the different modifiers are. I do see that the "exclude" addition is a nice option.

I'm not really worried about space, and it would be easiest to just back it up in a format I can re-flash onto an SD card with Etcher or the Pi Imager.
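
From what I'm reading, the usual trick for that seems to be imaging the whole card from another Linux box (the device name below is a placeholder - confirm it with lsblk first, and do it with the card out of the running Pi):

# Clone the whole SD card into a compressed image that Etcher / Raspberry Pi Imager can re-flash.
# /dev/sdX is a placeholder - verify the actual device with lsblk before running this.
sudo dd if=/dev/sdX bs=4M status=progress | gzip > pi-backup-$(date +'%Y%m%d').img.gz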

@ThatGuyB I really appreciate the help; this is all so foreign to me. I know formatting is so important, and I made a few mistakes there that I think I've corrected. I'm learning that the back end of the rsync command tells it what to copy from and to. I think I got that format right.

rsync -oKexAlgorithms=+ssh-dss / --exclude={"/dev/*","/proc/*","/sys/*","/run/*"} [email protected]:/Public/BUpiTEST

I hope it doesn't give away too much info. From what I'm reading, either names or IPs can be used for the destination.

@risk Thanks for the info. I really try to keep my file system simple. And I have to keep notes to keep track till I get more familiar. I’m still so used to being able to see a “tree” whenever I want to know where things are. So I keep one on paper lol

2 Likes

Another thing that can go wrong with certain commands (happens a lot to me when using find) is the order of the parameters.

Also, it appears that the -oHostKeyAlgorithms=+ssh-dss I found the first time was actually correct (-oKexAlgorithms is what I used for other stuff, usually involving old switches that don't have new OpenSSH servers and require the older +diffie-hellman-group1-sha1 option).
https://www.openssh.com/legacy.html

Also, I just found a solution:

rsync -varoglAX -e 'ssh -oHostKeyAlgorithms=+ssh-dss' / --exclude={"/dev/*","/proc/*","/sys/*","/run/*"} [email protected]:/Public/BUpiTEST

And no, your private IP and that path don't tell the reader anything about your setup (other than the path where you save stuff, which is useless unless someone finds your public IP address and manages to get remote or physical access to your network).

2 Likes