Any systemd Automation Experts out there? (Help needed)

It's not every day that you come across the need to use systemd for automation. We often take for granted all the work it does in the background for us. We love to toss around terms like "just use cron"… or you know… "just use cron"… or even better… "just use cron." Just because something is the accepted way and fits most requirements does not mean it fits all.

For that explicit reason I have zero interest in entertaining any cron, s6-init or alternative init system conversations in this thread, so please don't mention them. We can discuss the differences in the lounge if you'd like. I don't mind those conversations, so if that's what you want, @ me in the https://forum.level1techs.com/t/the-lounge-2020-two-edition/179938?u=phaselockedloop

Cool, now that I've got that statement out of the way: here are my requirements, and here's a long, thorough post so you aren't lacking information. I am hoping that if I am successful here, other souls out there seeking the same thing can use this as a base and build on it. Maybe, just maybe.

  • OS Agnostic: systemd is in wide use, and I should be able to tailor this to each of my OSes: Rocky, Arch and Fedora.
  • Fault Tolerant: If something fails, it needs to be able to kick off additional recovery actions before failing outright.
  • LOGGING: It must output to journald, otherwise my later plan to email myself logs and alerts won't work as seamlessly. Good logs save time.
  • Proper Process Handling: I want proper signal termination, exit-status handling, startup error handling, etc. Not just a bare process, or so I am hoping.

AFTER GOAL (required eventually, but optional for us to look at right this minute):
When this is fully set up, I want the system to email me errors, and maybe I can start creating rudimentary, repeatable diagnostic steps triggered by a systemd process failure, but that is AFTER I get this working flawlessly.

So what's on my automation TODO list that I need help with?

I want help understanding forking processes.

How can I make one maintenance timer/service execute other child maintenance timers and services?

Here's an outline of some tasks on Odin that need to be completed.

Nextcloud has a cron.php that needs to be executed from the host inside the docker container. The irony here: I need systemd to automate the cron tasks. Haha. Currently I have a timer and service that handle this, and that's fine:

# PLLs Nextcloud System Daemonization of Cron for Nextcloud (NCSysDCN)
# Timer File

[Unit]
Description=Timer executes nextcloud cron tasks every 30 minutes
Requires=NCSysDCN.service

[Timer]
Unit=NCSysDCN.service
OnCalendar=*-*-* *:00/30:00
Persistent=true

[Install]
WantedBy=timers.target
# PLLs Nextcloud System Daemonization of Cron for Nextcloud (NCSysDCN)
# Service File

[Unit]
Description=System Daemonization of nextcloud cron tasks every 30 minutes
Wants=NCSysDCN.timer

[Service]
Type=oneshot
ExecStart=/usr/bin/docker exec -u www-data nextcloud-server php cron.php

[Install]
WantedBy=multi-user.target

Easy. I really need to change its name, but easy. Where this gets more complicated is updating, cleaning and pruning docker containers. Would this be a candidate for a forking process, and how would I set that up?

Here is what is currently occurring:

Timer:

# Docker Image Updater
# Timer File

[Unit]
Description=Timer executes fresh docker image pull
Requires=DImgUpdate.service

[Timer]
Unit=DImgUpdate.service
OnCalendar=*-*-* 5:30:00
Persistent=true

[Install]
WantedBy=timers.target

Service:

# Docker Image Updater pull
# Service File

[Unit]
Description=System Daemonization of docker image update
Wants=DImgUpdate.timer

[Service]
Type=oneshot
ExecStart=/usr/bin/docker-compose -f /mnt/OnePoint21GigaWatts/containers/docker-compose.yml pull

[Install]
WantedBy=multi-user.target

This really isn't a good way to do it. I'm thinking this is what I want to do to all-in-one my systemd setup to keep docker running smoothly, pruned and updated.

First I will execute a docker shutdown of all containers:

docker-compose -f <yml path> down

Then I will execute the cleanup and update with:

/bin/zsh -c 'docker system prune -af --filter "until=$((30*24))h" && docker-compose -f <yml path> pull'

to update all the container images, then just bring the containers back up?

docker-compose -f <yml path> up -d

I think this is much more lightweight and reliable than Watchtower; I'm not sure I want to go as far as implementing Watchtower. Let me know your thoughts. I have yet to decide whether this will be simple or forking, or how I am going to write it. I'd like all of this to be logged in journald.
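That down → prune/pull → up sequence maps fairly naturally onto a single oneshot unit. Here is a hedged sketch, using the compose path from the DImgUpdate unit above and a hypothetical DockerMaint name; note that shell arithmetic like $((30*24)) is not expanded inside unit files, so the filter is written out as 720h:

```ini
# DockerMaint.service — hypothetical combined prune/update unit
[Unit]
Description=Shut down, prune, update and restart all docker containers
Wants=DockerMaint.timer

[Service]
Type=oneshot
ExecStartPre=/usr/bin/docker-compose -f /mnt/OnePoint21GigaWatts/containers/docker-compose.yml down
ExecStart=/usr/bin/docker system prune -af --filter until=720h
ExecStart=/usr/bin/docker-compose -f /mnt/OnePoint21GigaWatts/containers/docker-compose.yml pull
ExecStartPost=/usr/bin/docker-compose -f /mnt/OnePoint21GigaWatts/containers/docker-compose.yml up -d

[Install]
WantedBy=multi-user.target
```

Multiple ExecStart= lines are only allowed for Type=oneshot; they run in order, the unit fails at the first non-zero exit, and every step's output lands in the journal under the unit's name, which gives you the sequencing without any forking.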

Now, in addition to this, I want to auto-update, clean and reboot as well, and right now I have separate systemd services and timers for each.

autoclean.timer

[Unit]
Description=Timer trigger for automatic pacman cleaning

[Timer]
OnCalendar=*-*-* 3:00:00
Persistent=true
Unit=autoclean.service

[Install]
WantedBy=timers.target

autoclean.service

[Unit]
Description=Automatic Cleaning
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/bin/pacman -Scc --noconfirm
TimeoutStopSec=3500
KillMode=process
KillSignal=SIGINT

[Install]
WantedBy=multi-user.target

autoupdate.timer

[Unit]
Description=Automatic update timer trigger at 0400 hours tango

[Timer]
OnCalendar=*-*-* 4:00:00
Persistent=true
Unit=autoupdate.service

[Install]
WantedBy=multi-user.target

autoupdate.service

[Unit]
Description=Automatic update at 0400 hours tango
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/bin/pacman -Syyu --noconfirm --noprogressbar
TimeoutStopSec=3500
KillMode=process
KillSignal=SIGINT

[Install]
WantedBy=multi-user.target

autoreboot.timer

[Unit]
Description=Reboot Scheduling.

[Timer]
OnCalendar=*-*-* 05:30:00
Unit=reboot.target

[Install]
WantedBy=timers.target

The thing is… I definitely should be able to just create a new timer called maintenance.timer that triggers maintenance.service, which would then execute forking child processes: one for Nextcloud's container, one for docker, one for updating, one for cleaning the pacman cache, and one to reboot after everything passes to instantiate the updated stuff.

The parent process should execute these, wait for all of them to complete, then report back what was successful and what wasn't. Parent and child processes must log output to the journal. This would greatly simplify things and greatly reduce the time it takes for the system to handle it all, instead of assigning arbitrary, spaced-out timers.
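One way to sketch that parent, reusing the unit names from this thread (hedged, untested on these boxes): for a Type=oneshot unit, multiple ExecStart= lines run in order, systemctl start blocks until a oneshot child finishes, and each child still logs to the journal under its own unit name.

```ini
# maintenance.service — hypothetical parent unit
[Unit]
Description=Nightly maintenance parent job
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
# Children run sequentially; a non-zero exit fails the parent here.
ExecStart=/usr/bin/systemctl start NCSysDCN.service
ExecStart=/usr/bin/systemctl start DImgUpdate.service
ExecStart=/usr/bin/systemctl start autoupdate.service
ExecStart=/usr/bin/systemctl start autoclean.service
# Reboot last, without waiting on the result.
ExecStart=/usr/bin/systemctl --no-block start reboot.target
```

A single maintenance.timer with Unit=maintenance.service would then replace the staggered 03:00/04:00/05:30 timers.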

I have no idea where to start and know I need some deeper understanding of systemd. Thoughts and suggestions welcome.

> OS agnostic
> systemd
Pick one.

I’m not sure about this in systemd. But

[Unit]
StartLimitInterval=200
StartLimitBurst=5
[Service]
Restart=always
RestartSec=30

Before systemd gives up restarting, it will try 5 times within the 200-second interval, with 30 seconds between attempts. If your service takes a while to start, increase RestartSec.

The docker thing goes beyond me, so you'll have to explain in layman's terms what you want to do. systemd can be used on a timer to start a service that terminates with exit 0, and you can set that unit file to depend on, or rather conflict with, another unit file / service, meaning you can tell systemd to stop a service, then run this one.

And I'm not sure what you mean by forking services. Why would you want to do that if you want to keep everything under journald? Keep everything in its own service / unit file, but don't try to fork the processes; you're going to lose systemd's monitoring if you do. Unless I misunderstand what you mean by forking.

Of course, I'm not exactly a systemd guru, so I'll need to look on the internet for how to do that.

It would still stay in the journal, as long as it's part of the process.

> If set to forking, it is expected that the process configured with ExecStart= will call fork() as part of its start-up. The parent process is expected to exit when start-up is complete and all communication channels are set up. The child continues to run as the main daemon process. This is the behavior of traditional UNIX daemons. If this setting is used, it is recommended to also use the PIDFile= option, so that systemd can identify the main process of the daemon. systemd will proceed with starting follow-up units as soon as the parent process exits.

Basically this lets me create a sequence and monitor by PID and service name.
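To make that fork() dance concrete, here is a minimal sketch (hypothetical path, with a sleep standing in for the real daemon) of what the ExecStart= program of a Type=forking unit is expected to do:

```shell
#!/bin/sh
# Minimal sketch of a Type=forking start-up: the parent launches the
# real worker in the background, records its PID for systemd's
# PIDFile= option, and then exits.
PIDFILE=/tmp/forking-demo.pid

# The "daemon": a sleep standing in for the real long-running work.
sleep 30 &
echo $! > "$PIDFILE"    # systemd reads this file to find the main PID

# Parent exits; with Type=forking, systemd now considers start-up
# complete and tracks the child via PIDFile=.
exit 0
```

In the matching unit you would set Type=forking and PIDFile=/tmp/forking-demo.pid so systemd monitors the child rather than the already-exited parent.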

Why doesn't the docker image do all of the scheduled things it needs to do? Why must this be done on the host instead of in the container?


It's not built into the container yet. There are things Nextcloud does right, and there are things they eat glue in the corner on.

WebCron, cron and pretty much every choice but AJAX has to be handled by the docker host executing php cron.php. Though I heard that might have changed in the latest docker image; I know there were a lot of GitHub issues on it. I haven't removed it because it just works.

What you do then is expose cron.php to the host and add all the cron jobs to it. I just use the container as was instructed way back in version 20.


In principle, you only want to orchestrate these in systemd if you have to; otherwise you'd put things in a shell script, where you can write basic conditionals and functions more easily.

You can use before/after to have systemd run units in some coordinated way.

Watchtower works fine for blind updates. In an ideal world you'd have some health checking before and after a container update and an automated rollback on regressions; however, Watchtower, in my experience at least, does what it says on the tin.

So, in bash:

function clean1() {
  echo "first cleanup task"   # placeholder for the real commands
}

function clean2() {
  echo "second cleanup task"  # placeholder for the real commands
}

clean1 | logger -t clean1 &
clean2 | logger -t clean2 &
wait

… and then you run this from your unit, and you get these two in parallel, or not; one of these could even be a systemctl start command. You could also use set -x to get tracing into the log, and make this as complicated or as simple as you want. Details up to you.
