Easy to follow Beginner Guide on s6 Starter Pack

Table of Contents

Goal of this user guide

The main goal is to get people started with the bare basics of s6, without the use of suite66. This is not a systemd bashing thread. We can make a separate thread for init wars and service supervision wars, but this thread is not for that. Here we just discuss s6 and how to use it.

Details about s6; click to read more, or skip to the User Guide.


s6 is a suite of software tools that takes care of system init (PID 1), service supervision and logging.

s6 stands for skarnet.org’s small and secure supervision software suite.

You can find more about s6, its design choices and comparisons to other inits and service supervisions on its official website (here, here and here).

Think of s6 as a replacement for systemd’s most important system features: initialization and service management.

The s6 suite is composed of 3 main pieces, each handling its own part of the system.

  • s6-linux-init
  • s6-svscan
  • s6-rc

s6-linux-init

The first one is not something to look too deeply into. This is just a minimal init made by skarnet. It can be replaced by anything else, really, as long as it calls the next 2 components. This is the first and only process that the kernel starts. The focus is on Linux, as it’s currently the only officially supported kernel (there’s no reason s6 couldn’t have a BSD version, just that nobody has taken the time to implement a BSD port of s6-linux-init).

Skipping ahead of the system boot process (as it’s irrelevant here): the only thing that you really need to know is that the bootloader (e.g. grub, gummiboot, petitboot) will load the kernel and initramfs into memory and the kernel will start the init process (s6-linux-init in our case). The argument passed from the bootloader’s kernel command line to the init is “default” by default.

The reason this is mentioned is because you can force your system into special modes by adding a cmdline to the bootloader. In grub, you’d press “e” on the line you want to boot and in the “kernel / linux” line, you would add at the end a variable (without the “=” sign) to force s6 into loading only a certain service or bundle of services.

For example you would add “single-user” to make s6 load the “single-user” bundle. There’s no such bundle by default, this is just an imaginary example (i.e. trying to boot into single-user mode, if that existed). Or if you wanted to start the “networking” service and all its dependencies, but nothing else, you would add “networking” at the end of the bootloader kernel line.

Note: this assumes that “networking” is properly configured to include all the startup system dependencies, like loading kernel modules, rootfs mounting and starting a TTY.

Similarly, if you only wanted to start “sshd” and nothing else, you would add “sshd” at the end of the line (again, given that sshd and all its upstream dependencies are properly configured).
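For example, an edited grub “linux” line might look like this (an illustrative sketch; the kernel path and root device are made up, only the trailing “sshd” is the s6 argument):

```text
linux /boot/vmlinuz root=/dev/sda2 rw sshd
```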

As can be deduced from the above, the purpose of s6-linux-init is to bring up PID 1. The part between the kernel launching the init and PID 1 being fully up is called “init 1” or “init stage 1” by some.

s6-svscan

This one needs some attention, as part of the day-to-day operations of s6 will involve the PID 1 (indirect) control. This s6 process will always be PID 1 in a linux system running the whole s6 suite (s6-linux-init, s6 and s6-rc). It can also be PID 1 invoked by another init, but that’s another story. The phase between PID 1 launching and the system shutting down is called “init stage 2.”

The purpose of PID 1 in a daemontools-like environment (or runit, if you’re familiar with that) is to start monitored processes that are not running or that die, and to reap orphaned processes. We won’t get into the latter, but will focus on the former.

For every monitored daemon, the s6-svscan process will launch a monitoring process, called s6-supervise. If we take dhcpcd as an example, then the processes would look like this:

UID        PID  PPID  C STIME TTY          TIME CMD
root       525     1  0 22:17 ?        00:00:00 s6-supervise dhcpcd-eth0
root       937   525  0 22:17 ?        00:00:00 dhcpcd: [manager] [ip4] [ip6]

You can see how s6-supervise launches the service named “dhcpcd-eth0” and how the dhcpcd daemon’s parent PID is 525.

Each process supervised by s6-svscan will have its own directory under /run/service by default. The scope of s6-supervise is to actually monitor the state of the child process and “log it” in the service’s directory (e.g. /run/service/dhcpcd-eth0).

Without getting into details, s6-svscan takes action based on the contents of each service’s folder.

By default, all monitored services in a full s6 suite are stopped, by having a file named “down” in the service folder (e.g. /run/service/dhcpcd-eth0/down). Without any service manager at all, all services would be started and monitored by s6-svscan. This is very similar to daemontools and runit. With s6-rc, the “down” files get created in order to ensure dependencies between services are satisfied, and s6-rc will remove the down file to have s6-svscan start more processes.

Note: s6-svscan will still trigger the s6-supervise process for any long-running service, even if there’s a down file. That’s because the service is still supervised, even if it’s stopped. That way, if the down file disappears, the s6-supervise process will launch the service immediately. You can use this to your own advantage to stop and start services on-demand. And this is what you’ll be doing, kinda (although using s6 utils that’ll be discussed later).
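A quick sketch of using that to your advantage with the s6 utilities (the service name is the dhcpcd-eth0 example from above; this assumes a live s6 system):

```shell
# Tell s6-supervise to bring the service down (SIGTERM by default)
s6-svc -d /run/service/dhcpcd-eth0

# Tell s6-supervise to bring it back up (same effect as the down file disappearing)
s6-svc -u /run/service/dhcpcd-eth0
```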

To get rid of the s6-supervise process (even if the service is down), you need to remove the service from supervision. In an s6-rc environment, you can do that by compiling a new DB that completely excludes the service from the source folder. Otherwise, it’s probably easy to just delete the symlink from /run/service and s6-supervise would come crashing down, but I don’t think that’s recommended, and upon reboot it would be recreated anyway (/run is a tmpfs).

s6-rc

This is the 2nd most important aspect of this wiki and probably also the 2nd most important component of a supervision suite (after the process that keeps spawning other processes, like s6-svscan).

This utility is used by sysadmins and package maintainers to define dependencies between supervised processes.

s6-rc can manage 3 types of services:

  • longruns
  • oneshots
  • bundles

Out of these, only “longruns” and “oneshots” are considered “atomic services.” The “bundle” service is just an internal group that references atomic services or other bundles.

All atomic services can have dependencies on other atomic services (longruns and / or oneshots) and / or bundles. However, bundles can’t have dependencies (this is by design, because of other aspects that would be difficult to manage, both in code and in the manageability of an environment).

To make the administration of s6-rc services easy, its design has been thought out in such a way that it makes running automation around s6-rc really easy. There are frontends for that, like the 66 suite (that’s literally “a bunch of scripts” that help you control s6). And you can write scripts and run commands around s6 to make your life easier. One example of scripts and commands around s6-rc sources is the other wiki that focuses on an opinionated s6 suite configuration.

The reason the focus is on s6 here is to give you an understanding of its inner workings and allow you to control s6 directly.

The only thing that a bundle can apply (propagate) to its services is the file called flag-essential. What this flag does is define a service as “important,” so that a command (s6-rc -d change) doesn’t work on it and you must use a stronger version of said command (s6-rc -D change) in order to stop or restart the process. We’ll come to those commands later in the User Guide.

s6-rc requires that you create a “source” folder containing all the services on the system that you want defined, from which you compile an s6-rc database. Don’t think of this like compiling the linux kernel or a browser; it’s a 1 second operation (even with many services in the source). You can think of it like a “configuration save / commit to a bundle of files.”

Each type of service (longruns, oneshots and bundles) has its own properties, which generally translate to files in the source service folder; these will be discussed under each type of service.

longruns

Those are classic “daemon” type services, e.g. sshd or dhcpcd. Those kind of services run indefinitely, until they are stopped or they crash. s6-rc controls the state of the service (if it should be up / down) by ensuring its dependencies are met. The s6-svscan (and its leaf s6-supervise) takes care of actually starting the service.

As explained before, all services are born with a down file in their service folder. s6-rc checks if the service dependencies are met (or if it has no dependencies) and removes the down file, allowing s6-supervise to start the daemon.

Longrun services have the largest number of possible definitions in the s6-rc source folder:

  • type (mandatory) = [type] file, [contents] longrun

  • run (mandatory) = [type] file, [contents] execline script that launches a daemon

  • dependencies.d (optional) = [type] folder, [contents] files named after services

  • producer-for (optional) = [type] file, [contents] name of another service

  • consumer-for (optional) = [type] file, [contents] name of 1 or more services

  • pipeline-name (optional) = [type] file, [contents] name that doesn’t conflict with another service

  • flag-essential (optional) = [type] file, [contents] irrelevant

  • timeout-up (optional) = [type] file, [contents] integer

  • timeout-down (optional) = [type] file, [contents] integer

  • timeout-kill (optional) = [type] file, [contents] integer

  • timeout-finish (optional) = [type] file, [contents] integer

  • finish (optional) = [type] file, [contents] execline script

  • notification-fd (optional) = [type] file, [contents] integer

  • lock-fd (optional) = [type] file, [contents] integer

  • max-death-tally (optional) = [type] file, [contents] integer

  • down-signal (optional) = [type] file, [contents] kill signal number (integer)

[WIP]

  • data (optional) = [type] directory, [contents] user data files
  • env (optional) = [type] directory, [contents] environment variable files
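Putting the list above together, a minimal longrun source definition could look like this (an illustrative layout; the file names are the s6-rc definitions listed above, the service and dependency names are made up):

```text
source/dhcpcd-eth0/
├── type               # contains: longrun
├── run                # execline script that launches dhcpcd in the foreground
├── notification-fd    # contains: an fd number, if the daemon supports readiness
└── dependencies.d/
    └── networking     # empty file named after another defined service
```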

oneshots

This is kind of a “new-age” thing. In the *nix world, services can’t really be started without first meeting some one-off dependency, like mounting a file system. But mounting a file system isn’t going to leave a long-running daemon, it’s a one-shot script (mount /dev/disk /path). After that, the process is done.

Oneshots are just that, scripts. Their purpose is to run commands that won’t be long-running (literally “not longruns” - makes sense, right?). The idea is that you need some preliminary dependencies for longruns to start (like a logger needing a writable file system mounted, even if it’s just a tmpfs).

The oneshot type services can have these definitions in the s6-rc source file:

  • type (mandatory) = [type] file, [contents] oneshot
  • up (mandatory) = [type] file, [contents] an execline script that in the end must exit with a code (exit 0 = success, any other code = failure)
  • down (optional) = [type] file, [contents] an execline script that in the end must exit with a code (exit 0 = success, any other code = failure)
  • dependencies.d (optional) = [type] folder, [contents] files named after services
  • flag-essential (optional) = [type] file, [contents] irrelevant
  • timeout-up (optional) = [type] file, [contents] integer
  • timeout-down (optional) = [type] file, [contents] integer
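An illustrative oneshot layout (service and dependency names are made up):

```text
source/mount-tmp/
├── type               # contains: oneshot
├── up                 # script that mounts a tmpfs, exits 0 on success
├── down               # optional script that unmounts it again
└── dependencies.d/
    └── rootfs-rw      # empty file named after another defined service
```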

Note: just because the up (and down) file is lexed by execline, the script doesn’t necessarily need to be an execline script. It’s recommended, but the up script can be any other kind (shell, python, perl) as long as it has the proper shebang at the top (if no shebang is present, the script is treated as execline by default).

The reason you should write the up / down scripts in execline is to save space in the s6-rc db (by only storing a small execline invocation that execs into the actual shell or what-not script) and because of the very slight chance that the parsing of a shell script could go wrong (unlikely, but theoretically possible).

bundles

Bundles, like mentioned before, are nothing more than a collection of atomic services and / or other bundles, i.e. a bundle can be comprised of:

  • atomic services
  • bundles
  • atomic services and bundles

The bundle type services can have these (very few) definitions in the s6-rc source file:

  • type (mandatory) = [type] file, [contents] bundle
  • contents.d (mandatory) = [type] folder, [contents] name of services (can be empty)
  • flag-essential (optional) = [type] file, [contents] irrelevant (for bundles, this gets propagated to actual services)
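An illustrative bundle layout (names are made up):

```text
source/multi-user/
├── type               # contains: bundle
└── contents.d/
    ├── networking     # empty files named after atomic services
    ├── sshd           # ...or other bundles
    └── dhcpcd-eth0
```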

The contents of each definition file mentioned under the service types above are detailed below.

type

Applies to all services.

Not much to say here, the contents can only be either longrun, oneshot or bundle. This defines the type of service you’re going to run. Mandatory for all services.

run

Applies to longrun services.

This is the script used for the longrun services. As mentioned, it should be preferably written in execline, but doing it in shell or other languages works, as long as you define the shebang (e.g. #!/bin/sh) at the top. If you’re doing execline, you don’t have to specify a shebang (by default that’s what s6-rc-compile assumes).

Note: the daemon that’s being run must not background itself. For example, if you’re writing an sshd service, you must use the -D option, to prevent it from detaching. Another example is dhcpcd: you must specify the -B flag. If a process detaches, it’s going to get killed and you also won’t see any logs from it. The daemon must run on the invoked command line indefinitely.
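A sketch of a run script in execline for the dhcpcd example (fdmove -c 2 1 redirects stderr to stdout so a consumer / logger can pick it up; -B keeps dhcpcd in the foreground):

```text
#!/usr/bin/execlineb -P
fdmove -c 2 1
dhcpcd -B eth0
```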

dependencies.d

Applies to atomic services.

When adding files under the dependencies.d folder, you must ensure that the services actually exist and are defined, or compiling the s6-rc db will fail. The content of the files doesn’t matter, s6-rc doesn’t check for contents, only the presence of files and their names.

It’s ideal that those are just empty files though (touch /path/to/source/sshd/dependencies.d/networking). In theory, you could utilize the file to define a comment or something.
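A minimal sketch of declaring a dependency from the command line (paths and service names are hypothetical; on a real system the source folder is wherever your distro keeps s6-rc sources):

```shell
# Sketch: declare that a hypothetical "sshd" service depends on "networking".
# "./s6-rc-source" is a stand-in for your real s6-rc source folder.
src=./s6-rc-source
mkdir -p "$src/sshd/dependencies.d"
echo longrun > "$src/sshd/type"
touch "$src/sshd/dependencies.d/networking"   # empty file; only the name matters
ls "$src/sshd/dependencies.d"
```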

producer-for

Applies to longrun services.

This is utilized to output the stdout (and preferably you redirect stderr to stdout in the run file) to another service.

In this file you write the name of another service, typically a logger service, e.g. openssh-server would be a producer for openssh-server-log service. The service for which this service is a producer-for must exist.

A service can only be a producer for a single other service. However, you can have a service that’s both a consumer and a producer, meaning it’ll receive output from another service (e.g. log it) and then also redirect its own stdout to yet another service.

If you have a producer-for defined in a longrun, you must have a consumer-for defined at the other end of the output pipeline.

consumer-for

Applies to longrun services.

This is the opposite of producer-for. This is utilized to receive output from one or more other services.

You can do whatever you want with the output received. A logger will typically write it to a log. I haven’t tested it, but I think it’s possible to have a service openssh-server-log receive output from openssh-server service, write the output to a log file, then this openssh-server-log could be a producer-for an rsyslog, so you write data both to a log file locally and externally to a remote logger service.

And since a consumer-for can receive input from multiple other longruns, you could have the final rsyslog service technically consume the logs of all the local file loggers. Alternatively, as another example, you could have the logs written to files, for manual analysis and the loggers could have the output sent to a log analyzer, to map (maybe using grep or something smarter) errors and attach them to a database. Grafana Loki (another example) also comes to mind as a final consumer, where the contents of the logs don’t get analyzed, but instead labels each log stream.

If you have a consumer-for defined in a longrun, you must have a producer-for defined at the other end of the input pipeline.

pipeline-name

Applies to longrun services.

Its contents will generate a bundle service automatically, with the bundle’s name being the contents of this file. E.g. if the contents is sshd, a bundle with that name will be automatically created from all the services involved in the pipeline.

This file can be created anywhere you have a consumer-for file, but it will only take effect at the final consumer service. If we take the previous example of openssh-server, openssh-server-log and rsyslog, if you have the pipeline-name file in both openssh-server-log (which is consumer-for = openssh-server and producer-for = rsyslog) and inside rsyslog source service definition (only consumer-for = openssh-server-log), then the pipeline-name will only be applied at the end, on the rsyslog.

If you have a different name in the pipeline-name file in the 2 services sources, only the last one will take effect. If you have the same name, then the middle one is ignored anyway and also only the last one will take effect.
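Tying producer-for, consumer-for and pipeline-name together, the openssh-server example would be declared roughly like this (an illustrative sketch; one line per source file, with its contents after the colon, and the bundle name is made up):

```text
openssh-server/producer-for      : openssh-server-log
openssh-server-log/consumer-for  : openssh-server
openssh-server-log/producer-for  : rsyslog
rsyslog/consumer-for             : openssh-server-log
rsyslog/pipeline-name            : ssh-pipeline   (a bundle named ssh-pipeline is auto-created)
```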

flag-essential

Applies to all services. On bundles, it propagates to atomic services inside the compiled database.

Its contents aren’t checked, only its presence. It’s preferable that it’s an empty file. As explained before, this will prevent the s6-rc -d change service command from acting on said atomic service or bundle; to make it truly go down, you need to use the s6-rc -D change service command. This doesn’t do anything other than literally define a service as important, to avoid accidental restarts of it.

As an example, one can define the bundle sshd as essential. The sshd bundle would normally be part of a multi-user bundle. If one would want to s6-rc -d change multi-user in order to stop all multi-user services, like a GUI shell, a login manager, other non-important services and so on, with the flag-essential file inside the sshd bundle or the openssh-server service, the latter should remain up.

In a more precise example, if you have a website bundle named “web” and its contents are mysql, nginx and php services, and you run s6-rc -d change web, but you have marked mysql with flag-essential, then mysql won’t go down, but the others would. Normally you’d define mysql as a dependency for nginx and php too, so you could have a bundle “web-all” and another “web-without-db” or just “web,” which you could use to control services at once.

timeout-up

Applies to atomic services.

Its contents must be an integer, which defines the maximum number of milliseconds that s6-rc will wait for a successful completion of a service start. If your service is supposed to only start in a certain number of seconds and it takes longer, s6-rc will declare the state change (service starting up) to be failed.

If the contents = 0, or the contents are empty, s6-rc will wait indefinitely. If the file doesn’t exist, s6-rc treats the value as 0 and waits indefinitely for the service to start (the default).

This could have been useful in identifying the xz vulnerability, had it not been written just for systemd. Imagine if you know openssh-server must start in under 1700 milliseconds, but instead, because of the xz vuln, it needs 500 more ms to start. An immediate red flag!

timeout-down

Applies to atomic services.

Similar to timeout-up, its contents must be an integer, which defines the maximum number of ms that s6-rc will wait for a service to stop. If the service didn’t stop in this amount of time, s6-rc will report it failed to stop it.

If the contents = 0, or the contents are empty, s6-rc will wait indefinitely. If the file doesn’t exist, s6-rc treats the value as 0 and waits indefinitely for the service to stop (the default).
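A sketch of setting both timeouts on a hypothetical service (values are milliseconds; the path is a stand-in for your real s6-rc source folder):

```shell
# Sketch: give a hypothetical "openssh-server" source definition a 5 second
# start timeout and a 10 second stop timeout (values are in milliseconds).
src=./s6-rc-source/openssh-server
mkdir -p "$src"
echo 5000 > "$src/timeout-up"
echo 10000 > "$src/timeout-down"
```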

timeout-kill

Applies to longrun services.

Its contents must be an unsigned integer. It defines the maximum number of milliseconds that s6-supervise will wait for a process to die. When s6-supervise receives the command s6-svc -d or s6-svc -D (which are triggered by s6-rc -d / -D), it will wait this long after it has sent a SIGTERM and a SIGCONT signal to the daemon. If the longrun doesn’t die after that amount of time, s6-supervise sends a SIGKILL signal.

If the file doesn’t exist, or its contents are either set to 0 or an invalid value (like a negative integer or a string), then the service is never forcibly killed and s6-supervise (and thus s6-rc) waits indefinitely for a service to come down just from SIGTERM.

If you’ve used systemd and you’ve seen that super annoying message when you shutdown your linux box “waiting 30 seconds for process to finish,” which then increments to “waiting 1 min and 30 sec for proc to finish,” then “waiting 3 min,” then “w8ing 5min” (I’ve once waited over 40 minutes for /home to be unmounted and had to force poweroff my PC, thanks Manjaro / systemd!), then this is similar, but with none of the nonsense.

If your linux box has services that just hang and you’ve defined a timeout-kill for 15 sec for all your services, then that’s what you’re going to get for each before they’re kill -9’ed. No silly incrementing values because something’s important. But keep in mind when you shutdown your system, the 15 sec will be triggered in order, i.e. the hung service that has no dependents gets killed first after 15 seconds of waiting, then s6-rc moves to its dependencies and if that hangs, you get to wait 15 more seconds and so on (in reality, it’s unlikely that many services will hang).

timeout-finish

Applies to longrun services.

Its contents must be an unsigned integer. It defines the maximum number of milliseconds after which the finish script (detailed next), if it exists, will be killed with SIGKILL.

If this file doesn’t exist, its default value is 5000 ms. If the finish script is running for more than 5 seconds, s6-supervise will kill the finish script. If the contents of timeout-finish is set to 0, then s6-supervise will wait indefinitely for finish to complete.

finish

Applies to longrun services.

This file is similar to run, it can be any executable file (even a binary), but should instead be treated like the down script from oneshots. This is an executable that is always going to be run after a longrun run script dies (be it intentionally stopped or crashed).

Its purpose is to clean up after a supervised process, but it can be anything. You can literally make this into an email notification daemon, e.g. if sshd is stopped or dies on an important server, finish will send you an email about the incident. It probably has better uses, like deleting non-volatile data which the daemon is not programmed to have when it starts, as an example. Or you could have a daemon that processes data, and if it ever dies, you can have finish take a backup or something (just make sure that you define your timeout-finish time, otherwise finish gets killed in 5 sec).

If run is supposed to be up (e.g. it was only restarted or it crashed and needs to start), then run only gets launched after finish has executed.

The finish script is executed with a few pieces of information about how run died:

  • the exit code of the run script (= 256 if run was killed by a signal)
  • an undefined number (= the signal used to kill run)
  • the name of the service directory (e.g. openssh-server)
  • the process group id of the defunct run script (useful for finish to clean up children processes left behind by run if it died - apparently not really reliable if run was spawning child processes in different process groups)

If the finish script exits with code 125, then s6-supervise treats it as a permanent failure of the service and won’t attempt to restart it.

If s6-supervise was instructed to exit after a service stops (e.g. s6-svc -x or SIGHUP), then this will be the last invocation of finish and stdin and stdout will point to /dev/null.
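A minimal finish sketch in plain shell (everything here is hypothetical; the first two values match the first two bullets above):

```shell
#!/bin/sh
# finish for a hypothetical "food" service: record how run died, then clean up.
exitcode="$1"    # 256 means run was killed by a signal
signal="$2"      # the killing signal, if there was one
echo "run died: exit=$exitcode signal=$signal" >> /var/log/food-deaths.log
rm -f /run/food.sock    # hypothetical leftover socket the daemon doesn't remove
```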

notification-fd

Applies to longrun services.

This file, if present, defines the service as supporting readiness notification. Its contents must be an unsigned integer: the number of the file descriptor that the service writes its readiness notification to.

This is used by s6-ipcserverd and other more advanced s6 stuff. Won’t get into it here. Just keep in mind it’s heavily used in logger services.

lock-fd

Applies to longrun services.

Contents are unsigned integer, representing a file descriptor that will be open in the service. Advanced stuff, won’t cover here.

max-death-tally

Applies to longrun services.

Contents must be an unsigned integer, between 0 and 4096. The default value is 100. This defines the maximum number of death events (stops or crashes) of a service that s6-supervise keeps track of.

When the service dies more than this number of times, the oldest events will be forgotten. This file / feature is useful to track services for things like service throttling.

Not to be confused with how many times a service can die until it is permanently stopped with a down file. [WIP] not sure if such a feature exists in s6 yet, but it can be implemented using finish by increasing a counter in a file and when the counter reaches your maximum allowed deaths, finish could create the down file for you (or rather, execute s6-rc -d change and give you a notification or a log or something).

down-signal

Applies to longrun services.

Contents must be the name or number of a signal, followed by a newline. This will be used to kill the supervised process when s6-svc -d or s6-svc -r is utilized. The s6 command will send this signal instead of the default SIGTERM used when the down-signal file is absent (you can, obviously, define SIGTERM in it).
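A sketch (the service name and path are hypothetical; 1 is SIGHUP’s number):

```shell
# Sketch: make s6 stop a hypothetical "food" service with signal 1 (SIGHUP)
# instead of the default SIGTERM.
mkdir -p ./s6-rc-source/food
echo 1 > ./s6-rc-source/food/down-signal
```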

data

Applies to longrun services.

[WIP]

This folder is a guarantee that s6-supervise will never touch the files in this directory. With the evolution of s6-supervise, it gained new files, like notification-fd and timeout-finish (back in 2015) and users who had files with the same name had to change them. Within this folder, s6-supervise is certain to not mess with your files.

This is used to store user data for your service, e.g. if your service requires credentials, I believe it can grab them from a file from here (please don’t store plaintext passwords).

While you could store something like a database, don’t do that, work outside of s6 and have a dedicated data folder.

env

Applies to longrun services.

[WIP]

Similar to data. This folder will never be touched by s6-supervise. Instead of storing user data, env is supposed to be used to store environment variables that get loaded by the service run itself (and maybe other options where applicable, like finish).

up

Applies to oneshot services.

Contents should ideally be an execline script, but anything can be used, as long as you define the shebang.

This file defines the oneshot service and it’s literally a service that runs a single time and when it exits with code 0, s6-rc treats this as successful script run and continues doing what you’ve defined (like resume the dependency tree startup or what-not).

down

Applies to oneshot services.

Contents should ideally be an execline script, but anything can be used, as long as you define the shebang.

This optional file is used to make s6-rc run something like a reverse operation when a oneshot service is stopped or restarted. If the file doesn’t exist or is empty, s6-rc interprets this as if the service does nothing and always succeeds.

[WIP] This can lead to interesting scenarios, which I haven’t checked yet, like if a script is supposed to mount a filesystem, but its mounting fails (thus the up script exits with code 1) and there’s no down file, then you might get dependencies activated, despite the actual up-bringing failure. If down is defined with the hypothetical umount, then this wouldn’t apply, but in some situations, you might not want to define a down file.
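A sketch of an up / down pair for the mounting example (execline; the mountpoint is made up):

```text
# contents of up:
#!/usr/bin/execlineb -P
mount -t tmpfs tmpfs /run/mystuff

# contents of down:
#!/usr/bin/execlineb -P
umount /run/mystuff
```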

contents.d

Applies to bundle services.

It’s a directory and its contents should be files named after existing services. If the services don’t exist in the source folder for s6-rc-compile, the s6-rc db compilation will fail. Contents can be empty, but then you’ll have a useless bundle defined in the DB. It’s probably ideal to automate the finding of such a bundle and have it removed from the source folder.


s6-rc is a service management tool (more on that here). I highly recommend reading all the links I provided, because they all have some great insights on how s6-rc works and why it is such an amazing software, like:

A significantly time-consuming part of a service manager is the analysis of a set of services and computation of a dependency graph for that set. At the time of writing this document, s6-rc is the only service manager that performs that work offline, eliminating the dependency analysis overhead from boot time, shutdown time, or any other time where the machine state changes.

s6-rc is the equivalent of sysv-rc, BSD init’s /etc/rc, OpenRC, Upstart, launchd and systemd. s6-rc is called a machine state manager and it uses a database containing services. s6-svc is a program used to send signals (SIGHUP, SIGKILL etc.) to running s6-supervise processes. You will use s6-rc more often, but s6-svc is nice to know. s6-rc is similar to “systemctl enable / disable,” but because s6-supervise changes states on the go (like runit when you symlink a service), it works more like “enable / disable --now.” It manages longruns, oneshots and bundles alike. s6-svc is like using kill (since most of you will be familiar with it, but there are other signals too) without knowing the PID of the daemon; s6 takes care of that. Then, there is s6-svstat to check the status of your services.

Current limitations on this user guide

There are some commands I didn’t manage to dig into, like s6-rc-compile and s6-rc-update, so I don’t currently know exactly how that part of s6 works and I cannot explain it. Like always, contribution is appreciated.


The actual User Guide:

enable and start a daemon (up)

s6-rc -u change food

disable and stop a daemon (down)

s6-rc -d change food

check a daemon status

s6-svstat /run/service/food[-srv]

or

s6-svstat /run/s6-rc/servicedirs/food[-srv]

Note: some daemons may have the -srv extension at the end. To check the status of the logger (if it exists, and it should, but for example elogind didn’t have one), use:

s6-svstat /run/service/food-log

list all active services

s6-rc -a list

Note: not sure about this, I will get into it when talking about s6-rc-db

list services related to daemon food

s6-rc list food

(for example, it should return food-srv and food-log - in s6, the daemon and the logger are two different processes, so even if the daemon dies or is shut down, you never lose logs, the logger will only go down after you tell s6 to stop the daemon and it will always come up before the daemon is started)

#I’m not sure what this one does, I believe it shows all processes that are related to food

s6-rc listall food

For example, for dhcpcd, it will show dhcpcd, s6rc-oneshot-runner, udevd, kmod-static-nodes, mount-cgroups, mount-devfs, mount-procfs, tmpfiles-dev. And for elogind, it will show elogind, s6rc-fdholder and dbus. For nginx, it shows s6rc-fdholder, nginx-srv and nginx-log.

add a daemon to default bundle

s6-rc-bundle-update add default food

remove a daemon from default bundle

s6-rc-bundle-update delete default food

create a new bundle

s6-rc-bundle create bundle1 food bard

delete entire bundle

s6-rc-bundle delete bundle1




Once we get past those, it gets a little tricky, but if you made it to here and are reading this, you should be fine. More info if you click this text.

I mentioned that s6-rc uses a database. In the documentation, it is called a compiled service database, abbreviated “compiled.” To compile a service db, all you have to do is run s6-rc-compile. Artix-s6 has a specific custom way to take care of it through the package manager (pacman), by installing the package followed by your init of choice.

So if you install “nginx-s6,” you are basically grabbing an s6 service file, adding it to the s6-rc source and running s6-rc-compile to make the db. When you uninstall it, the service gets removed and the database gets recompiled.

This meant I was protected by pacman / artix team from getting too deep into s6, so I may need to change to a distro that doesn’t have s6 by default (Alpine, Devuan, Void) and change the init and service manager to s6-linux-init and s6-rc myself (there is work to bring s6 on Void, but using 66suite unfortunately - I wanted to get deeper on how s6 works; helper scripts, unit files and interpreters aren’t exactly my cup of tea).

To read from a compiled database, you use the s6-rc-db tool. An s6 service (oneshot, longrun, bundle) has to be written in the s6-rc source format. Then s6-rc-compile takes all the service definitions in the source directory you point it to and creates a new db. It should be automatically created with the name “compiled-unixtimestamp.” If I’m correct, once you compile it, you (the administrator) have to change the symlink “compiled” (usually found in /etc/s6/rc/compiled) to link to the latest “compiled-unixtimestamp” directory (in the same folder). To live update the current service database, all you have to do is run s6-rc-update.
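Based on that description, the whole cycle would look roughly like this (an illustrative sketch; the source path and timestamp are made up, so double-check against the s6-rc docs before running it on a real system):

```shell
# compile a new database from the source folder
s6-rc-compile -v 2 /etc/s6/rc/compiled-1718000000 /etc/s6/sv

# point the "compiled" symlink at the new db for the next boot
ln -sfn compiled-1718000000 /etc/s6/rc/compiled

# or update the live service database right away
s6-rc-update /etc/s6/rc/compiled-1718000000
```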

To debug s6-rc-compile, you can use the verbose option. It’s cool that s6 is POSIX compliant and instead of using -vvv for very very verbose, you give it a number argument: -v 0 is just like not using -v at all, -v 1 is the classic -v, -v 2 is -vv and -v 3 is -vvv. I haven’t yet created an s6 service, I will probably attempt haproxy-srv and haproxy-log, because I haven’t seen them in artix-s6.

list the services in a specific s6-rc-db

s6-rc-db list all|services|oneshots|longruns|bundles

No idea how to use s6-rc-compile or s6-rc-update. The first one is to compile a new s6-rc database for the next boot or for a later reload, the second is to update the s6-rc db on the fly (live).
