Video Encoding to AV1 Guide (WIP)

This is a guide based on my experience with AV1 software encoding. As it is, AV1 guides are spread out and there is no all-encompassing thread. I will start with libaom (aomenc). It is the initial encoder that was released and is generally the slowest, but also the best one for overall quality of encodes.

A little information before we start.

What is a video codec?

It is a language of sorts that stores video information; a decoder reads it so that a player can display an image on a computer, phone, tablet, or television. There are audio codecs, video codecs, text codecs, and so on. What I am focusing on is a more recently developed video codec known as AV1.

The gist is that with each evolution and subsequent codec that is released you are able to do one of two things, or land in between. For instance, h264 (aka avc) was developed in the early 2000s. It is what we call a more efficient codec. It can handle higher bit rates (the rate at which an image is written to a display). Bit rate is also the determining factor of how big a video or audio file will be. In this case a movie generally has audio and video included in a single file.

What I am getting at is that going from mpeg2 to avc was a huge upgrade in what image quality could look like. Going from h264/avc to h265/hevc is a similar step forward: at similar bit rates, the newer codec produces a higher-quality image.

Using hevc over avc you may see up to 50% better image quality (highly dependent on the source material's codec). If both files are 8GB in size, the hevc one in theory would look better than the avc one, depending on what settings and parameters were used to convert to hevc and avc. Another way to look at it is via compression: taking an avc source as input and outputting it to hevc can lead to smaller sizes while retaining much of the overall fidelity. So you can have higher quality at the same bitrate, or shrink the file size and keep next to the same quality.

I would like to add that bitrate is given in kbps (kilobits/second) or mbps (megabits/second), which is how much information is sent per second from the video file to a player. Depending on the codec it can be smaller and still retain most of the quality of a less efficient codec. The thing is, bitrate is only a general idea of what the quality might be. The video still has to be decoded on the player's end, and it comes down to a software or hardware decoder converting the codec into a visual signal the display can output as an image.
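To get a feel for the numbers: bitrate times duration gives the file size. A rough sketch, where the 8 Mbps figure and the 2-hour runtime are just illustrative assumptions:

```shell
# file size in MB = bitrate in Mbps / 8 * duration in seconds
bitrate_mbps=8            # assumed average video bitrate
hours=2                   # assumed runtime
seconds=$((hours * 3600))
size_mb=$((bitrate_mbps * seconds / 8))
echo "${size_mb} MB"      # 7200 MB, roughly 7.2 GB before audio
```

Halving the bitrate halves the file size, which is why a more efficient codec at the same perceived quality directly translates into storage and bandwidth savings.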

All of this compression is lossy, which means the image will never be 100% of what the source looks like. But it can be close enough: it can be 95% of the source in appearance while being a smaller file, leading to less network load and a smaller bitrate.

It is highly dependent on the source material and how much actual information you have to work with. For instance, you can compress bluray video and shrink its size more easily than an already-small file (older youtube videos, dvds, etc.). The more information the source material has, the better.

Blurays are fantastic because they already have high bit rates and high fidelity, so bluray rips are a good source for attempting compression while retaining most of the original fidelity. It requires tinkering with parameters to find what you are willing to end up with. There are trade-offs of course: some loss, but in the best of circumstances very little or no visible fidelity loss.

As I said, blurays are a good starting point for compression because the more fidelity and bitrate a source file has, the better it can compress. Smaller files are harder for encoders to work with: the smaller the size, the less information there is to compress.

The special sauce is basically that the more an encoder has to work with, the more it can do. When you use smaller, already-compressed sources, the less it can do.

You might ask why do this at all if there is any kind of loss?

The answer is that storage space is expensive, and you can cut down on it by compressing video and audio. You also get less network strain and bandwidth consumption if you host your own media server (plex, jellyfin, etc.).

The issue is that most codecs are licensed out via royalties: software developers have to license hevc, avc, mpeg2, mpeg4, etc.

This is where AV1 comes in.

AV1 was created by the Alliance for Open Media (AOM) as an open source, royalty-free codec. This simply means that including it in applications does not require licensing fees. Anyone can do whatever they want with the source code without worrying about violating licenses or having to pay for it.

As I said, hevc is ~50% better at compressing video than avc. So what does AV1 offer that hevc does not? Well, in theory, with proper parameters you can see 30 to 50% better compression with av1 than hevc. Again, we can either have better quality at the same bitrate, or a smaller bitrate at the same quality as the source/input file.

The issue is that every time we take a step forward with a new codec, the processing power needed increases. Compressing mpeg4 to h264 is easier and takes less time than h264 to h265. As computational power increases and hardware accelerators come out, you will see wider adoption. But generally speaking, new codecs don't receive widespread adoption until years after the fact, when development and need come together.

AV1 is still an infant, having been initially released in 2018. libaom (aomenc) was released at that point. The issue is that it is extremely slow and not threaded well at all, so multi-core machines don't benefit natively from aom; it is painstakingly slow by itself. SVT-AV1 is an encoder developed by Intel and adopted by AOM as a production-ready encoder. As it is, libaom is not production ready, as it requires additional tools and hacks to speed it along. SVT, on the other hand, is much faster and more widespread: tools such as ffmpeg and handbrake have already added svt-av1 to their arsenal of encoding functionality.

Though you may find people willing to use libaom for absolute quality-to-compression ratio over speed. I am one of those people; I have spent over a year encoding with libaom to get the best compression-to-quality ratio. In this guide I will get you started on the AV1 adventure, focusing on libaom as that is all I have really used since 2021.

Need is coming with the rise of 8k and beyond. AV2 is already in the works, as is h266. Netflix and Google are big-time investors in AV1, as it can reduce the cost of 4k streams. Once everyone has adopted hardware decoders, it is a win-win: lower bitrates than before, all while offering higher-quality video.


libaom/aomenc guide:

There are a few basic tools that you will need to speed up encoding time so it is not taking a month for one file.

The Tools

  • LSMASH

  • Vapoursynth and Plugins

  • mkvtoolnix/merge

  • Av1an

What are vapoursynth and Av1an?

Av1an and Vapoursynth are two programs used in tandem to chunk a source/input file. When I said libaom is slow: it generally does seconds/frame instead of frames/second, because it is not able to utilize all the threads and cores your computer may have. What av1an does is split a source file across workers. These workers each get a chunk, so instead of compressing a 2-hour movie with a single thread, it splits the video into multiple chunks and then pins (assigns) a thread or threads to each chunk. This leads to significant speed improvements: as fast as 25fps in some cases, if not more.
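The worker/thread split is simple arithmetic. A sketch assuming a 16-core/32-thread CPU with 2 threads pinned per worker (the numbers are illustrative; adjust for your machine):

```shell
# assumed hardware: 16 cores / 32 threads
total_threads=32
threads_per_worker=2
workers=$((total_threads / threads_per_worker))
echo "av1an -w ${workers} --set-thread-affinity ${threads_per_worker}"
```

The same 32 threads could instead be split as 8 workers with 4 threads each; more workers means more parallel chunks, while more threads per worker speeds up each individual chunk.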

There is a docker container that has everything included. You don't need to compile anything from source, though I generally do for a few reasons. If you want the ease of docker, go down to the post below this one.

All of this requires you to know how to compile from source. I will include what I do for ubuntu/debian based distros.

To install av1an you need to install vapoursynth. So I will start with vapoursynth.

VAPOURSYNTH COMPILE FROM SOURCE With Dependencies

Dependencies: (qt5-default is not necessary for ubuntu or popos) (if on linux mint you need to run pip3 install cython)


$ sudo apt install git build-essential meson autoconf automake libtool nasm yasm python3-dev python3-pip cython3 libass-dev qt5-default libqt5websockets5-dev libfftw3-dev libtesseract-dev ffmpeg libavcodec-dev libavformat-dev libswscale-dev libavutil-dev libswresample-dev libmediainfo-dev pkg-config llvm libclang-dev libzimg-dev cmake perl

(Note: also install the dev package matching your Python version, e.g. python3.9-dev or python3.10-dev.)

$ sudo apt install python3.10-dev

Now in ubuntu you need to create a symbolic link between dist-packages and site-packages found in the python lib folder

$ cd /usr/local/lib/python3
$ sudo ln -s dist-packages site-packages

Now make the directory that will house the source code you need to compile:

$ mkdir $HOME/.installs

l-smash

$ cd $HOME/.installs
$ git clone https://github.com/l-smash/l-smash.git
$ cd l-smash
$ ./configure --enable-shared
$ make lib
$ sudo make install-lib

zimg (if you can't install libzimg-dev via apt)

$ cd $HOME/.installs
$ git clone https://github.com/sekrit-twc/zimg.git
$ cd zimg
$ ./autogen.sh
$ ./configure
$ make
$ sudo make install

Time to compile Vapoursynth

$ cd $HOME/.installs
$ git clone https://github.com/vapoursynth/vapoursynth.git
$ cd $HOME/.installs/vapoursynth
$ ./autogen.sh
$ ./configure
$ make
$ sudo make install

# Now we need to create the VS plugin folder

$ sudo mkdir /usr/local/lib/vapoursynth
$ sudo ldconfig

If all goes accordingly, you have installed vapoursynth.
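A quick sanity check, assuming the install put vspipe on your PATH and registered the Python module:

```shell
$ vspipe --version
$ python3 -c "import vapoursynth; print(vapoursynth.core.version())"
```

If the import fails, revisit the dist-packages/site-packages symlink step above and re-run sudo ldconfig.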

VS Plugins needed for Av1an (ffms2, l-smash-works)

$ mkdir $HOME/.installs/plugins
$ cd $HOME/.installs/plugins
$ git clone https://github.com/HolyWu/L-SMASH-Works
$ git clone https://github.com/FFMS/ffms2

FFMS2

$ cd $HOME/.installs/plugins/ffms2
$ ./autogen.sh
$ ./configure
$ make
$ sudo make install

(ln ffms2 lib to vs plugin folder)
$ sudo ln -s /usr/local/lib/libffms2.so /usr/local/lib/vapoursynth/libffms2.so

L-SMASH-WORKS

$ cd $HOME/.installs/plugins/L-SMASH-WORKS/VapourSynth
$ meson build/
$ cd build
$ ninja
$ sudo ninja install

Now it is time to compile Av1an from source

We need rust, easiest way is to use rustup and follow the prompts as they appear.

$ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
$ source ~/.cargo/env

Av1an can be installed via cargo or built from source.

To install via cargo run the below command

$ cargo install av1an --verbose
$ sudo ldconfig

(unfucks shared libraries)

To build from source follow the below instructions

$ cd $HOME/.installs
$ git clone https://github.com/master-of-zen/Av1an
$ cd Av1an

There is a Cargo.toml file in the Av1an directory. Open it with a text editor of your choice, whether vim, nano, or something else.

When opened, look for [profile.release] and change lto = "thin" to lto = "fat", and add opt-level = 3 if you want the most optimized av1an build.

Save.

Still in the Av1an directory, run the below command (pray)

$ RUSTFLAGS="-C target-cpu=native" cargo build --release
$ sudo ldconfig

The binary will be in target/release
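You'll probably want that binary on your PATH. A minimal sketch, assuming the default cargo layout and that /usr/local/bin is where you keep locally built tools:

```shell
$ sudo install -m 0755 target/release/av1an /usr/local/bin/av1an
$ av1an --version
```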

On to libaom

$ cd $HOME/.installs
$ git clone https://aomedia.googlesource.com/aom/
$ cd aom
$ mkdir -p aom_build
$ cd aom_build

# This is what you can modify. Leave everything the same except: replace the 8 in -flto=8 with however many threads your cpu has, and replace znver2 with your architecture (znver3, znver4, etc.) or just use -march=native.

$ cmake .. -DBUILD_SHARED_LIBS=0 -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-flto -O3 -march=znver2" -DCMAKE_C_FLAGS="-flto -O3 -march=znver2" -DCMAKE_C_FLAGS_INIT="-flto=8 -static" 

# replace 16 with however many threads your cpu has

$ make -j 16
$ sudo make install

You have it compiled and installed now!!
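A quick way to confirm the encoder is actually visible to the system (not part of the build itself):

```shell
$ which aomenc
$ aomenc --help | head -n 5
```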

So all of this should be compiled now. Av1an, Vapoursynth + plugins,
and libaom

The final thing you might want is mkvmerge as the chunk concatenation method

$ sudo apt install mkvtoolnix

Now that all the programs are compiled (hopefully successfully), you can start encoding…


Reserved for actual av1 settings and parameters

Guides until I can consolidate (thanks to bluesword)

Libaom fine tuning

Grain Synthesis How To and Why Should You.

Second Generation of BlueSword Guide

Official Av1an and AOM docker

# linux
$ docker run --user $(id -u):$(id -g) -it --rm \
--privileged \
-v "$(pwd):/videos" \
masterofzen/av1an:latest \
-i S01E01.mkv {options}

# windows
$ docker run  -it --rm \
--privileged \
-v "${PWD}:/videos" \
masterofzen/av1an:latest \
-i S01E01.mkv {options}

For both linux and windows you would use the same parameters for av1an and the encoder options. They are interchangeable between differing OS'.

If you Run With Non Docker Binaries

#############av1an script################

#!/bin/bash
cd /home/argone/00-Local/work2/input;
for x in *.mkv;
do av1an -v "--cq-level=22 --end-usage=q --cpu-used=3 --enable-fwd-kf=1 --aq-mode=1 --lag-in-frames=48 --bit-depth=10 --kf-max-dist=240 --kf-min-dist=12 --enable-qm=1 --sb-size=64 --enable-keyframe-filtering=2 --arnr-strength=1 --arnr-maxframes=3 --deltaq-mode=0 --sharpness=1 --enable-dnl-denoising=0 --denoise-noise-level=5 --tile-columns=1 --threads=2" --photon-noise 10 -e aom -m lsmash -w 16 --concat mkvmerge --resume --verbose -i "$x" -o "/home/argone/00-Local/work2/final/${x%.*}.mkv";
done


Important Parameters (works with docker and non-docker binaries)


####AV1AN Specific Parameters##############
-i input.mkv
-o output.mkv

-m lsmash or ffms2 # tool that chunks the input video; lsmash is generally what's preferred
--concat mkvmerge # merges the chunks at the end of the encode
--photon-noise n # synthesized grain/noise; I find 8 to 10 ideal for basic content, 10-16 for heavy-grain sources.

##########encoder specific parameters#######################
-v # encoder parameters that libaom/aomenc uses (in the quotation marks)
--cq-level=x # x is a number; 16 is super high fidelity, and the larger the number the less fidelity. I find 16 to 22 ideal for newer releases, 26-28 for older releases that have a lot of grain.
--end-usage=q # makes the cq-level value act as a crf (constant rate factor)
--cpu-used=n # replace n with a number (0-10; 10 being the fastest encode and 0 being placebo). I find 3-4 best for the output files: 3 is still slow, 4 is a bit faster, and 5 is the max I would generally go given the trade-offs with 6+.
-w n # n being the number of workers/chunks you want (normally the number of cores you want utilized)
--set-thread-affinity n # n is the number of threads assigned to each individual worker. I usually do 2 threads for 16 workers (cores) as I have 32 threads total, but you can also do 8 workers and 4 threads.
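Putting the av1an-side parameters together, a minimal invocation might look like this. The file names, worker/thread counts, and cq-level are illustrative; the encoder flags go inside the -v string as in the full script above:

```shell
$ av1an -e aom -m lsmash --concat mkvmerge \
    -w 8 --set-thread-affinity 4 --photon-noise 10 \
    -v "--cq-level=20 --end-usage=q --cpu-used=4 --bit-depth=10 --threads=4" \
    -i input.mkv -o output.mkv
```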

Av1an works really well with directory based for loop bash scripts.

Av1an is known to have some audio issues. You have to extract the timecodes for the audio and mux them into the final file…

The below bash file is my way of mostly automating it. You have an input directory and an output directory. The input holds the source file that gets encoded to av1. The output contains the av1-encoded file, the ripped or transcoded audio from ffmpeg (taken from the input directory), and the final file.

Directory paths
transcode/input/source.mkv
transcode/output/av1an/av1file.mkv
transcode/output/ffmpeg/audio (original or transcoded).mka # from the input directory
transcode/output/final/final file with correct audio.mkv
#!/bin/bash

# replace with path to your input directory
cd /home/argone/00-Local/work2/transcode/input/

for x in *.mkv

do

# extracts timecodes for the original audio (needed for audio sync correction)
# input.vpy is generated below
touch input.vpy
cat <<EOT >> input.vpy
from vapoursynth import core
clip = core.ffms2.Source(source='$x', threads=32)
clip.set_output()
EOT
vspipe input.vpy -t "$x.dat" .

# av1 encoding tool that chunks and encodes video
av1an -v "--cq-level=22 --end-usage=q --cpu-used=3 \
--enable-fwd-kf=1 --aq-mode=1 --lag-in-frames=48 \
--bit-depth=10 --kf-max-dist=240 --kf-min-dist=12 \
--enable-qm=1 --sb-size=64 --enable-keyframe-filtering=2 \
--arnr-strength=1 --arnr-maxframes=3 \
--deltaq-mode=0 --sharpness=1 --enable-dnl-denoising=0 \
--denoise-noise-level=5 --tile-columns=1 --threads=2" \
--photon-noise 10 -e aom -m lsmash -w 16 \
--concat mkvmerge --resume --verbose \
-i "$x" -o "../output/av1an/$x"

# compresses audio to opus.
ffmpeg -i "$x" -map 0:a -mapping_family 1 -ac 6 \
-c:a:0 libopus -metadata:s:a:0 title='Opus 5.1' \
-metadata:s:a:0 language='eng' \
"../output/ffmpeg/${x%.*}.mka"

# extract original audio untouched instead; use this or the opus step above, not both (they write to the same output file)
ffmpeg -i "$x" -map 0:a -c:a:0 copy \
"../output/ffmpeg/${x%.*}.mka"

# muxes audio + video stream, audio timestamps, fixes metadata
mkvmerge --output "../output/final/$x" \
--no-video --no-subtitles --no-chapters "../output/ffmpeg/${x%.*}.mka" \
--no-audio --timestamps 0:"$x.dat" "../output/av1an/$x"

rm input.vpy

done

Sources

Compile aomenc

https://www.reddit.com/r/AV1/comments/s6eh5f/how_to_compile_av1_in_windows_without_crying/
Compile AOM on linux for linux

https://www.reddit.com/r/AV1/comments/jmwep/how_to_build_libaomav1_to_be_as_fast_as_possible/

I used the below post for figuring out what dependencies are needed for vapoursynth, which is needed for av1an.

https://www.reddit.com/r/AV1/comments/rd5loo/guide_how_to_compile_av1an_on_ubuntu_2104/

Compiling Av1an for windows

https://www.reddit.com/r/AV1/comments/s8151l/how_to_compile_av1an_on_windows_without_breaking/

Jeepers, this stuff didn't import correctly from libre office.

When you start to work around the distro’s package framework you’re doing it wrong as that will quickly turn into a house of cards and break fast.

You very likely want to use FFmpeg 6 for this type of application

l-smash doesn't work with FFmpeg 5+, which is what you want to use in 2023 when encoding. There's a fork but I haven't tried it with FFmpeg 6.0; it's located here: GitHub - AkarinVS/L-SMASH-Works at ffmpeg-4.5

Why are you not pulling Rust from using your distros repo?

  • native is in general a flaky cpu target; use the specific profile for your CPU
  • Only use -O3 when you know it makes a significant difference, as it can silently break things due to how optimization is applied, and it doesn't necessarily mean things are faster in the end

You also have rav1e and svt-av1, which might be better options unless you're targeting very low bitrates. Blindly applying settings is usually never a good idea, as you need to consider the type of material you're encoding.

If the source material is relatively clean you might get away with just using FFmpeg.

Even with svt, there is a drop-off. I have found ffmpeg with svt nets me 12fps at preset 4, while svt with av1an gets me 24fps with the same settings. SVT is faster; rav1e is the slowest, according to people I have talked with about the subject. AOM is what I have used. I will say, go ahead and add to this; this should be a public thread. I just want to get the ball rolling. There are quite a few guides, none of which are in one spot, so I am trying to start with what I know and learn more through creating a centralized guide: from what av1 is, to how to compile it from source, to actual use.

And btw, I am using ffmpeg 6 for my use of av1an. Works fine on windows and ubuntu.

Compiling av1an does not like the distro version of rust that is offered in the repo. I could only get it to successfully compile on popos with rustup.

again I found this via research on a reddit post about how to make aom run as fast as possible, and I have not had any personal issues.


In his defense, the debian version is out of date, and almost everyone these days considers it standard practice to use rustup as one of the methods to pull rust. There is nothing wrong with this as long as you know what you are doing, and Argone does know what he is doing.

Also, debian only recommends using the debian-shipped version of rust when packaging rust applications specifically for debian. Quoting their wiki:

“Due to the way Rust and the Rust crates are packaged in Debian, packages shipping Rust crates are only useful when packaging other crates or Rust-based software for Debian. While you can use them for everyday development with Rust, their main use is as dependencies for other applications since Debian doesn’t allow downloading source code at build-time.”

So I’m not sure where you got the idea that rustup is bad and he clearly had done his research so perhaps let him continue. Especially considering that Rustup certainly makes installing and managing multiple toolchains easy, especially if you want to use stable and nightly. The only thing against rustup from the distro packages is that, last time I checked, it doesn’t support multiple users properly. As it stands right now, rustup will install toolchains into a local user directory instead of in a place like /usr/bin, which means if you are logged in as another user you will probably have to download your own set of toolchains to use rust (though I’m not completely sure, haven’t needed that situation yet).

Also, while he is using debian and the debian wiki says the above, other wikis such as the arch wiki explicitly endorse rustup as the best method for installing rust.

https://wiki.archlinux.org/title/Rust#Installation

Not even remotely, in my experience. Though I come from the more dev side of the background with linux distributions, so I do not encounter this trouble. I don't think the one-size-fits-all advice you just issued fits his use case. All he has to do is install the compiled software to /opt. This is probably one of the best strengths of linux. It hardly seems right to suggest someone who has the knowledge to do so shouldn't. He's not going to screw up his distro because he installs some compiled software to /opt LOL

This I know very little about. Carry on

That makes a ton of sense actually. For very basic things like trying Rust or running/installing software built in Rust and distributed on cargo, the package manager's version is probably fine, particularly if it's not built on the latest version of the toolchain. Sadly that's rare. Most stuff relies on newer, actively developed features of the language, and that's why programmers pull rustup.

For actually programming in Rust and doing Rust development, you’ll likely want to learn and use rustup. It’s a great tool with a lot of useful features. If this is an actively developed codec then you likely want to use rustup and NOT the distro version

This is why you port (and package) software properly: you want to keep software up to date and not have things randomly breaking. Let's say you run it this way and any shared library gets a major version update; that will almost certainly break things, and you have to start all over, likely also spending time figuring out what broke and why, because you're using shared libraries and there will be API/ABI breakages. It works for a while in the best case, but it turns into a mess quickly unless you have zero interest in keeping your system up to date and caring about security. Another issue is that most distros have packaging rules which vanilla upstream doesn't necessarily follow, which also causes files to be installed in "wrong" paths, causing inconsistencies.

If XYZ is too old, you preferably submit a patch and/or use another distro that keeps things more up to date ( rust package versions - Repology ), or you're using the wrong "tool". I can somewhat accept that argument if you have a box that's dedicated and doesn't see the Internet at all, but otherwise it's just a bad idea.

If you want to add to the conversation about how to do this for other distros, be my guest. I am only familiar with popos/ubuntu/debian. I spent hours, if not days, developing a method to get this working for what I am used to. I have not had issues with updates if it works initially. I have had problems when not compiling from source in order (top-down of my thread). As I stated, it is important to use sudo ldconfig at times to unfuck shared libraries. It might not be the best method, but it is the only method I found to easily work on debian-based distros.

Just because it works (initially) that doesn’t necessarily mean it’s good practice.

Some claims should be adjusted,

Bitrate has nothing to do with the display; it's a measurement, in this case, of the compressed "video datarate". Nothing of the "raw" video stream is ever received by the screen/monitor; that is handled by the interface you're using (DP, HDMI, etc.) and its encoding.

AV1's efficiency is in general about 30% or so, depending on what you compare and at what bitrate; if you do "unfair" comparisons I guess you see 40-50%, but there are a lot of variables to take into account. There are some early articles/tests claiming high rates, but do keep in mind that they're very old and usually biased.

Any kind of transcoding will also lead to loss of quality unless it's lossless. There's a lossless mode for AV1, but it's very slow and usually not worth using, even though AV1 is more efficient, because the codec is not tuned to be lossless (it's tuned to be lossy), so efficiency is usually very low. What you probably are trying to say is that the visually perceived quality might be "the same" using a more efficient codec in some scenarios. If your source is already quite heavily compressed (like an older DVD) you might actually struggle to maintain the same amount of visual quality by transcoding at a lower bitrate, unless you do filtering, which may be perceived as better or worse.

I never said it did. I said it is the determining factor of file output size.

I said 30 to 50%* In any case.

This point is true. But the point of this is retaining as much quality as possible without too much loss, all while having smaller sizes, i.e. compressed video. For example, going from an 8GB avc source down to under 2GB of AV1. There are tools that compare the compressed file to the source material and output the difference as a score.

“It can handle higher bit rates (the rate at which an image is written to a display).”

“Using hevc over avc you may see up to 50% better image quality”, “As I said hevc is ~50% better at compressing video than avc is”
Sure, up to but that’s extremely rarely the case…

“Taking a avc source input and outputting it to hevc can lead to smaller sizes while not losing any quality of the final video file” - That’s not how it works…

Yes, there are methods of comparing image quality such as VMAF, PSNR and SSIM but these tools should be used with care.

Yes, I had lacking or incomplete information; it was incorrect without all the information. I updated/corrected some of it.

Tbh I probably would. I'd build a docker container with what Argone has put forth, pass through nvidia in case I need to, and then just give it the network called "air gap", which only has link-local connections in my compose stack. So yes, this is the path I would take.

It's not like you couldn't keep a compiled source up to date. I've done this with systemd, where I have a process that forks off and runs a shell script to compile stuff. If it errors out, it puts it in my logs and alerts me, then returns to the main process with a pass or fail. I get what you are saying, but an experienced linux user knows how to deal with these issues, and if they do, why is it too much of an issue? I mean, what he's doing is on the more bleeding-edge side of codecs right now. Tbh AV1 is a hot mess, and it doesn't seem like it will get better. I think he's compiled a lot of stuff off bits and pieces. If you're up to giving some advice, how should he polish this out? Commands, structuring, suggested packages? I think he compiled most of it because the stock packaged stuff didn't have the things he needed, at least so I gathered from his post.

If you want docker, I was going to lead into that. There is a docker image for av1 encoding with av1an. It simplifies things; the only downside is that when running a for loop it overwrites anything already completed. The non-containerized version prompts you before overwriting an already-completed encode.
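One way around the overwrite behaviour is to make the loop itself skip inputs whose output already exists, so re-runs don't clobber finished encodes. A sketch with assumed directory names and dummy files standing in for real videos:

```shell
# sketch: only queue inputs that have no matching output yet
work=$(mktemp -d)
indir="$work/input"; outdir="$work/output"
mkdir -p "$indir" "$outdir"
touch "$indir/a.mkv" "$indir/b.mkv"
touch "$outdir/a.mkv"                  # pretend a.mkv was already encoded
todo=""
for x in "$indir"/*.mkv; do
  base=$(basename "$x")
  [ -e "$outdir/$base" ] && continue   # output exists: skip instead of overwriting
  todo="$todo $base"
done
echo "still to encode:$todo"           # only b.mkv remains
rm -r "$work"
```

In the real loop you would call the docker run / av1an command in place of building the todo list.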


That for-loop issue sounds a whole lot more like a script issue, and yes, given the other users' criticisms and your self-admission, I highly suggest packaging this up in docker to prevent dirtying up a system.

Easy to destroy a docker container and start over


There is another downside: with docker you can't pin threads to a worker group. For instance, on a 16-core, 32-thread system, with bare-metal binaries you define 16 workers (or however many) and a thread count for each worker. With docker, all you can do is tell it how many workers; you can't actually assign the threads to a worker. So, as far as I can tell, you are stuck with 1 thread per worker. I think it is a container limitation; I could be mistaken.

You can limit those things, but the compose config gets very complex, and then the best way to automate that is a script creating and destroying compose configs as needed.
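Without compose, docker's own --cpuset-cpus flag can at least restrict which CPUs the container sees. A sketch based on the docker run lines above; the 0-15 range is an assumption for a 16-thread allocation:

```shell
$ docker run --user $(id -u):$(id -g) -it --rm \
    --cpuset-cpus="0-15" \
    -v "$(pwd):/videos" \
    masterofzen/av1an:latest \
    -i S01E01.mkv {options}
```

Note this pins the container as a whole; it still doesn't give av1an per-worker thread affinity, which matches the limitation described above.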

You don't need to pin these tasks. Just give them a lower priority (a higher nice value) than the bulk of your system tasks and let it go brrr.
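Deprioritizing instead of pinning can be done with the standard nice/ionice tools; a sketch, with the av1an arguments as placeholders:

```shell
# nice -n 19: lowest CPU priority; ionice -c 3: idle I/O class
$ nice -n 19 ionice -c 3 av1an -i input.mkv -o output.mkv {options}
```

The encode then only soaks up cycles the rest of the system isn't using, so the machine stays responsive while it runs.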