Need a Docker expert: building a container for GitLab CI

Now, from the title, I know what you’re thinking: oh God, why is somebody building their own container, and why do they need help to do that?

I hope you stick around after I elaborate, but I would really, greatly appreciate help here.

Most of my coding work these days is on microcontroller units, small ARM boards, etc. A lot of it also has to do with FPGAs and so on. An automated-build continuous integration server for this would greatly help my quality of life, but from what I’ve assessed it’s pretty difficult to set up.

Docker seems to be a good way to go about it because it lets you isolate the build environment from the system that’s running it. I already have Portainer, which should help me actually make the container itself. So here is what I’m looking at.

I would have to build a container with the various toolchains I need to build the actual programs for the architectures I code on. Some of these are much harder than others, so let’s just focus on ARM Cortex-M0 through M4, ST and Espressif. Pretty much everything I’ve moved to runs an RTOS. From what I have assessed, the biggest hurdle for getting regular CI builds up and running is moving to CMake. It can range from annoying to a major PITA depending on the MCU vendor. For ST, I believe Cube can generate a CMakeLists.txt (although it’s crap). I’ve used GitLab CE on normal architectures (and by normal I mean just standard old ARM Cortex-A and x86-64), and I’m familiar with just how much it makes your life better: how much it helps you with your code, your builds, and so on. I feel like this is going to be essential going into a couple of my personal projects, and the experience of setting something like this up is valuable in itself.

As I look at this task, my primary concern right now is that I’m having trouble just formulating the initial requirements, let alone solving the problem. So if someone is willing to take time out of their day to point me in the right direction, or help me step through the process of actually making a Docker container with GitLab CE that can do continuous integration for, say, ARM Cortex-M0 through M4 and also the R series, I would greatly appreciate it.

The problem itself is making my brain hurt. Once I started gathering all the information, it felt like overload, and I was sitting there like, oh, I’m under attack :joy:. So I’m trying to step through this, and any and all insight people can offer would be great.

I’m not an expert per se.

This might be useless depending on how C builds/tests work.

I’ve quickly browsed gitlab docs, and the general steps to solve your problem seem to be:

[1/3]
→ run a completely standard gitlab docker image
→ configure the CI pipeline/whatever, that should be perfectly standard

[2/3]
→ run a gitlab runner docker image
→ this bit will be more difficult, as you’ll need to either run the whole thing as an ARM instance, or configure the build job inside the x86 runner image to build and test for ARM.
→ seems like you can configure the runner to run a “docker build” command to make an image of your test job. This would be the preferable place to do something like docker buildx build --platform linux/amd64,linux/arm64 to specify a target platform different from the host platform.
Enabling additional platforms might require extra steps.
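As a sketch of how such a runner job could look, assuming the Docker executor with docker-in-docker available (the image tags, platform list, and image name are placeholders, not a tested config):

```yaml
build-image:
  image: docker:24
  services:
    - docker:24-dind
  variables:
    # Non-TLS dind setup for brevity; see GitLab's docker-in-docker docs
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""
  script:
    # Register QEMU handlers so the x86 host can run ARM binaries during the build
    - docker run --privileged --rm tonistiigi/binfmt --install arm64
    - docker buildx create --use
    - docker buildx build --platform linux/amd64,linux/arm64 -t test-image .
```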

Steps 1 and 2 are the automation of running step 3 whenever you push code.

[3/3]
→ The setup above pretty much gives you a fresh ARM shell with your project files to do as you please
→ Now you have a completely separate issue of configuring your build and test, but you can do so in an empty image, without extending anything.
→ This assumes that the most straightforward way of building and testing your arm project is running your build tools on “arm host”

Footnotes:
Issues 1 and 2 are pretty much separate from issue 3, and I’d expect that the groups familiar with one problem set don’t intersect much with the other group.

I would start with trying to get a one-off build+test running before automating it via gitlab.
One caveat of doing it this way:
if there were an issue preventing you from running buildx in this form inside the gitlab runner container, you’d have a perfectly working solution that needs a major change. 90% of it would still apply, but it’s like pulling the Jenga block out from underneath you.


Why would you need CMake?

Does your current build workflow run in a Linux command line?
If so, you can move it 1 to 1 to a gitlab ci build …

The gitlab runner can run a standard docker image, in which you can do pretty much whatever you want. As long as the running image has all the binaries/tools needed for the build, you’re set. You can also break down the build process into multiple steps and use different docker images.

Hmm, okay, I need a few more things explained. I saw his post about the runner. Are you saying you attach a separate runner to the GitLab instance for a different arch? Expound?

Espressif is mean lol, that’s why. But go on :slight_smile: . Any and all info to consider.

Yes, the runner is fed part of the .gitlab-ci.yml file; it kicks off a docker image as specified in the image section of the CI config. You can also specify multiple steps (validate, unit test, build, deploy binaries) that can use separate docker images.
The docker image is spun up and fed a script to run inside, augmented by gitlab variables and such. At a minimum, the docker image needs to be able to run git and pull the code from gitlab itself. If you post a bash script that runs one of your builds I can give back a rough example…
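To give a rough idea of the shape, a skeleton ci file along those lines might look like this (the image names, toolchain file, and script contents are all placeholders, just to show the structure):

```yaml
stages:
  - build
  - test

build-firmware:
  stage: build
  image: registry.example.com/arm-toolchain:latest  # placeholder toolchain image
  script:
    - cmake -B build -DCMAKE_TOOLCHAIN_FILE=cmake/arm-gcc.cmake
    - cmake --build build
  artifacts:
    paths:
      - build/*.elf

unit-test:
  stage: test
  image: debian:bookworm-slim   # host-side unit tests can use a different image
  script:
    - ./run_tests.sh            # placeholder
```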


Ahhhh, I think I get it. So I send a build up, the build gets validated, and the runner runs against the variables given (which include the architecture it’s being built for), builds it in that runner to test it, and passes back the results?

I think what I need to do most right now is get the initial instance up, then get a runner going. Baby steps into it. The main instance does all the cool gitlab stuff, and the runner does the grunt work of the architecture-specific validation, unit testing, building and deployment.

If I understand this correctly, I don’t need a bash example so much as I just need time to start getting this into my Portainer configs and building runners for this setup. The question I guess I have that I feel hasn’t fully been answered: how automated is it? The more automation the better. Not because it’s less work, as half the time that turns out wrong; the real point of this instance is reducing human factors and risk, which MIGHT save me time.

There are days where I wish someone already did this work and made a compose but a job well done is a job done yourself right? (hopefully)

Amen, nothing like trying it out to see whether it fits your needs

As automated as you can make it, and as automated as it makes sense for you.
In an ideal world, you would work on a project, commit a change, and the automation would run compile/unit tests and eventually produce test/debug binaries. Then, once satisfied with a bunch of changes, you would declare it a release, and another task in the automation would run the binary creation and maybe publish the binaries to an artifact server to be consumed/used by third parties. All of the process can be integrated with Teams/Slack so that you get notifications of builds happening/failing.

I don’t know how your workflow is structured, so I can only assume.
If you are a one-man shop, then you will probably be more interested in using it for keeping track of multiple project work happening concurrently and automating binary management.
If you work in a team, it is pretty much the only way you can have devs working on the same code base without spending months merging changes …

Here’s an example of a gitlab CI file that manages Terraform cloud deployments for a project (different use case, but the concepts are the same)
The workflow in this case is:

  • Releases do not produce a binary, but trigger deployment of resources in the cloud
  • Plans (the equivalent of unit testing) can run multiple times, ideally whenever a change is committed to the code
  • Once multiple changes are approved, plans are merged into a release and produce a binary artefact, used to apply the changes to the cloud environment
  • the docker image used is a stock one from dockerhub (alpine/terragrunt) and has the following tools:
    • git
    • terraform
    • terragrunt
    • curl
image:
  name: alpine/terragrunt
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

variables:
  AWS_ACCESS_KEY_ID: $TF_VAR_access_key
  AWS_SECRET_ACCESS_KEY: $TF_VAR_secret_key
  AWS_DEFAULT_REGION: us-east-1
  OCI_FINGERPRINT: $TF_VAR_fingerprint
  OCI_USER_ID: $TF_VAR_user_ocid
  OCI_ROOT_PRIVATEKEYPATH: $TF_VAR_ssh_private_key
  OCI_ROOT_PUBLICKEY: $TF_VAR_ssh_public_key
  TERRAGRUNT_DOWNLOAD: terragrunt-cache
  TEAMS_WEBHOOK: https://outlook.office.com/webhook/0258fa78-b75f-444c-9dd3-aaa
  TGFINDMODULEURL: https://source/mrossi/tgplanparse/uploads/873de07729d15187ac6aef2b1ad19395/tgfindmodule
  GETLASTGITMESSAGEURL: https://source/snippets/3/raw?inline=false


before_script:
  - apk add --update curl
  - terragrunt --version
  - mkdir -p ~/creds
  - echo $OCIAPIKEY | base64 -d > ~/creds/oci_api_key.pem
  - echo $SERVICEACCOUNTKEY | base64 -d > ~/creds/cloud_opc_key
  - echo $TERRAGRUNT_KEY | base64 -d  > ~/creds/agent-key
  - eval $(ssh-agent -s)
  - ssh-add ~/creds/agent-key
  - mkdir -p /root/.ssh
  - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
  - echo "Commit Message"
  - echo $CI_COMMIT_MESSAGE
  - echo "Commit SHA"
  - echo $CI_COMMIT_SHA

stages:
  - plan
  - deploy
  - plan-partial
  - deploy-partial

merge review:
  stage: plan
  script:
    - curl -s ${TGFINDMODULEURL} -o tgplanparsemodule
    - chmod +x tgplanparsemodule
    - curl -s ${GETLASTGITMESSAGEURL} -o getlasttagmessage.sh
    - chmod +x getlasttagmessage.sh
    - TG_ARGS=$(./getlasttagmessage.sh)
    - echo "TG ARGS-> $TG_ARGS"
    - terragrunt plan-all --terragrunt-non-interactive --terragrunt-config terraform-global.hcl -no-color -out=gitlabplan.plan ${TG_ARGS}
    - ./tgplanparsemodule ${CI_PROJECT_DIR} | tee plan.txt
    - >-
      curl -s -X POST -g -H "PRIVATE-TOKEN: ${GITLAB_TOKEN}"
      --data-urlencode [email protected]
      "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/merge_requests/${CI_MERGE_REQUEST_IID}/discussions" 
  tags:
    - docker-runner
  only:
    - merge_requests

plan:
  stage: plan
  artifacts:
    expire_in: 2d
    paths:
      - terragrunt-cache/**
  script:
    - curl -s ${TGFINDMODULEURL} -o tgplanparsemodule
    - chmod +x tgplanparsemodule
    - terragrunt plan-all --terragrunt-non-interactive --terragrunt-config terraform-global.hcl -out=gitlabplan.plan
    - ./tgplanparsemodule ${CI_PROJECT_DIR} | tee plan.txt
    - >-
      curl -s -X POST -g -H "PRIVATE-TOKEN: ${GITLAB_TOKEN}"
      --data-urlencode "line_type=new"
      --data-urlencode [email protected]
      "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/repository/commits/${CI_COMMIT_SHA}/comments" > /dev/null
  tags:
    - docker-runner
  when: manual
  only:
    - /^v\d+/

deploy:
  stage: deploy
  script:
    - echo "### Terragrunt output" > deploy.txt
    - echo "" >>  deploy.txt
    - echo "## Applying Plan - Output:" >>  deploy.txt
    - echo "" >>  deploy.txt
    - echo "\`\`\`" >>  deploy.txt
    - echo "" >>  deploy.txt
    - terragrunt apply-all --terragrunt-non-interactive --terragrunt-config terraform-global.hcl gitlabplan.plan -no-color | tee -a deploy.txt
    - echo "\`\`\`" >>  deploy.txt
    - echo "" >>  deploy.txt
    - >-
      curl -s -X POST -g -H "PRIVATE-TOKEN: ${GITLAB_TOKEN}"
      --data-urlencode "line_type=new"
      --data-urlencode [email protected]
      "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/repository/commits/${CI_COMMIT_SHA}/comments" > /dev/null
  tags:
    - docker-runner
  except:
    - /^v\d+.*-rc\d+/
  when: manual
  only:
    - /^v\d+/
  environment:
    name: OCI-Production
    url: https://cloud.oracle.com

deploy-partial:
  stage: deploy-partial
  script:
    - IFS=$' '; TG_ARGS=" "; REF=$(git show-ref --tags -d | grep ${CI_COMMIT_SHA} | grep partial | cut -d " " -f 2 | cut -d "^" -f 1); MSG=$(git for-each-ref $REF --format '%(contents:subject)'); for item in $MSG; do TG_ARGS="$TG_ARGS --terragrunt-include-dir $item" ;done; export TG_ARGS="$TG_ARGS"
    - echo "TG ARGS-> $TG_ARGS"
    - curl -s ${TGFINDMODULEURL} -o tgplanparsemodule
    - chmod +x tgplanparsemodule
    - echo "### Terragrunt output" > deploy.txt
    - echo "" >>  deploy.txt
    - echo "## Applying Plan - Output:" >>  deploy.txt
    - echo "" >>  deploy.txt
    - echo "\`\`\`" >>  deploy.txt
    - echo "" >>  deploy.txt
    - terragrunt apply-all --terragrunt-non-interactive --terragrunt-config terraform-global.hcl gitlabplan.plan -no-color $TG_ARGS | tee -a deploy.txt
    - echo "\`\`\`" >>  deploy.txt
    - echo "" >>  deploy.txt
    - >-
      curl -s -X POST -g -H "PRIVATE-TOKEN: ${GITLAB_TOKEN}"
      --data-urlencode "line_type=new"
      --data-urlencode [email protected]
      "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/repository/commits/${CI_COMMIT_SHA}/comments" > /dev/null
  tags:
    - docker-runner
  only:
    - /^partial-release-\d+/
  except:
    - /^v\d+.*-rc\d+/
  when: manual
  environment:
    name: OCI-Production
    url: https://cloud.oracle.com

plan-partial:
  stage: plan-partial
  script:
    - curl -s ${TGFINDMODULEURL} -o tgplanparsemodule
    - chmod +x tgplanparsemodule
    - IFS=$' '; TG_ARGS="--terragrunt-include-dir foo"; for item in `git for-each-ref --count=1 --sort=-taggerdate --format '%(contents:subject)' refs/tags`; do TG_ARGS="$TG_ARGS --terragrunt-include-dir $item" ;done; export TG_ARGS="$TG_ARGS"
    - echo "TG ARGS-> $TG_ARGS"
    - terragrunt plan-all --terragrunt-non-interactive --terragrunt-config terraform-global.hcl -out=gitlabplan.plan $TG_ARGS
    - ./tgplanparsemodule ${CI_PROJECT_DIR} | tee plan.txt
    - >-
      curl -s -X POST -g -H "PRIVATE-TOKEN: ${GITLAB_TOKEN}"
      --data-urlencode "line_type=new"
      --data-urlencode [email protected]
      "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/repository/commits/${CI_COMMIT_SHA}/comments" > /dev/null
  tags:
    - docker-runner
  only:
    - /^partial-release-\d+/
  except:
    - /^v\d+.*-rc\d+/
  environment:
    name: OCI-Production
    url: https://cloud.oracle.com


For the gitlab CI/runner part:
https://www.czerniga.it/2021/11/14/how-to-install-gitlab-using-docker-compose/
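In the same spirit as that guide, a minimal compose layout for the instance plus a runner might look like the sketch below (hostname, ports, and volume paths are placeholders, and the runner still needs a one-time registration against the instance):

```yaml
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    hostname: gitlab.example.local   # placeholder
    ports:
      - "8080:80"
      - "2222:22"
    volumes:
      - ./gitlab/config:/etc/gitlab
      - ./gitlab/logs:/var/log/gitlab
      - ./gitlab/data:/var/opt/gitlab

  gitlab-runner:
    image: gitlab/gitlab-runner:latest
    volumes:
      - ./runner/config:/etc/gitlab-runner
      # Docker executor: the runner talks to the host's Docker daemon
      - /var/run/docker.sock:/var/run/docker.sock
```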

For your build environments, if I got it right:
https://docs.espressif.com/projects/esp-idf/en/latest/esp32/api-guides/tools/idf-docker-image.html


So think one-man, two-man shop. I work on projects; sometimes I have helpers and so on, but it’s very small and kind of loosely structured. Honestly, when I have a problem, I’m going to work on it until I fix it; I’m going to keep building it over and over again until I am satisfied. My biggest problem is when I introduce friends to help me, or say at work I have a co-worker working on the same thing: then it becomes merge hell. What I’m really getting at is that I need to be able to manage builds, manage continuous building and continuous integration, and improve my quality of life. And I am having a hard time figuring out exactly where to start, so to speak. How much do I need, versus okay, maybe I only need this one bit automated? That’s why I think I need to jump into it.

Does that make more sense? Kind of what I’m looking at?

Thanks. Yeah, you did get it correct for Espressif. As for the ST Micro, that’s very much just GCC ARM, about as vanilla as you get for Cortex-M. It’s always something I appreciate about ST.

Someone gave me some advice that maybe docker isn’t the best and I should go with LXC due to its superior isolation from the system, but I don’t know how true that is.

I really do just have to jump into it, get things running, and test things out to see how it goes. Ideally, the system will build, catch any errors, and throw those back like you were saying. As for deployments, I don’t intend to have my microcontrollers hooked to the GitLab server all of the time to receive the deployed builds. Ideally I’d like to be able just to emulate the build and see if it successfully works, rather than deploying to actual hardware until I have a release candidate, so to speak. Does that make more sense?

I’m about to head to bed here soon so I’ll hit it more when I have more rest.

If you were running your binaries in a production environment and were worried about a container misbehaving and stopping a production line, maybe. But for a build environment, gitlab has native support for docker; with LXC you need to go into expert mode.
I would suggest small steps that can gradually help your quality of life instead of humongous plans that will cause more grief on top of what you already have …

The first thing I would suggest is, even before the automated builds, that you structure your processes around using git for handling code changes, releases, branches and such

If you try building a CI env without having a proper process for how you work on the code, again, doable, but it will be hell …


@MadMatt why CMake? Usually the alternative is using whatever IDE the microcontroller vendor provides as the build system, which boils down to either heavily modded Eclipse or, in one case, forked NetBeans. Neither of which, I believe, can easily run in a headless container.

@PhaseLockedLoop now, on to other stuff.

GitLab Runner in LXC

I second this: if you set the runner to the Docker executor, it will want to manage its own containers, and it’s better to have a layer of isolation between it and your other containers. In theory, it should be fine, but in practice, it’s better safe than sorry. To be clear, the setup I’m advising is: host with LXC, then GitLab Runner and Docker both installed inside the LXC. If that doesn’t work, a VM might be a good solution, but it’s wasteful with respect to RAM.

Also, be aware that GitLab runner doesn’t really clean up after itself: Smart cache cleanup for Docker images & volumes (#27332) · Issues · GitLab.org / gitlab-runner · GitLab, although there’s workarounds in the comments to that issue.

Building for ST

The docker image is… simple. Very simple. You install what you require from the distribution repositories (I usually default to Debian), and then install ARM’s GCC. While not strictly necessary, I prefer to use ARM’s official GCC build over the distro’s. It’s outdated, but the AUR package should give you a clue how to do it.
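As a rough sketch (untested, and using Debian’s packaged cross compiler rather than ARM’s own build, for brevity), such an image could be as small as:

```dockerfile
FROM debian:bookworm-slim

# Toolchain plus the usual CI basics; swap gcc-arm-none-eabi for ARM's
# official tarball if you want their build instead of the distro's
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        git \
        cmake \
        ninja-build \
        gcc-arm-none-eabi \
        libnewlib-arm-none-eabi \
    && rm -rf /var/lib/apt/lists/*
```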

The hard part will be getting the CMake toolchain file right, but I can help with it.
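For reference, the usual skeleton of a bare-metal arm-none-eabi toolchain file looks roughly like this (the CPU flags here assume a Cortex-M4 with hard-float and must match your actual part):

```cmake
# Bare metal: no OS on the target
set(CMAKE_SYSTEM_NAME Generic)
set(CMAKE_SYSTEM_PROCESSOR arm)

set(CMAKE_C_COMPILER arm-none-eabi-gcc)
set(CMAKE_CXX_COMPILER arm-none-eabi-g++)
set(CMAKE_ASM_COMPILER arm-none-eabi-gcc)

# Compile test programs as static libraries; there is nothing to link against
set(CMAKE_TRY_COMPILE_TARGET_TYPE STATIC_LIBRARY)

# Example flags for a Cortex-M4F; adjust per target MCU
set(CMAKE_C_FLAGS_INIT "-mcpu=cortex-m4 -mthumb -mfloat-abi=hard -mfpu=fpv4-sp-d16")

# Never pick up host programs' libraries or headers
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
```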

Other thoughts on Docker images

The obsession with small images and Alpine doesn’t IMO hold much water for CIs, because if you use the same base image, it won’t matter much. Docker is smart like this, with reusing the already downloaded layers. And, personally, I’m cagey about MUSL - a lot of software simply isn’t tested with it.


As an example, here’s my Rust CI Dockerfile, you can see it’s relatively simple.

# syntax=docker/dockerfile:1.4

ARG RUST_VERSION=1.59

FROM rust:${RUST_VERSION}-bullseye

RUN apt-get update && apt-get upgrade -y \
    && apt-get install -y --no-install-recommends \
        gcc-aarch64-linux-gnu \
        g++-aarch64-linux-gnu \
        binutils-aarch64-linux-gnu \
        libc6-dev-arm64-cross \
    && apt-get autoremove -y \
    && apt-get clean -y \
    && rm -rf /var/lib/apt/lists/*

RUN rustup target install \
        aarch64-unknown-linux-gnu \
        x86_64-unknown-linux-gnu \
    && rustup toolchain install nightly \
    && rustup component add clippy

RUN cargo install --locked \
        cargo-deny \
        cargo-hack \
        cargo-nextest \
        cargo-udeps \
    && rm -rf $CARGO_HOME/git $CARGO_HOME/registry \
    && strip $CARGO_HOME/bin/*

ENV \
    CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER=aarch64-linux-gnu-gcc \
    CC_aarch64_unknown_linux_gnu=aarch64-linux-gnu-gcc \
    CXX_aarch64_unknown_linux_gnu=aarch64-linux-gnu-g++

It wasn’t me, @PhaseLockedLoop said he needed it :slight_smile:


This is what I would recommend.

The production build should not contain development toolchains; only the bare minimum required to run an OS and your binary.

You maintain a preconfigured docker image to run your builds, but your production image is minimalist.

Using a ‘Golden Image’ is totally fine; people do it all the time. It saves massive amounts of time.

For example, in your pipeline you might have one task that rebuilds your image daily and uploads it to the registry; the image build script updates the base OS, pulls all deps, and gets everything ready to compile code from source control.

Then in your CI pipeline you pull your latest image and run some tests with your MR, instead of having to do all those aforementioned steps for each run.

That said, for a small project this is sort of overkill if you do not plan to be building things constantly. On the other hand, developer time is expensive and you do not want devs to sit on their thumbs waiting for builds to finish.

Having worked with larger global teams, this is the method we use and it works well for our needs.

If the setup is identical except for the arches, then it should be a straightforward task.


As far as I understand, the OP wants to streamline the build of artefacts for embedded microcontrollers (e.g. firmware files), so the final builds will not be containers/images that need additional infrastructure to work.

That said, even in the microcontroller environment, this means having a framework in place to disable debug/serial/whatnot outputs when building a final firmware …


You’re pretty spot on. And so is jaski.

I should have been more clear, and I’m sorry about that. It was late last night; I’ve been busy out-processing at work and it’s just been hectic. But yes, essentially what I really want this to do is help my iteration time and my build management. I want it to be able to catch some of the human errors that I make, and I want to know whether the build is successful or not. Deployment isn’t really necessary (continuous deployment, that is). Continuous integration possibly, but at this point I don’t quite need that. What I really need is solid build management: automated building that tells me what the errors were and helps my iterations. I don’t want to add too many extra steps, because if I add them before they are needed, I’m needlessly complicating my project, so to speak.

I’m definitely going to try and get started on it this weekend. It’s just I have a lot to do right now this week :laughing::+1:

I truly appreciate the input from @jaskij. It definitely helps me frame my mind around the kind of approach I need to be making as well.

Honestly, at this point @PhaseLockedLoop , I wouldn’t focus on CI - especially if you’re working solo or near-solo.

Focus on setting up your development environment so that you can build your firmware with a single command in CLI. Single command build is what you really need. And document the process - whatever you end up learning will be valuable lessons and information you will need to build your automated builds anyway.

That’s general advice for creating any sort of automated build anyway: start out doing it manually. Rough steps look like this:

  1. Make it buildable manually in CLI on your host
  2. Move all those steps to .gitlab-ci.yml, including installing toolchains and what not (yes, this will be woefully inefficient)
  3. Identify which steps you can move to your Dockerfile, and do so
  4. Enjoy
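As an illustration of step 2, an intentionally naive ci file that installs the toolchain on every run might look like this (package names and paths are assumptions; this is the "woefully inefficient" stage whose install lines later migrate into a Dockerfile in step 3):

```yaml
build:
  image: debian:bookworm-slim
  script:
    # These install lines eventually become RUN lines in a Dockerfile
    - apt-get update && apt-get install -y cmake ninja-build gcc-arm-none-eabi
    - cmake -B build -G Ninja -DCMAKE_TOOLCHAIN_FILE=cmake/arm-gcc.cmake
    - cmake --build build
  artifacts:
    paths:
      - build/*.elf
```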

Once you get a grip on that, you can move on to more ambitious things. I once made it so that GitLab CI would build the binary, then program the physical device in our office and run integration tests against it. The hardest part was that the debugger kept resetting, and figuring out a stable passthrough using libvirtd took some doing.


I know you generally prefer Neovim as your editor, but there are two IDEs I can recommend which actually have good support for working with CMake and microcontrollers, both with good Vim emulation:

  • CLion. Paid, but relatively easy to set up. As a bonus, you can use your private license at work.
  • QtCreator. Free, but takes some pain to set up. It’s been over two years since I used it, so sadly I can’t help you with this one.

Another thing: all your code must build with -Wall -Werror, preferably with -Wpedantic too. Period. No excuses. As you probably know, good, safe coding in both C and C++ takes years of building good habits. Those flags help you learn and enforce some of them.
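In CMake terms, one way to enforce this project-wide is something like the following (flag names assume GCC or Clang):

```cmake
# Applies to every target declared after this line; scope it with
# target_compile_options() instead if vendor code won't build cleanly
add_compile_options(-Wall -Wextra -Wpedantic -Werror)
```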

You could also look into finding a decent static analyzer (clang?) and integrating it into your CMakeLists.txt, so you can easily fire it in automated builds. But it’s often a major pain, because vendor libraries tend to be shit quality. Previous place I worked, we came across at least two different bugs in ST’s code.
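If you go the clang-tidy route, CMake can run it on every compile with a couple of lines like these (a sketch; you would likely restrict it to your own targets to dodge the vendor-library noise):

```cmake
find_program(CLANG_TIDY_EXE NAMES clang-tidy)
if(CLANG_TIDY_EXE)
  # Runs clang-tidy alongside each compiler invocation
  set(CMAKE_C_CLANG_TIDY "${CLANG_TIDY_EXE}")
  set(CMAKE_CXX_CLANG_TIDY "${CLANG_TIDY_EXE}")
endif()
```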


Yeah, build it in your dev environment first. Then try docker run <whatever distro image> and run the build script there. See what dependencies you are missing. Time how long it takes to install the dependencies. If it’s not a whole lot that needs to be installed, and it installs fast, you do not need to build a custom docker image. Having a custom image with the stuff already installed saves some time in CI runs, but it also creates an extra thing you now have to maintain to some degree, and an extra step when changing versions of build tools.

Very often there are also official images for xyz language available. So that’s an option too.

If not, try without one. You will see whether you need to build an image once it’s running, if it takes too much time. And then you already have all the commands, which you can paste into a Dockerfile as RUN commands, so your time wasn’t wasted.


Mostly, it’s downloading the whole GCC toolchain. ARM’s archive right now weighs in at just under 500 MiB.

I don’t think ARM has any official Docker images for cross compiling. That said, instead of starting from a plain distro image, I highly recommend starting off with Docker’s official buildpack-deps image, probably the -scm or -curl one. The Dockerfiles are also a good reference for how to install files from a distro repository.


Out of curiosity, is there any reason for that, or is it just a personal preference?