+++
title = "Ugh Forgejo Actions"
description = "Forgejo Stuff lol idk who knows"
date = 2024-02-10

[taxonomies]
tags = ["admin", "ci", "docker", "forgejo", "actions"]
+++
Over the last few days I've figured out how to use Forgejo Actions. I was excited to try it since it's integrated directly into Forgejo these days, and compatibility with github Actions means there are already loads of third-party actions to take advantage of. Previously I was using Drone CI, and I still am in a number of projects, but I'm hoping to get everything migrated over the next week or so.
But that's not why I'm writing this blog post. I'm writing this blog post because I haven't found very many useful resources for understanding how Actions works, despite the prevalence of github Actions. All the documentation makes it look so simple to use, and in many cases I'm sure it is, but when you're running it yourself, you'll find the edges.
## Getting Started
Setting up Forgejo Actions isn't a huge deal. I'm running it via docker-compose, but it can be run natively or in kubernetes or with lxc (etc). The one crucial thing though is it needs access to a docker daemon in order to operate. Giving it a Docker in Docker container is fine. Here's my DinD section:
```yaml
services:
  docker-in-docker:
    image: docker:dind
    privileged: true
    command: ["dockerd", "-H", "tcp://0.0.0.0:2375", "--tls=false"]
    restart: always
```
Simple enough? Now we're going to get a little more complicated with the Runner configuration. The runner needs to be brought up in 2 steps. The first step registers the runner with Forgejo and creates the runner configuration file. The second step launches the runner process. This is unfortunately not as simple as launching the container, but it isn't too bad.
```yaml
services:
  # docker-in-docker: ...

  forgejo-runner-1-register:
    image: code.forgejo.org/forgejo/runner:3.3.0
    links:
      - docker-in-docker
    environment:
      DOCKER_HOST: tcp://docker-in-docker:2375
    volumes:
      - /storage/forgejo-actions/runner-1:/data
    user: "0:0"
    command: >-
      bash -ec '
      if [ -f config.yml ]; then
        exit 0 ;
      fi ;
      while : ; do
        forgejo-runner register --no-interactive --instance https://git.asonix.dog --name bluestar-runner-1 --token TOKEN && break ;
        sleep 1 ;
      done ;
      forgejo-runner generate-config > config.yml ;
      sed -i -e "s|network: .*|network: host|" config.yml ;
      sed -i -e "s|labels: \[\]|labels: \[\"docker:docker://bash:alpine3.19\"\]|" config.yml ;
      chown -R 1000:1000 /data
      '
```
This is the first step. If there isn't an existing configuration file, the script loops attempting to register the runner with Forgejo, and once that succeeds it writes the configuration file and tweaks a couple of values. This can be more-or-less copied verbatim, with the exception of `TOKEN`, which needs to be copied from the Forgejo actions admin panel. We'll come back to the `config.yml` file later. Next up we actually run the runner.
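For reference, here's a sketch of the parts of the generated `config.yml` that those `sed` lines touch. This is abridged and based on runner 3.x; exact field names and defaults may vary by version, so check your own generated file.

```yaml
# Abridged sketch of `forgejo-runner generate-config` output; only the
# fields the sed lines rewrite are shown.
runner:
  labels: []     # rewritten to ["docker:docker://bash:alpine3.19"]

container:
  network: ""    # rewritten to host
```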
```yaml
services:
  # docker-in-docker: ...
  # forgejo-runner-1-register: ...

  forgejo-runner-1-daemon:
    image: code.forgejo.org/forgejo/runner:3.3.0
    links:
      - docker-in-docker
    environment:
      DOCKER_HOST: tcp://docker-in-docker:2375
    depends_on:
      forgejo-runner-1-register:
        condition: service_completed_successfully
    volumes:
      - /storage/forgejo-actions/runner-1:/data
    command: "forgejo-runner --config config.yml daemon"
    restart: always
```
A lot less going on. We let the Forgejo runner access our Docker-in-Docker daemon and launch it after the registration container finishes. Off to a great start.
Using my basic "make sure it works" action that I cobbled together after reading some of the documentation, we can make sure the runner works:
```yaml
on:
  pull_request:
  push:
    branches:
      - main
    tags:
      - "v*.*.*"

env:
  BINARY: example

jobs:
  test:
    runs-on: docker
    strategy:
      matrix:
        info:
          - arch: amd64
          - arch: arm64v8
          - arch: arm32v7
    steps:
      # We can run multiple bash commands, and for each item in our matrix!
      - run: env
      - run: echo "${{ matrix.info.arch }} Good"
  test2:
    runs-on: docker
    container:
      # we can override the docker image, how fancy
      image: debian:bookworm-slim
    steps:
      - run: echo "Hello, debian"
  test3:
    runs-on: docker
    container:
      image: docker.io/asonix/rust-builder:latest-linux-arm32v7
    steps:
      # We can even compile rust code! It's amazing!
      - run: cargo init --bin --name $BINARY
      - run: build
```
This all runs successfully when a branch or a tag matching `v*.*.*` is pushed. We did it! We're done!
Let's add it to an existing project (say, pict-rs)
```yaml
on:
  push:
  pull_request:
    branches:
      - main

jobs:
  clippy:
    runs-on: docker
    container:
      image: docker.io/asonix/rust-builder:latest-linux-arm32v7
    steps:
      -
        name: Checkout pict-rs
        uses: actions/checkout@v4
      -
        name: Clippy
        run: |
          cargo clippy --no-default-features -- -D warnings
          cargo clippy --no-default-features --features io-uring -- -D warnings
```
And run it!
```
OCI runtime exec failed: exec failed: unable to start container process: exec: "node": executable file not found in $PATH: unknown
```
Oh.
...what?
Hmm...
## The Problems
So `actions/checkout@v4` depends on node to run, but my rust builder container doesn't have node in it, so... I can't check out my code? Well, let's just split this up a bit, then.
```yaml
on:
  push:
  pull_request:
    branches:
      - main

jobs:
  clone:
    runs-on: docker
    container:
      image: docker.io/node:20-bookworm
    steps:
      -
        name: Checkout pict-rs
        uses: actions/checkout@v4
  clippy:
    needs: [clone]
    runs-on: docker
    container:
      image: docker.io/asonix/rust-builder:latest-linux-amd64
    steps:
      -
        name: Clippy
        run: |
          cargo clippy --no-default-features -- -D warnings
          cargo clippy --no-default-features --features io-uring -- -D warnings
```
Except that doesn't work. The cloned repo doesn't stick around between jobs. If we had more than one runner, the jobs might not even run on the same one! We could try solving this with artifacts, but wait... `actions/download-artifact@v4` also depends on node, so we can't run it in the rust-builder container.
So we have Actions, but we can't use them. How does github handle this? Well, github's answer is to install anything you could ever need into their default actions containers. Meaning node, go, python, ruby, docker (oh, docker... we'll need that too) and more are all bundled into a 60GB image. If you remember, we had a script that ran `sed` on the config earlier...
```shell
sed -i -e "s|labels: \[\]|labels: \[\"docker:docker://bash:alpine3.19\"\]|" config.yml ;
```
We were setting a value in the Forgejo runner's config to provide our runners with a default container. We started with `bash` on `alpine`. While that particular container is very small and doesn't take long to download, it doesn't contain the majority of things that actions expect to exist.
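Those `sed` rewrites can be sanity-checked in isolation against a stand-in config file. The two fields below are the only assumption; they mirror the shape of the generated config:

```shell
# Create a minimal stand-in for the generated config.yml.
cat > /tmp/config-demo.yml <<'EOF'
container:
  network: ""
runner:
  labels: []
EOF

# Apply the same rewrites the registration script performs.
sed -i -e 's|network: .*|network: host|' /tmp/config-demo.yml
sed -i -e 's|labels: \[\]|labels: \["docker:docker://bash:alpine3.19"\]|' /tmp/config-demo.yml

# Show the result: network becomes host, labels gains the default container.
cat /tmp/config-demo.yml
```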
In order to build pict-rs we need rust, and not just any rust but a rust that is capable of cross-building (I target armv7, aarch64, and x86_64 with musl libc). In order to clone pict-rs, use the actions cache, use the actions artifacts, and more, we need nodejs. In order to build docker containers we need docker and qemu. The container image we need to have or make to run our CI is now nontrivial.
As a proof of concept, I first wrote my actions to install everything by hand. I started with the `node:20-bookworm` image from dockerhub. I had steps to apt-get install docker, download rustup and install it, add the proper targets, add clippy, add cargo-binstall, `cargo binstall cargo-zigbuild`, and install zig. While this worked, it took a while just in the setup phase, which isn't what we want for CI.
My previous CI for pict-rs used my rust-builder image, which doesn't actually use cargo-zigbuild. I opted to use zig's linker for pict-rs in Forgejo Actions because I knew it would be easier than manually constructing a cross-compile environment. It also lets me use a single container to build for any platform, rather than my previous CI's unique container for each platform I targeted.
So I wrote a bit of caching. I had used an action to install zig, and that action cached zig on the runner. I wrote my own caching layer for all the rustup and cargo bits. That sped things up as well, but it still meant using space in the runner cache, and potentially installing everything again on a cache miss. In this process, I also hit the github rate limit for downloading cargo-zigbuild via cargo-binstall, meaning I had to start compiling it from crates.io instead (which doesn't take too long, but it's still longer than downloading a binary).
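Stripped of the action plumbing, that caching layer boils down to "tar up the cargo directories, restore them on the next run". Here's a toy sketch of that save/restore cycle; the `/tmp` paths are purely illustrative, not what my actual action uses:

```shell
# Toy cache save/restore cycle; paths are illustrative stand-ins.
cache=/tmp/demo-cache
cargo_home=/tmp/demo-cargo-home

# Pretend a previous job populated CARGO_HOME.
mkdir -p "$cargo_home/registry"
echo "some-crate-1.0.0" > "$cargo_home/registry/contents"

# Save step: archive it (in real life, keyed by something like a Cargo.lock hash).
mkdir -p "$cache"
tar -C "$cargo_home" -czf "$cache/cargo.tar.gz" .

# Simulate a fresh runner with an empty CARGO_HOME.
rm -rf "$cargo_home"
mkdir -p "$cargo_home"

# Restore step: unpack on a cache hit; a miss falls through to a cold download.
if [ -f "$cache/cargo.tar.gz" ]; then
  tar -C "$cargo_home" -xzf "$cache/cargo.tar.gz"
fi

cat "$cargo_home/registry/contents"
```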
As an aside, I had to set `DOCKER_HOST: tcp://docker-in-docker:2375` as an environment variable in the runner's `config.yml` file so that my use of docker in the actions container would find my docker-in-docker daemon.
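Concretely, that means adding it under the runner's `envs` map (that's the field name in the generated config for runner 3.x; double-check against your version):

```yaml
# Environment variables the runner injects into job containers.
runner:
  envs:
    DOCKER_HOST: tcp://docker-in-docker:2375
```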
## Giving In
I decided I needed a universal base container image to run my CI the way github does it, because it's the only way that makes sense for their CI system. If all the actions people write are going to expect me to have things installed, then I better have them installed. You can find the actions workflow I use to produce my base image in my actions-base-image repository. I'm sure that in the future I will encounter more actions that fail to run on this image and I will need to update it to add more dependencies.
I also wrote another caching action, simpler now than before since all the rust, zig, and docker bits are baked into the base image. You can find it here, in the cache-rust-dependencies folder. It's extremely basic but saves me a download from crates.io for each job pict-rs runs.
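I won't reproduce the real action here, but for the shape of the thing: a caching composite action can be as little as an `action.yml` with a couple of shell steps. Everything below is a hypothetical sketch; the names, paths, and mount point are made up, not the contents of my actual action:

```yaml
# Hypothetical action.yml sketch, NOT the actual cache-rust-dependencies action.
name: cache-rust-dependencies
description: Restore cached cargo bits from a runner-local directory
runs:
  using: composite
  steps:
    - name: Restore cargo registry
      shell: bash
      run: |
        # Assumes a persistent directory mounted into the job container.
        if [ -f /cache/cargo-registry.tar.gz ]; then
          tar -C "$HOME/.cargo" -xzf /cache/cargo-registry.tar.gz
        fi
```

Saving the archive back would then be an explicit step at the end of the job, since this sketch only handles the restore side.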
## So?
Am I happy with Forgejo Actions? Not really. I think the design of Actions is pretty bad for the Forgejo Actions case where you control the runners yourself. On github it's fine, since github manages the 60GB image behind the scenes where you never need to think about it. Outside of github it's less ideal. I'm still going to migrate all my projects to it now that I have everything working and a not-too-big base image for myself (it's 1GB).
I hope anyone else struggling with Actions (github, gitea, forgejo, or otherwise) gains some insight from this post. It's not like I built this blog in the first place to be able to put this online or anything. Let's hope now that I've written all this that I don't need to update my base image to publish this to garage.