Welcome @michel! 
TL;DR: I managed to start a GitLab runner in a way that starts an Actyx node, but it feels a bit hacky. Still working on allowing a container to use that node, though.
Probably wrapping your app into an image containing Actyx and running it as a background process is a better approach.
EDIT: Added PoC for wrapping your app + Actyx into a single image
EDIT: Added workaround for data directory mount.
Simply including `actyx/os` as a service in GitLab CI failed with the following errors:
```
2021-03-30T16:52:21.958924178Z ip: RTNETLINK answers: Operation not permitted
2021-03-30T16:52:21.960238952Z Cannot start ActyxOS due to the following errors:
2021-03-30T16:52:21.960253243Z The container is not running in --privileged mode
2021-03-30T16:52:21.960257421Z /data does not exist or is mounted readonly. Please attach a persistent volume to that position, e.g. '-v /mnt/home/actyx/:/data'. Check the docs for further information.
2021-03-30T16:52:21.960261853Z /data is correctly mounted in rw mode, but doesn't look writeable.
```
So it seems we have two issues:
- Actyx needs to be able to access the network / open ports
- We need a writeable data volume
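For context, the naive attempt that produces the errors above is simply declaring the service in `.gitlab-ci.yml` — roughly like this (a sketch; the job name and script are illustrative):

```yaml
image: curlimages/curl

actyx:ping:
  services:
    - actyx/os
  script:
    - curl localhost:4457/_internal/swarm/state
```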
Separate images
1. Network
For the `actyx/os` image to be able to talk to the host's network, it needs to run in privileged mode. I started my GitLab CI runner w/ `--cap-add=NET_ADMIN` and `--privileged`:
```shell
$ docker run -d --name gitlab-runner --restart always --privileged --cap-add=NET_ADMIN \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v gitlab-runner-config:/etc/gitlab-runner \
    gitlab/gitlab-runner:latest
```
When registering the runner w/ GitLab, you also need to enable `privileged` mode using `--docker-privileged`, e.g.:

```shell
$ docker run --rm -it --privileged \
    -v gitlab-runner-config:/etc/gitlab-runner \
    gitlab/gitlab-runner:latest register --docker-privileged
```
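The register call should end up writing the corresponding setting into the runner's `config.toml`; if in doubt, you can set it there manually. A sketch of the relevant excerpt (other fields omitted):

```toml
# /etc/gitlab-runner/config.toml (excerpt)
[[runners]]
  executor = "docker"
  [runners.docker]
    privileged = true
```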
After these two steps, the `RTNETLINK answers: Operation not permitted` error is gone, but Actyx still complains about the data volume.
2. Data volume
The start script in the `actyx/os` Docker image checks `/etc/mtab` to make sure a data volume is mounted. That makes sense, as it prevents users from accidentally using ephemeral volumes and thus losing data.
When running in CI, however, ephemeral volumes are normally exactly what we want. So we need to trick Actyx in the container into accepting a local folder as data volume. We can do this by overriding the entrypoint/command to create a local folder and bind-mount it locally so it shows up in `/etc/mtab`:
```shell
docker run -it -e AX_DEV_MODE=1 --privileged \
    --entrypoint="" \
    actyx/os:1.1.2 \
    sh -c "mkdir /data && mkdir -p /tmp/actyxos && mount --bind /tmp/actyxos /data && /tmp/init.sh"
```
This can be replicated in `.gitlab-ci.yml`, see below.
After doing so, the Actyx node starts up successfully.
(probably someone from engineering is already headed to my place to ask why I'm suggesting weird hacks and to show me an elegant solution)
Config
Here’s the complete `.gitlab-ci.yml` file:
```yaml
image: curlimages/curl

stages:
  - test

actyx:ping:
  stage: test
  variables:
    AX_DEV_MODE: 1
  services:
    - name: actyx/os
      alias: actyx
      entrypoint: [""]
      command: ["sh", "-c", "mkdir /data && mkdir -p /tmp/actyxos && mount --bind /tmp/actyxos /data && /tmp/init.sh"]
  script:
    - curl localhost:4457/_internal/swarm/state
```
Actyx + App in single image
Alternatively, and probably more elegantly, we can run Actyx in the background in our application image. We’d use `supervisord` or similar for that.
Here’s a simple PoC that shows how this would work:
- We build our own Docker image based on `actyx/os` and add our application
- We call the original image’s entrypoint to run in the background and start our app
- Then we can use this image directly in GitLab, as all communication between our app and Actyx happens locally
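If you go the `supervisord` route instead of a hand-rolled entrypoint, the two processes could be declared roughly like this (a sketch — supervisord would need to be installed in the image, and the program names are assumptions; `/tmp/init.sh` is the base image's entrypoint as used below):

```ini
; supervisord.conf (sketch)
[supervisord]
nodaemon=true

[program:actyx]
command=/tmp/init.sh
priority=1

[program:app]
command=/app.sh
priority=2
```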
`app.sh` just tries to connect to the Actyx node and outputs the state of the Actyx swarm:
```shell
#!/bin/sh
nc -z localhost 8080 && echo Connected to actyx node
echo Swarm state:
curl localhost:4457/_internal/swarm/state
```
`entrypoint.sh` prepares the data folder as shown in the previous section, starts Actyx, and afterwards runs our app:
```shell
#!/bin/sh
# Make actyx accept local folders as data volume
mkdir /data
mkdir -p /tmp/actyxos
mount --bind /tmp/actyxos /data
# Start actyx node and wait for it (naively)
/tmp/init.sh & # that's the original image's entrypoint
sleep 15s
/app.sh
```
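Instead of the naive `sleep 15s`, the entrypoint could poll until Actyx actually answers. A minimal sketch, assuming the image's `nc` supports `-z` and that port 4457 is the one to wait for:

```shell
#!/bin/sh
# Poll a TCP port until it accepts connections or TIMEOUT seconds elapse.
# Usage: wait_for_port HOST PORT TIMEOUT
wait_for_port() {
  host=$1; port=$2; timeout=$3
  i=0
  while [ "$i" -lt "$timeout" ]; do
    if nc -z "$host" "$port" 2>/dev/null; then
      return 0  # port is open
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1  # timed out
}

# In entrypoint.sh, instead of `sleep 15s`:
#   /tmp/init.sh &
#   wait_for_port localhost 4457 60 || exit 1
#   /app.sh
```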
Finally, our `Dockerfile` ties it together:
```dockerfile
FROM actyx/os:1.1.2
ADD entrypoint.sh /entrypoint.sh
ADD app.sh /app.sh
ENTRYPOINT ["/entrypoint.sh"]
```
Building and running this image should result in this:
```
$ docker build . -t wrapper-poc && docker run -it --privileged wrapper-poc
 => [internal] load build definition from Dockerfile                      0.0s
 => => transferring dockerfile: 36B                                       0.0s
 => [internal] load .dockerignore                                         0.0s
 => => transferring context: 2B                                           0.0s
 => [internal] load metadata for docker.io/actyx/os:1.1.2                 0.0s
 => [1/3] FROM docker.io/actyx/os:1.1.2                                   0.0s
 => [internal] load build context                                         0.0s
 => => transferring context: 60B                                          0.0s
 => CACHED [2/3] ADD entrypoint.sh /entrypoint.sh                         0.0s
 => CACHED [3/3] ADD app.sh /app.sh                                       0.0s
 => exporting to image                                                    0.0s
 => => exporting layers                                                   0.0s
 => => writing image sha256:bdd6c33d2aa8e2fc29e05943a56af00a533e7c0929f6748d4fdc7ed8639a965c 0.0s
 => => naming to docker.io/library/wrapper-poc                            0.0s
Starting ActyxOS
***********************
**** ActyxOS ready ****
***********************
Connected to actyx node
Swarm state:
{"Ok":{"store":{"count":0,"size":0},"swarm":{"listen_addrs":["/ip4/127.0.0.1/tcp/4001","/ip4/172.17.0.3/tcp/4001","/ip4/172.26.0.1/tcp/4001"],"peer_id":"12D3KooW9u4JS9dFhSmujV1yZczuKxwUd3D9kTyFmBAFsYSXVvDL","peers":{}}}}
```