Containers not cleaned up.
danielvandenberg95 opened this issue
After using this runner for a while, I'm seeing an excessive number of leftover containers, using up about 100 GB of disk space. I'm running the runner as a Docker Swarm stack with the following compose file:
version: "3"
services:
  gitea_act_runner:
    image: vegardit/gitea-act-runner:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:rw
      - /mnt/DockerData/gitea/runners/1:/data:rw # the config file is located at /data/.runner and needs to survive container restarts
    environment:
      TZ: "Europe/Berlin"
      # config parameters for initial runner registration:
      GITEA_INSTANCE_URL: myurl
My workflow is as follows:
name: Test and Build
on: [push]
jobs:
  Run-Tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install Node.js
        uses: actions/setup-node@v1
        with:
          node-version: '18.x'
      - name: Install Yarn
        run: npm install -g yarn
      - name: Install dependencies as in yarn.lock
        run: yarn install --frozen-lockfile
      - name: Run tests
        run: yarn run jest
  Build:
    runs-on: ubuntu-latest
    needs: Run-Tests
    # Run the command build_and_push.sh
    steps:
      # Install docker-compose
      - name: Install docker-compose
        run: sudo apt-get update && sudo apt-get install -y docker-compose
      - uses: actions/checkout@v2
      - name: Build and push
        run: sudo ./build_and_push.sh
where build_and_push.sh is the following:
#!/bin/bash
# retry the build up to 10 times; push only once a build has succeeded
(for n in $(seq 1 10); do docker-compose build --parallel && break; done) && docker-compose push
I currently run docker container prune manually as a workaround.
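For now the prune could also be scheduled on the host instead of run by hand; a minimal sketch, assuming root cron on the node that runs the runner (the 24h filter and the file path are my own arbitrary choices, not something from this setup):

# /etc/cron.d/docker-container-prune (hypothetical path)
# remove stopped containers older than 24 hours, every night at 03:00
0 3 * * * root docker container prune --force --filter "until=24h"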
Is this an issue with the Docker wrapper, or with the act runner in general? If the latter, I'll raise the issue there.
This sounds like a bug in the act_runner binary itself. You should report it at https://gitea.com/gitea/act_runner/issues
Apparently it's docker-compose:
https://stackoverflow.com/questions/36808476/why-docker-build-image-from-docker-file-will-create-container-when-build-exit-in
In short, docker build leaves the intermediate container of a failed build step behind so you can inspect it, meaning every failed docker-compose build in my retry loop accumulates another stopped container. Thanks for the redirect though; they pointed me in this direction.
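Based on that answer, passing --force-rm to the build should keep failed retries from leaving intermediate containers behind; a minimal sketch of the adjusted script (assuming docker-compose v1, whose build command accepts --force-rm; I haven't verified this against this exact setup yet):

#!/bin/bash
# retry the build up to 10 times; --force-rm removes intermediate
# containers even when a build step fails, so failed attempts no
# longer accumulate stopped containers on the runner host
(for n in $(seq 1 10); do docker-compose build --parallel --force-rm && break; done) && docker-compose push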