
Docker patterns

This recipe is about using Docker from inside a RunWisp task — e.g. running an analytics job in a one-shot container, pruning unused images, or pre-pulling the next deploy’s image.

If you’re looking for “how do I run RunWisp itself in a container,” go to Deploy with Docker instead.

Pattern 1: one-shot containers

The cleanest case. RunWisp fires a task; the task runs a container that does its thing and exits.

[tasks.crunch-numbers]
group = "Analytics"
description = "Nightly aggregation of yesterday's events"
cron = "0 4 * * *"
on_overlap = "skip"
timeout = "2h"
keep_runs = 60
notify_on_failure = ["slack-ops"]
run = """
set -euo pipefail
docker run --rm \\
  --env-file=/etc/analytics/secrets.env \\
  --network=internal \\
  --memory=2g \\
  ghcr.io/example/analytics:current \\
  --date="$(date -u -d 'yesterday' +%Y-%m-%d)"
"""
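One portability note on the run block above: date -d 'yesterday' is GNU-specific; BSD and macOS date spell the same thing -v-1d. A sketch that tries both:

```shell
# Compute yesterday's date (UTC) portably: GNU date first, BSD date as a
# fallback. The output format matches the --date flag used above.
yesterday=$(date -u -d 'yesterday' +%Y-%m-%d 2>/dev/null \
  || date -u -v-1d +%Y-%m-%d)
echo "$yesterday"
```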

The non-obvious bits, beyond --rm (which keeps exited containers from piling up on disk):

Bound the container’s resource use. RunWisp itself is RAM-frugal (~22 MB at idle); a runaway analytics job that eats all available memory will OOM-kill the daemon along with itself if you don’t cap it.

Putting secrets on the docker run command line leaks them into ps-readable form, into the daemon’s run log, and into journald. --env-file keeps them in a file you can chmod 0600 and audit separately.
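Setting that file up might look like the following sketch. The production path matches the task above; the ENV_FILE variable and the placeholder values are illustration only.

```shell
# Create the secrets file with owner-only permissions before the task's
# first run. Values here are placeholders, not real secrets.
ENV_FILE=${ENV_FILE:-./secrets.env}   # in production: /etc/analytics/secrets.env
umask 077                             # new files land as 0600
printf '%s\n' \
  'DB_PASSWORD=change-me' \
  'API_TOKEN=change-me' > "$ENV_FILE"
chmod 0600 "$ENV_FILE"
```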

Make sure the container can reach what it needs (your database, internal service mesh) without granting it general internet access unless the task actually needs it. The internal network is defined in your docker network create setup; pick whatever name you’ve chosen.
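One-time host setup for that network could look like this sketch; ensure_network is a helper name invented here, and --internal is Docker's flag for blocking outbound internet while keeping container-to-container traffic working.

```shell
# Create the named network only if it doesn't already exist, so this
# setup step is safe to re-run.
ensure_network() {
  name=$1
  docker network inspect "$name" >/dev/null 2>&1 \
    || docker network create --internal "$name"
}

# ensure_network internal
```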

Pattern 2: pulling the next image ahead of time


A “pre-pull” task that warms the local image cache so a deploy is fast. Pair with the deploy-hooks recipe.

[tasks.docker-prefetch]
group = "Deploys"
description = "Pull the latest production image so deploys are quick"
cron = "*/30 * * * *"
on_overlap = "skip"
timeout = "10m"
keep_runs = 30
# No failure alerts — a missed prefetch is not interesting.
run = """
set -euo pipefail
docker pull ghcr.io/example/app:current
docker pull ghcr.io/example/worker:current
"""

A failure here is harmless (the next deploy just pays the pull cost), so we deliberately skip notifications.
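If your registry is occasionally flaky, you can keep the task quiet but make each pull a little more stubborn. A sketch; pull_with_retry is a helper invented for this example:

```shell
# Retry a pull up to three times with a short backoff before giving up;
# a final failure still exits non-zero so the run is recorded as failed.
pull_with_retry() {
  img=$1
  for attempt in 1 2 3; do
    docker pull "$img" && return 0
    sleep "$attempt"
  done
  echo "giving up on $img" >&2
  return 1
}

# pull_with_retry ghcr.io/example/app:current
# pull_with_retry ghcr.io/example/worker:current
```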

Pattern 3: pruning on a schedule

Disks fill up when nobody's looking. Schedule a periodic prune.

[tasks.docker-prune]
group = "Maintenance"
description = "Reclaim disk: dangling images, stopped containers, unused volumes"
cron = "0 5 * * 0" # Sunday 05:00
on_overlap = "skip"
timeout = "30m"
notify_on_failure = ["slack-ops"]
run = """
set -euo pipefail
# Dangling images and stopped containers — always safe.
docker image prune --force
docker container prune --force
# Volumes — only those literally not referenced anywhere.
# DOUBLE-CHECK this in your environment before enabling.
docker volume prune --force --filter 'label!=keep'
# Show what we ended up with.
df -h /var/lib/docker
"""

The --filter 'label!=keep' exception lets you tag any volume you want preserved with docker volume create --label keep my-precious-volume. Test this against your dev environment first; docker volume prune is irreversible.

When RunWisp itself runs in Docker

Two extra rules apply when the daemon runs inside Docker:

RunWisp’s container needs /var/run/docker.sock mounted in to talk to the host’s Docker daemon:

services:
  runwisp:
    image: ghcr.io/runwisp/runwisp:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - runwisp-data:/data
    # …

This grants the RunWisp container root-equivalent access to the host Docker daemon — anything that can talk to that socket can escape its container. Treat the RunWisp host’s auth surface accordingly.

The daemon’s container needs the docker binary on PATH. The prebuilt RunWisp image has it pre-installed; if you’re rolling your own image, install it explicitly:

RUN apk add --no-cache docker-cli

The CLI is a thin client over the socket — no extra daemon needed.

If your tasks need to be on a particular Docker network, you have two choices:

  1. Run the launching container on that network. The docker run invocations inherit it.
  2. Pass --network explicitly in every docker run call. More typing but less surprise.

For one-shot tasks, prefer #2 — it’s explicit at the call site.

Don’t reinvent supervisord inside a task


A common antipattern:

# DON'T DO THIS
[tasks.run-worker-forever]
run = "docker run --rm ghcr.io/example/worker:current"
cron = "* * * * *" # restart every minute if dead??

If you want “run forever, restart on exit,” that’s what [services.*] is for. A long-running container is just a long-running shell command:

[services.worker]
description = "Long-running queue worker; supervised by RunWisp"
restart_delay = "2s"
restart_backoff = "exponential"
run = """
exec docker run --rm \\
  --env-file=/etc/worker/.env \\
  --network=internal \\
  ghcr.io/example/worker:current
"""

The exec is important — without it, the shell stays alive after docker run and the SIGTERM that RunWisp sends on shutdown lands on the shell, not the container. With exec, the shell process is replaced and signals propagate cleanly.
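You can see the mechanism without Docker at all. A minimal demonstration that exec preserves the PID, so whatever RunWisp signals is the real workload rather than a wrapper shell:

```shell
# The outer shell records its own PID, then execs a new shell that prints
# both the recorded value and its own PID. exec replaces the process image
# without forking, so the two numbers come out identical.
pids=$(sh -c 'first=$$; exec sh -c "echo $first \$\$"')
echo "$pids"
```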