docker-compose is a surprisingly effective deployment tool
I’m using `docker-compose` to deploy Feep! Search to production. I’ve used it in a similar capacity for a few other projects, too. This feels like the wrong way to do it (seems like there really ought to be systemd units involved somehow), but in practice it works great.
The central bit is extremely straightforward:
```yaml
deploy:
  stage: deploy
  tags: [delta-wolf]  # Run on local server
  only: [main]
  script:
    - docker-compose up --detach
```
This (GitLab) CI job runs when the `main` branch is updated, using a specially tagged CI runner that executes on the production (and everything else, tbh) server (called `delta`). All it does is tell `docker-compose` to bring everything up and background it; the Docker daemon will take care of supervision, and with the appropriate configuration (`restart: always`) will make sure it comes back up when the server is rebooted.
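For reference, that restart policy goes in the compose file itself. A minimal sketch (the service name and image here are placeholders, not the actual Feep! Search configuration):

```yaml
# docker-compose.yml (sketch; "web" and its image are hypothetical names)
services:
  web:
    image: example.com/feep/web:latest
    restart: always   # Docker restarts the container on failure and after a reboot
    ports:
      - "8080:8080"
```

With `restart: always` set, the Docker daemon itself acts as the process supervisor, which is why no systemd units end up being necessary.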
There are a few more complications—in particular, while I could deploy directly out of the CI build directory (and I don’t think this would actually cause any issues), that seems like a somewhat fraught way to do it; so instead I have a clone of the repo at a more stable path and merge the updates into there before deploying. However, this necessitates a setup step and some checks to make sure things don’t wind up going off the rails. I also split `pull`/`build` into a separate step: the `up` command will do them automatically, but having a separate step makes it easier to follow what’s going on.
Putting that all together, here’s the relevant bits of `.gitlab-ci.yml`:
```yaml
stages: [build, deploy]

default:
  tags: [delta-wolf]  # Run on local server
  before_script:
    - cd ~wolf/progsearch-prod
    # Make sure we're on the right commit for this pipeline; if some other pipeline
    # has started doing work in the directory this will fail.
    - '[ "$(git rev-parse HEAD)" = "$CI_COMMIT_SHA" ]'

setup:
  stage: .pre
  only: [main]
  before_script:  # Override since the CI_COMMIT_SHA check won't work yet
    - cd ~wolf/progsearch-prod
  script:
    # Make sure the prod setup is clean, to avoid clobbering and confusion.
    - '[ -z "$(git status --porcelain)" ]'  # Make sure worktree is clean
    - '[ "$(git rev-parse --abbrev-ref HEAD)" = "prod" ]'  # and we're on the right branch
    # Fetch the commit from the CI runner repo into the prod repo...
    - git fetch "$CI_PROJECT_DIR" "$CI_COMMIT_SHA"
    # ...and merge it.
    - git merge --ff-only "$CI_COMMIT_SHA"

prep_deploy:
  stage: build
  script:
    # Get things ready, to minimize the amount of time the actual deployment step takes.
    # (This is particularly convenient with dependency auto-updates, since the merge
    # request will have caused the pull to happen, so by the time I see it and hit "Merge"
    # the large container downloads have already been done.)
    - docker-compose pull
    - docker-compose build

deploy:
  stage: deploy
  only: [main]
  script:
    - docker-compose up --detach
```
As I said, I’m also using this technique to good effect on a couple of other projects. In particular, my self-hosted GitLab instance is also managed this way: you’d think that having a GitLab instance upgrade itself by way of a CI pipeline that runs on that same instance would be a recipe for disaster, but it turns out that it’s actually remarkably robust; I’ve been using this for over 2½ years now and haven’t had a single hitch in that time!
At some point things will probably get big enough that I’ll need to move to an actual deployment system, but for the moment this setup is straightforward, easy to work with, and gets the job done, so I’m happy to tick off another item on my launch checklist and call it good.