I had a nice thing - I had a working GitLab CI configuration - and then I just had to go and change things.
I use pre-commit to run a handful of checks on a repository before making a commit. Usually just things like "lint things" and "make sure you haven't accidentally included a private key"; you probably do similar on at least one of your repositories. When definitely-not-copying pre-commit config from another repository, I noticed that the hooks were a little out of date for some projects.
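For reference, a `.pre-commit-config.yaml` looks something like this - the repos and `rev` pins below are illustrative rather than my actual config, but those pinned `rev` values are exactly the bits that quietly go stale:

```yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0                 # pinned revision; this is what drifts out of date
    hooks:
      - id: detect-private-key  # the "no accidental private keys" check
      - id: end-of-file-fixer
  - repo: https://github.com/igorshubovych/markdownlint-cli
    rev: v0.39.0
    hooks:
      - id: markdownlint        # the "lint things" part
```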
"A-ha!" I said to myself. "I can automate these updates!"
Pulling up the GitLab CI reference, I set up a scheduled pipeline with a job called `update-precommit` (because I'm imaginative). This runs on an exclusive stage called "schedule", and has rules set to only run when `${CI_PIPELINE_SOURCE}` is equal to "schedule". With a little finessing - including but not limited to forgetting to close my `if` statement - I ended up with a job that looks like the following:
```yaml
update-precommit:
  stage: schedule
  image: registry.gitlab.com/yesolutions/docker-pre-commit
  before_script:
    - pre-commit autoupdate
    - git config user.email "gitlab-ci@rickhenry.uk"
    - git config user.name "gitlab-ci-bot"
  script: |
    if $(git status --porcelain | grep -q .pre-commit-config.yaml); then
      echo "### Committing changes!"
      git add .pre-commit-config.yaml
      git commit -m "ci: Updated pre-commit hooks"
      git push -o ci.skip \
        "https://pat:${GIT_PUSH_TOKEN}@${CI_REPOSITORY_URL#*@}" \
        HEAD:main
    else
      echo "### No change to pre-commit config detected."
    fi
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
```
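The `-o ci.skip` push option stops the bot's own commit from kicking off yet another pipeline, and the odd-looking URL is just shell parameter expansion. As a rough sketch of what it evaluates to (the exact shape of `CI_REPOSITORY_URL` varies a little by instance):

```sh
# CI_REPOSITORY_URL looks roughly like:
#   https://gitlab-ci-token:<job-token>@gitlab.com/group/project.git
# "${CI_REPOSITORY_URL#*@}" strips the shortest prefix ending in '@',
# leaving the bare host and path, which gets re-prefixed with the
# project access token instead:
echo "https://pat:${GIT_PUSH_TOKEN}@${CI_REPOSITORY_URL#*@}"
# -> https://pat:<GIT_PUSH_TOKEN>@gitlab.com/group/project.git
```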
`GIT_PUSH_TOKEN` is a project access token with permission to push to the `main` branch. Annoyingly, this also means that I - as an Owner-roled principal - can push straight to `main`, which I'd rather not do. Instead, I generally prefer to use fast-forward merge requests, to keep a linear history while protecting the `main` branch from egregious errors.
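If I ever decide the bot should follow the same etiquette, GitLab's merge request push options look like the way to do it - roughly the following in place of the direct push, though I haven't tried it and the branch name is invented:

```sh
git checkout -b ci/update-precommit-hooks
git add .pre-commit-config.yaml
git commit -m "ci: Updated pre-commit hooks"
# Push the branch and ask GitLab to open (and auto-merge) an MR for it.
git push \
  -o merge_request.create \
  -o merge_request.target=main \
  -o merge_request.merge_when_pipeline_succeeds \
  "https://pat:${GIT_PUSH_TOKEN}@${CI_REPOSITORY_URL#*@}" \
  HEAD:ci/update-precommit-hooks
```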
Anyway, things were going great, right up until I realised that the scheduled pipeline was also running the `validate` and `deploy` stages, which I didn't want to happen. "An easy enough fix", I thought to myself, and added a `when: never` rule to jobs with a condition matching that of the scheduled job. In my head, this would result in those jobs never running for a scheduled pipeline.
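Concretely, each of those jobs picked up a rule along these lines (the `deploy` body here is a stand-in rather than my real job):

```yaml
deploy:
  stage: deploy
  script:
    - echo "deploy all the things"   # placeholder for the real deploy steps
  rules:
    # New rule: skip this job entirely on scheduled pipelines.
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
      when: never
    # ...plus whatever rules the job already had.
```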
I wasn't wrong - those jobs didn't run on the scheduled pipeline any more. But the previously-ruleless `lint-markdown` job was then not running on any pipelines, with any flavour of trigger. It took me a moment to work out why - not least because I wandered off for dinner - but GitLab requires at least one rule to evaluate to `when: on_success` (or `always`, or similar) for a job to run, and helpfully adds an implicit, condition-free one when a job has no rules at all. When I added the `rules:` block, however, GitLab removed its implicit `when: on_success`, leaving only the explicit
```yaml
- if: '$CI_PIPELINE_SOURCE == "schedule"'
  when: never
```
that was added. In effect, I'd told GitLab to never run that job. This was fixed by adding a new rule (`- when: on_success`) to replace the implicit one, and the pipelines then all worked as I wanted them to.
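For completeness, the `lint-markdown` rules ended up looking something like this - rule order matters, since the first matching rule wins and the `never` rule needs to win for scheduled pipelines:

```yaml
lint-markdown:
  rules:
    # Scheduled pipelines match here first, so the job is skipped.
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
      when: never
    # Everything else falls through to this explicit stand-in for the
    # implicit default GitLab no longer adds.
    - when: on_success
```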
At some point I may set up some kind of GPG signing, but that's mostly to have a long list of signed commits rather than for any actual conventional benefit.
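If I do get around to it, the bot's half would presumably be a few extra `before_script` lines - something like the sketch below, assuming the image has `gpg` available and the private key is stashed in a (hypothetical) file-type CI variable called `GPG_SIGNING_KEY`:

```yaml
before_script:
  - pre-commit autoupdate
  - git config user.email "gitlab-ci@rickhenry.uk"
  - git config user.name "gitlab-ci-bot"
  # Hypothetical signing setup: import the key, then tell git to sign with it.
  - gpg --batch --import "$GPG_SIGNING_KEY"
  - git config user.signingkey "gitlab-ci@rickhenry.uk"
  - git config commit.gpgsign true
```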