BIPI

CI/CD Pipeline Compromise: When Your Build System Becomes the Attacker

Cybersecurity

GitHub Actions audit deep-dive, workflow_run abuse, leaked OIDC tokens, what attackers actually do with build-time access, and the cleanup that has to include re-imaging every self-hosted runner you own.

By Arjun Raghavan, Security & Systems Lead, BIPI · July 15, 2024 · 9 min read

#ci/cd #github-actions #investigation

CI/CD compromise is the incident that quietly turns into supply-chain compromise if you do not catch it fast. The build system is the place that has secrets, deploy keys, signing materials, and network access to everywhere; an attacker who lives there briefly can ship signed malicious artifacts that look indistinguishable from your real releases. The investigation has to move quickly between three planes: the workflow code, the runner host, and the produced artifacts.

The GitHub Actions audit log

Start at the audit log. GitHub Enterprise audit log streaming, or the per-organization audit log API, captures the events that matter: workflow runs, workflow modifications, secret creation and access, runner registration, and PAT or fine-grained token use. The audit log is your single most important evidence source because it is server-side and the attacker generally cannot tamper with it.

Filter for workflow_run events from forks against repositories that should not accept forked PRs, repository_dispatch events that were not triggered by your release tooling, and any workflows that were edited in the past 30 days. A workflow that suddenly gained an inline curl | bash or an unfamiliar action reference is the leading indicator most teams miss.
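A first-pass triage over those events can be scripted. This is a minimal sketch, not a client for the real audit-log API: the event shape and action names here are simplified stand-ins, and `KNOWN_DISPATCHERS` is a hypothetical allowlist you would populate with your actual release-tooling identities.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical, simplified audit-log events. The real org audit-log API
# returns richer objects and different action names, but the triage
# logic (recent workflow edits + unexpected dispatchers) is the same.
EVENTS = [
    {"action": "workflows.update", "repo": "acme/deploy",
     "actor": "dependabot[bot]", "at": "2024-07-01T10:00:00Z"},
    {"action": "repository_dispatch", "repo": "acme/deploy",
     "actor": "release-bot", "at": "2024-07-02T09:00:00Z"},
    {"action": "repository_dispatch", "repo": "acme/deploy",
     "actor": "unknown-user", "at": "2024-07-03T03:12:00Z"},
]

KNOWN_DISPATCHERS = {"release-bot"}  # identities your release tooling uses

def triage(events, now, window_days=30):
    """Return events worth a human look: workflow edits inside the
    window, and repository_dispatch calls from unexpected actors."""
    cutoff = now - timedelta(days=window_days)
    flagged = []
    for e in events:
        ts = datetime.fromisoformat(e["at"].replace("Z", "+00:00"))
        if e["action"] == "workflows.update" and ts >= cutoff:
            flagged.append(e)
        elif (e["action"] == "repository_dispatch"
              and e["actor"] not in KNOWN_DISPATCHERS):
            flagged.append(e)
    return flagged
```

Run against real audit-log exports, this kind of filter turns thousands of routine events into a short list a responder can actually read.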

workflow_run abuse and the pwn request

The classic CI/CD attack is pull_request_target combined with a checkout of the attacker's head ref. By default, pull_request runs in a sandbox with no secrets; pull_request_target runs in the base repository's context with full secrets, intended for use cases like auto-labelling. If a workflow uses pull_request_target and then checks out the attacker-controlled head ref and runs scripts from it, the attacker has full secret access from any forked pull request. The GitHub Security Lab's "preventing pwn requests" writeups have catalogued this for years; it still ships.

Grep your workflows for the pattern. Anything that uses pull_request_target and then references github.event.pull_request.head.ref or runs an npm install on the PR's code is suspect by default. Migrate to the safer pattern: pull_request for the build, and a separate workflow_run trigger for the privileged work, with a manual approval gate.
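That grep can be made a repeatable check. This is a deliberately loose sketch: the regexes below are illustrative heuristics, not a complete parser of workflow YAML, so treat hits as leads for manual review rather than verdicts.

```python
import re

# Flag workflows that combine the pull_request_target trigger with a
# reference to the PR's head ref/sha (the "pwn request" shape).
DANGEROUS_TRIGGER = re.compile(r"^\s*(on:\s*)?pull_request_target\b", re.M)
HEAD_REF = re.compile(r"github\.event\.pull_request\.head\.(ref|sha)")

def is_pwn_request_candidate(workflow_yaml: str) -> bool:
    """True if the workflow both runs in the privileged context and
    appears to check out or reference attacker-controlled code."""
    return bool(DANGEROUS_TRIGGER.search(workflow_yaml)
                and HEAD_REF.search(workflow_yaml))
```

Point it at every file under .github/workflows/ in every repo in the org; the hit list is usually short and every entry deserves a look.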

Runner inspection: self-hosted is where it gets hard

GitHub-hosted runners are ephemeral; each job gets a fresh VM. Self-hosted runners are not. They persist between jobs and can be modified by any workflow that runs on them. If an attacker landed a malicious workflow on a self-hosted runner, the workflow can drop persistence into the runner's filesystem, capture every subsequent job's secrets, and serve poisoned artifacts to every downstream consumer.

The investigation on a self-hosted runner is endpoint forensics: image the host, pull the runner's _work directory (per-job working directories), look for crontab modifications, examine systemd units, and dump the runner's process memory if you suspect in-memory persistence. The runner registration token is also a credential; if it leaked, the attacker can register their own runner against your org.
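The collection step benefits from a fixed manifest so nothing is forgotten under pressure. A minimal sketch, assuming a standard Linux actions-runner install layout; the exact paths on your hosts may differ, so verify against the runner version you actually deployed.

```python
from pathlib import PurePosixPath

def collection_manifest(runner_root: str):
    """Paths to image first from a Linux self-hosted runner.
    Assumes the stock actions-runner directory layout; adjust per host."""
    root = PurePosixPath(runner_root)
    return [
        str(root / "_work"),         # per-job working dirs: payloads land here
        str(root / "_diag"),         # runner's own logs, incl. per-job logs
        str(root / ".credentials"),  # registration credential: treat as leaked
        str(root / ".runner"),       # registration metadata (scope, labels)
        "/etc/crontab",              # classic persistence
        "/var/spool/cron",
        "/etc/systemd/system",       # unit-file persistence
    ]
```

Feed the manifest to whatever acquisition tooling you use, and capture process memory separately before powering the host down.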

Secrets enumeration and OIDC tokens

An attacker with workflow execution on your repo has access to every secret the workflow reads. Enumerate which secrets were exposed to which workflows during the incident window. GitHub Actions secrets, environment-scoped secrets, and OIDC tokens issued to the workflow (used for cloud federated identity) are all on the menu. For OIDC, the impact is broader than people realise: an attacker can request an OIDC token claiming the workflow identity and use it against any cloud provider that trusts your repo, even if no static credential was leaked.

Rotate, in order: any AWS access keys or Azure service principal secrets the workflow used; any Kubernetes service account tokens; any package registry tokens (npm, PyPI, Docker Hub, Maven Central); any GitHub PATs the workflow could access. For OIDC, audit the trust policies in your cloud providers and tighten the sub claim conditions so they only match exact repos, branches, and environments, not any workflow from your org.
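The sub-claim tightening is easy to reason about with a concrete check. GitHub's OIDC sub claim looks like repo:<org>/<repo>:ref:<ref> (or :environment:<env>); the sketch below models an AWS-style StringLike condition with glob matching to show why an org-wide wildcard is dangerous. The condition strings are illustrative, not copied from any real trust policy.

```python
import fnmatch

LOOSE = "repo:acme/*"                            # trusts every repo in the org
TIGHT = "repo:acme/deploy:ref:refs/heads/main"   # one repo, one branch

def trusted(sub: str, condition: str) -> bool:
    """Would a token with this sub claim satisfy the trust condition?
    Models glob-style matching as used by StringLike-type conditions."""
    return fnmatch.fnmatchcase(sub, condition)
```

Under the loose condition, a token minted from any repo in the org (including a fork-fed sandbox repo the attacker controls a workflow in) assumes your deploy role; under the tight one, only main-branch runs of the one repo do.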

What attackers actually do with build access

Three patterns dominate. First, exfiltrate the secrets and use them outside the pipeline; this is the cryptojacking case (see the AWS credential leak playbook). Second, ship a malicious version of your software through your normal release channel; this is the supply chain attack, and it is the one that turns into a SolarWinds-style headline if it lands at a maintainer with downstream users. Third, pivot to your production infrastructure using the deploy credentials the pipeline holds.

The third is the most overlooked. Many pipelines hold long-lived deploy credentials to Kubernetes clusters, cloud accounts, or app servers. An attacker who briefly owns the pipeline can use those credentials to drop persistence in production, and that persistence outlives any pipeline cleanup. Audit production for changes during the suspected pipeline-compromise window, not just the pipeline itself.

Cleanup that holds

The full cleanup is more work than people anticipate. Re-image every self-hosted runner. Rotate every secret exposed to any workflow that ran in the incident window. Audit the last 90 days of releases for any artifact that could have been published from the compromised pipeline. Re-sign or re-publish anything in doubt. Audit cloud trust policies for federated identity scoping. Audit production for unauthorised changes during the window.
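The release audit is mechanical enough to script: rebuild each suspect artifact from known-good source in a clean environment and diff the hashes. A sketch under one big assumption, that your builds are reproducible; where they are not, every mismatch needs manual review rather than automatic condemnation.

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def audit_releases(published: dict, rebuilt: dict):
    """Compare hashes of published artifacts against clean-room rebuilds.
    Returns names whose published bytes differ from (or lack) a rebuild."""
    return sorted(name for name, digest in published.items()
                  if rebuilt.get(name) != digest)
```

Anything the audit flags gets pulled, re-signed from the clean rebuild, and re-published with a disclosure to downstream consumers.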

Closing the door for next time

The architectural change that prevents most repeats is moving to OIDC-based, scoped, short-lived credentials everywhere. No long-lived AWS keys in GitHub Actions secrets, no static service principals, no eternal Kubernetes tokens. Combine that with required reviewers on production environments, branch protection that prevents force-push to release branches, and a regular audit of which workflows can run with elevated permissions. The compromise becomes a contained nuisance instead of a multi-week recovery.

Read more field notes, explore our services, or get in touch at info@bipi.in. Privacy Policy · Terms.