BIPI

Why Your CI Pipeline Is the Slowest Part of Your Engineering Org

Digital Engineering

The slowest constraint in most engineering orgs is not the code. It is the time engineers wait between push and merge. Four interventions that compress 30-minute builds into six.

By Arjun Raghavan, Security & Systems Lead, BIPI · February 22, 2026 · 7 min read

#ci/cd · #build-optimization · #devops · #engineering-velocity

The time the average engineer spends waiting on CI between push and 'safe to merge' is the single biggest hidden cost in most engineering organisations. A 25-minute pipeline, five attempts per PR, five PRs per week: that is over ten hours of waiting per engineer per week. Across forty engineers, that compounds to roughly five hundred engineer-weeks per year burned on waiting. Most teams never measure this and never optimise it.

We audit CI pipelines with the goal of compressing the 'commit to mergeable' loop to under 10 minutes. The math behind it is boring: parallelise what can be parallelised, cache what does not change, skip what is irrelevant, fail fast on what does. Almost every audit finds at least a 50 percent reduction available with the existing tooling.

Cache the things that do not change

Most CI runs spend 3 to 8 minutes installing dependencies. The dependency tree changes once per PR on average; the install runs every push. Cache it. Every CI provider has a cache primitive (GitHub Actions cache, GitLab cache, CircleCI restore_cache). Hash the lockfile, key the cache by hash, restore on cache hit.
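In GitHub Actions, for example, the whole pattern is a few lines. A minimal sketch for an npm project (the paths and key names here are illustrative; adjust per package manager):

```yaml
- name: Restore dependency cache
  uses: actions/cache@v4
  with:
    path: ~/.npm                                              # npm's download cache
    key: deps-${{ runner.os }}-${{ hashFiles('package-lock.json') }}
    restore-keys: deps-${{ runner.os }}-                      # fall back to newest partial match
```

The key embeds a hash of the lockfile, so a PR that does not touch dependencies gets an exact hit and skips the download entirely; `restore-keys` gives partial reuse when the lockfile does change.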

Same goes for build artefacts in monorepos. The lib/utils package did not change in this PR; do not rebuild it. Bazel, Nx, and Turborepo all do this; the trade-off is initial setup cost. Worth it once you cross 10 packages.

Test parallelisation that actually parallelises

Most teams 'parallelise' tests by running them across N runners. The catch is balance. If runner 1 takes 8 minutes and runners 2 through 10 take 30 seconds each, the wall-clock time is 8 minutes regardless of how many runners you have.

Real parallelisation needs to balance test load. The cheap version: split by file count. The good version: split by historical runtime, recomputed on each run. The best version: dynamic distribution, where each runner pulls the next test from a shared queue. Most CI orchestrators support at least the second.
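Splitting by historical runtime is a bin-packing problem, and a greedy longest-first pass gets close enough in practice. A minimal sketch (the timings dict is assumed to come from your test reports, e.g. JUnit XML):

```python
def split_by_runtime(timings, n_runners):
    """Greedy longest-processing-time partition: assign each test file
    (slowest first) to whichever runner currently has the least load.

    timings: {test_file: historical_runtime_seconds}
    Returns one {"tests": [...], "total": seconds} dict per runner.
    """
    runners = [{"tests": [], "total": 0.0} for _ in range(n_runners)]
    for test, seconds in sorted(timings.items(), key=lambda kv: -kv[1]):
        lightest = min(runners, key=lambda r: r["total"])
        lightest["tests"].append(test)
        lightest["total"] += seconds
    return runners
```

Note what this does not fix: one 8-minute test file still pins wall-clock time at 8 minutes, which is why the best version pulls tests from a shared queue at finer granularity than the file.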

Build matrix vs build environment

If your CI matrix runs the same job 20 times across browser versions, OS versions, and Node versions, you are running 20 jobs to ship one bug fix that affects exactly one of them. Most matrix configurations are stale; they were added when the team supported Node 12 and IE11 and never reduced.

Two interventions. First: separate the matrix into 'merge-blocking' (small, fast, the things you ship) and 'nightly' (the comprehensive set that runs once a day). Second: actually deprecate stale matrix entries. We have seen teams running tests against Node 14 a year after EOL because nobody pulled it from the matrix.
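In GitHub Actions terms, the split might look like this (job names, versions, and the schedule trigger are illustrative, not a drop-in config):

```yaml
test:                    # merge-blocking: only what you actually ship
  strategy:
    matrix:
      node: [22]
      os: [ubuntu-latest]

nightly:                 # comprehensive sweep, runs once a day via a cron trigger
  if: github.event_name == 'schedule'
  strategy:
    matrix:
      node: [20, 22]
      os: [ubuntu-latest, macos-latest, windows-latest]
```

The merge-blocking job gates PRs; the nightly sweep catches the rare cross-version regression without taxing every push.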

Incremental builds and dependency graphs

Monorepos benefit massively from 'only build what changed'. Nx and Turborepo compute affected packages from the diff and skip everything else. A PR that touches one component runs that component's tests, plus tests for anything that depends on it, and skips the rest.

The trick is the affected algorithm. If your dependency graph is wrong (e.g., everything depends on a 'shared' package that almost nothing actually imports), affected reports too many jobs. We audit affected accuracy on every monorepo engagement. Real selectivity is closer to 30 percent of the matrix on a typical PR after tuning.
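Under the hood, 'affected' is a reverse-dependency closure over the package graph: the changed packages, plus everything that transitively depends on them. A toy sketch of the computation (package names are hypothetical):

```python
from collections import defaultdict

def affected(deps, changed):
    """deps maps each package to the packages it imports.
    Returns the changed packages plus everything that transitively
    depends on them -- the set a CI run must rebuild and test."""
    rdeps = defaultdict(set)          # reverse edges: dependency -> dependents
    for pkg, imports in deps.items():
        for dep in imports:
            rdeps[dep].add(pkg)
    result, stack = set(changed), list(changed)
    while stack:                      # walk dependents breadth-agnostically
        for dependent in rdeps[stack.pop()]:
            if dependent not in result:
                result.add(dependent)
                stack.append(dependent)
    return result
```

This is also where a bad graph bites: if every package lists 'shared' as a dependency, any change to 'shared' marks the whole repo affected, and selectivity collapses to zero.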

8× · PRs shipped per day after compression (was 1)
55% · typical reduction we achieve
10 min · target for commit-to-mergeable

What we deploy first

  1. Dependency cache. 5 minutes saved on average. 30 minutes of work to set up.
  2. Test parallelisation by historical runtime. 30 to 60 percent runtime reduction for the test stage. A day's work if you have 20+ test files.
  3. Matrix audit. Reduces redundant runs. A 2-hour exercise once a year.
  4. Affected-only builds via Nx or Turborepo. Highest leverage for monorepos. A multi-day investment that pays back within months.

Closing

Engineering velocity is not the speed at which engineers type code. It is the speed at which the system around them lets the code reach production. CI pipeline time is one of the largest controllable factors in that system, and it is the easiest to measure and the easiest to improve. Treat it like an SLO. Compress it deliberately. Your team's throughput compounds.

Read more field notes, explore our services, or get in touch at info@bipi.in.