BIPI

Monorepo at Scale in 2026: Turborepo, Nx, Bazel, and When to Skip All of Them

Digital Engineering

Turborepo, Nx, Bazel, Pants. We have shipped all four on client projects. Here is when monorepo helps, when polyrepo wins, and which tool fits which scale.

By Arjun Raghavan, Security & Systems Lead, BIPI · July 22, 2024 · 7 min read

#monorepo #tooling #build

We have shipped Turborepo on three client engagements, Nx on two, Bazel on one (a fintech), and Pants on one (a Python-heavy data company). The right tool depends on scale and language mix more than on architectural preference. Picking the wrong one adds 20 percent to your build times forever.

Monorepo versus polyrepo: the real tradeoff

The argument is not technical; it is organizational. Monorepo wins when teams need to atomically change interfaces and the code that depends on them. Polyrepo wins when teams genuinely operate independently and a shared change is rare.

Where monorepo earns its keep:

  • Shared component libraries with high churn
  • Multiple services that consume the same SDK
  • Refactors that span service boundaries (rename a function, update all 14 callers atomically)
  • Unified tooling: one lint config, one TypeScript config, one CI pipeline

Where polyrepo wins:

  • Teams that genuinely deploy independently and rarely share code
  • Heterogeneous tech stacks where unified tooling does not help (a Python data team plus a JavaScript frontend team plus a Go backend team can each move faster in their own world)
  • Open-source components that need clean public histories
  • Compliance requirements that map to repository-level access controls

Turborepo: the JavaScript-first sweet spot

Turborepo is a build orchestrator on top of npm/yarn/pnpm workspaces. Smart cache, affected-only execution, simple configuration. If your codebase is JavaScript and TypeScript and you have under 200 packages, Turborepo is the right tool. The cache is genuinely good (we routinely see 70-90 percent cache hit rates in CI), and the configuration is small enough that any developer can understand it.

What it is not: a build system. Turborepo orchestrates whatever you have configured per-package (Vite, esbuild, tsc, Jest). It does not understand your TypeScript graph at the level Bazel does. For most JavaScript teams, this is the right tradeoff.
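The entire orchestration layer fits in one file. A minimal sketch of a `turbo.json`, assuming Turborepo 2.x (where the top-level key is `tasks`; 1.x called it `pipeline`) and with the output globs purely illustrative:

```json
{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["dist/**"]
    },
    "test": {
      "dependsOn": ["build"]
    },
    "lint": {}
  }
}
```

The `^build` syntax means "build my dependencies first"; that one character is most of the dependency-graph awareness a JavaScript team needs, which is exactly the point.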

Nx: more features, more configuration

Nx adds dependency graph visualization, task pipelines with proper inputs/outputs, project generators, and a plugin ecosystem. It also adds configuration surface area. The Nx config for a 100-package codebase is non-trivial.

When Nx is the right call:

  • Mixed JavaScript and other languages (the executor model handles this cleanly)
  • You actively use the project graph for code review (which packages does this PR affect)
  • Your team values the generator and migration tooling enough to invest in the learning curve
  • You are running 50+ packages and want fine-grained task caching beyond what Turborepo offers
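The fine-grained inputs/outputs that justify the fourth bullet look roughly like this in `nx.json` (a sketch; the named-input patterns here are illustrative, not a recommended production config):

```json
{
  "namedInputs": {
    "production": ["default", "!{projectRoot}/**/*.spec.ts"]
  },
  "targetDefaults": {
    "build": {
      "inputs": ["production", "^production"],
      "outputs": ["{projectRoot}/dist"],
      "cache": true
    }
  }
}
```

Excluding spec files from the `production` input means editing a test does not bust the build cache of every downstream package. That precision is what Turborepo does not give you, and it is also the configuration surface area you are signing up to maintain.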

We have seen Nx adopted because someone read a blog post, then abandoned a year later because the team did not actually use the features that justified the configuration cost. Nx is good. It is also more than most teams need.

Bazel: the heavyweight that pays off at scale

Bazel is what Google uses internally. Hermetic builds, language-agnostic dependency graph, remote execution. The setup cost is enormous: writing BUILD files for every package, configuring the toolchain, learning Starlark. The payoff is real: builds that scale to thousands of packages with caching that actually works.
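To make "writing BUILD files for every package" concrete, here is a sketch of one (rule and load path follow the `aspect_rules_ts` ruleset; the target and package names are hypothetical):

```python
# BUILD.bazel -- one of these per package, by hand or generated
load("@aspect_rules_ts//ts:defs.bzl", "ts_project")

ts_project(
    name = "lib",
    srcs = glob(["src/**/*.ts"]),
    # Cross-package dependency, declared explicitly so Bazel can
    # build hermetically and cache at the package level
    deps = ["//packages/shared:lib"],
)
```

Every dependency must be declared; nothing is picked up implicitly from node_modules. That explicitness is why the cache and remote execution work, and why the migration is so expensive.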

When Bazel is the right call:

  • Polyglot codebase: Java, Go, Python, JavaScript all in the same repo
  • 1000+ packages with deep dependency graphs
  • Remote execution requirements (build farms)
  • A platform team willing to own Bazel as a first-class product

When Bazel is wrong: small teams, JavaScript-only stacks, anyone who does not have someone whose job is partially Bazel maintenance. The 'just use Bazel' camp ignores the human cost of a tool whose error messages require expertise to interpret.

Build cache strategies are the actual game

All four tools support remote caching. The question is whether you actually use it well. The patterns that work:

  1. Cache key includes content hash of inputs, not file paths or timestamps
  2. CI uploads to the cache; developers download from it
  3. Cache TTL is generous (weeks) because cache misses are cheap
  4. Affected-only test runs are tied to the dependency graph, not git diff
  5. Cache invalidation on toolchain version change is mandatory; we have seen builds silently use stale Node.js binaries for weeks because the cache key did not include the toolchain version
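Points 1 and 5 reduce to a few lines. A minimal sketch (the `cacheKey` function and its inputs are hypothetical, not any tool's actual API): the key is derived from input contents and the toolchain version, never from paths or timestamps.

```typescript
import { createHash } from "node:crypto";

// Sketch of a content-addressed cache key. Inputs map file names to
// file contents; the toolchain string must be part of the key so a
// Node.js or compiler upgrade invalidates every entry (point 5).
function cacheKey(inputs: Record<string, string>, toolchain: string): string {
  const h = createHash("sha256");
  h.update(`toolchain:${toolchain}\n`);
  // Sort names so the key is deterministic regardless of iteration order
  for (const name of Object.keys(inputs).sort()) {
    const contentHash = createHash("sha256").update(inputs[name]).digest("hex");
    h.update(`${name}:${contentHash}\n`);
  }
  return h.digest("hex");
}

const a = cacheKey({ "src/index.ts": "export const x = 1;" }, "node-20.11");
const b = cacheKey({ "src/index.ts": "export const x = 1;" }, "node-20.11");
const c = cacheKey({ "src/index.ts": "export const x = 1;" }, "node-22.0");
console.log(a === b); // identical inputs and toolchain: cache hit
console.log(a === c); // same inputs, bumped toolchain: forced miss
```

Note what is absent: no file paths in the digest beyond the logical name, and no mtimes anywhere. Timestamps are why naive caches miss constantly in CI, where every checkout is fresh.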

Ownership boundaries within the repo

The mistake monorepo skeptics warn about (everyone changes everything, no clear ownership) is real if you do not address it. CODEOWNERS files map directories to teams. Required reviewers gate changes outside your team's directory. Per-package CI runs only the affected tests. We require all three on monorepo client engagements; without them, the monorepo becomes a tragedy of the commons within 18 months.
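The CODEOWNERS half of this is a few lines per directory. A sketch (the team handles and paths are hypothetical; the syntax is GitHub's, and enforcement requires branch protection with required reviews enabled):

```
# CODEOWNERS -- maps directories to owning teams
/packages/design-system/   @acme/frontend-platform
/services/payments/        @acme/payments-team
/tools/                    @acme/platform-team
```

The file is cheap to write; the organizational commitment to honor the required reviews it triggers is the part that takes effort.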

Pick the tool that fits the team you have, not the team you wish you had. Turborepo for most JavaScript teams. Nx if you need more features and have the people to use them. Bazel for polyglot at thousand-package scale. Polyrepo if your teams genuinely operate independently. The choice is durable; switching tools later is expensive.

Read more field notes, explore our services, or get in touch at info@bipi.in.