BIPI

AI Coding Assistants in the Enterprise: The Security Posture Audit We Now Run

AI Security

Cursor, Claude Code, Copilot, Cline — every engineering team uses at least one AI coding assistant by 2026. Each one runs with your developers' permissions and sees your source code. The 8-point audit we run before approving deployment.

By Arjun Raghavan, Security & Systems Lead, BIPI · March 8, 2026 · 8 min read

#ai-security · #devsecops · #code-assistants · #insider-risk

By mid-2026 the question is not whether your engineers use an AI coding assistant. It is which ones, configured how. Cursor, Claude Code, Copilot, Cline, Aider, Continue — at least one is installed on most engineers' laptops. Each one has access to the same source code, secrets, terminal, and build environment that the engineer has. From a threat-model perspective, they are a new class of process running with developer privileges.

We have moved from 'should we allow this?' to 'how do we configure it safely?'. The audit below is what we run on every engagement that includes AI coding assistant deployment.

The eight checkpoints

  1. Source code disclosure scope. The assistant sends code to a model provider for inference. Which lines? The whole repo? Only the open file? The assistant's docs say one thing; logs sometimes show another. Verify by inspecting actual outbound traffic from a test setup.
  2. Telemetry and training-data policies. Does the provider use your code for training? The defaults vary by tier (paid plans usually opt out by default; free plans usually opt in). Confirm in writing. Codify it in the procurement contract.
  3. Secret-scanning in-flight. Does the assistant strip secrets before sending context? Most have an internal redactor; the redactors miss things. Add a wrapper or proxy that redacts a known set of patterns (AWS keys, JWT-shaped tokens, anything matching your secret-format conventions).
  4. Tool execution policies. Modern coding assistants execute commands (run tests, modify files, install packages). Each is a privileged action. Configure them to require user confirmation per command, or scope them to a sandboxed working directory, or deny them entirely for terminal commands.
  5. MCP server allowlist. If the assistant supports MCP servers, your developers will install them. Which servers? Approved by whom? An MCP server that connects to JIRA is benign; one that connects to AWS production is a higher risk. Maintain an allowlist.
  6. Agent autonomy level. Some assistants run multi-step changes autonomously across files. The convenience is real; the blast radius if a bug or jailbreak triggers a destructive sequence is also real. Set the autonomy ceiling per repo (autonomy off in production-touching repos, on in scratch).
  7. Provenance for AI-generated commits. Every commit produced or assisted by an AI coding assistant should carry a Co-Authored-By line or equivalent metadata. This matters for audit, for licensing review, and for code review burden allocation.
  8. Network egress monitoring. AI assistants make outbound network calls. They should go to the configured model endpoint and nothing else. Anything else is a finding (data exfil, telemetry beacon, misconfigured proxy).
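Checkpoint 3's redaction wrapper reduces to a pattern pass over outbound context before it leaves the laptop. A minimal sketch, assuming a hypothetical pattern set — real deployments would extend it with the org's own secret-format conventions:

```python
import re

# Hypothetical pattern set; extend with your org's secret-format conventions.
SECRET_PATTERNS = [
    ("AWS_ACCESS_KEY", re.compile(r"\bAKIA[0-9A-Z]{16}\b")),
    ("JWT", re.compile(r"\beyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\b")),
    ("PRIVATE_KEY", re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----")),
]

def redact(context: str) -> str:
    """Replace anything matching a known secret shape with a labelled placeholder."""
    for label, pattern in SECRET_PATTERNS:
        context = pattern.sub(f"[REDACTED:{label}]", context)
    return context
```

The wrapper sits in the proxy path, so it catches secrets regardless of which assistant assembled the context. It will never catch everything — that is why checkpoint 2's contractual no-training guarantee still matters.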

What we find when we run this

Three patterns repeat across audits.

Default opt-in to training data on free tiers. Engineers signed up with a personal email; the free tier opts in to training; production code is now potentially in next-gen training data for the model provider. Migrate to the team plan, configure no-training, audit retroactively for what may have been sent.

Unscoped command execution. The assistant runs whatever commands it wants in the developer's terminal. Most teams discover this is happening only after a developer notices an unexpected branch was pushed. Per-command confirmation closes the gap.

Untracked MCP server installs. Engineers add MCP servers from random GitHub repos. Each new server adds a tool surface that the assistant uses. We have seen MCP servers that included telemetry beacons. Vetting before install is the only sane policy.
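Allowlist enforcement for MCP servers reduces to a lookup before install. A minimal sketch, assuming the allowlist pins each approved server name to its expected source repo (the names and URLs below are placeholders):

```python
# Hypothetical allowlist: approved server name -> pinned source repo.
MCP_ALLOWLIST = {
    "jira-mcp": "https://github.com/example-org/jira-mcp",
    "docs-mcp": "https://github.com/example-org/docs-mcp",
}

def may_install(name: str, source_url: str) -> bool:
    """Approve only servers on the allowlist AND fetched from the pinned source."""
    return MCP_ALLOWLIST.get(name) == source_url
```

Pinning the source URL, not just the name, matters: a lookalike server from a random fork fails the check even if it reuses an approved name.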

Configuration we deploy

  • Team plan (paid tier with no-training default) for every assistant in use.
  • Centralised configuration file shipped via MDM that sets safe defaults: per-command confirmation, sandboxed working directory, network egress allowlist.
  • MCP server allowlist enforced by the configuration; the assistant cannot install unapproved servers.
  • Quarterly review of the assistant's actual outbound traffic from a sample of laptops.
  • Provenance metadata on AI-assisted commits; review burden for AI-assisted PRs explicitly different from human-only PRs.
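The provenance item above can be enforced in CI by routing each commit to a review track based on its trailers. A sketch, assuming AI-assisted commits carry a Co-Authored-By trailer (the exact trailer convention is an assumption — pick one and enforce it; a real check would also match specific assistant identities rather than any co-author):

```python
AI_TRAILER = "co-authored-by:"  # assumption: assistants sign commits with this trailer

def review_track(commit_message: str) -> str:
    """Route commits to a review track (hypothetical track names)."""
    lines = [line.strip().lower() for line in commit_message.splitlines()]
    if any(line.startswith(AI_TRAILER) for line in lines):
        return "ai-assisted-review"
    return "standard-review"
```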

Closing

AI coding assistants are mostly a productivity win and partly a new attack surface. The teams that get this right configure them deliberately and audit them like any other developer tool. The teams that do not are running with a default-permissive posture across hundreds of laptops, each with full source-code and credential access. The cost of running the eight-point audit once and writing a config policy is one engineering week. The cost of not running it is the next breach postmortem.

Read more field notes, explore our services, or get in touch at info@bipi.in.