BIPI

Secrets in a Public Repo: The First Hour Playbook

Cybersecurity

A leaked AWS key in a public GitHub repo has a half-life of about four minutes before bots start probing it. This is the first-hour playbook we run when a developer pushes secrets to the wrong remote.

By Arjun Raghavan, Security & Systems Lead, BIPI · March 20, 2024 · 7 min read

#credentials #secrets #incident-response

GitHub secret scanners catch some of these. Attacker bots catch all of them, faster. The window between a secret being pushed and a bot trying it is measured in minutes, not hours. The playbook below is built for the call that comes in at 11 PM from a panicked engineer who realized what they did.

First five minutes: identify what leaked

Get the commit hash, the file path, and the exact strings exposed. Was it an AWS key, a Stripe key, a database connection string, an OAuth client secret, a private SSH key, a service account JSON? The blast radius depends on the type. An AWS access key with admin policy is a fire. A read-only Stripe restricted key with no charge permissions is a smaller fire that still needs to go out.

  • Pull the leaked commit and identify every secret in the diff, not just the one the engineer flagged (a scanning sketch follows this list)
  • Note the timestamp of the push in UTC; you will need it for log correlation
  • Identify the credential owner and the systems that credential reaches
  • Check the repository's visibility history: was it always public, or exposed by a recent settings change?
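A minimal sketch of that first pass over the diff, assuming the leaked commit is reachable locally. The patterns and the commit hash are illustrative placeholders; a real triage pass should lean on gitleaks or trufflehog rule sets, which cover far more credential shapes.

```python
import re
import subprocess

# Illustrative patterns only; real scans should use gitleaks/trufflehog rules.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "stripe_secret_key": re.compile(r"sk_(live|test)_[0-9a-zA-Z]{24,}"),
    "private_key_block": re.compile(r"-----BEGIN (RSA |EC |OPENSSH )?PRIVATE KEY-----"),
}

def secrets_in_commit(commit: str) -> list[tuple[str, str]]:
    """Return (pattern_name, match) pairs for every secret in the commit's diff."""
    diff = subprocess.run(
        ["git", "show", commit],  # full diff of the leaked commit
        capture_output=True, text=True, check=True,
    ).stdout
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(diff):
            hits.append((name, match.group(0)))
    return hits

if __name__ == "__main__":
    for name, value in secrets_in_commit("abc1234"):  # hypothetical commit hash
        print(f"{name}: {value[:12]}...")  # print a prefix only; never re-log full secrets
```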

Blast radius: where is this key used

Before rotation, know where the credential is in use. Production services, CI pipelines, developer laptops, third-party integrations. Rotate without knowing this and you take down production at 11:30 PM. Most teams have this information scattered across vault entries, env var manifests, deployment configs, and the memory of one engineer who is on vacation. Build a secret inventory before the incident, not during.
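What one entry in that inventory can look like, as a minimal sketch. The fields and the example record are our own invention, not a standard schema; the point is that this structure exists before the 11 PM call, not during it.

```python
from dataclasses import dataclass, field

@dataclass
class SecretRecord:
    """One inventory entry: a credential and everything that consumes it."""
    name: str                 # human-readable identifier
    provider: str             # where it is issued and rotated
    owner: str                # team or person accountable for rotation
    consumers: list[str] = field(default_factory=list)  # every place it is deployed

# Hypothetical entry for a database credential.
payments_db = SecretRecord(
    name="payments-db-password",
    provider="AWS Secrets Manager",
    owner="payments-team",
    consumers=["prod payments-api", "CI integration tests", "nightly reporting job"],
)
```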

Rotation order matters

Rotate the leaked credential first, but stage the change. Create the new credential in the same provider, deploy it to the consuming services using your existing secret management, validate that the new credential works, then disable the old one; a code sketch follows the numbered steps. If you disable first and rotate second, you have a production outage while you scramble. The exception is when the credential is actively being abused, in which case disable immediately and accept the outage.

  1. Create new credential in the provider (AWS IAM access key, Stripe restricted key, etc.)
  2. Update secret manager with the new value
  3. Restart or refresh consuming services so they pick up the new value
  4. Validate production is healthy on the new credential
  5. Disable or delete the leaked credential in the provider
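A hedged boto3 sketch of steps 1 and 5 for the AWS IAM case; steps 2 through 4 depend on your secret manager and deploy tooling. The user name and key ID are placeholders.

```python
import boto3

iam = boto3.client("iam")
USER = "ci-deploy-bot"  # hypothetical IAM user that owns the leaked key

# Step 1: create the replacement while the leaked key still works.
# IAM allows at most two access keys per user, so remove an unused spare first if needed.
new_key = iam.create_access_key(UserName=USER)["AccessKey"]
print("new key id:", new_key["AccessKeyId"])  # goes into the secret manager, never into the repo

# Steps 2-4 happen in your secret manager and deploy tooling:
# store new_key, refresh consuming services, confirm production is healthy.

def retire_leaked_key(leaked_key_id: str) -> None:
    """Step 5: disable first (reversible), delete only once nothing depends on it."""
    iam.update_access_key(UserName=USER, AccessKeyId=leaked_key_id, Status="Inactive")
    # iam.delete_access_key(UserName=USER, AccessKeyId=leaked_key_id)  # later, after verification
```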

Log review: did anyone use it

After rotation, prove the negative. Pull provider logs for the credential from the moment of leak to the moment of rotation. AWS CloudTrail filtered by access key ID shows every API call. Stripe events filtered by API key reveal any usage. Google Workspace and Azure audit logs answer the same question for service accounts. If the logs show only your own service calls, you got lucky. If they show calls from IPs you do not control, you have a second incident on your hands.
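For the AWS case, CloudTrail's lookup_events API accepts AccessKeyId as a lookup attribute, so the review can be scripted. A minimal sketch with a placeholder key ID and timestamps; note that lookup_events only covers roughly the last 90 days of management events.

```python
from datetime import datetime, timezone
import boto3

cloudtrail = boto3.client("cloudtrail")

def events_for_key(access_key_id: str, leaked_at: datetime, rotated_at: datetime):
    """Yield every recorded API call made with the key between leak and rotation."""
    paginator = cloudtrail.get_paginator("lookup_events")
    pages = paginator.paginate(
        LookupAttributes=[{"AttributeKey": "AccessKeyId", "AttributeValue": access_key_id}],
        StartTime=leaked_at,
        EndTime=rotated_at,
    )
    for page in pages:
        yield from page["Events"]

# Hypothetical key and UTC timestamps from the triage notes.
for event in events_for_key(
    "AKIAEXAMPLEKEYID0000",
    leaked_at=datetime(2024, 3, 19, 23, 4, tzinfo=timezone.utc),
    rotated_at=datetime(2024, 3, 19, 23, 41, tzinfo=timezone.utc),
):
    print(event["EventTime"], event["EventName"], event.get("Username"))
```

An empty result between those timestamps, minus your own known service calls, is the evidence the incident report needs.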

GitHub history: rewrite or leave it

The debate over rewriting Git history is heated, and the right answer is context-dependent. If the repo is public, the secret was already harvested, so rewriting hides nothing from attackers, though it does keep the value away from future casual readers. If the repo is private with limited access, rewriting history is more useful. Use BFG Repo-Cleaner or git filter-repo, and remember that every collaborator needs to re-clone afterward. Force-pushing rewritten history to a shared branch breaks teammates' workspaces if you do not announce it.
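If you do rewrite, one way to drive git filter-repo, sketched from Python to match the other examples. The replacements file uses filter-repo's literal==>replacement syntax; the key and tombstone string are placeholders.

```python
import pathlib
import subprocess

# git filter-repo reads replacement rules as lines of "literal==>replacement".
rules = pathlib.Path("replacements.txt")
rules.write_text("AKIAEXAMPLEKEYID0000==>REMOVED-SEE-INCIDENT-LOG\n")  # placeholder key

# Rewrites every commit containing the literal; hashes change from the first
# affected commit onward, so every collaborator must re-clone afterward.
# filter-repo expects a fresh clone and asks for --force otherwise.
subprocess.run(["git", "filter-repo", "--replace-text", str(rules)], check=True)
```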

Notify and document

If the leaked credential gives access to customer data, regulatory disclosure may be required. If it touches PCI or PHI, the disclosure clock is short. Document everything: the time of leak, time of detection, scope of credential, abuse log review results, rotation timeline, regulatory determination. This is the document that goes to insurance, the auditor, and possibly the regulator. Write it like it will be read by all three.
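One way to make those fields hard to skip is to encode them as a required record from minute one. A minimal sketch, with field names of our own choosing:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class LeakRecord:
    """The facts insurance, the auditor, and a regulator will all ask for."""
    leaked_at: datetime            # push timestamp, UTC
    detected_at: datetime          # when a human or scanner noticed
    credential_scope: str          # what the key could reach
    abuse_found: bool              # outcome of the log review
    rotated_at: datetime           # when the old credential was disabled
    regulatory_disclosure: str     # determination: "not required" or "required: <regime>"
```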

Prevent the next one

  • Pre-commit hooks running gitleaks or trufflehog catch most accidents (a hook sketch follows this list)
  • GitHub secret scanning push protection blocks pushes with detected secrets
  • Short-lived credentials via OIDC for CI eliminate long-lived keys entirely
  • Centralized secret management (Vault, AWS Secrets Manager, Doppler) so secrets never live in repos
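A minimal sketch of the first bullet as a repository hook, assuming gitleaks v8 is on PATH; newer releases spell the staged check differently, so verify the subcommand against your version.

```python
#!/usr/bin/env python3
"""Block any commit whose staged changes contain a detected secret.

Install as .git/hooks/pre-commit (executable). Assumes gitleaks v8 on PATH.
"""
import subprocess
import sys

# Non-zero exit means gitleaks found something in the staged diff.
result = subprocess.run(["gitleaks", "protect", "--staged"])
if result.returncode != 0:
    print("gitleaks found a secret in staged changes; commit blocked.", file=sys.stderr)
    sys.exit(1)
```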

Every leaked key starts the same way: a hurry, a default config, a developer who has not slept enough. Treat prevention as a tooling problem, not a training problem.

Read more field notes, explore our services, or get in touch at info@bipi.in.