Why Your Secrets Scanner Yells About Things That Are Not Secrets
Out-of-the-box secret scanners produce hundreds of alerts a week, most of them noise. Tuning that signal-to-noise ratio is what separates a working secrets program from one everyone has muted.
By Arjun Raghavan, Security & Systems Lead, BIPI · February 20, 2026 · 6 min read
Every team that turns on GitHub secret scanning, GitLeaks, TruffleHog, or any CI-integrated secret detector goes through the same arc. Initial enthusiasm: 'we are catching real exposures.' Two weeks later: 'why is this thing still alerting on the same test fixture?' Two months later: the channel is muted, the alerts are ignored, and a real secret leak goes unnoticed in the noise.
The problem is not the scanner. The problem is the absence of a tuning workflow that mature SOC programs apply to other detection systems.
The default false-positive sources
- Test fixtures with example tokens that look real but are not. AKIAEXAMPLE..., test-stripe-keys, JWT samples in unit tests.
- Documentation that includes redacted credentials with the format intact.
- Expired or rotated secrets that are still in git history.
- Synthetic credentials in test environments that should never reach production but are valid-shaped.
- Third-party vendor SDKs that include example/sandbox keys in their published source.
- Generated random strings that happen to match secret patterns by coincidence.
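The last item is easy to underestimate. The canonical detector for an AWS access key ID is a short regex, so any sufficiently long uppercase-alphanumeric string with the right prefix will trip it. A minimal sketch (the regex is the commonly published AWS key-ID pattern; the sample strings are invented for illustration):

```python
import re

# Commonly published pattern for AWS access key IDs:
# the "AKIA" prefix followed by 16 uppercase letters or digits.
AWS_KEY_ID = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

samples = [
    "AKIAIOSFODNN7EXAMPLE",   # AWS's documented example key: matches
    "AKIA" + "Z" * 16,        # a random-looking string: also matches
    "AKIA1234",               # too short: no match
]

for s in samples:
    print(s, "->", bool(AWS_KEY_ID.search(s)))
```

The scanner has no way to tell the second sample from a real key; only context (and, later, a validity check) can.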
The wrong fix: blanket suppression
The team's first response is usually to add file-level or path-level suppressions: /tests/**, /docs/**, and *.example.* are all marked exempt. Now real secrets in test files no longer alert. We have seen real production credentials committed to a 'tests' folder by an engineer who thought the file was just another fixture.
Suppression by path is a heavy hammer. It treats the entire path as untrusted and stops scanning. Better is suppression by content pattern with a documented reason.
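With Gitleaks, for example, the difference is between a path entry and a regex entry in the config's allowlist. A sketch (check the config schema for your Gitleaks version; the ticket reference is an illustrative placeholder):

```toml
# .gitleaks.toml
[allowlist]
  description = "Documented false positives only -- see ticket SEC-1234"

  # Heavy hammer: nothing under tests/ is ever scanned. Avoid.
  # paths = ['''tests/.*''']

  # Narrow alternative: allow only specific, documented strings.
  regexes = [
    '''AKIAIOSFODNN7EXAMPLE''',  # AWS's published example key ID
  ]
```

The commented-out `paths` line is the blanket suppression described above; the `regexes` entry is the content-level version that keeps the rest of the tree scanned.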
The right fix: pattern-level allowlisting
For each repeated false positive, identify the specific pattern and document it. AKIAIOSFODNN7EXAMPLE is a documented AWS example key. Add it to an allowlist with a note. The next match continues to alert. New false positives still appear, but the queue is now manageable.
Tools that support this well: GitHub secret scanning custom patterns and exclusions, GitLeaks .gitleaksignore, semgrep rules with metadata. Tools that support it badly: anything with global suppression only.
Allowlisting AKIAEXAMPLE specifically is safe. Allowlisting all AWS keys in /tests/ is dangerous.
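In-house, the same idea can be a small data structure consulted before an alert fires: each entry carries a fingerprint of the exact matched string, a reason, and an owner, so every suppression is auditable. A minimal sketch (names and entries are illustrative):

```python
from dataclasses import dataclass
from hashlib import sha256

@dataclass(frozen=True)
class AllowlistEntry:
    fingerprint: str   # sha256 of the exact matched string
    reason: str        # why this is a known false positive
    owner: str         # who approved the suppression

def fingerprint(matched: str) -> str:
    return sha256(matched.encode()).hexdigest()

ALLOWLIST = {
    fingerprint("AKIAIOSFODNN7EXAMPLE"): AllowlistEntry(
        fingerprint=fingerprint("AKIAIOSFODNN7EXAMPLE"),
        reason="AWS's documented example access key ID",
        owner="security@example.com",  # illustrative owner
    ),
}

def should_alert(matched: str) -> bool:
    # Alert on anything not explicitly, individually allowlisted.
    return fingerprint(matched) not in ALLOWLIST
```

Storing hashes rather than the literal strings also keeps the allowlist file itself from looking like a leak to the scanner.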
Severity by validity
GitHub's partner secret-scanning program does this well: when a real-looking AWS key is detected, it is automatically validated against AWS. If it is active, severity is critical. If it is inactive, lower. The team focuses on the 5 percent that are actually exploitable.
We replicate this for self-hosted scanning: every detected credential is run through a validity probe (a non-destructive API call to the relevant service). Active credentials get paged. Inactive ones get a ticket for cleanup but do not interrupt anyone.
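The probe itself is service-specific (for AWS keys it can be a read-only call such as STS GetCallerIdentity; for API tokens, a harmless authenticated GET), but the triage logic around it is generic. A sketch of that triage with the service call and the routing hooks passed in as callables (`page_oncall` and `open_cleanup_ticket` are hypothetical hooks, not a real SDK):

```python
from typing import Callable

def triage(credential: str,
           is_active: Callable[[str], bool],
           page_oncall: Callable[[str], None],
           open_cleanup_ticket: Callable[[str], None]) -> str:
    """Route a detected credential by validity, not by pattern alone."""
    if is_active(credential):
        page_oncall(credential)        # live secret: interrupt someone now
        return "critical"
    open_cleanup_ticket(credential)    # dead secret: clean up asynchronously
    return "low"

# Usage with stubbed hooks, for illustration:
paged, tickets = [], []
triage("AKIAEXAMPLE", lambda c: True, paged.append, tickets.append)
triage("rotated-old-key", lambda c: False, paged.append, tickets.append)
```

Keeping the probe behind a callable also makes the routing logic trivially testable without touching any real service.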
Pre-commit, CI, and post-merge are different stages
- Pre-commit: high signal, low noise. Scan the diff being committed. False positives here annoy developers and erode trust.
- CI: scan the merge commit before it lands. Block on critical (active credentials).
- Post-merge / scheduled: scan the full history weekly for things that snuck in. Lower urgency, comprehensive.
- Push protection (GitHub native): block the push entirely if a high-confidence pattern is detected. Best for credential types with low false-positive rate (Stripe, AWS, OpenAI).
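A pre-commit hook in this spirit only needs the staged diff, never the full history. A minimal sketch (the pattern set is a tiny illustrative subset; real scanners ship hundreds of rules):

```python
import re
import subprocess

# A tiny illustrative subset of high-confidence patterns.
PATTERNS = {
    "aws-access-key-id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "stripe-live-key": re.compile(r"\bsk_live_[0-9a-zA-Z]{24,}\b"),
}

def scan_added_lines(diff_text: str) -> list[tuple[str, str]]:
    """Return (rule, line) pairs for lines added in a unified diff."""
    hits = []
    for line in diff_text.splitlines():
        if not line.startswith("+") or line.startswith("+++"):
            continue  # only newly added lines; skip the file header
        for rule, pattern in PATTERNS.items():
            if pattern.search(line):
                hits.append((rule, line))
    return hits

def scan_staged_changes() -> list[tuple[str, str]]:
    """Scan only what is about to be committed (the staged diff)."""
    diff = subprocess.run(["git", "diff", "--cached"],
                          capture_output=True, text=True).stdout
    return scan_added_lines(diff)
```

Wired into a pre-commit hook, the script would exit nonzero when `scan_staged_changes()` returns any findings. Scanning only added lines keeps the hook fast and avoids re-alerting on context lines that are merely moving around.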
Rotation, not just detection
A detected secret is only handled once it is rotated. Detection without a rotation pipeline means the secret stays valid in the wild while the team flags and forgets. Mature workflows include a rotation step: when a credential is found, the platform that issued it (AWS, Vault, Doppler, an internal IdP) is called to rotate it, and the application is restarted to pick up the new value.
This is the part that takes engineering effort. It also distinguishes a secrets program from a secrets-detection program.
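The shape of that pipeline is the same regardless of the issuing platform: rotate at the issuer, persist the replacement, roll the consumers. A sketch with the platform call stubbed behind a small interface (`Issuer` and the hook names are assumptions for illustration, not a real SDK):

```python
from typing import Callable

class Issuer:
    """Minimal interface an issuing-platform adapter provides.
    Real adapters would wrap AWS IAM, Vault, Doppler, etc."""
    def rotate(self, credential_id: str) -> str:
        raise NotImplementedError

def handle_leak(credential_id: str,
                issuer: Issuer,
                store_new_value: Callable[[str], None],
                restart_app: Callable[[], None]) -> None:
    # 1. Ask the issuing platform to revoke the old value and mint a new one.
    new_value = issuer.rotate(credential_id)
    # 2. Persist the replacement where the application reads it.
    store_new_value(new_value)
    # 3. Bounce the application so it picks up the new value.
    restart_app()
```

The ordering matters: persist before restart, so the restarted application never races a half-rotated credential.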
Closing
Secret scanning is one of those security practices where the defaults are good enough to feel like a working program but not good enough to be one. The work to make it useful is unglamorous: tune the patterns, validate the findings, automate the rotation. Without it, every team eventually mutes the alerts. With it, the next real exposure gets caught and fixed in minutes instead of days.