Bug Bounty Automation: Where It Pays vs Where It Burns Bridges
Cybersecurity
Automation finds bugs at scale but can also get you banned. Learn which automation patterns pay, which violate program rules, and how to build a recon and detection pipeline that scales without crossing lines.
By Arjun Raghavan, Security & Systems Lead, BIPI · May 14, 2023 · 9 min read
Automation cuts both ways
Good automation finds bugs at scale and frees you for deep manual work. Bad automation hammers programs with scanner noise, generates Informatives, and gets you banned. The line between the two is sharper than most hunters realize.
Where automation pays
- Continuous subdomain enumeration, watching for new assets joining scope.
- Certificate transparency monitoring, where new certificates indicate new services (see the sketch after this list).
- Tech stack fingerprinting, detecting framework or library version changes.
- Endpoint diffing, watching JavaScript for new API routes and parameters.
- Vulnerability monitoring, alerting when known CVEs touch the target's stack.
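To make the CT item concrete, here is a minimal sketch of that monitor. It assumes crt.sh's public JSON endpoint; the domain, the seen_hosts.txt state file, and the function names are illustrative, not part of any standard tooling.

```python
# ct_watch.py - a minimal sketch of certificate transparency monitoring.
import json
import pathlib

import requests

SEEN_FILE = pathlib.Path("seen_hosts.txt")  # hypothetical state file

def ct_hostnames(domain: str) -> set[str]:
    """Pull every hostname crt.sh has seen in certificates for *.domain."""
    resp = requests.get(
        "https://crt.sh/",
        params={"q": f"%.{domain}", "output": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    hosts: set[str] = set()
    for entry in resp.json():
        # name_value can hold several newline-separated names per cert
        for name in entry["name_value"].splitlines():
            hosts.add(name.strip().lower().lstrip("*."))
    return hosts

def new_hosts(domain: str) -> set[str]:
    """Diff today's CT view against the stored set; persist the union."""
    seen = set(SEEN_FILE.read_text().split()) if SEEN_FILE.exists() else set()
    current = ct_hostnames(domain)
    fresh = current - seen
    SEEN_FILE.write_text("\n".join(sorted(seen | current)))
    return fresh

if __name__ == "__main__":
    for host in sorted(new_hosts("example.com")):
        print(f"[new] {host}")  # feed these to the triage queue, not a scanner
```

Note the last comment: new hostnames go to a human queue, never straight into a scanner run.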
Where automation burns
- Blind scanner blasting, where Nuclei or similar tools spray every template at every host.
- Brute force on login or password reset endpoints, which violates almost every brief.
- Aggressive directory busting at high concurrency, which trips WAFs and rate limits.
- Submitting unverified scanner output as reports, which destroys Signal and trust.
Reading the brief on automation
Most program briefs explicitly state automation rules. Common clauses include rate limits in requests per second, prohibited tools by name, blocked endpoints like login and signup, and required user agent strings identifying you as a researcher. Violating these can mean a ban with no appeal.
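One way to stay inside those clauses is to encode each brief's rules as data that every tool must consult before sending a request. A minimal sketch, with every value below invented for illustration:

```python
# program_rules.py - encode a brief's automation clauses as data so all
# your tools read the same limits. Every value here is an invented
# example; copy the real numbers, paths, and UA format from the brief.
from dataclasses import dataclass

@dataclass(frozen=True)
class ProgramRules:
    max_rps: float                       # rate limit from the brief
    user_agent: str                      # researcher-identifying UA string
    blocked_paths: tuple[str, ...] = ()  # endpoints automation must never touch
    banned_tools: tuple[str, ...] = ()

    def allows(self, path: str) -> bool:
        return not any(path.startswith(p) for p in self.blocked_paths)

# Hypothetical program config
ACME = ProgramRules(
    max_rps=5.0,
    user_agent="acme-bugbounty-researcher-yourhandle",
    blocked_paths=("/login", "/signup", "/password-reset"),
    banned_tools=("sqlmap",),
)

assert not ACME.allows("/login")    # automation refuses blocked endpoints
assert ACME.allows("/api/v2/users")
```

The frozen dataclass is deliberate: no script can quietly loosen the limits at runtime.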
Building a pipeline that scales
- Asset discovery layer, running daily across your watchlist.
- Change detection layer, diffing today against yesterday for new endpoints or services (sketched after this list).
- Triage queue, where humans review automation output before any further action.
- Manual deep dive layer, where promising leads get hand-crafted exploitation.
- Report layer, where only validated findings with PoC reach the platform.
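Here is a minimal sketch of the change-detection layer, assuming one plain-text asset snapshot per day; the snapshots/ directory and triage_queue.txt are hypothetical conventions, not a standard.

```python
# diff_layer.py - a sketch of the change-detection layer: diff today's
# asset snapshot against yesterday's and queue anything new for a human.
import datetime
import pathlib

SNAP_DIR = pathlib.Path("snapshots")        # one file of assets per day
TRIAGE_QUEUE = pathlib.Path("triage_queue.txt")

def load(day: datetime.date) -> set[str]:
    f = SNAP_DIR / f"{day.isoformat()}.txt"
    return set(f.read_text().split()) if f.exists() else set()

def run_diff() -> set[str]:
    today = datetime.date.today()
    fresh = load(today) - load(today - datetime.timedelta(days=1))
    if fresh:
        # Append, never act: a human reviews the queue before any scanning.
        with TRIAGE_QUEUE.open("a") as q:
            q.write("\n".join(sorted(fresh)) + "\n")
    return fresh

if __name__ == "__main__":
    for asset in sorted(run_diff()):
        print(f"[queue] {asset}")
```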
Validation gate, the most important step
A Nuclei hit does not mean a vulnerability exists. Always reproduce manually, confirm impact, and only then submit. A large share of the Informative closures on every platform trace back to hunters who skipped this step.
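The gate is easy to enforce in code: a finding object simply cannot reach the report layer until a human has set both flags. A sketch, with field names invented for illustration:

```python
# validation_gate.py - a sketch of the gate between scanner output and
# the report layer. Field names are invented for illustration.
from dataclasses import dataclass

@dataclass
class Finding:
    template_id: str   # e.g. the Nuclei template that fired
    target: str
    reproduced_manually: bool = False  # set by a human, never by the scanner
    impact_confirmed: bool = False     # ditto

def ready_to_report(f: Finding) -> bool:
    """Only manually validated findings with confirmed impact may ship."""
    return f.reproduced_manually and f.impact_confirmed

raw = Finding("exposed-panel-check", "https://app.example.com")
assert not ready_to_report(raw)  # raw scanner output alone never ships
raw.reproduced_manually = True
raw.impact_confirmed = True
assert ready_to_report(raw)
```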
Rate limiting your own automation
- Cap concurrency at the rate the brief allows, defaulting to ten requests per second if unspecified.
- Add jitter so your request pattern does not read as a scripted attack.
- Pause automation when the target's response times rise, which signals load (the first three rules are sketched after this list).
- Pull off the target entirely during their announced maintenance windows.
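As promised, a sketch combining the first three rules: a capped request rate, random jitter, and a back-off when median response times rise above a baseline. All thresholds are illustrative defaults, not values from any brief.

```python
# polite_limiter.py - rate cap, jitter, and adaptive back-off in one place.
import random
import statistics
import time

import requests

class PoliteLimiter:
    def __init__(self, max_rps: float = 10.0, jitter: float = 0.3):
        self.min_interval = 1.0 / max_rps
        self.jitter = jitter
        self.latencies: list[float] = []  # rolling window of response times
        self.baseline: float | None = None

    def wait(self) -> None:
        # Base delay from the cap, plus random jitter so the traffic
        # pattern does not look like a scripted attack.
        time.sleep(self.min_interval * (1 + random.uniform(0, self.jitter)))

    def record(self, latency: float) -> None:
        self.latencies = (self.latencies + [latency])[-20:]
        if self.baseline is None and len(self.latencies) == 20:
            self.baseline = statistics.median(self.latencies)
        # If the target responds 3x slower than baseline, it is under
        # load: back off instead of piling on.
        if self.baseline and statistics.median(self.latencies) > 3 * self.baseline:
            time.sleep(30)  # illustrative pause; tune per program

def fetch(limiter: PoliteLimiter, url: str) -> requests.Response:
    limiter.wait()
    start = time.monotonic()
    resp = requests.get(url, timeout=15)
    limiter.record(time.monotonic() - start)
    return resp
```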
The Signal cost of bad automation
If you submit ten scanner outputs and eight come back Informative, your Signal drops, your invite chances drop, and triagers mentally tag you as a low-quality hunter. Recovering from that perception takes months.
Automation should narrow your hunting, not widen your noise.
Tools that fit the pay side
- Passive recon, ProjectDiscovery's Chaos dataset, Subfinder, Amass in passive mode.
- Change detection, urlhunter-style tools watching the Wayback Machine and CT logs.
- Endpoint discovery, custom JS parsers tailored to the target's framework (sketched after this list).
- Targeted scanning, Nuclei with custom templates you have written for the program.
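Here is a deliberately crude sketch of the JS-parser idea: pull path-like strings from a bundle with a regex and diff them over time. A real parser should follow the target's framework conventions and will catch routes this misses; the bundle URL is a placeholder.

```python
# js_endpoints.py - naive endpoint extraction from a JavaScript bundle.
import re

import requests

# Matches quoted paths like "/api/users" or "/v2/orders/{id}"
PATH_RE = re.compile(r'["\'](/(?:api|v\d+)/[A-Za-z0-9_./{}-]+)["\']')

def extract_endpoints(js_url: str) -> set[str]:
    body = requests.get(js_url, timeout=30).text
    return set(PATH_RE.findall(body))

if __name__ == "__main__":
    # Hypothetical bundle URL; new paths here go to the triage queue.
    for path in sorted(extract_endpoints("https://app.example.com/static/main.js")):
        print(path)
```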
When to manualize entirely
Logic bugs, IDOR, authorization issues, race conditions, and most chain bugs rarely surface through automation. If you are hunting the highest-paid categories, automation only handles the discovery layer; the rest is manual work.
Read more field notes, explore our services, or get in touch at info@bipi.in.