Cloud Storage Bucket Recon and How to End the Class
Cloud Security
Public buckets are not a 2017 problem. We still find them on every engagement, plus signed-URL leaks and bucket takeovers nobody noticed. Here is how recon works and the controls that close the class.
By Arjun Raghavan, Security & Systems Lead, BIPI · February 10, 2025 · 7 min read
Object stores leak in three ways: bucket-name guessing finds public buckets; signed URLs leak through logs and CDNs and stay valid long after they should; and DNS records pointing at deleted buckets get re-registered by attackers (subdomain takeover via CNAME). A single engagement regularly turns up all three.
How attackers find this
Bucket name enumeration is brute force against a known URL schema. S3Scanner and lazys3 sweep S3 with company-name permutations; GCPBucketBrute does the same for GCS. Azure Blob enumeration targets the storage account name in the URL, and tools like MicroBurst's Invoke-EnumerateAzureBlobs walk it. We feed wordlists built from the target's domain, project names, and known internal acronyms. Typical findings:
- Public bucket ACL: anyone can list and read objects; a quick aws s3 ls s3://&lt;bucket&gt; --no-sign-request confirms it.
- World-listable bucket with sensitive directory: backups/, db-dumps/, secrets/.
- Signed URL with multi-year expiry leaked into a public webhook log.
- CNAME pointing at a deleted bucket whose name is now claimable: register, host attacker content under target's domain.
- Cross-account bucket policy with an overly permissive "Principal": "*" and no aws:SourceAccount condition.
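The permutation sweep those enumeration tools perform can be sketched in a few lines. The affix list below is a small illustrative assumption; real wordlists run to thousands of entries.

```python
from itertools import product

# Illustrative affixes seen on real bucket names; an actual sweep
# uses much larger wordlists derived from the target.
AFFIXES = ["backup", "backups", "dev", "staging", "prod", "logs", "assets", "db-dumps"]
SEPARATORS = ["", "-", "."]

def bucket_candidates(org_names):
    """Yield de-duplicated bucket-name guesses from org/project names."""
    seen = set()
    for name, sep, affix in product(org_names, SEPARATORS, AFFIXES):
        for candidate in (f"{name}{sep}{affix}", f"{affix}{sep}{name}"):
            if candidate not in seen:
                seen.add(candidate)
                yield candidate

# Each candidate would then be probed out-of-band, e.g. with an
# unsigned HEAD request to https://<candidate>.s3.amazonaws.com.
```

For a target named acme this yields guesses like acme-backups and prod.acme; the same generator feeds GCS and Azure probes with the scheme swapped.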
Methodology in practice
We treat public-bucket discovery as low-effort recon and run it before any active testing. Anything that comes back is reportable on its own. Signed-URL leaks require crawling public-facing assets (web pages, JS bundles, mobile apps) for query-string-bearing object URLs. Bucket takeover requires mapping target DNS for CNAMEs that resolve to NoSuchBucket responses.
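The signed-URL crawl reduces to scanning harvested text for SigV4 query strings and computing when each URL stops working. A minimal sketch, assuming AWS-style X-Amz-Date/X-Amz-Expires parameters (the helper name is ours):

```python
import re
from datetime import datetime, timedelta, timezone
from urllib.parse import urlparse, parse_qs

# SigV4 presigned URLs carry the signature in the query string.
PRESIGNED_RE = re.compile(r"https://[^\s\"']+X-Amz-Signature=[^\s\"']+")

def presigned_expiries(text):
    """Return (url, expiry_utc) pairs for presigned URLs found in text."""
    results = []
    for url in PRESIGNED_RE.findall(text):
        qs = parse_qs(urlparse(url).query)
        try:
            issued = datetime.strptime(
                qs["X-Amz-Date"][0], "%Y%m%dT%H%M%SZ"
            ).replace(tzinfo=timezone.utc)
            lifetime = timedelta(seconds=int(qs["X-Amz-Expires"][0]))
        except (KeyError, ValueError):
            continue  # malformed or truncated URL; skip
        results.append((url, issued + lifetime))
    return results
```

Anything whose expiry is days or years out, found in a public webhook log or JS bundle, is a finding on its own.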
Half of the storage findings on our engagements are buckets that were public for years and nobody opened the AWS Trusted Advisor report.
Detection
S3 server access logs and CloudTrail data events for S3 catch unusual access patterns, and Macie classifies sensitive data and alerts on public exposure. For GCS, enable Data Access audit logs on storage.googleapis.com and watch for storage.objects.list calls from unexpected principals. For Azure, Storage Analytics logs and Defender for Storage cover the same ground. The signal in every case is access from outside the expected principals or networks.
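As one concrete detection, anonymous reads show up in S3 server access logs with the requester field set to "-". A sketch of that check, assuming the documented field order (the helper name is ours):

```python
import shlex

def is_anonymous_read(log_line):
    """True if an S3 server access log line records an unsigned read."""
    # Log fields are space-separated, with quoting for the request line.
    # Layout after splitting: 0 owner, 1 bucket, 2-3 [time], 4 remote IP,
    # 5 requester, 6 request ID, 7 operation, 8 key, ...
    fields = shlex.split(log_line)
    if len(fields) < 9:
        return False  # truncated line
    requester, operation = fields[5], fields[7]
    return requester == "-" and operation.startswith(("REST.GET", "REST.HEAD"))
```

Run over shipped access logs in the security account, this flags exactly the unsigned listing and download traffic that public-bucket recon generates.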
Remediation
- Turn on S3 Block Public Access at the account level so every bucket inherits it; the same switch exists as GCS Public Access Prevention and Azure's 'Allow blob public access' setting (set to Disabled).
- Default-deny bucket policies with explicit allow only for known account principals and known VPC endpoints.
- Use signed URLs with short expiry (minutes, not days) and rotate signing keys; for AWS, prefer aws:RequestedRegion and aws:SourceVpc conditions.
- Enable server access logging on every bucket; route logs into a security account.
- Audit DNS for CNAMEs to cloud storage and remove orphans before someone claims them.
- Run Macie or equivalent classification on object stores; alert on PII or secrets in unexpected buckets.
- Tag buckets with owner and data classification; ungoverned buckets get cleaned up on a schedule.
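The default-deny shape from the list above looks roughly like this as an S3 bucket policy. The bucket name, account ID, and VPC endpoint ID are placeholders; treat this as a sketch to adapt, and carve out explicit allowances for any service principals (replication, log delivery) before deploying.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyOutsideKnownAccount",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ],
      "Condition": {
        "StringNotEquals": { "aws:PrincipalAccount": "111122223333" }
      }
    },
    {
      "Sid": "DenyOutsideKnownVpcEndpoint",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ],
      "Condition": {
        "StringNotEquals": { "aws:SourceVpce": "vpce-0123456789abcdef0" }
      }
    }
  ]
}
```

Two separate Deny statements are deliberate: condition keys inside one statement are ANDed, so a single combined statement would only deny requests failing both checks rather than either one.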
The class of bug ends when public access is impossible by default and signed URLs are short-lived. Both are switches your cloud already supports. The work is committing to use them and removing the long tail of buckets that were created before the policy existed.
Read more field notes, explore our services, or get in touch at info@bipi.in.