Kubernetes Privilege Escalation Beyond RBAC: The Paths Auditors Miss.
Cloud Security
RBAC is necessary but not sufficient. Real-world cluster compromise usually escalates through node access, host mounts, service accounts on shared nodes, or the API server's own auth flows. A field guide to what we find.
By Arjun Raghavan, Security & Systems Lead, BIPI · April 28, 2026 · 8 min read
Kubernetes RBAC audits are a common deliverable. Auditors run kubectl auth can-i across roles, cross-reference with cluster role bindings, flag any wildcards, hand over a report. The report is correct as far as it goes. It almost always misses how clusters actually get owned.
We have run incident response on six Kubernetes compromises in the last year. None of them used a wildcard RBAC role. Five of them escalated through a path the auditor did not check. Here are the patterns.
1. Pod-to-node escape via privileged containers
A workload runs with privileged: true, or with hostPath mounts that include the kubelet directory or the container runtime socket. An attacker who compromises the application now has root on the node and, from the node, full access to every other pod scheduled there, every secret mounted into those pods, and the kubelet credential, which often lets them list every node in the cluster.
RBAC has nothing to say about this. The pod is allowed to mount what its spec says it can mount. The fix is at the admission controller layer: Pod Security Admission set to restricted on every namespace that does not have a documented reason to be otherwise.
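As a sketch of what that looks like, the labels below enforce the restricted profile on a namespace; the namespace name is a placeholder.

```yaml
# Enforce the Pod Security Admission "restricted" profile on a namespace.
# "team-payments" is a placeholder name.
apiVersion: v1
kind: Namespace
metadata:
  name: team-payments
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    # Warn and audit at the same level so violations surface
    # in kubectl output and audit logs during rollout.
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```

With that in place, a pod spec asking for privileged: true or a hostPath mount is rejected at admission time, regardless of what RBAC would have allowed.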
2. Service account tokens shared across pods on a node
Older clusters mount the default service account token into every pod. If two workloads from different teams are scheduled on the same node, and one of them gets compromised, the attacker can sometimes reach the other pod's service account token through the kubelet credential or through the node's filesystem.
The audit fix is automountServiceAccountToken: false on every workload that does not need a token, and bound service account tokens (the TokenRequest API) for the ones that do. Kubernetes 1.24 and later mount bound, time-limited tokens by default and no longer auto-create long-lived token Secrets, but Secrets created before the upgrade, or created by hand since, keep working until you revoke them.
If you are still mounting long-lived service account tokens into pods, an attacker who lands on one node walks away with credentials that never expire, scoped by whatever RBAC those service accounts hold, which in practice often covers most of the cluster.
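A minimal sketch of both mitigations on one workload; the names, image, and expiry below are placeholders.

```yaml
# Hypothetical Deployment: no implicit token mount, plus a short-lived
# bound token projected only for the container that actually needs one.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: billing-api                            # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels: {app: billing-api}
  template:
    metadata:
      labels: {app: billing-api}
    spec:
      serviceAccountName: billing-api
      automountServiceAccountToken: false      # stop the default mount
      containers:
      - name: app
        image: registry.example.com/billing-api:1.8   # placeholder image
        volumeMounts:
        - name: api-token
          mountPath: /var/run/secrets/tokens
          readOnly: true
      volumes:
      - name: api-token
        projected:
          sources:
          - serviceAccountToken:
              path: token
              expirationSeconds: 600           # kubelet rotates before expiry
```

The projected token comes from the TokenRequest API, so it expires on schedule and is invalidated when the pod goes away; a stolen copy does not live forever.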
3. Cloud metadata service from inside the pod
AWS, Azure, and GCP all expose an instance metadata service at a link-local address. By default, a pod that has network egress can reach 169.254.169.254 and pull the node's IAM credentials. If those credentials have any meaningful permissions (and on most clusters they do), the attacker now has cloud-level access from a pod compromise.
The fix on AWS is IMDSv2 with a hop limit of 1 (metadata responses then cannot make the extra network hop from the node into a pod), the equivalent hardening on Azure and GCP, plus network policies that block pod egress to the metadata IP. We find this misconfigured on roughly 70 percent of the clusters we audit.
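The network-policy half is portable across clouds; a sketch is below, assuming a CNI that actually enforces egress rules, with a placeholder namespace. The IMDSv2 hop limit itself is set on the instance or launch template, not inside Kubernetes.

```yaml
# Allow all egress except the link-local metadata endpoint,
# for every pod in the namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-metadata-egress
  namespace: team-payments      # placeholder namespace
spec:
  podSelector: {}               # selects every pod in the namespace
  policyTypes: ["Egress"]
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 169.254.169.254/32    # cloud instance metadata service
```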
4. ExternalName services that bypass network policy
Kubernetes NetworkPolicy selects pods, and most policies are written in terms of pod and namespace selectors. ExternalName services resolve to arbitrary DNS names, often outside the cluster. An attacker who can create Services in their namespace can point a trusted-looking in-cluster name at an external host, creating egress paths the policy never anticipated, because the destination is not a pod.
Defence sits at the admission layer: deny ExternalName Service creation by policy, or use a CNI like Cilium whose L7 policies operate at the DNS/SNI level rather than the pod-IP level.
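One way to express the first option, sketched with the built-in ValidatingAdmissionPolicy and a CEL expression; this assumes a cluster recent enough to serve the v1 API, and on older clusters a Kyverno or OPA policy does the same job.

```yaml
# Reject Services of type ExternalName cluster-wide.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: deny-externalname-services
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["services"]
  validations:
  - expression: "object.spec.type != 'ExternalName'"
    message: "ExternalName Services are not permitted in this cluster."
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: deny-externalname-services
spec:
  policyName: deny-externalname-services
  validationActions: ["Deny"]
```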
5. Webhook misconfiguration in admission controllers
Validating and mutating admission webhooks are consulted on every API request that matches their rules. If a webhook is configured to fail open (failurePolicy: Ignore) and its backend becomes unreachable, that admission control is silently bypassed. We found this on a cluster last year: the webhook backend ran in a namespace isolated by network policy, and during a policy refactor the backend became unreachable from the API server. For 14 hours the cluster accepted any pod spec, including pods with privileged: true.
Audit tip: check failurePolicy on every admission webhook in the cluster. If it is Ignore, you have a kill-switch for security policy that an attacker (or a tired SRE) can hit.
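For reference, a fail-closed registration looks like the fragment below; the names, namespace, and path are placeholders.

```yaml
# Webhook registration that fails closed: if the backend is unreachable,
# matching requests are rejected instead of silently admitted.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: policy-enforcer
webhooks:
- name: pods.policy.example.com
  failurePolicy: Fail            # not Ignore
  sideEffects: None
  admissionReviewVersions: ["v1"]
  clientConfig:
    service:
      name: policy-enforcer
      namespace: security-system
      path: /validate
    # caBundle omitted for brevity
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["pods"]
```

Fail cuts both ways: if the backend dies, matching requests stop cluster-wide, so scope the rules tightly and exclude the namespaces the webhook itself depends on.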
6. Secrets in environment variables
Not strictly an escalation, but a credential leak that turns a small compromise into a large one. Pods often receive database passwords, API keys, and signing secrets through env vars. Anyone who can read the pod spec (anyone with get on pods in the namespace) can read those env vars. Anyone who can exec into the pod can dump them. Anyone who compromises the application can leak them.
The mitigation is secrets mounted as files (the values no longer appear in the pod spec) plus, ideally, a real secrets manager (Vault, AWS Secrets Manager, GCP Secret Manager) issuing short-lived credentials. Plain env-var secrets are a 2019 pattern that has somehow survived into 2026.
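A sketch of the file-mount pattern; the pod, image, and Secret names are placeholders.

```yaml
# Hypothetical pod fragment: the secret arrives as files under /etc/secrets,
# so the values never appear in the pod spec that `kubectl get pods -o yaml` returns.
apiVersion: v1
kind: Pod
metadata:
  name: payments-worker
spec:
  containers:
  - name: app
    image: registry.example.com/payments-worker:2.3
    volumeMounts:
    - name: db-creds
      mountPath: /etc/secrets
      readOnly: true
  volumes:
  - name: db-creds
    secret:
      secretName: payments-db-credentials
      defaultMode: 0400         # owner read-only inside the container
```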
What a real audit looks like
- Run kube-bench or similar against every node.
- Review every Pod Security Admission policy and identify exemptions.
- List every service account, what it can do, and whether its token is mountable.
- Test metadata service reachability from a debug pod in every namespace.
- Enumerate admission webhooks and check failurePolicy on each.
- Check kubelet authentication and authorization settings (anonymous-auth must be explicitly set to false, not left at its default; see the sketch after this list).
- Audit secrets storage: env vars vs mounted volumes vs external manager.
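The kubelet item above, sketched as the relevant fields of a KubeletConfiguration (typically /var/lib/kubelet/config.yaml on kubeadm-built nodes; the path varies by distribution).

```yaml
# Kubelet settings an audit should confirm on every node.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false      # set explicitly; do not rely on the default
  webhook:
    enabled: true       # delegate authentication to the API server
authorization:
  mode: Webhook         # delegate authorization too, never AlwaysAllow
readOnlyPort: 0         # disable the legacy unauthenticated read-only port
```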
Closing
RBAC is one layer. It is the layer most commonly audited because it is the one most easily described in a compliance framework. Real attackers do not target the RBAC layer. They target the layers underneath, where the configuration is more nuanced and the audit coverage is thinner. If your Kubernetes audit ends at RBAC, you have audited the strongest part of the system and ignored the weakest.
Read more field notes, explore our services, or get in touch at info@bipi.in.