Kubernetes Pentest Methodology: From Outside to Cluster Admin
Cloud Security
Kubernetes clusters fail through the same handful of mistakes: anonymous API access, exposed kubelets, weak RBAC, and tokens that let one pod become the cluster. Here is how we work them, and how to harden against them.
By Arjun Raghavan, Security & Systems Lead, BIPI · February 4, 2025 · 9 min read
When we pentest a Kubernetes cluster we walk through the same checklist for every engagement, because the same mistakes recur. The cluster's attack surface is wide: the API server, the kubelets, etcd, the cloud provider integration, ingress controllers, and the workloads themselves. Each one is a path.
How attackers find this
kube-hunter does the perimeter sweep: open kube-apiserver, kubelet on 10250 with anonymous access, etcd on 2379, dashboard exposed without auth. Once we have any pod foothold, peirates and kdigger enumerate from inside: what service account token is mounted, what RBAC does it have, what NetworkPolicies apply, what node features are reachable.
- Anonymous API access: --anonymous-auth=true on the API server (sometimes still the default in old clusters) lets unauthenticated callers list resources.
- kubelet on 10250: with anonymous auth enabled, an attacker can exec into any pod on that node.
- Exposed etcd without mTLS: read directly to extract every secret in the cluster.
- Default service account tokens automounted into every pod with cluster-reader or worse RBAC.
- Privileged pods or hostPath mounts that let a compromised pod become a node compromise.
- RBAC: create (or even get) on pods/exec across all namespaces is effectively cluster admin.
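The sweep and the checklist above can be approximated by hand. A sketch, assuming a recent kube-hunter release; the CIDR, API_SERVER, NODE, and ETCD hosts are placeholders for the engagement's targets:

```shell
# Automated perimeter sweep of a candidate subnet (placeholder CIDR).
kube-hunter --cidr 10.0.0.0/24 --report json

# Manual spot checks. Anonymous API access: an unauthenticated /version
# or resource list returning 200 instead of 401/403.
curl -sk https://API_SERVER:6443/version
curl -sk https://API_SERVER:6443/api/v1/secrets

# Anonymous kubelet on 10250: listing pods without credentials means the
# exec endpoint on the same port is usually reachable too.
curl -sk https://NODE:10250/pods

# Exposed etcd: every secret in the cluster lives under /registry.
etcdctl --endpoints=https://ETCD:2379 --insecure-skip-tls-verify \
  get /registry/secrets --prefix --keys-only
```

These commands only make sense against a live target, so treat them as the shape of the checks rather than a script to paste.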
Methodology in practice
From outside we look at the API server first and ingress controllers second. From inside a pod, we check the service account token at /var/run/secrets/kubernetes.io/serviceaccount/token, run a self-subject-access-review against high-impact verbs (create pods, get secrets, exec, impersonate), and look for paths to escape the namespace boundary. A pod that can create deployments in kube-system is cluster admin in two minutes.
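The token check can be scripted. The sketch below builds a structurally similar sample token so the decode logic runs anywhere; inside a real pod, set TOKEN from the mounted file instead. The claims content is illustrative, not a real token:

```shell
# Kubernetes mounts the pod's service account token at:
#   /var/run/secrets/kubernetes.io/serviceaccount/token
# Sample stand-in (header.payload.signature); in a pod, use the file above.
CLAIMS='{"sub":"system:serviceaccount:default:default"}'
TOKEN="header.$(printf '%s' "$CLAIMS" | base64 | tr -d '=\n' | tr '+/' '-_').signature"

# JWTs are three base64url segments; the middle one is the claims payload.
PAYLOAD=$(printf '%s' "$TOKEN" | cut -d. -f2 | tr '_-' '/+')
# Restore the padding that base64url strips before decoding.
PAD=$(( (4 - ${#PAYLOAD} % 4) % 4 ))
[ "$PAD" -ne 0 ] && PAYLOAD="$PAYLOAD$(printf '%*s' "$PAD" '' | tr ' ' '=')"
printf '%s' "$PAYLOAD" | base64 -d
echo

# With the identity known, ask the API server what it can actually do:
#   kubectl auth can-i --list
#   kubectl auth can-i create pods -n kube-system
#   kubectl auth can-i get secrets --all-namespaces
```

The decoded payload tells you which namespace and service account the pod runs as; the `kubectl auth can-i` calls are the self-subject-access-review step.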
Detection
Audit logging on the API server is mandatory; without it nothing else matters. Ship audit logs to a SIEM and write detections for anomalous verb-resource pairs (a service account that has never created pods suddenly creating them), exec into pods in kube-system, secret reads outside known controllers, and CertificateSigningRequest creations. Falco gives node-level detections (a shell in a pod, a file write to /etc/kubernetes).
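A minimal audit policy sketch covering the detections above, assuming a self-managed control plane where the file (name assumed here as audit-policy.yaml) is passed via --audit-policy-file; managed clusters expose equivalent settings through the provider:

```yaml
# audit-policy.yaml (assumed name): log what the SIEM detections need,
# keep the rest at Metadata to bound volume.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Secret access: enough metadata to flag reads outside known controllers.
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets"]
  # Record who execs into or attaches to which pod, with full request detail.
  - level: RequestResponse
    resources:
      - group: ""
        resources: ["pods/exec", "pods/attach"]
  # Everything else at Metadata preserves verb-resource pairs for anomaly rules.
  - level: Metadata
```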
Remediation
- Set --anonymous-auth=false on the API server and kubelet; the default has shifted on managed clusters but check yours.
- Disable automountServiceAccountToken on the ServiceAccount and in the pod spec for workloads that do not call the API.
- Apply Pod Security Admission 'restricted' on application namespaces; block hostPath, hostNetwork, hostPID, privileged.
- Default-deny NetworkPolicy in every namespace; explicitly allow only the flows the workload needs.
- Audit RBAC quarterly; alert on ClusterRoleBindings with verbs:["*"] or impersonate, escalate, bind.
- Encrypt etcd at rest with a KMS provider; never expose etcd directly to anything beyond the control plane.
- Rotate cluster CAs and service account signing keys on a defined schedule.
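Three of the remediations above as manifests; the namespace and names are placeholders for your workloads:

```yaml
# Opt a workload's identity out of API credentials entirely.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: web
  namespace: app
automountServiceAccountToken: false
---
# Enforce the restricted Pod Security profile on an application namespace;
# restricted blocks privileged, hostPath, hostNetwork, and hostPID by definition.
apiVersion: v1
kind: Namespace
metadata:
  name: app
  labels:
    pod-security.kubernetes.io/enforce: restricted
---
# Default-deny all ingress and egress in the namespace; each allowed flow
# then needs its own explicit NetworkPolicy.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: app
spec:
  podSelector: {}
  policyTypes: ["Ingress", "Egress"]
```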
A hardened cluster has tight RBAC, default-deny networking, restricted Pod Security, audit logging shipped, and runtime detection at the node. Every layer in that stack catches a different class of mistake. Skipping any of them is what makes our engagements quick.
Read more field notes, explore our services, or get in touch at info@bipi.in.