Introduction

When auditing a Kubernetes cluster for the first time, whether as part of an internal pentesting exercise, a configuration review, or simply to understand what is running in production, most of the real problems do not show up in fancy automated scanners but in patient reading of the cluster state. Excessive permissions, privileged containers nobody remembers deploying, secrets in environment variables, TLS certificates about to expire, and hostPath volumes mounting sensitive node paths are recurring findings that anyone can detect with kubectl and a bit of method.

This article walks through, step by step, how to enumerate a Kubernetes cluster fully manually in order to identify misconfigurations and possible vulnerabilities. The idea is not to replace an audit tool, but to offer a concrete guide to the points worth focusing on and the commands that return the relevant information. The only requirement is a valid kubeconfig with reasonable read permissions over the cluster resources.

Initial cluster reconnaissance

Before getting into the details of each namespace, it pays to draw the general map. The first useful piece of data is the server and client version, because it determines which APIs are valid and which security features are available.

kubectl version
kubectl cluster-info
kubectl api-resources
kubectl api-versions

At the time of writing, any version below 1.28 is already out of the official support cycle and should raise an immediate alert. Versions at or past end of life receive few or no security patches, so it is common to find clusters vulnerable to long-known CVEs simply because nobody handled the upgrade.

Next it is worth listing the deprecated APIs that are still active. Any reference to extensions/v1beta1, policy/v1beta1 or rbac.authorization.k8s.io/v1beta1 indicates old resources that probably still work for compatibility but may break on any future upgrade.

kubectl get --raw /metrics | grep apiserver_requested_deprecated_apis

To get a sense of the cluster size:

kubectl get nodes -o wide
kubectl get ns
kubectl get all --all-namespaces

Node and control plane enumeration

Nodes say a lot about the health of the cluster. Conditions such as DiskPressure, MemoryPressure, PIDPressure or NetworkUnavailable reporting True indicate that something is wrong even before looking at workloads. It is also worth checking whether control plane nodes have the corresponding taint, because without it they may end up running user workloads and exposing the control plane to compromised containers.

kubectl get nodes -o json | jq '.items[] | {name: .metadata.name, taints: .spec.taints, conditions: .status.conditions}'
kubectl describe nodes | grep -A2 Taints

Inconsistency between kubelet versions is also a common indicator of careless maintenance. The kubelet is only supported within a narrow skew of the API server version (historically two minor versions, three in recent releases), so larger gaps can produce unsupported behavior.

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.kubeletVersion}{"\t"}{.status.nodeInfo.osImage}{"\t"}{.status.nodeInfo.kernelVersion}{"\n"}{end}'

To detect saturation, compare running pods with the maximum capacity of the node:

kubectl get nodes -o json | jq '.items[] | {name: .metadata.name, capacity: .status.capacity.pods, allocatable: .status.allocatable.pods}'
kubectl get pods --all-namespaces --field-selector spec.nodeName=<node> -o name | wc -l

API server and control plane configuration

The API server is the heart of the cluster and its configuration determines a good part of the security posture. If you have access to the pods in kube-system, you can inspect the flags it starts with:

kubectl -n kube-system get pods -l component=kube-apiserver -o yaml | grep -E '\-\-(insecure-port|anonymous-auth|authorization-mode|enable-admission-plugins|audit-log|encryption-provider)'

Several flags deserve special attention. --anonymous-auth=true allows requests without authentication; combined with weak bindings it can expose pod listings or even secrets. --insecure-port set to anything other than 0 exposes a port without TLS or authentication (the flag has been removed in recent releases, but it still shows up on old clusters). --authorization-mode=AlwaysAllow disables authorization entirely. The absence of NodeRestriction in --enable-admission-plugins allows a compromised kubelet to modify resources belonging to other nodes. The absence of --audit-log-path means there is no traceability of who did what. And the absence of --encryption-provider-config implies that secrets sit in etcd in plain text.

Testing anonymous access is trivial:

kubectl auth can-i list pods --all-namespaces --as=system:anonymous
kubectl auth can-i get secrets --all-namespaces --as=system:anonymous

Any affirmative answer is a critical finding.

For etcd, the corresponding pods can be inspected the same way looking for --client-cert-auth=true and the absence of --auto-tls. Etcd accessible without client authentication is equivalent to having the entire cluster compromised.
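
If you have that access, a minimal sketch of the same check for etcd (assuming a kubeadm-style cluster where the etcd pods carry the component=etcd label) is:

kubectl -n kube-system get pods -l component=etcd -o yaml | grep -E '\-\-(client-cert-auth|auto-tls|peer-client-cert-auth|cert-file|key-file|trusted-ca-file)'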

Service exposure and network

A very frequent mistake is accidentally exposing internal components to the internet through a Service of type LoadBalancer or NodePort. It pays to review every service for sensitive ports such as 6443 of the API server or 2379 of etcd.

kubectl get svc --all-namespaces -o wide
kubectl get svc --all-namespaces -o json | jq '.items[] | select(.spec.type=="LoadBalancer" or .spec.type=="NodePort") | {ns: .metadata.namespace, name: .metadata.name, type: .spec.type, ports: .spec.ports}'
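
As a complement, a rough filter for services whose ports touch well-known control plane ports can speed up the triage; the port list here is illustrative and should be adapted to the environment:

kubectl get svc --all-namespaces -o json | jq '.items[] | select(any(.spec.ports[]?; .port == 6443 or .port == 2379 or .port == 2380 or .port == 10250)) | {ns: .metadata.namespace, name: .metadata.name, type: .spec.type, ports: .spec.ports}'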

Ingress objects also deserve careful review. You have to check whether they enforce HTTPS, whether they set the HSTS header, and whether the controller annotations allow outdated TLS versions or weak ciphers.

kubectl get ingress --all-namespaces -o yaml | grep -E '(tls|ssl-redirect|ssl-protocols|ssl-ciphers|hsts)'

RBAC

Role-based authorization is where most of the critical findings concentrate. The first thing is to look up who has cluster-admin and check whether there are subjects that are not system accounts.

kubectl get clusterrolebindings -o json | jq '.items[] | select(.roleRef.name=="cluster-admin") | {name: .metadata.name, subjects: .subjects}'

Any ServiceAccount, user or group that does not start with system: and that has cluster-admin should be justified. Pay special attention to the system:masters group, which grants total access to the cluster bypassing authorization entirely: no new binding should reference it.
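
A quick way to confirm that, sketched with jq (on a healthy cluster this should return nothing):

kubectl get clusterrolebindings,rolebindings --all-namespaces -o json | jq '.items[] | select(.subjects[]? | .kind == "Group" and .name == "system:masters") | {kind: .kind, ns: .metadata.namespace, name: .metadata.name}'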

Next you have to look for wildcard rules, which are the most common form of excessive privileges:

kubectl get clusterroles -o json | jq '.items[] | select(.rules[]?.verbs[]? == "*" or .rules[]?.resources[]? == "*") | .metadata.name'
kubectl get roles --all-namespaces -o json | jq '.items[] | select(.rules[]?.verbs[]? == "*") | {ns: .metadata.namespace, name: .metadata.name}'

The verbs escalate, bind and impersonate are particularly dangerous because they allow privilege escalation without needing cluster-admin directly. Any role that includes them deserves a review.

kubectl get clusterroles -o json | jq '.items[] | select(.rules[]?.verbs[]? | IN("escalate","bind","impersonate")) | .metadata.name'

Another frequent pattern is broad access to secrets. Roles with get, list or watch over secrets without restriction by resourceNames allow bulk credential exfiltration.

kubectl get clusterroles,roles --all-namespaces -o json | jq '.items[] | select(.rules[]? | select(.resources[]? == "secrets" and (.verbs[]? | IN("get","list","watch","*"))))'

For each pod, check which ServiceAccount it uses and whether that account auto-mounts the token. Pods that use the default account with mounted token are a pattern that should be fixed, as they break the principle of least privilege.

kubectl get pods --all-namespaces -o json | jq '.items[] | {ns: .metadata.namespace, name: .metadata.name, sa: .spec.serviceAccountName, automount: .spec.automountServiceAccountToken}'

Pods and security contexts

Pods are where most runtime misconfigurations materialize. The list of flags to watch is fairly concrete: privileged: true, hostNetwork: true, hostPID: true, hostIPC: true, runAsUser: 0 or absence of runAsNonRoot, allowPrivilegeEscalation: true, writable root filesystem, and added capabilities such as SYS_ADMIN, NET_ADMIN, SYS_PTRACE or NET_RAW.

kubectl get pods --all-namespaces -o json | jq '.items[] | select(.spec.containers[]?.securityContext.privileged==true) | {ns: .metadata.namespace, name: .metadata.name}'

kubectl get pods --all-namespaces -o json | jq '.items[] | select(.spec.hostNetwork==true or .spec.hostPID==true or .spec.hostIPC==true) | {ns: .metadata.namespace, name: .metadata.name, hostNetwork: .spec.hostNetwork, hostPID: .spec.hostPID, hostIPC: .spec.hostIPC}'

kubectl get pods --all-namespaces -o json | jq '.items[] | select(.spec.containers[]?.securityContext.capabilities.add[]? | IN("SYS_ADMIN","NET_ADMIN","SYS_PTRACE","NET_RAW","DAC_OVERRIDE")) | {ns: .metadata.namespace, name: .metadata.name}'

A privileged container is practically equivalent to having root access on the node. Combined with hostPID and a suitable nsenter, the barrier between container and node disappears.
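
To illustrate the point rather than as something to run in production: from a shell inside a privileged container that shares the host PID namespace, a single nsenter against PID 1 is typically enough to land in the node's namespaces.

# run from inside the privileged, hostPID container
nsenter --target 1 --mount --uts --ipc --net --pid -- /bin/sh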

It is also good practice to review pods without resource limits defined, because they can exhaust node memory or CPU and affect the rest of the workloads:

kubectl get pods --all-namespaces -o json | jq '.items[] | select(.spec.containers[]? | .resources.limits == null) | {ns: .metadata.namespace, name: .metadata.name}'

Pods in CrashLoopBackOff or with ImagePullBackOff also deserve attention: sometimes they are operational issues, but sometimes they reveal references to images that no longer exist or leaked registry credentials. Keep in mind that a pod whose container is crash-looping usually still reports phase Running, so a phase-based field selector alone will not surface it.

kubectl get pods --all-namespaces --field-selector=status.phase!=Running,status.phase!=Succeeded
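
A complementary check, sketched here with jq (the list of waiting reasons is not exhaustive), inspects the container statuses directly and does catch crash-looping pods:

kubectl get pods --all-namespaces -o json | jq -r '.items[] | select(.status.containerStatuses[]?.state.waiting.reason // "" | test("CrashLoopBackOff|ImagePullBackOff|ErrImagePull")) | "\(.metadata.namespace)/\(.metadata.name)"'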

Secrets and ConfigMaps

Secrets in Kubernetes are not encrypted at rest by default, only Base64-encoded. That means anyone with read permissions on the namespace can recover them in plaintext. A first step is to see what kinds of secrets exist and where:

kubectl get secrets --all-namespaces
kubectl get secrets --all-namespaces -o json | jq '.items[] | {ns: .metadata.namespace, name: .metadata.name, type: .type, keys: (.data // {} | keys)}'
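
To illustrate how little stands between read access and the plaintext value, a one-liner is enough; the secret name, namespace and key here are hypothetical:

# db-credentials, production and password are placeholder names
kubectl get secret db-credentials -n production -o jsonpath='{.data.password}' | base64 -d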

Manually created kubernetes.io/service-account-token secrets are legacy since version 1.24 and should be replaced by the TokenRequest flow. Incomplete TLS secrets (without tls.crt or tls.key) are unusable. Secrets in the default namespace point to organizational sloppiness and tend to end up forgotten.
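
The legacy token secrets are easy to spot by filtering on the type field, which secrets support as a field selector:

kubectl get secrets --all-namespaces --field-selector type=kubernetes.io/service-account-token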

The truly delicate point is to check how secrets are exposed to containers. Injecting them as environment variables leaves them readable by anything that can inspect the process environment (crash dumps, /proc, child processes, verbose error logs), so the recommended practice is to mount them as files.

kubectl get pods --all-namespaces -o json | jq '.items[] | {ns: .metadata.namespace, name: .metadata.name, envFrom: [.spec.containers[]?.envFrom[]?.secretRef.name], envSecrets: [.spec.containers[]?.env[]? | select(.valueFrom.secretKeyRef) | .name]}'

For ConfigMaps, the main thing is that they should not contain credentials. It is not unusual to find passwords, tokens or database connection strings in a ConfigMap by oversight.

kubectl get cm --all-namespaces -o json | jq '.items[] | {ns: .metadata.namespace, name: .metadata.name, data: .data}' | grep -iE 'password|secret|token|api[_-]?key|private[_-]?key|aws_access|bearer'

The same analysis applies to environment variables defined directly in pods, without going through secrets:

kubectl get pods --all-namespaces -o json | jq '.items[] | .spec.containers[]?.env[]? | select(.value != null) | select(.name | test("(?i)password|secret|token|api[_-]?key|access[_-]?key|credentials"))'

Storage and volumes

HostPath volumes are one of the most direct vectors to escape from a container to the node. Any pod with a hostPath mounting /, /etc, /var/run/docker.sock, /var/lib/kubelet, /proc or /root can read and modify the node filesystem.

kubectl get pods --all-namespaces -o json | jq '.items[] | select(.spec.volumes[]?.hostPath) | {ns: .metadata.namespace, name: .metadata.name, hostPaths: [.spec.volumes[]? | select(.hostPath) | .hostPath.path]}'

Access to the Docker socket (/var/run/docker.sock) deserves a separate mention, because it allows launching arbitrary containers on the node and, by extension, compromising the entire cluster.
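
A narrower filter for container runtime sockets, as a sketch that only covers the two most common socket paths:

kubectl get pods --all-namespaces -o json | jq '.items[] | select(any(.spec.volumes[]?; .hostPath.path // "" | test("docker\\.sock|containerd\\.sock"))) | {ns: .metadata.namespace, name: .metadata.name}'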

For cluster-level PersistentVolumes:

kubectl get pv -o json | jq '.items[] | {name: .metadata.name, hostPath: .spec.hostPath, accessModes: .spec.accessModes, reclaimPolicy: .spec.persistentVolumeReclaimPolicy}'

The Recycle reclaim policy is deprecated and erases data without guarantees. The ReadWriteMany access mode allows several pods to write to the same volume, which may be intentional or a mistake.
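
Both conditions can be combined into a single filter for convenience:

kubectl get pv -o json | jq '.items[] | select(.spec.persistentVolumeReclaimPolicy == "Recycle" or ((.spec.accessModes // []) | index("ReadWriteMany"))) | {name: .metadata.name, reclaim: .spec.persistentVolumeReclaimPolicy, modes: .spec.accessModes}'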

Pod Security Admission

Since version 1.25, Kubernetes incorporates Pod Security Admission, which replaces the old PodSecurityPolicy. The review consists of checking which namespaces have the corresponding labels and at which level:

kubectl get ns -o json | jq '.items[] | {name: .metadata.name, labels: (.metadata.labels // {} | with_entries(select(.key | startswith("pod-security.kubernetes.io"))))}'

A namespace without any pod-security.kubernetes.io/enforce label allows any pod, including privileged ones. The recommended setting for application workloads is enforce=restricted, while baseline offers an acceptable trade-off. Finding enforce=privileged in a production namespace is a serious finding.
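
A useful trick on clusters where PSA is active is to ask the API server, via a server-side dry run, which existing pods would violate a stricter level without changing anything; the namespace name here is hypothetical:

# production is a placeholder namespace; the server returns warnings for violating pods
kubectl label --dry-run=server --overwrite ns production pod-security.kubernetes.io/enforce=restricted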

Image supply chain

Container images are code that runs with variable privileges, so it is worth knowing where they come from and whether they are pinned to a concrete version. Using the latest tag or no tag at all prevents knowing exactly what is running and opens the door to uncontrolled updates.

kubectl get pods --all-namespaces -o json | jq -r '.items[].spec.containers[]?.image' | grep -E ':latest$|^[^:@]+$'

Ideally, images should be referenced by their digest (@sha256:...), not just by tag. This guarantees real immutability.

kubectl get pods --all-namespaces -o jsonpath='{range .items[*].spec.containers[*]}{.image}{"\n"}{end}' | grep -v '@sha256:'

Images coming from private registries must have imagePullSecrets configured, either on the pod or on the associated ServiceAccount.
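
Since the pull secrets can live on the pod or on the ServiceAccount, both places have to be checked and the two outputs read together; a rough sketch:

kubectl get pods --all-namespaces -o json | jq '.items[] | select((.spec.imagePullSecrets // []) | length == 0) | {ns: .metadata.namespace, name: .metadata.name, sa: .spec.serviceAccountName}'
kubectl get sa --all-namespaces -o json | jq '.items[] | {ns: .metadata.namespace, name: .metadata.name, imagePullSecrets: .imagePullSecrets}'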

Logging and auditing

Detecting incidents requires logs. Without a collection agent like Fluent Bit, Filebeat, Promtail, Vector or Alloy deployed as a DaemonSet, logs live and die on each node.

kubectl get daemonset --all-namespaces -o json | jq '.items[] | {ns: .metadata.namespace, name: .metadata.name, image: .spec.template.spec.containers[].image}' | grep -iE 'fluent|filebeat|promtail|vector|logstash|alloy'

At the API server level, you have to verify that the audit log is enabled and has a reasonable retention policy. This is checked by reviewing the kube-apiserver pod flags as already explained.

Isolation between namespaces

Finally, it is worth checking whether there are user workloads in the default namespace, which is usually a sign of poorly organized deployments, and whether production and development namespaces coexist in the same cluster without network policies isolating them. NetworkPolicies are the tool for this:

kubectl get networkpolicies --all-namespaces
kubectl get pods -n default

A cluster without any NetworkPolicy allows free traffic between all pods, which facilitates lateral movement after an initial compromise.
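
A small shell loop (one request per namespace, so slow on large clusters) lists the namespaces with no NetworkPolicy at all:

for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
  [ -z "$(kubectl get networkpolicy -n "$ns" -o name 2>/dev/null)" ] && echo "$ns"
done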

It is also worth checking whether namespaces have minimum governance labels (environment, team, owner) that allow knowing who is responsible for each workload, and whether there are RoleBindings referencing subjects from other namespaces, which can create unexpected trust relationships between environments that were supposed to be isolated.

kubectl get rolebindings --all-namespaces -o json | jq '.items[] | .metadata.namespace as $ns | select(any(.subjects[]?; .namespace != null and .namespace != $ns)) | {ns: .metadata.namespace, name: .metadata.name, subjects: .subjects}'
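
As for the governance labels, there is no standard key, so the check is simply to dump the labels of every namespace and compare them against whatever convention the organization claims to follow:

kubectl get ns -o json | jq '.items[] | {name: .metadata.name, labels: (.metadata.labels // {})}'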

Lastly, reviewing ResourceQuota and LimitRange per namespace helps to detect environments without protection against abusive resource consumption, which can lead to denial of service between neighboring workloads.

kubectl get resourcequota --all-namespaces
kubectl get limitrange --all-namespaces

A namespace without quotas and without limit ranges trusts the good behavior of every workload that lands on it, which rarely matches reality. Combined with the absence of NetworkPolicies, that kind of namespace becomes the easiest entry point for an attacker who already has a foothold elsewhere in the cluster: nothing technical limits how far they can escalate in resource consumption or in network reach over the rest of the workloads.

Conclusion

Manual enumeration of a cluster with kubectl does not require special tools, only method and attention to the right details. The findings with the most impact tend to be the same in almost any audit: cluster-admin bindings to subjects that should not have them, privileged containers or those with sensitive hostPaths, secrets exposed in environment variables, forgotten TLS certificates, API servers with anonymous authentication active, and RBAC riddled with wildcards.

Walking through the cluster in this order (version, nodes, control plane, service exposure, RBAC, pods, secrets, storage, PSA, images and logging) covers most of the critical points and allows generating a fairly complete picture of the security posture. The important thing is to understand that each of these commands is not an end in itself: what brings value is the critical reading of the result, comparing it with what the organization claims to have deployed and with the recommended practices.

Once familiar with these patterns, enumeration stops being a several-hour task and becomes a manageable routine. And at that point, any surprise that appears will be much more likely to be a real finding than a false positive buried in noise.