# Environment Strategy

How to handle dev, staging, and production with the same chart using Helm value overlay files.
## The Pattern

One chart, multiple value files:

```text
charts/azure-base/
├── values.yaml           # Defaults (dev-friendly, cheap)
├── values-dev.yaml       # Explicit dev overrides
├── values-staging.yaml   # Staging overrides
└── values-prod.yaml      # Production (GRS, premium KV, purge protection)
```

Deploy by selecting the right overlay:

```shell
helm install azure-base-dev ./charts/azure-base -f ./charts/azure-base/values-dev.yaml
helm install azure-base-staging ./charts/azure-base -f ./charts/azure-base/values-staging.yaml
helm install azure-base-prod ./charts/azure-base -f ./charts/azure-base/values-prod.yaml
```

> **TIP**
> Each install creates a separate Helm release with a unique name. This means you can run all three environments in the same cluster (useful for dev/testing) or in separate clusters (typical for staging/prod).
## What Changes Between Environments

### Network Isolation

Each environment uses a separate address space to prevent overlap.
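For example, the overlay files might pin the distinct address spaces from the comparison table below. The key names (`network.vnetAddressSpace`) are illustrative assumptions, not necessarily the chart's actual schema:

```yaml
# values-dev.yaml (key names assumed for illustration)
network:
  vnetAddressSpace: 10.10.0.0/16

# values-staging.yaml
network:
  vnetAddressSpace: 10.20.0.0/16

# values-prod.yaml
network:
  vnetAddressSpace: 10.30.0.0/16
```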
### Full Comparison
| Setting | Dev | Staging | Prod |
|---|---|---|---|
| Location | eastus | eastus | westeurope |
| VNet CIDR | 10.10.0.0/16 | 10.20.0.0/16 | 10.30.0.0/16 |
| Storage Replication | LRS (local) | ZRS (zone) | GRS (geo) |
| Key Vault SKU | standard | standard | premium |
| Purge Protection | off | off | on |
| Soft Delete Days | 7 | 30 | 90 |
| Blob Containers | data | data, logs | data, logs, backups |
| Public KV Access | yes | yes | no |
| KV Deployment Access | no | no | yes |
| KV Disk Encryption | no | no | yes |
| NSG Rules | HTTPS only | HTTPS + HTTP | HTTPS only |
| Subnet Service Endpoints | Storage, KeyVault | Storage, KeyVault | Storage, KeyVault, Sql |
## Why These Specific Differences

Storage replication:
- Dev uses LRS (3 copies in one datacenter) — cheapest, sufficient for throwaway data
- Staging uses ZRS (3 copies across zones) — tests zone-redundancy behavior
- Prod uses GRS (6 copies across regions) — survives full region failure
Key Vault:

- Dev/staging use `standard` — no HSM-backed keys needed
- Prod uses `premium` — supports HSM-backed keys for compliance
- Prod enables purge protection — prevents accidental permanent deletion (cannot be undone)
- Prod disables public access — only VNet-integrated services can reach it
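Pulled together, the prod Key Vault settings from the comparison table might look like the following overlay fragment. The key names are illustrative assumptions, not taken from the chart:

```yaml
# Hypothetical values-prod.yaml fragment (key names assumed)
keyVault:
  skuName: premium                   # HSM-backed keys for compliance
  purgeProtectionEnabled: true       # irreversible once enabled
  softDeleteRetentionDays: 90
  publicNetworkAccessEnabled: false  # VNet-integrated access only
  enabledForDeployment: true
  enabledForDiskEncryption: true
```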
Containers:

- Dev only needs `data` — minimal resource usage
- Staging adds `logs` — tests log pipeline integration
- Prod adds `backups` — backup retention storage
## How Helm Value Overlays Work

`values.yaml` contains the defaults. Environment files override specific values. Helm deep-merges the files: only values present in the overlay file are overridden; everything else keeps the default from `values.yaml`.

Example — `values.yaml` has:

```yaml
storage:
  accountTier: Standard
  accountReplicationType: LRS
  accountKind: StorageV2
  httpsOnly: true
  minTlsVersion: TLS1_2
  containers:
    - name: data
      accessType: private
    - name: logs
      accessType: private
```

`values-prod.yaml` overrides:

```yaml
storage:
  accountReplicationType: GRS
  containers:
    - name: data
      accessType: private
    - name: logs
      accessType: private
    - name: backups
      accessType: private
```

Result: `accountTier` stays `Standard` (from defaults), `accountReplicationType` becomes `GRS` (from prod), and `containers` is fully replaced (YAML arrays replace, they don't merge).
> **Array Replacement**
> Helm replaces arrays entirely; it does not merge them. If `values.yaml` has 2 containers and `values-prod.yaml` specifies 3 containers, you get exactly those 3 — not 5. This is why every environment file specifies the full container list.
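The merge-then-replace behavior can be sketched in a few lines of Python. This is a simplified model of Helm's coalesce logic, not its actual implementation (for instance, it ignores Helm's special handling of null values):

```python
def overlay(defaults, override):
    """Deep-merge two value trees the way Helm overlays value files:
    dicts merge key by key, but lists and scalars are replaced outright."""
    if isinstance(defaults, dict) and isinstance(override, dict):
        merged = dict(defaults)
        for key, value in override.items():
            merged[key] = overlay(defaults[key], value) if key in defaults else value
        return merged
    return override  # lists and scalars: the override wins entirely

defaults = {
    "storage": {
        "accountTier": "Standard",
        "accountReplicationType": "LRS",
        "containers": [{"name": "data"}, {"name": "logs"}],
    }
}
prod = {
    "storage": {
        "accountReplicationType": "GRS",
        "containers": [{"name": "data"}, {"name": "logs"}, {"name": "backups"}],
    }
}

result = overlay(defaults, prod)
# accountTier survives from defaults, accountReplicationType is overridden,
# and the containers list is replaced wholesale (3 entries, not 5).
```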
## Naming Strategy

The naming helper produces: `{project}-{environment}-{suffix}`
| Environment | Example Resource Group | Example VNet |
|---|---|---|
| dev | myapp-dev-rg | myapp-dev-vnet |
| staging | myapp-staging-rg | myapp-staging-vnet |
| prod | myapp-prod-rg | myapp-prod-vnet |
This means:

- Resources never collide between environments
- `kubectl get managed` shows which environment each resource belongs to
- Azure portal filtering by tag `environment=prod` shows only prod resources
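The convention can be modeled in a couple of lines. The chart's real helper is a Go template; this Python sketch just reproduces the string it yields:

```python
# Sketch of the {project}-{environment}-{suffix} naming convention.
ENVIRONMENTS = ("dev", "staging", "prod")

def resource_name(project: str, environment: str, suffix: str) -> str:
    return f"{project}-{environment}-{suffix}"

names = [resource_name("myapp", env, "rg") for env in ENVIRONMENTS]
# names == ["myapp-dev-rg", "myapp-staging-rg", "myapp-prod-rg"]
```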
## Multi-Cluster vs Single-Cluster

### Single cluster (dev/testing)

```shell
# All three environments coexist in one cluster
helm install azure-base-dev ./charts/azure-base -f values-dev.yaml
helm install azure-base-staging ./charts/azure-base -f values-staging.yaml
helm install azure-base-prod ./charts/azure-base -f values-prod.yaml
```

Works because:

- Each Helm release has a unique name
- Each resource has a unique `metadata.name` (different environment prefix)
- Resources reference each other by name within the same environment
### Multi-cluster (staging/prod)

```shell
# Staging cluster
kubectl config use-context staging-cluster
helm install azure-base ./charts/azure-base -f values-staging.yaml

# Prod cluster
kubectl config use-context prod-cluster
helm install azure-base ./charts/azure-base -f values-prod.yaml
```

In multi-cluster setups you can reuse the same Helm release name (`azure-base`), since there's no collision across clusters.
## CI/CD Integration

### Validation (no cluster needed)

```shell
# Render and validate YAML for each environment
for env in dev staging prod; do
  echo "=== Validating $env ==="
  helm template azure-base-$env ./charts/azure-base \
    -f ./charts/azure-base/values-$env.yaml > /dev/null
  echo "$env: OK"
done
```

### Deployment

```shell
# Deploy to target environment
ENVIRONMENT=${ENVIRONMENT:-dev}
helm upgrade --install azure-base-$ENVIRONMENT ./charts/azure-base \
  -f ./charts/azure-base/values-$ENVIRONMENT.yaml \
  --wait --timeout 10m
```

`helm upgrade --install` is idempotent — it installs on first run and upgrades on subsequent runs.
### Drift check

```shell
# Compare desired state (Helm) with actual state (cluster)
helm get manifest azure-base-dev | kubectl diff -f -
```

If there's no diff, the cluster state matches the Helm release. If Crossplane has auto-corrected drift in Azure, the Kubernetes MR state will still match (Crossplane updates the MR status, not the spec).
## Adding a New Environment

1. Copy an existing values file:

   ```shell
   cp charts/azure-base/values-dev.yaml charts/azure-base/values-sandbox.yaml
   ```

2. Edit the new file — change at minimum:

   - `environment: sandbox`
   - VNet address space (avoid overlap)
   - Any environment-specific settings

3. Deploy:

   ```shell
   helm install azure-base-sandbox ./charts/azure-base \
     -f ./charts/azure-base/values-sandbox.yaml
   ```

No template changes needed. The naming helper and loops handle everything.