Solutions: Cluster Setup and Hardening
How to Use These Solutions
- Attempt the question first without looking at the solution
- If stuck, read only the first step for a hint
- After completing your attempt, compare with the full solution
- Pay attention to the Why explanations -- they deepen your understanding
- Practice the Verification steps to build confidence
Solution 1 -- Default Deny Network Policy
Difficulty: Easy
Steps
# Create the payments namespace if it doesn't exist
kubectl create namespace payments --dry-run=client -o yaml | kubectl apply -f -

Apply the default deny policy:
# default-deny-all.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-all
namespace: payments
spec:
podSelector: {}
policyTypes:
- Ingress
- Egress

kubectl apply -f default-deny-all.yaml

Why
- `podSelector: {}` (an empty selector) matches all pods in the namespace
- Listing both `Ingress` and `Egress` in `policyTypes` without providing any allow rules means all traffic in both directions is denied
- This is the foundational step before adding allow rules (a quick smoke test is sketched below)
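If you also want to confirm the behaviour rather than just the object's existence, a smoke test along these lines works; the pod names and the nginx/busybox images are only illustrative:

# Start two throwaway pods in the namespace
kubectl run np-server --image=nginx -n payments
kubectl run np-client --image=busybox -n payments -- sleep 3600
kubectl wait --for=condition=ready pod/np-server pod/np-client -n payments --timeout=60s

# With default-deny-all in place, this should time out (BLOCKED)
SERVER_IP=$(kubectl get pod np-server -n payments -o jsonpath='{.status.podIP}')
kubectl exec -n payments np-client -- wget -T 3 -q http://$SERVER_IP -O /dev/null || echo "BLOCKED as expected"

# Clean up
kubectl delete pod np-server np-client -n payments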
Verification
# Verify the policy exists
kubectl get networkpolicies -n payments
# NAME POD-SELECTOR AGE
# default-deny-all <none> 5s
# Describe for details
kubectl describe networkpolicy default-deny-all -n payments

Solution 2 -- Allow Specific Pod Communication
Difficulty: Medium
Steps
# Create the namespace
kubectl create namespace webapp --dry-run=client -o yaml | kubectl apply -f -

Policy 1: Default deny ingress
# default-deny-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-ingress
namespace: webapp
spec:
podSelector: {}
policyTypes:
- Ingress

Policy 2: Allow frontend to API
# allow-frontend-to-api.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-frontend-to-api
namespace: webapp
spec:
podSelector:
matchLabels:
role: api
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
role: frontend
ports:
- protocol: TCP
port: 8080

kubectl apply -f default-deny-ingress.yaml
kubectl apply -f allow-frontend-to-api.yaml

Why
- The first policy creates a baseline deny-all for ingress traffic
- The second policy specifically targets pods with `role=api` and only allows traffic from pods labeled `role=frontend` on TCP port 8080
- The `podSelector` in the spec selects which pods the policy applies to (the target)
- The `from` section specifies which pods are allowed to send traffic
Verification
# Create test pods
kubectl run frontend --image=nginx -n webapp -l role=frontend
kubectl run api --image=nginx -n webapp -l role=api
kubectl run other --image=nginx -n webapp -l role=other
# Wait for pods to be ready
kubectl wait --for=condition=ready pod/frontend pod/api pod/other -n webapp --timeout=60s
# Test: frontend -> api on port 8080 (should work if api listens on 8080)
# Use the pod IP -- bare pod names are not resolvable in cluster DNS
API_IP=$(kubectl get pod api -n webapp -o jsonpath='{.status.podIP}')
kubectl exec -n webapp frontend -- curl -s --max-time 3 http://$API_IP:8080 || echo "Connection attempt completed"
# Test: other -> api on port 8080 (should be BLOCKED)
kubectl exec -n webapp other -- curl -s --max-time 3 http://$API_IP:8080 || echo "BLOCKED as expected"

Solution 3 -- Cross-Namespace Network Policy
Difficulty: Medium
Steps
# Create namespaces
kubectl create namespace monitoring --dry-run=client -o yaml | kubectl apply -f -
kubectl create namespace production --dry-run=client -o yaml | kubectl apply -f -

# allow-prometheus-scrape.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-prometheus-scrape
namespace: production
spec:
podSelector: {}
policyTypes:
- Ingress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
ports:
- protocol: TCP
port: 9090

kubectl apply -f allow-prometheus-scrape.yaml

Why
- The policy is created in the `production` namespace because that is where the target pods live
- `namespaceSelector` with `kubernetes.io/metadata.name: monitoring` selects the monitoring namespace using the automatic label added by Kubernetes 1.22+
- This allows any pod in the monitoring namespace to reach production pods on port 9090
- We do not need to label the namespace manually because `kubernetes.io/metadata.name` is set automatically
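If you want to narrow this further to only the scraper pods (rather than every pod in monitoring), combine the selectors in a single `from` entry: a `namespaceSelector` and `podSelector` in the same list item are ANDed, while separate list items are ORed. A sketch; the `app: prometheus` label is an assumption about how the Prometheus pods are labeled:

ingress:
- from:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: monitoring
    podSelector:
      matchLabels:
        app: prometheus
  ports:
  - protocol: TCP
    port: 9090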
Verification
# Verify the automatic namespace label exists
kubectl get namespace monitoring --show-labels | grep kubernetes.io/metadata.name
# Verify the policy
kubectl describe networkpolicy allow-prometheus-scrape -n production

Solution 4 -- Egress Network Policy with DNS
Difficulty: Medium
Steps
kubectl create namespace restricted --dry-run=client -o yaml | kubectl apply -f -

Policy 1: Default deny egress
# default-deny-egress.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-egress
namespace: restricted
spec:
podSelector: {}
policyTypes:
- Egress

Policy 2: Allow DNS and internal communication
# allow-dns-and-internal.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-dns-and-internal
namespace: restricted
spec:
podSelector: {}
policyTypes:
- Egress
egress:
# Allow DNS
- ports:
- protocol: UDP
port: 53
- protocol: TCP
port: 53
# Allow communication within the same namespace
- to:
- podSelector: {}

kubectl apply -f default-deny-egress.yaml
kubectl apply -f allow-dns-and-internal.yaml

Why
- The first policy blocks all outgoing traffic from all pods in the namespace
- The second policy adds two egress exceptions:
- DNS traffic (port 53 UDP/TCP) to any destination, necessary for service name resolution
- Traffic to any pod within the same namespace (`podSelector: {}` without a `namespaceSelector` only matches pods in the policy's own namespace)
- Policies are additive: the effective rules are the union of all policies selecting a pod
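The DNS rule above allows port 53 to any destination. If you want to scope it to the cluster DNS only, add selectors to that egress entry. A sketch, assuming CoreDNS runs in kube-system with the usual `k8s-app: kube-dns` label:

egress:
- to:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: kube-system
    podSelector:
      matchLabels:
        k8s-app: kube-dns
  ports:
  - protocol: UDP
    port: 53
  - protocol: TCP
    port: 53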
Verification
# Create test pods
kubectl run test1 --image=busybox -n restricted -- sleep 3600
kubectl run test2 --image=busybox -n restricted -- sleep 3600
kubectl wait --for=condition=ready pod/test1 pod/test2 -n restricted --timeout=60s
# Test DNS (should work)
kubectl exec -n restricted test1 -- nslookup kubernetes.default
# Test internal communication (should work; use the pod IP -- bare pod names do not resolve in DNS)
TEST2_IP=$(kubectl get pod test2 -n restricted -o jsonpath='{.status.podIP}')
kubectl exec -n restricted test1 -- ping -c 1 -W 2 $TEST2_IP
# Test external communication (should be blocked)
kubectl exec -n restricted test1 -- wget -T 3 -q http://google.com -O /dev/null || echo "BLOCKED as expected"

Solution 5 -- Fix CIS Benchmark Failures (API Server)
Difficulty: Hard
Steps
# First, back up the current manifest
sudo cp /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/kube-apiserver.yaml.bak
# Edit the API server manifest
sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml

Make the following changes in the command section:
spec:
containers:
- command:
- kube-apiserver
# Fix 1: Disable anonymous authentication
- --anonymous-auth=false
# Fix 2: Disable profiling
- --profiling=false
# Fix 3 & 4: Remove AlwaysAdmit, add NodeRestriction
- --enable-admission-plugins=NodeRestriction
# Ensure AlwaysAdmit is NOT in the list
# ... keep all other existing flags

More specifically:
- Find `--anonymous-auth=true` and change it to `--anonymous-auth=false` (or add the flag if missing)
- Add `--profiling=false` (or change it from `true`)
- Find `--enable-admission-plugins=` and:
  - Remove `AlwaysAdmit` from the list
  - Add `NodeRestriction` if not present
- If `--disable-admission-plugins` contains `NodeRestriction`, remove it from there
Why
- Anonymous auth: When enabled, unauthenticated requests are treated as `system:anonymous`, which can be exploited
- Profiling: Exposes detailed performance data that could reveal architecture information
- AlwaysAdmit: Admits all requests bypassing admission control checks
- NodeRestriction: Limits what kubelets can modify -- without it, a compromised kubelet could modify any object
Verification
# Wait for API server to restart (30-60 seconds)
sleep 45
# Verify the API server is running
kubectl get nodes
# Check the flags are applied
ps aux | grep kube-apiserver | tr ' ' '\n' | grep -E "anonymous|profiling|admission"
# Expected:
# --anonymous-auth=false
# --profiling=false
# --enable-admission-plugins=NodeRestriction,...
# Test anonymous access (should fail with 401)
curl -k https://localhost:6443/api/v1/pods

If API Server Does Not Restart
# Check for errors
sudo crictl logs $(sudo crictl ps -a --name kube-apiserver -q | head -1)
# Or check kubelet logs
sudo journalctl -u kubelet --since "2 minutes ago" | tail -30
# If broken, restore backup
sudo cp /tmp/kube-apiserver.yaml.bak /etc/kubernetes/manifests/kube-apiserver.yaml

Solution 6 -- Fix Kubelet Security Configuration
Difficulty: Medium
Steps
# Back up current config
sudo cp /var/lib/kubelet/config.yaml /tmp/kubelet-config.yaml.bak
# Edit kubelet config
sudo vim /var/lib/kubelet/config.yaml

Ensure these settings are present:
authentication:
anonymous:
enabled: false # Fix 1: Disable anonymous auth
webhook:
enabled: true
cacheTTL: 0s
authorization:
mode: Webhook # Fix 2: Change from AlwaysAllow to Webhook
readOnlyPort: 0 # Fix 3: Disable read-only port

# Restart kubelet
sudo systemctl restart kubelet
# Verify kubelet is running
sudo systemctl status kubelet

Why
- Anonymous auth: Allows unauthenticated access to the kubelet API, which can expose pod information and exec capabilities
- AlwaysAllow authorization: Means any request to the kubelet is authorized, removing all access controls
- Read-only port (10255): Exposes kubelet metrics and pod information without authentication
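You can also confirm what the running kubelet actually loaded by reading its configz endpoint through the API server. A sketch, assuming the node is named node01 (substitute your node name):

# Show the live kubelet configuration and pick out the hardened fields
kubectl get --raw "/api/v1/nodes/node01/proxy/configz" | python3 -m json.tool | grep -E '"anonymous"|"enabled"|"mode"|readOnlyPort'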
Verification
# Verify kubelet is running
sudo systemctl status kubelet
# Check kubelet configuration
ps aux | grep kubelet
# Test that read-only port is disabled
curl -s http://localhost:10255/pods
# Should fail with "connection refused"
# Verify node is ready
kubectl get nodes

Solution 7 -- Configure Encryption at Rest
Difficulty: Hard
Steps
# Step 1: Generate encryption key
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
# Step 2: Create encryption config directory
sudo mkdir -p /etc/kubernetes/enc
# Step 3: Create the EncryptionConfiguration
sudo tee /etc/kubernetes/enc/encryption-config.yaml > /dev/null <<EOF
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
- secrets
providers:
- aescbc:
keys:
- name: key1
secret: ${ENCRYPTION_KEY}
- identity: {}
EOF
# Step 4: Set proper permissions
sudo chmod 600 /etc/kubernetes/enc/encryption-config.yaml
sudo chown root:root /etc/kubernetes/enc/encryption-config.yaml
# Step 5: Back up API server manifest
sudo cp /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/kube-apiserver.yaml.bak
# Step 6: Edit API server manifest
sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml

Add to the API server manifest:
spec:
containers:
- command:
- kube-apiserver
- --encryption-provider-config=/etc/kubernetes/enc/encryption-config.yaml
# ... keep all other existing flags
volumeMounts:
# Add this volume mount (keep existing mounts)
- name: enc-config
mountPath: /etc/kubernetes/enc
readOnly: true
volumes:
# Add this volume (keep existing volumes)
- name: enc-config
hostPath:
path: /etc/kubernetes/enc
type: DirectoryOrCreate

# Step 7: Wait for API server to restart
sleep 45
kubectl get nodes
# Step 8: Create a test secret
kubectl create secret generic test-secret -n default \
--from-literal=password=supersecretpassword
# Step 9: Verify encryption in etcd
ETCDCTL_API=3 etcdctl \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
get /registry/secrets/default/test-secret | hexdump -C | head -20

Why
- aescbc provider: Uses AES-256 in CBC mode for strong encryption
- identity provider as fallback: Allows reading old unencrypted data during migration
- Provider order: The first provider (aescbc) is used for writing; all providers are tried for reading
- Volume mount: Required because the API server pod needs to access the encryption config file from the host
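Key rotation later uses the same mechanics -- a sketch, with illustrative key names and placeholder values: first add the new key as a second entry and restart the API server so it can read both, then promote the new key to the first position (as shown) and restart again so it is used for writes, then rewrite all secrets and finally drop the old key:

providers:
- aescbc:
    keys:
    - name: key2                 # new key, first position = used for writes
      secret: <new-base64-key>   # placeholder
    - name: key1                 # old key, kept temporarily so existing data stays readable
      secret: <old-base64-key>   # placeholder
- identity: {}

# Rewrite every secret so it is re-encrypted with the new key
kubectl get secrets --all-namespaces -o json | kubectl replace -f -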
Verification
# The hexdump output should contain "k8s:enc:aescbc:v1:key1:" prefix
# instead of plain text. If you see "supersecretpassword" in plain text,
# encryption is NOT working.
# Re-encrypt all existing secrets
kubectl get secrets --all-namespaces -o json | kubectl replace -f -

Solution 8 -- Check and Rotate Certificates
Difficulty: Medium
Steps
# Step 1: Check certificate expiration with kubeadm
sudo kubeadm certs check-expiration
# Step 2: Inspect API server certificate SANs
sudo openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | \
grep -A1 "Subject Alternative Name"
# Also check the expiry date
sudo openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -enddate
# Step 3: Renew the API server certificate
sudo kubeadm certs renew apiserver
# Step 4: Restart the API server to use the new certificate
sudo crictl pods --name kube-apiserver -q | xargs sudo crictl rmp
# Or wait for kubelet to restart it automatically
# Wait for restart
sleep 30

Why
- `kubeadm certs check-expiration` provides a quick overview of all certificate expiry dates
- `openssl x509 -text` gives detailed certificate information including SANs, issuer, and validity period
- SANs must include all hostnames and IPs through which the API server is accessed
- After renewal, the API server must be restarted to pick up the new certificate
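A useful extra check is to compare what the API server is actually serving with the renewed file on disk -- a sketch using openssl s_client against the local endpoint:

# Inspect the certificate presented on 6443 (the dates should match the renewed file)
echo | openssl s_client -connect 127.0.0.1:6443 2>/dev/null | openssl x509 -noout -subject -enddate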
Verification
# Verify the new certificate has an updated expiry
sudo openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -dates
# Verify the API server is running with the new cert
kubectl get nodes
# Re-check expiration
sudo kubeadm certs check-expiration | grep apiserver

Solution 9 -- Create RBAC for Developer User
Difficulty: Medium
Steps
# Create the namespace
kubectl create namespace development --dry-run=client -o yaml | kubectl apply -f -

Create the Role:
# developer-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: developer
namespace: development
rules:
# Pods: get, list, watch, create, delete
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "watch", "create", "delete"]
# Services and Deployments: get, list, watch
- apiGroups: [""]
resources: ["services"]
verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
resources: ["deployments"]
verbs: ["get", "list", "watch"]
# Pod logs: get
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get"]
# Explicitly NO access to:
# - secrets
# - pods/exec

Create the RoleBinding:
# sarah-developer-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: sarah-developer
namespace: development
subjects:
- kind: User
name: sarah
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role
name: developer
apiGroup: rbac.authorization.k8s.io

kubectl apply -f developer-role.yaml
kubectl apply -f sarah-developer-binding.yaml

Why
- Pod logs require a separate resource: `pods/log` with the `get` verb
- By not including `secrets` in the resources list, Sarah cannot access secrets
- By not including `pods/exec` in the resources list, Sarah cannot exec into pods
- Using a Role (not ClusterRole) + RoleBinding restricts access to only the `development` namespace
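On the exam you can save time by creating the binding imperatively; the Role itself needs several distinct rules, so writing its YAML is still the easier route. A sketch of the equivalent binding command:

kubectl create rolebinding sarah-developer --role=developer --user=sarah -n development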
Verification
# Check Sarah's permissions
kubectl auth can-i get pods -n development --as=sarah
# yes
kubectl auth can-i create pods -n development --as=sarah
# yes
kubectl auth can-i get pods/log -n development --as=sarah
# yes
kubectl auth can-i get secrets -n development --as=sarah
# no
kubectl auth can-i create pods/exec -n development --as=sarah
# no
kubectl auth can-i get pods -n production --as=sarah
# no (different namespace)
# List all permissions
kubectl auth can-i --list -n development --as=sarah

Solution 10 -- Audit Dangerous RBAC Permissions
Difficulty: Hard
Steps
# Step 1: Find all ClusterRoleBindings referencing cluster-admin
kubectl get clusterrolebindings -o json | \
jq -r '.items[] | select(.roleRef.name == "cluster-admin") | .metadata.name + " -> " + (.subjects // [] | map(.name) | join(", "))'
# Step 2: Check if intern has cluster-admin
kubectl get clusterrolebindings -o json | \
jq -r '.items[] | select(.roleRef.name == "cluster-admin") | select(.subjects[]? | .name == "intern") | .metadata.name'
# Step 3: Delete the binding if found
# Assuming it is found (e.g., named "intern-admin" or "legacy-binding")
kubectl delete clusterrolebinding <name-found-in-step-2>

Step 4: Create the read-only ClusterRole:
# read-only-all.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: read-only-all
rules:
- apiGroups: [""]
resources: ["pods", "services", "configmaps", "namespaces", "nodes",
"persistentvolumes", "persistentvolumeclaims", "events",
"endpoints", "serviceaccounts"]
verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
resources: ["deployments", "daemonsets", "statefulsets", "replicasets"]
verbs: ["get", "list", "watch"]
- apiGroups: ["networking.k8s.io"]
resources: ["networkpolicies", "ingresses"]
verbs: ["get", "list", "watch"]
- apiGroups: ["rbac.authorization.k8s.io"]
resources: ["roles", "rolebindings", "clusterroles", "clusterrolebindings"]
verbs: ["get", "list", "watch"]
- apiGroups: ["batch"]
resources: ["jobs", "cronjobs"]
verbs: ["get", "list", "watch"]
# NOTE: secrets are deliberately excluded

Step 5: Bind to intern:
# intern-readonly-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: intern-readonly
subjects:
- kind: User
name: intern
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: read-only-all
apiGroup: rbac.authorization.k8s.io

kubectl apply -f read-only-all.yaml
kubectl apply -f intern-readonly-binding.yaml

Why
- ClusterRoleBindings with `cluster-admin` grant full control over the entire cluster
- Interns should never have cluster-admin access -- this violates least privilege
- The new role explicitly excludes `secrets` to prevent access to sensitive data
- Using specific resource lists instead of wildcards ensures precise control
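Beyond cluster-admin bindings, it is worth sweeping for roles that grant wildcard verbs. A sketch, assuming jq is available on the exam host (expect some built-in system roles in the output):

# ClusterRoles whose rules include the wildcard verb
kubectl get clusterroles -o json | \
  jq -r '.items[] | select(any(.rules[]?; .verbs | index("*"))) | .metadata.name'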
Verification
# Verify intern no longer has cluster-admin
kubectl auth can-i '*' '*' --as=intern
# no
# Verify read-only access works
kubectl auth can-i get pods --as=intern
# yes
kubectl auth can-i delete pods --as=intern
# no
kubectl auth can-i get secrets --as=intern
# no
kubectl auth can-i --list --as=intern | head -20

Solution 11 -- Secure Service Account
Difficulty: Medium
Steps
kubectl create namespace production --dry-run=client -o yaml | kubectl apply -f -

Step 1: Create the ServiceAccount:
# web-app-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: web-app-sa
namespace: production
automountServiceAccountToken: false

kubectl apply -f web-app-sa.yaml

Step 2: Update the deployment:
# If the deployment exists, patch it
kubectl patch deployment web-app -n production \
--type='json' \
-p='[
{"op": "add", "path": "/spec/template/spec/serviceAccountName", "value": "web-app-sa"},
{"op": "add", "path": "/spec/template/spec/automountServiceAccountToken", "value": false}
]'
# OR edit it directly
kubectl edit deployment web-app -n production

Add to the pod spec:
spec:
template:
spec:
serviceAccountName: web-app-sa
automountServiceAccountToken: false

Step 3: Verify no token is mounted:
# Check the pod spec
kubectl get pods -n production -l app=web-app -o jsonpath='{.items[0].spec.automountServiceAccountToken}'
# Expected: false
# Exec into the pod and check
kubectl exec -n production $(kubectl get pods -n production -l app=web-app -o name | head -1) -- \
ls /var/run/secrets/kubernetes.io/serviceaccount/ 2>&1
# Expected: No such file or directory

Why
- The default service account in every namespace has a token that provides API access
- Most application pods do not need to interact with the Kubernetes API
- Disabling token automounting removes a credential that could be exploited if the pod is compromised
- Setting it at both the ServiceAccount and Pod level provides defense in depth
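If a pod genuinely does need API access, a common pattern is to keep automounting off and mount a short-lived token explicitly through a projected volume. A minimal sketch of the volume definition (the expiry value is illustrative, and you still need a matching volumeMount in the container at whatever path your client reads):

volumes:
- name: api-token
  projected:
    sources:
    - serviceAccountToken:
        path: token
        expirationSeconds: 3600   # illustrative; tokens are rotated by the kubelet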
Verification
# Verify the SA was created
kubectl get sa web-app-sa -n production -o yaml
# Verify no token volume is mounted
kubectl get pod -n production -l app=web-app -o jsonpath='{.items[0].spec.volumes}' | python3 -m json.tool
# Should NOT contain a volume named "kube-api-access-*" (or it should be absent)

Solution 12 -- Configure Audit Logging
Difficulty: Hard
Steps
Step 1: Create the audit policy:
sudo mkdir -p /etc/kubernetes/audit

# /etc/kubernetes/audit/audit-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Rule 1: Log nothing for endpoints and services
- level: None
resources:
- group: ""
resources: ["endpoints", "services"]
# Rule 2: Log at Metadata level for secrets and configmaps
- level: Metadata
resources:
- group: ""
resources: ["secrets", "configmaps"]
# Rule 3: Log at RequestResponse level for pods
- level: RequestResponse
resources:
- group: ""
resources: ["pods"]
# Rule 4: Log everything else at Request level
- level: Request

sudo tee /etc/kubernetes/audit/audit-policy.yaml > /dev/null <<'EOF'
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: None
resources:
- group: ""
resources: ["endpoints", "services"]
- level: Metadata
resources:
- group: ""
resources: ["secrets", "configmaps"]
- level: RequestResponse
resources:
- group: ""
resources: ["pods"]
- level: Request
EOF

Step 2: Create log directory:

sudo mkdir -p /var/log/kubernetes/audit

Step 3: Edit API server manifest:
sudo cp /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/kube-apiserver.yaml.bak
sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml

Add these flags to the command:
spec:
containers:
- command:
- kube-apiserver
- --audit-policy-file=/etc/kubernetes/audit/audit-policy.yaml
- --audit-log-path=/var/log/kubernetes/audit/audit.log
- --audit-log-maxage=30
- --audit-log-maxbackup=10
- --audit-log-maxsize=100
# ... keep all other existing flags
volumeMounts:
# Add these (keep existing mounts)
- name: audit-policy
mountPath: /etc/kubernetes/audit
readOnly: true
- name: audit-log
mountPath: /var/log/kubernetes/audit
volumes:
# Add these (keep existing volumes)
- name: audit-policy
hostPath:
path: /etc/kubernetes/audit
type: DirectoryOrCreate
- name: audit-log
hostPath:
path: /var/log/kubernetes/audit
type: DirectoryOrCreate

Why
- Rule ordering matters: Rules are evaluated in order, and the first match wins
- None for endpoints/services: These generate very high volume and low-security-value events
- Metadata for secrets: We log that secrets were accessed (who, when) but NOT the secret content
- RequestResponse for pods: Full request and response bodies for pod operations for forensic analysis
- Request for everything else: Logs the request body for all other operations
- Volume mounts are essential: Without them, the API server cannot access the policy file or write logs
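Once events are flowing, a quick way to answer "who touched secrets?" is to filter the log. A sketch, assuming jq is installed on the control plane node:

# Username, verb and object for every audited secrets request
sudo cat /var/log/kubernetes/audit/audit.log | \
  jq -r 'select(.objectRef.resource=="secrets") | [.user.username, .verb, (.objectRef.namespace // "") + "/" + (.objectRef.name // "")] | @tsv'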
Verification
# Wait for API server to restart
sleep 45
kubectl get nodes
# Generate some audit events
kubectl create namespace audit-test
kubectl create secret generic audit-secret -n audit-test --from-literal=key=value
kubectl get pods -A
# Check audit logs
sudo tail -20 /var/log/kubernetes/audit/audit.log
# Check that the log file exists and has content
sudo ls -la /var/log/kubernetes/audit/audit.log
# Parse a specific audit event
# Each line of the log is one JSON event, so pretty-print a single line
sudo head -1 /var/log/kubernetes/audit/audit.log | python3 -m json.tool

Solution 13 -- Create TLS Ingress
Difficulty: Medium
Steps
kubectl create namespace production --dry-run=client -o yaml | kubectl apply -f -
# Step 1: Generate self-signed TLS certificate
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout /tmp/webapp-tls.key \
-out /tmp/webapp-tls.crt \
-subj "/CN=webapp.example.com/O=MyOrg"
# Step 2: Create TLS secret
kubectl create secret tls webapp-tls \
--cert=/tmp/webapp-tls.crt \
--key=/tmp/webapp-tls.key \
-n production

Step 3: Create the Ingress:
# webapp-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: webapp-ingress
namespace: production
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
ingressClassName: nginx
tls:
- hosts:
- webapp.example.com
secretName: webapp-tls
rules:
- host: webapp.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: webapp-service
port:
number: 80

kubectl apply -f webapp-ingress.yaml

Why
- Self-signed certificates are acceptable for the exam; production would use Let's Encrypt or a real CA
- The TLS secret must be in the same namespace as the Ingress
- `ssl-redirect: "true"` forces all HTTP traffic to HTTPS
- `ingressClassName: nginx` selects the nginx ingress controller
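To exercise the TLS path end to end, you can point curl at the ingress controller while faking DNS for the host. A sketch; INGRESS_IP is a hypothetical placeholder for whatever external or node IP your nginx controller listens on, and -k is needed because the certificate is self-signed:

INGRESS_IP=<ingress-controller-ip>   # placeholder
curl -k --resolve webapp.example.com:443:$INGRESS_IP https://webapp.example.com/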
Verification
# Verify the secret
kubectl get secret webapp-tls -n production
kubectl get secret webapp-tls -n production -o jsonpath='{.data.tls\.crt}' | \
base64 -d | openssl x509 -noout -subject -dates
# Verify the ingress
kubectl describe ingress webapp-ingress -n production
# Check TLS is configured
kubectl get ingress webapp-ingress -n production -o jsonpath='{.spec.tls}'

Solution 14 -- Fix etcd TLS Configuration
Difficulty: Medium
Steps
# Step 1: Back up and inspect the etcd manifest
sudo cp /etc/kubernetes/manifests/etcd.yaml /tmp/etcd.yaml.bak
sudo cat /etc/kubernetes/manifests/etcd.yaml | grep -E "client-cert-auth|peer-client-cert-auth"
# Step 2: Edit the etcd manifest
sudo vim /etc/kubernetes/manifests/etcd.yaml

Ensure these flags are present and set to true:
spec:
containers:
- command:
- etcd
- --client-cert-auth=true # Fix: was false
- --peer-client-cert-auth=true # Fix: was false
# ... keep all other existing flags

# Step 3: Wait for etcd to restart
sleep 30
# Step 4: Verify etcd health
ETCDCTL_API=3 etcdctl \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
endpoint health

Why
- `--client-cert-auth=true` requires clients (like the API server) to present a valid certificate signed by the etcd CA
- `--peer-client-cert-auth=true` requires etcd peers in a cluster to present valid certificates for communication
- Without client cert auth, any client that can reach etcd can read/write all cluster data
- This is a critical CIS benchmark check
Verification
# Verify etcd is healthy
ETCDCTL_API=3 etcdctl \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
endpoint health
# Verify flags are applied
ps aux | grep etcd | tr ' ' '\n' | grep client-cert-auth
# Verify that connection WITHOUT certs fails
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 endpoint health 2>&1
# Should fail
# Verify API server still works
kubectl get nodes

Solution 15 -- Combined Network Policy Challenge
Difficulty: Hard
Steps
kubectl create namespace microservices --dry-run=client -o yaml | kubectl apply -f -

Policy 1: Default Deny All
# 01-default-deny.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-all
namespace: microservices
spec:
podSelector: {}
policyTypes:
- Ingress
- Egress

Policy 2: Allow DNS for All
# 02-allow-dns.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-dns
namespace: microservices
spec:
podSelector: {}
policyTypes:
- Egress
egress:
- ports:
- protocol: UDP
port: 53
- protocol: TCP
port: 53

Policy 3: Frontend Policy
# 03-frontend-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: frontend-policy
namespace: microservices
spec:
podSelector:
matchLabels:
tier: frontend
policyTypes:
- Ingress
- Egress
ingress:
# Receive traffic from anywhere on port 443
- ports:
- protocol: TCP
port: 443
egress:
# Send traffic to backend pods on port 8080
- to:
- podSelector:
matchLabels:
tier: backend
ports:
- protocol: TCP
port: 8080

Policy 4: Backend Policy
# 04-backend-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: backend-policy
namespace: microservices
spec:
podSelector:
matchLabels:
tier: backend
policyTypes:
- Ingress
- Egress
ingress:
# Only receive from frontend pods on port 8080
- from:
- podSelector:
matchLabels:
tier: frontend
ports:
- protocol: TCP
port: 8080
egress:
# Only send to database pods on port 5432
- to:
- podSelector:
matchLabels:
tier: database
ports:
- protocol: TCP
port: 5432

Policy 5: Database Policy
# 05-database-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: database-policy
namespace: microservices
spec:
podSelector:
matchLabels:
tier: database
policyTypes:
- Ingress
# Egress is covered by default-deny-all -- no egress rules needed
# since we want NO outbound connections from database
ingress:
# Only receive from backend pods on port 5432
- from:
- podSelector:
matchLabels:
tier: backend
ports:
- protocol: TCP
port: 5432

kubectl apply -f 01-default-deny.yaml
kubectl apply -f 02-allow-dns.yaml
kubectl apply -f 03-frontend-policy.yaml
kubectl apply -f 04-backend-policy.yaml
kubectl apply -f 05-database-policy.yaml

Why
- Default deny first: Establishes the zero-trust baseline -- nothing is allowed unless explicitly permitted
- DNS for all: Without DNS, pods cannot resolve service names, breaking service discovery
- Frontend: Accepts from anywhere (external clients) but can only talk to backend
- Backend: Only accepts from frontend and only talks to database -- prevents direct external access
- Database: Only accepts from backend and cannot initiate outbound connections -- most restrictive tier
- Policies are additive: The DNS allow policy adds to the default deny, giving all pods DNS egress
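To exercise the chain end to end, you can create one test pod per tier and probe the allowed and denied paths. A sketch; the pod names and nginx image are only for testing, and since nothing actually listens on the application ports, an allowed path shows up as a fast connection error while a blocked path hits the curl timeout:

kubectl run frontend --image=nginx -n microservices -l tier=frontend
kubectl run backend --image=nginx -n microservices -l tier=backend
kubectl run database --image=nginx -n microservices -l tier=database
kubectl wait --for=condition=ready pod/frontend pod/backend pod/database -n microservices --timeout=60s

BACKEND_IP=$(kubectl get pod backend -n microservices -o jsonpath='{.status.podIP}')
DB_IP=$(kubectl get pod database -n microservices -o jsonpath='{.status.podIP}')

# Allowed by policy: frontend -> backend on 8080 (fails fast because nginx listens on 80)
kubectl exec -n microservices frontend -- curl -s --max-time 5 http://$BACKEND_IP:8080 || echo "done"
# Blocked by policy: frontend -> database on 5432 (should run into the timeout)
kubectl exec -n microservices frontend -- curl -s --max-time 5 http://$DB_IP:5432 || echo "done"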
Verification
# Verify all policies
kubectl get networkpolicies -n microservices
# Should show 5 policies
# Describe each to verify rules
kubectl describe networkpolicy frontend-policy -n microservices
kubectl describe networkpolicy backend-policy -n microservices
kubectl describe networkpolicy database-policy -n microservices

Solution 16 -- Secure API Server Authorization
Difficulty: Easy
Steps
# Back up manifest
sudo cp /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/kube-apiserver.yaml.bak
# Edit the manifest
sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml

Find the line:

- --authorization-mode=AlwaysAllow

Change it to:

- --authorization-mode=Node,RBAC

# Wait for restart
sleep 45
# Verify
kubectl get nodes

Why
- `AlwaysAllow` means any authenticated user can perform any action -- this is equivalent to giving everyone cluster-admin
- `Node` authorization allows kubelets to access the resources they need to function
- `RBAC` enforces role-based access control, requiring explicit permission grants
- The order `Node,RBAC` ensures Node authorization is checked first (for kubelet requests), then RBAC for all other requests
Verification
# Verify the flag was changed
ps aux | grep kube-apiserver | grep authorization-mode
# Expected: --authorization-mode=Node,RBAC
# Verify API server is functioning
kubectl get pods -A
# Verify RBAC is enforced (test as unauthorized user)
kubectl auth can-i create pods --as=random-user
# Expected: no

Solution 17 -- Backup etcd Securely
Difficulty: Medium
Steps
# Step 1: Take the snapshot
ETCDCTL_API=3 etcdctl \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
snapshot save /tmp/etcd-backup-$(date +%Y%m%d).db
# Step 2: Verify the snapshot
ETCDCTL_API=3 etcdctl snapshot status /tmp/etcd-backup-$(date +%Y%m%d).db --write-table
# Step 3: Secure the backup file
sudo chmod 600 /tmp/etcd-backup-$(date +%Y%m%d).db
sudo chown root:root /tmp/etcd-backup-$(date +%Y%m%d).db

Why
- TLS certificates are required to connect to etcd -- without them, the snapshot command fails
- `--cacert` verifies the etcd server's certificate; `--cert` and `--key` authenticate the client to etcd
- Setting permissions to `600` ensures only root can read the backup (it contains all cluster secrets)
- The backup contains every object in the cluster; unless encryption at rest is enabled, secrets appear in it in plain text, so secure storage and strict permissions are critical
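The matching restore operation is worth having ready even though this task only asks for a backup -- a sketch that writes the snapshot into a fresh data directory; to actually use it you would then point the etcd static pod's hostPath volume at that directory:

ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcd-backup-$(date +%Y%m%d).db \
  --data-dir /var/lib/etcd-restore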
Verification
# Verify the file exists and has correct permissions
ls -la /tmp/etcd-backup-$(date +%Y%m%d).db
# Expected: -rw------- 1 root root ... etcd-backup-YYYYMMDD.db
# Verify snapshot integrity
ETCDCTL_API=3 etcdctl snapshot status /tmp/etcd-backup-$(date +%Y%m%d).db --write-table
# Should show: hash, revision, total keys, total size

Solution 18 -- Certificate Signing Request
Difficulty: Medium
Steps
# Step 1: Generate private key
openssl genrsa -out /tmp/alex.key 2048
# Step 2: Create CSR
openssl req -new -key /tmp/alex.key -out /tmp/alex.csr \
-subj "/CN=alex/O=development"
# Step 3: Submit to Kubernetes
CSR_CONTENT=$(cat /tmp/alex.csr | base64 | tr -d '\n')
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
name: alex
spec:
request: ${CSR_CONTENT}
signerName: kubernetes.io/kube-apiserver-client
usages:
- client auth
EOF
# Step 4: Approve the CSR
kubectl certificate approve alex
# Step 5: Retrieve the signed certificate
kubectl get csr alex -o jsonpath='{.status.certificate}' | base64 -d > /tmp/alex.crt

Why
- The CN (Common Name) in the CSR becomes the Kubernetes username (`alex`)
- The O (Organization) becomes the Kubernetes group (`development`)
- `signerName: kubernetes.io/kube-apiserver-client` tells Kubernetes to sign with the cluster CA for client authentication
- `usages: [client auth]` specifies this certificate will be used for client authentication (not server auth)
- The Kubernetes CSR API provides a centralized way to manage certificate requests
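The signed certificate only authenticates alex; it grants no permissions by itself. You still need RBAC -- for example, binding the developer Role from Solution 9 (assuming it exists in your cluster):

kubectl create rolebinding alex-developer --role=developer --user=alex -n development
kubectl auth can-i list pods -n development --as=alex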
Verification
# Verify the CSR is approved
kubectl get csr alex
# NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION
# alex 1m kubernetes.io/kube-apiserver-client kubernetes-admin <none> Approved,Issued
# Verify the certificate
openssl x509 -in /tmp/alex.crt -noout -subject -issuer
# Subject: CN=alex, O=development
# Issuer: CN=kubernetes
# Optionally set up kubeconfig for alex
kubectl config set-credentials alex \
--client-certificate=/tmp/alex.crt \
--client-key=/tmp/alex.key
kubectl config set-context alex-context \
--cluster=kind-kind \
--user=alex

Solution 19 -- Comprehensive Cluster Hardening
Difficulty: Hard
Steps
Steps 1-5: API Server Hardening
sudo cp /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/kube-apiserver.yaml.bak
sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml

Ensure these flags are set:
spec:
containers:
- command:
- kube-apiserver
- --anonymous-auth=false
- --authorization-mode=Node,RBAC
- --enable-admission-plugins=NodeRestriction
- --profiling=false
- --encryption-provider-config=/etc/kubernetes/enc/encryption-config.yaml
# ... other existing flags

For encryption, create the config (if not already present):
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
sudo mkdir -p /etc/kubernetes/enc
sudo tee /etc/kubernetes/enc/encryption-config.yaml > /dev/null <<EOF
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
- secrets
providers:
- aescbc:
keys:
- name: key1
secret: ${ENCRYPTION_KEY}
- identity: {}
EOF
sudo chmod 600 /etc/kubernetes/enc/encryption-config.yaml

Add the volume mount for the encryption config in the API server manifest (if not already present):
volumeMounts:
- name: enc-config
mountPath: /etc/kubernetes/enc
readOnly: true
volumes:
- name: enc-config
hostPath:
path: /etc/kubernetes/enc
type: DirectoryOrCreate

Step 6: Default Deny in kube-system
# kube-system-default-deny.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-ingress
namespace: kube-system
spec:
podSelector: {}
policyTypes:
- Ingress

kubectl apply -f kube-system-default-deny.yaml

Step 7: Disable automounting for default SA in kube-system
kubectl patch serviceaccount default -n kube-system \
-p '{"automountServiceAccountToken": false}'Why
Each step addresses a specific security concern:
- Anonymous auth: Prevents unauthenticated access
- Authorization mode: Enforces RBAC instead of allowing everything
- NodeRestriction: Prevents kubelet from modifying objects it should not
- Profiling: Removes a potential information disclosure endpoint
- Encryption at rest: Protects secrets stored in etcd
- kube-system NetworkPolicy: Protects critical system components from unauthorized network access
- SA token automounting: Prevents pods from getting unnecessary API credentials
Verification
# Wait for API server
sleep 45
# Verify API server flags
ps aux | grep kube-apiserver | tr ' ' '\n' | sort | grep -E "anonymous|authorization|admission|profiling|encryption"
# Verify NetworkPolicy
kubectl get networkpolicies -n kube-system
# Verify SA
kubectl get sa default -n kube-system -o yaml | grep automount
# automountServiceAccountToken: false
# Verify the cluster is healthy
kubectl get nodes
kubectl get pods -A

Solution 20 -- Investigate and Fix Security Issues
Difficulty: Hard
Steps
Step 1: Delete the dangerous ClusterRoleBinding
# Find and verify the binding
kubectl get clusterrolebinding legacy-binding -o yaml
# Delete it
kubectl delete clusterrolebinding legacy-binding

Step 2: Default deny in staging
# staging-default-deny.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-all
namespace: staging
spec:
podSelector: {}
policyTypes:
- Ingress
- Egress

kubectl create namespace staging --dry-run=client -o yaml | kubectl apply -f -
kubectl apply -f staging-default-deny.yamlStep 3: Fix the too-permissive role
# First, check the current role
kubectl get role too-permissive -n staging -o yaml

Replace the role:
# fixed-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: too-permissive
namespace: staging
rules:
- apiGroups: [""]
resources: ["pods", "services"]
verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
resources: ["deployments"]
verbs: ["get", "list", "watch"]kubectl apply -f fixed-role.yamlStep 4: Disable insecure port
sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml

Add or ensure this flag exists (only on clusters older than 1.24):

- --insecure-port=0

INFO
Note: The --insecure-port flag was removed in Kubernetes 1.24, where the insecure port is permanently disabled. On older versions, explicitly setting it to 0 is important. On 1.24+ the flag no longer exists, and passing an unknown flag prevents the API server from starting, so omit it there.
Why
- Legacy binding: Giving the `default:default` service account cluster-admin means ANY pod using the default SA in the default namespace has full cluster access
- No network policies: Without policies, all pods can communicate freely, enabling lateral movement
- Wildcard permissions: A role with `*` verbs on all resources is functionally equivalent to admin access within that namespace
- Insecure port: If enabled, provides unauthenticated, unencrypted access to the API server
Verification
# Step 1: Verify binding is deleted
kubectl get clusterrolebinding legacy-binding 2>&1
# Expected: Error from server (NotFound)
# Step 2: Verify network policy
kubectl get networkpolicies -n staging
# NAME POD-SELECTOR AGE
# default-deny-all <none> 10s
# Step 3: Verify role is fixed
kubectl describe role too-permissive -n staging
# Should show only get, list, watch on pods, services, deployments
# No wildcards
# Step 4: Verify insecure port is disabled
ps aux | grep kube-apiserver | tr ' ' '\n' | grep insecure-port
# --insecure-port=0
# Overall cluster health
kubectl get nodes
kubectl get pods -A

General Exam Tips
- Always back up before editing manifests: `sudo cp file file.bak`
- Wait patiently after editing static pod manifests -- they take 30-60 seconds to restart
- Check logs if things break: `sudo crictl logs <container-id>` or `journalctl -u kubelet`
- Use `kubectl auth can-i` to verify RBAC changes
- Use `kubectl describe` to verify NetworkPolicy rules
- Read the question carefully -- do exactly what is asked, nothing more
- Practice writing YAML from scratch -- the exam provides access to Kubernetes docs but not external resources
- Use imperative commands when possible for speed: `kubectl create role ...`, `kubectl create rolebinding ...`, `kubectl create secret tls ...`, `kubectl create namespace ...` (examples below)
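For example (the names, namespace, and file paths here are only illustrative):

kubectl create namespace demo
kubectl create role viewer --verb=get,list,watch --resource=pods -n demo
kubectl create rolebinding viewer-binding --role=viewer --user=sarah -n demo
kubectl create secret tls demo-tls --cert=/tmp/tls.crt --key=/tmp/tls.key -n demo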