Mock Exam 2 - Solutions
Spoiler Warning
Do not read these solutions until you have attempted the full mock exam under timed conditions. The learning value comes from struggling with the questions first.
Solution 1: Strict Database Network Isolation (6%)
Domain: Cluster Setup | Time Target: 8 minutes
Step 1: Create Strict DB Isolation Policy
# strict-db-isolation.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: strict-db-isolation
namespace: database
spec:
podSelector:
matchLabels:
tier: database
policyTypes:
- Egress
egress:
# Allow egress to other database pods on 3306
- to:
- podSelector:
matchLabels:
tier: database
ports:
- protocol: TCP
port: 3306
# Allow egress to monitoring namespace on 9090
- to:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
podSelector:
matchLabels:
app: monitoring
ports:
- protocol: TCP
port: 9090
# Allow DNS only to kube-system namespace
- to:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: kube-system
ports:
- protocol: TCP
port: 53
- protocol: UDP
port: 53
kubectl apply -f strict-db-isolation.yaml
Step 2: Create DB Ingress Control Policy
# db-ingress-control.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: db-ingress-control
namespace: database
spec:
podSelector:
matchLabels:
tier: database
policyTypes:
- Ingress
ingress:
# Allow from backend application tier on 3306
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: backend
podSelector:
matchLabels:
tier: application
ports:
- protocol: TCP
port: 3306
# Allow from monitoring namespace on 9090
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
podSelector:
matchLabels:
app: monitoring
ports:
- protocol: TCP
port: 9090
kubectl apply -f db-ingress-control.yaml
Verification
kubectl get networkpolicies -n database
kubectl describe networkpolicy strict-db-isolation -n database
kubectl describe networkpolicy db-ingress-control -n database
Edge Case
The DNS restriction to kube-system only is critical. Without a namespace selector, the DNS rule would allow port 53 to ANY destination, which could be used for DNS tunneling attacks. By restricting to kube-system, only the CoreDNS service is accessible.
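To spot-check the egress rules, you can run a short-lived pod that matches the policy's podSelector and confirm that cluster DNS still resolves while other egress times out. This is only a sketch; the busybox image, pod name, and timeout values are illustrative:
# The tier=database label makes the egress policy apply to this throwaway pod
kubectl run egress-check -n database --labels=tier=database \
  --image=busybox:1.36 --restart=Never --rm -it -- sh -c \
  'nslookup kubernetes.default.svc.cluster.local && (wget -T 3 -q -O- http://example.com || echo "non-DNS egress blocked")'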
Solution 2: TLS Certificate Inspection and Renewal (7%)
Domain: Cluster Setup | Time Target: 9 minutes
Step 1: SSH and Check Certificates
ssh cluster1-controlplane
# Check all certificate expirations
sudo kubeadm certs check-expiration
Step 2: Inspect API Server Certificate SANs
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 "Subject Alternative Name"
This shows all SANs, such as:
X509v3 Subject Alternative Name:
DNS:cluster1-controlplane, DNS:kubernetes, DNS:kubernetes.default,
DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local,
IP Address:10.96.0.1, IP Address:192.168.1.10
Step 3: Renew Certificates if Needed
# Renew all certificates (if any expire within 30 days)
sudo kubeadm certs renew all
# Or renew specific certificate
sudo kubeadm certs renew apiserver
Step 4: Verify etcd TLS Configuration
# Check etcd static pod manifest
sudo cat /etc/kubernetes/manifests/etcd.yaml | grep -E "cert-file|key-file|trusted-ca-file"
Expected output:
- --cert-file=/etc/kubernetes/pki/etcd/server.crt
- --key-file=/etc/kubernetes/pki/etcd/server.key
- --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
- --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
- --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
- --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
Step 5: Verify API Server etcd TLS
sudo cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep -E "etcd-certfile|etcd-keyfile|etcd-cafile"
Expected output:
- --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
- --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
TIP
After renewing certificates, the control plane components must be restarted to pick up the new files. The kubelet does not reload certificates for running static pods automatically; the usual approach is to briefly move the static pod manifests out of /etc/kubernetes/manifests and back, then wait 30-60 seconds for the API server to return.
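A minimal sketch of forcing that restart, assuming the default kubeadm manifest path:
# Temporarily remove the manifest so the kubelet stops the pod, then restore it
sudo mkdir -p /etc/kubernetes/manifests-backup
sudo mv /etc/kubernetes/manifests/kube-apiserver.yaml /etc/kubernetes/manifests-backup/
sleep 20
sudo mv /etc/kubernetes/manifests-backup/kube-apiserver.yaml /etc/kubernetes/manifests/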
Solution 3: Kubernetes Dashboard Security (5%)
Domain: Cluster Setup | Time Target: 6 minutes
Step 1: Verify Dashboard is Running
kubectl get deployments -n kubernetes-dashboard
kubectl get pods -n kubernetes-dashboard
Step 2: Remove Insecure Arguments
kubectl edit deployment kubernetes-dashboard -n kubernetes-dashboard
Remove these arguments from the container spec if present:
# REMOVE these lines:
# - --enable-skip-login
# - --enable-insecure-login
Step 3: Change Service Type
kubectl get svc -n kubernetes-dashboard
# If the service is NodePort or LoadBalancer, patch it
kubectl patch svc kubernetes-dashboard -n kubernetes-dashboard \
-p '{"spec": {"type": "ClusterIP"}}'
Step 4: Create Read-Only Access
# dashboard-viewer.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: dashboard-viewer
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: dashboard-view-only
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: dashboard-viewer-binding
subjects:
- kind: ServiceAccount
name: dashboard-viewer
namespace: kubernetes-dashboard
roleRef:
kind: ClusterRole
name: dashboard-view-only
apiGroup: rbac.authorization.k8s.io
kubectl apply -f dashboard-viewer.yaml
Verification
kubectl auth can-i delete pods --as=system:serviceaccount:kubernetes-dashboard:dashboard-viewer
# Should return "no"
kubectl auth can-i get pods --as=system:serviceaccount:kubernetes-dashboard:dashboard-viewer
# Should return "yes"
Solution 4: RBAC Audit and Remediation (7%)
Domain: Cluster Hardening | Time Target: 10 minutes
Step 1: Find and Clean cluster-admin Bindings
# List all ClusterRoleBindings with cluster-admin
kubectl get clusterrolebindings -o json | jq -r '
.items[] |
select(.roleRef.name == "cluster-admin") |
.metadata.name + " -> " +
([.subjects[]? | .kind + ":" + (.namespace // "n/a") + "/" + .name] | join(", "))
'
Delete any unauthorized bindings:
# Example: delete binding that grants cluster-admin to unauthorized subjects
kubectl delete clusterrolebinding <unauthorized-binding-name>
WARNING
Be VERY careful. Do NOT delete the cluster-admin binding for system:masters or bindings used by system components. Check each binding's subjects before deleting.
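Before removing anything, dump a suspicious binding's roleRef and subjects to confirm it really is unauthorized (a sketch; the binding name is a placeholder):
kubectl get clusterrolebinding <binding-name> \
  -o jsonpath='{.roleRef.name}{"\n"}{range .subjects[*]}{.kind} {.namespace}/{.name}{"\n"}{end}'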
Step 2: Remove Dangerous Permissions from debug-role
kubectl edit clusterrole debug-role
Remove pods/exec and pods/attach from the rules:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: debug-role
rules:
- apiGroups: [""]
resources: ["pods"] # Remove "pods/exec" and "pods/attach"
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get"]
# Remove any rule that includes pods/exec or pods/attach
Step 3: Find Dangerous Verbs
# Find roles/clusterroles with escalate, bind, or impersonate
kubectl get clusterroles -o json | jq -r '
.items[] |
select(.rules[]?.verbs[]? | IN("escalate", "bind", "impersonate")) |
"ClusterRole: " + .metadata.name
' > /tmp/dangerous-rbac.txt
kubectl get roles --all-namespaces -o json | jq -r '
.items[] |
select(.rules[]?.verbs[]? | IN("escalate", "bind", "impersonate")) |
"Role: " + .metadata.namespace + "/" + .metadata.name
' >> /tmp/dangerous-rbac.txt
Step 4: Create Time-Bound Token
kubectl create token ci-pipeline \
-n cicd \
--duration=1h > /tmp/ci-token.txt
Verification
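If you want to confirm the 1 hour lifetime, decode the token's JWT payload; this is only a sketch (base64url padding is added manually, and jq is assumed to be available):
PAYLOAD=$(cut -d. -f2 /tmp/ci-token.txt)
while [ $(( ${#PAYLOAD} % 4 )) -ne 0 ]; do PAYLOAD="${PAYLOAD}="; done
# exp - iat should be 3600 seconds
echo "$PAYLOAD" | tr '_-' '/+' | base64 -d | jq '{iat, exp}'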
# Verify debug-role no longer has exec
kubectl describe clusterrole debug-role | grep -E "exec|attach"
# Verify token file exists
cat /tmp/ci-token.txt | head -c 50
# Verify dangerous RBAC report
cat /tmp/dangerous-rbac.txt
Solution 5: API Server Hardening (6%)
Domain: Cluster Hardening | Time Target: 7 minutes
Step 1: Edit API Server Manifest
ssh cluster1-controlplane
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
Step 2: Ensure Correct Configuration
spec:
containers:
- command:
- kube-apiserver
- --anonymous-auth=false
# Remove --insecure-port if present, or set to 0
# (the --insecure-port flag was removed in Kubernetes 1.24, so on current clusters it is usually absent)
- --kubelet-certificate-authority=/etc/kubernetes/pki/ca.crt
- --enable-admission-plugins=NodeRestriction
- --authorization-mode=Node,RBAC
- --request-timeout=300s
# ... (keep all other existing flags)
WARNING
Make sure --authorization-mode does NOT include AlwaysAllow. It should only have Node,RBAC.
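A quick way to double-check the critical flags straight from the manifest while you are still on the node:
sudo grep -E "anonymous-auth|authorization-mode|enable-admission-plugins|request-timeout" \
  /etc/kubernetes/manifests/kube-apiserver.yaml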
Step 3: Verify
# Wait for API server to restart
watch 'crictl ps | grep kube-apiserver'
# Test access
kubectl get nodes
# Verify anonymous auth is disabled
curl -k https://localhost:6443/api/v1/namespaces 2>/dev/null | grep -i "forbidden\|unauthorized"
Solution 6: Seccomp and AppArmor Combined (7%)
Domain: System Hardening | Time Target: 9 minutes
Step 1: Create Seccomp Profile
ssh cluster1-node01
sudo mkdir -p /var/lib/kubelet/seccomp/profiles
sudo tee /var/lib/kubelet/seccomp/profiles/strict-net.json << 'EOF'
{
"defaultAction": "SCMP_ACT_ERRNO",
"architectures": [
"SCMP_ARCH_X86_64",
"SCMP_ARCH_X86",
"SCMP_ARCH_X32"
],
"syscalls": [
{
"names": [
"read", "write", "open", "close", "stat", "fstat", "lstat",
"poll", "lseek", "mmap", "mprotect", "munmap", "brk",
"rt_sigaction", "rt_sigprocmask", "ioctl", "access", "pipe",
"select", "sched_yield", "mremap", "msync", "mincore",
"madvise", "dup", "dup2", "pause", "nanosleep", "getpid",
"clone", "execve", "exit", "wait4", "kill", "uname", "fcntl",
"flock", "fsync", "fdatasync", "getcwd", "readlink",
"getuid", "getgid", "geteuid", "getegid", "getppid",
"getpgrp", "setsid", "arch_prctl", "exit_group", "openat",
"newfstatat", "set_tid_address", "set_robust_list", "futex",
"getrandom", "close_range", "pread64", "pwrite64",
"writev", "readv", "sigaltstack", "rt_sigreturn",
"getdents64", "clock_gettime", "clock_nanosleep",
"sysinfo", "prctl", "rseq", "mlock", "munlock",
"shmget", "shmat", "shmctl"
],
"action": "SCMP_ACT_ALLOW"
},
{
"names": [
"socket", "connect", "accept", "bind", "listen"
],
"action": "SCMP_ACT_ERRNO"
}
]
}
EOF
TIP
Note that networking syscalls are explicitly listed as SCMP_ACT_ERRNO even though the default action is also SCMP_ACT_ERRNO. This makes the intent explicit and serves as documentation.
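Before leaving the node, it is worth confirming the profile parses, because the kubelet will fail the pod at ContainerCreating time if the JSON is malformed (a sketch; assumes jq is installed on the node):
sudo jq . /var/lib/kubelet/seccomp/profiles/strict-net.json > /dev/null && echo "profile is valid JSON"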
Step 2: Load AppArmor Profile
# Check if profile is already loaded
sudo aa-status | grep k8s-deny-write
# If not loaded, load it
sudo apparmor_parser -r /etc/apparmor.d/k8s-deny-write
Step 3: Update Pods
Exit SSH and switch context:
exit
kubectl config use-context cluster1
Update secure-api with seccomp profile:
kubectl get pod secure-api -n restricted -o yaml > /tmp/secure-api.yaml
Edit /tmp/secure-api.yaml:
spec:
securityContext:
seccompProfile:
type: Localhost
localhostProfile: profiles/strict-net.json
containers:
- name: secure-api
# ... (keep existing spec)
kubectl delete pod secure-api -n restricted
kubectl apply -f /tmp/secure-api.yaml
Update secure-writer with AppArmor profile:
kubectl get pod secure-writer -n restricted -o yaml > /tmp/secure-writer.yaml
Edit /tmp/secure-writer.yaml to add the annotation:
metadata:
annotations:
container.apparmor.security.beta.kubernetes.io/secure-writer: localhost/k8s-deny-write
Or for Kubernetes v1.30+, use the native field:
spec:
containers:
- name: secure-writer
securityContext:
appArmorProfile:
type: Localhost
localhostProfile: k8s-deny-write
kubectl delete pod secure-writer -n restricted
kubectl apply -f /tmp/secure-writer.yaml
Verification
kubectl get pods -n restricted
kubectl describe pod secure-api -n restricted | grep -i seccomp
kubectl describe pod secure-writer -n restricted | grep -i apparmor
Solution 7: Node Hardening - Kernel Modules and Packages (5%)
Domain: System Hardening | Time Target: 6 minutes
Step 1: SSH to Node
ssh cluster2-controlplane
ssh cluster2-node01
Step 2: Blacklist Kernel Modules
sudo tee /etc/modprobe.d/k8s-hardening.conf << 'EOF'
blacklist dccp
blacklist sctp
blacklist rds
blacklist tipc
EOF
Step 3: Remove Unnecessary Packages
sudo apt-get purge -y tcpdump strace 2>/dev/null || true
sudo apt-get autoremove -y
Step 4: Verify
# Check kernel module blacklist
cat /etc/modprobe.d/k8s-hardening.conf
# Verify modules are not loaded (they may still be loaded until reboot)
lsmod | grep -E "dccp|sctp|rds|tipc"
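# Blacklisting only prevents future auto-loading; if any module is already
# loaded you can try to unload it now (a sketch - modprobe -r fails if the module is in use)
sudo modprobe -r dccp sctp rds tipc 2>/dev/null || true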
# Verify packages removed
dpkg -l | grep -E "tcpdump|strace" | grep "^ii" || echo "Packages removed"
Solution 8: Reduce OS Attack Surface - SUID and World-Writable (5%)
Domain: System Hardening | Time Target: 6 minutes
Step 1: SSH to Control Plane
ssh cluster1-controlplane
Step 2: Find SUID Binaries
find / -perm -4000 -type f 2>/dev/null > /tmp/suid-binaries.txt
Step 3: Find World-Writable Directories
find / -type d -perm -0002 2>/dev/null > /tmp/world-writable-dirs.txt
Step 4: Remove SUID Bit
sudo chmod u-s /usr/bin/newgrp
sudo chmod u-s /usr/bin/chfn
# Verify
ls -la /usr/bin/newgrp /usr/bin/chfn
Step 5: Verify Kubernetes Functionality
kubectl get nodes
kubectl get pods -n kube-system
Solution 9: Pod Security Standards Enforcement (7%)
Domain: Minimize Microservice Vulnerabilities | Time Target: 9 minutes
Step 1: Label the Namespace
kubectl label namespace e-commerce \
pod-security.kubernetes.io/enforce=baseline \
pod-security.kubernetes.io/warn=restricted \
pod-security.kubernetes.io/warn-version=latest \
pod-security.kubernetes.io/audit=restricted \
pod-security.kubernetes.io/audit-version=latest
Step 2: Identify Violating Pods
# Dry-run label to see what would be rejected under restricted
kubectl label --dry-run=server --overwrite ns e-commerce \
pod-security.kubernetes.io/enforce=restricted
Step 3: Fix checkout-service Deployment
kubectl edit deployment checkout-service -n e-commerce
Apply the following spec:
spec:
template:
spec:
securityContext:
runAsNonRoot: true
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
seccompProfile:
type: RuntimeDefault
containers:
- name: checkout
# ... (keep existing image)
securityContext:
# REMOVE: privileged: true
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsNonRoot: true
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
seccompProfile:
type: RuntimeDefault
volumeMounts:
# ... (keep existing mounts)
- name: tmp-vol
mountPath: /tmp
- name: run-vol
mountPath: /var/run
volumes:
# ... (keep existing volumes)
- name: tmp-vol
emptyDir: {}
- name: run-vol
emptyDir: {}
Verification
kubectl rollout status deployment/checkout-service -n e-commerce
kubectl get pods -n e-commerce
Solution 10: RuntimeClass and Container Sandboxing (6%)
Domain: Minimize Microservice Vulnerabilities | Time Target: 7 minutes
Step 1: Create gVisor RuntimeClass
# gvisor-runtimeclass.yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
name: gvisor
handler: runsc
kubectl apply -f gvisor-runtimeclass.yaml
Step 2: Update Deployment
kubectl edit deployment payment-processor -n sandbox
Add runtimeClassName under spec.template.spec:
spec:
template:
spec:
runtimeClassName: gvisor
containers:
# ... (keep existing spec)
Step 3: Verify
kubectl rollout status deployment/payment-processor -n sandbox
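# Optional sanity check that the pods really run under gVisor: runsc provides its
# own kernel, so dmesg inside the container reports gVisor (a sketch - assumes the
# pods carry an app=payment-processor label and the image ships a dmesg binary)
POD=$(kubectl get pods -n sandbox -l app=payment-processor -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n sandbox "$POD" -- dmesg | head -3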
kubectl get pods -n sandbox -o jsonpath='{.items[*].spec.runtimeClassName}'
Step 4: Create Kata RuntimeClass
# kata-runtimeclass.yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
name: kata
handler: kata-runtime
scheduling:
nodeSelector:
runtime: kata
kubectl apply -f kata-runtimeclass.yaml
kubectl get runtimeclass kata -o yaml > /tmp/kata-runtimeclass.yaml
Solution 11: Secrets Management Overhaul (7%)
Domain: Minimize Microservice Vulnerabilities | Time Target: 10 minutes
Step 1: Verify Existing Secret
kubectl get secret api-keys -n banking
kubectl get secret api-keys -n banking -o jsonpath='{.data.stripe-key}' | base64 -d
Step 2: Create New Secret
kubectl create secret generic api-keys-v2 \
-n banking \
--from-literal=stripe-key=sk_live_abc123
Step 3: Update payment-gateway Pod
Export, edit, and recreate:
kubectl get pod payment-gateway -n banking -o yaml > /tmp/payment-gateway.yaml
Edit /tmp/payment-gateway.yaml:
apiVersion: v1
kind: Pod
metadata:
name: payment-gateway
namespace: banking
spec:
securityContext:
runAsUser: 1000
runAsGroup: 1000
containers:
- name: payment-gateway
# ... (keep existing image and command)
# REMOVE any env entries referencing api-keys:
# env:
# - name: STRIPE_KEY
# valueFrom:
# secretKeyRef:
# name: api-keys # REMOVE THIS
# key: stripe-key
securityContext:
runAsUser: 1000
runAsGroup: 1000
volumeMounts:
- name: secrets-vol
mountPath: /etc/secrets
readOnly: true
volumes:
- name: secrets-vol
secret:
secretName: api-keys-v2
defaultMode: 0400
kubectl delete pod payment-gateway -n banking
kubectl apply -f /tmp/payment-gateway.yaml
Step 4: Delete Old Secret
# Confirm new pod is running
kubectl get pod payment-gateway -n banking
# Verify secret is mounted
kubectl exec payment-gateway -n banking -- ls -la /etc/secrets/
# Delete old secret
kubectl delete secret api-keys -n banking
Step 5: Check Encryption at Rest
ssh cluster1-controlplane
sudo cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep encryption-provider-config
Solution 12: OPA Gatekeeper - Allowed Registries (6%)
Domain: Minimize Microservice Vulnerabilities | Time Target: 8 minutes
Step 1: Create ConstraintTemplate
# allowed-repos-template.yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
name: k8sallowedrepos
spec:
crd:
spec:
names:
kind: K8sAllowedRepos
validation:
openAPIV3Schema:
type: object
properties:
repos:
type: array
items:
type: string
targets:
- target: admission.k8s.gatekeeper.sh
rego: |
package k8sallowedrepos
violation[{"msg": msg}] {
container := input.review.object.spec.containers[_]
not startswith_any(container.image, input.parameters.repos)
msg := sprintf("Container '%v' uses disallowed image '%v'. Allowed repos: %v", [container.name, container.image, input.parameters.repos])
}
violation[{"msg": msg}] {
container := input.review.object.spec.initContainers[_]
not startswith_any(container.image, input.parameters.repos)
msg := sprintf("Init container '%v' uses disallowed image '%v'. Allowed repos: %v", [container.name, container.image, input.parameters.repos])
}
startswith_any(str, prefixes) {
prefix := prefixes[_]
startswith(str, prefix)
}
kubectl apply -f allowed-repos-template.yaml
# Wait a few seconds for template to be processed
sleep 5
Step 2: Create Constraint
# allowed-repos-constraint.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRepos
metadata:
name: allowed-repos
spec:
match:
kinds:
- apiGroups: [""]
kinds: ["Pod"]
excludedNamespaces:
- kube-system
- gatekeeper-system
parameters:
repos:
- "docker.io/library/"
- "gcr.io/company-project/"
- "registry.internal.company.com/"
kubectl apply -f allowed-repos-constraint.yaml
Step 3: Verify Rejection
# This should be REJECTED
kubectl run test-bad --image=quay.io/malicious/image -n default 2>&1
# This should be ACCEPTED
kubectl run test-good --image=docker.io/library/nginx:1.25 -n default
# Clean up
kubectl delete pod test-good -n default --ignore-not-found
Solution 13: Image Vulnerability Scanning and Remediation (7%)
Domain: Supply Chain Security | Time Target: 9 minutes
Step 1: Scan Images
trivy image --severity CRITICAL python:3.8-slim > /tmp/python-scan.txt
trivy image --severity CRITICAL node:16-alpine > /tmp/node-scan.txt
trivy image --severity CRITICAL postgres:13 > /tmp/postgres-scan.txt
Step 2: Compare and Choose Best Image
# Scan the options
trivy image --severity CRITICAL python:3.11-slim 2>&1 | tail -5
trivy image --severity CRITICAL python:3.12-alpine 2>&1 | tail -5
trivy image --severity CRITICAL cgr.dev/chainguard/python:latest 2>&1 | tail -5
Choose the image with the fewest CRITICAL vulnerabilities (typically cgr.dev/chainguard/python:latest or python:3.12-alpine).
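If you prefer an exact count over eyeballing the summary tables, Trivy's JSON output can be tallied with jq. This is a sketch; it assumes the .Results[].Vulnerabilities layout of current Trivy releases:
for img in python:3.11-slim python:3.12-alpine cgr.dev/chainguard/python:latest; do
  echo -n "$img: "
  trivy image --quiet --severity CRITICAL --format json "$img" \
    | jq '[.Results[]?.Vulnerabilities[]?] | length'
done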
Step 3: Update Deployment
# Replace with the chosen image
kubectl set image deployment/analytics-api \
analytics-api=python:3.12-alpine -n production
Step 4: Scan Fixed Image and Run kubesec
# Scan the chosen image
trivy image --severity CRITICAL python:3.12-alpine > /tmp/analytics-scan-fixed.txt
# Export deployment manifest and run kubesec
kubectl get deployment analytics-api -n production -o yaml > /tmp/analytics-api.yaml
kubesec scan /tmp/analytics-api.yaml > /tmp/kubesec-analytics.txt
Verification
kubectl rollout status deployment/analytics-api -n production
Solution 14: Static Analysis with kubesec (5%)
Domain: Supply Chain Security | Time Target: 6 minutes
Step 1: Export Pod Spec
kubectl get pod data-pipeline -n etl -o yaml > /tmp/data-pipeline.yaml
Step 2: Initial kubesec Scan
kubesec scan /tmp/data-pipeline.yaml > /tmp/kubesec-data-pipeline.txt
Step 3: Create Hardened Manifest
Edit /tmp/data-pipeline.yaml and save as /tmp/data-pipeline-hardened.yaml:
apiVersion: v1
kind: Pod
metadata:
name: data-pipeline
namespace: etl
spec:
serviceAccountName: data-pipeline-sa # Not default
securityContext:
runAsNonRoot: true
runAsUser: 10001
runAsGroup: 10001
containers:
- name: data-pipeline
image: python:3.12-alpine # keep original or use safer image
command: ["python", "-c", "import time; time.sleep(3600)"]
securityContext:
readOnlyRootFilesystem: true
allowPrivilegeEscalation: false
runAsNonRoot: true
runAsUser: 10001
capabilities:
drop:
- ALL
resources:
limits:
cpu: "500m"
memory: "256Mi"
requests:
cpu: "100m"
memory: "128Mi"
volumeMounts:
- name: tmp
mountPath: /tmp
volumes:
- name: tmp
emptyDir: {}
TIP
Create the ServiceAccount first if it does not exist:
kubectl create serviceaccount data-pipeline-sa -n etl
Step 4: Verify Improved Score
kubesec scan /tmp/data-pipeline-hardened.yaml > /tmp/kubesec-data-pipeline-hardened.txt
cat /tmp/kubesec-data-pipeline-hardened.txt | jq '.[0].score'
# Should show a score >= 8
Solution 15: ValidatingWebhookConfiguration (4%)
Domain: Supply Chain Security | Time Target: 5 minutes
Step 1: Inspect Current Webhook
kubectl get validatingwebhookconfiguration image-registry-validator -o yaml
Step 2: Edit the Webhook
kubectl edit validatingwebhookconfiguration image-registry-validator
Apply these changes:
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
name: image-registry-validator
webhooks:
- name: validate-image-registry.example.com
failurePolicy: Fail # Changed from Ignore
rules:
- apiGroups: [""]
apiVersions: ["v1"]
operations: ["CREATE"]
resources: ["pods"]
namespaceSelector:
matchExpressions:
- key: kubernetes.io/metadata.name
operator: NotIn
values:
- kube-system
# ... (keep existing clientConfig, admissionReviewVersions, sideEffects)
Verification
kubectl describe validatingwebhookconfiguration image-registry-validator
Solution 16: Audit Log Investigation and Falco (5%)
Domain: Monitoring, Logging & Runtime Security | Time Target: 8 minutes
Step 1: Examine Audit Logs
ssh cluster1-controlplane
# Find secret access events in last 100 lines
tail -100 /var/log/kubernetes/audit/audit.log | \
jq -r 'select(.objectRef.resource == "secrets" and (.verb == "get" or .verb == "list" or .verb == "watch"))' \
> /tmp/secret-access-audit.txt
Step 2: Find Non-System Secret Access
tail -100 /var/log/kubernetes/audit/audit.log | \
jq -r 'select(
.objectRef.resource == "secrets" and
.objectRef.namespace == "kube-system" and
(.verb == "get" or .verb == "list" or .verb == "watch") and
(.user.username | startswith("system:") | not)
)' > /tmp/suspicious-secret-access.txt
Step 3: Check Falco Alerts
# Check Falco logs
sudo journalctl -u falco --no-pager --since "1 hour ago" | \
grep -E "reverse shell|/etc/|network connection" > /tmp/falco-findings.txt
# Or check the Falco log file directly
sudo cat /var/log/falco/falco.log | \
grep -E "reverse shell|etc modified|Unexpected outbound" >> /tmp/falco-findings.txt
Step 4: Delete Compromised Pods and Apply NetworkPolicy
exit # Exit SSH
kubectl config use-context cluster1
# Identify compromised pods from Falco logs
# (use container names from /tmp/falco-findings.txt)
kubectl get pods -n <compromised-namespace>
kubectl delete pod <compromised-pod> -n <compromised-namespace>
# Apply data exfiltration prevention policy
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: prevent-data-exfiltration
namespace: <compromised-namespace>
spec:
podSelector: {}
policyTypes:
- Egress
egress:
- ports:
- protocol: TCP
port: 53
- protocol: UDP
port: 53
EOF
Solution 17: Immutable Containers and Detection (5%)
Domain: Monitoring, Logging & Runtime Security | Time Target: 7 minutes
Step 1: Make web-frontend Immutable
kubectl edit deployment web-frontend -n public
spec:
template:
spec:
containers:
- name: nginx
securityContext:
readOnlyRootFilesystem: true
allowPrivilegeEscalation: false
volumeMounts:
- name: tmp
mountPath: /tmp
- name: nginx-cache
mountPath: /var/cache/nginx
- name: nginx-run
mountPath: /var/run
volumes:
- name: tmp
emptyDir: {}
- name: nginx-cache
emptyDir: {}
- name: nginx-run
emptyDir: {}
Step 2: Configure logging-agent DaemonSet
kubectl edit daemonset logging-agent -n monitoring
spec:
template:
spec:
containers:
- name: logging-agent
securityContext:
readOnlyRootFilesystem: true
volumeMounts:
- name: tmp
mountPath: /tmp
- name: var-log
mountPath: /var/log
volumes:
- name: tmp
emptyDir: {}
- name: var-log
hostPath:
path: /var/log
type: Directory
Step 3: Create Detection Script
cat > /tmp/find-mutable-containers.sh << 'SCRIPT'
#!/bin/bash
echo "NAMESPACE | POD | CONTAINER | readOnlyRootFilesystem"
echo "----------|-----|-----------|----------------------"
kubectl get pods --all-namespaces -o json | jq -r '
.items[] |
.metadata.namespace as $ns |
.metadata.name as $pod |
.spec.containers[] |
select(
.securityContext.readOnlyRootFilesystem != true
) |
"\($ns) | \($pod) | \(.name) | \(.securityContext.readOnlyRootFilesystem // "not set")"
'
SCRIPT
chmod +x /tmp/find-mutable-containers.sh
Step 4: Run and Save Output
/tmp/find-mutable-containers.sh > /tmp/mutable-containers.txt
cat /tmp/mutable-containers.txt
Verification
kubectl rollout status deployment/web-frontend -n public
kubectl rollout status daemonset/logging-agent -n monitoring
Final Score Calculation
Add up the weights for all questions you answered completely and correctly:
| Score Range | Result |
|---|---|
| 67-100% | PASS -- You are exam-ready |
| 55-66% | CLOSE -- Minor gaps, review and reattempt in a few days |
| 40-54% | NEEDS WORK -- Revisit domain materials |
| Below 40% | NOT READY -- Complete the study guide before reattempting |
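For example, full credit on Solutions 1 through 12 alone adds up to 6+7+5+7+6+7+5+5+7+6+7+6 = 74%, which already clears the pass threshold before any Supply Chain Security or Runtime Security questions are counted.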
Next Steps
If you passed both mock exams:
- Review the Exam Tips & Cheatsheets section
- Schedule your CKS exam within 1-2 weeks
- Do a final review of cheatsheets the day before the exam
- Get a good night's sleep before exam day