Mock Exam 2 - Solutions

Spoiler Warning

Do not read these solutions until you have attempted the full mock exam under timed conditions. The learning value comes from struggling with the questions first.


Solution 1: Strict Database Network Isolation (6%)

Domain: Cluster Setup | Time Target: 8 minutes

Step 1: Create Strict DB Isolation Policy

yaml
# strict-db-isolation.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: strict-db-isolation
  namespace: database
spec:
  podSelector:
    matchLabels:
      tier: database
  policyTypes:
  - Egress
  egress:
  # Allow egress to other database pods on 3306
  - to:
    - podSelector:
        matchLabels:
          tier: database
    ports:
    - protocol: TCP
      port: 3306
  # Allow egress to monitoring namespace on 9090
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: monitoring
      podSelector:
        matchLabels:
          app: monitoring
    ports:
    - protocol: TCP
      port: 9090
  # Allow DNS only to kube-system namespace
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    ports:
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53
bash
kubectl apply -f strict-db-isolation.yaml

Step 2: Create DB Ingress Control Policy

yaml
# db-ingress-control.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-ingress-control
  namespace: database
spec:
  podSelector:
    matchLabels:
      tier: database
  policyTypes:
  - Ingress
  ingress:
  # Allow from backend application tier on 3306
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: backend
      podSelector:
        matchLabels:
          tier: application
    ports:
    - protocol: TCP
      port: 3306
  # Allow from monitoring namespace on 9090
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: monitoring
      podSelector:
        matchLabels:
          app: monitoring
    ports:
    - protocol: TCP
      port: 9090
bash
kubectl apply -f db-ingress-control.yaml

Verification

bash
kubectl get networkpolicies -n database
kubectl describe networkpolicy strict-db-isolation -n database
kubectl describe networkpolicy db-ingress-control -n database

Edge Case

Restricting DNS egress to kube-system is critical. Without a namespace selector, the DNS rule would allow port 53 traffic to ANY destination, which could be abused for DNS tunneling and data exfiltration. With the restriction in place, only pods in kube-system (in practice, CoreDNS) are reachable on port 53.
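
A quick functional check from inside a database pod (a sketch, assuming the pods carry the tier=database label and the image ships nslookup, nc, and timeout; adjust to the tools actually present):

bash
# Pick one database pod
DB_POD=$(kubectl get pods -n database -l tier=database -o jsonpath='{.items[0].metadata.name}')

# DNS via CoreDNS in kube-system should still resolve
kubectl exec -n database "$DB_POD" -- nslookup kubernetes.default.svc.cluster.local

# Arbitrary external egress should now fail or time out
kubectl exec -n database "$DB_POD" -- timeout 3 nc -zv 1.1.1.1 443 || echo "egress blocked as expected"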


Solution 2: TLS Certificate Inspection and Renewal (7%)

Domain: Cluster Setup | Time Target: 9 minutes

Step 1: SSH and Check Certificates

bash
ssh cluster1-controlplane

# Check all certificate expirations
sudo kubeadm certs check-expiration

Step 2: Inspect API Server Certificate SANs

bash
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 "Subject Alternative Name"

This shows all SANs, such as:

X509v3 Subject Alternative Name:
    DNS:cluster1-controlplane, DNS:kubernetes, DNS:kubernetes.default,
    DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local,
    IP Address:10.96.0.1, IP Address:192.168.1.10

Step 3: Renew Certificates if Needed

bash
# Renew all certificates (if any expire within 30 days)
sudo kubeadm certs renew all

# Or renew specific certificate
sudo kubeadm certs renew apiserver
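
A quick way to confirm the renewal took effect is to compare the certificate's new expiry date (run on the control plane node):

bash
sudo openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -enddate
sudo kubeadm certs check-expiration | grep -i apiserver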

Step 4: Verify etcd TLS Configuration

bash
# Check etcd static pod manifest
sudo cat /etc/kubernetes/manifests/etcd.yaml | grep -E "cert-file|key-file|trusted-ca-file"

Expected output:

- --cert-file=/etc/kubernetes/pki/etcd/server.crt
- --key-file=/etc/kubernetes/pki/etcd/server.key
- --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
- --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
- --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
- --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt

Step 5: Verify API Server etcd TLS

bash
sudo cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep -E "etcd-certfile|etcd-keyfile|etcd-cafile"

Expected output:

- --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
- --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key

TIP

After renewing certificates, the control plane components must be restarted to pick up the new files. The kubelet does not restart static pods just because a certificate file changed, so restart them manually, for example by temporarily moving the manifests out of /etc/kubernetes/manifests/ and back (wait about 20 seconds in between), or by stopping the containers with crictl so the kubelet recreates them. Allow 30-60 seconds for the API server to come back.
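
Once the API server pod is back, a quick check (run on the control plane) that it is serving the renewed certificate:

bash
# The notAfter date should match the renewed apiserver.crt
echo | openssl s_client -connect 127.0.0.1:6443 2>/dev/null | openssl x509 -noout -enddate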


Solution 3: Kubernetes Dashboard Security (5%)

Domain: Cluster Setup | Time Target: 6 minutes

Step 1: Verify Dashboard is Running

bash
kubectl get deployments -n kubernetes-dashboard
kubectl get pods -n kubernetes-dashboard

Step 2: Remove Insecure Arguments

bash
kubectl edit deployment kubernetes-dashboard -n kubernetes-dashboard

Remove these arguments from the container spec if present:

yaml
# REMOVE these lines:
# - --enable-skip-login
# - --enable-insecure-login

Step 3: Change Service Type

bash
kubectl get svc -n kubernetes-dashboard

# If the service is NodePort or LoadBalancer, patch it
kubectl patch svc kubernetes-dashboard -n kubernetes-dashboard \
  -p '{"spec": {"type": "ClusterIP"}}'

Step 4: Create Read-Only Access

yaml
# dashboard-viewer.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-viewer
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: dashboard-view-only
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-viewer-binding
subjects:
- kind: ServiceAccount
  name: dashboard-viewer
  namespace: kubernetes-dashboard
roleRef:
  kind: ClusterRole
  name: dashboard-view-only
  apiGroup: rbac.authorization.k8s.io
bash
kubectl apply -f dashboard-viewer.yaml

Verification

bash
kubectl auth can-i delete pods --as=system:serviceaccount:kubernetes-dashboard:dashboard-viewer
# Should return "no"

kubectl auth can-i get pods --as=system:serviceaccount:kubernetes-dashboard:dashboard-viewer
# Should return "yes"

Solution 4: RBAC Audit and Remediation (7%)

Domain: Cluster Hardening | Time Target: 10 minutes

Step 1: Find and Clean cluster-admin Bindings

bash
# List all ClusterRoleBindings with cluster-admin
kubectl get clusterrolebindings -o json | jq -r '
  .items[] |
  select(.roleRef.name == "cluster-admin") |
  .metadata.name + " -> " +
  ([.subjects[]? | .kind + ":" + (.namespace // "n/a") + "/" + .name] | join(", "))
'

Delete any unauthorized bindings:

bash
# Example: delete binding that grants cluster-admin to unauthorized subjects
kubectl delete clusterrolebinding <unauthorized-binding-name>

WARNING

Be VERY careful. Do NOT delete the cluster-admin bindings used by system:masters or by system components; removing them can lock you out of the cluster. Check each binding's subjects before deleting.
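
A targeted way to inspect a single binding's role and subjects before deciding (the binding name is a placeholder):

bash
kubectl get clusterrolebinding <binding-name> -o jsonpath='{.roleRef.name}{" -> "}{.subjects}{"\n"}'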

Step 2: Remove Dangerous Permissions from debug-role

bash
kubectl edit clusterrole debug-role

Remove pods/exec and pods/attach from the rules:

yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: debug-role
rules:
- apiGroups: [""]
  resources: ["pods"]   # Remove "pods/exec" and "pods/attach"
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get"]
# Remove any rule that includes pods/exec or pods/attach

Step 3: Find Dangerous Verbs

bash
# Find roles/clusterroles with escalate, bind, or impersonate
kubectl get clusterroles -o json | jq -r '
  .items[] |
  select(.rules[]?.verbs[]? | IN("escalate", "bind", "impersonate")) |
  "ClusterRole: " + .metadata.name
' > /tmp/dangerous-rbac.txt

kubectl get roles --all-namespaces -o json | jq -r '
  .items[] |
  select(.rules[]?.verbs[]? | IN("escalate", "bind", "impersonate")) |
  "Role: " + .metadata.namespace + "/" + .metadata.name
' >> /tmp/dangerous-rbac.txt

Step 4: Create Time-Bound Token

bash
kubectl create token ci-pipeline \
  -n cicd \
  --duration=1h > /tmp/ci-token.txt
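
The token is a JWT, so its claims (subject, expiry) can be inspected locally to confirm the one-hour duration. A sketch, assuming python3 is available on the student node:

bash
python3 -c 'import base64, json; p = open("/tmp/ci-token.txt").read().split(".")[1]; print(json.dumps(json.loads(base64.urlsafe_b64decode(p + "=" * (-len(p) % 4))), indent=2))'
# Look for "sub": "system:serviceaccount:cicd:ci-pipeline" and an "exp" roughly one hour ahead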

Verification

bash
# Verify debug-role no longer has exec
kubectl describe clusterrole debug-role | grep -E "exec|attach"

# Verify token file exists
cat /tmp/ci-token.txt | head -c 50

# Verify dangerous RBAC report
cat /tmp/dangerous-rbac.txt

Solution 5: API Server Hardening (6%)

Domain: Cluster Hardening | Time Target: 7 minutes

Step 1: Edit API Server Manifest

bash
ssh cluster1-controlplane
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml

Step 2: Ensure Correct Configuration

yaml
spec:
  containers:
  - command:
    - kube-apiserver
    - --anonymous-auth=false
    # Remove --insecure-port if present. On recent Kubernetes versions the flag
    # has been removed entirely, so simply ensure it does not appear in the manifest.
    - --kubelet-certificate-authority=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --authorization-mode=Node,RBAC
    - --request-timeout=300s
    # ... (keep all other existing flags)

WARNING

Make sure --authorization-mode does NOT include AlwaysAllow. It should only have Node,RBAC.

Step 3: Verify

bash
# Wait for API server to restart
watch "sudo crictl ps | grep kube-apiserver"

# Test access
kubectl get nodes

# Verify anonymous auth is disabled
curl -k https://localhost:6443/api/v1/namespaces 2>/dev/null | grep -i "forbidden\|unauthorized"

Solution 6: Seccomp and AppArmor Combined (7%)

Domain: System Hardening | Time Target: 9 minutes

Step 1: Create Seccomp Profile

bash
ssh cluster1-node01

sudo mkdir -p /var/lib/kubelet/seccomp/profiles

sudo tee /var/lib/kubelet/seccomp/profiles/strict-net.json << 'EOF'
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": [
    "SCMP_ARCH_X86_64",
    "SCMP_ARCH_X86",
    "SCMP_ARCH_X32"
  ],
  "syscalls": [
    {
      "names": [
        "read", "write", "open", "close", "stat", "fstat", "lstat",
        "poll", "lseek", "mmap", "mprotect", "munmap", "brk",
        "rt_sigaction", "rt_sigprocmask", "ioctl", "access", "pipe",
        "select", "sched_yield", "mremap", "msync", "mincore",
        "madvise", "dup", "dup2", "pause", "nanosleep", "getpid",
        "clone", "execve", "exit", "wait4", "kill", "uname", "fcntl",
        "flock", "fsync", "fdatasync", "getcwd", "readlink",
        "getuid", "getgid", "geteuid", "getegid", "getppid",
        "getpgrp", "setsid", "arch_prctl", "exit_group", "openat",
        "newfstatat", "set_tid_address", "set_robust_list", "futex",
        "getrandom", "close_range", "pread64", "pwrite64",
        "writev", "readv", "sigaltstack", "rt_sigreturn",
        "getdents64", "clock_gettime", "clock_nanosleep",
        "sysinfo", "prctl", "rseq", "mlock", "munlock",
        "shmget", "shmat", "shmctl"
      ],
      "action": "SCMP_ACT_ALLOW"
    },
    {
      "names": [
        "socket", "connect", "accept", "bind", "listen"
      ],
      "action": "SCMP_ACT_ERRNO"
    }
  ]
}
EOF

TIP

Note that networking syscalls are explicitly listed as SCMP_ACT_ERRNO even though the default action is also SCMP_ACT_ERRNO. This makes the intent explicit and serves as documentation.

Step 2: Load AppArmor Profile

bash
# Check if profile is already loaded
sudo aa-status | grep k8s-deny-write

# If not loaded, load it
sudo apparmor_parser -r /etc/apparmor.d/k8s-deny-write

Step 3: Update Pods

Exit SSH and switch context:

bash
exit
kubectl config use-context cluster1

Update secure-api with seccomp profile:

bash
kubectl get pod secure-api -n restricted -o yaml > /tmp/secure-api.yaml

Edit /tmp/secure-api.yaml:

yaml
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      localhostProfile: profiles/strict-net.json
  containers:
  - name: secure-api
    # ... (keep existing spec)
bash
kubectl delete pod secure-api -n restricted
kubectl apply -f /tmp/secure-api.yaml

Update secure-writer with AppArmor profile:

bash
kubectl get pod secure-writer -n restricted -o yaml > /tmp/secure-writer.yaml

Edit /tmp/secure-writer.yaml to add the annotation:

yaml
metadata:
  annotations:
    container.apparmor.security.beta.kubernetes.io/secure-writer: localhost/k8s-deny-write

Or for Kubernetes v1.30+, use the native field:

yaml
spec:
  containers:
  - name: secure-writer
    securityContext:
      appArmorProfile:
        type: Localhost
        localhostProfile: k8s-deny-write
bash
kubectl delete pod secure-writer -n restricted
kubectl apply -f /tmp/secure-writer.yaml

Verification

bash
kubectl get pods -n restricted
kubectl describe pod secure-api -n restricted | grep -i seccomp
kubectl describe pod secure-writer -n restricted | grep -i apparmor
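
An optional functional spot-check (a sketch, assuming the images ship wget and a shell, and that the k8s-deny-write profile denies file writes; adjust to the tools actually present):

bash
# The strict-net seccomp profile denies socket syscalls, so any outbound request should fail
kubectl exec secure-api -n restricted -- wget -T 3 -q -O- http://example.com \
  || echo "network syscalls blocked as expected"

# The k8s-deny-write AppArmor profile should reject file writes
kubectl exec secure-writer -n restricted -- touch /tmp/apparmor-test \
  || echo "write denied as expected"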

Solution 7: Node Hardening - Kernel Modules and Packages (5%)

Domain: System Hardening | Time Target: 6 minutes

Step 1: SSH to Node

bash
ssh cluster2-controlplane
# hop to the worker node from the control plane (direct SSH may not be available)
ssh cluster2-node01

Step 2: Blacklist Kernel Modules

bash
sudo tee /etc/modprobe.d/k8s-hardening.conf << 'EOF'
blacklist dccp
blacklist sctp
blacklist rds
blacklist tipc
EOF
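
Note that blacklist only prevents automatic loading by alias; an explicit modprobe can still load the module. If the task requires blocking the modules entirely, the commonly used install-to-/bin/true pattern is stronger:

bash
sudo tee -a /etc/modprobe.d/k8s-hardening.conf << 'EOF'
install dccp /bin/true
install sctp /bin/true
install rds /bin/true
install tipc /bin/true
EOF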

Step 3: Remove Unnecessary Packages

bash
sudo apt-get purge -y tcpdump strace 2>/dev/null || true
sudo apt-get autoremove -y

Step 4: Verify

bash
# Check kernel module blacklist
cat /etc/modprobe.d/k8s-hardening.conf

# Verify modules are not loaded (they may still be loaded until reboot)
lsmod | grep -E "dccp|sctp|rds|tipc"

# Verify packages removed
dpkg -l | grep -E "tcpdump|strace" | grep "^ii" || echo "Packages removed"
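
If lsmod shows any of the modules loaded, they can usually be unloaded without a reboot, provided nothing is using them:

bash
# Unload only the modules that are actually loaded; modprobe -r fails if a module is in use
for mod in dccp sctp rds tipc; do
  lsmod | grep -qw "$mod" && sudo modprobe -r "$mod"
done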

Solution 8: Reduce OS Attack Surface - SUID and World-Writable (5%)

Domain: System Hardening | Time Target: 6 minutes

Step 1: SSH to Control Plane

bash
ssh cluster1-controlplane

Step 2: Find SUID Binaries

bash
find / -perm -4000 -type f 2>/dev/null > /tmp/suid-binaries.txt

Step 3: Find World-Writable Directories

bash
find / -type d -perm -0002 2>/dev/null > /tmp/world-writable-dirs.txt
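
Directories such as /tmp are world-writable by design but protected by the sticky bit. If the question only wants the genuinely risky ones, exclude sticky-bit directories (this variant overwrites the same report file):

bash
find / -type d -perm -0002 ! -perm -1000 2>/dev/null > /tmp/world-writable-dirs.txt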

Step 4: Remove SUID Bit

bash
sudo chmod u-s /usr/bin/newgrp
sudo chmod u-s /usr/bin/chfn

# Verify
ls -la /usr/bin/newgrp /usr/bin/chfn

Step 5: Verify Kubernetes Functionality

bash
kubectl get nodes
kubectl get pods -n kube-system

Solution 9: Pod Security Standards Enforcement (7%)

Domain: Minimize Microservice Vulnerabilities | Time Target: 9 minutes

Step 1: Label the Namespace

bash
kubectl label namespace e-commerce \
  pod-security.kubernetes.io/enforce=baseline \
  pod-security.kubernetes.io/warn=restricted \
  pod-security.kubernetes.io/warn-version=latest \
  pod-security.kubernetes.io/audit=restricted \
  pod-security.kubernetes.io/audit-version=latest
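
Confirm the labels landed as expected:

bash
kubectl get namespace e-commerce --show-labels
kubectl get namespace e-commerce -o json | jq '.metadata.labels'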

Step 2: Identify Violating Pods

bash
# Dry-run label to see what would be rejected under restricted
kubectl label --dry-run=server --overwrite ns e-commerce \
  pod-security.kubernetes.io/enforce=restricted

Step 3: Fix checkout-service Deployment

bash
kubectl edit deployment checkout-service -n e-commerce

Apply the following spec:

yaml
spec:
  template:
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
        seccompProfile:
          type: RuntimeDefault
      containers:
      - name: checkout
        # ... (keep existing image)
        securityContext:
          # REMOVE: privileged: true
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
          seccompProfile:
            type: RuntimeDefault
        volumeMounts:
        # ... (keep existing mounts)
        - name: tmp-vol
          mountPath: /tmp
        - name: run-vol
          mountPath: /var/run
      volumes:
      # ... (keep existing volumes)
      - name: tmp-vol
        emptyDir: {}
      - name: run-vol
        emptyDir: {}

Verification

bash
kubectl rollout status deployment/checkout-service -n e-commerce
kubectl get pods -n e-commerce

Solution 10: RuntimeClass and Container Sandboxing (6%)

Domain: Minimize Microservice Vulnerabilities | Time Target: 7 minutes

Step 1: Create gVisor RuntimeClass

yaml
# gvisor-runtimeclass.yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
bash
kubectl apply -f gvisor-runtimeclass.yaml

Step 2: Update Deployment

bash
kubectl edit deployment payment-processor -n sandbox

Add runtimeClassName under spec.template.spec:

yaml
spec:
  template:
    spec:
      runtimeClassName: gvisor
      containers:
      # ... (keep existing spec)

Step 3: Verify

bash
kubectl rollout status deployment/payment-processor -n sandbox
kubectl get pods -n sandbox -o jsonpath='{.items[*].spec.runtimeClassName}'
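
To confirm the pods really run under gVisor (not just carry the field), dmesg inside a sandboxed container reports the gVisor kernel. This assumes the image provides dmesg:

bash
POD=$(kubectl get pods -n sandbox -o name | grep payment-processor | head -1)
kubectl exec -n sandbox "$POD" -- dmesg | grep -i gvisor
# Expected output includes a line like "Starting gVisor..."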

Step 4: Create Kata RuntimeClass

yaml
# kata-runtimeclass.yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata-runtime
scheduling:
  nodeSelector:
    runtime: kata
bash
kubectl apply -f kata-runtimeclass.yaml
kubectl get runtimeclass kata -o yaml > /tmp/kata-runtimeclass.yaml

Solution 11: Secrets Management Overhaul (7%)

Domain: Minimize Microservice Vulnerabilities | Time Target: 10 minutes

Step 1: Verify Existing Secret

bash
kubectl get secret api-keys -n banking
kubectl get secret api-keys -n banking -o jsonpath='{.data.stripe-key}' | base64 -d

Step 2: Create New Secret

bash
kubectl create secret generic api-keys-v2 \
  -n banking \
  --from-literal=stripe-key=sk_live_abc123

Step 3: Update payment-gateway Pod

Export, edit, and recreate:

bash
kubectl get pod payment-gateway -n banking -o yaml > /tmp/payment-gateway.yaml

Edit /tmp/payment-gateway.yaml:

yaml
apiVersion: v1
kind: Pod
metadata:
  name: payment-gateway
  namespace: banking
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 1000
  containers:
  - name: payment-gateway
    # ... (keep existing image and command)
    # REMOVE any env entries referencing api-keys:
    # env:
    # - name: STRIPE_KEY
    #   valueFrom:
    #     secretKeyRef:
    #       name: api-keys    # REMOVE THIS
    #       key: stripe-key
    securityContext:
      runAsUser: 1000
      runAsGroup: 1000
    volumeMounts:
    - name: secrets-vol
      mountPath: /etc/secrets
      readOnly: true
  volumes:
  - name: secrets-vol
    secret:
      secretName: api-keys-v2
      defaultMode: 0400
bash
kubectl delete pod payment-gateway -n banking
kubectl apply -f /tmp/payment-gateway.yaml

Step 4: Delete Old Secret

bash
# Confirm new pod is running
kubectl get pod payment-gateway -n banking

# Verify secret is mounted
kubectl exec payment-gateway -n banking -- ls -la /etc/secrets/

# Delete old secret
kubectl delete secret api-keys -n banking

Step 5: Check Encryption at Rest

bash
ssh cluster1-controlplane
sudo cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep encryption-provider-config
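
If an encryption provider is configured, the stored secret should not be readable in etcd. A hedged check, assuming etcdctl is installed on the control plane and the kubeadm default certificate paths:

bash
sudo ETCDCTL_API=3 etcdctl \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry/secrets/banking/api-keys-v2 | hexdump -C | head -20
# With encryption enabled the value starts with a provider prefix such as "k8s:enc:aescbc:v1:"
# instead of readable JSON.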

Solution 12: OPA Gatekeeper - Allowed Registries (6%)

Domain: Minimize Microservice Vulnerabilities | Time Target: 8 minutes

Step 1: Create ConstraintTemplate

yaml
# allowed-repos-template.yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sallowedrepos
spec:
  crd:
    spec:
      names:
        kind: K8sAllowedRepos
      validation:
        openAPIV3Schema:
          type: object
          properties:
            repos:
              type: array
              items:
                type: string
  targets:
  - target: admission.k8s.gatekeeper.sh
    rego: |
      package k8sallowedrepos

      violation[{"msg": msg}] {
        container := input.review.object.spec.containers[_]
        not startswith_any(container.image, input.parameters.repos)
        msg := sprintf("Container '%v' uses disallowed image '%v'. Allowed repos: %v", [container.name, container.image, input.parameters.repos])
      }

      violation[{"msg": msg}] {
        container := input.review.object.spec.initContainers[_]
        not startswith_any(container.image, input.parameters.repos)
        msg := sprintf("Init container '%v' uses disallowed image '%v'. Allowed repos: %v", [container.name, container.image, input.parameters.repos])
      }

      startswith_any(str, prefixes) {
        prefix := prefixes[_]
        startswith(str, prefix)
      }
bash
kubectl apply -f allowed-repos-template.yaml
# Wait a few seconds for template to be processed
sleep 5
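
Confirm the template was accepted and its CRD generated before creating the constraint:

bash
kubectl get constrainttemplate k8sallowedrepos
kubectl get crd | grep -i k8sallowedrepos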

Step 2: Create Constraint

yaml
# allowed-repos-constraint.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRepos
metadata:
  name: allowed-repos
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
    excludedNamespaces:
    - kube-system
    - gatekeeper-system
  parameters:
    repos:
    - "docker.io/library/"
    - "gcr.io/company-project/"
    - "registry.internal.company.com/"
bash
kubectl apply -f allowed-repos-constraint.yaml

Step 3: Verify Rejection

bash
# This should be REJECTED
kubectl run test-bad --image=quay.io/malicious/image -n default 2>&1

# This should be ACCEPTED
kubectl run test-good --image=docker.io/library/nginx:1.25 -n default

# Clean up
kubectl delete pod test-good -n default --ignore-not-found

Solution 13: Image Vulnerability Scanning and Remediation (7%)

Domain: Supply Chain Security | Time Target: 9 minutes

Step 1: Scan Images

bash
trivy image --severity CRITICAL python:3.8-slim > /tmp/python-scan.txt
trivy image --severity CRITICAL node:16-alpine > /tmp/node-scan.txt
trivy image --severity CRITICAL postgres:13 > /tmp/postgres-scan.txt

Step 2: Compare and Choose Best Image

bash
# Scan the options
trivy image --severity CRITICAL python:3.11-slim 2>&1 | tail -5
trivy image --severity CRITICAL python:3.12-alpine 2>&1 | tail -5
trivy image --severity CRITICAL cgr.dev/chainguard/python:latest 2>&1 | tail -5

Choose the image with the fewest CRITICAL vulnerabilities (typically cgr.dev/chainguard/python:latest or python:3.12-alpine).
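
To compare the candidates numerically instead of eyeballing the tail output, trivy's JSON format can be totalled with jq (a sketch; assumes trivy and jq are installed and the current trivy JSON schema):

bash
for img in python:3.11-slim python:3.12-alpine cgr.dev/chainguard/python:latest; do
  count=$(trivy image --severity CRITICAL -q -f json "$img" \
    | jq '[.Results[]?.Vulnerabilities // [] | length] | add // 0')
  echo "$img: $count CRITICAL"
done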

Step 3: Update Deployment

bash
# Replace with the chosen image
kubectl set image deployment/analytics-api \
  analytics-api=python:3.12-alpine -n production

Step 4: Scan Fixed Image and Run kubesec

bash
# Scan the chosen image
trivy image --severity CRITICAL python:3.12-alpine > /tmp/analytics-scan-fixed.txt

# Export deployment manifest and run kubesec
kubectl get deployment analytics-api -n production -o yaml > /tmp/analytics-api.yaml
kubesec scan /tmp/analytics-api.yaml > /tmp/kubesec-analytics.txt

Verification

bash
kubectl rollout status deployment/analytics-api -n production

Solution 14: Static Analysis with kubesec (5%)

Domain: Supply Chain Security | Time Target: 6 minutes

Step 1: Export Pod Spec

bash
kubectl get pod data-pipeline -n etl -o yaml > /tmp/data-pipeline.yaml

Step 2: Initial kubesec Scan

bash
kubesec scan /tmp/data-pipeline.yaml > /tmp/kubesec-data-pipeline.txt

Step 3: Create Hardened Manifest

Edit /tmp/data-pipeline.yaml and save as /tmp/data-pipeline-hardened.yaml:

yaml
apiVersion: v1
kind: Pod
metadata:
  name: data-pipeline
  namespace: etl
spec:
  serviceAccountName: data-pipeline-sa  # Not default
  securityContext:
    runAsNonRoot: true
    runAsUser: 10001
    runAsGroup: 10001
  containers:
  - name: data-pipeline
    image: python:3.12-alpine  # keep original or use safer image
    command: ["python", "-c", "import time; time.sleep(3600)"]
    securityContext:
      readOnlyRootFilesystem: true
      allowPrivilegeEscalation: false
      runAsNonRoot: true
      runAsUser: 10001
      capabilities:
        drop:
        - ALL
    resources:
      limits:
        cpu: "500m"
        memory: "256Mi"
      requests:
        cpu: "100m"
        memory: "128Mi"
    volumeMounts:
    - name: tmp
      mountPath: /tmp
  volumes:
  - name: tmp
    emptyDir: {}

TIP

Create the ServiceAccount first if it does not exist:

bash
kubectl create serviceaccount data-pipeline-sa -n etl

Step 4: Verify Improved Score

bash
kubesec scan /tmp/data-pipeline-hardened.yaml > /tmp/kubesec-data-pipeline-hardened.txt
cat /tmp/kubesec-data-pipeline-hardened.txt | jq '.[0].score'
# Should show a score >= 8

Solution 15: ValidatingWebhookConfiguration (4%)

Domain: Supply Chain Security | Time Target: 5 minutes

Step 1: Inspect Current Webhook

bash
kubectl get validatingwebhookconfiguration image-registry-validator -o yaml

Step 2: Edit the Webhook

bash
kubectl edit validatingwebhookconfiguration image-registry-validator

Apply these changes:

yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: image-registry-validator
webhooks:
- name: validate-image-registry.example.com
  failurePolicy: Fail  # Changed from Ignore
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  namespaceSelector:
    matchExpressions:
    - key: kubernetes.io/metadata.name
      operator: NotIn
      values:
      - kube-system
  # ... (keep existing clientConfig, admissionReviewVersions, sideEffects)

Verification

bash
kubectl describe validatingwebhookconfiguration image-registry-validator
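
jsonpath gives a more targeted confirmation than describe:

bash
kubectl get validatingwebhookconfiguration image-registry-validator \
  -o jsonpath='{.webhooks[0].failurePolicy}{"\n"}'
# Should print: Fail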

Solution 16: Audit Log Investigation and Falco (5%)

Domain: Monitoring, Logging & Runtime Security | Time Target: 8 minutes

Step 1: Examine Audit Logs

bash
ssh cluster1-controlplane

# Find secret access events in last 100 lines
tail -100 /var/log/kubernetes/audit/audit.log | \
  jq -r 'select(.objectRef.resource == "secrets" and (.verb == "get" or .verb == "list" or .verb == "watch"))' \
  > /tmp/secret-access-audit.txt

Step 2: Find Non-System Secret Access

bash
tail -100 /var/log/kubernetes/audit/audit.log | \
  jq -r 'select(
    .objectRef.resource == "secrets" and
    .objectRef.namespace == "kube-system" and
    (.verb == "get" or .verb == "list" or .verb == "watch") and
    (.user.username | startswith("system:") | not)
  )' > /tmp/suspicious-secret-access.txt

Step 3: Check Falco Alerts

bash
# Check Falco logs
sudo journalctl -u falco --no-pager --since "1 hour ago" | \
  grep -E "reverse shell|/etc/|network connection" > /tmp/falco-findings.txt

# Or check the Falco log file directly
sudo cat /var/log/falco/falco.log | \
  grep -E "reverse shell|etc modified|Unexpected outbound" >> /tmp/falco-findings.txt

Step 4: Delete Compromised Pods and Apply NetworkPolicy

bash
exit  # Exit SSH
kubectl config use-context cluster1

# Identify compromised pods from Falco logs
# (use container names from /tmp/falco-findings.txt)
kubectl get pods -n <compromised-namespace> 
kubectl delete pod <compromised-pod> -n <compromised-namespace>

# Apply data exfiltration prevention policy
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: prevent-data-exfiltration
  namespace: <compromised-namespace>
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - ports:
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53
EOF

Solution 17: Immutable Containers and Detection (5%)

Domain: Monitoring, Logging & Runtime Security | Time Target: 7 minutes

Step 1: Make web-frontend Immutable

bash
kubectl edit deployment web-frontend -n public
yaml
spec:
  template:
    spec:
      containers:
      - name: nginx
        securityContext:
          readOnlyRootFilesystem: true
          allowPrivilegeEscalation: false
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: nginx-cache
          mountPath: /var/cache/nginx
        - name: nginx-run
          mountPath: /var/run
      volumes:
      - name: tmp
        emptyDir: {}
      - name: nginx-cache
        emptyDir: {}
      - name: nginx-run
        emptyDir: {}

Step 2: Configure logging-agent DaemonSet

bash
kubectl edit daemonset logging-agent -n monitoring
yaml
spec:
  template:
    spec:
      containers:
      - name: logging-agent
        securityContext:
          readOnlyRootFilesystem: true
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: var-log
          mountPath: /var/log
      volumes:
      - name: tmp
        emptyDir: {}
      - name: var-log
        hostPath:
          path: /var/log
          type: Directory

Step 3: Create Detection Script

bash
cat > /tmp/find-mutable-containers.sh << 'SCRIPT'
#!/bin/bash
echo "NAMESPACE | POD | CONTAINER | readOnlyRootFilesystem"
echo "----------|-----|-----------|----------------------"

kubectl get pods --all-namespaces -o json | jq -r '
  .items[] |
  .metadata.namespace as $ns |
  .metadata.name as $pod |
  .spec.containers[] |
  select(
    .securityContext.readOnlyRootFilesystem != true
  ) |
  "\($ns) | \($pod) | \(.name) | \(.securityContext.readOnlyRootFilesystem // "not set")"
'
SCRIPT

chmod +x /tmp/find-mutable-containers.sh
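
The script above only inspects regular containers. If the grader also expects init containers to be covered, a hedged variant merges both arrays:

bash
kubectl get pods --all-namespaces -o json | jq -r '
  .items[] |
  .metadata.namespace as $ns |
  .metadata.name as $pod |
  (.spec.containers + (.spec.initContainers // []))[] |
  select(.securityContext.readOnlyRootFilesystem != true) |
  "\($ns) | \($pod) | \(.name) | \(.securityContext.readOnlyRootFilesystem // "not set")"
'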

Step 4: Run and Save Output

bash
/tmp/find-mutable-containers.sh > /tmp/mutable-containers.txt
cat /tmp/mutable-containers.txt

Verification

bash
kubectl rollout status deployment/web-frontend -n public
kubectl rollout status daemonset/logging-agent -n monitoring

Final Score Calculation

Add up the weights for all questions you answered completely and correctly:

Score Range | Result
67-100%     | PASS -- You are exam-ready
55-66%      | CLOSE -- Minor gaps, review and reattempt in a few days
40-54%      | NEEDS WORK -- Revisit domain materials
Below 40%   | NOT READY -- Complete the study guide before reattempting

Next Steps

If you passed both mock exams:

  1. Review the Exam Tips & Cheatsheets section
  2. Schedule your CKS exam within 1-2 weeks
  3. Do a final review of cheatsheets the day before the exam
  4. Get a good night's sleep before exam day
