
Mock Exam 2 - Questions

Timed Exam

Set a timer for 2 hours before starting. Do not look at the solutions until the timer expires. Use only kubernetes.io documentation as a reference.

Exam Instructions

  • This exam contains 17 questions totaling 100 points
  • Passing score: 67 points
  • Each question specifies the cluster context to switch to
  • This exam focuses on areas where candidates commonly fail and includes tricky edge cases
  • Flag difficult questions and return to them after completing easier ones

Question 1

Weight: 6%
Difficulty: Medium
Domain: Cluster Setup
Cluster: kubectl config use-context cluster1

Scenario

The database namespace requires strict network segmentation. The security team has reported that pods in this namespace can still reach the Kubernetes API server and external internet endpoints.

Tasks

  1. Create a NetworkPolicy named strict-db-isolation in the database namespace that applies to all pods with label tier: database and:
    • Denies ALL egress traffic by default
    • Allows egress only to pods with label tier: database within the same namespace on port 3306 (TCP)
    • Allows egress to pods with label app: monitoring in the monitoring namespace on port 9090 (TCP)
    • Allows DNS resolution (port 53 TCP and UDP) to the kube-system namespace only
  2. Create a NetworkPolicy named db-ingress-control in the database namespace that:
    • Applies to pods with label tier: database
    • Allows ingress only from pods with label tier: application in the backend namespace on port 3306
    • Allows ingress from pods with label app: monitoring in the monitoring namespace on port 9090
  3. Verify that the policies are correctly applied

Tricky Part

The DNS egress rule must target the kube-system namespace specifically, not allow DNS to any destination. This prevents data exfiltration via DNS tunneling.
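
A minimal sketch of the egress policy for Task 1, assuming the monitoring and kube-system namespaces carry the default kubernetes.io/metadata.name labels (the ingress policy for Task 2 follows the same pattern):

    kubectl apply -f - <<'EOF'
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: strict-db-isolation
      namespace: database
    spec:
      podSelector:
        matchLabels:
          tier: database
      policyTypes:
      - Egress                 # listing only Egress makes "deny all egress" the default
      egress:
      - to:                    # intra-namespace MySQL traffic
        - podSelector:
            matchLabels:
              tier: database
        ports:
        - protocol: TCP
          port: 3306
      - to:                    # monitoring endpoints
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: monitoring
          podSelector:
            matchLabels:
              app: monitoring
        ports:
        - protocol: TCP
          port: 9090
      - to:                    # DNS restricted to kube-system only
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
        ports:
        - protocol: TCP
          port: 53
        - protocol: UDP
          port: 53
    EOF
    kubectl -n database describe networkpolicy strict-db-isolation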


Question 2

Weight: 7%
Difficulty: Hard
Domain: Cluster Setup
Cluster: ssh cluster1-controlplane

Scenario

The cluster's TLS certificates need to be inspected and potentially renewed. A security audit has flagged that some certificates may be close to expiry and that the API server certificate's Subject Alternative Names (SANs) may be incomplete.

Tasks

  1. SSH into cluster1-controlplane
  2. Check the expiration dates of all Kubernetes certificates using kubeadm certs check-expiration
  3. Inspect the API server certificate and list all Subject Alternative Names (SANs):
    openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 "Subject Alternative Name"
  4. If any certificate expires within 30 days, renew it using kubeadm certs renew
  5. Verify that the etcd server is serving TLS by checking the etcd static pod manifest for the correct --cert-file and --key-file flags
  6. Verify the API server is configured to communicate with etcd using TLS by checking --etcd-certfile, --etcd-keyfile, and --etcd-cafile
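
A command sketch for the inspection and renewal steps, assuming a standard kubeadm layout under /etc/kubernetes:

    kubeadm certs check-expiration
    openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 "Subject Alternative Name"
    kubeadm certs renew apiserver        # renew only the certificates that expire within 30 days
    grep -E 'cert-file|key-file' /etc/kubernetes/manifests/etcd.yaml
    grep -E 'etcd-certfile|etcd-keyfile|etcd-cafile' /etc/kubernetes/manifests/kube-apiserver.yaml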

Question 3

Weight: 5%
Difficulty: Medium
Domain: Cluster Setup
Cluster: ssh cluster2-controlplane

Scenario

The Kubernetes dashboard has been deployed on cluster2 with insecure settings. You need to secure it.

Tasks

  1. The Kubernetes dashboard is deployed in the kubernetes-dashboard namespace. Verify it is running
  2. Ensure the dashboard deployment does NOT use --enable-skip-login argument
  3. Ensure the dashboard deployment does NOT use --enable-insecure-login argument
  4. Ensure the dashboard Service is of type ClusterIP (not NodePort or LoadBalancer)
  5. Create a read-only ServiceAccount named dashboard-viewer in the kubernetes-dashboard namespace
  6. Create a ClusterRole named dashboard-view-only that allows only get, list, watch on all resources
  7. Bind the ClusterRole to the ServiceAccount
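
A sketch of the RBAC part (Tasks 5-7); the Deployment and Service are assumed to both be named kubernetes-dashboard, so confirm the actual names before editing them for Tasks 2-4, and note that the binding name below is not prescribed by the task:

    kubectl -n kubernetes-dashboard get deploy,svc
    kubectl -n kubernetes-dashboard edit deploy kubernetes-dashboard   # remove the two --enable-* args
    kubectl -n kubernetes-dashboard edit svc kubernetes-dashboard      # set type: ClusterIP
    kubectl -n kubernetes-dashboard create serviceaccount dashboard-viewer
    kubectl apply -f - <<'EOF'
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: dashboard-view-only
    rules:
    - apiGroups: ["*"]
      resources: ["*"]
      verbs: ["get", "list", "watch"]
    EOF
    kubectl create clusterrolebinding dashboard-viewer-binding \
      --clusterrole=dashboard-view-only \
      --serviceaccount=kubernetes-dashboard:dashboard-viewer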

Question 4

Weight: 7%
Difficulty: Hard
Domain: Cluster Hardening
Cluster: kubectl config use-context cluster1

Scenario

A security audit has discovered several dangerous RBAC configurations across the cluster. You need to identify and fix them.

Tasks

  1. Find all ClusterRoleBindings that grant the cluster-admin ClusterRole and list them. Remove any binding whose subject is not the system:masters group, the system:admin user, or a ServiceAccount in the kube-system namespace
  2. There is a ClusterRole named debug-role that grants create on pods/exec. This is a dangerous permission. Modify the ClusterRole to remove pods/exec and pods/attach from its permissions
  3. Find any Role or ClusterRole that grants escalate, bind, or impersonate verbs and save the output to /tmp/dangerous-rbac.txt
  4. The ServiceAccount ci-pipeline in the cicd namespace has a long-lived token secret. Create a time-bound token for this ServiceAccount with a 1-hour expiration instead and save it to /tmp/ci-token.txt
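
A sketch for the audit and token steps, assuming jq is available on the exam terminal; which bindings to delete depends on what the first command reveals:

    # subjects of every binding that grants cluster-admin
    kubectl get clusterrolebindings -o json | jq -r '
      .items[] | select(.roleRef.name == "cluster-admin")
      | "\(.metadata.name): \(.subjects // [] | map(.kind + "/" + .name) | join(", "))"'
    kubectl delete clusterrolebinding <offending-binding>   # repeat per non-approved binding
    kubectl edit clusterrole debug-role                     # remove the pods/exec and pods/attach resources
    # Roles/ClusterRoles granting escalate, bind or impersonate
    { kubectl get clusterroles -o json; kubectl get roles -A -o json; } | jq -r '
      .items[] | select([.rules[]?.verbs[]?] | any(. == "escalate" or . == "bind" or . == "impersonate"))
      | "\(.kind)/\(.metadata.name)"' > /tmp/dangerous-rbac.txt
    # time-bound token (TokenRequest API) instead of the long-lived secret
    kubectl -n cicd create token ci-pipeline --duration=1h > /tmp/ci-token.txt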

Question 5

Weight: 6%
Difficulty: Medium
Domain: Cluster Hardening
Cluster: ssh cluster1-controlplane

Scenario

The API server on cluster1 needs additional hardening. Several flags are misconfigured or missing.

Tasks

  1. SSH into cluster1-controlplane and inspect the API server manifest
  2. Ensure the following configurations are in place:
    • --anonymous-auth=false
    • --insecure-port=0, or the flag omitted entirely (it is deprecated and has been removed in recent Kubernetes releases)
    • --kubelet-certificate-authority is set to the correct CA file path
    • --enable-admission-plugins includes NodeRestriction
    • --authorization-mode includes Node,RBAC (and does NOT include AlwaysAllow)
    • --request-timeout is set to 300s
  3. Verify the API server restarts successfully
  4. Verify you can still access the cluster with kubectl get nodes
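
A quick way to audit the manifest; the CA path below assumes the standard kubeadm location, and the kubelet restarts the static pod automatically after the file is saved:

    grep -E 'anonymous-auth|insecure-port|kubelet-certificate-authority|enable-admission-plugins|authorization-mode|request-timeout' \
      /etc/kubernetes/manifests/kube-apiserver.yaml
    # target flag values:
    #   --anonymous-auth=false
    #   --kubelet-certificate-authority=/etc/kubernetes/pki/ca.crt
    #   --enable-admission-plugins=NodeRestriction
    #   --authorization-mode=Node,RBAC
    #   --request-timeout=300s
    crictl ps | grep kube-apiserver     # wait for the restarted container
    kubectl get nodes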

Question 6

Weight: 7%
Difficulty: Hard
Domain: System Hardening
Cluster: ssh cluster1-node01

Scenario

A seccomp profile needs to be applied to an existing workload, and an AppArmor profile needs to be configured on a different pod. Both are running on cluster1-node01.

Tasks

  1. SSH into cluster1-node01
  2. Create a seccomp profile at /var/lib/kubelet/seccomp/profiles/strict-net.json that:
    • Default action: SCMP_ACT_ERRNO
    • Allows all syscalls from the RuntimeDefault set
    • Additionally blocks: socket, connect, accept, bind, listen (deny networking syscalls)
  3. There is an existing AppArmor profile at /etc/apparmor.d/k8s-deny-write. Load it if it is not already loaded
  4. Switch context to cluster1 and update the pod secure-api in the restricted namespace to use the strict-net seccomp profile
  5. Update the pod secure-writer in the restricted namespace to use the k8s-deny-write AppArmor profile
  6. Verify both pods are running after the changes
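
A sketch of the node-side and pod-side changes. The syscall allow-list below is abridged (a full profile must mirror the runtime default minus the networking calls), and because pod security fields are immutable, both pods have to be re-created rather than edited in place:

    # on cluster1-node01
    mkdir -p /var/lib/kubelet/seccomp/profiles
    cat <<'EOF' > /var/lib/kubelet/seccomp/profiles/strict-net.json
    {
      "defaultAction": "SCMP_ACT_ERRNO",
      "architectures": ["SCMP_ARCH_X86_64"],
      "syscalls": [
        { "names": ["read", "write", "close", "exit", "exit_group", "futex", "nanosleep"],
          "action": "SCMP_ACT_ALLOW" }
      ]
    }
    EOF
    aa-status | grep k8s-deny-write || apparmor_parser -q /etc/apparmor.d/k8s-deny-write
    # pod spec fragments (re-create the pods with kubectl get -o yaml / delete / apply):
    #   secure-api -> securityContext:
    #                   seccompProfile:
    #                     type: Localhost
    #                     localhostProfile: profiles/strict-net.json
    #   secure-writer (pre-1.30 annotation form) -> metadata.annotations:
    #     container.apparmor.security.beta.kubernetes.io/<container-name>: localhost/k8s-deny-write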

Question 7

Weight: 5%
Difficulty: Medium
Domain: System Hardening
Cluster: ssh cluster2-controlplane

Scenario

The worker nodes in cluster2 have excessive access permissions. Users should not be able to SSH directly into worker nodes, and unnecessary kernel modules should be disabled.

Tasks

  1. SSH into cluster2-controlplane and verify that the node cluster2-node01 is accessible
  2. On cluster2-node01, configure the following:
    • Disable the ip_forward sysctl ONLY for containers (do not affect the host -- check if /etc/sysctl.d/ has container-specific settings)
    • Blacklist the following kernel modules by creating /etc/modprobe.d/k8s-hardening.conf: dccp, sctp, rds, tipc
    • Remove the tcpdump and strace packages if installed
  3. Verify the kernel module blacklist is in effect
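
A sketch for the module blacklist and package removal, assuming a Debian/Ubuntu worker node; adding install <module> /bin/true lines would additionally block explicit modprobe calls:

    cat <<'EOF' > /etc/modprobe.d/k8s-hardening.conf
    blacklist dccp
    blacklist sctp
    blacklist rds
    blacklist tipc
    EOF
    apt-get purge -y tcpdump strace                 # package manager assumed to be apt
    lsmod | grep -E 'dccp|sctp|rds|tipc' || echo "no blacklisted modules loaded"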

Question 8

Weight: 5%
Difficulty: Medium
Domain: System Hardening
Cluster: ssh cluster1-controlplane

Scenario

Unnecessary privileges have been granted at the OS level on the control plane node. You need to reduce the attack surface.

Tasks

  1. SSH into cluster1-controlplane
  2. Find all SUID binaries on the system and save the list to /tmp/suid-binaries.txt
  3. Find all world-writable directories and save the list to /tmp/world-writable-dirs.txt
  4. Remove the SUID bit from /usr/bin/newgrp and /usr/bin/chfn (these are not needed for Kubernetes operations)
  5. Verify the Kubernetes control plane components are still functioning after your changes
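
A command sketch for the file-permission audit:

    find / -type f -perm -4000 2>/dev/null > /tmp/suid-binaries.txt
    find / -type d -perm -0002 2>/dev/null > /tmp/world-writable-dirs.txt
    chmod u-s /usr/bin/newgrp /usr/bin/chfn
    ls -l /usr/bin/newgrp /usr/bin/chfn             # the 's' bit should be gone
    kubectl get nodes && kubectl -n kube-system get pods   # control plane still healthy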

Question 9

Weight: 7%
Difficulty: Hard
Domain: Minimize Microservice Vulnerabilities
Cluster: kubectl config use-context cluster1

Scenario

The e-commerce namespace runs multiple microservices. The security team requires that ALL pods in this namespace meet strict security requirements. Some existing pods violate these requirements.

Tasks

  1. Label the e-commerce namespace to enforce the baseline Pod Security Standard (enforce mode)
  2. Also set restricted in warn and audit modes for the namespace
  3. Identify all pods in the e-commerce namespace that would violate the restricted standard using a dry-run approach
  4. The checkout-service deployment in e-commerce is running with privileged: true. Fix it by:
    • Removing the privileged: true flag
    • Adding runAsNonRoot: true with runAsUser: 1000
    • Dropping all capabilities and only adding NET_BIND_SERVICE
    • Adding readOnlyRootFilesystem: true with emptyDir volumes for /tmp and /var/run
    • Setting allowPrivilegeEscalation: false
    • Adding seccompProfile with type RuntimeDefault
  5. Verify the deployment rolls out successfully
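
A sketch for Tasks 1-3 and the container securityContext that Task 4 asks for (the emptyDir volumes for /tmp and /var/run still need to be added alongside it):

    kubectl label ns e-commerce \
      pod-security.kubernetes.io/enforce=baseline \
      pod-security.kubernetes.io/warn=restricted \
      pod-security.kubernetes.io/audit=restricted --overwrite
    # server-side dry run lists every pod that would violate "restricted"
    kubectl label --dry-run=server --overwrite ns e-commerce \
      pod-security.kubernetes.io/enforce=restricted
    kubectl -n e-commerce edit deploy checkout-service
    # container securityContext to end up with:
    #   securityContext:
    #     runAsNonRoot: true
    #     runAsUser: 1000
    #     allowPrivilegeEscalation: false
    #     readOnlyRootFilesystem: true
    #     capabilities:
    #       drop: ["ALL"]
    #       add: ["NET_BIND_SERVICE"]
    #     seccompProfile:
    #       type: RuntimeDefault
    kubectl -n e-commerce rollout status deploy/checkout-service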

Question 10

Weight: 6%
Difficulty: Medium
Domain: Minimize Microservice Vulnerabilities
Cluster: kubectl config use-context cluster2

Scenario

The application team needs to implement container sandboxing using RuntimeClass for high-security workloads in the sandbox namespace.

Tasks

  1. A container runtime runsc (gVisor) is already configured on the nodes with handler name runsc
  2. Create a RuntimeClass named gvisor that uses the runsc handler
  3. Update the payment-processor deployment in the sandbox namespace to use the gvisor RuntimeClass
  4. Verify the deployment is running with the correct RuntimeClass
  5. Create a second RuntimeClass named kata with handler kata-runtime and set scheduling to only run on nodes with label runtime: kata
  6. Save the RuntimeClass YAML to /tmp/kata-runtimeclass.yaml
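
A sketch for the two RuntimeClasses and the deployment update:

    kubectl apply -f - <<'EOF'
    apiVersion: node.k8s.io/v1
    kind: RuntimeClass
    metadata:
      name: gvisor
    handler: runsc
    EOF
    kubectl -n sandbox patch deploy payment-processor --type merge \
      -p '{"spec":{"template":{"spec":{"runtimeClassName":"gvisor"}}}}'
    kubectl -n sandbox get deploy payment-processor \
      -o jsonpath='{.spec.template.spec.runtimeClassName}{"\n"}'
    cat <<'EOF' > /tmp/kata-runtimeclass.yaml
    apiVersion: node.k8s.io/v1
    kind: RuntimeClass
    metadata:
      name: kata
    handler: kata-runtime
    scheduling:
      nodeSelector:
        runtime: kata
    EOF
    kubectl apply -f /tmp/kata-runtimeclass.yaml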

Question 11

Weight: 7%
Difficulty: Hard
Domain: Minimize Microservice Vulnerabilities
Cluster: kubectl config use-context cluster1

Scenario

Secrets management needs to be overhauled. Several secrets in the banking namespace are improperly managed.

Tasks

  1. A secret named api-keys in the banking namespace contains a key stripe-key with value sk_live_abc123. Verify this secret exists
  2. The secret is currently of type Opaque. Create a new secret named api-keys-v2 of type Opaque with the same data, but ensure the pod payment-gateway in banking mounts this new secret as a volume at /etc/secrets with defaultMode: 0400 (read-only for owner)
  3. Update the payment-gateway pod to:
    • Mount the secret volume with readOnly: true
    • Set the container's security context to runAsUser: 1000 and runAsGroup: 1000
    • Remove any environment variables that reference the old api-keys secret
  4. Delete the old api-keys secret after confirming the new configuration works
  5. SSH into the control plane and verify that secrets are encrypted at rest (check if --encryption-provider-config is set on the API server)
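
A sketch of the secret migration; the pod itself has to be re-created, since volumes and securityContext are immutable on a running pod:

    kubectl -n banking get secret api-keys -o jsonpath='{.data.stripe-key}' | base64 -d; echo
    kubectl -n banking create secret generic api-keys-v2 --from-literal=stripe-key=sk_live_abc123
    # pod spec fragments for payment-gateway:
    #   volumes:
    #   - name: api-keys
    #     secret:
    #       secretName: api-keys-v2
    #       defaultMode: 0400
    #   containers[0].volumeMounts:
    #   - name: api-keys
    #     mountPath: /etc/secrets
    #     readOnly: true
    #   containers[0].securityContext:
    #     runAsUser: 1000
    #     runAsGroup: 1000
    kubectl -n banking delete secret api-keys            # only after the new mount is confirmed working
    ssh cluster1-controlplane \
      'grep encryption-provider-config /etc/kubernetes/manifests/kube-apiserver.yaml'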

Question 12

Weight: 6%
Difficulty: Medium
Domain: Minimize Microservice Vulnerabilities
Cluster: kubectl config use-context cluster2

Scenario

An OPA Gatekeeper policy is needed to enforce that all containers must use images from approved registries only.

Tasks

  1. Create a ConstraintTemplate named k8sallowedrepos that:
    • Takes a parameter repos which is a list of allowed repository prefixes
    • Checks that all container images (including init containers) start with one of the allowed prefixes
    • Returns a violation message that includes the container name and the disallowed image
  2. Create a Constraint named allowed-repos using the template that:
    • Applies to Pod resources in all namespaces except kube-system and gatekeeper-system
    • Allows only these repositories: docker.io/library/, gcr.io/company-project/, registry.internal.company.com/
  3. Verify by attempting to create a pod with an image from quay.io/malicious/image -- it should be rejected
  4. Verify a pod with image docker.io/library/nginx:1.25 is accepted
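
A sketch of the template and constraint, following the pattern of the Gatekeeper library's k8sallowedrepos example; apply the template first and give Gatekeeper a moment to create the K8sAllowedRepos CRD before applying the constraint:

    kubectl apply -f - <<'EOF'
    apiVersion: templates.gatekeeper.sh/v1
    kind: ConstraintTemplate
    metadata:
      name: k8sallowedrepos
    spec:
      crd:
        spec:
          names:
            kind: K8sAllowedRepos
          validation:
            openAPIV3Schema:
              type: object
              properties:
                repos:
                  type: array
                  items:
                    type: string
      targets:
      - target: admission.k8s.gatekeeper.sh
        rego: |
          package k8sallowedrepos

          violation[{"msg": msg}] {
            container := input_containers[_]
            satisfied := [good | repo = input.parameters.repos[_]; good = startswith(container.image, repo)]
            not any(satisfied)
            msg := sprintf("container <%v> uses disallowed image <%v>", [container.name, container.image])
          }

          input_containers[c] { c := input.review.object.spec.containers[_] }
          input_containers[c] { c := input.review.object.spec.initContainers[_] }
    EOF
    kubectl apply -f - <<'EOF'
    apiVersion: constraints.gatekeeper.sh/v1beta1
    kind: K8sAllowedRepos
    metadata:
      name: allowed-repos
    spec:
      match:
        kinds:
        - apiGroups: [""]
          kinds: ["Pod"]
        excludedNamespaces: ["kube-system", "gatekeeper-system"]
      parameters:
        repos:
        - "docker.io/library/"
        - "gcr.io/company-project/"
        - "registry.internal.company.com/"
    EOF
    kubectl run bad-pod --image=quay.io/malicious/image --restart=Never      # expect a denial
    kubectl run good-pod --image=docker.io/library/nginx:1.25 --restart=Never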

Question 13

Weight: 7%
Difficulty: Hard
Domain: Supply Chain Security
Cluster: kubectl config use-context cluster1

Scenario

Multiple images in the production namespace need vulnerability assessment and remediation. You also need to set up continuous scanning.

Tasks

  1. Use trivy to scan the following images, saving only the CRITICAL vulnerabilities:
    • python:3.8-slim -> save to /tmp/python-scan.txt
    • node:16-alpine -> save to /tmp/node-scan.txt
    • postgres:13 -> save to /tmp/postgres-scan.txt
  2. The analytics-api deployment in production uses python:3.8-slim. Find the image with the fewest CRITICAL vulnerabilities from these options and update the deployment:
    • python:3.11-slim
    • python:3.12-alpine
    • cgr.dev/chainguard/python:latest
  3. Scan the chosen image and save the results to /tmp/analytics-scan-fixed.txt
  4. Use kubesec to scan the analytics-api deployment manifest and save the output to /tmp/kubesec-analytics.txt
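
A command sketch; the container name in the set image step is a placeholder to look up first, and the replacement image depends on what the candidate scans actually report:

    for img in python:3.8-slim node:16-alpine postgres:13; do
      trivy image --severity CRITICAL "$img" > "/tmp/${img%%:*}-scan.txt"
    done
    # compare the three candidates and keep whichever reports the fewest CRITICAL findings
    trivy image --severity CRITICAL python:3.11-slim | tail -n 3
    trivy image --severity CRITICAL python:3.12-alpine | tail -n 3
    trivy image --severity CRITICAL cgr.dev/chainguard/python:latest | tail -n 3
    kubectl -n production get deploy analytics-api -o jsonpath='{.spec.template.spec.containers[*].name}{"\n"}'
    kubectl -n production set image deploy/analytics-api <container-name>=<chosen-image>
    trivy image --severity CRITICAL <chosen-image> > /tmp/analytics-scan-fixed.txt
    kubectl -n production get deploy analytics-api -o yaml > /tmp/analytics-api.yaml
    kubesec scan /tmp/analytics-api.yaml > /tmp/kubesec-analytics.txt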

Question 14

Weight: 5%
Difficulty: Medium
Domain: Supply Chain Security
Cluster: kubectl config use-context cluster1

Scenario

Static analysis has identified security concerns in several pod specifications. You need to use kubesec to assess and improve them.

Tasks

  1. Export the pod specification of data-pipeline in the etl namespace to /tmp/data-pipeline.yaml
  2. Run kubesec scan on the exported file and save the result to /tmp/kubesec-data-pipeline.txt
  3. Based on the kubesec recommendations, modify the pod specification to achieve a score of at least 8 by adding:
    • runAsNonRoot: true
    • readOnlyRootFilesystem: true
    • runAsUser > 10000
    • capabilities.drop: ["ALL"]
    • Resource limits (CPU and memory)
    • A ServiceAccount that is not default
  4. Save the improved manifest to /tmp/data-pipeline-hardened.yaml
  5. Run kubesec scan on the hardened manifest and save the result to /tmp/kubesec-data-pipeline-hardened.txt to confirm the improved score
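
A command sketch around the two kubesec runs; the hardening itself is the edit described in the list above:

    kubectl -n etl get pod data-pipeline -o yaml > /tmp/data-pipeline.yaml
    kubesec scan /tmp/data-pipeline.yaml > /tmp/kubesec-data-pipeline.txt
    cp /tmp/data-pipeline.yaml /tmp/data-pipeline-hardened.yaml
    vim /tmp/data-pipeline-hardened.yaml       # apply the securityContext, limits and serviceAccountName changes
    kubesec scan /tmp/data-pipeline-hardened.yaml > /tmp/kubesec-data-pipeline-hardened.txt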

Question 15

Weight: 4%
Difficulty: Easy
Domain: Supply Chain Security
Cluster: kubectl config use-context cluster2

Scenario

An allowlist for container image registries needs to be enforced at the admission level using a simple validation webhook.

Tasks

  1. There is an existing ValidatingWebhookConfiguration named image-registry-validator in the cluster. Inspect it
  2. The webhook is currently configured to Ignore failures. Change the failurePolicy to Fail so that if the webhook is unavailable, pod creation is blocked
  3. Ensure the webhook matches on CREATE operations for pods only
  4. The webhook should NOT apply to the kube-system namespace. Add a namespaceSelector that excludes namespaces with the label kubernetes.io/metadata.name: kube-system
  5. Verify the webhook configuration is valid by describing it
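
A sketch of what the webhook entry should look like after editing (the existing clientConfig and CA bundle stay untouched):

    kubectl get validatingwebhookconfiguration image-registry-validator -o yaml
    kubectl edit validatingwebhookconfiguration image-registry-validator
    # per-webhook fields to end up with:
    #   failurePolicy: Fail
    #   rules:
    #   - operations: ["CREATE"]
    #     apiGroups: [""]
    #     apiVersions: ["v1"]
    #     resources: ["pods"]
    #   namespaceSelector:
    #     matchExpressions:
    #     - key: kubernetes.io/metadata.name
    #       operator: NotIn
    #       values: ["kube-system"]
    kubectl describe validatingwebhookconfiguration image-registry-validator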

Question 16

Weight: 5%
Difficulty: Hard
Domain: Monitoring, Logging & Runtime Security
Cluster: kubectl config use-context cluster1

Scenario

Suspicious activity has been detected in the cluster. You need to use audit logs and Falco to investigate and respond.

Tasks

  1. Audit logging is already enabled on cluster1. SSH into the control plane and examine the audit logs at /var/log/kubernetes/audit/audit.log
  2. Find all audit events where secrets were accessed (verb: get, list, or watch) in the last 100 lines of the log and save them to /tmp/secret-access-audit.txt
  3. Identify any requests from non-system users (users not starting with system:) that accessed secrets in the kube-system namespace and save them to /tmp/suspicious-secret-access.txt
  4. Check Falco alerts for any container that has:
    • Spawned a reverse shell
    • Modified files under /etc
    • Made unexpected network connections
  5. Save all suspicious Falco findings to /tmp/falco-findings.txt
  6. If any compromised pods are identified, delete them and create a NetworkPolicy in their namespace to prevent further data exfiltration (deny all egress except DNS)
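
A sketch of the log filtering, assuming the audit log is JSON-formatted and Falco runs as a systemd service; the grep patterns are illustrative and should be adjusted to the rule names that actually appear in the alerts:

    tail -n 100 /var/log/kubernetes/audit/audit.log \
      | grep '"resource":"secrets"' \
      | grep -E '"verb":"(get|list|watch)"' > /tmp/secret-access-audit.txt
    grep '"resource":"secrets"' /var/log/kubernetes/audit/audit.log \
      | grep '"namespace":"kube-system"' \
      | grep -v '"username":"system:' > /tmp/suspicious-secret-access.txt
    journalctl -u falco --no-pager \
      | grep -Ei 'shell|/etc|network' > /tmp/falco-findings.txt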

Question 17

Weight: 5%
Difficulty: Medium
Domain: Monitoring, Logging & Runtime Security
Cluster: kubectl config use-context cluster2

Scenario

Several containers in the cluster need to be made immutable. Additionally, you need to detect any containers that are not immutable.

Tasks

  1. The web-frontend deployment in the public namespace has containers that can write to their root filesystem. Make the containers immutable by:
    • Setting readOnlyRootFilesystem: true
    • Adding emptyDir volumes for /tmp, /var/cache/nginx, and /var/run
    • Setting allowPrivilegeEscalation: false
  2. The logging-agent DaemonSet in the monitoring namespace needs to write to /var/log. Configure it with:
    • readOnlyRootFilesystem: true
    • An emptyDir volume for /tmp
    • A hostPath volume for /var/log mounted as readOnly: false (the agent needs to write logs)
  3. Write a script at /tmp/find-mutable-containers.sh that finds all pods across all namespaces where readOnlyRootFilesystem is NOT set to true and outputs the namespace, pod name, and container name
  4. Run the script and save the output to /tmp/mutable-containers.txt
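
A sketch of the detection script for Tasks 3-4, assuming jq is available; the Deployment and DaemonSet changes in Tasks 1-2 are ordinary edits to securityContext plus the listed volumes and mounts:

    cat <<'EOF' > /tmp/find-mutable-containers.sh
    #!/bin/bash
    # Print namespace, pod and container for every container whose
    # readOnlyRootFilesystem is not explicitly set to true.
    kubectl get pods -A -o json | jq -r '
      .items[] as $p
      | $p.spec.containers[]
      | select((.securityContext.readOnlyRootFilesystem // false) | not)
      | "\($p.metadata.namespace) \($p.metadata.name) \(.name)"'
    EOF
    chmod +x /tmp/find-mutable-containers.sh
    /tmp/find-mutable-containers.sh > /tmp/mutable-containers.txt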

Scoring Summary

| Question | Domain | Weight | Difficulty |
| --- | --- | --- | --- |
| Q1 | Cluster Setup | 6% | Medium |
| Q2 | Cluster Setup | 7% | Hard |
| Q3 | Cluster Setup | 5% | Medium |
| Q4 | Cluster Hardening | 7% | Hard |
| Q5 | Cluster Hardening | 6% | Medium |
| Q6 | System Hardening | 7% | Hard |
| Q7 | System Hardening | 5% | Medium |
| Q8 | System Hardening | 5% | Medium |
| Q9 | Microservice Vulnerabilities | 7% | Hard |
| Q10 | Microservice Vulnerabilities | 6% | Medium |
| Q11 | Microservice Vulnerabilities | 7% | Hard |
| Q12 | Microservice Vulnerabilities | 6% | Medium |
| Q13 | Supply Chain Security | 7% | Hard |
| Q14 | Supply Chain Security | 5% | Medium |
| Q15 | Supply Chain Security | 4% | Easy |
| Q16 | Monitoring & Runtime | 5% | Hard |
| Q17 | Monitoring & Runtime | 5% | Medium |
| Total | | 100% | |

After Completing the Exam

  1. Score yourself honestly
  2. Compare your results with Mock Exam 1 -- did you improve?
  3. If you scored above 80% on both exams, you are likely ready for the real CKS exam
  4. Schedule the exam within 1-2 weeks while the material is fresh
