
Mock Exam 1 - Questions

Timed Exam

Set a timer for 2 hours before starting. Do not look at the solutions until the timer expires. Use only kubernetes.io documentation as a reference.

Exam Instructions

  • This exam contains 17 questions totaling 100 points
  • Passing score: 67 points
  • Each question specifies the cluster context to switch to
  • Read each question carefully -- missing a small detail can cost you the entire question
  • Flag difficult questions and return to them after completing easier ones

Question 1

Weight: 7%
Difficulty: Medium
Domain: Cluster Setup
Cluster: ssh cluster1-controlplane

Scenario

A CIS benchmark scan using kube-bench has identified several issues on the control plane node of cluster1. You need to fix the findings.

Tasks

  1. SSH into cluster1-controlplane and run kube-bench against the master node targets
  2. Fix the following findings:
    • Ensure that the --authorization-mode argument on the API server includes Node and RBAC (not just AlwaysAllow)
    • Ensure that the --profiling argument on the API server is set to false
    • Ensure that the --audit-log-path argument is set to /var/log/apiserver/audit.log
    • Ensure that the --audit-log-maxage argument is set to 30
  3. Verify the API server restarts successfully after your changes
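
On a kubeadm cluster, these findings are fixed by editing the kube-apiserver static pod manifest. A sketch of the relevant flags (the surrounding manifest is left as kubeadm generated it):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    - --authorization-mode=Node,RBAC
    - --profiling=false
    - --audit-log-path=/var/log/apiserver/audit.log
    - --audit-log-maxage=30
```

Saving the file causes the kubelet to restart the API server automatically. Note that writing to /var/log/apiserver/audit.log also requires a hostPath volume and matching volumeMount in the same manifest, since the API server runs in a container.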

Question 2

Weight: 4%
Difficulty: Easy
Domain: Cluster Setup
Cluster: kubectl config use-context cluster1

Scenario

The namespace payments currently has no network restrictions. Any pod can communicate with any other pod across all namespaces.

Tasks

  1. Create a default-deny-ingress NetworkPolicy in the payments namespace that denies all incoming traffic to all pods
  2. Create a default-deny-egress NetworkPolicy in the payments namespace that denies all outgoing traffic from all pods
  3. Create a NetworkPolicy named allow-payment-api in the payments namespace that:
    • Applies to pods with the label app: payment-api
    • Allows ingress from pods with the label app: frontend in the web namespace on port 8443
    • Allows egress to pods with the label app: payment-db in the payments namespace on port 5432
    • Allows egress to any destination on port 53 (UDP and TCP) for DNS resolution
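
As a sketch, the allow-payment-api policy from task 3 might look like this (it assumes the web namespace carries the standard kubernetes.io/metadata.name label, which is set automatically on Kubernetes v1.21+):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-payment-api
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: payment-api
  policyTypes: ["Ingress", "Egress"]
  ingress:
  - from:
    # namespaceSelector and podSelector in the same entry are ANDed:
    # only frontend pods in the web namespace match
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: web
      podSelector:
        matchLabels:
          app: frontend
    ports:
    - port: 8443
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: payment-db
    ports:
    - port: 5432
  # no "to" clause: DNS is allowed to any destination
  - ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
```

Keeping namespaceSelector and podSelector in a single from entry is the key detail: splitting them into two list items would OR the selectors and allow far more traffic than intended.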

Question 3

Weight: 8%
Difficulty: Hard
Domain: Cluster Hardening
Cluster: kubectl config use-context cluster1

Scenario

The cluster has an overly permissive RBAC configuration. A ServiceAccount named deployment-manager in the dev namespace currently has cluster-admin privileges granted through a ClusterRoleBinding named dev-admin-binding.

Tasks

  1. Delete the ClusterRoleBinding dev-admin-binding
  2. Create a new Role named deployment-manager-role in the dev namespace with the following permissions:
    • deployments (apps group): get, list, watch, create, update, patch
    • replicasets (apps group): get, list, watch
    • pods: get, list, watch, delete
    • services: get, list
    • configmaps: get, list
    • secrets: get (only)
  3. Create a RoleBinding named deployment-manager-binding in the dev namespace that binds the deployment-manager-role to the ServiceAccount deployment-manager
  4. Verify the ServiceAccount can create deployments but cannot delete secrets
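
The Role in task 2 translates directly into rules, one per resource/verb group:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-manager-role
  namespace: dev
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
- apiGroups: ["apps"]
  resources: ["replicasets"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]            # core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch", "delete"]
- apiGroups: [""]
  resources: ["services", "configmaps"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
```

The verification in task 4 can be done with impersonation, e.g. kubectl auth can-i create deployments -n dev --as=system:serviceaccount:dev:deployment-manager (expect "yes") and the same command with delete secrets (expect "no").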

Question 4

Weight: 6%
Difficulty: Medium
Domain: Cluster Hardening
Cluster: kubectl config use-context cluster2

Scenario

Several ServiceAccounts in the production namespace have automounted API tokens that are not needed. Additionally, the legacy-app deployment is using the default ServiceAccount.

Tasks

  1. Create a new ServiceAccount named legacy-app-sa in the production namespace with automountServiceAccountToken: false
  2. Modify the default ServiceAccount in the production namespace to set automountServiceAccountToken: false
  3. Update the legacy-app deployment in the production namespace to use the legacy-app-sa ServiceAccount
  4. Ensure that the deployment rolls out successfully with the new ServiceAccount
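
A minimal sketch of the ServiceAccount from task 1; note that automountServiceAccountToken sits at the top level of the object, not under metadata:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: legacy-app-sa
  namespace: production
automountServiceAccountToken: false
```

Task 3 can then be done declaratively (set spec.template.spec.serviceAccountName) or with kubectl -n production set serviceaccount deployment legacy-app legacy-app-sa, which triggers the rollout required by task 4.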

Question 5

Weight: 4%
Difficulty: Easy
Domain: Cluster Hardening
Cluster: ssh cluster2-controlplane

Scenario

The Kubernetes version on cluster2 needs to be upgraded from its current version to the next minor release to address known security vulnerabilities.

Tasks

  1. SSH into cluster2-controlplane and determine the current Kubernetes version
  2. Upgrade the control plane components (kubeadm, kubelet, kubectl) to the next available minor version
  3. Verify that all control plane components are running the new version
  4. Ensure the node shows as Ready after the upgrade

TIP

Use apt-cache madison kubeadm to list the available versions. Remember to drain the node before upgrading the kubelet and to uncordon it afterwards.


Question 6

Weight: 7%
Difficulty: Hard
Domain: System Hardening
Cluster: ssh cluster1-node01

Scenario

A container running on cluster1-node01 requires an AppArmor profile to restrict its capabilities. The profile needs to prevent the container from writing to the filesystem except for specific paths.

Tasks

  1. SSH into cluster1-node01
  2. Create an AppArmor profile named k8s-restricted-write at /etc/apparmor.d/k8s-restricted-write that:
    • Allows read access to all files
    • Allows write access only to /tmp/** and /var/log/app/**
    • Denies write access to all other paths
    • Allows network access
  3. Load the profile using apparmor_parser
  4. Verify the profile is loaded with aa-status
  5. Switch context to cluster1 and update the pod named restricted-app in the secure namespace to use this AppArmor profile
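
A sketch of the profile for task 2. AppArmor denies anything not explicitly allowed, so no deny rule is needed for "all other paths" (an explicit deny would in fact take precedence over the /tmp and /var/log/app allows):

```
# /etc/apparmor.d/k8s-restricted-write -- a sketch; the included
# abstractions are an assumption and may need tuning per workload
#include <tunables/global>

profile k8s-restricted-write flags=(attach_disconnected) {
  #include <abstractions/base>

  # allow network access
  network,

  # read access to all files
  /** r,

  # write access only under these paths; everything else is
  # denied by AppArmor's default-deny behavior
  /tmp/** rw,
  /var/log/app/** rw,
}
```

Load it with apparmor_parser -r /etc/apparmor.d/k8s-restricted-write. For task 5, Kubernetes v1.30+ uses securityContext.appArmorProfile with type: Localhost and localhostProfile: k8s-restricted-write; older clusters use the container.apparmor.security.beta.kubernetes.io/<container-name> annotation instead.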

Question 7

Weight: 5%
Difficulty: Medium
Domain: System Hardening
Cluster: kubectl config use-context cluster1

Scenario

A custom seccomp profile is required for the audit-logger pod in the monitoring namespace. The profile should restrict system calls to only those necessary for the application.

Tasks

  1. Create a seccomp profile at /var/lib/kubelet/seccomp/profiles/audit-logger.json on the node where the pod will run with the following configuration:
    • Default action: SCMP_ACT_ERRNO
    • Allow these syscalls: read, write, open, close, stat, fstat, lstat, poll, lseek, mmap, mprotect, munmap, brk, rt_sigaction, rt_sigprocmask, ioctl, access, pipe, select, sched_yield, mremap, msync, mincore, madvise, shmget, shmat, shmctl, dup, dup2, pause, nanosleep, getpid, socket, connect, accept, sendto, recvfrom, bind, listen, getsockname, getpeername, clone, execve, exit, wait4, kill, uname, fcntl, flock, fsync, fdatasync, getcwd, readlink, getuid, getgid, geteuid, getegid, getppid, getpgrp, setsid, arch_prctl, exit_group, openat, newfstatat, set_tid_address, set_robust_list, futex, epoll_create1, epoll_ctl, epoll_wait, getrandom, close_range, pread64, pwrite64, writev, readv, sigaltstack, rt_sigreturn, getdents64, clock_gettime, clock_nanosleep, sysinfo, prctl, rseq
  2. Update the audit-logger pod specification to use this seccomp profile with localhostProfile
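
The profile file follows the standard seccomp JSON shape; the sketch below abridges the names array, which should contain every syscall listed in task 1:

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "open", "close", "stat", "fstat", "exit_group", "futex"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

In the pod spec, localhostProfile is given relative to /var/lib/kubelet/seccomp, so task 2 needs securityContext.seccompProfile with type: Localhost and localhostProfile: profiles/audit-logger.json. Pods are immutable in this field, so the pod typically has to be recreated rather than patched in place.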

Question 8

Weight: 6%
Difficulty: Medium
Domain: System Hardening
Cluster: ssh cluster1-controlplane

Scenario

The control plane node has unnecessary services running and ports exposed. You need to reduce the attack surface.

Tasks

  1. SSH into cluster1-controlplane
  2. Identify and stop the following unnecessary services: apache2, rpcbind
  3. Disable the services so they do not start on boot
  4. Find all processes listening on ports that are NOT standard Kubernetes or system ports (22, 53, 2379, 2380, 6443, 10250, 10257, 10259) and note them
  5. Remove any packages associated with the unnecessary services: apache2, rpcbind
  6. Ensure that the node still functions correctly as a control plane node after your changes

Question 9

Weight: 7%
Difficulty: Hard
Domain: Minimize Microservice Vulnerabilities
Cluster: kubectl config use-context cluster1

Scenario

The api-gateway deployment in the frontend namespace is running containers with excessive privileges. You need to harden the pod security configuration.

Tasks

  1. Modify the api-gateway deployment to enforce the following security context at the pod level:
    • runAsNonRoot: true
    • runAsUser: 1000
    • runAsGroup: 3000
    • fsGroup: 2000
    • seccompProfile type: RuntimeDefault
  2. Add the following security context at the container level for all containers:
    • allowPrivilegeEscalation: false
    • readOnlyRootFilesystem: true
    • capabilities.drop: ["ALL"]
    • capabilities.add: ["NET_BIND_SERVICE"]
  3. Add the following volumeMounts and volumes to allow the application to write to necessary paths:
    • An emptyDir volume named tmp-dir mounted at /tmp
    • An emptyDir volume named cache-dir mounted at /var/cache
  4. Verify the deployment rolls out successfully with all pods running
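
Assembled into the deployment's pod template, the tasks above look roughly like this (the container name is a placeholder; apply the container-level block to every container in the deployment):

```yaml
# pod template excerpt from the api-gateway deployment
spec:
  securityContext:                 # pod level
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: api-gateway              # hypothetical container name
    securityContext:               # container level
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
        add: ["NET_BIND_SERVICE"]
    volumeMounts:
    - name: tmp-dir
      mountPath: /tmp
    - name: cache-dir
      mountPath: /var/cache
  volumes:
  - name: tmp-dir
    emptyDir: {}
  - name: cache-dir
    emptyDir: {}
```

The emptyDir mounts are what keep the pods running once readOnlyRootFilesystem is enforced: writes anywhere outside /tmp and /var/cache will fail.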

Question 10

Weight: 7%
Difficulty: Hard
Domain: Minimize Microservice Vulnerabilities
Cluster: kubectl config use-context cluster2

Scenario

The data-processor namespace needs to enforce Pod Security Standards. Currently, there are no restrictions on what pods can be deployed.

Tasks

  1. Label the data-processor namespace to enforce the restricted Pod Security Standard in enforce mode
  2. Label the namespace to use the restricted standard in warn mode for version latest
  3. Label the namespace to use the restricted standard in audit mode for version latest
  4. There is an existing pod legacy-processor in the namespace that violates the restricted standard. Create a copy of this pod's manifest, fix all security violations to make it compliant with the restricted standard, save it as /tmp/fixed-legacy-processor.yaml, and apply it
  5. Verify the fixed pod runs successfully under the enforced policy
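
Tasks 1 through 3 are just namespace labels; as a sketch:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: data-processor
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: latest
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/audit-version: latest
```

The same result comes from kubectl label ns data-processor pod-security.kubernetes.io/enforce=restricted (and so on for each label). For task 4, the usual violations to fix are a missing runAsNonRoot, allowPrivilegeEscalation not set to false, capabilities not dropping ALL, and a missing seccompProfile.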

Question 11

Weight: 6%
Difficulty: Medium
Domain: Minimize Microservice Vulnerabilities
Cluster: kubectl config use-context cluster1

Scenario

An OPA Gatekeeper ConstraintTemplate and Constraint are needed to enforce a policy that prevents containers from running as root.

Tasks

  1. OPA Gatekeeper is already installed in the cluster. Create a ConstraintTemplate named k8spspreventroot that:
    • Checks if spec.containers[].securityContext.runAsNonRoot is set to true
    • Also checks spec.initContainers[] if present
    • Provides a descriptive violation message including the container name
  2. Create a Constraint named prevent-root-containers using the template that:
    • Applies to Pod resources
    • Applies to namespaces with the label enforce-security: "true"
    • Excludes the kube-system namespace
  3. Label the production namespace with enforce-security: "true"
  4. Verify that creating a pod without runAsNonRoot: true in the production namespace is rejected
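
A sketch of the template and constraint pair; the Rego treats a missing securityContext the same as runAsNonRoot being false or unset:

```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8spspreventroot
spec:
  crd:
    spec:
      names:
        kind: K8sPSPPreventRoot
  targets:
  - target: admission.k8s.gatekeeper.sh
    rego: |
      package k8spspreventroot

      violation[{"msg": msg}] {
        c := input.review.object.spec.containers[_]
        not c.securityContext.runAsNonRoot == true
        msg := sprintf("container %v must set runAsNonRoot: true", [c.name])
      }

      violation[{"msg": msg}] {
        c := input.review.object.spec.initContainers[_]
        not c.securityContext.runAsNonRoot == true
        msg := sprintf("initContainer %v must set runAsNonRoot: true", [c.name])
      }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPPreventRoot
metadata:
  name: prevent-root-containers
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
    namespaceSelector:
      matchLabels:
        enforce-security: "true"
    excludedNamespaces: ["kube-system"]
```

The ConstraintTemplate's metadata.name must be the lowercased CRD kind, which is why it is k8spspreventroot while the Constraint's kind is K8sPSPPreventRoot.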

Question 12

Weight: 7%
Difficulty: Hard
Domain: Minimize Microservice Vulnerabilities
Cluster: kubectl config use-context cluster2

Scenario

The application team needs to manage secrets more securely. Currently, secrets in the finance namespace are stored as plain base64-encoded values and there is no encryption at rest configured.

Tasks

  1. SSH into cluster2-controlplane and create an EncryptionConfiguration at /etc/kubernetes/pki/encryption-config.yaml that:
    • Uses aescbc as the encryption provider
    • Uses an encryption key named finance-secrets-key
    • Uses a 32-byte base64-encoded encryption key (generate one)
    • Applies to secrets resources
    • Also has the identity provider as a fallback
  2. Configure the API server to use this encryption configuration by adding the --encryption-provider-config flag
  3. Wait for the API server to restart
  4. Create a new secret named db-credentials in the finance namespace with key password and value S3cur3P@ssw0rd!
  5. Verify the secret is encrypted at rest by reading it directly from etcd
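
A sketch of the EncryptionConfiguration from task 1; provider order matters, since the first provider is used for writes and the identity fallback lets existing plaintext secrets still be read:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources: ["secrets"]
  providers:
  - aescbc:
      keys:
      - name: finance-secrets-key
        secret: <base64-key>   # generate with: head -c 32 /dev/urandom | base64
  - identity: {}
```

For task 5, reading /registry/secrets/finance/db-credentials directly with etcdctl (using etcd's client certificates) should show a k8s:enc:aescbc:v1: prefix instead of readable data.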

Question 13

Weight: 6%
Difficulty: Medium
Domain: Supply Chain Security
Cluster: kubectl config use-context cluster1

Scenario

Several container images in the staging namespace need to be scanned for vulnerabilities. You must identify and fix critical security issues.

Tasks

  1. Use trivy to scan the image nginx:1.19.0 and save the output showing only CRITICAL and HIGH vulnerabilities to /tmp/nginx-scan.txt
  2. Use trivy to scan the image redis:6.0.5 and save the output showing only CRITICAL vulnerabilities to /tmp/redis-scan.txt
  3. The web-server deployment in the staging namespace is using nginx:1.19.0. Update it to use nginx:1.25-alpine which has fewer vulnerabilities
  4. The cache deployment in the staging namespace is using redis:6.0.5. Update it to use redis:7-alpine
  5. Verify both deployments are running successfully with the updated images

Question 14

Weight: 6%
Difficulty: Medium
Domain: Supply Chain Security
Cluster: kubectl config use-context cluster1

Scenario

The cluster needs to restrict which container registries are allowed. An ImagePolicyWebhook admission controller needs to be configured.

Tasks

  1. An ImagePolicyWebhook backend service is already running at https://image-policy.kube-system.svc:8443/validate
  2. Create the admission control configuration file at /etc/kubernetes/admission/image-policy-config.yaml that:
    • Sets the defaultAllow to false (deny images when the webhook is unreachable)
    • Configures the webhook to use the kubeconfig at /etc/kubernetes/admission/image-policy-kubeconfig.yaml
  3. Create the kubeconfig file at /etc/kubernetes/admission/image-policy-kubeconfig.yaml that points to the webhook service
  4. Enable the ImagePolicyWebhook admission plugin in the API server by:
    • Adding ImagePolicyWebhook to the --enable-admission-plugins flag
    • Setting --admission-control-config-file to point to your configuration
  5. Verify the API server restarts successfully
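
The admission control configuration file from task 2 might look like this (the TTL and backoff values are typical defaults, not required by the task):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ImagePolicyWebhook
  configuration:
    imagePolicy:
      kubeConfigFile: /etc/kubernetes/admission/image-policy-kubeconfig.yaml
      allowTTL: 50
      denyTTL: 50
      retryBackoff: 500
      defaultAllow: false
```

Remember that the API server container must be able to read both files, so the /etc/kubernetes/admission directory needs a hostPath volume and volumeMount in the kube-apiserver static pod manifest alongside the two new flags.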

Question 15

Weight: 4%
Difficulty: Easy
Domain: Supply Chain Security
Cluster: kubectl config use-context cluster2

Scenario

A Dockerfile at /root/Dockerfile is used to build an application image. The Dockerfile has several security best practices violations.

Tasks

  1. Review the Dockerfile at /root/Dockerfile on cluster2-controlplane
  2. Fix the following security issues:
    • The image is using the latest tag -- change to a specific version
    • The container runs as root -- add a non-root user and switch to it
    • The image uses ADD for remote URLs -- change to COPY where possible
    • Remove any unnecessary RUN commands that install debug tools (like curl, wget, netcat)
    • Use a multi-stage build to reduce the final image size
  3. Save the fixed Dockerfile to /root/Dockerfile-fixed
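
The shape of a fixed Dockerfile, assuming a hypothetical Go application (the base images, build command, and binary name are placeholders; match them to the actual /root/Dockerfile):

```dockerfile
# Stage 1: build with the full toolchain (pinned version, not "latest")
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN go build -o /app ./...

# Stage 2: minimal runtime image, no debug tools installed
FROM alpine:3.20
RUN addgroup -S app && adduser -S app -G app
COPY --from=build /app /usr/local/bin/app
USER app
ENTRYPOINT ["/usr/local/bin/app"]
```

The pattern covers all five findings at once: pinned tags, COPY instead of ADD, no curl/wget/netcat layers, a non-root USER, and a multi-stage build that leaves the toolchain behind.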

Question 16

Weight: 5%
Difficulty: Medium
Domain: Monitoring, Logging & Runtime Security
Cluster: kubectl config use-context cluster1

Scenario

An audit policy needs to be configured for the Kubernetes API server to log security-relevant events.

Tasks

  1. Create an audit policy at /etc/kubernetes/audit/audit-policy.yaml on the control plane node with the following rules (in order):
    • Do not log requests to the system:kube-controller-manager, system:kube-scheduler, or system:kube-proxy users
    • Log Secret, ConfigMap, and TokenReview resources at the Metadata level
    • Log all resources in the authentication.k8s.io group at the RequestResponse level
    • Log pod exec and attach subresources at the RequestResponse level
    • Log all resources in core and apps groups at the Request level
    • Log everything else at the Metadata level
  2. Configure the API server to use this audit policy with:
    • --audit-policy-file pointing to the policy
    • --audit-log-path set to /var/log/kubernetes/audit/audit.log
    • --audit-log-maxage set to 7
    • --audit-log-maxbackup set to 3
    • --audit-log-maxsize set to 100
  3. Add the necessary volume and volumeMount to the API server static pod manifest
  4. Verify the API server restarts and audit logs are being generated
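
Audit policy rules are evaluated in order and the first match wins, so the policy mirrors the task list directly:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: None
  users:
  - system:kube-controller-manager
  - system:kube-scheduler
  - system:kube-proxy
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets", "configmaps"]
  - group: "authentication.k8s.io"
    resources: ["tokenreviews"]
- level: RequestResponse
  resources:
  - group: "authentication.k8s.io"   # no resources list = whole group
- level: RequestResponse
  resources:
  - group: ""
    resources: ["pods/exec", "pods/attach"]
- level: Request
  resources:
  - group: ""
  - group: "apps"
- level: Metadata                    # catch-all for everything else
```

Because TokenReview sits in authentication.k8s.io, the Metadata rule listing it must appear before the group-wide RequestResponse rule, exactly as the tasks order them; likewise the exec/attach rule must precede the core-group rule or it will never match.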

Question 17

Weight: 5%
Difficulty: Hard
Domain: Monitoring, Logging & Runtime Security
Cluster: kubectl config use-context cluster1

Scenario

Falco is installed on the cluster nodes and is detecting suspicious activity. You need to create custom rules and investigate alerts.

Tasks

  1. SSH into cluster1-controlplane where Falco is installed
  2. Create a custom Falco rule file at /etc/falco/rules.d/custom-rules.yaml with the following rules:
    • Rule 1: Alert when any process opens a shell (/bin/sh, /bin/bash, /usr/bin/sh, /usr/bin/bash) inside a container
      • Priority: WARNING
      • Output should include: time, container name, container ID, user, shell path, and parent process
    • Rule 2: Alert when any file under /etc is modified inside a container
      • Priority: ERROR
      • Output should include: time, container name, file path, user, and process name
    • Rule 3: Alert when an outbound network connection is made from a container to a port other than 80 or 443
      • Priority: NOTICE
      • Output should include: time, container name, destination IP, destination port, and process name
  3. Restart Falco to load the new rules
  4. Verify the rules are loaded by checking Falco's log output
  5. On cluster1, identify the pod in the compromised namespace that has been flagged by Falco for spawning a shell process and delete it
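
A sketch of Rule 1; Rules 2 and 3 follow the same shape with different conditions and output fields. The condition is written out in full rather than relying on macros from the default ruleset:

```yaml
# /etc/falco/rules.d/custom-rules.yaml (Rule 1 only)
- rule: Shell Spawned in Container
  desc: Detect a shell binary executed inside any container
  condition: >
    evt.type = execve and evt.dir = < and container.id != host and
    proc.exepath in (/bin/sh, /bin/bash, /usr/bin/sh, /usr/bin/bash)
  output: >
    Shell spawned in container (time=%evt.time container=%container.name
    id=%container.id user=%user.name shell=%proc.exepath parent=%proc.pname)
  priority: WARNING
```

After restarting Falco (systemctl restart falco on most installs), its startup log lists the loaded rule files, which covers the verification in task 4.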

Scoring Summary

Question  Domain                          Weight  Difficulty
Q1        Cluster Setup                   7%      Medium
Q2        Cluster Setup                   4%      Easy
Q3        Cluster Hardening               8%      Hard
Q4        Cluster Hardening               6%      Medium
Q5        Cluster Hardening               4%      Easy
Q6        System Hardening                7%      Hard
Q7        System Hardening                5%      Medium
Q8        System Hardening                6%      Medium
Q9        Microservice Vulnerabilities    7%      Hard
Q10       Microservice Vulnerabilities    7%      Hard
Q11       Microservice Vulnerabilities    6%      Medium
Q12       Microservice Vulnerabilities    7%      Hard
Q13       Supply Chain Security           6%      Medium
Q14       Supply Chain Security           6%      Medium
Q15       Supply Chain Security           4%      Easy
Q16       Monitoring & Runtime            5%      Medium
Q17       Monitoring & Runtime            5%      Hard
Total                                     100%

After Completing the Exam

  1. Score yourself honestly using the rubric on the index page
  2. Review ALL solutions, even for questions you answered correctly
  3. Note your weakest domains and allocate extra study time
  4. Wait at least 48 hours before attempting Mock Exam 2

Released under the MIT License.