Mock Exam 1 - Questions
Timed Exam
Set a timer for 2 hours before starting. Do not look at the solutions until the timer expires. Use only kubernetes.io documentation as a reference.
Exam Instructions
- This exam contains 17 questions totaling 100 points
- Passing score: 67 points
- Each question specifies the cluster context to switch to
- Read each question carefully -- missing a small detail can cost you the entire question
- Flag difficult questions and return to them after completing easier ones
Question 1
| Weight | 7% |
| Difficulty | Medium |
| Domain | Cluster Setup |
| Cluster | ssh cluster1-controlplane |
Scenario
A CIS benchmark scan using kube-bench has identified several issues on the control plane node of cluster1. You need to fix the findings.
Tasks
- SSH into `cluster1-controlplane` and run `kube-bench` against the master node targets
- Fix the following findings:
  - Ensure that the `--authorization-mode` argument on the API server includes `Node` and `RBAC` (not just `AlwaysAllow`)
  - Ensure that the `--profiling` argument on the API server is set to `false`
  - Ensure that the `--audit-log-path` argument is set to `/var/log/apiserver/audit.log`
  - Ensure that the `--audit-log-maxage` argument is set to `30`
- Verify the API server restarts successfully after your changes
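The flag changes can be sketched as edits to the API server static pod manifest (the path below assumes a default kubeadm layout; the audit-log path also needs a matching hostPath volume and volumeMount so the log survives on the node):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    - --authorization-mode=Node,RBAC                 # replaces AlwaysAllow
    - --profiling=false
    - --audit-log-path=/var/log/apiserver/audit.log  # requires a hostPath mount
    - --audit-log-maxage=30
```

Saving the manifest causes the kubelet to recreate the API server pod automatically; watch `crictl ps` or `kubectl get pods -n kube-system` to confirm it comes back.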
Question 2
| Weight | 4% |
| Difficulty | Easy |
| Domain | Cluster Setup |
| Cluster | kubectl config use-context cluster1 |
Scenario
The namespace payments currently has no network restrictions. Any pod can communicate with any other pod across all namespaces.
Tasks
- Create a `default-deny-ingress` NetworkPolicy in the `payments` namespace that denies all incoming traffic to all pods
- Create a `default-deny-egress` NetworkPolicy in the `payments` namespace that denies all outgoing traffic from all pods
- Create a NetworkPolicy named `allow-payment-api` in the `payments` namespace that:
  - Applies to pods with the label `app: payment-api`
  - Allows ingress from pods with the label `app: frontend` in the `web` namespace on port `8443`
  - Allows egress to pods with the label `app: payment-db` in the `payments` namespace on port `5432`
  - Allows egress to any destination on port `53` (UDP and TCP) for DNS resolution
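One possible shape for these policies (the `kubernetes.io/metadata.name` label used to select the `web` namespace is set automatically on namespaces in current Kubernetes releases):

```yaml
# Default-deny: an empty podSelector matches every pod in the namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: payments
spec:
  podSelector: {}
  policyTypes: ["Ingress"]
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: payments
spec:
  podSelector: {}
  policyTypes: ["Egress"]
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-payment-api
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: payment-api
  policyTypes: ["Ingress", "Egress"]
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: web
      podSelector:
        matchLabels:
          app: frontend
    ports:
    - port: 8443
  egress:
  - to:
    - podSelector:           # same-namespace pods by default
        matchLabels:
          app: payment-db
    ports:
    - port: 5432
  - ports:                   # DNS to any destination
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
```

Note that `namespaceSelector` and `podSelector` inside the same `from` element combine with AND; as separate list elements they would combine with OR.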
Question 3
| Weight | 8% |
| Difficulty | Hard |
| Domain | Cluster Hardening |
| Cluster | kubectl config use-context cluster1 |
Scenario
The cluster has an overly permissive RBAC configuration. A ServiceAccount named deployment-manager in the dev namespace currently has cluster-admin privileges granted through a ClusterRoleBinding named dev-admin-binding.
Tasks
- Delete the ClusterRoleBinding `dev-admin-binding`
- Create a new Role named `deployment-manager-role` in the `dev` namespace with the following permissions:
  - `deployments` (apps group): `get`, `list`, `watch`, `create`, `update`, `patch`
  - `replicasets` (apps group): `get`, `list`, `watch`
  - `pods`: `get`, `list`, `watch`, `delete`
  - `services`: `get`, `list`
  - `configmaps`: `get`, `list`
  - `secrets`: `get` (only)
- Create a RoleBinding named `deployment-manager-binding` in the `dev` namespace that binds the `deployment-manager-role` to the ServiceAccount `deployment-manager`
- Verify the ServiceAccount can create deployments but cannot delete secrets
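The Role and RoleBinding could look like this; the verification step maps to `kubectl auth can-i create deployments --as=system:serviceaccount:dev:deployment-manager -n dev` (and the corresponding `delete secrets` check, which should return `no`):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-manager-role
  namespace: dev
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
- apiGroups: ["apps"]
  resources: ["replicasets"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch", "delete"]
- apiGroups: [""]
  resources: ["services", "configmaps"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-manager-binding
  namespace: dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: deployment-manager-role
subjects:
- kind: ServiceAccount
  name: deployment-manager
  namespace: dev
```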
Question 4
| Weight | 6% |
| Difficulty | Medium |
| Domain | Cluster Hardening |
| Cluster | kubectl config use-context cluster2 |
Scenario
Several ServiceAccounts in the production namespace have automounted API tokens that are not needed. Additionally, the legacy-app deployment is using the default ServiceAccount.
Tasks
- Create a new ServiceAccount named `legacy-app-sa` in the `production` namespace with `automountServiceAccountToken: false`
- Modify the `default` ServiceAccount in the `production` namespace to set `automountServiceAccountToken: false`
- Update the `legacy-app` deployment in the `production` namespace to use the `legacy-app-sa` ServiceAccount
- Ensure that the deployment rolls out successfully with the new ServiceAccount
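A sketch of the new ServiceAccount; note that `automountServiceAccountToken` sits at the top level of the manifest, not under `metadata`:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: legacy-app-sa
  namespace: production
automountServiceAccountToken: false
```

The deployment then references it via `spec.template.spec.serviceAccountName: legacy-app-sa`, which triggers a new rollout when applied.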
Question 5
| Weight | 4% |
| Difficulty | Easy |
| Domain | Cluster Hardening |
| Cluster | ssh cluster2-controlplane |
Scenario
The Kubernetes version on cluster2 needs to be upgraded from its current version to the next minor release to address known security vulnerabilities.
Tasks
- SSH into `cluster2-controlplane` and determine the current Kubernetes version
- Upgrade the control plane components (kubeadm, kubelet, kubectl) to the next available minor version
- Verify that all control plane components are running the new version
- Ensure the node shows as `Ready` after the upgrade
TIP
Use `apt-cache madison kubeadm` to find available versions. Remember to drain the node before upgrading and uncordon it afterwards.
Question 6
| Weight | 7% |
| Difficulty | Hard |
| Domain | System Hardening |
| Cluster | ssh cluster1-node01 |
Scenario
A container running on cluster1-node01 requires an AppArmor profile to restrict its capabilities. The profile needs to prevent the container from writing to the filesystem except for specific paths.
Tasks
- SSH into `cluster1-node01`
- Create an AppArmor profile named `k8s-restricted-write` at `/etc/apparmor.d/k8s-restricted-write` that:
  - Allows read access to all files
  - Allows write access only to `/tmp/**` and `/var/log/app/**`
  - Denies write access to all other paths
  - Allows network access
- Load the profile using `apparmor_parser`
- Verify the profile is loaded with `aa-status`
- Switch context to `cluster1` and update the pod named `restricted-app` in the `secure` namespace to use this AppArmor profile
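A sketch of such a profile. AppArmor is default-deny in enforce mode, so simply omitting broad write rules satisfies the "deny all other writes" requirement; an explicit `deny /** w,` rule would be wrong here, because deny rules take precedence and would override the `/tmp` allow:

```
#include <tunables/global>

profile k8s-restricted-write flags=(attach_disconnected) {
  #include <abstractions/base>

  network,                 # allow network access

  /** r,                   # read anywhere
  /tmp/** rw,              # writable paths only
  /var/log/app/** rw,
}
```

On recent clusters the pod references the profile via `securityContext.appArmorProfile` with `type: Localhost` and `localhostProfile: k8s-restricted-write`; older clusters use the `container.apparmor.security.beta.kubernetes.io/<container-name>: localhost/k8s-restricted-write` annotation.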
Question 7
| Weight | 5% |
| Difficulty | Medium |
| Domain | System Hardening |
| Cluster | kubectl config use-context cluster1 |
Scenario
A custom seccomp profile is required for the audit-logger pod in the monitoring namespace. The profile should restrict system calls to only those necessary for the application.
Tasks
- Create a seccomp profile at `/var/lib/kubelet/seccomp/profiles/audit-logger.json` on the node where the pod will run with the following configuration:
  - Default action: `SCMP_ACT_ERRNO`
  - Allow these syscalls: `read`, `write`, `open`, `close`, `stat`, `fstat`, `lstat`, `poll`, `lseek`, `mmap`, `mprotect`, `munmap`, `brk`, `rt_sigaction`, `rt_sigprocmask`, `ioctl`, `access`, `pipe`, `select`, `sched_yield`, `mremap`, `msync`, `mincore`, `madvise`, `shmget`, `shmat`, `shmctl`, `dup`, `dup2`, `pause`, `nanosleep`, `getpid`, `socket`, `connect`, `accept`, `sendto`, `recvfrom`, `bind`, `listen`, `getsockname`, `getpeername`, `clone`, `execve`, `exit`, `wait4`, `kill`, `uname`, `fcntl`, `flock`, `fsync`, `fdatasync`, `getcwd`, `readlink`, `getuid`, `getgid`, `geteuid`, `getegid`, `getppid`, `getpgrp`, `setsid`, `arch_prctl`, `exit_group`, `openat`, `newfstatat`, `set_tid_address`, `set_robust_list`, `futex`, `epoll_create1`, `epoll_ctl`, `epoll_wait`, `getrandom`, `close_range`, `pread64`, `pwrite64`, `writev`, `readv`, `sigaltstack`, `rt_sigreturn`, `getdents64`, `clock_gettime`, `clock_nanosleep`, `sysinfo`, `prctl`, `rseq`
- Update the `audit-logger` pod specification to use this seccomp profile with `localhostProfile`
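The profile skeleton looks like this (the `names` array is abbreviated here; in the actual file it takes the complete syscall list from the task):

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "open", "close", "stat", "fstat"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

The pod then references it with a path relative to the kubelet's seccomp root (`/var/lib/kubelet/seccomp`):

```yaml
securityContext:
  seccompProfile:
    type: Localhost
    localhostProfile: profiles/audit-logger.json
```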
Question 8
| Weight | 6% |
| Difficulty | Medium |
| Domain | System Hardening |
| Cluster | ssh cluster1-controlplane |
Scenario
The control plane node has unnecessary services running and ports exposed. You need to reduce the attack surface.
Tasks
- SSH into `cluster1-controlplane`
- Identify and stop the following unnecessary services: `apache2`, `rpcbind`
- Disable the services so they do not start on boot
- Find all processes listening on ports that are NOT standard Kubernetes or system ports (22, 53, 2379, 2380, 6443, 10250, 10257, 10259) and note them
- Remove any packages associated with the unnecessary services: `apache2`, `rpcbind`
- Ensure that the node still functions correctly as a control plane node after your changes
Question 9
| Weight | 7% |
| Difficulty | Hard |
| Domain | Minimize Microservice Vulnerabilities |
| Cluster | kubectl config use-context cluster1 |
Scenario
The api-gateway deployment in the frontend namespace is running containers with excessive privileges. You need to harden the pod security configuration.
Tasks
- Modify the `api-gateway` deployment to enforce the following security context at the pod level:
  - `runAsNonRoot: true`
  - `runAsUser: 1000`
  - `runAsGroup: 3000`
  - `fsGroup: 2000`
  - `seccompProfile` type: `RuntimeDefault`
- Add the following security context at the container level for all containers:
  - `allowPrivilegeEscalation: false`
  - `readOnlyRootFilesystem: true`
  - `capabilities.drop: ["ALL"]`
  - `capabilities.add: ["NET_BIND_SERVICE"]`
- Add the following `volumeMounts` and `volumes` to allow the application to write to necessary paths:
  - An `emptyDir` volume named `tmp-dir` mounted at `/tmp`
  - An `emptyDir` volume named `cache-dir` mounted at `/var/cache`
- Verify the deployment rolls out successfully with all pods running
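The relevant part of the deployment spec could look like the following (the container name `api-gateway` is an assumption; use whatever names the existing deployment defines):

```yaml
spec:
  template:
    spec:
      securityContext:                  # pod-level
        runAsNonRoot: true
        runAsUser: 1000
        runAsGroup: 3000
        fsGroup: 2000
        seccompProfile:
          type: RuntimeDefault
      containers:
      - name: api-gateway               # assumed container name
        securityContext:                # container-level
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          capabilities:
            drop: ["ALL"]
            add: ["NET_BIND_SERVICE"]
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
        - name: cache-dir
          mountPath: /var/cache
      volumes:
      - name: tmp-dir
        emptyDir: {}
      - name: cache-dir
        emptyDir: {}
```

The emptyDir mounts exist precisely because `readOnlyRootFilesystem: true` would otherwise break any path the application writes to.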
Question 10
| Weight | 7% |
| Difficulty | Hard |
| Domain | Minimize Microservice Vulnerabilities |
| Cluster | kubectl config use-context cluster2 |
Scenario
The data-processor namespace needs to enforce Pod Security Standards. Currently, there are no restrictions on what pods can be deployed.
Tasks
- Label the `data-processor` namespace to enforce the `restricted` Pod Security Standard in `enforce` mode
- Label the namespace to use the `restricted` standard in `warn` mode for version `latest`
- Label the namespace to use the `restricted` standard in `audit` mode for version `latest`
- There is an existing pod `legacy-processor` in the namespace that violates the restricted standard. Create a copy of this pod's manifest, fix all security violations to make it compliant with the `restricted` standard, save it as `/tmp/fixed-legacy-processor.yaml`, and apply it
- Verify the fixed pod runs successfully under the enforced policy
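The Pod Security Admission labels on the namespace would end up looking like this (they can equally be applied with `kubectl label ns data-processor ...`):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: data-processor
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: latest
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/audit-version: latest
```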
Question 11
| Weight | 6% |
| Difficulty | Medium |
| Domain | Minimize Microservice Vulnerabilities |
| Cluster | kubectl config use-context cluster1 |
Scenario
An OPA Gatekeeper ConstraintTemplate and Constraint are needed to enforce a policy that prevents containers from running as root.
Tasks
- OPA Gatekeeper is already installed in the cluster. Create a `ConstraintTemplate` named `k8spspreventroot` that:
  - Checks if `spec.containers[].securityContext.runAsNonRoot` is set to `true`
  - Also checks `spec.initContainers[]` if present
  - Provides a descriptive violation message including the container name
- Create a `Constraint` named `prevent-root-containers` using the template that:
  - Applies to `Pod` resources
  - Applies to namespaces with the label `enforce-security: "true"`
  - Excludes the `kube-system` namespace
- Label the `production` namespace with `enforce-security: "true"`
- Verify that creating a pod without `runAsNonRoot: true` in the `production` namespace is rejected
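The Constraint might take the following shape; the `K8sPSPPreventRoot` kind is an assumption, since the actual kind is whatever the ConstraintTemplate declares under `spec.crd.spec.names.kind`:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPPreventRoot          # kind defined by the ConstraintTemplate
metadata:
  name: prevent-root-containers
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
    namespaceSelector:
      matchLabels:
        enforce-security: "true"
    excludedNamespaces: ["kube-system"]
```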
Question 12
| Weight | 7% |
| Difficulty | Hard |
| Domain | Minimize Microservice Vulnerabilities |
| Cluster | kubectl config use-context cluster2 |
Scenario
The application team needs to manage secrets more securely. Currently, secrets in the finance namespace are stored as plain base64-encoded values and there is no encryption at rest configured.
Tasks
- SSH into `cluster2-controlplane` and create an `EncryptionConfiguration` at `/etc/kubernetes/pki/encryption-config.yaml` that:
  - Uses `aescbc` as the encryption provider
  - Has the secret key name `finance-secrets-key`
  - Uses a 32-byte base64-encoded encryption key (generate one)
  - Applies to `secrets` resources
  - Also has the `identity` provider as a fallback
- Configure the API server to use this encryption configuration by adding the `--encryption-provider-config` flag
- Wait for the API server to restart
- Create a new secret named `db-credentials` in the `finance` namespace with key `password` and value `S3cur3P@ssw0rd!`
- Verify the secret is encrypted at rest by reading it directly from etcd
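The encryption configuration follows this schema; provider order matters, since the first provider is used for writes and the `identity` fallback lets the API server still read any secrets that were stored unencrypted:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources: ["secrets"]
  providers:
  - aescbc:
      keys:
      - name: finance-secrets-key
        # generate with: head -c 32 /dev/urandom | base64
        secret: <base64-encoded-32-byte-key>
  - identity: {}
```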
Question 13
| Weight | 6% |
| Difficulty | Medium |
| Domain | Supply Chain Security |
| Cluster | kubectl config use-context cluster1 |
Scenario
Several container images in the staging namespace need to be scanned for vulnerabilities. You must identify and fix critical security issues.
Tasks
- Use `trivy` to scan the image `nginx:1.19.0` and save the output showing only `CRITICAL` and `HIGH` vulnerabilities to `/tmp/nginx-scan.txt`
- Use `trivy` to scan the image `redis:6.0.5` and save the output showing only `CRITICAL` vulnerabilities to `/tmp/redis-scan.txt`
- The `web-server` deployment in the `staging` namespace is using `nginx:1.19.0`. Update it to use `nginx:1.25-alpine`, which has fewer vulnerabilities
- The `cache` deployment in the `staging` namespace is using `redis:6.0.5`. Update it to use `redis:7-alpine`
- Verify both deployments are running successfully with the updated images
Question 14
| Weight | 6% |
| Difficulty | Medium |
| Domain | Supply Chain Security |
| Cluster | kubectl config use-context cluster1 |
Scenario
The cluster needs to restrict which container registries are allowed. An ImagePolicyWebhook admission controller needs to be configured.
Tasks
- An `ImagePolicyWebhook` backend service is already running at `https://image-policy.kube-system.svc:8443/validate`
- Create the admission control configuration file at `/etc/kubernetes/admission/image-policy-config.yaml` that:
  - Sets `defaultAllow` to `false` (deny images when the webhook is unreachable)
  - Configures the webhook to use the kubeconfig at `/etc/kubernetes/admission/image-policy-kubeconfig.yaml`
- Create the kubeconfig file at `/etc/kubernetes/admission/image-policy-kubeconfig.yaml` that points to the webhook service
- Enable the `ImagePolicyWebhook` admission plugin in the API server by:
  - Adding `ImagePolicyWebhook` to the `--enable-admission-plugins` flag
  - Setting `--admission-control-config-file` to point to your configuration
- Verify the API server restarts successfully
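The admission control configuration file follows this structure (the TTL and backoff values below are illustrative defaults, not requirements from the task):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ImagePolicyWebhook
  configuration:
    imagePolicy:
      kubeConfigFile: /etc/kubernetes/admission/image-policy-kubeconfig.yaml
      allowTTL: 50          # illustrative values
      denyTTL: 50
      retryBackoff: 500
      defaultAllow: false   # fail closed when the webhook is unreachable
```

Remember that both `/etc/kubernetes/admission` files must also be mounted into the API server pod via a hostPath volume, or the API server will fail to start.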
Question 15
| Weight | 4% |
| Difficulty | Easy |
| Domain | Supply Chain Security |
| Cluster | kubectl config use-context cluster2 |
Scenario
A Dockerfile at /root/Dockerfile is used to build an application image. The Dockerfile has several security best practices violations.
Tasks
- Review the Dockerfile at `/root/Dockerfile` on `cluster2-controlplane`
- Fix the following security issues:
  - The image is using the `latest` tag -- change to a specific version
  - The container runs as root -- add a non-root user and switch to it
  - The image uses `ADD` for remote URLs -- change to `COPY` where possible
  - Remove any unnecessary `RUN` commands that install debug tools (like `curl`, `wget`, `netcat`)
  - Use a multi-stage build to reduce the final image size
- Save the fixed Dockerfile to `/root/Dockerfile-fixed`
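A hypothetical "after" Dockerfile illustrating the pattern (the base images and build command are placeholders; the real ones depend on the application in `/root/Dockerfile`):

```dockerfile
# Build stage: toolchain stays out of the final image
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Runtime stage: pinned tag instead of :latest, no debug tools installed
FROM alpine:3.19
RUN addgroup -S app && adduser -S app -G app
COPY --from=build /app /usr/local/bin/app
USER app
ENTRYPOINT ["/usr/local/bin/app"]
```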
Question 16
| Weight | 5% |
| Difficulty | Medium |
| Domain | Monitoring, Logging & Runtime Security |
| Cluster | kubectl config use-context cluster1 |
Scenario
An audit policy needs to be configured for the Kubernetes API server to log security-relevant events.
Tasks
- Create an audit policy at `/etc/kubernetes/audit/audit-policy.yaml` on the control plane node with the following rules (in order):
  - Do not log requests from the `system:kube-controller-manager`, `system:kube-scheduler`, or `system:kube-proxy` users
  - Log `Secret`, `ConfigMap`, and `TokenReview` resources at the `Metadata` level
  - Log all resources in the `authentication.k8s.io` group at the `RequestResponse` level
  - Log pod `exec` and `attach` subresources at the `RequestResponse` level
  - Log all resources in the core and `apps` groups at the `Request` level
  - Log everything else at the `Metadata` level
- Configure the API server to use this audit policy with:
  - `--audit-policy-file` pointing to the policy
  - `--audit-log-path` set to `/var/log/kubernetes/audit/audit.log`
  - `--audit-log-maxage` set to `7`
  - `--audit-log-maxbackup` set to `3`
  - `--audit-log-maxsize` set to `100`
- Add the necessary volume and volumeMount to the API server static pod manifest
- Verify the API server restarts and audit logs are being generated
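Audit policy rules are evaluated first-match, so the order above translates directly (TokenReview sits in `authentication.k8s.io` but is caught by the earlier Metadata rule, matching the task's intent):

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: None
  users: ["system:kube-controller-manager", "system:kube-scheduler", "system:kube-proxy"]
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets", "configmaps"]
  - group: "authentication.k8s.io"
    resources: ["tokenreviews"]
- level: RequestResponse
  resources:
  - group: "authentication.k8s.io"
- level: RequestResponse
  resources:
  - group: ""
    resources: ["pods/exec", "pods/attach"]
- level: Request
  resources:
  - group: ""          # core group
  - group: "apps"
- level: Metadata      # catch-all
```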
Question 17
| Weight | 5% |
| Difficulty | Hard |
| Domain | Monitoring, Logging & Runtime Security |
| Cluster | kubectl config use-context cluster1 |
Scenario
Falco is installed on the cluster nodes and is detecting suspicious activity. You need to create custom rules and investigate alerts.
Tasks
- SSH into `cluster1-controlplane`, where Falco is installed
- Create a custom Falco rule file at `/etc/falco/rules.d/custom-rules.yaml` with the following rules:
  - Rule 1: Alert when any process opens a shell (`/bin/sh`, `/bin/bash`, `/usr/bin/sh`, `/usr/bin/bash`) inside a container
    - Priority: `WARNING`
    - Output should include: time, container name, container ID, user, shell path, and parent process
  - Rule 2: Alert when any file under `/etc` is modified inside a container
    - Priority: `ERROR`
    - Output should include: time, container name, file path, user, and process name
  - Rule 3: Alert when an outbound network connection is made from a container to a port other than 80 or 443
    - Priority: `NOTICE`
    - Output should include: time, container name, destination IP, destination port, and process name
- Restart Falco to load the new rules
- Verify the rules are loaded by checking Falco's log output
- On `cluster1`, identify the pod in the `compromised` namespace that has been flagged by Falco for spawning a shell process and delete it
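Rule 1 could be sketched as follows; the `container` and `spawned_process` macros ship with Falco's default ruleset, and rules 2 and 3 follow the same structure with conditions on file writes under `/etc` and on outbound connections:

```yaml
# /etc/falco/rules.d/custom-rules.yaml (Rule 1 only)
- rule: Shell Spawned in Container
  desc: Detect a shell process started inside any container
  condition: >
    container and spawned_process and
    proc.name in (sh, bash)
  output: >
    Shell spawned in container (time=%evt.time container=%container.name
    id=%container.id user=%user.name shell=%proc.exepath parent=%proc.pname)
  priority: WARNING
```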
Scoring Summary
| Question | Domain | Weight | Difficulty |
|---|---|---|---|
| Q1 | Cluster Setup | 7% | Medium |
| Q2 | Cluster Setup | 4% | Easy |
| Q3 | Cluster Hardening | 8% | Hard |
| Q4 | Cluster Hardening | 6% | Medium |
| Q5 | Cluster Hardening | 4% | Easy |
| Q6 | System Hardening | 7% | Hard |
| Q7 | System Hardening | 5% | Medium |
| Q8 | System Hardening | 6% | Medium |
| Q9 | Microservice Vulnerabilities | 7% | Hard |
| Q10 | Microservice Vulnerabilities | 7% | Hard |
| Q11 | Microservice Vulnerabilities | 6% | Medium |
| Q12 | Microservice Vulnerabilities | 7% | Hard |
| Q13 | Supply Chain Security | 6% | Medium |
| Q14 | Supply Chain Security | 6% | Medium |
| Q15 | Supply Chain Security | 4% | Easy |
| Q16 | Monitoring & Runtime | 5% | Medium |
| Q17 | Monitoring & Runtime | 5% | Hard |
| Total | | 100% | |
After Completing the Exam
- Score yourself honestly using the rubric on the index page
- Review ALL solutions, even for questions you answered correctly
- Note your weakest domains and allocate extra study time
- Wait at least 48 hours before attempting Mock Exam 2