Mock Exam 1 - Solutions
Spoiler Warning
Do not read these solutions until you have attempted the full mock exam under timed conditions. The learning value comes from struggling with the questions first.
Solution 1: CIS Benchmark Remediation (7%)
Domain: Cluster Setup | Time Target: 8 minutes
Step 1: SSH and Run kube-bench
ssh cluster1-controlplane
# Run kube-bench against master targets
kube-bench run --targets=master
Step 2: Fix API Server Configuration
Edit the API server static pod manifest:
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
Find and modify the following arguments in the spec.containers[0].command section:
spec:
containers:
- command:
- kube-apiserver
# Change authorization-mode to include Node and RBAC
- --authorization-mode=Node,RBAC
# Add or change profiling to false
- --profiling=false
# Add audit log configuration
- --audit-log-path=/var/log/apiserver/audit.log
- --audit-log-maxage=30
Step 3: Create Audit Log Directory and Add Volume Mounts
Ensure the audit log directory exists:
sudo mkdir -p /var/log/apiserver
Add the volume and volumeMount to the API server manifest:
spec:
containers:
- command:
# ... (existing commands)
volumeMounts:
# ... (existing volume mounts)
- mountPath: /var/log/apiserver
name: audit-log
volumes:
# ... (existing volumes)
- hostPath:
path: /var/log/apiserver
type: DirectoryOrCreate
name: audit-log
Step 4: Verify
# Wait for API server to restart (watch for container restart)
watch "crictl ps | grep kube-apiserver"
# Verify the API server is running with correct flags
ps aux | grep kube-apiserver | grep -E "authorization-mode|profiling|audit-log"
# Check the audit log is being written
ls -la /var/log/apiserver/audit.log
Time Management
This question involves editing a static pod manifest. The API server will auto-restart, but it can take 30-60 seconds. Use watch crictl ps to monitor instead of repeatedly running kubectl.
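If the API server seems slow to come back, a quick sanity check (a minimal sketch, assuming the standard kubeadm container names) is to confirm the container is running and, if it is crash-looping, read the last container's logs, which usually name the offending flag:
# Show only the kube-apiserver container and its age
crictl ps --name kube-apiserver
# Inspect the most recent kube-apiserver container, even if it has exited
crictl logs --tail 20 $(crictl ps -a --name kube-apiserver -q | head -1)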
Solution 2: Network Policies (4%)
Domain: Cluster Setup | Time Target: 5 minutes
Step 1: Default Deny Ingress
# default-deny-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-ingress
namespace: payments
spec:
podSelector: {}
policyTypes:
- Ingress
kubectl apply -f default-deny-ingress.yaml
Step 2: Default Deny Egress
# default-deny-egress.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-egress
namespace: payments
spec:
podSelector: {}
policyTypes:
- Egress
kubectl apply -f default-deny-egress.yaml
Step 3: Allow Payment API Traffic
# allow-payment-api.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-payment-api
namespace: payments
spec:
podSelector:
matchLabels:
app: payment-api
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: web
podSelector:
matchLabels:
app: frontend
ports:
- protocol: TCP
port: 8443
egress:
- to:
- podSelector:
matchLabels:
app: payment-db
ports:
- protocol: TCP
port: 5432
- to: []
ports:
- protocol: TCP
port: 53
- protocol: UDP
port: 53
kubectl apply -f allow-payment-api.yaml
Verification
kubectl get networkpolicies -n payments
kubectl describe networkpolicy allow-payment-api -n payments
Common Mistake
The DNS egress rule must be separate from the database egress rule. If you combine them in one egress block with a to selector, DNS will only be allowed to pods matching that selector. The empty to: [] allows DNS to any destination.
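For comparison, a combined rule like the following (an illustrative anti-pattern, not part of the solution) would restrict port 53 to the payment-db pods only, breaking cluster DNS:
# WRONG: DNS is now only allowed to pods labeled app=payment-db
egress:
- to:
  - podSelector:
      matchLabels:
        app: payment-db
  ports:
  - protocol: TCP
    port: 5432
  - protocol: UDP
    port: 53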
Solution 3: RBAC Least Privilege (8%)
Domain: Cluster Hardening | Time Target: 10 minutes
Step 1: Delete Overprivileged ClusterRoleBinding
kubectl delete clusterrolebinding dev-admin-binding
Step 2: Create Scoped Role
# deployment-manager-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: deployment-manager-role
namespace: dev
rules:
- apiGroups: ["apps"]
resources: ["deployments"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
- apiGroups: ["apps"]
resources: ["replicasets"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "watch", "delete"]
- apiGroups: [""]
resources: ["services"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get"]kubectl apply -f deployment-manager-role.yamlStep 3: Create RoleBinding
# deployment-manager-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: deployment-manager-binding
namespace: dev
subjects:
- kind: ServiceAccount
name: deployment-manager
namespace: dev
roleRef:
kind: Role
name: deployment-manager-role
apiGroup: rbac.authorization.k8s.io
kubectl apply -f deployment-manager-binding.yaml
Step 4: Verify
# Should return "yes"
kubectl auth can-i create deployments \
--as=system:serviceaccount:dev:deployment-manager -n dev
# Should return "no"
kubectl auth can-i delete secrets \
--as=system:serviceaccount:dev:deployment-manager -n dev
# Should return "no"
kubectl auth can-i create pods \
--as=system:serviceaccount:dev:deployment-manager -n dev
# Should return "yes"
kubectl auth can-i get secrets \
--as=system:serviceaccount:dev:deployment-manager -n dev
TIP
Use kubectl auth can-i --list --as=system:serviceaccount:dev:deployment-manager -n dev to see all permissions in one command.
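As a quick sanity sweep, the four checks above can also be run in one loop (a minimal sketch; expect yes, no, no, yes in that order):
# Run all four permission checks in one pass
for check in "create deployments" "delete secrets" "create pods" "get secrets"; do
  echo -n "$check: "
  kubectl auth can-i $check \
    --as=system:serviceaccount:dev:deployment-manager -n dev
done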
Solution 4: ServiceAccount Hardening (6%)
Domain: Cluster Hardening | Time Target: 7 minutes
Step 1: Create New ServiceAccount
# legacy-app-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: legacy-app-sa
namespace: production
automountServiceAccountToken: false
kubectl apply -f legacy-app-sa.yaml
Step 2: Patch Default ServiceAccount
kubectl patch serviceaccount default -n production \
-p '{"automountServiceAccountToken": false}'Step 3: Update Deployment
kubectl set serviceaccount deployment/legacy-app legacy-app-sa -n production
Or edit the deployment directly:
kubectl edit deployment legacy-app -n production
Add under spec.template.spec:
spec:
template:
spec:
serviceAccountName: legacy-app-sa
automountServiceAccountToken: false
Step 4: Verify
# Check rollout status
kubectl rollout status deployment/legacy-app -n production
# Verify ServiceAccount
kubectl get deployment legacy-app -n production -o jsonpath='{.spec.template.spec.serviceAccountName}'
# Verify no token is mounted
kubectl get pod -n production -l app=legacy-app -o jsonpath='{.items[0].spec.containers[0].volumeMounts}' | grep -c "serviceaccount"
Solution 5: Kubernetes Version Upgrade (4%)
Domain: Cluster Hardening | Time Target: 6 minutes
Step 1: Check Current Version and Available Versions
ssh cluster2-controlplane
kubectl version
kubeadm version
# Find available versions
apt-cache madison kubeadm
Step 2: Upgrade kubeadm
# Replace 1.XX.Y with the target version shown by apt-cache madison
sudo apt-mark unhold kubeadm
sudo apt-get update
sudo apt-get install -y kubeadm=1.XX.Y-1.1
sudo apt-mark hold kubeadm
# Verify kubeadm version
kubeadm version
Step 3: Plan and Apply Upgrade
# Check upgrade plan
sudo kubeadm upgrade plan
# Apply upgrade (replace with actual version)
sudo kubeadm upgrade apply v1.XX.Y
Step 4: Upgrade kubelet and kubectl
# Drain the node (if needed)
kubectl drain cluster2-controlplane --ignore-daemonsets
sudo apt-mark unhold kubelet kubectl
sudo apt-get install -y kubelet=1.XX.Y-1.1 kubectl=1.XX.Y-1.1
sudo apt-mark hold kubelet kubectl
# Restart kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet
# Uncordon
kubectl uncordon cluster2-controlplane
Step 5: Verify
kubectl get nodes
kubectl version
Solution 6: AppArmor Profile (7%)
Domain: System Hardening | Time Target: 9 minutes
Step 1: SSH to Node
ssh cluster1-node01
Step 2: Create AppArmor Profile
sudo tee /etc/apparmor.d/k8s-restricted-write << 'EOF'
#include <tunables/global>
profile k8s-restricted-write flags=(attach_disconnected,mediate_deleted) {
#include <abstractions/base>
# Allow read access to all files
file,
# Deny write to everything first
deny /** w,
# Allow write to specific paths
/tmp/** rw,
/var/log/app/** rw,
# Allow network access
network,
}
EOF
Step 3: Load the Profile
sudo apparmor_parser -r /etc/apparmor.d/k8s-restricted-write
Step 4: Verify Profile is Loaded
sudo aa-status | grep k8s-restricted-write
Step 5: Update Pod to Use the Profile
Switch back to the main terminal (exit SSH) and update the pod:
kubectl config use-context cluster1
# restricted-app-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: restricted-app
namespace: secure
annotations:
container.apparmor.security.beta.kubernetes.io/restricted-app: localhost/k8s-restricted-write
spec:
containers:
- name: restricted-app
image: nginx:1.25-alpine
# ... (keep existing spec, add the annotation above)
TIP
If the pod already exists, you cannot modify the AppArmor annotation. You need to delete and recreate the pod:
kubectl get pod restricted-app -n secure -o yaml > /tmp/restricted-app.yaml
# Edit /tmp/restricted-app.yaml to add the annotation
kubectl delete pod restricted-app -n secure
kubectl apply -f /tmp/restricted-app.yaml
Alternatively, for Kubernetes v1.30+, you can use the native securityContext.appArmorProfile field:
spec:
containers:
- name: restricted-app
securityContext:
appArmorProfile:
type: Localhost
localhostProfile: k8s-restricted-write
Verification
kubectl get pod restricted-app -n secure
kubectl describe pod restricted-app -n secure | grep -i apparmor
Solution 7: Seccomp Profile (5%)
Domain: System Hardening | Time Target: 7 minutes
Step 1: Create the Seccomp Profile
SSH into the node where the pod will run and create the profile:
sudo mkdir -p /var/lib/kubelet/seccomp/profiles
sudo tee /var/lib/kubelet/seccomp/profiles/audit-logger.json << 'EOF'
{
"defaultAction": "SCMP_ACT_ERRNO",
"architectures": [
"SCMP_ARCH_X86_64",
"SCMP_ARCH_X86",
"SCMP_ARCH_X32"
],
"syscalls": [
{
"names": [
"read", "write", "open", "close", "stat", "fstat", "lstat",
"poll", "lseek", "mmap", "mprotect", "munmap", "brk",
"rt_sigaction", "rt_sigprocmask", "ioctl", "access", "pipe",
"select", "sched_yield", "mremap", "msync", "mincore",
"madvise", "shmget", "shmat", "shmctl", "dup", "dup2",
"pause", "nanosleep", "getpid", "socket", "connect",
"accept", "sendto", "recvfrom", "bind", "listen",
"getsockname", "getpeername", "clone", "execve", "exit",
"wait4", "kill", "uname", "fcntl", "flock", "fsync",
"fdatasync", "getcwd", "readlink", "getuid", "getgid",
"geteuid", "getegid", "getppid", "getpgrp", "setsid",
"arch_prctl", "exit_group", "openat", "newfstatat",
"set_tid_address", "set_robust_list", "futex",
"epoll_create1", "epoll_ctl", "epoll_wait", "getrandom",
"close_range", "pread64", "pwrite64", "writev", "readv",
"sigaltstack", "rt_sigreturn", "getdents64",
"clock_gettime", "clock_nanosleep", "sysinfo", "prctl",
"rseq"
],
"action": "SCMP_ACT_ALLOW"
}
]
}
EOF
Step 2: Update Pod Specification
apiVersion: v1
kind: Pod
metadata:
name: audit-logger
namespace: monitoring
spec:
securityContext:
seccompProfile:
type: Localhost
localhostProfile: profiles/audit-logger.json
containers:
- name: audit-logger
image: busybox:1.36
# ... (keep existing container spec)
WARNING
The localhostProfile path is relative to the kubelet's seccomp profile directory (/var/lib/kubelet/seccomp/). Do NOT use the full absolute path.
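If the pod sticks in CreateContainerError, the profile file is usually missing or misnamed on the node the pod was scheduled to. A quick check (a sketch, assuming the default kubelet root directory):
# On the node running the pod, confirm the profile file exists where the kubelet expects it
ls -l /var/lib/kubelet/seccomp/profiles/audit-logger.json
# From the main terminal, surface any seccomp-related events for the pod
kubectl get events -n monitoring --field-selector involvedObject.name=audit-logger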
Verification
kubectl get pod audit-logger -n monitoring
kubectl describe pod audit-logger -n monitoring | grep -i seccomp
Solution 8: Reduce Attack Surface (6%)
Domain: System Hardening | Time Target: 7 minutes
Step 1: SSH to Control Plane
ssh cluster1-controlplane
Step 2: Stop and Disable Unnecessary Services
# Stop services
sudo systemctl stop apache2
sudo systemctl stop rpcbind
# Disable services
sudo systemctl disable apache2
sudo systemctl disable rpcbind
Step 3: Identify Non-standard Listening Ports
# List all listening ports
sudo ss -tlnp
# Or use netstat
sudo netstat -tlnp
# Filter out standard Kubernetes ports
sudo ss -tlnp | grep -vE ':(22|53|2379|2380|6443|10250|10257|10259)\s'
Note any unexpected ports and their associated processes.
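To trace an unexpected listener back to the package that owns it before removing anything (an illustrative sketch; port 111 and <PID> are placeholders you replace with the values you found):
# Show who owns the socket (the PID and process name appear in the last column)
sudo ss -tlnp 'sport = :111'
# Map the process binary back to its package
dpkg -S "$(readlink -f /proc/<PID>/exe)"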
Step 4: Remove Unnecessary Packages
sudo apt-get purge -y apache2 rpcbind
sudo apt-get autoremove -yStep 5: Verify
# Confirm services are removed
systemctl status apache2 2>&1 | grep -i "not found\|inactive"
systemctl status rpcbind 2>&1 | grep -i "not found\|inactive"
# Verify Kubernetes is still functional
kubectl get nodes
kubectl get pods -n kube-system
Solution 9: Pod Security Hardening (7%)
Domain: Minimize Microservice Vulnerabilities | Time Target: 9 minutes
Step 1: Edit the Deployment
kubectl edit deployment api-gateway -n frontend
Full Deployment Specification
apiVersion: apps/v1
kind: Deployment
metadata:
name: api-gateway
namespace: frontend
spec:
# ... (keep existing selector/replicas)
template:
# ... (keep existing metadata)
spec:
securityContext:
runAsNonRoot: true
runAsUser: 1000
runAsGroup: 3000
fsGroup: 2000
seccompProfile:
type: RuntimeDefault
containers:
- name: api-gateway # adjust to actual container name
# ... (keep existing image)
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
volumeMounts:
# ... (keep existing volume mounts)
- name: tmp-dir
mountPath: /tmp
- name: cache-dir
mountPath: /var/cache
volumes:
# ... (keep existing volumes)
- name: tmp-dir
emptyDir: {}
- name: cache-dir
emptyDir: {}
Verification
kubectl rollout status deployment/api-gateway -n frontend
# Verify security context
kubectl get deployment api-gateway -n frontend -o jsonpath='{.spec.template.spec.securityContext}'
# Verify containers are running
kubectl get pods -n frontend -l app=api-gateway
Common Mistake
If you set readOnlyRootFilesystem: true without adding writable emptyDir volumes for /tmp and /var/cache, the application will likely crash because it cannot write temporary files. Always check application logs after applying security restrictions.
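A quick post-change health check (a minimal sketch, assuming the pods carry the app=api-gateway label used above):
# Watch for CrashLoopBackOff, then check recent container logs and namespace events
kubectl get pods -n frontend -l app=api-gateway
kubectl logs -n frontend -l app=api-gateway --tail=20
kubectl get events -n frontend --sort-by=.lastTimestamp | tail -10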
Solution 10: Pod Security Standards (7%)
Domain: Minimize Microservice Vulnerabilities | Time Target: 9 minutes
Step 1: Label the Namespace
kubectl label namespace data-processor \
pod-security.kubernetes.io/enforce=restricted \
pod-security.kubernetes.io/warn=restricted \
pod-security.kubernetes.io/warn-version=latest \
pod-security.kubernetes.io/audit=restricted \
pod-security.kubernetes.io/audit-version=latest
Step 2: Export and Fix Legacy Pod
# Export existing pod manifest
kubectl get pod legacy-processor -n data-processor -o yaml > /tmp/fixed-legacy-processor.yaml
Edit /tmp/fixed-legacy-processor.yaml to make it compliant with the restricted standard. The key requirements are:
apiVersion: v1
kind: Pod
metadata:
name: legacy-processor
namespace: data-processor
# Remove any status, resourceVersion, uid, etc.
spec:
securityContext:
runAsNonRoot: true
runAsUser: 1000
runAsGroup: 3000
fsGroup: 2000
seccompProfile:
type: RuntimeDefault
containers:
- name: processor
image: busybox:1.36 # keep original image
command: ["sleep", "3600"] # keep original command
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsNonRoot: true
capabilities:
drop:
- ALL
seccompProfile:
type: RuntimeDefault
volumeMounts:
- name: tmp
mountPath: /tmp
volumes:
- name: tmp
emptyDir: {}
Restricted Standard Requirements
The restricted standard requires ALL of the following:
- runAsNonRoot: true
- allowPrivilegeEscalation: false
- capabilities.drop: ["ALL"]
- seccompProfile.type: RuntimeDefault or Localhost
- No hostNetwork, hostPID, hostIPC
- No privileged: true
- No host path volumes
- No added capabilities except NET_BIND_SERVICE
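Optionally, before recreating the pod, you can preview which pods in the namespace would still violate the restricted profile. A server-side dry run of the enforce label (a standard technique from the Kubernetes documentation, relying on the labels applied in Step 1) reports violations as warnings without changing anything:
# Warnings list any existing pods that would violate "restricted"
kubectl label --dry-run=server --overwrite ns data-processor \
  pod-security.kubernetes.io/enforce=restricted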
Step 3: Delete Old Pod and Apply Fixed Version
kubectl delete pod legacy-processor -n data-processor
kubectl apply -f /tmp/fixed-legacy-processor.yaml
Verification
kubectl get pod legacy-processor -n data-processor
kubectl describe pod legacy-processor -n data-processor | grep -A5 "Security"
Solution 11: OPA Gatekeeper Constraint (6%)
Domain: Minimize Microservice Vulnerabilities | Time Target: 8 minutes
Step 1: Create ConstraintTemplate
# constraint-template.yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
name: k8spspreventroot
spec:
crd:
spec:
names:
kind: K8sPSPPreventRoot
targets:
- target: admission.k8s.gatekeeper.sh
rego: |
package k8spspreventroot
violation[{"msg": msg}] {
container := input.review.object.spec.containers[_]
not container.securityContext.runAsNonRoot
msg := sprintf("Container '%v' must set securityContext.runAsNonRoot to true", [container.name])
}
violation[{"msg": msg}] {
container := input.review.object.spec.containers[_]
container.securityContext.runAsNonRoot == false
msg := sprintf("Container '%v' has securityContext.runAsNonRoot set to false", [container.name])
}
violation[{"msg": msg}] {
container := input.review.object.spec.initContainers[_]
not container.securityContext.runAsNonRoot
msg := sprintf("Init container '%v' must set securityContext.runAsNonRoot to true", [container.name])
}
violation[{"msg": msg}] {
container := input.review.object.spec.initContainers[_]
container.securityContext.runAsNonRoot == false
msg := sprintf("Init container '%v' has securityContext.runAsNonRoot set to false", [container.name])
}
kubectl apply -f constraint-template.yaml
Step 2: Create Constraint
# prevent-root-constraint.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPPreventRoot
metadata:
name: prevent-root-containers
spec:
match:
kinds:
- apiGroups: [""]
kinds: ["Pod"]
namespaceSelector:
matchExpressions:
- key: enforce-security
operator: In
values: ["true"]
excludedNamespaces:
- kube-system
kubectl apply -f prevent-root-constraint.yaml
Step 3: Label the Namespace
kubectl label namespace production enforce-security=true
Step 4: Verify
# This should be rejected
kubectl run test-root --image=nginx -n production
# This should succeed
kubectl run test-nonroot --image=nginx -n production \
--overrides='{"spec":{"containers":[{"name":"test-nonroot","image":"nginx","securityContext":{"runAsNonRoot":true,"runAsUser":1000}}]}}'
# Clean up test pod
kubectl delete pod test-nonroot -n production --ignore-not-found
TIP
Wait a few seconds after creating the ConstraintTemplate before creating the Constraint. Gatekeeper needs time to process the template.
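You can also confirm that Gatekeeper has ingested the template and constraint before testing (a minimal sketch using the resource names created above):
# The CRD generated from the template must exist before the constraint takes effect
kubectl get constrainttemplate k8spspreventroot
# The constraint's status block shows whether it has been enforced and any audit violations
kubectl get k8spspreventroot prevent-root-containers -o yaml | grep -A5 "status:"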
Solution 12: Encryption at Rest (7%)
Domain: Minimize Microservice Vulnerabilities | Time Target: 10 minutes
Step 1: Generate Encryption Key
ssh cluster2-controlplane
# Generate a 32-byte base64-encoded key
head -c 32 /dev/urandom | base64
# Example output: aGVsbG93b3JsZHRoaXNpc215c2VjcmV0a2V5MTIzNDU=
Step 2: Create EncryptionConfiguration
sudo tee /etc/kubernetes/pki/encryption-config.yaml << 'EOF'
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
- secrets
providers:
- aescbc:
keys:
- name: finance-secrets-key
secret: aGVsbG93b3JsZHRoaXNpc215c2VjcmV0a2V5MTIzNDU=
- identity: {}
EOF
WARNING
Replace the secret value with the key you generated. The key MUST be exactly 32 bytes base64-encoded.
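To double-check the key length before restarting the API server (a minimal sketch using the example key above; substitute your own):
# Must print 32
echo -n 'aGVsbG93b3JsZHRoaXNpc215c2VjcmV0a2V5MTIzNDU=' | base64 -d | wc -c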
Step 3: Configure API Server
Edit the API server static pod manifest:
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
Add the encryption provider config flag:
spec:
containers:
- command:
- kube-apiserver
# ... (existing flags)
- --encryption-provider-config=/etc/kubernetes/pki/encryption-config.yaml
Add the volume mount (if the pki directory is not already mounted):
volumeMounts:
# ... (existing mounts -- /etc/kubernetes/pki is likely already mounted)
- mountPath: /etc/kubernetes/pki
name: k8s-certs
readOnly: true
Step 4: Wait for API Server Restart
watch "crictl ps | grep kube-apiserver"
# Wait until the container is up and running
Step 5: Create Test Secret
kubectl create secret generic db-credentials \
-n finance \
--from-literal=password='S3cur3P@ssw0rd!'
Step 6: Verify Encryption at Rest
# Read the secret directly from etcd
ETCDCTL_API=3 etcdctl \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
get /registry/secrets/finance/db-credentials | hexdump -C | head -20
The output should show the k8s:enc:aescbc:v1:finance-secrets-key prefix followed by encrypted data, NOT the plaintext password.
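If the task also requires secrets created before the change to be encrypted, they must be rewritten so the API server stores them again through the new provider (the standard approach from the Kubernetes documentation):
# Rewrite all existing secrets in the namespace so they are re-stored encrypted
kubectl get secrets -n finance -o json | kubectl replace -f -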
TIP
If you see the plaintext value, the encryption is not working. Check the API server logs:
crictl logs $(crictl ps --name kube-apiserver -q) 2>&1 | tail -20
Solution 13: Image Vulnerability Scanning (6%)
Domain: Supply Chain Security | Time Target: 7 minutes
Step 1: Scan nginx Image
trivy image --severity CRITICAL,HIGH nginx:1.19.0 > /tmp/nginx-scan.txt
Step 2: Scan redis Image
trivy image --severity CRITICAL redis:6.0.5 > /tmp/redis-scan.txt
Step 3: Update Deployments
# Update nginx
kubectl set image deployment/web-server \
web-server=nginx:1.25-alpine -n staging
# Update redis
kubectl set image deployment/cache \
cache=redis:7-alpine -n staging
TIP
If you don't know the container name, check first:
kubectl get deployment web-server -n staging -o jsonpath='{.spec.template.spec.containers[*].name}'
Step 4: Verify
kubectl rollout status deployment/web-server -n staging
kubectl rollout status deployment/cache -n staging
# Confirm new images
kubectl get deployment web-server -n staging -o jsonpath='{.spec.template.spec.containers[0].image}'
kubectl get deployment cache -n staging -o jsonpath='{.spec.template.spec.containers[0].image}'
Solution 14: ImagePolicyWebhook (6%)
Domain: Supply Chain Security | Time Target: 8 minutes
Step 1: Create Admission Configuration
ssh cluster1-controlplane
sudo mkdir -p /etc/kubernetes/admission
sudo tee /etc/kubernetes/admission/image-policy-config.yaml << 'EOF'
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ImagePolicyWebhook
configuration:
imagePolicy:
kubeConfigFile: /etc/kubernetes/admission/image-policy-kubeconfig.yaml
allowTTL: 50
denyTTL: 50
retryBackoff: 500
defaultAllow: false
EOF
Step 2: Create Kubeconfig for Webhook
sudo tee /etc/kubernetes/admission/image-policy-kubeconfig.yaml << 'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
server: https://image-policy.kube-system.svc:8443/validate
certificate-authority: /etc/kubernetes/pki/ca.crt
name: image-policy-webhook
contexts:
- context:
cluster: image-policy-webhook
user: api-server
name: image-policy-webhook
current-context: image-policy-webhook
users:
- name: api-server
user:
client-certificate: /etc/kubernetes/pki/apiserver.crt
client-key: /etc/kubernetes/pki/apiserver.key
EOF
Step 3: Update API Server
Edit /etc/kubernetes/manifests/kube-apiserver.yaml:
spec:
containers:
- command:
- kube-apiserver
# Add ImagePolicyWebhook to existing plugins
- --enable-admission-plugins=NodeRestriction,ImagePolicyWebhook
- --admission-control-config-file=/etc/kubernetes/admission/image-policy-config.yaml
volumeMounts:
# Add mount for admission config
- mountPath: /etc/kubernetes/admission
name: admission-config
readOnly: true
volumes:
- hostPath:
path: /etc/kubernetes/admission
type: DirectoryOrCreate
name: admission-config
Step 4: Verify
watch "crictl ps | grep kube-apiserver"
# Wait for API server to restart
kubectl get pods -n kube-system
DANGER
If defaultAllow is set to false and the webhook is unreachable, ALL pod creation will be blocked. Make sure the webhook service is running before enabling this.
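One way to observe the effect without leaving stray pods behind (a sketch; the pod and image names are arbitrary) is a server-side dry run, which still passes through admission and so should be denied if the webhook is unreachable and defaultAllow is false:
# Expect a denial message from the ImagePolicyWebhook plugin if the backend cannot be reached
kubectl run policy-test --image=nginx --dry-run=server -n default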
Solution 15: Dockerfile Security (4%)
Domain: Supply Chain Security | Time Target: 5 minutes
Step 1: Review the Dockerfile
ssh cluster2-controlplane
cat /root/Dockerfile
Step 2: Create Fixed Dockerfile
cat > /root/Dockerfile-fixed << 'EOF'
# Stage 1: Build
FROM golang:1.21-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /app/main .
# Stage 2: Runtime
FROM alpine:3.19
RUN addgroup -g 1001 appgroup && \
adduser -u 1001 -G appgroup -D -h /app appuser
WORKDIR /app
COPY --from=builder /app/main .
RUN chown -R appuser:appgroup /app
USER appuser:appgroup
EXPOSE 8080
ENTRYPOINT ["/app/main"]
EOF
Key Changes Made
- Specific version tags instead of latest
- Multi-stage build to reduce image size
- Non-root user created and switched to with USER
- COPY instead of ADD for local files
- Removed debug tools (curl, wget, netcat not installed)
- Minimal base image (alpine) for runtime
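If a Docker CLI is available on the node, a quick check that the fixed image actually runs as the non-root user (a sketch; the app:fixed tag is arbitrary and assumes the build context is /root):
docker build -t app:fixed -f /root/Dockerfile-fixed /root
# Should print appuser:appgroup, matching the USER instruction
docker inspect --format '{{.Config.User}}' app:fixed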
Solution 16: Audit Policy Configuration (5%)
Domain: Monitoring, Logging & Runtime Security | Time Target: 8 minutes
Step 1: Create Audit Policy
ssh cluster1-controlplane
sudo mkdir -p /etc/kubernetes/audit
sudo mkdir -p /var/log/kubernetes/audit
sudo tee /etc/kubernetes/audit/audit-policy.yaml << 'EOF'
apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
- "RequestReceived"
rules:
# Do not log requests from system components
- level: None
users:
- "system:kube-controller-manager"
- "system:kube-scheduler"
- "system:kube-proxy"
# Log Secret, ConfigMap, and TokenReview at Metadata level
- level: Metadata
resources:
- group: ""
resources: ["secrets", "configmaps"]
- group: "authentication.k8s.io"
resources: ["tokenreviews"]
# Log all authentication resources at RequestResponse level
- level: RequestResponse
resources:
- group: "authentication.k8s.io"
# Log exec and attach at RequestResponse level
- level: RequestResponse
resources:
- group: ""
resources: ["pods/exec", "pods/attach"]
# Log core and apps group resources at Request level
- level: Request
resources:
- group: ""
- group: "apps"
# Log everything else at Metadata level
- level: Metadata
EOF
Step 2: Configure API Server
Edit /etc/kubernetes/manifests/kube-apiserver.yaml:
spec:
containers:
- command:
- kube-apiserver
# ... (existing flags)
- --audit-policy-file=/etc/kubernetes/audit/audit-policy.yaml
- --audit-log-path=/var/log/kubernetes/audit/audit.log
- --audit-log-maxage=7
- --audit-log-maxbackup=3
- --audit-log-maxsize=100
volumeMounts:
# ... (existing mounts)
- mountPath: /etc/kubernetes/audit
name: audit-policy
readOnly: true
- mountPath: /var/log/kubernetes/audit
name: audit-log
volumes:
# ... (existing volumes)
- hostPath:
path: /etc/kubernetes/audit
type: DirectoryOrCreate
name: audit-policy
- hostPath:
path: /var/log/kubernetes/audit
type: DirectoryOrCreate
name: audit-log
Step 3: Verify
# Wait for API server to restart
watch "crictl ps | grep kube-apiserver"
# Check audit logs are being generated
ls -la /var/log/kubernetes/audit/audit.log
tail -5 /var/log/kubernetes/audit/audit.log | jq .
Solution 17: Falco Custom Rules (5%)
Domain: Monitoring, Logging & Runtime Security | Time Target: 8 minutes
Step 1: SSH to Control Plane
ssh cluster1-controlplane
Step 2: Create Custom Falco Rules
sudo tee /etc/falco/rules.d/custom-rules.yaml << 'EOF'
- rule: Shell Spawned in Container
desc: Detect a shell being spawned inside a container
condition: >
spawned_process and
container and
proc.name in (sh, bash) and
proc.pname != healthcheck
output: >
Shell spawned in container
(time=%evt.time container=%container.name container_id=%container.id
user=%user.name shell=%proc.name parent=%proc.pname)
priority: WARNING
tags: [container, shell, mitre_execution]
- rule: File Modified Under Etc in Container
desc: Detect modification of files under /etc inside a container
condition: >
open_write and
container and
fd.name startswith /etc/
output: >
File under /etc modified in container
(time=%evt.time container=%container.name file=%fd.name
user=%user.name process=%proc.name)
priority: ERROR
tags: [container, filesystem, mitre_persistence]
- rule: Unexpected Outbound Connection from Container
desc: >
Detect outbound connections from containers to ports
other than 80 and 443
condition: >
outbound and
container and
fd.sport != 80 and
fd.sport != 443
output: >
Unexpected outbound connection from container
(time=%evt.time container=%container.name
dest_ip=%fd.sip dest_port=%fd.sport process=%proc.name)
priority: NOTICE
tags: [container, network, mitre_command_and_control]
EOF
Step 3: Restart Falco
sudo systemctl restart falco
Step 4: Verify Rules are Loaded
# Check Falco service status
sudo systemctl status falco
# Check logs for rule loading
sudo journalctl -u falco --no-pager | tail -30 | grep -i "rule\|loaded\|custom"
# Or check the Falco log file
sudo cat /var/log/falco/falco.log | tail -20
Step 5: Find and Delete Compromised Pod
# Exit SSH and switch context
exit
kubectl config use-context cluster1
# Check Falco alerts for shell spawning
ssh cluster1-controlplane "sudo cat /var/log/falco/falco.log | grep 'Shell spawned'" 2>/dev/null
# Or check journalctl
ssh cluster1-controlplane "sudo journalctl -u falco --no-pager | grep 'Shell spawned'" 2>/dev/null
# Identify the compromised pod in the namespace
kubectl get pods -n compromised
# Delete the compromised pod (replace with actual pod name from Falco logs)
kubectl delete pod <compromised-pod-name> -n compromised
TIP
If you cannot find the pod from Falco logs, check for suspicious activity:
kubectl get pods -n compromised -o wide
# Look for pods with shell processes running
kubectl exec <pod-name> -n compromised -- ps aux 2>/dev/null
Final Score Calculation
Add up the weights for all questions you answered completely and correctly:
| Score Range | Result |
|---|---|
| 67-100% | PASS -- You are ready for the real exam |
| 55-66% | CLOSE -- Review weak areas and retake in 1 week |
| 40-54% | NEEDS WORK -- Spend more time on domain study materials |
| Below 40% | NOT READY -- Complete all domain sections before retaking |
Next Steps
- Review all solutions, including questions you got right
- Note which domains cost you the most points
- Focus your study on weak domains
- Wait at least 48 hours, then attempt Mock Exam 2