
Mock Exam 1 - Solutions

Spoiler Warning

Do not read these solutions until you have attempted the full mock exam under timed conditions. The learning value comes from struggling with the questions first.


Solution 1: CIS Benchmark Remediation (7%)

Domain: Cluster Setup | Time Target: 8 minutes

Step 1: SSH and Run kube-bench

bash
ssh cluster1-controlplane

# Run kube-bench against master targets
kube-bench run --targets=master

Step 2: Fix API Server Configuration

Edit the API server static pod manifest:

bash
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml

Find and modify the following arguments in the spec.containers[0].command section:

yaml
spec:
  containers:
  - command:
    - kube-apiserver
    # Change authorization-mode to include Node and RBAC
    - --authorization-mode=Node,RBAC
    # Add or change profiling to false
    - --profiling=false
    # Add audit log configuration
    - --audit-log-path=/var/log/apiserver/audit.log
    - --audit-log-maxage=30

Step 3: Create Audit Log Directory and Add Volume Mounts

Ensure the audit log directory exists:

bash
sudo mkdir -p /var/log/apiserver

Add the volume and volumeMount to the API server manifest:

yaml
spec:
  containers:
  - command:
    # ... (existing commands)
    volumeMounts:
    # ... (existing volume mounts)
    - mountPath: /var/log/apiserver
      name: audit-log
  volumes:
  # ... (existing volumes)
  - hostPath:
      path: /var/log/apiserver
      type: DirectoryOrCreate
    name: audit-log

Step 4: Verify

bash
# Wait for API server to restart (watch for container restart)
watch "crictl ps | grep kube-apiserver"

# Verify the API server is running with correct flags
ps aux | grep kube-apiserver | grep -E "authorization-mode|profiling|audit-log"

# Check the audit log is being written
ls -la /var/log/apiserver/audit.log

Time Management

This question involves editing a static pod manifest. The API server restarts automatically, but it can take 30-60 seconds. Use watch "crictl ps | grep kube-apiserver" to monitor the restart instead of repeatedly running kubectl.


Solution 2: Network Policies (4%)

Domain: Cluster Setup | Time Target: 5 minutes

Step 1: Default Deny Ingress

yaml
# default-deny-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: payments
spec:
  podSelector: {}
  policyTypes:
  - Ingress
bash
kubectl apply -f default-deny-ingress.yaml

Step 2: Default Deny Egress

yaml
# default-deny-egress.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: payments
spec:
  podSelector: {}
  policyTypes:
  - Egress
bash
kubectl apply -f default-deny-egress.yaml

Step 3: Allow Payment API Traffic

yaml
# allow-payment-api.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-payment-api
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: payment-api
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: web
      podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8443
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: payment-db
    ports:
    - protocol: TCP
      port: 5432
  - to: []
    ports:
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53
bash
kubectl apply -f allow-payment-api.yaml

Verification

bash
kubectl get networkpolicies -n payments
kubectl describe networkpolicy allow-payment-api -n payments

Common Mistake

The DNS egress rule must be separate from the database egress rule. If you combine them in one egress block with a to selector, DNS will only be allowed to pods matching that selector. The empty to: [] allows DNS to any destination.
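
For contrast, a sketch of the incorrect approach (do not apply this): merging both destinations into a single egress rule would limit DNS to the payment-db pods only.

yaml
# INCORRECT -- DNS (port 53) would only be allowed to payment-db pods
egress:
- to:
  - podSelector:
      matchLabels:
        app: payment-db
  ports:
  - protocol: TCP
    port: 5432
  - protocol: TCP
    port: 53
  - protocol: UDP
    port: 53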


Solution 3: RBAC Least Privilege (8%)

Domain: Cluster Hardening | Time Target: 10 minutes

Step 1: Delete Overprivileged ClusterRoleBinding

bash
kubectl delete clusterrolebinding dev-admin-binding

Step 2: Create Scoped Role

yaml
# deployment-manager-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-manager-role
  namespace: dev
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
- apiGroups: ["apps"]
  resources: ["replicasets"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch", "delete"]
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
bash
kubectl apply -f deployment-manager-role.yaml

Step 3: Create RoleBinding

yaml
# deployment-manager-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-manager-binding
  namespace: dev
subjects:
- kind: ServiceAccount
  name: deployment-manager
  namespace: dev
roleRef:
  kind: Role
  name: deployment-manager-role
  apiGroup: rbac.authorization.k8s.io
bash
kubectl apply -f deployment-manager-binding.yaml

Step 4: Verify

bash
# Should return "yes"
kubectl auth can-i create deployments \
  --as=system:serviceaccount:dev:deployment-manager -n dev

# Should return "no"
kubectl auth can-i delete secrets \
  --as=system:serviceaccount:dev:deployment-manager -n dev

# Should return "no"
kubectl auth can-i create pods \
  --as=system:serviceaccount:dev:deployment-manager -n dev

# Should return "yes"
kubectl auth can-i get secrets \
  --as=system:serviceaccount:dev:deployment-manager -n dev

TIP

Use kubectl auth can-i --list --as=system:serviceaccount:dev:deployment-manager -n dev to see all permissions in one command.
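
A minimal illustrative run (the rows shown are an example; actual output depends on the Role):

bash
kubectl auth can-i --list \
  --as=system:serviceaccount:dev:deployment-manager -n dev
# Example output (abbreviated):
# Resources            Non-Resource URLs   Resource Names   Verbs
# deployments.apps     []                  []               [get list watch create update patch]
# secrets              []                  []               [get]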


Solution 4: ServiceAccount Hardening (6%)

Domain: Cluster Hardening | Time Target: 7 minutes

Step 1: Create New ServiceAccount

yaml
# legacy-app-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: legacy-app-sa
  namespace: production
automountServiceAccountToken: false
bash
kubectl apply -f legacy-app-sa.yaml

Step 2: Patch Default ServiceAccount

bash
kubectl patch serviceaccount default -n production \
  -p '{"automountServiceAccountToken": false}'

Step 3: Update Deployment

bash
kubectl set serviceaccount deployment/legacy-app legacy-app-sa -n production

Or edit the deployment directly:

bash
kubectl edit deployment legacy-app -n production

Add under spec.template.spec:

yaml
spec:
  template:
    spec:
      serviceAccountName: legacy-app-sa
      automountServiceAccountToken: false

Step 4: Verify

bash
# Check rollout status
kubectl rollout status deployment/legacy-app -n production

# Verify ServiceAccount
kubectl get deployment legacy-app -n production -o jsonpath='{.spec.template.spec.serviceAccountName}'

# Verify no token is mounted (expect 0)
kubectl get pod -n production -l app=legacy-app -o jsonpath='{.items[0].spec.containers[0].volumeMounts}' | grep -c "serviceaccount"
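
As an extra check (a sketch, assuming the container image ships a shell), confirm the token directory is absent inside a running pod:

bash
POD=$(kubectl get pod -n production -l app=legacy-app -o jsonpath='{.items[0].metadata.name}')
# Expect "No such file or directory" when no token is mounted
kubectl exec -n production "$POD" -- ls /var/run/secrets/kubernetes.io/serviceaccount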

Solution 5: Kubernetes Version Upgrade (4%)

Domain: Cluster Hardening | Time Target: 6 minutes

Step 1: Check Current Version and Available Versions

bash
ssh cluster2-controlplane

kubectl version    # (--short has been removed in recent kubectl releases)
kubeadm version

# Find available versions
apt-cache madison kubeadm

Step 2: Upgrade kubeadm

bash
# Replace 1.XX.Y with the target version shown by apt-cache madison
sudo apt-mark unhold kubeadm
sudo apt-get update
sudo apt-get install -y kubeadm=1.XX.Y-1.1
sudo apt-mark hold kubeadm

# Verify kubeadm version
kubeadm version

Step 3: Plan and Apply Upgrade

bash
# Check upgrade plan
sudo kubeadm upgrade plan

# Apply upgrade (replace with actual version)
sudo kubeadm upgrade apply v1.XX.Y

Step 4: Upgrade kubelet and kubectl

bash
# Drain the node (if needed)
kubectl drain cluster2-controlplane --ignore-daemonsets

sudo apt-mark unhold kubelet kubectl
sudo apt-get install -y kubelet=1.XX.Y-1.1 kubectl=1.XX.Y-1.1
sudo apt-mark hold kubelet kubectl

# Restart kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet

# Uncordon
kubectl uncordon cluster2-controlplane

Step 5: Verify

bash
kubectl get nodes
kubectl version

Solution 6: AppArmor Profile (7%)

Domain: System Hardening | Time Target: 9 minutes

Step 1: SSH to Node

bash
ssh cluster1-node01

Step 2: Create AppArmor Profile

bash
sudo tee /etc/apparmor.d/k8s-restricted-write << 'EOF'
#include <tunables/global>

profile k8s-restricted-write flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/base>

  # Allow read, mmap, and execute everywhere; the write flag is omitted,
  # so writes are denied by default. Do not add a blanket "deny /** w," --
  # explicit deny rules take precedence in AppArmor and would also block
  # the allow rules below.
  /** mrix,

  # Allow write only to specific paths
  /tmp/** rw,
  /var/log/app/** rw,

  # Allow network access
  network,
}
EOF

Step 3: Load the Profile

bash
sudo apparmor_parser -r /etc/apparmor.d/k8s-restricted-write

Step 4: Verify Profile is Loaded

bash
sudo aa-status | grep k8s-restricted-write

Step 5: Update Pod to Use the Profile

Switch back to the main terminal (exit SSH) and update the pod:

bash
kubectl config use-context cluster1
yaml
# restricted-app-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-app
  namespace: secure
  annotations:
    container.apparmor.security.beta.kubernetes.io/restricted-app: localhost/k8s-restricted-write
spec:
  containers:
  - name: restricted-app
    image: nginx:1.25-alpine
    # ... (keep existing spec, add the annotation above)

TIP

If the pod already exists, you cannot modify the AppArmor annotation. You need to delete and recreate the pod:

bash
kubectl get pod restricted-app -n secure -o yaml > /tmp/restricted-app.yaml
# Edit /tmp/restricted-app.yaml to add the annotation
kubectl delete pod restricted-app -n secure
kubectl apply -f /tmp/restricted-app.yaml

Alternatively, for Kubernetes v1.30+, you can use the native securityContext.appArmorProfile field:

yaml
spec:
  containers:
  - name: restricted-app
    securityContext:
      appArmorProfile:
        type: Localhost
        localhostProfile: k8s-restricted-write

Verification

bash
kubectl get pod restricted-app -n secure
kubectl describe pod restricted-app -n secure | grep -i apparmor

Solution 7: Seccomp Profile (5%)

Domain: System Hardening | Time Target: 7 minutes

Step 1: Create the Seccomp Profile

SSH into the node where the pod will run and create the profile:

bash
sudo mkdir -p /var/lib/kubelet/seccomp/profiles

sudo tee /var/lib/kubelet/seccomp/profiles/audit-logger.json << 'EOF'
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": [
    "SCMP_ARCH_X86_64",
    "SCMP_ARCH_X86",
    "SCMP_ARCH_X32"
  ],
  "syscalls": [
    {
      "names": [
        "read", "write", "open", "close", "stat", "fstat", "lstat",
        "poll", "lseek", "mmap", "mprotect", "munmap", "brk",
        "rt_sigaction", "rt_sigprocmask", "ioctl", "access", "pipe",
        "select", "sched_yield", "mremap", "msync", "mincore",
        "madvise", "shmget", "shmat", "shmctl", "dup", "dup2",
        "pause", "nanosleep", "getpid", "socket", "connect",
        "accept", "sendto", "recvfrom", "bind", "listen",
        "getsockname", "getpeername", "clone", "execve", "exit",
        "wait4", "kill", "uname", "fcntl", "flock", "fsync",
        "fdatasync", "getcwd", "readlink", "getuid", "getgid",
        "geteuid", "getegid", "getppid", "getpgrp", "setsid",
        "arch_prctl", "exit_group", "openat", "newfstatat",
        "set_tid_address", "set_robust_list", "futex",
        "epoll_create1", "epoll_ctl", "epoll_wait", "getrandom",
        "close_range", "pread64", "pwrite64", "writev", "readv",
        "sigaltstack", "rt_sigreturn", "getdents64",
        "clock_gettime", "clock_nanosleep", "sysinfo", "prctl",
        "rseq"
      ],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
EOF

Step 2: Update Pod Specification

yaml
apiVersion: v1
kind: Pod
metadata:
  name: audit-logger
  namespace: monitoring
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      localhostProfile: profiles/audit-logger.json
  containers:
  - name: audit-logger
    image: busybox:1.36
    # ... (keep existing container spec)

WARNING

The localhostProfile path is relative to the kubelet's seccomp profile directory (/var/lib/kubelet/seccomp/). Do NOT use the full absolute path.
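
A quick sanity check on the node (a sketch; the jq step is optional and assumes jq is installed):

bash
# The relative path in localhostProfile must resolve under the kubelet seccomp root
ls -l /var/lib/kubelet/seccomp/profiles/audit-logger.json

# Optional: confirm the profile is valid JSON
jq . /var/lib/kubelet/seccomp/profiles/audit-logger.json > /dev/null && echo "valid JSON"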

Verification

bash
kubectl get pod audit-logger -n monitoring
kubectl describe pod audit-logger -n monitoring | grep -i seccomp

Solution 8: Reduce Attack Surface (6%)

Domain: System Hardening | Time Target: 7 minutes

Step 1: SSH to Control Plane

bash
ssh cluster1-controlplane

Step 2: Stop and Disable Unnecessary Services

bash
# Stop services
sudo systemctl stop apache2
sudo systemctl stop rpcbind

# Disable services
sudo systemctl disable apache2
sudo systemctl disable rpcbind

Step 3: Identify Non-standard Listening Ports

bash
# List all listening ports
sudo ss -tlnp

# Or use netstat
sudo netstat -tlnp

# Filter out standard Kubernetes ports
sudo ss -tlnp | grep -vE ':(22|53|2379|2380|6443|10250|10257|10259)\s'

Note any unexpected ports and their associated processes.
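
If an unexpected port turns up, a sketch for tracing it back to a process and package (using rpcbind's well-known port 111 purely as an example):

bash
sudo ss -tlnp 'sport = :111'        # shows the owning PID and process name
dpkg -S "$(command -v rpcbind)"     # shows which package installed the binary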

Step 4: Remove Unnecessary Packages

bash
sudo apt-get purge -y apache2 rpcbind
sudo apt-get autoremove -y

Step 5: Verify

bash
# Confirm services are removed
systemctl status apache2 2>&1 | grep -i "not found\|inactive"
systemctl status rpcbind 2>&1 | grep -i "not found\|inactive"

# Verify Kubernetes is still functional
kubectl get nodes
kubectl get pods -n kube-system

Solution 9: Pod Security Hardening (7%)

Domain: Minimize Microservice Vulnerabilities | Time Target: 9 minutes

Step 1: Edit the Deployment

bash
kubectl edit deployment api-gateway -n frontend

Full Deployment Specification

yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-gateway
  namespace: frontend
spec:
  # ... (keep existing selector/replicas)
  template:
    # ... (keep existing metadata)
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        runAsGroup: 3000
        fsGroup: 2000
        seccompProfile:
          type: RuntimeDefault
      containers:
      - name: api-gateway  # adjust to actual container name
        # ... (keep existing image)
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        volumeMounts:
        # ... (keep existing volume mounts)
        - name: tmp-dir
          mountPath: /tmp
        - name: cache-dir
          mountPath: /var/cache
      volumes:
      # ... (keep existing volumes)
      - name: tmp-dir
        emptyDir: {}
      - name: cache-dir
        emptyDir: {}

Verification

bash
kubectl rollout status deployment/api-gateway -n frontend

# Verify security context
kubectl get deployment api-gateway -n frontend -o jsonpath='{.spec.template.spec.securityContext}'

# Verify containers are running
kubectl get pods -n frontend -l app=api-gateway

Common Mistake

If you set readOnlyRootFilesystem: true without adding writable emptyDir volumes for /tmp and /var/cache, the application will likely crash because it cannot write temporary files. Always check application logs after applying security restrictions.
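
A quick post-rollout log check (a sketch; adjust the label selector if the deployment uses different labels):

bash
# Look for write failures caused by the read-only root filesystem
kubectl logs -n frontend -l app=api-gateway --tail=50 \
  | grep -iE "read-only|permission denied" || echo "no write errors found"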


Solution 10: Pod Security Standards (7%)

Domain: Minimize Microservice Vulnerabilities | Time Target: 9 minutes

Step 1: Label the Namespace

bash
kubectl label namespace data-processor \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/warn=restricted \
  pod-security.kubernetes.io/warn-version=latest \
  pod-security.kubernetes.io/audit=restricted \
  pod-security.kubernetes.io/audit-version=latest

Step 2: Export and Fix Legacy Pod

bash
# Export existing pod manifest
kubectl get pod legacy-processor -n data-processor -o yaml > /tmp/fixed-legacy-processor.yaml

Edit /tmp/fixed-legacy-processor.yaml to make it compliant with the restricted standard. The key requirements are:

yaml
apiVersion: v1
kind: Pod
metadata:
  name: legacy-processor
  namespace: data-processor
  # Remove any status, resourceVersion, uid, etc.
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: processor
    image: busybox:1.36  # keep original image
    command: ["sleep", "3600"]  # keep original command
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      capabilities:
        drop:
        - ALL
      seccompProfile:
        type: RuntimeDefault
    volumeMounts:
    - name: tmp
      mountPath: /tmp
  volumes:
  - name: tmp
    emptyDir: {}

Restricted Standard Requirements

The restricted standard requires ALL of the following:

  • runAsNonRoot: true
  • allowPrivilegeEscalation: false
  • capabilities.drop: ["ALL"]
  • seccompProfile.type: RuntimeDefault or Localhost
  • No hostNetwork, hostPID, hostIPC
  • No privileged: true
  • No host path volumes
  • No added capabilities except NET_BIND_SERVICE
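
Because the enforce label from Step 1 is already on the namespace, you can preview admission feedback for the fixed manifest before touching the old pod. A sketch (legacy-processor-check is only a throwaway name for the dry run):

bash
sed 's/name: legacy-processor/name: legacy-processor-check/' /tmp/fixed-legacy-processor.yaml \
  | kubectl apply --dry-run=server -f -
# Any remaining "restricted" violations are reported as admission errors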

Step 3: Delete Old Pod and Apply Fixed Version

bash
kubectl delete pod legacy-processor -n data-processor
kubectl apply -f /tmp/fixed-legacy-processor.yaml

Verification

bash
kubectl get pod legacy-processor -n data-processor
kubectl describe pod legacy-processor -n data-processor | grep -A5 "Security"

Solution 11: OPA Gatekeeper Constraint (6%)

Domain: Minimize Microservice Vulnerabilities | Time Target: 8 minutes

Step 1: Create ConstraintTemplate

yaml
# constraint-template.yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8spspreventroot
spec:
  crd:
    spec:
      names:
        kind: K8sPSPPreventRoot
  targets:
  - target: admission.k8s.gatekeeper.sh
    rego: |
      package k8spspreventroot

      violation[{"msg": msg}] {
        container := input.review.object.spec.containers[_]
        not container.securityContext.runAsNonRoot
        msg := sprintf("Container '%v' must set securityContext.runAsNonRoot to true", [container.name])
      }

      violation[{"msg": msg}] {
        container := input.review.object.spec.containers[_]
        container.securityContext.runAsNonRoot == false
        msg := sprintf("Container '%v' has securityContext.runAsNonRoot set to false", [container.name])
      }

      violation[{"msg": msg}] {
        container := input.review.object.spec.initContainers[_]
        not container.securityContext.runAsNonRoot
        msg := sprintf("Init container '%v' must set securityContext.runAsNonRoot to true", [container.name])
      }

      violation[{"msg": msg}] {
        container := input.review.object.spec.initContainers[_]
        container.securityContext.runAsNonRoot == false
        msg := sprintf("Init container '%v' has securityContext.runAsNonRoot set to false", [container.name])
      }
bash
kubectl apply -f constraint-template.yaml

Step 2: Create Constraint

yaml
# prevent-root-constraint.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPPreventRoot
metadata:
  name: prevent-root-containers
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
    namespaceSelector:
      matchExpressions:
      - key: enforce-security
        operator: In
        values: ["true"]
    excludedNamespaces:
    - kube-system
bash
kubectl apply -f prevent-root-constraint.yaml

Step 3: Label the Namespace

bash
kubectl label namespace production enforce-security=true

Step 4: Verify

bash
# This should be rejected
kubectl run test-root --image=nginx -n production

# This should be admitted (it passes the constraint)
kubectl run test-nonroot --image=nginx -n production \
  --overrides='{"spec":{"containers":[{"name":"test-nonroot","image":"nginx","securityContext":{"runAsNonRoot":true,"runAsUser":1000}}]}}'

# Clean up test pod
kubectl delete pod test-nonroot -n production --ignore-not-found

TIP

Wait a few seconds after creating the ConstraintTemplate before creating the Constraint. Gatekeeper needs time to process the template.
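
A sketch for confirming the template has been processed before creating the constraint:

bash
# Should print "true" once Gatekeeper has generated the constraint CRD
kubectl get constrainttemplate k8spspreventroot -o jsonpath='{.status.created}'

# The CRD for the constraint kind should now exist
kubectl get crd k8spsppreventroot.constraints.gatekeeper.sh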


Solution 12: Encryption at Rest (7%)

Domain: Minimize Microservice Vulnerabilities | Time Target: 10 minutes

Step 1: Generate Encryption Key

bash
ssh cluster2-controlplane

# Generate a 32-byte base64-encoded key
head -c 32 /dev/urandom | base64
# Example output: aGVsbG93b3JsZHRoaXNpc215c2VjcmV0a2V5MTIzNDU=

Step 2: Create EncryptionConfiguration

bash
sudo tee /etc/kubernetes/pki/encryption-config.yaml << 'EOF'
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: finance-secrets-key
        secret: aGVsbG93b3JsZHRoaXNpc215c2VjcmV0a2V5MTIzNDU=
  - identity: {}
EOF

WARNING

Replace the secret value with the key you generated. The key MUST be exactly 32 bytes base64-encoded.
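
A quick length check (a sketch) before pasting the key into the config:

bash
# Should print 32
echo -n '<your-base64-key>' | base64 -d | wc -c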

Step 3: Configure API Server

Edit the API server static pod manifest:

bash
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml

Add the encryption provider config flag:

yaml
spec:
  containers:
  - command:
    - kube-apiserver
    # ... (existing flags)
    - --encryption-provider-config=/etc/kubernetes/pki/encryption-config.yaml

Add the volume mount (if the pki directory is not already mounted):

yaml
    volumeMounts:
    # ... (existing mounts -- /etc/kubernetes/pki is likely already mounted)
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true

Step 4: Wait for API Server Restart

bash
watch "crictl ps | grep kube-apiserver"
# Wait until the container is up and running

Step 5: Create Test Secret

bash
kubectl create secret generic db-credentials \
  -n finance \
  --from-literal=password='S3cur3P@ssw0rd!'

Step 6: Verify Encryption at Rest

bash
# Read the secret directly from etcd
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry/secrets/finance/db-credentials | hexdump -C | head -20

The output should show k8s:enc:aescbc:v1:finance-secrets-key prefix followed by encrypted data, NOT the plaintext password.
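
Only secrets written after the restart are encrypted; anything created earlier remains plaintext in etcd until it is rewritten. If the task asks for it, a sketch for rewriting the existing secrets in the namespace:

bash
kubectl get secrets -n finance -o json | kubectl replace -f -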

TIP

If you see the plaintext value, the encryption is not working. Check the API server logs:

bash
crictl logs $(crictl ps --name kube-apiserver -q) 2>&1 | tail -20

Solution 13: Image Vulnerability Scanning (6%)

Domain: Supply Chain Security | Time Target: 7 minutes

Step 1: Scan nginx Image

bash
trivy image --severity CRITICAL,HIGH nginx:1.19.0 > /tmp/nginx-scan.txt

Step 2: Scan redis Image

bash
trivy image --severity CRITICAL redis:6.0.5 > /tmp/redis-scan.txt

Step 3: Update Deployments

bash
# Update nginx
kubectl set image deployment/web-server \
  web-server=nginx:1.25-alpine -n staging

# Update redis
kubectl set image deployment/cache \
  cache=redis:7-alpine -n staging

TIP

If you don't know the container name, check first:

bash
kubectl get deployment web-server -n staging -o jsonpath='{.spec.template.spec.containers[*].name}'

Step 4: Verify

bash
kubectl rollout status deployment/web-server -n staging
kubectl rollout status deployment/cache -n staging

# Confirm new images
kubectl get deployment web-server -n staging -o jsonpath='{.spec.template.spec.containers[0].image}'
kubectl get deployment cache -n staging -o jsonpath='{.spec.template.spec.containers[0].image}'

Solution 14: ImagePolicyWebhook (6%)

Domain: Supply Chain Security | Time Target: 8 minutes

Step 1: Create Admission Configuration

bash
ssh cluster1-controlplane

sudo mkdir -p /etc/kubernetes/admission

sudo tee /etc/kubernetes/admission/image-policy-config.yaml << 'EOF'
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ImagePolicyWebhook
  configuration:
    imagePolicy:
      kubeConfigFile: /etc/kubernetes/admission/image-policy-kubeconfig.yaml
      allowTTL: 50
      denyTTL: 50
      retryBackoff: 500
      defaultAllow: false
EOF

Step 2: Create Kubeconfig for Webhook

bash
sudo tee /etc/kubernetes/admission/image-policy-kubeconfig.yaml << 'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://image-policy.kube-system.svc:8443/validate
    certificate-authority: /etc/kubernetes/pki/ca.crt
  name: image-policy-webhook
contexts:
- context:
    cluster: image-policy-webhook
    user: api-server
  name: image-policy-webhook
current-context: image-policy-webhook
users:
- name: api-server
  user:
    client-certificate: /etc/kubernetes/pki/apiserver.crt
    client-key: /etc/kubernetes/pki/apiserver.key
EOF

Step 3: Update API Server

Edit /etc/kubernetes/manifests/kube-apiserver.yaml:

yaml
spec:
  containers:
  - command:
    - kube-apiserver
    # Add ImagePolicyWebhook to existing plugins
    - --enable-admission-plugins=NodeRestriction,ImagePolicyWebhook
    - --admission-control-config-file=/etc/kubernetes/admission/image-policy-config.yaml
    volumeMounts:
    # Add mount for admission config
    - mountPath: /etc/kubernetes/admission
      name: admission-config
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/admission
      type: DirectoryOrCreate
    name: admission-config

Step 4: Verify

bash
watch "crictl ps | grep kube-apiserver"
# Wait for API server to restart

kubectl get pods -n kube-system

DANGER

If defaultAllow is set to false and the webhook is unreachable, ALL pod creation will be blocked. Make sure the webhook service is running before enabling this.
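
A pre-flight sketch, assuming the backend is the image-policy service in kube-system referenced by the kubeconfig above:

bash
# Confirm the webhook backend exists and has ready endpoints before restarting the API server
kubectl get svc image-policy -n kube-system
kubectl get endpoints image-policy -n kube-system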


Solution 15: Dockerfile Security (4%)

Domain: Supply Chain Security | Time Target: 5 minutes

Step 1: Review the Dockerfile

bash
ssh cluster2-controlplane
cat /root/Dockerfile

Step 2: Create Fixed Dockerfile

bash
cat > /root/Dockerfile-fixed << 'EOF'
# Stage 1: Build
FROM golang:1.21-alpine AS builder

WORKDIR /app

COPY go.mod go.sum ./
RUN go mod download

COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /app/main .

# Stage 2: Runtime
FROM alpine:3.19

RUN addgroup -g 1001 appgroup && \
    adduser -u 1001 -G appgroup -D -h /app appuser

WORKDIR /app

COPY --from=builder /app/main .

RUN chown -R appuser:appgroup /app

USER appuser:appgroup

EXPOSE 8080

ENTRYPOINT ["/app/main"]
EOF

Key Changes Made

  1. Specific version tags instead of latest
  2. Multi-stage build to reduce image size
  3. Non-root user created and switched to with USER
  4. COPY instead of ADD for local files
  5. No debug tools (curl, wget, netcat are not installed in the runtime image)
  6. Minimal base image (alpine) for runtime
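
If time permits, a quick build-and-scan of the fixed image confirms the changes (a sketch; it assumes docker and trivy are available and /root holds the build context):

bash
docker build -t app:fixed -f /root/Dockerfile-fixed /root
trivy image --severity CRITICAL,HIGH app:fixed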

Solution 16: Audit Policy Configuration (5%)

Domain: Monitoring, Logging & Runtime Security | Time Target: 8 minutes

Step 1: Create Audit Policy

bash
ssh cluster1-controlplane

sudo mkdir -p /etc/kubernetes/audit
sudo mkdir -p /var/log/kubernetes/audit

sudo tee /etc/kubernetes/audit/audit-policy.yaml << 'EOF'
apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
  - "RequestReceived"
rules:
  # Do not log requests from system components
  - level: None
    users:
      - "system:kube-controller-manager"
      - "system:kube-scheduler"
      - "system:kube-proxy"

  # Log Secret, ConfigMap, and TokenReview at Metadata level
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets", "configmaps"]
      - group: "authentication.k8s.io"
        resources: ["tokenreviews"]

  # Log all authentication resources at RequestResponse level
  - level: RequestResponse
    resources:
      - group: "authentication.k8s.io"

  # Log exec and attach at RequestResponse level
  - level: RequestResponse
    resources:
      - group: ""
        resources: ["pods/exec", "pods/attach"]

  # Log core and apps group resources at Request level
  - level: Request
    resources:
      - group: ""
      - group: "apps"

  # Log everything else at Metadata level
  - level: Metadata
EOF

Step 2: Configure API Server

Edit /etc/kubernetes/manifests/kube-apiserver.yaml:

yaml
spec:
  containers:
  - command:
    - kube-apiserver
    # ... (existing flags)
    - --audit-policy-file=/etc/kubernetes/audit/audit-policy.yaml
    - --audit-log-path=/var/log/kubernetes/audit/audit.log
    - --audit-log-maxage=7
    - --audit-log-maxbackup=3
    - --audit-log-maxsize=100
    volumeMounts:
    # ... (existing mounts)
    - mountPath: /etc/kubernetes/audit
      name: audit-policy
      readOnly: true
    - mountPath: /var/log/kubernetes/audit
      name: audit-log
  volumes:
  # ... (existing volumes)
  - hostPath:
      path: /etc/kubernetes/audit
      type: DirectoryOrCreate
    name: audit-policy
  - hostPath:
      path: /var/log/kubernetes/audit
      type: DirectoryOrCreate
    name: audit-log

Step 3: Verify

bash
# Wait for API server to restart
watch "crictl ps | grep kube-apiserver"

# Check audit logs are being generated
ls -la /var/log/kubernetes/audit/audit.log
tail -5 /var/log/kubernetes/audit/audit.log | jq .
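
To spot-check a specific rule (a sketch), trigger a secret read and confirm it is logged at the Metadata level:

bash
kubectl get secrets -n default > /dev/null
sudo grep '"resource":"secrets"' /var/log/kubernetes/audit/audit.log | tail -1 | jq '.level'
# Expected: "Metadata"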

Solution 17: Falco Custom Rules (5%)

Domain: Monitoring, Logging & Runtime Security | Time Target: 8 minutes

Step 1: SSH to Control Plane

bash
ssh cluster1-controlplane

Step 2: Create Custom Falco Rules

bash
sudo tee /etc/falco/rules.d/custom-rules.yaml << 'EOF'
- rule: Shell Spawned in Container
  desc: Detect a shell being spawned inside a container
  condition: >
    spawned_process and
    container and
    proc.name in (sh, bash) and
    proc.pname != healthcheck
  output: >
    Shell spawned in container
    (time=%evt.time container=%container.name container_id=%container.id
    user=%user.name shell=%proc.name parent=%proc.pname)
  priority: WARNING
  tags: [container, shell, mitre_execution]

- rule: File Modified Under Etc in Container
  desc: Detect modification of files under /etc inside a container
  condition: >
    open_write and
    container and
    fd.name startswith /etc/
  output: >
    File under /etc modified in container
    (time=%evt.time container=%container.name file=%fd.name
    user=%user.name process=%proc.name)
  priority: ERROR
  tags: [container, filesystem, mitre_persistence]

- rule: Unexpected Outbound Connection from Container
  desc: >
    Detect outbound connections from containers to ports
    other than 80 and 443
  condition: >
    outbound and
    container and
    fd.sport != 80 and
    fd.sport != 443
  output: >
    Unexpected outbound connection from container
    (time=%evt.time container=%container.name
    dest_ip=%fd.sip dest_port=%fd.sport process=%proc.name)
  priority: NOTICE
  tags: [container, network, mitre_command_and_control]
EOF

Step 3: Restart Falco

bash
sudo systemctl restart falco

Step 4: Verify Rules are Loaded

bash
# Check Falco service status
sudo systemctl status falco

# Check logs for rule loading
sudo journalctl -u falco --no-pager | tail -30 | grep -i "rule\|loaded\|custom"

# Or check the Falco log file
sudo cat /var/log/falco/falco.log | tail -20
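
If Falco fails to start, validating the rules file directly (a sketch; the -V flag is present in recent Falco releases) points at the offending line:

bash
sudo falco -V /etc/falco/rules.d/custom-rules.yaml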

Step 5: Find and Delete Compromised Pod

bash
# Exit SSH and switch context
exit
kubectl config use-context cluster1

# Check Falco alerts for shell spawning
ssh cluster1-controlplane "sudo cat /var/log/falco/falco.log | grep 'Shell spawned'" 2>/dev/null

# Or check journalctl
ssh cluster1-controlplane "sudo journalctl -u falco --no-pager | grep 'Shell spawned'" 2>/dev/null

# Identify the compromised pod in the namespace
kubectl get pods -n compromised

# Delete the compromised pod (replace with actual pod name from Falco logs)
kubectl delete pod <compromised-pod-name> -n compromised

TIP

If you cannot find the pod from Falco logs, check for suspicious activity:

bash
kubectl get pods -n compromised -o wide
# Look for pods with shell processes running
kubectl exec <pod-name> -n compromised -- ps aux 2>/dev/null

Final Score Calculation

Add up the weights for all questions you answered completely and correctly:

Score Range     Result
67-100%         PASS -- You are ready for the real exam
55-66%          CLOSE -- Review weak areas and retake in 1 week
40-54%          NEEDS WORK -- Spend more time on domain study materials
Below 40%       NOT READY -- Complete all domain sections before retaking

Next Steps

  • Review all solutions, including questions you got right
  • Note which domains cost you the most points
  • Focus your study on weak domains
  • Wait at least 48 hours, then attempt Mock Exam 2
