
Solutions: Supply Chain Security

Answer Key

These are detailed, step-by-step solutions for all 25 practice questions. Each solution includes the exact commands, YAML configurations, and verification steps you would use on the CKS exam.

Solution 1: Trivy Image Scan with Severity Filter

Task: Scan nginx:1.21 for CRITICAL and HIGH vulnerabilities, save as JSON.

bash
# Scan with severity filter and JSON output
trivy image --severity CRITICAL,HIGH --format json -o /root/nginx-scan.json nginx:1.21

Verification:

bash
# Confirm the file was created and count vulnerabilities per target
cat /root/nginx-scan.json | jq '.Results[].Vulnerabilities | length'

# Quick summary
cat /root/nginx-scan.json | jq '.Results[].Vulnerabilities[] | .VulnerabilityID + " " + .Severity' | head -20

Exam Speed Tip

Remember: --severity uses comma-separated values with NO spaces: CRITICAL,HIGH not CRITICAL, HIGH.
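
If a task asks you to gate a script on scan results, trivy's --exit-code flag makes the exit status reflect whether findings exist. A minimal sketch (the image name is just an example):

bash
# Exit non-zero when CRITICAL findings exist -- lets shell conditionals gate on the scan
if trivy image --exit-code 1 --severity CRITICAL nginx:1.21 > /dev/null 2>&1; then
  echo "No CRITICAL vulnerabilities"
else
  echo "CRITICAL vulnerabilities found (or the scan itself failed)"
fi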


Solution 2: Namespace-Wide Image Vulnerability Audit

Task: Find images with CRITICAL vulnerabilities in the webapp namespace.

bash
# Step 1: List all unique images in the namespace
kubectl get pods -n webapp -o jsonpath='{range .items[*]}{range .spec.containers[*]}{.image}{"\n"}{end}{end}' | sort -u > /tmp/images.txt

# Step 2: Scan each image and record those with CRITICAL vulnerabilities
> /root/critical-images.txt
while IFS= read -r image; do
  count=$(trivy image --severity CRITICAL --format json "$image" 2>/dev/null | \
    jq '[.Results[]?.Vulnerabilities // [] | length] | add // 0')
  if [ "$count" -gt 0 ]; then
    echo "$image" >> /root/critical-images.txt
  fi
done < /tmp/images.txt

Verification:

bash
cat /root/critical-images.txt

Exam Speed Tip

Use kubectl get pods -o jsonpath to quickly extract image names. Practice this jsonpath pattern -- it appears frequently.
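
Note that the pattern above only covers regular containers. If the question also wants initContainers (a plausible twist), extend the range -- a sketch:

bash
# Same pattern extended to init containers as well
kubectl get pods -n webapp -o jsonpath='{range .items[*]}{range .spec.initContainers[*]}{.image}{"\n"}{end}{range .spec.containers[*]}{.image}{"\n"}{end}{end}' | sort -u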


Solution 3: kubesec Static Analysis

Task: Scan a Pod manifest with kubesec and document findings.

bash
# Scan the manifest
kubesec scan /root/pod-manifest.yaml

# Save the scan results
kubesec scan /root/pod-manifest.yaml > /tmp/kubesec-results.json

# Check the score
score=$(kubesec scan /root/pod-manifest.yaml | jq '.[0].score')
echo "Score: $score"

# If negative, extract critical findings
if [ "$score" -lt 0 ]; then
  kubesec scan /root/pod-manifest.yaml | jq -r '.[0].scoring.critical[] | "CRITICAL: " + .id + " - " + .reason' > /root/kubesec-findings.txt
fi

Verification:

bash
cat /root/kubesec-findings.txt

Solution 4: ImagePullSecret and ServiceAccount

Task: Create an ImagePullSecret and attach it to the default ServiceAccount.

bash
# Step 1: Create the ImagePullSecret
kubectl create secret docker-registry registry-secret \
  --docker-server=registry.internal.io \
  --docker-username=k8s-prod \
  --docker-password='SecureP@ss2024' \
  --docker-email=devops@company.com \
  --namespace=production

# Step 2: Patch the default ServiceAccount
kubectl patch serviceaccount default \
  -n production \
  -p '{"imagePullSecrets": [{"name": "registry-secret"}]}'

Verification:

bash
# Verify secret exists
kubectl get secret registry-secret -n production

# Verify ServiceAccount has the pull secret
kubectl get serviceaccount default -n production -o yaml | grep -A2 imagePullSecrets

# Test with a Pod
kubectl run test-pull --image=registry.internal.io/testapp:v1 -n production --restart=Never
kubectl describe pod test-pull -n production | grep -A5 Events

Exam Speed Tip

Use kubectl patch sa default -n <ns> -p '...' instead of editing YAML manually. It is faster and less error-prone.


Solution 5: Dockerfile Security Fix

Task: Identify and fix security issues in a Dockerfile.

Common insecure Dockerfile (before):

dockerfile
FROM ubuntu:latest
ENV API_KEY=secret123
ADD . /app
RUN apt-get update && apt-get install -y python3
WORKDIR /app
CMD python3 app.py

Fixed Dockerfile (after):

dockerfile
FROM ubuntu:22.04

# Remove hardcoded secrets - mount at runtime instead
# ENV API_KEY=secret123  -- REMOVED

# Use COPY instead of ADD
COPY . /app

RUN apt-get update && \
    apt-get install -y --no-install-recommends python3=3.10* && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# Create and switch to non-root user
RUN groupadd -r appgroup --gid=10001 && \
    useradd -r -g appgroup --uid=10001 appuser
USER 10001:10001

WORKDIR /app
CMD ["python3", "app.py"]

Issues fixed:

  1. Changed FROM ubuntu:latest to FROM ubuntu:22.04 (specific version)
  2. Removed ENV API_KEY=secret123 (hardcoded secret)
  3. Changed ADD . /app to COPY . /app (security best practice)
  4. Added USER 10001:10001 (non-root user)
  5. Added package cleanup and --no-install-recommends
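
To double-check the fixed Dockerfile, you can lint it with hadolint (its availability in the exam environment is an assumption) or re-use trivy's config scanner:

bash
# Lint the Dockerfile (hadolint availability is an assumption)
hadolint Dockerfile

# Trivy can also flag Dockerfile misconfigurations
trivy config --severity HIGH,CRITICAL .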

Solution 6: Fixable Vulnerability Scan

Task: Scan for fixable vulnerabilities and count CRITICAL/HIGH.

bash
# Step 1: Scan with --ignore-unfixed flag
trivy image --ignore-unfixed --severity CRITICAL,HIGH nginx:1.25 > /root/fixable-vulns.txt

# Step 2: Count CRITICAL and HIGH
count=$(trivy image --ignore-unfixed --severity CRITICAL,HIGH --format json nginx:1.25 | \
  jq '[.Results[]?.Vulnerabilities // [] | length] | add // 0')
echo "$count" > /root/vuln-count.txt

Verification:

bash
cat /root/fixable-vulns.txt
cat /root/vuln-count.txt

Solution 7: OPA Gatekeeper Registry Constraint

Task: Create a Constraint to restrict images to registry.company.com/.

yaml
# /root/allowed-registries-constraint.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRegistries
metadata:
  name: allowed-registries-secure
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    namespaces:
      - secure
  parameters:
    registries:
      - "registry.company.com/"
bash
kubectl apply -f /root/allowed-registries-constraint.yaml

Verification:

bash
# This should be DENIED
kubectl run test --image=docker.io/nginx:latest -n secure
# Error: admission webhook denied the request

# This should be ALLOWED
kubectl run test --image=registry.company.com/nginx:1.25 -n secure
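
The Constraint above presumes the K8sAllowedRegistries ConstraintTemplate already exists in the cluster (on the exam it usually does). If it does not, a minimal template sketch, mirroring the prefix-matching pattern of the Gatekeeper library's allowed-repos template, could look like:

yaml
# Sketch of a ConstraintTemplate backing K8sAllowedRegistries (assumed pre-installed on the exam)
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sallowedregistries
spec:
  crd:
    spec:
      names:
        kind: K8sAllowedRegistries
      validation:
        openAPIV3Schema:
          type: object
          properties:
            registries:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sallowedregistries

        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          # Collect the allowed prefixes that match; empty means no registry matched
          satisfied := [registry | registry := input.parameters.registries[_]; startswith(container.image, registry)]
          count(satisfied) == 0
          msg := sprintf("image '%v' does not come from an allowed registry %v", [container.image, input.parameters.registries])
        }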

Solution 8: ImagePolicyWebhook Full Configuration

Task: Configure ImagePolicyWebhook with all required files.

Step 1: Create the webhook kubeconfig

yaml
# /etc/kubernetes/pki/admission_kubeconfig.yaml
apiVersion: v1
kind: Config
clusters:
  - name: image-policy-webhook
    cluster:
      server: https://image-policy.default.svc:443/validate
      certificate-authority: /etc/kubernetes/pki/webhook-ca.crt
contexts:
  - name: image-policy-webhook
    context:
      cluster: image-policy-webhook
      user: api-server
current-context: image-policy-webhook
users:
  - name: api-server
    user:
      client-certificate: /etc/kubernetes/pki/apiserver.crt
      client-key: /etc/kubernetes/pki/apiserver.key

Step 2: Create the AdmissionConfiguration

yaml
# /etc/kubernetes/pki/admission_configuration.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: ImagePolicyWebhook
    configuration:
      imagePolicy:
        kubeConfigFile: /etc/kubernetes/pki/admission_kubeconfig.yaml
        allowTTL: 50
        denyTTL: 50
        retryBackoff: 500
        defaultAllow: false

Step 3: Update the API server manifest

bash
# Edit the API server static pod manifest
vi /etc/kubernetes/manifests/kube-apiserver.yaml

Add/modify these flags in the command section:

yaml
    - --enable-admission-plugins=NodeRestriction,ImagePolicyWebhook
    - --admission-control-config-file=/etc/kubernetes/pki/admission_configuration.yaml

Ensure the volume mounts include:

yaml
    volumeMounts:
    # ... existing mounts ...
    - name: admission-config
      mountPath: /etc/kubernetes/pki/admission_configuration.yaml
      readOnly: true
    - name: admission-kubeconfig
      mountPath: /etc/kubernetes/pki/admission_kubeconfig.yaml
      readOnly: true
  volumes:
  # ... existing volumes ...
  - name: admission-config
    hostPath:
      path: /etc/kubernetes/pki/admission_configuration.yaml
      type: File
  - name: admission-kubeconfig
    hostPath:
      path: /etc/kubernetes/pki/admission_kubeconfig.yaml
      type: File

Note

If the /etc/kubernetes/pki/ directory is already mounted as a volume in the API server pod (which is common), you may not need separate volume mounts for the admission files since they are already inside that directory. Check the existing mounts first.
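
A quick way to check whether that directory is already mounted:

bash
# If /etc/kubernetes/pki is already a hostPath mount, the admission files need no extra volumes
grep -B2 -A2 '/etc/kubernetes/pki' /etc/kubernetes/manifests/kube-apiserver.yaml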

Verification:

bash
# Wait for API server to restart
watch kubectl get pods -n kube-system

# Check the admission plugins
kubectl -n kube-system get pod kube-apiserver-controlplane -o yaml | \
  grep admission

# Test with an image
kubectl run test --image=nginx:latest

Exam Speed Tip

Before editing the API server manifest, take a backup: cp /etc/kubernetes/manifests/kube-apiserver.yaml /root/kube-apiserver.yaml.bak. This lets you recover quickly if something goes wrong.


Solution 9: Suspicious Pod Investigation

Task: Identify, scan, and potentially delete a suspicious Pod.

bash
# Step 1: Get the image
image=$(kubectl get pod suspicious-pod -o jsonpath='{.spec.containers[0].image}')
echo "Image: $image"

# Step 2: Scan for vulnerabilities and secrets
trivy image --scanners vuln,secret --format json -o /root/suspicious-scan.json "$image"

# Step 3: Check for CRITICAL vulnerabilities
critical_count=$(jq '[.Results[]?.Vulnerabilities // [] | .[] | select(.Severity=="CRITICAL")] | length' /root/suspicious-scan.json)
echo "CRITICAL vulnerabilities: $critical_count"

# Step 4: Delete if CRITICAL found
if [ "$critical_count" -gt 0 ]; then
  kubectl delete pod suspicious-pod
  echo "Pod deleted due to $critical_count CRITICAL vulnerabilities"
fi
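
If you also want to keep evidence, capture the Pod spec before running the delete in Step 4 (an optional extra, not required by the task):

bash
# Preserve the Pod definition for the incident record before deletion
kubectl get pod suspicious-pod -o yaml > /root/suspicious-pod.yaml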

Solution 10: Multi-Stage Dockerfile Rewrite

Task: Rewrite a Go application Dockerfile with multi-stage build.

dockerfile
# /root/go-app/Dockerfile

# Stage 1: Build
FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build \
    -ldflags='-w -s -extldflags "-static"' \
    -o /app/server .

# Stage 2: Production
FROM gcr.io/distroless/static:nonroot
COPY --from=builder /app/server /server
USER 65532:65532
EXPOSE 8080
ENTRYPOINT ["/server"]

Verification:

bash
# Build the image
cd /root/go-app
docker build -t go-app:secure .

# Check image size (should be small -- ~10-20MB)
docker images go-app:secure

# Verify user
docker inspect go-app:secure --format='{{.Config.User}}'
# Output: 65532:65532

Solution 11: Trivy Configuration Scan

Task: Scan manifests directory for misconfigurations.

bash
# Step 1: Scan for configuration issues
trivy config /root/manifests/ > /root/config-scan.txt

# Step 2: Review HIGH severity issues
trivy config --severity HIGH /root/manifests/

# Step 3: Fix common issues in the manifests
# Example fixes (edit each manifest as needed):

Common fixes to apply:

yaml
# Add to container securityContext:
securityContext:
  runAsNonRoot: true
  readOnlyRootFilesystem: true
  allowPrivilegeEscalation: false
  capabilities:
    drop:
      - ALL

# Add resource limits:
resources:
  limits:
    cpu: 200m
    memory: 256Mi
  requests:
    cpu: 100m
    memory: 128Mi
bash
# Re-scan to verify fixes
trivy config --severity HIGH /root/manifests/

Solution 12: Image Digest Pinning

Task: Update a Deployment to use image digest instead of tag.

bash
# Step 1: Get current image
kubectl get deployment web-frontend -n production \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
# Output: nginx:1.25

# Step 2: Find the digest
# Option A: Using docker
docker pull nginx:1.25
docker inspect --format='{{index .RepoDigests 0}}' nginx:1.25
# Output: nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764e2d42b2f4627c31f2a616affb1

# Option B: Using crane (if available)
crane digest nginx:1.25

# Option C: Using skopeo
skopeo inspect docker://nginx:1.25 | jq -r '.Digest'

# Step 3: Update the Deployment
kubectl set image deployment/web-frontend \
  -n production \
  web-frontend=nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764e2d42b2f4627c31f2a616affb1

Verification:

bash
kubectl get deployment web-frontend -n production \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
# Should show: nginx@sha256:6db391d1c...
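
If the image is already running somewhere in the cluster, the node-resolved digest is also visible in the Pod status -- a possible shortcut:

bash
# The imageID field in containerStatuses carries the registry digest for a running Pod
kubectl get pod <pod-name> -n production -o jsonpath='{.status.containerStatuses[0].imageID}'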

Solution 13: AlwaysPullImages Admission Controller

Task: Enable AlwaysPullImages on the API server.

bash
# Step 1: Edit the API server manifest
vi /etc/kubernetes/manifests/kube-apiserver.yaml

Find the --enable-admission-plugins flag and add AlwaysPullImages:

yaml
    - --enable-admission-plugins=NodeRestriction,AlwaysPullImages
bash
# Step 2: Wait for API server to restart
watch kubectl get pods -n kube-system

# Step 3: Verify the admission controller is active
kubectl -n kube-system get pod kube-apiserver-controlplane \
  -o yaml | grep enable-admission-plugins

# Step 4: Test with a Pod
kubectl run test-pull --image=nginx:1.25 --restart=Never
kubectl get pod test-pull -o jsonpath='{.spec.containers[0].imagePullPolicy}'
# Output: Always

Solution 14: Scan Image Archive

Task: Scan a tar archive image with Trivy.

bash
# Step 1: Scan the archive
trivy image --input /root/images/app-image.tar --severity CRITICAL,HIGH

# Step 2: Get CVE IDs in JSON and extract
trivy image --input /root/images/app-image.tar \
  --severity CRITICAL,HIGH \
  --format json | \
  jq -r '.Results[].Vulnerabilities[]?.VulnerabilityID' | \
  sort -u > /root/cve-list.txt

Verification:

bash
cat /root/cve-list.txt
wc -l /root/cve-list.txt

Exam Speed Tip

Remember --input for tar archives. This is different from scanning a remote image where you just provide the image name.
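
To practice this locally (assuming docker is available), you can produce such an archive yourself:

bash
# Create a tar archive from a local image, then scan it the same way
docker save -o /tmp/nginx.tar nginx:1.25
trivy image --input /tmp/nginx.tar --severity CRITICAL,HIGH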


Solution 15: Multiple Registry Configuration

Task: Set up ImagePullSecrets for different teams.

bash
# Step 1: Create secret for team-a (GCR)
kubectl create secret docker-registry gcr-secret \
  --docker-server=gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat /root/gcr-key.json)" \
  --namespace=team-a

# Step 2: Create secret for team-b (ECR)
kubectl create secret docker-registry ecr-secret \
  --docker-server=123456789.dkr.ecr.us-east-1.amazonaws.com \
  --docker-username=AWS \
  --docker-password=ecr-token-2024 \
  --namespace=team-b

# Step 3: Patch ServiceAccounts
kubectl patch serviceaccount default -n team-a \
  -p '{"imagePullSecrets": [{"name": "gcr-secret"}]}'

kubectl patch serviceaccount default -n team-b \
  -p '{"imagePullSecrets": [{"name": "ecr-secret"}]}'

Verification:

bash
kubectl get sa default -n team-a -o yaml | grep -A2 imagePullSecrets
kubectl get sa default -n team-b -o yaml | grep -A2 imagePullSecrets

Solution 16: Combined Supply Chain Gate

Task: Configure ImagePolicyWebhook AND OPA Gatekeeper.

Part 1: ImagePolicyWebhook

yaml
# /etc/kubernetes/pki/admission_kubeconfig.yaml
apiVersion: v1
kind: Config
clusters:
  - name: policy-webhook
    cluster:
      server: https://policy-webhook.security.svc:443/check
      certificate-authority: /etc/kubernetes/pki/webhook-ca.crt
contexts:
  - name: policy-webhook
    context:
      cluster: policy-webhook
      user: api-server
current-context: policy-webhook
users:
  - name: api-server
    user:
      client-certificate: /etc/kubernetes/pki/apiserver.crt
      client-key: /etc/kubernetes/pki/apiserver.key
yaml
# /etc/kubernetes/pki/admission_configuration.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: ImagePolicyWebhook
    configuration:
      imagePolicy:
        kubeConfigFile: /etc/kubernetes/pki/admission_kubeconfig.yaml
        allowTTL: 50
        denyTTL: 50
        retryBackoff: 500
        defaultAllow: false

Update the API server manifest as shown in Solution 8.

Part 2: OPA Gatekeeper Constraint

yaml
# /root/registry-constraint.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRegistries
metadata:
  name: only-harbor-images
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    excludedNamespaces:
      - kube-system
      - gatekeeper-system
  parameters:
    registries:
      - "harbor.company.com/"
bash
kubectl apply -f /root/registry-constraint.yaml

Verification

bash
# Test denied image (wrong registry)
kubectl run test1 --image=docker.io/nginx:latest
# Should be denied by both ImagePolicyWebhook and Gatekeeper

# Test allowed image (correct registry)
kubectl run test2 --image=harbor.company.com/library/nginx:1.25
# Should be allowed (if webhook approves)

Solution 17: Cluster-Wide Vulnerability Audit

Task: Audit all images across the cluster.

bash
# Step 1: Get all unique images with their Pod info
kubectl get pods --all-namespaces \
  -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{" "}{range .spec.containers[*]}{.image}{" "}{end}{"\n"}{end}' \
  > /tmp/all-pods-images.txt

# Step 2: Get unique images
kubectl get pods --all-namespaces \
  -o jsonpath='{range .items[*]}{range .spec.containers[*]}{.image}{"\n"}{end}{end}' | \
  sort -u > /tmp/unique-images.txt

# Step 3: Scan each and build report
> /root/cluster-audit.txt
while IFS= read -r line; do
  nspod=$(echo "$line" | awk '{print $1}')
  ns=${nspod%%/*}
  pod=${nspod#*/}

  # Scan every image on the line (multi-container Pods list several)
  for image in $(echo "$line" | cut -d' ' -f2-); do
    count=$(trivy image --severity CRITICAL --format json "$image" 2>/dev/null | \
      jq '[.Results[]?.Vulnerabilities // [] | length] | add // 0')

    echo "$ns/$pod $image $count" >> /root/cluster-audit.txt

    # Step 4: Delete Pods with more than 5 CRITICAL vulns
    if [ "$count" -gt 5 ]; then
      kubectl delete pod "$pod" -n "$ns" --ignore-not-found
      echo "DELETED: $ns/$pod ($count CRITICAL CVEs)" >> /root/cluster-audit.txt
    fi
  done
done < /tmp/all-pods-images.txt

Solution 18: Troubleshoot ImagePolicyWebhook

Task: Fix a broken ImagePolicyWebhook configuration.

bash
# Step 1: Check API server status
crictl ps -a | grep apiserver
# If not running, check logs:
journalctl -u kubelet | tail -50

# Or check the container logs:
crictl logs $(crictl ps -a --name=kube-apiserver -q | head -1) 2>&1 | tail -30

# Step 2: Common issues to check:

# Issue A: Wrong path in admission-control-config-file
grep admission-control-config-file /etc/kubernetes/manifests/kube-apiserver.yaml
# Fix: Ensure path matches the actual file location

# Issue B: Missing volume mounts
grep -A20 volumeMounts /etc/kubernetes/manifests/kube-apiserver.yaml
# Fix: Ensure all referenced files are mounted

# Issue C: kubeConfigFile path mismatch
cat /etc/kubernetes/pki/admission_configuration.yaml
# Fix: Ensure kubeConfigFile path matches the actual kubeconfig location

# Issue D: Certificate files not found
ls -la /etc/kubernetes/pki/webhook-ca.crt
ls -la /etc/kubernetes/pki/admission_kubeconfig.yaml
ls -la /etc/kubernetes/pki/admission_configuration.yaml
# Fix: Create missing files or fix paths

# Step 3: Fix the identified issue and wait for restart
vi /etc/kubernetes/manifests/kube-apiserver.yaml

# Step 4: Wait and verify
watch kubectl get pods -n kube-system

# Step 5: Test
kubectl run test --image=nginx:latest

Exam Speed Tip

When the API server fails to start, check kubelet logs first: journalctl -u kubelet | tail -50. Common causes: wrong file path, missing volume mount, or syntax errors in YAML files.
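
YAML indentation mistakes in the manifest are a frequent culprit. A quick syntax check, assuming python3 with PyYAML is present on the node (an assumption):

bash
# Parse the manifest to catch pure YAML syntax errors (PyYAML availability is an assumption)
python3 -c 'import yaml, sys; yaml.safe_load(open(sys.argv[1])); print("YAML OK")' /etc/kubernetes/manifests/kube-apiserver.yaml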


Solution 19: Trivy Ignore File and Filtered Scan

Task: Create a .trivyignore and run filtered scan.

bash
# Step 1: Create the ignore file
cat <<EOF > /root/.trivyignore
# Accepted risks - approved by security team
CVE-2023-44487
CVE-2023-39325
CVE-2024-0727
EOF

# Step 2: Scan with ignore file, only fixable CRITICAL/HIGH, JSON output
trivy image \
  --ignorefile /root/.trivyignore \
  --severity CRITICAL,HIGH \
  --ignore-unfixed \
  --format json \
  -o /root/python-scan.json \
  python:3.11-slim

Verification:

bash
# Confirm ignored CVEs are not in the output
cat /root/python-scan.json | jq '.Results[].Vulnerabilities[]?.VulnerabilityID' | \
  grep -E "CVE-2023-44487|CVE-2023-39325|CVE-2024-0727"
# Should return empty (no matches)

# Count remaining vulnerabilities
cat /root/python-scan.json | jq '[.Results[]?.Vulnerabilities // [] | length] | add // 0'

Solution 20: Comprehensive Secure Dockerfile

Task: Rewrite a Node.js Dockerfile with all best practices.

Dockerfile

dockerfile
# /root/secure-build/Dockerfile

# Stage 1: Builder
FROM node:20-slim AS builder
WORKDIR /app

# Copy dependency files first (layer caching)
COPY package.json package-lock.json ./

# Install production dependencies only
RUN npm ci --omit=dev --ignore-scripts

# Copy application source
COPY src/ ./src/

# Stage 2: Production
FROM gcr.io/distroless/nodejs20-debian12:nonroot

# Copy only what we need from builder
COPY --from=builder /app/node_modules /app/node_modules
COPY --from=builder /app/src /app/src
COPY --from=builder /app/package.json /app/package.json

WORKDIR /app

# Run as non-root (distroless nonroot = 65532)
USER 65532:65532

EXPOSE 3000

CMD ["src/server.js"]

.dockerignore

# /root/secure-build/.dockerignore
.git
.gitignore
.env
.env.*
node_modules
test/
tests/
*.md
README*
LICENSE
Dockerfile
docker-compose*.yml
.dockerignore
.github/
.vscode/
coverage/
*.log

Verification:

bash
cd /root/secure-build
docker build -t secure-node-app:v1.0 .
docker images secure-node-app:v1.0
docker inspect secure-node-app:v1.0 --format='{{.Config.User}}'
# Output: 65532:65532

# Scan for security issues
trivy image secure-node-app:v1.0
trivy config --file-patterns "dockerfile:Dockerfile" /root/secure-build/

Solution 21: Team-Based Registry Access Control

Task: Set up namespace-scoped registry access.

bash
# Step 1: Create namespaces with labels
kubectl create namespace team-frontend
kubectl label namespace team-frontend team=frontend

kubectl create namespace team-backend
kubectl label namespace team-backend team=backend

# Step 2: Create ImagePullSecrets
kubectl create secret docker-registry harbor-frontend-cred \
  --docker-server=harbor.company.com \
  --docker-username='robot$frontend-pull' \
  --docker-password='frontend-token-2024' \
  --namespace=team-frontend

kubectl create secret docker-registry harbor-backend-cred \
  --docker-server=harbor.company.com \
  --docker-username='robot$backend-pull' \
  --docker-password='backend-token-2024' \
  --namespace=team-backend

# Step 3: Patch ServiceAccounts
kubectl patch serviceaccount default -n team-frontend \
  -p '{"imagePullSecrets": [{"name": "harbor-frontend-cred"}]}'

kubectl patch serviceaccount default -n team-backend \
  -p '{"imagePullSecrets": [{"name": "harbor-backend-cred"}]}'

# Step 4: Create Gatekeeper Constraints
yaml
# /root/frontend-registry-constraint.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRegistries
metadata:
  name: frontend-allowed-registries
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    namespaces:
      - team-frontend
  parameters:
    registries:
      - "harbor.company.com/frontend/"
yaml
# /root/backend-registry-constraint.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRegistries
metadata:
  name: backend-allowed-registries
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    namespaces:
      - team-backend
  parameters:
    registries:
      - "harbor.company.com/backend/"
bash
kubectl apply -f /root/frontend-registry-constraint.yaml
kubectl apply -f /root/backend-registry-constraint.yaml

Verification:

bash
# Frontend can only use frontend images
kubectl run test --image=harbor.company.com/frontend/web:v1 -n team-frontend
# Allowed

kubectl run test2 --image=harbor.company.com/backend/api:v1 -n team-frontend
# Denied

# Backend can only use backend images
kubectl run test --image=harbor.company.com/backend/api:v1 -n team-backend
# Allowed

kubectl run test2 --image=harbor.company.com/frontend/web:v1 -n team-backend
# Denied

Solution 22: Image Forensics Investigation

Task: Perform forensic analysis on a suspicious image.

bash
# Step 1: Full scan (vulnerabilities + secrets + misconfigs)
trivy image --scanners vuln,secret,misconfig \
  --format json \
  registry.company.com/app:v2.3 > /tmp/full-scan.json

# Step 2: Examine image layers
docker pull registry.company.com/app:v2.3
docker history --no-trunc registry.company.com/app:v2.3 > /tmp/layers.txt

# Step 3: List all packages
trivy image --list-all-pkgs --format json \
  registry.company.com/app:v2.3 > /tmp/packages.json

# Step 4: Check for secrets
trivy image --scanners secret \
  --format json \
  registry.company.com/app:v2.3 > /tmp/secrets-scan.json

# Step 5: Compile the report
cat <<EOF > /root/forensics-report.txt
=== Image Forensics Report ===
Image: registry.company.com/app:v2.3
Date: $(date)

--- Vulnerability Summary ---
$(jq -r '.Results[] | "Target: " + .Target + " | Vulns: " + (.Vulnerabilities // [] | length | tostring)' /tmp/full-scan.json)

--- CRITICAL Vulnerabilities ---
$(jq -r '.Results[].Vulnerabilities[]? | select(.Severity=="CRITICAL") | .VulnerabilityID + " " + .PkgName + " " + .InstalledVersion' /tmp/full-scan.json)

--- Embedded Secrets ---
$(jq -r '.Results[]? | select(.Secrets) | .Secrets[]? | .RuleID + ": " + .Match' /tmp/secrets-scan.json 2>/dev/null || echo "No secrets detected")

--- Image Layers ---
$(cat /tmp/layers.txt)
EOF

Solution 23: Fix AdmissionConfiguration

Task: Update the AdmissionConfiguration file.

bash
# Step 1: Edit the configuration
cat <<EOF > /etc/kubernetes/pki/admission_configuration.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: ImagePolicyWebhook
    configuration:
      imagePolicy:
        kubeConfigFile: /etc/kubernetes/pki/admission_kubeconfig.yaml
        allowTTL: 50
        denyTTL: 50
        retryBackoff: 500
        defaultAllow: false
EOF

# Step 2: The API server will restart automatically (static pod)
# Wait for it to come back
sleep 30
kubectl get pods -n kube-system | grep apiserver

# Step 3: Verify
kubectl -n kube-system get pod kube-apiserver-controlplane \
  -o yaml | grep admission-control-config

# Confirm the settings
cat /etc/kubernetes/pki/admission_configuration.yaml | grep -E "defaultAllow|TTL|retryBackoff"

Solution 24: Complete Supply Chain Workflow

Task: End-to-end secure build and deploy.

bash
# Step 1: Fix the Dockerfile (apply best practices as in Solution 20)
vi /root/new-app/Dockerfile

# Step 2: Build and tag
cd /root/new-app
docker build -t harbor.company.com/production/new-app:v1.0 .

# Step 3: Scan - must have zero CRITICAL
trivy image --severity CRITICAL harbor.company.com/production/new-app:v1.0
# If CRITICAL found, fix Dockerfile and rebuild

# Step 4: Push (assuming docker login is configured)
docker push harbor.company.com/production/new-app:v1.0

# Step 5: Create ImagePullSecret
kubectl create secret docker-registry harbor-cred \
  --docker-server=harbor.company.com \
  --docker-username=k8s-prod \
  --docker-password=prod-token \
  --namespace=production

# Step 6: Deploy
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: new-app
  namespace: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: new-app
  template:
    metadata:
      labels:
        app: new-app
    spec:
      containers:
        - name: new-app
          image: harbor.company.com/production/new-app:v1.0
          securityContext:
            runAsNonRoot: true
            readOnlyRootFilesystem: true
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
          resources:
            limits:
              cpu: 200m
              memory: 256Mi
            requests:
              cpu: 100m
              memory: 128Mi
      imagePullSecrets:
        - name: harbor-cred
EOF

# Step 7: Verify
kubectl get pods -n production -l app=new-app
kubectl describe pod -n production -l app=new-app | grep -A5 Events

Solution 25: Comprehensive Supply Chain Controls

Task: Implement multiple supply chain security controls.

Part 1: AlwaysPullImages

bash
vi /etc/kubernetes/manifests/kube-apiserver.yaml
# Add AlwaysPullImages to --enable-admission-plugins
# --enable-admission-plugins=NodeRestriction,AlwaysPullImages,ImagePolicyWebhook

Part 2: ImagePolicyWebhook

yaml
# /etc/kubernetes/pki/admission_kubeconfig.yaml
apiVersion: v1
kind: Config
clusters:
  - name: image-checker
    cluster:
      server: https://image-checker.kube-system.svc:443/validate
      certificate-authority: /etc/kubernetes/pki/webhook-ca.crt
contexts:
  - name: image-checker
    context:
      cluster: image-checker
      user: api-server
current-context: image-checker
users:
  - name: api-server
    user:
      client-certificate: /etc/kubernetes/pki/apiserver.crt
      client-key: /etc/kubernetes/pki/apiserver.key
yaml
# /etc/kubernetes/pki/admission_configuration.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: ImagePolicyWebhook
    configuration:
      imagePolicy:
        kubeConfigFile: /etc/kubernetes/pki/admission_kubeconfig.yaml
        allowTTL: 50
        denyTTL: 50
        retryBackoff: 500
        defaultAllow: false

Add --admission-control-config-file=/etc/kubernetes/pki/admission_configuration.yaml to the API server manifest and ensure volume mounts are in place.

Part 3: Gatekeeper Constraint for :latest Tag

yaml
# /root/constraint-template.yaml -- the ConstraintTemplate (if not already installed)
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sdisallowedtags
spec:
  crd:
    spec:
      names:
        kind: K8sDisallowedTags
      validation:
        openAPIV3Schema:
          type: object
          properties:
            tags:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sdisallowedtags

        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          tag := split(container.image, ":")[count(split(container.image, ":")) - 1]
          disallowed := input.parameters.tags[_]
          tag == disallowed
          msg := sprintf("Container '%s' uses disallowed tag '%s' in image '%s'", [container.name, tag, container.image])
        }

        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          not contains(container.image, ":")
          not contains(container.image, "@")
          msg := sprintf("Container '%s' image '%s' must specify a tag or digest", [container.name, container.image])
        }
yaml
# /root/constraint.yaml -- the Constraint
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sDisallowedTags
metadata:
  name: no-latest-tag
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    excludedNamespaces:
      - kube-system
      - gatekeeper-system
  parameters:
    tags:
      - "latest"
bash
kubectl apply -f /root/constraint-template.yaml
kubectl apply -f /root/constraint.yaml

Part 4: ImagePullSecrets for Multiple Namespaces

bash
for ns in default production staging; do
  kubectl create secret docker-registry harbor-cred \
    --docker-server=harbor.company.com \
    --docker-username=k8s-pull \
    --docker-password=pull-token-2024 \
    --namespace=$ns

  kubectl patch serviceaccount default -n $ns \
    -p '{"imagePullSecrets": [{"name": "harbor-cred"}]}'
done

Part 5: Verification

bash
# Test 1: AlwaysPullImages
kubectl run test-pull --image=harbor.company.com/library/nginx:1.25 --restart=Never
kubectl get pod test-pull -o jsonpath='{.spec.containers[0].imagePullPolicy}'
# Expected: Always

# Test 2: Unauthorized image (ImagePolicyWebhook)
kubectl run test-unauth --image=malicious.registry.com/bad:v1 --restart=Never
# Expected: Error from server (Forbidden)

# Test 3: Latest tag (Gatekeeper)
kubectl run test-latest --image=harbor.company.com/library/nginx:latest --restart=Never
# Expected: Error: admission webhook denied - disallowed tag 'latest'

# Test 4: Valid image with credentials
kubectl run test-valid --image=harbor.company.com/library/nginx:1.25 --restart=Never -n production
# Expected: pod/test-valid created

# Cleanup
kubectl delete pod test-pull --ignore-not-found
kubectl delete pod test-valid -n production --ignore-not-found

Exam Speed Tip

For combined multi-control questions, tackle them in this order:

  1. ImagePullSecrets (fastest, least risk)
  2. Gatekeeper constraints (medium complexity)
  3. AlwaysPullImages (quick API server flag change)
  4. ImagePolicyWebhook (most complex, do last)

This way, if you run out of time, you have already secured the most points.


General Exam Tips for Supply Chain Security

  1. Always back up the API server manifest before editing: cp /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/kube-apiserver.yaml.bak
  2. Know Trivy flags by heart: --severity, --format json, -o, --ignore-unfixed, --input, --scanners
  3. kubesec scan output is JSON -- use jq to extract the score: kubesec scan file.yaml | jq '.[0].score'
  4. ImagePullSecrets are namespace-scoped -- always double-check the namespace
  5. After modifying static pod manifests, wait 30-60 seconds for the kubelet to restart the Pod
  6. Use --dry-run=server to test whether an image would be admitted without actually creating the Pod (see the example below)
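
For example, tip 6 in action -- a server-side dry run exercises admission plugins and webhooks without persisting anything (image name is just an example):

bash
# Server-side dry-run runs admission (including webhooks) but creates nothing
kubectl run dry-test --image=harbor.company.com/library/nginx:1.25 --dry-run=server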
