
Self-Assessment Solutions

Detailed solutions for all 10 self-assessment questions. Each answer includes an explanation of why this matters for the CKS exam.


Solution 1: Linux Capabilities

The Problem: SYS_ADMIN should NOT be added back.

The configuration correctly drops ALL capabilities first (good), but then adds back SYS_ADMIN, which is one of the most dangerous capabilities in Linux. CAP_SYS_ADMIN grants an extremely broad set of privileges including:

  • Mounting and unmounting filesystems
  • Performing syslog operations
  • Configuring kernel parameters
  • Creating and entering namespaces (e.g., via unshare and setns)
  • Overriding resource limits

Adding SYS_ADMIN effectively gives the container near-root-level access to the host kernel, undermining the entire purpose of dropping ALL capabilities.

The Fix:

```yaml
securityContext:
  capabilities:
    drop:
      - ALL
    add:
      - NET_BIND_SERVICE  # Only allows binding to ports < 1024
```

Only add back the specific capabilities the application genuinely needs. NET_BIND_SERVICE is commonly needed (for web servers binding to port 80/443) and is relatively safe. SYS_ADMIN is almost never appropriate for a containerized application.
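
To confirm the fix, inspect the effective capability set of the running container. A quick check (the pod name secure-app and the expected bitmask are illustrative, assuming a root-running container with only NET_BIND_SERVICE added back):

```bash
# Print the effective capability bitmask of the container's init process
kubectl exec secure-app -- grep CapEff /proc/1/status

# Decode the bitmask on any machine with libcap installed; NET_BIND_SERVICE
# is capability number 10, so the expected value is 0x400
capsh --decode=0000000000000400
# => 0x0000000000000400=cap_net_bind_service
```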

CKS Exam Relevance

The CKS exam frequently asks you to audit pod security contexts and fix over-permissive capability settings. Know which capabilities are dangerous (SYS_ADMIN, SYS_PTRACE, NET_RAW, DAC_OVERRIDE) and which are commonly acceptable (NET_BIND_SERVICE, CHOWN).


Solution 2: Seccomp Profiles

The Three Profile Types

| Profile | Behavior | Use Case |
| --- | --- | --- |
| Unconfined | No syscall filtering at all. The container can make any syscall the kernel supports. | Never use in production. Only for debugging. |
| RuntimeDefault | Applies the container runtime's built-in seccomp profile. For Docker/containerd, this blocks ~60 dangerous syscalls (like mount, reboot, ptrace, unshare) while allowing ~300 safe syscalls. | Minimum baseline for production. |
| Localhost | Loads a custom seccomp profile from a JSON file on the node at /var/lib/kubelet/seccomp/. Allows fine-grained control over exactly which syscalls are permitted. | High-security workloads where you want to whitelist only the specific syscalls your application needs. |

Minimum Baseline: RuntimeDefault

RuntimeDefault should be the minimum for production because:

  1. It blocks syscalls that enable container escapes (e.g., mount, unshare)
  2. It requires zero custom configuration
  3. It is compatible with the vast majority of applications
  4. It is enforced by the Restricted Pod Security Standard

```yaml
# How to apply RuntimeDefault in a pod spec:
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: app
      image: myapp:latest
```
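
To verify the profile is actually enforced, check the Seccomp field of the container's init process; a value of 2 means filter mode, which is what RuntimeDefault should produce (substitute your pod's name):

```bash
# Seccomp: 0 = disabled, 1 = strict, 2 = filter mode
kubectl exec <pod-name> -- grep Seccomp /proc/1/status
```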

Solution 3: Network Policy Default Deny

The Manifest

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}    # Empty selector matches ALL pods in the namespace
  policyTypes:
    - Ingress
    - Egress
  # No ingress or egress rules = deny all traffic
```

Why This Is a Security Best Practice

  1. Zero-trust networking: No pod can communicate with any other pod (or external service) unless explicitly permitted by another NetworkPolicy.
  2. Blast radius reduction: If a container is compromised, the attacker cannot reach other services.
  3. Explicit traffic declarations: Every allowed communication path must be intentionally defined, making the network architecture auditable.
  4. CIS Benchmark requirement: CIS Kubernetes Benchmark recommends default-deny policies.

Key Detail

The podSelector: {} with no match labels selects all pods in the namespace. If you wrote podSelector: {matchLabels: {app: web}}, only pods with that label would be affected. For default-deny, you must use the empty selector.
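
One caveat worth knowing: default-deny egress also blocks DNS, which breaks name resolution for every pod in the namespace. A common companion policy (a sketch; the kubernetes.io/metadata.name label shown is set automatically on namespaces in recent Kubernetes versions) explicitly re-allows DNS egress:

```bash
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: production
spec:
  podSelector: {}          # all pods in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
EOF
```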


Solution 4: RBAC Least Privilege

Three Security Issues

Issue 1: cluster-admin role granted to a broad group. cluster-admin is the most powerful ClusterRole in Kubernetes. It grants unrestricted access to every resource in every namespace. Developers should never have cluster-admin access.

Issue 2: It is a ClusterRoleBinding, not a RoleBinding. This grants permissions across ALL namespaces. If developers only need access to specific namespaces (e.g., dev, staging), use namespace-scoped RoleBindings instead.

Issue 3: The subject is a Group with no audit trail. The Group developers could include any number of users. If one member performs a destructive action, there is no easy way to identify who did it. Individual subjects are more auditable.

Remediation

```yaml
# 1. Create a restricted Role (not ClusterRole)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer-role
  namespace: dev  # Scoped to one namespace
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "deployments", "services", "configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  # Explicitly NO access to: secrets, nodes, persistent volumes,
  # cluster-level resources, or delete operations
---
# 2. Bind to the namespace, not the cluster
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-team-binding
  namespace: dev
subjects:
  - kind: Group
    name: developers
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer-role
  apiGroup: rbac.authorization.k8s.io
---
# 3. Delete the dangerous ClusterRoleBinding
# kubectl delete clusterrolebinding dev-team-binding
```

Steps taken:

  1. Replaced ClusterRole: cluster-admin with a custom Role with minimum necessary permissions
  2. Changed from ClusterRoleBinding to namespace-scoped RoleBinding
  3. Removed delete from the verbs (if developers do not need it)
  4. Excluded sensitive resources like secrets
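
You can verify the new permissions with kubectl auth can-i, impersonating a member of the group (dev-user is a placeholder; actual group membership comes from your authentication provider):

```bash
kubectl auth can-i list pods   --as=dev-user --as-group=developers -n dev         # yes
kubectl auth can-i get secrets --as=dev-user --as-group=developers -n dev         # no
kubectl auth can-i list pods   --as=dev-user --as-group=developers -n production  # no
```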

Solution 5: TLS Certificate Inspection

Four Critical Fields

Running openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text -noout outputs certificate details. Check these fields:

| Field | What to Verify | Why It Matters |
| --- | --- | --- |
| Subject (CN) | Should be kube-apiserver | Confirms the certificate identifies the correct service |
| Issuer | Should match the cluster CA (kubernetes) | Verifies the cert was signed by your cluster's Certificate Authority, not an untrusted CA |
| Validity (Not Before / Not After) | Certificate should not be expired | Expired certificates cause API server failures and break cluster communication |
| Subject Alternative Names (SANs) | Must include: kubernetes, kubernetes.default, kubernetes.default.svc, the cluster IP (usually 10.96.0.1), and all API server node IPs | If any SAN is missing, clients connecting via that name/IP will get TLS errors |
| Key Usage / Extended Key Usage | Should include Digital Signature, Key Encipherment, Server Authentication | Confirms the cert can be used for TLS server authentication |
| Public Key Algorithm / Size | RSA 2048+ or ECDSA 256+ | Weak keys are vulnerable to brute-force attacks |

Practical Commands

```bash
# Check expiry
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -enddate

# Check SANs
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 "Subject Alternative Name"

# Check issuer
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -issuer

# Verify cert against CA
openssl verify -CAfile /etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/apiserver.crt
```
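
On kubeadm-provisioned clusters there is also a one-shot summary of certificate expiry across the whole control plane:

```bash
# Lists expiry and the signing CA for every kubeadm-managed certificate
kubeadm certs check-expiration
```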

Solution 6: Container Isolation

Each setting breaks a specific Linux namespace isolation:

| Setting | What It Does | Linux Isolation Bypassed | Security Risk |
| --- | --- | --- | --- |
| hostNetwork: true | Container uses the host's network namespace instead of its own | NET namespace | Container can see all host network traffic, bind to any host port, sniff network packets, and access services listening on localhost (including the kubelet API) |
| hostPID: true | Container can see all processes on the host | PID namespace | Container can list, signal, and potentially ptrace host processes and other containers' processes. Can read `/proc/<pid>/environ` to steal secrets from other processes |
| hostIPC: true | Container shares the host's IPC namespace | IPC namespace | Container can access shared memory segments of other processes, potentially reading sensitive data from other containers or host processes |
| privileged: true | Container runs with all Linux capabilities, has access to all host devices, and has no seccomp/AppArmor restrictions | ALL isolation | The container is essentially root on the host. It can mount the host filesystem, load kernel modules, modify iptables rules, and escape the container entirely |

DANGER

This pod configuration represents the worst-case security scenario. A single compromised container with these settings gives an attacker complete control over the host node and potentially the entire cluster (especially if the node hosts control-plane components).
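
To make the escape concrete: from a shell inside a privileged pod with hostPID: true, a single command yields a root shell in the host's own namespaces. This illustrative one-liner is the classic pattern seen in exam scenarios and real attacks:

```bash
# PID 1 is the host's init process because of hostPID: true;
# nsenter joins its mount, UTS, IPC, network, and PID namespaces
nsenter --target 1 --mount --uts --ipc --net --pid -- /bin/bash
```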


Solution 7: Kubernetes Secrets

Default Storage

By default, Kubernetes Secrets are stored in etcd base64-encoded but NOT encrypted. Base64 is an encoding scheme, not an encryption method -- anyone with access to etcd can decode all secrets instantly.

Two Methods to Improve Security at Rest

Method 1: Encryption at Rest (EncryptionConfiguration)

Configure the API server to encrypt secrets before writing them to etcd:

```yaml
# /etc/kubernetes/enc/enc.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>
      - identity: {} # Fallback: allows reading unencrypted secrets
```

Then add to the API server manifest:

```
--encryption-provider-config=/etc/kubernetes/enc/enc.yaml
```
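
After the API server restarts with this flag, only newly written secrets are encrypted. A common way to verify (the etcd certificate paths shown are kubeadm defaults):

```bash
# Write a fresh secret, then read its raw bytes directly from etcd
kubectl create secret generic test-enc --from-literal=key=value

ETCDCTL_API=3 etcdctl get /registry/secrets/default/test-enc \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key | hexdump -C | head
# Encrypted values begin with k8s:enc:aescbc:v1:key1 instead of plaintext

# Re-encrypt all pre-existing secrets by rewriting them through the API server
kubectl get secrets -A -o json | kubectl replace -f -
```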

Method 2: External Secret Store (KMS Provider or External Secrets Operator)

Use an external key management system (AWS KMS, Azure Key Vault, HashiCorp Vault) to manage the encryption keys. The API server communicates with a KMS plugin over a gRPC Unix socket, so the master encryption key never leaves the external KMS.

Bonus: Why Environment Variables Are Less Secure

  1. Process exposure: Environment variables are visible in /proc/<pid>/environ to any process with sufficient privileges to read it (a privileged pod with hostPID: true, for instance, can read them from other processes on the node)
  2. Child process inheritance: All child processes automatically inherit environment variables, increasing the exposure surface
  3. Logging leaks: Environment variables often end up in application logs, crash dumps, and error reports
  4. No rotation: Changing an env-var-based secret requires restarting the pod; file-mounted secrets can be updated by the kubelet without a restart

Best Practice

Mount secrets as files with restrictive permissions (readOnly: true) and set automountServiceAccountToken: false on pods that do not need to talk to the Kubernetes API.
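
A minimal sketch of that best practice (pod, secret, and image names are illustrative):

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: app-with-file-secret
spec:
  automountServiceAccountToken: false   # this pod never talks to the API server
  containers:
    - name: app
      image: myapp:latest
      volumeMounts:
        - name: creds
          mountPath: /etc/creds
          readOnly: true
  volumes:
    - name: creds
      secret:
        secretName: app-credentials
        defaultMode: 0400   # owner read-only
EOF
```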


Solution 8: Admission Controllers

Mutating vs. Validating Webhooks

| Aspect | Mutating Admission Webhook | Validating Admission Webhook |
| --- | --- | --- |
| Purpose | Modifies (mutates) the incoming object before it is persisted | Accepts or rejects the object without modifying it |
| Can change the object? | Yes -- can add, remove, or modify fields | No -- can only allow or deny |
| Processing order | Runs first | Runs second (after mutating webhooks) |
| Failure mode | Can be set to Ignore or Fail | Can be set to Ignore or Fail |

Processing Order

Order: Authentication -> Authorization (RBAC) -> Mutating Admission -> Schema Validation -> Validating Admission -> etcd

Security Examples

Mutating Webhook Example: Automatically inject a securityContext into every pod that does not have one:

  • If a pod is submitted without runAsNonRoot: true, the mutating webhook adds it
  • This ensures all pods run as non-root, even if the developer forgot to set it

Validating Webhook Example: Reject any pod that uses an image from a registry outside the approved list:

  • If a pod references docker.io/random-image:latest, the validating webhook denies it
  • Only images from registry.internal.company.com are permitted
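
A hypothetical ValidatingWebhookConfiguration wiring up such a registry check might look like this (the service name, namespace, and CA bundle are placeholders, and the webhook server behind it must implement the actual allow/deny logic):

```bash
cat <<'EOF' > registry-webhook.yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: image-registry-check
webhooks:
  - name: registry.validator.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail            # reject pods if the webhook is unreachable
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    clientConfig:
      service:
        namespace: webhook-system
        name: registry-validator
        path: /validate
      caBundle: <base64-encoded-CA>  # placeholder: CA that signed the webhook's cert
EOF
```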

Solution 9: AppArmor and Containers

1. Where Must the Profile Be Loaded?

The AppArmor profile must be loaded into the kernel of the node where the pod will run. It is not loaded into the container or the Kubernetes API server -- it is a node-level configuration.

```bash
# Load a profile on the node
sudo apparmor_parser -r /etc/apparmor.d/my-profile

# Verify it is loaded
sudo aa-status | grep my-profile
```
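
For reference, here is a minimal profile you could load -- essentially the deny-write example from the Kubernetes documentation; the profile name declared inside the file is what the pod must reference:

```bash
cat <<'EOF' | sudo tee /etc/apparmor.d/my-profile
#include <tunables/global>

profile my-profile flags=(attach_disconnected) {
  #include <abstractions/base>

  file,
  # Deny all file writes
  deny /** w,
}
EOF
sudo apparmor_parser -r /etc/apparmor.d/my-profile
```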

If using Kind, you must exec into the node container:

```bash
docker exec -it cks-lab-worker apparmor_parser -r /etc/apparmor.d/my-profile
```

2. How to Reference the Profile in a Pod Spec

Since Kubernetes v1.30, you use the securityContext.appArmorProfile field:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  containers:
    - name: app
      image: nginx:latest
      securityContext:
        appArmorProfile:
          type: Localhost
          localhostProfile: my-profile
```

For older Kubernetes versions (pre-1.30), the annotation-based approach is used:

```yaml
metadata:
  annotations:
    container.apparmor.security.beta.kubernetes.io/app: localhost/my-profile
```

3. What Happens If the Profile Does Not Exist?

If you reference an AppArmor profile that is not loaded on the node where the pod is scheduled, the pod will fail to start. The container runtime (containerd/CRI-O) will refuse to create the container because it cannot apply the requested profile.

The pod will show a status of Blocked (with the legacy annotation) or the container creation events will show an error like:

```
Error: failed to create containerd container: apparmor profile not found: my-profile
```

Scheduling Consideration

AppArmor profiles are per-node. If you have a 3-node cluster and only load the profile on 1 node, pods requiring that profile can only run on that node. Use node affinity or ensure all nodes have the profile loaded.


Solution 10: Audit Logging

1. Four Audit Levels

From least to most verbose:

| Level | What Gets Logged |
| --- | --- |
| None | Nothing is logged for this rule |
| Metadata | Request metadata only (user, timestamp, resource, verb) -- no request or response body |
| Request | Metadata + the request body (what the user sent) |
| RequestResponse | Metadata + request body + response body (what the API server returned) |

2. Audit Policy Rule

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Log Secret creation and deletion at RequestResponse level
  - level: RequestResponse
    resources:
      - group: ""
        resources: ["secrets"]
    verbs: ["create", "delete"]

  # Log Secret read access at Metadata level
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets"]
    verbs: ["get", "list", "watch"]

  # Default: log everything else at Metadata level
  - level: Metadata
    omitStages:
      - RequestReceived
```

Rule Ordering

Audit policy rules are processed in order and the first matching rule wins. This is why the more specific RequestResponse rule for create/delete must come before the general Metadata rule for get/list/watch. If you reversed them, the Metadata rule would match all Secret operations first.

3. API Server Configuration

To enable audit logging, add these flags to the API server manifest (/etc/kubernetes/manifests/kube-apiserver.yaml):

```yaml
spec:
  containers:
    - command:
        - kube-apiserver
        # ... existing flags ...
        - --audit-policy-file=/etc/kubernetes/audit/audit-policy.yaml
        - --audit-log-path=/var/log/kubernetes/audit/audit.log
        - --audit-log-maxage=30
        - --audit-log-maxbackup=10
        - --audit-log-maxsize=100
      volumeMounts:
        - name: audit-policy
          mountPath: /etc/kubernetes/audit/audit-policy.yaml
          readOnly: true
        - name: audit-logs
          mountPath: /var/log/kubernetes/audit
  volumes:
    - name: audit-policy
      hostPath:
        path: /etc/kubernetes/audit/audit-policy.yaml
        type: File
    - name: audit-logs
      hostPath:
        path: /var/log/kubernetes/audit
        type: DirectoryOrCreate
```

After saving the manifest, the kubelet detects the change and restarts the API server static pod automatically.
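
A quick sanity check after the restart (the API server static pod is named kube-apiserver-<node-name>):

```bash
# The static pod should return to Running
kubectl -n kube-system get pod kube-apiserver-<node-name>

# New JSON lines should be appearing in the audit log
sudo tail -n 2 /var/log/kubernetes/audit/audit.log
```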

Common Exam Mistake

If you misconfigure the audit policy file path or forget to mount it as a volume, the API server will fail to start. Always verify the volumeMounts and volumes match, and double-check the file exists at the specified hostPath.


Score Yourself

| Score | Assessment | Next Step |
| --- | --- | --- |
| 8-10 | You have a strong foundation. | Start Domain 1: Cluster Setup & Hardening |
| 5-7 | Some gaps to fill. | Review the CKA to CKS Bridge for the topics you missed |
| Below 5 | Significant gaps in prerequisites. | Spend a week with the bridge document and Linux security fundamentals before starting the CKS domains |
