Self-Assessment Solutions
Detailed solutions for all 10 self-assessment questions. Each answer includes an explanation of why this matters for the CKS exam.
Solution 1: Linux Capabilities
The Problem: SYS_ADMIN should NOT be added back.
The configuration correctly drops ALL capabilities first (good), but then adds back SYS_ADMIN, which is one of the most dangerous capabilities in Linux. CAP_SYS_ADMIN grants an extremely broad set of privileges including:
- Mounting and unmounting filesystems
- Performing syslog operations
- Configuring kernel parameters
- Using ptrace to debug other processes
- Overriding resource limits
Adding SYS_ADMIN effectively gives the container near-root-level access to the host kernel, undermining the entire purpose of dropping ALL capabilities.
The Fix:
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE # Only allows binding to ports < 1024

Only add back the specific capabilities the application genuinely needs. NET_BIND_SERVICE is commonly needed (for web servers binding to port 80/443) and is relatively safe. SYS_ADMIN is almost never appropriate for a containerized application.
CKS Exam Relevance
The CKS exam frequently asks you to audit pod security contexts and fix over-permissive capability settings. Know which capabilities are dangerous (SYS_ADMIN, SYS_PTRACE, NET_RAW, DAC_OVERRIDE) and which are commonly acceptable (NET_BIND_SERVICE, CHOWN).
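To audit which capabilities a running container actually holds, one quick check (the pod name app-pod is a placeholder; capsh requires libcap wherever you decode the value) is to read the effective capability bitmask:

```bash
# Effective capability bitmask of PID 1 inside the container
kubectl exec app-pod -- grep CapEff /proc/1/status

# Decode the bitmask into capability names (example value shown)
capsh --decode=00000000a80425fb
```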
Solution 2: Seccomp Profiles
The Three Profile Types
| Profile | Behavior | Use Case |
|---|---|---|
| Unconfined | No syscall filtering at all. The container can make any syscall the kernel supports. | Never use in production. Only for debugging. |
| RuntimeDefault | Applies the container runtime's built-in seccomp profile. For Docker/containerd, this blocks ~60 dangerous syscalls (like mount, reboot, ptrace, unshare) while allowing ~300 safe syscalls. | Minimum baseline for production. |
| Localhost | Loads a custom seccomp profile from a JSON file on the node at /var/lib/kubelet/seccomp/. Allows fine-grained control over exactly which syscalls are permitted. | High-security workloads where you want to whitelist only the specific syscalls your application needs. |
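For the Localhost type, the JSON profile lives on the node under /var/lib/kubelet/seccomp/ and is referenced by its relative path. A minimal sketch (the allowed syscall list is illustrative only and would not be sufficient for a real application):

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "openat", "close", "exit", "exit_group", "futex", "nanosleep"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

If that file were saved as profiles/audit-app.json, the pod spec would reference it like this:

```yaml
securityContext:
  seccompProfile:
    type: Localhost
    localhostProfile: profiles/audit-app.json   # relative to /var/lib/kubelet/seccomp/
```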
Minimum Baseline: RuntimeDefault
RuntimeDefault should be the minimum for production because:
- It blocks syscalls that enable container escapes (e.g., mount, unshare)
- It requires zero custom configuration
- It is compatible with the vast majority of applications
- It is enforced by the Restricted Pod Security Standard
# How to apply RuntimeDefault in a pod spec:
spec:
securityContext:
seccompProfile:
type: RuntimeDefault
containers:
- name: app
image: myapp:latest

Solution 3: Network Policy Default Deny
The Manifest
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-all
namespace: production
spec:
podSelector: {} # Empty selector matches ALL pods in the namespace
policyTypes:
- Ingress
- Egress
# No ingress or egress rules = deny all traffic

Why This Is a Security Best Practice
- Zero-trust networking: No pod can communicate with any other pod (or external service) unless explicitly permitted by another NetworkPolicy.
- Blast radius reduction: If a container is compromised, the attacker cannot reach other services.
- Explicit traffic declarations: Every allowed communication path must be intentionally defined, making the network architecture auditable.
- CIS Benchmark requirement: CIS Kubernetes Benchmark recommends default-deny policies.
Key Detail
The podSelector: {} with no match labels selects all pods in the namespace. If you wrote podSelector: {matchLabels: {app: web}}, only pods with that label would be affected. For default-deny, you must use the empty selector.
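With default-deny in place, each allowed path then needs its own policy. A sketch of one such allow rule (the frontend/backend labels and port 8080 are assumptions for illustration):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend            # policy applies to backend pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend       # only frontend pods may connect
    ports:
    - protocol: TCP
      port: 8080
```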
Solution 4: RBAC Least Privilege
Three Security Issues
Issue 1: cluster-admin role granted to a broad group. cluster-admin is the most powerful ClusterRole in Kubernetes. It grants unrestricted access to every resource in every namespace. Developers should never have cluster-admin access.
Issue 2: It is a ClusterRoleBinding, not a RoleBinding. This grants permissions across ALL namespaces. If developers only need access to specific namespaces (e.g., dev, staging), use namespace-scoped RoleBindings instead.
Issue 3: The subject is a Group with no audit trail. The developers group could include any number of users. If one member performs a destructive action, there is no easy way to identify who did it. Individual subjects are more auditable.
Remediation
# 1. Create a restricted Role (not ClusterRole)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: developer-role
namespace: dev # Scoped to one namespace
rules:
- apiGroups: ["", "apps"]
resources: ["pods", "deployments", "services", "configmaps"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
# Explicitly NO access to: secrets, nodes, persistent volumes,
# cluster-level resources, or delete operations
---
# 2. Bind to the namespace, not the cluster
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: dev-team-binding
namespace: dev
subjects:
- kind: Group
name: developers
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role
name: developer-role
apiGroup: rbac.authorization.k8s.io
---
# 3. Delete the dangerous ClusterRoleBinding
# kubectl delete clusterrolebinding dev-team-binding

Steps taken:
- Replaced ClusterRole: cluster-admin with a custom Role with minimum necessary permissions
- Changed from ClusterRoleBinding to namespace-scoped RoleBinding
- Removed delete from the verbs (if developers do not need it)
- Excluded sensitive resources like secrets
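To confirm the new binding behaves as intended, permissions can be spot-checked with kubectl auth can-i (the user jane is a placeholder; the group matches the binding above):

```bash
# Should be allowed in the dev namespace
kubectl auth can-i create deployments -n dev --as jane --as-group developers

# Should be denied: secrets are excluded and nothing is granted cluster-wide
kubectl auth can-i get secrets -n dev --as jane --as-group developers
kubectl auth can-i list pods --all-namespaces --as jane --as-group developers
```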
Solution 5: TLS Certificate Inspection
Four Critical Fields
Running openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text -noout prints the certificate details. The four critical fields are the Subject, Issuer, Validity period, and Subject Alternative Names; the key-usage and key-strength fields below are also worth checking:
| Field | What to Verify | Why It Matters |
|---|---|---|
| Subject (CN) | Should be kube-apiserver | Confirms the certificate identifies the correct service |
| Issuer | Should match the cluster CA (kubernetes) | Verifies the cert was signed by your cluster's Certificate Authority, not an untrusted CA |
| Validity (Not Before / Not After) | Certificate should not be expired | Expired certificates cause API server failures and break cluster communication |
| Subject Alternative Names (SANs) | Must include: kubernetes, kubernetes.default, kubernetes.default.svc, the cluster IP (usually 10.96.0.1), and all API server node IPs | If any SAN is missing, clients connecting via that name/IP will get TLS errors |
| Key Usage / Extended Key Usage | Should include Digital Signature, Key Encipherment, Server Authentication | Confirms the cert can be used for TLS server authentication |
| Public Key Algorithm / Size | RSA 2048+ or ECDSA 256+ | Weak keys are vulnerable to brute-force attacks |
Practical Commands
# Check expiry
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -enddate
# Check SANs
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 "Subject Alternative Name"
# Check issuer
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -issuer
# Verify cert against CA
openssl verify -CAfile /etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/apiserver.crt

Solution 6: Container Isolation
Each setting breaks a specific Linux namespace isolation:
| Setting | What It Does | Linux Isolation Bypassed | Security Risk |
|---|---|---|---|
hostNetwork: true | Container uses the host's network namespace instead of its own | NET namespace | Container can see all host network traffic, bind to any host port, sniff network packets, and access services listening on localhost (including the kubelet API) |
hostPID: true | Container can see all processes on the host | PID namespace | Container can list, signal, and potentially ptrace host processes and other containers' processes. Can read /proc/<pid>/environ to steal secrets from other processes |
hostIPC: true | Container shares the host's IPC namespace | IPC namespace | Container can access shared memory segments of other processes, potentially reading sensitive data from other containers or host processes |
privileged: true | Container runs with all Linux capabilities, has access to all host devices, and has no seccomp/AppArmor restrictions | ALL isolation | The container is essentially root on the host. It can mount the host filesystem, load kernel modules, modify iptables rules, and escape the container entirely |
DANGER
This pod configuration represents the worst-case security scenario. A single compromised container with these settings gives an attacker complete control over the host node and potentially the entire cluster (especially if the node hosts control-plane components).
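One quick way to find pods that set any of these fields across the cluster (assuming jq is available on your workstation):

```bash
kubectl get pods -A -o json | jq -r '
  .items[]
  | select(.spec.hostNetwork == true or .spec.hostPID == true or .spec.hostIPC == true
           or any(.spec.containers[]; .securityContext.privileged == true))
  | "\(.metadata.namespace)/\(.metadata.name)"'
```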
Solution 7: Kubernetes Secrets
Default Storage
By default, Kubernetes Secrets are stored in etcd base64-encoded but NOT encrypted. Base64 is an encoding scheme, not an encryption method -- anyone with access to etcd can decode all secrets instantly.
Two Methods to Improve Security at Rest
Method 1: Encryption at Rest (EncryptionConfiguration)
Configure the API server to encrypt secrets before writing them to etcd:
# /etc/kubernetes/enc/enc.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
- secrets
providers:
- aescbc:
keys:
- name: key1
secret: <base64-encoded-32-byte-key>
- identity: {} # Fallback: allows reading unencrypted secrets

Then add to the API server manifest:
--encryption-provider-config=/etc/kubernetes/enc/enc.yaml

Method 2: External Secret Store (KMS Provider or External Secrets Operator)
Use an external key management system (AWS KMS, Azure Key Vault, HashiCorp Vault) to manage the encryption keys. The API server talks to a KMS plugin over a local gRPC socket, and the plugin delegates to the external KMS, so the key-encryption key never leaves the KMS.
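After enabling encryption at rest (Method 1), existing Secrets must be rewritten so they get encrypted, and the result can be checked by reading a key straight out of etcd. The certificate paths below assume a kubeadm layout, and my-secret in the default namespace is a placeholder:

```bash
# Re-write all existing secrets so they are stored encrypted
kubectl get secrets --all-namespaces -o json | kubectl replace -f -

# Read a secret directly from etcd; the stored value should be prefixed
# with k8s:enc:aescbc:v1:key1 instead of readable base64
ETCDCTL_API=3 etcdctl \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert /etc/kubernetes/pki/etcd/server.crt \
  --key /etc/kubernetes/pki/etcd/server.key \
  get /registry/secrets/default/my-secret | hexdump -C | head
```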
Bonus: Why Environment Variables Are Less Secure
- Process exposure: Environment variables are visible in /proc/<pid>/environ to any process able to read it (a container running with hostPID: true and sufficient privileges can read the environment of other containers' processes)
- Child process inheritance: All child processes automatically inherit environment variables, increasing the exposure surface
- Logging leaks: Environment variables often end up in application logs, crash dumps, and error reports
- No rotation: Changing an env-var-based secret requires restarting the pod; file-mounted secrets can be updated by the kubelet without a restart
Best Practice
Mount secrets as files with restrictive permissions (readOnly: true) and set automountServiceAccountToken: false on pods that do not need to talk to the Kubernetes API.
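A sketch of that pattern, with placeholder names for the secret and mount path:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  automountServiceAccountToken: false   # this pod never calls the Kubernetes API
  containers:
  - name: app
    image: myapp:1.0
    volumeMounts:
    - name: db-creds
      mountPath: /etc/db-creds
      readOnly: true
  volumes:
  - name: db-creds
    secret:
      secretName: db-credentials
      defaultMode: 0400                  # owner read-only inside the container
```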
Solution 8: Admission Controllers
Mutating vs. Validating Webhooks
| Aspect | Mutating Admission Webhook | Validating Admission Webhook |
|---|---|---|
| Purpose | Modifies (mutates) the incoming object before it is persisted | Accepts or rejects the object without modifying it |
| Can change the object? | Yes -- can add, remove, or modify fields | No -- can only allow or deny |
| Processing order | Runs first | Runs second (after mutating webhooks) |
| Failure mode | Can be set to Ignore or Fail | Can be set to Ignore or Fail |
Processing Order
Order: Authentication -> Authorization (RBAC) -> Mutating Admission -> Schema Validation -> Validating Admission -> etcd
Security Examples
Mutating Webhook Example: Automatically inject a securityContext into every pod that does not have one:
- If a pod is submitted without runAsNonRoot: true, the mutating webhook adds it
- This ensures all pods run as non-root, even if the developer forgot to set it
Validating Webhook Example: Reject any pod that uses an image from a registry outside the approved list:
- If a pod references docker.io/random-image:latest, the validating webhook denies it
- Only images from registry.internal.company.com are permitted
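A sketch of how such a validating webhook could be registered; the service name, namespace, and caBundle are placeholders, and the actual registry check is implemented in the webhook server itself:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: image-registry-check
webhooks:
- name: images.registry.internal.company.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail               # reject pods if the webhook is unreachable
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  clientConfig:
    service:
      namespace: webhook-system
      name: image-check
      path: /validate
    caBundle: <base64-encoded-CA-certificate>
```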
Solution 9: AppArmor and Containers
1. Where Must the Profile Be Loaded?
The AppArmor profile must be loaded into the kernel of the node where the pod will run. It is not loaded into the container or the Kubernetes API server -- it is a node-level configuration.
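A minimal profile to load could look like the sketch below; this deny-write example follows the one in the Kubernetes documentation, and the name my-profile matches the commands that follow:

```
#include <tunables/global>

profile my-profile flags=(attach_disconnected) {
  #include <abstractions/base>

  file,

  # Deny all file writes
  deny /** w,
}
```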
# Load a profile on the node
sudo apparmor_parser -r /etc/apparmor.d/my-profile
# Verify it is loaded
sudo aa-status | grep my-profile

If using Kind, you must exec into the node container:
docker exec -it cks-lab-worker apparmor_parser -r /etc/apparmor.d/my-profile

2. How to Reference the Profile in a Pod Spec
As of Kubernetes v1.30+, you use the securityContext.appArmorProfile field:
apiVersion: v1
kind: Pod
metadata:
name: secure-pod
spec:
containers:
- name: app
image: nginx:latest
securityContext:
appArmorProfile:
type: Localhost
localhostProfile: my-profile

For older Kubernetes versions (pre-1.30), the annotation-based approach is used:
metadata:
annotations:
container.apparmor.security.beta.kubernetes.io/app: localhost/my-profile3. What Happens If the Profile Does Not Exist?
If you reference an AppArmor profile that is not loaded on the node where the pod is scheduled, the pod will fail to start. The container runtime (containerd/CRI-O) will refuse to create the container because it cannot apply the requested profile.
The pod will show status Blocked or the event will show an error like:
Error: failed to create containerd container: apparmor profile not found: my-profile

Scheduling Consideration
AppArmor profiles are per-node. If you have a 3-node cluster and only load the profile on 1 node, pods requiring that profile can only run on that node. Use node affinity or ensure all nodes have the profile loaded.
Solution 10: Audit Logging
1. Four Audit Levels
From least to most verbose:
| Level | What Gets Logged |
|---|---|
| None | Nothing is logged for this rule |
| Metadata | Request metadata only (user, timestamp, resource, verb) -- no request or response body |
| Request | Metadata + the request body (what the user sent) |
| RequestResponse | Metadata + request body + response body (what the API server returned) |
2. Audit Policy Rule
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Log Secret creation and deletion at RequestResponse level
- level: RequestResponse
resources:
- group: ""
resources: ["secrets"]
verbs: ["create", "delete"]
# Log Secret read access at Metadata level
- level: Metadata
resources:
- group: ""
resources: ["secrets"]
verbs: ["get", "list", "watch"]
# Default: log everything else at Metadata level
- level: Metadata
omitStages:
- RequestReceived

Rule Ordering
Audit policy rules are processed in order and the first matching rule wins. This is why the more specific RequestResponse rule for create/delete must come before the general Metadata rule for get/list/watch. If you reversed them, the Metadata rule would match all Secret operations first.
3. API Server Configuration
To enable audit logging, add these flags to the API server manifest (/etc/kubernetes/manifests/kube-apiserver.yaml):
spec:
containers:
- command:
- kube-apiserver
# ... existing flags ...
- --audit-policy-file=/etc/kubernetes/audit/audit-policy.yaml
- --audit-log-path=/var/log/kubernetes/audit/audit.log
- --audit-log-maxage=30
- --audit-log-maxbackup=10
- --audit-log-maxsize=100
volumeMounts:
- name: audit-policy
mountPath: /etc/kubernetes/audit/audit-policy.yaml
readOnly: true
- name: audit-logs
mountPath: /var/log/kubernetes/audit
volumes:
- name: audit-policy
hostPath:
path: /etc/kubernetes/audit/audit-policy.yaml
type: File
- name: audit-logs
hostPath:
path: /var/log/kubernetes/audit
type: DirectoryOrCreate

After saving the manifest, the kubelet detects the change and restarts the API server static pod automatically.
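Once the API server is back up, you can generate an event and confirm it lands in the log; the jq filter assumes the policy above and that jq is installed on the control-plane node:

```bash
# Trigger auditable events
kubectl create secret generic audit-test --from-literal=k=v
kubectl delete secret audit-test

# Look for the RequestResponse entries for those operations
sudo tail -n 200 /var/log/kubernetes/audit/audit.log | jq '
  select(.objectRef.resource == "secrets" and (.verb == "create" or .verb == "delete"))
  | {verb, level, user: .user.username}'
```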
Common Exam Mistake
If you misconfigure the audit policy file path or forget to mount it as a volume, the API server will fail to start. Always verify the volumeMounts and volumes match, and double-check the file exists at the specified hostPath.
Score Yourself
| Score | Assessment | Next Step |
|---|---|---|
| 8-10 | You have a strong foundation. | Start Domain 1: Cluster Setup & Hardening |
| 5-7 | Some gaps to fill. | Review the CKA to CKS Bridge for the topics you missed |
| Below 5 | Significant gaps in prerequisites. | Spend a week with the bridge document and Linux security fundamentals before starting CKS domains |