CIS Benchmarks and kube-bench
What Are CIS Benchmarks?
The Center for Internet Security (CIS) publishes comprehensive security benchmarks for Kubernetes. These benchmarks provide a prescriptive set of recommendations for configuring Kubernetes to support a strong security posture. They cover every component of a Kubernetes cluster: API server, etcd, controller manager, scheduler, kubelet, and more.
CKS Exam Relevance
The CKS exam tests your ability to run kube-bench, interpret its output, and remediate failures. You will not need to memorize every CIS check, but you must understand the most critical ones and know how to fix them by editing configuration files and static pod manifests.
kube-bench Overview
kube-bench is an open-source Go application that checks whether Kubernetes is deployed securely by running the checks documented in the CIS Benchmark. It is developed by Aqua Security.
Installation
# Download the latest release binary
curl -L https://github.com/aquasecurity/kube-bench/releases/download/v0.8.0/kube-bench_0.8.0_linux_amd64.tar.gz \
-o kube-bench.tar.gz
tar xzf kube-bench.tar.gz
sudo mv kube-bench /usr/local/bin/
# Or run as a container
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml

Running kube-bench
# Run all checks on a control plane node
kube-bench run --targets=master
# Run all checks on a worker node
kube-bench run --targets=node
# Run specific section (API server)
kube-bench run --targets=master --check=1.2
# Run a specific check
kube-bench run --targets=master --check=1.2.6
# Output as JSON for processing
kube-bench run --targets=master --json
# Run as a Kubernetes Job
kubectl apply -f kube-bench-job.yaml
kubectl logs job/kube-bench

Running in Kind Clusters
In Kind clusters, you need to exec into the control plane container first:
docker exec -it kind-control-plane bash
# Then run kube-bench inside the container
kube-bench run --targets=master

Interpreting kube-bench Output
kube-bench produces output with four result categories:
| Status | Meaning |
|---|---|
| [PASS] | The check passed -- configuration is secure |
| [FAIL] | The check failed -- remediation required |
| [WARN] | Manual check needed -- cannot be automatically verified |
| [INFO] | Informational -- no action needed |
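When you save a run to a file, these statuses are easy to tally. A minimal sketch using grep; the sample report lines below are illustrative stand-ins for real output from `kube-bench run > /tmp/kb-report.txt`:

```shell
#!/bin/sh
# Tally kube-bench statuses from a saved text report.
# The sample lines below stand in for a real report.
cat > /tmp/kb-report.txt <<'EOF'
[PASS] 1.1.1 Ensure that the API server pod specification file permissions are set to 600 or more restrictive
[FAIL] 1.1.3 Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive
[WARN] 1.2.29 Ensure that the --encryption-provider-config argument is set as appropriate
[FAIL] 1.2.6 Ensure that the --authorization-mode argument is not set to AlwaysAllow
EOF
# Count each status by matching the bracketed tag at line start
for s in PASS FAIL WARN INFO; do
  printf '%s: %s\n' "$s" "$(grep -c "^\[$s\]" /tmp/kb-report.txt)"
done
```

The same tallies appear in kube-bench's own `== Summary ==` block; grepping a saved report is mainly useful when you want the counts per section or in a script.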
Example Output
[INFO] 1 Control Plane Security Configuration
[INFO] 1.1 Control Plane Node Configuration Files
[PASS] 1.1.1 Ensure that the API server pod specification file permissions are set to 600 or more restrictive
[PASS] 1.1.2 Ensure that the API server pod specification file ownership is set to root:root
[FAIL] 1.1.3 Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive
[PASS] 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root
== Remediations master ==
1.1.3 Run the below command (based on the file location on your system) on the control plane node.
chmod 600 /etc/kubernetes/manifests/kube-controller-manager.yaml
== Summary master ==
45 checks PASS
10 checks FAIL
12 checks WARN
0 checks INFO

Critical CIS Checks and Remediation
API Server Checks
1.2.1 -- Ensure anonymous auth is disabled
# Check current setting
ps aux | grep kube-apiserver | grep anonymous-auth
# Remediation: Edit the API server manifest
sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml

Add or modify the flag:
spec:
  containers:
  - command:
    - kube-apiserver
    - --anonymous-auth=false # Add this flag
    # ... other flags

WARNING
Disabling anonymous auth may break health checks. Some components rely on anonymous access to /healthz and /readyz. In the exam, follow the instructions exactly. In production, use --anonymous-auth=false with appropriate RBAC for health endpoints.
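If the flag is missing entirely rather than mis-set, it must be inserted into the container's command list. A sketch using GNU sed against a sample manifest; the real file is /etc/kubernetes/manifests/kube-apiserver.yaml, and in practice you should edit a copy and review it before moving it into place:

```shell
#!/bin/sh
# Insert --anonymous-auth=false right after the kube-apiserver command entry.
# A sample manifest is used here so the sketch is self-contained.
cat > /tmp/kube-apiserver.yaml <<'EOF'
spec:
  containers:
  - command:
    - kube-apiserver
    - --authorization-mode=Node,RBAC
EOF
# GNU sed: append the new flag line after the binary name,
# reusing the matched line's indentation (captured in \1).
sed -i 's/^\( *\)- kube-apiserver$/&\n\1- --anonymous-auth=false/' /tmp/kube-apiserver.yaml
grep -- '--anonymous-auth=false' /tmp/kube-apiserver.yaml
```

In the exam, editing the manifest in vim is usually faster; the sed form is mainly useful when you have to make the same change on several nodes.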
1.2.6 -- Ensure the API server does not use the AlwaysAllow authorization mode
# Check current authorization modes
ps aux | grep kube-apiserver | grep authorization-mode

Ensure the flag is set to:
- --authorization-mode=Node,RBAC

Never use:
# DANGEROUS - never use in production
- --authorization-mode=AlwaysAllow

1.2.16 -- Ensure admission controllers are properly configured
spec:
  containers:
  - command:
    - kube-apiserver
    - --enable-admission-plugins=NodeRestriction,PodSecurity
    # Ensure these are NOT in --disable-admission-plugins

1.2.18 -- Ensure audit logging is enabled
spec:
  containers:
  - command:
    - kube-apiserver
    - --audit-policy-file=/etc/kubernetes/audit/audit-policy.yaml
    - --audit-log-path=/var/log/kubernetes/audit/audit.log
    - --audit-log-maxage=30
    - --audit-log-maxbackup=10
    - --audit-log-maxsize=100

etcd Checks
1.5.1 -- Ensure etcd uses TLS for client connections
# In etcd manifest or configuration
- --cert-file=/etc/kubernetes/pki/etcd/server.crt
- --key-file=/etc/kubernetes/pki/etcd/server.key
- --client-cert-auth=true
- --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt

1.5.2 -- Ensure etcd uses TLS for peer connections
- --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
- --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
- --peer-client-cert-auth=true
- --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt

Controller Manager Checks
1.3.2 -- Ensure the controller manager uses service account credentials
spec:
  containers:
  - command:
    - kube-controller-manager
    - --use-service-account-credentials=true

1.3.6 -- Ensure RotateKubeletServerCertificate is enabled
- --feature-gates=RotateKubeletServerCertificate=true

Scheduler Checks
1.4.1 -- Ensure profiling is disabled
spec:
  containers:
  - command:
    - kube-scheduler
    - --profiling=false

Kubelet Checks
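The kubelet checks (section 2.1) are remediated in the kubelet config file, whose location comes from the kubelet's --config flag. A sketch that extracts that path; the sample command line below stands in for real `ps aux | grep kubelet` output on a node:

```shell
#!/bin/sh
# Extract the kubelet config file path from its command line.
# The sample line stands in for real `ps` output on a worker node.
line='/usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml'
# Split the flags onto separate lines, then print the --config value
cfg=$(printf '%s\n' "$line" | tr ' ' '\n' | sed -n 's/^--config=//p')
echo "kubelet config: $cfg"
```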
2.1.1 -- Ensure anonymous authentication is disabled
Edit the kubelet configuration file (usually at /var/lib/kubelet/config.yaml):
authentication:
  anonymous:
    enabled: false # Disable anonymous auth
  webhook:
    enabled: true # Enable webhook authentication
authorization:
  mode: Webhook # Use Webhook authorization (not AlwaysAllow)

Then restart the kubelet:
sudo systemctl restart kubelet

2.1.2 -- Ensure authorization mode is not AlwaysAllow
authorization:
  mode: Webhook # Not AlwaysAllow

2.1.4 -- Ensure read-only port is disabled
readOnlyPort: 0 # Disable the read-only port (default 10255)

File Permissions Checks
Many CIS checks verify file permissions. Here are the expected values:
| File | Expected Permissions | Expected Owner |
|---|---|---|
| /etc/kubernetes/manifests/kube-apiserver.yaml | 600 | root:root |
| /etc/kubernetes/manifests/kube-controller-manager.yaml | 600 | root:root |
| /etc/kubernetes/manifests/kube-scheduler.yaml | 600 | root:root |
| /etc/kubernetes/manifests/etcd.yaml | 600 | root:root |
| /etc/kubernetes/admin.conf | 600 | root:root |
| /etc/kubernetes/scheduler.conf | 600 | root:root |
| /etc/kubernetes/controller-manager.conf | 600 | root:root |
| /var/lib/kubelet/config.yaml | 600 | root:root |
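Before fixing anything, you can audit which files would fail. A sketch using GNU stat, run against a throwaway sample directory so it is self-contained; on a real control plane node, point dir at /etc/kubernetes/manifests:

```shell
#!/bin/sh
# Flag any manifest whose mode grants group/other access -- the usual
# kube-bench 1.1.x failure. Sample directory used here for illustration.
dir=/tmp/manifests
mkdir -p "$dir"
touch "$dir/kube-apiserver.yaml" "$dir/etcd.yaml"
chmod 644 "$dir/kube-apiserver.yaml"   # deliberately too permissive
chmod 600 "$dir/etcd.yaml"
for f in "$dir"/*.yaml; do
  mode=$(stat -c '%a' "$f")
  case "$mode" in
    ?00) echo "ok    $mode $f" ;;      # no group/other bits set
    *)   echo "LOOSE $mode $f" ;;
  esac
done
```

The pattern match only catches group/other access; it is a quick triage, not a full CIS check, so still run kube-bench to confirm.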
# Fix permissions for all manifests
sudo chmod 600 /etc/kubernetes/manifests/*.yaml
sudo chown root:root /etc/kubernetes/manifests/*.yaml
# Fix kubeconfig permissions
sudo chmod 600 /etc/kubernetes/admin.conf
sudo chmod 600 /etc/kubernetes/scheduler.conf
sudo chmod 600 /etc/kubernetes/controller-manager.conf

Running kube-bench as a Kubernetes Job
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-bench
spec:
  template:
    metadata:
      labels:
        app: kube-bench
    spec:
      hostPID: true
      containers:
      - name: kube-bench
        image: docker.io/aquasec/kube-bench:v0.8.0
        command: ["kube-bench", "run", "--targets", "node"]
        volumeMounts:
        - name: var-lib-kubelet
          mountPath: /var/lib/kubelet
          readOnly: true
        - name: etc-systemd
          mountPath: /etc/systemd
          readOnly: true
        - name: etc-kubernetes
          mountPath: /etc/kubernetes
          readOnly: true
      restartPolicy: Never
      volumes:
      - name: var-lib-kubelet
        hostPath:
          path: /var/lib/kubelet
      - name: etc-systemd
        hostPath:
          path: /etc/systemd
      - name: etc-kubernetes
        hostPath:
          path: /etc/kubernetes

# Apply and check results
kubectl apply -f kube-bench-job.yaml
kubectl wait --for=condition=complete job/kube-bench --timeout=60s
kubectl logs job/kube-bench

Remediation Workflow
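The loop is: run kube-bench, collect the failing check ids, remediate each one, then re-run only those checks with --check. A sketch of the collection step; the sample report lines below are illustrative stand-ins for a saved run:

```shell
#!/bin/sh
# Collect the ids of failed checks so each can be remediated and then
# re-tested individually with `kube-bench run --targets=master --check <id>`.
cat > /tmp/kb-fails.txt <<'EOF'
[PASS] 1.1.2 Ensure that the API server pod specification file ownership is set to root:root
[FAIL] 1.1.3 Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive
[FAIL] 1.2.6 Ensure that the --authorization-mode argument includes RBAC
EOF
# The check id is the second field of each [FAIL] line
grep '^\[FAIL\]' /tmp/kb-fails.txt | awk '{print $2}'
```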
Exam Strategy
When asked to fix kube-bench findings:
- Run kube-bench and note the FAIL items
- Read the Remediations section at the bottom of the output -- it tells you exactly what to do
- Edit the appropriate file
- Wait for the component to restart (static pods) or restart the service (kubelet)
- Re-run kube-bench to verify the fix
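For long reports, pulling the remediation text for a single check out of the Remediations section is handy. A sketch with awk; both the sample report and the id value are illustrative:

```shell
#!/bin/sh
# Print the remediation text for one check id from a saved report.
# The sample mimics kube-bench's "== Remediations ==" section.
cat > /tmp/kb-remed.txt <<'EOF'
== Remediations master ==
1.1.3 Run the below command (based on the file location on your system) on the control plane node.
chmod 600 /etc/kubernetes/manifests/kube-controller-manager.yaml

1.2.6 Edit the API server pod specification file and set --authorization-mode to Node,RBAC.

== Summary master ==
EOF
id=1.1.3
# Start printing at the line whose first field is the id,
# stop at the first blank line after it.
awk -v id="$id" '$1 == id {f=1} f && /^$/ {exit} f' /tmp/kb-remed.txt
```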
Verifying Changes
After editing a static pod manifest, the kubelet will automatically detect the change and restart the pod. Monitor the restart:
# Watch API server restart (quote the command so the pipe runs inside watch)
watch "crictl ps | grep kube-apiserver"
# Check that the API server answers again (componentstatuses is deprecated)
kubectl get --raw='/readyz'
# Verify specific flags are applied
ps aux | grep kube-apiserver | tr ' ' '\n' | grep -E "anonymous|authorization|admission|audit"
# Check kubelet flags
ps aux | grep kubelet | tr ' ' '\n' | grep -E "anonymous|authorization|read-only"

Do Not Break the Cluster
When editing static pod manifests, always make a backup first:
sudo cp /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/kube-apiserver.yaml.bak

If the API server does not come back up after your changes, restore the backup immediately. In the exam, a broken API server means you cannot complete any further tasks.
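The guard can be rehearsed end to end. A sketch on a throwaway file; /tmp/demo-manifest.yaml is illustrative, and on a node the file would be the API server manifest itself:

```shell
#!/bin/sh
# Backup-then-restore guard, demonstrated on a sample file.
f=/tmp/demo-manifest.yaml
printf 'known-good manifest\n' > "$f"
cp "$f" "$f.bak"                # backup BEFORE editing
printf 'broken edit\n' > "$f"   # a change that breaks the API server
# ...the API server never comes back up; put the backup straight back:
cp "$f.bak" "$f"
cat "$f"
```

Keep the backup outside /etc/kubernetes/manifests: the kubelet treats every YAML file in that directory as a static pod, so a .bak copy left there would be launched as a second pod.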