CIS Benchmarks and kube-bench

What Are CIS Benchmarks?

The Center for Internet Security (CIS) publishes comprehensive security benchmarks for Kubernetes. These benchmarks provide a prescriptive set of recommendations for configuring Kubernetes to support a strong security posture. They cover every component of a Kubernetes cluster: API server, etcd, controller manager, scheduler, kubelet, and more.

CKS Exam Relevance

The CKS exam tests your ability to run kube-bench, interpret its output, and remediate failures. You will not need to memorize every CIS check, but you must understand the most critical ones and know how to fix them by editing configuration files and static pod manifests.

CIS Benchmark Categories

The CIS Kubernetes Benchmark is organized into five sections: Control Plane Components (section 1: API server, controller manager, scheduler, and their configuration files), etcd (section 2), Control Plane Configuration (section 3: authentication and audit logging), Worker Nodes (section 4: kubelet and worker node configuration files), and Policies (section 5: RBAC, service accounts, Pod Security, and network policies). kube-bench's --targets flag (master, etcd, controlplane, node, policies) maps onto these areas; the policy checks are largely manual.

kube-bench Overview

kube-bench is an open-source Go application that checks whether Kubernetes is deployed securely by running the checks documented in the CIS Benchmark. It is developed by Aqua Security.

Installation

bash
# Download the latest release binary
curl -L https://github.com/aquasecurity/kube-bench/releases/download/v0.8.0/kube-bench_0.8.0_linux_amd64.tar.gz \
  -o kube-bench.tar.gz
tar xzf kube-bench.tar.gz
sudo mv kube-bench /usr/local/bin/

# Or run it in-cluster as a Job using the upstream manifest
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml

Running kube-bench

bash
# Run all checks on a control plane node
kube-bench run --targets=master

# Run all checks on a worker node
kube-bench run --targets=node

# Run specific section (API server)
kube-bench run --targets=master --check=1.2

# Run a specific check
kube-bench run --targets=master --check=1.2.6

# Output as JSON for processing
kube-bench run --targets=master --json

# Run as a Kubernetes Job
kubectl apply -f kube-bench-job.yaml
kubectl logs job/kube-bench

Running in Kind Clusters

In Kind clusters, you need to exec into the control plane container first:

bash
docker exec -it kind-control-plane bash
# Then run kube-bench inside the container
kube-bench run --targets=master
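
Note that kube-bench is not preinstalled in the kind node image. A minimal sketch for getting it there is to copy the release binary and its cfg directory (the benchmark definitions shipped in the release tarball) from the host into the node container; kube-bench looks for ./cfg relative to its working directory by default, so pass --config-dir if you place it elsewhere:

bash
# Copy the binary and its benchmark definitions into the kind node container
docker exec kind-control-plane mkdir -p /opt/kube-bench
docker cp kube-bench kind-control-plane:/opt/kube-bench/kube-bench
docker cp cfg kind-control-plane:/opt/kube-bench/cfg

# Run the control plane checks from inside the node container
docker exec -it -w /opt/kube-bench kind-control-plane ./kube-bench run --targets=master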

Interpreting kube-bench Output

kube-bench produces output with four result categories:

Status    Meaning
[PASS]    The check passed -- configuration is secure
[FAIL]    The check failed -- remediation required
[WARN]    Manual check needed -- cannot be automatically verified
[INFO]    Informational -- no action needed

Example Output

[INFO] 1 Control Plane Security Configuration
[INFO] 1.1 Control Plane Node Configuration Files
[PASS] 1.1.1 Ensure that the API server pod specification file permissions are set to 600 or more restrictive
[PASS] 1.1.2 Ensure that the API server pod specification file ownership is set to root:root
[FAIL] 1.1.3 Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive
[PASS] 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root

== Remediations master ==
1.1.3 Run the below command (based on the file location on your system) on the control plane node.
chmod 600 /etc/kubernetes/manifests/kube-controller-manager.yaml

== Summary master ==
45 checks PASS
10 checks FAIL
12 checks WARN
0 checks INFO
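
On a real cluster the output is long, so it helps to pull out just the failures and the remediation text that goes with them. A simple grep-based sketch (the --json output piped through a JSON processor is an alternative if you want structured data):

bash
# List only the failing checks
kube-bench run --targets=master | grep "\[FAIL\]"

# Show the remediation section, which contains the exact fix for each FAIL
kube-bench run --targets=master | grep -A 40 "== Remediations"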

Critical CIS Checks and Remediation

API Server Checks

1.2.1 -- Ensure anonymous auth is disabled

bash
# Check current setting
ps aux | grep kube-apiserver | grep anonymous-auth

# Remediation: Edit the API server manifest
sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml

Add or modify the flag:

yaml
spec:
  containers:
    - command:
        - kube-apiserver
        - --anonymous-auth=false    # Add this flag
        # ... other flags

WARNING

Disabling anonymous auth may break health checks. kubeadm's API server liveness and readiness probes call /livez and /readyz without credentials, so once anonymous requests are rejected those probes fail with 401. In the exam, follow the instructions exactly. In production, make sure health probes and monitoring authenticate (or are otherwise adjusted) before you disable anonymous access.
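
A quick check that the flag took effect, run from the control plane node (the default API server port 6443 is assumed): an unauthenticated request should now be rejected with 401 Unauthorized, whereas with anonymous auth enabled it is treated as system:anonymous and typically gets 403 Forbidden from RBAC.

bash
# Unauthenticated request to the API server
curl -k https://localhost:6443/api
# Expected with --anonymous-auth=false: 401 Unauthorized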

1.2.6 -- Ensure the API server does not use the AlwaysAllow authorization mode

bash
# Check current authorization modes
ps aux | grep kube-apiserver | grep authorization-mode

Ensure the flag is set to:

yaml
- --authorization-mode=Node,RBAC

Never use:

yaml
# DANGEROUS - never use in production
- --authorization-mode=AlwaysAllow
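
To sanity-check that RBAC is actually enforcing decisions, impersonate a low-privilege identity and ask the API server what it may do. In a default cluster (an assumption -- extra bindings would change the answer) the request should be denied:

bash
# Ask whether an unprivileged service account may create pods
kubectl auth can-i create pods --as=system:serviceaccount:default:default
# Expected: no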

1.2.16 -- Ensure admission controllers are properly configured

yaml
spec:
  containers:
    - command:
        - kube-apiserver
        - --enable-admission-plugins=NodeRestriction,PodSecurity
        # Ensure these are NOT in --disable-admission-plugins
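
One way to confirm that the PodSecurity plugin is actually enforcing: label a throwaway namespace with an enforcement level and submit a violating pod using a server-side dry run. This is a sketch; the ps-test namespace name is just an example, and baseline is used because it rejects privileged containers:

bash
# Create a test namespace enforcing the baseline Pod Security Standard
kubectl create namespace ps-test
kubectl label namespace ps-test pod-security.kubernetes.io/enforce=baseline

# A privileged pod should be rejected at admission (dry run, nothing is persisted)
cat <<EOF | kubectl apply --dry-run=server -f -
apiVersion: v1
kind: Pod
metadata:
  name: privileged-test
  namespace: ps-test
spec:
  containers:
    - name: test
      image: nginx
      securityContext:
        privileged: true
EOF
# Expected: error stating the pod violates PodSecurity "baseline"

kubectl delete namespace ps-test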

1.2.18 -- Ensure audit logging is enabled

yaml
spec:
  containers:
    - command:
        - kube-apiserver
        - --audit-policy-file=/etc/kubernetes/audit/audit-policy.yaml
        - --audit-log-path=/var/log/kubernetes/audit/audit.log
        - --audit-log-maxage=30
        - --audit-log-maxbackup=10
        - --audit-log-maxsize=100
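
These flags alone are not enough: the policy file must exist on the node, and both the policy and log directories must be mounted into the API server pod as hostPath volumes, otherwise the API server will not come back up. A minimal sketch for creating the referenced files (paths taken from the flags above; the single Metadata rule is only an example policy):

bash
# Create the directories and a minimal audit policy on the control plane node
sudo mkdir -p /etc/kubernetes/audit /var/log/kubernetes/audit
sudo tee /etc/kubernetes/audit/audit-policy.yaml > /dev/null <<'EOF'
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: Metadata        # example rule: log request metadata for everything
EOF
# Then add matching hostPath volumes and volumeMounts for
# /etc/kubernetes/audit and /var/log/kubernetes/audit to the
# kube-apiserver static pod manifest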

etcd Checks

2.1 / 2.2 -- Ensure etcd uses TLS for client connections

yaml
# In etcd manifest or configuration
- --cert-file=/etc/kubernetes/pki/etcd/server.crt
- --key-file=/etc/kubernetes/pki/etcd/server.key
- --client-cert-auth=true
- --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
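
To verify that etcd only serves TLS clients with certificates, query it with etcdctl using the kubeadm certificate paths shown above (this assumes etcdctl is installed on the control plane node and etcd listens on 127.0.0.1:2379):

bash
# Health check over TLS with client certificate authentication
sudo ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  endpoint health
# The same command without --cert/--key should be rejected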

2.4 / 2.5 -- Ensure etcd uses TLS for peer connections

yaml
- --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
- --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
- --peer-client-cert-auth=true
- --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt

Controller Manager Checks

1.3.3 -- Ensure the controller manager uses service account credentials

yaml
spec:
  containers:
    - command:
        - kube-controller-manager
        - --use-service-account-credentials=true

1.3.6 -- Ensure RotateKubeletServerCertificate is enabled

yaml
- --feature-gates=RotateKubeletServerCertificate=true
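
As with the API server, verify the flags on the running process once the static pod has restarted:

bash
# Confirm both controller manager settings are in effect
ps aux | grep kube-controller-manager | tr ' ' '\n' | \
  grep -E "use-service-account-credentials|feature-gates"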

Scheduler Checks

1.4.1 -- Ensure profiling is disabled

yaml
spec:
  containers:
    - command:
        - kube-scheduler
        - --profiling=false

Kubelet Checks

4.2.1 -- Ensure anonymous authentication is disabled

Edit the kubelet configuration file (usually at /var/lib/kubelet/config.yaml):

yaml
authentication:
  anonymous:
    enabled: false        # Disable anonymous auth
  webhook:
    enabled: true         # Enable webhook authentication
authorization:
  mode: Webhook           # Use Webhook authorization (not AlwaysAllow)

Then restart the kubelet:

bash
sudo systemctl restart kubelet
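
After the restart, confirm the kubelet is healthy and that unauthenticated requests to its API are rejected (this assumes the default kubelet port 10250 and that you run the check on the node itself):

bash
# Kubelet should be running with the new configuration
sudo systemctl status kubelet --no-pager

# An unauthenticated request to the kubelet API should now be rejected
curl -sk https://localhost:10250/pods
# Expected: Unauthorized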

4.2.2 -- Ensure authorization mode is not AlwaysAllow

yaml
authorization:
  mode: Webhook    # Not AlwaysAllow

4.2.4 -- Ensure read-only port is disabled

yaml
readOnlyPort: 0    # Disable the read-only port (default 10255)
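
With readOnlyPort set to 0, nothing should answer on port 10255 any more (port number from the comment above; run this on the node):

bash
# The connection should be refused once the read-only port is disabled
curl -s http://localhost:10255/pods || echo "read-only port is closed"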

File Permissions Checks

Many CIS checks verify file permissions. Here are the expected values:

File                                                      Expected Permissions   Expected Owner
/etc/kubernetes/manifests/kube-apiserver.yaml             600                    root:root
/etc/kubernetes/manifests/kube-controller-manager.yaml    600                    root:root
/etc/kubernetes/manifests/kube-scheduler.yaml             600                    root:root
/etc/kubernetes/manifests/etcd.yaml                       600                    root:root
/etc/kubernetes/admin.conf                                600                    root:root
/etc/kubernetes/scheduler.conf                            600                    root:root
/etc/kubernetes/controller-manager.conf                   600                    root:root
/var/lib/kubelet/config.yaml                              600                    root:root

bash
# Fix permissions for all manifests
sudo chmod 600 /etc/kubernetes/manifests/*.yaml
sudo chown root:root /etc/kubernetes/manifests/*.yaml

# Fix kubeconfig permissions
sudo chmod 600 /etc/kubernetes/admin.conf
sudo chmod 600 /etc/kubernetes/scheduler.conf
sudo chmod 600 /etc/kubernetes/controller-manager.conf
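
The benchmark's audit commands check these values with stat, so the same command is a quick way to confirm the fix:

bash
# Show permissions and ownership the way kube-bench checks them
stat -c "%a %U:%G %n" /etc/kubernetes/manifests/*.yaml \
  /etc/kubernetes/admin.conf /var/lib/kubelet/config.yaml
# Every line should read: 600 root:root <file>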

Running kube-bench as a Kubernetes Job

yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-bench
spec:
  template:
    metadata:
      labels:
        app: kube-bench
    spec:
      hostPID: true
      containers:
        - name: kube-bench
          image: docker.io/aquasec/kube-bench:v0.8.0
          command: ["kube-bench", "run", "--targets", "node"]
          volumeMounts:
            - name: var-lib-kubelet
              mountPath: /var/lib/kubelet
              readOnly: true
            - name: etc-systemd
              mountPath: /etc/systemd
              readOnly: true
            - name: etc-kubernetes
              mountPath: /etc/kubernetes
              readOnly: true
      restartPolicy: Never
      volumes:
        - name: var-lib-kubelet
          hostPath:
            path: /var/lib/kubelet
        - name: etc-systemd
          hostPath:
            path: /etc/systemd
        - name: etc-kubernetes
          hostPath:
            path: /etc/kubernetes
bash
# Apply and check results
kubectl apply -f kube-bench-job.yaml
kubectl wait --for=condition=complete job/kube-bench --timeout=60s
kubectl logs job/kube-bench

Remediation Workflow

Exam Strategy

When asked to fix kube-bench findings:

  1. Run kube-bench and note the FAIL items
  2. Read the Remediations section at the bottom of the output -- it tells you exactly what to do
  3. Edit the appropriate file
  4. Wait for the component to restart (static pods) or restart the service (kubelet)
  5. Re-run kube-bench to verify the fix
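
For example, to remediate the 1.1.3 permission failure from the sample output earlier (file path taken from that remediation text):

bash
# Find the failing checks
kube-bench run --targets=master | grep "\[FAIL\]"

# Apply the remediation exactly as printed by kube-bench
sudo chmod 600 /etc/kubernetes/manifests/kube-controller-manager.yaml

# Re-run just that check to confirm it now passes
kube-bench run --targets=master --check=1.1.3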

Verifying Changes

After editing a static pod manifest, the kubelet will automatically detect the change and restart the pod. Monitor the restart:

bash
# Watch API server restart
watch "crictl ps | grep kube-apiserver"

# Check that the API server is healthy again
kubectl get --raw='/readyz?verbose'

# Verify specific flags are applied
ps aux | grep kube-apiserver | tr ' ' '\n' | grep -E "anonymous|authorization|admission|audit"

# Check kubelet flags
ps aux | grep kubelet | tr ' ' '\n' | grep -E "anonymous|authorization|read-only"

Do Not Break the Cluster

When editing static pod manifests, always make a backup first:

bash
sudo cp /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/kube-apiserver.yaml.bak

If the API server does not come back up after your changes, restore the backup immediately. In the exam, a broken API server means you cannot complete any further tasks.
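
A minimal recovery sketch, assuming the backup path used above and that the kubelet is watching /etc/kubernetes/manifests:

bash
# Restore the known-good manifest; the kubelet picks up the change automatically
sudo cp /tmp/kube-apiserver.yaml.bak /etc/kubernetes/manifests/kube-apiserver.yaml

# Watch for the API server container to come back
watch "crictl ps | grep kube-apiserver"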
