Lab Environment Setup
A CKS lab needs more than a basic Kubernetes cluster. You need a multi-node environment with specific security tools installed and configured. This guide sets up everything from scratch.
Lab Architecture
The lab is a three-node Kind cluster (one control plane, two workers) named cks-lab, with custom seccomp profiles and an audit policy mounted into the nodes at creation time. Scanning tools (Trivy, kubesec, kube-bench) run on the host, while Falco and OPA Gatekeeper run inside the cluster.
Step 1: Docker / containerd Setup
Kind runs Kubernetes nodes as Docker containers. You need Docker (or Podman) installed on your host machine.
Already Have Docker?
If you set up Docker for CKA practice, skip to Step 2. Verify with docker version.
Linux (Ubuntu/Debian)
# Remove old versions
sudo apt-get remove docker docker-engine docker.io containerd runc 2>/dev/null
# Install prerequisites
sudo apt-get update
sudo apt-get install -y \
ca-certificates \
curl \
gnupg \
lsb-release
# Add Docker GPG key
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
# Add repository
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin
# Add your user to the docker group
sudo usermod -aG docker $USER
newgrp docker
# Verify
docker version
macOS
# Install Docker Desktop via Homebrew
brew install --cask docker
# Start Docker Desktop from Applications, then verify:
docker version
Step 2: Install Kind
Kind (Kubernetes IN Docker) creates clusters by running Kubernetes nodes as containers. It is the best option for CKS practice because it supports multi-node clusters and is lightweight.
# Linux (amd64)
[ $(uname -m) = x86_64 ] && \
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.25.0/kind-linux-amd64
[ $(uname -m) = aarch64 ] && \
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.25.0/kind-linux-arm64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
# macOS
brew install kind
# Verify
kind version
Step 3: Create the CKS Practice Cluster
Save the following configuration as cks-cluster.yaml:
# cks-cluster.yaml
# Multi-node Kind cluster for CKS practice
# 1 control-plane + 2 workers
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: cks-lab
nodes:
# Control Plane node
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: ClusterConfiguration
    apiServer:
      extraArgs:
        # Enable audit logging
        audit-log-path: /var/log/kubernetes/audit/audit.log
        audit-log-maxage: "30"
        audit-log-maxbackup: "10"
        audit-log-maxsize: "100"
        # Enable admission controllers relevant to CKS
        enable-admission-plugins: >-
          NodeRestriction,PodSecurity
        # Enable RBAC (default, but explicit)
        authorization-mode: Node,RBAC
      extraVolumes:
      - name: audit-logs
        hostPath: /var/log/kubernetes/audit
        mountPath: /var/log/kubernetes/audit
        readOnly: false
        pathType: DirectoryOrCreate
  extraMounts:
  # Mount seccomp profiles into the node
  - hostPath: ./seccomp-profiles
    containerPath: /var/lib/kubelet/seccomp/profiles
  # Mount audit policy
  - hostPath: ./audit-policy.yaml
    containerPath: /etc/kubernetes/audit/audit-policy.yaml
    readOnly: true
# Worker node 1
- role: worker
  extraMounts:
  - hostPath: ./seccomp-profiles
    containerPath: /var/lib/kubelet/seccomp/profiles
# Worker node 2
- role: worker
  extraMounts:
  - hostPath: ./seccomp-profiles
    containerPath: /var/lib/kubelet/seccomp/profiles
networking:
  # Use Calico-compatible settings
  disableDefaultCNI: false
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
Understanding the Cluster Configuration
- audit-log-path: Enables API server audit logging, which you will configure in the runtime security domain.
- enable-admission-plugins: Activates NodeRestriction (CIS benchmark requirement) and PodSecurity (for Pod Security Standards).
- authorization-mode: Explicitly sets Node and RBAC authorization -- both are CKS requirements.
- extraMounts for seccomp: Makes custom seccomp profiles available to the kubelet on every node (see the usage sketch after this list).
- 3 nodes: CKS scenarios often require scheduling to specific nodes and testing node-level configurations.
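As a preview of how these settings get used once the cluster is up and the profiles (created below) are in place, here is a minimal sketch. The namespace name demo and pod name seccomp-test are illustrative, not part of the lab config:
# Enforce the "restricted" Pod Security Standard on a namespace
# (handled by the PodSecurity admission plugin enabled above)
kubectl create namespace demo
kubectl label namespace demo pod-security.kubernetes.io/enforce=restricted
# Run a pod with a Localhost seccomp profile; the path is resolved relative to
# /var/lib/kubelet/seccomp/ on the node, which is why the profiles directory is
# mounted into every node in the cluster config
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-test
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      localhostProfile: profiles/audit.json
  containers:
  - name: test
    image: nginx
EOF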
Create Supporting Files
Before creating the cluster, set up the required directories and files:
# Create working directory
mkdir -p ~/cks-lab && cd ~/cks-lab
# Create seccomp profiles directory
mkdir -p seccomp-profiles
# Create a basic audit policy
cat > audit-policy.yaml << 'EOF'
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Log access to Secrets and ConfigMaps at the Metadata level
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets", "configmaps"]
# Log pod changes at Request level
- level: Request
  resources:
  - group: ""
    resources: ["pods"]
  verbs: ["create", "update", "patch", "delete"]
# Log everything else at Metadata level
- level: Metadata
  omitStages:
  - RequestReceived
EOF
# Create a default seccomp profile for testing
cat > seccomp-profiles/audit.json << 'EOF'
{
  "defaultAction": "SCMP_ACT_LOG"
}
EOF
# Create a restrictive seccomp profile
cat > seccomp-profiles/restricted.json << 'EOF'
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": [
    "SCMP_ARCH_X86_64",
    "SCMP_ARCH_X86",
    "SCMP_ARCH_AARCH64"
  ],
  "syscalls": [
    {
      "names": [
        "accept", "accept4", "access", "arch_prctl", "bind", "brk",
        "capget", "capset", "chdir", "chmod", "chown", "clock_getres",
        "clock_gettime", "clock_nanosleep", "clone", "close", "connect",
        "dup", "dup2", "dup3", "epoll_create", "epoll_create1",
        "epoll_ctl", "epoll_pwait", "epoll_wait", "execve", "exit",
        "exit_group", "faccessat", "faccessat2", "fadvise64",
        "fallocate", "fchmod", "fchmodat", "fchown", "fchownat",
        "fcntl", "fdatasync", "flock", "fstat", "fstatfs", "fsync",
        "ftruncate", "futex", "getcwd", "getdents", "getdents64",
        "getegid", "geteuid", "getgid", "getgroups", "getpeername",
        "getpgrp", "getpid", "getppid", "getrandom", "getresgid",
        "getresuid", "getrlimit", "getsockname", "getsockopt",
        "gettid", "gettimeofday", "getuid", "ioctl", "kill",
        "listen", "lseek", "lstat", "madvise", "memfd_create",
        "mincore", "mkdir", "mkdirat", "mmap", "mprotect", "munmap",
        "nanosleep", "newfstatat", "open", "openat", "pause", "pipe",
        "pipe2", "poll", "ppoll", "prctl", "pread64", "preadv",
        "prlimit64", "pselect6", "pwrite64", "read", "readlink",
        "readlinkat", "readv", "recvfrom", "recvmsg", "rename",
        "renameat", "renameat2", "restart_syscall", "rmdir",
        "rt_sigaction", "rt_sigpending", "rt_sigprocmask",
        "rt_sigreturn", "rt_sigsuspend", "rt_sigtimedwait",
        "sched_getaffinity", "sched_yield", "seccomp", "select",
        "sendmsg", "sendto", "set_robust_list", "set_tid_address",
        "setgid", "setgroups", "setitimer", "setsockopt", "setuid",
        "shutdown", "sigaltstack", "socket", "socketpair", "stat",
        "statfs", "statx", "symlink", "symlinkat", "sysinfo",
        "tgkill", "timer_create", "timer_delete", "timer_getoverrun",
        "timer_gettime", "timer_settime", "timerfd_create",
        "timerfd_gettime", "timerfd_settime", "umask", "uname",
        "unlink", "unlinkat", "utimensat", "wait4", "waitid",
        "write", "writev"
      ],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
EOF
Create the Cluster
cd ~/cks-lab
# Create the Kind cluster
kind create cluster --config cks-cluster.yaml
# Verify the cluster
kubectl cluster-info --context kind-cks-lab
kubectl get nodes
Expected output:
NAME                    STATUS   ROLES           AGE   VERSION
cks-lab-control-plane   Ready    control-plane   60s   v1.31.x
cks-lab-worker          Ready    <none>          30s   v1.31.x
cks-lab-worker2         Ready    <none>          30s   v1.31.x
Troubleshooting
If nodes show NotReady, wait 1-2 minutes for the CNI to initialize. If it persists, check:
# Check node conditions
kubectl describe node cks-lab-control-plane | grep -A5 Conditions
# Check pod status in kube-system
kubectl get pods -n kube-system
Step 4: Install kubectl
You should already have kubectl from CKA. Verify it works with the Kind cluster:
# If not installed:
# Linux
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
# macOS
brew install kubectl
# Verify
kubectl version
kubectl config current-context   # Should show: kind-cks-lab
Step 5: Install Helm
Helm is needed to install several CKS tools (Falco, OPA Gatekeeper):
# Linux
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
# macOS
brew install helm
# Verify
helm version
Step 6: Install Security Tools
Trivy (Image & Config Scanner)
# Linux (Debian/Ubuntu)
sudo apt-get install -y wget apt-transport-https gnupg
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | \
gpg --dearmor | sudo tee /usr/share/keyrings/trivy.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/trivy.gpg] \
https://aquasecurity.github.io/trivy-repo/deb generic main" | \
sudo tee /etc/apt/sources.list.d/trivy.list
sudo apt-get update
sudo apt-get install -y trivy
# macOS
brew install trivy
# Verify
trivy version
# Test: scan an image
trivy image nginx:latest
Exam Note
On the actual CKS exam, Trivy is pre-installed. But for practice, you need it locally to build muscle memory with its flags and output format.
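A few invocations worth drilling; the image tag and manifest path are only examples:
# Only report HIGH and CRITICAL vulnerabilities that already have a fix
trivy image --severity HIGH,CRITICAL --ignore-unfixed nginx:1.25
# Emit JSON for filtering or saving as evidence
trivy image -f json -o nginx-report.json nginx:1.25
# Scan Kubernetes manifests and other IaC files for misconfigurations
trivy config ./cks-cluster.yaml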
Falco (Runtime Security)
Falco runs inside the cluster. Install it using Helm:
# Add Falco Helm repo
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
# Install Falco with the modern eBPF driver (recommended for Kind)
helm install falco falcosecurity/falco \
--namespace falco \
--create-namespace \
--set tty=true \
--set falcosidekick.enabled=false \
--set driver.kind=modern_ebpf
# Verify Falco is running
kubectl get pods -n falco
kubectl logs -n falco -l app.kubernetes.io/name=falco --tail=20
If Falco Pods Crash
The modern eBPF driver requires a reasonably recent kernel with BTF support. If the Falco pods crash-loop, switch to the classic eBPF driver:
helm upgrade falco falcosecurity/falco \
--namespace falco \
--set driver.kind=ebpf
If issues persist, you can still practice Falco rules configuration without the runtime daemon, focusing on rule syntax and output formatting -- the skills the exam actually tests.
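Once the Falco pods are healthy, a quick sanity check is to trigger one of the default rules. This is a sketch; the pod name falco-test is arbitrary, and reading /etc/shadow should match Falco's default sensitive-file rule:
# Start a throwaway pod and read a sensitive file from inside it
kubectl run falco-test --image=nginx
kubectl wait --for=condition=Ready pod/falco-test --timeout=60s
kubectl exec falco-test -- cat /etc/shadow
# The event should appear in the Falco logs
kubectl logs -n falco -l app.kubernetes.io/name=falco | grep -i shadow
# Clean up
kubectl delete pod falco-test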
OPA Gatekeeper (Admission Policy)
# Install Gatekeeper
helm repo add gatekeeper https://open-policy-agent.github.io/gatekeeper/charts
helm repo update
helm install gatekeeper gatekeeper/gatekeeper \
--namespace gatekeeper-system \
--create-namespace
# Verify Gatekeeper is running
kubectl get pods -n gatekeeper-system
# Expected: gatekeeper-audit-xxxxx and gatekeeper-controller-manager-xxxxx pods Running
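To see Gatekeeper in action, you can load the required-labels example from its documentation. The template and constraint names (k8srequiredlabels, ns-must-have-owner) are illustrative:
cat <<'EOF' | kubectl apply -f -
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
  - target: admission.k8s.gatekeeper.sh
    rego: |
      package k8srequiredlabels
      violation[{"msg": msg}] {
        provided := {label | input.review.object.metadata.labels[label]}
        required := {label | label := input.parameters.labels[_]}
        missing := required - provided
        count(missing) > 0
        msg := sprintf("missing required labels: %v", [missing])
      }
EOF
# Constraint: every new namespace must carry an "owner" label
cat <<'EOF' | kubectl apply -f -
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-owner
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Namespace"]
  parameters:
    labels: ["owner"]
EOF
# This should now be rejected by the Gatekeeper admission webhook
kubectl create namespace unlabeled-test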
kube-bench (CIS Benchmark Auditing)
kube-bench audits your cluster against CIS Kubernetes Benchmarks:
# Run kube-bench as a Job in the cluster
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml
# Wait for it to complete
kubectl wait --for=condition=complete job/kube-bench --timeout=120s
# View results
kubectl logs job/kube-bench
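# Filter the report down to findings (a quick, optional filter; check numbers vary by version)
kubectl logs job/kube-bench | grep "\[FAIL\]"
kubectl logs job/kube-bench | grep -c "\[WARN\]"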
# Clean up
kubectl delete job kube-bench
Alternatively, install locally:
# Linux
curl -L https://github.com/aquasecurity/kube-bench/releases/download/v0.8.0/kube-bench_0.8.0_linux_amd64.tar.gz \
| tar xz
sudo mv kube-bench /usr/local/bin/
# macOS
brew install kube-bench
kubesec (Manifest Security Scanner)
# Install kubesec
# Linux
curl -sSL https://github.com/controlplaneio/kubesec/releases/download/v2.14.1/kubesec_linux_amd64.tar.gz \
| tar xz
sudo mv kubesec /usr/local/bin/
# macOS
brew install kubesec
# Test: scan a pod manifest
cat <<EOF | kubesec scan -
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: test
    image: nginx
    securityContext:
      privileged: true
EOF
Step 7: AppArmor Setup (Linux Only)
AppArmor is a Linux-only feature. If you are on macOS, skip this section -- you will practice AppArmor within the Kind node containers.
# Check if AppArmor is enabled
sudo aa-status
# Install AppArmor utilities if not present
sudo apt-get install -y apparmor-utils
# Load a test profile
cat > /tmp/k8s-deny-write << 'EOF'
#include <tunables/global>
profile k8s-deny-write flags=(attach_disconnected) {
  #include <abstractions/base>
  file,
  # Deny all write operations
  deny /** w,
}
EOF
sudo apparmor_parser -r /tmp/k8s-deny-write
sudo aa-status | grep k8s-deny-write
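To attach the profile to a workload, it must be loaded on the node where the pod runs (for Kind, load it inside the node container as described in the macOS note below). A minimal sketch using the classic annotation form; the pod name apparmor-test is illustrative:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: apparmor-test
  annotations:
    # localhost/<profile-name>, keyed by container name
    container.apparmor.security.beta.kubernetes.io/test: localhost/k8s-deny-write
spec:
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "touch /tmp/should-fail && echo wrote || echo blocked; sleep 3600"]
EOF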
macOS Users
AppArmor is Linux-specific. On macOS, you can practice AppArmor inside Kind nodes:
docker exec -it cks-lab-worker bash
# Now you're inside the Kind node (which is a Linux container)
apt-get update && apt-get install -y apparmor-utils
aa-status
Step 8: Verification Script
Save this script as verify-lab.sh and run it to confirm everything is working:
#!/usr/bin/env bash
set -euo pipefail
GREEN='\033[0;32m'
RED='\033[0;31m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
PASS=0
FAIL=0
WARN=0
check() {
  local description="$1"
  local command="$2"
  if eval "$command" &>/dev/null; then
    echo -e "  ${GREEN}[PASS]${NC} $description"
    # Plain assignment: ((PASS++)) returns non-zero when PASS is 0 and would trip set -e
    PASS=$((PASS + 1))
  else
    echo -e "  ${RED}[FAIL]${NC} $description"
    FAIL=$((FAIL + 1))
  fi
}
warn_check() {
  local description="$1"
  local command="$2"
  if eval "$command" &>/dev/null; then
    echo -e "  ${GREEN}[PASS]${NC} $description"
    PASS=$((PASS + 1))
  else
    echo -e "  ${YELLOW}[WARN]${NC} $description (optional)"
    WARN=$((WARN + 1))
  fi
}
echo ""
echo "======================================"
echo " CKS Lab Environment Verification"
echo "======================================"
echo ""
echo "--- Host Tools ---"
check "Docker is installed" "docker version"
check "Kind is installed" "kind version"
check "kubectl is installed" "kubectl version --client"
check "Helm is installed" "helm version"
check "Trivy is installed" "trivy version"
warn_check "kubesec is installed" "kubesec version"
echo ""
echo "--- Kubernetes Cluster ---"
check "Kind cluster 'cks-lab' exists" "kind get clusters | grep -q cks-lab"
check "kubectl can connect to cluster" "kubectl cluster-info"
check "Control plane node is Ready" \
"kubectl get node cks-lab-control-plane -o jsonpath='{.status.conditions[?(@.type==\"Ready\")].status}' | grep -q True"
check "Worker node 1 is Ready" \
"kubectl get node cks-lab-worker -o jsonpath='{.status.conditions[?(@.type==\"Ready\")].status}' | grep -q True"
check "Worker node 2 is Ready" \
"kubectl get node cks-lab-worker2 -o jsonpath='{.status.conditions[?(@.type==\"Ready\")].status}' | grep -q True"
check "CoreDNS is running" \
"kubectl get pods -n kube-system -l k8s-app=kube-dns -o jsonpath='{.items[0].status.phase}' | grep -q Running"
echo ""
echo "--- Security Components ---"
warn_check "Falco is running" \
"kubectl get pods -n falco -l app.kubernetes.io/name=falco -o jsonpath='{.items[0].status.phase}' | grep -q Running"
warn_check "OPA Gatekeeper is running" \
"kubectl get pods -n gatekeeper-system -l control-plane=controller-manager -o jsonpath='{.items[0].status.phase}' | grep -q Running"
echo ""
echo "--- Node Configuration ---"
check "Seccomp profiles directory exists on control-plane" \
"docker exec cks-lab-control-plane ls /var/lib/kubelet/seccomp/profiles/"
check "Seccomp profiles directory exists on worker" \
"docker exec cks-lab-worker ls /var/lib/kubelet/seccomp/profiles/"
echo ""
echo "======================================"
echo -e " Results: ${GREEN}${PASS} passed${NC}, ${RED}${FAIL} failed${NC}, ${YELLOW}${WARN} warnings${NC}"
echo "======================================"
echo ""
if [ "$FAIL" -gt 0 ]; then
echo -e "${RED}Some checks failed. Review the output above and fix the issues.${NC}"
exit 1
else
echo -e "${GREEN}Lab environment is ready for CKS practice!${NC}"
exit 0
fiRun it:
chmod +x verify-lab.sh
./verify-lab.sh
Quick Reference: Cluster Management
Commands you will use frequently during CKS practice:
# Create the cluster
kind create cluster --config cks-cluster.yaml
# Delete the cluster (reset everything)
kind delete cluster --name cks-lab
# Access a node shell (useful for AppArmor, seccomp)
docker exec -it cks-lab-control-plane bash
docker exec -it cks-lab-worker bash
docker exec -it cks-lab-worker2 bash
# Copy files to/from nodes
docker cp audit-policy.yaml cks-lab-control-plane:/etc/kubernetes/audit/
docker cp cks-lab-worker:/var/log/syslog ./worker-syslog.log
# Check that node components restarted (e.g., after modifying static pod manifests)
docker exec cks-lab-control-plane crictl ps
# View kubelet logs on a node
docker exec cks-lab-control-plane journalctl -u kubelet --no-pager -n 50
Practice Tip
Get into the habit of deleting and recreating your cluster regularly. This builds speed with setup commands and ensures you can recover from a broken cluster -- which may be necessary during the exam.
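If you want to make the reset a one-liner, a small helper along these lines can live in your shell profile; the function name cks-reset is just a suggestion:
# Tear down and rebuild the lab from the saved config
cks-reset() {
  kind delete cluster --name cks-lab
  (cd ~/cks-lab && kind create cluster --config cks-cluster.yaml)
  kubectl cluster-info --context kind-cks-lab
}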
Next Steps
With your lab environment ready:
- Take the Self-Assessment Quiz to identify knowledge gaps
- Review the Solutions to understand any weak areas
- Begin Domain 1: Cluster Setup & Hardening