Storage Class, PV, & PVC – CKA Practice Questions
13 comprehensive, CKA-focused questions covering PV/PVC binding, selectors, node affinity, and mounting strategies.
Question 1: Static PV + PVC Binding with Persistence Verification
Objective: Create a complete storage setup from scratch and verify data persistence across pod recreation.
Prerequisites:
- Ensure /mnt/log-data exists on at least one node
Task:
1. Create a StorageClass manual with:
   - provisioner: kubernetes.io/no-provisioner
   - volumeBindingMode: Immediate
   - reclaimPolicy: Delete
2. Create a PV pv-log with:
   - Capacity: 100Mi
   - Access mode: ReadWriteOnce
   - hostPath: /mnt/log-data (with type: DirectoryOrCreate)
   - storageClassName: manual
3. Create a PVC pvc-log requesting 50Mi from StorageClass manual
4. Create a Pod log-pod using image busybox:
   - Mount pvc-log at /log
   - Command: sh -c 'echo "test123" > /log/test.txt && sleep 3600'
5. Verify the file was created
6. Delete the Pod (not the PVC)
7. Create a new Pod with the same name and verify the file persists
8. Delete both Pod and PVC, then verify the PV is deleted (due to the Delete reclaim policy)
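A minimal sketch of the manifests for steps 1-4, using the values from the task above; the container and volume names are illustrative choices, not part of the task:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-log
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  hostPath:
    path: /mnt/log-data
    type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-log
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  resources:
    requests:
      storage: 50Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: log-pod
spec:
  containers:
  - name: writer                     # container name is illustrative
    image: busybox
    command: ["sh", "-c", 'echo "test123" > /log/test.txt && sleep 3600']
    volumeMounts:
    - name: log-vol                  # volume name is illustrative
      mountPath: /log
  volumes:
  - name: log-vol
    persistentVolumeClaim:
      claimName: pvc-log
```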
Verify:
kubectl get pv pv-log
kubectl get pvc pvc-log
kubectl exec log-pod -- cat /log/test.txt
# After pod deletion
kubectl get pv pv-log # Should be deleted

Question 2: PVC Capacity Matching and Binding Rules
Objective: Understand exactly how Kubernetes selects the best matching PV based on multiple criteria.
Task:
1. Create StorageClass capacity-test:
   - provisioner: kubernetes.io/no-provisioner
   - volumeBindingMode: Immediate
2. Create FIVE PVs with different capacities (all with storageClassName capacity-test, access mode ReadWriteOnce):
   - pv-tiny: 20Mi, hostPath /mnt/tiny
   - pv-small: 50Mi, hostPath /mnt/small
   - pv-medium: 80Mi, hostPath /mnt/medium
   - pv-large: 200Mi, hostPath /mnt/large
   - pv-xlarge: 400Mi, hostPath /mnt/xlarge
3. Create three PVCs requesting different capacities:
   - pvc-60 requesting 60Mi → should bind to pv-medium
   - pvc-30 requesting 30Mi → should bind to pv-small
   - pvc-300 requesting 300Mi → should bind to pv-xlarge (the smallest PV that can satisfy the request)
4. Document the binding logic:
   - Why does pvc-60 NOT bind to pv-xlarge?
   - Why does pvc-30 bind to pv-small and not pv-tiny?
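A sketch of one PV/PVC pair from the list above (pv-medium and pvc-60); the other PVs differ only in name, capacity, and hostPath path:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-medium
spec:
  capacity:
    storage: 80Mi
  accessModes:
    - ReadWriteOnce
  storageClassName: capacity-test
  hostPath:
    path: /mnt/medium
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-60
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: capacity-test
  resources:
    requests:
      storage: 60Mi
```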
Verify:
kubectl get pv -o wide
kubectl get pvc -o wide
# Check bindings
kubectl describe pvc pvc-60

Question 3: PV-PVC Binding with Label Selectors (Type-based)
Objective: Force PVC to bind to specific PVs using label selectors, not just capacity.
Task:
1. Create THREE PVs with type labels:
   - pv-ssd-1: 100Mi, labels type: ssd and speed: fast, hostPath /mnt/ssd1, storageClassName: ""
   - pv-ssd-2: 100Mi, labels type: ssd and speed: fast, hostPath /mnt/ssd2, storageClassName: ""
   - pv-hdd: 100Mi, labels type: hdd and speed: slow, hostPath /mnt/hdd, storageClassName: ""
2. Create THREE PVCs with different selectors:
   - pvc-ssd: request 50Mi, selector type: ssd (should bind to either pv-ssd-1 or pv-ssd-2)
   - pvc-fast: request 40Mi, selector speed: fast (should bind to either pv-ssd-1 or pv-ssd-2)
   - pvc-hdd: request 50Mi, selector type: hdd (should bind to pv-hdd)
3. Create a Pod selector-test-pod that uses pvc-ssd and verify it can write data
4. Attempt to create another Pod using pvc-hdd and verify it succeeds
5. Document which PV each PVC bound to
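A sketch of pv-ssd-1 and pvc-ssd showing how the PV labels pair with the PVC selector; the empty storageClassName on both sides follows the task and keeps any default StorageClass out of the way:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-ssd-1
  labels:
    type: ssd
    speed: fast
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  hostPath:
    path: /mnt/ssd1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-ssd
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  selector:
    matchLabels:
      type: ssd
  resources:
    requests:
      storage: 50Mi
```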
Verify:
kubectl get pv --show-labels
kubectl get pvc -o custom-columns=NAME:.metadata.name,PV:.spec.volumeName,SELECTOR:.spec.selector
kubectl describe pvc pvc-ssd

Question 4: volumeBindingMode – WaitForFirstConsumer (Late Binding)
Objective: Understand late binding and how PVC remains Pending until Pod is scheduled.
Task:
1. Create StorageClass delayed-sc:
   - provisioner: kubernetes.io/no-provisioner
   - volumeBindingMode: WaitForFirstConsumer
2. Create PV pv-delayed:
   - Capacity: 100Mi
   - hostPath: /mnt/delayed
   - storageClassName: delayed-sc
   - Important: do NOT set nodeAffinity yet
3. Create PVC pvc-delayed requesting 50Mi from delayed-sc
4. Check PVC status immediately:
   - Expected: Pending (because no pod is consuming it)
5. Create Pod consumer-pod in namespace default:
   - Use pvc-delayed
   - Image: busybox
   - Command: sleep 3600
6. Check PVC status again:
   - Expected: now Bound
7. Delete the pod
8. Check PVC status once more:
   - Expected: still Bound (once a PVC is bound to a PV, deleting the consuming pod does not unbind it)
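A sketch of the late-binding StorageClass and claim for this exercise:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: delayed-sc
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-delayed
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: delayed-sc
  resources:
    requests:
      storage: 50Mi
```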
Verify:
# Step 3-4
kubectl get pvc pvc-delayed
# Output should show: Pending
# Step 6
kubectl get pvc pvc-delayed
# Output should show: Bound
# Step 7-8
kubectl delete pod consumer-pod
kubectl get pvc pvc-delayed
# Output should still show: Bound

Question 5: Local PV with Node Affinity Constraints
Objective: Create a local PV that can ONLY be used on a specific node.
Prerequisites:
- Ensure you have at least 2 nodes
- Identify one node name:
NODE_NAME=$(kubectl get nodes -o name | head -1 | cut -d'/' -f2)
Task:
1. Get your node name and save it
2. Create StorageClass local-sc:
   - provisioner: kubernetes.io/no-provisioner (no dynamic provisioner needed)
   - volumeBindingMode: WaitForFirstConsumer
3. Create PV pv-local-node1:
   - Capacity: 100Mi
   - local path: /mnt/local-data
   - storageClassName: local-sc
   - nodeAffinity required:

     ```yaml
     nodeAffinity:
       required:
         nodeSelectorTerms:
         - matchExpressions:
           - key: kubernetes.io/hostname
             operator: In
             values:
             - <YOUR_NODE_NAME>
     ```
4. Create PVC pvc-local requesting 50Mi from local-sc
5. Check PVC status (should be Pending because there is no pod yet)
6. Create Pod local-pod:
   - Use pvc-local
   - Image: busybox
   - Command: sh -c 'hostname > /mnt/host.txt && sleep 3600' (mount at /mnt)
7. Verify the Pod is running ON the correct node
8. Try to create another Pod on a DIFFERENT node using the same PVC - it should fail (or remain Pending)
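A sketch of the full pv-local-node1 manifest with the nodeAffinity block from step 3 in place; replace <YOUR_NODE_NAME> with the node you saved in the prerequisites:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-local-node1
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-sc
  local:
    path: /mnt/local-data        # must already exist on the chosen node
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <YOUR_NODE_NAME>
```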
Verify:
# Step 5
kubectl get pvc pvc-local
# Step 7
kubectl get pods -o wide
# Verify local-pod is on your selected node
# Step 8
kubectl describe pod <new-pod> | grep -i events
# Should show events indicating it cannot be scheduled on other nodes

Question 6: Reclaim Policy – Delete vs Retain vs Recycle
Objective: Understand what happens to PV after PVC is deleted.
Task:
1. Create THREE separate StorageClasses (all with provisioner kubernetes.io/no-provisioner):
   - delete-sc with reclaimPolicy Delete
   - retain-sc with reclaimPolicy Retain
   - recycle-sc with reclaimPolicy Recycle (note: Recycle is deprecated and a StorageClass only accepts Delete or Retain, so set Recycle directly on the PV as persistentVolumeReclaimPolicy if you want to test it)
2. Create PVs with each StorageClass:
   - pv-delete: 50Mi, delete-sc
   - pv-retain: 50Mi, retain-sc
   - pv-recycle: 50Mi, recycle-sc (if available on your cluster)
3. Create corresponding PVCs: pvc-delete, pvc-retain, pvc-recycle
4. Create Pods for each that write files to their volumes
5. Create test files in each mount point
6. Delete all three PVCs
7. Observe what happens to each PV:
   - pv-delete: the PV should be completely deleted
   - pv-retain: the PV should remain with status Released (can be reclaimed manually)
   - pv-recycle: the PV should be scrubbed and return to Available (rarely used)
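A sketch of the Retain variant. A StorageClass reclaimPolicy only applies to PVs dynamically provisioned from it, so for the statically created PVs in this exercise also set persistentVolumeReclaimPolicy on the PV itself; the hostPath path is an assumption, since the task does not specify one:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: retain-sc
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-retain
spec:
  capacity:
    storage: 50Mi
  accessModes:
    - ReadWriteOnce
  storageClassName: retain-sc
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/retain            # path assumed; not specified in the task
```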
Verify:
# Before deletion
kubectl get pv
kubectl get pvc
# After deletion
kubectl get pv
# Check status of each PV
kubectl describe pv pv-delete
kubectl describe pv pv-retain

Question 7: Mount Multiple PVCs in Single Pod
Objective: Advanced mounting scenario with different PVCs at different paths.
Task:
1. Create StorageClass multi-sc (no-provisioner)
2. Create PVs:
   - pv-app-data: 100Mi, hostPath /mnt/app-data
   - pv-app-config: 100Mi, hostPath /mnt/app-config
   - pv-app-logs: 100Mi, hostPath /mnt/app-logs
3. Create PVCs:
   - pvc-data (50Mi)
   - pvc-config (50Mi)
   - pvc-logs (50Mi)
4. Create Pod multi-mount-app:
   - Image: busybox
   - Mount THREE volumes:
     - pvc-data at /app/data
     - pvc-config at /app/config
     - pvc-logs at /app/logs
   - Command:

     ```sh
     sh -c '
       echo "app-data" > /app/data/data.txt
       echo "app-config" > /app/config/config.txt
       echo "app-logs" > /app/logs/logs.txt
       sleep 3600
     '
     ```

5. Verify files were created in each mount point
6. Create a second Pod that mounts the SAME three PVCs at different paths and reads the files
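A sketch of the multi-mount Pod; the container and volume names are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-mount-app
spec:
  containers:
  - name: app                      # container name is illustrative
    image: busybox
    command:
    - sh
    - "-c"
    - |
      echo "app-data" > /app/data/data.txt
      echo "app-config" > /app/config/config.txt
      echo "app-logs" > /app/logs/logs.txt
      sleep 3600
    volumeMounts:
    - name: data
      mountPath: /app/data
    - name: config
      mountPath: /app/config
    - name: logs
      mountPath: /app/logs
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-data
  - name: config
    persistentVolumeClaim:
      claimName: pvc-config
  - name: logs
    persistentVolumeClaim:
      claimName: pvc-logs
```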
Verify:
kubectl exec multi-mount-app -- cat /app/data/data.txt
kubectl exec multi-mount-app -- cat /app/config/config.txt
kubectl exec multi-mount-app -- cat /app/logs/logs.txt
# Second pod
kubectl exec second-pod -- cat /different/path/data.txt

Question 8: ReadWriteOnce (RWO) Access Mode Constraint
Objective: Verify RWO prevents simultaneous mounting on different nodes.
Task:
1. Create PV pv-rwo:
   - Capacity: 100Mi
   - hostPath: /mnt/rwo
   - accessModes: ReadWriteOnce (NOT ReadWriteMany)
2. Create PVC pvc-rwo requesting 50Mi
3. Create Pod1 pod-rwo-1 on NODE1:
   - Mount pvc-rwo
   - Image: busybox
   - Command: sleep 3600
4. Create Pod2 pod-rwo-2 with a nodeSelector forcing it to NODE2 (a different node):
   - Try to mount the SAME pvc-rwo
5. Observe Pod2 status:
   - Expected: Pending or Failed with an error about the volume already being attached
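A sketch of pod-rwo-2 pinned to the second node; <NODE2_NAME> and the mount path are placeholders, since the task does not specify them:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-rwo-2
spec:
  nodeSelector:
    kubernetes.io/hostname: <NODE2_NAME>   # substitute your second node's name
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data                     # mount path assumed
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-rwo
```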
Verify:
kubectl get pods -o wide
# pod-rwo-1 should be Running
# pod-rwo-2 should be Pending/Failed
kubectl describe pod pod-rwo-2 | grep -A5 Events
# Should show: FailedScheduling or FailedAttachVolume

Question 9: PVC Expansion and Storage Increase
Objective: Dynamically expand a PVC to a larger size.
Task:
1. Create StorageClass expandable-sc:
   - provisioner: kubernetes.io/no-provisioner
   - allowVolumeExpansion: true
2. Create PV pv-expand:
   - Capacity: 200Mi
   - hostPath: /mnt/expand-data
   - storageClassName: expandable-sc
3. Create PVC pvc-expand:
   - Initial request: 50Mi
   - storageClassName: expandable-sc
4. Create Pod expand-pod:
   - Mount pvc-expand at /data
   - Image: busybox
   - Write files until ~40Mi is used
5. Check PVC capacity:

   ```bash
   kubectl get pvc pvc-expand
   ```

6. Expand the PVC to 120Mi:

   ```bash
   kubectl patch pvc pvc-expand -p '{"spec":{"resources":{"requests":{"storage":"120Mi"}}}}'
   ```

7. Verify the PVC expanded successfully
8. Verify the Pod can still write to the expanded volume
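A sketch of the expandable StorageClass. Keep in mind that expansion of a statically provisioned hostPath volume may never complete on some clusters, because there is no provisioner or CSI driver to carry out the resize:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-sc
provisioner: kubernetes.io/no-provisioner
allowVolumeExpansion: true
```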
Verify:
kubectl get pvc pvc-expand
# Should show: 120Mi
kubectl exec expand-pod -- df -h /data
# Should show larger capacity

Question 10: Troubleshooting PVC Stuck in Pending
Objective: Debug a PVC stuck in Pending state.
Scenario: You have a PVC pvc-stuck in Pending state. You don't know why.
Possible Root Causes:
- StorageClass doesn't exist
- No available PV matches capacity
- No available PV matches selector labels
- No available PV matches storageClassName
- volumeBindingMode is WaitForFirstConsumer but no pod is trying to consume it
- PV status is Released (from Retain reclaim policy)
Task:
1. Create a deliberately broken scenario (choose one):
- Request PVC from non-existent StorageClass
- Request PVC with capacity larger than all available PVs
- Request PVC with label selector that no PV matches
- Request PVC with WaitForFirstConsumer but don't create a pod
2. Use these debugging commands:

   ```bash
   kubectl describe pvc pvc-stuck
   kubectl get pv --show-labels
   kubectl get sc
   kubectl get events --sort-by='.lastTimestamp'
   kubectl logs -n kube-system -l app=provisioner
   ```

3. Identify the root cause
4. Fix the issue (create a matching PV, correct the StorageClass, add labels, etc.)
5. Verify the PVC becomes Bound
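One way to create the broken scenario (the first option), assuming a deliberately bogus StorageClass name:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-stuck
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: does-not-exist   # deliberately references a missing StorageClass
  resources:
    requests:
      storage: 50Mi
```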
Verify:
kubectl get pvc pvc-stuck
# Should show: Bound
kubectl describe pvc pvc-stuck | grep -A2 Events
# Should show successful binding event

Question 11: Selector + Capacity + StorageClass (Complex Matching)
Objective: All binding rules working together simultaneously.
Task:
1. Create StorageClass selective-sc (no-provisioner)
2. Create FOUR PVs with label combinations:
   - pv-1: 100Mi, selective-sc, labels tier: gold, env: prod
   - pv-2: 100Mi, selective-sc, labels tier: silver, env: prod
   - pv-3: 100Mi, selective-sc, labels tier: gold, env: dev
   - pv-4: 100Mi, selective-sc, labels tier: bronze, env: prod
3. Create PVCs with specific requirements:
   - pvc-gold-prod: request 60Mi, StorageClass selective-sc, selector tier: gold AND env: prod → should bind ONLY to pv-1
   - pvc-silver: request 80Mi, StorageClass selective-sc, selector tier: silver → should bind to pv-2 (matches labels + capacity)
   - pvc-bronze: request 50Mi, StorageClass selective-sc, selector tier: bronze → should bind to pv-4
4. Verify each binding is correct
5. Create Pods using each PVC and verify they can write data
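A sketch of pvc-gold-prod combining all three criteria; both matchLabels entries must be satisfied by the same PV:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-gold-prod
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: selective-sc
  selector:
    matchLabels:            # both labels must match (AND)
      tier: gold
      env: prod
  resources:
    requests:
      storage: 60Mi
```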
Verify:
kubectl get pv --show-labels
kubectl get pvc -o wide
kubectl describe pvc pvc-gold-prod | grep Volume:

Question 12: Node Affinity with StorageClass and Selector (Advanced)
Objective: Combine node affinity constraint with selector-based PVC binding.
Prerequisites:
- Have at least 2 nodes
- Label one node:
kubectl label nodes <node-name> storage=fast
Task:
1. Create StorageClass node-affinity-sc (no-provisioner)
2. Create TWO PVs on the SAME node with nodeAffinity:
   - pv-fast-1: 100Mi, nodeAffinity to the storage=fast node, label speed: fast
   - pv-fast-2: 100Mi, nodeAffinity to the storage=fast node, label speed: fast
3. Create PVC pvc-fast:
   - Request 60Mi
   - StorageClass node-affinity-sc
   - Selector: speed: fast
4. Create Pod affinity-test-pod that uses pvc-fast:
   - Should be scheduled on the same node as the PV (because of the PV's nodeAffinity)
5. Verify the Pod is on the correct node with the correct storage
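A sketch of pv-fast-1 combining the speed: fast label with nodeAffinity keyed on the storage=fast node label; the local path is an assumption, since the task does not specify a volume source:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-fast-1
  labels:
    speed: fast
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  storageClassName: node-affinity-sc
  local:
    path: /mnt/fast-1              # path assumed; must exist on the labeled node
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: storage
          operator: In
          values:
          - fast
```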
Verify:
kubectl get pods -o wide
# Pod should be on the storage=fast labeled node
kubectl get pv pv-fast-1 -o jsonpath='{.spec.nodeAffinity}'
# Should show the node affinity constraint
kubectl describe pod affinity-test-pod | grep -i "node"

Question 13: PVC Selector with Multiple Label Requirements (AND Logic)
Objective: PVC selector matching multiple labels (all must match).
Task:
1. Create StorageClass multi-label-sc (no-provisioner)
2. Create FIVE PVs with different label combinations:
   - pv-1: labels tier: gold, env: prod, region: us-east
   - pv-2: labels tier: gold, env: prod, region: us-west
   - pv-3: labels tier: gold, env: dev, region: us-east
   - pv-4: labels tier: silver, env: prod, region: us-east
   - pv-5: labels tier: gold, env: prod, region: eu-west
3. Create PVCs with multi-label selectors (AND logic):
   - pvc-gold-prod-east: selector tier: gold AND env: prod AND region: us-east → should match ONLY pv-1
   - pvc-gold-prod: selector tier: gold AND env: prod (no region specified) → could match pv-1, pv-2, or pv-5 (whichever is found first)
   - pvc-gold: selector tier: gold → could match any of pv-1, pv-2, pv-3, or pv-5
4. Create Pods for each PVC
5. Document which PV each PVC actually bound to and why
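A sketch of pvc-gold-prod-east; all three matchLabels entries must match a single PV's labels. The request size is an assumption, since the task does not specify one:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-gold-prod-east
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: multi-label-sc
  selector:
    matchLabels:            # all three labels must match (AND logic)
      tier: gold
      env: prod
      region: us-east
  resources:
    requests:
      storage: 50Mi         # size assumed
```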
Verify:
kubectl get pv --show-labels
kubectl describe pvc pvc-gold-prod-east | grep Volume:
kubectl describe pvc pvc-gold | grep Volume:

See solutions.md for complete YAML examples and step-by-step answers for all questions.