
Storage Class, PV, & PVC – CKA Practice Questions

13 comprehensive, CKA-focused questions covering PV/PVC binding, selectors, node affinity, and mounting strategies.


Question 1: Static PV + PVC Binding with Persistence Verification

Objective: Create a complete storage setup from scratch and verify data persistence across pod recreation.

Prerequisites:

  • Ensure /mnt/log-data exists on at least one node

Task:

  1. Create a StorageClass manual with:

    • provisioner: kubernetes.io/no-provisioner
    • volumeBindingMode: Immediate
    • reclaimPolicy: Delete
  2. Create a PV pv-log with:

    • Capacity: 100Mi
    • Access mode: ReadWriteOnce
    • hostPath: /mnt/log-data (with type: DirectoryOrCreate)
    • storageClassName: manual
    • persistentVolumeReclaimPolicy: Delete (a StorageClass reclaimPolicy only applies to dynamically provisioned volumes; statically created PVs default to Retain)
  3. Create a PVC pvc-log requesting 50Mi from StorageClass manual

  4. Create a Pod log-pod using image busybox:

    • Mount pvc-log at /log
    • Command: sh -c 'echo "test123" > /log/test.txt && sleep 3600'
  5. Verify the file was created

  6. Delete the Pod (not PVC)

  7. Create a new Pod with the same name and verify the file persists

  8. Delete both Pod and PVC, then check the PV: with the Delete reclaim policy it should be removed (on some clusters the in-tree hostPath deleter cannot remove the path and the PV ends up Failed instead)

Verify:

bash
kubectl get pv pv-log
kubectl get pvc pvc-log
kubectl exec log-pod -- cat /log/test.txt
# After Pod and PVC deletion
kubectl get pv pv-log  # Should be gone (or Failed if the hostPath deleter cannot clean up the path)
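
A minimal manifest sketch for this exercise, assuming the objects live in one file separated by --- (container and volume names such as writer and log-vol are arbitrary choices):

yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-log
spec:
  capacity:
    storage: 100Mi
  accessModes: ["ReadWriteOnce"]
  storageClassName: manual
  persistentVolumeReclaimPolicy: Delete   # static PVs default to Retain, so set it explicitly
  hostPath:
    path: /mnt/log-data
    type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-log
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: manual
  resources:
    requests:
      storage: 50Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: log-pod
spec:
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo test123 > /log/test.txt && sleep 3600"]
    volumeMounts:
    - name: log-vol
      mountPath: /log
  volumes:
  - name: log-vol
    persistentVolumeClaim:
      claimName: pvc-log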

Question 2: PVC Capacity Matching and Binding Rules

Objective: Understand exactly how Kubernetes selects the best matching PV based on multiple criteria.

Task:

  1. Create StorageClass capacity-test:

    • provisioner: kubernetes.io/no-provisioner
    • volumeBindingMode: Immediate
  2. Create FIVE PVs with different capacities (all with capacity-test, ReadWriteOnce):

    • pv-tiny: 20Mi, hostPath /mnt/tiny
    • pv-small: 50Mi, hostPath /mnt/small
    • pv-medium: 80Mi, hostPath /mnt/medium
    • pv-large: 200Mi, hostPath /mnt/large
    • pv-xlarge: 400Mi, hostPath /mnt/xlarge
  3. Create three PVCs requesting different capacities:

    • pvc-60 requesting 60Mi → should bind to pv-medium
    • pvc-30 requesting 30Mi → should bind to pv-small
    • pvc-300 requesting 300Mi → should bind to pv-xlarge (pv-large at 200Mi is too small)
  4. Document the binding logic:

    • Why does pvc-60 NOT bind to pv-xlarge?
    • Why does pvc-30 bind to pv-small and not pv-tiny?

Verify:

bash
kubectl get pv -o wide
kubectl get pvc -o wide
# Check bindings
kubectl describe pvc pvc-60
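
One representative PV/PVC pair as a sketch; the other PVs repeat the same pattern with different names, sizes, and hostPath directories:

yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-medium
spec:
  capacity:
    storage: 80Mi
  accessModes: ["ReadWriteOnce"]
  storageClassName: capacity-test
  hostPath:
    path: /mnt/medium
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-60
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: capacity-test
  resources:
    requests:
      storage: 60Mi   # binds to the smallest Available PV that is at least 60Mi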

Question 3: PV-PVC Binding with Label Selectors (Type-based)

Objective: Force PVC to bind to specific PVs using label selectors, not just capacity.

Task:

  1. Create THREE PVs with TYPE labels:

    • pv-ssd-1: 100Mi, Label type: ssd, Label speed: fast, hostPath /mnt/ssd1, storageClassName: ""
    • pv-ssd-2: 100Mi, Label type: ssd, Label speed: fast, hostPath /mnt/ssd2, storageClassName: ""
    • pv-hdd: 100Mi, Label type: hdd, Label speed: slow, hostPath /mnt/hdd, storageClassName: ""
  2. Create THREE PVCs with different selectors:

    • pvc-ssd: Request 50Mi, Selector type: ssd (should bind to either ssd-1 or ssd-2)
    • pvc-fast: Request 40Mi, Selector speed: fast (should bind to either ssd-1 or ssd-2)
    • pvc-hdd: Request 50Mi, Selector type: hdd (should bind to pv-hdd)
  3. Create a Pod selector-test-pod that uses pvc-ssd and verify it can write data

  4. Attempt to create another Pod using pvc-hdd and verify it succeeds

  5. Document which PV each PVC bound to

Verify:

bash
kubectl get pv --show-labels
kubectl get pvc -o custom-columns=NAME:.metadata.name,PV:.spec.volumeName,SELECTOR:.spec.selector
kubectl describe pvc pvc-ssd
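
A sketch of one labelled PV and one selector-based PVC; because the PVs use storageClassName: "", the PVCs should also set it to "" explicitly so a default StorageClass does not interfere:

yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-ssd-1
  labels:
    type: ssd
    speed: fast
spec:
  capacity:
    storage: 100Mi
  accessModes: ["ReadWriteOnce"]
  storageClassName: ""
  hostPath:
    path: /mnt/ssd1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-ssd
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ""          # match the PVs and skip any default StorageClass
  selector:
    matchLabels:
      type: ssd
  resources:
    requests:
      storage: 50Mi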

Question 4: volumeBindingMode – WaitForFirstConsumer (Late Binding)

Objective: Understand late binding and how PVC remains Pending until Pod is scheduled.

Task:

  1. Create StorageClass delayed-sc:

    • provisioner: kubernetes.io/no-provisioner
    • volumeBindingMode: WaitForFirstConsumer
  2. Create PV pv-delayed:

    • Capacity: 100Mi
    • hostPath: /mnt/delayed
    • storageClassName: delayed-sc
    • Important: Do NOT set nodeAffinity yet
  3. Create PVC pvc-delayed requesting 50Mi from delayed-sc

  4. Check PVC status immediately:

    • Expected: Pending (because no pod is consuming it)
  5. Create Pod consumer-pod in namespace default:

    • Use pvc-delayed
    • Image: busybox
    • Command: sleep 3600
  6. Check PVC status again:

    • Expected: Now Bound
  7. Delete the pod

  8. Check PVC status again:

    • Expected: Still Bound (once a PVC has bound to a PV, deleting the consuming Pod does not release it; only deleting the PVC frees the PV)

Verify:

bash
# Step 3-4
kubectl get pvc pvc-delayed
# Output should show: Pending

# Step 6
kubectl get pvc pvc-delayed
# Output should show: Bound

# Step 8
kubectl delete pod consumer-pod
kubectl get pvc pvc-delayed
# Output should still show: Bound (the binding survives Pod deletion)
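
A sketch of the StorageClass and PVC for this exercise; with WaitForFirstConsumer the PVC sits in Pending until a Pod that references it is scheduled:

yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: delayed-sc
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-delayed
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: delayed-sc
  resources:
    requests:
      storage: 50Mi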

Question 5: Local PV with Node Affinity Constraints

Objective: Create a local PV that can ONLY be used on a specific node.

Prerequisites:

  • Ensure you have at least 2 nodes
  • Identify one node name: NODE_NAME=$(kubectl get nodes -o name | head -1 | cut -d'/' -f2)

Task:

  1. Get your node name and save it

  2. Create StorageClass local-sc:

    • volumeBindingMode: WaitForFirstConsumer
    • provisioner: kubernetes.io/no-provisioner (local volumes do not support dynamic provisioning)
  3. Create PV pv-local-node1:

    • Capacity: 100Mi
    • local path: /mnt/local-data
    • storageClassName: local-sc
    • nodeAffinity required:
      yaml
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - <YOUR_NODE_NAME>
  4. Create PVC pvc-local requesting 50Mi from local-sc

  5. Check PVC status (should be Pending because no pod yet)

  6. Create Pod local-pod:

    • Use pvc-local
    • Image: busybox
    • Command: sh -c 'hostname > /mnt/host.txt && sleep 3600' (mount at /mnt)
  7. Verify the Pod is running ON the correct node

  8. Try to create another Pod on a DIFFERENT node using the same PVC - it should fail (or remain pending)

Verify:

bash
# Step 5
kubectl get pvc pvc-local

# Step 7
kubectl get pods -o wide
# Verify local-pod is on your selected node

# Step 8
kubectl describe pod <new-pod> | grep -i events
# Should show cannot be scheduled on other nodes
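
A sketch of the local PV; <YOUR_NODE_NAME> is the node you picked, ReadWriteOnce is an assumed access mode, and /mnt/local-data must already exist on that node because local volumes are not created automatically:

yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-local-node1
spec:
  capacity:
    storage: 100Mi
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-sc
  local:
    path: /mnt/local-data
  nodeAffinity:                 # required for local volumes
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <YOUR_NODE_NAME>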

Question 6: Reclaim Policy – Delete vs Retain vs Recycle

Objective: Understand what happens to PV after PVC is deleted.

Task:

  1. Create THREE separate StorageClasses (all no-provisioner):

    • delete-sc with reclaimPolicy Delete
    • retain-sc with reclaimPolicy Retain
    • recycle-sc (note: Recycle is deprecated and is not a valid StorageClass reclaimPolicy; a StorageClass only accepts Delete or Retain, and it only applies to dynamically provisioned volumes)
  2. Create PVs with each, setting persistentVolumeReclaimPolicy directly on the PV spec (statically created PVs default to Retain):

    • pv-delete: 50Mi, delete-sc, persistentVolumeReclaimPolicy: Delete
    • pv-retain: 50Mi, retain-sc, persistentVolumeReclaimPolicy: Retain
    • pv-recycle: 50Mi, recycle-sc, persistentVolumeReclaimPolicy: Recycle (if still supported on your cluster)
  3. Create corresponding PVCs:

    • pvc-delete, pvc-retain, pvc-recycle
  4. Create a Pod for each PVC that mounts its volume

  5. Create test files in each mount point

  6. Delete all three PVCs

  7. Observe what happens to each PV:

    • pv-delete: PV should be deleted (or end up Failed if the hostPath deleter cannot remove the path)
    • pv-retain: PV should remain with status Released (can be reclaimed manually)
    • pv-recycle: PV should be scrubbed (a basic rm -rf) and return to Available (deprecated and rarely supported)

Verify:

bash
# Before deletion
kubectl get pv
kubectl get pvc

# After deletion
kubectl get pv
# Check status of each PV
kubectl describe pv pv-delete
kubectl describe pv pv-retain
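
A sketch of two of the PVs, showing the reclaim policy set directly on the PV spec; the hostPath directories are placeholders, and the pv-recycle variant is identical except for persistentVolumeReclaimPolicy: Recycle:

yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-delete
spec:
  capacity:
    storage: 50Mi
  accessModes: ["ReadWriteOnce"]
  storageClassName: delete-sc
  persistentVolumeReclaimPolicy: Delete
  hostPath:
    path: /mnt/reclaim-delete
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-retain
spec:
  capacity:
    storage: 50Mi
  accessModes: ["ReadWriteOnce"]
  storageClassName: retain-sc
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/reclaim-retain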

Question 7: Mount Multiple PVCs in Single Pod

Objective: Advanced mounting scenario with different PVCs at different paths.

Task:

  1. Create StorageClass multi-sc (no-provisioner)

  2. Create PVs:

    • pv-app-data: 100Mi, hostPath /mnt/app-data
    • pv-app-config: 100Mi, hostPath /mnt/app-config
    • pv-app-logs: 100Mi, hostPath /mnt/app-logs
  3. Create PVCs:

    • pvc-data (50Mi)
    • pvc-config (50Mi)
    • pvc-logs (50Mi)
  4. Create Pod multi-mount-app:

    • Image: busybox
    • Mount THREE volumes:
      • pvc-data at /app/data
      • pvc-config at /app/config
      • pvc-logs at /app/logs
    • Command:
      sh
      sh -c '
      echo "app-data" > /app/data/data.txt
      echo "app-config" > /app/config/config.txt
      echo "app-logs" > /app/logs/logs.txt
      sleep 3600
      '
  5. Verify files were created in each mount point

  6. Create a second Pod that mounts the SAME three PVCs at different paths and reads the files

Verify:

bash
kubectl exec multi-mount-app -- cat /app/data/data.txt
kubectl exec multi-mount-app -- cat /app/config/config.txt
kubectl exec multi-mount-app -- cat /app/logs/logs.txt

# Second pod (adjust the path to wherever you mounted pvc-data in that Pod)
kubectl exec second-pod -- cat /different/path/data.txt
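
A sketch of the multi-mount Pod; the volume names (data, config, logs) are arbitrary:

yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-mount-app
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo app-data > /app/data/data.txt && echo app-config > /app/config/config.txt && echo app-logs > /app/logs/logs.txt && sleep 3600"]
    volumeMounts:
    - { name: data, mountPath: /app/data }
    - { name: config, mountPath: /app/config }
    - { name: logs, mountPath: /app/logs }
  volumes:
  - name: data
    persistentVolumeClaim: { claimName: pvc-data }
  - name: config
    persistentVolumeClaim: { claimName: pvc-config }
  - name: logs
    persistentVolumeClaim: { claimName: pvc-logs }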

Question 8: ReadWriteOnce (RWO) Access Mode Constraint

Objective: Verify RWO prevents simultaneous mounting on different nodes.

Task:

  1. Create PV pv-rwo:

    • 100Mi
    • hostPath /mnt/rwo
    • accessModes: ReadWriteOnce (NOT ReadWriteMany)
  2. Create PVC pvc-rwo requesting 50Mi

  3. Create Pod1 pod-rwo-1 on NODE1:

    • Mount pvc-rwo
    • Image: busybox
    • Command: sleep 3600
  4. Create Pod2 pod-rwo-2 with nodeSelector forcing it to NODE2 (different node):

    • Try to mount the SAME pvc-rwo
  5. Observe Pod2 status:

    • Expected: Pending or Failed with an error about the volume already being attached (note: with hostPath-backed PVs the RWO restriction is not always enforced, so Pod2 may still start; with cloud/CSI volumes you would see FailedAttachVolume)

Verify:

bash
kubectl get pods -o wide
# pod-rwo-1 should be Running
# pod-rwo-2 should be Pending/Failed

kubectl describe pod pod-rwo-2 | grep -A5 Events
# Should show: FailedScheduling or FailedAttachVolume (with hostPath-backed PVs this enforcement may not occur)
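
A sketch of the second Pod, pinned to the other node with a nodeSelector; <NODE2_NAME> is a placeholder for that node's hostname and the mount path is an arbitrary choice:

yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-rwo-2
spec:
  nodeSelector:
    kubernetes.io/hostname: <NODE2_NAME>
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: rwo-vol
      mountPath: /data
  volumes:
  - name: rwo-vol
    persistentVolumeClaim:
      claimName: pvc-rwo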

Question 9: PVC Expansion and Storage Increase

Objective: Dynamically expand a PVC to larger size.

Task:

  1. Create StorageClass expandable-sc:

    • provisioner: kubernetes.io/no-provisioner
    • allowVolumeExpansion: true
  2. Create PV pv-expand:

    • Capacity: 200Mi
    • hostPath: /mnt/expand-data
    • storageClassName: expandable-sc
  3. Create PVC pvc-expand:

    • Initial request: 50Mi
    • storageClassName: expandable-sc
  4. Create Pod expand-pod:

    • Mount pvc-expand at /data
    • Image: busybox
    • Write files until ~40Mi used
  5. Check PVC capacity:

    bash
    kubectl get pvc pvc-expand
  6. Expand the PVC to 120Mi:

    bash
    kubectl patch pvc pvc-expand -p '{"spec":{"resources":{"requests":{"storage":"120Mi"}}}}'
  7. Verify the PVC expanded successfully (note: the actual resize is performed by the volume plugin; hostPath/no-provisioner volumes do not support online expansion, so on a local cluster the reported capacity may not change)

  8. Verify the Pod can still write to the volume

Verify:

bash
kubectl get pvc pvc-expand
# Should show: 120Mi (once the volume plugin completes the resize)

kubectl exec expand-pod -- df -h /data
# Should show larger capacity
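
A sketch of the expandable StorageClass; only allowVolumeExpansion differs from the earlier no-provisioner classes:

yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-sc
provisioner: kubernetes.io/no-provisioner
allowVolumeExpansion: true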

Question 10: Troubleshooting PVC Stuck in Pending

Objective: Debug a PVC stuck in Pending state.

Scenario: You have a PVC pvc-stuck in Pending state. You don't know why.

Possible Root Causes:

  1. StorageClass doesn't exist
  2. No available PV matches capacity
  3. No available PV matches selector labels
  4. No available PV matches storageClassName
  5. volumeBindingMode is WaitForFirstConsumer but no pod is trying to consume it
  6. PV status is Released (from Retain reclaim policy)

Task:

  1. Create a deliberately broken scenario (choose one):

    • Request PVC from non-existent StorageClass
    • Request PVC with capacity larger than all available PVs
    • Request PVC with label selector that no PV matches
    • Request PVC with WaitForFirstConsumer but don't create a pod
  2. Use these debugging commands:

    bash
    kubectl describe pvc pvc-stuck
    kubectl get pv --show-labels
    kubectl get sc
    kubectl get events --sort-by='.lastTimestamp'
    kubectl logs -n kube-system -l app=provisioner   # only if a dynamic provisioner is installed; the label varies by provisioner
  3. Identify the root cause

  4. Fix the issue (create matching PV, correct StorageClass, add labels, etc.)

  5. Verify PVC becomes Bound

Verify:

bash
kubectl get pvc pvc-stuck
# Should show: Bound

kubectl describe pvc pvc-stuck | grep -A2 Events
# Should show successful binding event
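
One way to build the broken scenario as a sketch: a PVC referencing a StorageClass that does not exist (missing-sc is a deliberately bogus name):

yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-stuck
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: missing-sc   # no such StorageClass, so the claim stays Pending
  resources:
    requests:
      storage: 50Mi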

Question 11: Selector + Capacity + StorageClass (Complex Matching)

Objective: All binding rules working together simultaneously.

Task:

  1. Create StorageClass selective-sc (no-provisioner)

  2. Create FOUR PVs with combinations:

    • pv-1: 100Mi, selective-sc, Labels: tier: gold, env: prod
    • pv-2: 100Mi, selective-sc, Labels: tier: silver, env: prod
    • pv-3: 100Mi, selective-sc, Labels: tier: gold, env: dev
    • pv-4: 100Mi, selective-sc, Labels: tier: bronze, env: prod
  3. Create PVCs with specific requirements:

    • pvc-gold-prod: Request 60Mi, StorageClass selective-sc, Selector: tier: gold AND env: prod → Should bind ONLY to pv-1

    • pvc-silver: Request 80Mi, StorageClass selective-sc, Selector: tier: silver → Should bind to pv-2 (matches labels + capacity)

    • pvc-bronze: Request 50Mi, StorageClass selective-sc, Selector: tier: bronze → Should bind to pv-4

  4. Verify each binding is correct

  5. Create Pods using each PVC and verify they can write data

Verify:

bash
kubectl get pv --show-labels
kubectl get pvc -o wide
kubectl describe pvc pvc-gold-prod | grep Volume:
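
A sketch of pvc-gold-prod, combining all three binding criteria (StorageClass, label selector, and requested capacity):

yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-gold-prod
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: selective-sc
  selector:
    matchLabels:
      tier: gold
      env: prod
  resources:
    requests:
      storage: 60Mi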

Question 12: Node Affinity with StorageClass and Selector (Advanced)

Objective: Combine node affinity constraint with selector-based PVC binding.

Prerequisites:

  • Have at least 2 nodes
  • Label one node: kubectl label nodes <node-name> storage=fast

Task:

  1. Create StorageClass node-affinity-sc (no-provisioner)

  2. Create TWO PVs on SAME node with nodeAffinity:

    • pv-fast-1: 100Mi, nodeAffinity to storage=fast node, Label: speed: fast
    • pv-fast-2: 100Mi, nodeAffinity to storage=fast node, Label: speed: fast
  3. Create PVC pvc-fast:

    • Request 60Mi
    • StorageClass node-affinity-sc
    • Selector: speed: fast
  4. Create Pod affinity-test-pod that uses pvc-fast:

    • Should be scheduled on the same node as the PV (because of nodeAffinity)
  5. Verify Pod is on the correct node with the correct storage

Verify:

bash
kubectl get pods -o wide
# Pod should be on the storage=fast labeled node

kubectl get pv pv-fast-1 -o jsonpath='{.spec.nodeAffinity}'
# Should show the node affinity constraint

kubectl describe pod affinity-test-pod | grep -i "node"
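
A sketch of one of the PVs; the nodeAffinity keys off the storage=fast node label added in the prerequisites, while ReadWriteOnce and the hostPath directory are assumed placeholders:

yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-fast-1
  labels:
    speed: fast
spec:
  capacity:
    storage: 100Mi
  accessModes: ["ReadWriteOnce"]
  storageClassName: node-affinity-sc
  hostPath:
    path: /mnt/fast1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: storage
          operator: In
          values:
          - fast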

Question 13: PVC Selector with Multiple Label Requirements (AND Logic)

Objective: PVC selector matching multiple labels (all must match).

Task:

  1. Create StorageClass multi-label-sc (no-provisioner)

  2. Create FIVE PVs with different label combinations:

    • pv-1: Labels: tier: gold, env: prod, region: us-east
    • pv-2: Labels: tier: gold, env: prod, region: us-west
    • pv-3: Labels: tier: gold, env: dev, region: us-east
    • pv-4: Labels: tier: silver, env: prod, region: us-east
    • pv-5: Labels: tier: gold, env: prod, region: eu-west
  3. Create PVCs with multi-label selectors (AND logic):

    • pvc-gold-prod-east: Selectors: tier: gold AND env: prod AND region: us-east → Should match ONLY pv-1

    • pvc-gold-prod: Selectors: tier: gold AND env: prod (no region specified) → Could match pv-1, pv-2, or pv-5 (whichever Available PV is matched first)

    • pvc-gold: Selector: tier: gold → Could match pv-1, pv-2, pv-3, or pv-5 (whichever is still Available)

  4. Create Pods for each PVC

  5. Document which PV each PVC actually bound to and why

Verify:

bash
kubectl get pv --show-labels
kubectl describe pvc pvc-gold-prod-east | grep Volume:
kubectl describe pvc pvc-gold | grep Volume:
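
A sketch of pvc-gold-prod-east; every entry under matchLabels must be present on the PV, which is what gives the AND behaviour (the 50Mi request is an arbitrary choice, since the task does not fix a size):

yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-gold-prod-east
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: multi-label-sc
  selector:
    matchLabels:            # all key/value pairs must match the PV's labels
      tier: gold
      env: prod
      region: us-east
  resources:
    requests:
      storage: 50Mi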

See solutions.md for complete YAML examples and step-by-step answers for all questions.
