
Storage Class, PV, & PVC – Solutions

Complete YAML examples and solutions for all 13 practice questions.


Question 1: Static PV + PVC Binding with Persistence

StorageClass:

yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
reclaimPolicy: Delete

PV:

yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-log
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  hostPath:
    path: /mnt/log-data
    type: DirectoryOrCreate

PVC:

yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-log
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  resources:
    requests:
      storage: 50Mi

Pod:

yaml
apiVersion: v1
kind: Pod
metadata:
  name: log-pod
spec:
  containers:
  - name: logger
    image: busybox
    command: ["sh", "-c", "echo 'test123' > /log/test.txt && sleep 3600"]
    volumeMounts:
    - name: log
      mountPath: /log
  volumes:
  - name: log
    persistentVolumeClaim:
      claimName: pvc-log
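
Verification of binding and persistence (a sketch using the names above; the manifest file name is an assumption):

bash
# Confirm the PVC bound to pv-log and the Pod is running
kubectl get pv,pvc
kubectl get pod log-pod

# Check the file written by the container
kubectl exec log-pod -- cat /log/test.txt

# Delete and recreate the Pod, then confirm the data survived
kubectl delete pod log-pod
kubectl apply -f log-pod.yaml   # assumes the Pod manifest above was saved as log-pod.yaml
kubectl exec log-pod -- cat /log/test.txt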

Important Notes on Reclaim Policy with hostPath:

⚠️ Use Retain for hostPath PVs (not Delete):

  • The StorageClass above shows reclaimPolicy: Delete, but for hostPath volumes, the deletion will fail with:
    host_path deleter only supports /tmp/.+ but received provided /mnt/log-data
  • Kubernetes' hostPath deleter only allows deletion of paths under /tmp/ for security reasons.
  • For hostPath PVs, set reclaimPolicy: Retain to avoid the PV going into a Failed state.
  • If you need automatic deletion, use /tmp instead:
    yaml
    hostPath:
      path: /tmp/log-data  # Safe for Delete reclaim policy
      type: DirectoryOrCreate

Reclaim Policy Behavior:

  • Retain: PV transitions to Released after PVC deletion; underlying data persists at /mnt/log-data and can be manually recovered or inspected.
  • Delete: Attempts to delete the PV and underlying storage; fails for hostPath outside /tmp/.
  • Recycle (deprecated): Previously scrubbed content; do not rely on this.
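
The reclaim policy of an existing PV can be inspected and changed in place; a quick sketch using pv-log from above:

bash
# Show the current reclaim policy and phase of the PV
kubectl get pv pv-log -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy,STATUS:.status.phase

# Switch an already-created PV from Delete to Retain
kubectl patch pv pv-log -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'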

Question 2: PVC Capacity Matching

Create 5 PVs with different capacities:

yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-tiny
spec:
  capacity:
    storage: 20Mi
  accessModes: [ReadWriteOnce]
  storageClassName: capacity-test
  hostPath:
    path: /mnt/tiny
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-small
spec:
  capacity:
    storage: 50Mi
  accessModes: [ReadWriteOnce]
  storageClassName: capacity-test
  hostPath:
    path: /mnt/small
---
# ... similar for medium, large, xlarge

Binding Logic:

  • pvc-60 (60Mi request) → binds to pv-medium (200Mi):
    • it is the SMALLEST PV that satisfies the request
    • not pv-xlarge (1Gi), because Kubernetes picks the smallest fit
  • pvc-30 (30Mi request) → binds to pv-small (50Mi):
    • pv-tiny (20Mi) is too small
    • pv-small is the smallest PV that fits
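
To confirm which PV each claim actually bound to (assuming the PVC names pvc-60 and pvc-30 from the exercise):

bash
# The CLAIM column shows which PVC took each volume
kubectl get pv

# volumeName on each PVC shows the bound PV directly
kubectl get pvc pvc-60 -o jsonpath='{.spec.volumeName}{"\n"}'   # expect pv-medium
kubectl get pvc pvc-30 -o jsonpath='{.spec.volumeName}{"\n"}'   # expect pv-small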

Question 3: Label Selector Binding

PVs with Labels:

yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-ssd-1
  labels:
    type: ssd
    speed: fast
spec:
  capacity:
    storage: 100Mi
  accessModes: [ReadWriteOnce]
  storageClassName: ""
  hostPath:
    path: /mnt/ssd1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-hdd
  labels:
    type: hdd
    speed: slow
spec:
  capacity:
    storage: 100Mi
  accessModes: [ReadWriteOnce]
  storageClassName: ""
  hostPath:
    path: /mnt/hdd

PVCs with Selectors:

yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-ssd
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: ""
  resources:
    requests:
      storage: 50Mi
  selector:
    matchLabels:
      type: ssd
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-hdd
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: ""
  resources:
    requests:
      storage: 50Mi
  selector:
    matchLabels:
      type: hdd

Pod:

yaml
apiVersion: v1
kind: Pod
metadata:
  name: selector-test-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo 'data' > /mnt/data.txt && sleep 3600"]
    volumeMounts:
    - name: ssd-vol
      mountPath: /mnt
  volumes:
  - name: ssd-vol
    persistentVolumeClaim:
      claimName: pvc-ssd
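
To verify that the selectors, not capacity, drove the binding (both PVs are 100Mi), something like:

bash
kubectl get pvc pvc-ssd -o jsonpath='{.spec.volumeName}{"\n"}'   # expect pv-ssd-1
kubectl get pvc pvc-hdd -o jsonpath='{.spec.volumeName}{"\n"}'   # expect pv-hdd
kubectl get pv --show-labels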

Question 4: WaitForFirstConsumer Binding Mode

StorageClass:

yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: delayed-sc
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

PV:

yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-delayed
spec:
  capacity:
    storage: 100Mi
  accessModes: [ReadWriteOnce]
  storageClassName: delayed-sc
  hostPath:
    path: /mnt/delayed

PVC:

yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-delayed
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: delayed-sc
  resources:
    requests:
      storage: 50Mi

Behavior:

  • After creating PVC: Status = Pending (no binding yet)
  • After creating Pod with PVC: Status = Bound (binding happens)
  • After deleting the Pod: the PVC remains Bound; a PVC-to-PV binding persists until the PVC itself is deleted
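
A sketch of the sequence for observing this (the manifest file names are assumptions):

bash
kubectl apply -f pvc-delayed.yaml      # assumed file containing the PVC above
kubectl get pvc pvc-delayed            # STATUS: Pending (WaitForFirstConsumer)

kubectl apply -f consumer-pod.yaml     # assumed file with a Pod that mounts pvc-delayed
kubectl get pvc pvc-delayed            # STATUS: Bound once the Pod is scheduled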

Question 5: Local PV with Node Affinity

Get Node Name:

bash
NODE_NAME=$(kubectl get nodes -o name | head -1 | cut -d'/' -f2)
echo $NODE_NAME

StorageClass:

yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-sc
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

Local PV with nodeAffinity:

yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-local-node1
spec:
  capacity:
    storage: 100Mi
  accessModes: [ReadWriteOnce]
  storageClassName: local-sc
  local:
    path: /mnt/local-data
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <YOUR_NODE_NAME>  # Replace with actual node name

PVC:

yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-local
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: local-sc
  resources:
    requests:
      storage: 50Mi

Pod (will be scheduled on correct node due to PV nodeAffinity):

yaml
apiVersion: v1
kind: Pod
metadata:
  name: local-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "hostname > /mnt/host.txt && sleep 3600"]
    volumeMounts:
    - name: local-vol
      mountPath: /mnt
  volumes:
  - name: local-vol
    persistentVolumeClaim:
      claimName: pvc-local
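
To confirm the scheduler placed the Pod on the node named in the PV's nodeAffinity:

bash
# The NODE column should match the hostname used in the PV's nodeAffinity
kubectl get pod local-pod -o wide

# The file written by the container should contain that same hostname
kubectl exec local-pod -- cat /mnt/host.txt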

Question 6: Reclaim Policies

StorageClasses:

yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: delete-sc
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: retain-sc
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Retain

PVs and Results:

  • Delete: the PV and its underlying storage are removed after the PVC is deleted; for hostPath this only succeeds for paths under /tmp/ (see Question 1), otherwise the PV ends up Failed
  • Retain: the PV transitions to the Released state and persists, along with its data
  • Recycle (deprecated): PV content is scrubbed and the PV returns to Available; do not rely on this
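
A short sketch of observing the difference (pvc-delete and pvc-retain are assumed names of PVCs bound to a PV from each class):

bash
kubectl delete pvc pvc-delete pvc-retain   # assumed PVC names
kubectl get pv -w                          # retain-sc PV goes to Released; delete-sc PV is removed (or Failed for hostPath outside /tmp)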

Question 7: Multiple PVCs in One Pod

Pod with 3 volumes:

yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-mount-app
spec:
  containers:
  - name: app
    image: busybox
    command:
    - sh
    - -c
    - |
      echo "app-data" > /app/data/data.txt
      echo "app-config" > /app/config/config.txt
      echo "app-logs" > /app/logs/logs.txt
      sleep 3600
    volumeMounts:
    - name: data-vol
      mountPath: /app/data
    - name: config-vol
      mountPath: /app/config
    - name: logs-vol
      mountPath: /app/logs
  volumes:
  - name: data-vol
    persistentVolumeClaim:
      claimName: pvc-data
  - name: config-vol
    persistentVolumeClaim:
      claimName: pvc-config
  - name: logs-vol
    persistentVolumeClaim:
      claimName: pvc-logs
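
The Pod assumes the three claims already exist; a minimal sketch of one of them (size and storageClassName are assumptions, repeat for pvc-config and pvc-logs):

yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-data
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: manual   # assumed; any class with a matching PV works
  resources:
    requests:
      storage: 50Mi          # assumed size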

Question 8: RWO Access Mode Constraint

PV (RWO only):

yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-rwo
spec:
  capacity:
    storage: 100Mi
  accessModes: [ReadWriteOnce]  # Only one node at a time
  hostPath:
    path: /mnt/rwo
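
Both Pods reference a claim named pvc-rwo that is not shown in the exercise; a minimal sketch that would bind to the PV above (the empty storageClassName matches a PV with no class and skips dynamic provisioning):

yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-rwo
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: ""
  resources:
    requests:
      storage: 100Mi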

Pod1 (will succeed):

yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-rwo-1
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: rwo-vol
      mountPath: /mnt
  volumes:
  - name: rwo-vol
    persistentVolumeClaim:
      claimName: pvc-rwo

Pod2 (will fail/pending - RWO conflict):

yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-rwo-2
spec:
  nodeSelector:
    kubernetes.io/hostname: <OTHER_NODE>  # Different node
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: rwo-vol
      mountPath: /mnt
  volumes:
  - name: rwo-vol
    persistentVolumeClaim:
      claimName: pvc-rwo

Result: Pod2 stays Pending; kubectl describe pod pod-rwo-2 reports a FailedScheduling / volume attach error because the ReadWriteOnce volume is already in use on another node


Question 9: PVC Expansion

StorageClass (allow expansion):

yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-sc
provisioner: kubernetes.io/no-provisioner
allowVolumeExpansion: true

Expand PVC:

bash
kubectl patch pvc pvc-expand -p '{"spec":{"resources":{"requests":{"storage":"120Mi"}}}}'

Verification:

bash
kubectl get pvc pvc-expand
kubectl describe pvc pvc-expand
kubectl exec expand-pod -- df -h /data  # Reflects the new size only when the volume plugin actually resizes the filesystem (hostPath volumes are not resized)
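
Two further checks are useful here: whether the class permits expansion at all, and whether a resize is still pending on the PVC (both are standard API fields):

bash
# Should print true; otherwise the size increase is normally rejected
kubectl get sc expandable-sc -o jsonpath='{.allowVolumeExpansion}{"\n"}'

# Conditions such as Resizing / FileSystemResizePending appear while expansion is in progress
kubectl get pvc pvc-expand -o jsonpath='{.status.conditions}{"\n"}'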

Question 10: Troubleshooting Pending PVC

Debugging commands:

bash
# See why PVC is pending
kubectl describe pvc pvc-stuck

# Check available PVs and their status
kubectl get pv --show-labels

# Check if StorageClass exists
kubectl get sc

# Check events
kubectl get events --sort-by='.lastTimestamp'

# Check provisioner logs (the label selector depends on which provisioner is deployed)
kubectl logs -n kube-system -l app=provisioner

Common Fixes:

  1. Wrong StorageClass: Create correct StorageClass
  2. No matching capacity: Create larger PV
  3. Label mismatch: Add correct labels to the PV (see the example after this list)
  4. WaitForFirstConsumer: Create Pod using the PVC
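
For example, fix 3 can usually be applied in place without recreating the PV (pv-stuck and the label value are placeholders):

bash
# Add the label the PVC's selector expects to an existing PV
kubectl label pv pv-stuck type=ssd

# The controller retries binding periodically; watch for the PVC to become Bound
kubectl get pvc pvc-stuck -w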

Question 11: Complex Multi-Criteria Matching

StorageClass:

yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: selective-sc
provisioner: kubernetes.io/no-provisioner

PVs with multiple labels:

yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-1
  labels:
    tier: gold
    env: prod
spec:
  capacity:
    storage: 100Mi
  accessModes: [ReadWriteOnce]
  storageClassName: selective-sc
  hostPath:
    path: /mnt/pv1
---
# Similar for pv-2, pv-3, pv-4 with different label combinations

Multi-label PVC:

yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-gold-prod
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: selective-sc
  resources:
    requests:
      storage: 60Mi
  selector:
    matchLabels:
      tier: gold
      env: prod

Result: pvc-gold-prod binds to pv-1 (only PV with BOTH tier=gold AND env=prod)
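
Verification (the other PVs stay Available because each lacks at least one of the two labels):

bash
kubectl get pvc pvc-gold-prod -o jsonpath='{.spec.volumeName}{"\n"}'   # expect pv-1
kubectl get pv --show-labels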


Question 12: Node Affinity + Selector + StorageClass

Label a node:

bash
kubectl label nodes <node-name> storage=fast

PV with node affinity:

yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-fast-1
  labels:
    speed: fast
spec:
  capacity:
    storage: 100Mi
  accessModes: [ReadWriteOnce]
  storageClassName: node-affinity-sc
  hostPath:
    path: /mnt/fast1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: storage
          operator: In
          values:
          - fast
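
The PV references a StorageClass named node-affinity-sc that is not shown in the exercise; a minimal sketch (WaitForFirstConsumer is an assumption, but it is what lets the scheduler take the PV's nodeAffinity into account when placing the Pod):

yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: node-affinity-sc
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer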

The Pod consuming this PV will be scheduled on the storage=fast node automatically:

bash
kubectl get pods -o wide
# Pod should be on the storage=fast labeled node

Question 13: Multi-Label AND Logic

PVs with different label combinations:

yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-1
  labels:
    tier: gold
    env: prod
    region: us-east
spec:
  # ... spec details
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-2
  labels:
    tier: gold
    env: prod
    region: us-west
spec:
  # ... spec details

PVC with multiple selectors (AND logic):

yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-gold-prod-east
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: multi-label-sc
  resources:
    requests:
      storage: 50Mi
  selector:
    matchLabels:
      tier: gold       # AND
      env: prod        # AND
      region: us-east  # AND

Result: Binds ONLY to pv-1 (must match ALL three labels)
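
For reference, the same AND semantics can be expressed with matchExpressions, which also allows set-based operators; a sketch equivalent to the matchLabels selector above (the PVC name is hypothetical):

yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-gold-prod-east-expr
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: multi-label-sc
  resources:
    requests:
      storage: 50Mi
  selector:
    matchExpressions:   # every expression must match (AND), same as matchLabels
    - {key: tier, operator: In, values: [gold]}
    - {key: env, operator: In, values: [prod]}
    - {key: region, operator: In, values: [us-east]}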
