Storage Class, PV, & PVC – Solutions
Complete YAML examples and solutions for all 13 practice questions.
Question 1: Static PV + PVC Binding with Persistence
StorageClass:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
reclaimPolicy: Delete
```

PV:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-log
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  hostPath:
    path: /mnt/log-data
    type: DirectoryOrCreate
```

PVC:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-log
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  resources:
    requests:
      storage: 50Mi
```

Pod:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: log-pod
spec:
  containers:
    - name: logger
      image: busybox
      command: ["sh", "-c", "echo 'test123' > /log/test.txt && sleep 3600"]
      volumeMounts:
        - name: log
          mountPath: /log
  volumes:
    - name: log
      persistentVolumeClaim:
        claimName: pvc-log
```

Important Notes on Reclaim Policy with hostPath:
⚠️ Use Retain for hostPath PVs (not Delete):

- The StorageClass above shows `reclaimPolicy: Delete`, but for hostPath volumes the deletion will fail with: `host_path deleter only supports /tmp/.+ but received provided /mnt/log-data`
- Kubernetes' hostPath deleter only allows deletion of paths under `/tmp/` for security reasons.
- For hostPath PVs, set `reclaimPolicy: Retain` to avoid the PV going into a `Failed` state.
- If you need automatic deletion, use `/tmp` instead:

```yaml
hostPath:
  path: /tmp/log-data   # Safe for Delete reclaim policy
  type: DirectoryOrCreate
```
Reclaim Policy Behavior:

- Retain: the PV transitions to `Released` after PVC deletion; the underlying data persists at `/mnt/log-data` and can be manually recovered or inspected.
- Delete: attempts to delete the PV and the underlying storage; fails for hostPath paths outside `/tmp/`.
- Recycle (deprecated): previously scrubbed content; do not rely on this.
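The reclaim policy can also be set directly on a static PV via `spec.persistentVolumeReclaimPolicy`, which overrides the StorageClass default. A sketch of the PV from this question with Retain applied:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-log
spec:
  # Overrides the StorageClass's reclaimPolicy for this PV only
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  hostPath:
    path: /mnt/log-data
    type: DirectoryOrCreate
```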
Question 2: PVC Capacity Matching
Create 5 PVs with different capacities:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-tiny
spec:
  capacity:
    storage: 20Mi
  accessModes: [ReadWriteOnce]
  storageClassName: capacity-test
  hostPath:
    path: /mnt/tiny
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-small
spec:
  capacity:
    storage: 50Mi
  accessModes: [ReadWriteOnce]
  storageClassName: capacity-test
  hostPath:
    path: /mnt/small
---
# ... similar for medium, large, xlarge
```

Binding Logic:
- `pvc-60` (60Mi request) → binds to `pv-medium` (200Mi): it is the SMALLEST PV that satisfies the request; not `pv-xlarge` (1Gi), because Kubernetes chooses the smallest fit.
- `pvc-30` (30Mi request) → binds to `pv-small` (50Mi): `pv-tiny` (20Mi) is too small; `pv-small` is the smallest that fits.
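The claims themselves are not shown above; a sketch of `pvc-60` (`pvc-30` is identical apart from the name and a 30Mi request):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-60
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: capacity-test
  resources:
    requests:
      storage: 60Mi
```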
Question 3: Label Selector Binding
PVs with Labels:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-ssd-1
  labels:
    type: ssd
    speed: fast
spec:
  capacity:
    storage: 100Mi
  accessModes: [ReadWriteOnce]
  storageClassName: ""
  hostPath:
    path: /mnt/ssd1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-hdd
  labels:
    type: hdd
    speed: slow
spec:
  capacity:
    storage: 100Mi
  accessModes: [ReadWriteOnce]
  storageClassName: ""
  hostPath:
    path: /mnt/hdd
```

PVCs with Selectors:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-ssd
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: ""
  resources:
    requests:
      storage: 50Mi
  selector:
    matchLabels:
      type: ssd
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-hdd
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: ""
  resources:
    requests:
      storage: 50Mi
  selector:
    matchLabels:
      type: hdd
```

Pod:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: selector-test-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo 'data' > /mnt/data.txt && sleep 3600"]
      volumeMounts:
        - name: ssd-vol
          mountPath: /mnt
  volumes:
    - name: ssd-vol
      persistentVolumeClaim:
        claimName: pvc-ssd
```

Question 4: WaitForFirstConsumer Binding Mode
StorageClass:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: delayed-sc
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
```

PV:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-delayed
spec:
  capacity:
    storage: 100Mi
  accessModes: [ReadWriteOnce]
  storageClassName: delayed-sc
  hostPath:
    path: /mnt/delayed
```

PVC:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-delayed
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: delayed-sc
  resources:
    requests:
      storage: 50Mi
```

Behavior:
- After creating the PVC: Status = `Pending` (no binding yet; the binder waits for a consumer)
- After creating a Pod that uses the PVC: Status = `Bound` (binding happens when the Pod is scheduled)
- After deleting the Pod: the PVC remains `Bound`; a PV/PVC binding, once made, is released only when the PVC itself is deleted
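The solution above omits the Pod that triggers the binding; a minimal sketch (the name `delayed-pod` is assumed; any Pod that references the claim works):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: delayed-pod   # name assumed; any Pod that mounts the PVC triggers binding
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: vol
          mountPath: /mnt
  volumes:
    - name: vol
      persistentVolumeClaim:
        claimName: pvc-delayed
```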
Question 5: Local PV with Node Affinity
Get Node Name:
```shell
NODE_NAME=$(kubectl get nodes -o name | head -1 | cut -d'/' -f2)
echo $NODE_NAME
```

StorageClass:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-sc
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
```

Local PV with nodeAffinity:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-local-node1
spec:
  capacity:
    storage: 100Mi
  accessModes: [ReadWriteOnce]
  storageClassName: local-sc
  local:
    path: /mnt/local-data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - <YOUR_NODE_NAME>   # Replace with the actual node name
```

PVC:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-local
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: local-sc
  resources:
    requests:
      storage: 50Mi
```

Pod (will be scheduled on the correct node due to PV nodeAffinity):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: local-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "hostname > /mnt/host.txt && sleep 3600"]
      volumeMounts:
        - name: local-vol
          mountPath: /mnt
  volumes:
    - name: local-vol
      persistentVolumeClaim:
        claimName: pvc-local
```

Question 6: Reclaim Policies
StorageClasses:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: delete-sc
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: retain-sc
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Retain
```

PVs and Results:
- Delete: the PV is deleted after the PVC is deleted (for hostPath paths outside `/tmp/` the deletion fails and the PV goes to `Failed`; see the notes in Question 1)
- Retain: the PV transitions to the `Released` state and persists
- Recycle (deprecated): PV content is scrubbed and the PV returns to `Available`
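The PVs for this question are not shown; a sketch of one PV/PVC pair under `retain-sc` (names, path, and sizes assumed):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-retain   # name assumed
spec:
  capacity:
    storage: 100Mi
  accessModes: [ReadWriteOnce]
  storageClassName: retain-sc
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/retain   # path assumed
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-retain   # name assumed
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: retain-sc
  resources:
    requests:
      storage: 50Mi
```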
Question 7: Multiple PVCs in One Pod
Pod with 3 volumes:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-mount-app
spec:
  containers:
    - name: app
      image: busybox
      command:
        - sh
        - -c
        - |
          echo "app-data" > /app/data/data.txt
          echo "app-config" > /app/config/config.txt
          echo "app-logs" > /app/logs/logs.txt
          sleep 3600
      volumeMounts:
        - name: data-vol
          mountPath: /app/data
        - name: config-vol
          mountPath: /app/config
        - name: logs-vol
          mountPath: /app/logs
  volumes:
    - name: data-vol
      persistentVolumeClaim:
        claimName: pvc-data
    - name: config-vol
      persistentVolumeClaim:
        claimName: pvc-config
    - name: logs-vol
      persistentVolumeClaim:
        claimName: pvc-logs
```

Question 8: RWO Access Mode Constraint
PV (RWO only):
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-rwo
spec:
  capacity:
    storage: 100Mi
  accessModes: [ReadWriteOnce]   # Only one node at a time
  hostPath:
    path: /mnt/rwo
```

Pod1 (will succeed):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-rwo-1
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: rwo-vol
          mountPath: /mnt
  volumes:
    - name: rwo-vol
      persistentVolumeClaim:
        claimName: pvc-rwo
```

Pod2 (will fail/stay pending - RWO conflict):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-rwo-2
spec:
  nodeSelector:
    kubernetes.io/hostname: <OTHER_NODE>   # Different node
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: rwo-vol
          mountPath: /mnt
  volumes:
    - name: rwo-vol
      persistentVolumeClaim:
        claimName: pvc-rwo
```

Result: Pod2 stays Pending with a "FailedScheduling" event because the volume is already attached to another node.
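The shared claim both Pods reference is not shown; a sketch (size assumed):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-rwo
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: ""   # match pv-rwo, which has no StorageClass
  resources:
    requests:
      storage: 50Mi   # size assumed
```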
Question 9: PVC Expansion
StorageClass (allow expansion):
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-sc
provisioner: kubernetes.io/no-provisioner
allowVolumeExpansion: true
```

Expand PVC:
```shell
kubectl patch pvc pvc-expand -p '{"spec":{"resources":{"requests":{"storage":"120Mi"}}}}'
```

Verification:
```shell
kubectl get pvc pvc-expand
kubectl describe pvc pvc-expand
kubectl exec expand-pod -- df -h /data   # Should show 120Mi
```

Note: actual filesystem expansion requires a volume plugin or CSI driver that supports resize; with a static hostPath PV the API accepts the new size, but the volume itself is not grown.

Question 10: Troubleshooting Pending PVC
Debugging commands:
```shell
# See why the PVC is pending
kubectl describe pvc pvc-stuck

# Check available PVs and their status
kubectl get pv --show-labels

# Check whether the StorageClass exists
kubectl get sc

# Check events
kubectl get events --sort-by='.lastTimestamp'

# Check provisioner logs
kubectl logs -n kube-system -l app=provisioner
```

Common Fixes:
- Wrong StorageClass: Create correct StorageClass
- No matching capacity: Create larger PV
- Label mismatch: Add correct labels to PV
- WaitForFirstConsumer: Create Pod using the PVC
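As an illustration of the first fix, a hypothetical `pvc-stuck` that references a class that does not exist, plus the StorageClass that unblocks it (both names are hypothetical):

```yaml
# Hypothetical: the claim stays Pending because missing-sc does not exist
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-stuck
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: missing-sc
  resources:
    requests:
      storage: 50Mi
---
# Fix: create the StorageClass the claim asks for (and a matching PV, for a static setup)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: missing-sc
provisioner: kubernetes.io/no-provisioner
```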
Question 11: Complex Multi-Criteria Matching
StorageClass:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: selective-sc
provisioner: kubernetes.io/no-provisioner
```

PVs with multiple labels:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-1
  labels:
    tier: gold
    env: prod
spec:
  capacity:
    storage: 100Mi
  accessModes: [ReadWriteOnce]
  storageClassName: selective-sc
  hostPath:
    path: /mnt/pv1
---
# ... similar for pv-2, pv-3, pv-4 with different label combinations
```

Multi-label PVC:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-gold-prod
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: selective-sc
  resources:
    requests:
      storage: 60Mi
  selector:
    matchLabels:
      tier: gold
      env: prod
```

Result: `pvc-gold-prod` binds to `pv-1` (the only PV with BOTH `tier=gold` AND `env=prod`).
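The remaining PVs are elided above; for the result to hold, each of them must be missing at least one of the two labels. A hypothetical `pv-2` that the selector rejects (the label combination is an assumption):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-2
  labels:
    tier: gold
    env: dev   # assumed combination: env differs, so the selector rejects this PV
spec:
  capacity:
    storage: 100Mi
  accessModes: [ReadWriteOnce]
  storageClassName: selective-sc
  hostPath:
    path: /mnt/pv2
```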
Question 12: Node Affinity + Selector + StorageClass
Label a node:
```shell
kubectl label nodes <node-name> storage=fast
```

PV with node affinity:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-fast-1
  labels:
    speed: fast
spec:
  capacity:
    storage: 100Mi
  accessModes: [ReadWriteOnce]
  storageClassName: node-affinity-sc
  hostPath:
    path: /mnt/fast1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: storage
              operator: In
              values:
                - fast
```

The Pod will be scheduled on the fast node automatically:
```shell
kubectl get pods -o wide
# The Pod should be on the node labeled storage=fast
```

Question 13: Multi-Label AND Logic
PVs with different label combinations:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-1
  labels:
    tier: gold
    env: prod
    region: us-east
spec:
  # ... spec details
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-2
  labels:
    tier: gold
    env: prod
    region: us-west
spec:
  # ... spec details
```

PVC with multiple selectors (AND logic):
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-gold-prod-east
spec:
  accessModes: [ReadWriteOnce]   # required on a PVC
  storageClassName: multi-label-sc
  resources:
    requests:
      storage: 50Mi
  selector:
    matchLabels:
      tier: gold      # AND
      env: prod       # AND
      region: us-east # AND
```

Result: Binds ONLY to `pv-1` (the claim must match ALL three labels).
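The same AND semantics can be written with `matchExpressions`, which also supports set-based operators such as `In` and `NotIn`; an equivalent claim (the name is hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-gold-prod-east-expr   # hypothetical name
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: multi-label-sc
  resources:
    requests:
      storage: 50Mi
  selector:
    matchExpressions:
      - { key: tier, operator: In, values: [gold] }
      - { key: env, operator: In, values: [prod] }
      - { key: region, operator: In, values: [us-east] }
```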