kubeadm Practice Questions
Question 1: Initialize a Cluster
Objective: Initialize a new Kubernetes cluster with kubeadm.
Requirements:
- Kubernetes version: 1.31.0
- Pod network CIDR: 10.244.0.0/16
- After init, configure kubectl for the root user
Verify:
kubectl get nodes
kubectl cluster-info
Question 2: Generate Join Command
Objective: Generate the command to join worker nodes to an existing cluster.
Requirements:
- Create a new bootstrap token
- Print the complete join command including the token and CA cert hash
Verify:
kubeadm token list
Question 3: Join a Worker Node
Objective: Join a worker node named worker-1 to the cluster.
Given:
- Control plane IP: 192.168.1.10
- Token: abcdef.1234567890abcdef
- CA cert hash: sha256:abc123...
Requirements:
- Run the join command on worker-1
- Verify the node appears in the cluster
Verify:
kubectl get nodes
Question 4: Upgrade Control Plane
Objective: Upgrade the first control plane node from v1.30.0 to v1.31.0.
Requirements:
- Follow the correct upgrade order
- Upgrade kubeadm first
- Check upgrade plan
- Apply the upgrade
- Drain the node properly
- Upgrade kubelet and kubectl
- Uncordon the node
Verify:
kubectl get nodes
kubeadm version
kubelet --version
Question 5: Upgrade Worker Node
Objective: Upgrade worker node worker-1 from v1.30.0 to v1.31.0.
Requirements:
- Drain the worker from the control plane
- Upgrade kubeadm on the worker
- Run kubeadm upgrade node
- Upgrade kubelet and kubectl
- Uncordon the node
Verify:
kubectl get nodes -o wide
Question 6: Backup etcd
Objective: Create a backup of the etcd database.
Requirements:
- Save snapshot to /opt/backup/etcd-snapshot.db
- Use the correct certificates from the etcd pod manifest
- Verify the snapshot was created successfully
Verify:
ETCDCTL_API=3 etcdctl snapshot status /opt/backup/etcd-snapshot.db --write-out=table
Question 7: Restore etcd from Backup
Objective: Restore etcd from a snapshot file.
Given:
- Snapshot file: /opt/backup/etcd-snapshot.db
- Data directory should be: /var/lib/etcd-restored
Requirements:
- Stop kubelet before restore
- Restore the snapshot to the new data directory
- Update etcd manifest to use new data directory
- Ensure cluster recovers
Verify:
kubectl get pods -A
Question 8: Check Certificate Expiration
Objective: Check when cluster certificates will expire.
Requirements:
- List all certificate expiration dates
- Identify which certificate expires first
Verify:
kubeadm certs check-expiration
Question 9: Renew Certificates
Objective: Renew the API server certificate.
Requirements:
- Renew only the apiserver certificate
- Restart the API server to use new cert
Verify:
kubeadm certs check-expiration | grep apiserver
Question 10: Troubleshoot Node Not Ready
Objective: A node worker-2 shows NotReady status. Diagnose and fix.
Possible Issues to Check:
- Is kubelet running?
- Are there kubelet errors in the journal?
- Is the container runtime running?
- Network plugin issues?
Commands to Use:
kubectl describe node worker-2
ssh worker-2 "systemctl status kubelet"
ssh worker-2 "journalctl -xeu kubelet"
ssh worker-2 "crictl ps"
Question 11: Create Token with Custom TTL
Objective: Create a new bootstrap token that expires in 2 hours.
Requirements:
- Create token with 2-hour TTL
- Verify token was created with correct expiration
Verify:
kubeadm token list
Question 12: Reset kubeadm on Node
Objective: Completely remove kubeadm setup from a node.
Requirements:
- Run kubeadm reset
- Clean up iptables rules
- Remove CNI configuration
- Remove kubeconfig
Verify:
ls /etc/kubernetes/manifests/
iptables -L
Question 13: Join Additional Control Plane
Objective: Add a second control plane node to create an HA cluster.
Given:
- First control plane already initialized with --upload-certs
- Certificate key: abc123...
Requirements:
- Join the node as a control plane (not worker)
- Configure kubectl on the new control plane
Verify:
kubectl get nodes
Question 14: Find etcd Endpoint and Certificates
Objective: From the etcd pod manifest, find the endpoint URL and certificate paths.
Requirements:
- Find the endpoint URL (--listen-client-urls)
- Find paths for: CA cert, server cert, server key
- Write findings to /tmp/etcd-info.txt
Verify:
cat /tmp/etcd-info.txt
Question 15: Check etcd Health
Objective: Verify etcd cluster is healthy.
Requirements:
- Use etcdctl to check endpoint health
- Use correct certificates
Expected Output:
https://127.0.0.1:2379 is healthy
Solutions Reference
Question 1 Solution
kubeadm init \
--kubernetes-version=1.31.0 \
--pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
Question 2 Solution
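The one-liner below prints the complete join command. The discovery hash it embeds can also be derived by hand from the cluster CA — a sketch, assuming the default kubeadm PKI path:

```shell
# Create a new bootstrap token (default TTL 24h)
kubeadm token create

# Compute the CA cert hash that --discovery-token-ca-cert-hash expects
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```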
kubeadm token create --print-join-command
Question 4 Solution
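The -1.1 package revision used below depends on the configured repository; the revisions actually available can be listed first — a sketch, assuming a Debian-family host using the pkgs.k8s.io apt repo:

```shell
# List the kubeadm package versions the repo offers
apt-get update
apt-cache madison kubeadm   # pick the 1.31.0-* revision shown
```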
# Upgrade kubeadm
apt-mark unhold kubeadm
apt-get update && apt-get install -y kubeadm=1.31.0-1.1
apt-mark hold kubeadm
# Check plan
kubeadm upgrade plan
# Apply upgrade
kubeadm upgrade apply v1.31.0
# Drain node
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
# Upgrade kubelet/kubectl
apt-mark unhold kubelet kubectl
apt-get update && apt-get install -y kubelet=1.31.0-1.1 kubectl=1.31.0-1.1
apt-mark hold kubelet kubectl
# Restart and uncordon
systemctl daemon-reload
systemctl restart kubelet
kubectl uncordon <node-name>
Question 6 Solution
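The certificate paths used below are the kubeadm defaults; before taking the snapshot they can be confirmed against the live etcd static pod manifest — a sketch:

```shell
# Pull the client endpoint and cert paths from the etcd manifest
grep -E "listen-client-urls|trusted-ca-file|--cert-file|--key-file" \
  /etc/kubernetes/manifests/etcd.yaml
```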
ETCDCTL_API=3 etcdctl snapshot save /opt/backup/etcd-snapshot.db \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key
Question 7 Solution
systemctl stop kubelet
ETCDCTL_API=3 etcdctl snapshot restore /opt/backup/etcd-snapshot.db \
--data-dir=/var/lib/etcd-restored
# Edit /etc/kubernetes/manifests/etcd.yaml:
#   change --data-dir to /var/lib/etcd-restored
#   change the etcd data volume hostPath to /var/lib/etcd-restored
systemctl start kubelet
Question 8 Solution
kubeadm certs check-expiration
Question 9 Solution
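Renewal can also be confirmed independently of kubeadm by reading the certificate's expiry date straight from the file — a sketch, assuming the default PKI path:

```shell
# Print the apiserver certificate's notAfter date
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -enddate
```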
kubeadm certs renew apiserver
# Restart API server (move manifest out and back)
mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/
sleep 20   # give the kubelet time to notice and stop the static pod
mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/
Question 11 Solution
kubeadm token create --ttl 2h
Question 12 Solution
kubeadm reset -f
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
rm -rf /etc/cni/net.d
rm -f $HOME/.kube/config
Question 14 Solution
grep -E "listen-client-urls|trusted-ca-file|cert-file|key-file" /etc/kubernetes/manifests/etcd.yaml > /tmp/etcd-info.txt
Question 15 Solution
ETCDCTL_API=3 etcdctl endpoint health \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key
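Questions 3, 5, 10, and 13 have no entry above; a sketch of each, reusing the placeholder IP, token, and certificate key exactly as given in the questions:

```shell
# Question 3: run on worker-1 (values are the placeholders from the question)
kubeadm join 192.168.1.10:6443 \
  --token abcdef.1234567890abcdef \
  --discovery-token-ca-cert-hash sha256:abc123...

# Question 5: drain from the control plane, then upgrade on the worker
kubectl drain worker-1 --ignore-daemonsets --delete-emptydir-data
# on worker-1:
apt-mark unhold kubeadm
apt-get update && apt-get install -y kubeadm=1.31.0-1.1
apt-mark hold kubeadm
kubeadm upgrade node
apt-mark unhold kubelet kubectl
apt-get install -y kubelet=1.31.0-1.1 kubectl=1.31.0-1.1
apt-mark hold kubelet kubectl
systemctl daemon-reload && systemctl restart kubelet
# back on the control plane:
kubectl uncordon worker-1

# Question 10: work down the checklist from the question
kubectl describe node worker-2                # check conditions and events first
ssh worker-2 "systemctl status kubelet"       # not running? systemctl start kubelet
ssh worker-2 "journalctl -xeu kubelet"        # look for cert, CNI, or runtime errors
ssh worker-2 "systemctl status containerd"    # assumes containerd as the runtime
ssh worker-2 "ls /etc/cni/net.d/"             # missing CNI config keeps a node NotReady

# Question 13: run on the new node; the certificate key came from --upload-certs
kubeadm join 192.168.1.10:6443 \
  --token abcdef.1234567890abcdef \
  --discovery-token-ca-cert-hash sha256:abc123... \
  --control-plane \
  --certificate-key abc123...
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
```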