Solutions for Service Tasks
This document contains the solutions and verification steps for the Service practice questions.
Solution 1: ClusterIP — Basic Service (CKA)
Manifest:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: marketing
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  namespace: marketing
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: internal-web
  namespace: marketing
spec:
  type: ClusterIP
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
```
Commands:
```bash
# Apply manifest
kubectl apply -f solution1.yaml
# Verify endpoints
kubectl get endpoints internal-web -n marketing
# Test connectivity from a pod
kubectl run test --rm -it --image=busybox -n marketing -- wget -qO- internal-web:80
```
Solution 2: NodePort — Basic Exposure (CKA)
Manifest:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: public-access
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: front-door
  namespace: public-access
spec:
  replicas: 2
  selector:
    matchLabels:
      app: front-door
  template:
    metadata:
      labels:
        app: front-door
    spec:
      containers:
      - name: nginx
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: global-entry
  namespace: public-access
spec:
  type: NodePort
  selector:
    app: front-door
  ports:
  - port: 80
    targetPort: 80
    nodePort: 31050
```
Commands:
```bash
# Apply manifest
kubectl apply -f solution2.yaml
# Verify NodePort assignment
kubectl get svc global-entry -n public-access
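# (Assumed helper step) list node IPs to substitute for <node-ip> below;
# any node's INTERNAL-IP should work if it is reachable from your machine
kubectl get nodes -o wide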
# Access from external (replace <node-ip> with actual node IP)
curl http://<node-ip>:31050
```
Solution 3: Imperative Commands
Commands:
```bash
# 1. Create the pod
kubectl run manual-pod --image=httpd:alpine --labels="app=manual"
# 2. Expose imperatively
kubectl expose pod manual-pod --name=fast-svc --port=80 --target-port=8080
# 3. Generate YAML without creation
kubectl expose pod manual-pod --name=nodeport-svc --type=NodePort --port=80 --dry-run=client -o yaml > nodeport.yaml
```
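Optional verification (a quick sketch; it assumes the default namespace and the object names used above):
```bash
# The service should list manual-pod's IP as its endpoint
kubectl get endpoints fast-svc
# Review the generated NodePort manifest before applying it
cat nodeport.yaml
```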
Solution 4: Multi-Port & Named Ports (CKA)
Manifest:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: services-test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-gateway
  namespace: services-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: gateway
  template:
    metadata:
      labels:
        app: gateway
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - name: http-port
          containerPort: 8080
        - name: metrics-port
          containerPort: 9090
---
apiVersion: v1
kind: Service
metadata:
  name: multi-svc
  namespace: services-test
spec:
  type: ClusterIP
  selector:
    app: gateway
  ports:
  - name: http
    port: 80
    targetPort: http-port
  - name: metrics
    port: 9090
    targetPort: metrics-port
```
Commands:
```bash
# Apply manifest
kubectl apply -f solution4.yaml
# Verify both ports are exposed
kubectl get svc multi-svc -n services-test
# Test connectivity for both ports
kubectl run test --rm -it --image=busybox -n services-test -- wget -qO- multi-svc:80
kubectl run test --rm -it --image=busybox -n services-test -- wget -qO- multi-svc:9090
```
Solution 5: Multi-Protocol Service
Manifest:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: app-gateway
spec:
  selector:
    app: gateway
  ports:
  - name: tcp-web
    protocol: TCP
    port: 80
    targetPort: 8080
  - name: udp-dns
    protocol: UDP
    port: 53
    targetPort: 5353
```
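Verification (a brief sketch; it assumes the manifest was saved as solution5.yaml and that pods labelled app: gateway exist):
```bash
kubectl apply -f solution5.yaml
# Both the TCP and UDP ports should appear in the PORT(S) column
kubectl get svc app-gateway
kubectl describe svc app-gateway
```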
Solution 6: Troubleshooting Selectors
Analysis: The issue is a mismatch between the Service selector and the Pod labels.
Fix: Ensure the Deployment pod template has the label tier: backend.
```yaml
# Corrected Pod Template in Deployment
template:
  metadata:
    labels:
      app: myapp
      tier: backend   # Added this line
```
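Once the labels match the selector, the Service's endpoints should populate. A minimal check (a sketch; <service-name> is a placeholder, since the question's actual Service name isn't shown here):
```bash
kubectl get pods -l tier=backend --show-labels
kubectl get endpoints <service-name>
```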
Solution 7: Service DNS & Resolution (CKA)
Commands:
```bash
# 1. Create namespaces
kubectl create ns alpha
kubectl create ns beta
# 2. Deploy in alpha
kubectl create deployment web --image=nginx --replicas=2 -n alpha
kubectl expose deployment web --name=alpha-svc --port=80 -n alpha
# 3. Test from beta namespace
kubectl run test --rm -it --image=busybox -n beta -- nslookup alpha-svc.alpha.svc.cluster.local
# 4. Verify connectivity with short and full FQDN
kubectl run test --rm -it --image=busybox -n beta -- wget -qO- alpha-svc.alpha:80
kubectl run test --rm -it --image=busybox -n beta -- wget -qO- alpha-svc.alpha.svc.cluster.local:80
```
Key Concepts:
- Same namespace: <service>
- Cross-namespace short: <service>.<namespace>
- Full FQDN: <service>.<namespace>.svc.cluster.local
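The same-namespace form can be checked directly as well (a quick sketch; it assumes the alpha deployment and alpha-svc created above are still running):
```bash
# From inside the alpha namespace, the bare service name resolves
kubectl run test --rm -it --image=busybox -n alpha -- wget -qO- alpha-svc:80
```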
Solution 8: Service Selectors & Endpoints (CKA)
Commands:
```bash
# 1. Create namespace
kubectl create ns testing
# 2. Create deployment (kubectl create deployment already labels the pod template app=backend)
kubectl create deployment backend --image=nginx --replicas=2 -n testing
kubectl label deployment backend app=backend -n testing
# Expose it so the Service checked in the verification below exists
kubectl expose deployment backend --name=backend-svc --port=80 -n testing
# Alternative: Declarative approach
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: testing
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: nginx
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: backend-svc
  namespace: testing
spec:
  selector:
    app: backend
  ports:
  - port: 80
EOF
# 3. Verify endpoints
kubectl get endpoints backend-svc -n testing
kubectl describe svc backend-svc -n testing
# 4. Test connectivity
kubectl run test --rm -it --image=busybox -n testing -- wget -qO- backend-svc:80
```
Solution 9: Manual Endpoints
Manifest:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: legacy-db
spec:
  ports:
  - port: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: legacy-db
subsets:
- addresses:
  - ip: 192.168.1.50
  ports:
  - port: 3306
```
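Verification (a short sketch; it assumes the manifest was saved as solution9.yaml and that 192.168.1.50:3306 is reachable from the cluster):
```bash
kubectl apply -f solution9.yaml
# With no selector, the endpoints must show the manually supplied address
kubectl get endpoints legacy-db
kubectl describe svc legacy-db
```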
Solution 10: Patching Service Types
Commands:
```bash
# 1. Create NodePort
kubectl create service nodeport upgrade-svc --tcp=80:80
# 2. Patch to ClusterIP (nodePort lives under spec.ports, so clear it there)
# Note: Simply changing type to ClusterIP usually works, but explicit cleanup is cleaner
kubectl patch svc upgrade-svc -p '{"spec":{"type":"ClusterIP","ports":[{"port":80,"nodePort":null}]}}'
```
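A quick check that the patch landed (the TYPE column should now read ClusterIP and the PORT(S) column should no longer list a node port):
```bash
kubectl get svc upgrade-svc -o wide
```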