Restricting Cluster Access and Dashboard Security

Overview

Securing a Kubernetes cluster is not just about configuring authentication and authorization -- it is about controlling who and what can reach the cluster in the first place. Every unnecessary access path is an attack surface. This topic covers restricting API server network exposure, securing (or removing) the Kubernetes Dashboard, locking down NodePort services, hardening kubectl access, and preventing anonymous and stale credential abuse.

CKS Exam Relevance

You may be asked to restrict API server access by modifying flags in the static pod manifest, secure or remove the Kubernetes Dashboard, limit NodePort exposure, or audit kubeconfig files for excessive permissions. Know how to use --bind-address, --service-node-port-range, and how to create read-only RBAC for dashboard service accounts.

Cluster Access Paths -- Security Architecture

Understanding every path into the cluster is the first step to restricting access.

Every access path (kubectl clients, the dashboard UI, NodePort services, the kubelet API) is a potential attack vector. The goal is to eliminate unnecessary paths and restrict the remaining ones to the minimum required access.

Restricting API Server Access

The API server is the single most critical component. Limiting network-level access to it is a foundational security measure.

Binding the API Server to Specific Interfaces

By default, the API server may listen on all network interfaces (0.0.0.0). Restrict it to only the interface that needs to be accessible:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
    - command:
        - kube-apiserver
        - --bind-address=192.168.1.10        # Listen ONLY on this IP (control plane network)
        - --advertise-address=192.168.1.10   # Advertise this IP to cluster components
        # Do NOT use --bind-address=0.0.0.0 in production
```

| Flag | Purpose | Secure Value |
| --- | --- | --- |
| `--bind-address` | Interface the API server listens on | Specific control plane IP (e.g., `192.168.1.10`) |
| `--advertise-address` | IP advertised to cluster members | Same as `--bind-address` or an internal IP |
| `--secure-port` | Port for HTTPS API traffic | `6443` (default; keep it) |

Do Not Bind to 0.0.0.0

Binding to 0.0.0.0 means the API server listens on every network interface, including public-facing ones. In cloud environments, this could expose the API server to the internet if firewall rules are misconfigured.
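To confirm what the API server actually binds to, check the listening sockets on the control plane node (the addresses shown in the comments are illustrative):

```bash
# On the control plane node: show the address kube-apiserver listens on
ss -tlnp | grep kube-apiserver
# Good: LISTEN ... 192.168.1.10:6443 ... ("kube-apiserver",...)
# Bad:  LISTEN ... *:6443 or 0.0.0.0:6443 (all interfaces)
```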

Disabling the Insecure Port

Older Kubernetes versions supported an insecure HTTP port (default 8080) with no authentication or authorization. This was removed in Kubernetes 1.24, but you should still verify it is disabled on older clusters:

```yaml
spec:
  containers:
    - command:
        - kube-apiserver
        - --insecure-port=0    # Explicitly disable (removed in 1.24+)
```

```bash
# Verify no insecure port is open
curl http://localhost:8080/api
# Should fail with "connection refused"

ss -tlnp | grep 8080
# Should return nothing
```

Legacy Clusters

If you encounter --insecure-port set to a non-zero value, this is a critical vulnerability. Anyone with network access to that port has full unauthenticated access to the API. Set it to 0 immediately.

Cloud-Level Firewall and Security Groups

Network-level restrictions provide defense-in-depth, even when Kubernetes authentication and authorization are properly configured.

```bash
# AWS: Restrict security group for API server to admin CIDR only
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 6443 \
  --cidr 10.0.0.0/16    # Only internal network

# GCP: Firewall rule for API server
gcloud compute firewall-rules create allow-apiserver \
  --allow=tcp:6443 \
  --source-ranges=10.0.0.0/16 \
  --target-tags=control-plane

# iptables: Restrict API server access on the node itself
iptables -A INPUT -p tcp --dport 6443 -s 10.0.0.0/16 -j ACCEPT
iptables -A INPUT -p tcp --dport 6443 -j DROP
```

Defense in Depth

Even with perfect RBAC and authentication, a network-level firewall prevents attackers from even reaching the API server to attempt exploitation of authentication vulnerabilities. Always layer network restrictions with Kubernetes-level access controls.

Restricting the NodePort Range

By default, NodePort services use ports 30000-32767. You can restrict or change this range:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
    - command:
        - kube-apiserver
        - --service-node-port-range=30000-30100    # Limit to a narrow range
```

A narrower range:

  • Reduces the attack surface on every node
  • Makes firewall rules simpler and more precise
  • Limits how many services can be exposed via NodePort
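After narrowing the range, you can sanity-check it: the API server should reject any Service that requests a nodePort outside the range (the service name here is illustrative):

```bash
# Try to claim a nodePort outside the allowed 30000-30100 range
kubectl create service nodeport demo --tcp=80:8080 --node-port=32000
# Expected: rejected with an error along the lines of
# "provided port is not in the valid range. The range of valid ports is 30000-30100"
```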

Kubernetes Dashboard Security

The Kubernetes Dashboard is a web-based UI for managing the cluster. While useful for visibility, it is one of the most exploited entry points into Kubernetes clusters.

Why the Dashboard Is a Security Risk

The dashboard is dangerous because:

| Risk | Description |
| --- | --- |
| External exposure | If exposed via NodePort or LoadBalancer, anyone on the network can reach it |
| Overly permissive SA | The dashboard service account often has broad cluster-level read (or even write) permissions |
| Skip login option | Older versions allowed bypassing authentication entirely |
| Credential leakage | The dashboard shows Secrets, ConfigMaps, and other sensitive data in the UI |
| Single point of compromise | One compromised dashboard session grants access to the entire cluster |

The Tesla Cryptojacking Incident (2018)

Real-World Breach

In 2018, attackers discovered Tesla's Kubernetes Dashboard was publicly accessible without authentication. The dashboard's service account had sufficient permissions for the attackers to deploy cryptocurrency mining pods across the cluster. They also accessed AWS credentials stored in Kubernetes Secrets, gaining access to Tesla's cloud environment.

Root causes:

  • Dashboard exposed to the internet without authentication
  • Dashboard service account had excessive permissions
  • No network-level access restrictions
  • Secrets stored without encryption at rest

Secure Dashboard Deployment (If Needed)

If the dashboard is required, deploy it with the following security controls:

Step 1: Deploy with Minimal Permissions

```bash
# Deploy the dashboard (official manifest)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
```

Step 2: Create a Read-Only Service Account

```yaml
# dashboard-readonly-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-readonly
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: dashboard-readonly
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "nodes", "namespaces", "events"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["deployments", "daemonsets", "statefulsets", "replicasets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses", "networkpolicies"]
    verbs: ["get", "list", "watch"]
  # NOTE: Deliberately excludes "secrets", "configmaps" with sensitive data,
  # and all write verbs (create, update, patch, delete)
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-readonly-binding
subjects:
  - kind: ServiceAccount
    name: dashboard-readonly
    namespace: kubernetes-dashboard
roleRef:
  kind: ClusterRole
  name: dashboard-readonly
  apiGroup: rbac.authorization.k8s.io
```

```bash
kubectl apply -f dashboard-readonly-sa.yaml

# Generate a token for this service account
kubectl -n kubernetes-dashboard create token dashboard-readonly
```

Step 3: Access via kubectl proxy Only

```bash
# Access the dashboard through a local proxy (NEVER expose externally)
kubectl proxy

# Then open in browser:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
```

The kubectl proxy approach ensures:

  • Traffic flows through your local kubeconfig authentication
  • The dashboard is only accessible on localhost
  • No external network exposure whatsoever
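If the proxy URL is inconvenient, kubectl port-forward gives the same localhost-only guarantee; the service name and ports below match the standard v2.x manifest, so adjust if your deployment differs:

```bash
# Alternative: forward the dashboard Service to localhost only
kubectl port-forward -n kubernetes-dashboard svc/kubernetes-dashboard 10443:443
# Then open: https://localhost:10443 (a self-signed certificate warning is expected)
```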

Step 4: Restrict Dashboard with NetworkPolicy

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: dashboard-restrict
  namespace: kubernetes-dashboard
spec:
  podSelector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  policyTypes:
    - Ingress
  ingress:
    # Only allow traffic from within the cluster (API server proxy)
    - from:
        - ipBlock:
            cidr: 10.0.0.0/8    # Internal cluster network only
      ports:
        - protocol: TCP
          port: 8443
```

Dashboard Authentication Options

| Method | Security Level | How It Works |
| --- | --- | --- |
| Bearer Token | Good | User pastes a ServiceAccount token into the login form |
| Kubeconfig | Good | Upload a kubeconfig file to authenticate |
| Skip Login | Dangerous | Bypasses auth, uses the dashboard's own SA permissions |
| OAuth2 Proxy | Best | External identity provider via a reverse proxy |

Never Enable Skip Login

The --enable-skip-login flag on the dashboard allows users to bypass authentication entirely. Never enable this in any environment.

```yaml
# DANGEROUS -- never do this
containers:
  - name: kubernetes-dashboard
    args:
      - --enable-skip-login    # REMOVE THIS
```
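To audit an existing deployment for this flag (deployment and namespace names per the standard manifest):

```bash
# Flags the check if --enable-skip-login appears in the dashboard's args
kubectl -n kubernetes-dashboard get deployment kubernetes-dashboard \
  -o jsonpath='{.spec.template.spec.containers[0].args}' \
  | grep -q 'enable-skip-login' \
  && echo "skip-login ENABLED -- remove it" \
  || echo "skip-login not set"
```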

When to Remove the Dashboard Entirely

Best Practice

If the dashboard is not actively used, remove it. It provides no functionality that cannot be achieved through kubectl and introduces unnecessary attack surface.

```bash
# Remove the dashboard completely
kubectl delete namespace kubernetes-dashboard

# Or if installed via a specific manifest:
kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
```

In production and security-focused environments, the dashboard should be considered optional and removed unless there is a clear business requirement.

Restricting External Access to NodePort Services

Why NodePort Is a Security Concern

When you create a NodePort service, Kubernetes opens the specified port on every node in the cluster, regardless of whether the target pods run on that node.

This means:

  • Every node becomes an entry point for that service
  • Any host-level firewall must account for the entire NodePort range
  • In cloud environments, security groups must explicitly block or allow these ports
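You can observe this directly: after exposing a single-replica deployment via NodePort, the port answers on every node, not just the one hosting the pod (deployment and node names are illustrative):

```bash
# Expose a single pod via NodePort (illustrative names)
kubectl create deployment web --image=nginx --replicas=1
kubectl expose deployment web --type=NodePort --port=80
NODE_PORT=$(kubectl get svc web -o jsonpath='{.spec.ports[0].nodePort}')

# With the default externalTrafficPolicy (Cluster), EVERY node answers,
# even nodes that run no matching pod
curl -s -o /dev/null -w '%{http_code}\n' http://node-1:"$NODE_PORT"
curl -s -o /dev/null -w '%{http_code}\n' http://node-2:"$NODE_PORT"
```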

Restricting NodePort Access with Firewall Rules

```bash
# Allow NodePort traffic only from specific trusted CIDRs
iptables -A INPUT -p tcp --dport 30000:30100 -s 10.0.0.0/16 -j ACCEPT
iptables -A INPUT -p tcp --dport 30000:30100 -j DROP

# Or use cloud security groups (AWS example)
aws ec2 authorize-security-group-ingress \
  --group-id sg-worker-nodes \
  --protocol tcp \
  --port 30000-30100 \
  --cidr 10.0.0.0/16
```

Using externalTrafficPolicy for Source IP Preservation

```yaml
apiVersion: v1
kind: Service
metadata:
  name: restricted-service
spec:
  type: NodePort
  externalTrafficPolicy: Local    # Only route to pods on the receiving node
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080
  selector:
    app: my-app
```

Setting externalTrafficPolicy: Local means only nodes actually running the pod will accept traffic. Nodes without matching pods will drop the traffic. This reduces the attack surface and preserves the client's source IP for firewall-based filtering.
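A quick verification sketch (node names illustrative): with externalTrafficPolicy: Local, only the node hosting a matching pod should respond:

```bash
# Find which node actually runs the pod
kubectl get pods -l app=my-app -o wide

# The hosting node responds; other nodes drop the connection
curl --connect-timeout 3 http://node-with-pod:30080
curl --connect-timeout 3 http://node-without-pod:30080   # expected to time out
```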

Prefer ClusterIP or Ingress Over NodePort

| Service Type | External Exposure | Recommended For |
| --- | --- | --- |
| ClusterIP | None (internal only) | Internal services, most workloads |
| NodePort | Every node on a high port | Development, debugging (avoid in production) |
| LoadBalancer | External IP via cloud LB | Production external services (with cloud firewall) |
| Ingress | Via Ingress controller only | HTTP/HTTPS services (most secure external option) |

Avoid NodePort in Production

Use Ingress with TLS termination for external HTTP(S) access, or LoadBalancer with cloud security groups for non-HTTP protocols. NodePort should be a last resort and must be paired with strict firewall rules.
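As a sketch of the preferred pattern, a minimal Ingress with TLS termination might look like this (the hostname, TLS Secret, and ingress class are placeholders, and an installed ingress controller such as ingress-nginx is assumed):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: nginx            # assumes an NGINX ingress controller
  tls:
    - hosts:
        - app.example.com
      secretName: app-example-tls    # pre-created TLS Secret (placeholder)
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```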

Securing kubectl Access

Kubeconfig File Security

The kubeconfig file (~/.kube/config) typically contains certificates, keys, or tokens that grant cluster access. Treat it like a password file.

```bash
# Set restrictive permissions on kubeconfig
chmod 600 ~/.kube/config
chown $(whoami):$(whoami) ~/.kube/config

# Verify permissions
ls -la ~/.kube/config
# Should show: -rw------- 1 user user ... config
```

Kubeconfig Contains Credentials

A kubeconfig file often embeds client certificates and private keys (or bearer tokens) directly. If an attacker obtains your kubeconfig, they have your full cluster access. Never commit kubeconfig files to version control, share them in chat, or store them on shared filesystems.

Working with Multiple Contexts Securely

```bash
# List all contexts in the kubeconfig
kubectl config get-contexts

# View the current context
kubectl config current-context

# Switch context explicitly (know which cluster you are operating on)
kubectl config use-context production-admin

# Use a specific context for a single command without switching
kubectl --context=staging-admin get pods

# Use a specific kubeconfig file without modifying the default
KUBECONFIG=/path/to/specific/config kubectl get pods
```

Avoid Accidental Production Changes

Use separate kubeconfig files for production and non-production environments. Set KUBECONFIG per shell session to avoid accidentally running commands against the wrong cluster.

```bash
# In your shell profile, default to a safe context
export KUBECONFIG=~/.kube/staging-config

# Only use production config when explicitly needed
KUBECONFIG=~/.kube/production-config kubectl get pods -n critical-app
```

Using OIDC/SSO Instead of Static Certificates

Static client certificates embedded in kubeconfig files have significant drawbacks:

| Aspect | Static Certificates | OIDC/SSO |
| --- | --- | --- |
| Expiration | Typically long-lived (e.g., 1 year) | Short-lived tokens (minutes to hours) |
| Revocation | Cannot be revoked without rotating the CA | Tokens expire automatically; users can be deactivated in the IdP |
| Auditing | Hard to distinguish users sharing a cert | Each user has a unique identity |
| Onboarding/Offboarding | Manual cert creation and distribution | Managed centrally in the identity provider |
| MFA | Not possible | Supported by most IdPs |

```yaml
# Example kubeconfig with an OIDC provider (uses the kubelogin plugin, invoked as kubectl oidc-login)
apiVersion: v1
kind: Config
clusters:
  - cluster:
      server: https://k8s-api.example.com:6443
      certificate-authority-data: <ca-cert-data>
    name: production
users:
  - name: oidc-user
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: kubectl
        args:
          - oidc-login
          - get-token
          - --oidc-issuer-url=https://accounts.google.com
          - --oidc-client-id=my-kubernetes-client
          - --oidc-client-secret=my-client-secret
contexts:
  - context:
      cluster: production
      user: oidc-user
    name: production-oidc
```

Auditing Kubeconfig Files for Stale Credentials

```bash
# List all users in the kubeconfig and check for embedded credentials
kubectl config view --raw -o json | \
  jq -r '.users[] | "\(.name): keys=\(.user | keys)"'

# Check for client certificates and their expiration
kubectl config view --raw -o json | \
  jq -r '.users[].user["client-certificate-data"] // empty' | \
  base64 -d | openssl x509 -noout -dates -subject 2>/dev/null

# Find kubeconfig files that might be scattered across the system
find / -name "config" -path "*/.kube/*" 2>/dev/null
find / -name "*.kubeconfig" 2>/dev/null
find / -name "kubeconfig" 2>/dev/null

# Check for kubeconfig files with overly broad permissions
find / -name "config" -path "*/.kube/*" -perm /077 2>/dev/null

Regular Credential Rotation

Regularly audit kubeconfig files across your organization. Look for:

  • Certificates that have expired or are about to expire
  • Credentials belonging to users who have left the organization
  • Kubeconfig files with chmod 644 or broader permissions
  • Embedded tokens that never expire (static tokens)
  • Kubeconfig files stored in shared directories or version control
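A small sweep like the following can flag expiring embedded certificates across discovered kubeconfig files (the search paths and 30-day threshold are illustrative):

```bash
# Flag any kubeconfig whose embedded client cert expires within 30 days
for cfg in $(find /home /root -name "config" -path "*/.kube/*" 2>/dev/null); do
  kubectl --kubeconfig="$cfg" config view --raw -o json \
    | jq -r '.users[].user["client-certificate-data"] // empty' \
    | while read -r cert; do
        echo "$cert" | base64 -d \
          | openssl x509 -noout -checkend $((30*24*3600)) >/dev/null \
          || echo "EXPIRING SOON: $cfg"
      done
done
```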

Anonymous Access Prevention

By default, the API server allows anonymous authentication. Anonymous requests are assigned the identity system:anonymous with group system:unauthenticated.

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
    - command:
        - kube-apiserver
        - --anonymous-auth=false    # Disable anonymous authentication
```

```bash
# Verify anonymous access is blocked
curl -k https://localhost:6443/api/v1/pods
# Should return 401 Unauthorized (not 403 Forbidden)

# If you get a 403, anonymous auth is enabled but RBAC is denying access.
# If you get a 401, anonymous auth is properly disabled.
```

TIP

For a deeper discussion of anonymous auth, authentication mechanisms, and the full API request flow, see Securing the API Server.

Service Account Token Security

Pods receive service account tokens by default, granting them API access they may not need. Limiting this is critical for reducing blast radius.

Disable Automatic Token Mounting

```yaml
# At the ServiceAccount level (affects all pods using this SA)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: production
automountServiceAccountToken: false
---
# At the Pod level (overrides the SA setting)
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  namespace: production
spec:
  serviceAccountName: my-app
  automountServiceAccountToken: false
  containers:
    - name: app
      image: my-app:latest

Audit Service Account Usage

```bash
# Find pods with automountServiceAccountToken enabled (or default)
kubectl get pods -A -o json | \
  jq -r '.items[] |
    select(.spec.automountServiceAccountToken != false) |
    "\(.metadata.namespace)/\(.metadata.name) sa=\(.spec.serviceAccountName)"'

# Check what the default service account can do in a namespace
kubectl auth can-i --list \
  --as=system:serviceaccount:production:default \
  -n production

# Find service accounts with cluster-admin bindings
kubectl get clusterrolebindings -o json | \
  jq -r '.items[] |
    select(.roleRef.name == "cluster-admin") |
    select(.subjects[]? | .kind == "ServiceAccount") |
    "\(.metadata.name): \(.subjects[] | select(.kind == "ServiceAccount") | "\(.namespace)/\(.name)")"'

TIP

For a comprehensive guide on RBAC, service account token security, dangerous permissions, and least-privilege patterns, see RBAC Deep Dive.

Comprehensive Access Restriction Checklist

Complete Cluster Access Hardening Checklist

API Server Network Restrictions:

  • [ ] --bind-address set to control plane IP (not 0.0.0.0)
  • [ ] --insecure-port=0 on pre-1.24 clusters (the flag was removed in 1.24+)
  • [ ] --service-node-port-range narrowed to what is actually needed
  • [ ] Cloud firewall / security groups restrict port 6443 access
  • [ ] kubelet port (10250) restricted to control plane only

Dashboard Security:

  • [ ] Dashboard removed if not needed
  • [ ] If deployed, accessed only via kubectl proxy
  • [ ] Dashboard service account has read-only permissions only
  • [ ] --enable-skip-login is NOT present
  • [ ] NetworkPolicy restricts dashboard pod ingress

kubectl and Credential Security:

  • [ ] Kubeconfig file permissions set to 600
  • [ ] OIDC/SSO used instead of static certificates where possible
  • [ ] Separate kubeconfig files for production and non-production
  • [ ] Stale credentials audited and removed regularly
  • [ ] Kubeconfig files never stored in version control

Service Account and Anonymous Access:

  • [ ] --anonymous-auth=false set on the API server
  • [ ] automountServiceAccountToken: false on pods that do not need API access
  • [ ] Default service accounts have no additional RBAC bindings
  • [ ] No unnecessary ClusterRoleBindings to service accounts
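A hedged spot-check for the last two items (the jq expression is a sketch):

```bash
# List bindings that grant extra permissions to any "default" service account
kubectl get rolebindings,clusterrolebindings -A -o json \
  | jq -r '.items[]
      | select(.subjects[]? | (.kind == "ServiceAccount" and .name == "default"))
      | "\(.kind)/\(.metadata.name)"'
```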

Quick Reference

```bash
# --- API Server Access Restriction ---

# Check what address the API server is bound to
ps aux | grep kube-apiserver | tr ' ' '\n' | grep bind-address

# Check if insecure port is disabled
ps aux | grep kube-apiserver | tr ' ' '\n' | grep insecure-port

# Check the NodePort range
ps aux | grep kube-apiserver | tr ' ' '\n' | grep service-node-port-range

# --- Dashboard Operations ---

# Check if the dashboard is installed
kubectl get deployments -n kubernetes-dashboard

# Remove the dashboard
kubectl delete namespace kubernetes-dashboard

# Access dashboard securely via proxy
kubectl proxy &
# Open: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

# Create a read-only token for dashboard access
kubectl -n kubernetes-dashboard create token dashboard-readonly

# --- Kubeconfig Security ---

# Lock down kubeconfig permissions
chmod 600 ~/.kube/config

# Check current context before running commands
kubectl config current-context

# List all contexts
kubectl config get-contexts

# Run a command against a specific context
kubectl --context=staging get pods

# --- Anonymous Access ---

# Test if anonymous auth is disabled
curl -k https://localhost:6443/api/v1/namespaces
# 401 = anonymous auth disabled (good)
# 403 = anonymous auth enabled but RBAC blocking (partial)

# --- Service Account Audit ---

# Check what a service account can do
kubectl auth can-i --list --as=system:serviceaccount:default:default -n default

# Find all pods NOT disabling automount
kubectl get pods -A -o json | \
  jq -r '.items[] | select(.spec.automountServiceAccountToken != false) | .metadata.namespace + "/" + .metadata.name'
```

Key Exam Takeaways

| Topic | What to Remember |
| --- | --- |
| API server binding | Use `--bind-address` with a specific IP, never `0.0.0.0` in production |
| Insecure port | Must be `0` or absent; removed entirely in Kubernetes 1.24+ |
| NodePort range | `--service-node-port-range` restricts which ports can be used; narrower is safer |
| Dashboard security | Remove it if not needed; if kept, use `kubectl proxy`, a read-only SA, no skip-login |
| Dashboard RBAC | Create a dedicated SA with a ClusterRole that excludes secrets and write verbs |
| kubeconfig | `chmod 600`, never commit to git, prefer OIDC over static certs, audit regularly |
| Anonymous auth | `--anonymous-auth=false` on the API server; verify with curl returning 401 |
| Service account tokens | `automountServiceAccountToken: false` on pods that do not call the API |
| Defense in depth | Layer network firewalls, Kubernetes RBAC, and NetworkPolicies together |
