How I Securely Manage Kubernetes Dashboard Access with Tokens

Jan 15, 2024

When you first install the Kubernetes Dashboard, it’s tempting to use a high-privileged account to get in and see everything. But this is a huge security risk. I learned early on that the only safe way to manage dashboard access is by creating dedicated service accounts with fine-grained, read-only permissions. It prevents accidents and provides a clear audit trail. Here’s my process for creating secure, token-based access for my teams.

Why I Insist on Using Service Accounts

For me, using service accounts for dashboard access is non-negotiable. It means I never have to share admin credentials. I can give each team or user their own token with permissions scoped to exactly what they need to see, whether it’s a single namespace or the whole cluster (in read-only mode). If someone leaves the team, I can immediately revoke their access by just deleting their service account. It’s secure, auditable, and follows the principle of least privilege.

My Step-by-Step Process for Creating a Read-Only User

Here’s the workflow I follow to create a new read-only user for the dashboard.

Step 1: Create the Service Account

First, I create a new ServiceAccount in the kubernetes-dashboard namespace. I’ll call it something descriptive, like dashboard-viewer.

kubectl create serviceaccount dashboard-viewer -n kubernetes-dashboard

Step 2: Define the Read-Only Permissions

Next, I define the permissions. I create a ClusterRole named dashboard-viewer that has get, list, and watch verbs on all the common resources like pods, deployments, and services. I'm very careful not to grant any write permissions like create, update, or delete. I also think twice before including secrets in the list: even read-only access to secrets exposes their contents, so I only include them for teams that genuinely need to see them.

# dashboard-viewer-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: dashboard-viewer
rules:
- apiGroups: ["", "apps", "batch", "networking.k8s.io"]
  resources: ["pods", "pods/log", "deployments", "replicasets", "services", "ingresses", "jobs", "cronjobs", "configmaps", "secrets", "namespaces", "events"]
  verbs: ["get", "list", "watch"]

I apply this with kubectl apply -f dashboard-viewer-role.yaml.

Step 3: Bind the Service Account to the Role

Then, I bind the service account to the role using a ClusterRoleBinding. This is what actually grants the permissions to the account.

kubectl create clusterrolebinding dashboard-viewer-binding \
  --serviceaccount=kubernetes-dashboard:dashboard-viewer \
  --clusterrole=dashboard-viewer

Step 4: Generate the Token

Since Kubernetes 1.24, tokens are no longer created automatically. So, my next step is to create a Secret of type kubernetes.io/service-account-token and annotate it with the name of my service account. This triggers Kubernetes to generate a token and store it in this secret.

# dashboard-viewer-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: dashboard-viewer-token
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: dashboard-viewer
type: kubernetes.io/service-account-token

Step 5: Extract and Use the Token

Finally, I extract the token from the secret and decode it from base64. This is the token I’ll give to my user to log in to the dashboard.

kubectl get secret dashboard-viewer-token -n kubernetes-dashboard -o jsonpath='{.data.token}' | base64 -d
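
Before handing the token over, it's worth sanity-checking the permissions with kubectl auth can-i and service-account impersonation. Given the ClusterRole and binding created above, read verbs should be allowed and write verbs denied:

# Verify read access is granted (should print "yes")
kubectl auth can-i list pods \
  --as=system:serviceaccount:kubernetes-dashboard:dashboard-viewer

# Verify write access is denied (should print "no")
kubectl auth can-i delete pods \
  --as=system:serviceaccount:kubernetes-dashboard:dashboard-viewer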

Common Permission Patterns I Use

I have a few standard permission sets I use. For a general-purpose viewer for the operations team, I’ll use the ClusterRole I just created. For a development team that only needs to see their own namespace, I’ll create a Role and RoleBinding that are scoped only to their specific namespace. For admins, I’ll bind a separate admin service account to the built-in cluster-admin role, but I’m very careful about who gets that token.
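
As a sketch of that namespace-scoped pattern, here's what a Role and RoleBinding might look like. The namespace team-dev and the service account dev-viewer are hypothetical names; substitute your own:

# team-viewer-role.yaml (hypothetical example for a team namespace)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: team-viewer
  namespace: team-dev
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "pods/log", "deployments", "replicasets", "services"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-viewer-binding
  namespace: team-dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: team-viewer
subjects:
- kind: ServiceAccount
  name: dev-viewer
  namespace: kubernetes-dashboard

Because the Role and RoleBinding live in team-dev, the account can see that namespace in the dashboard and nothing else.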

My Automation Script

To make this process repeatable, I've written a simple shell script that takes a username, creates the service account, secret, and a read-only binding, and outputs the token.

#!/bin/bash
# create-dashboard-user.sh
set -euo pipefail

# Fail with a usage message if no username is given
USERNAME="${1:?Usage: $0 <username>}"
NAMESPACE=kubernetes-dashboard

kubectl create serviceaccount "${USERNAME}" -n "${NAMESPACE}"

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: ${USERNAME}-token
  namespace: ${NAMESPACE}
  annotations:
    kubernetes.io/service-account.name: ${USERNAME}
type: kubernetes.io/service-account-token
EOF

kubectl create clusterrolebinding "${USERNAME}-binding" \
  --serviceaccount="${NAMESPACE}:${USERNAME}" \
  --clusterrole=view

# Poll until the token controller has populated the secret,
# rather than relying on a fixed sleep
until TOKEN=$(kubectl get secret "${USERNAME}-token" -n "${NAMESPACE}" \
    -o jsonpath='{.data.token}' 2>/dev/null | base64 -d) && [ -n "${TOKEN}" ]; do
  sleep 1
done

printf 'Token for %s:\n%s\n' "${USERNAME}" "${TOKEN}"

My Final Thoughts

My key takeaway is that you should treat dashboard access with the same rigor as any other production access. My rules are simple: always follow the principle of least privilege, use namespace-scoped roles whenever possible, give each user their own service account, and rotate tokens regularly. For an even more secure approach, I’ve started using the kubectl create token command, which generates short-lived tokens without needing to create a secret at all. This approach has made our dashboard access both secure and easy to manage.
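
For reference, requesting a short-lived token for the viewer account from Step 1 looks like this. The --duration flag is optional; one hour is just an example, and the cluster may cap or extend the actual lifetime:

kubectl create token dashboard-viewer -n kubernetes-dashboard --duration=1h

The command prints the token directly to stdout, with no secret to create, rotate, or clean up.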
