Kubernetes Dashboard access using config file: "Not enough data to create auth info structure."

After looking at the answer to "How to sign in kubernetes dashboard?" and the source code, I figured out the kubeconfig authentication.

After the kubeadm install on the master server, get the default service account token and add it to the config file. Then use the config file to authenticate.

You can use this script to add the token:

#!/bin/bash
# Grab the token of the default service account in the kube-system namespace
TOKEN=$(kubectl -n kube-system describe secret default | awk '$1=="token:"{print $2}')

# Store the token in the kubeconfig credentials for kubernetes-admin
kubectl config set-credentials kubernetes-admin --token="${TOKEN}"

Your config file should then look like this:

kubectl config view | cut -c1-50 | tail -10
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.ey
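
With the token embedded, choose the Kubeconfig option on the Dashboard sign-in screen and point it at this file. If your browser runs on another machine, copy the file over first (host and paths here are illustrative):

# Copy the token-bearing kubeconfig from the master to your workstation
scp root@<master-ip>:.kube/config ~/dashboard.kubeconfig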

Only the authentication options specified by the Dashboard's --authentication-mode flag are supported in the kubeconfig file.

You can authenticate with a token (any token in the kube-system namespace):

$ kubectl get secrets -n kube-system
$ SECRET_NAME=<one of the secret names listed above>
$ kubectl get secret $SECRET_NAME -n kube-system -o json | jq -r '.data["token"]' | base64 -d > user_token.txt

and then authenticate with that token (see the user_token.txt file).
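
To sanity-check the extracted token before using it on the Dashboard sign-in screen, you can call the API server with it directly (a sketch; <master-ip> is a placeholder for your cluster endpoint, and -k skips TLS verification only for this quick test):

$ curl -k -H "Authorization: Bearer $(cat user_token.txt)" https://<master-ip>:6443/api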


Two things are going on here:

  • the Kubernetes Dashboard Application needs an authentication token
  • and this authentication token must be linked to an account with sufficient privileges.

The usual way to deploy the Dashboard Application is just

  • to kubectl apply a YAML file pulled from the configuration recommended at the GitHub project for the dashboard: /src/deploy/recommended/kubernetes-dashboard.yaml (master branch, at v1.10.1)
  • then to run kubectl proxy and access the dashboard through the locally mapped port 8001, as sketched below.
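
Concretely, that bootstrap looks like this (a sketch; the URL pins the v1.10.1 release referenced above, so adjust it to your target version):

# Deploy the recommended Dashboard manifest (here: v1.10.1)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

# Expose the API server locally on port 8001, then open
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
kubectl proxy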

However, this default configuration is generic and minimal. It just maps a role binding with minimal privileges. And, especially on DigitalOcean, the kubeconfig file provided when provisioning the cluster lacks the actual token needed to log in to the dashboard.

Thus, to fix these shortcomings, we need to ensure there is a ServiceAccount in the Namespace kube-system that is bound to the cluster-admin ClusterRole. The above-mentioned default setup only provides a binding to kubernetes-dashboard-minimal. We can fix that by explicitly deploying

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
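
Save this manifest to a file (the name admin-user.yaml is just an example) and apply it:

# Create the ServiceAccount and its cluster-admin binding
kubectl apply -f admin-user.yaml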

And then we also need to get the token for this ServiceAccount...

  • kubectl get serviceaccount -n kube-system lists all service accounts; check that the one you want/created is present
  • kubectl get secrets -n kube-system should list a secret for this account
  • and with kubectl describe secret -n kube-system admin-user-token-XXXXXX you get the details of that secret, including the token itself.

The other answers to this question provide ample hints on how this access can be scripted in a convenient way (e.g. using awk, using grep, using kubectl get with -o=json and piping to jq, or using -o=jsonpath).
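
For instance, a compact jsonpath variant (a sketch assuming a pre-1.24 cluster, where a token Secret is created automatically for each ServiceAccount):

# Name of the auto-created token secret for the admin-user ServiceAccount
SECRET_NAME=$(kubectl -n kube-system get serviceaccount admin-user -o jsonpath='{.secrets[0].name}')
# Decode and print the token itself
kubectl -n kube-system get secret "$SECRET_NAME" -o jsonpath='{.data.token}' | base64 -d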

You can then either:

  • store this token in a text file and upload that on the Dashboard sign-in screen
  • edit your kubeconfig file and paste the token into the admin user entry provided there (see the sketch below)
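
The second option can be scripted the same way as in the first answer (a sketch reusing $SECRET_NAME from above; kubernetes-admin is the default kubeadm user entry, so substitute the user name from your own kubeconfig):

# Write the admin-user token into the kubeconfig credentials
TOKEN=$(kubectl -n kube-system get secret "$SECRET_NAME" -o jsonpath='{.data.token}' | base64 -d)
kubectl config set-credentials kubernetes-admin --token="${TOKEN}"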
