Connect local instance of kubectl to GKE cluster without using gcloud tool?

You may use kubectl with either a user account, or a service account.

User accounts are designed to be used by humans, hence the apparent "limitation" of the tools: if you use a GKE cluster, a user is assumed to have gcloud installed and to be logged in with it.

You may instead use a service account, which is designed to be used by software. Kubernetes has a dedicated resource type, ServiceAccount (not to be confused with GCP service accounts!). An added benefit: it is a k8s feature, so it doesn't depend on the managed Kubernetes offering you use (GKE, AKS, etc.).

Approach:

You only need the gcloud tool to connect to the cluster for the very first time. Once you have access to the cluster, you can create a new k8s ServiceAccount and use its token in the kubectl config file.

The service account, however, will need to be assigned the necessary roles (k8s Role and RoleBinding resources) in a fine-grained manner.

Your cluster needs to have RBAC authorization enabled for this to work.

Word of caution:

Service account tokens do not expire, so special care should be taken never to expose or compromise them. It is probably a good idea to rotate them periodically.
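
One possible way to rotate a token, assuming your cluster still auto-creates token secrets for service accounts, is to delete the token secret and let the token controller issue a new one (the secret name below is a placeholder):

kubectl delete secret <token-secret-name>
kubectl get secrets   # a replacement secret with a fresh token should appear shortly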

Simple(istic) example:

This is a simplified example of how the above can be done. It is rather simplistic because it uses the default namespace, only grants a few permissions, and requires a fair amount of manual steps, but it can help you get started with your own implementation.

Create a file my-service-account.yaml:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-user
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: my-user-role
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-user-role-binding
subjects:
  - kind: ServiceAccount
    name: my-user
    namespace: default   # required for ServiceAccount subjects
roleRef:
  kind: Role
  name: my-user-role
  apiGroup: rbac.authorization.k8s.io

and run kubectl apply -f my-service-account.yaml to create the resources.
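
As a quick sanity check you can confirm the resources were created (the names here match the manifest above):

kubectl get serviceaccount my-user
kubectl get role my-user-role
kubectl get rolebinding my-user-role-binding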

Once the service account is created, you can run kubectl get secrets to find the secret that holds the user token (its name is derived from the service account name). Then run kubectl get secret <secret-name-here> -o yaml to get the secret data, and find the token in the data.token field of the output. The token is base64-encoded, so you will need to decode it before using it in the kubectl config file (you can use base64 -d on Linux for this). In the end, the relevant part of the kubectl config file may look like:

apiVersion: v1
clusters:
  ...
contexts:
  ...
users:
- name: my-user
  user:
    token: <token-value-here>
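
Alternatively, instead of editing the config file by hand, you can achieve the same with kubectl itself. A rough sketch (the secret name my-user-token-abcde and the cluster name are placeholders; yours will differ):

# extract and decode the token from the auto-created secret
TOKEN=$(kubectl get secret my-user-token-abcde -o jsonpath='{.data.token}' | base64 -d)
# register the user and a context that uses it in the kubectl config
kubectl config set-credentials my-user --token="$TOKEN"
kubectl config set-context my-user-context --cluster=<your-cluster-name> --user=my-user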

Now you may switch kubectl context to the one you created for this user, and run:

kubectl get pods

The newly created user can only do the above and pretty much nothing else, since this is what is configured in the associated role. You can find out more about RBAC and the role configuration in the Kubernetes documentation: Using RBAC Authorization.
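
If you want to double-check the effective permissions, one option is to use impersonation from your admin (gcloud-authenticated) context, for example:

kubectl auth can-i list pods --as=system:serviceaccount:default:my-user
kubectl auth can-i delete pods --as=system:serviceaccount:default:my-user

The first command should return yes, the second no, given the role defined above.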


The easiest way to achieve this is by copying the ~/.kube/config file (from a gcloud-authenticated instance) to the $HOME/.kube directory on your local machine (laptop).

But first, using the authenticated instance, you have to enable legacy client certificate authentication per this document by running these commands:

gcloud config set container/use_client_certificate True
export CLOUDSDK_CONTAINER_USE_CLIENT_CERTIFICATE=True

Then execute the get-credentials command, and copy the file.

gcloud container clusters get-credentials NAME [--zone=ZONE, -z ZONE] [GCLOUD_WIDE_FLAG …]
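
For example, with a placeholder cluster name and zone, and assuming you then pull the file to your laptop over SSH:

# on the gcloud-authenticated instance
gcloud container clusters get-credentials my-cluster --zone europe-west1-b
# from your laptop, copy the resulting config file (hostname is a placeholder)
scp my-gcloud-instance:~/.kube/config ~/.kube/config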

Note that you may have to run the get-credentials command and copy the config file again every time the authentication tokens (saved in the config file) expire.


So I have created a tool called gke-kubeconfig to do exactly this. I basically reverse engineered gcloud and did the same thing: the tool requests a short-term token first and then uses it to fetch the cluster data and build the kube config.

You need to make sure that the token does not expire while you are using the config file; it currently expires after 1 hour, so usually this shouldn't be a problem.

I have also created mie00/gke-kubeconfig to use in my CI pipeline.