How do you get kubectl to log in to an AWS EKS cluster?

  1. As mentioned in the docs, the AWS IAM user that created the EKS cluster automatically receives system:masters permissions, and that's enough to get kubectl working. You need to use this user's credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) to access the cluster. If you didn't create a specific IAM user to create the cluster, then you probably created it with the root AWS account; in that case you can use the root user's credentials (Creating Access Keys for the Root User).
  2. The main magic is inside the aws-auth ConfigMap in your cluster: it contains the mapping from IAM entities (users and roles) to Kubernetes users and groups, as shown below.
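For example, once kubectl works with the cluster creator's credentials, you can inspect that mapping directly (kube-system and aws-auth are the standard namespace and name on EKS):

$ kubectl -n kube-system get configmap aws-auth -o yaml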

I'm not sure how you pass credentials to the aws-iam-authenticator:

  • If you have ~/.aws/credentials with a profile such as aws_profile_of_eks_iam_creator (sketched just after this list), then you can try $ AWS_PROFILE=aws_profile_of_eks_iam_creator kubectl get all --all-namespaces
  • Also, you can use environment variables $ AWS_ACCESS_KEY_ID=XXX AWS_SECRET_ACCESS_KEY=YYY AWS_DEFAULT_REGION=your-region-1 kubectl get all --all-namespaces
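For the first option, ~/.aws/credentials would contain a profile roughly like this minimal sketch; the profile name matches the example above, and the key values are placeholders for the credentials of the IAM user that created the cluster:

[aws_profile_of_eks_iam_creator]
aws_access_key_id = <access key id of the EKS creator>
aws_secret_access_key = <secret access key of the EKS creator>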

Both of them should work, because kubectl ... will use the generated ~/.kube/config, which contains the aws-iam-authenticator token -i cluster_name command. aws-iam-authenticator uses environment variables or ~/.aws/credentials to give you a token.
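To make this more concrete, the users entry in a ~/.kube/config set up for aws-iam-authenticator looks roughly like the sketch below; cluster_name, the user name eks-user and the AWS_PROFILE value are placeholders, and the exec apiVersion differs between client versions:

users:
- name: eks-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1   # older setups use v1alpha1
      command: aws-iam-authenticator
      args: ["token", "-i", "cluster_name"]
      env:                                                # optional: pin the profile the authenticator uses
        - name: AWS_PROFILE
          value: aws_profile_of_eks_iam_creator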

Also, this answer may be useful for understanding the first EKS user creation.


Here are my steps using the aws-cli


$ export AWS_ACCESS_KEY_ID="something"
$ export AWS_SECRET_ACCESS_KEY="something"
$ export AWS_SESSION_TOKEN="something"

$ aws eks update-kubeconfig \
  --region us-west-2 \
  --name my-cluster

>> Added new context arn:aws:eks:us-west-2:#########:cluster/my-cluster to /home/john/.kube/config
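To confirm that the new context actually authenticates against the cluster, any simple read-only call will do, e.g.:

$ kubectl get nodes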

Bonus: use kubectx to switch kubectl contexts

$ kubectx 

>> arn:aws:eks:us-west-2:#########:cluster/my-cluster-two
>> arn:aws:eks:us-east-1:#####:cluster/my-cluster

$ kubectx arn:aws:eks:us-east-1:#####:cluster/my-cluster


>> Switched to context "arn:aws:eks:us-east-1:#####:cluster/my-cluster".
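If you'd rather not install kubectx, plain kubectl can list and switch contexts too:

$ kubectl config get-contexts
$ kubectl config use-context arn:aws:eks:us-east-1:#####:cluster/my-cluster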

Ref: https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html


Once you have set up the AWS config on your system, check the current identity to verify that you're using the correct credentials that have permissions for the Amazon EKS cluster:

aws sts get-caller-identity
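The output is a small JSON document showing which IAM entity your CLI is using; its Arn should match the user or role that created the cluster. The values below are placeholders:

{
    "UserId": "AIDAEXAMPLEUSERID",
    "Account": "111122223333",
    "Arn": "arn:aws:iam::111122223333:user/eks-cluster-creator"
}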

Afterwards use:

aws eks --region region update-kubeconfig --name cluster_name

This will create a kubeconfig file at $HOME/.kube/config containing the required Kubernetes API server URL.

Afterwards, you can follow the kubectl installation instructions and this should work.
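To quickly verify that kubectl can talk to the cluster with those credentials, a simple read-only command works well, for example:

kubectl get svc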


After going over the comments, it seems that you:

  1. Have created the cluster with the root user.
  2. Then created an IAM user and created AWS credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) for it.
  3. Used that access key and secret key in your kubeconfig settings (it doesn't matter how; there are multiple ways to do that).

And here is the problem as described in the docs:

If you receive one of the following errors while running kubectl commands, then your kubectl is not configured properly for Amazon EKS or the IAM user or role credentials that you are using do not map to a Kubernetes RBAC user with sufficient permissions in your Amazon EKS cluster.

  • could not get token: AccessDenied: Access denied
  • error: You must be logged in to the server (Unauthorized)
  • error: the server doesn't have a resource type "svc" <--- Your case

This could be because the cluster was created with one set of AWS credentials (from an IAM user or role), and kubectl is using a different set of credentials.

When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator (with system:masters permissions).
Initially, only that IAM user can make calls to the Kubernetes API server using kubectl.

For more information, see Managing users or IAM roles for your cluster. If you use the console to create the cluster, you must ensure that the same IAM user credentials are in the AWS SDK credential chain when you are running kubectl commands on your cluster.

This is the cause for the errors.

As the accepted answer describes, you'll need to edit the aws-auth ConfigMap in order to manage users or IAM roles for your cluster.
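A rough sketch of what that edit looks like, run with the cluster creator's credentials (the account ID and user name below are placeholders):

kubectl -n kube-system edit configmap aws-auth

Then add an entry like this under the data section to grant the additional IAM user admin access:

  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/my-iam-user
      username: my-iam-user
      groups:
        - system:masters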