How to Update Kubectl Config from AWS EKS

Ever searched for “kubectl update config from AWS EKS” and needed a quick answer? Here is how to do it in three short steps.

Step 1 – Validate AWS CLI

Make sure that you have valid AWS credentials set up for your AWS CLI.

You can check this by typing:

aws sts get-caller-identity

This will show you which AWS account and IAM identity your AWS CLI is currently authenticated as.
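A successful call returns a small JSON document. The values below are illustrative placeholders; your user ID, account number and ARN will differ:

{
    "UserId": "AIDAEXAMPLEUSERID",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/your_user_name"
}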

You may need to update your ~/.aws/credentials file with a profile name, aws_access_key_id, aws_secret_access_key and aws_session_token, if these are generated for you by your Single Sign-On (SSO) provider.
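For reference, a minimal credentials entry for such a profile might look like the following. The profile name and every value here are placeholders:

[my_sso_profile]
aws_access_key_id = AKIAEXAMPLEACCESSKEY
aws_secret_access_key = EXAMPLESECRETACCESSKEY
aws_session_token = EXAMPLESESSIONTOKEN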

If there is a profile you want to use going forward that isn’t the default, you can export it into the current CLI session. This saves you from typing --profile <profile_name> on every API call.

export AWS_PROFILE=<profile_name_in_credentials_file>
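To confirm that the exported profile is now the one being used, re-run the identity check from earlier:

aws sts get-caller-identity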

Step 2 – Update Kubectl Config

Next, you need the AWS CLI to update the local ~/.kube/config file for you.

To do this, replace the placeholders below with your cluster name and the AWS region it is deployed in:

aws eks update-kubeconfig --name <your_eks_cluster_name> --region <aws_region>
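For example, for a hypothetical cluster named my-cluster running in eu-west-2, you could also pass the optional --alias flag so the new kubectl context gets a short name instead of the full cluster ARN:

aws eks update-kubeconfig --name my-cluster --region eu-west-2 --alias my-cluster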

If this was successful, you should get a response that looks something like:

Added new context arn:aws:eks:<region>:<accountnumber>:cluster/<clustername> to /Users/user/.kube/config
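You can also list the contexts kubectl now knows about, and check which one is currently active:

kubectl config get-contexts
kubectl config current-context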

Step 3 – Verify Cluster Information

To confirm that you are connected to the cluster you intended, run the following command:

kubectl cluster-info

This will output something like:

Kubernetes control plane is running at https://<cluster_id>.<region>.eks.amazonaws.com
CoreDNS is running at https://<cluster_id>.<region>.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
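As a final sanity check, you can list the worker nodes. If the API server is reachable and your credentials are valid, the cluster’s nodes and their status will be printed:

kubectl get nodes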