Ever searched for "kubectl update config from aws eks" and needed a quick result?
Step 1 – Validate AWS CLI
Make sure that you have valid AWS credentials set up for your AWS CLI.
You can check this by typing:
aws sts get-caller-identity
This will tell you which AWS account and identity your AWS CLI is currently pointing at.
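For example, a successful response looks roughly like the following (all values here are placeholders):
{
    "UserId": "AIDAEXAMPLEUSERID",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/your-user"
}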
You may need to update your ~/.aws/credentials file with a named profile section containing an aws_access_key_id, aws_secret_access_key and aws_session_token, if these are generated for you by your Single Sign-On (SSO) provider.
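As a rough sketch, a named profile entry in ~/.aws/credentials might look like this; the profile name my-sso-profile and all of the values are placeholders:
[my-sso-profile]
aws_access_key_id = AKIAEXAMPLEKEYID
aws_secret_access_key = exampleSecretAccessKey
aws_session_token = exampleSessionToken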
If you have a profile you want to use going forward that isn't the default, you can export it into the current CLI session. This saves you from typing --profile <profile_name> each time you make an API call.
export AWS_PROFILE=<profile_name_in_credentials_file>
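To confirm the exported profile took effect, you can echo the variable and re-run the identity check from Step 1:
echo $AWS_PROFILE
aws sts get-caller-identity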
Step 2 – Update Kubectl Config
Next, you will need to get the AWS CLI to update the local ~/.kube/config file for you.
To do this, run the following command, replacing <your_eks_cluster_name> with your cluster's name and <aws_region> with the region it is deployed in:
aws eks update-kubeconfig --name <your_eks_cluster_name> --region <aws_region>
If this was successful, you should get a response that looks something like:
Added new context arn:aws:eks:<region>:<accountnumber>:cluster/<clustername> to /Users/user/.kube/config
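If you work with several clusters, the ARN-based context name can be unwieldy. The update-kubeconfig command also accepts an --alias flag to give the context a friendlier name, which you can then switch to with kubectl config; the alias my-eks below is just an example:
aws eks update-kubeconfig --name <your_eks_cluster_name> --region <aws_region> --alias my-eks
kubectl config get-contexts
kubectl config use-context my-eks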
Step 3 – Verify Cluster Information
To confirm that you are connected to the cluster you intended, run the following command:
kubectl cluster-info
This will output something like:
Kubernetes control plane is running at https://xxxxx.xxx.<region>.eks.amazonaws.com
CoreDNS is running at https://xxxxx.xxx.<region>.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
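As one final sanity check, you can list the cluster's worker nodes; assuming your credentials and context are correct, this should return the nodes and their status:
kubectl get nodes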