Best way to seed a new machine with k8s/EKS info

Say we have a couple of clusters on Amazon EKS. We have a new user or new machine that needs .kube/config to be populated with the latest cluster info.
Is there an easy way to get the context info from our clusters on EKS and put it into the .kube/config file? Something like:
eksctl init "cluster-1-ARN" "cluster-2-ARN"
So after some web-sleuthing, I heard about:
aws eks update-kubeconfig
I tried that, and I get this:
$ aws eks update-kubeconfig
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:

  aws help
  aws <command> help
  aws <command> <subcommand> help

aws: error: argument --name is required
I would have thought it would just update the config for all clusters, but it doesn't. So I passed the cluster names/ARNs, like so:
aws eks update-kubeconfig --name arn:aws:eks:us-west-2:913xxx371:cluster/eks-cluster-1
aws eks update-kubeconfig --name arn:aws:eks:us-west-2:913xxx371:cluster/ignitecluster
but then I get:
kbc stderr: An error occurred (ResourceNotFoundException) when calling the DescribeCluster operation: No cluster found for name: arn:aws:eks:us-west-2:913xxx371:cluster/eks-cluster-1.
kbc stderr: An error occurred (ResourceNotFoundException) when calling the DescribeCluster operation: No cluster found for name: arn:aws:eks:us-west-2:913xxx371:cluster/ignitecluster.
Hmmm, this is kinda dumb 😒 those cluster names exist... so what 🤷 do I do now?

So yeah those clusters I named don't actually exist. I discovered that via:
aws eks list-clusters
Ultimately, however, I still feel strongly that someone needs to make a tool that can just update your config with all the clusters that exist, instead of making you name them.
So to do this programmatically, it would be:
aws eks list-clusters | jq -r '.clusters[]' | while read -r c; do
  aws eks update-kubeconfig --name "$c"
done
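If jq isn't available, a similar loop can lean on the AWS CLI's built-in --query filter instead (a sketch; adjust --region to wherever your clusters live):
region=us-west-2
for c in $(aws eks list-clusters --region "$region" --query 'clusters[]' --output text); do
  aws eks update-kubeconfig --name "$c" --region "$region"
done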

In my case, I was working with two AWS environments. My ~/.aws/credentials file was pointing to one and had to be changed to point to the correct account. Once you change the account details, you can verify the change by running the following command:
eksctl get clusters
and then set the kubeconfig using the command below, after verifying the region.
aws eks --region your_aws_region update-kubeconfig --name your_eks_cluster
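If both environments need to stay configured, named profiles are another way to switch between them (a sketch; the profile name prod is just an example):
export AWS_PROFILE=prod   # or pass --profile prod to each command
eksctl get clusters
aws eks --region your_aws_region update-kubeconfig --name your_eks_cluster --profile prod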

Related

Cannot install Helm chart when accessing GKE cluster directly

I've set up a basic GKE cluster using Autopilot settings. I am able to install Helm charts on it using kubectl with proper kubeconfig pointing to the GKE cluster.
I'd like to do the same without the kubeconfig, by providing the cluster details with relevant parameters.
To do that I'm running a Docker container using the alpine/helm image and passing the parametrised command, which looks like this:
docker run --rm -v $(pwd):/chart alpine/helm install <my_chart_name> /chart --kube-apiserver <cluster_endpoint> --kube-ca-file /chart/<cluster_certificate_file> --kube-as-user <my_gke_cluster_username> --kube-token <token>
Unfortunately it returns:
Error: INSTALLATION FAILED: Kubernetes cluster unreachable: Get "http://<cluster_endpoint>/version": dial tcp <cluster_endpoint>:80: i/o timeout
Is this even doable with GKE?
One challenge will be that GKE leverages a plugin (currently built into kubectl itself but soon the standalone gke-gcloud-auth-plugin) to obtain an access token for the default gcloud user.
This token expires hourly.
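If passing a token directly is a hard requirement, one option is to mint a fresh access token per run (a sketch, assuming gcloud is installed and authenticated locally; the angle-bracket placeholders are the ones from the question):
TOKEN=$(gcloud auth print-access-token)
docker run --rm -v $(pwd):/chart alpine/helm install <my_chart_name> /chart \
  --kube-apiserver https://<cluster_endpoint> \
  --kube-ca-file /chart/<cluster_certificate_file> \
  --kube-token "$TOKEN"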
If you can, it would be better to mount the kubeconfig file (${HOME}/.kube/config) into the container: it should (!) then authenticate just as kubectl would, which not only uses the access token correctly but also renews it as appropriate.
https://github.com/alpine-docker/helm
docker run \
--interactive --tty --rm \
--volume=${PWD}/.kube:/root/.kube \
--volume=${PWD}/.helm:/root/.helm \
--volume=${PWD}/.config/helm:/root/.config/helm \
--volume=${PWD}/.cache/helm:/root/.cache/helm \
alpine/helm ...
NOTE: It appears several other local paths (.helm, .config and .cache) may be required too.
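For example, a hypothetical invocation (the release name, chart path and mounts here are placeholders, not tested) from a directory containing your .kube directory and the chart might look like:
docker run \
  --interactive --tty --rm \
  --volume=${PWD}/.kube:/root/.kube \
  --volume=${PWD}/chart:/chart \
  alpine/helm install my-release /chart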
Problem solved! A more experienced colleague has found the solution.
I should have used the address including the protocol specification ("https://" rather than the bare endpoint, which defaults to http on port 80). That, however, still kept returning the "Kubernetes cluster unreachable" error, just with "unknown" details instead.
I had been using an incorrect username. Instead of the one from the kubeconfig file, a new service account should be created and its name used, in the form system:serviceaccount:<namespace>:<service_account>. However, that alone did not change the error either.
The service account lacked a proper role; the following command did the job: kubectl create rolebinding <binding_name> --clusterrole=cluster-admin --serviceaccount=<namespace>:<service_account>. Of course, cluster-admin might not be a role we want to give away freely.
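Putting those pieces together, a rough sketch of the whole flow (names in angle brackets are placeholders; kubectl create token needs kubectl 1.24+, and since the token already identifies the service account, the --kube-as-user flag is probably unnecessary):
kubectl create serviceaccount <service_account> --namespace <namespace>
kubectl create rolebinding <binding_name> --clusterrole=cluster-admin \
  --serviceaccount=<namespace>:<service_account> --namespace <namespace>
TOKEN=$(kubectl create token <service_account> --namespace <namespace>)
docker run --rm -v $(pwd):/chart alpine/helm install <my_chart_name> /chart \
  --kube-apiserver https://<cluster_endpoint> \
  --kube-ca-file /chart/<cluster_certificate_file> \
  --kube-token "$TOKEN"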

How to update kubernetes cluster image arguments using kops

While creating a cluster, kops gives us a set of arguments to configure the images to be used for the master instances and the node instances, like the following, as mentioned in the kops documentation for the create cluster command: https://github.com/kubernetes/kops/blob/master/docs/cli/kops_create_cluster.md
--image string Set image for all instances.
--master-image string Set image for masters. Takes precedence over --image
--node-image string Set image for nodes. Takes precedence over --image
Suppose I forgot to add these parameters when I created the cluster, how can I edit the cluster and update these things?
When running kops edit cluster, the cluster configuration opens up as YAML, but where should I add these settings?
Is there a complete kops cluster YAML spec that I can refer to in order to modify my cluster?
You would need to edit the instance group after the cluster is created to add/edit the image name.
kops get ig
kops edit ig <ig-name>
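In the YAML that opens, the field to look for is spec.image on each instance group; that is where the --image / --node-image / --master-image values end up. A quick way to check the current value (the AMI in the comment is purely illustrative):
kops get ig <ig-name> -o yaml
# look for spec.image in the output, e.g.
#   image: ami-0abcdef1234567890   (or an owner/name alias for the AMI)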
After the update is done for all masters and nodes, perform
kops update cluster <cluster-name>
kops update cluster <cluster-name> --yes
and then perform a rolling update, or restart/stop one instance at a time from the cloud console:
kops rolling-update cluster <cluster-name>
kops rolling-update cluster <cluster-name> --yes
In another terminal, run kops validate cluster <cluster-name> to validate the cluster.
There are other flags we can use as well while performing the rolling update.
There are other parameters as well that you can add, update, or edit in the instance group; take a look at the documentation for more information.
Found a solution for this. My intention was to update a huge number of instance groups for a cluster in one shot; editing each instance group one by one is a lot of work.
run kops get <cluster name> -o yaml > cluster.yaml
edit it there, then run kops replace -f cluster.yaml

Always getting error: You must be logged in to the server (Unauthorized) EKS

I am currently playing around with AWS EKS
But I always get error: You must be logged in to the server (Unauthorized) when trying to run kubectl cluster-info command.
I have read a lot of AWS documentation and looked at lots of similar issues from people facing the same problem. Unfortunately, none of them resolved my problem.
So, this is what I did
install all required packages
create a user named crop-portal to access the AWS CLI
create a role named crop-cluster for EKS
create an EKS cluster named crop-cluster via the AWS console with the role crop-cluster (cluster and role have the same name)
run aws configure for the user crop-portal
run aws eks update-kubeconfig --name crop-cluster to update the kube config
run aws sts assume-role --role-arn crop-cluster-arn --role-session-name eks-access
copy the accessKey, secretKey and sessionToken into the env variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN accordingly
run aws sts get-caller-identity and now the result shows that the assumed role is being used
{
"UserId": "AROAXWZGX5HOBZPVGAUKC:botocore-session-1572604810",
"Account": "529972849116",
"Arn": "arn:aws:sts::529972849116:assumed-role/crop-cluster/botocore-session-1572604810"
}
run kubectl cluster-info and always get error: You must be logged in to the server (Unauthorized)
when I run aws-iam-authenticator token -i crop-cluster, it gave me the token and
when I run aws-iam-authenticator verify -t token -i crop-portal, it also passed
&{ARN:arn:aws:sts::529972849116:assumed-role/crop-cluster/1572605554603576170 CanonicalARN:arn:aws:iam::529972849116:role/crop-cluster AccountID:529972849116 UserID:AROAXWZGX5HOBZPVGAUKC SessionName:1572605554603576170}
I don't know what is wrong or what I'm missing. I've tried hard to get it to work, but I really don't know what to do next.
Some people suggest creating the cluster with the AWS CLI instead of the GUI. I tried both methods and neither works; creating with the AWS CLI or the GUI gives the same result for me.
Please, someone help :(
I will try to add some more information here, and I hope it will be helpful when setting up access to the EKS cluster.
When we create an EKS cluster by any method (CloudFormation/CLI/eksctl), the IAM role/user that created the cluster is automatically bound to the default Kubernetes RBAC group "system:masters" (https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles), and in this way the creator of the cluster gets admin access to the cluster.
To verify which role or user created the EKS cluster, we can search for the CreateCluster API call in CloudTrail; it will tell us the creator of the cluster.
Now, generally, if we use a role to create the cluster, as you did (for example "crop-cluster"), we have to make sure we are assuming this role before making any API calls using kubectl. The easiest way to do this is to set the role in the kubeconfig file, which we can do by running the command below from the terminal.
aws eks --region region-code update-kubeconfig --name cluster_name --role-arn crop-cluster-arn
If we run the above command, it will set the role (via the -r flag) in the kubeconfig file. In that way we are telling aws/aws-iam-authenticator that before making any API call it should first assume the role, so WE DON'T HAVE TO ASSUME THE ROLE MANUALLY via the CLI using "aws sts assume-role --role-arn crop-cluster-arn --role-session-name eks-access".
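A quick way to confirm the role actually landed in the kubeconfig (a sketch; the exact exec command written depends on the AWS CLI version):
kubectl config view --minify
# the user entry for this cluster should contain an exec section that calls
# "aws eks get-token" (or aws-iam-authenticator) with the crop-cluster role ARN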
Once the kubeconfig file is set properly, make sure the CLI is configured with the IAM user credentials for "crop-portal". We can confirm this by running the "aws sts get-caller-identity" command; the output should show the user ARN in the "Arn" section, like below.
$ aws sts get-caller-identity
{
"Account": "xxxxxxxxxxxxx",
"UserId": "xxxxxxxxxxxxxx",
"Arn": "arn:aws:iam::xxxxxxxxxxx:user/crop-portal"
}
Once that is done, you should be able to run kubectl commands directly without any issue.
Note: I have assumed that the user "crop-portal" has enough permission to assume the role "crop-cluster".
Note: For more details, we can also refer to the answer on this question: Getting error "An error occurred (AccessDenied) when calling the AssumeRole operation: Access denied" after setting up EKS cluster

kops validate cluster: returns "cluster not found" even though cluster is healthy and kubectl works fine

I created a cluster using kops. It worked fine and the cluster is healthy. I can see my nodes using kubectl and have created some deployments and services. I tried adding a node using "kops edit ig nodes" and got an error "cluster not found". Now I get that error for all kops commands:
kops validate cluster
Using cluster from kubectl context: <clustername>
cluster "<clustername>" not found
So my question is: where does kops look for clusters, and how do I configure it to see my cluster?
My KOPS_STATE_STORE environment variable got messed up. I corrected it to be the correct s3 bucket and everything is fine.
export KOPS_STATE_STORE=s3://correctbucketname
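To keep it from getting lost again in a new shell, it can help to persist the variable and confirm kops sees the cluster again (a sketch; the bucket name is the placeholder from above):
echo 'export KOPS_STATE_STORE=s3://correctbucketname' >> ~/.bashrc
kops get clusters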
kubectl and kops read the configuration file from the following location. When the cluster is created, the configuration is saved into the user's
$HOME/.kube/config
I have attached the link for further insight; for instance, if you have another config file you can export it via the KUBECONFIG environment variable: kube-config
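A minimal sketch of that export (the file name here is just an example):
export KUBECONFIG=$HOME/.kube/other-config
kubectl config get-contexts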

Kubernetes unable to pull images from gcr.io

I am trying to set up Kubernetes for the first time. I am following the Fedora manual installation guide: http://kubernetes.io/v1.0/docs/getting-started-guides/fedora/fedora_manual_config.html
I am trying to get the Kubernetes addons running, specifically the kube-ui. I created the service and replication controller like so:
kubectl create -f cluster/addons/kube-ui/kube-ui-rc.yaml --namespace=kube-system
kubectl create -f cluster/addons/kube-ui/kube-ui-svc.yaml --namespace=kube-system
When I run
kubectl get events --namespace=kube-system
I see errors such as this:
Failed to pull image "gcr.io/google_containers/pause:0.8.0": image pull failed for gcr.io/google_containers/pause:0.8.0, this may be because there are no credentials on this request. details: (Authentication is required.)
How am I supposed to tell Kubernetes to authenticate? This isn't covered in the documentation. So how do I fix this?
This happened due to a recent outage of GCE storage, as a result of which all of us hit this error while pulling images from gcr.io (which uses GCE storage on the backend).
Are you still seeing this error ?
As the message says, you need credentials. Are you using Google Container Engine? Then you need to run:
gcloud config set project <your-project>
gcloud config set compute/zone <your-zone, like us-central1-f>
gcloud beta container clusters get-credentials --cluster <your-cluster-name>
Then your cluster will have the credentials.
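As a rough follow-up check, re-running the commands from the question should show the image pulls succeeding (or at least a different error) once the credentials are in place:
kubectl get events --namespace=kube-system
kubectl get pods --namespace=kube-system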