How to update Kubernetes cluster image arguments using kops

While creating a cluster, kops gives us a set of arguments to configure the images used for the master and node instances, as mentioned in the kops documentation for the create cluster command (https://github.com/kubernetes/kops/blob/master/docs/cli/kops_create_cluster.md):
--image string Set image for all instances.
--master-image string Set image for masters. Takes precedence over --image
--node-image string Set image for nodes. Takes precedence over --image
Suppose I forgot to add these parameters when I created the cluster; how can I edit the cluster and update them?
When I run kops edit cluster, the cluster configuration opens up as YAML, but where should I add these settings?
Is there a complete kops cluster YAML that I can refer to in order to modify my cluster?

You would need to edit the instance group after the cluster is created to add/edit the image name.
kops get ig
kops edit ig <ig-name>
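For reference, a minimal sketch of what the InstanceGroup spec looks like in the kops edit ig editor with the image field added; the machine type, sizes, and placeholder image value are assumptions, not values from your cluster:
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes
spec:
  image: <ami-owner>/<ami-name>   # the field to add or change, per instance group
  machineType: t2.medium
  maxSize: 2
  minSize: 2
  role: Node
Repeat the edit for the master instance group(s) if you also want to change the master image.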
After the update is done for all masters and nodes, perform
kops update cluster <cluster-name>
kops update cluster <cluster-name> --yes
and then perform a rolling update, or restart/stop one instance at a time from the cloud console:
kops rolling-update cluster <cluster-name>
kops rolling-update cluster <cluster-name> --yes
In another terminal, run kops validate cluster <cluster-name> to validate the cluster.
There are other flags we can use as well while performing the rolling update; a hedged example follows below.
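For instance, a sketch of a rolling update restricted to one instance group with a longer per-node interval; check kops rolling-update cluster --help for the exact flag set in your kops version:
kops rolling-update cluster <cluster-name> \
  --instance-group nodes \
  --node-interval 4m \
  --yes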
There are other parameters as well that you can add, update, or edit in the instance group; take a look at the documentation for more information.

Found a solution for this question. My intention was to update a huge number of instance groups in one shot for a cluster, and editing each instance group one by one is a lot of work.
Run kops get <cluster name> -o yaml > cluster.yaml
Edit it there, then run kops replace -f cluster.yaml
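The dumped cluster.yaml contains the Cluster object followed by every InstanceGroup as separate YAML documents, so the image can be set for all groups in one pass. A minimal sketch with placeholder names and values:
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  name: <cluster-name>
spec:
  # ... cluster-wide settings ...
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: master-us-east-1a
spec:
  image: <ami-owner>/<ami-name>   # set per instance group
  role: Master
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes
spec:
  image: <ami-owner>/<ami-name>
  role: Node
After kops replace -f cluster.yaml, the same kops update cluster and kops rolling-update cluster steps from above apply the change to the running instances.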

Related

Kops cannot edit nodes instance types

I'm getting the following error when trying to modify the instance types of the worker/master nodes of my k8s cluster.
error reading InstanceGroup "nodes": InstanceGroup.kops.k8s.io "nodes" not found
I run the following:
kops edit ig nodes --name ${NAME}
error reading InstanceGroup "nodes": InstanceGroup.kops.k8s.io "nodes" not found
Am I missing something here?
$ kops get instancegroups --name ${NAME}
NAME ROLE MACHINETYPE MIN MAX ZONES
master-us-east-2a Master t3.medium 1 1 us-east-2a
nodes-us-east-2a Node t3.medium 1 1 us-east-2a
nodes-us-east-2b Node t3.medium 1 1 us-east-2b
This works.
Maybe Kops has changed recently and no longer groups all nodes under the same instance group name as it did previously?
kOps did indeed change recently in that new clusters are provisioned with one instance group per availability zone (AZ) instead of having one node IG that spans all AZs.
So in your case, you want to edit both nodes-us-east-2a and nodes-us-east-2b.
As a bonus comment, I really recommend that you both
use kops get -o yaml to dump the specs and put them under version control, and
template your IG specs so that you ensure they stay consistent (a sketch of the workflow follows below).
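A minimal sketch of that workflow, assuming the instance group names shown in the question (nodes-us-east-2a and nodes-us-east-2b) and that ${NAME} and KOPS_STATE_STORE are already set:
# Edit each per-AZ instance group in turn
kops edit ig nodes-us-east-2a --name ${NAME}
kops edit ig nodes-us-east-2b --name ${NAME}
# Or dump everything to a file, change machineType in both IGs, and replace
kops get ${NAME} -o yaml > cluster.yaml
kops replace -f cluster.yaml
# Apply and roll the change
kops update cluster ${NAME} --yes
kops rolling-update cluster ${NAME} --yes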

eksctl apply cluster changes after create

I've created a cluster using
eksctl create cluster -f mycluster.yaml
Everything is running, but now I want to add the cluster autoscaler. There does not seem to be an option to specify this with the eksctl update cluster command.
When creating a new cluster I can add the --asg-access flag, is there an option to enable ASG support for an existing cluster via eksctl?
The --asg-access flag only adds the relevant IAM policy and labels to a node group.
You can do that yourself by creating a new node group with the autoScaler addon policy set to true
nodeGroup:
  iam:
    withAddonPolicies:
      autoScaler: true
and the labels as mentioned here.
Then you need to install the autoscaler itself.
Note:
You won't be able to edit your current nodeGroup, so you will have to add a new one first and then delete your current one. (https://eksctl.io/usage/managing-nodegroups/#nodegroup-immutability)
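A minimal sketch of how that might look in the config file passed to eksctl; the node group name ng-autoscaled, instance type, and capacity are hypothetical, and the exact schema should be checked against the eksctl documentation:
# mycluster.yaml (excerpt): add a new node group with the autoscaler addon policy
nodeGroups:
  - name: ng-autoscaled        # hypothetical new node group
    instanceType: m5.large
    desiredCapacity: 2
    iam:
      withAddonPolicies:
        autoScaler: true
Then create the new group and, once workloads have moved over, delete the old one (the old group's name is a placeholder):
eksctl create nodegroup --config-file=mycluster.yaml --include=ng-autoscaled
eksctl delete nodegroup --config-file=mycluster.yaml --include=<old-nodegroup> --approve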

Multiple kubernetes cluster

The requirement is that each new QA build should create a new Kubernetes cluster (a new environment altogether), which is then destroyed after QA is completed.
So it is not a federated setup.
I am using kops in AWS to create cluster.
Do I need to create another 'bootstrap' instance for creating the new cluster? My guess is that I can change the cluster name in the command and it will create a new cluster, like kops create cluster --zones=<zones> <some-other-name>.
So the question is: what does kubectl get all return - consolidated objects from all clusters?
When I do kubectl apply -f ., how does kubectl know which cluster to apply to?
How do I specify cluster name while installing things like helm?
You should set the kubectl context to your cluster, something like this; once this is set, all your kubectl commands will run in the context of that cluster:
kubectl config use-context my-cluster-name
Refer to this link for more details.
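A minimal sketch of the flow, assuming two hypothetical cluster contexts named qa-build-1.k8s.local and qa-build-2.k8s.local (kops writes a context for each cluster into your kubeconfig when it creates the cluster, or when you run kops export kubecfg):
# List the contexts known to kubectl
kubectl config get-contexts
# Point kubectl (and anything honoring the current context) at one cluster
kubectl config use-context qa-build-1.k8s.local
# kubectl apply -f . now targets that cluster
kubectl apply -f .
# Tools like helm can also be pointed at a specific context explicitly
helm install my-release ./chart --kube-context qa-build-2.k8s.local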

How to change the node count of a created cluster

I created a cluster with kops in AWS:
sudo kops create cluster --name=k8s.ehh.fun --state=s3://kops-state-ehh000 --zones=us-east-1a --node-count=3 --node-size=t2.micro --master-size=t2.micro --dns-zone=k8s.ehh.fun
And now I would like to change the node count without destroying the cluster. How can I do that?
I tried :
sudo kops update cluster --name=k8s.ehh.fun --state=s3://kops-state-ehh000 --node-count=3 --node-size=t2.micro
But I got : Error: unknown flag: --node-count
You can change the node count by editing the nodes instance group:
kops edit instancegroup nodes
This will open an editor in which you can edit your instance group's specification and increase the node count.
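A minimal sketch of what the InstanceGroup spec looks like in that editor; the values shown are assumptions based on the create command above (t2.micro nodes in us-east-1a):
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes
spec:
  machineType: t2.micro
  maxSize: 5        # raise minSize/maxSize to change the node count
  minSize: 5
  role: Node
  subnets:
  - us-east-1a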
kops update cluster <cluster-name> --yes
This will automatically update your auto-scaling group and start additional instances (or terminate them if you decreased the node count).
See the documentation for more information.

kops validate cluster: returns "cluster not found" even though cluster is healthy and kubectl works fine

I created a cluster using kops. It worked fine and the cluster is healthy. I can see my nodes using kubectl and have created some deployments and services. I tried adding a node using "kops edit ig nodes" and got an error "cluster not found". Now I get that error for all kops commands:
kops validate cluster
Using cluster from kubectl context: <clustername>
cluster "<clustername>" not found
So my question is: where does kops look for clusters, and how do I configure it to see my cluster?
My KOPS_STATE_STORE environment variable got messed up. I corrected it to point at the right S3 bucket and everything is fine:
export KOPS_STATE_STORE=s3://correctbucketname
kubectl and kops each read their configuration from a specific location.
When the cluster is created, the kubectl configuration is saved into the user's
$HOME/.kube/config
I have attached the kube-config link for further insight; for instance, if you have another config file you can export it (via the KUBECONFIG environment variable).
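A minimal sketch of how the two pieces of state fit together; the bucket name and the alternate kubeconfig path are placeholders:
# kops finds clusters via the state store bucket
export KOPS_STATE_STORE=s3://correctbucketname
kops validate cluster <clustername>
# kubectl reads $HOME/.kube/config by default; a different file can be
# selected with the KUBECONFIG environment variable
export KUBECONFIG=$HOME/.kube/other-config
kubectl get nodes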