eksctl apply cluster changes after create - kubernetes

I've created a cluster using
eksctl create cluster -f mycluster.yaml
Everything is running, but now I want to add the cluster autoscaler. There does not seem to be an option to specify this for the eksctl update cluster command.
When creating a new cluster I can add the --asg-access flag; is there a way to enable ASG support for an existing cluster via eksctl?

The --asg-access flag only adds the relevant IAM policy and labels to a node group.
You can do that by creating a new node group with the autoscaler addon policy set to true
nodeGroups:
  - iam:
      withAddonPolicies:
        autoScaler: true
and the labels, as mentioned here.
Then you need to install the cluster autoscaler itself.
Note:
You won't be able to edit your current nodeGroup, so you will have to add a new one first and then delete your current one. (https://eksctl.io/usage/managing-nodegroups/#nodegroup-immutability)
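Putting that together, a rough sketch (the cluster, nodegroup, and region names below are placeholders, not taken from the question):
# create the new nodegroup defined in the config file
eksctl create nodegroup --config-file=mycluster.yaml
# delete the old nodegroup (eksctl drains it before deleting)
eksctl delete nodegroup --cluster=my-cluster --name=old-ng
# install the cluster autoscaler itself, e.g. via its Helm chart
helm repo add autoscaler https://kubernetes.github.io/autoscaler
helm install cluster-autoscaler autoscaler/cluster-autoscaler \
  --namespace kube-system \
  --set autoDiscovery.clusterName=my-cluster \
  --set awsRegion=eu-west-1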

Related

terraform: how to upgrade Azure AKS default nodepool VM size without replacement of the cluster

I'm trying to upgrade the VM size of my AKS cluster using this approach with Terraform. Basically I create a new nodepool with the required amount of nodes, then I cordon the old node to disallow scheduling of new pods. Next, I drain the old node to reschedule all the pods in the newly created node. Then, I proceed to upgrade the VM size.
The problem I am facing is that the azurerm_kubernetes_cluster resource allows for the creation of the default_node_pool, and another resource, azurerm_kubernetes_cluster_node_pool, allows me to create new nodepools with extra nodes.
Everything works up to creating the new nodepool and cordoning and draining the old one. However, when I change default_node_pool.vm_size and run terraform apply, it tells me that the whole resource has to be recreated, including the new nodepool I just created, because it is linked to the cluster id, which will be replaced.
How should I manage this upgrade with Terraform if upgrading the default node pool always forces replacement, even when a new nodepool is in place?
terraform version
terraform v1.1.7
on linux_amd64
+ provider registry.terraform.io/hashicorp/azurerm v2.82.0
+ provider registry.terraform.io/hashicorp/local v2.2.2
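For reference, the layout described above corresponds to roughly the following (all names, sizes, and counts are illustrative placeholders):
resource "azurerm_kubernetes_cluster" "aks" {
  name                = "my-aks"
  location            = "westeurope"
  resource_group_name = "my-rg"
  dns_prefix          = "my-aks"

  # changing vm_size here is what forces replacement of the whole cluster
  default_node_pool {
    name       = "default"
    node_count = 3
    vm_size    = "Standard_D2s_v3"
  }

  identity {
    type = "SystemAssigned"
  }
}

# the extra pool references the cluster id, so it gets replaced along with the cluster
resource "azurerm_kubernetes_cluster_node_pool" "extra" {
  name                  = "extra"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.aks.id
  vm_size               = "Standard_D4s_v3"
  node_count            = 3
}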

How to update kubernetes cluster image arguments using kops

While creating a cluster, kops gives us a set of arguments to configure the images to be used for the master and node instances, as mentioned in the kops documentation for the create cluster command (https://github.com/kubernetes/kops/blob/master/docs/cli/kops_create_cluster.md):
--image string Set image for all instances.
--master-image string Set image for masters. Takes precedence over --image
--node-image string Set image for nodes. Takes precedence over --image
Suppose I forgot to add these parameters when I created the cluster; how can I edit the cluster and update them?
When I run kops edit cluster, the cluster configuration opens up as YAML, but where should I add these settings?
Is there a complete kops cluster YAML that I can refer to when modifying my cluster?
You would need to edit the instance group after the cluster is created to add/edit the image name.
kops get ig
kops edit ig <ig-name>
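kops edit ig opens the InstanceGroup spec in your editor; the image goes under spec.image, roughly like this (the names, sizes, and AMI ID below are placeholders):
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: mycluster.example.com
  name: nodes
spec:
  role: Node
  # set or change the image here
  image: ami-0123456789abcdef0
  machineType: t3.medium
  minSize: 2
  maxSize: 4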
After the update is done for all masters and nodes, perform
kops update cluster <cluster-name>
kops update cluster <cluster-name> --yes
and then perform a rolling update, or restart/stop one instance at a time from the cloud console
kops rolling-update cluster <cluster-name>
kops rolling-update cluster <cluster-name> --yes
In another terminal, run kops validate cluster <cluster-name> to validate the cluster.
There are other flags you can use while performing the rolling update as well.
There are other parameters you can add, update, or edit in the instance group too; take a look at the documentation for more information.
Found a solution for this. My intention was to update a huge number of instance groups in one shot for a cluster, and editing each instance group one by one is a lot of work.
Run kops get <cluster name> -o yaml > cluster.yaml
Edit it there, then run kops replace -f cluster.yaml
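Putting the two answers together, the whole flow might look roughly like this (the cluster name is a placeholder):
# dump the cluster configuration to a file (how the answer above edits many instance groups in one shot)
kops get <cluster-name> -o yaml > cluster.yaml
# edit the image fields in cluster.yaml, then push the changes back to the state store
kops replace -f cluster.yaml
# apply the changes and roll the instances
kops update cluster <cluster-name> --yes
kops rolling-update cluster <cluster-name> --yes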

Pull image from private docker registry in AWS EKS Autoscaler worker nodes

I'm using AWS EKS with an Auto Scaler for the worker nodes, and I have a private Artifactory Docker registry.
In order to download Docker images from the private registry, I've read many documents, including the Kubernetes docs on how to pull an image from a private Docker registry.
There are three steps in the solution:
Create kubectl secret which contains docker registry credentials
Add "insecure-registries":["privateRegistryAddress:port"] in /etc/docker/daemon.json
Restart docker service
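Concretely, those steps look roughly like this (the registry address, credentials, and secret name are placeholders):
# step 1: store the registry credentials as an image pull secret
kubectl create secret docker-registry regcred \
  --docker-server=privateRegistryAddress:port \
  --docker-username=myuser \
  --docker-password=mypassword
# step 2: on each worker node, /etc/docker/daemon.json contains
# {"insecure-registries": ["privateRegistryAddress:port"]}
# step 3: restart the docker daemon on the node
sudo systemctl restart docker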
I've manually SSHed into the worker nodes and performed the 2nd and 3rd steps, which works temporarily. But when the EKS Auto Scaler finds that a worker node is not in use, it kills it and creates a new one as needed, and on the new worker node "insecure-registries":["privateRegistryAddress:port"] is not present in /etc/docker/daemon.json, due to which pod scheduling fails.
There are two solutions I can think of here -
Build an AWS EC2 AMI which contains "insecure-registries":["privateRegistryAddress:port"] in /etc/docker/daemon.json by default and use that image in the Auto Scaling configuration
Create a pod which has node-level permission to edit the mentioned file and restart the Docker service; but I doubt this works, since if the Docker service is restarted, that pod itself would go down.
Please advise. Thanks.
Solved this with the first approach I mentioned in the question.
First, of course, created the kubectl secret to log in to the private registry
SSHed into the Kubernetes worker nodes and added "insecure-registries":["privateRegistryAddress:port"] to /etc/docker/daemon.json
Created an AMI image out of that node
Updated the EC2 launch template with the new AMI and set the new template version as default
Updated the EC2 Auto Scaling group with the new launch template version
Killed the previous worker nodes and let the Auto Scaling group create new ones
and voila!! :)
Now whenever EKS, via the Auto Scaling group, increases or decreases EC2 instances, they will be able to download Docker images from the private Docker registry.
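A rough sketch of the AMI/launch-template part with the AWS CLI (all IDs and names are placeholders):
# create an AMI from the customized worker node
aws ec2 create-image --instance-id i-0123456789abcdef0 --name eks-worker-insecure-registry
# add a launch template version that uses the new AMI and make it the default
aws ec2 create-launch-template-version --launch-template-id lt-0123456789abcdef0 \
  --source-version 1 --launch-template-data '{"ImageId":"ami-0123456789abcdef0"}'
aws ec2 modify-launch-template --launch-template-id lt-0123456789abcdef0 --default-version 2
# then terminate the old worker nodes and let the Auto Scaling group replace them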

How to specify `--asg-access` in config file with eksctl?

I am trying to create a Kubernetes cluster on AWS EKS using eksctl, with autoscaling enabled through the proper IAM permissions. As per the documentation:
You can create a cluster (or nodegroup in an existing cluster) with
IAM role that will allow use of cluster autoscaler:
eksctl create cluster --asg-access
I am trying to run
eksctl create cluster --asg-access -f myconfig.yml
But getting this error:
[✖] cannot use --asg-access when --config-file/-f is set
Is there a way to use --asg-access within the config file? I tried to look for a related setting in the config file schema doc, to no avail.
You can enable autoscaler support within the config file without passing the --asg-access flag, i.e.:
iam:
  withAddonPolicies:
    autoScaler: true
Example
Hope this will help
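For context, in a full ClusterConfig that block sits under a nodegroup entry, roughly like this (the cluster name, region, instance type, and sizes are placeholders):
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
  region: eu-west-1
nodeGroups:
  - name: ng-1
    instanceType: m5.large
    desiredCapacity: 2
    # equivalent of --asg-access for this nodegroup
    iam:
      withAddonPolicies:
        autoScaler: true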

Multiple kubernetes cluster

The requirement is that each new build for QA should create a new Kubernetes cluster (a new environment altogether), which is then destroyed after QA is completed.
So it is not a federated setup.
I am using kops in AWS to create cluster.
Do I need to create another 'bootstrap' instance to create the new cluster? My guess is that I can change the name of the cluster in the command and it will create a new cluster, like kops create cluster --zones=<zones> <some-other-name>.
So the question is: what does kubectl get all return - consolidated objects?
When I do kubectl apply -f ., how does kubectl know which cluster to apply to?
How do I specify the cluster name while installing things like Helm?
You should set the context for your cluster, something like this; once it is set, all your kubectl commands will run against that cluster.
kubectl config use-context my-cluster-name
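For example, with clusters created by kops (the cluster names are placeholders):
# kops writes a context for each cluster it creates into your kubeconfig
kops export kubecfg cluster-a.example.com
kops export kubecfg cluster-b.example.com
# list the contexts and pick the one kubectl (and helm) should talk to
kubectl config get-contexts
kubectl config use-context cluster-a.example.com
# you can also target a cluster per command without switching the default
kubectl apply -f . --context cluster-b.example.com
helm install myrelease ./mychart --kube-context cluster-b.example.com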
Refer to this link for more details.