AWS Kubernetes - How to add new nodes to Kops cluster

I used KOPS to create a Kubernetes cluster. I want to add additional nodes without disrupting existing cluster. Any idea how I can do this?
kops create cluster --node-count=3 --node-size=t2.large --zones=us-west-2a --name=${KOPS_CLUSTER_NAME}

It depends a little on where you want to add them. You can run kops edit ig <instance group> and change the min/max counts. Alternatively, you can increase only the max value and enable the cluster autoscaler addon to size the cluster dynamically based on demand. You may also want to add more subnets in additional availability zones to make your cluster more highly available.
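For the autoscaler route, recent kops versions (1.19+) expose a clusterAutoscaler block in the cluster spec; a minimal sketch, assuming such a version:
kops edit cluster ${KOPS_CLUSTER_NAME}
...
spec:
  clusterAutoscaler:
    enabled: true
...
kops then runs the autoscaler addon for you, and each instance group's minSize/maxSize become its scaling bounds.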

The following does the job for me:
## updating instance size
kops get instancegroups
## edit machineType in the instance group spec and save the file
kops edit ig <instance-group-name>
kops get instancegroups
kops update cluster --name=${KOPS_CLUSTER_NAME}
kops update cluster --name=${KOPS_CLUSTER_NAME} --yes
kops rolling-update cluster --name=${KOPS_CLUSTER_NAME}
kops rolling-update cluster --name=${KOPS_CLUSTER_NAME} --yes
kops get instancegroups
## updating the number of instances
kops edit ig <instance-group-name>
## edit the minSize and maxSize, then save the file
kops get instancegroups
kops update cluster --name=${KOPS_CLUSTER_NAME}
kops update cluster --name=${KOPS_CLUSTER_NAME} --yes
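For reference, the instance group manifest you are editing looks roughly like this (a sketch; the group name, sizes, and subnet are illustrative):
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: ${KOPS_CLUSTER_NAME}
  name: nodes
spec:
  machineType: t2.large
  maxSize: 5
  minSize: 3
  role: Node
  subnets:
  - us-west-2a
Note that a pure count change only resizes the underlying Auto Scaling group, so no rolling update is needed to add nodes; changing machineType does require the rolling-update step shown above.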


Can't delete Kubernetes cluster with kops despite deleting everything at AWS

For the last two hours, I have been unable to delete a cluster with kops even though I have deleted the only EC2 instance I had as well as my S3 bucket.
When I type:
kubectl config get-contexts
I get:
CURRENT   NAME                   CLUSTER                AUTHINFO               NAMESPACE
*         kubecourse.k8s.local   kubecourse.k8s.local   kubecourse.k8s.local
Next I type:
kops delete cluster --yes
But get:
Error: --name is required
Usage:
kops delete cluster [CLUSTER] [flags]
Then I type:
kops delete cluster --name=kubecourse.k8s.local --yes
But get:
Error: State Store: Required value: Please set the --state flag or export KOPS_STATE_STORE.
For example, a valid value follows the format s3://<bucket>.
So I type:
kops delete cluster --state=s3://k8-course-london
But this time get:
Error: --name is required
Usage:
kops delete cluster [CLUSTER] [flags]
And I'm stuck in a cycle. Your help would be most appreciated.
Looks like the syntax used is wrong: kops needs both the cluster name and the state store, and each attempt above supplied only one of them.
Right syntax:
kops delete cluster --name=k8s.cluster.site --state=s3://<bucket> --yes
https://kops.sigs.k8s.io/cli/kops_delete_cluster/
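Applied to the values from the question, and assuming the state bucket still exists, the delete would look like this (exporting the state store once avoids repeating the flag):
export KOPS_STATE_STORE=s3://k8-course-london
kops delete cluster --name=kubecourse.k8s.local --yes
Keep in mind that kops stores all of its state in that bucket, so if the bucket itself was deleted there is nothing left for kops to read the cluster definition from.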

KOPS reload ssh access key to cluster

I want to rotate my cluster's SSH access key using the commands from this page:
https://github.com/kubernetes/kops/blob/master/docs/security.md#ssh-access
namely:
kops delete secret --name <clustername> sshpublickey admin
kops create secret --name <clustername> sshpublickey admin -i ~/.ssh/newkey.pub
kops update cluster --yes
When I run the last command, kops update cluster --yes, I get this error:
completed cluster failed validation: spec.spec.kubeProxy.enabled: Forbidden: kube-router requires kubeProxy to be disabled
Does anybody have an idea how I can change the secret key without disabling kubeProxy?
This problem comes from having set
spec:
  networking:
    kuberouter: {}
but not
spec:
  kubeProxy:
    enabled: false
in the cluster spec.
Export the config using kops get -o yaml > myspec.yaml, then edit it according to the error above. You can then apply the spec using kops replace -f myspec.yaml.
It is considered a best practice to check the above yaml into version control to track any changes done to the cluster configuration.
Once the cluster spec has been amended, the new ssh key should work as well.
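Putting the steps together (a sketch, assuming the cluster name is in $KOPS_CLUSTER_NAME):
kops get cluster $KOPS_CLUSTER_NAME -o yaml > myspec.yaml
## add the kubeProxy block shown above under spec, then:
kops replace -f myspec.yaml
kops update cluster $KOPS_CLUSTER_NAME --yes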
What version of Kubernetes are you running? If you are running the latest one, 1.18.x, the user is not admin but ubuntu.
One other thing you could do is first edit the cluster and set the kubeProxy enabled field (to false, per the error above), run kops update cluster and a rolling update, and then delete and recreate the secret.

Deploy application to EKS Cluster

After creating an EKS cluster with eksctl or the AWS CLI with a specified node group: when I then apply my Deployment YAML file, are my Pods distributed among the nodes of that node group automatically?
Yes, your Pods will be scheduled onto any node in the cluster that has sufficient resources to support them.
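For illustration, a minimal Deployment manifest (all names here are illustrative): the scheduler places the three replicas on whichever nodes in the node group have capacity, so the spread happens automatically.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.21
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
If you need a stricter spread than the default behavior, you can add topologySpreadConstraints or pod anti-affinity to the pod template.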

Enabling Kubernetes PodPresets with kops

I've got a Kubernetes cluster which was set up with kops at version 1.5, and then upgraded to 1.6.2. I'm trying to use PodPresets. The docs state the following requirements:
You have enabled the api type settings.k8s.io/v1alpha1/podpreset
You have enabled the admission controller PodPreset
You have defined your pod presets
I'm seeing that for 1.6.x the first is taken care of (how can I verify this?). How can I enable the second? I can see that there are three kube-apiserver-* pods running in the cluster (I imagine one per AZ). I guess I could edit their YAML config from the Kubernetes dashboard and add PodPreset to the admission-control string, but is there a better way to achieve this?
You can list the API groups which are currently enabled in your cluster either with the api-versions kubectl command, or by sending a GET request to the /apis endpoint of your kube-apiserver:
$ curl localhost:8080/apis
{
  "paths": [
    "/api",
    "/api/v1",
    "...",
    "/apis/settings.k8s.io",
    "/apis/settings.k8s.io/v1alpha1",
    "..."
  ]
}
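Equivalently, with kubectl (output truncated to the relevant line):
$ kubectl api-versions | grep settings.k8s.io
settings.k8s.io/v1alpha1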
Note: The settings.k8s.io/v1alpha1 API is enabled by default on Kubernetes v1.6 and v1.7 but will be disabled by default in v1.8.
You can use a kops ClusterSpec to customize the configuration of your Kubernetes components during the cluster provisioning, including the API servers.
This is described on the documentation page Using A Manifest to Manage kops Clusters, and the full spec for the KubeAPIServerConfig type is available in the kops GoDoc.
Example:
apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  name: k8s.example.com
spec:
  kubeAPIServer:
    admissionControl:
    - NamespaceLifecycle
    - LimitRanger
    - PodPreset
To update an existing cluster, perform the following steps:
Get the full cluster configuration with
kops get cluster <name> --full
Copy the kubeAPIServer spec block from it.
Do not push back the full configuration. Instead, edit the cluster configuration with
kops edit cluster <name>
Paste the kubeAPIServer spec block, add the missing bits, and save.
Update the cluster resources with
kops update cluster <name> --yes
Perform a rolling update to apply the changes:
kops rolling-update cluster <name> --yes
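With the admission controller active, you can then define your presets. A minimal sketch following the settings.k8s.io/v1alpha1 schema (the preset name, label selector, and environment variable are illustrative):
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: example-preset
spec:
  selector:
    matchLabels:
      app: my-app
  env:
  - name: EXAMPLE_ENV
    value: "enabled"
Any pod whose labels match the selector will have the environment variable injected at admission time.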

How to create Kubernetes Cluster using Kops with insecure registry?

I have to create a cluster with support of insecure docker registry. I want to use Kops for this. Is there any way to create cluster with insecure registry using Kops?
You can set an insecure registry at cluster config edit time, after the kops create cluster ... command (navigate to the docker part of the cluster spec):
$ kops edit cluster $NAME
...
docker:
  insecureRegistry: registry.example.com
  logDriver: json-file
...
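For the change to reach existing nodes, push it out and roll the nodes (reusing $NAME from above):
kops update cluster $NAME --yes
kops rolling-update cluster $NAME --yes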