How to change the --horizontal-pod-autoscaler-sync-period flag in kube-controller-manager to 5 seconds in GKE - Kubernetes

I am trying to set up horizontal pod autoscaling in GKE. I could not find proper documentation on reducing --horizontal-pod-autoscaler-sync-period to 5 seconds on kube-controller-manager.
The link below says it is possible to change the flags:
https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/
Are there any proper implementation steps for this?

You are not able to do this on GKE, EKS, or other managed clusters.
In order to change or add flags on kube-controller-manager, you need access to the /etc/kubernetes/manifests/ directory on the master node and the ability to modify the parameters in /etc/kubernetes/manifests/kube-controller-manager.yaml.
GKE, EKS, and other managed clusters are administered only by their providers, and you are not given access to the master nodes.
But you can create a cluster with kubeadm init and configure/change it as you like.
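For illustration, on a kubeadm cluster the flag is just one more argument on the container command in that static pod manifest; a trimmed sketch of the relevant part (the rest of the file stays as kubeadm generated it):
spec:
  containers:
  - name: kube-controller-manager
    command:
    - kube-controller-manager
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    # the flag in question; the kubelet restarts the static pod when this file changes
    - --horizontal-pod-autoscaler-sync-period=5s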

You can stop your Minikube cluster and start it with your extra configs:
minikube start --extra-config 'controller-manager.horizontal-pod-autoscaler-sync-period=5s'
For more details, you can go through https://minikube.sigs.k8s.io/docs/handbook/config/#modifying-kubernetes-defaults
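A quick way to check that the flag was applied (assuming the kubeadm-style component label that Minikube puts on its control-plane pods):
kubectl -n kube-system get pod -l component=kube-controller-manager -o yaml | grep horizontal-pod-autoscaler-sync-period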

Related

How to set node allocatable computation on kubernetes?

I'm reading the Reserve Compute Resources for System Daemons task in the Kubernetes docs, and it briefly explains how to reserve compute resources on a node using the kubelet command and the --kube-reserved, --system-reserved and --eviction-hard flags.
I'm learning on Minikube for macOS and, as far as I can tell, Minikube is configured to use the kubectl command along with the minikube command.
For local learning purposes on Minikube I don't need to set this (maybe it can't be done on Minikube), but how could this be done on a node in, say, a Kubernetes development environment?
This can be done by:
1. Passing a config file during cluster initialization, or initializing the kubelet with additional parameters via a config file (see the config sketch after this list).
For cluster initialization using a config file, it should contain at least:
kind: InitConfiguration
kind: ClusterConfiguration
plus additional configuration types like:
kind: KubeletConfiguration
To get a basic config file you can use kubeadm config print init-defaults.
2. For a live cluster, consider reconfiguring it using the steps "Generate the configuration file" and "Push the configuration file to the control plane" described in "Reconfigure a Node's Kubelet in a Live Cluster".
3. I didn't test it, but for Minikube please take a look here:
Note:
Minikube has a “configurator” feature that allows users to configure the Kubernetes components with arbitrary values. To use this feature, you can use the --extra-config flag on the minikube start command.
This flag is repeated, so you can pass it several times with several different values to set multiple options.
This flag takes a string of the form component.key=value, where component is one of the strings from the below list, key is a value on the configuration struct and value is the value to set.
Valid keys can be found by examining the documentation for the Kubernetes componentconfigs for each component. Here is the documentation for each supported configuration:
kubelet
apiserver
proxy
controller-manager
etcd
scheduler
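As a sketch of point 1, a kubeadm config file that reserves node resources through a KubeletConfiguration document might look like the following (API versions assume a recent kubeadm; the reservation values are placeholders, not recommendations):
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# resources set aside for Kubernetes system daemons (kubelet, container runtime)
kubeReserved:
  cpu: 500m
  memory: 512Mi
# resources set aside for OS system daemons (sshd, journald, ...)
systemReserved:
  cpu: 250m
  memory: 256Mi
# hard eviction threshold enforced by the kubelet
evictionHard:
  memory.available: 100Mi
Passed to kubeadm init --config <file>, the node should then advertise Allocatable = Capacity - kube-reserved - system-reserved - eviction-threshold.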
Hope this helped.
Additional community resources:
Memory usage in kubernetes cluster

GKE - Upgrading cluster master after cluster creation completes

Once we increase the load using a JMeter client, my deployed service is interrupted and the GCP/GKE console says:
Upgrading cluster master
The values shown below are going to change soon.
And my kubectl client throws this error during the upgrade:
Unable to connect to the server: dial tcp 35.236.238.66:443: connectex: No connection could be made because the target machine actively refused it.
How can I stop this upgrade or prevent my service from being interrupted? If the service is interrupted, there is no benefit to this autoscaling. I am new to GKE; please let me know if I am missing any configuration or parameter here.
I am using this command to create my cluster:
gcloud container clusters create ajeet-gke --zone us-east4-b --node-locations us-east4-b --machine-type n1-standard-8 --num-nodes 1 --enable-autoscaling --min-nodes 4 --max-nodes 16
It is not upgrading the k8s version: it works fine with a smaller load, but as I increase the load the cluster starts upgrading the master. So it looks like the master is resizing itself for more nodes. After the upgrade I can see more nodes on the GCP console. https://github.com/terraform-providers/terraform-provider-google/issues/3385
The command below says autoscaling is not enabled on the instance group:
> gcloud compute instance-groups managed list
NAME                                   AUTOSCALED  LOCATION    SCOPE
ajeet-gke-cluster-default-pool-4***0   no          us-east4-b  zone
Workaround
Sorry, I forgot to update this here. I found a workaround to fix it: after splitting the cluster creation command into two steps, the cluster autoscales without restarting the master node:
gcloud container clusters create ajeet-ggs --zone us-east4-b --node-locations us-east4-b --machine-type n1-standard-8 --num-nodes 1
gcloud container clusters update ajeet-ggs --enable-autoscaling --min-nodes 1 --max-nodes 10 --zone us-east4-b --node-pool default-pool
To prevent this, you should always create your cluster with the cluster version pinned to the latest available version.
See the documentation: https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture#master
This means that Google is managing the master: if your master is not up to date, it will be updated to the latest version, which allows Google to limit the number of versions it currently manages. https://cloud.google.com/kubernetes-engine/docs/concepts/regional-clusters
Now, why do you have a service interruption during the update? Because you are in zonal mode with only one master. To prevent this you should use a regional cluster with more than one master, allowing for a clean rolling update.
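For example, a regional variant of the creation command from the question could look like this (region chosen to match the zone used above; node counts are per zone with --region, so adjust them to your needs):
gcloud container clusters create ajeet-gke --region us-east4 --machine-type n1-standard-8 --num-nodes 1 --enable-autoscaling --min-nodes 1 --max-nodes 10
With --region instead of --zone, GKE replicates the control plane across the region's zones, so a master upgrade no longer makes the API server unreachable.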
The master won't resize the nodes unless the autoscaling feature is enabled.
As mentioned in the answer above, this is a feature at the node-pool level. From the description of the issue, it does seem like autoscaling is enabled on your node pool, and GKE's cluster autoscaler automatically resizes clusters based on the demands of the workloads you want to run (i.e. when there are pods that cannot be scheduled due to resource shortages such as CPU).
Additionally, Kubernetes cluster autoscaling does not use the Managed Instance Group autoscaler. It runs a cluster-autoscaler controller on the Kubernetes master that uses Kubernetes-specific signals to scale your nodes.
It is therefore highly recommended not to use Compute Engine's autoscaling feature (or to rely on the autoscaling status shown by the MIG) on instance groups created by Kubernetes Engine.
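To check autoscaling at the level that actually matters here, describe the node pool itself rather than the MIG (cluster and zone taken from the question); the output includes an autoscaling block with enabled, minNodeCount and maxNodeCount:
gcloud container node-pools describe default-pool --cluster ajeet-gke --zone us-east4-b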

How to add flag to Kubernetes controller manager

I'm new to K8s. In the process of configuring OpenStack Cinder as a K8s StorageClass, I have to add some flags to my kube-controller-manager, and I found that this is a big problem for me.
I'm using K8s 1.11 in VMs, and my K8s cluster has a kube-controller-manager pod, but I don't know how to add these flags to my kube-controller-manager.
After hours of searching, I found that there are a lot of tasks that require adding flags to kube-controller-manager, but no document that explains exactly how to do it. Please share the way to get through this.
Thank you.
You can check the /etc/kubernetes/manifests dir on your master nodes.
This dir contains the yaml files for the master components.
These are also known as static pods.
More Info : https://kubernetes.io/docs/tasks/administer-cluster/static-pod/
Update these files and you will see your changes take effect, as the kubelet restarts a static pod when its file changes.
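For the Cinder case specifically, the flags become extra arguments on the container command in /etc/kubernetes/manifests/kube-controller-manager.yaml; a trimmed sketch (the cloud.conf path is an assumption for your OpenStack credentials file, which must also be mounted into the pod):
spec:
  containers:
  - name: kube-controller-manager
    command:
    - kube-controller-manager
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    # flags for the in-tree OpenStack cloud provider (K8s 1.11 era)
    - --cloud-provider=openstack
    - --cloud-config=/etc/kubernetes/cloud.conf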
As a more long-term solution, you will need to incorporate the flags into the tooling that you use to generate your k8s cluster.

Is Kubernetes high availability using kubeadm possible without failover/load balancer?

I am trying to achieve k8s high availability using kubeadm. I am following the document k8s HA using kubeadm.
In the official document, it is recommended to have a failover mechanism/load balancer for the kube-apiserver. I tried keepalived, but when set up on AWS/GCP instances it ends up in a split-brain situation because multicast is not supported, so I am not allowed to use it. Is there any way around this?
Kubernetes is a container-orchestration system for automating deployment, scaling, and management of containerized applications.
Kubernetes plays best in highly available and load-balanced environments.
As #jaxxstorm mentioned, cloud providers give you the possibility of using native load balancers, and I also suggest this is a good starting point for a high-availability attempt. You may be interested in the GCP documentation.
Kubeadm in a homebrewed Kubernetes environment requires some additional work, and from my point of view it is good to set up Kubernetes The Hard Way first and then start to play with kubeadm.
OK, I assume the servers for the installation are ready. For a simple multi-master installation, you need 3 master nodes (10.0.0.50-52) and a load balancer (10.0.0.200).
Generate a token and save the output to a file:
kubeadm token generate
Create a kubeadm config file:
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
etcd:
  endpoints:
  - "http://10.0.0.50:2379"
  - "http://10.0.0.51:2379"
  - "http://10.0.0.52:2379"
apiServerExtraArgs:
  apiserver-count: "3"
apiServerCertSANs:
- "10.0.0.50"
- "10.0.0.51"
- "10.0.0.52"
- "10.0.0.200"
- "127.0.0.1"
token: "YOUR KUBEADM TOKEN"
tokenTTL: "0"
Copy the config file to all nodes.
Do initialization on the first master instance:
kubeadm init --config /path/to/config.yaml
The new master instance will have all the certificates and keys necessary for our master cluster.
Copy the /etc/kubernetes/pki directory structure to the same location on the other masters.
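For example (assuming root SSH access to the other two masters from this walkthrough):
scp -r /etc/kubernetes/pki root@10.0.0.51:/etc/kubernetes/
scp -r /etc/kubernetes/pki root@10.0.0.52:/etc/kubernetes/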
On other master servers:
kubeadm init --config /path/to/config.yaml
Now let’s start to set up load balancer:
Copy /etc/kubernetes/admin.conf into $HOME/.kube/config
Next, edit $HOME/.kube/config and replace
server: 10.0.0.50
with
server: 10.0.0.200
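For example, as a one-liner (assuming the old address appears only in that server field):
sed -i 's/10.0.0.50/10.0.0.200/' $HOME/.kube/config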
Check if nodes are working fine:
kubectl get nodes
On all workers execute:
kubeadm join --token YOUR_CLUSTER_TOKEN 10.0.0.200:6443 --discovery-token-ca-cert-hash sha256:89870e4215b92262c5093b3f4f6d57be8580c3442ed6c8b00b0b30822c41e5b3
And that’s it! If everything was set up cleanly, you should now have a highly available cluster.
I found the "HA Kubernetes cluster via Kubeadm" tutorial useful; thank you #Nate Baker for the inspiration.
No, you need a load balancer to have HA with kubeadm.
If you're using AWS/GCP, why not consider using the native loadbalancers for those environments, like ELB or a Google Cloud Load Balancer?
You definitely need nginx/haproxy + keepalived for failover and high availability.

Why can't I delete heapster and kubernetes-dashboard on gke namespace=kube-system

I want to have full control over what I do with my single-node cluster (savings... lol), but somehow I can't: even if I delete the deployment, it respawns.
As mentioned in another answer, you cannot delete them directly via the Kubernetes API; however, you can delete them indirectly via the Google Container Engine API.
To remove the dashboard, run gcloud container clusters update $CLUSTER_NAME --update-addons=KubernetesDashboard=DISABLED.
To disable heapster you need to disable monitoring using gcloud container clusters update $CLUSTER_NAME --monitoring-service=none (it may actually require disabling another add-on too, I can't recall at the moment).
See https://cloud.google.com/sdk/gcloud/reference/container/clusters/update for the commands referenced above.
Heapster is configured as a cluster addon. The addon manager will reconcile it to its preconfigured state if you change or delete it.
You are stuck with it.
Even if you delete the heapster pod, it restarts automatically. I managed it by scaling the deployment down to zero, as shown below:
kubectl scale --replicas=0 deployment/heapster-v1.6.0-beta.1 --namespace=kube-system
You can find the exact name of the heapster deployment in the output of the command below:
kubectl get deployments --namespace=kube-system
By the way, you can find more options to reduce resource usage here:
https://cloud.google.com/kubernetes-engine/docs/how-to/small-cluster-tuning