Kubernetes: change hpa min-replica - kubernetes

I have Kubernetes cluster hosted in Google Cloud. I created a deployment and defined a hpa rule for it:
kubectl autoscale deployment my_deployment --min 6 --max 30 --cpu-percent 80
I want to run a command that edits the --min value, without removing and re-creating the hpa rule. Something like:
$ kubectl autoscale deployment my_deployment --min 1 --max 30
Error from server (AlreadyExists): horizontalpodautoscalers.autoscaling "my_deployment" already exists
Is it possible to edit the hpa (min, max, cpu-percent, ...) on the command line?

Is it possible to edit the hpa (min, max, cpu-percent, ...) on the command line?
They are editable just as any other resource is, either through kubectl edit hpa $the_hpa_name for an interactive edit, or kubectl patch hpa $the_hpa_name -p '{"spec":{"minReplicas": 1}}' for doing so in a "batch" setting.
If you don't know $the_hpa_name, you can list HPAs like any other resource: kubectl get hpa. Similarly, you can view the current settings and status with kubectl get -o yaml hpa $the_hpa_name (or omit $the_hpa_name to see them all, though that may be a lot of text, depending on your cluster setup).
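For example, a minimal sketch that changes several fields in a single patch (the field names are from the autoscaling/v1 HPA spec; the HPA name is the one from the question):

# Lower the minimum, keep the maximum, and change the CPU target in one call:
kubectl patch hpa my_deployment -p '{"spec":{"minReplicas":1,"maxReplicas":30,"targetCPUUtilizationPercentage":70}}'
# Confirm the result:
kubectl get hpa my_deployment -o yaml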

Related

kubectl delete pod vs set env

I can see that both kubectl delete pod and kubectl set env will restart the pod.
I would like to know the best practice to be followed; is there any advantage of using set env?
Other than these two options, is there any other, better option to restart a pod?
Although your question is not really clear, I believe that you are using a Deployment or another kind of replica set to spawn the pods.
So, neither kubectl delete pod nor kubectl set env is the correct way to restart all the replicas in Kubernetes. The proper way to restart all the pods under a replica set is kubectl rollout restart <type-of-replica-set>/<name>.
But kubectl delete pod and kubectl set env still appear to work if you judge only by the outcome. Here are the reasons.
kubectl delete pod removes a pod, dropping the number of running pods below the desired count for your replica set. The replica set controller reconciles and spawns a new pod to fulfill the desired number you defined.
kubectl set env updates an environment variable in the pod template, so running the command changes the template and bumps the replica set's revision; the controller reconciles and spawns a new set of pods for you.
Eventually, both commands lead to the pods being restarted, but neither is designed for restarting.
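A minimal sketch of the recommended approach, assuming a Deployment named my-app (a hypothetical name):

# Trigger a rolling restart of every pod managed by the Deployment:
kubectl rollout restart deployment/my-app
# Optionally block until the new replicas are ready:
kubectl rollout status deployment/my-app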

How to delete pod created with rolling restart?

I ran kubectl rollout restart deployment.
It created a new pod which is now stuck in Pending state because there are not enough resources to schedule it.
I can't increase the resources.
How do I delete the new pod?
Please check whether that pod has a Deployment controller (which would be re-creating the pod); use:
kubectl get deployments
Then try to delete the Deployment with:
kubectl delete deployment DEPLOYMENT_NAME
Also, I would suggest checking resource allocation on GKE and its usage on your nodes with the following command:
kubectl describe nodes | grep -A10 "Allocated resources"
If you need more resources, try activating the GKE cluster autoscaler (CA); if you already have it enabled, increase its maximum node count. You can also add a new node manually by resizing the node pool you are using, as shown below.
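A minimal sketch of the manual resize, assuming a cluster named my-cluster and a node pool named default-pool (both hypothetical names):

# Grow the node pool to 4 nodes:
gcloud container clusters resize my-cluster --node-pool default-pool --num-nodes 4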

kubectl apply vs kubernetes deployment - Terraform

I am trying to use the Terraform kubernetes_deployment resource. I would like to know whether it is the same as kubectl apply -f deployment.yaml, or whether it waits for the deployment to be up and running, because when I used kubernetes_deployment to create a basic pod that I knew would not work, I got this error:
Error: Waiting for rollout to finish: 0 of 1 updated replicas are available...
Is this just surfacing the error from Kubernetes, or does the entire Terraform run fail because of this?
According to the documentation:
A Deployment ensures that a specified number of pod “replicas” are running at any one time. In other words, a Deployment makes sure that a pod or homogeneous set of pods are always up and available. If there are too many pods, it will kill some. If there are too few, the Deployment will start more.
So it will wait to ensure the expected number of replicas are up.
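For comparison, kubectl apply alone does not wait; reproducing Terraform's waiting behavior on the command line takes a second command (a minimal sketch, assuming a Deployment named example-app):

# apply returns as soon as the object is accepted by the API server:
kubectl apply -f deployment.yaml
# rollout status blocks until the rollout succeeds or the timeout expires:
kubectl rollout status deployment/example-app --timeout=5m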

how to update max replicas in running pod

I'm looking to manually update my maximum number of replicas for autoscaling with the kubectl autoscale command.
However, each time I run the command it creates a new hpa that fails to launch the pod, and I have no idea why. :(
Do you have an idea how I can manually update my HPA with kubectl?
https://gist.github.com/zyriuse75/e75a75dc447eeef9e8530f974b19c28a
I think you are mixing two topics here. One is manually scaling a pod, which you can do through a deployment by running kubectl scale deploy {mydeploy} --replicas={#repl}. On the other hand you have the HPA (Horizontal Pod Autoscaler); for the HPA to work, you need a metrics provider configured in the cluster,
e.g:
metrics server
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/metrics-server
heapster (deprecated) https://github.com/kubernetes-retired/heapster
Then you can create an HPA to handle your autoscaling; you can get more info at https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
Once created, you can patch your HPA, or delete it and create it again:
kubectl delete hpa hpa-pod -n ns-svc-cas
kubectl autoscale deployment {mydeploy} --min={#number} --max={#number} -n ns-svc-cas
That is the easiest way.
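A minimal sketch of the in-place alternative, reusing the HPA name and namespace from the commands above:

# Raise maxReplicas without deleting and re-creating the HPA:
kubectl patch hpa hpa-pod -n ns-svc-cas -p '{"spec":{"maxReplicas":5}}'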

Kubernetes Autoscaling

I have Kubernetes v1.12.1 installed on my cluster.
I downloaded the metrics-server from the following repo:
https://github.com/kubernetes-incubator/metrics-server
and then run the following command:
kubectl create -f metrics-server/deploy/1.8+/
and then I tried autoscaling a deployment using:
kubectl autoscale deployment example-app-tier --min 1 --max 3 --cpu-percent 70 --namespace example
but the target here shows unknown/70%:
kubectl get hpa --namespace example
NAMESPACE NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
example example-app-tier Deployment/example-app-tier <unknown>/70% 1 3 1 3h35m
and when I try running the kubectl top nodes or pods I get an error saying:
error: Metrics not available for pod default/pi-ss8j6, age: 282h48m5.334137739s
So I'm looking for any tutorial that walks me step by step through enabling autoscaling using metrics-server or Prometheus (and not Heapster, as it is deprecated and will no longer be supported).
Thank you!
You need to register your metrics-server with the API server and make sure the two can communicate.
https://github.com/kubernetes/kubernetes/issues/59438
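A quick way to verify the registration (a minimal sketch; v1beta1.metrics.k8s.io is the APIService name that metrics-server registers):

# The APIService should exist and report Available=True:
kubectl get apiservice v1beta1.metrics.k8s.io
# Querying the aggregated API directly should return node metrics:
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"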
If that is already done, check the help for the kubectl top command in your version of k8s; the command may default to using heapster, and you may need to tell it to use the new service instead.
https://github.com/kubernetes/kubernetes/pull/56206
From the help output below, it looks like the command has not yet been ported to the new metrics server and still looks for heapster by default.
C02W84XMHTD5:tmp iahmad$ kubectl top node --help
Display Resource (CPU/Memory/Storage) usage of nodes.
The top-node command allows you to see the resource consumption of nodes.
Aliases:
node, nodes, no
Examples:
# Show metrics for all nodes
kubectl top node
# Show metrics for a given node
kubectl top node NODE_NAME
Options:
--heapster-namespace='kube-system': Namespace Heapster service is located in
--heapster-port='': Port name in service to use
--heapster-scheme='http': Scheme (http or https) to connect to Heapster as
--heapster-service='heapster': Name of Heapster service
-l, --selector='': Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l
key1=value1,key2=value2)
Usage:
kubectl top node [NAME | -l label] [options]
Use "kubectl options" for a list of global command-line options (applies to all commands).
Note: I am using 1.10; the options may be different in your version.
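Independently of the kubectl top defaults, you can see why the HPA shows <unknown> by describing it; its events list the metrics errors (a minimal sketch using the names from the question):

# The Events section reports failures such as "unable to get metrics for resource cpu":
kubectl describe hpa example-app-tier -n example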