Kubernetes autoscaling: ScalingActive is False

Trying to add autoscaling to my deployment, but I'm getting ScalingActive False. Most answers are about DNS, Heapster, or resource limits; I've checked all of those but still can't find a solution.
kubectl get hpa
NAME    REFERENCE          TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
fetch   Deployment/fetch   <unknown>/50%   1         4         1          13m
kubectl cluster-info
Kubernetes master is running at --
addon-http-application-routing-default-http-backend is running at --
addon-http-application-routing-nginx-ingress is running at --
Heapster is running at --
KubeDNS is running at --
kubernetes-dashboard is running at --
kubectl describe hpa
(output omitted)
yaml
(omitted)
P.S. I tried to deploy the example which Azure provides and got the same result, so the YAML settings aren't the problem.
kubectl describe pod
(output omitted)
kubectl top pod fetch-54f697989d-wczvn --namespace=default
(output omitted)
The autoscaling-by-memory yaml and its description are also omitted; kubectl get hpa gives the same result, <unknown>/60%.

I've experienced similar issues; my solution was to set the resources.requests.cpu section in the deployment config, since the HPA calculates the current utilization percentage against the requested resource values. Your event log messages also suggest the resource request isn't set, although your deployment YAML looks fine to me.
Let's double-check with the following steps.
First, verify that metrics are available for the pod with this command:
# kubectl top pod <your pod name> --namespace=<your pod running namespace>
You also need to check the pod's requested CPU resources with the command below, to make sure they match the config in your deployment YAML:
# kubectl describe pod <your pod name>
...
Requests:
cpu: 250m
...
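For reference, here is a minimal sketch of a deployment spec with the CPU request set (the image and labels are illustrative placeholders, not taken from your config; only the resources block matters for the HPA):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fetch
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fetch
  template:
    metadata:
      labels:
        app: fetch
    spec:
      containers:
      - name: fetch
        image: example/fetch:latest   # illustrative image
        resources:
          requests:
            cpu: 250m                 # the HPA computes CPU utilization against this value
          limits:
            cpu: 500m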
I hope this helps you resolve your issue. ;)

This GitHub issue helped me. I just deployed the metrics server to my cluster and recreated the HPA.
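Roughly, that fix looks like the following (a sketch, assuming the upstream metrics-server manifest and the fetch deployment/targets from the question; check the metrics-server releases page for the manifest that matches your cluster version):
# deploy the metrics server
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# delete and recreate the HPA so it picks up the metrics API
kubectl delete hpa fetch
kubectl autoscale deployment fetch --cpu-percent=50 --min=1 --max=4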

Related

Kubernetes metrics-server not working with Linkerd

I have a metrics-server and a horizontal pod autoscaler using this server, running on my cluster.
This works perfectly fine until I inject linkerd-proxies into the deployments of the namespace where my application is running. Running kubectl top pod in that namespace results in an error: Metrics not available for pod <name>. However, nothing appears in the metrics-server pod's logs.
The metrics-server clearly works fine in other namespaces, because top works in every namespace but the meshed one.
At first I thought it could be because the proxies' resource requests/limits weren't set, but after running the injection with them (kubectl get -n <namespace> deploy -o yaml | linkerd inject - --proxy-cpu-request "10m" --proxy-cpu-limit "1" --proxy-memory-request "64Mi" --proxy-memory-limit "256Mi" | kubectl apply -f -), the issue stays the same.
Is this a known problem, are there any possible solutions?
P.S.: I have a kube-prometheus-stack running in a different namespace, and it seems to be able to scrape the metrics from the meshed pods just fine.
The problem was apparently a bug in the cAdvisor stats provider when used with the CRI runtime: the linkerd-init containers keep producing metrics after they've terminated, which shouldn't happen. The metrics-server ignores stats from pods that contain containers reporting zero values (to avoid reporting invalid metrics, e.g. while a container is restarting or its metrics haven't been collected yet). You can follow up on the issue here. Possible solutions seem to be switching to another runtime or enabling the PodAndContainerStatsFromCRI feature gate, which makes the internal CRI stats provider responsible instead of the cAdvisor one.
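If you go the feature-gate route, a minimal sketch of what that could look like in the kubelet configuration (assuming you manage a KubeletConfiguration file directly; how you roll this out depends on your distribution):
# kubelet config: let the CRI runtime, not cAdvisor, supply pod/container stats
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  PodAndContainerStatsFromCRI: true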
I'm able to use kubectl top on pods that have linkerd injected:
:; kubectl top pod -n linkerd --containers
POD                                       NAME             CPU(cores)   MEMORY(bytes)
linkerd-destination-5cfbd7468-7l22t       destination      2m           41Mi
linkerd-destination-5cfbd7468-7l22t       linkerd-proxy    1m           13Mi
linkerd-destination-5cfbd7468-7l22t       policy           1m           81Mi
linkerd-destination-5cfbd7468-7l22t       sp-validator     1m           34Mi
linkerd-identity-fc9bb697-s6dxw           identity         1m           33Mi
linkerd-identity-fc9bb697-s6dxw           linkerd-proxy    1m           12Mi
linkerd-proxy-injector-668455b959-rlvkj   linkerd-proxy    1m           13Mi
linkerd-proxy-injector-668455b959-rlvkj   proxy-injector   1m           40Mi
So I don't think there's anything fundamentally incompatible between Linkerd and the Kubernetes metrics server.
I have noticed that I will sometimes see errors for the first minute or so after a pod starts, before the metrics server has gotten its initial state for the pod; but these error messages look a little different from what you describe:
:; kubectl rollout restart -n linkerd deployment linkerd-destination
deployment.apps/linkerd-destination restarted
:; while ! kubectl top pod -n linkerd --containers linkerd-destination-6d974dd4c7-vw7nw ; do sleep 10 ; done
Error from server (NotFound): podmetrics.metrics.k8s.io "linkerd/linkerd-destination-6d974dd4c7-vw7nw" not found
Error from server (NotFound): podmetrics.metrics.k8s.io "linkerd/linkerd-destination-6d974dd4c7-vw7nw" not found
Error from server (NotFound): podmetrics.metrics.k8s.io "linkerd/linkerd-destination-6d974dd4c7-vw7nw" not found
Error from server (NotFound): podmetrics.metrics.k8s.io "linkerd/linkerd-destination-6d974dd4c7-vw7nw" not found
POD                                    NAME            CPU(cores)   MEMORY(bytes)
linkerd-destination-6d974dd4c7-vw7nw   destination     1m           25Mi
linkerd-destination-6d974dd4c7-vw7nw   linkerd-proxy   1m           13Mi
linkerd-destination-6d974dd4c7-vw7nw   policy          1m           18Mi
linkerd-destination-6d974dd4c7-vw7nw   sp-validator    1m           19Mi
:; kubectl version --short
Client Version: v1.23.3
Server Version: v1.21.7+k3s1

How to debug why my pods are pending in GCE

I'm trying to get a pod running on GCE. The pod has an init container, and is created by applying a manifest with a deployment that creates 1 replica of the pod.
When I look at my workloads on the cloud console, I can see that under 'Active revisions' my deployment is in the state of 'Pods are pending', and under 'Managed pods', the status is 'PodsInitializing'.
The container logs are empty, and the audit logs contain a single entry for the creation of the deployment.
My pods seem to be stuck in the above state, and I'm not really sure why. How do I go about debugging that?
Edit:
kubectl get pods --namespace=my-namespace
Outputs:
NAME           READY   STATUS     RESTARTS   AGE
my-pod-v77jm   0/1     Init:0/1   0          55m
But when I run:
kubectl describe pod my-pod-v77jm
I get
Error from server (NotFound): pods "my-pod-v77jm" not found
If you have access to kube-api via kubectl:
Use describe to see details about the pod and its containers:
kubectl describe pod myPod --namespace mynamespace
To view container logs (including init containers):
kubectl logs myPod --namespace mynamespace -c initContainerName
You can get more information about pod statuses and how to debug init containers here
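For example, two quick checks against the pod from your edit (using the my-namespace / my-pod-v77jm names above; the output of describe will give you the init container's name):
# events usually explain why an init container is stuck (image pull, missing volume, failing command, ...)
kubectl get events --namespace my-namespace --field-selector involvedObject.name=my-pod-v77jm
# show the current state of the init container(s)
kubectl get pod my-pod-v77jm --namespace my-namespace -o jsonpath='{.status.initContainerStatuses[*].state}'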

How to restart a failed pod in kubernetes deployment

I have 3 nodes in a Kubernetes cluster. I created a daemonset and deployed it on all 3 nodes. The daemonset created 3 pods and they were running successfully. But for some reason, one of the pods failed.
How can I restart this pod without affecting the other pods in the daemonset, and without creating another daemonset deployment?
Thanks
kubectl delete pod <podname> will delete just that one pod, and the owning Deployment/StatefulSet/ReplicaSet/DaemonSet will reschedule a new one in its place.
There are other ways to achieve what you want:
Just use rollout command
kubectl rollout restart deployment mydeploy
You can set some environment variable which will force your deployment pods to restart:
kubectl set env deployment mydeploy DEPLOY_DATE="$(date)"
You can scale your deployment to zero, and then back to some positive value
kubectl scale deployment mydeploy --replicas=0
kubectl scale deployment mydeploy --replicas=1
Just for others reading this...
A better solution (IMHO) is to implement a liveness probe that will force the pod to restart the container if it fails the probe test (see the sketch below).
This is a great feature K8s offers out of the box: auto-healing.
Also look into the pod lifecycle docs.
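A rough sketch of such a probe on a container in the daemonset's pod template (the port, path, and timings are placeholders; tune them to your app):
    containers:
    - name: my-app              # placeholder container name
      image: my-app:latest      # placeholder image
      livenessProbe:
        httpGet:
          path: /healthz        # assumed health endpoint
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 15
        failureThreshold: 3     # restart the container after 3 consecutive failures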
kubectl -n <namespace> delete pods --field-selector=status.phase=Failed
I think the above command is quite useful when you want to restart 1 or more failed pods :D
And you don't need to know the name of the failed pod.

How to kill pods on Kubernetes local setup

I am starting to explore running Docker containers with Kubernetes. I did the following:
Docker run etcd
docker run master
docker run service proxy
kubectl run web --image=nginx
To clean up the state, I first stopped all the containers and cleared the downloaded images. However, I still see pods running.
$ kubectl get pods
NAME                   READY   STATUS    RESTARTS   AGE
web-3476088249-w66jr   1/1     Running   0          16m
How can I remove this?
To delete the pod:
kubectl delete pods web-3476088249-w66jr
If this pod was started via a ReplicaSet, Deployment, or anything else that creates replicas, then find that resource and delete it first.
kubectl get all
This will list the common resources (pods, services, deployments, replica sets, ...) created in the current namespace of your k8s cluster. To get the same information for a specific namespace, run kubectl get all --namespace=<your_namespace>.
To get info about the resource that is controlling this pod, you can do
kubectl describe pod web-3476088249-w66jr
There will be a "Controlled By" field (or a similar owner reference) that tells you which resource created the pod.
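Alternatively, you can read the owner reference directly; for example, using the pod name from the question:
kubectl get pod web-3476088249-w66jr -o jsonpath='{.metadata.ownerReferences[*].kind}{"/"}{.metadata.ownerReferences[*].name}{"\n"}'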
When you do kubectl run ..., that's a deployment you create, not a pod directly. You can check this with kubectl get deploy. If you want to delete the pod, you need to delete the deployment with kubectl delete deploy DEPLOYMENT.
I would recommend creating a namespace for testing when doing this kind of thing. Just run kubectl create ns test, then do all your tests in this namespace (by adding -n test). Once you have finished, run kubectl delete ns test, and you are done.
If you defined your object as a Pod, then
kubectl delete pod <--all | pod name>
will remove the generated Pod(s). But if you wrapped your Pod in a Deployment object, then running the command above will only trigger their re-creation.
In that case, you need to run
kubectl delete deployment <--all | deployment name>
Note that a Service related to the Deployment (for example one created with kubectl expose) is a separate object and is not removed automatically; delete it on its own if you created one.
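For example, assuming the web deployment created by kubectl run above (and a hypothetical web Service, if you exposed it):
kubectl delete deployment web
# only needed if you also created a Service for it, e.g. via kubectl expose
kubectl delete service web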

Error while creating pods in Kubernetes

I have installed Kubernetes on an Ubuntu server using the instructions here. I am trying to create pods using kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --hostport=8000 --port=8080 as listed in the example. However, when I do kubectl get pod, the status of the pod is Pending. I then ran kubectl describe pod for debugging and I see the message:
FailedScheduling pod (hello-minikube-3383150820-1r4f7) failed to fit in any node fit failure on node (minikubevm): PodFitsHostPorts.
I then tried to delete this pod with kubectl delete pod hello-minikube-3383150820-1r4f7, but when I run kubectl get pod again I see another pod with the prefix "hello-minikube-3383150820-" that I haven't created. Does anyone know how to fix this problem? Thank you in advance.
The PodFitsHostPorts predicate is failing because you have something else on your nodes using port 8000. You might be able to find what it is by running kubectl describe svc.
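One way to hunt for the conflicting host port (a sketch; adjust the port if needed):
# list namespace, pod name, and any declared hostPorts, then filter for 8000
kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.containers[*].ports[*].hostPort}{"\n"}{end}' | grep 8000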
kubectl run creates a deployment object (you can see it with kubectl describe deployments) which makes sure that you always keep the intended number of replicas of the pod running (in this case 1). When you delete the pod, the deployment controller automatically creates another for you. If you want to delete the deployment and the pods it keeps creating, you can run kubectl delete deployments hello-minikube.
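Putting the two answers together, a possible cleanup-and-retry sequence (assuming port 8001 is free on the node; any unused host port will do):
kubectl delete deployments hello-minikube
kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --hostport=8001 --port=8080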