kubectl get hpa targets: unknown - kubernetes

I have installed kubeadm. Heapster shows me metrics, but the HPA does not:
kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
httpd Deployment/httpd <unknown> / 2% 2 5 2 19m
kubeadm version
kubeadm version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.6", GitCommit:"7fa1c1756d8bc963f1a389f4a6937dc71f08ada2", GitTreeState:"clean", BuildDate:"2017-06-16T18:21:54Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
docker version
Client:
Version: 1.11.2
API version: 1.23
Go version: go1.5.4
Git commit: b9f10c9
Built: Wed Jun 1 22:00:43 2016
OS/Arch: linux/amd64

You may have to enable a metrics-server; Heapster is now deprecated. Also make sure you have a Kubernetes version greater than 1.7. You can check this by typing kubectl get nodes.
You can enable the metrics server by looking at the minikube addons.
minikube addons list gives you the list of addons.
minikube addons enable metrics-server enables metrics-server.
Wait a few minutes; then, if you type kubectl get hpa, a percentage should appear in the TARGETS column instead of <unknown>.

I found the solution:
kubectl describe hpa
failed to get cpu utilization: missing request for cpu on container httpd in pod default/httpd-796666570-2h1c6
Change the YAML of the deployment and add CPU requests to the container:
resources:
  requests:
    cpu: 400m
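For context, a minimal sketch of where that request lands in the Deployment's pod template (the image is a placeholder; only the resources block is the actual fix):
spec:
  template:
    spec:
      containers:
      - name: httpd
        image: httpd          # placeholder image
        resources:
          requests:
            cpu: 400m         # the HPA computes CPU utilization as a percentage of this request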
Then kubectl describe hpa
failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from heapster
Wait a few minutes and everything works fine.

In Kubernetes the HPA can show <unknown> for targets. In this situation you should check several places.
K8s 1.9 uses custom metrics by default, so in order for your cluster to work with Heapster you should check the kube-controller-manager.
Add these parameters (on a kubeadm cluster they go in the controller-manager's static pod manifest, as sketched below):
--horizontal-pod-autoscaler-use-rest-clients=false
--horizontal-pod-autoscaler-sync-period=10s
based on https://github.com/kubernetes/kubernetes/issues/57673
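A sketch of how those flags would be added on a kubeadm cluster; the manifest path below is the kubeadm default, and only the two added flags are new (the rest of the manifest stays as it is):
# /etc/kubernetes/manifests/kube-controller-manager.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-controller-manager
    - --horizontal-pod-autoscaler-use-rest-clients=false   # use the Heapster-based metrics client instead of the metrics API
    - --horizontal-pod-autoscaler-sync-period=10s          # how often the HPA controller re-evaluates targets
    # ...existing flags unchanged...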
In that case you may also need to change your Heapster deployment; the
--source=kubernetes:https://kubernetes.default?kubeletPort=10250&kubeletHttps=true&insecure=true parameter is enough.
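A sketch of where that flag goes in the Heapster Deployment's container command (the image tag is an example; adjust it to your install):
# heapster Deployment (excerpt)
spec:
  containers:
  - name: heapster
    image: k8s.gcr.io/heapster-amd64:v1.5.4   # example image/tag
    command:
    - /heapster
    - --source=kubernetes:https://kubernetes.default?kubeletPort=10250&kubeletHttps=true&insecure=true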
I found this link very informative https://blog.inkubate.io/deploy-kubernetes-1-9-from-scratch-on-vmware-vsphere/

You have to enable the metrics server, which you can do using a Helm chart; the chart is an easy way to add the metrics server:
helm install stable/metrics-server
Wait 3-4 minutes after the pods start running, then run kubectl get hpa and check that the TARGETS column shows values.
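A couple of quick checks of the metrics pipeline before looking at the HPA again (v1beta1.metrics.k8s.io is the APIService that metrics-server registers):
kubectl get apiservice v1beta1.metrics.k8s.io
kubectl top nodes
kubectl top pods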

Make sure your spec has this part properly configured:
metrics:
- type: Resource
  resource:
    name: memory
    target:
      type: Utilization
      averageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
In my case I had name: Memory with uppercase M and that cost me a day to find out.
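For reference, a minimal sketch of a complete autoscaling/v2 HorizontalPodAutoscaler using that metrics block (names and numbers here are placeholders, not values from the question):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app              # placeholder
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app            # placeholder: the Deployment to scale
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: memory          # must be lowercase, as noted above
      target:
        type: Utilization
        averageUtilization: 80   # placeholder value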

Related

Kubernetes metrics-server not working with Linkerd

I have a metrics-server and a horizontal pod autoscaler using this server, running on my cluster.
This works perfectly fine, until I inject linkerd-proxies into the deployments of the namespace where my application is running. Running kubectl top pod in that namespace results in an error: Metrics not available for pod <name>. However, nothing appears in the metrics-server pod's logs.
The metrics-server clearly works fine in other namespaces, because top works in every namespace but the meshed one.
At first I thought it could be because the proxies' resource requests/limits weren't set, but after running the injection with them (kubectl get -n <namespace> deploy -o yaml | linkerd inject - --proxy-cpu-request "10m" --proxy-cpu-limit "1" --proxy-memory-request "64Mi" --proxy-memory-limit "256Mi" | kubectl apply -f -), the issue stays the same.
Is this a known problem, are there any possible solutions?
PS: I have a kube-prometheus-stack running in a different namespace, and this seems to be able to scrape the pod metrics from the meshed pods just fine
The problem was apparently a bug in the cAdvisor stats provider with the CRI runtime. The linkerd-init containers keep producing metrics after they've terminated, which shouldn't happen. The metrics-server ignores stats from pods that contain containers that report zero values (to avoid reporting invalid metrics, like when a container is restarting, metrics aren't collected yet,...). You can follow up on the issue here. Solutions seem to be changing to another runtime or using the PodAndContainerStatsFromCRI flag, which will let the internal CRI stats provider be responsible instead of the cAdvisor one.
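If you go the feature-gate route, PodAndContainerStatsFromCRI is set on the kubelet; a sketch of the corresponding kubelet config fragment (only the feature gate line is the actual change):
# KubeletConfiguration (excerpt)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  PodAndContainerStatsFromCRI: true   # let the CRI stats provider supply container stats instead of cAdvisor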
I'm able to use kubectl top on pods that have linkerd injected:
:; kubectl top pod -n linkerd --containers
POD NAME CPU(cores) MEMORY(bytes)
linkerd-destination-5cfbd7468-7l22t destination 2m 41Mi
linkerd-destination-5cfbd7468-7l22t linkerd-proxy 1m 13Mi
linkerd-destination-5cfbd7468-7l22t policy 1m 81Mi
linkerd-destination-5cfbd7468-7l22t sp-validator 1m 34Mi
linkerd-identity-fc9bb697-s6dxw identity 1m 33Mi
linkerd-identity-fc9bb697-s6dxw linkerd-proxy 1m 12Mi
linkerd-proxy-injector-668455b959-rlvkj linkerd-proxy 1m 13Mi
linkerd-proxy-injector-668455b959-rlvkj proxy-injector 1m 40Mi
So I don't think there's anything fundamentally incompatible between Linkerd and the Kubernetes metrics server.
I have noticed that I will sometimes see the errors for the first ~1m after a pod starts, before the metrics server has gotten its initial state for a pod; but these error messages seem a little different than what you reference:
:; kubectl rollout restart -n linkerd deployment linkerd-destination
deployment.apps/linkerd-destination restarted
:; while ! kubectl top pod -n linkerd --containers linkerd-destination-6d974dd4c7-vw7nw ; do sleep 10 ; done
Error from server (NotFound): podmetrics.metrics.k8s.io "linkerd/linkerd-destination-6d974dd4c7-vw7nw" not found
Error from server (NotFound): podmetrics.metrics.k8s.io "linkerd/linkerd-destination-6d974dd4c7-vw7nw" not found
Error from server (NotFound): podmetrics.metrics.k8s.io "linkerd/linkerd-destination-6d974dd4c7-vw7nw" not found
Error from server (NotFound): podmetrics.metrics.k8s.io "linkerd/linkerd-destination-6d974dd4c7-vw7nw" not found
POD NAME CPU(cores) MEMORY(bytes)
linkerd-destination-6d974dd4c7-vw7nw destination 1m 25Mi
linkerd-destination-6d974dd4c7-vw7nw linkerd-proxy 1m 13Mi
linkerd-destination-6d974dd4c7-vw7nw policy 1m 18Mi
linkerd-destination-6d974dd4c7-vw7nw sp-validator 1m 19Mi
:; kubectl version --short
Client Version: v1.23.3
Server Version: v1.21.7+k3s1

Unable to see Pods' CPU and Memory utilization; graphs are missing in the Kubernetes dashboard

K8s VERSION = v1.18.6
I have deployed the Kubernetes dashboard using the following command and added a privileged user with which I logged into the dashboard.
but I am not able to see the Pods' CPU and Memory utilization; the graphs are missing from the Kubernetes dashboard.
The Kubernetes Metrics Server is an aggregator of resource usage data in your cluster.
To deploy the Metrics Server, run the following command:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
Verify that the metrics-server deployment is running the desired number of pods with the following command.
kubectl get deployment metrics-server -n kube-system
Output
NAME READY UP-TO-DATE AVAILABLE AGE
metrics-server 1/1 1 1 6m
You can also validate with the command below:
kubectl top nodes
to see node CPU utilisation. If it works, the metrics should then come up in the Dashboard as well.
Resource usage metrics are only available for K8s clusters once Metrics Server has been installed.
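Another quick way to confirm the metrics API is actually serving data is to query it directly through the apiserver (these are the standard endpoints registered by Metrics Server):
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/pods"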

Grafana dashboard displaying pod_name instead of pod name

I have deployed an application on a Kubernetes cluster and am monitoring it using Prometheus and Grafana. For Kubernetes pod monitoring I am using the Grafana dashboard: Kubernetes cluster monitoring (via Prometheus) https://grafana.com/grafana/dashboards/315
I imported the dashboard using id 315, but it shows pod_name instead of the pod and container names. Can anyone please help me get the pod name and container name in the dashboard?
The provided tutorial was updated 2 years ago.
The current version of Kubernetes is 1.17. As per its tags, the tutorial was tested on Prometheus v1.3.0, Kubernetes v1.4.0 and Grafana v3.1.1, which are quite old at the moment.
In requirements you have statement:
Prometheus will use metrics provided by cAdvisor via kubelet service (runs on each node of Kubernetes cluster by default) and via kube-apiserver service only.
In Kubernetes 1.16, metric labels like pod_name and container_name were removed. Instead you need to use pod and container. You can verify it here.
Any Prometheus queries that match pod_name and container_name labels (e.g. cadvisor or kubelet probe metrics) must be updated to use pod and container instead.
Please check this Github Thread about dashboard bug for more information.
Solution
Please change pod_name to pod in your query.
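For example, a typical cAdvisor-based panel query changes like this (container_cpu_usage_seconds_total is a standard cAdvisor metric and $pod a Grafana dashboard variable, both used here only for illustration; your dashboard's exact expressions may differ):
Before (Kubernetes < 1.16):
  sum(rate(container_cpu_usage_seconds_total{pod_name=~"$pod"}[5m])) by (pod_name)
After (Kubernetes >= 1.16):
  sum(rate(container_cpu_usage_seconds_total{pod=~"$pod"}[5m])) by (pod)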
Kubernetes v1.16.0 removed the cadvisor metric labels pod_name and container_name to match instrumentation guidelines. Any Prometheus queries that match pod_name and container_name labels (e.g. cadvisor or kubelet probe metrics) must be updated to use pod and container instead.
You can check:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.16.md#metrics-changes

HPA not able to fetch metrics from Prometheus in Kubernetes

I have a Kubernetes cluster with one master node and two worker nodes. For monitoring purposes, I have deployed Prometheus and Grafana. Now, I want to autoscale pods based on CPU usage. But even after configuring Grafana and Prometheus, I am getting the following error:
Name: php-apache
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Mon, 17 Jun 2019 12:33:01 +0530
Reference: Deployment/php-apache
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): <unknown> / 50%
Min replicas: 1
Max replicas: 10
Deployment pods: 1 current / 0 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale
ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedGetResourceMetric 112s (x12408 over 2d4h) horizontal-pod-autoscaler unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
Can anybody let me know why Kubernetes is not fetching metrics from Prometheus ?
Kubernetes retrieves metrics from either the metrics.k8s.io API (normally implemented by the metrics-server, which can be separately installed) or the custom.metrics.k8s.io API (which can serve any type of metric and is normally provided by third parties). To use Prometheus with the HPA in Kubernetes, the Prometheus Adapter for the custom metrics API needs to be installed.
A walkthrough for the setup can be found here.
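A sketch of installing the adapter with Helm 2, in the same style as the other charts mentioned in this thread; the release name and the Prometheus service URL/port below are assumptions about your setup, not known values:
helm install stable/prometheus-adapter \
  --name prometheus-adapter \
  --set prometheus.url=http://prometheus-server.default.svc \
  --set prometheus.port=80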
Heapster is now deprecated: https://github.com/kubernetes-retired/heapster
To enable auto-scaling on your cluster you can use the HPA (horizontal pod autoscaler), and you can also install the metrics server to check all metrics.
To install the metrics server on Kubernetes you can also follow these guides:
Amazon: https://docs.aws.amazon.com/eks/latest/userguide/metrics-server.html
https://github.com/kubernetes-incubator/metrics-server
https://medium.com/#cagri.ersen/kubernetes-metrics-server-installation-d93380de008
You don't need custom metrics to use HPA for auto-scaling pods based on their CPU usage.
As @Blokje5 mentioned earlier, you just need to install 'kube-state-metrics'.
The most convenient way to do it is with a dedicated helm chart (kube-state-metrics).
Hint: use override parameters with 'helm install' to create ServiceMonitor object for 'kube-state-metrics' Pod, to allow Prometheus to discover a new target for metrics scraping, e.g.:
helm install stable/kube-state-metrics --set prometheus.monitor.enabled=true
Remark: Pay attention to the 'serviceMonitorSelector' defined in your existing Prometheus resource object/configuration, so that it matches the ServiceMonitor definition for 'kube-state-metrics'. This makes the pods' metrics available in the Prometheus console.
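A sketch of the matching involved: the chart creates a ServiceMonitor carrying certain labels, and the Prometheus custom resource must select them via serviceMonitorSelector (the label key/value below are illustrative, not from any real install):
# Prometheus custom resource (excerpt)
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  serviceMonitorSelector:
    matchLabels:
      release: my-release   # illustrative; must match the labels on the kube-state-metrics ServiceMonitor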

Not able to run HPA, not able to get metrics from the metrics API

I am trying to run the horizontal pod autoscaler in Kubernetes; I want to autoscale my pods based on the default CPU metrics.
For that I installed the metrics server, after which I can see metrics.k8s.io/v1beta1 (kubectl api-versions). Then I tried deploying prometheus-operator. But upon running kubectl top node/pod, the error I am getting is:
error: Metrics not available for pod default/web-deployment-658cd556f8-ztf6c, age: 35m23.264812635s" and "error: metrics not available yet"
Do I need to run heapster?
@batman, as you said, enabling the minikube metrics-server add-on is enough when using minikube.
In the general case, if using metrics-server, edit the metrics-server deployment by running: kubectl edit deployment metrics-server -n kube-system
Under spec: -> containers:, add the following flag:
spec:
  containers:
  - command:
    - /metrics-server
    - --kubelet-insecure-tls
As described on metrics-server github:
--kubelet-insecure-tls: skip verifying Kubelet CA certificates. Not recommended for production usage, but can be useful in test clusters
with self-signed Kubelet serving certificates.
Here you can find tutorial describing HPA using custom metrics and Prometheus.
In minikube, we have to enable metrics-server add-on.
minikube addons list
minikube addons enable metrics-server
Then create hpa, deployment and boom!!
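For example, a CPU-based HPA can be created directly from the command line (the deployment name and thresholds are placeholders):
kubectl autoscale deployment web-deployment --cpu-percent=50 --min=1 --max=10
kubectl get hpa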
Has anyone done autoscaling based on custom metrics, e.g. based on the number of HTTP requests?