not able to run hpa, get metrics to api metrics - kubernetes

I am trying to run the horizontal pod autoscaler in kubernetes; I want to autoscale my pods based on the default CPU metrics.
For that I installed metrics-server, and after that I can see metrics.k8s.io/v1beta1 in kubectl api-versions. Then I tried deploying prometheus-operator. But upon running kubectl top node/pod, the errors I am getting are
"error: Metrics not available for pod default/web-deployment-658cd556f8-ztf6c, age: 35m23.264812635s" and "error: metrics not available yet"
Do I need to run heapster?

#batman, as you said, enabling the minikube metrics-server add-on is enough when using minikube.
In the general case, if you are using metrics-server, edit the metrics-server deployment by running: kubectl edit deployment metrics-server -n kube-system
Under spec: -> containers: add the following flag:
spec:
  containers:
  - command:
    - /metrics-server
    - --kubelet-insecure-tls
As described on metrics-server github:
--kubelet-insecure-tls: skip verifying Kubelet CA certificates. Not recommended for production usage, but can be useful in test clusters
with self-signed Kubelet serving certificates.
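Instead of editing interactively, the same flag can be added with a one-line patch. This is only a sketch: it assumes the flags live under command of the first container, as in the snippet above (adjust the path if your manifest uses args or a different container index):
kubectl patch deployment metrics-server -n kube-system --type=json \
  -p '[{"op":"add","path":"/spec/template/spec/containers/0/command/-","value":"--kubelet-insecure-tls"}]'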
Here you can find a tutorial describing HPA using custom metrics and Prometheus.

In minikube, we have to enable metrics-server add-on.
minikube addons list
minikube addons enable metrics-server
Then create hpa, deployment and boom!!
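For example, assuming the Deployment from the question is named web-deployment and its containers have CPU requests set (thresholds here are just placeholders), a CPU-based HPA can be created and checked with:
kubectl autoscale deployment web-deployment --cpu-percent=50 --min=1 --max=5
kubectl get hpa web-deployment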
Has anyone done autoscaling based on custom metrics, e.g. based on the number of HTTP requests?

Related

Stackdriver Prometheus sidecar with Prometheus Operator helm chart

We have set up Prometheus + Grafana on our GKE cluster using the stable/prometheus-operator helm chart. Now we want to export some metrics to Stackdriver because we have installed the Custom Metrics Stackdriver Adapter. We are already using some Pub/Sub metrics from Stackdriver for autoscaling a few deployments. Now we also want to use some Prometheus metrics (mainly nginx request rate) in the autoscaling of other deployments.
So, my first question: Can we use Prometheus adapter in parallel with Stackdriver adapter for autoscaling in the same cluster?
If not, we will need to install Stackdriver Prometheus Sidecar for exporting the Prometheus metrics to Stackdriver and then use them for autoscaling via Stackdriver adapter.
From the instructions here, it looks like we need to install the Stackdriver sidecar in the same pod in which Prometheus is running. I gave it a try. When I ran the patch.sh script, I got the message back: statefulset.apps/prometheus-prom-operator-prometheus-o-prometheus patched, but when I inspected the StatefulSet again, it didn't have the Stackdriver sidecar container in it. Since this StatefulSet is created by a Helm chart, we probably can't modify it directly. Is there a recommended way of doing this in Helm?
Thanks to this comment on GitHub, I figured it out. There are so many configuration options accepted by this Helm chart that I missed it while reading the docs.
So, it turns out that this Helm chart accepts a configuration option prometheus.prometheusSpec.containers. Its description in the docs says: "Containers allows injecting additional containers. This is meant to allow adding an authentication proxy to a Prometheus pod". But obviously it is not limited to an authentication proxy: you can pass any container spec here and it will be added to the Prometheus StatefulSet created by this Helm chart.
Here is the sample configuration I used. Some key points:
Please replace the values in angle brackets with your actual values.
Feel free to remove the arg --include. I added it because nginx_http_requests_total is the only Prometheus metric I want to send to Stackdriver for now. Check Managing costs for Prometheus-derived metrics for more details about it.
To figure out the name of volume to use in volumeMounts:
List the StatefulSets in the Prometheus Operator namespace. Assuming that you installed it in the monitoring namespace: kubectl get statefulsets -n monitoring
Describe the Prometheus StatefulSet, assuming that its name is prometheus-prom-operator-prometheus-o-prometheus: kubectl describe statefulset prometheus-prom-operator-prometheus-o-prometheus -n monitoring
In the details of this StatefulSet, find the container named prometheus and note the value passed in its --storage.tsdb.path arg.
Find the volume that is mounted on this container at the same path. In my case, it was prometheus-prom-operator-prometheus-o-prometheus-db, so I mounted the same volume on my Stackdriver sidecar container as well.
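If you prefer not to scan the describe output, a jsonpath query like the one below (using the StatefulSet name from above) should list the volume mounts of the prometheus container directly:
kubectl get statefulset prometheus-prom-operator-prometheus-o-prometheus -n monitoring \
  -o jsonpath='{.spec.template.spec.containers[?(@.name=="prometheus")].volumeMounts[*].name}'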
prometheus:
  prometheusSpec:
    containers:
    - name: stackdriver-sidecar
      image: gcr.io/stackdriver-prometheus/stackdriver-prometheus-sidecar:0.7.5
      imagePullPolicy: Always
      args:
      - --stackdriver.project-id=<GCP PROJECT ID>
      - --prometheus.wal-directory=/prometheus/wal
      - --stackdriver.kubernetes.location=<GCP PROJECT REGION>
      - --stackdriver.kubernetes.cluster-name=<GKE CLUSTER NAME>
      - --include=nginx_http_requests_total
      ports:
      - name: stackdriver
        containerPort: 9091
      volumeMounts:
      - name: prometheus-prom-operator-prometheus-o-prometheus-db
        mountPath: /prometheus
Save this yaml to a file. Let's assume you saved it to prom-config.yaml
Now, find the release name you have used to install Prometheus Operator Helm chart on your cluster:
helm list
Assuming that release name is prom-operator, you can update this release according to the config composed above by running this command:
helm upgrade -f prom-config.yaml prom-operator stable/prometheus-operator
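To confirm the sidecar was actually injected, you can list the container names of the StatefulSet after the upgrade (names as used above); the output should include stackdriver-sidecar:
kubectl get statefulset prometheus-prom-operator-prometheus-o-prometheus -n monitoring \
  -o jsonpath='{.spec.template.spec.containers[*].name}'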
I hope you found this helpful.

Using Prometheus to monitoring AKS ( Azure Kubernetes Service) cannot discover the Kubelet component

We are targeting Prometheus, alertmanager and Grafana for monitoring AKS, but we have found that we cannot obtain the kubelet metrics; I don't know whether they are hidden by Azure or not. In addition, container CPU usage, i.e. container_cpu_usage_seconds_total, cannot be obtained in Prometheus. Does anyone have experience using Prometheus to monitor AKS?
Remark: I am using https://github.com/camilb/prometheus-kubernetes to install Prometheus on AKS.
I assume the kubelet is not being detected as a target to scrape metrics from.
It has to do with your AKS version.
In versions prior to 1.15, the metrics-server was started as follows:
command:
- /metrics-server
- --kubelet-port=10255
- --deprecated-kubelet-completely-insecure=true
- --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
while in recent versions of AKS:
command:
- /metrics-server
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP
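If kubelet/cAdvisor metrics still don't show up in Prometheus itself, a common pattern (independent of the camilb setup, shown here only as a sketch) is to scrape cAdvisor through the API server proxy instead of hitting the kubelet port directly:
- job_name: kubernetes-cadvisor
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  kubernetes_sd_configs:
  - role: node
  relabel_configs:
  - action: labelmap
    regex: __meta_kubernetes_node_label_(.+)
  - target_label: __address__
    replacement: kubernetes.default.svc:443
  - source_labels: [__meta_kubernetes_node_name]
    regex: (.+)
    target_label: __metrics_path__
    replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor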

I can't find a way to set up a Grafana dashboard with InfluxDB to monitor a Kubernetes cluster

I am currently working with Minikube and trying to set up a Grafana dashboard using Influxdb for the monitoring of my cluster.
I found several tutorials and used this one: https://github.com/kubernetes-retired/heapster/blob/master/docs/influxdb.md as I found many tutorials redirecting to the .yaml here: https://github.com/kubernetes-retired/heapster/tree/master/deploy/kube-config/influxdb
I just modified those .yaml a bit, switching all "extensions/v1beta1" into "apps/v1" and setting the type for the grafana service to NodePort.
But when I check the created services and deployments, grafana, influxdb and heapster are nowhere to be found.
kubectl deployments created:
deployments and services not found:
I thought this might be because the images I am using are no longer available, so I tried using other images like
image: gcr.io/google_containers/heapster-grafana-amd64:v4.0.2 for grafana
image: gcr.io/google_containers/heapster-influxdb-amd64:v1.1.1 for influx-db
but nothing changed.
Grafana, influxdb and heapster are created in kube-system namespace. So you can find them using:
kubectl get pods -n kube-system
kubectl get svc -n kube-system
kubectl get deployments -n kube-system
Also, the git repository which you are using is archived; I would advise not to use it. Heapster is deprecated in favor of metrics-server.
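If you switch to metrics-server, it can be installed from its official manifest (URL as of current releases; on Minikube the metrics-server add-on achieves the same thing):
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml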

Grafana dashboard not displaying pod name, showing pod_name instead

I have deployed an application on a kubernetes cluster and am using prometheus and grafana for monitoring. For kubernetes pod monitoring I am using the Grafana dashboard: Kubernetes cluster monitoring (via Prometheus) https://grafana.com/grafana/dashboards/315
I imported the dashboard using id 315 and it shows up without the pod name and container name; instead I am getting pod_name. Can anyone please help: how can I get the pod name and container name in the dashboard?
The provided tutorial was updated 2 years ago.
The current version of Kubernetes is 1.17. As per the tags, the tutorial was tested on Prometheus v1.3.0, Kubernetes v1.4.0 and Grafana v3.1.1, which are quite old at the moment.
In the requirements you have the statement:
Prometheus will use metrics provided by cAdvisor via kubelet service (runs on each node of Kubernetes cluster by default) and via kube-apiserver service only.
In Kubernetes 1.16 metric labels like pod_name and container_name were removed. Instead you need to use pod and container. You can verify it here.
Any Prometheus queries that match pod_name and container_name labels (e.g. cadvisor or kubelet probe metrics) must be updated to use pod and container instead.
Please check this Github Thread about dashboard bug for more information.
Solution
Please change pod_name to pod in your query.
Kubernetes v1.16.0 removed the cadvisor metric labels pod_name and container_name to match instrumentation guidelines. Any Prometheus queries that match pod_name and container_name labels (e.g. cadvisor or kubelet probe metrics) must be updated to use pod and container instead.
You can check:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.16.md#metrics-changes
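For example, a typical cAdvisor panel query changes like this (the exact expression in your dashboard may differ; the selector here is just illustrative):
# before (Kubernetes < 1.16)
sum(rate(container_cpu_usage_seconds_total{pod_name=~"web-.*"}[5m])) by (pod_name)
# after (Kubernetes >= 1.16)
sum(rate(container_cpu_usage_seconds_total{pod=~"web-.*"}[5m])) by (pod)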

Hpa not fetching existing custom metric?

I'm using mongodb-exporter to store/query the metrics via prometheus. I have set up a custom metrics server and it is storing values for them.
Here is evidence that the prometheus exporter and the custom metrics server work together.
Query:
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/monitoring/pods/*/mongodb_mongod_wiredtiger_cache_bytes"
Result:
{"kind":"MetricValueList","apiVersion":"custom.metrics.k8s.io/v1beta1","metadata":{"selfLink":"/apis/custom.metrics.k8s.io/v1beta1/namespaces/monitoring/pods/%2A/mongodb_mongod_wiredtiger_cache_bytes"},"items":[{"describedObject":{"kind":"Pod","namespace":"monitoring","name":"mongo-exporter-2-prometheus-mongodb-exporter-68f95fd65d-dvptr","apiVersion":"/v1"},"metricName":"mongodb_mongod_wiredtiger_cache_bytes","timestamp":"TTTTT","value":"0"}]}
In my case, when I create an HPA for this custom metric from the mongo exporter, the HPA returns this error to me:
failed to get mongodb_mongod_wiredtiger_cache_bytes utilization: unable to get metrics for resource mongodb_mongod_wiredtiger_cache_bytes: no metrics returned from resource metrics API
What is the main issue in my case? I have checked all the configs and the flow looks fine, so where is my mistake?
Help
Thanks :)
In the comments you wrote that you have enabled external.metrics; however, in the original question you had issues with custom.metrics.
In short:
metrics supports only basic metrics like CPU or memory.
custom.metrics allows you to extend basic metrics to all Kubernetes objects (http_requests, number of pods, etc.).
external.metrics allows you to gather metrics which are not Kubernetes objects:
External metrics allow you to autoscale your cluster based on any metric available in your monitoring system. Just provide a metric block with a name and selector, as above, and use the External metric type instead of Object.
For more detailed description, please check this doc.
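The error in the question mentions the resource metrics API, which typically happens when the metric is declared under type: Resource in the HPA spec. A custom.metrics metric is instead declared under type: Pods (or Object). A rough sketch only, with an illustrative target value, the Deployment name inferred from the pod name in the question, and an apiVersion that depends on your cluster version:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: mongo-exporter-hpa
  namespace: monitoring
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: mongo-exporter-2-prometheus-mongodb-exporter
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Pods
    pods:
      metric:
        name: mongodb_mongod_wiredtiger_cache_bytes
      target:
        type: AverageValue
        averageValue: "100Mi"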
Minikube
To verify if custom.metrics are enabled, you need to execute the command below and check if you can see a metrics-server... pod.
$ kubectl get pods -n kube-system
...
metrics-server-587f876775-9qrtc 1/1 Running 4 5d1h
A second way is to check whether minikube has the metrics-server enabled:
$ minikube addons list
...
- metrics-server: enabled
If it is disabled just execute
$ sudo minikube addons enable metrics-server
✅ metrics-server was successfully enabled
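Once the add-on is enabled, basic resource metrics should start appearing after a minute or two and can be checked with:
kubectl top nodes
kubectl top pods --all-namespaces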
GKE
Currently on GKE, heapster and metrics-server are turned on by default, but custom.metrics are not supported by default.
You have to install the prometheus adapter or the stackdriver adapter.
Kubeadm
Kubeadm does not include heapster or metrics-server out of the box. For easy installation, you can use this YAML.
Later you have to install the prometheus adapter.
Apply custom.metrics
It's the same for Minikube, Kubeadm, GKE.
The easiest way to apply custom.metrics is to install the prometheus adapter via Helm.
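For example (release and chart names as used in the note below; with Helm 2 you would pass the release name via --name):
helm install my-release stable/prometheus-adapter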
After the helm installation you will be able to see this note:
NOTES:
my-release-prometheus-adapter has been deployed.
In a few minutes you should be able to list metrics using the following command(s):
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
As additional information, you can use jq to get more user friendly output.
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq .