Grafana dashboard not displaying pod name, showing pod_name instead - Kubernetes

I have deployed an application on a Kubernetes cluster and am using Prometheus and Grafana for monitoring. For Kubernetes pod monitoring I am using the Grafana dashboard Kubernetes cluster monitoring (via Prometheus): https://grafana.com/grafana/dashboards/315
I imported the dashboard using ID 315, but it does not show pod names or container names; instead it shows pod_name. Can anyone please help me get the pod name and container name into the dashboard?

The tutorial you followed was last updated two years ago.
The current version of Kubernetes is 1.17. According to its tags, the tutorial was tested with Prometheus v1.3.0, Kubernetes v1.4.0 and Grafana v3.1.1, which are all quite old at this point.
The requirements include this statement:
Prometheus will use metrics provided by cAdvisor via kubelet service (runs on each node of Kubernetes cluster by default) and via kube-apiserver service only.
In Kubernetes 1.16 the metric labels pod_name and container_name were removed. You need to use pod and container instead. You can verify it here.
Any Prometheus queries that match pod_name and container_name labels (e.g. cadvisor or kubelet probe metrics) must be updated to use pod and container instead.
Please check this Github Thread about dashboard bug for more information.
Solution
Please change pod_name to pod in your query.
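For example, a panel query that groups by the old labels would change like this (an illustrative query, not necessarily the exact expression used by dashboard 315):

Before (Kubernetes < 1.16):
sum(rate(container_cpu_usage_seconds_total{image!=""}[5m])) by (pod_name, container_name)

After (Kubernetes >= 1.16):
sum(rate(container_cpu_usage_seconds_total{image!=""}[5m])) by (pod, container)

Remember to also update any legend templates from {{pod_name}} / {{container_name}} to {{pod}} / {{container}}.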

Kubernetes v1.16.0 removed the cadvisor metric labels pod_name and container_name to match instrumentation guidelines. Any Prometheus queries that match pod_name and container_name labels (e.g. cadvisor or kubelet probe metrics) must be updated to use pod and container instead.
You can check:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.16.md#metrics-changes

Related

Stackdriver Prometheus sidecar with Prometheus Operator helm chart

We have set up Prometheus + Grafana on our GKE cluster using the stable/prometheus-operator helm chart. Now we want to export some metrics to Stackdriver because we have installed the custom metrics Stackdriver adapter. We are already using some Pub/Sub metrics from Stackdriver for autoscaling a few deployments. Now we also want to use some Prometheus metrics (mainly nginx request rate) for autoscaling other deployments.
So, my first question: Can we use Prometheus adapter in parallel with Stackdriver adapter for autoscaling in the same cluster?
If not, we will need to install Stackdriver Prometheus Sidecar for exporting the Prometheus metrics to Stackdriver and then use them for autoscaling via Stackdriver adapter.
From the instructions here, it looks like we need to install the Stackdriver sidecar in the same pod in which Prometheus is running. I gave it a try. When I ran the patch.sh script, I got the message back: statefulset.apps/prometheus-prom-operator-prometheus-o-prometheus patched, but when I inspected the StatefulSet again, it didn't have the Stackdriver sidecar container in it. Since this StatefulSet is created by a Helm chart, we probably can't modify it directly. Is there a recommended way of doing this in Helm?
Thanks to this comment on GitHub, I figured it out. There are so many configuration options accepted by this Helm chart that I missed it while reading the docs.
So, turns out that this Helm chart accepts a configuration option prometheus.prometheusSpec.containers. Its description in the docs says: "Containers allows injecting additional containers. This is meant to allow adding an authentication proxy to a Prometheus pod". But obviously, it is not limited to the authentication proxy and you can pass any container spec here and it will be added to Prometheus StatefulSet created by this Helm chart.
Here is the sample configuration I used. Some key points:
Please replace the values in angle brackets with your actual values.
Feel free to remove the arg --include. I added it because nginx_http_requests_total is the only Prometheus metric I want to send to Stackdriver for now. Check Managing costs for Prometheus-derived metrics for more details about it.
To figure out the name of the volume to use in volumeMounts:
List the StatefulSets in the Prometheus Operator namespace. Assuming that you installed it in the monitoring namespace: kubectl get statefulsets -n monitoring
Describe the Prometheus StatefulSet, assuming that its name is prometheus-prom-operator-prometheus-o-prometheus: kubectl describe statefulset prometheus-prom-operator-prometheus-o-prometheus -n monitoring
In the details of this StatefulSet, find the container named prometheus. Note the value passed to its --storage.tsdb.path arg.
Find the volume that is mounted on this container at the same path. In my case, it was prometheus-prom-operator-prometheus-o-prometheus-db, so I mounted the same volume on my Stackdriver sidecar container as well.
prometheus:
  prometheusSpec:
    containers:
      - name: stackdriver-sidecar
        image: gcr.io/stackdriver-prometheus/stackdriver-prometheus-sidecar:0.7.5
        imagePullPolicy: Always
        args:
          - --stackdriver.project-id=<GCP PROJECT ID>
          - --prometheus.wal-directory=/prometheus/wal
          - --stackdriver.kubernetes.location=<GCP PROJECT REGION>
          - --stackdriver.kubernetes.cluster-name=<GKE CLUSTER NAME>
          - --include=nginx_http_requests_total
        ports:
          - name: stackdriver
            containerPort: 9091
        volumeMounts:
          - name: prometheus-prom-operator-prometheus-o-prometheus-db
            mountPath: /prometheus
Save this yaml to a file. Let's assume you saved it to prom-config.yaml
Now, find the release name you have used to install Prometheus Operator Helm chart on your cluster:
helm list
Assuming that release name is prom-operator, you can update this release according to the config composed above by running this command:
helm upgrade -f prom-config.yaml prom-operator stable/prometheus-operator
I hope you found this helpful.

I can't find a way to set up a Grafana dashboard with InfluxDB to monitor a Kubernetes cluster

I am currently working with Minikube and trying to set up a Grafana dashboard using Influxdb for the monitoring of my cluster.
I found several tutorials and used this one: https://github.com/kubernetes-retired/heapster/blob/master/docs/influxdb.md as I found many tutorials redirecting to the .yaml here: https://github.com/kubernetes-retired/heapster/tree/master/deploy/kube-config/influxdb
I just modified those .yaml files a bit, switching all "extensions/v1beta1" to "apps/v1" and setting the type of the Grafana service to NodePort (roughly like the sketch below).
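Roughly, the edits looked like this (an illustrative sketch, not my exact files; the names and labels follow the heapster repo manifests, and the image is the one mentioned further down). Note that apps/v1 Deployments also require an explicit spec.selector:

# grafana deployment (excerpt): apiVersion changed from extensions/v1beta1 to apps/v1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  selector:              # required by apps/v1
    matchLabels:
      k8s-app: grafana
  template:
    metadata:
      labels:
        k8s-app: grafana
    spec:
      containers:
        - name: grafana
          image: gcr.io/google_containers/heapster-grafana-amd64:v4.0.2
          ports:
            - containerPort: 3000
---
# grafana service (excerpt): type set to NodePort so the dashboard is reachable from outside
apiVersion: v1
kind: Service
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 3000
  selector:
    k8s-app: grafana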
But when I check the created services and deployments, grafana, influxdb and heapster are nowhere to be found.
(screenshots in the original question showed the kubectl output for the deployments being created, and then the deployments and services not being found)
I found this may be because the images I am using are no longer available so I tried using other images like
image: gcr.io/google_containers/heapster-grafana-amd64:v4.0.2 for grafana
image: gcr.io/google_containers/heapster-influxdb-amd64:v1.1.1 for influx-db
but nothing changed.
Grafana, InfluxDB and Heapster are created in the kube-system namespace. So you can find them using:
kubectl get pods -n kube-system
kubectl get svc -n kube-system
kubectl get deployments -n kube-system
Also, the git repository you are using is archived, so I would advise against using it. Heapster is deprecated in favor of metrics-server.

How to use an existing Prometheus for Grafana on GKE?

I have one question about Grafana. How can I use the existing Prometheus DaemonSet on GKE for Grafana? I do not want to spin up one more Prometheus deployment just for Grafana. I came up with this question after I spun up the GKE cluster: I checked the kube-system namespace and it turns out there is a Prometheus DaemonSet already deployed.
$ kubectl get daemonsets -n kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
prometheus-to-sd 2 2 2 2 2 beta.kubernetes.io/os=linux 19d
and I would like to use this Prometheus.
I have a Grafana deployment installed with the helm chart stable/grafana:
$ kubectl get deploy -n dev
NAME READY UP-TO-DATE AVAILABLE AGE
grafana 1/1 1 1 9m20s
Currently, I am using stable/prometheus
prometheus-to-sd is not a Prometheus instance, but a component that allows getting data from Prometheus to GCP's stackdriver. More info here: https://github.com/GoogleCloudPlatform/k8s-stackdriver/tree/master/prometheus-to-sd
If you'd like to have Prometheus, you'll have to run it separately (the prometheus-operator helm chart can easily deploy the whole monitoring stack to your GKE cluster, which may or may not be exactly what you need here).
Note that recent Grafana versions come with a Stackdriver datasource, which allows you to query Stackdriver directly from Grafana (if all the metrics you need are, or can be, in Stackdriver).

HPA not fetching existing custom metric?

I'm using mongodb-exporter to store/query metrics via Prometheus. I have set up a custom metrics server and I am storing values for it.
Here is evidence that the Prometheus exporter and the custom metrics server work together:
Query:
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/monitoring/pods/*/mongodb_mongod_wiredtiger_cache_bytes"
Result:
{"kind":"MetricValueList","apiVersion":"custom.metrics.k8s.io/v1beta1","metadata":{"selfLink":"/apis/custom.metrics.k8s.io/v1beta1/namespaces/monitoring/pods/%2A/mongodb_mongod_wiredtiger_cache_bytes"},"items":[{"describedObject":{"kind":"Pod","namespace":"monitoring","name":"mongo-exporter-2-prometheus-mongodb-exporter-68f95fd65d-dvptr","apiVersion":"/v1"},"metricName":"mongodb_mongod_wiredtiger_cache_bytes","timestamp":"TTTTT","value":"0"}]}
In my case, when I create an HPA for this custom metric from the mongo exporter, the HPA returns this error:
failed to get mongodb_mongod_wiredtiger_cache_bytes utilization: unable to get metrics for resource mongodb_mongod_wiredtiger_cache_bytes: no metrics returned from resource metrics API
What is the main issue in my case? I have checked all the configs and the flow looks fine, but where is my mistake?
Help
Thanks :)
In the comments you wrote that you have enabled external.metrics, however in the original question you had issues with custom.metrics.
In short:
metrics supports only basic metrics like CPU or memory.
custom.metrics allows you to extend the basic metrics to all Kubernetes objects (http_requests, number of pods, etc.).
external.metrics allows you to gather metrics which are not Kubernetes objects (an example HPA manifest is sketched after this list):
External metrics allow you to autoscale your cluster based on any
metric available in your monitoring system. Just provide a metric
block with a name and selector, as above, and use the External metric
type instead of Object
For more detailed description, please check this doc.
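As a rough illustration only, an HPA using the External metric type could look like the manifest below. The metric name matches the one from your question, but the deployment name, namespace, label selector and target value are placeholders you would need to adapt:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
  namespace: monitoring
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app                 # placeholder deployment to scale
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: External
      external:
        metric:
          name: mongodb_mongod_wiredtiger_cache_bytes
          selector:
            matchLabels:
              release: mongo-exporter-2   # placeholder label selector
        target:
          type: AverageValue
          averageValue: 100Mi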
Minikube
To verify whether custom.metrics are enabled, you need to execute the command below and check whether you can see a metrics-server... pod.
$ kubectl get pods -n kube-system
...
metrics-server-587f876775-9qrtc 1/1 Running 4 5d1h
A second way is to check whether minikube has the metrics-server addon enabled:
$ minikube addons list
...
- metrics-server: enabled
If it is disabled just execute
$ sudo minikube addons enable metrics-server
✅ metrics-server was successfully enabled
GKE
Currently on GKE, heapster and metrics-server are turned on by default, but custom.metrics are not supported out of the box.
You have to install the prometheus adapter or use Stackdriver.
Kubeadm
Kubeadm does not include heapster or metrics-server out of the box. For easy installation, you can use this YAML.
Afterwards you have to install the prometheus adapter.
Apply custom.metrics
It's the same for Minikube, Kubeadm, GKE.
The easiest way to apply custom.metrics is to install the prometheus adapter via Helm (an example command is sketched below).
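For example, with Helm 2 syntax (the release name and the Prometheus URL/port here are assumptions you should adjust to your own cluster):

helm install --name my-release stable/prometheus-adapter \
  --set prometheus.url=http://prometheus-server.monitoring.svc \
  --set prometheus.port=80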
After the helm installation you will be able to see this note:
NOTES:
my-release-prometheus-adapter has been deployed.
In a few minutes you should be able to list metrics using the following command(s):
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
As additional information, you can use jq to get more user friendly output.
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq .

Kubernetes Autoscaling

I have Kubernetes v1.12.1 installed on my cluster.
I downloaded the metrics-server from the following repo:
https://github.com/kubernetes-incubator/metrics-server
and then run the following command:
kubectl create -f metrics-server/deploy/1.8+/
and then I tried autoscaling a deployment using:
kubectl autoscale deployment example-app-tier --min 1 --max 3 --cpu-percent 70 --namespace example
but the targets column here shows unknown/70%:
kubectl get hpa --namespace example
NAMESPACE NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
example example-app-tier Deployment/example-app-tier <unknown>/70% 1 3 1 3h35m
and when I try running kubectl top nodes or kubectl top pods I get an error saying:
error: Metrics not available for pod default/pi-ss8j6, age: 282h48m5.334137739s
So I'm looking for any tutorial that helps me enable autoscaling step by step using metrics-server or Prometheus (and not Heapster, as it is deprecated and will not be supported anymore).
Thank you!
You need to register your metrics-server with the API server and make sure they can communicate; see the verification commands sketched below.
https://github.com/kubernetes/kubernetes/issues/59438
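A quick way to check this (these commands assume the default v1beta1.metrics.k8s.io APIService name registered by metrics-server):

# Is the metrics API registered and marked Available?
kubectl get apiservice v1beta1.metrics.k8s.io

# Can the API server actually reach metrics-server and return node metrics?
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes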
If that is already done, you need to check the help for the kubectl top command in your version of k8s; the command may default to using heapster, and you may need to tell it to use the new service instead.
https://github.com/kubernetes/kubernetes/pull/56206
From the help output it looks like it has not yet been ported to the new metrics-server and still looks for heapster by default.
C02W84XMHTD5:tmp iahmad$ kubectl top node --help
Display Resource (CPU/Memory/Storage) usage of nodes.
The top-node command allows you to see the resource consumption of nodes.
Aliases:
node, nodes, no
Examples:
# Show metrics for all nodes
kubectl top node
# Show metrics for a given node
kubectl top node NODE_NAME
Options:
--heapster-namespace='kube-system': Namespace Heapster service is located in
--heapster-port='': Port name in service to use
--heapster-scheme='http': Scheme (http or https) to connect to Heapster as
--heapster-service='heapster': Name of Heapster service
-l, --selector='': Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l
key1=value1,key2=value2)
Usage:
kubectl top node [NAME | -l label] [options]
Use "kubectl options" for a list of global command-line options (applies to all commands).
Note: I am using 1.10; maybe in your version the options are different.