How to use existing prometheus for Grafana on GKE? - kubernetes

I have a question about Grafana: how can I use the existing Prometheus DaemonSet on GKE for Grafana? I do not want to spin up one more Prometheus deployment just for Grafana. I came up with this question after I spun up the GKE cluster: I checked the kube-system namespace and it turns out there is a Prometheus DaemonSet already deployed.
$ kubectl get daemonsets -n kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
prometheus-to-sd 2 2 2 2 2 beta.kubernetes.io/os=linux 19d
and I would like to use this Prometheus.
I have a Grafana deployment installed with the helm stable/grafana chart:
$ kubectl get deploy -n dev
NAME READY UP-TO-DATE AVAILABLE AGE
grafana 1/1 1 1 9m20s
Currently, I am using stable/prometheus

prometheus-to-sd is not a Prometheus instance, but a component that pushes metrics exposed in Prometheus format to GCP's Stackdriver. More info here: https://github.com/GoogleCloudPlatform/k8s-stackdriver/tree/master/prometheus-to-sd
If you'd like to have Prometheus, you'll have to run it separately. The prometheus-operator helm chart can easily deploy a whole monitoring stack to your GKE cluster (which may or may not be exactly what you need here).
Note that recent Grafana versions come with a Stackdriver datasource, which allows you to query Stackdriver directly from Grafana (if all the metrics you need are, or can be, in Stackdriver).
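For illustration, a minimal sketch of installing the prometheus-operator chart mentioned above; the release and namespace names are placeholders, and the exact chart options vary by chart version:
kubectl create namespace monitoring
# Helm 3 syntax; with Helm 2 use: helm install --name monitoring stable/prometheus-operator --namespace monitoring
helm install monitoring stable/prometheus-operator --namespace monitoring
This typically installs Prometheus, Alertmanager and Grafana together, so the existing stable/grafana release could then be pointed at the bundled Prometheus service as its datasource, or replaced by it.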

Related

Unable to see Pods CPU and Memory Utilization; graphs are missing in the Kubernetes dashboard

K8s VERSION = v1.18.6
I have deployed the Kubernetes dashboard using the following command and added a privileged user with which I logged into the dashboard,
but I am not able to see Pods CPU and Memory Utilization; the graphs are missing from the Kubernetes dashboard.
The Kubernetes Metrics Server is an aggregator of resource usage data in your cluster.
To deploy the Metrics Server, run the following command:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
Verify that the metrics-server deployment is running the desired number of pods with the following command.
kubectl get deployment metrics-server -n kube-system
Output
NAME READY UP-TO-DATE AVAILABLE AGE
metrics-server 1/1 1 1 6m
You can also validate it with the command below:
kubectl top nodes
If it shows node CPU utilisation, the graphs should then come up in the Dashboard as well.
Resource usage metrics are only available for K8s clusters once Metrics Server has been installed.
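If the graphs still do not appear, a quick generic check (not specific to this cluster) that the Metrics Server is registered and serving the resource metrics API:
kubectl get apiservice v1beta1.metrics.k8s.io
kubectl top pods --all-namespaces
Both commands should succeed once the Metrics Server is healthy; the dashboard graphs rely on the same Metrics Server data.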

Kubernetes: Prometheus context deadline exceeded error

I have several Node.js microservices running in the dev namespace; they expose metrics which I can access via http://localhost:9187/metrics.
But when I deploy the Prometheus server, which runs in the monitoring namespace, I get the below error on the Targets page.
Get http://1.../metrics: context deadline exceeded.
I assume none of these NetworkPolicies allow access from the monitoring namespace,
so do I need to add an additional one in the dev namespace to allow the Prometheus pod from the monitoring namespace to scrape the pods below, or what else might be the reason for this error?
What is the best way to add a NetworkPolicy to my application to allow Prometheus from the monitoring namespace?
kubectl get netpol -n dev
NAME POD-SELECTOR AGE
myapp-api-dev app.kubernetes.io/instance=myapp-api-dev,app.kubernetes.io/name=oneapihub-api 5h33m
myapp-auth-dev app.kubernetes.io/instance=myapp-auth-dev,app.kubernetes.io/name=oneapihub-auth 56m
myapp-backend-dev app.kubernetes.io/instance=myapp-backend-dev,app.kubernetes.io/name=oneapihub-backend 5h42m
redis app=redis,release=redis 33d
kubectl get pods -n monitoring
NAME READY STATUS RESTARTS AGE
monitoring-prometheus-server-6cc796c4db-hp4rg 2/2 Running 0 2d4h
I guess you have kube-prometheus installed. In that case you need to create custom Roles and RoleBindings to let Prometheus monitor other namespaces, see here
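For the NetworkPolicy part of the question, a minimal sketch of an ingress rule in the dev namespace that admits traffic from the monitoring namespace; the labels used here (name: monitoring on the namespace, and the empty podSelector) are assumptions and need to match your actual labels:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-prometheus-scrape
  namespace: dev
spec:
  podSelector: {}              # all pods in dev; narrow this to the metrics pods if preferred
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: monitoring # assumes the monitoring namespace carries this label
      ports:
        - protocol: TCP
          port: 9187           # the metrics port from the question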

kube-state-metrics is not working when only the master node is running in a cluster

I am monitoring Kubernetes cluster ( which is deployed using kubeadm ) with grafana and Prometheus.
I am facing some difficulty with kube-state-metrics: when only the master node is up, kube-state-metrics shows as down in the Prometheus targets, but when I bring up one more node, kube-state-metrics is up in the Prometheus targets.
Another interesting part is that when only the master node is up, I see one kube-state-metrics pod up and running in the kube-system namespace, but I am unable to access its metrics.
Am I missing anything in understanding kube-state-metrics?
Please help me out.

I can't find a way to set up a Grafana dashboard with InfluxDB to monitor a Kubernetes cluster

I am currently working with Minikube and trying to set up a Grafana dashboard using Influxdb for the monitoring of my cluster.
I found several tutorials and used this one: https://github.com/kubernetes-retired/heapster/blob/master/docs/influxdb.md as I found many tutorials redirecting to the .yaml here: https://github.com/kubernetes-retired/heapster/tree/master/deploy/kube-config/influxdb
I just modified those .yaml files a bit, switching all "extensions/v1beta1" into "apps/v1" and setting the type of the grafana service to NodePort, as sketched below.
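For illustration, a minimal sketch of the two edits described (apiVersion and NodePort), using the grafana name and image from this question; it is not the full manifest from the repository:
apiVersion: apps/v1                 # was: extensions/v1beta1 in the original manifests
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: grafana              # selector is required by apps/v1 and must match the pod labels
  template:
    metadata:
      labels:
        k8s-app: grafana
    spec:
      containers:
        - name: grafana
          image: gcr.io/google_containers/heapster-grafana-amd64:v4.0.2
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  type: NodePort                    # changed so Grafana is reachable via the node IP
  ports:
    - port: 80
      targetPort: 3000
  selector:
    k8s-app: grafana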
But when I check the created services and deployments, grafana, influxdb and heapster are nowhere to be found.
kubectl deployments created:
deployments and services not found:
I found this may be because the images I am using are no longer available so I tried using other images like
image: gcr.io/google_containers/heapster-grafana-amd64:v4.0.2 for grafana
image: gcr.io/google_containers/heapster-influxdb-amd64:v1.1.1 for influx-db
but nothing changed.
Grafana, influxdb and heapster are created in kube-system namespace. So you can find them using:
kubectl get pods -n kube-system
kubectl get svc -n kube-system
kubectl get deployments -n kube-system
Also, the git repository which you are using is archived; I would advise not to use it. Heapster is deprecated in favor of the Metrics Server.
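A sketch of the non-Heapster path, assuming Helm 3 and the Grafana project's own chart repository (the stable repository used elsewhere on this page has since been archived):
# resource metrics for kubectl top and the dashboard
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
# Grafana itself, from the grafana chart repository
helm repo add grafana https://grafana.github.io/helm-charts
helm install grafana grafana/grafana --namespace kube-system
Long-term metrics storage (the role InfluxDB played behind Heapster) would then come from Prometheus or another datasource configured in Grafana rather than from Heapster.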

Grafana dashboard displaying pod_name instead of pod name

I have deployed an application on a Kubernetes cluster and am monitoring it with Prometheus and Grafana. For monitoring the Kubernetes pods I am using the Grafana dashboard "Kubernetes cluster monitoring (via Prometheus)": https://grafana.com/grafana/dashboards/315
I imported the dashboard using id 315, and it renders without the pod names and container names; instead I get pod_name. Can anyone please help with how I can get the pod name and container name in the dashboard?
The provided tutorial was last updated 2 years ago.
The current version of Kubernetes is 1.17. As per its tags, the tutorial was tested on Prometheus v1.3.0, Kubernetes v1.4.0 and Grafana v3.1.1, which are quite old at the moment.
In the requirements you have the statement:
Prometheus will use metrics provided by cAdvisor via kubelet service (runs on each node of Kubernetes cluster by default) and via kube-apiserver service only.
In Kubernetes 1.16 metric labels like pod_name and container_name were removed. Instead you need to use pod and container. You can verify it here.
Any Prometheus queries that match pod_name and container_name labels (e.g. cadvisor or kubelet probe metrics) must be updated to use pod and container instead.
Please check this Github Thread about dashboard bug for more information.
Solution
Please change pod_name to pod in your query.
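As an illustration, a hypothetical CPU panel query (the metric and dashboard variable here are just an example, not necessarily the exact query from dashboard 315):
# before (pre-1.16 label names)
sum(rate(container_cpu_usage_seconds_total{pod_name=~"$pod"}[5m])) by (pod_name)
# after (Kubernetes 1.16+ label names)
sum(rate(container_cpu_usage_seconds_total{pod=~"$pod"}[5m])) by (pod)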
Kubernetes v1.16.0 removed the cAdvisor metric labels pod_name and container_name to match the instrumentation guidelines. Any Prometheus queries that match pod_name and container_name labels (e.g. cAdvisor or kubelet probe metrics) must be updated to use pod and container instead.
You can check:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.16.md#metrics-changes