Prometheus is not compatible with Kubernetes v1.16 - kubernetes

I installed the stable/prometheus Helm chart with some minor changes proposed in helm/charts#17268 to make it compatible with Kubernetes v1.16.
After installation, none of the Kubernetes Grafana dashboards show correct values. I am using dashboard 8769 (https://grafana.com/grafana/dashboards/8769), which provides a lot of information on CPU, memory, network, etc. This dashboard works properly on older Kubernetes versions, but on v1.16 it shows no results. I also randomly tried some other dashboards (8588, 6879, 10551), but they either show only the requested resources for each pod instead of live usage, or show nothing at all.
These dashboards work by sending a PromQL query to Prometheus and displaying the results. For example, this is the PromQL query for CPU usage from dashboard 8769:
sum (rate (container_cpu_usage_seconds_total{id!="/",namespace=~"$Namespace",pod_name=~"^$Deployment.*$"}[1m])) by (pod_name)
I don't know whether I have to change the PromQL or the problem is somewhere else.

Kubernetes 1.16 removed the labels pod_name and container_name from cAdvisor metrics; they were duplicates of pod and container.
You need to change pod_name -> pod and container_name -> container in the Grafana dashboards' JSON models.
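Applied to the CPU usage query from dashboard 8769 quoted above, this gives something like the following (a sketch; only the label names change, the template variables stay as in the original dashboard):
sum (rate (container_cpu_usage_seconds_total{id!="/",namespace=~"$Namespace",pod=~"^$Deployment.*$"}[1m])) by (pod)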

Try the installation this way; the new CRDs had some issues, so I used the old CRDs:
kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.32/example/prometheus-operator-crd/alertmanager.crd.yaml
kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.32/example/prometheus-operator-crd/prometheus.crd.yaml
kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.32/example/prometheus-operator-crd/prometheusrule.crd.yaml
kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.32/example/prometheus-operator-crd/servicemonitor.crd.yaml
kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.32/example/prometheus-operator-crd/podmonitor.crd.yaml
helm install --name prometheus --namespace monitoring stable/prometheus-operator --set prometheusOperator.createCustomResource=false
Make sure the CRDs don't already exist; you can delete them via
kubectl delete crd --all
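Note that kubectl delete crd --all removes every CRD in the cluster, not just the Prometheus Operator ones. A more targeted alternative (a sketch, assuming only the Prometheus Operator CRDs should be removed) is to delete them by name:
kubectl delete crd alertmanagers.monitoring.coreos.com prometheuses.monitoring.coreos.com prometheusrules.monitoring.coreos.com servicemonitors.monitoring.coreos.com podmonitors.monitoring.coreos.com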

Related

Missing information when scraping data in grafana using prometheus and minikube

I am trying to use Prometheus and Grafana to get information like cluster CPU usage, cluster memory usage and pod CPU usage in my Kubernetes cluster. I am using minikube in WSL2. I am using the following commands to get everything up and running:
To start minikube:
$ minikube start
To add repo and install prometheus:
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
$ helm install prometheus prometheus-community/prometheus
To create a NodePort on port 9090:
$ kubectl expose service prometheus-server --type=NodePort --target-port=9090 --name=prometheus-server-ext
To access prometheus server outside minikube:
$ minikube service prometheus-server-ext
To add the Grafana repo and install it with helm:
$ helm repo add grafana https://grafana.github.io/helm-charts
$ helm repo update
$ helm install grafana grafana/grafana
To create a NodePort on port 3000:
$ kubectl expose service grafana --type=NodePort --target-port=3000 --name=grafana-ext
To access grafana server outside minikube:
$ minikube service grafana-ext
To decode and get the Grafana password (username: admin):
$ kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
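Before wiring up the data source, the exposed services and their URLs can be double-checked (a sketch using the service names created above):
$ kubectl get svc prometheus-server-ext grafana-ext
$ minikube service list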
Then I add a data source of type Prometheus with the URL seen above: https://192.168.49.2:30988.
Up to here everything works as expected. Then I import two different dashboards with the IDs "6417" and "12740" and get the dashboards shown in the screenshots (not reproduced here).
My question is: why do I only see the number of pods and cluster memory usage, but no CPU usage of the pods or the cluster? It seems like a lot of information is missing.
Here is the JSON code for dashboard with id 6417: https://pastebin.com/ywrT2qmQ
Here is the JSON code for dashboard with id 12740: https://pastebin.com/2neEzESu
I get the dashboards by importing the IDs 6417 and 12740 (the import screenshot is not included here).

Issues setting up Prometheus on EKS - Pods in Pending State (Seems to be dependent on PVCs waiting on Volume being created)

I have an EKS cluster for my university project and I want to set up Prometheus on the cluster. To do this I am using Helm with the following commands (see this tutorial: https://archive.eksworkshop.com/intermediate/240_monitoring/deploy-prometheus/):
kubectl create namespace prometheus
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus prometheus-community/prometheus \
--namespace prometheus \
--set alertmanager.persistentVolume.storageClass="gp2" \
--set server.persistentVolume.storageClass="gp2"
When I check the status of the prometheus pods, the alert-manager and server seem to be in an infinite Pending state:
When I describe the prometheus-alertmanager-0 pod I see the following VolumeBinding error:
When I describe the prometheus-server-5d858bd4bd-6xmws pod I see the following VolumeBinding error:
I can also see there are 2 PVCs in a Pending state:
When I describe the prometheus-server PVC, I can see it is waiting for a volume to be created:
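The checks above can be reproduced with commands along these lines (a sketch; the namespace and resource names follow the install above, and the exact server pod name will differ):
kubectl get pods -n prometheus
kubectl describe pod prometheus-alertmanager-0 -n prometheus
kubectl get pvc -n prometheus
kubectl describe pvc prometheus-server -n prometheus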
I'm familiar with Kubernetes basics, but PVCs are not something I have used before. Is the solution here to create a "volume", and if so, how do I do that? Would that solve the issue, or am I way off the mark?
Should I try to install Prometheus in a different way?
Any help on this is greatly appreciated.
Note: although similar, this is not a duplicate of "Prometheus server in pending state after installation using Helm". For one, the errors highlighted there are different; also, other manual steps such as creating volumes were performed there (which I have not done). Finally, I am following the specific tutorial referenced above, and I am also asking whether I should set up Prometheus a different way if there is a simpler one.

Grafana dashboard not displaying pod name instead pod_name

I have deployed an application on a Kubernetes cluster and I am using Prometheus and Grafana for monitoring. For Kubernetes pod monitoring I am using the Grafana dashboard "Kubernetes cluster monitoring (via Prometheus)": https://grafana.com/grafana/dashboards/315
I imported the dashboard using ID 315, but it does not show the pod names and container names; instead it refers to pod_name. Can anyone please help with how I can get the pod name and container name in the dashboard?
The provided tutorial was updated 2 years ago.
The current version of Kubernetes is 1.17. As per the tags, the tutorial was tested on Prometheus v1.3.0, Kubernetes v1.4.0 and Grafana v3.1.1, which are quite old at the moment.
In requirements you have statement:
Prometheus will use metrics provided by cAdvisor via kubelet service (runs on each node of Kubernetes cluster by default) and via kube-apiserver service only.
In Kubernetes 1.16 metric labels like pod_name and container_name were removed. Instead of them you need to use pod and container. You can verify it here.
Any Prometheus queries that match pod_name and container_name labels (e.g. cadvisor or kubelet probe metrics) must be updated to use pod and container instead.
Please check this GitHub thread about the dashboard bug for more information.
Solution
Please change pod_name to pod in your query.
Kubernetes v1.16.0 removed the cadvisor metric labels pod_name and container_name to match instrumentation guidelines. Any Prometheus queries that match pod_name and container_name labels (e.g. cadvisor or kubelet probe metrics) must be updated to use pod and container instead.
You can check:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.16.md#metrics-changes
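One way to apply this across a whole dashboard is to do a bulk rename in the exported JSON model before re-importing it (a sketch; the file name is just an example):
sed -e 's/pod_name/pod/g' -e 's/container_name/container/g' dashboard-315.json > dashboard-315-fixed.json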

Hpa not fetching existing custom metric?

I'm using mongodb-exporter to store/query the metrics via Prometheus. I have set up a custom metrics server and it is storing values.
Here is evidence that the Prometheus exporter and the custom metrics server work together:
Query:
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/monitoring/pods/*/mongodb_mongod_wiredtiger_cache_bytes"
Result:
{"kind":"MetricValueList","apiVersion":"custom.metrics.k8s.io/v1beta1","metadata":{"selfLink":"/apis/custom.metrics.k8s.io/v1beta1/namespaces/monitoring/pods/%2A/mongodb_mongod_wiredtiger_cache_bytes"},"items":[{"describedObject":{"kind":"Pod","namespace":"monitoring","name":"mongo-exporter-2-prometheus-mongodb-exporter-68f95fd65d-dvptr","apiVersion":"/v1"},"metricName":"mongodb_mongod_wiredtiger_cache_bytes","timestamp":"TTTTT","value":"0"}]}
In my case, when I create an HPA for this custom metric from the mongo exporter, the HPA returns this error:
failed to get mongodb_mongod_wiredtiger_cache_bytes utilization: unable to get metrics for resource mongodb_mongod_wiredtiger_cache_bytes: no metrics returned from resource metrics API
What is the main issue in my case? I have checked all the configs and the flow looks fine, but where is my mistake?
Help
Thanks :)
In the comments you wrote that you have enabled external.metrics; however, in the original question you had issues with custom.metrics.
In short:
metrics supports only basic metrics like CPU or memory.
custom.metrics allows you to extend the basic metrics to all Kubernetes objects (http_requests, number of pods, etc.).
external.metrics allows you to gather metrics which are not Kubernetes objects:
External metrics allow you to autoscale your cluster based on any
metric available in your monitoring system. Just provide a metric
block with a name and selector, as above, and use the External metric
type instead of Object
For a more detailed description, please check this doc.
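To illustrate the difference, an HPA that scales on the external metric from the exporter could look roughly like this (a hedged sketch: the Deployment name example-app, the namespace and the target value are assumptions, and the metric must actually be exposed through the external metrics API):
kubectl apply -f - <<'EOF'
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: mongo-cache-hpa
  namespace: monitoring
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app              # assumed name of the workload to scale
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: External                 # External, not Object/Pods, for metrics that are not Kubernetes objects
    external:
      metric:
        name: mongodb_mongod_wiredtiger_cache_bytes
      target:
        type: AverageValue
        averageValue: "100000000"  # assumed threshold in bytes
EOF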
Minikube
To verify whether custom.metrics are enabled, you need to execute the command below and check if you can see any metrics-server... pod.
$ kubectl get pods -n kube-system
...
metrics-server-587f876775-9qrtc 1/1 Running 4 5d1h
A second way is to check whether minikube has the metrics-server addon enabled:
$ minikube addons list
...
- metrics-server: enabled
If it is disabled, just execute
$ sudo minikube addons enable metrics-server
✅ metrics-server was successfully enabled
GKE
Currently on GKE, Heapster and metrics-server are turned on by default, but custom.metrics are not supported by default.
You have to install the Prometheus adapter or Stackdriver.
Kubeadm
Kubeadm does not include Heapster or metrics-server out of the box. For easy installation, you can use this YAML.
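For reference, one commonly used metrics-server manifest (which may or may not be the YAML linked above) can be applied with:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml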
Later you have to install the Prometheus adapter.
Apply custom.metrics
It's the same for Minikube, Kubeadm, GKE.
The easiest way to apply custom.metrics is to install the Prometheus adapter via Helm.
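A sketch of such an installation (the release name my-release matches the note below; the Prometheus URL and port are assumptions and should point at your Prometheus service):
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install my-release prometheus-community/prometheus-adapter \
  --set prometheus.url=http://prometheus-server.monitoring.svc \
  --set prometheus.port=80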
After the Helm installation you will be able to see this note:
NOTES:
my-release-prometheus-adapter has been deployed.
In a few minutes you should be able to list metrics using the following command(s):
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
As additional information, you can use jq to get more user-friendly output.
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq .

Install Istio in multi master nodes in kubernetes

I have read about Istio and I need to install it in Kubernetes.
I don't know what the best way is to install Istio in a multi-node Kubernetes cluster.
The setup is a Kubernetes cluster with multiple master nodes and multiple worker nodes.
Is the best way to install it Istio multicluster, or (automatic) sidecar injection?
Regards.
It makes no difference how many master and worker nodes your Kubernetes cluster has when you want to install Istio.
You can follow the instructions from this link
Briefly, you need to:
Download Istio release
Install Istio’s Custom Resource Definitions using kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml from that release
Install the Istio components using one of these options:
without mutual TLS authentication between sidecars using kubectl apply -f install/kubernetes/istio-demo.yaml
with default mutual TLS authentication kubectl apply -f install/kubernetes/istio-demo-auth.yaml
Render Kubernetes manifest with Helm and deploy with kubectl
Use Helm and Tiller to manage the Istio deployment
For auto injection, you need to install the istio-sidecar-injector component and add the istio-injection=enabled label to the Namespace in which you want it to work.
Example of commands:
kubectl label namespace <namespace> istio-injection=enabled
kubectl create -n <namespace> -f <your-app-spec>.yaml
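After deploying, one way to confirm that the sidecar was actually injected is to check the containers of a pod in that namespace (a sketch; an injected pod should list an istio-proxy container next to your application container):
kubectl get pods -n <namespace>
kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.spec.containers[*].name}'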