I have deployed the Istio service mesh on a GKE cluster in the istio-system namespace, using the base and istiod Helm charts and following these docs.
I have deployed Prometheus, Grafana and Alertmanager using the kube-prometheus-stack Helm chart.
Every pod of these workloads is running fine; I don't see any errors. However, I don't get any Istio-related metrics in the Prometheus UI, and because of that I don't see any network graph in the Kiali dashboard.
Can anyone help me resolve this issue?
Istio expects Prometheus to discover which pods are exposing metrics through the use of the Kubernetes annotations prometheus.io/scrape, prometheus.io/port, and prometheus.io/path.
The Prometheus community has decided that those annotations, while popular, are insufficiently useful to be enabled by default. Because of this, the kube-prometheus-stack Helm chart does not discover pods using those annotations.
To get your installation of Prometheus to scrape your Istio metrics, you need to either configure Istio to expose metrics in a way your Prometheus installation already expects (check its configuration; I do not know what it does by default), or add a Prometheus scrape job that performs discovery using the annotations above.
Details about how to integrate Prometheus with Istio are available here and an example Prometheus configuration file is available here.
You need to add additionalScrapeConfigs for Istio in the kube-prometheus-stack Helm chart's values.yaml.
prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - {{ add your scrape config for istio }}
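A sketch of what could go in that placeholder, adapted from Istio's sample Prometheus configuration: one job for istiod's control-plane metrics and one annotation-based job that discovers the sidecar metrics. The job names and the istio-system namespace are assumptions; adjust them to your installation.

      - job_name: istiod
        kubernetes_sd_configs:
          - role: endpoints
            namespaces:
              names:
                - istio-system                  # control-plane namespace (assumption)
        relabel_configs:
          - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
            action: keep
            regex: istiod;http-monitoring
      - job_name: kubernetes-pods               # discovers sidecars via the prometheus.io annotations
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: "true"
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: (.+)
          - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
            action: replace
            regex: ([^:]+)(?::\d+)?;(\d+)
            replacement: $1:$2
            target_label: __address__
          - source_labels: [__meta_kubernetes_namespace]
            action: replace
            target_label: namespace
          - source_labels: [__meta_kubernetes_pod_name]
            action: replace
            target_label: pod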
My Opensearch app runs in Kubernetes.
I found this OpenSearch exporter plugin:
https://github.com/aparo/opensearch-prometheus-exporter
to export metrics to Prometheus.
Is there any option to integrate this solution into Kubernetes?
Yes. After you install the plugin, you need Prometheus running on Kubernetes.
I recommend installing the Prometheus Operator, which creates the Prometheus service for you:
https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/user-guides/getting-started.md
It includes the "ServiceMonitor" kind, which defines which pods behind a Service you would like to monitor.
After you create a "ServiceMonitor" that points at your OpenSearch pods correctly, Prometheus will scrape your metrics.
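For reference, here is a minimal ServiceMonitor sketch. The names, labels, port name and the /_prometheus/metrics path are assumptions; check your OpenSearch Service definition and the exporter plugin's documentation for the actual values.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: opensearch                 # hypothetical name
  namespace: monitoring
  labels:
    release: prometheus            # must match the Prometheus serviceMonitorSelector
spec:
  namespaceSelector:
    matchNames:
      - opensearch                 # namespace of your OpenSearch Service (assumption)
  selector:
    matchLabels:
      app: opensearch              # labels on your OpenSearch Service (assumption)
  endpoints:
    - port: http                   # named Service port exposing the OpenSearch API
      path: /_prometheus/metrics   # metrics path exposed by the exporter plugin (verify)
      interval: 30s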
I recently learned about Helm and how easy it is to deploy the whole Prometheus stack for monitoring a Kubernetes cluster, so I decided to try it out on a staging cluster at my work.
I started by creating a dedicated namespace on the cluster for monitoring with:
kubectl create namespace monitoring
Then, with helm, I added the prometheus-community repo with:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
Next, I installed the chart with a prometheus release name:
helm install prometheus prometheus-community/kube-prometheus-stack -n monitoring
At this time I didn't pass any custom configuration because I'm still trying it out.
After the install is finished, it all looks good. I can access the prometheus dashboard with:
kubectl port-forward prometheus-prometheus-kube-prometheus-prometheus-0 9090 -n monitoring
There, I see a bunch of pre-defined alerts and rules that are monitoring the cluster, but the problem is that I don't quite understand how to create new rules to check the pods in the default namespace, where I actually have my services deployed.
I am looking at http://localhost:9090/graph to play around with queries, but I can't seem to write any that give me metrics on my pods in the default namespace.
I am a bit overwhelmed by the amount of information, so I would like to know what I missed or what I am doing wrong here.
The Prometheus Operator includes several Custom Resource Definitions (CRDs), including ServiceMonitor (and PodMonitor). ServiceMonitors are used to tell the Operator which services to monitor.
I'm familiar with the Operator, though not with the Helm deployment, but I suspect you'll want to create ServiceMonitors to generate metrics for your apps in any namespace (including default).
See: https://github.com/prometheus-operator/prometheus-operator#customresourcedefinitions
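If your pods expose a metrics endpoint but are not fronted by a Service, a PodMonitor works too. A minimal sketch, assuming your app is labeled app: my-app, serves metrics on a named container port called metrics, and your Helm release is named prometheus (kube-prometheus-stack selects monitors carrying the release label by default):

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: my-app                 # hypothetical name
  namespace: monitoring
  labels:
    release: prometheus        # must match the Prometheus podMonitorSelector
spec:
  namespaceSelector:
    matchNames:
      - default                # namespace where your pods run
  selector:
    matchLabels:
      app: my-app              # labels on your pods (assumption)
  podMetricsEndpoints:
    - port: metrics            # named container port serving /metrics (assumption)
      path: /metrics
      interval: 30s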
ServiceMonitors and PodMonitors are CRDs for the Prometheus Operator. When working directly with the Prometheus Helm chart (without the operator), you have to configure your targets directly in values.yaml by editing the scrape_configs section.
It is more complex to do it, so take a deep breath and start by reading this: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config
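As an illustration only (not the Operator-based setup the other answers assume): with the plain prometheus-community/prometheus chart, an extra target can usually be appended through the extraScrapeConfigs value; depending on your chart version the key may differ, so check the chart's values.yaml. The job name, service name and port below are hypothetical.

# values.yaml for the prometheus-community/prometheus chart (no operator)
extraScrapeConfigs: |
  - job_name: my-app
    static_configs:
      - targets:
          - my-app.default.svc:8080   # hypothetical service and metrics port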
We have set up Prometheus + Grafana on our GKE cluster using the stable/prometheus-operator Helm chart. Now we want to export some metrics to Stackdriver because we have installed the custom metrics Stackdriver adapter. We are already using some Pub/Sub metrics from Stackdriver for autoscaling a few deployments. Now we also want to use some Prometheus metrics (mainly the nginx request rate) to autoscale other deployments.
So, my first question: Can we use Prometheus adapter in parallel with Stackdriver adapter for autoscaling in the same cluster?
If not, we will need to install Stackdriver Prometheus Sidecar for exporting the Prometheus metrics to Stackdriver and then use them for autoscaling via Stackdriver adapter.
From the instructions here, it looks like we need to install the Stackdriver sidecar in the same pod in which Prometheus is running. I gave it a try. When I ran the patch.sh script, I got the message statefulset.apps/prometheus-prom-operator-prometheus-o-prometheus patched, but when I inspected the StatefulSet again, it didn't have the Stackdriver sidecar container in it. Since this StatefulSet is created by a Helm chart, we probably can't modify it directly. Is there a recommended way of doing this in Helm?
Thanks to this comment on GitHub, I figured it out. There are so many configuration options accepted by this Helm chart that I missed it while reading the docs.
So, turns out that this Helm chart accepts a configuration option prometheus.prometheusSpec.containers. Its description in the docs says: "Containers allows injecting additional containers. This is meant to allow adding an authentication proxy to a Prometheus pod". But obviously, it is not limited to the authentication proxy and you can pass any container spec here and it will be added to Prometheus StatefulSet created by this Helm chart.
Here is the sample configuration I used. Some key points:
Please replace the values in angle brackets with your actual values.
Feel free to remove the arg --include. I added it because nginx_http_requests_total is the only Prometheus metric I want to send to Stackdriver for now. Check Managing costs for Prometheus-derived metrics for more details about it.
To figure out the name of volume to use in volumeMounts:
List the StatefulSets in the Prometheus Operator namespace. Assuming that you installed it in the monitoring namespace: kubectl get statefulsets -n monitoring
Describe the Prometheus StatefulSet assuming that its name is prometheus-prom-operator-prometheus-o-prometheus: kubectl describe statefulset prometheus-prom-operator-prometheus-o-prometheus -n monitoring
In the details of this StatefulSet, find the container named prometheus and note the value passed to its --storage.tsdb.path arg.
Find the volume that is mounted on this container at the same path. In my case, it was prometheus-prom-operator-prometheus-o-prometheus-db, so I mounted the same volume on my Stackdriver sidecar container as well.
prometheus:
  prometheusSpec:
    containers:
      - name: stackdriver-sidecar
        image: gcr.io/stackdriver-prometheus/stackdriver-prometheus-sidecar:0.7.5
        imagePullPolicy: Always
        args:
          - --stackdriver.project-id=<GCP PROJECT ID>
          - --prometheus.wal-directory=/prometheus/wal
          - --stackdriver.kubernetes.location=<GCP PROJECT REGION>
          - --stackdriver.kubernetes.cluster-name=<GKE CLUSTER NAME>
          - --include=nginx_http_requests_total
        ports:
          - name: stackdriver
            containerPort: 9091
        volumeMounts:
          - name: prometheus-prom-operator-prometheus-o-prometheus-db
            mountPath: /prometheus
Save this yaml to a file. Let's assume you saved it to prom-config.yaml
Now, find the release name you have used to install Prometheus Operator Helm chart on your cluster:
helm list
Assuming that release name is prom-operator, you can update this release according to the config composed above by running this command:
helm upgrade -f prom-config.yaml prom-operator stable/prometheus-operator
I hope you found this helpful.
I want to configure Istio in such a way that it does not use the Prometheus or Grafana that come with it by default. I want to use my existing Prometheus and Grafana, which are already deployed in the cluster. Any help will be appreciated.
You need to configure your existing Prometheus with a scrape configuration. For the Prometheus config you can use this ConfigMap. For Grafana you need to configure your Prometheus as a datasource, and you can use this ConfigMap for that. You can generate the ConfigMaps using helm template and use them.
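Grafana datasources can also be provisioned from a small YAML file (for example, mounted via a ConfigMap under /etc/grafana/provisioning/datasources). A sketch pointing Grafana at an in-cluster Prometheus; the service URL is an assumption, so adjust it to your Prometheus Service:

# e.g. /etc/grafana/provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus.monitoring.svc:9090   # adjust to your Prometheus service
    isDefault: true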
Check this guide
I already have some services in my k8s cluster and want to maintain them separately. Examples:
grafana with custom dashboards and custom dockerfile
prometheus-operator instead of basic prometheus
jaeger pointing to elasticsearch as internal storage
certmanager in my own namespace (also I use it for nginx-ingress legacy routing)
Is it possible to use existing instances instead of creating Istio-specific ones? Can Istio communicate with them, or is it hardcoded?
Yes - it is possible to use external services with Istio. You can disable Grafana and Prometheus just by setting the proper flags in the values.yaml of the Istio Helm chart (grafana.enabled=false, etc.).
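For example, with the legacy Istio Helm chart the bundled addons can be switched off in values.yaml roughly like this (the flag names vary between Istio releases, so verify them against the chart version you use):

grafana:
  enabled: false
prometheus:
  enabled: false
tracing:
  enabled: false   # bundled Jaeger
kiali:
  enabled: false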
You can check the kyma-project to see how Istio is integrated with prometheus-operator, a Grafana deployment with custom dashboards, and a custom Jaeger deployment. From your list only certmanager is missing.
Kubernetes provides quite a big variety of networking and load-balancing features out of the box. However, using Istio sidecars to simplify and extend that functionality is a good choice, as they are injected into Pods to proxy the traffic between internal Kubernetes services.
You can inject sidecars manually or automatically. If you want to control injection for an individual Pod, make sure to add the appropriate annotation under the Pod's metadata:
annotations:
  sidecar.istio.io/inject: "true"
Automatic sidecar injection requires the MutatingAdmissionWebhook admission controller, available since Kubernetes 1.9, so sidecars can be injected as part of the Pod creation process as well.
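For automatic injection, enabling it per namespace is usually enough. A sketch, assuming the default istio-injection webhook configuration and a workload namespace called default:

kubectl label namespace default istio-injection=enabled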
Familiarize yourself with this article to shed light on using the different monitoring and traffic-management tools in Istio.