I have a ready-made Kubernetes cluster with Grafana + Prometheus (Operator) monitoring configured.
I added the following labels to the pods of my app:
prometheus.io/scrape: "true"
prometheus.io/path: "/my/app/metrics"
prometheus.io/port: "80"
But the metrics don't make it into Prometheus, even though Prometheus has all the default Kubernetes metrics.
What is the problem?
The prometheus.io/* annotations are only honored by scrape configurations written to look for them; an Operator-managed Prometheus does not use them by default. You should create ServiceMonitor or PodMonitor objects instead.
A ServiceMonitor describes the set of targets to be monitored by Prometheus. The Operator automatically generates the Prometheus scrape configuration based on the definition, and the targets will be the IPs of all the pods behind the service.
Example:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  labels:
    team: frontend
spec:
  selector:
    matchLabels:
      app: example-app
  endpoints:
  - port: web
A PodMonitor declaratively specifies how groups of pods should be monitored. The Operator automatically generates the Prometheus scrape configuration based on the definition.
Example:
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: example-app
  labels:
    team: frontend
spec:
  selector:
    matchLabels:
      app: example-app
  podMetricsEndpoints:
  - port: web
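Note that the Operator only generates scrape configuration for monitors that are actually selected by the Prometheus custom resource. A minimal sketch of such a selection, reusing the team: frontend label from the examples above (the Prometheus resource name and namespace are placeholders):

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus        # placeholder name
  namespace: monitoring   # placeholder namespace
spec:
  serviceMonitorSelector:
    matchLabels:
      team: frontend
  podMonitorSelector:
    matchLabels:
      team: frontend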
Related
Can somebody explain the logic to me, or how I should proceed with the following problem? I have a Prometheus CR with the following ServiceMonitor selector.
Name: k8s
Namespace: monitoring
Labels: prometheus=k8s
Annotations: <none>
API Version: monitoring.coreos.com/v1
Kind: Prometheus
...
Service Monitor Namespace Selector:
Service Monitor Selector:
...
Prometheus is capable of discovering all the ServiceMonitors it created itself, but it does not discover mine (newly created). Is the configuration above supposed to match everything, or do you know how to accomplish this (that is, to match every single ServiceMonitor)?
Example of my ServiceMonitor:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  namespace: monitoring
  labels:
    # release: prometheus
    # team: frontend
spec:
  selector:
    matchLabels:
      app: example-app
  namespaceSelector:
    # matchNames:
    # - default
    matchNames:
    - e
  endpoints:
  - port: web
Rest of the details:
I know that I can discover it with something like this, but it would require changing all the other monitors:
serviceMonitorSelector:
  matchLabels:
    team: frontend
I don't want to install the Prometheus Operator using Helm, so instead I installed it from https://github.com/prometheus-operator/kube-prometheus#warning.
If you just want to discover all the ServiceMonitors on a given cluster that Prometheus and the Prometheus Operator have access to with their respective RBAC, you can use an empty selector like so:
serviceMonitorNamespaceSelector: {}
serviceMonitorSelector: {}
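For context, these fields sit in the spec of the Prometheus custom resource, so a minimal sketch based on the k8s/monitoring resource from the describe output above would be:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
  namespace: monitoring
spec:
  # Empty selectors match every ServiceMonitor in every namespace
  # that Prometheus and the Operator can reach with their RBAC.
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelector: {}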
I'm trying to make Prometheus find metrics from RabbitMQ (and a few other services, but the logic is the same).
My current configuration is:
# RabbitMQ Service
# This lives in the `default` namespace
kind: Service
apiVersion: v1
metadata:
  name: rabbit-metrics-service
  labels:
    name: rabbit-metrics-service
spec:
  ports:
  - protocol: TCP
    port: 15692
    targetPort: 15692
  selector:
    # This selects the deployment and it works
    app: rabbitmq
I then created a ServiceMonitor:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  # The name of the service monitor
  name: rabbit-monitor
  # The namespace it will be in
  namespace: kube-prometheus-stack
  labels:
    # How to find this service monitor
    # The name I should use in `serviceMonitorSelector`
    name: rabbit-monitor
spec:
  endpoints:
  - interval: 5s
    port: metrics
    path: /metrics
  # The namespace of origin service
  namespaceSelector:
    matchNames:
    - default
  selector:
    matchLabels:
      # Where the monitor will attach to
      name: rabbit-metrics-service
kube-prometheus-stack has the following values.yml
# values.yml
prometheusSpec:
  serviceMonitorSelector:
    matchLabels:
      name:
        - rabbit-monitor
So, from what I understand: in the metadata/labels section I define a labelKey/labelValue pair, and then reference this pair in selector/matchLabels. I then add a custom serviceMonitorSelector that will match N labels. If it finds the labels, Prometheus should discover the ServiceMonitor, and hence the metrics endpoint, and start scraping. But I guess there's something wrong with this logic. I tried a few other variations of this as well, but had no success.
Any ideas on what I might be doing wrong?
The documentation usually uses the same name everywhere, so I can't quite understand where exactly that name should come from, since I prefer to add -service and -deployment suffixes to resources so I can easily identify them later. I already added the RabbitMQ Prometheus plugin, and the endpoint seems to be working fine.
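For comparison, matchLabels maps each label key to a single string value rather than a list, so a hedged sketch of the override in the shape used elsewhere in this thread (assuming the ServiceMonitor keeps its name: rabbit-monitor label) would be:

# values.yml (sketch; nesting under prometheus: follows the other snippets in this thread)
prometheus:
  prometheusSpec:
    serviceMonitorSelector:
      matchLabels:
        name: rabbit-monitor

It may also be worth checking that the Service port is actually named metrics, since the ServiceMonitor endpoint refers to port: metrics while the Service above does not name its port.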
I want to collect metrics from a Deployment (with multiple pods) in Kubernetes, and one of my metrics is the number of calls that my deployment received. My question is about Prometheus: how can I tell Prometheus to call all the pods that are part of the Deployment and collect metrics from them? And what is the best practice to achieve this goal?
I would highly recommend using prometheus-operator to do all the heavy lifting of configuring Prometheus monitoring for your applications.
For example, having a Deployment and Service like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: fabxc/instrumented_app
        ports:
        - name: web
          containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: example-app
  labels:
    app: example-app
spec:
  selector:
    app: example-app
  ports:
  - name: web
    port: 8080
You may configure a ServiceMonitor object which will use the Service as a service discovery endpoint to find all the pods of the Deployment. This assumes that your application exposes metrics on the HTTP path /metrics.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  labels:
    team: frontend
spec:
  selector:
    matchLabels:
      app: example-app
  endpoints:
  - port: web
This will make Prometheus scrape metrics for your application.
You may read more about ServiceMonitors here: https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md
I have a service running in a k8s cluster, which I want to monitor using Prometheus Operator. The service has a /metrics endpoint, which returns simple data like:
myapp_first_queue_length 12
myapp_first_queue_processing 2
myapp_first_queue_pending 10
myapp_second_queue_length 4
myapp_second_queue_processing 4
myapp_second_queue_pending 0
The API runs in multiple pods, behind a basic Service object:
apiVersion: v1
kind: Service
metadata:
  name: myapp-api
  labels:
    app: myapp-api
spec:
  ports:
  - port: 80
    name: myapp-api
    targetPort: 80
  selector:
    app: myapp-api
I've installed Prometheus using kube-prometheus, and added a ServiceMonitor object:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: myapp-api
  labels:
    app: myapp-api
spec:
  selector:
    matchLabels:
      app: myapp-api
  endpoints:
  - port: myapp-api
    path: /api/metrics
    interval: 10s
Prometheus discovers all the pods running instances of the API, and I can query those metrics from the Prometheus graph. So far so good.
The issue is, those metrics are aggregate - each API instance/pod doesn't have its own queue, so there's no reason to collect those values from every instance. In fact it seems to invite confusion - if Prometheus collects the same value from 10 pods, it looks like the total value is 10x what it really is, unless you know to apply something like avg.
Is there a way to either tell Prometheus "this value is already aggregate and should always be presented as such" or better yet, tell Prometheus to just scrape the values once via the internal load balancer for that service, rather than hitting each pod?
edit
The actual API is just a simple Deployment object:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-api
  labels:
    app: myapp-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp-api
  template:
    metadata:
      labels:
        app: myapp-api
    spec:
      imagePullSecrets:
      - name: mysecret
      containers:
      - name: myapp-api
        image: myregistry/myapp:2.0
        ports:
        - containerPort: 80
        volumeMounts:
        - name: config
          mountPath: "app/config.yaml"
          subPath: config.yaml
      volumes:
      - name: config
        configMap:
          name: myapp-api-config
In your case, to avoid metrics aggregation you can use, as already mentioned in your post, the avg() operator, or a PodMonitor instead of a ServiceMonitor.
The PodMonitor custom resource definition (CRD) allows to declaratively define how a dynamic set of pods should be monitored. Which pods are selected to be monitored with the desired configuration is defined using label selections.
This way it will scrape the metrics from the specified pod only.
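As an illustration, a minimal PodMonitor sketch for the Deployment in the question, assuming the containerPort is given a name (web here) so the endpoint can refer to it:

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: myapp-api
  labels:
    app: myapp-api
spec:
  selector:
    matchLabels:
      app: myapp-api
  podMetricsEndpoints:
  - port: web            # assumes the container port is named "web" in the Deployment
    path: /api/metrics
    interval: 10s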
Prometheus Operator developers are kindly working (as of Jan 2023) on a generic ScrapeConfig CRD that is designed to solve exactly the use case you describe: https://github.com/prometheus-operator/prometheus-operator/issues/2787
In the meantime, you can use the "additional scrape config" facility of Prometheus Operator to set up a custom scrape target.
The idea is that the configured scrape target will be hit only once per scrape period and the service DNS will load-balance the request to one of the N pods behind the service, thus avoiding duplicate metrics.
In particular, you can override the kube-prometheus-stack Helm values as follows:
prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
    - job_name: 'myapp-api-aggregates'
      metrics_path: '/api/metrics'
      scheme: 'http'
      static_configs:
      - targets: ['myapp-api:80']
I am trying to monitor an external service (an exporter of Cassandra metrics) with prometheus-operator. I installed prometheus-operator using Helm 2.11.0. I installed it using this YAML:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
and ran these commands on my Kubernetes cluster:
kubectl create -f rbac-config.yml
helm init --service-account tiller --history-max 200
helm install stable/prometheus-operator --name prometheus-operator --namespace monitoring
Next, based on the article how to monitor an external service, I tried to follow the steps described in it. As suggested, I created an Endpoints, a Service, and a ServiceMonitor with a label for the existing Prometheus. Here are my YAML files:
apiVersion: v1
kind: Endpoints
metadata:
  name: cassandra-metrics80
  labels:
    app: cassandra-metrics80
subsets:
- addresses:
  - ip: 10.150.1.80
  ports:
  - name: web
    port: 7070
    protocol: TCP
apiVersion: v1
kind: Service
metadata:
  name: cassandra-metrics80
  namespace: monitoring
  labels:
    app: cassandra-metrics80
    release: prometheus-operator
spec:
  externalName: 10.150.1.80
  ports:
  - name: web
    port: 7070
    protocol: TCP
    targetPort: 7070
  type: ExternalName
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: cassandra-metrics80
  labels:
    app: cassandra-metrics80
    release: prometheus-operator
spec:
  selector:
    matchLabels:
      app: cassandra-metrics80
      release: prometheus-operator
  namespaceSelector:
    matchNames:
    - monitoring
  endpoints:
  - port: web
    interval: 10s
    honorLabels: true
And on the Prometheus service discovery page I can see that this service is not active and all its labels are dropped.
I tried numerous things to fix this, like setting targetLabels and trying to relabel the targets that are discovered, as described here: prometheus relabeling. But unfortunately nothing works. What could be the issue, or how can I investigate it better?
OK, I found out that the Service should be in the same namespace as the ServiceMonitor and the Endpoints; after that, Prometheus started to see some metrics from Cassandra.
When using the kube-prometheus-stack Helm chart, it can be done as follows:
prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
    - job_name: external
      metrics_path: /metrics
      static_configs:
      - targets:
        - <IP>:<PORT>
To be strict, only the "Endpoints" and the "Service" need to be in the same namespace.
Additionally, the "Endpoints" and the "Service" should have the same name, as Lucas mentioned before.
The ServiceMonitor can be placed anywhere; it finds and scrapes the Service/Endpoints inside the defined namespaces (namespaceSelector -> matchNames) and matching all the labels (selector -> matchLabels):
spec:
  selector:
    matchLabels:
      app: cassandra-metrics80
      release: prometheus-operator
  namespaceSelector:
    matchNames:
    - my-namespace
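For illustration, a hedged sketch of the Service/Endpoints pairing described above, with both objects sharing the same name and namespace. This variant uses a Service without a selector (instead of the ExternalName type from the question), which is the usual pattern when the Endpoints object is created manually:

apiVersion: v1
kind: Service
metadata:
  name: cassandra-metrics80
  namespace: monitoring
  labels:
    app: cassandra-metrics80
    release: prometheus-operator
spec:
  # No selector: the manually created Endpoints object below is used as-is.
  ports:
  - name: web
    port: 7070
    targetPort: 7070
---
apiVersion: v1
kind: Endpoints
metadata:
  name: cassandra-metrics80   # must match the Service name
  namespace: monitoring       # must be in the same namespace as the Service
subsets:
- addresses:
  - ip: 10.150.1.80
  ports:
  - name: web
    port: 7070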
Furthermore, there is now a much easier method to define additional scraping:
https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/additional-scrape-config.md
The only drawback of the second approach is that it requires a pod restart after the change. Configuration based on Endpoints/Service/ServiceMonitor seems to be discovered and applied automatically.
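For reference, a rough sketch of that additional-scrape-config approach from the linked documentation: the raw scrape config lives in a Secret, and the Prometheus resource references it via its spec (the Secret name and job name below are placeholders; the target reuses the exporter address from the question):

apiVersion: v1
kind: Secret
metadata:
  name: additional-scrape-configs   # placeholder name
  namespace: monitoring
stringData:
  prometheus-additional.yaml: |
    - job_name: external-cassandra   # placeholder job name
      static_configs:
      - targets:
        - 10.150.1.80:7070
---
# merged into the spec of your existing Prometheus resource
spec:
  additionalScrapeConfigs:
    name: additional-scrape-configs
    key: prometheus-additional.yaml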