Prometheus many-to-many problem for kube cronjobs - kubernetes

Hi there,
I'm trying to configure Kubernetes CronJob monitoring & alerts with Prometheus. I found this helpful guide.
But I always get a "many-to-many matching not allowed: matching labels must be unique on one side" error.
For example, this is the PromQL query which triggers this error:
max(
kube_job_status_start_time
* ON(job_name) GROUP_RIGHT()
kube_job_labels{label_cronjob!=""}
) BY (job_name, label_cronjob)
The queries by themselves return e.g. these series:
kube_job_status_start_time:
kube_job_status_start_time{app="kube-state-metrics",chart="kube-state-metrics-0.12.1",heritage="Tiller",instance="REDACTED",job="kubernetes-service-endpoints",job_name="test-1546295400",kubernetes_name="kube-state-metrics",kubernetes_namespace="monitoring",kubernetes_node="REDACTED",namespace="test-develop",release="kube-state-metrics"}
kube_job_labels{label_cronjob!=""}:
kube_job_labels{app="kube-state-metrics",chart="kube-state-metrics-0.12.1",heritage="Tiller",instance="REDACTED",job="kubernetes-service-endpoints",job_name="test-1546295400",kubernetes_name="kube-state-metrics",kubernetes_namespace="monitoring",kubernetes_node="REDACTED",label_cronjob="test",label_environment="test-develop",namespace="test-develop",release="kube-state-metrics"}
Is there something I'm missing here? The same many-to-many error happens for every query I tried from the guide.
Even constructing the query myself from the ground up resulted in the same error.
Hope you can help me out here :)

In my case I don't get this extra exported_job label from Prometheus when installed via Helm (stable/prometheus-operator).
You need to configure it in Prometheus. The setting is called honor_labels: false.
# If honor_labels is set to "false", label conflicts are resolved by renaming
# conflicting labels in the scraped data to "exported_<original-label>" (for
# example "exported_instance", "exported_job") and then attaching server-side
# labels.
So you have to set honor_labels: false for the relevant scrape job in your prometheus.yml config.
# Setting honor_labels to "true" is useful for use cases such as federation and
# scraping the Pushgateway, where all labels specified in the target should be
# preserved
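For context, a minimal sketch of where this sits in the scrape config (the job name and service-discovery details are illustrative, not taken from the setup above):
scrape_configs:
  - job_name: kubernetes-service-endpoints
    honor_labels: false  # the default; clashing scraped labels are renamed to exported_<label>
    kubernetes_sd_configs:
      - role: endpoints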
Anyway, with it set like this (I now get exported_job labels), I still can't do a proper query, but I guess that is still because of my LHS:
Error executing query: found duplicate series for the match group
{exported_job="kube-state-metrics"} on the left hand-side of the operation:
[{__name__=

I ran into the same issue when I followed that article, but in my case I actually get duplicate job names across different namespaces.
E.g. when running kube_job_status_start_time:
kube_job_status_start_time{instance="REDACTED",job="kube-state-metrics",job_name="job-abc-123",namespace="us"}
kube_job_status_start_time{instance="REDACTED",job="kube-state-metrics",job_name="job-abc-123",namespace="ca"}
So I had to either add a filter for the namespace or add namespace into the ON/BY clauses to get it to be unique.
e.g. for one of the subqueries I had to do this:
max(
kube_job_status_start_time
* ON(namespace, job_name) GROUP_RIGHT()
kube_job_labels{label_cronjob!=""}
) BY (namespace, label_cronjob)
Essentially I had to apply that principle to all the rest of the queries to get it to work. Not sure if that applies in your case.
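For example, the guide's completion-time subquery becomes the following with the same namespace-aware matching (a sketch; kube_job_status_completion_time is the corresponding kube-state-metrics metric, but your label set may differ):
max(
kube_job_status_completion_time
* ON(namespace, job_name) GROUP_RIGHT()
kube_job_labels{label_cronjob!=""}
) BY (namespace, label_cronjob)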

Replacing kube_job_status_start_time with max(kube_job_status_start_time) by (job_name) will aggregate away any duplicates and should resolve the error.
The resulting query will look like this:
max(
max(kube_job_status_start_time) by (job_name)
* ON(job_name) GROUP_RIGHT()
kube_job_labels{label_cronjob!=""}
) BY (job_name, label_cronjob)
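The same inner aggregation works for the guide's other subqueries, e.g. for failure status (a sketch, assuming kube-state-metrics' kube_job_status_failed):
max(
max(kube_job_status_failed) by (job_name)
* ON(job_name) GROUP_RIGHT()
kube_job_labels{label_cronjob!=""}
) BY (job_name, label_cronjob)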

I dug into this issue a bit more, and I guess the root cause is this one-to-many vector matching expression:
kube_job_status_start_time * ON(job_name) GROUP_RIGHT() kube_job_labels{label_cronjob!=""}
where the group modifier GROUP_RIGHT() suggests that each vector element from the left side (kube_job_status_start_time) can match multiple elements on the right side (kube_job_labels), based on the common label (job_name). The thing is that we are really dealing with many-to-many matching here, as each vector element from the right side can also match multiple elements from the left vector:
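As an illustration, two Jobs with the same job_name in different namespaces (as in the answer above) already give two candidates on each side of the match (series shortened, namespaces hypothetical):
kube_job_status_start_time{job_name="test-1546295400",namespace="test-develop",...}
kube_job_status_start_time{job_name="test-1546295400",namespace="test-staging",...}
kube_job_labels{job_name="test-1546295400",namespace="test-develop",label_cronjob="test",...}
kube_job_labels{job_name="test-1546295400",namespace="test-staging",label_cronjob="test",...}
Neither side is unique per job_name, so ON(job_name) fails with the many-to-many error.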
I think what we are missing here is a way to uniquely identify the exported Kubernetes Job objects in Prometheus. The author of this blog post mentions this feature in his setup:
...Prometheus resolves this collision of label names by including the
raw metric’s label as an exported_job label...
In my case I don't get this extra label from Prometheus when installed via helm (stable/prometheus-operator).

Regarding the missing labels: make sure that your kube-state-metrics is configured with a --metric-labels-allowlist. This is "new" since kube-state-metrics v2. See https://kubernetes.io/blog/2021/04/13/kube-state-metrics-v-2-0/#what-is-new-in-v2-0
By default, the metric contains only the name and namespace labels.
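For example (a sketch; which Job labels you need to allow depends on your CronJobs, and the names here are illustrative):
--metric-labels-allowlist=jobs=[cronjob,environment]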
But... the original guide does not work with newer kube-state-metrics anyway. I can recommend this guide, which is a rework and does not need the labels.

Related

Sum query result by name specified by regex

I am using Grafana together with Prometheus to display data about the Pods in my Kubernetes cluster. Here I am displaying memory usage for each Pod by name:
sum (container_memory_working_set_bytes{namespace="namespace1", image!="",name=~"^k8s_.*",kubernetes_io_hostname=~"^$Node$"}) by (pod_name)
It gives the correct result for each pod. For example:
namespace1-eventstore-1
namespace1-eventstore-0
avsandbox-X-64ff4d-rl9z6
avsandbox-X-64ff4d-ldfnx
avsandbox-Y-7d9df9ddff-asdf
avsandbox-Y-7d9df9ddff-dfas
avsandbox-Z-5957dbaf58dt-gds24
avsandbox-Z-5957dbaf58dt-g4gd7
Now I want to sum them by their respective names, to get the following result (or the closest I can get to it):
namespace1-eventstore
avsandbox-X
avsandbox-Y
avsandbox-Z
So in conclusion, I want to sum everything that has the same name before the second -. How can I achieve that?
Edit: Here's a further example of what I'm looking for (hopefully a practical example helps convey the general idea):
sum (container_memory_working_set_bytes{namespace="namespace1", image!="",name=~"^k8s_.*",kubernetes_io_hostname=~"^$Node$"}) by (pod_name="([a-zA-Z0-9]+-[a-zA-Z0-9])-.*")
But that's not possible, because it isn't valid syntax.
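A common workaround is to derive a new label with PromQL's label_replace and then sum by it. A sketch (pod_group is a hypothetical label name; the regex captures everything before the second -):
sum(
label_replace(
container_memory_working_set_bytes{namespace="namespace1", image!="", name=~"^k8s_.*", kubernetes_io_hostname=~"^$Node$"},
"pod_group", "$1", "pod_name", "([^-]+-[^-]+)-.*"
)
) by (pod_group)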

How to nest variables in grafana?

I have a simple Custom variable called route with e.g. this value:
/foo/bar,/foo/baz,/foo/baz/foo
I'm trying to map these values to some more understandable values, e.g. Custom route_names:
bar,baz,foo
Searching on Google turned up people doing nested variables, but whatever I try in Grafana 5.3.4, I can't get it to work. If I create a Query variable and use -- Grafana -- as the source, I don't know what to put in the query field. route.* didn't do anything, and neither did $route.
What is the correct way of selecting a value from one variable and mapping it to the other? I.e. what query language is used when selecting -- Grafana -- as the datasource?
As a side note, I have two datasources at the moment: my actual datasource, where I get my graph data from, and -- Grafana --.
The answer below is correct; you can solve the "key/value pairs" problem with: SELECT 'txt1' AS __text, 'value1' AS __value UNION SELECT 'txt2' AS __text, 'value2' AS __value
This is not possible with Custom template variables (unless something has changed in recent Grafana versions). It can be done with variables coming from MySQL, Postgres, and ClickHouse datasource queries. See the examples in the https://community.grafana.com/t/key-value-style-for-custom-template-variable-configuration-and-usage/3109 thread. I can't speak for this feature's support in other datasource types.
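Applied to the example above, the variable query could look like this (a sketch, assuming e.g. a MySQL datasource backs the variable):
SELECT '/foo/bar' AS __value, 'bar' AS __text
UNION SELECT '/foo/baz' AS __value, 'baz' AS __text
UNION SELECT '/foo/baz/foo' AS __value, 'foo' AS __text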

PromQL metric query returning other metrics than what I want

I must just not understand PromQL yet, but everything I read says this query should work fine:
node_cpu
Really simple, right? It's the name of my metric, and I do get those series in my result set:
node_cpu{app="prometheus",chart="prometheus-6.2.1",component="node-exporter",cpu="cpu0",heritage="Tiller",instance="10.85.166.16:9100",io_cattle_field_appId="prometheus",job="kubernetes-service-endpoints",kubernetes_name="prometheus-node-exporter",kubernetes_namespace="prometheus",mode="guest_nice",release="prometheus"} 0
node_cpu{app="prometheus",chart="prometheus-6.2.1",component="node-exporter",cpu="cpu0",heritage="Tiller",instance="10.85.166.16:9100",io_cattle_field_appId="prometheus",job="kubernetes-service-endpoints",kubernetes_name="prometheus-node-exporter",kubernetes_namespace="prometheus",mode="idle",release="prometheus"} 1784679.96
node_cpu{app="prometheus",chart="prometheus-6.2.1",component="node-exporter",cpu="cpu0",heritage="Tiller",instance="10.85.166.16:9100",io_cattle_field_appId="prometheus",job="kubernetes-service-endpoints",kubernetes_name="prometheus-node-exporter",kubernetes_namespace="prometheus",mode="iowait",release="prometheus"} 2897.73
But I also get a ton of other, unwanted metrics:
kubelet_runtime_operations_latency_microseconds_count{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_os="linux",instance="la-1pk8s-w4",job="kubernetes-nodes",kubernetes_io_hostname="la-1pk8s-w4",node_role_kubernetes_io_worker="true",operation_type="image_status"}
container_start_time_seconds{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_os="linux",id="/docker/8effa9b35affbf17118e7cc83a586d70da9fa960097ab717076c7251bf4eb324",image="rancher/rke-tools:v0.1.13",instance="la-1pk8s-w2",job="kubernetes-nodes-cadvisor",kubernetes_io_hostname="la-1pk8s-w2",name="rke-log-linker-nginx-proxy",node_role_kubernetes_io_worker="true"}
storage_operation_duration_seconds_bucket{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_os="linux",instance="la-1pk8s-w4",job="kubernetes-nodes",kubernetes_io_hostname="la-1pk8s-w4",le="0.1",node_role_kubernetes_io_worker="true",operation_name="volume_unmount",volume_plugin="kubernetes.io/configmap"}
Not sure why they are there; strange. So I figure I'll filter on the label component="node-exporter", since that label only exists on the metrics I want.
node_cpu{component="node-exporter"} yields the same result set.
node_cpu{component=~"node-exporter"} yields the same result set.
Why can't I just get all node_cpu metrics and why is the filtering not working? Thanks.
Either this is a bug that was fixed in Prometheus 2.3.0, or you have a remote_read configured that's returning undesired results.
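If it is the latter, reviewing the remote_read block in your Prometheus config is a reasonable first step (a sketch; the URL is a placeholder):
remote_read:
  - url: http://remote-storage.example:9201/read
    read_recent: false  # don't query remote storage for time ranges local storage has in full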

Combine Grafana metrics with mismatched labels

I have two metrics (relating to memory usage in my Kubernetes pods) defined as follows:
kube_pod_container_resource_limits_memory_bytes{app="kube-state-metrics",container="foo",instance="10.244.0.7:8080",job="kubernetes-endpoints",kubernetes_name="kube-state-metrics",kubernetes_namespace="monitoring",namespace="test",node="aks-nodepool1-25518080-0",pod="foo-cb9bc5fb5-2bghz"}
container_memory_working_set_bytes{agentpool="nodepool1",beta_kubernetes_io_arch="amd64",beta_kubernetes_io_instance_type="Standard_A2",beta_kubernetes_io_os="linux",container_name="foo",failure_domain_beta_kubernetes_io_region="westeurope",failure_domain_beta_kubernetes_io_zone="1",id="/kubepods/burstable/pod5b0099a9-eeff-11e8-884b-ca2011a99774/eeb183b21e2b3226a32de41dd85d7a2e9fc8715cf31ea7109bfbb2cae7c00c44",image="#sha256:6d6003ba86a0b7f74f512b08768093b4c098e825bd7850db66d11f66bc384870",instance="aks-nodepool1-25518080-0",job="kubernetes-cadvisor",kubernetes_azure_com_cluster="MC_test.planned.bthbygg.se_bthbygg-test_westeurope",kubernetes_io_hostname="aks-nodepool1-25518080-0",kubernetes_io_role="agent",name="k8s_foo_foo-cb9bc5fb5-2bghz_test_5b0099a9-eeff-11e8-884b-ca2011a99774_0",namespace="test",pod_name="foo-cb9bc5fb5-2bghz",storageprofile="managed",storagetier="Standard_LRS"}
I want to combine these two into a percentage, by doing something like
container_memory_working_set_bytes{namespace="test"}
/ kube_pod_container_resource_limits_memory_bytes{namespace="test"}
but that gives me no data back, presumably because there are no matching labels to join the data sets on. As you can see, I do have matching label values, but the label names don't match.
Is there some way I can formulate my query to join these on e.g. pod == pod_name, without having to change the metrics at the other end (where they are exported)?
You can use the PromQL label_replace function to create a new matching label from the original labels.
For instance, you can use the expression below to add a container_name="foo" label to the first metric, which can then be used to do the join:
label_replace(
kube_pod_container_resource_limits_memory_bytes,
"container_name", "$1", "container", "(.*)")
You can use the above pattern to create new labels that can be used for the matching.
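Putting it together, the full percentage query could look like this (a sketch; it adds a pod_name label the same way and matches on the shared labels, which you may need to adjust to whatever your series actually carry):
100 *
container_memory_working_set_bytes{namespace="test"}
/ ON(namespace, pod_name, container_name)
label_replace(
label_replace(
kube_pod_container_resource_limits_memory_bytes{namespace="test"},
"container_name", "$1", "container", "(.*)"),
"pod_name", "$1", "pod", "(.*)")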

How to Query Jackrabbit for same name siblings

Is it possible to find same-name siblings (SNS) using JCR-SQL2, JCR-SQL, or QueryBuilder in Adobe CQ5/Adobe Experience Manager? I'm trying to match those nodes with a query using the following criteria, without having to traverse the whole repository (a slow and long-running operation):
if (node.getIndex() > 1) {
    // this node matches the SNS criteria
}
SNS are defined as follows:
/a/b/c
/a/b/c[2]
/a/b/c[3]
/a/b[2]/c[2]
/a/b/c[3]
/a/d/f
/a/d/f[2]
So the result of the query should include /a/b/c[2], /a/b/c[3], /a/b[2]/c[2], /a/b/c[3], /a/d/f[2].
Adobe published a helpful article for this at:
https://helpx.adobe.com/experience-manager/kb/find-sns-nodes.html
EDIT: One possible query for this:
SELECT [jcr:path] FROM [nt:base] WHERE ISDESCENDANTNODE('/') AND [jcr:path] LIKE '%\]'
The idea is that Oak queries can find indexed nodes that were migrated via the SNS resolution logic: such node names (and hence their paths) contain ], which makes them selectable via the above query.
Use this query with caution, as there are a lot of OOTB system nodes that have ] in their name, and this is by design.
You can change [nt:base] to a more specific node type covered by a relevant Oak index for better filtering.
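For example, to narrow the scan to a content subtree and skip most of the OOTB system nodes (a sketch; /content is an assumed content root):
SELECT [jcr:path] FROM [nt:base] WHERE ISDESCENDANTNODE('/content') AND [jcr:path] LIKE '%\]'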
HTH