Is it possible to sum values within log lines in Loki / Grafana?

If I have two log lines, e.g.:
01/02/2022 - my log line - arbitrary number: 40
02/02/2022 - another log line - arbitrary number: 60
Within Grafana (or loki) is it possible to add together the 'arbitrary numbers' and give me an output of 100?
The use case is that I have multiple labels, and each label has thousands of log lines. I want to create a dashboard which displays a sum of the 'arbitrary numbers' per label.
From what I've seen in the docs (https://grafana.com/docs/loki/latest/logql/), I get the feeling that Loki / Grafana can only be used for metadata of the log stream, and not for manipulating the log values themselves?
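For what it's worth, LogQL metric queries can extract and sum numbers from log lines via unwrap. A minimal sketch (the stream selector {job="my-app"}, the capture name num and the grouping label mylabel are placeholders, not taken from the question):

# Extract the trailing number from each line, then sum it per label over the range.
sum by (mylabel) (
  sum_over_time(
    {job="my-app"}
      | regexp "arbitrary number: (?P<num>[0-9]+)"
      | unwrap num [24h]
  )
)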

Related

How to concatenate two strings in Grafana metrics

How to append two variables inside a vector selector:
[30d] should be replaced with [${days}${consd}]
My actual query (part of the query):
method=~"[[Method]]",le="+t"}[30d])) * 100
Here I want to replace the 30d with two Grafana variables:
a Grafana variable "days", a text box defaulting to 40, which will be edited by the user, and
a Grafana variable "consd", a constant defaulting to the value "d".
Now I need to update the above query as below, but it's not working; it only takes the first value.
method=~"[[Method]]",le="+t"}[${days}${consd}])) * 100
method=~"[[Method]]",le="+t"}[${days}d])) * 100

Grafana - How To plot the metrics for each variable which is passed dynamically

I'm using Prometheus with Grafana. I have a use case where variables are passed dynamically, and a divide operation needs to be performed for each such variable so that I can plot a graph per variable.
E.g., the first metric is:
rate(container_cpu_usage_seconds_total{id="/",instance=~'${INSTANCE:pipe}'}[5m])
where ${INSTANCE:pipe} is supplied dynamically,
which needs to be divided by:
machine_cpu_cores{kubernetes_io_hostname=~'${INSTANCE:pipe}'}
and I want the result in the following format, one entry per variable, e.g.:
vars result
var1 - 102
var2 - 23
var3 - 453
Note: var1, var2 and var3 are the dynamically passed variables, and the result is the value returned by the divide operation.
Thanks in advance
After trying some queries, I found the solution.
My use case has 2 metrics, as below:
container_cpu_usage_seconds_total
machine_cpu_cores
In both metrics I found a common label: kubernetes_io_hostname.
I grouped both metrics by that label with the following queries:
sort_desc(max(rate(container_cpu_usage_seconds_total{id="/",kubernetes_io_role="node"}[5m])) by (kubernetes_io_hostname))
sort_desc(max(machine_cpu_cores{kubernetes_io_role="node"}) by (kubernetes_io_hostname))
So my data has only one label, kubernetes_io_hostname.
Then I divided the above two metrics and got the result per kubernetes_io_hostname label.
If you need more info on this let me know in the comment section.
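For completeness, a sketch of what the final division could look like once both sides are grouped as above (this combined query is a reconstruction under those assumptions, not the original poster's exact query):

# Per-host CPU usage divided by per-host core count, matched on the shared
# kubernetes_io_hostname label, sorted descending.
sort_desc(
    max(rate(container_cpu_usage_seconds_total{id="/",kubernetes_io_role="node"}[5m])) by (kubernetes_io_hostname)
  /
    max(machine_cpu_cores{kubernetes_io_role="node"}) by (kubernetes_io_hostname)
)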

How to show in grafana's status panel a sum of two measurements from influxdb

I need to show in Grafana in a status panel (plugin) the sum of two different measurements:
in_bytes + out_bytes
Preferably using the most recent data.
Any idea/hack to work around this? I know there is no join in InfluxDB, and I can't prepare the data on the server side (i.e. merge the two and write a single summed measurement back to InfluxDB).

Grafana: combining two queries from two prometheus exporters

I have two exporters for feeding data into prometheus - the node exporter and the elasticsearch exporter. I'm trying to combine sources from both exporters into one query, but unfortunately get "No data points" in the graph.
Each of the series successfully shows data:
elasticsearch_jvm_memory_max_bytes{cluster="$cluster", name=~"$node"}
node_memory_MemTotal{name=~"$node"}
This is the result when I try to subtract the two series from one another:
node_memory_MemTotal{name=~"$node"} - elasticsearch_jvm_memory_max_bytes{cluster="$cluster", name=~"$node"}
What am I missing here?
Thanks.
The subtraction you are attempting here is more complex than it looks at first. On both sides of the - operator are queries that can result in one or more time series. So the requested operation works as follows: the query on the left-hand side is executed and yields one or more time series. A time series means a unique combination of a metric and all its labels and their values. Then a second query for the right-hand side is executed, which also yields one or more time series. To calculate the result, only those series with matching label combinations are used.
For your example this means that the metrics from node_exporter and from elasticsearch_exporter have different label names (or just different values for those labels). When no combination exists on both sides, you get an empty result. For details on how operators are applied, please see the Prometheus docs.
To solve your problem, you could do the following:
Check the metrics on both the left and right side on their own
Evaluate if there are additional labels that could be ignored
See if there is a good label to match on (e.g. instance / node / hostname)
Use ignoring(a,b,c) on the required side(s) to drop superfluous dimensions, e.g. the job (see the sketch below)
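For illustration, a hypothetical sketch of that approach, assuming job, instance and cluster are the only labels that differ between the two metrics (that assumption has to be verified against the real series):

# Drop the labels that differ between the exporters before matching,
# so only the common labels (e.g. name) are used for pairing.
node_memory_MemTotal{name=~"$node"}
  - ignoring(job, instance, cluster)
elasticsearch_jvm_memory_max_bytes{cluster="$cluster", name=~"$node"}

If the label sets still do not line up one-to-one, aggregating one side first, as in the next suggestion, is the more reliable route.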
Try the following query:
node_memory_MemTotal{name=~"$node"}
- on(name)
sum(elasticsearch_jvm_memory_max_bytes{cluster="$cluster", name=~"$node"}) by (name)
It works in the following way:
1. It selects all the time series matching the node_memory_MemTotal{name=~"$node"} time series selector.
2. It selects all the time series matching the elasticsearch_jvm_memory_max_bytes{cluster="$cluster", name=~"$node"} selector.
3. It groups the time series found at step 2 by the name label value and sums the time series in each group with the sum() aggregate function. The end result of sum(...) by (name) is per-name sums.
4. It finds pairs of time series with identical name label values from step 1 and step 3 and calculates the difference between the first and the second time series in each pair. The on(name) modifier limits the set of labels used for finding time series pairs with matching labels. See more details about this process here.

Prometheus Uptime or SLA percentage over sliding window in Grafana

I want to create a Grafana 'singlestat' Panel that shows the Uptime or SLA 'percentage', based on the presence or absence of test failure metrics.
I already have the appropriate metric, e2e_tests_failure_count, for different test frameworks.
This means that the following query returns the sum of observed test failures:
sum(e2e_tests_failure_count{kubernetes_name=~"test-framework-1|test-framework-2|test-framework-3",kubernetes_namespace="platform-edge"})
I already managed to create a graph that is "1" if everything is ok and "0" if there are any test failures:
1 - clamp_max(sum(e2e_tests_failure_count{kubernetes_name=~"test-framework-1|test-framework-2|test-framework-3",kubernetes_namespace="platform-edge"}), 1)
I now want to have a single percentage value that shows the "uptime" (= the amount of time the environment was 'healthy') over a period of time, e.g. the last 5 days. Something like "99.5%".
I tried something like this:
(1 - clamp_max(sum(e2e_tests_failure_count{kubernetes_name=~"service-cvi-e2e-tests|service-svhb-e2e-tests|service-svh-roundtrip-e2e-tests",kubernetes_namespace="platform-edge"}), 1))[5d]
but this only results in parser errors. Googling didn't really get me any further, so I'm hoping I can find help here :)
Just figured this out and I believe it is producing correct results. You have to use recording rules because you cannot create a range vector from the instance vector result of a function in a single query, as you have already discovered (you get a parse error). So we record the function result (which will be an instance vector) as a new time series and use that as the metric name in a different query, where you can then add the [5d] to select a range.
We run our tests multiple times per minute against all our services, and each service ("service" is a label where each service's name is the label value) has a different number of tests associated with it, but if any of the tests for a given service fails, we consider that a "down moment". (The number of test failures for a given service is captured in the metrics with the status="failure" label value.) We clamp the number of failures to 1 so we only have zeroes and ones for our values and can therefore convert a "failure values time series" into a "success values time series" instead, using an inequality operator and the bool modifier. (See this post for a discussion about the use of bool.) So the result of the first recorded metric is 1 for every service where all its tests succeeded during that scrape interval, and 0 where there was at least one test failure for that service.
If the number of failures for a service is > 0 for all the values returned for any given minute, we consider that service to be "down" for that minute. (So if we have both a failure and a success in a given minute, that does not count as downtime.) That is why we have the second recorded metric to produce the actual "up for this minute" boolean values. The second recorded metric builds on the first, which is OK since the Prometheus documentation says the recorded metrics are run in series within each group.
So "Uptime" for any given duration is the sum of "up for this minute" values (i.e. 1 for each minute up) divided by the total number of minutes in the duration, whatever that duration happens to be.
Since we have defined a recorded metric named "minute_up_bool", we can then create an uptime graph over whatever range we want. (BTW, recorded metrics are only generated for times after you first define them, so you won't get yesterday's time series data included in a recorded metric you define today.) Here's a query you can put in Grafana to show uptime % over a moving window of the last 5 days:
sum_over_time(minute_up_bool[5d]) * 100 / (5 * 24 * 60)
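For example, assuming minute_up_bool was 1 for 7128 of the 7200 minutes in the 5-day window, the panel would show 7128 * 100 / 7200 = 99%.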
So this is our recording rule configuration:
groups:
  - name: uptime
    interval: 1m
    # Each rule here builds on the previous one.
    rules:
      # Get test results as pass/fail => 1/0
      # (label_replace() removes confusing status="failure" label value)
      - record: test_success_bool
        expr: label_replace(clamp_max(test_statuses_total{status="failure"}, 1), "status", "", "", "") != bool 1
      # Get the uptime as 1 minute range where the sum of successes is not zero
      - record: minute_up_bool
        expr: clamp_max(sum_over_time(test_success_bool[1m]), 1)
"You have to use recording rules because you cannot create a range vector from the instance vector result of a function in a single query"
Actually you can, by using a subquery:
(...some complicated instant subexpression...)[5d:1m]
This gives the same results as if you'd used a recording rule with a 1 minute evaluation interval. The recording rule is still beneficial though, as it avoids recomputing the subexpression every time.
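As an illustration, the 5-day uptime percentage could then be computed with a subquery in place of the minute_up_bool rule (a sketch based on the rules above, still relying on the test_success_bool rule; not a tested query):

# Uptime % over the last 5 days, computing the per-minute "up" value inline
# via a subquery with 1m resolution instead of the minute_up_bool rule.
sum_over_time(
  clamp_max(sum_over_time(test_success_bool[1m]), 1)[5d:1m]
) * 100 / (5 * 24 * 60)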