How do I calculate a time difference in Talend? For example, there is a variable on_off in the database with values 0 and 1, and I need to find the difference between the times according to the on_off condition for a further calculation. If 0 has 7:15:00 and 1 has 7:30:00, I need to get a difference of 15 minutes using Talend.
I am using Grafana and InfluxDB to collect product data on an assembly line, and I built a chart in Grafana to show how many products were finished in every hour.
It works well, but I have one problem: sometimes a worker clears the total count at a shift change, which makes the difference within that hour negative.
Consider this series of data (one data point every 10 minutes):
100 200 300 400 0 10 20
the correct per-hour value should be (400 - 100) + (20 - 0) = 320
I also tried searching but found no help. Do you have any ideas? (Splitting into two time divisions when the data is reset to 0 would also be OK; in this sample we would get two bars, 300 and 20.)
Thanks a lot!
Use a different approach, with subqueries:
1. The inner query calculates the difference between consecutive records with NON_NEGATIVE_DIFFERENCE(), which discards the negative jump at the counter reset:
-> 100 100 100 0 10 10
2. The outer query then just SUM()s the result of the inner query, grouped per hour -> 320
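The two-step approach can be sketched in plain Python; this only illustrates the arithmetic the InfluxQL functions perform on the sample series, it is not InfluxDB code:

```python
# Mimic InfluxQL's NON_NEGATIVE_DIFFERENCE() followed by SUM().
def non_negative_difference(values):
    # Difference between consecutive points; negative diffs
    # (the counter reset) are discarded.
    return [b - a for a, b in zip(values, values[1:]) if b - a >= 0]

samples = [100, 200, 300, 400, 0, 10, 20]  # one point every 10 minutes
diffs = non_negative_difference(samples)   # [100, 100, 100, 10, 10]
print(sum(diffs))                          # 320
```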
I have set up Prometheus monitoring and I'm generating an 'uptime' report based on a criterion such as 'error rate < x%'. The corresponding PromQL is
(
sum(increase(errors[5m]))
/ sum(increase(requests[5m]))
) <= bool 0.1
This gets displayed in a single-stat panel in Grafana.
What I want to achieve now is an average of how long it took to recover from a 'downtime' state. Graphically, I need the average duration of the intervals marked 1 and 2 below.
How can I calculate this measure in Prometheus?
Update: I am not looking for the average value of the stat, but for the average duration of the intervals during which the stat was 0.
As an example, consider the following time series (assume the value is sampled once per minute):
1 1 1 0 0 1 1 1 1 1 0 0 0 1
We basically have two "down" intervals: 0 0 and 0 0 0. Their durations are by definition 2 minutes and 3 minutes, so the mean time to recovery is (2 + 3) / 2 = 2.5 minutes.
My understanding, based on reading the documentation and experimentation, is that avg_over_time calculates an arithmetic mean, e.g. sum(up)/count(up) = 9/14 ≈ 0.64.
I need to calculate the first measure, not the second.
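The arithmetic in the question can be checked with a short Python sketch that extracts the runs of zeroes and averages their lengths (assuming one sample per minute, as stated):

```python
from itertools import groupby

def mean_time_to_recovery(samples, interval_minutes=1):
    """Average length (in minutes) of the consecutive-zero runs."""
    runs = [len(list(g)) for v, g in groupby(samples) if v == 0]
    return sum(runs) * interval_minutes / len(runs) if runs else 0.0

series = [1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1]
print(mean_time_to_recovery(series))  # 2.5
```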
TLDR;
You need to convert it to 0 or 1 via a recording rule, which you define in a rules file; add the path of that file to the rule_files section of your prometheus.yml.
my_metric_below_threshold = (sum(increase(errors[5m])) / sum(increase(requests[5m]))) <= bool 0.1
And then you can do avg_over_time(my_metric_below_threshold[5m])
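A minimal rules file for this might look as follows (the group name and file name here are illustrative, not from the original answer):

```yaml
# rules.yml - referenced from prometheus.yml via:
#   rule_files:
#     - rules.yml
groups:
  - name: uptime
    rules:
      - record: my_metric_below_threshold
        expr: (sum(increase(errors[5m])) / sum(increase(requests[5m]))) <= bool 0.1
```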
The full details:
Basically what you need is avg_over_time over values of 0 or 1.
However, the result of the bool modifier is an instant vector, while avg_over_time expects a range vector as its argument. The difference between an instant vector and a range vector is:
Instant vector - a set of time series containing a single sample for each time series, all sharing the same timestamp
Range vector - a set of time series containing a range of data points over time for each time series
The solution for this is using recording rules. You can see the conversation about this in the Prometheus GitHub issues, in this Stack Overflow question, and in this explanation: https://www.robustperception.io/composing-range-vector-functions-in-promql.
There are two general types of functions in PromQL that take timeseries as input, those that take a vector and return a vector (e.g. abs, ceil, hour, label_replace), and those that take a range vector and return a vector (e.g. rate, deriv, predict_linear, *_over_time).
There are no functions that take a range vector and return a range vector, nor is there a way to do any form of subquery. Even with support for subqueries, you wouldn't want to use them regularly as they'd be expensive. So what to do instead?
The answer is to use a recording rule for the inner function, and then you can use the outer function on the time series it creates.
So, as I explained above, and per the quotes above (taken from a Prometheus core developer), you should be able to get what you need.
Added after question edit:
Doing this is not straightforward, since you need a "memory" of the last samples. However, it can be done using the Textfile Collector and the Prometheus HTTP API.
Define the my_metric_below_threshold using Recording rule as described above.
Install Node exporter with Textfile Collector.
The textfile collector is similar to the Pushgateway, in that it allows exporting of statistics from batch jobs. It can also be used to export static metrics, such as what role a machine has. The Pushgateway should be used for service-level metrics. The textfile module is for metrics that are tied to a machine.
To use it, set the --collector.textfile.directory flag on the Node exporter. The collector will parse all files in that directory matching the glob *.prom using the text format.
Write a script (e.g. successive_zeros.py, in Python or bash), which can run anywhere, to query this metric using the Prometheus HTTP API endpoint GET /api/v1/query.
Save the count of successive zeros as an environment parameter, and clear or increment this parameter on each run.
Write the result in the format described in the Textfile Collector documentation - then you have your successive_zeros_metrics in Prometheus.
Do avg_over_time() over successive_zeros_metrics
This is pseudo code of the concept I am talking about:
#!/usr/bin/python
# Run as the node-exporter user like so:
# 0 1 * * * node-exporter /path/to/runner successive_zeros.py
import os
import requests

r = requests.get('prometheus/api/v1/query')
j = r.json()
......
if j.get('isUp') == 0:
    # still down: increment the successive-zeros counter
    successive_zeros = int(os.environ.get('successive_zeros', 0)) + 1
else:
    # back up again: clear the counter
    successive_zeros = 0
os.environ['successive_zeros'] = str(successive_zeros)
......
print('successive_zeros_metrics %d' % successive_zeros)
The following query returns the average duration for which the value of m was 0 before transitioning back to 1, over the last 7 days:
(count_over_time((m == 0)[7d:1m]) * 60) / resets((m !=bool 0)[7d:1m])
The query assumes that the interval between samples (aka scrape_interval) equals one minute (see the 1m in square brackets). It uses a Prometheus subquery alongside the following functions:
count_over_time - returns the number of samples in m with zero values. This number is multiplied by the number of seconds in one minute - 60. The result is the total duration for which m was 0 over the last 7 days.
resets - it returns the number of times m !=bool 0 was reset from 1 to 0. This roughly matches the number of spans with zeroes for m over the last 7 days.
The m !=bool 0 expression uses the bool modifier with the != operation.
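On the 14-sample series from the question above, the count-and-resets arithmetic can be verified with a small Python sketch (scrape interval of one minute, hence the factor of 60 seconds):

```python
samples = [1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1]

# count_over_time((m == 0)[...]) -- number of samples equal to 0
zero_count = sum(1 for v in samples if v == 0)

# resets((m != bool 0)[...]) -- the boolean series drops from 1 to 0
# exactly once per run of zeroes
bool_series = [1 if v != 0 else 0 for v in samples]
resets = sum(1 for a, b in zip(bool_series, bool_series[1:]) if b < a)

print(zero_count * 60 / resets)  # 150.0 seconds, i.e. 2.5 minutes
```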
Now it's time to expand m into (sum(increase(errors[5m])) / sum(increase(requests[5m]))) <= bool 0.1:
(count_over_time((
((sum(increase(errors[5m])) / sum(increase(requests[5m]))) <= bool 0.1) == 0
)[7d:1m]) * 60)
/
resets((
((sum(increase(errors[5m])) / sum(increase(requests[5m]))) <= bool 0.1) !=bool 0
)[7d:1m])
P.S. This monstrous query can be simplified somewhat by using WITH templates from VictoriaMetrics:
with (
m = (sum(increase(errors)) / sum(increase(requests))) <= bool 0.1
)
(count_over_time((m == 0)[7d:1m]) * 1m) / resets((m !=bool 0)[7d:1m])
I am not really an expert in Tableau. I need to calculate a time difference in hours, but I also want to see fractions of an hour. I am using Tableau 9.
I used the function
IF DATEDIFF("hour", [CL2_Start_Time_ST], [CL2_End_Time_ST]) > 8 then NULL
ELSE DATEDIFF("hour", [CL2_Start_Time_ST], [CL2_End_Time_ST])
END
If the time difference between CL2_Start_Time_ST and CL2_End_Time_ST is less than 1 hour (for example, 30 minutes), the result is 0, but I want to see 0.5 as the result.
I don't want to calculate the time difference in minutes, since all my other calculations are in hours and it is therefore easier to create a relative plot against the other calculations.
Please help.
I found the answer to the above question. The simple formula below worked. I was using the DIV function earlier, and that caused the issue.
IF DATEDIFF("hour", [CL2_Start_Time_ST], [CL2_End_Time_ST]) > 8 then NULL
ELSE (DATEDIFF("minute", [CL2_Start_Time_ST], [CL2_End_Time_ST])) / 60
END
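A quick illustration (in plain Python, not Tableau) of why the hour-level difference truncates to 0 while minutes divided by 60 keeps the fraction:

```python
from datetime import datetime

start = datetime(2020, 1, 1, 9, 0)
end = datetime(2020, 1, 1, 9, 30)

minutes = (end - start).total_seconds() / 60
whole_hours = int(minutes // 60)   # 0 -- what an hour-level difference reports
fractional_hours = minutes / 60    # 0.5 -- minutes divided by 60

print(whole_hours, fractional_hours)
```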
I need some help solving an issue: I have 5 different voltage values that change at every tick - that is, at every moment. I need to sort them, and after they have been sorted I want to go to another matrix (like the one at the bottom) and pull out (read) a specific column from it, for every predefined state (a timing scheme I am designing). This mechanism changes at every state/moment. How can I do this?
The Matrix look like(and could be greater...):
0 0 0 1 1 1...
0 1 1 0 0 1...
1 0 1 0 1 0...
1 1 0 1 0 0...
.. .. .. .. .. ..
Thanks, Henry
I am not sure I understood it correctly, so I will edit my answer after you make your question a bit clearer.
I see two separate things:
Reading 5 voltage values which change at each step, which you want to sort. To do this you can use MATLAB's sort function; it is really easy to use, and you can look it up in the MATLAB documentation.
This is the part I didn't understand well. After sorting the voltage readings, what do you want to do with the matrix? If you just want to access a specific column of the matrix and save it in a variable, you can do it this way: assuming you have an N x N matrix A, to access its 10th column and store it in a variable called column10 you would write: column10 = A(:,10)
I hope this will help you but let me know if this is what you wanted and I will edit my answer according to it.
Fab.
I will jump straight into a minimal example as I find this difficult to put into words. I have the following example:
Data.Startdate=[datetime(2000,1,1,0,0,0) datetime(2000,1,2,0,0,0) datetime(2000,1,3,0,0,0) datetime(2000,1,4,0,0,0)];
Data.Enddate=[datetime(2000,1,1,24,0,0) datetime(2000,1,2,24,0,0) datetime(2000,1,3,24,0,0) datetime(2000,1,4,24,0,0)];
Data.Value=[0.5 0.1 0.2 0.4];
Event_start=[datetime(2000,1,1,12,20,0) datetime(2000,1,1,16,0,0) datetime(2000,1,4,8,0,0)];
Event_end=[datetime(2000,1,1,14,20,0) datetime(2000,1,1,23,0,0) datetime(2000,1,4,16,0,0)];
What I want to do is add a flag to the Data structure (say a 1) if any time between Data.Startdate and Data.Enddate falls between Event_start and Event_end. In the example above, Data.Flag would have the values 1 0 0 1, because from the Event_start and Event_end vectors you can see there are events on January 1st and January 4th. The idea is that I will use this flag to process the data further.
I am sure this is straightforward but would appreciate any help you can give.
I would convert the dates to numbers using datenum, which then allows fairly convenient comparisons using bsxfun:
isStartBeforeEvent = bsxfun(@gt,datenum(Event_start)',datenum(Data.Startdate));
isEndAfterEvent = bsxfun(@lt,datenum(Event_end)',datenum(Data.Enddate));
flag = any(isStartBeforeEvent & isEndAfterEvent, 1)
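For readers without MATLAB, the same flagging logic can be sketched in plain Python with the standard library, using the sample dates from the question (variable names are adapted):

```python
from datetime import datetime

start_dates = [datetime(2000, 1, d) for d in (1, 2, 3, 4)]
end_dates = [datetime(2000, 1, d + 1) for d in (1, 2, 3, 4)]  # 24:00 rolls to next midnight

event_start = [datetime(2000, 1, 1, 12, 20), datetime(2000, 1, 1, 16, 0),
               datetime(2000, 1, 4, 8, 0)]
event_end = [datetime(2000, 1, 1, 14, 20), datetime(2000, 1, 1, 23, 0),
             datetime(2000, 1, 4, 16, 0)]

# Flag an interval if any event falls entirely inside it,
# mirroring the bsxfun comparisons above.
flag = [any(s < es and ee < e for es, ee in zip(event_start, event_end))
        for s, e in zip(start_dates, end_dates)]
print(flag)  # [True, False, False, True]
```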