I'm trying to build a graph in Grafana that aggregates a metric over fixed hourly periods, so there would be one aggregation for 11-12, one for 12-1, and so on, plus one from the start of the current hour until now.
I've figured out how to aggregate the metric over the last hour of elapsed time (e.g., if it's 12:11 now, it aggregates from 11:11 to 12:11), but not over the fixed clock-hour buckets described above. Does anyone have any ideas? Is what I'm describing even possible?
I haven't done much work with either of these packages before, so my own knowledge is minimal at best, and I haven't found much in online resources either.
Thanks in advance.
I'm using Grafana and I want to see which hours of the day are best for performing operations. So I want to sum the requests and show the number of requests per hour over, let's say, the last week. That is: how many requests there were from 9:00 to 10:00 regardless of the day of the week (and the same for every other hour).
My backend is Elasticsearch, but I can gather the information from Prometheus too.
Does anyone know a way to get this data displayed?
The Grafana version I'm using is 7.0.3.
EDIT
I found a possible solution by adding the hourly heatmap plugin.
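For anyone without the plugin: on the Elasticsearch side the raw documents can apparently also be bucketed by hour of day with a scripted terms aggregation, something along these lines (the index pattern and the @timestamp field are placeholders for whatever your documents actually use):

POST requests-*/_search
{
  "size": 0,
  "query": { "range": { "@timestamp": { "gte": "now-7d/d" } } },
  "aggs": {
    "requests_per_hour_of_day": {
      "terms": {
        "script": { "source": "doc['@timestamp'].value.getHour()", "lang": "painless" },
        "size": 24,
        "order": { "_key": "asc" }
      }
    }
  }
}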
So I am constructing a table within Grafana with Prometheus as a data source. Right now my queries are set to instant, so the table shows scrape data from the moment the query is made (in my case, data from the past 2 days).
However, I want to see data from the past 14 days. I know that you can adjust the time shift in Grafana and use the offset <timerange> modifier to shift the moment the query is evaluated, but these only move the query's evaluation point.
Using a range vector such as go_info[10h] does indeed go back that far, but the scrapes are done at 15s intervals, so it produces duplicate data, and the results are still for a query evaluated at that instant (not at an offset point in time), which I don't want.
I am wondering if there's a way to gather data from two weeks ago until today, essentially aggregating data from multiple offset time points.
I've tried writing multiple queries in my table to do this,
e.g:
go_info offset 2d
go_info offset 3d
and so on..
However, this doesn't seem very efficient, and the values from each query end up in different columns (a problem I could probably work around by altering the queries, but that doesn't address the complexity of the queries themselves).
Is there a more efficient, simpler way to do this? I understand that the latest version of Prometheus offers subqueries as a feature, but I am currently not able to upgrade Prometheus (at least not easily, the way it's currently set up), and I'm also not sure it would solve my problem. If it is indeed the answer to my question, it'll be worth the upgrade; I just haven't had an environment to test it in.
Thanks for any help anyone can provide :)
Figured it out:
It's not pretty, but I had to use offset <#>d in a separate query for each day of the metric.
e.g.:
something_metric offset 1d
something_metric offset 2d
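For reference, once I'm able to upgrade to Prometheus 2.7+, a subquery should collapse all of those per-day offsets into a single expression (untested on my side), e.g.:

something_metric[14d:1d]

which returns one sample per day (per series) over the last 14 days at the query's evaluation time.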
How would I query the most recent timestamp for a particular metric in Graphite?
Using the Render API, I can ask for all known datapoints for the metric during a period, but that's both wasteful and unreliable...
Wasteful because it would give me all datapoints for the specified period (1 week by default), while I only need one.
Unreliable because the period, whatever I pick, may be too short...
Can it be done? Thanks!
Well, apparently it cannot be done at the moment. Maybe the Graphite developers will add the necessary API some day.
I need to get the average elapsed time for each job in Active Job Environment in order to produce a report.
I've tried to extract it from SMF records, but I don't seem to get the right one. I've also tried the KeyStroke Language, but it's too slow: the job takes around 15 minutes to collect all the data. I thought about using CTMJSA, but since I only have examples for UPDATE and DELETE of the statistics, I thought it would be wiser not to use it.
There must be a file from which the Statistics screen is loaded, and I'd like to ask if anyone knows which one it is, or how else I could get that information.
Thank you!!
Ctmruninf is a better utility to use in this case. I use it on Unix to produce totals (via Perl), but you should be able to adapt it to the mainframe and compute averages. To list everything between two fixed dates, do:
ctmruninf -list 20151101120101 20151109133301 -JOBNAME pdiscm005
I am preparing a small app that will aggregate data on the users of my website (via socket.io). I want to insert all of the data into my MongoDB every hour.
What is the best way to do that? setInterval(60000) seems a little bit lame :)
You can use cron, for example, and run your Node.js app as a scheduled job.
EDIT:
In cases where the program has to run continuously, setTimeout is probably one of the few possible choices (and it is quite simple to implement). Otherwise, you can offload your data to some temporary storage system, for example Redis, and then regularly run another Node.js program to move the data; however, this introduces a dependency on another database system and increases complexity, depending on your scenario. Redis can also act as a kind of failsafe in this setup, in case your main Node.js app terminates unexpectedly and would otherwise lose part or all of your data batch.
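A minimal sketch of the setTimeout variant, firing at the top of every hour (flushToMongo is a placeholder for whatever write logic you end up with):

// re-arms itself so the flush always lands on the next :00 boundary
function scheduleHourlyFlush() {
  const msToNextHour = 3600000 - (Date.now() % 3600000);
  setTimeout(async () => {
    try {
      await flushToMongo();   // placeholder: insert the buffered data into MongoDB
    } finally {
      scheduleHourlyFlush();  // schedule the following hour regardless of errors
    }
  }, msToNextHour);
}
scheduleHourlyFlush();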
You should aggregate in real time, not once per hour.
I'd take a look at this presentation by BuddyMedia to see how they are doing real time aggregation down to the minute. I am using an adapted version of this approach for my realtime metrics and it works wonderfully.
http://www.slideshare.net/pstokes2/social-analytics-with-mongodb
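In practice that approach boils down to incrementing counters in time-bucketed documents as each event arrives, rather than batching once per hour. A rough sketch (collection and field names are made up):

// one document per minute bucket, upserted and incremented on every event
db.stats.updateOne(
  { _id: 'pageviews:' + new Date().toISOString().slice(0, 16) },  // e.g. "pageviews:2024-05-01T09:05"
  { $inc: { count: 1 } },
  { upsert: true }
)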
Why not just hit the server with a curl request that triggers the database write? You can put the command in an hourly cron job and listen on a local port.
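For example (the port and path are made up; the endpoint would be whatever route in your app performs the write):

0 * * * * curl -s http://localhost:3000/internal/flush-stats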
You could have Mongo store the last time you copied your data, and each time a request comes in, check how long it has been since the last copy.
Or you could try setInterval(checkRestore, 60000) for once-a-minute checks. checkRestore() would query the server to see whether the last update is more than an hour old. There are a few ways to do that.
An easy way to store the date is to just store the value of Date.now() (https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Date) and then check for something like db.logs.find({lastUpdate: {$lt: Date.now() - 3600000}}), 3600000 ms being one hour.
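Roughly, as a sketch (collection and field names are made up; logs is a collection handle from your MongoDB driver connection, and copyData is a placeholder for the actual copy step):

const ONE_HOUR = 3600000;
async function checkRestore(logs) {
  const last = await logs.findOne({ _id: 'aggregation' });
  if (!last || last.lastUpdate < Date.now() - ONE_HOUR) {
    await copyData();  // placeholder: copy/insert the aggregated data
    await logs.updateOne(
      { _id: 'aggregation' },
      { $set: { lastUpdate: Date.now() } },
      { upsert: true }
    );
  }
}
// setInterval(() => checkRestore(db.collection('logs')), 60000);  // check once a minute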
I think I confused a few different solutions there, but hopefully something like that will work!
If you're using Node, a nice CRON-like tool to use is Forever. It uses the same CRON patterns to handle repetition of jobs.