I want to query my Graphite server to retrieve certain metrics.
I can query all data points within a given time period, but what I need is to query data points from a specific time of day on previous days.
How can I do this?
The Graphite Render API supports a number of arguments to make your query more specific. In particular, the from / until arguments will be useful to you; you can read about them here: https://graphite.readthedocs.io/en/latest/render_api.html#from-until
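For example, a render call that pulls a one-hour window from a previous day could look something like this (a sketch only; the host and metric name are placeholders, and the absolute HH:MM_YYYYMMDD format for from / until is the one described in the docs linked above):
http://your-graphite-host/render?target=my.metric&from=09:00_20240101&until=10:00_20240101&format=json
Relative offsets such as from=-1d&until=-23h work too, if you prefer to express "this hour yesterday" without computing absolute dates.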
edit: I should add that if you're using Grafana for visualising your data, you can click+drag on the graph to select specific time ranges, or use the time picker in the top-right corner, choose Custom and set your range there.
I'm using Graphite + Grafana to monitor (by sampling) queue lengths in a test system at work. The data that gets uploaded to Graphite is grouped into different series/metrics by properties of the payloads in the queue. These properties can be somewhat arbitrary, at least to the point where they are not all known at the time when the data collection script is run.
For example, a property could be the project that the payload belongs to and this could be uploaded as a separate series/metric so that we can monitor the queues broken down by the different projects.
This has the consequence that Graphite sends a lot of null values for certain metrics if the queues in the test system did not contain any payloads with properties that would group them into that specific series/metric.
For example, if a certain project did not have any payloads in the queue at the time the data collection was run.
In Grafana this is not so nice, as the line graphs don't show up as connected lines and gauges will show either null or the last non-null value.
For line graphs I can just choose to connect null values in Grafana, but for gauges that's not possible.
I know about the keepLastValue function in Graphite. It includes a limit for how long to keep the value, which I like very much, as I would like to keep the last value until the next time data collection is run. Data collection runs periodically at known intervals.
The problem with keepLastValue is that it expects a number of points as this limit. I would rather give it a time period instead. In Grafana the relationship between time and data points is very dynamic, so it's not easy to hard-code a good limit for keepLastValue.
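For context, a keepLastValue target looks something like this (the metric name is a placeholder); the second argument is a number of consecutive null datapoints to bridge, not a duration:
keepLastValue(queues.projectA.length, 10)
With a 60-second collection interval and a 60-second graph step that limit covers roughly 10 minutes, but the step changes whenever the dashboard time range changes, which is exactly the mismatch described above.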
Thus, my question is: Is there a way to tell Graphite to keep the last value for a given time instead of a given number of points?
There is a graph displaying an Elasticsearch index count, see below.
I want to add a value diff = max - min to the legend. How can I implement this?
I'm pretty sure you can't, easily. You can hack your way around it by adding yet another query to your graph, something like
max_over_time(my_metric[[[__range_s]]s]) - min_over_time(my_metric[[[__range_s]]s])
Grafana will replace the [[__range_s]] bit with the length of the time range of the current dashboard, e.g. 3600 for the default 1h, so the query actually sent to Prometheus will be
max_over_time(my_metric[3600s]) - min_over_time(my_metric[3600s])
Meaning Prometheus will compute the difference between the max and min separately from Grafana (which does it on top of the samples returned by Prometheus). (It will also compute this difference for the whole time range, not just the most recent sample, which is what you're interested in.) Then you can tweak the display of said time series in Grafana (e.g. by setting line=0, fill=0) so it will not show up on the graph itself, only in the legend. But the legend will then display the current value of the difference, as well as its min, max, avg, which will be quite the crappy UX.
Edit: Or you can add said query to a separate panel (e.g. a table panel), to the right of your graph. That may let you better control the UX, although it still won't be part of the actual legend.
Edit 2: One final thing you could try, that would give you exactly what you want, is to tweak Grafana's graph panel to add a "range" value next to "min", "max" and the bunch. The source code is here, I'm pretty sure it's mostly a copy-pasta job. You likely wouldn't even have to rebuild all of Grafana, you could just package the modified panel as "Tweaked Graph Panel" plugin and drop it into your Grafana deployment's plugins folder. Then, in your dashboard, instead of using "Graph Panel", use "Tweaked Graph Panel".
I have a JMeter project with multiple GET and POST requests and assertions for these. I use the Aggregate Results and View Results Tree listeners, but none of these can store results on an hourly basis. I tried the JMeterPlugins-Standard and JMeterPlugins-Extras packages and the jp@gc - Graphs Generator listener, but all of them use aggregated data instead of hourly data. So I would like to get the number of successful and failed requests/assertions per hour; a bar chart would probably be the most suitable for this purpose.
I'm going to suggest a non-conventional, design-level solution: name your samplers dynamically with the hour (or date and hour), so that each hour the name will change and thus the samples will appear in a different category, i.e.:
The code for such a name is:
${__time(dd:HH,)} the rest of sampler name
Such a sampler will appear in the following way in the Aggregate Report (here I simulated it with minutes/seconds, but the same will happen with days/hours, just on a larger scale):
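The resulting labels look roughly like this (hypothetical sampler names, using the dd:HH format from above):
27:09 Login request
27:10 Login request
27:11 Search request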
Pros and cons of such an approach:
Simple: you can aggregate anything by hour, minute, or any other time slice while the test is running, rather than by analysis after execution.
Not listener-dependent; can be used with pretty much any listener or visualizer.
If you also want overall stats, you will have to sum up every sub-category. So it alters the data, but in a way that can still be added back to the original relatively easily.
Calculating __time before every sampler won't go completely unnoticed from a performance perspective, but I don't think it will add visible overhead to a script.
You could get the same data by properly aggregating the JTL or CSV (whichever you use) after execution, so it doesn't provide anything that is impossible to achieve using standard methods.
The script needs altering to make this happen. If you have hundreds of samplers, it's going to take a while. And if you want to change back...
You might want to use the Filter Results Tool, which has --start-offset and --end-offset parameters; you can "cut" your results file into "interesting" pieces and plot them according to your requirements.
You can install Filter Results Tool using JMeter Plugins Manager
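As a rough sketch of how that looks from the command line (option names are quoted from the plugin's documentation as I remember it, and the file names are placeholders, so verify against your version), cutting the first hour out of a results file would be something like:
FilterResults.bat --input-file results.jtl --output-file first-hour.jtl --start-offset 0 --end-offset 3600
Repeat with shifted offsets (3600/7200, 7200/10800, ...) to produce one file per hour and feed each file to the graph generator of your choice.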
Also be aware that according to JMeter Best Practices you should
Use as few Listeners as possible; if using the -l flag as above they can all be deleted or disabled.
Don't use "View Results Tree" or "View Results in Table" listeners during the load test, use them only during scripting phase to debug your scripts.
You can get whatever information you need from the .jtl results file; you can specify the test results location via the -l command-line argument.
To get summarized results per hour, add a Generate Summary Results listener to your test plan:
Generates a summary of the test run so far to the log file and/or standard output
Update the interval in jmeter.properties to your needs, e.g. 1 hour = 3600 seconds:
summariser.interval=3600
You will get a summary of your requests per hour.
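For reference, the surrounding summariser properties in jmeter.properties look roughly like this (names and defaults quoted from memory, so double-check them against your JMeter version):
summariser.name=summary
summariser.interval=3600
summariser.log=true
summariser.out=true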
You can try the JMeter Backend Listener. It has integrations with Graphite and InfluxDB. After storing the results in one of these time series databases you can display them in a Grafana dashboard. Grafana has its own time filtering for showing the results on an hourly, daily, monthly basis and so on.
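As a hedged sketch of what the Graphite variant looks like (the host and prefix below are placeholders; the parameter names are the ones the Graphite backend listener client exposes), the Backend Listener would be configured with values along these lines:
graphiteMetricsSender=org.apache.jmeter.visualizers.backend.graphite.TextGraphiteMetricsSender
graphiteHost=my.graphite.host
graphitePort=2003
rootMetricsPrefix=jmeter.
summaryOnly=true
samplersList=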
I just started working with Tableau and I can't find a way to filter dimensions/metrics on the dashboard based on the user's previous selection.
We use MongoDB NoSQL database to store various events sent from our system.
Events consist of key-value pairs (translated to metrics and dimensions); each event has a unique Id (EventType) and a list of parameters.
The number of parameters per EventType is constant but varies between event types.
When we connect the events catalog to Tableau (using the MongoDB BI connector) we receive a flat table with all possible keys, while only the ones that apply to the specific event have a value.
Since we have a lot of event types and a large number of possible keys (between them), this causes problems when using the dashboard.
The user sees a flat list of all possible dimensions and metrics with no correlation between them.
They cannot tell which metric applies to which EventType.
How can I guide Tableau to present/highlight only the relevant dimensions/metrics, based on the EventType selected by the user?
You click on the down arrow in the top right of the filter and then select Only Show Relevant Values.
I have Grafana with Bosun connected as an OpenTSDB source. The problem is that Grafana interprets the data in a different way than Bosun. To be precise, when I set the same query in Bosun and in Grafana, the resulting graphs differ. When I turn on gauge downsampling, the graphs are the same. So I guess there is some implicit gauging in Grafana. I would be grateful for a hint on how to disable that gauging.
Bosun: (graph screenshot)
Grafana: (graph screenshot)
The os.net.bytes metric includes metadata to indicate that it is a rate. When you use the default "auto" in Bosun's graph page it will convert the raw counter data into a rate calculation. Grafana's OpenTSDB data source does not have an auto mode, so things always default to a gauge unless you check the Rate box at the bottom of the metric.
In your example you should just need to check the Rate box to get the graphs to match. You can also use the Counter option and provide a max or reset value if you need to deal with counter overflows.
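For reference, those options roughly correspond to the rate fields in the underlying OpenTSDB /api/query payload (a sketch only; the start time is illustrative, and the Counter option is what adds the rateOptions block):
{
  "start": "1h-ago",
  "queries": [
    { "metric": "os.net.bytes", "aggregator": "sum", "rate": true,
      "rateOptions": { "counter": true } }
  ]
}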
You can also use the Bosun data source if you want to use a Bosun query instead of accessing OpenTSDB directly. In this example we combine two queries to generate a Singlestat panel (which displays the last value with a line graph in the background).
The __ny-nexus01/02 part comes from using tsdbrelay to denormalize the metric and address high tag cardinality issues.