Drop a Prometheus label / tag - Grafana

I've got metrics that look like this:
kafka_lag{client_id="dcp-0",partition="53"} 1977005
kafka_lag{client_id="dcp-10",partition="53"} 2345234
When I visualize this in Grafana I get two different lines; however, I would like to drop client_id and display only the kafka_lag{partition="53"} values.
How can I drop a tag from the Prometheus output?

If you have two values at the same time (in your case, the lag for different clients) and you want to view only one, you need to combine them in a way that makes sense.
For example:
max(kafka_lag) by(partition) if you want to view the maximum lag
avg(kafka_lag) by(partition) if you want to have a sense of overall lag
Any of the aggregation operators can be used to extract information.
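For instance, if the total lag across all clients is what you want (an assumption, since the question doesn't say which combination is meaningful), sum works the same way:
sum(kafka_lag) by(partition)
Whichever operator you pick, the result keeps only the labels listed in by(), so the data above collapses to a single series {partition="53"}, with the client_id label (and the metric name) dropped.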

Related

Share kusto variables between grafana panels

Well, let's say that I have the query from my previous question: How to do multi graph time series on Grafana with Kusto
Then I'd like to consume the tiemposCicloBruto variable from one panel in another, in order to avoid repeating queries.
I saw: https://grafana.com/blog/2020/10/14/learn-grafana-share-query-results-between-panels-to-reduce-load-time/
But there isn't any way to share variables at all...
I also tried it as a dashboard variable, but it doesn't seem to support tabular expressions at all...
You can share only input variables across dashboard panels. Variables work as primitive text substitution in one direction (from dashboard to query), and do not take into account any context in your query language.
Your link is about sharing the results of a query between different panels. If the exact same result set returned to one panel fits your needs, you can reuse it "for free", without putting extra load on the database. You don't need to save it into any variable; you just set the other panel as a pseudo-datasource and you get the result immediately.
You can factor this feature into the design of your panels. Examples could be:
time series plus histogram visualizations of the same data;
time-series chart plus a panel with latest readings (or use other Grafana reduce expressions).

Set different maximum values for each field in the Bar Gauge | Grafana

I was trying to set up a dashboard so that I could monitor the number of messages within certain queues. The problem is that in order to create a suitable alert I need to set a maximum value for each of them.
The number of queues is really high, so it is impossible to set this value manually; I thought of retrieving it through another query instead.
At this point I don’t know how to apply this second query to the first one in order to achieve the desired result.
Of the two options, select 'Overrides'.
You will find your metric there.
After that, set the 'Max' option for it.

Query Graphite Metrics for specific data points

I want to query my graphite server to retrieve certain metrics.
I am able to query all data points within a certain time period, but my requirement is to query data points at a specific time on previous days.
How can I do this?
The Graphite Render API supports a number of arguments to make your query more specific. Specifically, the from / until arguments will be useful to you; you can read about them here: https://graphite.readthedocs.io/en/latest/render_api.html#from-until
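For example (server name and metric path are placeholders here), a render call for the 14:00-15:00 window of a specific past day could look like:
http://your-graphite-server/render?target=my.metric.path&from=14:00_20240101&until=15:00_20240101&format=json
Relative offsets work as well, e.g. from=-2d&until=-1d returns the 24-hour window that ended one day ago.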
edit: I should add that if you're using Grafana for visualising your data, you can click+drag on the graph to select specific time ranges or use the timepicker in the top-right corner to choose Custom and set your range there.

How to add a custom value in the Grafana legend?

There is a graph displaying an Elasticsearch index count.
I want to add a value diff = max - min to the legend. How can I implement that?
I'm pretty sure you can't, easily. You can hack your way around it by adding yet another query to your graph, something like
max_over_time(my_metric[[[__range_s]]s]) - min_over_time(my_metric[[[__range_s]]s])
Grafana will replace the [[__range_s]] bit with the length of the time range of the current dashboard, e.g. 3600 for the default 1h, so the query actually sent to Prometheus will be
max_over_time(my_metric[3600s]) - min_over_time(my_metric[3600s])
Meaning Prometheus will compute the difference between the max and min separately from Grafana (which does it on top of the samples returned by Prometheus). (It will also compute this difference for the whole time range, not just the most recent sample, which is what you're interested in.) Then you can tweak the display of said time series in Grafana (e.g. by setting line=0, fill=0) so it will not show up on the graph itself, only in the legend. But the legend will then display the current value of the difference, as well as its min, max, avg, which will be quite the crappy UX.
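(If you are on a newer Grafana that uses the $-style variable syntax instead of [[...]], the same trick should be expressible as
max_over_time(my_metric[$__range]) - min_over_time(my_metric[$__range])
with $__range interpolated to e.g. 1h; everything described above still applies.)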
Edit: Or you can add said query to a separate panel (e.g. a table panel), to the right of your graph. That may let you better control the UX, although it still won't be part of the actual legend.
Edit 2: One final thing you could try, that would give you exactly what you want, is to tweak Grafana's graph panel to add a "range" value next to "min", "max" and the bunch. The source code is here, I'm pretty sure it's mostly a copy-pasta job. You likely wouldn't even have to rebuild all of Grafana, you could just package the modified panel as "Tweaked Graph Panel" plugin and drop it into your Grafana deployment's plugins folder. Then, in your dashboard, instead of using "Graph Panel", use "Tweaked Graph Panel".

Tableau performance

I have a problem with a dashboard in Tableau. The dashboard contains many worksheets, and all the columns in the report are calculated fields. The problem is that the dashboard takes a very long time to render. The report contains approximately 2 million rows and takes about 5 minutes to generate.
What are the possible solutions in this case?
Maybe I can somehow adjust the display to page through the records rather than showing them all at once?
To reduce the calculation time, try to exclude data you don't need with a data source filter in Tableau. You can also hide or delete unused calculated fields. Another thing you can do is remove sheets that are not used.
Here's a link: https://www.tableau.com/about/blog/2016/1/5-tips-make-your-dashboards-more-performant-48574
Steps to follow to reduce calculation time:
Extract the data, keeping the connection option as Extract instead of Live, and replace the data source with the extract.
Use a user filter to reduce calculation time, so that Tableau displays only the data for a particular user.
I hope this helps solve your problem.
I have one more idea to resolve this issue.
1) When the dashboard loads for the first time, use a dashboard action filter so that the initial load excludes data in your sheet: Dashboard menu -> Actions -> Add Action -> select the sheet and the exclude option.
2) Switch the data source from Live to Extract (select the Extract radio button).
3) Use a user filter.
I am following the other answers (use an extract, a dashboard action filter...) and I want to add one point:
Drag every field used by any worksheet on the dashboard onto "Detail" of every worksheet you are using on the dashboard. Tableau then loads all the needed data while loading the first worksheet and can reuse this data for the other sheets.
For example: a dashboard contains three worksheets (A, B, C). Drag every field used by A onto "Detail" of B and C, every field used by B onto "Detail" of A and C, and every field used by C onto "Detail" of A and B.
We are also having a similar issue with 150 million rows, but I want to check whether you are doing the following steps. This may help you; it goes back to the fundamentals of Tableau reporting.
1/ Try to make sure your data set is in star schema format. This helps a lot with reporting.
2/ Design the tables and views in the database so that only the columns used in Tableau are included. Any extra columns in the tables add to the performance issue.
3/ Make sure indexing is done properly for all the fields that are joined.
4/ In my experience, the dashboard adds extra performance lag, so try to get as much performance tuning done on the individual sheets as possible before even going to the dashboard.
5/ If required, try to use materialized views.
Hope this helps.
Try to capture performance metrics using the performance recorder option in Tableau.
Check the underlying DB tables and the joins present in the data source layer.
Use optimized sets and parameters as required and get rid of less relevant filters.
Use data extracts with a scheduled refresh, and a data source filter that limits the data to the relevant business years.