Set different maximum values for each field in the Bar Gauge (Grafana)

I was trying to set up a dashboard so that I could monitor the number of messages within certain queues. The problem is that in order to create a suitable alert I need to set a maximum value for each of them.
The number of queues is really high, so it is impossible to set this value manually; I therefore thought of retrieving it through another query.
At this point I don’t know how to apply this second query to the first one in order to achieve the desired result.

In the panel's field options you will see two tabs; select 'Overrides'.
You will find your metric there.
Add an override for it, then add the 'Max' property and set the value you need.
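If you would rather set this in the dashboard JSON than in the UI, the same override looks roughly like the snippet below (a sketch assuming a recent Grafana version; the field name "queue_a" and the value 150 are placeholders):

  "fieldConfig": {
    "overrides": [
      {
        "matcher": { "id": "byName", "options": "queue_a" },
        "properties": [
          { "id": "max", "value": 150 }
        ]
      }
    ]
  }

One such override per queue field gives each bar its own maximum; it does not by itself pull the maximum from a second query, it only saves clicking through the UI for every field.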

Related

Graphite: keepLastValue for a given time period instead of number of points

I'm using Graphite + Grafana to monitor (by sampling) queue lengths in a test system at work. The data that gets uploaded to Graphite is grouped into different series/metrics by properties of the payloads in the queue. These properties can be somewhat arbitrary, at least to the point where they are not all known at the time when the data collection script is run.
For example, a property could be the project that the payload belongs to and this could be uploaded as a separate series/metric so that we can monitor the queues broken down by the different projects.
This has the consequence that Graphite sends a lot of null values for certain metrics if the queues in the test system did not contain any payloads with properties that would group them into that specific series/metric.
For example, if a certain project did not have any payloads in the queue at the time when the data collection was run.
In Grafana this is not so nice as the line graphs don't show up as connected lines and gauges will show either null or the last non-null value.
For line graphs I can just choose to connect null values in Grafana, but for gauges that's not possible.
I know about the keepLastValue function in Graphite. It takes a limit on how long to keep the value, which suits me, since I only want to keep the last value until the next time data collection is run; data collection runs periodically at known intervals.
The problem with keepLastValue is that it expects this limit as a number of points. I would rather give it a time period. In Grafana the relationship between time and data points is very dynamic, so it's not easy to hard-code a good limit for keepLastValue.
Thus, my question is: Is there a way to tell Graphite to keep the last value for a given time instead of a given number of points?
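For illustration, this is the point-based form being discussed (the series name and the numbers are made up): if Graphite stores one point per minute and the collection script runs every 10 minutes, covering one collection interval means hard-coding a limit of 10 points, e.g.

  keepLastValue(queues.project_x.length, 10)

and that 10 has to be recalculated whenever the storage resolution or the collection interval changes.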

Query Graphite Metrics for specific data points

I want to query my graphite server to retrieve certain metrics.
I am able to query all data points within a certain time period, but what I need is to query data points for a specific time of day on previous days.
How can I do this?
The Graphite Render API supports a number of arguments that let you make your query more specific. In particular, the from / until arguments will be useful to you; you can read about them here: https://graphite.readthedocs.io/en/latest/render_api.html#from-until
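For example, with from/until (the host and metric name here are made up), a render call for 09:00-10:00 on a specific previous day could look like this, using the HH:MM_YYYYMMDD form:

  https://graphite.example.com/render?target=my.queue.metric&from=09:00_20240101&until=10:00_20240101&format=json

Relative offsets such as from=-7d&until=-6d (the 24-hour window that started a week ago) are also accepted.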
Edit: I should add that if you're using Grafana to visualise your data, you can click and drag on the graph to select specific time ranges, or use the time picker in the top-right corner, choose Custom, and set your range there.

Drop a Prometheus label / tag

I've got metrics that look like this:
kafka_lag{client_id="dcp-0",partition="53"} 1977005
kafka_lag{client_id="dcp-10",partition="53"} 2345234
When I visualize this in Grafana I get two different lines; however, I would like to drop client_id and display only the kafka_lag{partition="53"} values.
How can I drop a tag from Prometheus output?
If you have two values at the same time (in your case, the lag for different clients) and you want to view only one, you need to combine them in a way that makes sense.
For example:
max(kafka_lag) by (partition) if you want to view the maximum lag
avg(kafka_lag) by (partition) if you want to get a sense of the overall lag
Any of the aggregation operators can be used to extract information.
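If the goal is specifically to drop just the client_id label while keeping everything else, the without form of the same operators is worth a look (a sketch using the metric from the question):

  sum without (client_id) (kafka_lag)
  max without (client_id) (kafka_lag)

without keeps all remaining labels, partition included, instead of having to list them in a by clause.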

Can you calculate active users using time series

My Atomist client exposes metrics on commands that are run. Each command is a metric with a username element as well as a status element.
I've been scraping this data for months without resetting the counts.
My requirement is to show the number of active users over a time period, i.e. 1h, 1d, 7d and 30d, in Grafana.
The original query was:
count(count({Username=~".+"}) by (Username))
this is an issue because I don't clear the metrics, so it's always a count since inception.
I then tried this:
count(
  max_over_time(help_command{job="Application Name",Username=~".+"}[1w])
  - max_over_time(help_command{job="Application Name",Username=~".+"}[1w] offset 1w)
  > 0
)
which works, but only for one command; I have about 50 other commands that need to be added to that count.
I tried the:
"{__name__=~".+_command",job="app name"}[1w] offset 1w"
but this is obviously very expensive (it times out in the browser) and it does not integrate with max_over_time, which doesn't support it.
Any help? Am I using the metric in the wrong way? Is there a better way to query? My only option at the moment is to repeat the working count query above for each command.
Thanks in advance.
To start, I will point out a number of issues with your approach.
First, the Prometheus documentation recommends against using arbitrarily large sets of values for labels (as your usernames are). As you can see (based on your experience with the query timing out) they're not entirely wrong to advise against it.
Second, Prometheus may not be the right tool for analytics (such as active users). Partly due to the above, partly because it is inherently limited by the fact that it samples the metrics (which does not appear to be an issue in your case, but may turn out to be).
Third, you collect separate metrics per command (i.e. help_command, foo_command) instead of a single metric with the command name as a label (i.e. command_usage{command="help"}, command_usage{command="foo"}).
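For instance (a sketch of that third point, not your current setup), with a single command_usage metric the weekly active-user count can be written without any __name__ matching:

  count(
    sum by (Username) (increase(command_usage{job="Application Name"}[1w])) > 0
  )

command_usage is the hypothetical metric name from the point above; the job value is taken from your query.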
To get back to your question though: you don't need max_over_time, you can simply write your query as:
count by (__name__) (
  (
    {__name__=~".+_command",job="Application Name"}
    -
    {__name__=~".+_command",job="Application Name"} offset 1w
  ) > 0
)
This only works, though, because you say that whatever exports the counts never resets them. If that is simply because the exporter has never restarted, and the counts will drop to zero when it eventually does, then you'd need to use increase instead of subtraction, and you'd run into the exact same performance issues as with max_over_time.
count by (__name__) (
  increase({__name__=~".+_command",job="Application Name"}[1w]) > 0
)

How to modify a record within a record type in oracle forms

I created 3 blocks in my Oracle 10g form: Headers, Lines and Lines Details. I am fetching the records for all three blocks using cursors and everything is working fine. In the Lines Details block there is a numeric field called Priority. By default I assign priorities FIFO, starting from 1 up to n. Now I want the user to decide the priority, so that any specific record can be shifted up or down to increase or decrease its priority without committing the line details. Once the user is satisfied with the priorities, he will click Save to commit the changes. Please help me with this. Thanks in advance.
Locate the changed record and, based on its current priority value, set the new priority to the current priority +/- the number of times the user clicked Up or Down. Declare a record-type variable with exactly the same columns as your Lines Details data block. Copy all records, including the changed one, into that record-type variable. Clear the block with NO_VALIDATE and then re-populate it, including the changed record. To shift the records according to the priority values, modify your default ORDER BY clause. This will solve your problem.
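A rough sketch of that copy / clear / re-populate idea in Forms PL/SQL (block and item names such as LINE_DETAILS, LINE_ID and PRIORITY are placeholders, and only two items are copied to keep the sketch short):

  DECLARE
    -- One field per item of the details block; trimmed to two here.
    TYPE t_detail IS RECORD (
      line_id  NUMBER,
      priority NUMBER
    );
    TYPE t_details IS TABLE OF t_detail INDEX BY PLS_INTEGER;
    l_details t_details;
    i         PLS_INTEGER := 0;
  BEGIN
    -- 1. Copy every record of the block into the PL/SQL table.
    GO_BLOCK('LINE_DETAILS');
    FIRST_RECORD;
    LOOP
      i := i + 1;
      l_details(i).line_id  := :LINE_DETAILS.LINE_ID;
      l_details(i).priority := :LINE_DETAILS.PRIORITY;
      EXIT WHEN :SYSTEM.LAST_RECORD = 'TRUE';
      NEXT_RECORD;
    END LOOP;

    -- 2. Adjust the priorities in l_details here
    --    (current priority +/- the number of Up/Down clicks).

    -- 3. Re-populate the block; nothing is committed yet.
    CLEAR_BLOCK(NO_VALIDATE);
    FOR j IN 1 .. l_details.COUNT LOOP
      IF j > 1 THEN
        CREATE_RECORD;
      END IF;
      :LINE_DETAILS.LINE_ID  := l_details(j).line_id;
      :LINE_DETAILS.PRIORITY := l_details(j).priority;
    END LOOP;
    FIRST_RECORD;
  END;

Re-populating the block this way marks the rows as new inserts, so the saving logic has to allow for that; keeping the rows in place and simply sorting on the Priority column via the block's ORDER BY clause, as suggested above, is often the simpler route.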