I'm using Kubernetes with kube-state-metrics and Prometheus/Grafana to graph various metrics of the Kubernetes cluster.
Now I'd like to graph how many new pods have been created per hour over time.
The metric kube_pod_created contains the creation timestamp as its value, but since there is a value in every time slot, the following query also returns results > 0 for time slots in which no new pods have been created:
count(rate(kube_pod_created[1h])) by(namespace)
Can I use the value in some sort of criterion so that a pod is only counted if its value falls within the "current" time slot?
Pods created in the past hour:
count ( (time() - sum by (pod) (kube_pod_created)) < 60*60 )
As per the docs at https://prometheus.io/docs/prometheus/latest/querying/functions/, rate() should be used with counters only. I suggest you use the changes() function instead, since the creation-time value changes within your time frame when a pod is created; sum may also be a better fit than count here.
changes()
For each input time series, changes(v range-vector) returns the number of times its value has changed within the provided time range as an instant vector.
sum(changes(kube_pod_created[1h])) by(namespace)
The following query returns the number of pods created during the last hour:
count(last_over_time(kube_pod_created[1h]) > time() - 3600)
How does it work?
The last_over_time(kube_pod_created[1h]) returns creation timestamps for pods that were active during the last hour (see last_over_time() docs). This includes pods that may have been started a long time ago and are still active, alongside pods that were created during the last hour.
We need to filter out pods that were created more than an hour ago. This is done by comparing pod creation timestamps to time() - 3600 (see time() docs). The comparison removes time series for pods created more than an hour ago. See these docs for details on how comparison operators work in PromQL.
The outer count() then returns the number of remaining time series, which equals the number of pods created during the last hour.
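If you want to sanity-check this outside of Grafana, here is a minimal Python sketch that runs it as an instant query against the Prometheus HTTP API; the Prometheus address is an assumption, adjust it to your setup:
# Run the instant query via the Prometheus HTTP API and print the count.
import requests

PROMETHEUS_URL = "http://localhost:9090"  # assumed address of your Prometheus server
QUERY = 'count(last_over_time(kube_pod_created[1h]) > time() - 3600)'

resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": QUERY})
resp.raise_for_status()
result = resp.json()["data"]["result"]

# An empty result vector means no pods were created during the last hour.
created = int(float(result[0]["value"][1])) if result else 0
print(f"Pods created during the last hour: {created}")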
Related
I have 2 queries in a grafana panel.
I want to run Query A every 5 min
I want to run Query B every 10 min, so I can check the value difference between each query using transform.
How can I set the query interval? I know I can change the scrape interval, but my goal here is to check pending messages and trigger an alert if the count doesn't change within 10 minutes. I am trying to get a count at the 1st minute and again at the 10th minute, check the difference using a transform, and trigger an alert if there is no change (messages are not getting processed).
Using Grafana 7.
Thanks!
I am new to Grafana monitoring and not quite familiar with PromQL. I want to retrieve a table of pod names and creation times from a Kubernetes cluster. How do I retrieve this info using PromQL?
For instance, the table would have the pod name nginx-xxxxxxxx-xxxxxx and the creation time as HH:MM:SS, or any other suitable format.
You can use the metric kube_pod_created, which returns the creation time as a Unix timestamp in seconds. Multiply the value by 1000 (Grafana's date formats expect milliseconds) and select a date format for the column.
kube_pod_created{pod="nginx-xxx"} * 1000
You can find more here: https://github.com/grafana/grafana/issues/6297.
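If it helps to see what the raw values look like, here is a rough Python sketch that pulls kube_pod_created and prints each pod name with its creation time; it only illustrates that the metric value is a Unix timestamp in seconds (the Prometheus address is an assumption):
# Print pod names with human-readable creation times from kube_pod_created.
from datetime import datetime, timezone
import requests

PROMETHEUS_URL = "http://localhost:9090"  # assumed address of your Prometheus server

resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": "kube_pod_created"})
resp.raise_for_status()

for series in resp.json()["data"]["result"]:
    pod = series["metric"].get("pod", "<unknown>")
    created_ts = float(series["value"][1])  # metric value = creation time, Unix seconds
    print(pod, datetime.fromtimestamp(created_ts, tz=timezone.utc).strftime("%H:%M:%S"))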
Is it possible in Grafana with a Prometheus backend to determine the highest value recorded for the lifetime of a data set, and if so, the time at which that value occurred?
For example, I'm using site_logged_in as the query in a Singlestat panel to get the current number of logged-in users, along with a nice graph of recent activity over the past hour. Wrapping that in max() seems to do nothing, and max_over_time(site_logged_in[1y]) gives me a far too low number.
The value is a single gauge coming from the endpoint, like so:
# HELP site_logged_in Logged In Members
# TYPE site_logged_in gauge
site_logged_in 583
Is something like determining the highest value even a realistic use case for Prometheus?
max_over_time(site_logged_in[1y]) is the max over the past year; however, this presumes that you have a year's worth of data to work from.
The highest value over a specified time range can be obtained with the max_over_time() function. For example, the following query returns the maximum value of the site_logged_in metric over the last year:
max_over_time(site_logged_in[1y])
Unfortunately, Prometheus doesn't provide a function for returning the timestamp of the maximum value. If you need that timestamp, you can use the tmax_over_time() function from MetricsQL. For example, the following MetricsQL query returns the timestamp in seconds of the maximum value of the site_logged_in metric over the last year:
tmax_over_time(site_logged_in[1y])
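If you are on plain Prometheus (without MetricsQL), one possible workaround is to fetch the raw samples with a range query and locate the maximum and its timestamp client-side. A rough sketch follows; the Prometheus address, look-back window, and step are assumptions (Prometheus caps range queries at 11,000 points per series, so a long window needs a coarse step):
# Find the highest site_logged_in value and when it occurred via a range query.
import time
from datetime import datetime, timezone
import requests

PROMETHEUS_URL = "http://localhost:9090"  # assumed address of your Prometheus server
end = time.time()
start = end - 30 * 24 * 3600  # assumed look-back window of 30 days

resp = requests.get(
    f"{PROMETHEUS_URL}/api/v1/query_range",
    params={"query": "site_logged_in", "start": start, "end": end, "step": "10m"},
)
resp.raise_for_status()

best_value, best_ts = None, None
for series in resp.json()["data"]["result"]:
    for ts, value in series["values"]:
        if best_value is None or float(value) > best_value:
            best_value, best_ts = float(value), ts

if best_ts is not None:
    print(best_value, "at", datetime.fromtimestamp(best_ts, tz=timezone.utc))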
I've created a metric that counts unique IDs in a data set over the previous day and am scraping it into Prometheus. So say yesterday's count was 100; this value will be reported throughout today until tomorrow, when today's tally will be computed and then reported throughout tomorrow.
So far so good. But when I display the value 100 in Grafana, it shows up with today's date, even though the value really belongs to yesterday.
Is there a way to simply offset the x axis in Grafana by -1d to make the dates and values align again, i.e. to change the scrape date to the 'value' date, if you will?
I know there's a 'time shift' in Grafana, but that will just offset the scrape date. I'm also aware of Prometheus' 'offset' operator, which will do the same.
What I'm looking for is simply to tell Grafana that it should display 'now' as 'now-1d'.
I've found a setting on the dashboard level that is labeled "Now delay now-". However, this also doesn't shift the x axis and does nothing to change the display.
Grafana version 4.1.1, Prometheus version 1.5.3
I want to create analytics using Redis: basic counters per object, per hour/day/week/month/year, and a total.
What Redis data structure would be effective for this, and how can I avoid making many calls to Redis?
Would it be better to have each model use this set of keys:
hash - model:<id>:years => every year has a counter
hash - model:<id>:<year> => every month has a counter
hash - model:<id>:<year>:<month> => every day has a counter
hash - model:<id>:<year>:<month>:<day> => every hour has a counter
If this scheme is correct, how would I chart this data without making many calls to Redis? Would I have to loop over all the years in model:<id>:years and fetch the months, then loop over the months, etc.? Or should I just grab all fields and their values from all keys as a batch request and then process that on the server?
It's better to use a zset (sorted set) for this instead of a hash. Using the timestamp as the score, you will be able to retrieve data for a specific time range.
For a date range you would use model:<id>:<year>:<month>, for an hour range model:<id>:<year>:<month>:<day>, and so on...
Indeed, if the date range spans more than a month (e.g. from January 1st 2014 to March 20th 2014), you will have to retrieve multiple zsets (model:<id>:2014:01, model:<id>:2014:02 and model:<id>:2014:03) and merge the results.
If you really want to handle such a date range in a single request, you can always store day-precision data inside model:<id>:<year>. And if you want to handle date ranges spanning multiple years, you just need a single zset, e.g. model:<id>:byDay.
However, please note that storing historical data will increase memory consumption over time, so you should already think about data retention. With Redis you can either use EXPIRE on the zsets or handle it yourself with cron jobs.
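To make the zset idea concrete, here is a minimal redis-py sketch; the key naming, event-id scheme, and retention period are assumptions for illustration, not a drop-in design:
# Record each event with its Unix timestamp as the zset score, then count
# events in any time range with a single ZCOUNT call.
import time
import redis

r = redis.Redis()  # assumes a local Redis instance on the default port

def record_event(model_id, event_id, ts=None):
    # One zset per model spanning all dates (the model:<id>:byDay idea above).
    key = f"model:{model_id}:byDay"
    r.zadd(key, {event_id: ts or time.time()})
    # Crude retention: the key expires one year after the last recorded event.
    r.expire(key, 365 * 24 * 3600)

def count_events(model_id, start_ts, end_ts):
    # ZCOUNT returns how many members have a score within [start_ts, end_ts].
    return r.zcount(f"model:{model_id}:byDay", start_ts, end_ts)

# Example: how many events did model 42 receive during the last hour?
now = time.time()
print(count_events(42, now - 3600, now))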