I have latitude/longitude values being written to an InfluxDB instance under different topics. They are delivered via MQTT, e.g. the topics are foo/bar/lat and foo/bar/lon with the coordinate values as float strings in the message payload. These are then processed sequentially by Telegraf.
So each positioning event results in a 2-tuple of entries in different series that have slightly different timestamps. The ∆ is typically below 1 millisecond whereas the events are several seconds (or more) apart due to the processing in Telegraf.
When visualizing this in Grafana using a Time Series or (old) Graph panel, the difference does not matter, but the Geomap panel does not recognize the lat/lon values as actual coordinate pairs, so no data are displayed.
How can I write the queries (or a single query?) such that the timestamps are matched? Or can I prevent the timestamps from differing when being processed in Telegraf? There is no actual timestamp in the data sent over MQTT.
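One single-query option, if the datasource supports Flux (InfluxDB 1.8+/2.x), is to round both series' timestamps to a common unit and then pivot them into one row per event. This is only a sketch under assumptions: the bucket name is made up, Telegraf's MQTT consumer is writing into the default mqtt_consumer measurement, and the topic ends up as a tag:

from(bucket: "mybucket")  // hypothetical bucket name
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r._measurement == "mqtt_consumer")
  |> filter(fn: (r) => r.topic == "foo/bar/lat" or r.topic == "foo/bar/lon")
  |> truncateTimeColumn(unit: 1s)  // collapses the sub-millisecond ∆
  |> pivot(rowKey: ["_time"], columnKey: ["topic"], valueColumn: "_value")
  |> rename(columns: {"foo/bar/lat": "lat", "foo/bar/lon": "lon"})

Note that a pair can still straddle a second boundary; since the events are several seconds apart, truncating to a coarser unit gives more margin.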
I need to plot trend charts in a React app based on user inputs such as timestamps, devices, etc. I have the related time series data in DynamoDB and S3 (which I can query using Athena).
Returning all those millions of data points for a graph seems unreasonable and is super laggy.
I guess one option is "binning", where I decide the number of bins based on how big the time range is and take the average of the readings in each bin. However, I'm concerned about how well that will preserve the drops and spikes, which we need to show accurately.
Athena queries and DynamoDB queries (due to the 1 MB page limit) both seem fairly slow so far.
Of course, the size of the response payload is another concern, as API Gateway and Lambda limit it to 10 MB and 6 MB respectively.
Any ideas?
I can't suggest anything smarter than "binning", but if you are concerned that the bucket interval might become too wide and performance might suffer, you can fix the interval and create more than one table. For example, the interval can be 1 hour and you can have a new table for each week.
This is what we did when we had to deal with time series in DynamoDB. At some point, we decided to switch to Amazon Timestream.
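For what it's worth, here is a minimal sketch of such fixed-interval binning in Athena (Presto SQL; the table and column names are hypothetical), keeping min/max next to the average so drops and spikes survive the downsampling:

SELECT
  from_unixtime(floor(to_unixtime(ts) / 3600) * 3600) AS bucket,  -- fixed 1-hour bins
  avg(reading) AS avg_reading,
  min(reading) AS min_reading,  -- keep the extremes so they are not averaged away
  max(reading) AS max_reading
FROM sensor_readings
WHERE device_id = 'dev-1'
  AND ts BETWEEN timestamp '2024-01-01 00:00:00' AND timestamp '2024-02-01 00:00:00'
GROUP BY 1
ORDER BY 1

At one row per hour, even a month-long range comes back as a few hundred rows, comfortably below the API Gateway and Lambda payload limits.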
I want to monitor the CPU usage of a Kafka container, but the graph is chopped up into different pieces. There seem to be gaps in the graph, and after each gap a line in a different color follows. The time range is the last 30 days. For the exporter we use danielqsj/kafka-exporter:v1.4.2
The promql query used to create this graph is:
rate(container_cpu_usage_seconds_total{container="cp-kafka-broker"}[1m])
Can I merge these lines into one continuous line? If so, with what PromQL expression or dashboard configuration?
This happens when at least one of the labels attached to the metric changes. The rate function keeps all the original labels from the underlying time series. In Prometheus, each time series is uniquely identified by the metric name (container_cpu_usage_seconds_total) and the labels (key-value pairs) attached to it (container, for instance). This is why Grafana uses different colors: they are different time series.
If you want to get a single series in Grafana you can aggregate using the sum operator:
sum(rate(container_cpu_usage_seconds_total{container="cp-kafka-broker"}[1m]))
which by default will not keep any of the original labels.
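If you would rather keep one line per pod or node instead of a single total, you can aggregate over a label subset instead; this assumes those labels are present on the metric, as is typical for cAdvisor metrics:

sum by (pod) (rate(container_cpu_usage_seconds_total{container="cp-kafka-broker"}[1m]))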
I'm logging custom metric data into AWS CloudWatch and trying to graph it. I assumed that dimensions in CloudWatch were metadata for enriching my data, but it seems that once you add dimensions you can no longer query across different combinations of them. For one thing, I don't really see the point of dimensions, as any unique combination is basically just a new metric. But more importantly: is there a way to log one set of data with different labels or dimensions and then slice and dice that data (e.g., in Grafana)?
To make it more concrete, I am logging cache load times in my application. I have one metric called "cache-miss", with several dimensions, for example:
the cached collection
the customer associated with the cached data
I want several different graphs:
Total cache misses (i.e., ignore dimensions, just see a count over time)
Total cache misses per collection (aggregate by first dimension)
Total cache misses per customer (aggregate by second dimension)
Is there some way to achieve this with Cloudwatch metrics and/or Grafana (or alternate tool)?
As mentioned in the CloudWatch concepts documentation (https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html):
CloudWatch treats each unique combination of dimensions as a separate metric, even if the metrics have the same metric name. You can only retrieve statistics using combinations of dimensions that you specifically published. When you retrieve statistics, specify the same values for the namespace, metric name, and dimension parameters that were used when the metrics were created.
So if you have pushed your cache-miss metric with two dimensions, you can query it only with those same two dimensions, which means you really can't just see a count over time.
Possible workarounds:
CloudWatch metric math - see the example in "CloudWatch does not aggregate across dimensions for your custom metrics", and the sketch after this list
in theory, also the Grafana 7+ transformations feature: https://grafana.com/blog/2020/06/11/new-in-grafana-7.0-data-transformations-for-all-visualizations-that-support-queries/
Or you can switch from CloudWatch to a TSDB better suited to your use case.
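As a hedged illustration of the metric math workaround (the MyApp namespace and the collection/customer dimension names are assumptions), a SEARCH expression can pull in every published dimension combination, and SUM collapses them into the single "count over time" series:

SUM(SEARCH('{MyApp,collection,customer} MetricName="cache-miss"', 'Sum', 300))

Adding e.g. collection="users" inside the search expression narrows the same trick down to one collection.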
I'm looking for a feature in Grafana that seems like it should be trivial, but so far I haven't been able to find out how, or whether, it is possible.
With the recent templating options, I can easily create my dashboard once, and quickly change the displayed data to look at different subsets of my data, and that's great.
What I'm looking for is a way to combine this functionality to create interactive graphs that show aggregations on different subsets of my data.
E.g., the relevant measurement for me is a "clicks per views" measurement.
For each point in the series, I can calculate this ratio for each state (or node) in code before sending it to the graphite layer, and this is what I've been doing until now.
My problem starts where I want to combine several states together, interactively: I could use the "*" in one of the nodes, and use an aggregate function like "avg" or "sum" to collect the different values covered in the sub-nodes together.
Problem is, I can't just use an average of averages: as the numbers may be calculated on very different sample sizes, the results would be highly inaccurate.
Instead, I'd like to send Graphite the "raw data" - the number of clicks and the number of views per state for each point in the series - and have Grafana calculate something like "for the specified states, take the aggregate number of clicks AND DIVIDE BY the aggregate number of views".
Is there a way to do this? As far as I can tell, the asPercent function doesn't seem to do the trick.
You can use a query like this (InfluxQL syntax, using sum so the ratio is computed over the raw totals) in the panel's edit mode:
SELECT sum(number_of_clicks) / sum(number_of_views) AS result
FROM measurement_name
WHERE $timeFilter
GROUP BY time($__interval), "state"
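If the data lives in Graphite (as the question suggests) rather than InfluxDB, the same "aggregate first, divide second" idea can be expressed with sumSeries and divideSeries; the metric paths here are hypothetical:

divideSeries(sumSeries(app.$state.clicks), sumSeries(app.$state.views))

With a multi-value $state template variable, Grafana expands this to e.g. app.{NY,CA}.clicks, so the ratio is computed over the combined raw counts rather than as an average of per-state averages.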
I have a time series and I'd like to:
a) Know how Jasper Reports (or JFreeChart) will combine my data for a single point on the chart by default
and
b) Be able to change how that combination is performed
For instance, let's say that I have samples of data once per second, and my time series is configured for "minute". That means that I have 60 pieces of real data for each single value shown on the chart. I'd like to be able to control how that mapping is done (e.g. average, maximum, etc.).
I looked around for documentation on how to see the default or modify how the plot works, but I wasn't able to find anything. Perhaps my search terms (chart, time series, etc.) were too generic.
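For reference, I believe JFreeChart's TimeSeries does not aggregate at all by default: add() throws on a duplicate time period and addOrUpdate() simply overwrites, so effectively the last sample in a minute wins. Here is a sketch of pre-aggregating the raw per-second samples yourself before charting (the class, method, and series names are mine):

import java.util.Date;
import java.util.LinkedHashMap;
import java.util.Map;
import org.jfree.data.time.Minute;
import org.jfree.data.time.TimeSeries;

class MinuteAggregator {
    // Bucket per-second samples into minutes and add one value per minute;
    // replace the average with max/min to change how the mapping is done.
    static TimeSeries aggregatePerMinute(Map<Date, Double> samples) {
        Map<Minute, double[]> buckets = new LinkedHashMap<>();  // {sum, count} per minute
        for (Map.Entry<Date, Double> e : samples.entrySet()) {
            double[] acc = buckets.computeIfAbsent(new Minute(e.getKey()), k -> new double[2]);
            acc[0] += e.getValue();  // running sum
            acc[1]++;                // sample count
        }
        TimeSeries series = new TimeSeries("aggregated");  // hypothetical series name
        for (Map.Entry<Minute, double[]> e : buckets.entrySet()) {
            series.add(e.getKey(), e.getValue()[0] / e.getValue()[1]);  // average
        }
        return series;
    }
}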