I am getting an issue while storing data in InfluxDB. The logs indicate the data is being written, but when I look in InfluxDB I cannot find it in the measurement. I thought it might be a database size limit, but data is being inserted into other measurements without any problem. Does anyone have an idea how to check the size of a measurement, or any other suggestions for solving this issue?
Thanks in advance.
If you are on InfluxDB v1.x, you could use influx_inspect's report-disk.
If you are on InfluxDB v2.x, you can take advantage of the internal stats as follows:
from(bucket: "yourBucket")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_measurement"] == "storage_shard_disk_size")
|> filter(fn: (r) => r["_field"] == "gauge")
|> last()
I have data stored in influxdb recording when my heating and cooling systems turn on and off. It forms a table like:
Time, furnace on (boolean), fan on (boolean), cooling on (boolean)
But there are only entries when the state changes. I’m having trouble transforming the data into the forms I want:
1. On/off intervals, so I can annotate Grafana temperature graphs
2. The total time active over arbitrary windows, to calculate carbon used or estimate my house’s insulation
What’s the idiomatic way of doing this in Flux?
I tried reading the documentation to better understand Flux’s functional philosophy.
I looked into using “reduce” but I can’t figure out how to pass more state than a running total.
The SQL support for InfluxDB doesn’t have the advanced features necessary to express these queries.
You could try the stateDuration function. Note that its column parameter names the output column the duration is written to (it defaults to "stateDuration"), so filter down to the field you want first:
from(bucket: "yourBucket")
|> range(start: -5m)
|> filter(fn: (r) => r["_field"] == "furnace_on")
|> stateDuration(fn: (r) => r._value == true, unit: 1s) // seconds the furnace has been on over the past 5 minutes
from(bucket: "yourBucket")
|> range(start: -5m)
|> filter(fn: (r) => r["_field"] == "fan_on")
|> stateDuration(fn: (r) => r._value == true, unit: 1s) // seconds the fan has been on over the past 5 minutes
from(bucket: "yourBucket")
|> range(start: -5m)
|> filter(fn: (r) => r["_field"] == "cooling_on")
|> stateDuration(fn: (r) => r._value == true, unit: 1s) // seconds cooling has been on over the past 5 minutes
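If you also need the on/off intervals themselves (e.g. for Grafana annotations), or totals over arbitrary windows, one option is to pull the raw state-change rows out with a client and pair them up yourself. A minimal client-side sketch in Python, with hypothetical sample data:

```python
from datetime import datetime, timedelta

def on_intervals(events):
    """Pair state-change events (timestamp, bool) into (on, off) intervals.

    Assumes events are sorted by time; a trailing "on" with no matching
    "off" is left open-ended (None).
    """
    intervals, started = [], None
    for ts, is_on in events:
        if is_on and started is None:
            started = ts
        elif not is_on and started is not None:
            intervals.append((started, ts))
            started = None
    if started is not None:
        intervals.append((started, None))
    return intervals

def total_on_time(intervals, start, stop):
    """Total active time within [start, stop), clipping partial intervals."""
    total = timedelta()
    for s, e in intervals:
        e = e or stop  # treat an open interval as running until `stop`
        s, e = max(s, start), min(e, stop)
        if s < e:
            total += e - s
    return total

# Hypothetical furnace state changes for one day:
events = [
    (datetime(2024, 1, 1, 6, 0), True),
    (datetime(2024, 1, 1, 7, 30), False),
    (datetime(2024, 1, 1, 18, 0), True),
    (datetime(2024, 1, 1, 19, 0), False),
]
ivals = on_intervals(events)
print(total_on_time(ivals, datetime(2024, 1, 1), datetime(2024, 1, 2)))  # 2:30:00
```

The clipping in total_on_time is what lets you ask about arbitrary windows, even ones that cut an interval in half.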
I have an InfluxDB 4.5.0 database running on Home Assistant 2022.8.7.
I want to plot two InfluxDB queries on the same Grafana 7.6.0 graph, but one series is timeshifted by +24hrs.
After several hours of research I see it is possible to timeshift all the series on a Grafana panel using the "Query Options", but I can find no way to timeshift just one of the series.
I note that there is a timeShift function in InfluxDB, but I am stumped as to how to modify the query in Grafana to shift it by +24hrs.
As an example, if the series I want to timeshift is given by the query
SELECT mean("value") FROM "kWh" WHERE ("entity_id" = 'energy_tomorrow') AND time >= now() - 7d and time <= now() GROUP BY time(5m) fill(linear)
is there any way to modify this query to timeshift the result by +24h, or alternatively, what other method is available to achieve this basic result in Grafana with InfluxDB?
Thanks in advance.
After many hours of trying I've finally found a solution, and post it here for any others with the same problem.
There are two ways of connecting InfluxDB to Grafana on Home Assistant:
1. InfluxQL: an SQL-like query language, and the default connection method
2. Flux: InfluxData’s functional data scripting language
It does not seem possible to timeshift with (1), but there is a function in (2) which allows timeshifting. So the solution is to add a new data source in Grafana, through the UI:
Configuration : Data Sources : Add Data Source : InfluxDB
Give this new data source a recognisable name (eg FluxQuery) and configure it to use Flux instead of the default InfluxQL. Then, when adding a new panel in Grafana, provided you select the appropriate data source (eg FluxQuery), Flux querying is enabled and timeshifting is possible.
As an example, suppose the InfluxDB database named HomeAssistant includes two entity_ids called energy_today and energy_tomorrow. The following Flux queries plot energy_today timeshifted +24hrs so it overlays correctly on energy_tomorrow:
Query A
from(bucket: "HomeAssistant/autogen")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_measurement"] == "kWh")
|> filter(fn: (r) => r["entity_id"] == "energy_tomorrow")
Query B
from(bucket: "HomeAssistant/autogen")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> timeShift(duration: 24h)
|> filter(fn: (r) => r["_measurement"] == "kWh")
|> filter(fn: (r) => r["entity_id"] == "energy_today")
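For what it's worth, timeShift(duration: 24h) simply adds 24 hours to every _time value, so yesterday's points land on today's timestamps and the two series line up on the same x-axis. The same idea in a quick Python sketch (the timestamps and values are hypothetical):

```python
from datetime import datetime, timedelta

# Hypothetical energy_today points:
points = [(datetime(2024, 1, 1, 12, 0), 3.2), (datetime(2024, 1, 1, 13, 0), 3.5)]

# What timeShift(duration: 24h) does: shift timestamps, leave values alone.
shifted = [(ts + timedelta(hours=24), v) for ts, v in points]
print(shifted[0][0])  # 2024-01-02 12:00:00
```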
I have a dashboard in grafana (v 8.4.4) that uses InfluxDB with Flux query which looks something like this:
from(bucket: "landscape_sizing")
|> range(start: ${__from}, stop: ${__to})
|> filter(fn: (r) => r["_measurement"] == "old_snapshots")
If I select a range like Last 7 days or Last 90 days, I get no data in the dashboard. If I select an absolute time range with fixed dates/times, data shows up. The fun part is that this query used to work for a while, and as far as I'm aware there were no major changes on either the Grafana or the Influx side. Is there a way to check what the __from and __to variables in the query are interpreted as?
You should be able to see that in Grafana, via the panel's Inspect -> Query; that shows you the actual query text.
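One thing worth checking once you can see the interpolated text: Grafana's ${__from}/${__to} interpolate as epoch milliseconds, and if the query engine reads those numbers expecting a different unit, the resulting range can be empty. A quick Python sanity check of what a given value decodes to (the sample value is hypothetical):

```python
from datetime import datetime, timezone

raw = 1700000000000  # a hypothetical interpolated ${__from}: epoch *milliseconds*
dt = datetime.fromtimestamp(raw / 1000, tz=timezone.utc)  # divide by 1000 for seconds
print(dt)  # 2023-11-14 22:13:20+00:00
```

If the decoded datetime is nowhere near your dashboard's time range, the unit (or the variable) is the problem.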
Not sure I got my title right, apologies.
Recently discovered InfluxDB 2.0 and Grafana 7, a vast improvement on the previous versions.
I wondered if something is possible: I have a system that posts to InfluxDB the time it took to do a task. Is it possible to count the number of entries for the last 30 days (or, ideally, the calendar month) and display it as a gauge or text in Grafana?
The Flux syntax is not like anything I have seen before, so I have no idea where to start, and any obvious googling I have done doesn't seem to bear fruit.
Could be I need to collect the data via Python, work it out, and post it to a new measurement, but that seems kludgey.
Thanks
I finally figured this out.
In order to count, you need to group the data and then keep only the columns you need. For my table I grouped, kept the "serial" column, then ran a count on it. The range -1mo covers the past month:
from(bucket: "provisioning")
|> range(start: -1mo, stop: now())
|> filter(fn: (r) => r["_measurement"] == "provision")
|> filter(fn: (r) => r["_field"] == "ttl")
|> group()
|> keep(columns: ["serial"])
|> count(column: "serial")
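The group()/keep()/count() pipeline above is essentially: merge all series into one table, throw away every column except one, then count the rows. The same logic client-side, as a Python sketch over hypothetical rows:

```python
# Hypothetical rows as the query would return them before group():
rows = [
    {"serial": "A1", "ttl": 12.0},
    {"serial": "B2", "ttl": 9.5},
    {"serial": "C3", "ttl": 11.1},
]
kept = [{"serial": r["serial"]} for r in rows]            # keep(columns: ["serial"])
count = sum(1 for r in kept if r["serial"] is not None)   # count(column: "serial")
print(count)  # 3
```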
I am hitting a wall at the moment. I am working with Elixir and Ecto, and I have a table of data with a Datetime column.
The Datetime column is as follows:
2017-11-16 16:02:01
2017-11-23 09:00:07
2017-11-27 13:19:58
2017-12-05 07:48:42
What I want to do is sort this table based on time, instead of date. So the result(ASC) would be:
2017-12-05 07:48:42
2017-11-23 09:00:07
2017-11-27 13:19:58
2017-11-16 16:02:01
Do you guys have any ideas how to do this in Ecto? A Postgres query might also help!
So I think something like this should work:
Post
|> order_by([p], fragment("?::time", p.inserted_at))
|> Repo.all()
Here you use a fragment in order to leverage PostgreSQL's casting mechanism, which extracts the time part from the datetime. I have not tested it, but I guess it should work.
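The ?::time cast makes PostgreSQL compare only the time-of-day part, ignoring the date. The same ordering expressed as a quick Python sketch over the timestamps from the question:

```python
from datetime import datetime

rows = [
    datetime(2017, 11, 16, 16, 2, 1),
    datetime(2017, 11, 23, 9, 0, 7),
    datetime(2017, 11, 27, 13, 19, 58),
    datetime(2017, 12, 5, 7, 48, 42),
]
by_time = sorted(rows, key=lambda d: d.time())  # the ::time cast, in spirit
print(by_time[0])  # 2017-12-05 07:48:42
```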
If the accepted answer won't work, try the keyword syntax with the same cast:
from(p in Post, order_by: fragment("?::time", p.inserted_at))