InfluxDB and Node-RED - Grafana

It has been quite a while since I last coded anything, and it is my first time dealing with InfluxDB and Node-RED.
I am acquiring four sets of measurements from a sensor connected to a Pi. This is a screenshot taken during debugging; the measurements are coming through.
I managed to get the data from the sensors into Node-RED:
The problems I am facing are:
- how to structure the table (measurements) in InfluxDB and get the data into the right columns;
- how/where to define the sample interval, to avoid ending up with millions of data points in the DB?
I will later try to connect the DB to Grafana; this is all new to me.
Any help is appreciated.

First, add a function node after each sensor node and save the output as a flow variable. The code will vary greatly depending on how you are getting your sensor data, but here is how I do it:
msg.payload = Number(msg.payload);           // make sure the reading is numeric
flow.set("presion_agua_psi", msg.payload);   // latest reading, kept in flow context
flow.set("sensor_presion_agua", "Wemos D1"); // which device it came from
return { payload: msg.payload };
In the example below, I am using MQTT to send the sensor data.
Then, separately, use an inject node set to repeat every xx minutes. This interval is the rate at which you actually save data into InfluxDB, so it also caps how many rows end up in the database.
After the inject node, add a function node that builds a dictionary (object) from the variable names and their values. This makes sure your columns in InfluxDB are stored under a name.
Once again, the code will vary, but here is an example:
msg.payload = {
    Timestamp: new Date(),                 // or let InfluxDB timestamp the point itself
    Device: flow.get("sensor_nivel_agua"),
    Nivel_Agua_Tinaco: flow.get("Agua_Tinaco")
};
return msg;
Finally, add an influxdb out node with your InfluxDB credentials, plus a debug node to make sure the data is being stored correctly.
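Pulling the pattern together, the function feeding the influxdb out node could be sketched like this (the flow-variable and field names are just examples following the water-pressure variables above, and the wrapper function only exists so the logic can run outside Node-RED; in a real function node you would call flow.get(...) directly and return msg):

```javascript
// Hypothetical sketch: build the record the influxdb out node will write.
// Each key in payload becomes a column (field) in the measurement.
function buildRecord(flowCtx) {
    return {
        payload: {
            Device: flowCtx.get("sensor_presion_agua"),       // device name field
            Presion_Agua_PSI: flowCtx.get("presion_agua_psi") // numeric reading field
        }
    };
}
```

Because only the inject node triggers this function, the database receives one row per interval no matter how fast the sensors publish.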

Related

Graphite: keepLastValue for a given time period instead of number of points

I'm using Graphite + Grafana to monitor (by sampling) queue lengths in a test system at work. The data that gets uploaded to Graphite is grouped into different series/metrics by properties of the payloads in the queue. These properties can be somewhat arbitrary, at least to the point where they are not all known at the time when the data collection script is run.
For example, a property could be the project that the payload belongs to and this could be uploaded as a separate series/metric so that we can monitor the queues broken down by the different projects.
This has the consequence that Graphite records a lot of null values for certain metrics if the queues in the test system did not contain any payloads with properties that would group them into that specific series/metric. For example, if a certain project did not have any payloads in the queue at the time the data collection was run.
In Grafana this is not so nice as the line graphs don't show up as connected lines and gauges will show either null or the last non-null value.
For line graphs I can just choose to connect null values in Grafana, but for gauges that's not possible.
I know about the keepLastValue function in Graphite. It includes a limit on how long to keep the value, which suits me well, as I only want to keep the last value until the next time data collection is run. Data collection runs periodically at known intervals.
The problem with keepLastValue is that it expects this limit as a number of points. I would rather give it a time period. In Grafana the relationship between time and data points is very dynamic, so it's not easy to hard-code a good limit for keepLastValue.
Thus, my question is: Is there a way to tell Graphite to keep the last value for a given time instead of a given number of points?

Node-red dynamic RBE for multiple sensors

I want to write data into the DB only if it has changed. For that I've used Switch + RBE nodes.
What I would like to achieve is a dynamic number of sensors. The switch separates messages by the sensor's MAC address. The payload into the node "by sensor" looks like this:
msg.payload = {"tmp":22.8,"hum":36,"batt":73,"mac":"a4c1382665a7"}
So my goal is to write data into the database only if it has changed. How could I make the marked area 'dynamic', so I could easily add new sensors without changing the Node-RED workflow?
RBE runs a separate channel for each msg.topic, so as long as each sensor uses a different topic they will be filtered accordingly.
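One way to make that dynamic is a small function node before the RBE node that copies the MAC into msg.topic, so RBE automatically keeps one channel per sensor with no per-sensor wiring. A sketch, assuming the payload shape shown in the question (wrapped in a function only so it can run standalone; the body is what would go in the function node):

```javascript
// Hypothetical function-node body: set msg.topic from the sensor's MAC
// so the downstream RBE node filters each sensor independently.
function routeBySensor(msg) {
    msg.topic = msg.payload.mac; // e.g. "a4c1382665a7"
    return msg;
}
```

New sensors then work without touching the flow: any new MAC simply becomes a new RBE channel.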

How do we change the "precision:ms" setting in the Grafana Query Inspector?

I have an InfluxDB database with only 11 data points in it. These data are not displaying correctly (or at least as I would expect) in Grafana when the time between them is shorter than 1 ms.
If I insert data points 1 ms apart, then everything works as expected and I see all 11 points at the correct times, as shown below:
However, if I delete these points and upload new ones, this time one point per 100 μs, then although the data displays correctly in InfluxDB, in Grafana I see only two points in my graph:
It seems like the data is being rounded/binned to the nearest millisecond, and that this is related to the “precision=ms” setting in the query here:
but I cannot find any way to change this setting. What is the correct way to fix this?
You can't configure Grafana to use a different time precision for the InfluxDB datasource. It is hardcoded in the source code: https://github.com/grafana/grafana/blob/36fd746c5df1438f27aa33fc74b24be77debc7ff/public/app/plugins/datasource/influxdb/datasource.ts#L364 (It may need to be fixed in multiple places in the source, not only this one.)
So the correct way to fix it is to code it, which is of course not in the scope of this question.

Node Red MongoDB

I have sensor data from MongoLab in Node-RED and I want to visualize this data on the Node-RED dashboard in the form of a gauge or chart.
Data from the MongoLab collection looks like this:
[{"_id":"5947e34de8fef902920defd8","sensorId":"5947340048225508","value":34,"date":"2017-06-19T14:44:29.000Z"},{"_id":"5947e34e6737e202b54f0a62","sensorId":"13359295204302776","value":25,"date":"2017-06-19T14:44:30.000Z"},{"_id":"5947e352e8fef902920defdc","sensorId":"5947340048225508","value":37,"date":"2017-06-19T14:44:34.000Z"},{"_id":"5947e3536737e202b54f0a66","sensorId":"13359295204302776","value":24,"date":"2017-06-19T14:44:35.000Z"}]
I want to visualize the values grouped by sensorId. Is there any way I can visualize this data using Node-RED?
The function node uses the following JavaScript:
msg.headers = {"Content-Type":"application/json"};
return msg;
My intention is to visualize the sensor value on the ui_gauge or chart.
Make a gauge/chart for each of the unique data streams you want to reflect in the UI/dashboard.
Then wire the output to another function node that moves the value into msg.payload, and from that function, connect it to the corresponding dashboard gauge.
A gauge will obviously show the last value sent, while a chart will show you a history. You may need to tweak the visual layout of the dashboard gauges/charts to show more data, to your liking.
Flow Chart Example
Your code might look something like this in the new forked function that is then tied to your gauges:
msg.payload = msg.payload.value; // pull the numeric reading out of the document
return msg;
Or you can use a switch node that splits the values across multiple outputs, with each output wired to the gauge that should reflect that data.
Flow Chart Example Using Switch
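The switch-style split can also be done in a single function node with multiple outputs; in Node-RED, returning an array sends each element to the corresponding output. A sketch, using the two sensorIds from the sample data (the wrapper exists only so it can run outside Node-RED, where you would operate on msg.payload):

```javascript
// Hypothetical two-output routing: send one MongoDB document to the
// gauge wired to its sensorId. In a function node, returning [a, b]
// sends `a` to output 1 and `b` to output 2.
function splitBySensor(doc) {
    const msg = { payload: doc.value };
    if (doc.sensorId === "5947340048225508")  return [msg, null];
    if (doc.sensorId === "13359295204302776") return [null, msg];
    return [null, null]; // unknown sensor: send nothing
}
```

Each gauge/chart then only ever receives readings from its own sensor.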
I really hope this helps.

'In-Crate' updates instead of client programs?

This is a simplified example scenario.
I collect interval temperature data from a room's heating, let's say every minute.
Additionally, there is a light switch, which sends its status (on/off) when someone switches the light on or off. All events are stored in crate with their timestamps.
I'd like to query all temperature readings while the light switch is in the "on"-state.
My design is to denormalize the data: each temperature event gets a new field containing the light switch status, so my query breaks down to a simple filter. However, I only have events when someone presses the switch, not continuous readings. So I need to read out all light switch data, reassemble the switch's state over time, and update all temperature data accordingly.
Using Crate, is there a way to do all of this within Crate using Crate's SQL only, i.e. 'in-crate' data updates? I do not want to set up and maintain external client programs for such operations.
In more complex scenarios, I may also face the problem of reading a huge amount of data first via a single client program in order to then update other data stored within the same data store. This "bottleneck" design approach worries me. Any ideas?
Thanks,
nodot