Node-red dynamic RBE for multiple sensors - workflow

I want to write data into the DB only if it has changed. For that I've used Switch + RBE nodes.
What I would like to achieve is to support a dynamic number of sensors. The Switch node separates messages by the sensor's MAC address. The payload going into the "by sensor" node looks like this:
msg.payload = {"tmp":22.8,"hum":36,"batt":73,"mac":"a4c1382665a7"}
So my goal is to write data into the database only if it has changed. How could I make the marked area 'dynamic' so I could easily add new sensors without changing the Node-RED workflow?

The RBE node keeps a separate channel for each msg.topic, so as long as each sensor uses a different topic they will be filtered independently.
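As a minimal sketch (deriving the topic from the MAC address is my assumption, based on the payload shown above): put a single function node in front of one RBE node and set msg.topic from the MAC, so new sensors are picked up automatically without editing the flow:
// Use the sensor's MAC address as the topic so the RBE/filter node
// keeps a separate "last value" per sensor.
msg.topic = msg.payload.mac;
return msg;
Each distinct MAC then gets its own RBE channel, so the per-sensor Switch fan-out is no longer needed for the change detection itself.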

Related

Multiple agents arrival based on Variable and database column

In my Source block I want the number of agents to be based on two different factors, namely the number of beds and the number of visitors per bed. The visitors per bed is just a variable (e.g. visitors = 3), and the number of beds is loaded from a database table that comes from an Excel file (see first image). Now I want to code this in the code block as shown in the example in image 2, but I do not know the correct code and do not know if it is even possible.
The simplest solution is just to do the pre-calculations in the input file and keep the result in the database.
The more complex solution is to set the Source arrivals as follows:
Now, read your database at the start of the model using SQL (i.e. the query constructor). Make the necessary computations and create a Dynamic Event for each arrival at the time you want it to happen, relative to the model start. Each dynamic event then calls the source.inject(1) method.
Better still is to not use a Source at all but a simple Enter block. The dynamic event creates the agent with all relevant properties from your database and pushes it into the Enter block using enter.take(myNewAgent).
But as I said: this is not trivial.

InfluxDB and NodeRED

It has been quite a while since I coded anything, and it is the first time I am dealing with InfluxDB and Node-RED.
I am acquiring four sets of measurements from a sensor connected to a Pi. This is a screenshot taken during debugging; the measurements are coming through.
I managed to get the data from the sensors into NodeRED:
The problems I am facing are:
how to structure the table (measurements) in InfluxDB and get that data into the right columns;
and how/where to define the sample interval, to avoid writing millions of data points into the DB?
Later on I will try to connect the DB to Grafana; it is all new to me.
Any help is appreciated.
First, add a function node after each sensor node and save the output as a flow variable. The code will vary greatly depending on how you are getting your sensor data, but here is how I do it:
msg.payload = Number(msg.payload);            // make sure the reading is numeric
flow.set("presion_agua_psi", msg.payload);    // store the latest value in flow context
flow.set("sensor_presion_agua", "Wemos D1");  // remember which device it came from
return {payload: msg.payload};
In the example below, I am using MQTT to send the sensor data.
Then, separately, use an inject node and set it to repeat every xx minutes. This is the interval at which the data will actually be saved into InfluxDB.
After the inject node, add a function node that creates a dictionary (object) from the variable names and their values. This makes sure your columns in InfluxDB are stored with a name.
Once again, the code will vary, but here is an example:
var date = new Date();   // timestamp for this sample ('date' was not defined in the original snippet)
msg.payload = {
    Timestamp: date,
    Device: flow.get("sensor_nivel_agua"),        // device name saved earlier in flow context
    Nivel_Agua_Tinaco: flow.get("Agua_Tinaco"),   // latest reading saved earlier in flow context
};
return msg;
Finally, add your InfluxDB credentials to the InfluxDB output node and use a debug node to make sure the data is getting stored correctly.
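As a hedged variation (assuming the node-red-contrib-influxdb output node, which accepts msg.payload either as a plain object of fields or as a two-element array of [fields, tags]), the dictionary-building function could also separate measurement fields from tags:
// Fields (the numeric readings) and tags (metadata such as the device name)
// are passed as a two-element array to the influxdb out node.
var fields = {
    nivel_agua: flow.get("Agua_Tinaco")       // reading stored in flow context in the earlier step
};
var tags = {
    device: flow.get("sensor_nivel_agua")     // device name stored as a tag
};
msg.payload = [fields, tags];
return msg;
Storing the device name as a tag rather than a field makes it easier to filter and group by device later in Grafana.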

Assigning bins to records in CHAID model

I built a custom CHAID tree in SPSS Modeler. I would like to assign the corresponding terminal node to each record in the dataset. How would I go about doing this from within the software?
Assuming that you used the regular node called CHAID: if, inside the diamond icon (the generated CHAID model), you select the rule identifier option in the settings tab, the output will add another variable called $RI-XXX that classifies every record into its terminal node. Just check that option, put a Table node after it, and all the records will be classified.
You just need to apply the model to whatever data set you need; the only requirement is that the inputs are the same (type and, possibly, storage).
The diamond contains the trained model, and you can disconnect it and connect it to whatever you want.
http://beyondthearc.com/blog/wp-content/uploads/2015/02/spss.png

Node Red MongoDB

I am getting sensor data from MongoLab into Node-RED and I want to visualize this data using the Node-RED dashboard in the form of a gauge or chart.
Data from the MongoLab collection looks like this:
[{"_id":"5947e34de8fef902920defd8","sensorId":"5947340048225508","value":34,"date":"2017-06-19T14:44:29.000Z"},{"_id":"5947e34e6737e202b54f0a62","sensorId":"13359295204302776","value":25,"date":"2017-06-19T14:44:30.000Z"},{"_id":"5947e352e8fef902920defdc","sensorId":"5947340048225508","value":37,"date":"2017-06-19T14:44:34.000Z"},{"_id":"5947e3536737e202b54f0a66","sensorId":"13359295204302776","value":24,"date":"2017-06-19T14:44:35.000Z"}]
I want to visualize the values grouped by sensorId... or is there any other way I can visualize this data using Node-RED?
The function node uses the following JavaScript:
msg.headers = {"Content-Type":"application/json"};
return msg;
My intention is to visualize the sensor value on the ui_gauge or chart.
Make a gauge/graph for each of the unique data streams you want to reflect in the UI/dashboard.
Then you'll need to fork the output wire to another function node that copies this information into msg.payload, and from that function, wire it to the corresponding dashboard gauge.
A gauge will obviously show the last value sent, while a graph will show you a history. You may need to tweak the visual layout of the dashboard gauges/charts to show more data, to your liking.
Flow Chart Example
Your code might look something like this in the new forked function that is then tied to your gauges:
msg.payload = msg.payload.value;   // assuming msg.payload is a single document like the ones shown above
return msg;
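If each query instead returns the whole array shown in the question, a rough sketch of that forked function (field names taken from the sample documents; the msg.topic/msg.timestamp handling assumes the node-red-dashboard chart node) could fan the result out into one message per reading:
// Turn the array of MongoDB documents into individual messages,
// using sensorId as the topic so the chart draws one series per sensor.
var docs = msg.payload || [];
var out = docs.map(function (doc) {
    return {
        topic: doc.sensorId,                       // series label on the chart
        payload: doc.value,                        // numeric reading for the gauge/chart
        timestamp: new Date(doc.date).getTime()    // x-axis position in milliseconds
    };
});
return [out];   // an array of messages: each one is sent separately on output 1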
Or you can use a Switch node that routes the values to multiple outputs, with each output going to a corresponding gauge to reflect the data.
Flow Chart Example Using Switch
I really hope this helps.

'In-Crate' updates instead of client programs?

This is a simplified example scenario.
I collect interval temperature data from a room's heating, let's say every minute.
Additionally, there is a light switch, which sends its status (on/off) when someone switches the light on or off. All events are stored in crate with their timestamps.
I'd like to query all temperature readings while the light switch is in the "on"-state.
My design is to denormalize the data: each temperature event gets a new field containing the light switch status, so my query then breaks down to a simple filter. However, I only have events when someone presses the switch, not continuous readings. So I need to read out all the light switch data, reconstruct the switch's state over time, and update all temperature records accordingly.
Is there a way to do all of this within Crate using Crate's SQL only, i.e. 'in-Crate' data updates? I do not want to set up and maintain external client programs for such operations.
In more complex scenarios, I may also face the problem of first reading a huge amount of data via a single client program in order to then update other data stored in the same data store. This "bottleneck" design approach worries me. Any ideas?
Thanks,
nodot