Transferring .csv files through XBee Modules

We have set up a monitoring system that can collect data. The system consists of several RPis with attached accelerometers that log the data to a .csv file.
The RPis are spread out far enough that they are not within reach of each other or of their own created PiFY.
We use XBee S1 modules with DigiMesh 2.4 for increased range, to give the RPis commands through XCTU. The XBee modules are set up as routers. We can start and stop data collection.
Now we are interested in transferring the collected data (the .csv file) to a master RPi. How can this be done through these XBee modules?

I'd recommend doing any coding in Python, and using the pyserial module to send/receive data on the serial port. It's fairly simple to get started with that.
Configure the routers in "AT mode" (also called "transparent serial mode") via ATAP=0, with DL and DH set to 0 (telling each router to use the coordinator as the destination for all serial data).
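If you'd rather script the configuration than click through XCTU, the same AT commands can be sent over the serial port. A rough pyserial sketch -- the port name, baud rate and exact command set are assumptions about your setup:

    import serial
    import time

    # Enter AT command mode ("+++" with guard times) and apply the settings.
    with serial.Serial("/dev/ttyAMA0", 9600, timeout=3) as port:
        time.sleep(1.1)                      # guard time before the escape sequence
        port.write(b"+++")                   # no trailing CR on the escape
        time.sleep(1.1)                      # guard time after; module replies "OK"
        for cmd in (b"ATAP0", b"ATDL0", b"ATDH0", b"ATWR", b"ATCN"):
            port.write(cmd + b"\r")
            print(cmd, port.read_until(b"\r"))  # each command should answer "OK"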
Simple Coordinator Solution
Have the routers include some sort of node ID in each CSV record, and then configure the coordinator in "AT mode" as well. That way it will receive CSV records from multiple sources and simply dump them out of its serial port. As long as each router sends complete lines of data, you shouldn't see corrupted CSV records on the coordinator.
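For example, a minimal sketch of that coordinator with pyserial -- the port name, baud rate and output file name are assumptions:

    import serial

    # Append every complete CSV line arriving on the serial port to a file.
    with serial.Serial("/dev/ttyUSB0", 9600, timeout=5) as port, \
            open("collected.csv", "a") as out:
        while True:
            line = port.readline().decode("utf-8", errors="replace")
            if line.endswith("\n"):          # keep only complete records
                out.write(line)
                out.flush()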
More Complicated Coordinator Solution
Configure the coordinator in "API mode" via ATAP=1. Pick a programming language you're comfortable with, like C, Java or Python, and grab one of Digi's open-source "host libraries" from their GitHub repository.
The coordinator will receive CSV data inside API frames, so it can identify the source device that sent the data. With this configuration, you can easily send data back to a specific device or use remote AT commands to change I/O on the routers.
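A minimal sketch of that receive path using Digi's open-source digi-xbee Python library (one of the host libraries mentioned above); the port name and baud rate are assumptions:

    from digi.xbee.devices import XBeeDevice

    coordinator = XBeeDevice("/dev/ttyUSB0", 9600)
    coordinator.open()

    def on_data(xbee_message):
        # Each API frame carries the sender's 64-bit address and payload.
        source = str(xbee_message.remote_device.get_64bit_addr())
        line = xbee_message.data.decode("utf-8", errors="replace").strip()
        print(f"{source},{line}")            # CSV record tagged with its source

    coordinator.add_data_received_callback(on_data)
    input("Receiving; press Enter to stop\n")
    coordinator.close()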
Note that with either setup, there's no need for the RPi to create the file -- it can just send a CSV line whenever it has data ready. Just make sure you stage a complete line and send it in a single "serial write" call, to ensure that it isn't split into multiple packets over the air.
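For the router side, a sketch of staging a complete record and pushing it with one write; the node ID, record layout and serial port are placeholders for your own code:

    import serial

    NODE_ID = "rpi-07"                       # hypothetical node identifier

    def send_sample(port, timestamp, x, y, z):
        # Build the full CSV line first, then write it in one call so it
        # goes out as a single packet over the air.
        record = f"{NODE_ID},{timestamp},{x},{y},{z}\n"
        port.write(record.encode("utf-8"))
        port.flush()

    with serial.Serial("/dev/ttyAMA0", 9600) as xbee:
        send_sample(xbee, "2024-01-01T00:00:00", 0.01, -0.02, 0.98)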

Related

OMNeT++ application sends multiple streams

Let's say I have a car with different sensors: several cameras, a LIDAR and so on. The data from these sensors is sent to some host over a 5G network (omnetpp + inet + simu5g). For video it is something like 5000 packets of 1400 bytes each, for LIDAR 7500 packets of 1240 bytes, and so on. Each flow is encoded in UDP packets.
So in my OMNeT++ module's handleMessage method I have two sendTo calls, each scheduled "as soon as possible", i.e., with no delay -- that corresponds to the idea of multiple parallel streams. How does OMNeT++ handle the situation when it needs to send two different packets at the same time from the same module to the same module (some client which receives the sensor data streams)? Does it create some inner buffer on the sender or receiver side, thereby really allowing only one packet send per handleMessage call, or is that wrong? I want to optimize the data transmission and play with packet sizes and maybe with sending intervals, so I want to know how OMNeT++ handles multiple simultaneous streams: if it actually buffers, then maybe it makes sense to form a single packet from multiple streams, where each such packet consists of a certain amount of data from each stream.
There is some confusion here that needs to be clarified first:
OMNeT++ is a discrete event simulator framework. An OMNeT++ model contains modules that communicate with each other using OMNeT++ API calls like sendTo() and handleMessage(). Any call of the sendTo() method just queues the provided message into the future event queue (an internal, time-ordered queue). So if you send more than one packet in a single handleMessage() call, they will be queued in that order, and the packets will be delivered one by one to the requested destination modules when the requested simulation time is reached. You can therefore send as many packets as you wish, and those packets will be delivered one by one to the destination's handleMessage() method.
But beware: even if the different packets are delivered one by one sequentially in the program's logic, they can still be delivered simultaneously in terms of simulation time. There are two time concepts here: real time, which describes the execution order of the code, and simulation time, which describes the time that passes from the point of view of the simulated system. That is why OMNeT++, while being a single-threaded application that executes events sequentially, can still simulate any number of parallel running systems.
BUT:
You are not modeling directly with OMNeT++ modules, but rather using the INET Framework, which is a model library created specifically to simulate internet protocols and networks. INET's core entity is a node, which is something that has network interface(s) (and queues belonging to them). Transmissions between nodes are properly modeled, and only a single packet can travel on an Ethernet line at a time. Other packets must wait in the network interface queue for an opportunity to be delivered.
This is actually the core of the problem for Time-Sensitive Networking (TSN): given a lot of pre-defined data streams in a network, how do the various packets interfere with and affect each other, how do they change the delay and jitter statistics of the various streams at the destination, and how can you configure the source and network gate scheduling to achieve some desired upper bounds on those statistics?
The INET master branch (to be released as INET 4.4) contains a lot of TSN code, so I highly recommend trying it if you want to model in-vehicle networks.
If you are not interested in in-vehicle communication, but rather want to stream some data over 5G, then TSN is not what you need -- but you should NOT start to multiplex/demultiplex data streams at the application level. The communication layers below your UDP application will fragment/defragment and queue the packets exactly as it is done in the real world. You will not gain anything by doing mux/demux at the application layer.

Why is writing considered an input operation and reading considered an output operation?

I'm currently working my way through Computer Systems: A Programmer's Perspective (3rd edition), and in chapter 4 it consistently refers to read operations in the Y86 processor as being output from various CPU components, and I don't understand why. Surely we would want to input data from memory into a hardware component?
Here's a quote from the text alongside an accompanying diagram:
The Register file has four ports. It supports up to two simultaneous reads (on ports A and B) and two simultaneous writes (on ports E and M). Each port has both an address connection and a data connection, where the address connection is a register ID, and the data connection is a set of 64 wires serving as either an output word (for a read port) or an input word (for a write port) of the register file.
This correspondence has been consistent throughout the text.
It's just a matter of perspective:
A read port is a port from which you can read: data flows from the register file to the reading entity, so from the perspective of the register file implementation the data connection is an output wire. Symmetrically, a write port's data connection carries data into the register file, so it is an input wire.

Simulating multiple Modbus slave devices using Node-RED

I've managed to simulate a single slave device on my Raspberry Pi in Node-RED, using function nodes to send random values to the Modbus flex server. However, now I want to simulate multiple Modbus slave devices on the same port number, and I'm unsure how to do this.
I've tried creating another Modbus flex server with the same port number, but this causes the whole Node-RED application to crash when it's deployed. Secondly, I've tried using different Modbus flex-write nodes to simulate different slave devices, but I'm unsure whether this is correct and, if so, how I'd configure them to appear as different slave devices. So far my Raspberry Pi appears as slave 1, but I'm unsure where this comes from. I'm guessing it's to do with the unit-id of the Modbus flex server, but when I change the unit-id to a different number and type that number as the address in the master, it says no connection.
In conclusion: is it possible to use a single Raspberry Pi to simulate multiple slave devices in Node-RED using node-red-contrib-modbus, and if so, how do you do it?
The concept of slaves in Modbus TCP differs somewhat from Modbus RTU, as set out in the Modbus TCP spec:
The MODBUS ‘slave address’ field usually used on MODBUS Serial Line is replaced by a single byte ‘Unit Identifier’ within the MBAP Header. The ‘Unit Identifier’ is used to communicate via devices such as bridges, routers and gateways that use a single IP address to support multiple independent MODBUS end units.
So there is a difference in terminology between Modbus RTU and TCP, as well as a difference in the intended use of this field. The solution suggested by the spec would be to set up multiple servers on different ports (you cannot run multiple servers on a single port).
Having said that, some TCP-to-RTU gateways (and other devices) use the unit-id as the slave ID, so I'm assuming you are trying to simulate something like this?
The first issue is that there appears to be a bug in the Modbus flex server (reported): when you change the unit-id, it is stored as a string rather than a number. If you export the flow you will see something like "unitId": "3",; changing this to "unitId": 3, (no quotes around the 3) and re-importing fixes the issue (which probably explains why you could not get this working).
Having said that, changing the unit-id like this does not help you, because the server still only supports a single ID. However, if you set the unit-id to 255, it will listen on all unit-ids (this is a feature of the modbus-serial module used internally). Remember that, due to the bug, you will currently need to fix the config manually to get this to work.
Having done that, you can make a single server respond to requests for different unit-ids -- for example, by returning the unit id itself (1 or 2) for all addresses, which is not useful in itself but shows the concept.
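One way to verify that a single server really answers as different units is to query it from the master side with different Unit Identifier bytes in the MBAP header. A standard-library Python sketch -- the host, port 10502 and register addresses are assumptions about your flow:

    import socket
    import struct

    def read_holding(host, port, unit_id, address=0, count=1, tid=1):
        # PDU: function code 0x03 (read holding registers), start, quantity.
        pdu = struct.pack(">BHH", 0x03, address, count)
        # MBAP header: transaction id, protocol id (0), length, unit identifier.
        mbap = struct.pack(">HHHB", tid, 0, len(pdu) + 1, unit_id)
        with socket.create_connection((host, port), timeout=2) as s:
            s.sendall(mbap + pdu)
            return s.recv(260)

    # Query the same server, on the same port, as two different unit ids:
    for uid in (1, 2):
        print(uid, read_holding("127.0.0.1", 10502, uid).hex())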

Unix - How can I send a message to multiple processes?

I have a process A that needs to send a message to all processes of type B that are running. Process A doesn't know about these other processes; they can be created and destroyed depending on external factors, so I can have a varying number of processes of type B running.
I thought I could use a UDP socket in process A to send messages to a port P and have all my processes of type B listen on this port P and receive a copy of the message.
Is that possible?
I am working with Linux (OpenWRT).
I am trying with LuaSocket, but I am getting an "address already in use" error. It seems that I cannot have multiple applications listening on the same port?
Thanks for your help
It could be useful to use shared memory if all the processes are local to a single machine.
Have a look at http://man7.org/linux/man-pages/man7/shm_overview.7.html for an explanation.
In short, you will need the master process to create a shared memory region and write the data into it. The slave processes can then check the data in the memory region and, if it has changed, act upon it. This is, however, just one of many ways to solve this problem. You could also look into using pipes and tee.
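A minimal sketch of that approach with Python's standard multiprocessing.shared_memory module (Python 3.8+); the segment name, fixed size and one-byte length prefix are assumptions, and real code would add a synchronisation mechanism (e.g. a semaphore or version counter) so the slaves can detect changes:

    from multiprocessing import shared_memory

    # Master (process A): create the segment and publish a message.
    shm = shared_memory.SharedMemory(name="msg_channel", create=True, size=1024)
    payload = b"reload-config"
    shm.buf[0] = len(payload)                  # 1-byte length prefix
    shm.buf[1:1 + len(payload)] = payload

    # Slave (any process B): attach to the same segment and read it.
    view = shared_memory.SharedMemory(name="msg_channel")
    length = view.buf[0]
    print(bytes(view.buf[1:1 + length]))       # b'reload-config'
    view.close()

    shm.close()
    shm.unlink()                               # master removes the segment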

kernel-based (Linux) data relay between two TCP sockets

I wrote a TCP relay server that works like a peer-to-peer router (supernode).
The simplest case is two open sockets with data relayed between them:
clientA <---> server <---> clientB
However, the server has to serve about 2000 such A-B pairs, i.e. 4000 sockets...
There are two well-known ways to implement a data-stream relay in userland (based on socketA.recv() --> socketB.send() and socketB.recv() --> socketA.send()):
using select / poll functions (the non-blocking method)
using threads / forks (the blocking method)
I used threads, so in the worst case the server creates 2*2000 threads! I had to limit the stack size, and it works, but is it the right solution?
Core of my question:
Is there a way to avoid actively relaying the data between the two sockets in userland?
It seems there is a passive way. For example, I could create a file descriptor from each socket, create two pipes and use dup2() -- the same method as stdin/stdout redirection. Then the two relay threads would be useless and could be finished/closed.
The question is whether the server should ever close the sockets and pipes, and how to know when a pipe is broken so the fact can be logged.
I've also found "socket pairs", but I am not sure whether they fit my purpose.
What solution would you advise to off-load the userland code and limit the number of threads?
Some extra explanations:
The server has a statically defined routing table (e.g. ID_A paired with ID_B). Client A connects to the server and sends ID_A. Then the server waits for client B. When A and B are paired (both sockets open), the server starts the data relay.
Clients are simple devices behind symmetric NAT therefore N2N protocol or NAT traversal techniques are too complex for them.
Thanks to Gerhard Rieger I have the hint:
I am aware of two kernel-space ways to avoid read/write, recv/send in user space:
sendfile
splice
Both have restrictions regarding the type of file descriptor.
dup2 will not help to do something in kernel, AFAIK.
Man pages: splice(2), vmsplice(2), sendfile(2), tee(2)
Related links:
Understanding sendfile() and splice()
http://blog.superpat.com/2010/06/01/zero-copy-in-linux-with-sendfile-and-splice/
http://yarchive.net/comp/linux/splice.html (Linus)
C, sendfile() and send() difference?
bridging between two file descriptors
Send and Receive a file in socket programming in Linux with C/C++ (GCC/G++)
http://ogris.de/howtos/splice.html
OpenBSD implements SO_SPLICE:
relayd asiabsdcon2013 slides / paper
http://www.manualpages.de/OpenBSD/OpenBSD-5.0/man2/setsockopt.2.html
http://metacpan.org/pod/BSD::Socket::Splice
Does Linux support something similar, or is writing your own kernel module the only solution?
TCPSP
SP-MOD described here
TCP-Splicer described here
L4/L7 switch
HAProxy
Even for loads as tiny as 2000 concurrent connections, I'd never go with threads. They have the highest stack and switching overhead, simply because it's always more expensive to ensure that you can be interrupted anywhere than when you can only be interrupted at specific places. Just use epoll() and splice() (if your sockets are TCP, which seems to be the case) and you'll be fine. You can even make epoll work in edge-triggered mode, where you only register your fds once.
If you absolutely want to use threads, use one thread per CPU core to spread the load, but if you need to do that, it means you're playing at speeds where affinity, RAM locality on each CPU socket, etc. play a significant role, which doesn't seem to be the case in your question. So I'm assuming that a single thread is more than enough in your case.
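For illustration, a condensed sketch of the epoll + splice approach in Python (os.splice needs Python 3.10+ and Linux; error handling, cleanup and half-close are simplified, and sock_a/sock_b are assumed to be already-connected TCP sockets):

    import os
    import select

    CHUNK = 65536

    def relay(sock_a, sock_b):
        # splice(2) requires a pipe on one side, so each direction gets one:
        # socket -> pipe -> socket, with both copies done in kernel space.
        pipes = {sock_a.fileno(): os.pipe(),
                 sock_b.fileno(): os.pipe()}
        peer = {sock_a.fileno(): sock_b.fileno(),
                sock_b.fileno(): sock_a.fileno()}

        ep = select.epoll()
        ep.register(sock_a.fileno(), select.EPOLLIN)
        ep.register(sock_b.fileno(), select.EPOLLIN)

        open_ends = 2
        while open_ends:
            for fd, _events in ep.poll():
                r, w = pipes[fd]
                n = os.splice(fd, w, CHUNK)          # socket -> pipe
                if n == 0:                           # EOF: that side closed
                    ep.unregister(fd)
                    open_ends -= 1
                    continue
                while n > 0:
                    n -= os.splice(r, peer[fd], n)   # pipe -> other socket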