Synchronization of Named pipes server and clients - operating-system

I want to send data between one server (overlapped I/O) and three clients using named pipes. At a high level, I am using the named pipes to toggle three different GPIO pins on a microcontroller. When I do that, the first client is fast, the second client is slower, and the third client is slower still:
Speed: Client 1 > Client 2 > Client 3
I want the three clients to run at the same speed, i.e. in synchronization.

Related

OMNeT++ application sends multiple streams

Let's say I have a car with different sensors: several cameras, LIDAR and so on. The data from these sensors is going to be sent to some host over a 5G network (OMNeT++ + INET + Simu5G). For video it is roughly 5000 packets of 1400 bytes each, for LIDAR 7500 packets of 1240 bytes, and so on. Each flow is carried in UDP packets.
So in an OMNeT++ module, in the handleMessage() method, I have two send() calls, each scheduled "as soon as possible", i.e. with no delay; that corresponds to the idea of multiple parallel streams. How does OMNeT++ handle the situation where it needs to send two different packets at the same time from the same module to the same module (some client which receives the sensor data streams)? Does it create some internal buffer on the sender or receiver side, so that really only one packet is sent per handleMessage() call, or is that wrong? I want to optimize the data transmission and play with packet sizes and maybe with sending intervals, so I want to know how OMNeT++ handles multiple simultaneous streams, because if it actually buffers, then maybe it makes sense to form a single packet from multiple streams, where each such packet consists of a certain amount of data from each stream.
There is some confusion here that needs to be clarified first:
OMNeT++ is a discrete event simulator framework. An OMNeT++ model contains modules that communicate with each other using OMNeT++ API calls like send() and handleMessage(). Any call to send() just queues the provided message into the future event queue (an internal, time-ordered queue). So if you send more than one packet in a single handleMessage() call, they will be queued in that order, and the packets will be delivered one by one to the requested destination modules when the requested simulation time is reached. So you can send as many packets as you wish, and those packets will be delivered one by one to the destination's handleMessage() method. But beware: even if the different packets are delivered one by one sequentially in the program's logic, they can still be delivered simultaneously in terms of simulation time. There are two time concepts here: real time, which describes the execution order of the code, and simulation time, which describes the time that passes from the point of view of the simulated system. That is why OMNeT++, while being a single-threaded application that runs events sequentially, can still simulate any number of systems running in parallel.
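To make that concrete, here is a rough sketch (not from the original answer) of an OMNeT++ simple module whose handleMessage() sends two packets within the same event. The module name, the "out" gate, the packet names and sizes, and the 1 ms tick are assumptions for illustration only:

    #include <omnetpp.h>

    using namespace omnetpp;

    // Hypothetical sensor module: emits one camera packet and one LIDAR
    // packet per self-message tick.
    class SensorStreamer : public cSimpleModule
    {
      protected:
        virtual void initialize() override {
            scheduleAt(simTime(), new cMessage("tick"));   // first tick
        }

        virtual void handleMessage(cMessage *msg) override {
            // Two send() calls in the same event: both messages are inserted
            // into the future event set and delivered one by one, possibly
            // at the very same simulation time.
            cPacket *video = new cPacket("video");
            video->setByteLength(1400);
            send(video, "out");

            cPacket *lidar = new cPacket("lidar");
            lidar->setByteLength(1240);
            send(lidar, "out");

            scheduleAt(simTime() + 0.001, msg);            // next tick (assumed 1 ms)
        }
    };

    Define_Module(SensorStreamer);

Both packets become separate entries in the future event set at the same simulation time; whether they can actually leave the node simultaneously is then decided by the network model below.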
BUT:
You are not modeling directly with plain OMNeT++ modules, but rather using the INET Framework, which is a model library created specifically to simulate internet protocols and networks. INET's core entity is a node, which is something that has network interface(s) (and queues belonging to them). Transmissions between nodes are properly modeled, and only a single packet can travel on an Ethernet line at a time. Other packets must wait in the network interface queue for an opportunity to be transmitted from there.
This is actually the core problem of Time-Sensitive Networking (TSN): given a set of pre-defined data streams in a network, how do the various packets interfere with and affect each other, how do they change the delay and jitter statistics of the various streams at the destination, and how can you configure the source and the network's gate scheduling to achieve the desired upper bounds on those statistics?
The INET master branch (to be released as INET 4.4) contains a lot of TSN code, so I highly recommend trying it if you want to model in-vehicle networks.
If you are not interested in in-vehicle communication, but rather want to stream some data over 5G, then TSN is not what you need, but you should NOT start to multiplex/demultiplex the data streams at the application level either. The communication layers below your UDP application will fragment/defragment and queue the packets exactly as it is done in the real world. You will not gain anything by doing mux/demux at the application layer.

Simulating multiple Modbus slave devices using Node-RED

I've managed to simulate a single slave device on my Raspberry Pi with Node-RED, using function nodes to send random values to the Modbus flex server. However, now I want to be able to simulate multiple Modbus slave devices on the same port number, and I'm unsure how to do this.
I've tried creating another Modbus flex server with the same port number, but this causes the whole Node-RED application to crash when it's deployed. Secondly, I've tried using different Modbus flex-write nodes to simulate different slave devices, but I'm unsure whether this is correct and, if so, how I'd configure them to appear as different slave devices. This is because, so far, my Raspberry Pi appears as slave 1, but I'm unsure where this comes from. I'm guessing it has to do with the unit-id of the Modbus flex server, but when I change the unit-id to a different number and enter that number as the address in the master, it reports no connection.
In conclusion, is it possible to use a single Raspberry Pi to simulate multiple slave devices in Node-RED using node-red-contrib-modbus and, if so, how do you do it?
The concept of slaves in Modbus TCP differs somewhat from Modbus RTU, as set out in the Modbus TCP spec:
The MODBUS ‘slave address’ field usually used on MODBUS Serial Line is
replaced by a single byte ‘Unit Identifier’ within the MBAP Header.
The ‘Unit Identifier’ is used to communicate via devices such as
bridges, routers and gateways that use a single IP address to support
multiple independent MODBUS end units.
So there is a difference in terminology between Modbus RTU and TCP, as well as a difference in the intended use of this field. The solution suggested by the spec would be to set up multiple servers on different ports (you cannot run multiple servers on a single port).
Having said that, some TCP-to-RTU gateways (and other devices) use the unit-id as the slave ID, so I'm assuming you are trying to simulate something like this?
The first issue is that there appears to be a bug in the Modbus flex server node (reported) in that when you change the unit-id it is stored as a string rather than a number. If you export the flow you will see something like "unitId": "3"; changing this to "unitId": 3 (no quotes around the 3) and re-importing fixes the issue (which probably explains why you could not get this working).
Having said that, changing the unit-id like this does not help you, because it only supports one ID. However, if you set the unit-id to 255, the server will listen on all unit-ids (this is a feature of the modbus-serial module used internally). Remember that, due to the bug, you will currently need to fix the config manually to get this to work.
Having done that, you can make the flow respond to requests for different unit-ids, for example by returning the unit-id (1 or 2) for all addresses; that is not useful in itself, but it shows the concept.

Transferring .csv files through XBee Modules

We have set up a monitoring system that collects data. The system consists of several RPis with attached accelerometers that log the data to a .csv file.
The RPis are so spread out that they are not within reach of each other or of their own created PiFY.
We use XBee S1 modules with DigiMesh 2.4 for increased range, to give the RPis commands through XCTU. The XBee modules are set up as routers. We can start and stop data collection.
Now we are interested in transferring the collected data (the .csv file) to a master RPi. How can this be done through these XBee modules?
I'd recommend doing any coding in Python, and using the pyserial module to send/receive data on the serial port. It's fairly simple to get started with that.
Configure the routers in "AT mode" (also called "transparent serial mode") via ATAP=0, with DL and DH set to 0 (telling each router to use the coordinator as the destination for all serial data).
Simple Coordinator Solution
Have the routers include some sort of node ID in each CSV record, and then configure the coordinator in "AT mode" as well. That way it will receive CSV records from multiple sources and just dump them out of its serial port. As long as you send complete lines of data from each router, you shouldn't see corrupted CSV records on the coordinator.
More Complicated Coordinator Solution
Configure the coordinator in "API mode" via ATAP=1. Pick a programming language you're comfortable with, like C, Java or Python, and grab one of Digi's open source "host libraries" from their GitHub repository.
The coordinator will receive CSV data inside of API frames so it can identify the source device that sent the data. With this configuration, you can easily send data back to a specific device or make use of remote AT commands to change I/O on the routers.
Note that with either setup, there's no need for the RPi to create the file -- it can just send a CSV line whenever it has data ready. Just make sure you're staging a complete line and sending it in a single "serial write" call to ensure that it isn't split into multiple packets over the air.
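The answer recommends Python with pyserial; purely to keep all the sketches on this page in one language, here is the same idea (node ID plus a complete CSV line pushed out in a single write call) against a plain POSIX serial port in C++. The device path, baud rate and record layout are assumptions, and the structure maps one-to-one onto pyserial's Serial.write():

    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>
    #include <cstdio>

    // Open the XBee's serial device in raw mode. The path and baud rate are
    // assumptions; adjust them to your wiring and XCTU settings.
    int open_xbee(const char *dev = "/dev/ttyUSB0")
    {
        int fd = open(dev, O_RDWR | O_NOCTTY);
        if (fd < 0)
            return -1;
        termios tio{};
        tcgetattr(fd, &tio);
        cfmakeraw(&tio);
        cfsetispeed(&tio, B9600);
        cfsetospeed(&tio, B9600);
        tcsetattr(fd, TCSANOW, &tio);
        return fd;
    }

    // Build the whole record first, then push it out in ONE write() so the
    // radio is handed a complete line and the coordinator never sees
    // interleaved fragments from different nodes.
    bool send_record(int fd, const char *node_id, double ax, double ay, double az)
    {
        char line[128];
        int len = std::snprintf(line, sizeof(line), "%s,%.4f,%.4f,%.4f\n",
                                node_id, ax, ay, az);
        return len > 0 && len < (int)sizeof(line) && write(fd, line, len) == len;
    }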

kernel-based (Linux) data relay between two TCP sockets

I wrote a TCP relay server which works like a peer-to-peer router (supernode).
The simplest case is two open sockets and a data relay between them:
clientA <---> server <---> clientB
However, the server has to serve about 2000 such A-B pairs, i.e. 4000 sockets...
There are two well-known data-stream relay implementations in userland (based on socketA.recv() --> socketB.send() and socketB.recv() --> socketA.send()):
using select / poll functions (non-blocking method)
using threads / forks (blocking method)
I used threads, so in the worst case the server creates 2*2000 threads! I had to limit the stack size and it works, but is it the right solution?
Core of my question:
Is there a way to avoid active data relaying between two sockets in userland?
It seems there is a passive way. For example, I can take the file descriptor of each socket, create two pipes and use dup2() - the same method as for stdin/stdout redirection. Then the two threads are no longer needed for the data relay and can be finished/closed.
The question is whether the server should ever close the sockets and pipes, and how it can know when a pipe is broken so the fact can be logged.
I've also found "socket pairs", but I am not sure they fit my purpose.
What solution would you advise to off-load userland and limit the number of threads?
Some extra explanations:
The server has a static routing table defined (e.g. ID_A with ID_B - paired identifiers). Client A connects to the server and sends ID_A. Then the server waits for client B. When A and B are paired (both sockets open), the server starts the data relay.
The clients are simple devices behind symmetric NAT, so the N2N protocol or NAT traversal techniques are too complex for them.
Thanks to Gerhard Rieger I have the hint:
I am aware of two kernel space ways to avoid read/write, recv/send in
user space:
sendfile
splice
Both have restrictions regarding type of file descriptor.
dup2 will not help to do something in kernel, AFAIK.
Man pages: splice(2) vmsplice(2) sendfile(2) tee(2)
Related links:
Understanding sendfile() and splice()
http://blog.superpat.com/2010/06/01/zero-copy-in-linux-with-sendfile-and-splice/
http://yarchive.net/comp/linux/splice.html (Linus)
C, sendfile() and send() difference?
bridging between two file descriptors
Send and Receive a file in socket programming in Linux with C/C++ (GCC/G++)
http://ogris.de/howtos/splice.html
OpenBSD implements SO_SPLICE:
relayd asiabsdcon2013 slides / paper
http://www.manualpages.de/OpenBSD/OpenBSD-5.0/man2/setsockopt.2.html
http://metacpan.org/pod/BSD::Socket::Splice
Does Linux support something similar, or is writing my own kernel module the only solution?
TCPSP
SP-MOD described here
TCP-Splicer described here
L4/L7 switch
HAProxy
Even for loads as tiny as 2000 concurrent connections, I'd never go with threads. They have the highest stack and switching overhead, simply because it's always more expensive to ensure that you can be interrupted anywhere than when you can only be interrupted at specific places. Just use epoll() and splice() (if your sockets are TCP, which seems to be the case) and you'll be fine. You can even make epoll work in edge-triggered mode, where you only register your fds once.
If you absolutely want to use threads, use one thread per CPU core to spread the load, but if you need to do this, it means you're playing at speeds where affinity, RAM location on each CPU socket, etc. play a significant role, which doesn't seem to be the case in your question. So I'm assuming that a single thread is more than enough in your case.
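A minimal sketch of the splice-through-a-pipe idea described above (blocking I/O, one direction only, error handling trimmed). fd_in and fd_out stand for the two TCP sockets of one A-B pair, and pipe_rd/pipe_wr for a pipe created per pair with pipe(); treat it as an illustration of the technique rather than production code:

    #ifndef _GNU_SOURCE
    #define _GNU_SOURCE
    #endif
    #include <fcntl.h>
    #include <unistd.h>

    // Move whatever has arrived on fd_in over to fd_out through a pipe, so
    // the payload is never copied into user space.
    static bool relay_once(int fd_in, int fd_out, int pipe_rd, int pipe_wr)
    {
        // Socket -> pipe (splice requires one end to be a pipe).
        ssize_t n = splice(fd_in, nullptr, pipe_wr, nullptr, 65536,
                           SPLICE_F_MOVE);
        if (n <= 0)
            return false;               // 0 = peer closed, < 0 = error

        // Pipe -> the other socket.
        while (n > 0) {
            ssize_t m = splice(pipe_rd, nullptr, fd_out, nullptr, n,
                               SPLICE_F_MOVE);
            if (m <= 0)
                return false;
            n -= m;
        }
        return true;
    }

    // Registering one socket with the single epoll instance looks like:
    //     struct epoll_event ev;
    //     ev.events  = EPOLLIN;
    //     ev.data.fd = sock;
    //     epoll_ctl(epfd, EPOLL_CTL_ADD, sock, &ev);
    // One epoll_wait() loop in one thread can then drive all 2000 pairs,
    // calling relay_once() in the right direction whenever a socket becomes
    // readable.

A return value of false is the point where you close both sockets of the pair and their pipe, remove them from epoll and log the broken connection, which also covers the logging part of the question.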

Basic client-server synchronization

Let's do a simple thing: we have a cloud, which the client draws, and a server which sends commands to move the cloud. Assume that client 1 runs at 60 fps and client 2 runs at 30 fps, and we want a reasonably smooth cloud transition.
First problem - the server's tick rate differs from the clients' frame rates, and if it sends a move command every tick, it will spam commands much faster than the clients can draw.
Possible solution 1 - the client sends an "I want an update" command after finishing a frame.
Possible solution 2 - the server sends move-cloud commands every x ms, but then the cloud will not move smoothly. Can be combined with solution 3.
Possible solution 3 - the server sends "start moving the cloud with speed x" and "change cloud direction" instead of "move cloud to x". But the problem again is that the check for changing the cloud's direction at the edge of the screen will trigger faster than the cloud is actually drawn on the client.
Also, client 2 draws two times slower than client 1; how do I compensate for this?
How do I sync the server logic with the clients' drawing in a basic way?
Solution 3 sounds like the best one by far, if you can do it. All of your other solutions are much too chatty: they require extremely frequent communication between the client and server, much too frequent unless servers and clients have a very good network connection between them.
If your cloud movements are all simple enough that they can be sent to the clients as vectors, such that the client can move the cloud along one vector for an extended period of time (many frames) before receiving new instructions (a new starting location and vector) from the server, then you should definitely do that. If your cloud movements are not so easily representable as simple vectors, then you can choose a more complex model (e.g. add instructions to transform the vector over time) and send the model's parameters to the clients.
If the cloud is part of a larger world and the clients track time in the world, then each of the sets of instructions coming from the server should include a timestamp representing the time when the initial conditions in the model are valid.
As for your question about how to compensate for client 2 drawing two times slower than client 1, you need to make your world clock tick at a consistent rate on both clients. This rate need not have any relationship with the screen refresh rate on either client.
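As a hedged sketch of that model (the struct fields, units and 2D representation are assumptions, and synchronizing the world clock's offset to the server, e.g. at connect time, is left out), the server would send something like this whenever the cloud's motion changes, and each client would evaluate it once per rendered frame:

    #include <chrono>

    // What the server sends whenever the cloud's motion changes.
    struct CloudUpdate {
        double x0, y0;   // position at time t0, in world units
        double vx, vy;   // velocity, in world units per second
        double t0;       // world-clock time at which (x0, y0) is valid, seconds
    };

    // World clock: ticks at the same rate on every client, independent of
    // the screen refresh rate.
    double world_now()
    {
        using namespace std::chrono;
        static const auto start = steady_clock::now();
        return duration<double>(steady_clock::now() - start).count();
    }

    // Called once per rendered frame. At 60 fps or 30 fps the cloud is in the
    // same place for the same world time; the slower client just samples the
    // model less often.
    void cloud_position(const CloudUpdate &u, double &x, double &y)
    {
        double dt = world_now() - u.t0;
        x = u.x0 + u.vx * dt;
        y = u.y0 + u.vy * dt;
    }

Because both clients evaluate the same model against the same world time, the 30 fps client does not fall behind the 60 fps client; it only renders fewer intermediate positions.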