Sending/receiving the value as a message vs. writing/reading the value from a file

I have several hosts in Mininet and a controller responsible for monitoring them. Each host does some processing and produces an integer value (updated periodically), and the controller is responsible for checking the values produced by these hosts, so the controller must somehow obtain the value produced by each host.
To do so, I was thinking about either periodically sending a UDP packet from each host to the controller containing the updated value,
or
creating a separate text file per host, to which the host writes the value (updated each second), so that the controller can read these files periodically to get the updated value.
But since I'm new to programming, I wanted to know which one is the more reasonable solution (considering overhead, complexity, and other similar factors).
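For the UDP option, each Mininet host can run an ordinary Python process. A minimal sketch, assuming a hypothetical controller address and a fixed binary payload format (the IP, port, and field layout here are my own choices, not anything Mininet prescribes):

```python
import socket
import struct
import time

CONTROLLER_ADDR = ("10.0.0.100", 9999)  # hypothetical controller IP/port

def encode_report(host_id: int, value: int) -> bytes:
    # Fixed-size payload: host id + current value, network byte order.
    return struct.pack("!II", host_id, value)

def decode_report(payload: bytes):
    # Controller side: recover (host_id, value) from a received datagram.
    return struct.unpack("!II", payload)

def run_host(host_id: int, produce_value) -> None:
    """Periodically send the freshly produced value to the controller."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        sock.sendto(encode_report(host_id, produce_value()), CONTROLLER_ADDR)
        time.sleep(1)  # the once-per-second update interval from the question
```

The controller would bind a UDP socket on port 9999, call `recvfrom` in a loop, and `decode_report` each datagram. Compared with per-host files, this avoids polling and filesystem coordination, at the cost of a trivial amount of network traffic.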

Related

What is General Call Address and what is the purpose of it in I2C?

I wonder what the General Call Address in I2C (0x00) is. If we have a master and some slaves, can we communicate with these slaves through our master using this address?
Section 3.2.10 of the I2C specification v6 (https://www.i2c-bus.org/specification/) clearly describes the purpose of the general call:
3.2.10 General call address
The general call address is for addressing every device connected to the I2C-bus at the
same time. However, if a device does not need any of the data supplied within the general
call structure, it can ignore this address. If a device does require data from a general call
address, it behaves as a slave-receiver. The master does not actually know how many
devices are responsive to the general call. The second and following bytes are received
by every slave-receiver capable of handling this data. A slave that cannot process one of
these bytes must ignore it. The meaning of the general call address is always specified in
the second byte (see Figure 30).
You can use it to communicate with your slaves, but three restrictions apply.
A general call can only write data to slaves, not read from them.
Every slave receives the general call; you cannot address a specific device with it, unless you encode a device address in the general call message body and decode it in the slave.
There are standard general call message formats. You should not reuse the standard codes for your own functions.

Transferring .csv files through XBee Modules

We have set up a monitoring system that can collect data. The system consists of several RPis with attached accelerometers that log the data to a .csv file.
The RPis are so spread out that they are not within reach of each other or of their own created PiFY.
We use XBee S1 modules with DigiMesh 2.4 for increased range to give the RPis commands through XCTU. The XBee modules are set up as routers. We can start and stop data collection.
Now we are interested in transferring the collected data (.csv file) to a Master RPi. How can it be done through these XBee modules?
I'd recommend doing any coding in Python, and using the pyserial module to send/receive data on the serial port. It's fairly simple to get started with that.
Configure the routers in "AT mode" (also called "transparent serial mode") via ATAP=0, with DL and DH set to 0 (telling them to use the coordinator as the destination for all serial data).
Simple Coordinator Solution
Have the routers include some sort of node ID in each CSV record, and configure the coordinator in "AT mode" as well. That way it will receive CSV records from multiple sources and just dump them out of its serial port. As long as each router sends complete lines of data, you shouldn't see corrupted CSV records on the coordinator.
More Complicated Coordinator Solution
Configure the coordinator in "API mode" via ATAP=1. Pick a programming language you're comfortable with, like C, Java or Python, and grab one of Digi's open source "host libraries" from their GitHub repository.
The coordinator will receive CSV data inside of API frames so it can identify the source device that sent the data. With this configuration, you can easily send data back to a specific device or make use of remote AT commands to change I/O on the routers.
Note that with either setup, there's no need for the RPi to create the file -- it can just send a CSV line whenever it has data ready. Just make sure you're staging a complete line and sending it in a single "serial write" call to ensure that it isn't split into multiple packets over the air.
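The "complete line in a single serial write" advice can be sketched with pyserial. The helper below builds one newline-terminated CSV record with the node ID as the first field; the port path and node naming are assumptions for illustration:

```python
def make_record(node_id: str, values) -> bytes:
    # One complete CSV line, newline-terminated, so the coordinator can
    # split records cleanly even when several routers interleave.
    fields = [node_id] + [str(v) for v in values]
    return (",".join(fields) + "\n").encode("ascii")

def send_record(port, node_id: str, values) -> None:
    # `port` would be e.g. pyserial's serial.Serial("/dev/ttyAMA0", 9600)
    # (that device path is an assumption for a Pi's UART). A single
    # write() per record keeps the whole line together so it is not
    # split across multiple radio packets mid-record.
    port.write(make_record(node_id, values))

# usage sketch:
#   import serial
#   xbee = serial.Serial("/dev/ttyAMA0", 9600)
#   send_record(xbee, "rpi-3", [0.012, -0.981, 0.034])
```

Staging the full record in one `bytes` object before writing is the key point; building it field-by-field with multiple `write()` calls is what lets records from different routers interleave.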

Push data to client, how to handle slow clients?

In a push model, where server pushes data to clients, how does one handle clients with low or variable bandwidth?
For example, I receive data from a producer and send the data to my clients (push). What if one of my clients decides to download a Linux ISO? The bandwidth available to that client becomes too little for it to download my data.
Now when my producer produces data and the server pushes it to the clients, all clients have to wait until every client has downloaded the data. This is a problem when there are one or more slow clients with little bandwidth.
I could cache the data to be sent for every client, but because the data size is big this isn't really an option (lots of clients * data size = huge memory requirements).
How is this generally solved? No need for code; a few thoughts/ideas are more than welcome.
Now when my producers produces data and the server pushes it to the client, all clients will have to wait until all clients have downloaded the data.
The above shouldn't be the case -- your clients should be able to download asynchronously from each other, with each client maintaining its own independent download state. That is, client A should never have to wait for client B to finish, and vice versa.
I can cache the data to be send for every client, but because the data size is big this isn't really an option (lots of clients * data size = huge memory requirements).
As Warren said in his answer, this problem can be reduced by keeping only one copy of the data rather than one copy per client. Reference counting (e.g. via shared_ptr if you are using C++, or something equivalent in another language) is an easy way to make sure that the shared data is deleted only when all clients are done downloading it. You can make the sharing more fine-grained, if necessary, by breaking the data into chunks: e.g. instead of all clients holding a reference to a single 800MB Linux ISO, you could break it into 800 1MB chunks, so that you can evict the earlier chunks from memory as soon as all clients have downloaded them, instead of holding the entire 800MB in memory until every client has downloaded the whole thing.
Of course, that sort of optimization only gets you so far -- e.g. if two clients each request a different 800MB file, then you're liable to end up with 1.6GB of RAM usage for caching, unless you come up with a more clever solution.
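The single-copy-with-chunking idea can be sketched like this (class and method names are mine; in Python, the interpreter's own reference counting plays the role shared_ptr plays in C++, freeing each chunk once nothing holds it):

```python
CHUNK_SIZE = 1 << 20  # 1 MiB chunks, as in the example above

class ChunkedBroadcast:
    """One shared copy of a large payload; earlier chunks are
    evicted as soon as every client has fetched them."""

    def __init__(self, data: bytes, chunk_size: int = CHUNK_SIZE):
        offsets = range(0, len(data), chunk_size)
        self.chunks = {i: data[o:o + chunk_size] for i, o in enumerate(offsets)}
        self.total = len(self.chunks)
        self.progress = {}  # client id -> index of next chunk to send

    def add_client(self, cid) -> None:
        self.progress[cid] = 0

    def next_chunk(self, cid):
        """Return the next chunk for this client, or None when done."""
        i = self.progress[cid]
        if i >= self.total:
            return None
        chunk = self.chunks[i]
        self.progress[cid] = i + 1
        self._evict()
        return chunk

    def _evict(self) -> None:
        # Free every chunk that all connected clients have already passed.
        done_upto = min(self.progress.values())
        for i in list(self.chunks):
            if i < done_upto:
                del self.chunks[i]
```

Each client advances at its own pace, so a fast client never waits for a slow one, yet the server never stores more than one copy of any chunk.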
Here are some possible approaches you could try (from less complex to more complex). You could try any of these either separately or in combination:
Monitor how much each client's "backlog" is -- that is, keep a count of the amount of data you have cached waiting to send to that client. Also keep track of the number of bytes of cached data your server is currently holding; if that number gets too high, force-disconnect the client with the largest backlog, in order to free up memory. (this doesn't result in a good user experience for the client, of course; but if the client has a buggy or slow connection he was unlikely to have a good user experience anyway. It does keep your server from crashing or swapping itself to death because a single client has a bad connection)
Keep track of how much data your server has cached and waiting to send out. If the amount of data you have cached is too large (for some appropriate value of "too large"), temporarily stop reading from the socket(s) that are pushing the data out to you (or if you are generating your data internally, temporarily stop generating it). Once the amount of cached data gets down to an acceptable level again, you can resume receiving (or generating) more data to push.
(this may or may not be applicable to your use-case) Revise your data model so that instead of being communications-oriented, it becomes state-oriented. For example, if your goal is to update the clients' state to match the state of the data-source, and you can organize the data-source's state into a set of key/value pairs, then you can require that the data-source include a key with each piece of data it sends. Whenever a key/value pair is received from the data-source, simply place that key/value pair into a map (or hash table or some other key/value oriented data structure) for each client (again, use shared_ptrs or similar here to keep memory usage reasonable). Whenever a given client has drained its queue of outgoing TCP data, remove the oldest item from that client's key/value map, convert it into TCP bytes to send, and add them to the outgoing-TCP-data queue. Repeat as necessary. The advantage of this is that "obsolete" values for a given key are automatically dropped inside the server and therefore never need to be sent to the slow clients; rather, the slow clients will only ever get the "latest" value for that key. The beneficial consequence is that a given client's maximum "backlog" is limited by the number of keys in the state model, regardless of how slow or intermittent the client's bandwidth is. Thus a slow client might see fewer updates (per second/minute/hour), but the updates it does see will still be as recent as possible given its bandwidth.
Cache the data once only, and have each client handler keep track of where it is in the download, all using the same cache. Once all clients have all the data, the cached data can be deleted.

Golang tcp socket read gives EOF eventually

I have a problem reading from a socket. There is an Asterisk instance running with plenty of calls (10-60 a minute), and I'm trying to read and process CDR events related to those calls (connected to AMI).
Here is the library I'm using (not mine, but I was pushed to fork it because of bugs): https://github.com/warik/gami
It's pretty straightforward; the main action happens in gami.go, in readDispatcher:
buf := make([]byte, _READ_BUF) // read buffer
for {
    rc, err := (*a.conn).Read(buf) // rc = bytes read; err becomes io.EOF when the peer closes
    // ... dispatch buf[:rc], handle err ...
}
So there is a TCPConn (a.conn) and a 1024-byte buffer into which I'm reading messages from the socket. So far so good, but eventually, from time to time (anywhere from 10 minutes to 5 hours, independently of the amount of data coming through the socket), the Read operation fails with an io.EOF error. I tried to reconnect and re-login immediately, but that's also impossible: the connection times out, so I was forced to wait about 40-60 seconds, and this delay is very costly to me; I'm losing a lot of data because of it. I've been googling, reading sources and trying a lot of things, with no result. The strangest thing is that a simple socket opened in Python or PHP does not fail.
Is it possible that the problem is a lack of file descriptors to represent the socket on my machine or on the Asterisk server?
Is it possible that the problem is in the Asterisk configuration? (I have another Asterisk instance on which this problem doesn't reproduce, but it also handles far fewer calls.)
Is it possible that the problem is in the way I deal with the socket connection, or with Go in general?
go version go1.2.1 linux/amd64
asterisk 1.8
Update to the latest Asterisk. There was a bug like that when AMI sends a lot of data.
To check the issue, send a long-output command via AMI, e.g. "COMMAND sip show peers", and look at the result.
OK, the problem was an OS socket buffer overflow; as it turned out, there was too much data to handle.
So there are three possible ways to fix this:
increase the socket buffer size
somehow increase the speed of the process that reads data from the socket
lower the data volume or frequency
The thing is that gami by default reads all data from Asterisk, and I was reading all of it and filtering it after the actual read operation. Since the AMI listening application was running on a pretty weak PC, it simply could not read all the data before the buffer capacity was exceeded. But it's possible to receive only particular events by sending the "Events" action to AMI and specifying the desired "EventMask".
So my decision was to do that, and to create different connections for different event types.
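An AMI action is just a block of CRLF-separated "Key: Value" headers ended by a blank line, so the "Events"/"EventMask" subscription can be built like this (shown in Python rather than Go for brevity; the helper name is mine, but "Action: Events" with an "EventMask" header is the standard AMI form):

```python
def ami_action(name: str, **headers) -> bytes:
    # AMI actions: "Key: Value" lines, CRLF-separated, terminated by a blank line.
    lines = ["Action: " + name] + [f"{k}: {v}" for k, v in headers.items()]
    return ("\r\n".join(lines) + "\r\n\r\n").encode("ascii")

# Ask Asterisk to deliver only CDR events on this connection:
subscribe_cdr = ami_action("Events", EventMask="cdr")
```

Writing `subscribe_cdr` to the AMI socket right after login makes Asterisk filter events server-side, which is exactly what keeps the client's socket buffer from overflowing.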

How to create and maintain a receiver buffer for a network simulation framework?

I am trying to simulate a mesh network in MATLAB. The intermediate nodes and the destination need to maintain a receive buffer so that whenever a packet arrives from a source, it is stored in the buffer and can be used for further operations. I am using a main file, and the source, intermediate and destination nodes are functions. Since the functions are called every time a new packet arrives, how and where can I maintain individual or combined buffers for reception? The packets can't be treated on a first come, first served basis but need to be buffered collectively. Please ask if I haven't explained the problem correctly.
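One structure that fits this is a map from node ID to a FIFO buffer, created once in the main file and passed into (or shared with) the node functions so it outlives individual calls. In MATLAB the analogue would be a containers.Map held in a persistent variable or threaded through the main loop; the data structure itself is sketched here in Python (names are mine):

```python
from collections import defaultdict, deque

class ReceiverBuffers:
    """One FIFO buffer per node: packets are appended on arrival and
    the whole buffer can be processed collectively later, rather than
    packet-by-packet as each arrives."""

    def __init__(self):
        self.buffers = defaultdict(deque)  # node id -> queue of packets

    def on_packet(self, node_id, packet) -> None:
        # Called from the node function each time a packet arrives.
        self.buffers[node_id].append(packet)

    def collect(self, node_id):
        # Hand back (and clear) everything buffered for this node,
        # for the collective processing step.
        pkts = list(self.buffers[node_id])
        self.buffers[node_id].clear()
        return pkts
```

The key point is that the buffer object is owned by the main simulation loop, not by the node functions, so it survives across calls.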