Implementing multi-sink sensor networks in Contiki (simulation)

I want to implement a program for multi-sink sensor networks in Contiki. I have two types of nodes, and each network of nodes has a sink; every node sends its data to its own sink. The sinks should communicate with each other. According to the paper "Support of multiple sinks via a virtual root for the RPL routing protocol" (EURASIP Journal on Wireless Communications and Networking 2014, 2014:91), sinks can communicate in several ways: forward the packet to the correct sink, forward the packet to all sinks, or forward the packet to a central unit. The paper focuses on the third method, but it does not describe how to implement it in Contiki. Is there an example of implementing communication between sinks in Contiki?

Related

OMNeT++ application sends multiple streams

Let's say I have a car with different sensors: several cameras, LIDAR and so on. The data from these sensors is going to be sent to some host over a 5G network (OMNeT++ + INET + Simu5G). For video it is about 5000 packets of 1400 bytes each, for LIDAR 7500 packets of 1240 bytes, and so on. Each flow is encoded in UDP packets.
So in my OMNeT++ module's handleMessage method I have two sendTo calls, each scheduled "as soon as possible", i.e., with no delay - that corresponds to the idea of multiple parallel streams. How does OMNeT++ handle situations where it needs to send two different packets at the same time from the same module to the same module (some client which receives the sensor data streams)? Does it create some inner buffer on the sender or receiver side, thereby really allowing only one packet to be sent per handleMessage call, or is that wrong? I want to optimize data transmission and play with packet sizes and maybe with sending intervals, so I want to know how OMNeT++ handles multiple simultaneous streams, because if it actually buffers, maybe it then makes sense to form a single packet from multiple streams, where each such packet consists of a certain amount of data from each stream.
There is some confusion here that needs to be clarified first:
OMNeT++ is a discrete event simulator framework. An OMNeT++ model contains modules that communicate with each other using OMNeT++ API calls like sendTo() and handleMessage(). A call to sendTo() just queues the provided message into the future event queue (an internal, time-ordered queue). So if you send more than one packet in a single handleMessage() call, they will be queued in that order, and the packets will be delivered one by one to the requested destination modules when the requested simulation time is reached. You can therefore send as many packets as you wish, and they will be delivered one by one to the destination's handleMessage() method. But beware! Even though the different packets are delivered one by one sequentially in the program's logic, they can still be delivered simultaneously in terms of simulation time. There are two time concepts here: real time, which describes the execution order of the code, and simulation time, which describes the time that passes from the point of view of the simulated system. That is why OMNeT++, while being a single-threaded application that processes events sequentially, can still simulate any number of systems running in parallel.
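The future-event-queue behaviour described above can be sketched with a toy discrete-event loop. This is a hypothetical, minimal stand-in for OMNeT++'s scheduler (not its actual API): two packets scheduled "as soon as possible" are processed one after another in real time, yet both carry the same simulation timestamp.

```python
import heapq
import itertools

class ToySimulator:
    """Minimal future event set: entries are (sim_time, insertion_order, event)."""
    def __init__(self):
        self._fes = []
        self._counter = itertools.count()  # breaks ties, preserving send order
        self.now = 0.0
        self.log = []

    def send(self, delay, event):
        # Like sendTo(): just queue the event; nothing is delivered yet.
        heapq.heappush(self._fes, (self.now + delay, next(self._counter), event))

    def run(self):
        # Single-threaded loop: events pop one by one in real time,
        # but self.now is *simulation* time and may not advance at all.
        while self._fes:
            self.now, _, event = heapq.heappop(self._fes)
            self.log.append((self.now, event))

sim = ToySimulator()
sim.send(0.0, "video packet")   # both scheduled with no delay
sim.send(0.0, "lidar packet")
sim.run()
# Both events are processed sequentially in code, yet at the same sim time 0.0.
```

Note how the tie-breaking counter preserves the order in which the events were queued, matching the "queued in that order" behaviour described above.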
BUT:
You are not modeling directly with OMNeT++ modules, but rather using the INET Framework, which is a model created to simulate internet protocols and networks. INET's core entity is a node, which is something that has network interface(s) (and queues belonging to them). Transmissions between nodes are properly modeled, and only a single packet can travel on an Ethernet line at a time. Other packets must wait in the network interface queue for an opportunity to be transmitted.
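As a back-of-the-envelope illustration of that interface queue (my own sketch, not INET code): if two frames arrive at an idle interface at the same instant, the second one must wait out the first one's full serialization time on the line, which is where queueing delay comes from.

```python
def departure_times(frame_sizes_bytes, bitrate_bps, arrival_time=0.0):
    """FIFO interface queue: only one frame can be on the line at a time."""
    t = arrival_time
    out = []
    for size in frame_sizes_bytes:
        t += size * 8 / bitrate_bps  # serialization delay of this frame
        out.append(t)                # when the frame finishes transmitting
    return out

# Two 1400-byte frames arriving together at a 100 Mbit/s interface:
# the second one departs a full frame time (112 us) after the first.
times = departure_times([1400, 1400], 100e6)
```

This is exactly the interference effect between streams that the answer goes on to discuss.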
This is actually the core of the problem for Time-Sensitive Networking: given a lot of pre-defined data streams in a network, how do the various packets interfere with and affect each other, and how do they change the delay and jitter statistics of the various streams at the destination? Plus, how can you configure the source and network gate scheduling to achieve some desired upper bounds on those statistics?
The INET master branch (to be released as INET 4.4) contains a lot of TSN code, so I highly recommend trying it if you want to model in-vehicle networks.
If you are not interested in in-vehicle communication, but rather want to stream some data over 5G, then TSN is not what you need, but you should NOT start to multiplex/demultiplex data streams at the application level. The communication layers below your UDP application will fragment/defragment and queue the packets exactly as it is done in the real world. You will not gain anything by doing mux/demux at the application layer.

How does the clustered event bus in Vert.x work?

I am new to Vert.x. I am confused about the event bus in a clustered environment.
As the Vert.x documentation puts it:
The event bus doesn’t just exist in a single Vert.x instance. By
clustering different Vert.x instances together on your network they
can form a single, distributed event bus.
How exactly are the event buses of different Vert.x instances joined together in a cluster to form a single distributed event bus, and what is the role of the ClusterManager in this case? How does communication between nodes work in the distributed event bus? Please explain this to me in technical detail. Thanks
There is more info about clustering in the cluster managers section of the docs.
The key points are:
Vert.x has a clustering SPI; implementations are named "cluster managers"
Cluster managers provide Vert.x with discovery and membership management of the clustered nodes
Vert.x does not use the cluster manager for message transport, it uses its own set of TCP connections
If you want to try this out, take a look at Infinispan Cluster Manager examples.
For more technical details, I guess the best option is to go to the source code.
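The key points above can be sketched with a toy model (classes of my own invention, not Vert.x's actual API): the cluster manager only keeps membership and a shared map of which node has handlers for which address, while the message itself goes node-to-node over a direct connection, modeled here as a plain method call standing in for Vert.x's TCP links.

```python
class ToyClusterManager:
    """Stands in for the clustering SPI: membership + shared subscription map."""
    def __init__(self):
        self.nodes = {}   # node_id -> node (membership)
        self.subs = {}    # address -> set of node_ids with handlers

    def join(self, node):
        self.nodes[node.node_id] = node

    def register(self, node_id, address):
        self.subs.setdefault(address, set()).add(node_id)

class ToyNode:
    def __init__(self, node_id, cluster):
        self.node_id = node_id
        self.cluster = cluster
        self.handlers = {}        # address -> handler
        self.received = []
        cluster.join(self)

    def consumer(self, address, handler):
        self.handlers[address] = handler
        self.cluster.register(self.node_id, address)  # discovery via manager

    def publish(self, address, body):
        # Transport is NOT the cluster manager: the manager is only consulted
        # for *where* the handlers live; delivery goes directly to each node.
        for node_id in self.cluster.subs.get(address, ()):
            self.cluster.nodes[node_id].deliver(address, body)

    def deliver(self, address, body):
        self.handlers[address](body)

cluster = ToyClusterManager()
a, b = ToyNode("a", cluster), ToyNode("b", cluster)
b.consumer("news", lambda msg: b.received.append(msg))
a.publish("news", "hello")   # a looks up "news" in the shared map, sends to b
```

The design point mirrored here is the separation of concerns: swap the registry implementation (Hazelcast, Infinispan, ...) without touching the message path.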

Spring XD - UDP inside Jobs

I have been using Spring XD for a while for continuous ingestion of sensor data and it works perfectly.
The new requirement that I have is the ability to "replay" portions of that data. In my particular case it would mean reading from MongoDB (with a certain query), generating a UDP packet with a certain field of each entry, and sending it to a SocketAddress at a fixed time interval.
The first attempt that I am implementing goes through a spring-batch job. The reader is simple, since it is just querying MongoDB for the data, but I am concerned about the UDP portion. It does not feel natural to use spring-batch for sending UDP packets, so I would like to know if anybody can suggest an idea for implementing this.
Thanks
You could use a custom XD source with a MongoDB Inbound Channel Adapter piped to a custom sink using a UDP Outbound Channel Adapter.
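To make the replay idea concrete, here is a bare standard-library sketch (plain Python, not Spring Integration; the record shape, field name, and interval are made up for illustration): iterate over the query results and send one field of each record as a UDP datagram, pacing the sends at a fixed interval.

```python
import socket
import time

def replay(records, field, addr, interval_s):
    """Send one UDP datagram per record, paced at a fixed interval."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for record in records:
            sock.sendto(str(record[field]).encode(), addr)
            time.sleep(interval_s)  # fixed replay interval between packets
    finally:
        sock.close()

# Local demo: a loopback receiver stands in for the real destination host.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))                      # ephemeral port
records = [{"value": 1}, {"value": 2}]         # stand-in for MongoDB results
replay(records, "value", rx.getsockname(), interval_s=0.01)
got = [rx.recvfrom(64)[0] for _ in range(len(records))]
rx.close()
```

In the XD pipeline the MongoDB adapter would produce the records and the UDP outbound adapter would replace the raw `sendto` call; the sketch only shows the shape of the replay loop.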

What is the difference between atomic broadcast and atomic multicast?

It is not clear to me why some papers use one term or the other. I think they are the same thing: maybe atomic multicast is actually atomic broadcast implemented using IP multicast (like Ring Paxos).
The term Atomic Broadcast is more related to a single central entity, usually called a sequencer, which enforces and sends a totally ordered set of messages, usually called the event stream. How it sends the messages (broadcast, multicast, unicast, TCP, etc.) is not its main characteristic, or at least it shouldn't be. Adding to what #jop has said, there are big technical differences between UDP broadcast and UDP multicast when it comes to the TCP/IP stack. For example:
Multicast can travel across subnets, broadcast cannot
Multicast usually requires IGMP, broadcast does not
Most kernel-bypass network cards will accelerate multicast, but not broadcast
That does not mean that UDP broadcast should never be used. Some networks might not support multicast but will support broadcast. Ideally a messaging system will support several transport mechanisms:
UDP Multicast
UDP Unicast
UDP Broadcast
TCP
Memory
Shared Memory
DUAL (TCP + UDP)
For an example of an atomic broadcast messaging system which is not tied to any specific transport mechanism, you can check CoralSequencer.
Disclaimer: I'm one of the developers of CoralSequencer.
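The sequencer idea above can be sketched in a few lines (a toy of my own, not CoralSequencer's API): producers hand messages to a central sequencer, which stamps each one with a global sequence number; every receiver replays the stream in stamped order, so all replicas see the same total order regardless of which transport carried the bytes.

```python
class ToySequencer:
    """Central entity enforcing a total order on the event stream."""
    def __init__(self):
        self.next_seq = 0
        self.stream = []  # the totally ordered event stream

    def submit(self, sender, payload):
        # Stamp every message with the next global sequence number.
        self.stream.append((self.next_seq, sender, payload))
        self.next_seq += 1

seq = ToySequencer()
seq.submit("producer1", "a")
seq.submit("producer2", "b")
seq.submit("producer1", "c")

# Every receiver applies the stream in sequence-number order, so all
# replicas process exactly the same sequence of events.
receiver1 = sorted(seq.stream)
receiver2 = sorted(seq.stream)
```

The transport (multicast, unicast, TCP, shared memory) only affects how the stamped stream reaches the receivers, which is exactly the point made above.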
In distributed systems theory, the distinction is not related to using IP multicast or any other implementation detail. Actually, most of the time it is a matter of the author's personal preference, and you can safely assume that they mean the same thing.
To be strict, when you say multicast you are assuming that not all processes are necessarily intended to receive all messages; when you say broadcast, you are assuming that all processes are targeted by all messages. The ambiguity arises as follows: since multicast algorithms are often layered on top of membership management, an abstract protocol that multicasts to all members in the context of a view is pretty much indistinguishable from one that broadcasts to all processes in the system model. Hence, you can describe it as either multicast or broadcast; it really depends on the context.
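The strict distinction can be shown in a few lines (my own illustration, not from any paper): multicast delivers to an explicit destination group, broadcast targets every process in the system model, and when the group happens to be "all current members" the two coincide.

```python
processes = {"p1", "p2", "p3"}   # the system model: all processes

def multicast(group, msg, inboxes):
    # Only the addressed group receives the message.
    for p in group:
        inboxes[p].append(msg)

def broadcast(msg, inboxes):
    # Broadcast is multicast to the whole process set.
    multicast(processes, msg, inboxes)

inboxes = {p: [] for p in processes}
multicast({"p1", "p2"}, "m1", inboxes)  # p3 is not targeted
broadcast("m2", inboxes)                # every process is targeted
```

When `group == processes`, the two calls are indistinguishable, which is exactly the source of the ambiguity described above.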

Pub/Sub implementation in ZeroMQ 3.x

I have been working with Qpid and now I am trying to move to a brokerless messaging system, but I am really confused about the network traffic in a pub/sub pattern. I read the following document:
http://www.250bpm.com/pubsub#toc4
and am really confused about how subscription forwarding is actually done.
I thought ZeroMQ was supposed to be agnostic of the underlying network topology, but it seems it is not. How does every node know what to forward and what not to? For example, in an Ethernet network with millions of subscribers and publishers, a message tree does not sound feasible to me. And what about hops that do not even know about the existence of ZeroMQ - how would they forward packets to the subscribers connected to them? For them it would just be a normal packet, so they would forward multiple copies of data packets even if it is the same packet.
I am not a networking expert, so maybe I am missing something obvious about the message tree and how it is even created.
Could you please give some example cases of how this distribution tree is created, and on exactly which nodes XPUB and XSUB sockets are created?
Is a device (the term used in the link) something like a broker? Throughout the article it seemed like a device is just any general intermediary hop that knows nothing about ZeroMQ sockets (just a random network hop). If it is indeed a broker-like thing, does that mean that for pub/sub, all nodes in the message tree have to satisfy the definition of being a device, and hence it is not a brokerless design?
Also, in the tree diagram (from the link, which consists of P, D, C), I initially assumed the two Cs are subscribers and P the only publisher (D just a random hop), but now it seems that D is the ZeroMQ device. Does C subscribe to D and D subscribe to P, or do both Cs subscribe to P directly? (More generally, does each node subscribe only to its parent in the tree?) Sorry for the novice question, but it seems I am missing something obvious here; it would be nice if someone could give more insights.
ZeroMQ uses the network to establish connections between nodes directly (e.g. via TCP), but only ever between one sender and 1..n receivers. These are connected "directly" and can exchange messages using the underlying protocol.
Now when you subscribe to only certain events in a pub-sub scenario, ZeroMQ used to filter out messages on the subscriber side, causing unnecessary network traffic from the publisher to at least some of the subscribers.
In newer versions of ZeroMQ (3.0 and 3.1) the subscriber process sends its subscription list to the publisher, which maintains a list of subscribers and the topics they are interested in. The publisher can thus discard messages that are not subscribed to by any subscriber, and potentially send targeted messages only to interested subscribers.
When the publisher is itself a subscriber of events (e.g. a forwarding or routing device/service), it might forward those subscriptions upstream by subscribing to its own connected publishers in the same way.
I am not sure whether ZeroMQ still does subscriber-side filtering in newer versions even if it "forwards" its subscriptions, though.
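Subscription forwarding as described above can be sketched with prefix matching (a simplified model of my own, not libzmq internals): each subscriber sends its topic prefixes upstream, a device forwards its downstream subscriptions to its own publisher, and a publisher only sends messages whose topic matches some received prefix.

```python
class ToyPub:
    """Publisher keeping the subscription list sent up by its peers."""
    def __init__(self):
        self.prefixes = set()
        self.sent = []

    def subscribe(self, prefix):
        self.prefixes.add(prefix)

    def publish(self, topic, payload):
        # Discard messages no connected subscriber has asked for.
        if any(topic.startswith(p) for p in self.prefixes):
            self.sent.append((topic, payload))

class ToyDevice:
    """XSUB/XPUB-style intermediary: forwards subscriptions upstream."""
    def __init__(self, upstream):
        self.upstream = upstream

    def subscribe(self, prefix):
        self.upstream.subscribe(prefix)  # re-subscribe on behalf of peers

pub = ToyPub()
dev = ToyDevice(pub)
dev.subscribe("weather.")              # subscriber C -> device D -> publisher P
pub.publish("weather.paris", "18C")    # matches a forwarded prefix, is sent
pub.publish("sports.f1", "lap 3")      # nobody subscribed, dropped at source
```

This is why the tree only needs to involve ZeroMQ-aware endpoints and devices: ordinary IP hops in between just carry the already-filtered traffic.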
A more efficient mechanism for pub/sub to many subscribers is to use multicast, whereby a single message traverses the network and is received by all subscribers (who can then filter out what they wish).
ZeroMQ supports a standardized reliable multicast protocol called Pragmatic General Multicast (PGM).
These references should give you an idea of how it all works. Note that multicast generally only works within a single subnet and may need router configuration or TCP bridges to span multiple subnets.