Esper publisher and subscriber at different hosts

Is there any sample code that has one host publishing events and another host receiving them (as a listener or subscriber) through the Esper framework? I notice Esper provides different adapters (socket, JMS and HTTP) but cannot find corresponding sample code. Thanks

EsperIO [1] would be what you are looking for
[1] http://esper.codehaus.org/esperio-4.9.0/doc/reference/en-US/html_single/index.html
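For orientation, below is a minimal sketch of the receiving (subscriber) side using the plain Esper API, assuming a made-up MeasurementEvent class; the EsperIO socket, JMS or HTTP adapters described in [1] would then feed events into this engine from the publishing host.

import com.espertech.esper.client.*;

public class ReceiverHost {

    // Hypothetical event type, used only for illustration.
    public static class MeasurementEvent {
        private String sensor;
        private double value;
        public String getSensor() { return sensor; }
        public void setSensor(String sensor) { this.sensor = sensor; }
        public double getValue() { return value; }
        public void setValue(double value) { this.value = value; }
    }

    public static void main(String[] args) {
        Configuration config = new Configuration();
        config.addEventType("MeasurementEvent", MeasurementEvent.class);
        EPServiceProvider engine = EPServiceProviderManager.getDefaultProvider(config);

        // The subscriber side: an EPL statement plus a listener.
        EPStatement stmt = engine.getEPAdministrator()
                .createEPL("select sensor, value from MeasurementEvent where value > 100");
        stmt.addListener(new UpdateListener() {
            public void update(EventBean[] newEvents, EventBean[] oldEvents) {
                System.out.println("received: " + newEvents[0].get("sensor")
                        + " = " + newEvents[0].get("value"));
            }
        });

        // Events arrive here either locally via engine.getEPRuntime().sendEvent(...)
        // or remotely through an EsperIO adapter (socket, JMS, HTTP) configured as in [1].
    }
}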

Related

How does the clustered event bus in Vert.x work?

I am new to Vert.x. I am confused about the event bus in a clustered environment.
As the Vert.x documentation says:
The event bus doesn’t just exist in a single Vert.x instance. By
clustering different Vert.x instances together on your network they
can form a single, distributed event bus.
How exactly are the event buses of different Vert.x instances joined together in a cluster to form a single distributed event bus, and what is the role of the ClusterManager in this case? How does the communication between nodes work in the distributed event bus? Please explain this to me in technical detail. Thanks
There is more info about clustering in the cluster managers section of the docs.
The key points are:
Vert.x has a clustering SPI; implementations are named "cluster managers"
Cluster managers provide Vert.x with discovery and membership management of the clustered nodes
Vert.x does not use the cluster manager for message transport, it uses its own set of TCP connections
If you want to try this out, take a look at the Infinispan Cluster Manager examples.
For more technical details, I guess the best option is to go to the source code.
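As a rough sketch of what this looks like from application code, assuming a cluster manager (for example vertx-infinispan or vertx-hazelcast) is on the classpath, and using a made-up address and payload:

import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;

public class ClusteredNode {
    public static void main(String[] args) {
        // The cluster manager found on the classpath handles discovery and membership;
        // event bus messages themselves travel over Vert.x's own TCP connections.
        Vertx.clusteredVertx(new VertxOptions(), res -> {
            if (res.failed()) {
                res.cause().printStackTrace();
                return;
            }
            Vertx vertx = res.result();

            // A consumer registered on this node...
            vertx.eventBus().consumer("news.sports", msg ->
                    System.out.println("got: " + msg.body()));

            // ...receives messages published from any node in the cluster.
            vertx.eventBus().publish("news.sports", "kick-off!");
        });
    }
}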

Implementing multiple sinks in Contiki

I want to implement a program for multi-sink sensor networks in Contiki. I have two types of things, and each network of things has a sink node to which its nodes send data. The sinks should communicate with each other. According to the paper "Support of multiple sinks via a virtual root for the RPL routing protocol", EURASIP Journal on Wireless Communications and Networking 2014, 2014:91, sinks can communicate in several ways, for example forwarding a packet to the correct sink, forwarding a packet to all sinks, or forwarding a packet to a central unit. The paper focuses on the third method but does not describe how to implement it in Contiki. Is there an example of implementing communication between sinks in Contiki?

Spring XD - UDP inside Jobs

I have been using Spring XD for a while for continuous ingestion of sensor data and it works perfectly.
The new requirement I have is the ability to "replay" portions of that data. In my particular case that means reading from MongoDB (with a certain query), generating a UDP packet from a certain field of each entry, and sending it to a SocketAddress at a fixed time interval.
My first attempt is to implement this as a Spring Batch job. The reader is simple, since it just queries MongoDB for the data, but I am concerned about the UDP portion. It does not feel natural to use Spring Batch for sending UDP packets, so I would like to know if anybody can suggest an approach for implementing this.
Thanks
You could use a custom XD source with a MongoDB Inbound Channel Adapter, piped to a custom sink that uses a UDP Outbound Channel Adapter.
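Not an authoritative XD module, but a minimal sketch of the Spring Integration piece such a custom sink could use, assuming the module's default "input" channel and a made-up target host and port:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.ip.udp.UnicastSendingMessageHandler;
import org.springframework.messaging.MessageHandler;

@Configuration
public class UdpSinkConfig {

    // Every message arriving on the module's "input" channel is written
    // out as a UDP datagram to the configured destination.
    @Bean
    @ServiceActivator(inputChannel = "input")
    public MessageHandler udpOutboundAdapter() {
        return new UnicastSendingMessageHandler("192.168.1.50", 11111); // assumed target
    }
}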

Can ZMQ publish a message to a specific client with a PUB-SUB socket?

I am using PUB/SUB sockets. Currently the server subscribes with byte[0] (all topics),
while each client subscribes with a byte[16] prefix - a specific header used as the topic.
However, I cannot stop a client from subscribing with byte[0], which lets it receive all other messages.
My application is like a game: it has one single server using ZMQ for the connections,
and many clients each have a ZMQ socket to talk to the server.
What pattern or socket should I use in this case?
Thanks
" ... cannot stop client to subscribe byte[0] which can receive all other messages."
Stopping a "subscribe to all" mode of the SUB client
For the ZMQ PUB/SUB Formal Communication Pattern archetype, the SUB client has to submit it's subscription request ( via zmq_setsockopt() ).
PUB-side ( a Game Server ) has got no option to do that from it's side.
There is no-subscription state right on a creation of a new SUB socket, thus an absolutely restrictive filter, thas no message pass through. ( For furhter details on methods for SUBSCRIBE / UNSUBSCRIBE ref. below )
ZeroMQ specification details setting for this:
int zmq_setsockopt (void *socket,
                    int option_name,
                    const void *option_value,
                    size_t option_len);
Caution: only ZMQ_SUBSCRIBE, ZMQ_UNSUBSCRIBE, and ZMQ_LINGER take effect immediately; other options are applied only to subsequent socket bind/connects.
ZMQ_SUBSCRIBE: Establish message filter
The ZMQ_SUBSCRIBE option shall establish a new message filter on a ZMQ_SUB socket. Newly created ZMQ_SUB sockets shall filter out all incoming messages, therefore you should call this option to establish an initial message filter.
An empty option_value of length zero shall subscribe to all incoming messages.
A non-empty option_value shall subscribe to all messages beginning with the specified prefix.
Multiple filters may be attached to a single ZMQ_SUB socket, in which case a message shall be accepted if it matches at least one filter.
ZMQ_UNSUBSCRIBE: Remove message filter
The ZMQ_UNSUBSCRIBE option shall remove an existing message filter on a ZMQ_SUB socket. The filter specified must match an existing filter previously established with the ZMQ_SUBSCRIBE option. If the socket has several instances of the same filter attached the ZMQ_UNSUBSCRIBE option shall remove only one instance, leaving the rest in place and functional.
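For illustration, roughly the same filter handling from Java with JeroMQ; the endpoint and topic bytes below are made up, and subscribe()/unsubscribe() correspond to the zmq_setsockopt() options quoted above:

import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class FilteredSubscriber {
    public static void main(String[] args) {
        try (ZContext ctx = new ZContext()) {
            ZMQ.Socket sub = ctx.createSocket(ZMQ.SUB);
            sub.connect("tcp://game-server:5556");            // assumed endpoint

            byte[] myTopic = "PLAYER-0042-HEADER".getBytes();  // assumed topic prefix
            sub.subscribe(myTopic);                            // prefix filter: only matching messages pass
            // sub.subscribe(new byte[0]);                     // this would be "subscribe to all" -
                                                               // the client must simply never call it

            byte[] msg = sub.recv();                           // blocks until a matching message arrives
            System.out.println(new String(msg));

            sub.unsubscribe(myTopic);                          // remove the filter again
        }
    }
}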
How to enforce ad-hoc, server-dictated ZMQ_SUBSCRIBE restrictions?
This is possible by extending the messaging layer and adding a control-mode socket that carries server-initiated settings for the client's ZMQ_SUB message filtering.
Upon receiving a new server-dictated ZMQ_SUBSCRIBE/ZMQ_UNSUBSCRIBE setting, the ZMQ_SUB client-side code simply handles that request and calls zmq_setsockopt() accordingly.
FSA-driven grammars for this approach are rich in further possibilities and allow any game server / game community to smoothly go this way.
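A minimal sketch of the client side of such a control-mode socket in Java/JeroMQ; the endpoints and the "SUB <prefix>" / "UNSUB <prefix>" command format are assumptions, not anything defined by ZeroMQ itself:

import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class ServerDictatedClient {
    public static void main(String[] args) {
        try (ZContext ctx = new ZContext()) {
            ZMQ.Socket data = ctx.createSocket(ZMQ.SUB);      // game data
            data.connect("tcp://game-server:5556");           // assumed endpoint

            ZMQ.Socket control = ctx.createSocket(ZMQ.SUB);   // server-initiated filter commands
            control.connect("tcp://game-server:5557");        // assumed endpoint
            control.subscribe(new byte[0]);                   // the control channel itself hears everything

            // A real client would poll both sockets (e.g. with ZMQ.Poller);
            // this loop only shows the server-dictated filter handling.
            while (!Thread.currentThread().isInterrupted()) {
                String cmd = control.recvStr();               // e.g. "SUB PLAYER-0042" or "UNSUB PLAYER-0042"
                String[] parts = cmd.split(" ", 2);
                byte[] prefix = parts[1].getBytes();
                if ("SUB".equals(parts[0])) {
                    data.subscribe(prefix);                   // apply the server-dictated filter
                } else if ("UNSUB".equals(parts[0])) {
                    data.unsubscribe(prefix);
                }
            }
        }
    }
}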
What pattern or socket should I use?
ZeroMQ is rather a library of LEGO-style elements to get assembled into a bigger picture.
Expecting such a smart library to have a one-size-fits-all ninja-element is, on closer look, an oxymoron.
So, to avoid a never-ending story of adding "although this ..." and "also that ...":
[1] Review all requirements and list the features of the end-to-end scalable solution.
[2] Design a messaging concept and validate that it meets all the listed requirements and covers all the features in [1].
[3] Implement [2].
[4] Test [3] and correct it until it meets the end-to-end specification [1] one-to-one.
[5] Enjoy it. You have done it end-to-end right.

Without using the JMS wrapper, how to emulate a JMS topic with the HornetQ core API

I would like to translate the concept of JMS topics to the HornetQ core API.
The problem I see: from my brief examination, it appears the main class JMSServerManagerImpl (from hornetq-jms.jar) uses JNDI to coordinate the various collaborators it requires. I would like to avoid JNDI, as it is not self-contained and is a globally shared object, which is a problem especially in an OSGi environment. One alternative is to copy the code starting at JMSServerManagerImpl, but that seems like a lot of work.
I would rather have confirmation that my approach to emulating how topics are supported in HornetQ is the right way to solve this problem. If anyone has sufficient knowledge, perhaps they can comment on what I think is the approach to writing my own emulation of topics using the core API.
ASSUMPTION
if a message consumer fails (via rollback), the container will try delivering the message to a different consumer for the same topic.
EMULATION
1. Wrap each message that is added for the topic.
2. The sender sends the message with an acknowledgement handler set.
3. The wrapper from (1) rolls back after the real listener returns.
4. The sender then acknowledges delivery.
I am assuming that after step 4 the message counts as delivered once it has been given to all message receivers. If I have made any mistakes or my assumptions are wrong, please comment. I am not sure whether this assumption about how acknowledgements work is correct, so any pointers would be nice.
If you are trying to figure out how to send a message to multiple consumers using the core API, here is what I recommend:
Create queue 1 and bind it to address1
Create queue 2 and bind it to address1
Create queue N and bind it to address1
Send a message on address1
Start N consumers, where each consumer listens on one of queues 1-N
This way it basically works like a topic.
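A rough sketch of that recipe with the HornetQ core API; the Netty transport and the queue/address names are assumptions, see the user manual below for the real configuration:

import org.hornetq.api.core.TransportConfiguration;
import org.hornetq.api.core.client.*;
import org.hornetq.core.remoting.impl.netty.NettyConnectorFactory;

public class TopicViaCoreApi {
    public static void main(String[] args) throws Exception {
        ServerLocator locator = HornetQClient.createServerLocatorWithoutHA(
                new TransportConfiguration(NettyConnectorFactory.class.getName()));
        ClientSessionFactory factory = locator.createSessionFactory();
        ClientSession session = factory.createSession();

        // Two queues bound to the same address: every message sent to the
        // address is routed to both queues, which gives topic semantics.
        session.createQueue("address1", "queue1", true);
        session.createQueue("address1", "queue2", true);

        ClientProducer producer = session.createProducer("address1");
        ClientMessage message = session.createMessage(true);
        message.getBodyBuffer().writeString("hello, all subscribers");
        producer.send(message);

        ClientConsumer consumer1 = session.createConsumer("queue1");
        ClientConsumer consumer2 = session.createConsumer("queue2");
        session.start();

        // Both consumers receive their own copy of the message.
        System.out.println(consumer1.receive(1000).getBodyBuffer().readString());
        System.out.println(consumer2.receive(1000).getBodyBuffer().readString());

        session.close();
        factory.close();
        locator.close();
    }
}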
http://hornetq.sourceforge.net/docs/hornetq-2.0.0.BETA5/user-manual/en/html/using-jms.html
7.5. Directly instantiating JMS Resources without using JNDI