How to make a Pub/Sub service with CometD and Jetty - publish-subscribe

I need to create a Java 8-based service that provides a CometD channel that multiple clients can subscribe to. The idea is that the server can send notifications to the clients when certain events occur.
I am using Jetty 9 as my servlet container (necessary to meet the requirements for my group). I have been reading CometD documentation and looking for some kind of example that I can use. The documentation is extensive but isn't helping (lack of context), and I haven't been able to find a decent example of what I am trying to do.
Can someone provide a simple example of creating a publication mechanism, in Java, that can be used with Jetty? Failing that, can someone point me to an example of how to do it?
Please advise.

The CometD Project has an outstanding task to bring back the tutorials.
This particular question was answered by the server-side stock price tutorial; you can find the source here while we work on bringing it back online as part of the documentation.
Glossing over a few details, the service you need to write is similar to the tutorial's stock price service: upon receiving an external event, the service should broadcast the event to subscribers.
@Service
public class StockPriceService implements StockPriceEmitter.Listener
{
    @Inject
    private BayeuxServer bayeuxServer;

    @Session
    private LocalSession sender;

    public void onUpdates(List<StockPriceEmitter.Update> updates)
    {
        for (StockPriceEmitter.Update update : updates)
        {
            // Create the channel name using the stock symbol.
            String channelName = "/stock/" + update.getSymbol().toLowerCase(Locale.ENGLISH);

            // Initialize the channel, making it persistent and lazy.
            bayeuxServer.createChannelIfAbsent(channelName, new ConfigurableServerChannel.Initializer()
            {
                public void configureChannel(ConfigurableServerChannel channel)
                {
                    channel.setPersistent(true);
                    channel.setLazy(true);
                }
            });

            // Convert the Update business object to a CometD-friendly format.
            Map<String, Object> data = new HashMap<>(4);
            data.put("symbol", update.getSymbol());
            data.put("oldValue", update.getOldValue());
            data.put("newValue", update.getNewValue());

            // Publish to all subscribers.
            ServerChannel channel = bayeuxServer.getChannel(channelName);
            channel.publish(sender, data);
        }
    }
}
Class StockPriceEmitter is the source of your external events, and publishes them to StockPriceEmitter.Listener in the form of StockPriceEmitter.Update events.
How the external events are relayed to the CometD server is the detail that StockPriceEmitter hides; it could be done via JMS messages, by polling an external REST service, via a custom network protocol, by polling a database, etc.
The important thing is that when the external events arrive, StockPriceService.onUpdates(...) is called; there you can convert the events into a CometD-friendly JSON format, and then publish them to the CometD channel.
Publishing to the CometD channel, in turn, will send the message to all subscribers for that channel, typically remote clients such as browsers.
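To make the emitter/listener contract concrete, here is a minimal sketch of what StockPriceEmitter might look like. This is an illustrative assumption, not the tutorial's exact API: the Update fields, the addListener method, and the emit method are hypothetical names, and the real transport (JMS, REST polling, etc.) is abstracted behind the emit call.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical sketch of the emitter/listener pair assumed by StockPriceService.
// Names and fields are illustrative, not the tutorial's exact API.
public class StockPriceEmitter {
    public interface Listener {
        void onUpdates(List<Update> updates);
    }

    public static class Update {
        private final String symbol;
        private final double oldValue;
        private final double newValue;

        public Update(String symbol, double oldValue, double newValue) {
            this.symbol = symbol;
            this.oldValue = oldValue;
            this.newValue = newValue;
        }

        public String getSymbol() { return symbol; }
        public double getOldValue() { return oldValue; }
        public double getNewValue() { return newValue; }
    }

    private final List<Listener> listeners = new CopyOnWriteArrayList<>();

    public void addListener(Listener listener) {
        listeners.add(listener);
    }

    // Called by whatever transport delivers the external events
    // (a JMS listener, a REST poller, a database poller, ...).
    public void emit(List<Update> updates) {
        for (Listener listener : listeners) {
            listener.onUpdates(updates);
        }
    }
}
```

StockPriceService would register itself via addListener at startup, so that each batch of external events reaches its onUpdates(...) method.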
The CometD channel has been made lazy as a way to avoid bombarding the clients with a very frequent update rate (say, higher than 2-4 updates per second).
You will need to decide on the laziness of the channel based on your particular use case.

Related

SAP Enterprise messaging - Add new queues with listeners to existing queues on runtime

I have a use case around SAP Enterprise Messaging (consuming BusinessEvents from S4HC) to make it multitenant. The approach is one queue per tenant, where a particular queue is subscribed to multiple business events of that tenant.
Currently, I have it working/listening for only one queue with the following code. Note that all the events are asynchronous, non-blocking calls with a listener class implemented.
@Bean
public Connection getSession(MessagingServiceJmsConnectionFactory connectionFactory) throws JMSException, InterruptedException {
    Connection connection = connectionFactory.createConnection();
    //connection.start();
    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
    Queue queue = session.createQueue(QUEUE_PREFIX + QUEUE);
    final MessageConsumer messageConsumer = session.createConsumer(queue);
    messageConsumer.setMessageListener(new DefaultMessageListener());
    connection.start();
    Thread.sleep(5000);
    return connection;
}
The approach is to create queues on subscription callbacks through the service manager, and to make the application listen to the new queue (adding it to the existing queues) without stopping/restarting the app.
How can I get the connection factory session and add new queues with the listener to make it dynamic using Spring Boot?
Can you help in this regard?
This code doesn't look like the SAP Cloud SDK; we checked twice and the mentioned classes are not found in our code. We had PoC support for Enterprise Messaging, but deprecated it in favor of the planned release of a library for it by the Cloud Application Programming model (CAP).
Check CAP's section on Event Handlers in Java for more details. As far as I can see from the feature overview table, CAP doesn't yet support multitenancy with the Java library; the Node.js implementation is complete in that sense.
In the SDK we plan to provide some convenience on top of CAP's implementation once it is finalized.
I think you can approach CAP via their support channels.

Combining PubNub Publish/Subscribe with Chat-engine

I want to use PubNub's Java client (publish/subscribe) with a chat-engine Angular client. The only material I could find on how to accomplish this was this page in the wiki, which describes the channel structure. After attempting to subscribe/publish on every combination of their channel structure as synonymous with a direct message, e.g. globalChannel + '#user#' + myUUID + '#write.*', I could not get a chat-engine client and a pub/sub client to communicate. Has anyone been able to do this successfully? Is the channel structure actually different? Are there limitations due to the lack of invitations? I have the clients detecting each other's presence, so I can't imagine I'm far off.
I decided on a workaround. Instead of direct chat, you can create a two-person private channel with chat-engine (say channelName) and subscribe in Java with "chat-engine#chat#private.#" + channelName. This isn't direct messaging, but it should be functionally similar.

WSO2 CEP bidirectional REST API

I'm using wso2 cep 4.1
I created a receiver to catch some JSON data from my source. Then I process this data internally and should return a response with additional data. My response should go out through the same endpoint the data came in to CEP on. It is a classic REST API. Is it possible, and how can I do it?
Or do I need a websocket (websocket-local) for similar purposes?
Hope you are still trying to understand the functionality of WSO2 CEP. Let me explain the basic overview of CEP before addressing your question. The CEP architecture diagram (not reproduced here) shows what is happening under the hood at a high level. I will explain what these components are supposed to do in the context of event processing.
Event receivers :-
Event receivers receive events that are coming to the CEP. WSO2 CEP supports the most common adapter implementations by default. For specific use cases, you can also plug custom adapters. For more information, see Configuring Event Receivers.
Event streams :-
Event streams contain unique sets of attributes of specific types that provide a structure based on which the events processed by the relevant event flow are selected. Event streams are stored as stream definitions in the file system via the data bridge stream definition store.
Event processors :-
Event processor handles actual event processing. It is the core event processing unit of the CEP. It manages different execution plans and processes events based on logic, with the help of different Siddhi queries. Event Processor gets a set of event streams from the Event Stream Manager, processes them using Siddhi engine, and triggers new events on different event streams back to the Event Stream Manager. For more information, see Creating a Standalone Execution Plan.
Event publishers :-
Event publishers publish events to external systems and store data in databases for future analysis. Like the event receivers, this component also has different adapter implementations; the most common ones are available by default in the CEP, and you can implement custom adapters for specific use cases. For more information, see Configuring CEP to Create Alerts.
According to your requirement, you should have an HTTP receiver as well as an HTTP publisher, where the receiver receives a request from a third-party API and hands the message over to the event processors to perform some pre-defined tasks. This may involve several event streams and execution plans. Once processing is done, event publishers can be used to publish the result to the required third-party API, as you pointed out.
Out of the box, CEP provides HTTP receiver and HTTP publisher adapters [1-2] which you can try out. There are some limitations which might not suit your scenario, in which case you would need to implement your own custom HTTP receiver and publisher [3-4] that do what you intend.
Since you need to publish responses to different endpoints, you can achieve this by defining the REST API endpoint, user credentials (if required), HTTP verbs, and any other information required to send a message as meta information in the event stream [5]. You can then read that information from the stream itself and push to the desired third-party API as required.
Do I need a websocket (websocket-local) for similar purposes?
It isn't clear what exactly is to be done here. Please raise another question and ask it there.
[1] https://docs.wso2.com/display/CEP410/HTTP+Event+Receiver
[2] https://docs.wso2.com/display/CEP410/HTTP+Event+Publisher
[3] https://docs.wso2.com/display/CEP410/Building+Custom+Event+Receivers
[4] https://docs.wso2.com/display/CEP410/Building+Custom+Event+Publishers
[5] https://docs.wso2.com/display/CEP410/Understanding+Event+Streams
The feature you are looking for doesn't come OOTB with CEP. However, you can try something similar to the below:
Implement a REST API, probably using Apache CXF, since CXF dependencies are present in WSO2 servers by default. You can follow this guide if you are using a swagger-based approach to develop the REST API.
Within that custom REST implementation, you need to read the HTTP request, send it to CEP (step 3), wait for an output from CEP (step 4), and then send those details back as the HTTP response, inside the method which represents your operation.
To send an event to CEP you can use a WSO2 Event receiver. Create a receiver on the CEP side and then send events to it using the DataPublisher client. Make sure the stream definition you pass to the DataPublisher.publish() method is the same one you set in the CEP receiver, and that the object array you send adheres to that definition. Also, you might need to set truststore and keystore params here.
After publishing your events successfully you need to block the request thread until you receive a response from CEP. You can use a Java object like CountDownLatch for this purpose.
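That blocking step can be sketched with a plain CountDownLatch from the JDK. The PendingRequest class and its method names below are illustrative plumbing, not WSO2 APIs; the real response type would be whatever your event consumer extracts.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Illustrative sketch (not a WSO2 API): one PendingRequest per in-flight HTTP
// request. The request thread publishes its event, then waits here; the event
// consumer thread calls complete() when the CEP result arrives.
public class PendingRequest {
    private final CountDownLatch latch = new CountDownLatch(1);
    private volatile Object result;

    // Called by the event consumer thread when the CEP result arrives.
    public void complete(Object cepResult) {
        result = cepResult;
        latch.countDown();
    }

    // Called by the request thread after publishing; waits up to the timeout.
    // Returns null on timeout, so the caller can answer with an HTTP error.
    public Object awaitResult(long timeout, TimeUnit unit) throws InterruptedException {
        if (latch.await(timeout, unit)) {
            return result;
        }
        return null;
    }
}
```

In practice you would keep a map from a correlation id (sent along with the event) to its PendingRequest, so the consumer can find the right waiter to complete.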
To receive a response you need to consume events through the EventStreamService. For this you need to implement a WSO2EventConsumer and subscribe to the EventStreamService. After successfully subscribing, events coming to the stream id mentioned in your event consumer will be forwarded to the receive method of your consumer. From there you can extract the results, unblock the initial request thread, and return those results. To access the EventStreamService from within your web app you can use the code snippet below.
EventStreamService eventStreamService = (EventStreamService) PrivilegedCarbonContext
        .getThreadLocalCarbonContext()
        .getOSGiService(EventStreamService.class, null);
Hope this helped.

Implementation of server-side responses to long polling via REST API

Say you are designing a REST API over HTTP for a server "room" where subscribing clients want to monitor public events happening to the room (e.g. a new participant joins the room, another one leaves the room, and so on...) by making long poll requests.
What is the best way to implement this from a server side point of view so that the client will not miss any events between consecutive polls? For example, should the server implement a queue of events which need to exist in the queue until all the subscribers have got them?
Are there any tutorials, examples, or some theory on the internet about designing such an API, and the things that should be taken into account from the server perspective?
Very short answer: why not just use Event Store?
Short answer: why not use Event Store as a reference implementation, and adapt their solution to match your implementation constraints?
What is the best way to implement this from a server side point of view so that the client will not miss any events between consecutive polls? For example, should the server implement a queue of events which need to exist in the queue until all the subscribers have got them?
REST by itself offers a few guidelines. There should be no application state stored on the server; the message sent by the client should include any client side state (like current position in the event stream) that the server will need to fulfill the request. The resource identified in the request is an abstraction - so the client can send messages to, for example "the event that comes after event 7", which makes sense even if that next event doesn't exist yet. The uniform interface should be respected, to allow for scaling via caches and the like that are outside of the control of the server. The representation of the state of the resource should be hypermedia, with controls that allow the client to advance after it has consumed the currently available messages.
HTTP throws in a few more specifics. Since there is no tracking of client state on the server, reading from the queue is a safe operation. Therefore, one of the safe HTTP methods (GET, to be precise) should be used for the read. Since GET doesn't actually support content body in the request, the information that the server will need should all be packed into the header of the request.
In other words, the URI is used to specify the current position of the client in the event stream.
Atom Syndication provides a good hypermedia format for event processing - the event stream maps to a feed, events map to entries.
By itself, those pieces give you a big head start on an event processor that conforms to the REST architectural constraints. You just need to bolt long polling onto it.
To get a rough idea at how you might implement long polling on your own, you can take a look at the ticketing demo, written by Michael Barker (maintainer of LMAX Disruptor).
The basic plot in Michael's demo is that a single writer thread is tracking (a) all of the clients currently waiting for an update and (b) the local cache of events. That thread reads a batch of events, identifies which requests need to be notified, responds to each of those requests in turn, and then advances to process the next batch of events.
I tend to think of the local cache of events as a ring buffer (like the disruptor itself, but private to the writer thread). The writer thread knows (from the information in the HTTP request) the position of each client in the event stream. Comparing that position to the current pointer in the ring buffer, each pending request can be classified as:
Far past: The position that the client is seeking has already been evicted from the cache. Redirect the client to a "cold" persistent copy of that location in the stream, where it can follow the hypermedia controls to catch up to the present.
Recent past: The position that the client is seeking is currently available in the cache, so immediately generate a response to the client with the events that are available, and dispatch that response.
Near future: The position that the client is seeking is not available in the cache, but the writer anticipates being able to satisfy that request before the SLA expires. So we park the client until more events arrive.
Far future: The position that the client is seeking is not available in the cache, and we don't anticipate that we will be able to satisfy the request in the allotted time. So we just respond now, and let the client decide what to do.
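The four-way classification can be sketched as a comparison of sequence numbers against the window covered by the ring buffer. All names here are illustrative assumptions: capacity stands for the ring buffer size, nearWindow for how far ahead of the current position the writer expects to reach within the SLA, and next for the sequence number the writer will assign to the next event.

```java
// Illustrative sketch of classifying a pending long-poll request against a
// ring buffer of events, following the four cases described above.
public class RequestClassifier {
    public enum Category { FAR_PAST, RECENT_PAST, NEAR_FUTURE, FAR_FUTURE }

    private final long capacity;   // number of events the ring buffer retains
    private final long nearWindow; // how far ahead we expect to reach within the SLA

    public RequestClassifier(long capacity, long nearWindow) {
        this.capacity = capacity;
        this.nearWindow = nearWindow;
    }

    // clientPosition: the sequence number the client is asking for.
    // next: the sequence number the writer will assign to the next event.
    public Category classify(long clientPosition, long next) {
        long oldest = Math.max(0, next - capacity); // oldest event still cached
        if (clientPosition < oldest) {
            return Category.FAR_PAST;    // evicted: redirect to cold storage
        }
        if (clientPosition < next) {
            return Category.RECENT_PAST; // in cache: respond immediately
        }
        if (clientPosition < next + nearWindow) {
            return Category.NEAR_FUTURE; // park the request until events arrive
        }
        return Category.FAR_FUTURE;      // respond now, let the client decide
    }
}
```

The writer thread would run this check for each parked request after processing a batch, dispatching responses for the first two cases and keeping the third parked.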
(If you get enough polling clients that you need to start scaling out the long polling server, you need to consider the case where those servers get out of sync, and a client gets directed from a fast server to one that has fallen behind. So you'll want to have instrumentation in place that lets you track how often this is happening, so that you can apply the appropriate remedy).
There are also some edge cases to consider: if a very large batch comes in, you may need to evict the events your clients are waiting on before you get a chance to send them.
Simple: have the client pass in the timestamp (or id, or index) of the last message they received.
Requesting GET /rooms/5/messages returns all the messages the server knows about, like
[
    {
        "message": "hello",
        "timestamp": "2016-07-18T18:44:34Z"
    },
    {
        "message": "world",
        "timestamp": "2016-07-18T18:47:16Z"
    }
]
The client then long polls the server with GET /rooms/5/messages?since=2016-07-18T18:47:16Z which returns either all the messages since that time (if there are any) or blocks until the room has a new message.
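The server-side filter behind that ?since= parameter can be sketched as below. The in-memory MessageStore is an illustrative stand-in for real storage, and the blocking itself (holding the connection open when the result is empty) is left to the long-poll handler.

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

// Illustrative sketch: an in-memory store backing GET /rooms/{id}/messages?since=...
public class MessageStore {
    public static class Message {
        final String text;
        final Instant timestamp;

        Message(String text, Instant timestamp) {
            this.text = text;
            this.timestamp = timestamp;
        }
    }

    private final List<Message> messages = new ArrayList<>();

    public synchronized void add(String text, Instant timestamp) {
        messages.add(new Message(text, timestamp));
    }

    // Returns every message strictly newer than 'since'. An empty result tells
    // the handler to hold the connection open until a new message arrives.
    public synchronized List<Message> since(Instant since) {
        return messages.stream()
                .filter(m -> m.timestamp.isAfter(since))
                .collect(Collectors.toList());
    }
}
```

Using the timestamp of the last message received as the cursor means a client never misses a message between polls, as long as timestamps are unique and monotonic; otherwise an id or index is the safer cursor.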
Send a reference number with all the events.
The client will call with the reference number of the latest event received. You block the long-poll request if no event is available, and respond once an event becomes available, with the new reference number.
In case events are already available, return all events generated after the event with the request's reference number.
I strongly recommend using WebSockets; check out socket.io. Long polling is a hack that isn't necessarily desirable and isn't really "supported".
Long polling is not a good idea, specifically when one wants to live-monitor changes that happen on the server side. There are mechanisms where the server sends notifications to clients on changes; this can be achieved by using, as gcoreb already mentioned, Socket.io (Node.js stack) or SignalR (.NET stack).

CometD service vs. broadcast channel

In the article http://www.cometdaily.com/2008/05/15/the-many-shades-of-bayeuxcometd-2/index.html the author describes:
Often with PubSub, developers feel the need to create a channel per user in order to deliver private messages to a client. For example, if a trading system wants to notify a user of completed trades, the temptation is to create a channel like /trades/a_user_id and each user will subscribe to their own channel. This approach works, but is not the most resource sensible way of solving this issue and requires security code to prevent unauthorized clients subscribing to other users channels.
What are the trade-offs between the service and broadcast channels to implement messages for a particular user? I understand the security aspect of the trade-off but what about resource overhead? I don't understand why there would be any more resources used with a broadcast channel than there would be for custom-routed service. If you could explain why one is better over the other for the use-case, rather than a blanket statement of being sensible or not, that could help lead me to a decision.
The article is pretty old, it refers to CometD 1 while we are now at CometD 3.
You may want to check updates on the CometD website and read the CometD 3 documentation.
The concepts behind broadcast vs service channels are still valid for CometD 3.
The server allocates data structures for every channel that is created, whether it is a broadcast or a service channel.
The example in that article compares creating N broadcast channels, one for each user_id, versus creating just one service channel. The former solution obviously uses more resources on the server than the latter, and it is subject to sneak peeking (a client can guess a user_id and subscribe to that channel, thus receiving messages destined for other users).
For this particular case, all the application needs to do is deliver a message to a specific client. For this use case it is better to use a service channel, because it uses fewer resources (the same server-side channel can be used for all users, without the risk that a user receives messages not destined for him/her) and it is more secure.