Vert.x Event Bus to retain messages

I am following the Vert.x SockJS example to transfer data over the SockJS event bus bridge.
The sending code:
eventBus.publish(ebAddress, data);
The consumer code:
var eb = new EventBus("http://localhost:8088/eventbus");
eb.onopen = function () {
    eb.registerHandler("/ebaddress", function (err, msg) {
        var str = "<code>" + msg.body + "</code><br>";
        console.log(str);
    });
};
The first client works fine. However, a second connected client, since it subscribes to the same event bus address, cannot get the most recent data that was already sent to the first client. This isn't an issue if the data is coming in fast, but if the interval between data points is long, the second client will have no data for a long time, until the next data point arrives.
So, is the Vert.x event bus able to retain messages, so that whenever a new client connects it can get the most recent data right away?
I am pretty new to Vert.x, so any comments will be greatly appreciated.

Simple answer: no, the Vert.x EventBus doesn't persist messages. Nor is it able to replay them, for that reason. It is just that: a bus to send events on. After all, when you write element.on("click", function() {}) in JavaScript, you don't usually expect to receive all previous clicks, right?
But, it doesn't mean it's not possible.
In your JavaScript:
eb.onopen = function () {
    // On connect your client asks on a different channel to get some previously stored messages
    eb.send("/replay", {count: 10}, null, function (err, msg) {
        // Populate your code
    });

    // Continue here as usual
    eb.registerHandler("/ebaddress", function (err, msg) {
        // Something happens here
    });
};
Of course, on your server side you'll need to (a rough sketch follows below):
Persist some number of messages, either in memory or in a store of your choice
Listen on this new /replay channel
Use .send() to reply to the specific client with the previous messages
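In Java, assuming a verticle where the SockJS bridge is already set up and permits the /replay address inbound, that could look roughly like the sketch below; the address name, the buffer size of 10 and the JSON shapes are my own choices for illustration, and ebAddress is the same address the question publishes on.
// In-memory replay buffer; imports assumed: java.util.ArrayDeque, java.util.Deque,
// io.vertx.core.json.JsonArray, io.vertx.core.json.JsonObject
Deque<Object> recent = new ArrayDeque<>();
EventBus eventBus = vertx.eventBus();

// Remember the last 10 messages published on the data address
eventBus.consumer(ebAddress, msg -> {
    if (recent.size() >= 10) {
        recent.removeFirst();
    }
    recent.addLast(msg.body());
});

// Answer a replay request with up to 'count' of the stored messages
eventBus.consumer("/replay", msg -> {
    JsonObject request = (JsonObject) msg.body();
    int count = request.getInteger("count", 10);
    JsonArray replay = new JsonArray();
    recent.stream().skip(Math.max(0, recent.size() - count)).forEach(replay::add);
    msg.reply(replay);
});
The client-side eb.send("/replay", {count: 10}, null, callback) from the snippet above would then receive the array of stored messages as the reply body.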

Related

Response not submitted when rxEnd is used in HTTP server

I have a two-verticle server written in Vert.x + reactive extensions. The HTTP server verticle uses the event bus to send requests to the DB verticle. After receiving the response from the DB verticle (through the event bus), I send the response to the HTTP client using rxEnd. However, the client does not seem to receive this response and eventually times out. If I use end() instead, things work fine. I use Postman to test this REST API. Please see below for the code which forwards results from the DB verticle to the client.
routerFactory.addHandlerByOperationId("createChargePoints", routingContext -> {
    RequestParameters params = routingContext.get("parsedParameters");
    RequestParameter body = params.body();
    JsonObject jsonBody = body.getJsonObject();
    vertx.eventBus().rxRequest("dbin", jsonBody)
        .map(message -> {
            System.out.println(message.body());
            return routingContext.response().setStatusCode(200).rxEnd(message.body().toString());
        })
        .subscribe(res -> {
            System.out.println(res);
        }, res -> {
            System.out.println(res);
        });
});
The rxEnd method is a variant of end that returns a Completable. The former is lazy, the latter is not.
In other words, if you invoke rxEnd you have to subscribe to the Completable otherwise nothing happens.
Looking at the code of your snippet, I don't believe using rxEnd is necessary. Indeed, it doesn't seem like you need to know whether the response was sent successfully.
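If you do want to keep rxEnd, one option is to make the Completable part of the chain and subscribe to the whole thing. A minimal sketch, assuming the RxJava 2 flavour of the Vert.x APIs already used in the question:
vertx.eventBus().rxRequest("dbin", jsonBody)
    // fold the lazy Completable returned by rxEnd into the chain,
    // instead of returning it from map() and discarding it
    .flatMapCompletable(message -> routingContext.response()
        .setStatusCode(200)
        .rxEnd(message.body().toString()))
    .subscribe(
        () -> System.out.println("response sent"),
        err -> routingContext.fail(err));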

Is the following code with Vert.x really reactive?

Do I have a wrong understanding of "reactive", or is something wrong in my example?
I wrote a small code sample in Vert.x: in a REST service I read data from MongoDB and return it as JSON.
...........
Router router = Router.router(vertx);
router.route().handler(BodyHandler.create());
router.get("/gilders").handler(this::listAll);
vertx.createHttpServer().requestHandler(router::accept).listen(8080);
}

private void listAll(RoutingContext routingContext) {
    mongoClient.find("gliders", new JsonObject(), results -> {
        List<JsonObject> objects = results.result();
        /* is this non-blocking?!
           mongoClient.find returns immediately, but the REST client only
           gets results after Mongo has delivered all of them
        */
        List<Glider> gilder = objects.stream()
            .map(res -> {
                Glider g = new Glider();
                g.setName(res.getString("name"));
                g.setPrice(res.getString("price"));
                return g;
            })
            .collect(Collectors.toList());
        routingContext.response()
            .putHeader("content-type", "application/json; charset=utf-8")
            .end(Json.encodePrettily(gilder));
    });
}
OK, it's not blocking; I could compute something else while waiting for Mongo.
But somehow I thought "reactive" meant that the REST client would already get the first chunks of the Mongo results even while Mongo has not yet finished finding all of them (HTTP streaming). But like this, the callback is only invoked once Mongo has found all results.
Reactive is not the same as streaming. Reactive is a concept around data flows: your application reacts to events, e.g. data returned from MongoDB. You can implement streaming on top of it by asking the Mongo client to start pumping data as soon as it arrives from the network. (In a blocking API you could also do streaming, by blocking the application until data is available and then passing it one by one to a consumer.)
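As an illustration of the streaming variant, here is a rough sketch, under the assumption of a recent Vert.x MongoClient whose findBatch returns a ReadStream<JsonObject> (older 3.x releases used a handler-based findBatch signature instead); it writes a chunked JSON array to the HTTP client as documents arrive:
private void streamAll(RoutingContext routingContext) {
    HttpServerResponse response = routingContext.response()
        .putHeader("content-type", "application/json; charset=utf-8")
        .setChunked(true); // no Content-Length, so chunks can be written incrementally
    response.write("[");

    ReadStream<JsonObject> stream = mongoClient.findBatch("gliders", new JsonObject());
    AtomicBoolean first = new AtomicBoolean(true); // java.util.concurrent.atomic.AtomicBoolean

    stream.handler(doc -> {
        // each document is forwarded to the HTTP client as soon as Mongo delivers it
        if (!first.getAndSet(false)) {
            response.write(",");
        }
        response.write(doc.encode());
    });
    stream.exceptionHandler(routingContext::fail); // simplified error handling
    stream.endHandler(v -> response.end("]"));
}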

MassTransit Send only

I am implementing a service bus and having a look at MassTransit. My pattern is not Publish/Subscribe but Sender/Receiver, where the Receiver can be offline and come back online later.
Right now I am starting to write my tests to verify that MassTransit successfully delivers the message, using the following code:
bus = ServiceBusFactory.New(sbc =>
{
    sbc.UseMsmq(
        cfg =>
        {
            cfg.Configurator.UseJsonSerializer();
            cfg.Configurator.ReceiveFrom("msmq://localhost/my_queue");
            cfg.VerifyMsmqConfiguration();
        });
});
Then I grab the bus and publish a message like this:
bus.Publish<TMessage>(message);
As far as I can see in MSMQ, two queues are created, and the message appears to be sent, because MassTransit does not raise any error, but I cannot find any message in the queue container.
What am I doing wrong?
Update
Reading the MassTransit newsgroup, I found out that in a Sender/Receiver scenario where the receiver can come online at any later time, the message can be sent using this code:
bus.GetEndpoint(new Uri("msmq://localhost/my_queue")).Send<TMessage>(message);
Again in my scenario I am not writing a Publisher/Subscriber but a Sender/Receiver.
First, to send, you can use a simple EndpointCacheFactory instead of a ServiceBusFactory...
var cache = EndpointCacheFactory.New(x => x.UseMsmq());
From the cache, you can retrieve an endpoint by address:
var endpoint = cache.GetEndpoint("msmq://localhost/queue_name");
Then, you can use the endpoint to send a message:
endpoint.Send(new MyMessage());
To receive, you would create a bus instance as you specified above:
var bus = ServiceBusFactory.New(x =>
{
    x.UseMsmq();
    x.ReceiveFrom("msmq://localhost/queue_name");
    x.Subscribe(s => s.Handler<MyMessage>(msg => {}));
});
Once your receiver process is complete, call Dispose on the IServiceBus instance. Once your publisher is shutting down, call Dispose on the IEndpointCache instance.
Do not dispose of the individual endpoint (IEndpoint) instances; the cache keeps them available for later use until it is disposed.

Sails pubsub how to subscribe to a model instance?

I am struggling to receive pubsub events in my client. The client store (Reflux) gets the data for a project using its id. As I understand it, this automatically subscribes the Sails socket to realtime events (as of version 0.10), but I don't see it happening.
Here's my client store getting data from Sails (this is ES6 syntax):
onLoadProject(id) {
    var url = '/api/projects/' + id;
    io.socket.get(url, (p, jwres) => {
        console.log('loaded project', id);
        this.project = p;
        this.trigger(p);
    });
    io.socket.on("project", function (event) {
        console.log('realtime event', event);
    });
},
Then I created a test "touch" action in my project controller, just to have the modifiedAt field updated.
touch: function(req, res) {
    var id = req.param('id');
    Project.findOne(id)
        .then(function(project) {
            if (!project) throw new Error('No project with id ' + id);
            return Project.update({id: id}, {touched: project.touched + 1});
        })
        .then(function() {
            // this should not be required right?
            return Project.publishUpdate(id);
        })
        .done(function() {
            sails.log('touched ok');
            res.ok();
        }, function(e) {
            sails.log("touch failed", e.message, e.stack);
            res.serverError(e.message);
        });
}
This doesn't trigger any realtime event in my client code. I also added a manual Project.publishUpdate(), but that shouldn't be required, right?
What am I missing?
-------- edit ----------
There was a complication as a result of my model's touched attribute: I had set its type to 'number' instead of 'integer', and the ORM exception wasn't caught because my promise error handling had no catch() part. So the code above works, hurray! But the realtime events are received for every instance of Project.
So let me rephrase my question:
How can I subscribe the client socket to an instance instead of a model? I could check the id on the client side and retrieve the updated instance data, but that seems inefficient, since every client receives a notification about every project even though each client should only care about a single one.
So, to answer my own question: the reason I was getting updates for every instance is simply that at the start of my application I triggered a findAll to get the list of available projects. As a result, my socket got subscribed to all of them.
The workaround is to either initiate that call via plain HTTP instead of a socket, or use a separate controller action for retrieving the list (therefore bypassing the blueprint route). I picked the second option, because in my case it's silly to fetch all resource data prior to selecting one.
Here's the function I used to list all resources, where I filter out the parts of the data that are not relevant for browsing the list initially.
list: function(req, res) {
    Project.find()
        .then(function(projects) {
            var keys = [
                'id',
                'name',
                'createdAt',
                'updatedAt',
                'author',
                'description',
            ];
            return projects.map(function(project) {
                return _.pick(project, keys);
            });
        })
        .catch(function(e) {
            res.serverError(e.message);
        })
        .done(function(list) {
            res.json(list);
        }, function(e) {
            res.serverError(e.message);
        });
},
Note that when the user loads a resource (a project in my case) and then switches to another resource, the client will be subscribed to both resources. I believe preventing this requires a request to an action where you unsubscribe the socket explicitly. In my case this isn't such a problem, but I plan to solve it later.
I hope this is helpful to someone.

What's Socket.IO's "sending and getting data (acknowledgements)"?

This example from the Socket.IO website, "Sending and getting data (acknowledgements)", is confusing me:
Client:
<script>
    socket.on('connect', function () {
        socket.emit('ferret', 'tobi', function (data) {
            console.log(data); // data will be 'woot'
        });
    });
</script>
Server:
io.sockets.on('connection', function (socket) {
    socket.on('ferret', function (name, fn) {
        fn('woot');
    });
});
I'm actually reproducing this example. What I can't understand is:
Q1: How does this work in the first place? Does the server (when executing fn) automagically emit the result to the client? Does Socket.IO bind fn to the third parameter of the client's emit?
Q2: What's the (unused) name parameter in the server's anonymous function (name, fn)? Logging it shows that it's undefined; why?
Found out by myself; correct me if I'm wrong:
name (what an unlucky name in the official documentation!!!) is actually the data sent by the client.
fn corresponds to the third parameter of the client code, and when executed (from the server) it automagically (?) sends the data back to the client. Amazing!
Indeed; it gets a lot clearer if you rename "fn" to "callback", as seen here: Acknowledgment for socket.io custom event. That callback is never executed on the server side; the server simply sends the data passed to the callback (in this case, the string "woot") back to the client as an acknowledgement. The callback is then executed on the client using the data sent by the server.
To send data from the client to the server:
socket.emit("Idofhtmltag", value);
To receive data from the server, add this on the client:
socket.on("Idofhtmltag", function (msg) { });