I have a two-verticle server written in Vert.x + reactive extensions. The HTTP server verticle uses the event bus to send requests to the DB verticle. After receiving the response from the DB verticle (through the event bus), I send the response to the HTTP client using rxEnd. However, the client does not seem to receive this response and eventually times out. If I use end() instead, things work fine. I use Postman to test this REST API. Please see below for the code which forwards results from the DB verticle to the client.
routerFactory.addHandlerByOperationId("createChargePoints", routingContext -> {
    RequestParameters params = routingContext.get("parsedParameters");
    RequestParameter body = params.body();
    JsonObject jsonBody = body.getJsonObject();
    vertx.eventBus().rxRequest("dbin", jsonBody)
        .map(message -> {
            System.out.println(message.body());
            return routingContext.response().setStatusCode(200).rxEnd(message.body().toString());
        })
        .subscribe(res -> {
            System.out.println(res);
        }, res -> {
            System.out.println(res);
        });
});
The rxEnd method is a variant of end that returns a Completable. The former is lazy, the latter is not.
In other words, if you invoke rxEnd you have to subscribe to the Completable, otherwise nothing happens.
Looking at your snippet, I don't believe using rxEnd is necessary. Indeed, it doesn't seem like you need to know whether the response was sent successfully.
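If you did want to keep rxEnd, a minimal sketch of the wiring, assuming the RxJava 2 bindings (io.vertx.reactivex), could look like this: flatMapCompletable keeps the lazy Completable in the chain, and the final subscribe triggers it.
routerFactory.addHandlerByOperationId("createChargePoints", routingContext -> {
    RequestParameters params = routingContext.get("parsedParameters");
    JsonObject jsonBody = params.body().getJsonObject();
    vertx.eventBus().rxRequest("dbin", jsonBody)
        // keep the lazy Completable in the chain instead of discarding it in map()
        .flatMapCompletable(message ->
            routingContext.response()
                .setStatusCode(200)
                .rxEnd(message.body().toString()))
        // nothing is written to the client until this subscribe() call
        .subscribe(
            () -> System.out.println("response sent"),
            err -> routingContext.fail(err));
});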
I'm looking to have a REST endpoint respond with a success/failure response while dynamically accepting a topic as a query param, i.e. https://myendpoint/publishToKafka?topic=myDynamicTopic. In Quarkus with SmallRye Reactive Messaging, the code would look something like the snippet below, wrapping the payload with OutgoingKafkaRecordMetadata:
@Channel("test")
Emitter<byte[]> kafkaEmitter;

@POST
@Path("/publishToKafka")
public CompletionStage<Void> publishRecord(@QueryParam("topic") String topic, byte[] payload) {
    kafkaEmitter.send(Message.of(payload).addMetadata(OutgoingKafkaRecordMetadata.<String>builder()
            .withKey("my-key")
            .withTopic("myDynamicTopic")
            .build()));
}
From the Quarkus documentation: "If the endpoint does not return a CompletionStage, the HTTP response may be written before the message is sent to Kafka, and so failures won’t be reported to the user." The example there describes this process when you send a payload directly (i.e. emitter.send(payload), which returns a CompletionStage, whereas emitter.send(message) returns void), but that requires configuring the topic in advance. Is it possible to specify metadata with a Message and still respond to the calling client with a success/failure response? (I don't mind if it's with Emitter and CompletionStage or MutinyEmitter and Uni.)
Any advice or suggestions would be appreciated.
Because you use a Message (as you need to specify the topic), you need something a bit more convoluted:
@Channel("test")
Emitter<byte[]> kafkaEmitter;

@POST
@Path("/publishToKafka")
public CompletionStage<Void> publishRecord(@QueryParam("topic") String topic, byte[] payload) {
    CompletableFuture<Void> future = new CompletableFuture<>();
    Message<byte[]> message = Message.of(payload).addMetadata(OutgoingKafkaRecordMetadata.<String>builder()
            .withKey("my-key")
            .withTopic(topic) // use the topic passed as a query parameter
            .build());
    message = message
            .withAck(() -> {
                future.complete(null);
                return CompletableFuture.completedFuture(null);
            })
            .withNack(t -> {
                future.completeExceptionally(t);
                return CompletableFuture.completedFuture(null);
            });
    kafkaEmitter.send(message);
    return future;
}
In this snippet, I also attach the ack and nack handlers, called when the message is either acknowledged (accepted by the broker) or rejected (something went wrong).
These callbacks report to future, a CompletableFuture created in the method. This is the object to return, as it does what you want: indicate the outcome.
I know the callbacks are slightly convoluted. This is mainly due to the spec: we have to return CompletableFuture.completedFuture(...) to acknowledge that the nack process itself was handled successfully. If we returned future instead (which we have completed exceptionally with future.completeExceptionally(t)), it would be interpreted as a failure during the nack process. That would basically be the equivalent of a throw inside a catch block in the imperative world.
Fortunately, an easier version will be available soonish (no worries, we won't break).
What is the recommended way in Vert.x to write an asynchronous request handler?
In this service, request processing typically involves calling the DB, calling external services, etc. However, I do not want to block the request-handling thread. What is the recommended way to achieve this using Vert.x? In a typical asynchronous processing chain, I would use the request-handling thread to emit a message to the message bus with the request object. Another handler would pick up this message and do some processing, such as checking the request params. That handler could then emit a new message to the bus, to be picked up by the next handler, which would make a remote call. That handler would emit a new message with the result of the call, to be picked up by the next handler, which would do error checking, and so on. The final handler would be responsible for creating the response and sending it to the client.
How can one create a similar pipeline using Vert.x?
Everything, including the request handlers of HttpServer, is asynchronous, isn't it?
var server = vertx.createHttpServer(HttpServerOptions())
server.requestHandler { req ->
    req.setExpectMultipart(true) // for handling forms
    var totalBuffer = Buffer.buffer()
    req.handler { buff -> totalBuffer.appendBuffer(buff) }
        .endHandler { // the body has now been fully read
            var formAttributes = req.formAttributes()
            req.response().putHeader("Content-type", "text/html")
            req.response().end("Hello HTTP!")
        }
    // the above is so common that Vert.x provides: bodyHandler { totalBuff -> .. }
}.listen(8080, "127.0.0.1", { res -> if (res.succeeded()) ... })
You just need to write to (and end) req.response() in the final handler of your pipeline.
For a more stream-like implementation (i.e., not callback-based), you may use the Vert.x Rx/ReactiveStreams APIs. E.g., you may use the Vert.x Web Client for making requests, possibly via its Rx-ified API.
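For instance, a sketch of such a pipeline with the RxJava 2 bindings and the Rx-ified Web Client could look like the following (the "validate.request" address and the external URL are invented for the example):
WebClient client = WebClient.create(vertx); // io.vertx.reactivex.ext.web.client.WebClient

router.get("/process").handler(ctx ->
    // step 1: ask another verticle to validate the request over the event bus
    vertx.eventBus().<JsonObject>rxRequest("validate.request",
            new JsonObject().put("id", ctx.request().getParam("id")))
        // step 2: call an external service with the validated data
        .flatMap(validated -> client
            .getAbs("https://example.org/api/" + validated.body().getString("id"))
            .rxSend())
        // step 3: build the response; any error in the chain ends up in onError
        .subscribe(
            remote -> ctx.response()
                .putHeader("content-type", "application/json")
                .end(remote.bodyAsString()),
            err -> ctx.fail(err)));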
I am seeing a timeout in the browser when the server-side service ends in a failed result. Everything works fine if the service call succeeds, but it seems as though the browser never receives a response if the call fails.
My service passes a result handler to a DAO containing the following code:
final SQLConnection conn = ar.result();
conn.updateWithParams(INSERT_SQL, params, insertAsyncResult -> {
    if (insertAsyncResult.failed()) {
        conn.close();
        resultHandler.handle(ServiceException.fail(1, "TODO"));
    } else {
        resultHandler.handle(Future.succeededFuture());
    }
});
I'm not sure where to go from here. How do I debug what the framework is sending back to the client?
The problem was that I needed to register a ServiceExceptionMessageCodec in an intermediate Verticle, one that was sitting between the browser and the Verticle that was performing the database operation.
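For anyone who hits the same problem, the registration could look roughly like this in the intermediate verticle (a sketch, assuming the standard io.vertx.serviceproxy classes):
// in the intermediate verticle's start() method
try {
    vertx.eventBus().registerDefaultCodec(ServiceException.class,
        new ServiceExceptionMessageCodec());
} catch (IllegalStateException ignore) {
    // the codec was already registered elsewhere
}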
Is there a Java SockJS client for Vert.x available?
Similar to the TCP/IP bridge, but based on SockJS.
The reason is that we want a unified protocol stack connecting clients to Vert.x. For JavaScript we can use vertx3-eventbus-client, which works great. We are now looking for a similar solution for Java.
There isn't one yet (it's work in progress). However, you can write a basic client yourself using the Vert.x HttpClient:
open a websocket
send pings periodically to prevent the connection from being closed
register a handler
listen for messages
Here's an example:
client.websocket(HTTP_PORT, HTTP_HOST, "/eventbus/websocket", ws -> {
    // Send pings periodically to avoid the websocket connection being closed
    vertx.setPeriodic(5000, id -> {
        JsonObject ping = new JsonObject().put("type", "ping");
        ws.writeFrame(io.vertx.core.http.WebSocketFrame.textFrame(ping.encode(), true));
    });
    // Register
    JsonObject msg = new JsonObject().put("type", "register").put("address", "my-address");
    ws.writeFrame(io.vertx.core.http.WebSocketFrame.textFrame(msg.encode(), true));
    ws.handler(buff -> {
        JsonObject json = new JsonObject(buff.toString()).getJsonObject("body");
        // Do stuff with the body
    });
});
If you need to work with different addresses, then your handler will have to inspect the whole JSON object, not just get the body.
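For example (a sketch only, with a second, hypothetical address), the handler could dispatch on the address field of the bridge frame:
ws.handler(buff -> {
    JsonObject frame = new JsonObject(buff.toString());
    String address = frame.getString("address");
    JsonObject body = frame.getJsonObject("body");
    if ("my-address".equals(address)) {
        // handle messages sent to my-address
    } else if ("other-address".equals(address)) {
        // handle messages sent to the other address
    }
});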
Do I have a wrong understanding of "reactive", or is something wrong in my example?
I wrote a small code sample in Vert.x: in a REST service I read data from MongoDB and return it as JSON.
...........
    Router router = Router.router(vertx);
    router.route().handler(BodyHandler.create());
    router.get("/gilders").handler(this::listAll);
    vertx.createHttpServer().requestHandler(router::accept).listen(8080);
}

private void listAll(RoutingContext routingContext) {
    mongoClient.find("gliders", new JsonObject(), results -> {
        List<JsonObject> objects = results.result();
        /* Is this non-blocking?
           mongoClient.find returns immediately, but the REST client only
           gets the results after mongo has delivered all of them. */
        List<Glider> gliders = objects.stream()
            .map(res -> {
                Glider g = new Glider();
                g.setName(res.getString("name"));
                g.setPrice(res.getString("price"));
                return g;
            })
            .collect(Collectors.toList());
        routingContext.response()
            .putHeader("content-type", "application/json; charset=utf-8")
            .end(Json.encodePrettily(gliders));
    });
}
OK, it's not blocking; I could compute something else while waiting for mongo.
But somehow I thought "reactive" meant that the REST client would already receive the first chunks of the mongo results while mongo is still busy finding the rest (HTTP streaming). As it is, the callback is only invoked once mongo has found all the results.
Reactive is not the same as streaming. Reactive is a concept around data flows: your application reacts to events, e.g. data returned from MongoDB. You can implement streaming on top of it by asking the mongo client to start pumping data as soon as it arrives from the network. By contrast, in a blocking API you could implement streaming by blocking the application until data arrives and then passing it one by one to a consumer.
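To illustrate the streaming variant, here is a sketch of the handler rewritten to push documents to the HTTP response as they arrive, assuming a Vert.x version in which MongoClient.findBatch returns a ReadStream<JsonObject>; the output is newline-delimited JSON rather than a single array, to keep the example short:
private void listAllStreaming(RoutingContext routingContext) {
    HttpServerResponse response = routingContext.response()
        .putHeader("content-type", "application/json; charset=utf-8")
        .setChunked(true); // length unknown up front, so write chunks

    ReadStream<JsonObject> cursor = mongoClient.findBatch("gliders", new JsonObject());
    cursor.handler(doc -> response.write(doc.encode() + "\n")) // one document per chunk
        .exceptionHandler(routingContext::fail)
        .endHandler(v -> response.end());
}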