Vert.x multiple worker instances processing the same message - vert.x

I have a simple Vert.x worker verticle with 4 instances for scaling, as defined below. When multiple requests come in, I was expecting each worker instance to process an individual request concurrently (4 requests at a time).
Vertx vertx = Vertx.vertx();
DeploymentOptions deploymentOptions = new DeploymentOptions()
.setWorker(true)
.setInstances(4);
vertx.deployVerticle(MailVertical.class.getName(), deploymentOptions);
Some code plumbs the incoming mail message into the publishing method:
// This is executed once per incoming message
this.vertx.eventBus().publish("anAddress", messageString);
Verticle consumer code to log the incoming mail message:
public class MailVertical extends AbstractVerticle {
    private static final Logger logger = LoggerFactory.getLogger(MailVertical.class);

    @Override
    public void start(Promise<Void> future) {
        logger.info("Welcome to Vertx: MailVertical.");
        vertx.eventBus().consumer("anAddress", message -> {
            String msg = message.body().toString();
            for (int i = 0; i < 50; i++) {
                logger.info(msg);
            }
            try {
                updateStatusInDB(msg);
            } catch (SQLException e) {
                e.printStackTrace();
            }
        });
    }
}
However, I am observing that each request is processed by all 4 instances somewhat concurrently, i.e. if 1 request comes in, a total of 4 processing events occur = 200 log messages.
...
...
[vert.x-worker-thread-7] INFO com.vertx.mailproject.MailVertical - {"msg":"sample message for app-2123x mail notification","appname":"app-1","msgid":"64fd684b-45a8-48c7-9526-4606d6adc311"}
[vert.x-worker-thread-6] INFO com.vertx.mailproject.MailVertical - Msg: sample message for app-2123x mail notification updated to SENT in DB.
[vert.x-worker-thread-7] INFO com.vertx.mailproject.MailVertical - Msg: sample message for app-2123x mail notification updated to SENT in DB.
[vert.x-worker-thread-4] INFO com.vertx.mailproject.MailVertical - Msg: sample message for app-2123x mail notification updated to SENT in DB.
[vert.x-worker-thread-5] INFO com.vertx.mailproject.MailVertical - Msg: sample message for app-2123x mail notification updated to SENT in DB.
Any suggestion on what I am doing wrong here? Or is this expected?
vertx-core = 4.2.3

Yes, this is indeed the correct behavior. You are starting 4 instances of the worker verticle and then registering all of them as consumers on the same address, "anAddress".
The publish/subscribe pattern is being used here. With the publish() method, all of the registered consumers/handlers will receive the event; see Publish / subscribe messaging in the Vert.x docs.
If you want only one of the workers to receive the event, use point-to-point and request-response messaging. Basically, replace publish() with send(), as sketched below.
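For example, a minimal sketch of the publishing side with point-to-point delivery, reusing the same address and messageString from your code; with send(), Vert.x dispatches each message to just one of the registered consumers (roughly round-robin) instead of all of them:

// Executed once per incoming message; only one consumer instance receives it
this.vertx.eventBus().send("anAddress", messageString);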
But looking at your code, I would suggest using executeBlocking in a standard verticle instead of a worker verticle; a minimal sketch follows.
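A minimal sketch of that approach, assuming updateStatusInDB is the same blocking JDBC method as in the question; the event-loop verticle offloads only the blocking call to the worker pool:

public class MailVertical extends AbstractVerticle {
    @Override
    public void start(Promise<Void> future) {
        vertx.eventBus().consumer("anAddress", message -> {
            String msg = message.body().toString();
            // Run the blocking JDBC call on the worker pool without blocking the event loop
            vertx.executeBlocking(promise -> {
                try {
                    updateStatusInDB(msg);
                    promise.complete();
                } catch (SQLException e) {
                    promise.fail(e);
                }
            }, res -> {
                if (res.failed()) {
                    res.cause().printStackTrace();
                }
            });
        });
        future.complete();
    }
}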

Related

Timing of `postMessage` inside service worker `fetch` event handler

I have a simple service worker which, on fetch, posts a message to the client:
// main.js
navigator.serviceWorker.register("./service-worker.js");
console.log("client: addEventListener message");
navigator.serviceWorker.addEventListener("message", event => {
console.log("client: message received", event.data);
});
<script src="main.js"></script>
// service-worker.js
self.addEventListener("fetch", event => {
console.log("service worker: fetch event");
event.waitUntil(
(async () => {
const clientId =
event.resultingClientId !== ""
? event.resultingClientId
: event.clientId;
const client = await self.clients.get(clientId);
console.log("service worker: postMessage");
client.postMessage("test");
})()
);
});
When I look at the console logs, it's clear that the message event listener is registered after the message is posted by the service worker. Nonetheless, the event listener still receives the message.
I suspect this is because messages are scheduled asynchronously:
postMessage() schedules the MessageEvent to be dispatched only after all pending execution contexts have finished. For example, if postMessage() is invoked in an event handler, that event handler will run to completion, as will any remaining handlers for that same event, before the MessageEvent is dispatched.
https://developer.mozilla.org/en-US/docs/Web/API/Window/postMessage#Notes
However, I'm not sure what this means in this specific example. For example, when the fetch event handler has run to completion, is the client JavaScript guaranteed to have run, and therefore the message event listener will be registered?
I have a larger app that is doing something similar to the above, but the client JavaScript is run slightly later in the page load, so I would like to know exactly when the event listener must be registered in order to avoid race conditions and guarantee the message posted by the service worker will be received.
By default, all messages sent from a page's controlling service worker to the page (using Client.postMessage()) are queued while the page is loading, and get dispatched once the page's HTML document has been loaded and parsed (i.e. after the DOMContentLoaded event fires). It's possible to start dispatching these messages earlier by calling ServiceWorkerContainer.startMessages(), for example if you've invoked a message handler using EventTarget.addEventListener() before the page has finished loading, but want to start processing the messages right away.
https://developer.mozilla.org/en-US/docs/Web/API/ServiceWorkerContainer/startMessages

Kafka Producer : Handle Exception in Async Send with Callback

I need to catch the exceptions in case of an async send to Kafka. The Kafka producer API comes with the function send(ProducerRecord record, Callback callback). But when I tested this against the following two scenarios:
Kafka Broker Down
Topic not pre created
The callbacks are not getting called. Rather, I am getting warnings in the code for the unsuccessful send (as shown below).
Questions:
So are the callbacks called only for specific exceptions?
When does the Kafka client try to connect to the Kafka broker during an async send: on every batch send, or periodically?
[Screenshot: Kafka producer warning log]
Note: I am also using a linger.ms setting of 25 sec to batch-send my records.
public class ProducerDemo {
    static KafkaProducer<String, String> producer;

    public static void main(String[] args) throws IOException {
        final Logger logger = LoggerFactory.getLogger(ProducerDemo.class);
        Properties properties = new Properties();
        properties.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
        properties.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        properties.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        properties.setProperty(ProducerConfig.ACKS_CONFIG, "1");
        properties.setProperty(ProducerConfig.LINGER_MS_CONFIG, "30000");
        producer = new KafkaProducer<String, String>(properties);
        String topic = "first_topic";
        for (int i = 0; i < 5; i++) {
            String value = "hello world " + Integer.toString(i);
            String key = "id_" + Integer.toString(i);
            ProducerRecord<String, String> record = new ProducerRecord<String, String>(topic, key, value);
            producer.send(record, new Callback() {
                @Override
                public void onCompletion(RecordMetadata recordMetadata, Exception e) {
                    // Executed every time a record is successfully sent or an exception is thrown
                    if (e == null) {
                        // No exception
                    } else {
                        // Exception handling
                    }
                }
            });
        }
        producer.close();
    }
}
You will get those warnings for a non-existing topic as a resilience mechanism provided by KafkaProducer. If you wait a bit longer (60 seconds by default), the callback will eventually be called.
So, when something goes wrong and the async send is not successful, it will eventually fail with a failed future and/or a callback with an exception.
If you are not running it transactionally, it can still mean that some messages from the batch have found their way to the broker, while others haven't.
It will most certainly be a problem if you need a blocking-style acknowledgement to the upstream system (like an HTTP ingestion interface, etc.) for every message that is sent to Kafka. The only way to do that is by blocking on each message with the future's get(), as described in the documentation and sketched below.
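A minimal sketch of that blocking style, reusing the producer, record and logger from the question's code (the log messages are only illustrative):

try {
    // send() returns a Future<RecordMetadata>; get() blocks until the broker
    // acknowledges the record or the delivery ultimately fails
    RecordMetadata metadata = producer.send(record).get();
    logger.info("Record persisted to partition {} at offset {}", metadata.partition(), metadata.offset());
} catch (ExecutionException e) {
    // The send failed; e.getCause() carries the underlying Kafka exception
    logger.error("Send failed", e.getCause());
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}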
In general, I've noticed a lot of questions related to KafkaProducer delivery semantics and guarantees. It can definitely be documented better.
One more thing, since you mentioned linger.ms:
Note that records that arrive close together in time will generally batch together even with linger.ms=0, so under heavy load batching will occur regardless of the linger configuration.
For the first question, here is the answer.
As per the Apache Kafka documentation, you can capture the exceptions listed at the link below in the onCompletion method when you implement the Callback interface:
https://kafka.apache.org/25/javadoc/org/apache/kafka/clients/producer/Callback.html
For the second question, the combination of the properties below controls when the records are sent and, as far as I understand, it's the same for synchronous and asynchronous calls; a small configuration sketch follows the links.
linger.ms
max.block.ms
https://kafka.apache.org/documentation/#linger.ms
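For illustration, this is how those two properties would be set on the same Properties object used in the question (the values are only examples, not recommendations):

// Maximum time send() itself may block, e.g. while waiting for metadata or buffer space
properties.setProperty(ProducerConfig.MAX_BLOCK_MS_CONFIG, "60000");
// How long the producer waits for more records to batch before sending a request
properties.setProperty(ProducerConfig.LINGER_MS_CONFIG, "25000");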
So are the callbacks called only for specific exceptions?
Yes, that's how it works. From documentation (2.5.0):
* Fully non-blocking usage can make use of the {@link Callback} parameter to provide a callback that
* will be invoked when the request is complete.
Notice the important part: when the request is complete, which means that the producer must have accepted the record and sent the ProduceRequest to the Kafka broker. Without digging too deep into the internals, this means that broker metadata must be present and the partition must exist.
When it comes to a formal specification, you'd need to take a good look at send()'s Javadoc and possibly at KafkaProducer's implementation of the doSend method. There you're going to see that multiple exceptions can be thrown in the submitting call itself (instead of returning a future and invoking the callback), e.g. (a sketch of catching these follows the list):
if broker metadata is not available within the given timeout,
if the data could not be serialized,
if the serialized form was too large, etc.
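A minimal sketch of guarding the submitting call itself, reusing producer, record and logger from the question (the exception types are the ones documented for KafkaProducer.send(); the handling is only illustrative):

try {
    producer.send(record, (recordMetadata, exception) -> {
        // Invoked once the request completes: either the broker acknowledged the record
        // or delivery ultimately failed
        if (exception != null) {
            logger.error("Asynchronous send failed", exception);
        }
    });
} catch (SerializationException e) {
    // Thrown synchronously when the key or value cannot be serialized
    logger.error("Could not serialize record", e);
} catch (TimeoutException e) {
    // Thrown synchronously when metadata cannot be fetched or the buffer is full within max.block.ms
    logger.error("Blocked too long while submitting the record", e);
} catch (KafkaException e) {
    // Any other Kafka error raised before the record was accepted into the buffer
    logger.error("Send rejected", e);
}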

How does a Netty server know when the client is disconnected?

I am making a Netty server that satisfies the following conditions:
The server needs to do transaction process A when it receives a packet from its client.
After finishing the transaction, if the client is still connected, the server sends a return message back to the client. If not, it does some rollback process B.
But my problem is that when I send the reply back to the client, the server does not know whether the client is still connected or not.
I've tried the following code to check the connection before sending messages. However, it always succeeds even though the client has already closed its socket. It only fails when the client process is forcibly killed (e.g. Ctrl+C).
final ChannelFuture cf = inboundChannel.writeAndFlush(resCodec);
cf.addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) {
        if (future.isSuccess()) {
            inboundChannel.close();
        } else {
            try {
                // do Rollback Process B here
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
});
I thought this was because of the TCP protocol. If the client disconnects gracefully, a FIN segment is sent to the server, so the server somehow considers writeAndFlush successful even when it isn't.
So I've tried the following code too, but it has the same result (always returns true):
if (inboundChannel.isActive()) {
    inboundChannel.writeAndFlush(msg);
} else {
    // do RollBack B
}
// Similar code using inboundChannel.isOpen(), inboundChannel.isWritable()
Neither the channelInactive event nor a 'Connection reset by peer' exception occurs in my case.
This is the relevant part of the Netty test client code that I used:
public void channelActive(ChannelHandlerContext ctx) {
ctx.writeAndFlush(message).addListener(ChannelFutureListener.CLOSE);
}
How can I notice disconnection at the time that I want to reply?
Maybe you should override the method below and see if control reaches it when the channel is closed; a slightly fuller sketch follows the snippet.
@Override
public void channelInactive(ChannelHandlerContext ctx) throws Exception {
    // write cleanup code
}
I don't think it's possible to track whether the client is connected or not in Netty, because there is an abstraction between Netty and the client.

How to return JMS answer in REST

So I am writing a REST service in which I want to return a JMS answer from a queue.
Everything looks like this:
@Controller
@RequestMapping("/rest")
public class UserService {

    @Autowired
    JMSProducer jmsproducer;

    @RequestMapping(value = "/users", method = RequestMethod.GET)
    public String getUsers() {
        return jmsproducer.send();
    }
}
The send() method sends a message to a queue consumed by the BACKEND (the backend has the connection to the database). The backend then sends a message with all users back on a queue to my REST service, and my JMSProducer class receives the sent message in the onMessage method of MessageListener (using MessageProducer and MessageConsumer).
The question is: how can I receive all these users from the queue in the getUsers method, given that the onMessage function has a void return type?
Please help me, I don't have any idea how to do that in a good way.
I am using ActiveMQ.
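For reference, a minimal sketch of one common way to do the request/reply flow described above synchronously, using a temporary reply queue and a blocking receive; the class name, the "backend.requests" queue, the broker URL and the 5-second timeout are assumptions for illustration only:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TemporaryQueue;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class UserRequestReplyClient {

    public String requestUsers() throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // Temporary queue on which the backend should send its reply
            TemporaryQueue replyQueue = session.createTemporaryQueue();

            TextMessage request = session.createTextMessage("getUsers");
            request.setJMSReplyTo(replyQueue);

            MessageProducer producer = session.createProducer(session.createQueue("backend.requests"));
            producer.send(request);

            // Block until the backend replies, or give up after 5 seconds
            MessageConsumer consumer = session.createConsumer(replyQueue);
            Message reply = consumer.receive(5000);
            return (reply instanceof TextMessage) ? ((TextMessage) reply).getText() : null;
        } finally {
            connection.close();
        }
    }
}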

MassTransit Send only

I am implementing a service bus and having a look at MassTransit. My pattern is not publish/subscribe but sender/receiver, where the receiver can be offline and come back online later.
Right now I am starting to write my tests to verify that MassTransit successfully delivers the message, using the following code:
bus = ServiceBusFactory.New(sbc =>
{
sbc.UseMsmq(
cfg =>
{
cfg.Configurator.UseJsonSerializer();
cfg.Configurator.ReceiveFrom("msmq://localhost/my_queue");
cfg.VerifyMsmqConfiguration();
});
});
Then I grab the bus and publish a message like this:
bus.Publish<TMessage>(message);
As far as I can see in MSMQ, two queues are created and the message seems to be sent, since MassTransit does not raise any error, but I cannot find any message in the queue container.
What am I doing wrong?
Update
Reading the MassTransit newsgroup, I found out that in a sender/receiver scenario where the receiver can come back online at any later time, the message can be sent using this code:
bus.GetEndpoint(new Uri("msmq://localhost/my_queue")).Send<TMessage>(message);
Again, in my scenario I am not writing a publisher/subscriber but a sender/receiver.
First, to send, you can use a simple EndpointCacheFactory instead of a ServiceBusFactory...
var cache = EndpointCacheFactory.New(x => x.UseMsmq());
From the cache, you can retrieve an endpoint by address:
var endpoint = cache.GetEndpoint("msmq://localhost/queue_name");
Then, you can use the endpoint to send a message:
endpoint.Send(new MyMessage());
To receive, you would create a bus instance as you specified above:
var bus = ServiceBusFactory.New(x =>
{
    x.UseMsmq();
    x.ReceiveFrom("msmq://localhost/queue_name");
    x.Subscribe(s => s.Handler<MyMessage>(msg => { }));
});
Once your receiver process is complete, call Dispose on the IServiceBus instance. Once your publisher is shutting down, call Dispose on the IEndpointCache instance.
Do not dispose of the individual endpoint (IEndpoint) instances; the cache keeps them available for later use until it is disposed.