Vert.x event bus not communicating in a single JVM

I have two verticles, as below.
The first verticle just listens on the address "test" and replies to messages:
public class FirstVerticle extends AbstractVerticle {
    @Override
    public void start() {
        Logger logger = LogManager.getLogger(FirstVerticle.class);
        vertx.eventBus().consumer("test", message -> {
            logger.info("message received " + message.headers());
            message.reply("hi!!!!");
        });
    }
}
The second verticle just sends a message to the address "test":
public class SecondVerticle extends AbstractVerticle {
    @Override
    public void start() throws Exception {
        Logger logger = LogManager.getLogger(SecondVerticle.class);
        vertx.eventBus().request("test", "hey there", handler -> {
            if (handler.failed())
                logger.error("Failed to get data" + handler.cause());
            else
                logger.info("response " + handler.result().headers());
        });
    }
}
The two verticles are deployed using a common main class:
Vertx.clusteredVertx(new VertxOptions().setHAEnabled(true), vertx ->
    vertx.result().deployVerticle(verticleName, new DeploymentOptions().setHa(true))
);
When the verticles run as separate programs, deployed in different JVMs, they can communicate with each other over the event bus. But when both verticles are deployed at once using the common main class, it does not work and I get the error below:
Failed to get data(TIMEOUT,-1) Timed out after waiting 30000(ms) for a reply. address:
__vertx.reply.9da86cc6-f689-47d5-a5b4-bceafbce254a, repliedAddress: test
Any help is highly appreciated.
Thanks in advance.

The Vert.x event bus can communicate within a single JVM.
It was my own mistake: I had configured an event bus interceptor but did not complete the chain by calling context.next().
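For reference, a minimal sketch of the failure mode (the header added here is made up): an outbound interceptor must call context.next(), otherwise deliveries, including replies, are never passed on and the sender times out.

vertx.eventBus().addOutboundInterceptor(context -> {
    // inspect or enrich the outgoing message, e.g. add a header
    context.message().headers().add("traced", "true");
    // without this call the delivery chain stops here and the
    // requester eventually fails with (TIMEOUT,-1)
    context.next();
});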

Related

Spring data MongoDB change stream with multiple application instances

I have a Spring Boot application with Spring Data MongoDB, where I connect to a Mongo change stream to save the changes to an audit collection. My application runs multiple instances (currently 2) and will be scaled up to n instances when the load increases. When records are created in the original collection ("my-collection"), the listeners are triggered in all running instances and create duplicate records. The following is my setup.
build.gradle
…
// spring data mongodb version 3.1.5
implementation 'org.springframework.boot:spring-boot-starter-data-mongodb'
…
Listener config
@Configuration
@Slf4j
public class MongoChangeStreamListenerConfig {

    @Bean
    MessageListenerContainer changeStreamListenerContainer(
            MongoTemplate template,
            MyEntityAuditListener auditListener,
            ErrorHandler errorHandler) {
        MessageListenerContainer messageListenerContainer =
            new MongoStreamListenerContainer(template, errorHandler);
        ChangeStreamRequest<MyEntity> request =
            ChangeStreamRequest.builder(auditListener)
                .collection("my-collection")
                .filter(newAggregation(match(where("operationType").in("insert", "update", "replace"))))
                .fullDocumentLookup(FullDocument.UPDATE_LOOKUP)
                .build();
        messageListenerContainer.register(request, MyEntity.class, errorHandler);
        log.info("mongo stream listener is registered");
        return messageListenerContainer;
    }

    @Bean
    ErrorHandler getLoggingErrorHandler() {
        return new ErrorHandler() {
            @Override
            public void handleError(Throwable throwable) {
                log.error("error in creating audit records", throwable);
            }
        };
    }
}
Listener container
public class MongoStreamListenerContainer extends DefaultMessageListenerContainer {

    public MongoStreamListenerContainer(MongoTemplate template, ErrorHandler errorHandler) {
        super(template, Executors.newFixedThreadPool(15), errorHandler);
    }

    @Override
    public boolean isAutoStartup() {
        return true;
    }
}
ChangeListener
/**
 * This class listens to the MongoDB change stream and processes changes. onMessage is
 * triggered when a record is added, updated, or replaced in MongoDB.
 */
@Component
@Slf4j
@RequiredArgsConstructor
public class MyEntityAuditListener
        implements MessageListener<ChangeStreamDocument<Document>, MyEntity> {

    @Override
    public void onMessage(Message<ChangeStreamDocument<Document>, MyEntity> message) {
        var update = message.getBody();
        log.info("db change event received");
        if (update != null) {
            log.info("creating audit entries for id {}", update.getId());
            // This executes in all the instances, creating duplicate records
        }
    }
}
Is there a way to restrict the processing to one instance at a given time and share the load between nodes? It would be really nice to know if there is a Spring Data MongoDB configuration to control this flow.
I have also checked the following Stack Overflow post, but I am not sure how to apply it with Spring Data:
Mongo Change Streams running multiple times (kind of): Node app running multiple instances
Any help or tip to resolve this issue is highly appreciated. Thank you very much in advance.

Vertx - stop method in verticle is not guaranteed

If you run the following code multiple times you will see the inconsistency: sometimes 3 lines are displayed, sometimes only 2 (the missing one is "Successfully stopped MyVerticle"). Why is the stop() method not always called?
public class Main {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        vertx.rxDeployVerticle(new MyVerticle()).subscribe();
        Runtime.getRuntime().addShutdownHook(
            new Thread(() -> {
                //vertx.deploymentIDs().forEach(deploymentId -> vertx.undeploy(deploymentId));
                vertx.close(result -> System.out.println("Result" + result));
                System.out.println("Successfully stopped Vertx");
            })
        );
    }
}
class MyVerticle extends AbstractVerticle {
    @Override
    public void start(Future<Void> startFuture) {
        System.out.println("Successfully started MyVerticle");
        startFuture.complete();
    }

    @Override
    public void stop(Future<Void> stopFuture) {
        System.out.println("Successfully stopped MyVerticle");
        stopFuture.complete();
    }
}
The stop() method is invoked when Vert.x undeploys a verticle.
When terminating your application, Vert.x will attempt to undeploy the verticles as well, but it is a race between the event loop still running and your application shutting down.
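One way to make the shutdown deterministic is to block the shutdown hook until vertx.close() has completed, for example with a CountDownLatch (a sketch; requires java.util.concurrent.CountDownLatch and TimeUnit):

Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    CountDownLatch closed = new CountDownLatch(1);
    vertx.close(result -> {
        System.out.println("Result" + result);
        closed.countDown();
    });
    try {
        // keep the JVM alive until undeploy (and stop()) have run
        closed.await(10, TimeUnit.SECONDS);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
    System.out.println("Successfully stopped Vertx");
}));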

Spring cloud stream + spring retry, How to add recovery callback and disable logic that send to DLQ?

I'm using Spring Cloud Stream with the RabbitMQ binder.
In my @StreamListener I want to apply retry logic for specific exceptions using a RetryTemplate. After retries are exhausted, or a non-retriable error is thrown, I would like a recovery callback that saves a new record with an error message to my Postgres DB and finishes with the message (moves on to the next one).
Here is what I have so far:
@StreamListener(Sink.INPUT)
public void saveUser(User user) {
    User saved = userService.saveUser(user); // could throw exceptions
    log.info(">>>>>>User is created successfully: {}", saved);
}

@StreamRetryTemplate
public RetryTemplate myRetryTemplate() {
    RetryTemplate retryTemplate = new RetryTemplate();
    retryTemplate.setBackOffPolicy(new ExponentialBackOffPolicy());
    Map<Class<? extends Throwable>, Boolean> retryableExceptions = new HashMap<>();
    retryableExceptions.put(ConnectionException.class, true);
    retryTemplate.registerListener(new RetryListener() {
        @Override
        public <T, E extends Throwable> boolean open(RetryContext context,
                RetryCallback<T, E> callback) {
            return true;
        }

        @Override
        public <T, E extends Throwable> void close(RetryContext context, RetryCallback<T, E> callback,
                Throwable throwable) {
            // could add recovery logic here, like saving to the DB why a certain user was not saved
            log.info("retries exhausted");
        }

        @Override
        public <T, E extends Throwable> void onError(RetryContext context,
                RetryCallback<T, E> callback, Throwable throwable) {
            log.error("Error on retry", throwable);
        }
    });
    retryTemplate.setRetryPolicy(
        new SimpleRetryPolicy(properties.getRetriesCount(), retryableExceptions, true));
    return retryTemplate;
}
From the properties I only have these (no DLQ configuration at all):
spring.cloud.stream.bindings.input.destination = user-topic
spring.cloud.stream.bindings.input.group = user-consumer
And after retries are exhausted I get this log:
2020-06-01 20:05:58.674 INFO 18524 --- [idge-consumer-1] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [localhost:56722]
2020-06-01 20:05:58.685 INFO 18524 --- [idge-consumer-1] o.s.a.r.c.CachingConnectionFactory : Created new connection: rabbitConnectionFactory.publisher#319c51b0:0/SimpleConnection@2a060201 [delegate=amqp://guest@127.0.0.1:56722/, localPort= 50728]
2020-06-01 20:05:58.697 INFO 18524 --- [idge-consumer-1] c.e.i.o.b.c.RetryConfiguration : retry finish
2020-06-01 20:05:58.702 ERROR 18524 --- [127.0.0.1:56722] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no exchange 'DLX' in vhost '/', class-id=60, method-id=40)
After the RetryListener close method is triggered, I can see that the listener tries to connect to a DLX, presumably to publish an error message. I don't want it to do that, nor do I want to see this error in the log each time.
So my questions are:
1) Where do I add a RecoveryCallback for my RetryTemplate? I could put my recovery logic (saving the error to the DB) in the RetryListener close method, but there should definitely be a more appropriate way to do that.
2) How do I configure the RabbitMQ binder not to send messages to a DLQ; maybe I could override some method? Currently, after retries are exhausted (or a non-retriable error occurs), the listener tries to send a message to the DLX and logs an error because it can't find it. I don't need any messages sent to a DLQ in my application; I only need to save them to the DB.
There is currently no mechanism to provision a custom recovery callback.
Set republishToDlq to false (it used to default to false). The default was changed to true, which is wrong when autoBindDlq is false (the default); I will open an issue for that.
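With the Rabbit binder that is a consumer property on the binding, set alongside the existing properties (a sketch matching the input binding above):

spring.cloud.stream.rabbit.bindings.input.consumer.republish-to-dlq = false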
Then, when retries are exhausted, the exception will be thrown back to the container; you can use a ListenerContainerCustomizer to add a custom ErrorHandler.
However, the data you get there will be a ListenerExecutionFailedException with the raw (unconverted) Spring AMQP Message in its failedMessage property, not your User object.
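A minimal sketch of such a customizer (auditService is a made-up bean standing in for the Postgres save; whether the message is requeued depends on the exception thrown from the handler):

@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer> containerCustomizer(AuditService auditService) {
    return (container, destinationName, group) -> container.setErrorHandler(t -> {
        // invoked after retries are exhausted (or for a non-retriable error)
        auditService.saveFailure(t); // hypothetical: persist the failure to Postgres
        // reject without requeue so the message is not redelivered forever
        throw new AmqpRejectAndDontRequeueException("failed after retries", t);
    });
}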
EDIT
You can add a listener to the binding's error channel...
@SpringBootApplication
@EnableBinding(Sink.class)
public class So62137618Application {

    public static void main(String[] args) {
        SpringApplication.run(So62137618Application.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void listen(String in) {
        System.out.println(in);
        throw new RuntimeException("test");
    }

    @ServiceActivator(inputChannel = "user-topic.user-consumer.errors")
    public void errors(ErrorMessage errorMessage) {
        // the payload is the ListenerExecutionFailedException mentioned above,
        // carrying the raw AMQP message in its failedMessage property
        ListenerExecutionFailedException ex = (ListenerExecutionFailedException) errorMessage.getPayload();
        System.out.println("Retries exhausted for " + new String(ex.getFailedMessage().getBody()));
    }
}

How to deploy a verticle on a Web server/Application server?

I'm just beginning to learn Vert.x and how to code verticles. I wonder if it makes any sense to deploy a verticle from within an application server or web server like Tomcat. For example:
public class HelloVerticle extends AbstractVerticle {
    private final Logger logger = LoggerFactory.getLogger(HelloVerticle.class);
    private long counter = 1;

    @Override
    public void start() {
        vertx.setPeriodic(5000, id -> {
            logger.info("tick");
        });
        vertx.createHttpServer()
            .requestHandler(req -> {
                logger.info("Request #{} from {}", counter++, req.remoteAddress().host());
                req.response().end("Hello!");
            })
            .listen(9080);
        logger.info("Open http://localhost:9080/");
    }

    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        vertx.deployVerticle(new HelloVerticle());
    }
}
Obviously the main method needs to be replaced by some ContextListener or other trigger provided by the application server. Does this make any sense, or is Vert.x not meant to be used in this context?
Thanks
Using Vert.x as a verticle inside a Tomcat app doesn't make much sense from my point of view, because it defeats the whole point of componentization.
On the other hand, you might want to simply connect to the event bus to send/publish/receive messages, and that is fairly easy to achieve.
I did it for a Grails (Spring Boot-based) project and put the Vert.x stuff inside a service like:

class VertxService {
    Vertx vertx

    @PostConstruct
    void init() {
        def options = [:]
        Vertx.clusteredVertx(options) { res ->
            if (res.succeeded())
                vertx = res.result()
            else
                System.exit(-1)
        }
    }

    void publish(addr, msg) { vertx.eventBus().publish(addr, msg) }
    //...
}
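In plain Java (for example a Spring @Service running inside Tomcat), the same pattern looks roughly like this (a sketch; the class name is made up and error handling is minimal):

@Service
public class VertxBridgeService {

    private volatile Vertx vertx;

    @PostConstruct
    void init() {
        // join the Vert.x cluster so the event bus spans JVMs
        Vertx.clusteredVertx(new VertxOptions(), res -> {
            if (res.succeeded()) {
                vertx = res.result();
            } else {
                res.cause().printStackTrace();
            }
        });
    }

    public void publish(String address, Object message) {
        vertx.eventBus().publish(address, message);
    }
}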

Creating an Esper long running process or service

I'd like to create a long-running Esper engine process, but I'm not sure of Esper's threading model nor the model I should implement to do this. Naively I tried the following:
public class EsperTest {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        //EPServiceProvider epService = EPServiceProviderManager.getDefaultProvider();
        EPServiceProvider epService = EPServiceProviderManager.getProvider("CoreEngine");
        epService.addServiceStateListener(new EPServiceStateListener() {
            @Override
            public void onEPServiceDestroyRequested(EPServiceProvider epsp) {
                System.out.println("Service destroyed");
            }

            @Override
            public void onEPServiceInitialized(EPServiceProvider epsp) {
                System.out.println("System initialised");
            }
        });
        epService.initialize();
    }
}
But the code appears to execute to the end of the main() method and then the JVM exits.
Referring to the Esper documentation, section 14.7 p456:
In the default configuration, each engine instance maintains a single timer thread (internal timer)
providing for time or schedule-based processing within the engine. The default resolution at which
the internal timer operates is 100 milliseconds. The internal timer thread can be disabled and
applications can instead send external time events to an engine instance to perform timer or
scheduled processing at the resolution required by an application.
Consequently I thought that by creating an engine instance ("CoreEngine") at least one (timer) thread would be created and, assuming it is not a daemon thread, the main() method would not complete; but this appears not to be the case.
Do I have to implement my own infinite loop in main(), or is there a configuration I can give Esper that will allow it to run 'forever'?
The timer thread is a daemon thread.
Instead of a loop, use a latch like this:
// in main(), which must declare "throws InterruptedException"
final CountDownLatch shutdownLatch = new CountDownLatch(1);
Runtime.getRuntime().addShutdownHook(new Thread() {
    @Override
    public void run() {
        shutdownLatch.countDown();
    }
});
shutdownLatch.await(); // blocks the main thread until the JVM begins shutting down