Camel: keep sending messages to a queue via JMS every minute - Scala

I am currently learning Camel and I am not sure whether we can send messages to an ActiveMQ queue/topic from Camel at a fixed interval.
Currently I have code in Scala that looks up the database, creates a message, and sends it to the queue every minute. Can we do this in Camel?
Camel has a timer component, but it does not produce the message itself. I was thinking of something like this:
from("timer://foo?fixedRate=true&period=60000")
.to("customLogic")
.to("jms:myqueue")
The timer will fire every minute.
The custom logic will do the database lookup and create a message.
Finally the message is sent to the JMS queue.
I am very new to Camel, so some code would be really helpful, thanks.
Can you please point me to how I can create this customLogic step so that it creates a message and passes it to the next .to("jms:myqueue")? Is there some class that I need to inherit/implement which will pass the message along?

I guess your question is about how to hook custom Java logic into your Camel route to prepare the JMS message payload.
The JMS component will use the exchange body as the JMS message payload, so you need to set the body in your custom logic. There are several ways to do this.
You can create a custom processor by implementing the org.apache.camel.Processor interface and explicitly setting the new body on the exchange:
Processor customLogicProcessor = new Processor() {
    @Override
    public void process(Exchange exchange) {
        // do your db lookup, etc.
        String myMessage = ...
        exchange.getIn().setBody(myMessage);
    }
};

from("timer://foo?fixedRate=true&period=60000")
    .process(customLogicProcessor)
    .to("jms:myqueue");
A more elegant option is to make use of Camel's bean binding:
public class CustomLogic {
    @Handler
    public String doStuff() {
        // do your db lookup, etc.
        String myMessage = ...
        return myMessage;
    }
}

[...]

CustomLogic customLogicBean = new CustomLogic();

from("timer://foo?fixedRate=true&period=60000")
    .bean(customLogicBean)
    .to("jms:myqueue");
The @Handler annotation tells Camel which method it should call. If there's only one qualifying method, you don't need that annotation.
Camel makes the result of the method call the new body of the exchange, which is then passed to the JMS component.
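For completeness, the jms:myqueue endpoint only works once a JMS component is registered with the CamelContext. Here is a minimal standalone sketch, assuming Camel 2.x with camel-jms and a local ActiveMQ broker (the broker URL and the main class are illustrative, not part of your setup):

import javax.jms.ConnectionFactory;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jms.JmsComponent;
import org.apache.camel.impl.DefaultCamelContext;

public class TimerToJmsMain {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();

        // Register the "jms" component against a local ActiveMQ broker (URL is an example)
        ConnectionFactory connectionFactory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        context.addComponent("jms", JmsComponent.jmsComponentAutoAcknowledge(connectionFactory));

        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("timer://foo?fixedRate=true&period=60000")
                    .bean(new CustomLogic())   // the bean shown above
                    .to("jms:myqueue");
            }
        });

        context.start();
        Thread.sleep(120000); // keep the JVM alive long enough for the timer to fire a couple of times
        context.stop();
    }
}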

Any pointers for creating Kafka endpoint support?

I have a fairly immediate need to support Kafka integration testing using the Citrus Framework. I was thinking of taking the existing jms module as an example/framework and using Spring Kafka. Any pointers or gotchas that I should be aware of? I am willing, assuming I'm successful, to donate the module back to the project.
Here is an example of how you can use a Kafka Camel component with Citrus:
@Bean
public CamelContext camelKafkaAdapterContext() throws Exception {
    SpringCamelContext context = new SpringCamelContext();
    context.addRouteDefinition(new RouteDefinition()
        .from("kafka:localhost:9092?topic=test&zookeeperHost=localhost&zookeeperPort=2181&serializerClass=kafka.serializer.StringEncoder")
        .to("log:com.consol.citrus.camel?level=DEBUG")
        .to("seda:kafka-buffer"));
    return context;
}

@Bean
public CamelEndpoint kafkaEndpoint(CamelContext camelContext) {
    CamelEndpoint endpoint = new CamelEndpoint();
    endpoint.getEndpointConfiguration().setCamelContext(camelContext);
    endpoint.getEndpointConfiguration().setEndpointUri("seda:kafka-buffer");
    return endpoint;
}
You first define a CamelContext that will be started when you run any test with Citrus. Once it is up, the Camel route reads from the configured topic and sends all messages into a buffer, seda:kafka-buffer (seda is used only as an example). You can then use a Citrus CamelEndpoint to read messages from that buffer inside any test.
receive(action -> action.endpoint(kafkaEndpoint)
    .messageType(MessageType.JSON)
    .payload(...));
Note: this is just an example I have assembled. I haven't tested this exact setup, but it should work once you configure the CamelContext correctly.
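To make that concrete, the receive() call shown above would sit inside a normal Citrus test class, roughly like this (Citrus 2.x Java DSL; the class name and payload are illustrative):

import org.springframework.beans.factory.annotation.Autowired;
import org.testng.annotations.Test;

import com.consol.citrus.annotations.CitrusTest;
import com.consol.citrus.camel.endpoint.CamelEndpoint;
import com.consol.citrus.dsl.testng.TestNGCitrusTestRunner;
import com.consol.citrus.message.MessageType;

public class KafkaBufferIT extends TestNGCitrusTestRunner {

    @Autowired
    private CamelEndpoint kafkaEndpoint; // the bean defined above

    @Test
    @CitrusTest
    public void shouldConsumeMessageFromKafkaBuffer() {
        // Reads the next message that the Camel route has placed into the seda buffer
        receive(action -> action.endpoint(kafkaEndpoint)
                .messageType(MessageType.JSON)
                .payload("{\"status\":\"ok\"}")); // illustrative payload
    }
}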

Notifying entities when entity state changes in Lagom

Assuming a Record entity, CreateRecord command and a RecordCreated event. I want to invoke some command on one or more other entities (in different modules). What would be the suggested approach to achieve this?
I was thinking about sending a message from the ReadSide handler of the Record entity, which could be received by corresponding service(s), which would convert it to a command and invoke on an entity.
EDIT, thanks @ignasi35: According to the Message Broker API, publishing the messages should be possible with this code:
AggregateEventTag<RecordEvent> RECORD_EVENT_TAG = AggregateEventTag.of(RecordEvent.class);

public Topic<RecordMessage> recordsTopic() {
    return TopicProducer.singleStreamWithOffset(offset -> {
        return persistentEntityRegistry
            .eventStream(RECORD_EVENT_TAG, offset)
            .map(this::convertEventToRecordMessage);
    });
}
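For reference, convertEventToRecordMessage just maps each event/offset pair to a message/offset pair; roughly like this (the RecordMessage constructor and the getRecordId() accessor are assumptions about my model):

// Pair is akka.japi.Pair, Offset is com.lightbend.lagom.javadsl.persistence.Offset
private Pair<RecordMessage, Offset> convertEventToRecordMessage(Pair<RecordEvent, Offset> pair) {
    RecordEvent event = pair.first();
    // build the topic message from the persisted event; the fields are illustrative
    RecordMessage message = new RecordMessage(event.getRecordId());
    return Pair.create(message, pair.second());
}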
Records are created, and corresponding events are persisted, but no messages are received by the following consumer:
@Singleton
public class RecordsConsumer {

    @Inject
    public RecordsConsumer(RecordService recordService) {
        recordService.recordsTopic().subscribe()
            .atLeastOnce(Flow.fromFunction(this::displayMessage));
    }
}
What am I doing wrong?
Finally solved it.
I ended up with a singleton service listening to RecordCreated events from PersistentEntityRegistry.eventStream. The service converts them to RecordMessage and exposes them as a Topic (see my question above).
The issue with not receiving any events from the exposed Topic was a missing dependency on kafka-broker (strangely there was no warning about this; the topic was simply not exposed). In my case it was:
<dependency>
    <groupId>com.lightbend.lagom</groupId>
    <artifactId>lagom-javadsl-kafka-broker_2.12</artifactId>
</dependency>

Writing Verticles that perform CRUD operations on a file

I'm new to Vert.x and I am trying to implement a small REST API that stores its data in JSON files on the local file system.
So far I have managed to implement the REST API, since Vert.x is very well documented on that part.
What I'm currently looking for are examples of how to build data access objects in Vert.x. How can I implement a verticle that performs CRUD operations on a text file containing JSON?
Can you provide me any examples? Any hints?
UPDATE 1:
By CRUD operations on a file I'm thinking of the following. Imagine there is a REST resource called Records exposed on the path /api/v1/user/:userid/records/.
In my verticle that starts my HTTP server I have the following routes.
router.get('/api/user/:userid/records').handler(this.&handleGetRecords)
router.post('/api/user/:userid/records').handler(this.&handleNewRecord)
The handler methods handleGetRecords and handleNewRecord are sending a message using the Vertx event bus.
request.bodyHandler({ b ->
    def userid = request.getParam('userid')
    logger.info "Reading record for user {}", userid
    vertx.eventBus().send(GET_TIME_ENTRIES.name(), "read time records", [headers: [userId: userid]], { reply ->
        // This handler will be called for every request
        def response = routingContext.response()
        if (reply.succeeded()) {
            response.putHeader("content-type", "text/json")
            // Write to the response and end it
            response.end(reply.result().body())
        } else {
            logger.warn("Reply failed {}", reply.failed())
            response.statusCode = 500
            response.putHeader("content-type", "text/plain")
            response.end('That did not work out well')
        }
    })
})
Then there is another verticle that consumes these messages (GET_TIME_ENTRIES or CREATE_TIME_ENTRY). I think of this consumer verticle as a data access object for Records. This verticle can read a file for the given :userid that contains all of that user's records. The verticle is able to:
add a record
read all records
read a specific record
update a record
delete one or all records
Here is the example of reading all records.
vertx.eventBus().consumer(GET_TIME_ENTRIES.name(), { message ->
    String userId = message.headers().get('userId')
    String absPath = "${this.source}/${userId}.json" as String
    vertx.fileSystem().readFile(absPath, { result ->
        if (result.succeeded()) {
            logger.info("About to read from user file {}", absPath)
            def jsonObject = new JsonObject(result.result().toString())
            message.reply(jsonObject.getJsonArray('records').toString())
        } else {
            logger.warn("User file {} does not exist", absPath)
            message.fail(404, "user ${userId} does not exist")
        }
    })
})
What I am trying to achieve is to read the file like I did above and deserialise the JSON into POJOs (e.g. a List<Record>). This seems much more convenient than working with Vert.x's JsonObject. I don't want to manipulate the JsonObject instance.
First of all, your approach using EventBus is fine, in my opinion. It may be a bit slower, because EventBus will serialize/deserialize your objects, but it gives you a very good decoupling.
Example of another approach you can see here:
https://github.com/aesteve/vertx-feeds/blob/master/src/main/java/io/vertx/examples/feeds/dao/RedisDAO.java
Note how every method receives a handler as its last argument:
public void getMaxDate(String feedHash, Handler<Date> handler) {
More coupled, but also more efficient.
And for a more classic and straightforward approach, you can see the official examples:
https://github.com/aokolnychyi/vertx-example/blob/master/src/main/java/com/aokolnychyi/vertx/example/dao/MongoDbTodoDaoImpl.java
You can see that here the DAO is pretty much synchronous, but since the handlers are still async, it's fine anyway.
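As a rough, untested sketch combining those two ideas for your use case (RecordDao is a made-up name, and Record stands for your own Jackson-friendly POJO):

import java.util.List;
import java.util.stream.Collectors;

import io.vertx.core.AsyncResult;
import io.vertx.core.Future;
import io.vertx.core.Handler;
import io.vertx.core.Vertx;
import io.vertx.core.json.Json;
import io.vertx.core.json.JsonObject;

// Handler-based DAO: reads the per-user JSON file and maps the "records"
// array to POJOs so callers never have to touch JsonObject directly.
public class RecordDao {

    private final Vertx vertx;
    private final String source;

    public RecordDao(Vertx vertx, String source) {
        this.vertx = vertx;
        this.source = source;
    }

    public void findAll(String userId, Handler<AsyncResult<List<Record>>> handler) {
        String path = source + "/" + userId + ".json";
        vertx.fileSystem().readFile(path, read -> {
            if (read.succeeded()) {
                JsonObject json = read.result().toJsonObject();
                // Decode each element of the "records" array into a Record POJO
                List<Record> records = json.getJsonArray("records").stream()
                        .map(o -> Json.decodeValue(((JsonObject) o).encode(), Record.class))
                        .collect(Collectors.toList());
                handler.handle(Future.succeededFuture(records));
            } else {
                handler.handle(Future.failedFuture(read.cause()));
            }
        });
    }
}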
I guess the following link will help you out; it is a good example of Vert.x CRUD operations:
Vert.x student CRUD operations using Hikari

JAX-WS SoapHandler with large messages: OutOfMemoryError

Using JAX-WS 2, I see an issue that others have spoken about as well. The issue is that if a SOAP message is received inside a handler, and that SOAP message is large - whether due to inline SOAP body elements that happen to have lots of content, or due to MTOM attachments - then it is dangerously easy to get an OutOfMemoryError.
The reason is that the call to getMessage() seems to set off a chain of events that involve reading the entire SOAP message on the wire, and creating an object (or objects) representing what was on the wire.
For example:
...
public boolean handleMessage(SOAPMessageContext context)
{
    // for a large message, this will cause an OutOfMemoryError
    System.out.println( context.getMessage().countAttachments() );
    ...
My question is: is there a known mechanism/workaround for dealing with this? Specifically, it would be nice to access the SOAP part in a SOAP message without forcing the attachments (if MTOM for example) to also be vacuumed up.
For those who run their app on JBoss 6 & 7 (with Apache CXF)... I was able to work around the problem by implementing my handler with the LogicalHandler interface instead of SOAPHandler.
In this case your handleMessage() method gets a LogicalMessageContext (instead of a SOAPMessageContext) as its argument, and that context has no issues with the context.getMessage() call.
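For reference, the skeleton of such a handler looks like this (standard JAX-WS API; MyLogicalHandler is just a placeholder name):

import javax.xml.ws.handler.LogicalHandler;
import javax.xml.ws.handler.LogicalMessageContext;
import javax.xml.ws.handler.MessageContext;

public class MyLogicalHandler implements LogicalHandler<LogicalMessageContext> {

    @Override
    public boolean handleMessage(LogicalMessageContext context) {
        // Work with message properties here; avoid pulling the full payload
        // (e.g. via context.getMessage().getPayload()) for large messages.
        return true; // continue the handler chain
    }

    @Override
    public boolean handleFault(LogicalMessageContext context) {
        return true;
    }

    @Override
    public void close(MessageContext context) {
        // nothing to clean up
    }
}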
There's actually a JAX-WS RI (aka Metro) specific solution for this which is very effective.
See https://javaee.github.io/metro/doc/user-guide/ch02.html#efficient-handlers-in-jax-ws-ri. Unfortunately that link is now broken, but you can find it on the Wayback Machine. I'll give the highlights below:
The Metro folks back in 2007 introduced an additional handler type, MessageHandler<MessageHandlerContext>, which is proprietary to Metro. It is far more efficient than SOAPHandler<SOAPMessageContext> because it doesn't build an in-memory DOM representation of the message.
Here's the crucial text from the original blog article:
MessageHandler:
Utilizing the extensible Handler framework provided by JAX-WS
Specification and the better Message abstraction in RI, we introduced
a new handler called MessageHandler to extend your Web Service
applications. MessageHandler is similar to SOAPHandler, except that
implementations of it gets access to MessageHandlerContext (an
extension of MessageContext). Through MessageHandlerContext one can
access the Message and process it using the Message API. As I put in
the title of the blog, this handler lets you work on Message, which
provides efficient ways to access/process the message not just a DOM
based message. The programming model of the handlers is same and the
Message handlers can be mixed with standard Logical and SOAP handlers.
I have added a sample in JAX-WS RI 2.1.3 showing the use of
MessageHandler to log messages and here is a snippet from the sample:
public class LoggingHandler implements MessageHandler<MessageHandlerContext> {

    public boolean handleMessage(MessageHandlerContext mhc) {
        Message m = mhc.getMessage().copy();
        XMLStreamWriter writer = XMLStreamWriterFactory.create(System.out);
        try {
            m.writeTo(writer);
        } catch (XMLStreamException e) {
            e.printStackTrace();
            return false;
        }
        return true;
    }

    public boolean handleFault(MessageHandlerContext mhc) {
        .....
        return true;
    }

    public void close(MessageContext messageContext) { }

    public Set getHeaders() {
        return null;
    }
}
(end quote from 2007 blog post)
You can find a full example in the Metro GitHub repo.
What JAX-WS implementation runtime are you using? If there's a way to do this using the runtime built into WebSphere I'm certain there's a way to do this cleanly in other runtimes like Axis2 (proper), Apache CXF, and Metro/RI.
I am using another way to reduce the memory cost, which is the message accessor.
Instead of using context.getMessage(), I changed it to this:
Object accessor = context.get("jaxws.message.accessor");
if (accessor != null) {
    baosInString = accessor.toString();
}
Based on advice from the IBM website: http://www-01.ibm.com/support/docview.wss?uid=swg1PM21151

Spring DefaultMessageListenerContainer/SimpleMessageListenerContainer (JMS/AMQP) Annotation configuration

So I'm working on a project where many teams are using common services and following a common architecture. One of the services in use is messaging, currently JMS with ActiveMQ. Pretty much all teams are required to follow a strict set of rules for creating and sending messages, namely, everything is pub-subscribe and the messages that are sent are somewhat like the following:
public class WorkDTO {
    private String type;
    private String subtype;
    private String category;
    private String jsonPayload; // converted custom Java object
}
The 'jsonPayload' comes from a base class that all teams extend from so it has common attributes.
So basically in JMS, everyone is always sending the same kind of message, but to different ActiveMQ Topics. When the message (WorkDTO) is sent via JMS, first it is converted into a JSON object then it is sent in a TextMessage.
Whenever a team wishes to create a subscriber for a topic, they create a DefaultMessageListenerContainer and configure it appropriately to receive messages (We are using Java-based Spring configuration). Basically every DefaultMessageListenerContainer that a team defines is pretty much the same except for maybe the destination from which to receive messages and the message handler.
I was wondering how anyone would approach further abstracting the messaging configuration via annotations in such a case? Meaning, since everyone is pretty much required to follow the same requirements, could something like the following be useful:
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface Listener {
    String destination();
    boolean durable() default false;
    long receiveTimeout() default -1; // -1 use JMS default
    String defaultListenerMethod() default "handleMessage";
    // more config details here
}
@Listener(destination="PX.Foo", durable=true)
public class FooListener {

    private ObjectMapper mapper = new ObjectMapper(); // converts JSON Strings to Java classes

    public void handleMessage(TextMessage message) throws JMSException, IOException {
        String text = message.getText();
        WorkDTO dto = mapper.readValue(text, WorkDTO.class);
        String payload = dto.getJsonPayload();
        String type = dto.getType();
        String subType = dto.getSubtype();
        String category = dto.getCategory();
    }
}
Of course I left out the part about how to configure the DefaultMessageListenerContainer from the @Listener annotation. I started looking into a BeanFactoryPostProcessor to create the necessary classes and add them to the application context, but I don't know how to do all of that.
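To make the idea concrete, something along these lines is roughly what I picture (a rough, untested sketch; ListenerAnnotationPostProcessor is a made-up name, the ConnectionFactory is assumed to be injectable, durable subscriptions would additionally need a client ID / subscription name, and proxied beans are not handled):

import java.io.IOException;

import javax.jms.ConnectionFactory;
import javax.jms.JMSException;

import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.jms.listener.DefaultMessageListenerContainer;
import org.springframework.jms.listener.adapter.MessageListenerAdapter;

// Detects @Listener-annotated beans and spins up one container per bean.
// Container shutdown and error handling are omitted for brevity.
public class ListenerAnnotationPostProcessor implements BeanPostProcessor {

    private final ConnectionFactory connectionFactory;

    public ListenerAnnotationPostProcessor(ConnectionFactory connectionFactory) {
        this.connectionFactory = connectionFactory;
    }

    @Override
    public Object postProcessBeforeInitialization(Object bean, String beanName) {
        return bean;
    }

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) {
        Listener listener = bean.getClass().getAnnotation(Listener.class); // the @Listener defined above
        if (listener != null) {
            MessageListenerAdapter adapter = new MessageListenerAdapter(bean);
            adapter.setDefaultListenerMethod(listener.defaultListenerMethod());
            adapter.setMessageConverter(null); // pass the raw TextMessage to handleMessage(TextMessage)

            DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
            container.setConnectionFactory(connectionFactory);
            container.setDestinationName(listener.destination());
            container.setPubSubDomain(true); // everything is pub-sub in our setup
            container.setSubscriptionDurable(listener.durable());
            if (listener.receiveTimeout() >= 0) {
                container.setReceiveTimeout(listener.receiveTimeout());
            }
            container.setMessageListener(adapter);
            container.afterPropertiesSet();
            container.start();
        }
        return bean;
    }
}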
The reason I ask the question is that we are switching to AMQP/RabbitMQ from JMS/ActiveMQ and would like to abstract the messaging configuration even further by use of annotations. I know AMQP is not like JMS so the configuration details would be slightly different. I don't believe we will be switching from AMQP to something else.
Here teams only need to know the name of the destination and whether they want to make their subscription durable.
This is just something that popped into my head just recently. Any thoughts on this?
I don't want to do something overly complicated though so the other alternative is to create a convenience method that returns a pre-configured DefaultMessageListenerContainer given a destination and a message handler:
@Configuration
public class MyConfig {

    @Autowired
    private MessageConfigFactory configFactory;

    @Bean
    public DefaultMessageListenerContainer fooListenerContainer() {
        return configFactory.getListenerContainer("PX.Foo", new FooListener(), true);
    }
}

class MessageConfigFactory {
    public DefaultMessageListenerContainer getListenerContainer(String destination, Object listener, boolean durable) {
        DefaultMessageListenerContainer l = new DefaultMessageListenerContainer();
        // configuration details here
        return l;
    }
}