Notifying entities when entity state changes in Lagom - persistence

Assume a Record entity, a CreateRecord command, and a RecordCreated event. I want to invoke a command on one or more other entities (in different modules) when such an event occurs. What would be the suggested approach to achieve this?
I was thinking about sending a message from the read-side handler of the Record entity, which could be received by the corresponding service(s), which would then convert it to a command and invoke it on an entity.
EDIT, thanks @ignasi35: According to the Message Broker API, publishing the messages should be possible with this code:
AggregateEventTag<RecordEvent> RECORD_EVENT_TAG = AggregateEventTag.of(RecordEvent.class);

public Topic<RecordMessage> recordsTopic() {
    return TopicProducer.singleStreamWithOffset(offset -> {
        return persistentEntityRegistry
                .eventStream(RECORD_EVENT_TAG, offset)
                .map(this::convertEventToRecordMessage);
    });
}
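For reference, the mapping step turns each (event, offset) pair from the event stream into a (message, offset) pair for the topic. A minimal sketch of convertEventToRecordMessage, assuming RecordCreated exposes an id and RecordMessage has a matching constructor (both assumptions), using akka.japi.Pair and Lagom's Offset:

private Pair<RecordMessage, Offset> convertEventToRecordMessage(Pair<RecordEvent, Offset> pair) {
    // hypothetical: assumes RecordCreated is the only event type carried on this tag
    RecordCreated created = (RecordCreated) pair.first();
    return Pair.create(new RecordMessage(created.getId()), pair.second());
}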
Records are created, and corresponding events are persisted, but no messages are received by the following consumer:
@Singleton
public class RecordsConsumer {

    @Inject
    public RecordsConsumer(RecordService recordService) {
        recordService.recordsTopic().subscribe()
                .atLeastOnce(Flow.fromFunction(this::displayMessage));
    }
}
What am I doing wrong?

Finally solved it.
I ended up with a singleton service listening to RecordCreated events from PersistentEntityRegistry.eventStream. The service converts them to RecordMessage and exposes them as a Topic (see my question above).
The issue with not receiving any events from the exposed Topic was a missing dependency on the Kafka broker module (strangely, there was no warning about this; the topic was simply not exposed). In my case it was:
<dependency>
    <groupId>com.lightbend.lagom</groupId>
    <artifactId>lagom-javadsl-kafka-broker_2.12</artifactId>
</dependency>

Related

Spring boot Kafka request-reply scenario

I am implementing a POC of a request/reply scenario in order to move our event-based microservice stack to Kafka.
There are two options in Spring, and I wonder which one is better to use: ReplyingKafkaTemplate or Spring Cloud Stream.
The first is ReplyingKafkaTemplate, which can easily be configured so that each instance has a dedicated reply-topic channel:
record.headers().add(new RecordHeader(KafkaHeaders.REPLY_TOPIC, provider.getReplyChannelName().getBytes()));
The consumer should not need to know the reply topic name; it just listens on the request topic and replies to whatever reply topic the header specifies.
@KafkaListener(topics = "${kafka.topic.concat-request}")
@SendTo
public ConcatReply listen(ConcatModel request) {
    .....
}
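For context, the requesting side would then look roughly like this. This is a sketch, not the original poster's code: the topic name, the timeout, and the replyingKafkaTemplate bean wiring are assumptions.

ProducerRecord<String, ConcatModel> record = new ProducerRecord<>("concat-request", request);
record.headers().add(new RecordHeader(KafkaHeaders.REPLY_TOPIC, provider.getReplyChannelName().getBytes()));

// sendAndReceive publishes the request and correlates the reply arriving on the reply topic
RequestReplyFuture<String, ConcatModel, ConcatReply> future = replyingKafkaTemplate.sendAndReceive(record);
ConsumerRecord<String, ConcatReply> response = future.get(5, TimeUnit.SECONDS);
ConcatReply reply = response.value();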
The second option uses a combination of StreamListener, spring-integration and IntegrationFlows. A gateway has to be configured and the reply topics have to be filtered:
@MessagingGateway
public interface StreamGateway {
    @Gateway(requestChannel = START, replyChannel = FILTER, replyTimeout = 5000, requestTimeout = 2000)
    String process(String payload);
}
@Bean
public IntegrationFlow headerEnricherFlow() {
    return IntegrationFlows.from(START)
            .enrichHeaders(HeaderEnricherSpec::headerChannelsToString)
            .enrichHeaders(headerEnricherSpec -> headerEnricherSpec.header(Channels.INSTANCE_ID, instanceUUID))
            .channel(Channels.REQUEST)
            .get();
}

@Bean
public IntegrationFlow replyFiltererFlow() {
    return IntegrationFlows.from(GatewayChannels.REPLY)
            .filter(Message.class, message -> Channels.INSTANCE_ID.equals(message.getHeaders().get("instanceId")))
            .channel(FILTER)
            .get();
}
Building the reply:
@StreamListener(Channels.REQUEST)
@SendTo(Channels.REPLY)
public Message<?> process(Message<String> request) {
    // ...
}
Specifying the reply channel is mandatory, so the received replies are filtered by instance ID, which is a kind of workaround (and might bloat the network). On the other hand, a DLQ scenario is enabled by adding:
consumer:
  enableDlq: true
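For completeness, enableDlq is a property of the Kafka binder's consumer, so the full path in application.yml would look like this (the channel name below is a placeholder):

spring:
  cloud:
    stream:
      kafka:
        bindings:
          request-channel:   # your input channel name
            consumer:
              enableDlq: true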
Using Spring Cloud Stream looks promising in terms of interoperability with RabbitMQ and other features, but it does not officially support the request/reply scenario out of the box. The issue is still open, but not rejected either. (https://github.com/spring-cloud/spring-cloud-stream/issues/1800)
Any suggestions are welcomed.
Spring Cloud Stream is not designed for request/reply; it can be done, but it is not straightforward and you have to write code.
With @KafkaListener the framework takes care of everything for you.
If you want it to work with RabbitMQ too, you can annotate it with @RabbitListener as well.

MutableMessageBuilderFactory in Spring Integration

I have a Spring Cloud Stream consumer getting messages from Kafka. I want to modify the message headers, but currently the message I get is of type GenericMessage.
I saw this post and this code from spring-integration-core, so I added a bean of type MutableMessageBuilderFactory to my configuration, but I'm still getting the message as GenericMessage. Actually, the bean-creating code doesn't even seem to get called: getMessageBuilderFactory(BeanFactory beanFactory) in the IntegrationUtils class gets called multiple times, and every time beanFactory.getBean("messageBuilderFactory", MessageBuilderFactory.class) returns a DefaultMessageBuilderFactory.
What might be the problem causing the factory I defined as a bean not to be picked up, and the message to keep arriving as GenericMessage?
Spring versions:
spring-boot: 1.5.21
spring-integration: 4.3.12
Messages are immutable, and there are many reasons for that, but they are out of the scope of this question. What you can do is create a new Message in your handler and return it. If you want to copy most of the previous message and then modify a header, you can do this:
Message<?> resultMessage = MessageBuilder.fromMessage(sourceMessage).setHeader("myExistingHeader", "foo").build();
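Inside a Spring Cloud Stream processor that would look roughly like this; a sketch assuming the standard Processor binding, with the header name being just an example:

@StreamListener(Processor.INPUT)
@SendTo(Processor.OUTPUT)
public Message<String> handle(Message<String> sourceMessage) {
    // build a new message rather than mutating the immutable GenericMessage
    return MessageBuilder.fromMessage(sourceMessage)
            .setHeader("myExistingHeader", "foo")
            .build();
}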

Create custom DefaultKafkaHeaderMapper

When I send a record to a Kafka topic, the consumer receives "nativeHeaders" containing an unnecessary header (which HeaderMethodArgumentResolver cannot even cast to a Map).
I'm looking for a way to override the HeaderMethodArgumentResolver method getNativeHeaders to exclude this garbage header, and I don't know how to provide such a subclass to Spring.
Here is the original method from org.springframework.messaging.handler.annotation.support.HeaderMethodArgumentResolver:
private Map<String, List<String>> getNativeHeaders(Message<?> message) {
    return (Map) message.getHeaders().get("nativeHeaders");
}
Where this call:
message.getHeaders().get("nativeHeaders");
returns this:
https://ibb.co/qrvMNMk
(as you can see, there's an extra field "headerValue" apart from the key and value, which prevents the cast)
I send records with kafkaTemplate like this:
kafkaTemplate.send(new ProducerRecord<String, TempContractEntity>(topics.getSubmit(), tempContractEntity));
The consumer gets messages via the @KafkaListener annotation:
@KafkaListener(topics = "#{settingsService.getTopics()}")
public void processMessage(OrchestratorRequestImpl orchestratorRequest,
        @Header(KafkaHeaders.RECEIVED_TOPIC) String topicName) throws Throwable {
    //...
}
Generally, I want to find a way to pre-process Kafka headers.
The NonTrustedHeaderType indicates that something sent a message with that header and its class is not trusted. This would not happen with the type of send you show; there is no Message<?> involved there, so something is missing from the picture in your question.
One thing you could do is add a ConsumerInterceptor to the consumer configuration and weed out the unwanted header in the onConsume() method, as sketched below.
But you should really figure out who's sending it.
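A minimal sketch of such an interceptor; the header name and the key/value types are assumptions. Register it via the consumer property interceptor.classes (ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG).

public class HeaderCleaningInterceptor implements ConsumerInterceptor<String, Object> {

    @Override
    public ConsumerRecords<String, Object> onConsume(ConsumerRecords<String, Object> records) {
        // drop the unwanted header from every record before it reaches the listener
        records.forEach(record -> record.headers().remove("garbageHeader"));
        return records;
    }

    @Override
    public void onCommit(Map<TopicPartition, OffsetAndMetadata> offsets) {
    }

    @Override
    public void close() {
    }

    @Override
    public void configure(Map<String, ?> configs) {
    }
}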

How to inject KafkaTemplate in Quarkus

I'm trying to inject a KafkaTemplate to send a single message. I'm developing a small function that lies outside the reactive approach.
I can only find examples that use @Incoming and @Outgoing from SmallRye, but I don't need a KafkaStream.
I tried with Kafka-CDI but I'm unable to inject the SimpleKafkaProducer.
Any ideas?
Regarding Clement's answer:
It seems like the right direction, but when executing orders.send("hello"); I receive this error:
(vert.x-eventloop-thread-3) Unhandled exception: java.lang.IllegalStateException: Stream not yet connected
I'm consuming from my topic via the command line; Kafka is up and running, and if I produce manually I can see the consumed messages.
It seems related to this sentence from the doc:
To use an Emitter for the stream hello, you need a @Incoming("hello")
somewhere in your code (or in your configuration).
I have this code in my class:
#Incoming("orders")
public CompletionStage<Void> consume(KafkaMessage<String, String> msg) {
log.info("Received message (topic: {}, partition: {}) with key {}: {}", msg.getTopic(), msg.getPartition(), msg.getKey(), msg.getPayload());
return msg.ack();
}
Maybe I've forgotten some configurations?
So, you just need to use an Emitter:
@Inject
@Stream("orders") // Emit on the channel 'orders'
Emitter<String> orders;

// ...
orders.send("hello");
And in your application.properties, declare:
## Orders topic (WRITE)
mp.messaging.outgoing.orders.type=io.smallrye.reactive.messaging.kafka.Kafka
mp.messaging.outgoing.orders.topic=orders
mp.messaging.outgoing.orders.bootstrap.servers=localhost:9092
mp.messaging.outgoing.orders.key.serializer=org.apache.kafka.common.serialization.StringSerializer
mp.messaging.outgoing.orders.value.serializer=org.apache.kafka.common.serialization.StringSerializer
mp.messaging.outgoing.orders.acks=1
To avoid the Stream not yet connected exception, as suggested by the doc:
To use an Emitter for the stream hello, you need a @Incoming("hello")
somewhere in your code (or in your configuration).
Assuming you have something like this in your application.properties:
# Orders topic (READ)
smallrye.messaging.source.orders-r-topic.type=io.smallrye.reactive.messaging.kafka.Kafka
smallrye.messaging.source.orders-r-topic.topic=orders
smallrye.messaging.source.orders-r-topic.bootstrap.servers=0.0.0.0:9092
smallrye.messaging.source.orders-r-topic.key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
smallrye.messaging.source.orders-r-topic.value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
smallrye.messaging.source.orders-r-topic.group.id=my-group-id
Add something like this:
#Incoming("orders-r-topic")
public CompletionStage<Void> consume(KafkaMessage<String, String> msg) {
log.info("Received message (topic: {}, partition: {}) with key {}: {}", msg.getTopic(), msg.getPartition(), msg.getKey(), msg.getPayload());
return msg.ack();
}
Since Clement's answer, the @Stream annotation has been deprecated. The @Channel annotation must be used instead.
You can use an Emitter provided by the quarkus-smallrye-reactive-messaging-kafka dependency to produce messages to a Kafka topic.
A simple Kafka producer implementation:
public class MyKafkaProducer {

    @Inject
    @Channel("my-topic")
    Emitter<String> myEmitter;

    public void produce(String message) {
        myEmitter.send(message);
    }
}
And the following configuration must be added to the application.properties file:
mp.messaging.outgoing.my-topic.connector=smallrye-kafka
mp.messaging.outgoing.my-topic.bootstrap.servers=localhost:9092
mp.messaging.outgoing.my-topic.value.serializer=org.apache.kafka.common.serialization.StringSerializer
This will produce string-serialized messages to a Kafka topic named my-topic.
Note that by default the name of the channel is also the name of the Kafka topic to which the data will be produced. This behavior can be changed through the configuration. The supported configuration attributes are described in the Reactive Messaging documentation.
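Calling the producer is then plain CDI injection; the JAX-RS resource below is just a hypothetical caller:

@Path("/orders")
public class OrderResource {

    @Inject
    MyKafkaProducer producer;

    @POST
    public Response send(String body) {
        producer.produce(body); // fire-and-forget onto the 'my-topic' channel
        return Response.accepted().build();
    }
}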

Camel keep sending messages to queue via JMS after 1 minute

I am currently learning Camel and I am not sure if we can send messages to an ActiveMQ queue/topic from Camel at a fixed interval.
Currently I have code in Scala which looks up the database, creates a message, and sends it to a queue every minute. Can we do this in Camel?
Camel has a timer component, but it does not produce the message itself. I was thinking of something like this:
from("timer://foo?fixedRate=true&period=60000")
.to("customLogic")
.to("jms:myqueue")
The timer will fire every minute.
The custom logic will do the database lookup and create a message.
Finally, the message is sent to the JMS queue.
I am very new to Camel, so some code would be really helpful, thanks.
Can you please point me to how I can create this customLogic step that creates a message and passes it on to the next .to("jms:myqueue")? Is there some class that I need to inherit/implement which will pass the message on?
I guess your question is about how to hook custom Java logic into your Camel route to prepare the JMS message payload.
The JMS component will use the exchange body as the JMS message payload, so you need to set the body in your custom logic. There are several ways to do this.
You can create a custom processor by implementing the org.apache.camel.Processor interface and explicitly setting the new body on the exchange:
Processor customLogicProcessor = new Processor() {
    @Override
    public void process(Exchange exchange) {
        // do your db lookup, etc.
        String myMessage = ...
        exchange.getIn().setBody(myMessage);
    }
};

from("timer://foo?fixedRate=true&period=60000")
    .process(customLogicProcessor)
    .to("jms:myqueue");
A more elegant option is to make use of Camel's bean binding:
public class CustomLogic {
    @Handler
    public String doStuff() {
        // do your db lookup, etc.
        String myMessage = ...
        return myMessage;
    }
}

[...]

CustomLogic customLogicBean = new CustomLogic();

from("timer://foo?fixedRate=true&period=60000")
    .bean(customLogicBean)
    .to("jms:myqueue");
The @Handler annotation tells Camel which method it should call. If there's only one qualifying method, you don't need that annotation.
Camel makes the result of the method call the new body of the exchange that is passed to the JMS component.
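If you are running Camel standalone (outside Spring/OSGi), a self-contained setup could look like the sketch below; the broker URL and the ActiveMQ connection factory are assumptions:

public class TimerToJmsExample {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        // wire the "jms" endpoint prefix to an ActiveMQ broker
        context.addComponent("jms", JmsComponent.jmsComponentAutoAcknowledge(
                new ActiveMQConnectionFactory("tcp://localhost:61616")));
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("timer://foo?fixedRate=true&period=60000")
                        .bean(new CustomLogic())
                        .to("jms:myqueue");
            }
        });
        context.start();
        Thread.sleep(Long.MAX_VALUE); // keep the JVM alive so the timer keeps firing
    }
}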