Camel Routing using Headers

I have a spring boot camel app.
I am reading from an incoming Kafka topic and running a processor that looks up a new destination topic from a Redis cache. In the processor, I add this as a new Camel header:
exchange.getIn().setHeader("destTopic", cachelookupvalue);
LOG.info("RouterProcessor Dest Topic set to:= {}", exchange.getIn().getHeader("destTopic"));
This seems to work, and the log shows it set.
On returning to the route builder class
public class RouterBuilder extends RouteBuilder {
...
// setup env properties etc.
@Override
public void configure() {
    from("kafka:" + SRC_TOPIC + "?brokers=" + SRC_BROKER).routeId("myRoute")
        .log("Sending message to kafka: ${header.destTopic}")
        .process(processor)
        .to("kafka:${header.destTopic}?brokers=" + DEST_BROKER) // not working
but
        //.to("kafka:webhook-channel_1-P101?brokers=" + DEST_BROKER) // this works
If I hardcode the topic it works, but if I try to use the ${header.destTopic} expression in the .to DSL it does not.
The .log("Sending message to kafka: ${header.destTopic}") output shows the right topic.
I am not sure if it's a syntax problem or if I'm missing a step.
The error output is
Failed delivery for (MessageId: 972B795CF230E52-0000000000000001 on
ExchangeId: 972B795CF230E52-0000000000000001). Exhausted after
delivery attempt: 1 caught:
org.apache.kafka.common.errors.InvalidTopicException:
${header.destTopic}

Headers, properties, body, etc. are dynamic values in Camel, so you must use toD (to Dynamic) instead of plain to:
.toD("kafka:${header.destTopic}?brokers=" + DEST_BROKER)
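For context, a minimal sketch of the corrected route builder, assuming the same SRC_TOPIC, SRC_BROKER, DEST_BROKER constants and processor from the question:

public class RouterBuilder extends RouteBuilder {
    @Override
    public void configure() {
        from("kafka:" + SRC_TOPIC + "?brokers=" + SRC_BROKER).routeId("myRoute")
            // the processor looks up the destination in Redis and sets the header
            .process(processor)
            .log("Sending message to kafka: ${header.destTopic}")
            // toD evaluates the simple expression per exchange at runtime;
            // plain to() treats the string as a literal endpoint URI
            .toD("kafka:${header.destTopic}?brokers=" + DEST_BROKER);
    }
}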

Related

Error handling in Spring Cloud Kafka Streams

I'm using Spring Cloud Stream with Kafka Streams. Let's say I have a processor which is a Function that converts a KStream of Strings to a KStream of CityProgrammes. It invokes an API to find the City by name and another transformation which finds any events near that city.
Now the problem is that if any error happens during the transformation, the whole application stops. I want to send that one particular message to a DLQ and move along. I've been reading for days and everyone suggests handling errors within the called services, but that is nonsense in my opinion; plus I still need to return a KStream: how do I do that within a catch?
I also looked at UncaughtExceptionHandler but it is not aware of the message and is only able to restart the processing, which won't skip this invalid message.
This might sound like an A-B problem, so the question rephrased: how do I maintain the flow in a KStream when an exception occurs and send the invalid item to the DLQ?
When it comes to the application-level errors you have, it is up to the application itself how the error is handled. Kafka Streams and the Spring Cloud Stream binder mainly support deserialization and serialization errors at the framework level. Although that is the case, I think your scenario can be handled. If you are using a Kafka client prior to 2.8, here is an SO answer I gave before on something similar: https://stackoverflow.com/a/66749750/2070861
If you are using Kafka Streams 2.8, here is an idea that you can use. However, the code below should only be used as a starting point; adjust it according to your use case. Read more on how branching works in Kafka Streams 2.8, as the branching API was significantly refactored in 2.8 from the prior versions.
// streamBridge (a Spring Cloud Stream StreamBridge) is assumed to be injected
// into the enclosing class.
public Function<KStream<?, String>, KStream<?, Foo>> convert() {
    Foo[] foo = new Foo[1]; // one-element holder so the branch lambda can write to it
    return input -> {
        final Map<String, ? extends KStream<?, String>> branches =
            input.split(Named.as("foo-")).branch((key, value) -> {
                try {
                    foo[0] = new Foo(); // your API call for the CityProgramme conversion here, possibly
                    return true;
                }
                catch (Exception e) {
                    // send the failing record to the DLT and drop it from the main flow
                    Message<?> message = MessageBuilder.withPayload(value).build();
                    streamBridge.send("to-my-dlt", message);
                    return false;
                }
            }, Branched.as("bar"))
            .defaultBranch();
        // split() prefixes branch names, so the good branch is "foo-bar"
        final KStream<?, String> kStream = branches.get("foo-bar");
        return kStream.map((key, value) -> new KeyValue<>("", foo[0]));
    };
}
The default branch is ignored in this code because it only contains the records that threw exceptions; those were already handled by the catch block, in which we send them to a DLT programmatically. Finally, we take the good records, map them to a new KStream, and send it through the outbound.

Spring boot Kafka request-reply scenario

I am implementing a POC of a request/reply scenario in order to move an event-based microservice stack to using Kafka.
There are two options in Spring, and I wonder which one is better to use: ReplyingKafkaTemplate or cloud-stream.
The first is ReplyingKafkaTemplate, which can be easily configured to have a dedicated reply topic for each instance.
record.headers().add(new RecordHeader(KafkaHeaders.REPLY_TOPIC, provider.getReplyChannelName().getBytes()));
The consumer should not need to know the reply topic name; it just listens to a topic and replies on the given reply topic.
@KafkaListener(topics = "${kafka.topic.concat-request}")
@SendTo
public ConcatReply listen(ConcatModel request) {
    .....
}
The second option is using a combination of StreamListener, spring-integration and IntegrationFlows. A gateway should be configured and reply topics should be filtered.
@MessagingGateway
public interface StreamGateway {
    @Gateway(requestChannel = START, replyChannel = FILTER, replyTimeout = 5000, requestTimeout = 2000)
    String process(String payload);
}

@Bean
public IntegrationFlow headerEnricherFlow() {
    return IntegrationFlows.from(START)
        .enrichHeaders(HeaderEnricherSpec::headerChannelsToString)
        .enrichHeaders(headerEnricherSpec -> headerEnricherSpec.header(Channels.INSTANCE_ID, instanceUUID))
        .channel(Channels.REQUEST)
        .get();
}

@Bean
public IntegrationFlow replyFiltererFlow() {
    return IntegrationFlows.from(GatewayChannels.REPLY)
        .filter(Message.class, message -> Channels.INSTANCE_ID.equals(message.getHeaders().get("instanceId")))
        .channel(FILTER)
        .get();
}
Building the reply:
@StreamListener(Channels.REQUEST)
@SendTo(Channels.REPLY)
public Message<?> process(Message<String> request) {
    ...
}
Specifying the reply channel is mandatory, so received reply topics are filtered according to the instance ID, which is a kind of workaround (and might bloat the network). On the other hand, the DLQ scenario is enabled by adding:
consumer:
enableDlq: true
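For context, in Spring Cloud Stream that flag lives under the Kafka binder's consumer properties; a minimal application.yml sketch, where the binding name request-in is hypothetical:

spring:
  cloud:
    stream:
      kafka:
        bindings:
          request-in:            # hypothetical binding name
            consumer:
              enableDlq: true
              # optional; defaults to error.<destination>.<group>
              dlqName: request-in-dlq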
Using Spring Cloud Stream looks promising in terms of interoperability with RabbitMQ and other features, but it does not officially support the request/reply scenario right away. The issue is still open, and not rejected either. (https://github.com/spring-cloud/spring-cloud-stream/issues/1800)
Any suggestions are welcome.
Spring Cloud Stream is not designed for request/reply; it can be done, but it is not straightforward and you have to write code.
With #KafkaListener the framework takes care of everything for you.
If you want it to work with RabbitMQ too, you can annotate it with #RabbitListener as well.
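For the template approach, a minimal client-side sketch of ReplyingKafkaTemplate; the bean wiring, topic names, and timeout below are assumptions, not from the question:

// Assumes a ReplyingKafkaTemplate<String, ConcatModel, ConcatReply> bean whose
// reply container listens on the (hypothetical) "concat-reply" topic.
ProducerRecord<String, ConcatModel> record = new ProducerRecord<>("concat-request", request);
record.headers().add(new RecordHeader(KafkaHeaders.REPLY_TOPIC, "concat-reply".getBytes()));

RequestReplyFuture<String, ConcatModel, ConcatReply> future =
        replyingKafkaTemplate.sendAndReceive(record);
// block until the correlated reply arrives (or the timeout elapses)
ConcatReply reply = future.get(5, TimeUnit.SECONDS).value();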

Why isn't Kafka ProducerListener logging trace id and span id?

I have a Kafka instance and a simple Spring Boot application with one REST controller and a ProducerListener bean. The controller accepts a simple String message and sends it to Kafka via KafkaTemplate. I want the ProducerListener to log an info message, but the trace id and span id are missing. In my opinion, they should be included in the log.
I use Spring Cloud Sleuth and Spring Kafka starters.
The message itself is successfully sent via Kafka topic to another spring boot application that gets the trace id correctly, so I assume that the problem is related just to the ProducerListener.
Debugging
I have tried debugging the code and ended up in the method TracingProducer.send(..). The span instance there is NoopSpan (which was weird), and because of that the TracingCallback doesn't wrap the Callback, so the tracing information gets lost. There are some nasty bit operations that I couldn't understand. The jar was brave-instrumentation-kafka-clients-5.7.0.jar.
The code
The controller
@PostMapping("/pass-to-kafka")
public void passToKafka(@RequestBody String message) {
    logger.info("This log message has trace id and span id");
    kafkaTemplate.send("my-test-topic", message);
}
The producer listener
@Bean
public ProducerListener myProducerListener() {
    return new ProducerListener<>() {
        @Override
        public void onSuccess(ProducerRecord producerRecord, RecordMetadata recordMetadata) {
            logger.info("This log message is missing trace id and span id");
        }
    };
}
The project with the code is on GitHub: https://github.com/vaclavnemec/kafka-sleuth-problem
I expect the info message to write out trace id and span id. It should be the same values in both of the loggers.
INFO [bar,208706b5c40f8e93,208706b5c40f8e93,false] com.example.demo.Controller: This log message has trace id and span id
...
INFO [bar,,,] com.example.demo.DemoApplication: This log message is missing trace id and span id
Please set spring.sleuth.sampler.probability=1.0 as described in the documentation and the readme. Then you'll have the tracing turned on.
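A minimal application.properties sketch; the default sampler probability is 0.1, which is consistent with the NoopSpan seen while debugging (unsampled traces get a no-op span):

# sample every trace so the ProducerListener callback carries trace and span ids
spring.sleuth.sampler.probability=1.0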

How to inject KafkaTemplate in Quarkus

I'm trying to inject a KafkaTemplate to send a single message. I'm developing a small function that lies outside the reactive approach.
I can only find examples that use @Incoming and @Outgoing from SmallRye, but I don't need a KafkaStream.
I tried with Kafka-CDI but I'm unable to inject the SimpleKafkaProducer.
Any ideas?
Regarding Clement's answer:
It seems the right direction, but when executing orders.send("hello"); I receive this error:
(vert.x-eventloop-thread-3) Unhandled exception: java.lang.IllegalStateException: Stream not yet connected
I'm consuming from my topic on the command line; Kafka is up and running, and if I produce manually I can see the consumed messages.
It seems related to this sentence in the doc:
To use an Emitter for the stream hello, you need a @Incoming("hello")
somewhere in your code (or in your configuration).
I have this code in my class:
#Incoming("orders")
public CompletionStage<Void> consume(KafkaMessage<String, String> msg) {
log.info("Received message (topic: {}, partition: {}) with key {}: {}", msg.getTopic(), msg.getPartition(), msg.getKey(), msg.getPayload());
return msg.ack();
}
Maybe I've forgotten some configurations?
So, you just need to use an Emitter:
@Inject
@Stream("orders") // Emit on the channel 'orders'
Emitter<String> orders;

// ...
orders.send("hello");
And in your application.properties, declare:
## Orders topic (WRITE)
mp.messaging.outgoing.orders.type=io.smallrye.reactive.messaging.kafka.Kafka
mp.messaging.outgoing.orders.topic=orders
mp.messaging.outgoing.orders.bootstrap.servers=localhost:9092
mp.messaging.outgoing.orders.key.serializer=org.apache.kafka.common.serialization.StringSerializer
mp.messaging.outgoing.orders.value.serializer=org.apache.kafka.common.serialization.StringSerializer
mp.messaging.outgoing.orders.acks=1
To avoid the Stream not yet connected exception, as suggested by the doc:
To use an Emitter for the stream hello, you need a @Incoming("hello")
somewhere in your code (or in your configuration).
Assuming you have something like this in your application.properties:
# Orders topic (READ)
smallrye.messaging.source.orders-r-topic.type=io.smallrye.reactive.messaging.kafka.Kafka
smallrye.messaging.source.orders-r-topic.topic=orders
smallrye.messaging.source.orders-r-topic.bootstrap.servers=0.0.0.0:9092
smallrye.messaging.source.orders-r-topic.key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
smallrye.messaging.source.orders-r-topic.value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
smallrye.messaging.source.orders-r-topic.group.id=my-group-id
Add something like this:
#Incoming("orders-r-topic")
public CompletionStage<Void> consume(KafkaMessage<String, String> msg) {
log.info("Received message (topic: {}, partition: {}) with key {}: {}", msg.getTopic(), msg.getPartition(), msg.getKey(), msg.getPayload());
return msg.ack();
}
Since Clement's answer, the @Stream annotation has been deprecated. The @Channel annotation must be used instead.
You can use an Emitter provided by the quarkus-smallrye-reactive-messaging-kafka dependency to produce message to a Kafka topic.
A simple Kafka producer implementation:
@ApplicationScoped // bean-defining annotation so CDI can discover and inject this class
public class MyKafkaProducer {

    @Inject
    @Channel("my-topic")
    Emitter<String> myEmitter;

    public void produce(String message) {
        myEmitter.send(message);
    }
}
And the following configuration must be added to the application.properties file:
mp.messaging.outgoing.my-topic.connector=smallrye-kafka
mp.messaging.outgoing.my-topic.bootstrap.servers=localhost:9092
mp.messaging.outgoing.my-topic.value.serializer=org.apache.kafka.common.serialization.StringSerializer
This will produce string-serialized messages to a Kafka topic named my-topic.
Note that by default the name of the channel is also the name of the Kafka topic to which the data is produced. This behavior can be changed through configuration; the supported configuration attributes are described in the Reactive Messaging documentation.
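For example, to keep the channel name my-topic but write to a differently named topic (the topic name below is hypothetical):

# channel stays "my-topic"; records go to the "orders-v2" topic instead
mp.messaging.outgoing.my-topic.topic=orders-v2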

Camel: keep sending messages to a queue via JMS every minute

I am currently learning Camel and I am not sure if we can send messages to an ActiveMQ queue/topic from Camel at a fixed interval.
Currently I have code in Scala which looks up the database, creates a message, and sends it to a queue every minute. Can we do this in Camel?
We have a timer component in Camel but it does not produce the message. I was thinking something like this:
from("timer://foo?fixedRate=true&period=60000")
    .to("customLogic")
    .to("jms:myqueue")
The timer will fire after a minute.
The custom logic will do the database lookup and create a message.
Finally, send to the JMS queue.
I am very new to Camel, so some code would be really helpful. Thanks.
Can you please point me to how I can create this customLogic step that creates a message and passes it to the next .to("jms:myqueue")? Is there some class that I need to inherit/implement which will pass the message, etc.?
I guess your question is about how to hook custom Java logic into your Camel route to prepare the JMS message payload.
The JMS component uses the exchange body as the JMS message payload, so you need to set the body in your custom logic. There are several ways to do this.
You can create a custom processor by implementing the org.apache.camel.Processor interface and explicitly setting the new body on the exchange:
Processor customLogicProcessor = new Processor() {
    @Override
    public void process(Exchange exchange) {
        // do your db lookup, etc.
        String myMessage = ...
        exchange.getIn().setBody(myMessage);
    }
};

from("timer://foo?fixedRate=true&period=60000")
    .process(customLogicProcessor)
    .to("jms:myqueue");
A more elegant option is to make use of Camel's bean binding:
public class CustomLogic {
    @Handler
    public String doStuff() {
        // do your db lookup, etc.
        String myMessage = ...
        return myMessage;
    }
}

[...]

CustomLogic customLogicBean = new CustomLogic();

from("timer://foo?fixedRate=true&period=60000")
    .bean(customLogicBean)
    .to("jms:myqueue");
The @Handler annotation tells Camel which method it should call. If there's only one qualifying method, you don't need that annotation.
Camel makes the result of the method call the new body of the exchange that is passed to the JMS component.
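If the bean had several candidate methods, you could also name the method explicitly in the route instead of using @Handler; a minimal sketch:

from("timer://foo?fixedRate=true&period=60000")
    // select the method by name instead of relying on @Handler
    .bean(customLogicBean, "doStuff")
    .to("jms:myqueue");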