Why isn't Kafka ProducerListener logging trace id and span id? - spring-cloud

I have a Kafka instance and a simple Spring Boot application with one REST controller and a ProducerListener bean. The controller accepts a plain String message and sends it to Kafka via KafkaTemplate. I want the ProducerListener to log an info message, but the trace id and span id are missing. In my opinion, they should be included in the log.
I use Spring Cloud Sleuth and Spring Kafka starters.
The message itself is successfully sent via the Kafka topic to another Spring Boot application, which picks up the trace id correctly, so I assume the problem is specific to the ProducerListener.
Debugging
I have tried debugging the code and ended up in the method TracingProducer.send(..). The span instance there is a NoopSpan (which was odd), and because of that the TracingCallback doesn't wrap the Callback, so the tracing information gets lost. There are some bit operations in the sampling logic that I couldn't follow. The jar involved was brave-instrumentation-kafka-clients-5.7.0.jar.
The code
The controller
@PostMapping("/pass-to-kafka")
public void passToKafka(@RequestBody String message) {
    logger.info("This log message has trace id and span id");
    kafkaTemplate.send("my-test-topic", message);
}
The producer listener
@Bean
public ProducerListener myProducerListener() {
    return new ProducerListener<>() {
        @Override
        public void onSuccess(ProducerRecord producerRecord, RecordMetadata recordMetadata) {
            logger.info("This log message is missing trace id and span id");
        }
    };
}
The project with the code is on GitHub: https://github.com/vaclavnemec/kafka-sleuth-problem
I expect the info message to write out the trace id and span id, and the values should be the same in both loggers:
INFO [bar,208706b5c40f8e93,208706b5c40f8e93,false] com.example.demo.Controller: This log message has trace id and span id
...
INFO [bar,,,] com.example.demo.DemoApplication: This log message is missing trace id and span id

Please set spring.sleuth.sampler.probability=1.0 as described in the documentation and the README. Then you'll have tracing turned on.
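For reference, a minimal application.properties entry (assuming the Sleuth 2.x property name):

# sample 100% of traces so spans are recorded instead of NoopSpan
spring.sleuth.sampler.probability=1.0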

Related

Camel Routing using Headers

I have a Spring Boot Camel app.
I am reading from an incoming Kafka topic and running a processor that looks up a new destination topic from a Redis cache. In the processor, I add this as a new Camel header:
exchange.getIn().setHeader("destTopic", cachelookupvalue);
LOG.info ("RouterProcessor Dest Topic set to:= {}", exchange.getIn().getHeader("destTopic"));
This seems to work, and the log shows it set.
Back in the route builder class:
public class RouterBuilder extends RouteBuilder {
    ...
    // set up env properties etc.
    @Override
    public void configure() {
        from("kafka:" + SRC_TOPIC + "?brokers=" + SRC_BROKER).routeId("myRoute")
            .log("Sending message to kafka: ${header.destTopic}")
            .process(processor)
            .to("kafka:${header.destTopic}?brokers=" + DEST_BROKER) // not working
            // but this syntax works:
            // .to("kafka:webhook-channel_1-P101?brokers=" + DEST_BROKER)
If I hardcode the topic, it works, but using the ${header.destTopic} expression in the .to DSL does not.
The .log("Sending message to kafka: ${header.destTopic}") statement prints the right topic.
I am not sure if it's a syntax problem or if I'm missing a step.
The error output is:
Failed delivery for (MessageId: 972B795CF230E52-0000000000000001 on
ExchangeId: 972B795CF230E52-0000000000000001). Exhausted after
delivery attempt: 1 caught:
org.apache.kafka.common.errors.InvalidTopicException:
${header.destTopic}
Headers, properties, the body, etc. are dynamic values in Camel, so you must use toD (to Dynamic):
.toD("kafka:${header.destTopic}?brokers=" + DEST_BROKER)
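A sketch of the full corrected route, with the processor moved before the log so the destTopic header is already set when it is printed:

from("kafka:" + SRC_TOPIC + "?brokers=" + SRC_BROKER).routeId("myRoute")
    .process(processor) // sets the destTopic header from the Redis lookup
    .log("Sending message to kafka: ${header.destTopic}")
    // toD evaluates the endpoint URI per exchange, so the header is resolved at send time
    .toD("kafka:${header.destTopic}?brokers=" + DEST_BROKER);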

How to shut down a KafkaListener when an error occurs

I wrote a listener in this way:
@Autowired
private KafkaListenerEndpointRegistry kafkaListenerEndpointRegistry;

@KafkaListener(containerFactory = "cdcKafkaListenerContainerFactory", errorHandler = "errorHandler")
public void consume(@Payload String message) throws Exception {
    ...
}

@Bean
public KafkaListenerErrorHandler errorHandler() {
    return (message, e) -> {
        kafkaListenerEndpointRegistry.stop();
        return null;
    };
}
In the @KafkaListener annotation I specified my error handler, which simply stops the consumer.
It seems to work, but I have some questions to ask.
Is there a built-in error handler for this purpose? I've read that ContainerStoppingErrorHandler can be used, but I cannot set it because @KafkaListener's errorHandler accepts beans of type KafkaListenerErrorHandler.
I see that kafkaListenerEndpointRegistry.stop() does a graceful stop, so the partition offset of the consumed message is committed before the container stops.
What I would like to know is: what happens if kafkaListenerEndpointRegistry.stop() has been called and, before the listener is definitely shut down, another message arrives on the topic?
Is that message consumed?
I imagine this scenario:
time0: kafkaListenerEndpointRegistry.stop() is called
time1: a message is pushed into the listened topic
time2: kafkaListenerEndpointRegistry.stop() complete graceful stop
I'm worried about a message arriving at time1. What would happen in this scenario?
Do not stop the container from within the listener.
ContainerStoppingErrorHandler is set on the container factory, not via the annotation.
If you are using Spring Boot, just declare the error handler as a bean and Boot will wire it in.
Otherwise, add the error handler to the container factory bean.
With this error handler, throwing an exception will immediately stop the container.
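For the non-Boot case, a minimal sketch of the container factory from the question with the built-in handler set (assuming Spring Kafka 2.x, where the factory exposes setErrorHandler):

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> cdcKafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // stops the container as soon as the listener throws; the failed record's
    // offset is not committed, so it is redelivered when the container restarts
    factory.setErrorHandler(new ContainerStoppingErrorHandler());
    return factory;
}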

MutableMessageBuilderFactory in Spring Integration

I have a Spring Cloud Stream consumer getting messages from Kafka. I want to modify the message headers, but the message I currently receive is of type GenericMessage.
I saw this post and this code from Spring Integration core, so I added a bean of type MutableMessageBuilderFactory to my configuration, but I'm still getting the message as GenericMessage. In fact, the bean-creating code doesn't even seem to get called: getMessageBuilderFactory(BeanFactory beanFactory) in the IntegrationUtils class gets called multiple times, and every time beanFactory.getBean("messageBuilderFactory", MessageBuilderFactory.class) returns a DefaultMessageBuilderFactory.
What might be causing the factory I defined as a bean not to be used, and the message to keep coming in as a GenericMessage?
Spring versions:
spring-boot: 1.5.21
spring-integration: 4.3.12
Messages are immutable, and there are many reasons for that, but that's out of scope for this question. What you can do is create a new Message in your handler and return it. If you want to copy most of the previous message and then modify a header, you can do this:
Message resultMessage = MessageBuilder.fromMessage(sourceMessage).setHeader("myExistingHeader", "foo").build();
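For instance, a sketch of how this could look in a handler (the channel names and method signature here are hypothetical):

@ServiceActivator(inputChannel = "input", outputChannel = "output")
public Message<?> addHeader(Message<?> sourceMessage) {
    // builds a new immutable message: copies payload and headers, then overrides one header
    return MessageBuilder.fromMessage(sourceMessage)
            .setHeader("myExistingHeader", "foo")
            .build();
}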

How to inject KafkaTemplate in Quarkus

I'm trying to inject a KafkaTemplate to send a single message. I'm developing a small function that falls outside the reactive approach.
I can only find examples that use @Incoming and @Outgoing from SmallRye, but I don't need a Kafka Stream.
I tried Kafka-CDI but I'm unable to inject the SimpleKafkaProducer.
Any ideas?
Update, regarding Clement's answer:
It seems like the right direction, but when executing orders.send("hello"); I receive this error:
(vert.x-eventloop-thread-3) Unhandled exception: java.lang.IllegalStateException: Stream not yet connected
I'm consuming from my topic via the command line; Kafka is up and running, and if I produce manually I can see the consumed messages.
It seems related to this sentence from the docs:
To use an Emitter for the stream hello, you need a @Incoming("hello")
somewhere in your code (or in your configuration).
I have this code in my class:
@Incoming("orders")
public CompletionStage<Void> consume(KafkaMessage<String, String> msg) {
    log.info("Received message (topic: {}, partition: {}) with key {}: {}",
            msg.getTopic(), msg.getPartition(), msg.getKey(), msg.getPayload());
    return msg.ack();
}
Maybe I've forgotten some configurations?
So, you just need to use an Emitter:
@Inject
@Stream("orders") // Emit on the channel 'orders'
Emitter<String> orders;

// ...
orders.send("hello");
And in your application.properties, declare:
## Orders topic (WRITE)
mp.messaging.outgoing.orders.type=io.smallrye.reactive.messaging.kafka.Kafka
mp.messaging.outgoing.orders.topic=orders
mp.messaging.outgoing.orders.bootstrap.servers=localhost:9092
mp.messaging.outgoing.orders.key.serializer=org.apache.kafka.common.serialization.StringSerializer
mp.messaging.outgoing.orders.value.serializer=org.apache.kafka.common.serialization.StringSerializer
mp.messaging.outgoing.orders.acks=1
To avoid the Stream not yet connected exception, do as the docs suggest:
To use an Emitter for the stream hello, you need a @Incoming("hello")
somewhere in your code (or in your configuration).
Assuming you have something like this in your application.properties:
# Orders topic (READ)
smallrye.messaging.source.orders-r-topic.type=io.smallrye.reactive.messaging.kafka.Kafka
smallrye.messaging.source.orders-r-topic.topic=orders
smallrye.messaging.source.orders-r-topic.bootstrap.servers=0.0.0.0:9092
smallrye.messaging.source.orders-r-topic.key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
smallrye.messaging.source.orders-r-topic.value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
smallrye.messaging.source.orders-r-topic.group.id=my-group-id
Add something like this:
@Incoming("orders-r-topic")
public CompletionStage<Void> consume(KafkaMessage<String, String> msg) {
    log.info("Received message (topic: {}, partition: {}) with key {}: {}",
            msg.getTopic(), msg.getPartition(), msg.getKey(), msg.getPayload());
    return msg.ack();
}
Since Clement's answer, the @Stream annotation has been deprecated; the @Channel annotation must be used instead.
You can use an Emitter provided by the quarkus-smallrye-reactive-messaging-kafka dependency to produce messages to a Kafka topic.
A simple Kafka producer implementation:
public class MyKafkaProducer {

    @Inject
    @Channel("my-topic")
    Emitter<String> myEmitter;

    public void produce(String message) {
        myEmitter.send(message);
    }
}
And the following configuration must be added to the application.properties file:
mp.messaging.outgoing.my-topic.connector=smallrye-kafka
mp.messaging.outgoing.my-topic.bootstrap.servers=localhost:9092
mp.messaging.outgoing.my-topic.value.serializer=org.apache.kafka.common.serialization.StringSerializer
This will produce string-serialized messages to a Kafka topic named my-topic.
Note that by default the name of the channel is also the name of the Kafka topic to which the data is produced. This behavior can be changed through configuration; the supported configuration attributes are described in the SmallRye Reactive Messaging documentation.
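For example, to keep the channel name my-topic but write to a differently named Kafka topic, the connector's topic attribute can be overridden (the topic name below is just an example):

# route the 'my-topic' channel to another Kafka topic
mp.messaging.outgoing.my-topic.topic=orders-out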

Notifying entities when entity state changes in Lagom

Assuming a Record entity, a CreateRecord command and a RecordCreated event, I want to invoke some command on one or more other entities (in different modules). What would be the suggested approach to achieve this?
I was thinking about sending a message from the read-side handler of the Record entity, which could be received by the corresponding service(s), which would convert it to a command and invoke it on an entity.
EDIT (thanks @ignasi35): According to the Message Broker API, publishing the messages should be possible with this code:
AggregateEventTag<RecordEvent> RECORD_EVENT_TAG = AggregateEventTag.of(RecordEvent.class);

public Topic<RecordMessage> recordsTopic() {
    return TopicProducer.singleStreamWithOffset(offset -> {
        return persistentEntityRegistry
                .eventStream(RECORD_EVENT_TAG, offset)
                .map(this::convertEventToRecordMessage);
    });
}
Records are created, and corresponding events are persisted, but no messages are received by the following consumer:
@Singleton
public class RecordsConsumer {

    @Inject
    public RecordsConsumer(RecordService recordService) {
        recordService.recordsTopic().subscribe()
                .atLeastOnce(Flow.fromFunction(this::displayMessage));
    }
}
What am I doing wrong?
Finally solved it.
I ended up with a singleton service listening to RecordCreated events from PersistentEntityRegistry.eventStream. The service converts them to RecordMessage and exposes them as a Topic (see my question above).
The issue with not receiving any events from the exposed Topic was a missing dependency on the Kafka broker module (strangely, there was no warning about this; the topic was simply not exposed). In my case it was:
<dependency>
    <groupId>com.lightbend.lagom</groupId>
    <artifactId>lagom-javadsl-kafka-broker_2.12</artifactId>
</dependency>
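Besides the broker dependency, the topic also needs to be declared in the service descriptor so Lagom actually publishes it. A minimal sketch, with the service and topic ids assumed:

// in the service interface; named(..) and topic(..) are statically
// imported from com.lightbend.lagom.javadsl.api.Service
default Descriptor descriptor() {
    return named("record-service").withTopics(
            topic("records", this::recordsTopic)
    );
}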