I am using Spring Boot 2.1.1.RELEASE and Spring Cloud Greenwich.RC2, and the managed version of spring-cloud-stream-binder-kafka is 2.1.0.RC4. The Kafka version is 1.1.0. I have set the following properties, as the messages should not be consumed if there is an error.
spring.cloud.stream.bindings.input.group=consumer-gp-1
...
spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOnError=false
spring.cloud.stream.kafka.bindings.input.consumer.enableDlq=false
spring.cloud.stream.bindings.input.consumer.max-attempts=3
spring.cloud.stream.bindings.input.consumer.back-off-initial-interval=1000
spring.cloud.stream.bindings.input.consumer.back-off-max-interval=3000
spring.cloud.stream.bindings.input.consumer.back-off-multiplier=2.0
....
There are 20 partitions in the Kafka topic and Kerberos is used for authentication (not sure if this is relevant).
The Kafka consumer calls a web service for every message it processes, and if the web service is unavailable I expect the consumer to try to process the message 3 times before it moves on to the next message. So for my test I disabled the web service, and therefore none of the messages could be processed correctly. From the logs I can see that this is happening.
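For context, the listener is roughly of this shape (a minimal sketch assuming the Greenwich-era @StreamListener programming model; the class name, endpoint URL and payload type are illustrative, not the actual code):

import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;
import org.springframework.web.client.RestTemplate;

@EnableBinding(Sink.class)
public class InputListener {

    private final RestTemplate restTemplate = new RestTemplate();

    @StreamListener(Sink.INPUT)
    public void handle(String payload) {
        // If the web service is down this call throws, and the binder retries the
        // same message up to max-attempts times (with the configured back-off)
        // before moving on to the next one.
        restTemplate.postForEntity("http://localhost:8080/process", payload, Void.class);
    }
}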
After a while I stopped and then restarted the Kafka consumer (the web service was still disabled). I was expecting that after the restart, the consumer would attempt to process the messages that were not successfully processed the first time around. From the logs (I printed out each message with its fields), after the restart of the Kafka consumer I couldn't see this happening. I thought the partitioning might be influencing something, but I checked the logs and all 20 partitions were assigned to this single consumer.
Is there a property I have missed? I thought the expected behavior was that, when I restart the consumer, the Kafka broker would deliver the records that were not successfully processed to the consumer again.
Thanks
Parameters working as expected. See comment.
Some weeks ago my project was updated to use Kafka 3.2.1 instead of the version that comes with Spring Boot 2.7.3 (3.1.1). We made this upgrade to avoid an issue in Kafka Streams where illegal state and illegal argument exceptions were not ending up in the uncaught exception handler.
On the consumer side, we also moved to the cooperative sticky assignor.
In parallel, we started some resiliency tests and began to have issues with Kafka records that are no longer consumed on some partitions when using a Kafka batch listener. The issue occurred after several rebalances caused by the test (deployment is done in Kubernetes and we stopped some pods, microservices and broker instances). The issue is not present on every listener. The Kafka brokers and microservices are up and running.
During our investigations:
- we enabled Kafka events and can clearly see that the consumer is started;
- we can see in the logs that the partitions that are not consuming events are assigned;
- debug logging has been enabled on the KafkaMessageListenerContainer, and we see a lot of occurrences of "Receive: 0 records" and "Commit list: {}".
Are there any blocking points to using Kafka 3.2.1 with Spring Boot 2.7.3 / Spring Kafka 2.8.8?
Any help or other advice to progress our investigation is more than welcome.
Multiple listeners are defined; the retry seems to be fired from another listener (a shared error handler?).
This is a known bug, fixed in the next release:
https://github.com/spring-projects/spring-kafka/issues/2382
https://github.com/spring-projects/spring-kafka/commit/3de1e89ba697ead04de171cfa35273bb0daddbe6
A temporary workaround is to give each container its own error handler.
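For reference, a minimal sketch of that workaround (assuming a plain ConcurrentKafkaListenerContainerFactory; the generic types and back-off values are illustrative): instead of sharing one error handler bean across all containers, create a fresh instance per container, for example via a container customizer.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

@Configuration
public class KafkaListenerConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        factory.setBatchListener(true);
        // Each container created by this factory gets its own error handler instance,
        // so retry/back-off state is not shared between listeners.
        factory.setContainerCustomizer(container ->
                container.setCommonErrorHandler(new DefaultErrorHandler(new FixedBackOff(1000L, 3L))));
        return factory;
    }
}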
We are using a Spring Boot application to develop a Kafka Streams application. Until now we were using only a single broker, so we did not face any issues.
But a week ago we set up a cluster with 3 ZooKeeper nodes and 3 Kafka brokers for higher availability.
We configured our application as follows:
spring.kafka.bootstrap-servers=x.x.x.x:9093,x.x.x.x:9093,x.x.x.x:9093
The three brokers are named leader-1, leader-2 and leader-3.
So we tested the broker-down behaviour; below are the results.
Expected behavior: the application keeps running without any struggle, consuming and producing data.
Actual behavior: if we take down any one broker, the application throws a "broker not available" exception, and after some time the application stops.
While analysing the cause we found that the topic we consume from has its leader on leader-1 and the topic we produce to has its leader on leader-2. So when I stop leader-1, we thought leadership would change to the next broker, but it does not.
Is this the default behaviour, or are we doing something wrong?
Can anyone please suggest how to overcome this issue?
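For reference, which broker leads each partition, and whether the partition has replicas to fail over to, can be checked with the topic describe tool (the topic name and address are placeholders; older Kafka versions use --zookeeper instead of --bootstrap-server):

bin/kafka-topics.sh --describe --topic my-topic --bootstrap-server x.x.x.x:9093

The output lists Leader, Replicas and Isr per partition; leadership can only move to another broker if the partition actually has in-sync replicas on other brokers.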
We have a Kafka producer configured as follows:
metadata.broker.list=broker1:9092,broker2:9092,broker3:9092,broker4:9092
serializer.class=kafka.serializer.StringEncoder
request.required.acks=1
request.timeout.ms=30000
batch.num.messages=25
message.send.max.retries=3
producer.type=async
compression.codec=snappy
The replication factor is 3 and the total number of partitions is currently 108.
The rest of the properties are at their defaults.
This producer was running absolutely fine. Then, for some reason, one of the brokers went down, and our producer started to log:
"Failed to update metadata after 60000 ms". Nothing else was in the log, and we kept seeing this error. At intervals, a few requests were getting blocked, even though the producer was async.
The issue was resolved when the broker was up and running again.
What can be the reason for this? As per my understanding, one broker being down should not affect the system as a whole.
Posting the answer for anyone who might face this issue:
The reason is the older version of the Kafka producer. Kafka producers take the bootstrap servers as a list. In older versions, to fetch metadata, the producer tries to connect to all the servers in round-robin fashion. So, if one of the brokers is down, the requests going to that server fail and this message appears.
Solution:
Upgrade to a newer producer version.
Alternatively, reduce the metadata.fetch.timeout.ms setting: this ensures the main thread does not stay blocked and the send fails sooner. The default value is 60000 ms. This is not needed in newer versions.
Note: the Kafka send method blocks until the producer is able to write to its buffer.
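For illustration, a rough equivalent of the configuration above on the newer Java producer (org.apache.kafka.clients.producer); the property names follow the newer client, and the topic name and values are illustrative:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class NewProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092,broker2:9092,broker3:9092,broker4:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "1");
        props.put("retries", 3);
        props.put("compression.type", "snappy");
        // Caps how long send() may block waiting for metadata or buffer space
        // (this takes the place of metadata.fetch.timeout.ms in newer clients).
        props.put("max.block.ms", 30000);

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.send(new ProducerRecord<>("my-topic", "key", "value")); // async; returns a Future
        producer.close();
    }
}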
I got the same error because I forgot to create the topic. Once I created the topic the issue was resolved.
I'm using Kafka with the ZooKeeper that comes with the Kafka bundle. I'm also using Spring Cloud Stream and the Kafka binder.
I wanted to see what happens if ZooKeeper is down while the Kafka broker is running. I sent some items to the Kafka topic via a Spring Cloud Stream source. To my surprise, instead of getting an exception, Spring Cloud Stream says it's OK. However, my items aren't in the Kafka topic. Is this the expected behavior, or a bug? Is there something I can configure to get an exception back if ZooKeeper is down?
Try the Kafka producer sync=true property.
I'm afraid we can reach the broker for metadata, but when we send a record, that is done in an async manner by default.
So, to fully track down the network problems, we have to switch to sync mode.
http://docs.spring.io/spring-cloud-stream/docs/Chelsea.SR2/reference/htmlsingle/index.html#_kafka_producer_properties
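For example, with the Kafka binder the producer can be switched to synchronous sends per binding (the binding name "output" here is illustrative):

spring.cloud.stream.kafka.bindings.output.producer.sync=true

With sync=true the send waits for the result, so a failure to actually publish the record surfaces as an exception to the caller instead of being lost in the asynchronous send path.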
I have Kafka installed on my local server, and through some other applications running on the server, producers are publishing messages to the brokers inside my Kafka server. Through ZooKeeper I can easily see the health of my Kafka: it shows all the topics created inside my Kafka server, the offsets inside the topics, etc. The only thing ZooKeeper is not able to show is the messages inside the individual topics. Someone recommended the kafka-manager tool; I installed and ran it, it worked fine and showed a lot of information from my Kafka server, but it still was not able to show the actual messages that are published or consumed by the respective consumers inside my Kafka server. So my question is: is there a way/tool/code to see the messages published or consumed, either in addition to kafka-manager, or as a plugin I can install into kafka-manager so that it also shows the messages? Thanks in advance!
A Kafka broker cannot tell you how many messages have been consumed by a given consumer on a given topic. The only things a Kafka broker knows about are the current log offset of the consumer and the current max offset of the log. It cannot, however, tell you how many messages before the current offset the consumer actually received, as it keeps no counters around this, and the consumer defines its own initial position (as well as being able to seek to various places in the log).
You can get both of these numbers using the $KAFKA_HOME/bin/kafka-consumer-offset-checker.sh script.
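A typical invocation looks something like this (the group, topic and ZooKeeper address are placeholders):

$KAFKA_HOME/bin/kafka-consumer-offset-checker.sh --zookeeper localhost:2181 --group my-consumer-group --topic my-topic

The output shows, per partition, the consumer's current offset, the log-end offset (logSize) and the lag between the two.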