I'm trying to run Kafka Connect for the first time against an existing Kafka deployment, using SASL_PLAINTEXT and Kerberos authentication.
The first time I try to start connect-distributed, I see:
ERROR Uncaught exception in herder work thread, exiting: (org.apache.kafka.connect.runtime.distributed.DistributedHerder:227)
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
If I immediately run it a second time, without changing anything, I instead see:
ERROR Uncaught exception in herder work thread, exiting: (org.apache.kafka.connect.runtime.distributed.DistributedHerder:227)
org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [Offsets]
This is reproducible.
Worker config:
producer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor
consumer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor
bootstrap.servers=mybroker:9092
rest.port=28082
group.id=some-group
config.storage.topic=Configs
offset.storage.topic=Offsets
status.storage.topic=Status
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
rest.advertised.host.name=localhost
log4j.root.loglevel=INFO
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka
sasl.mechanism=GSSAPI
consumer.security.protocol=SASL_PLAINTEXT
consumer.sasl.kerberos.service.name=kafka
consumer.sasl.mechanism=GSSAPI
producer.security.protocol=SASL_PLAINTEXT
producer.sasl.kerberos.service.name=kafka
producer.sasl.mechanism=GSSAPI
A career in software has taught me to always assume the problem is completely unrelated to the error log, but for once the log was right:
Ranger was configured incorrectly and I genuinely wasn't authorized to access that topic.
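For anyone hitting the same wall, here is a quick smoke test that separates connectivity problems from authorization problems. This is only a sketch: client-sasl.properties is a hypothetical file holding the same security.protocol / sasl.* settings as the worker config above.
# Does a plain consumer with the same credentials hit the same error?
kafka-console-consumer.sh --bootstrap-server mybroker:9092 \
  --topic Offsets --from-beginning --max-messages 1 \
  --consumer.config client-sasl.properties
# If the cluster uses Kafka's own ACLs (rather than Ranger), list them directly:
kafka-acls.sh --bootstrap-server mybroker:9092 \
  --command-config client-sasl.properties --list --topic Offsets
If the console consumer throws the same TopicAuthorizationException, the problem is on the authorizer side (Ranger, in my case), not in the Connect worker config.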
Related
After migrating our microservice functionality to Spring Cloud Function, we have been facing issues with one of the producer topics.
Event of type: abc and key: xxx_yyy could not be sent to kafka org.springframework.messaging.MessageHandlingException: error occurred in message handler [org.springframework.cloud.stream.binder.kafka.KafkaMessageChannelBinder$ProducerConfigurationMessageHandler#2333d598]; nested exception is org.springframework.kafka.KafkaException: Send failed; nested exception is org.apache.kafka.common.errors.TimeoutException: Topic pc-abc not present in metadata after 60000 ms.
o.s.kafka.support.LoggingProducerListener - Exception thrown when sending a message with key='byte[15]' and payload='byte[256]' to topic pc-abc and partition 6: org.apache.kafka.common.errors.TimeoutException: Topic pc-abc not present in metadata after 60000 ms.
FYI: the topics are already created in our staging/prod environments and are not supposed to be created when the application starts.
My producer config:
spring.cloud.stream.bindings.pc-abc-out-0.content-type=application/json
spring.cloud.stream.bindings.pc-abc-out-0.destination=pc-abc
spring.cloud.stream.bindings.pc-abc-out-0.producer.header-mode=headers
spring.cloud.stream.bindings.pc-abc-out-0.producer.partition-count=5
spring.cloud.stream.bindings.pc-abc-out-0.producer.partitionKeyExpression=payload.key
spring.cloud.stream.kafka.bindings.pc-abc-out-0.producer.sync=true
I am kind of stuck at this point and exhausted. Has anyone else faced this issue?
Spring Cloud version: 2.5.5
Kafka: 2.7.1
The issue is: the producer is configured with partition-count=5, but Kafka is looking for partition number 6, which does not exist (partitions are zero-indexed, so a five-partition topic only has partitions 0 through 4). I have commented out the auto-add-partitions property, but the issue still turns up! Is it stale configuration? How do I force Kafka to pick up the new configuration?
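In case it helps, the first thing I would check is what partition count the broker itself reports, and then grow the topic if it is smaller than what the producer expects. A sketch only; the broker address is a placeholder:
# What does the broker actually have for pc-abc?
kafka-topics.sh --bootstrap-server mybroker:9092 --describe --topic pc-abc
# Partition counts can only be increased; grow to 7 so that partition 6 exists
kafka-topics.sh --bootstrap-server mybroker:9092 --alter --topic pc-abc --partitions 7
Alternatively, if I remember the binder correctly, setting spring.cloud.stream.kafka.binder.autoAddPartitions=true lets the binder grow the topic on startup instead.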
Someone on my team changed some configuration in Kafka, and I don't know what was changed. No one admits to the changes, and I have to explain this case.
Since then, our applications have shown errors similar to this, despite topic access being set to All:
[2022-01-03 09:16:35,398] ERROR [Worker clientId=connect-1, groupId=test-connect] Uncaught exception in herder work thread, exiting: (org.apache.kafka.connect.runtime.distributed.DistributedHerder:290)
org.apache.kafka.common.errors.GroupAuthorizationException: Not authorized to access group: test-connect
Previously, access was granted at the topic and user level, not at the group level, and it worked well.
Do you have any idea what is wrong?
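I can't say what was changed, but here is a hedged sketch of how to inspect, and if necessary restore, the group-level ACL that a Connect worker needs (the broker address and principal are placeholders; add a --command-config file if the cluster uses SASL):
# What ACLs are attached to the worker's group?
kafka-acls.sh --bootstrap-server mybroker:9092 --list --group test-connect
# Connect workers join their group.id as a consumer group, so they need Read on it
kafka-acls.sh --bootstrap-server mybroker:9092 --add \
  --allow-principal User:connect-user --operation Read --group test-connect
The error names the group resource specifically, which fits the symptom that topic-level access alone stopped being enough.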
I have an Event Hub Namespace with two Event Hubs (event-hub and event-hub-2). I connect over the Kafka protocol - the namespace is on the Standard tier, of course. When I try to connect to the second Event Hub (event-hub-2 as the Kafka topic, the connection string as the Kafka password), I get the following stack trace:
2021-06-17T15:56:04.976Z - WARN: [NetworkClient] [Consumer clientId=consumer-$Default-1, groupId=$Default] Error while fetching metadata with correlation id 11 : {event-hub=TOPIC_AUTHORIZATION_FAILED}
2021-06-17T15:56:04.980Z - ERROR: [Metadata] [Consumer clientId=consumer-$Default-1, groupId=$Default] Topic authorization failed for topics [event-hub]
2021-06-17T15:56:05.007Z - ERROR: [KafkaConsumerActor] [9e1ad] Exception when polling from consumer, stopping actor: org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [event-hub]
org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [event-hub]
My question is: why would I get this kind of stack trace when I didn't even try to connect to the topic/Event Hub it mentions? It's weird...
If you are using the same consumer group in both scenarios, your consumer needs read access to all topics used in that consumer group. Try changing the group.id and test again.
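For example, giving the second consumer its own group looks something like this (a sketch against the Event Hubs Kafka endpoint; the namespace name is a placeholder, and eventhubs-sasl.properties is a hypothetical file holding the SASL_SSL/PLAIN settings with the connection string as the password):
kafka-console-consumer.sh \
  --bootstrap-server mynamespace.servicebus.windows.net:9093 \
  --topic event-hub-2 --group event-hub-2-readers \
  --consumer.config eventhubs-sasl.properties
Each group then only needs access to the topics it actually reads, which sidesteps the cross-topic authorization check described above.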
The problem came back when I connected my subscribers to both Event Hubs simultaneously. Just as Ran said, connecting with different consumer groups resolved the problem. Many thanks!
Today an error appeared when I tried to send a message from the producer console to the consumer console:
[2016-11-02 15:12:58,168] ERROR Error when sending message to topic test with key: null, value: 5 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Batch containing 1 record(s) expired due to timeout while requesting metadata from brokers for test-0
Why did this happen? Is this a Kafka problem or a ZooKeeper problem?
It seems the client failed to retrieve metadata for test-0 from the Kafka brokers.
Either make sure you can connect to the Kafka brokers, or check that 'advertised.listeners' is set if you are running Kafka on IaaS machines.
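For instance, on a cloud VM where clients connect through a public address, server.properties typically needs something along these lines (the hostname is a placeholder):
# config/server.properties
# The interface the broker binds to
listeners=PLAINTEXT://0.0.0.0:9092
# The address the broker hands back to clients in metadata responses; if this
# is unset or wrong, the initial bootstrap connection can succeed and yet
# every subsequent metadata request times out, much like the error above
advertised.listeners=PLAINTEXT://public.hostname.example:9092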
Well, after I rebooted the whole server, the problem went away.
Has anyone faced this kind of situation before?
A few days ago Kafka was working properly, but today it started having problems. The console producer is unable to send messages to the console consumer. After a few seconds it prints:
" ERROR Error when sending message to topic test with key: null, value: 11 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms. "
Can anyone help? :'(
You'll see this if the broker you are bootstrapping from is not reachable. Try to telnet to the host and port you are trying to access from the producer. If that communication is OK, turn on debug logging by locating the tools-log4j.properties and changing the warn level to debug.
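Concretely, those two checks look something like this (a sketch; the host and install path are placeholders):
# 1. Can we even reach the broker from the producer machine?
telnet mybroker 9092
# 2. If the connection is fine, raise the CLI tools' log level from WARN to DEBUG
#    (the file ships in the Kafka install's config directory)
sed -i 's/WARN/DEBUG/' /opt/kafka/config/tools-log4j.properties
With debug logging on, the console producer shows which broker addresses it is actually trying, which should make an unreachable-broker or wrong-advertised-listener case obvious.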