Not authorized to access topics inside Event Hub namespace - apache-kafka

I have an Event Hub Namespace with two Event Hubs (event-hub and event-hub-2). To establish the connection I use Kafka - the namespace is on the Standard tier, of course. When I try to connect to the second EH (event-hub-2 as the Kafka topic, the connection string as the Kafka password), I get the following stack trace:
2021-06-17T15:56:04.976Z - WARN: [NetworkClient] [Consumer clientId=consumer-$Default-1, groupId=$Default] Error while fetching metadata with correlation id 11 : {event-hub=TOPIC_AUTHORIZATION_FAILED}
2021-06-17T15:56:04.980Z - ERROR: [Metadata] [Consumer clientId=consumer-$Default-1, groupId=$Default] Topic authorization failed for topics [event-hub]
2021-06-17T15:56:05.007Z - ERROR: [KafkaConsumerActor] [9e1ad] Exception when polling from consumer, stopping actor: org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [event-hub]
org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [event-hub]
My question is: WHY did I get this kind of stack trace when I didn't even try to connect to the topic/EH it mentions? It's weird...

If you are using the same consumer group in both scenarios, your consumer needs read access to all topics used in that consumer group. Try changing the group.id and testing again.
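For illustration, a minimal consumer sketch against the Event Hubs Kafka endpoint with a dedicated group.id instead of $Default; the namespace, group name, and connection string below are placeholders:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class EventHubConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Event Hubs exposes its Kafka endpoint on port 9093 over SASL_SSL.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "my-namespace.servicebus.windows.net:9093");
        // A dedicated consumer group per subscriber instead of $Default.
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "event-hub-2-subscriber");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        // The literal user name "$ConnectionString" with the namespace
        // connection string as the password (placeholder value below).
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"$ConnectionString\" "
                        + "password=\"Endpoint=sb://...\";");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("event-hub-2"));
            // poll loop elided
        }
    }
}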

The problem came back when I connected my subscribers to both Event Hubs simultaneously. Just like Ran said, connecting with different consumer groups resolved the problem. Many thanks!

Related

Kafka Admin client unregistered causing metadata issues

After migrating our microservice functionality to Spring Cloud Function, we have been facing issues with one of the producer topics.
Event of type: abc and key: xxx_yyy could not be sent to kafka org.springframework.messaging.MessageHandlingException: error occurred in message handler [org.springframework.cloud.stream.binder.kafka.KafkaMessageChannelBinder$ProducerConfigurationMessageHandler#2333d598]; nested exception is org.springframework.kafka.KafkaException: Send failed; nested exception is org.apache.kafka.common.errors.TimeoutException: Topic pc-abc not present in metadata after 60000 ms.
o.s.kafka.support.LoggingProducerListener - Exception thrown when sending a message with key='byte[15]' and payload='byte[256]' to topic pc-abc and partition 6: org.apache.kafka.common.errors.TimeoutException: Topic pc-abc not present in metadata after 60000 ms.
FYI: the topics are already created in our staging/prod environments and are not supposed to be created when the application starts.
My producer config:
spring.cloud.stream.bindings.pc-abc-out-0.content-type=application/json
spring.cloud.stream.bindings.pc-abc-out-0.destination=pc-abc
spring.cloud.stream.bindings.pc-abc-out-0.producer.header-mode=headers
spring.cloud.stream.bindings.pc-abc-out-0.producer.partition-count=5
spring.cloud.stream.bindings.pc-abc-out-0.producer.partitionKeyExpression=payload.key
spring.cloud.stream.kafka.bindings.pc-abc-out-0.producer.sync=true
I am kind of stuck at this point and exhausted. Has anyone else faced this issue?
Spring Cloud version: 2.5.5
Kafka: 2.7.1
The issue is:
The producer is configured with partition-count=5,
but Kafka is looking for partition number 6, which obviously does not exist. I have commented out the auto-add-partitions property, but the issue still turns up! Is it stale configuration? How do I force Kafka to pick up the new configuration?
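For reference, a minimal sketch of Spring Cloud Stream's documented default partition selection (the result of partitionKeyExpression, hashed, modulo the partition count); with an effective count of 5 it can only produce partitions 0..4, so "partition 6" suggests the binder picked up a larger effective partition count, e.g. from the existing topic's actual metadata:

public class PartitionSelectionSketch {
    // Sketch of the documented default, not the framework source: the value
    // of partitionKeyExpression (payload.key in the config above) is hashed
    // and taken modulo the effective partition count.
    static int selectPartition(Object partitionKey, int partitionCount) {
        return Math.abs(partitionKey.hashCode()) % partitionCount;
    }

    public static void main(String[] args) {
        System.out.println(selectPartition("xxx_yyy", 5)); // always in 0..4
    }
}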

Artemis - How to avoid TransactionRolledBackException for Non-Transactional session

I use live/backup with shared storage and a non-transacted JMS session. I always send one message at a time, and I always receive one message, acknowledge it, and receive the second message only after the first acknowledge succeeds.
I got this exception in my non-transacted session:
Execution of JMS message listener failed. Caused by: [javax.jms.TransactionRolledBackException - AMQ219030: The transaction was rolled back on failover to a backup server]
javax.jms.TransactionRolledBackException: AMQ219030: The transaction was rolled back on failover to a backup server
at org.apache.activemq.artemis.core.client.impl.ClientSessionImpl.rollbackOnFailover(ClientSessionImpl.java:904)
at org.apache.activemq.artemis.core.client.impl.ClientSessionImpl.commit(ClientSessionImpl.java:927)
at org.apache.activemq.artemis.jms.client.ActiveMQMessage.acknowledge(ActiveMQMessage.java:719)
It happens because the session was marked as "rollbackOnly". I got into this state after the following steps:
1) I use Spring JMS. The consumer session works 24/7 (an infinite session.receive() loop).
2) The master node crashed, then the master node was restarted.
3) After recovery (a couple of hours later), I sent a message to the queue. The consumer read the message and threw an exception on acknowledge (because the session was marked rollback-only).
4) I read the message again (this is not a big problem for my task), but the redelivery count was not incremented.
My consumer code:
@Override
public void onMessage(Message message) {
    if (redeliveryCount(message) > 0) {
        // Never invoked, because the redelivery count is not incremented -
        // and that breaks my business logic.
        processAsDuplicate(message);
    }
}
I migrated from another broker and I thought I would not have to change the client logic.
Question:
How can I avoid the TransactionRolledBackException for a non-transacted session? If this is not possible, should I change the consumer code?
Thank you in advance.
UPDATE AFTER ANSWER:
https://github.com/apache/activemq-artemis/tree/2.14.0/examples/features/ha/replicated-failback
This example is not suitable for my case - I don't have unacknowledged messages. I got into this state after the following steps: 1) restart the server, 2) consume a message, 3) acknowledge the message.
We use one broker for ~30 applications (24/7), ~200 consumers in total.
For example, on the weekend we restart the JMS broker.
Will all consumers start getting this exception when they consume new messages
(even though they have no unacknowledged messages)?
The TransactionRolledBackException is expected, as you can see in the replicated-failback example.
To prevent a consumer from processing the same message more than once, an idempotent consumer must be implemented; e.g., Apache Camel provides an Idempotent Consumer component that works with any JMS provider, see: http://camel.apache.org/idempotent-consumer.html
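For illustration, a minimal in-memory sketch of the idempotent-consumer idea, keyed on the JMSMessageID; process() and the size bound are placeholders, and a real deployment would back the seen-ID set with a persistent store (as Camel's IdempotentRepository does) so it survives restarts:

import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;

public class IdempotentListenerSketch implements MessageListener {

    // Remember recently seen JMS message IDs, evicting the oldest entries.
    private final Set<String> seenIds = Collections.newSetFromMap(
            new LinkedHashMap<String, Boolean>() {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, Boolean> eldest) {
                    return size() > 10_000; // cap memory use (placeholder bound)
                }
            });

    @Override
    public void onMessage(Message message) {
        try {
            // add() returns false if the ID was already seen: a duplicate.
            if (!seenIds.add(message.getJMSMessageID())) {
                message.acknowledge(); // drop the duplicate
                return;
            }
            process(message);
            message.acknowledge();
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }

    private void process(Message message) { /* business logic goes here */ }
}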

Kafka Streams: "TopicAuthorizationException: Not authorized to access topics" for an internal state store

Java: OpenJdk 11
Kafka: 2.2.0
Kafka streams lib: 2.3.0
I am trying to deploy my Kafka Streams application in a Docker container, and it fails with a TopicAuthorizationException while trying to create an internal state store.
It works well locally. The main difference between running locally and running on the server is that on the server it connects to a server-deployed Kafka and authenticates using the usual Kerberos auth.
I fail to understand the link between authentication and the local stores.
My stream looks like this:
StreamsBuilder builder = new StreamsBuilder();
// We stream from the source topic
KStream<String, EnrichedMessagePayload> sourceMessagesStream = builder.stream(sourceTopic,
        Consumed.with(Serdes.serdeFrom(String.class), INPUT_SERDE));
// We group per room and window
TimeWindowedKStream<String, EnrichedMessagePayload> windowed = sourceMessagesStream
        .groupByKey()
        .windowedBy(TimeWindows.of(Duration.ofMillis(windowSize)).grace(Duration.ZERO));
// We make them a list
KStream<Windowed<String>, WindowedMessages> grouped = windowed
        .aggregate(WindowedMessages::new,
                (key, value, aggregate) -> aggregate.add(value),
                Materialized.with(Serdes.String(), Serdes.serdeFrom(windowSerializer, windowSerializer)))
        .suppress(Suppressed.untilWindowCloses(unbounded()))
        .toStream();
// Filter
KStream<Windowed<String>, FilterResult> filtered = grouped
        .mapValues((readOnlyKey, value) -> filterWindow(value.getMessages()));
// Re-map to its original form
KStream<String, OutputPayload> reduced = filtered
        .flatMap((KeyValueMapper<Windowed<String>, FilterResult, Iterable<KeyValue<String, OutputPayload>>>) (key, value) -> value
                .getMessages()
                .stream()
                .map(payload -> new KeyValue<>(key.key(), payload))
                .collect(toList()));
// Target topic
reduced.to(sinkTopic, Produced.with(Serdes.serdeFrom(String.class), SERDE));
return builder.build();
It receives a stream of messages, windows it, aggregates all the messages per window, keeps only the last version of the list with a 'suppress', and then flatMaps the whole thing to forward it to another topic.
Every time, I get this kind of exception:
Error message was: org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [Topic authorization failed.]
2019-10-09 06:44:03.255 +0000 ERROR [filterer-d83f2f60-b2bd-40b2-a314-4b20f32918f7-StreamThread-1] [StreamThread.java:777] - stream-thread [filterer-d83f2f60-b2bd-40b2-a314-4b20f32918f7-StreamThread-1] Encountered the following unexpected Kafka exception during processing, this usually indicate Streams internal errors: - [rapid_r-live-message-filterer-0-0-1-snapshot-10.1e842f1a-ea60-11e9-9c7d-024298932744] - [] - []
org.apache.kafka.streams.errors.StreamsException: Could not create topic filterer-KTABLE-SUPPRESS-STATE-STORE-0000000005-changelog.
at org.apache.kafka.streams.processor.internals.InternalTopicManager.getNumPartitions(InternalTopicManager.java:212)
at org.apache.kafka.streams.processor.internals.InternalTopicManager.validateTopics(InternalTopicManager.java:226)
at org.apache.kafka.streams.processor.internals.InternalTopicManager.makeReady(InternalTopicManager.java:104)
at org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.prepareTopic(StreamsPartitionAssignor.java:971)
at org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.assign(StreamsPartitionAssignor.java:618)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.performAssignment(ConsumerCoordinator.java:424)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.onJoinLeader(AbstractCoordinator.java:622)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.access$1100(AbstractCoordinator.java:107)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:544)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:527)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:978)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:958)
at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:204)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:578)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:388)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:294)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:212)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:415)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:358)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:353)
at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1251)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1216)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1201)
at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:941)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:846)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:805)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:774)
Caused by: org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [Topic authorization failed.]
It is not "authentication" but "authorization". Look at your log messages: they say "Not authorized to access topics". As far as I can see, you are not authorized to create the internal topic 'filterer-KTABLE-SUPPRESS-STATE-STORE-0000000005-changelog' that backs your local suppress state store. State stores in Kafka Streams are backed by default by a topic on the Kafka brokers. These internal topics are used during failover to restore local state stores. They are created automatically by the Kafka Streams application, so the application needs the appropriate permissions to create them.
See https://kafka.apache.org/23/documentation/streams/developer-guide/security.html#id1 for more information. There it says "the principal running the application must have the ACL set so that the application has the permissions to create, read and write internal topics."
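For illustration, a sketch of granting those permissions with the Kafka AdminClient, using a prefixed ACL on the application.id so the auto-created -changelog and -repartition topics are all covered; the principal and broker address are placeholders, and the kafka-acls CLI with --resource-pattern-type prefixed achieves the same:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

public class GrantStreamsAclsSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092"); // placeholder broker
        try (AdminClient admin = AdminClient.create(props)) {
            // Every internal topic of a Streams app is prefixed with its
            // application.id ("filterer" in the question), so a PREFIXED
            // pattern covers them all.
            ResourcePattern internalTopics = new ResourcePattern(
                    ResourceType.TOPIC, "filterer", PatternType.PREFIXED);
            AccessControlEntry allowAll = new AccessControlEntry(
                    "User:my-streams-principal", "*", // placeholder principal
                    AclOperation.ALL, AclPermissionType.ALLOW);
            admin.createAcls(Collections.singleton(
                    new AclBinding(internalTopics, allowAll))).all().get();
        }
    }
}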

Can't start Kafka Connect: Timeout expired while fetching topic metadata

I am trying to run Kafka Connect for the first time, against an existing Kafka deployment, using SASL_PLAINTEXT and Kerberos authentication.
The first time I try and start connect-distributed, I see:
ERROR Uncaught exception in herder work thread, exiting: (org.apache.kafka.connect.runtime.distributed.DistributedHerder:227)
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
If I immediately run a second time, not changing anything, instead I see:
ERROR Uncaught exception in herder work thread, exiting: (org.apache.kafka.connect.runtime.distributed.DistributedHerder:227)
org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [Offsets]
This is reproducible.
Worker config:
producer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor
consumer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor
bootstrap.servers=mybroker:9092
rest.port=28082
group.id=some-group
config.storage.topic=Configs
offset.storage.topic=Offsets
status.storage.topic=Status
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
rest.advertised.host.name=localhost
log4j.root.loglevel=INFO
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka
sasl.mechanism=GSSAPI
consumer.security.protocol=SASL_PLAINTEXT
consumer.sasl.kerberos.service.name=kafka
consumer.sasl.mechanism=GSSAPI
producer.security.protocol=SASL_PLAINTEXT
producer.sasl.kerberos.service.name=kafka
producer.sasl.mechanism=GSSAPI
A career in software has taught me to always assume that the problem is completely unrelated to the error log, but for once it was correct:
Ranger was configured incorrectly and I genuinely wasn't authorized to access that topic.

Kafka - how to use @KafkaListener(topicPattern="${kafka.topics}") where property kafka.topics is 'sss.*'?

I'm trying to implement a Kafka consumer with topic names as a pattern, e.g. @KafkaListener(topicPattern="${kafka.topics}") where the property kafka.topics is 'sss.*'. Now when I send a message to the topic 'sss.test' or any other topic name like 'sss.xyz' or 'sss.pqr', it throws an error as below:
WARN o.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 12 : {sss.xyz-topic=LEADER_NOT_AVAILABLE}
I tried setting listeners & advertised.listeners in the server.properties file, but when I restart Kafka it consumes messages from all the old topics that were already tried. The moment I use a new topic name, it throws the above error.
Does Kafka not support pattern matching? Or is there some configuration I'm missing? Please suggest.
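For reference, a minimal sketch of the pattern-based listener described in the question (container-factory setup omitted). Note that a pattern subscription only discovers topics created after startup when the consumer refreshes its metadata (metadata.max.age.ms, 5 minutes by default), and LEADER_NOT_AVAILABLE is the transient warning seen while the broker is still auto-creating a topic:

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class PatternListenerSketch {

    // kafka.topics holds the regex, e.g. kafka.topics=sss.*
    @KafkaListener(topicPattern = "${kafka.topics}")
    public void listen(String payload) {
        System.out.println("Received: " + payload);
    }
}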