Can't send messages using pykafka - apache-kafka

I am using pykafka. I can get topic names, but I can't send a message. My code is shown below:
from pykafka import KafkaClient

client = KafkaClient(hosts='xx.xx.xx.xx:9092')
topic = client.topics['test']
producer = topic.get_sync_producer()  # blocks until delivery is confirmed
producer.produce(b"message")
and I get this error message:
raise ProduceFailureError("Delivery report not received after timeout")
pykafka.exceptions.ProduceFailureError: Delivery report not received after timeout

My server.properties contains:
broker.id=1
listeners=PLAINTEXT://localhost:9092
and I am sending the message to an external IP.
If you are setting hosts=some.external.IP:9092, then you need to edit the Kafka broker properties: set advertised.listeners=PLAINTEXT://some.external.IP:9092 and change listeners to PLAINTEXT://:9092 so that the broker listens on external interfaces.
Listing the topics only requires a metadata request over the bootstrap connection, while producing requires connecting to the advertised address, which is why listing works fine.
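For example, the relevant server.properties entries would look something like this (some.external.IP is a placeholder for the broker's external address):
listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://some.external.IP:9092
After changing these, restart the broker so the new listener configuration takes effect.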

Related

Not authorized to access topics inside Event Hub namespace

I have an Event Hub namespace with two Event Hubs (event-hub and event-hub-2). To establish the connection I use Kafka - of course the namespace is on the Standard tier. When I try to connect to the second EH (event-hub-2 as the Kafka topic, its connection string as the Kafka password), I get the following stack trace:
2021-06-17T15:56:04.976Z - WARN: [NetworkClient] [Consumer clientId=consumer-$Default-1, groupId=$Default] Error while fetching metadata with correlation id 11 : {event-hub=TOPIC_AUTHORIZATION_FAILED}
2021-06-17T15:56:04.980Z - ERROR: [Metadata] [Consumer clientId=consumer-$Default-1, groupId=$Default] Topic authorization failed for topics [event-hub]
2021-06-17T15:56:05.007Z - ERROR: [KafkaConsumerActor] [9e1ad] Exception when polling from consumer, stopping actor: org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [event-hub]
org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [event-hub]
My question is: why would I get this kind of stack trace when I didn't even try to connect to the topic/EH from the stack trace? It's weird...
If you are using the same consumer group in both scenarios, your consumer needs read access to all topics used in the consumer group. Try changing the group.id and test again.
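A minimal sketch of that change with the plain Java consumer (the group name and namespace are placeholders, and the Event Hubs SASL_SSL settings are omitted for brevity):
Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "mynamespace.servicebus.windows.net:9093");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "event-hub-2-group"); // a group.id not shared with the other subscriber
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
// ... plus the usual Event Hubs SASL_SSL / connection-string settings ...
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList("event-hub-2"));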
The problem came back when I connected my subscribers to Event Hubs simultaneously. Just like Ran said, connecting with different consumer groups resolved the problem. Many thanks!

Kafka producer dealing with lost connection to broker

With a producer configuration like the one below, I am creating a singleton producer that is used throughout the application:
Properties properties = new Properties();
properties.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka.consul1:9092,kafka.consul2:9092");
properties.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
properties.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
properties.setProperty(ProducerConfig.ACKS_CONFIG, "1");
I am connected to a Kafka cluster hosted on Kubernetes. The broker's advertised.listeners is configured to return IP addresses rather than hostnames. While everything normally works as expected, the problem occurs when Kafka is restarted: sometimes the IP address changes. Since the producer only knows about the old IP, it keeps trying to connect to that host to send messages, and none of the messages go through.
I observe that an org.apache.kafka.common.errors.TimeoutException is thrown when the send fails. Currently the messages are sent asynchronously:
producer.send(data,
        (RecordMetadata recordMetadata, Exception e) -> {
            if (e != null) {
                LOGGER.error("Exception while sending message to kafka", e);
            }
        });
How should the TimeoutException be handled now? Given that the producer is shared across the application, closing and recreating it in the callback does not sound right.
According to the JavaDocs on the Callback interface, the TimeoutException is a retriable exception that can be handled by increasing the number of retries on the producer.
In the Kafka documentation you will find the details of the retries configuration:
retries (Default 0): Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error. Note that this retry is no different than if the client resent the record upon receiving the error. Allowing retries without setting max.in.flight.requests.per.connection to 1 will potentially change the ordering of records because if two batches are sent to a single partition, and the first fails and is retried but the second succeeds, then the records in the second batch may appear first.
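Applied to the producer configuration above, that tuning might look like this (the retry count is illustrative, not a recommendation):
properties.setProperty(ProducerConfig.RETRIES_CONFIG, "5"); // resend records that fail with transient errors such as TimeoutException
properties.setProperty(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "1"); // preserve record ordering while retrying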

Spring Kafka Stream - Unacknowledged message with no error

I am using @StreamListener to consume Kafka messages.
I have set autoCommitOffset to false and autoCommitOnError to false.
I am also sending all failed messages to a DLQ topic after maxAttempts failures. I have a question while testing the changes.
What will happen if I neither acknowledge the consumed message nor throw any error? Will Kafka redeliver the message automatically after some time?
When I throw an error, replay kicks in; it retries up to my maxAttempts configuration and the failed message goes to the DLQ topic.
Let me know whether Kafka supports retry when the consumer neither throws an error nor acknowledges the message.
What will happen if I neither acknowledge the consumed message nor throw any error? Will Kafka redeliver the message automatically after some time?
No; not unless you process no further messages, and even then, you will only get a redelivery after you restart the application.
Kafka doesn't "acknowledge" discrete messages; it just stores the last processed offset within a partition.
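For context, with autoCommitOffset set to false the Kafka binder adds an Acknowledgment header that the listener must invoke itself; a minimal sketch (the handler name and processing logic are illustrative):
@StreamListener(Sink.INPUT)
public void handle(Message<String> message) {
    Acknowledgment ack = message.getHeaders()
            .get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
    process(message.getPayload()); // hypothetical business logic
    if (ack != null) {
        ack.acknowledge(); // commits the offset; skipping this leaves the offset where it was
    }
}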

Kafka - org.apache.kafka.common.errors.NetworkException

I have Kafka client code which connects to Kafka brokers (server 0.10.1, client 0.10.2). There are two topics with two different consumer groups in the code, and there is also a producer. I am getting a NetworkException from the producer code once in a while (once in 2 days, once in 5 days, ...). We see consumer group (re)joining info in the logs for both consumer groups, followed by the NetworkException from the producer's future.get() call. Not sure why we are getting this error.
Code:
final Future<RecordMetadata> futureResponse =
producer.send(new ProducerRecord<>("ping_topic", "ping"));
futureResponse.get();
Exception:
org.apache.kafka.common.errors.NetworkException: The server disconnected before a response was received.
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.NetworkException: The server disconnected before a response was received.
at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:70)
at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:57)
at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:25)
The Kafka API definition of NetworkException is:
"A misc. network-related IOException occurred when making a request.
This could be because the client's metadata is out of date and it is
making a request to a node that is now dead."
Thanks
I had the same error while testing a Kafka consumer. I was using a sender template for it.
In the consumer configuration I additionally set the following properties:
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 15000);
After sending the message I added a thread sleep:
ListenableFuture<SendResult<String, String>> future =
        senderTemplate.send(MyConsumer.TOPIC_NAME, jsonPayload);
Thread.sleep(10000);
It was necessary to make the test work, but maybe not suitable for your case.

Kafka - how to use @KafkaListener(topicPattern="${kafka.topics}") where property kafka.topics is 'sss.*'?

I'm trying to implement a Kafka consumer with topic names as a pattern, e.g. @KafkaListener(topicPattern="${kafka.topics}") where the property kafka.topics is 'sss.*'. Now when I send a message to the topic 'sss.test' or any other topic name like 'sss.xyz' or 'sss.pqr', it throws the error below:
WARN o.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 12 : {sss.xyz-topic=LEADER_NOT_AVAILABLE}
I tried enabling listeners & advertised.listeners in the server.properties file, but when I restart Kafka it consumes messages from all the old topics that were tried before. The moment I use a new topic name, it throws the above error.
Does Kafka not support pattern matching? Or is there some configuration I'm missing? Please suggest.
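For reference, a minimal version of the listener being described (the method name and payload handling are illustrative):
@KafkaListener(topicPattern = "${kafka.topics}") // kafka.topics=sss.* in application.properties
public void listen(String message) {
    System.out.println("Received: " + message);
}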