I am seeing the WARN below in my Kafka producer.
WARN [kafka-producer-network-thread | producer-4] o.a.k.c.NetworkClient [NetworkClient.java:588] Connection to node -1 could not be established. Broker may not be available.
Kafka client version = 0.11.0.1; the broker version is the same as the client version.
My data is flowing into the Kafka cluster and I am able to consume as well, but I keep getting these warnings. Any clues?
I have a very simple Quarkus microservice that uses SmallRye Reactive Messaging (Kafka). Sometimes my Kafka broker goes down, and I get the following logs:
2020-09-24 04:04:27,067 WARN [org.apa.kaf.cli.NetworkClient] (kafka-producer-network-thread | producer-1) [Producer clientId=producer-1] Bootstrap broker xxxxxxx.xxxx.xxx:2202 (id: -1 rack: null) disconnected
2020-09-24 04:04:27,083 WARN [org.apa.kaf.cli.NetworkClient] (kafka-producer-network-thread | producer-3) [Producer clientId=producer-3] Connection to node -1 (xxxxx.xxxx.xxxx.fr/XX.XX.XX.XXX:2202) could not be established. Broker may not be available.
After the broker has been restarted, I have to manually restart my microservice. Is it possible to give the microservice the capability to resume consuming new incoming messages without any manual action?
Thank you!
If you are using the KafkaProducer and Consumer APIs, they automatically reconnect once the broker is up again.
Please ensure that your application does not throw an exception and kill the thread. If you keep the thread alive, it will reconnect. Catch all exceptions in the consumer thread to make sure it does not exit due to a runtime exception.
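For illustration, a minimal sketch using the plain consumer API (the bootstrap address, group id, and topic name are placeholders; with SmallRye Reactive Messaging the equivalent behaviour is typically driven by the connector's configuration, such as its failure strategy, rather than a hand-written loop):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ResilientConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");                // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));       // placeholder topic
            while (true) {
                try {
                    // While the broker is down, poll() simply returns empty batches
                    // (logging WARNs like the ones above) and reconnects once it is back.
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                    }
                } catch (RuntimeException e) {
                    // Log and keep the loop alive instead of letting the thread die.
                    System.err.println("Error while processing records: " + e);
                }
            }
        }
    }
}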
I am using a Mac and installed ZooKeeper and Kafka via
brew install confluent-platform
I start the services with the following commands:
zookeeper-server-start /usr/local/etc/kafka/zookeeper.properties
kafka-server-start /usr/local/etc/kafka/server.properties
connect-distributed /usr/local/etc/kafka/connect-distributed.properties
However, the connector logs messages such as:
[2020-08-05 09:53:40,222] WARN [Producer clientId=inventory-connector2-dbhistory] Connection to node -1 (kafka/223.82.248.117:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:756)
[2020-08-05 09:53:40,230] WARN [Producer clientId=inventory-connector2-dbhistory] Bootstrap broker kafka:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient:1024)
[2020-08-05 09:53:40,427] WARN [Producer clientId=inventory-connector-dbhistory] Connection to node -1 (kafka/223.82.248.117:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:756)
I started the Kafka broker on localhost, but as the messages show, the broker's address resolves to 223.82.248.117:9092. How would I fix it?
You would need to set advertised.listeners in server.properties to localhost:9092, then point bootstrap.servers in the connect-*.properties files there as well.
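For example (a sketch; adjust the host and port to your setup), in /usr/local/etc/kafka/server.properties:

listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://localhost:9092

and in /usr/local/etc/kafka/connect-distributed.properties:

bootstrap.servers=localhost:9092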
I'm trying to get Kafka to work for the first time. I get the error as described below. Any reason why Kafka would throw this error?
The error:
[kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] 1 partitions have leader brokers without a matching listener, including [test1-0]
port=9020
advertised.host.name=10.44.72.204
advertised.port=9020
I have been creating a Kafka producer example using Java, sending normal data that is just "Test" + an integer as the value. I used the properties below; after I started the producer client, while messages were on the way, I killed the broker and suddenly received the error message below instead of the producer retrying.
I am using 3 brokers and a topic with 3 partitions, a replication factor of 3, and no min.insync.replicas.
Below are the properties configured:
Properties config = new Properties();
config.put(ProducerConfig.ACKS_CONFIG, "all");
config.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "1");
config.put(CommonClientConfigs.RETRIES_CONFIG, 60);
config.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
config.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, 10000);
config.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 30000);
config.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 10000);
config.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 1048576);
config.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
config.put(ProducerConfig.LINGER_MS_CONFIG, 0);
config.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 1073741824); // 1 GB
The result when I killed all my brokers, or sometimes just one of them, is below.
Error:
WARN org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=producer-1] Got error produce response with correlation id 124 on topic-partition testing001-0, retrying (59 attempts left). Error: NETWORK_EXCEPTION
27791 [kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=producer-1] Received invalid metadata error in produce request on partition testing001-0 due to org.apache.kafka.common.errors.NetworkException: The server disconnected before a response was received.. Going to request metadata update now
28748 [kafka-producer-network-thread | producer-1] ERROR org.apache.kafka.common.utils.KafkaThread - Uncaught exception in thread 'kafka-producer-network-thread | producer-1':
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(Unknown Source)
at java.nio.ByteBuffer.allocate(Unknown Source)
at org.apache.kafka.common.memory.MemoryPool$1.tryAllocate(MemoryPool.java:30)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:112)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:335)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:296)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:560)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:496)
at org.apache.kafka.common.network.Selector.poll(Selector.java:425)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:510)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:239)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:163)
at java.lang.Thread.run(Unknown Source)
I assume you are testing the producer. When a producer connects to a Kafka cluster, you pass all broker IPs and ports as a comma-separated string; in your case there are three brokers. When the producer connects, the cluster controller responds with the cluster metadata as part of initialization. Assume your producer only publishes messages to a single topic: the cluster maintains a leader broker for each topic partition, and after identifying that leader, your producer communicates only with it for as long as it is alive.
In your testing scenario, you are deliberately killing broker instances. When that happens, the Kafka cluster needs to elect a new leader for your topic's partitions, and the controller has to pass the new metadata to your producer. If the metadata changes frequently (in your case, you may kill another broker in the meantime), the producer may receive invalid metadata.
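For illustration, a minimal sketch of pointing the producer at the whole cluster (the three host names are placeholders): with all brokers listed in bootstrap.servers, the producer can refetch metadata from a surviving broker when the current leader is killed.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

// List every broker so metadata can still be fetched if one goes down.
Properties config = new Properties();
config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092,broker2:9092,broker3:9092");
KafkaProducer<String, String> producer =
        new KafkaProducer<>(config, new StringSerializer(), new StringSerializer());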
Could you please tell me about the compatibility of Apache Kafka and ZooKeeper (the native Apache distributions) with some of Confluent's components? I have already installed Kafka and ZooKeeper in my environment as multi-node clusters, but now I need to add Schema Registry and Kafka Connect.
So I actually tried to deploy Confluent Schema Registry from their official Docker image. I logged in and was able to successfully telnet to the Kafka broker on port 9093:
root@schema-0:/usr/bin# telnet kafka-0.kafka-hs 9093
Trying 10.244.3.47...
Connected to kafka-0.kafka-hs.log-platform.svc.cluster.local.
Escape character is '^]'.
Then I tried to run some tests:
# /usr/bin/kafka-avro-console-producer \
    --broker-list localhost:9093 --topic bar \
    --property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"f1","type":"string"}]}'
Add some values:
{"f1": "value1"}
But no luck :(. I got the following errors:
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer (org.apache.kafka.clients.producer.ProducerConfig)
[2018-01-28 11:23:23,561] INFO Kafka version : 1.0.0-cp1 (org.apache.kafka.common.utils.AppInfoParser)
[2018-01-28 11:23:23,561] INFO Kafka commitId : ec61c5e93da662df (org.apache.kafka.common.utils.AppInfoParser){"f1": "value1"}
[2018-01-28 11:23:36,233] WARN [Producer clientId=console-producer] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2018-01-28 11:23:36,335] WARN [Producer clientId=console-producer] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2018-01-28 11:23:36,486] WARN [Producer clientId=console-producer] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
The entire system is running on Kubernetes.
Confluent Platform is Apache Kafka, but with additional components (such as Schema Registry) bundled with it.
The error you're getting is related to the network configuration. You need to make sure that your broker is reachable from other nodes, including Schema Registry. In your console-producer command you've specified --broker-list localhost:9093, but inside the container localhost is not your Kafka broker. In addition, as Dmitry Minkovsky mentions, make sure you've set the advertised listener on your broker. This article might help.
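For example (a sketch, reusing the broker host name that your telnet test above reached; substitute your actual Kubernetes service name), run the console producer against the broker instead of localhost:

/usr/bin/kafka-avro-console-producer \
    --broker-list kafka-0.kafka-hs:9093 --topic bar \
    --property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"f1","type":"string"}]}'

Note that the broker's advertised.listeners must also return an address that resolves from the Schema Registry pod, or the client will connect to the bootstrap broker and then fail on the advertised address.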