Continuous warnings in Kafka server.log - apache-kafka

I started ZooKeeper and the Kafka server on a Linux machine. Both servers started successfully. After that I created a "test" topic, and everything works fine with the console producer and console consumer. But when I try to send an event to the Kafka server from a remote machine, it fails. After some investigation I added the configuration below to server.properties:
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://xx.xx.xx.xx:9092
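For reference, my understanding of what the two settings do, as a commented sketch (xx.xx.xx.xx masks the broker host's real IP, so treat the values as placeholders):
# bind to all interfaces so remote producers can reach the broker
listeners=PLAINTEXT://0.0.0.0:9092
# address handed back to clients (and used by the controller) in metadata;
# it must resolve and be reachable from the broker host itself, not only from remote machines
advertised.listeners=PLAINTEXT://xx.xx.xx.xx:9092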
Since then I continuously get the following warning in the Kafka logs:
[2018-05-25 14:48:27,685] WARN [Controller id=0, targetBrokerId=0] Connection to node 0 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
I also see exceptions in controller.log:
[2018-05-25 14:48:27,583] WARN [RequestSendThread controllerId=0] Controller 0's connection to broker xx.xx.xx.xx:9092 (id: 0 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
java.io.IOException: Connection to xx.xx.xx.xx:9092 (id: 0 rack: null) failed.
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70)
at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:271)
at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:225)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
Please help me solve this issue.
Thanks in advance.

Related

Why must the controller be localhost when running in KRaft mode?

In KRaft mode, the Kafka broker does not start unless the controller listens on localhost. For example, none of the following works on my laptop:
listeners=PLAINTEXT://10.0.0.48:9092,CONTROLLER://10.0.0.48:9093
listeners=PLAINTEXT://192.168.56.1:9092,CONTROLLER://192.168.56.1:9093
listeners=PLAINTEXT://localhost:9092,CONTROLLER://192.168.56.1:9093
If I replace the controller IP address with localhost in any of the above, kafka-server-start.sh starts successfully.
I get the following logs continuously in the failure scenario:
[2022-10-27 15:06:19,885] WARN [BrokerToControllerChannelManager broker=1 name=heartbeat] Connection to node 1 (localhost/127.0.0.1:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2022-10-27 15:06:19,885] INFO [BrokerToControllerChannelManager broker=1 name=heartbeat]: Recorded new controller, from now on will use broker localhost:9093 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
[2022-10-27 15:06:19,935] INFO [BrokerToControllerChannelManager broker=1 name=heartbeat]: Recorded new controller, from now on will use broker localhost:9093 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
[2022-10-27 15:06:19,936] INFO [BrokerToControllerChannelManager broker=1 name=heartbeat] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
Until I get the following error and kafka-server-start.sh exits:
[2022-10-27 15:06:22,804] ERROR Exiting Kafka due to fatal exception during startup. (kafka.Kafka$)
java.util.concurrent.CancellationException
at java.base/java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2468)
at kafka.server.BrokerLifecycleManager$ShutdownEvent.run(BrokerLifecycleManager.scala:485)
at org.apache.kafka.queue.KafkaEventQueue$EventHandler.run(KafkaEventQueue.java:174)
at java.base/java.lang.Thread.run(Thread.java:832)
It seems like it expects the controller to be localhost. If this is the case, why?
I had to change controller.quorum.voters to match what's in listeners:
controller.quorum.voters=1@192.168.56.1:9093
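For completeness, a minimal single-node KRaft sketch in which the controller listener and the quorum voter line agree (the IP 192.168.56.1 and node id 1 come from the question; the remaining values are typical settings and may need adjusting):
process.roles=broker,controller
node.id=1
controller.quorum.voters=1@192.168.56.1:9093
listeners=PLAINTEXT://192.168.56.1:9092,CONTROLLER://192.168.56.1:9093
advertised.listeners=PLAINTEXT://192.168.56.1:9092
controller.listener.names=CONTROLLER
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT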

Kafka client reconnects again and again after a disconnect; can I disable this?

The client keeps reconnecting by itself. Is there any way to disable this?
Should I close the Kafka client?
[2022-06-30 16:17:51,332] WARN [Consumer clientId=consumer-xxx, groupId=xxx] Bootstrap broker xxx:xxx (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2022-06-30 16:17:52,387] WARN [Consumer clientId=consumer-pg-hudi001_5-1, groupId=pg-hudi001_5] Connection to node -1 (xxx) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
The consumer will constantly poll and periodically refresh its metadata about which brokers are available in the cluster.
The only way to stop it from trying to connect is to close() the consumer, yes.
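If it helps, a minimal sketch with the plain Java consumer (not tied to any particular framework; the broker address, group id and topic are placeholders) of shutting the consumer down cleanly so it stops trying to reconnect:
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ConsumerShutdownSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");   // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");          // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        Thread mainThread = Thread.currentThread();
        // wakeup() is the thread-safe way to break the consumer out of poll()
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            consumer.wakeup();
            try { mainThread.join(); } catch (InterruptedException ignored) { }
        }));

        try {
            consumer.subscribe(Collections.singletonList("example-topic")); // placeholder
            while (true) {
                consumer.poll(Duration.ofMillis(500)).forEach(r -> System.out.println(r.value()));
            }
        } catch (WakeupException e) {
            // expected on shutdown, nothing to do
        } finally {
            consumer.close(); // releases the network connections; no further reconnect attempts
        }
    }
}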

Kafka consumer should fail on "Bootstrap broker disconnected"

When a Kafka consumer cannot access the bootstrap broker, it tries to reconnect indefinitely with the following messages:
WARN NetworkClient - [Consumer clientId=consumer-testGroup-1, groupId=testGroup] Connection to node -1 (localhost/127.0.0.1:9999) could not be established. Broker may not be available.
WARN NetworkClient - [Consumer clientId=consumer-testGroup-1, groupId=testGroup] Bootstrap broker localhost:9999 (id: -1 rack: null) disconnected
What I want is for the consumer to throw an exception and abort execution. I couldn't find a property in the docs to limit the retries.
Is there a recommended way to implement this behaviour or a property I overlooked?
I am using the KafkaReceiver class from project reactor.
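The only workaround I can think of is a fail-fast probe at startup: a sketch (not specific to KafkaReceiver; the bootstrap address and timeout are placeholders) that checks broker reachability with the AdminClient and aborts instead of letting the consumer retry forever:
import java.util.Collection;
import java.util.Properties;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.Node;

public class BrokerProbe {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9999"); // placeholder
        props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "5000");

        try (Admin admin = Admin.create(props)) {
            // fails with a TimeoutException if no broker answers within the timeout
            Collection<Node> nodes = admin.describeCluster().nodes().get(10, TimeUnit.SECONDS);
            System.out.println("Reachable brokers: " + nodes);
        } catch (Exception e) {
            // abort the application instead of retrying indefinitely
            throw new IllegalStateException("Bootstrap broker unreachable, aborting", e);
        }
    }
}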

MirrorMaker 2 not able to connect to target cluster broker

I have two Kafka clusters on AWS MSK (in the same environment and region). I have a Kafka Connect cluster set up on the destination cluster and have set up a MirrorMaker connector to run. The connector submission is fine and there are no errors.
When I check the status of the connector, it says RUNNING:
{"name":"mirror-maker-test-connector","connector":{"state":"RUNNING","worker_id":"<ip>:<port>"},"tasks":[task_list],"type":"source"}
However, I see the following exception in the logs:
[2022-01-12 19:46:33,772] DEBUG [Producer clientId=connector-producer-mirror-maker-test-connector-0] Connection with b-2.<broker_ip> disconnected (org.apache.kafka.common.network.Selector)
java.io.EOFException
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:120)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576)
at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:551)
at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243)
at java.base/java.lang.Thread.run(Thread.java:829)
[2022-01-12 19:46:33,773] DEBUG [Producer clientId=connector-producer-mirror-maker-test-connector-0] Node -1 disconnected. (org.apache.kafka.clients.NetworkClient)
[2022-01-12 19:46:33,773] WARN [Producer clientId=connector-producer-mirror-maker-test-connector-0] Bootstrap broker b-2.<broker_ip>:9094 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
I am able to connect to the specified broker with netcat from within the Kafka Connect k8s pod.
Has anyone faced this issue before?
I got it to work: I had to add SSL properties for both the consumer and the producer side when submitting the MirrorMaker connector:
"target.cluster.security.protocol": "SSL",
"target.cluster.ssl.truststore.location":"<certs_path>",
"target.cluster.ssl.truststore.password": "<password>"
"source.cluster.security.protocol": "SSL",
"source.cluster.ssl.truststore.location": "<certs_path>",
"source.cluster.ssl.truststore.password": "<password>"

Kafka Connect: connection to the Kafka broker could not be established

I am on a Mac and installed ZooKeeper and Kafka through
brew install confluent-platform
I start the services with the following commands:
zookeeper-server-start /usr/local/etc/kafka/zookeeper.properties
kafka-server-start /usr/local/etc/kafka/server.properties
connect-distributed /usr/local/etc/kafka/connect-distributed.properties
However, the Connect worker keeps logging messages such as:
[2020-08-05 09:53:40,222] WARN [Producer clientId=inventory-connector2-dbhistory] Connection to node -1 (kafka/223.82.248.117:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:756)
[2020-08-05 09:53:40,230] WARN [Producer clientId=inventory-connector2-dbhistory] Bootstrap broker kafka:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient:1024)
[2020-08-05 09:53:40,427] WARN [Producer clientId=inventory-connector-dbhistory] Connection to node -1 (kafka/223.82.248.117:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:756)
I started the Kafka broker on localhost, but as the messages show, the broker's address resolves to 223.82.248.117:9092.
How can I fix this?
You would need to set advertised.listeners in server.properties to localhost:9092.
Then point bootstrap.servers in connect-*.properties there as well.
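In other words, something along these lines (a sketch; the file paths follow the Homebrew layout used in the question):
# /usr/local/etc/kafka/server.properties
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://localhost:9092

# /usr/local/etc/kafka/connect-distributed.properties
bootstrap.servers=localhost:9092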