Kafka: java.nio.channels.ClosedSelectorException - apache-kafka

I have two Kafka clusters: one is a two-broker cluster with replication factor 2, and the other is a single-broker cluster.
I sometimes see the exception below in the Kafka controller.log. What could be the possible reason? Please help.
java.nio.channels.ClosedSelectorException
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:83)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at org.apache.kafka.common.network.Selector.select(Selector.java:489)
at org.apache.kafka.common.network.Selector.poll(Selector.java:298)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:349)
at kafka.utils.NetworkClientBlockingOps$.recursivePoll$1(NetworkClientBlockingOps.scala:135)
at kafka.utils.NetworkClientBlockingOps$.pollContinuously$extension(NetworkClientBlockingOps.scala:142)
at kafka.utils.NetworkClientBlockingOps$.blockingSendAndReceive$extension(NetworkClientBlockingOps.scala:108)
at kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:192)
at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:184)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)

I have observed this problem when there is a connection issue; the main suspect is usually the ZooKeeper connection. Have you tried enabling debug logging on the Kafka broker and reading the logs? You need the detailed server logs to see what is actually going on.
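For reference, broker-side debug logging is normally turned on in config/log4j.properties; a minimal sketch (exact logger names can vary between Kafka versions):

# config/log4j.properties -- raise broker and controller logging to DEBUG
log4j.logger.kafka=DEBUG
log4j.logger.org.apache.kafka=DEBUG
log4j.logger.kafka.controller=DEBUG

Restart the broker after changing the file, then watch server.log and controller.log around the time the exception shows up.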

Related

Unable to connect Kafka consumer to Kafka cluster

We have a Kafka cluster with 3 broker nodes. When all of them are up and running, the consumer is able to read data from Kafka. However, if I stop all Kafka servers and then bring up only 2 of them (leaving out the one that was stopped last), the consumer is unable to connect to the Kafka cluster.
What could be the reason behind this? Thanks in advance.
I would guess that the problem is offsets.topic.replication.factor on the brokers, which defaults to 3, while you are now running a cluster with only 2 brokers.
This is the internal topic where consumers store their offsets while consuming, and it was created with a replication factor of 3 on the first run.
When, on the second run, you start only 2 brokers, that could now be the problem.
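For reference, this is a broker setting in server.properties; a minimal sketch of what it could look like for a two-broker cluster (note that it only applies when the __consumer_offsets topic is first created, so a topic that already exists keeps its original replication factor):

# server.properties -- replication factor for the internal offsets topic
offsets.topic.replication.factor=2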

How to retrieve ZooKeeper host details from a Kafka broker list

I have a list of brokers for my Kafka cluster. How can I get the ZooKeeper host using the broker list?
If I got your question right, you want to register your brokers with a ZooKeeper cluster. This actually works the other way round: you have to tell each broker where your ZooKeeper server (or cluster) can be found. Have a look at the broker configuration setting zookeeper.connect. Together with broker.id, it registers each broker with the ZooKeeper cluster.
Example:
broker.id=1
zookeeper.connect=zk-host-1:2181,zk-host-2:2181,zk-host-3:2181
Hope that answers your question.
You cannot.
ZooKeeper is intended to be abstracted away; there is no API or method to get the ZooKeeper ensemble a broker is connected to.
You'll need to SSH to a broker in that list (which you could do from Java).
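As an illustration only: if you can copy a broker's server.properties to the machine running your code (for example over SSH/SCP), the ZooKeeper connection string can be read from it with plain Java. This is a minimal sketch; the file path is just an assumption for the example.

import java.io.FileInputStream;
import java.util.Properties;

public class FindZooKeeperFromBroker {
    public static void main(String[] args) throws Exception {
        // Assumed path: point this at the broker's actual server.properties
        // (e.g. a copy fetched from the broker over SSH/SCP).
        Properties brokerConfig = new Properties();
        try (FileInputStream in = new FileInputStream("/opt/kafka/config/server.properties")) {
            brokerConfig.load(in);
        }
        // zookeeper.connect holds the host:port list of the ZooKeeper ensemble the broker uses.
        System.out.println(brokerConfig.getProperty("zookeeper.connect"));
    }
}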

What if a Kafka broker cannot connect to ZooKeeper?

Say I have 3 partitions with replication factor 3. As I understand it, all the brokers have to connect to the same ZooKeeper. What if they can't, due to network issues? Will replication continue when the network is available again?
If ZK is down, your Kafka cluster will have limited functionality. For details, see "How does Kafka depend on Zookeeper?"
Kafka requires ZooKeeper (ZK). If ZK is down, the entire Kafka cluster will be effectively "down" (meaning: almost unusable), since ZK is used for a number of things such as managing internal topics.
Once ZK becomes available to the Kafka cluster again, the cluster will be operational.

Kafka producer unable to produce after one broker dies in a cluster

Thank you, everyone, for all your assistance; my project is now successfully integrated with Kafka. While testing I ran into an issue for which I need a little help. My producer and consumer were both pointing to one Kafka broker, say KB-1. I have a cluster of brokers with replication factor 1. KB-1 died due to some internal reason, so we switched the IPs of our producer and consumer to another broker of the same cluster, KB-2. The consumer was able to consume all the data and process the necessary alerts, but when I tried to produce data through the producer with KB-2's IP in bootstrap.servers, it failed with the following error: org.apache.kafka.common.errors.TimeoutException
Please also explain the single point of failure, if possible.
Thank you for the help.
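As a general note rather than a full diagnosis of the timeout: a producer is usually given more than one broker in bootstrap.servers so that discovery does not depend on a single address; a minimal sketch with placeholder hostnames:

bootstrap.servers=kb-1:9092,kb-2:9092

Keep in mind that with replication factor 1, any partition whose only replica lived on the dead broker remains unavailable no matter which broker the client bootstraps from.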

What happens if Zookeeper fails completely?

We have set up a Kafka/ZooKeeper cluster consisting of 3 brokers. We have one producer sending messages to one specific Kafka topic and a few consumer groups reading from said topic. Those consumers perform a leader election among themselves via ZooKeeper (independent of Kafka).
The versions used are:
Kafka: 0.9.0.1
Zookeeper: 3.4.6 (included in the Kafka-Package)
All processes are managed by Supervisor. So far, everything works just fine. What we tried now (for testing purposes) was to simply kill off all Zookeeper processes and see what happens.
As we expected, our consumer processes couldn't connect to Zookeeper anymore. But unexpectedly, the Kafka Brokers still worked. Our producer didn't complain at all and was still able to write into the topic. While I couldn't use kafka/bin/kafka-topics.sh or similar, since they all require a zookeeper-parameter, I could still see the actual size of the topic-log grow. After restarting the zookeeper processes, everything again worked just like before.
What we couldn't figure out is: what actually happened there?
We thought Kafka would require a working ZooKeeper connection, and we couldn't find any explanation for this behaviour online.
When you have a single ZooKeeper node and it becomes unreachable, the broker will not be able to contact ZooKeeper; after the broker discovers that ZooKeeper is not reachable, the broker itself will also become unreachable, and hence so will the producer and consumer.
The producer then starts dropping (rejecting) records. On the consumer side, a record that was read but not yet acknowledged may end up being processed again once the broker is back up and ready.
With a 3-node ZooKeeper ensemble, one node failure is acceptable because the quorum is still satisfied, but it cannot tolerate 2 node failures, which would lead to the consequences above.
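To make those rejected records visible instead of silent, the producer's send() can be given a callback that reports failures; a minimal Java sketch (broker address and topic name are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SendWithErrorCallback {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker-1:9092"); // placeholder address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", "key", "value"), (metadata, exception) -> {
                // exception is non-null when the record could not be written,
                // e.g. a TimeoutException while the cluster is unavailable.
                if (exception != null) {
                    System.err.println("Record was rejected: " + exception);
                } else {
                    System.out.println("Written to " + metadata.topic() + "-" + metadata.partition());
                }
            });
        }
    }
}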