I need to configure a Kafka cluster across different machines, but it does not work. When I start the producer and the consumer, the following errors are displayed:
Producer Error Output
Consumer Error Output
Can you help me, please?
To get started, I would recommend reading https://kafka.apache.org/documentation/#quickstart. By the way, in your case you haven't started Kafka yet.
You should start the services in the following order (a quick producer check follows the list):
ZooKeeper
Kafka
producer
consumer
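Once ZooKeeper and the broker are up, a minimal Java producer is a quick way to confirm step 3 before wiring in your own code. This is only a sketch; the broker address localhost:9092 and the topic name test are assumptions, so adjust them to your setup.

// Minimal producer check. "localhost:9092" and the topic "test" are assumptions.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Must point at a broker that is already running (step 2 above).
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // get() blocks until the broker acknowledges the write, so a hang or
            // TimeoutException here means the broker is not reachable.
            producer.send(new ProducerRecord<>("test", "key", "hello")).get();
            System.out.println("Broker reachable, message written to 'test'");
        }
    }
}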
Related
We have a Kafka cluster with 3 broker nodes. When all of them are up and running, the consumer is able to read data from Kafka. However, if I stop all Kafka servers and bring up only 2 of them, excluding the one that was stopped last, the consumer is unable to connect to the Kafka cluster.
What could be the reason behind this? Thanks in advance.
I would guess the problem could be offsets.topic.replication.factor on the broker, which defaults to 3, while you are now running a cluster with only 2 brokers.
This setting controls the internal topic where consumers store their offsets, and that topic was created with a replication factor of 3 on the first run.
When, on the second run, you start only 2 brokers, that topic cannot be fully replicated, and this could be the problem now.
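If you want to verify this, below is a minimal sketch using the Kafka AdminClient to describe the internal __consumer_offsets topic and see which partitions still have a live leader and in-sync replicas; the bootstrap addresses broker1:9092 and broker2:9092 are placeholders for your two running brokers.

// Sketch: inspect the internal __consumer_offsets topic. Broker addresses are placeholders.
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class OffsetsTopicCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092,broker2:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription desc = admin
                    .describeTopics(Collections.singletonList("__consumer_offsets"))
                    .all().get()
                    .get("__consumer_offsets");

            // A partition that reports no live leader or an empty ISR cannot serve
            // offset commits/fetches, which blocks the consumer group.
            desc.partitions().forEach(p ->
                    System.out.printf("partition %d leader=%s replicas=%s isr=%s%n",
                            p.partition(), p.leader(), p.replicas(), p.isr()));
        }
    }
}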
Currently we are using Apache NiFi to consume messages via a Kafka consumer; the output of the Kafka consumer is connected to a Hive processor.
I'm looking into how to run a Kafka consumer instance on a NiFi cluster.
I have a 3-node NiFi cluster and a Kafka topic with 3 partitions. I want the Kafka consumer to run on each node so that each consumer can poll messages from one of the topic partitions.
After I start the Kafka consumer processor, I can see that the consumer always runs on a single node rather than on all nodes.
Is there any configuration that I missed?
NiFi uses the Apache Kafka client, which is what performs the assignment of consumers to partitions. When you start the processor, assuming you have it set to 1 concurrent task, you should have 1 consumer on each node of your cluster, and each consumer should be assigned a different partition.
https://bryanbende.com/development/2016/09/15/apache-nifi-and-apache-kafka
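For reference, this is roughly the mechanism at work: each node runs a consumer with the same group id, and the Kafka group coordinator assigns each member one partition. The sketch below is standalone Java rather than NiFi code; the topic name my-topic, the group id nifi-like-group, and the broker address are assumptions.

// Sketch of group-based partition assignment. Topic, group id, and broker address are assumptions.
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class GroupMember {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        props.put("group.id", "nifi-like-group");   // same group id on every node
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                // With 3 instances of this program and 3 partitions, the coordinator
                // assigns each instance exactly one partition.
                System.out.println("assigned partitions: " + consumer.assignment());
                for (ConsumerRecord<String, String> r : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            r.partition(), r.offset(), r.value());
                }
            }
        }
    }
}

Running three instances of this program (one per node) against a 3-partition topic should show each instance assigned a different partition, which is what the NiFi processor relies on.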
I have two Kafka clusters. One is a two-broker cluster with replication factor 2, and the second is a single-broker cluster.
I sometimes observe the exception below in the Kafka controller.log. What could be the possible reason? Please help me.
java.nio.channels.ClosedSelectorException
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:83)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at org.apache.kafka.common.network.Selector.select(Selector.java:489)
at org.apache.kafka.common.network.Selector.poll(Selector.java:298)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:349)
at kafka.utils.NetworkClientBlockingOps$.recursivePoll$1(NetworkClientBlockingOps.scala:135)
at kafka.utils.NetworkClientBlockingOps$.pollContinuously$extension(NetworkClientBlockingOps.scala:142)
at kafka.utils.NetworkClientBlockingOps$.blockingSendAndReceive$extension(NetworkClientBlockingOps.scala:108)
at kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:192)
at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:184)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
I have observed that exception when there is a connection problem. The main reason could be the ZooKeeper connection. Did you try enabling debug logging on the Kafka broker and reading the logs? You need to see the detailed logs from the server.
Thank you, everyone, for all your assistance; my project is now successfully integrated with Kafka. While testing, I came across an issue for which I need a little assistance.

My producer and consumer were both pointing to one Kafka broker, say KB-1, in a cluster of brokers with replication factor 1. KB-1 died due to some internal reason, so we switched the IPs in our producer and consumer configuration to another broker of the same cluster, KB-2. This worked for consuming all the data and processing the necessary alerts, but when I tried to produce data with the KB-2 IP in bootstrap.servers, it failed with the following error: org.apache.kafka.common.errors.TimeoutException

Please also explain single point of failure, if possible.
Thank you for the help.
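For what it's worth, with a replication factor of 1 each partition has exactly one replica, so any partition whose only replica lived on KB-1 has no leader while KB-1 is down, and produce requests to those partitions eventually fail with a TimeoutException; that is the single point of failure in this setup. Below is a minimal sketch for checking partition leaders from KB-2; the topic name alerts and the address KB2_HOST:9092 are placeholders, not names from the question.

// Sketch: list which partitions of a topic currently have a leader.
// "alerts" and "KB2_HOST:9092" are placeholders.
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class LeaderCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "KB2_HOST:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            admin.describeTopics(Collections.singletonList("alerts"))
                 .all().get()
                 .get("alerts")
                 .partitions()
                 .forEach(p -> System.out.printf(
                         "partition %d leader=%s replicas=%s%n",
                         p.partition(), p.leader(), p.replicas()));
            // Partitions whose only replica is on the dead broker will show no usable
            // leader; the producer times out when writing to them.
        }
    }
}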
I am trying to create a tool that can kill Kafka producers and consumers randomly in order to simulate a production environment. Is there a way to find the active producers and consumers in a Kafka cluster? I want to know exactly which thread on which host is acting as the producer or the consumer.
I started by getting the IP address of at least one Kafka broker in the HDP cluster.
I checked the open connections on the Kafka broker on its configured port (by default it is 6667 on HDP) and retrieved the IP addresses of the connected machines.
Using their IP addresses, I found the processes that were connected to the Kafka broker.
That is how I figured out which machines were the Kafka producers and consumers.
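As a complement to inspecting sockets, a sketch using the Kafka AdminClient can enumerate the active consumer groups and their members, including each member's client id and host; the bootstrap address broker1:6667 is a placeholder.

// Complementary sketch: list active consumer groups and their members.
// The bootstrap address is a placeholder.
import java.util.List;
import java.util.Properties;
import java.util.stream.Collectors;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ConsumerGroupListing;

public class ListConsumers {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:6667");

        try (AdminClient admin = AdminClient.create(props)) {
            // All consumer groups known to the cluster.
            List<String> groupIds = admin.listConsumerGroups().all().get().stream()
                    .map(ConsumerGroupListing::groupId)
                    .collect(Collectors.toList());

            // For each group, print its live members with client id and host.
            admin.describeConsumerGroups(groupIds).all().get()
                    .forEach((groupId, desc) -> desc.members().forEach(m ->
                            System.out.printf("group=%s clientId=%s host=%s%n",
                                    groupId, m.clientId(), m.host())));
        }
    }
}

Producers are not tracked as named groups by the broker, so for them the connection-inspection approach above remains the practical option.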