I have a very strange problem when trying to connect locally to Kafka 0.10.0.0 using the Python client on CentOS.
My connection options are pretty simple and default:
import kafka

kafka_consumer = kafka.KafkaConsumer(
    bootstrap_servers=['localhost:9092'],
    client_id="python-test-consumer"
)
When I manually set the listeners option in Kafka's server.properties file, like:
listeners=PLAINTEXT://localhost:9092
I get kafka.errors.NoBrokersAvailable, despite the fact that I can still easily connect to the Kafka broker with curl or other Linux tools.
Neither advertised.listeners nor the other deprecated advertised options help to solve the problem. Thus, the only configuration that works is one without listeners at all, which is certainly unacceptable, because we need to set up a local cluster somehow.
It seems that the solution to this silly problem is simple and close at hand, but we couldn't figure it out ourselves.
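For reference, here is the minimal check we run against the broker, a sketch assuming the kafka-python client. Pinning api_version is just an experiment on our side, since failed version auto-detection is one more thing that can show up as NoBrokersAvailable:

# Minimal connectivity check, assuming the kafka-python client.
# api_version is pinned as an experiment; auto-detection failures can also
# surface as NoBrokersAvailable.
import kafka

consumer = kafka.KafkaConsumer(       # raises NoBrokersAvailable if bootstrap fails
    bootstrap_servers=['localhost:9092'],
    client_id="python-test-consumer",
    api_version=(0, 10)               # matches the 0.10.0.0 broker used here
)
print(consumer.topics())              # lists topics once the broker is reachable
consumer.close()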
This may sound silly, but the exact same problem happened to me because of this:
I upgraded to Kafka 0.10.0.0 via brew (Mac package manager).
Brew then suggests running it with a one-liner like this:
$ zookeeper-server-start /usr/local/etc/kafka/zookeeper.properties; kafka-server-start /usr/local/etc/kafka/server.properties
Instead of how I had started it before:
$ zkServer start
$ kafka-server-start /usr/local/etc/kafka/server.properties
The suggested approach kept throwing those "No Brokers Available" errors in the client, so I just split the command into two lines:
$ zookeeper-server-start /usr/local/etc/kafka/zookeeper.properties
$ kafka-server-start /usr/local/etc/kafka/server.properties
And everything works like before!
Sorry if this doesn't work for you, but I figured it was worth mentioning.
Related
I want to start my very first Kafka, but when I tried to run this in my kafka_2.13-2.8.0 directory: bin\windows\zookeeper-server-start.bat .. \ .. \config\zookeeper.properties
why does it return \Kafka\kafka_2.13-2.8.0\bin\windows\../ ../config/log4j.properties was unexpected at this time?
I don't know why; I already followed this tutorial to install Kafka: https://www.youtube.com/watch?v=bYVyRh4C94E&t=303s
It's a known error in the Kafka log4j settings, especially if the install path contains spaces or non-alphanumeric characters.
If you really want to run Kafka on Windows, you should use WSL2 anyway, or Docker. Otherwise, assuming you did get the bat file working, you'd eventually run into other errors that crash the broker.
I am using confluent-5.1.1 and I want to use Kafka and Zookeeper. The servers are working fine, but the Kafka log shows the following error:
ERROR [Producer clientId=confluent-metrics-reporter] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available
I have already made changes to server.properties and zookeeper.properties.
Are there any other changes to be made? I am using 2 nodes, and instead of localhost I am using my own IP.
Can someone please tell me what changes I need to make to other files to remove this error?
I was having the same issue, where everything seemed fine but I still got this error. I backtracked my changes around the initial Kafka setup and figured out the steps below need to be taken to fix it.
Make sure that your JDK is installed correctly. I had exactly this issue because the JDK path/JAVA_HOME path in my environment variables was incorrect, so even when the broker started it was not detected by the client. If necessary, I suggest reinstalling the JDK correctly to rule this out.
If that still doesn't work, then change
#listeners=PLAINTEXT://:9092
to
listeners=PLAINTEXT://127.0.0.1:9092
or
listeners=PLAINTEXT://localhost:9092
Please note: if you see that your broker started but you are still having issues, you will very likely see dumps being created in your kafka\bin\windows location.
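To separate broker problems from client problems, a plain port probe is useful first; this is only a sketch using Python's standard library, with 127.0.0.1:9092 assumed as the broker address:

# Quick probe: is anything listening on the broker port at all?
# If this fails, look at the broker/JDK/firewall rather than the client config.
import socket

try:
    with socket.create_connection(("127.0.0.1", 9092), timeout=5):
        print("Port 9092 is reachable, so a broker process is listening")
except OSError as exc:
    print("Cannot reach port 9092:", exc)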
See the following references for how to set up Kafka across multiple nodes:
https://kafka.apache.org/documentation/#basic_ops_cluster_expansion
https://docs.confluent.io/current/installation/installing_cp/zip-tar.html#zk
I have installed Zookeeper and Kafka separately and have started Zookeeper successfully. When I try to start Kafka on Windows using the command,
C:\kafka_2.12-2.3.0\bin\windows>kafka-server-start.bat ../../config/server.properties
I keep getting,
\Novosoft\C2J\Bin\c2jruntime.zip was unexpected at this time.
Not sure what's causing this.
My environment variables had the CLASSPATH variable set to C:\Program Files (x86)\Novosoft\C2J\Bin\c2jruntime.zip.
Kafka apparently did not like it. I removed it and it worked.
The Kafka server starts now.
To date I have either used an existing professional Hadoop installation with components already running, or installed Kafka and used the bundled Zookeeper in a native VM.
I am trying to get the mapR Community Edition Sandbox to run now.
There is a KAFKA library on mapR, but there is no Kafka process shown when using jps. Seems odd? I managed to get Kafka to start once.
There is a Zookeeper service on mapR but it uses port 5181, not 2181.
Kafka uses port 9092.
The log.dirs for Kafka was set to /tmp/kafka-logs; I changed that to /opt/kafka-logs.
The dataDir was also set to /tmp/zookeeper; I changed that to /opt/zookeeper.
I also changed the Zookeeper port to 5181, as that is what mapR uses.
It ran once, then I restarted and I still get this type of error:
java.io.FileNotFoundException: /tmp/kafka-logs/.lock (Permission denied)
I have done chmod 777 where required, I think, but I changed the paths from /tmp to /opt/... So why is it picking up /tmp again?
I have the impression that it keeps pointing to /tmp regardless of the updates to the configuration.
I also see a warning - although I do not think this is an issue:
[2019-01-14 13:26:46,355] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
Maybe because of mapR Streams I cannot influence it to run natively?
OK, I could delete the question since I solved it, but for those on mapR, here is what I deduced:
You need to update the port from 2181 to 5181 in server.properties immediately; in this case we integrate with an existing Zookeeper instance.
Likewise, update the log.dirs for Kafka from /tmp/kafka-logs to /opt/kafka-logs as soon as possible.
Likewise, update the dataDir from /tmp/zookeeper to /opt/zookeeper as soon as possible.
Trying to fix these later instead leads to all sorts of issues; I ended up just re-installing and doing it right from scratch. A quick way to verify that the values actually took effect is sketched below.
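As a sanity check that the edits landed in the files Kafka actually reads, a small script like this prints the effective values; the file paths below are assumptions, so point them at the files you really pass to the start scripts:

# Print the settings that matter for the /tmp vs /opt problem.
# The paths below are assumptions - adjust them to your installation.
def read_props(path):
    props = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, value = line.split("=", 1)
                props[key.strip()] = value.strip()
    return props

server = read_props("/opt/kafka/config/server.properties")          # assumed path
zookeeper = read_props("/opt/zookeeper/conf/zookeeper.properties")  # assumed path

print("log.dirs          =", server.get("log.dirs"))
print("zookeeper.connect =", server.get("zookeeper.connect"))
print("dataDir           =", zookeeper.get("dataDir"))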
mapR has a faster version called mapR Streams, which implements Kafka. I did not want to use that for what I was doing, but the mapR Sandbox has a lot of up-to-date components straight out of the box, certainly compared to Cloudera.
I am facing the below error message when trying to connect and see the topic/consumer details of one of the Kafka clusters we have.
We have 3 brokers in the cluster, which I am able to see, but not the topics and their partitions.
Note: I have Kafka 1.0 and the Kafka Tool version is 2.0.1.
I had the same issue on my MacBook Pro. The tool was using "tshepo-mbp" as the hostname, which it could not resolve. To get it to work, I added 127.0.0.1 tshepo-mbp to the /etc/hosts file.
Kafka Tool is most likely using the hostname to connect to the broker and cannot reach it. You may be connecting to the Zookeeper host by IP address, but make sure you can connect to/ping the hostname of the broker from the machine running Kafka Tool.
If you cannot ping the broker, either fix the network issues or, as a workaround, edit the hosts file on your client to tell it how to reach the broker by its name.
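A quick way to test the hostname theory from the machine running the tool is to try resolving the broker's name yourself; a small sketch, with "tshepo-mbp" standing in for whatever hostname your broker advertises:

# Can this machine resolve the hostname the broker advertises?
# "tshepo-mbp" is just the example hostname from the answer above.
import socket

hostname = "tshepo-mbp"
try:
    print(hostname, "resolves to", socket.gethostbyname(hostname))
except socket.gaierror:
    print("Cannot resolve", hostname, "- add it to /etc/hosts or fix DNS")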
This issue occurs if you have not set the listeners and advertised.listeners properties in the server.properties file.
For example:
config/server.properties
...
listeners=PLAINTEXT://:9092
...
advertised.listeners=PLAINTEXT://<public-ip/host-name>:9092
...
To fix this issue, we need to change the server.properties file.
$ vim /usr/local/etc/kafka/server.properties
Here update the listeners value from
listeners=PLAINTEXT://:9092
to
listeners=PLAINTEXT://localhost:9092
Source: https://medium.com/@Ankitthakur/apache-kafka-installation-on-mac-using-homebrew-a367cdefd273
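To confirm the change from the client side, a small kafka-python sketch can try both the local and the advertised endpoint; "my-broker-host" below is only a placeholder for whatever you put in advertised.listeners:

# Try both endpoints to see which listener actually accepts client connections.
# "my-broker-host" is a placeholder hostname, not a value from this thread.
import kafka
from kafka.errors import NoBrokersAvailable

for endpoint in ["localhost:9092", "my-broker-host:9092"]:
    try:
        consumer = kafka.KafkaConsumer(bootstrap_servers=[endpoint])
        print(endpoint, "-> OK, topics:", consumer.topics())
        consumer.close()
    except NoBrokersAvailable:
        print(endpoint, "-> NoBrokersAvailable")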
For better visibility (even though I already commented the same earlier in this thread):
In my case, when I used Kafka Tool from my local machine, the tool tried to reach the Kafka broker port, which was blocked by my cluster admins for my local machine; that is the reason I was not able to connect.
Resolution:
Either ask the admins to open the port for the intranet if they can, or, if they cannot, use tunnelling to reach the port for testing purposes or for the time being.
Hope this helps a few people.