I set up Kafka on my localhost and then tried to read the broker's JMX metrics on port 1999, but whatever I tried I couldn't reach my consumer or producer.
I am able to connect to port 9092 and read MBeans in JConsole, but I can't send any query to it from Python using subprocess.
I used the following JMX connection URL, "service:jmx:rmi:///jndi/rmi://localhost:1999/jmxrmi", with the jmxquery and subprocess Python libraries.
I wrote down every step I am taking in this repository:
https://github.com/mirkan1/kafka_monitoring.git
I am starting Kafka like this:
https://github.com/mirkan1/kafka_monitoring/blob/master/start_kafka.sh
and starting consumers like this:
https://github.com/mirkan1/kafka_monitoring/blob/master/start_consumer.sh
also the producer:
https://github.com/mirkan1/kafka_monitoring/blob/master/start_producer.py
This is the main worker that I am trying to get working:
https://github.com/mirkan1/kafka_monitoring/blob/master/kafka_consumer_watch.py
more info about what I am trying to accomplish is here: https://github.com/mirkan1/kafka_monitoring/tree/master/Kafka_Complete_Monitoring
I found that the problem was that I was using the same port, 9092, for everything. After I assigned different ports to the consumer and producer, it worked as intended and I was able to read MBean data over the JMX connection.
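For anyone wiring this up, here is a rough sketch of querying broker MBeans from Python with jmxquery. The port and MBean name follow the setup described above, but are assumptions about your broker configuration; jmxquery also needs a Java runtime on the PATH, since it shells out to Java:

```python
# Sketch: query Kafka broker MBeans over JMX from Python using jmxquery.
# Assumptions: the broker was started with JMX enabled on port 1999
# (e.g. by exporting JMX_PORT=1999 before kafka-server-start.sh), and the
# MBean name below is the standard broker throughput metric.

def jmx_service_url(host: str, port: int) -> str:
    """Build the JMX service URL for the given host and port."""
    return f"service:jmx:rmi:///jndi/rmi://{host}:{port}/jmxrmi"

if __name__ == "__main__":
    from jmxquery import JMXConnection, JMXQuery

    conn = JMXConnection(jmx_service_url("localhost", 1999))
    metrics = conn.query([
        JMXQuery("kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec")
    ])
    for metric in metrics:
        print(metric.to_query_string(), metric.value)
```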
I have a problem with my Kafka server.
I run Apache Druid on port 2080 and Kafka's ZooKeeper on port 2181, to avoid a ZooKeeper port clash between Druid and Kafka.
I don't have any problem with Druid, which runs correctly, and Kafka's ZooKeeper runs well too.
But when I try to start the Kafka server with:
./bin/kafka-server-start.sh config/server.properties
and this configuration:
zookeeper.connect=localhost:9092
I get an error saying it will not attempt to authenticate using SASL (unknown error), as in the screenshot below (trying to connect to the Kafka server and getting an error).
It used to work well, but I don't know why it no longer does. Can anyone help me solve this issue?
I assume you are referring to the zookeeper.connect property in the server.properties Kafka configuration file.
zookeeper.connect should point to the ZooKeeper connection string or quorum. From your example, I guess the zookeeper.connect property is pointing at the Kafka server port, 9092, itself. Since there is no ZooKeeper server running at the given address localhost:9092, the ZooKeeper client fails to connect and throws the error below:
Opening socket connection to server localhost/0:0:0:0:0:0:0:1:9092. Will not attempt to authenticate using SASL (unknown error)
The ZooKeeper server port, configured by the clientPort property, can be found in the zookeeper.properties configuration file.
Please try the following setting:
zookeeper.connect=localhost:<clientPort, 2181 by default>
The ZooKeeper connection string is a comma-separated list of host:port pairs, each corresponding to a ZooKeeper server.
Examples:
zookeeper.connect=localhost:2181
zookeeper.connect=127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183
zookeeper.connect=127.0.0.1:2181
You can also append an optional chroot string to the connection string to specify the root directory for all Kafka znodes.
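For example, with a chroot the same setting would look like this (the /kafka path is just an illustration):

zookeeper.connect=localhost:2181/kafka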
I've started learning some big data tools for a new project, and right now I'm on Kafka and Zookeeper.
I have them both installed on my local machine, and I can start them up and produce and consume messages just fine. Now I want to try it with two machines: one with a Kafka broker, ZooKeeper, and a producer, and the other with a consumer. Let's call them Machine A and Machine B.
Machine A runs the ZooKeeper server, the broker, and a producer. Machine B runs a consumer. From what I think I understand, I should be able to set up the consumer to listen to a topic from the producer on Machine A, using ZooKeeper. Since both machines are on the same network (i.e. my local home network), I thought I could change the Kafka broker's server.properties to use my static IP address for Machine A and then have the consumer on Machine B connect to it.
My problem is that ZooKeeper keeps spinning up on localhost, binding to 0.0.0.0/0.0.0.0:2181, so when my broker tries to connect to it using my static IP address (i.e. 192.168.x.x), it times out. I have looked all over for a solution, but I cannot find anything that tells me how to configure the ZooKeeper server to start on a different IP address.
Maybe my understanding of these technologies is simply wrong, but I thought this would be a fairly simple thing to do. Does anyone know a way to resolve this? Or, if I'm doing it completely wrong, what is the correct approach?
zookeeper keeps spinning up on localhost, and connecting to 0.0.0.0/0.0.0.0:2181
Well, that is the bind address.
You also need to (preferably) have a static IP for ZooKeeper, then set zookeeper.connect within Kafka's server.properties file to reach that other machine's external address.
On the ZooKeeper side, you would make sure you have the myid file and a line in the property file that looks like this (without the double brackets):
server.{{ myid }}={{ ip_address }}:2888:3888
You wouldn't find this in the Kafka documentation, but it is in the Zookeeper documentation
However, if Kafka and Zookeeper are on the same machine, this isn't necessary.
Your external consumer should set the bootstrap.servers property to the Kafka IP address(es) with port 9092.
Your problem might be related instead to the advertised.listeners setting within Kafka.
For example, start with listeners=PLAINTEXT://:9092
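A minimal server.properties sketch for that setup, assuming Machine A's static address is the 192.168.x.x mentioned in the question:

listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://192.168.x.x:9092

listeners is the bind address, while advertised.listeners is the address the broker hands back to clients, so the latter must be reachable from Machine B.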
As of Zookeeper 3.3.0 (see Advanced Configuration):
clientPortAddress : New in 3.3.0: the address (ipv4, ipv6 or hostname)
to listen for client connections; that is, the address that clients
attempt to connect to. This is optional, by default we bind in such a
way that any connection to the clientPort for any
address/interface/nic on the server will be accepted
So you could use:
clientPortAddress=127.0.0.1
I'm new to Kafka.
I have a Linux machine on which port 2552 receives a data stream from an external server.
I want to use a Kafka producer to listen on that port and send the stream of data to a topic.
This is a complete hack, but would work for a sandbox example:
nc -l 2552 | ./bin/kafka-console-producer --broker-list localhost:9092 --topic test_topic
It uses netcat to listen on the TCP port and pipes anything received to a Kafka topic.
A quick Google also turned up this https://github.com/dhanuka84/kafka-connect-tcp which looks to do a similar thing but more robustly, using the Kafka Connect API.
You don't say whether the traffic on port 2552 is TCP or UDP, but in general you can easily write a program that listens on that port, parses the received data into discrete messages, and then publishes them to a Kafka topic as Kafka messages (with or without keys) using the Kafka Producer API.
In some cases there is existing open source code that might already do this for you, so you do not need to write it from scratch. If the port 2552 protocol is a well-known protocol, for example one registered with IANA (see ftp://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.txt), then there might even be an existing Kafka connector or proxy that supports it. Search GitHub for kafka-connect-[protocol], or take a look at the curated connector list at https://www.confluent.io/product/connectors/
There may even be a generic TCP or UDP connector that you can use as a reference to configure or build your own for the specific protocol you are trying to ingest.
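As a sketch of the write-it-yourself approach in Python, assuming the port 2552 stream is newline-delimited TCP text and using the kafka-python package (the topic name test_topic is carried over from the console example above; the framing helper is the only part specific to that assumed protocol):

```python
import socket

def split_messages(buffer: bytes) -> tuple:
    """Split a receive buffer on newlines into complete messages,
    returning (messages, leftover_partial_message)."""
    *messages, rest = buffer.split(b"\n")
    return [m for m in messages if m], rest

if __name__ == "__main__":
    # Requires the kafka-python package and a broker on localhost:9092.
    from kafka import KafkaProducer

    producer = KafkaProducer(bootstrap_servers="localhost:9092")
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("0.0.0.0", 2552))   # the port from the question
    server.listen(1)
    conn, _addr = server.accept()
    pending = b""
    while True:
        data = conn.recv(4096)
        if not data:
            break
        pending += data
        messages, pending = split_messages(pending)
        for msg in messages:
            producer.send("test_topic", msg)  # topic from the console example
    producer.flush()
```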
We are trying to launch multiple standalone kafka hdfs connectors on a given node.
For each connector, we are setting the rest.port and offset.storage.file.filename to different ports and path respectively.
Also, the Kafka broker's JMX port is 9999.
When I start the Kafka standalone connector, I get the error:
Error: Exception thrown by the agent : java.rmi.server.ExportException: Port already in use: 9999; nested exception is:
java.net.BindException: Address already in use (Bind failed)
This happens even though rest.port is set to 9100.
kafka version: 2.12-0.10.2.1
kafka-connect-hdfs version: 3.2.1
Please help.
We are trying to launch multiple standalone kafka hdfs connectors on a given node.
Have you considered running these multiple connectors within a single instance of Kafka Connect? This might make things easier.
Kafka Connect itself can handle running multiple connectors within a single worker process. Kafka Connect in distributed mode can run on a single node, or across multiple ones.
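For reference, standalone mode accepts multiple connector property files in one invocation, so one worker can run all the connectors (the file names here are placeholders):

bin/connect-standalone.sh worker.properties hdfs-connector-1.properties hdfs-connector-2.properties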
For those who are trying to use the rest.port flag and still getting the Address already in use error: that flag was marked as deprecated in KIP-208 and finally removed in a later PR.
Since then, the listeners property is used to change the default REST port.
Examples from Javadoc
listeners=HTTP://myhost:8083
listeners=HTTP://:8083
Configuring and Running Workers - Standalone mode
You may have open Kafka Connect connections that you don't know about. You can check this with:
ps -ef | grep connect
If you find any, kill those processes.
I have installed Node-RED on a Raspberry Pi 3 to collect data from sensors and then store it in Kafka, but now I have an issue with the Kafka producer node.
I've set up a Kafka server on my laptop that works correctly from the console: if I send messages in the Kafka producer console, I correctly receive them in the consumer console.
Unfortunately, when I try to inject a timestamp into the Kafka producer node in Node-RED on the Raspberry Pi, the server gives no response.
The debug page of Node-RED says: "BrokerNotAvailableError: Broker not available"
In the producer node's ZKQuorum field I've typed the IP of the laptop and set the port to 9092, as shown in the example on the npm site.
I'm sure the topic is correct.
I'm sure ZooKeeper is running, and the Kafka server as well. Indeed, if at the same time I use Kafka from the laptop console, it works great.
I've also tried to reach the Kafka producer port with telnet: connections are accepted.
I've already posted the same question on the Node-RED community forum, without success so far.
Any hint about this issue?
UPDATE: I've tried to implement a Python function in Node-RED to send a simple message to the Kafka producer, and I obtained a deeper error log pointing to:
/usr/local/lib/python2.7/dist-packages/kafka/client_async.py", line 808
I opened the file, and at line 808 there is a function with this comment:
Could this be helpful?
You have to configure the listeners field in the Kafka server properties to the IP address of your laptop. Also try changing zookeeper.connect to the actual IP, not localhost.
Try this property in etc/kafka/server.properties: listeners=PLAINTEXT://<your ip here>:<kafka port here>. You will have to restart Kafka for this to take effect.
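Concretely, assuming the laptop's address on the home network is 192.168.1.10 (a hypothetical example), the relevant server.properties lines would be:

listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://192.168.1.10:9092

advertised.listeners is the address the broker hands back to clients after the initial connection, so it must be an address the Raspberry Pi can reach, not localhost.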