Kafka Producer From Remote Server - apache-kafka

I'm developing a streaming API with Apache Kafka (version 2.1.0). I have a Kafka cluster and an external server.
The external server will produce data to be consumed on the Kafka cluster.
Let's denote the external server as E and the cluster as C. E doesn't have Kafka installed; I run a JAR file on it to produce messages. Here is the snippet for the producer properties:
properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "bootstrapIp:9092");
properties.put(ProducerConfig.CLIENT_ID_CONFIG, "producer");
I set bootstrapIp to the Kafka broker IP.
On the cluster side, I start the console consumer with this command:
kafka-console-consumer --bootstrap-server bootstrapIp:9092 --topic T1 --from-beginning
I set bootstrapIp to the cluster bootstrap server IP.
When I run both the producer and the consumer on the cluster, it works fine, but when I run the producer on the external server (E) and the consumer on the cluster (C), the data is not consumed.
Everything works on localhost, and it also works when both the producer and the consumer run on the cluster (C); it is only when the producer runs externally that I can't consume the data on the cluster.
Pinging the external server (E) from the cluster (C) works, but I can't see where exactly the problem is.
I am not able to figure out how to consume messages produced from an external server.
EDIT
From the external server (E), I can telnet to the bootstrap address:
telnet bootstrapIp 9092 works, so I don't understand the problem.

This works for me:
In server.properties, uncomment
listeners=PLAINTEXT://:9092
and
advertised.listeners=PLAINTEXT://<HOST IP>:9092
replacing <HOST IP> with the broker's actual IP. In my case:
advertised.listeners=PLAINTEXT://192.168.75.132:9092
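The distinction between the two properties is the crux of the problem above: telnet succeeds because listeners controls which interface the broker binds to, but after the initial bootstrap connection the producer reconnects to whatever address the broker advertises. An annotated sketch of the same configuration (the IP is just the example value from above):

```properties
# Interface/port the broker binds to; an empty host means all interfaces.
listeners=PLAINTEXT://:9092

# Address the broker hands back to clients in metadata responses.
# The producer on E connects here after bootstrapping, so it must be
# an IP or hostname that is reachable from the external server.
advertised.listeners=PLAINTEXT://192.168.75.132:9092
```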

Related

How to connect kafka producer and consumer to a machine that is running kafka and zookeeper

I have an Ubuntu machine with Kafka and Zookeeper installed on it. I am using Spring Boot to build the consumer and producer. Locally the process works; however, when I deploy the producer and consumer JARs to another machine, it doesn't work.
Kafka defaults to only listen locally.
You need to set these in Kafka's server.properties
listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://<external-ip>:9092
https://www.confluent.io/blog/kafka-listeners-explained/
Then, obviously, don't use localhost:9092 in your remote client code.
You should never need Zookeeper connection details. Besides, as of Kafka 3.3.1, Zookeeper isn't required at all.
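On the client side, for a Spring Boot producer/consumer the bootstrap address lives in the application configuration; a minimal sketch, assuming the broker machine's address is 192.168.1.10 (a placeholder, not from the original question):

```properties
# application.properties on the machine running the producer/consumer JARs;
# point at the broker's advertised address, never at localhost
spring.kafka.bootstrap-servers=192.168.1.10:9092
```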

kafka-python basic producing and consuming not working

I am new to Kafka and I'm trying to run a basic example.
My Kafka is running with this config: https://developer.confluent.io/quickstart/kafka-docker/
Python 3.7; kafka-python installed with pip install kafka-python
(version 2.0.2)
I followed this doc, then ran two consoles (one for consuming and one for producing).
consumer:
from kafka import KafkaConsumer
for m in KafkaConsumer('my-topic', bootstrap_servers='broker'):
    print(m)
producer:
from kafka import KafkaProducer
p = KafkaProducer(bootstrap_servers='broker')
p.send('my-topic', b'my message!')
After p.send() I expect the consumer to get the message, but nothing happens.
What is wrong with my setup?
Edit: the consoles are run as containers within the same docker-compose.
broker only resolves inside the docker-compose network; if you are running the scripts on the host, you should use localhost:9092 instead.
And if you are running the scripts as containers in the same docker-compose, you should use broker:29092, since that is where Kafka listens for connections from within the docker-compose network.
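Those two addresses come from the listener setup in the quickstart's docker-compose.yml, which (roughly, going from memory of that quickstart; check your copy of the file) advertises one listener per network:

```yaml
# Sketch of the relevant broker environment in the quickstart compose file.
environment:
  # "broker:29092" is advertised to clients inside the compose network;
  # "localhost:9092" is advertised to clients running on the host machine.
  KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
```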

Kafka broker setup

To connect to a Kafka cluster I've been provided with a set of bootstrap servers, each with a name and port:
s1:9092
s2:9092
s3:9092
Kafka and Zookeeper are running on the instance s4. From reading https://jaceklaskowski.gitbooks.io/apache-kafka/content/kafka-properties-bootstrap-servers.html, it states:
bootstrap server is a comma-separated list of host and port pairs that
are the addresses of the Kafka brokers in a "bootstrap" Kafka cluster
that a Kafka client connects to initially to bootstrap itself.
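As an aside, the quoted "comma-separated list of host and port pairs" format is easy to illustrate; here is a tiny, illustration-only parser (the function name is hypothetical, not part of any Kafka client API):

```python
def parse_bootstrap_servers(servers):
    """Split a bootstrap.servers string into (host, port) pairs."""
    pairs = []
    for entry in servers.split(","):
        # rpartition splits on the last ':', so hosts containing ':' survive
        host, _, port = entry.strip().rpartition(":")
        pairs.append((host, int(port)))
    return pairs

print(parse_bootstrap_servers("s1:9092,s2:9092,s3:9092"))
# → [('s1', 9092), ('s2', 9092), ('s3', 9092)]
```

Any one of the listed brokers is enough for a client to bootstrap; it learns the rest of the cluster from the metadata that broker returns.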
I reference the above bootstrap server definition because I'm trying to understand the relationship between the Kafka brokers s1, s2, s3 and the Kafka and Zookeeper processes running on s4.
To connect to the Kafka cluster, I set the broker list to the CSV list 's1,s2,s3'. When I send messages to that list of brokers, I verify the messages were added to the topic by ssh-ing onto the s4 box and viewing the messages on the topic.
What is the link between the Kafka brokers s1, s2, s3 and s4? I cannot ssh onto any of the brokers s1, s2, s3, as they do not seem accessible over ssh; should s1, s2, s3 be accessible?
The individual responsible for setting up the Kafka box is no longer available, and I'm confused about how this configuration works. I've searched s4 for configuration references to the brokers s1, s2, s3, but there does not appear to be any.
When Kafka is set up and configured, what links the brokers (in this case s1, s2, s3) to s4?
I start Kafka and Zookeeper on the same server, s4.
Should Kafka and Zookeeper also be running on s1,s2,s3?
What is the link between the Kafka brokers s1,s2,s3 and s4?
As per the Kafka documentation about adding nodes to a cluster, each server must share the same zookeeper.connect string and have a unique broker.id to be part of the cluster.
You may check which nodes are in the cluster via zookeeper-shell with an ls /brokers/ids, or via the Kafka AdminClient API, or kafkacat -L
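For example (command shapes from memory; adjust hosts and paths to your installation):

```
# List registered broker ids via the Zookeeper shell on s4
zookeeper-shell s4:2181 ls /brokers/ids

# Or ask any broker for cluster metadata with kafkacat
kafkacat -L -b s1:9092
```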
should s1,s2,s3 be accessible?
Via SSH? They don't have to be.
They should, however, respond to TCP connections from your Kafka client machines on their Kafka server ports.
Should Kafka and Zookeeper also be running on s1,s2,s3?
You should not have 4 Zookeeper servers in a cluster (odd numbers only), so Zookeeper does not need to run on them.
Otherwise, you've at least been given Kafka ports for those machines, so Kafka should be running on them.

How to use Kafka connect to transmit data to Kafka broker in another machine?

I'm trying to use Kafka Connect in Confluent Platform 3.2.1, and everything works fine in my local environment. I then ran into this problem when trying to use a Kafka source connector to send data to another machine.
I deploy a Kafka JDBC source connector on machine A to capture database A. Then I deploy a Kafka broker B (along with Zookeeper and the Schema Registry) on machine B. The source connector cannot send data to broker B and throws the following exception:
[2017-05-19 16:37:22,709] ERROR Failed to commit offsets for WorkerSourceTask{id=test-multi-0} (org.apache.kafka.connect.runtime.SourceTaskOffsetCommitter:112)
[2017-05-19 16:38:27,711] ERROR Failed to flush WorkerSourceTask{id=test-multi-0}, timed out while waiting for producer to flush outstanding 3 messages (org.apache.kafka.connect.runtime.WorkerSourceTask:304)
I tried configuring server.properties on broker B like this:
listeners=PLAINTEXT://:9092
and left the advertised.listeners setting commented out.
Then I used
bootstrap.servers=192.168.19.234:9092
in my source connector, where 192.168.19.234 is the IP of machine B. Machines A and B are in the same subnet.
I suspect this has something to do with my server.properties.
How should I configure things to get this working? Thanks in advance.
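Given the pattern in the answers above, the likely fix is to advertise an address for broker B that machine A can actually reach; a sketch of broker B's server.properties, assuming 192.168.19.234 is machine B's IP as stated in the question:

```properties
listeners=PLAINTEXT://:9092
# When advertised.listeners is left unset, the broker advertises its own
# canonical hostname, which machine A may not be able to resolve; that can
# surface on the Connect side as producer flush/commit timeouts like the
# ones logged above.
advertised.listeners=PLAINTEXT://192.168.19.234:9092
```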

Can't reach kafka producer with node-red

I have installed Node-RED on a Raspberry Pi 3 to collect data from sensors and store it in Kafka, but I have an issue with the Kafka producer node.
I've set up a Kafka server on my laptop that works correctly from the console: if I send messages in the Kafka producer console, I receive them correctly in the consumer console.
Unfortunately, when I try to inject a timestamp into the Kafka producer node in Node-RED on the Raspberry Pi, the server gives no response.
The Node-RED debug page says: "BrokerNotAvailableError: Broker not available"
In the producer node's ZKQuorum field I've typed the IP of the laptop and set the port to 9092, as shown in the example on the npm site.
I'm sure the topic is correct.
I'm sure Zookeeper and the Kafka server are both running: if, at the same time, I use Kafka from the laptop console, it works great.
I've also tried to reach the Kafka producer port with telnet: connections are accepted.
I've already posted the same question on the Node-RED community, so far without success.
Any hint about this issue?
UPDATE:
I've tried to implement a Python function in Node-RED to send a simple message to the Kafka producer, and I obtained a deeper error log in:
/usr/local/lib/python2.7/dist-packages/kafka/client_async.py", line 808
I opened the file, and at line 808 there is a function with this comment:
Could that be helpful?
You have to configure the listeners field in the Kafka server properties with your laptop's IP address. Also try changing the Zookeeper connect setting to the actual IP, not localhost.
Try this property in etc/kafka/server.properties: listeners=PLAINTEXT://<your ip here>:<kafka port here>. You will have to restart Kafka for this to take effect.