Kafka producer: produce data to a topic from a port - apache-kafka

I'm new to Kafka.
I have a Linux machine on which port 2552 is receiving a data stream from an external server.
I want to use a Kafka producer to listen on that port and publish the stream of data to a topic.

This is a complete hack, but would work for a sandbox example:
nc -l 2552 | ./bin/kafka-console-producer --broker-list localhost:9092 --topic test_topic
It uses netcat to listen on the TCP port and pipe anything received to a Kafka topic.
A quick Google search also turned up https://github.com/dhanuka84/kafka-connect-tcp, which looks to do a similar thing more robustly, using the Kafka Connect API.

You don't say whether the traffic on port 2552 is TCP or UDP, but in general you can easily write a program that listens on that port, parses the received data into discrete messages, and then publishes them to a Kafka topic as Kafka messages (with or without keys) using the Kafka Producer API.
In some cases there is existing open source code that already does this for you, so you do not need to write it from scratch. If the protocol on port 2552 is well known, for example one of the TCP or UDP protocols registered with IANA (see ftp://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.txt), then there might even be an existing Kafka connector or proxy that supports it. Search GitHub for kafka-connect-[protocol], or take a look at the curated connector list at https://www.confluent.io/product/connectors/
There may even be a generic TCP or UDP connector that you can use as a reference to configure or build your own for the specific protocol you are trying to ingest.
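If you do write it yourself, the listening half is plain socket code. Below is a minimal sketch assuming newline-delimited TCP traffic; the broker address, topic name, and the kafka-python calls in the trailing comment are illustrative assumptions, not part of the question:

```python
import socket

def tcp_messages(host, port):
    """Accept one TCP connection and yield newline-delimited messages."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    conn, _ = srv.accept()
    buf = b""
    with conn:
        while chunk := conn.recv(4096):
            buf += chunk
            # Split the byte stream into discrete messages on newlines
            while b"\n" in buf:
                msg, buf = buf.split(b"\n", 1)
                yield msg
    srv.close()

# With kafka-python installed, each framed message could then be published:
#   producer = KafkaProducer(bootstrap_servers="localhost:9092")
#   for msg in tcp_messages("0.0.0.0", 2552):
#       producer.send("test_topic", msg)
```

For UDP traffic the framing is simpler (each datagram is already a discrete message), but the overall shape is the same.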

Related

my producer and consumer are not readable on Kafka using JMX

I set up Kafka on my localhost and then tried to read the broker side over JMX on port 1999, but whatever I tried, I couldn't reach my consumer or producer.
I am able to connect to port 9092 and read MBeans in jconsole, but I can't send any query to it using the Python subprocess module.
I used the JVM connection string "service:jmx:rmi:///jndi/rmi://localhost:1999/jmxrmi" and the jmxquery/subprocess Python libraries.
I wrote down every step I'm taking in this repository:
https://github.com/mirkan1/kafka_monitoring.git
I am starting Kafka like this:
https://github.com/mirkan1/kafka_monitoring/blob/master/start_kafka.sh
and starting consumers like this:
https://github.com/mirkan1/kafka_monitoring/blob/master/start_consumer.sh
also the producer:
https://github.com/mirkan1/kafka_monitoring/blob/master/start_producer.py
This is the main worker that I am trying to make work:
https://github.com/mirkan1/kafka_monitoring/blob/master/kafka_consumer_watch.py
more info about what I am trying to accomplish is here: https://github.com/mirkan1/kafka_monitoring/tree/master/Kafka_Complete_Monitoring
I found that the problem was that I was using the same port (9092) for everything. After I assigned different ports to the consumer and producer, it worked as intended and I was able to read MBean data over the JMX connection.
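For reference, the Kafka launch scripts read the JMX port from the JMX_PORT environment variable, so each JVM (broker, console producer, console consumer) needs its own value. A sketch with illustrative port numbers and topic name:

```
JMX_PORT=9999 ./bin/kafka-server-start etc/kafka/server.properties
JMX_PORT=1999 ./bin/kafka-console-consumer --bootstrap-server localhost:9092 --topic test
JMX_PORT=2000 ./bin/kafka-console-producer --broker-list localhost:9092 --topic test
```

Two JVMs on the same host cannot share a JMX port, which is why reusing one port for everything fails.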

How to connect to Kafka brokers via proxy over TCP (don't want to use Kafka REST)

Please find the screenshot of what we are trying to achieve.
Our deployment servers can't connect to the internet directly without a proxy, so we need a way to send messages to a Kafka cluster outside our organization. Please note that we do not want to use Kafka REST.
Connecting to Kafka is more involved than a single TCP connection, and it doesn't support this scenario: when you first connect to the bootstrap servers, they send back the actual broker addresses the client needs to connect to, based on the broker properties (advertised listeners). A plain forwarding proxy can't rewrite those addresses, so the client will still try to reach the brokers directly.

Kafka producer should retry three times in case of failure

I want to implement Kafka producer retry logic that retries three times in case of any failure, and I also want to test manually whether the producer is retrying or not. Can you suggest how to test this functionality manually? The configuration below is added to the producer configuration to retry in case of any failure. Thank you.
props.put("retries", 3);
You should be able to trust this core functionality of Kafka, but you can verify it by capturing the producer's packets.
You can use tcpdump to sniff packets on the producer server and check how many times they were sent:
tcpdump -i any port 9092
I also recommend this answer about using tshark to capture Kafka traffic.
If you want to investigate the protocol even deeper, you can use Wireshark.
Check out this guide on how to install Wireshark on Linux.
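The client does this retrying internally, so another way to observe the behaviour deterministically is to reproduce the same pattern in a small wrapper and inject failures yourself. Everything below (the function names and the simulated error) is made up for illustration and is not a Kafka API:

```python
def send_with_retries(send, retries=3):
    """Call send(); on failure, retry up to `retries` more times."""
    for attempt in range(retries + 1):
        try:
            return send()
        except ConnectionError:
            if attempt == retries:
                raise  # retries exhausted, surface the error

# Simulate a broker that fails twice and then accepts the message:
attempts = []
def flaky_send():
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("simulated broker failure")
    return "ack"

result = send_with_retries(flaky_send, retries=3)
# With retries=3, the two simulated failures are absorbed and "ack" is returned.
```

The real producer behaves similarly for transient errors, with `retry.backoff.ms` controlling the pause between attempts.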

How do I remotely produce a message to a Kafka broker in the cloud?

There's a cloud machine, whose public IP is known, on which a Kafka broker is up and running. How do I produce and consume messages on this Kafka broker from my laptop?
Open the Kafka ports and connect directly via TCP, or use the Kafka REST proxy:
https://github.com/confluentinc/kafka-rest
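For the direct TCP option, note that opening the port is not enough: the broker must also advertise an address reachable from the laptop. A sketch of the relevant server.properties lines, with the public IP left as a placeholder:

```
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://<public-ip>:9092
```

Clients connect to the bootstrap address first, then to whatever the brokers advertise, so if the advertised address is a private IP the remote client will fail after the first hop.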
Click on the address bar.
Type google.com, and press Enter.
In the textbox, type "kafka producer consumer example" and press Enter.
There will be plenty of resources showing you how.

Can't reach Kafka producer with Node-RED

I have installed Node-RED on a Raspberry Pi 3 to collect data from sensors and store them in Kafka, but now I have an issue with the Kafka producer node.
I've set up a Kafka server on my laptop that works correctly from the console: if I send messages from the Kafka producer console, I receive them correctly on the consumer console.
Unfortunately, when I try to inject a timestamp into the Kafka producer node in Node-RED on the Raspberry Pi, the server gives no response.
The Node-RED debug page says: "BrokerNotAvailableError: Broker not available"
In the producer node's ZKQuorum field I've typed the IP of the laptop and set the port to 9092, as seen in the example on the npm site.
I'm sure the topic is correct.
I'm sure ZooKeeper and the Kafka server are both running: if I use Kafka from the laptop console at the same time, it works great.
I've also tried to reach the Kafka producer port with telnet: connections are accepted.
I've already posted the same question on the Node-RED community, without success so far.
Any hint about this issue?
UPDATE: I've tried implementing a Python function in Node-RED to send a simple message to the Kafka producer, and I obtained a deeper error log in:
/usr/local/lib/python2.7/dist-packages/kafka/client_async.py", line 808
I opened the file, and at line 808 there is a function with a comment.
Could that be helpful?
You have to set the listeners (and, on newer Kafka versions, advertised.listeners) property in the Kafka server properties to the IP address of your laptop, and change the ZooKeeper connect address to the actual IP, not localhost.
Try this property in etc/kafka/server.properties: listeners=PLAINTEXT://<your ip here>:<kafka port here>. You will have to restart Kafka for this to take effect.
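A common variant of this fix is to bind on all interfaces but advertise the machine's LAN address, so that clients on other hosts (like the Pi) are told a reachable IP rather than localhost. The address below is a placeholder:

```
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://<laptop lan ip>:9092
```

This would also explain why telnet to the port succeeds while the client still reports "Broker not available": the initial connection works, but the broker then hands back an unreachable address.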