I am very new to Kafka. Following a few tutorials, I have the following questions regarding consuming actual Kafka topics.
The situation: there is a server at my workplace that streams Kafka topics. I have the topic name, and I would like to consume that topic from my machine (Windows, WSL2 Ubuntu). From this tutorial, I am able to:
Run zookeeper with this command:
bin/zookeeper-server-start.sh config/zookeeper.properties
Create a broker with:
bin/kafka-server-start.sh config/server.properties
Run a producer console, with a fake topic named quickstart-events at port localhost:9092:
bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092
Run a consumer console listening to localhost:9092 and receive the streaming data from the producer:
bin/kafka-console-consumer.sh --topic quickstart-events --from-beginning --bootstrap-server localhost:9092
Now for my real situation: if I know the topic name, what else do I need in order to apply the same steps above and listen to it as a consumer? What are the steps involved? I have read in other threads about tunnelling through a jumphost. How would I do that?
I understand this question is rather generic. Appreciate any pointers to any relevant readings or guidance.
First, make sure your WSL2 instance can resolve your company nameserver and reach the outside network at all; see the thread "unable to access network from WSL2" for fixing outbound connectivity from WSL2.
Then you need to point --bootstrap-server at your company's Kafka broker instead of localhost:
--bootstrap-server my.company.com:9092
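If the broker is only reachable through a jumphost, one common approach is an SSH local port forward (a sketch; jumphost.company.com, kafka.internal, and the user name are placeholders, not hosts from the question):
ssh -L 9092:kafka.internal:9092 user@jumphost.company.com
bin/kafka-console-consumer.sh --topic your-topic --bootstrap-server localhost:9092
One caveat: after the initial connection, a Kafka client reconnects to whatever advertised.listeners the broker returns, so a single forwarded port only works if the advertised name also resolves through your tunnel (for example, an /etc/hosts entry pointing kafka.internal at 127.0.0.1).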
Related
I would like to view all the topics running on a server from my local kafka scripts. I can view the details of a topic like this:
bin/kafka-console-consumer.sh --bootstrap-server <someip>:<someport> --topic mytopic --from-beginning
But I can't find a way to view all the topics running on <someip>:<someport>. Do I need to have a local instance of ZooKeeper running in order to do this?
If I understand the question correctly, you can just use:
kafka-topics.sh --list --zookeeper remote-zookeeper:2181
and replace the host and port in the command above with your remote ZooKeeper address. It is as simple as that, assuming the Kafka cluster does not require authentication, authorization, etc.
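On newer Kafka versions (2.2 and later, where the --zookeeper flag is deprecated), you can also list topics by talking to a broker directly, with no ZooKeeper access needed at all:
bin/kafka-topics.sh --list --bootstrap-server <someip>:<someport>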
I have a local Apache Kafka setup with 2 brokers in total (ids 0 and 1) on ports 9092 and 9093.
I created a topic and published the messages using this command:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
Then I consumed the messages on other terminal using the command:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test
Till now everything is fine.
But when I run this command:
bin/kafka-console-producer.sh --broker-list localhost:9093 --topic test
and write some messages, they show up in the second terminal, where I had run:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test
Why are messages produced to port 9093 being received by a consumer connected to 9092?
Your cluster consists of two brokers. It does not matter which host you use for the initial connection. With a Kafka client you don't specify which broker you consume from or produce to; those hostnames are only used to discover the full list of Kafka brokers (the cluster).
According to documentation:
https://kafka.apache.org/documentation/#producerconfigs
https://kafka.apache.org/documentation/#consumerconfigs
bootstrap.servers:
A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping—this list only impacts the initial hosts used to discover the full set of servers.
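Since the bootstrap list is only used for discovery, listing both brokers works the same as listing one; it just makes the initial connection resilient if one broker is down (a sketch using the ports from the question):
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092,localhost:9093 --topic test --from-beginning
Either way, the consumer ends up talking to whichever brokers lead the topic's partitions, which is why messages produced via 9093 are visible to a consumer bootstrapped via 9092.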
The system on which my Kafka server is running has two NICs, one with a public IP (135.220.23.45) and the other with a private one (192.168.1.14). The private NIC is connected to a subnet composed of 7 machines in total (all with addresses 192.168.1.xxx). Kafka has been installed as a service using HDP and has been configured with zookeeper.connect=192.168.1.14:2181 and listeners=PLAINTEXT://192.168.1.14:6667. I have started a consumer on the system that hosts the kafka server using: [bin/kafka-console-consumer.sh --bootstrap-server 192.168.1.14:6667 --topic test --from-beginning].
When I start producers (using [bin/kafka-console-producer.sh --broker-list 192.168.1.14:6667 --topic test]) on any of the machines on the private subnet the messages are received normally by the consumer.
I would like to start producers on public systems and have the messages received by the consumer running on the Kafka server. I believed that this could be achieved by IP masquerading and by forwarding all external requests to 135.220.23.45:15501 (I have chosen 15501 to receive Kafka messages) to 192.168.1.14:6667. To that end I set up this port forwarding rule on firewalld: [port=15501:proto=tcp:toport=6670:toaddr=192.168.1.14].
However, this doesn’t seem to work since when I start a producer on an external system with [bin/kafka-console-producer.sh --broker-list 135.220.23.45:15501 --topic] the messages cannot be received by the consumer.
I have tried different kafka config settings for listeners and advertised.listeners but none of them worked. Any help will be greatly appreciated.
You need to define different endpoints for your internal and external traffic in order for this to work. As it is currently configured, when you connect to 135.220.23.45:15501, Kafka replies with "please talk to me on 192.168.1.14:6667", which is not reachable from the outside, and everything from there on fails.
With KIP-103 Kafka was extended to cater to these scenarios by letting you define multiple endpoints.
Full disclosure, I have not yet tried this out, but something along the following lines should at least get you started down the right road.
advertised.listeners=EXTERNAL://135.220.23.45:15501,INTERNAL://192.168.1.14:6667
inter.broker.listener.name=INTERNAL
listener.security.protocol.map=EXTERNAL:PLAINTEXT,INTERNAL:PLAINTEXT
Update:
I've tested this on a cluster of three ec2 machines out of interest. I've used the following configuration:
# internal ip: 172.31.61.130
# external ip: 184.72.211.109
listeners=INTERNAL://:9092,EXTERNAL_PLAINTEXT://:9094
advertised.listeners=INTERNAL://172.31.61.130:9092,EXTERNAL_PLAINTEXT://184.72.211.109:9094
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL_PLAINTEXT:PLAINTEXT
inter.broker.listener.name=INTERNAL
And that allowed me to send messages from both an internal machine as well as my laptop at home:
# Create topic
kafka-topics --create --topic testtopic --partitions 9 --replication-factor 3 --zookeeper 127.0.0.1:2181
# Produce messages from internal machine
[ec2-user@ip-172-31-61-130 ~]$ kafka-console-producer --broker-list 127.0.0.1:9092 --topic testtopic
>internal1
>internal2
>internal3
# Produce messages from external machine
➜ bin ./kafka-console-producer --topic testtopic --broker-list 184.72.211.109:9094
external1
external2
external3
# Check topic
[ec2-user@ip-172-31-61-130 ~]$ kafka-console-consumer --bootstrap-server 172.31.52.144:9092 --topic testtopic --from-beginning
external3
internal2
external1
external2
internal3
internal1
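The same listener split works for consuming: an external machine should point at the external endpoint (host and port taken from the example configuration above), and the broker will hand back the EXTERNAL_PLAINTEXT advertised addresses rather than the unreachable internal ones:
kafka-console-consumer --bootstrap-server 184.72.211.109:9094 --topic testtopic --from-beginning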
How can I produce and consume messages from different servers?
I tried the Quickstart tutorial, but there are no instructions on how to set up multi-server clusters.
My Steps
Server A
1)bin/zookeeper-server-start.sh config/zookeeper.properties
2)bin/kafka-server-start.sh config/server.properties
3)bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
4)bin/kafka-console-producer.sh --broker-list SERVER-a.IP:9092 --topic test
Server B
1A)bin/kafka-console-consumer.sh --bootstrap-server SERVER-a.IP:9092 --topic test --from-beginning
1B)bin/kafka-console-consumer.sh --bootstrap-server SERVER-a.IP:2181 --topic test --from-beginning
When I run the 1A) consumer and enter messages into the producer, no messages appear in the consumer. It's just blank.
When I run the 1B) consumer instead, I get a huge and very fast stream of error logs on Server A until I Ctrl+C the consumer. See below.
Error log on Server A streaming at hundreds per second
WARN Exception causing close of session 0x0 due to java.io.EOFException (org.apache.zookeeper.server.NIOServerCnxn)
INFO Closed socket connection for client /188.166.178.40:51168 (no session established for client) (org.apache.zookeeper.server.NIOServerCnxn)
Thanks
Yes, if you want your producer on Server A and your consumer on Server B, you are going in the right direction.
You need to run a broker on Server A to make it work:
bin/kafka-server-start.sh config/server.properties
The other commands are correct, except for 1B: port 2181 is ZooKeeper, not a Kafka broker, which is why pointing a consumer there floods ZooKeeper with the EOFException warnings you saw. Always point --bootstrap-server at the broker port (9092).
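To verify that the broker on Server A is actually reachable from Server B before starting a consumer, you can query it directly (a quick optional check):
bin/kafka-broker-api-versions.sh --bootstrap-server SERVER-a.IP:9092
If this hangs or errors, the problem is networking (firewall rules or advertised.listeners), not the console tools.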
If anyone is looking at a similar topic for a kafka-streams application: it appears that multiple Kafka clusters are not supported yet.
Here is a documentation from kafka: https://kafka.apache.org/10/documentation/streams/developer-guide/config-streams.html#bootstrap-servers
bootstrap.servers
(Required) The Kafka bootstrap servers. This is the same setting that is used by the underlying producer and consumer clients to connect to the Kafka cluster. Example: "kafka-broker1:9092,kafka-broker2:9092".
Tip:
Kafka Streams applications can only communicate with a single Kafka
cluster specified by this config value. Future versions of Kafka
Streams will support connecting to different Kafka clusters for
reading input streams and writing output streams.
I'm trying to test-run a single-node Kafka setup with 3 brokers and ZooKeeper. I wish to test using the console tools. I run the producer as such:
kafka-console-producer --broker-list localhost:9092,localhost:9093,localhost:9094 --topic testTopic
Then I run the consumer as such:
kafka-console-consumer --zookeeper localhost:2181 --topic testTopic --from-beginning
And I can enter messages in the producer and see them in the consumer, as expected. However, when I run the updated version of the consumer using bootstrap-server, I get nothing. E.g
kafka-console-consumer --bootstrap-server localhost:9092,localhost:9093,localhost:9094 --topic testTopic --from-beginning
This worked fine when I had one broker running on port 9092 so I'm thoroughly confused. Is there a way I can see what zookeeper is providing as the bootstrap server? Is the bootstrap server different from the broker list? Kafka compiled with Scala 2.11.
I have no idea what was wrong. Likely I put Kafka or Zookeeper in a weird state. After deleting the topics in the log.dir of each broker AND the zookeeper topics in /brokers/topics then recreating the topic, Kafka consumer behaved as expected.
Bootstrap servers are the same as Kafka brokers. If you want to see the list of bootstrap servers ZooKeeper is providing, you can query the ZNode information via any ZK client. All active brokers are registered under /brokers/ids/[brokerId]. All you need is the zkQuorum address. The command below will give you the list of active bootstrap servers:
./zookeeper-shell.sh localhost:2181 <<< "ls /brokers/ids"
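To see the host and port a given broker advertises (i.e. what clients will actually be told to connect to), query that broker's ZNode as well, e.g. for broker id 0:
./zookeeper-shell.sh localhost:2181 <<< "get /brokers/ids/0"
The returned JSON includes an endpoints entry listing the broker's advertised listener(s).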
I experienced the same problem when using mismatched versions of:
Kafka client libraries
Kafka scripts
Kafka brokers
In my exact scenario I was using Confluent Kafka client libraries version 0.10.2.1 with Confluent Platform 3.3.0 w/ Kafka broker 0.11.0.0. When I downgraded my Confluent Platform to 3.3.2 which matched my client libraries the consumer worked as expected.
My theory is that the latest kafka-console-consumer using the new Consumer API was only retrieving messages using the latest format. There were a number of message format changes introduced in Kafka 0.11.0.0.