I would like to view all the topics running on a server from my local kafka scripts. I can view the details of a topic like this:
bin/kafka-console-consumer.sh --bootstrap-server <someip>:<someport> --topic mytopic --from-beginning
But I can't find a way to view all the topics running on <someip>:<someport>. Do I need to have a local instance of ZooKeeper running in order to do this?
If I understand the question correctly, you can just use:
kafka-topics.sh --list --zookeeper remote-zookeeper:2181
and replace the host and port in the command above. It is as simple as that, assuming the Kafka cluster does not require authentication, authorization, etc.
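As a side note (a hedged sketch, not tested against your cluster): on newer Kafka releases, roughly 2.2 and later, kafka-topics.sh can query the brokers directly, so no ZooKeeper address is needed at all:

```shell
# List all topics by asking a broker directly; no ZooKeeper access required
bin/kafka-topics.sh --list --bootstrap-server <someip>:<someport>
```

This only requires that the broker port is reachable from your machine; the `--zookeeper` flag was removed entirely in Kafka 3.0.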
Related
I am very new to Kafka. Following a few tutorials, I have the following questions regarding consuming actual Kafka topics.
The situation: there is a server in my workplace that is streaming Kafka topics. I have the topic name. I would like to consume this topic from my machine (Windows WSL2 Ubuntu). From this tutorial, I am able to
Run zookeeper with this command:
bin/zookeeper-server-start.sh config/zookeeper.properties
Create a broker with:
bin/kafka-server-start.sh config/server.properties
Run a producer console, with a fake topic named quickstart-events at port localhost:9092:
bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092
Run a consumer console listening to localhost:9092 and receive the streaming data from the producer:
bin/kafka-console-consumer.sh --topic quickstart-events --from-beginning --bootstrap-server localhost:9092
Now for my real situation: if I know the topic name, what else do I need in order to apply the same steps above to listen to it as a consumer? What are the steps involved? I read in other threads about tunnelling with Jumphost. How to do that?
I understand this question is rather generic. Appreciate any pointers to any relevant readings or guidance.
Depending on your company's name servers, the following may need to be done in your WSL instance first to gain an outside connection; see this related question: "unable to access network from WSL2".
Then you need to point --bootstrap-server at your company's server:
--bootstrap-server my.company.com:9092
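Since the question also asks about tunnelling through a jump host: one common approach is an SSH local port forward. A sketch, where `jumphost`, `kafka.internal`, and the ports are placeholders for your environment:

```shell
# Forward local port 9092 through the jump host to the internal Kafka broker
ssh -N -L 9092:kafka.internal:9092 user@jumphost

# In another terminal, consume through the tunnel
bin/kafka-console-consumer.sh --topic your-topic --from-beginning \
  --bootstrap-server localhost:9092
```

One caveat: after the initial connection, Kafka clients reconnect to whatever the broker advertises in advertised.listeners, so the tunnel only works cleanly if that advertised name also resolves to your tunnel (e.g., via an /etc/hosts entry pointing it at 127.0.0.1).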
I've a local Apache Kafka setup and there are total 2 broker (id - 0 and 1) on port 9092 and 9093.
I created a topic and published the messages using this command:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
Then I consumed the messages on other terminal using the command:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test
Till now everything is fine.
But when I type the command:
bin/kafka-console-producer.sh --broker-list localhost:9093 --topic test
and write some messages it is showing in the 2nd terminal where I've typed this command -
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test
Why are messages produced to port 9093 being received via 9092?
Your cluster consists of two brokers. It does not matter which host you use for the initial connection: with a Kafka client, you don't specify which broker you consume from or produce to. Those hostnames are only used to discover the full list of Kafka brokers (the cluster).
According to documentation:
https://kafka.apache.org/documentation/#producerconfigs
https://kafka.apache.org/documentation/#consumerconfigs
bootstrap.servers:
A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping—this list only impacts the initial hosts used to discover the full set of servers.
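The practical consequence for the two-broker setup above: whichever broker you bootstrap from, the client discovers the whole cluster and behaves the same. A sketch (assumes both brokers are running locally as described):

```shell
# Either of these consumes the same messages for topic "test":
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test
bin/kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic test

# Listing both makes the initial connection survive one broker being down
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092,localhost:9093 --topic test
```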
Looking through the instructions --
https://www.cloudera.com/documentation/kafka/latest/topics/kafka_command_line.html
I'm running these test command lines and one set works, but the other set doesn't.
Following the instructions, it works, but I noticed it has "zookeeper" as a parameter, and I thought that was discontinued.
Producer:
/usr/bin/kafka-console-producer --broker-list local-ip:9092 --topic test
Consumer:
/usr/bin/kafka-console-consumer --bootstrap-server local-ip:9092 --topic test --from-beginning
The above doesn't work on the Cloudera version, but works on my standalone Kafka installs.
This works on Cloudera:
/usr/bin/kafka-console-consumer --zookeeper local-ip:2181 --topic test --from-beginning
I'm trying to understand the difference between Cloudera's Kafka version (3.0.0-1.3.0.0.p0.40?) and mine (2.11-0.11.0.1), or whether something has to be turned on or off.
I've seen some similar topics and tried following them to no avail. I think it's something to do with Cloudera.
Updated answer:
In my case, I have two brokers configured, and Kafka's offsets.topic.replication.factor was set to 3. When Kafka tries to create a topic with more replicas than there are available brokers, an exception is thrown and the topic is not created.
The solution is to set offsets.topic.replication.factor = 2 and try again. You may need to remove and redeploy the brokers.
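As a sketch, the relevant line in each broker's server.properties (the value shown fits a two-broker cluster; adjust to your broker count):

```
# server.properties: the internal __consumer_offsets topic must not require
# more replicas than there are brokers in the cluster
offsets.topic.replication.factor=2
```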
I don't know why; maybe it is a bug in Cloudera's Kafka release, but I tracked it down with a local Kafka test.
I downloaded the latest version of Kafka from https://kafka.apache.org/downloads and updated the broker config file config\server.properties to use the remote ZooKeeper server. With this, I had a mixed-configuration cluster:
brokers in my laptop
zookeeper in the cloudera cluster
With this configuration, I created a topic and ran kafka-console-consumer and kafka-console-producer from my laptop, but against the remote ZooKeeper:
$ kafka-topics --create --zookeeper zookeeper.cloudera-cluster:2181 --replication-factor 1 --partitions 1 --topic test
$ kafka-console-consumer --bootstrap-server localhost:9092 --topic test
$ kafka-console-producer --broker-list localhost:9092 --topic test
This works properly. Furthermore, with this setup the topic __consumer_offsets was created automatically, and the new-consumer version now works perfectly. At this point, you can delete the test topic, stop the local brokers, and start using the Kafka cluster normally.
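To verify that the internal offsets topic really exists after this workaround, you can describe it; a sketch, assuming the same remote ZooKeeper as above:

```shell
# Should print the partitions and replica assignments of the offsets topic;
# an empty result means it was never created
kafka-topics --describe --zookeeper zookeeper.cloudera-cluster:2181 --topic __consumer_offsets
```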
Is this a bug from Cloudera's release?
Maybe the version of Cloudera is not able to build __consumer_offsets automatically?
Kafka version downloaded: kafka_2.11-1.0.0.tgz
Cloudera's kafka version: 3.0.0-1.3.0.0.p0.40
The system on which my Kafka server is running has two NICs, one with a public IP (135.220.23.45) and the other with a private one (192.168.1.14). The private NIC is connected to a subnet composed of 7 machines in total (all with addresses 192.168.1.xxx). Kafka has been installed as a service using HDP and has been configured with zookeeper.connect=192.168.1.14:2181 and listeners=PLAINTEXT://192.168.1.14:6667. I have started a consumer on the system that hosts the kafka server using: [bin/kafka-console-consumer.sh --bootstrap-server 192.168.1.14:6667 --topic test --from-beginning].
When I start producers (using [bin/kafka-console-producer.sh --broker-list 192.168.1.14:6667 --topic test]) on any of the machines on the private subnet the messages are received normally by the consumer.
I would like to start producers on public systems and receive the messages by the consumer running on the kafka server. I believed that this could be achieved by IP masquerading and by forwarding all external requests to 135.220.23.45:15501 (I have chosen 15501 to receive kafka messages) to 192.168.1.14:6667. To that extend I setup this port forwarding rule on firewalld: [port=15501:proto=tcp:toport=6670:toaddr=192.168.1.14].
However, this doesn’t seem to work since when I start a producer on an external system with [bin/kafka-console-producer.sh --broker-list 135.220.23.45:15501 --topic] the messages cannot be received by the consumer.
I have tried different kafka config settings for listeners and advertised.listeners but none of them worked. Any help will be greatly appreciated.
You need to define different endpoints for your internal and external traffic in order for this to work. As it is currently configured, when you connect to 135.220.23.45:15501, Kafka replies with "please talk to me on 192.168.1.14:6667", which is not reachable from the outside, so everything from there on out fails.
With KIP-103 Kafka was extended to cater to these scenarios by letting you define multiple endpoints.
Full disclosure, I have not yet tried this out, but something along the following lines should at least get you started down the right road.
advertised.listeners=EXTERNAL://135.220.23.45:15501,INTERNAL://192.168.1.14:6667
inter.broker.listener.name=INTERNAL
listener.security.protocol.map=EXTERNAL:PLAINTEXT,INTERNAL:PLAINTEXT
Update:
I've tested this on a cluster of three ec2 machines out of interest. I've used the following configuration:
# internal ip: 172.31.61.130
# external ip: 184.72.211.109
listeners=INTERNAL://:9092,EXTERNAL_PLAINTEXT://:9094
advertised.listeners=INTERNAL://172.31.61.130:9092,EXTERNAL_PLAINTEXT://184.72.211.109:9094
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL_PLAINTEXT:PLAINTEXT
inter.broker.listener.name=INTERNAL
And that allowed me to send messages from both an internal machine as well as my laptop at home:
# Create topic
kafka-topics --create --topic testtopic --partitions 9 --replication-factor 3 --zookeeper 127.0.0.1:2181
# Produce messages from internal machine
[ec2-user@ip-172-31-61-130 ~]$ kafka-console-producer --broker-list 127.0.0.1:9092 --topic testtopic
>internal1
>internal2
>internal3
# Produce messages from external machine
➜ bin ./kafka-console-producer --topic testtopic --broker-list 184.72.211.109:9094
external1
external2
external3
# Check topic
[ec2-user@ip-172-31-61-130 ~]$ kafka-console-consumer --bootstrap-server 172.31.52.144:9092 --topic testtopic --from-beginning
external3
internal2
external1
external2
internal3
internal1
I have a single machine set up for Kafka and want to use it queue-style, where each message goes to only one consumer. I have set up the group name in consumer.properties like this:
# consumer group id
group.id=test-consumer-group
But when I run 2 consumers for the same topic, both get the message. Is there anything else I need to do besides this?
sudo bin/kafka-console-consumer.sh config/consumer.properties --zookeeper localhost:2181 --topic 219Topic --from-beginning
I am currently testing with the command prompt only, no client.
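A hedged sketch of the likely issue: kafka-console-consumer.sh ignores a properties file passed as a bare argument, so each console consumer falls back to its own auto-generated group and both receive every message. For the group.id in consumer.properties to take effect, the file must be passed with the --consumer.config flag:

```shell
# Run two of these in separate terminals; with a shared group.id, each
# message on the topic should be delivered to only one of them
sudo bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic 219Topic \
  --from-beginning --consumer.config config/consumer.properties
```

Note also that --from-beginning only applies when the group has no committed offsets yet; once the group has consumed, it resumes from its last committed position.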
Thanks,