Kafka consumer fails to consume if first broker is down

I'm using the latest version of Kafka (kafka_2.12-1.0.0.tgz). I have set up a simple cluster with 3 brokers (just changing broker.id and listeners=PLAINTEXT://:9092 accordingly in the properties file for each instance). After the cluster was up I created a topic with the following command:
./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 13 --topic demo
then started the Kafka console producer and consumer with the following commands:
./kafka-console-producer.sh --topic demo --broker-list localhost:9094,localhost:9093,localhost:9092
./kafka-console-consumer.sh --group test --bootstrap-server localhost:9094,localhost:9093,localhost:9092 --topic demo
Everything is fine when all brokers are up. But if I kill the first broker (by start order), messages are still sent to the brokers, yet the consumer cannot receive any of them. The messages are not lost: after starting that broker again, the consumer immediately receives them.
Logs of the consumer after shutting down the broker instance:
[2018-01-09 13:39:31,130] WARN [Consumer clientId=consumer-1,
groupId=test] Connection to node 2147483646 could not be established.
Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2018-01-09 13:39:31,132] WARN [Consumer clientId=consumer-1,
groupId=test] Connection to node 1 could not be established. Broker
may not be available. (org.apache.kafka.clients.NetworkClient)
[2018-01-09 13:39:31,344] WARN [Consumer clientId=consumer-1,
groupId=test] Connection to node 2147483646 could not be established.
Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2018-01-09 13:39:31,451] WARN [Consumer clientId=consumer-1,
groupId=test] Connection to node 1 could not be established. Broker
may not be available. (org.apache.kafka.clients.NetworkClient)
[2018-01-09 13:39:31,848] WARN [Consumer clientId=consumer-1,
groupId=test] Connection to node 2147483646 could not be established.
Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2018-01-09 13:39:31,950] WARN [Consumer clientId=consumer-1,
groupId=test] Connection to node 1 could not be established. Broker
may not be available. (org.apache.kafka.clients.NetworkClient)
[2018-01-09 13:39:32,363] WARN [Consumer clientId=consumer-1,
groupId=test] Connection to node 2147483646 could not be established.
Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2018-01-09 13:39:33,092] WARN [Consumer clientId=consumer-1,
groupId=test] Connection to node 2147483646 could not be established.
Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2018-01-09 13:39:34,216] WARN [Consumer clientId=consumer-1,
groupId=test] Connection to node 2147483646 could not be established.
Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2018-01-09 13:39:34,218] WARN [Consumer clientId=consumer-1,
groupId=test] Asynchronous auto-commit of offsets
{demo-0=OffsetAndMetadata{offset=3, metadata=''},
demo-1=OffsetAndMetadata{offset=3, metadata=''},
demo-2=OffsetAndMetadata{offset=2, metadata=''},
demo-3=OffsetAndMetadata{offset=2, metadata=''},
demo-4=OffsetAndMetadata{offset=1, metadata=''},
demo-5=OffsetAndMetadata{offset=1, metadata=''},
demo-6=OffsetAndMetadata{offset=3, metadata=''},
demo-7=OffsetAndMetadata{offset=2, metadata=''},
demo-8=OffsetAndMetadata{offset=3, metadata=''},
demo-9=OffsetAndMetadata{offset=2, metadata=''},
demo-10=OffsetAndMetadata{offset=3, metadata=''},
demo-11=OffsetAndMetadata{offset=2, metadata=''},
demo-12=OffsetAndMetadata{offset=2, metadata=''}} failed: Offset
commit failed with a retriable exception. You should retry committing
offsets. The underlying error was: The coordinator is not available.
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2018-01-09 13:39:34,219] WARN [Consumer clientId=consumer-1,
groupId=test] Connection to node 1 could not be established. Broker
may not be available. (org.apache.kafka.clients.NetworkClient)
Log of the consumer after starting the missing broker again:
[2018-01-09 13:41:21,739] ERROR [Consumer clientId=consumer-1,
groupId=test] Offset commit failed on partition demo-0 at offset 3:
This is not the correct coordinator.
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2018-01-09 13:41:21,739] WARN [Consumer clientId=consumer-1,
groupId=test] Asynchronous auto-commit of offsets
{demo-0=OffsetAndMetadata{offset=3, metadata=''},
demo-1=OffsetAndMetadata{offset=3, metadata=''},
demo-2=OffsetAndMetadata{offset=2, metadata=''},
demo-3=OffsetAndMetadata{offset=2, metadata=''},
demo-4=OffsetAndMetadata{offset=1, metadata=''},
demo-5=OffsetAndMetadata{offset=1, metadata=''},
demo-6=OffsetAndMetadata{offset=3, metadata=''},
demo-7=OffsetAndMetadata{offset=2, metadata=''},
demo-8=OffsetAndMetadata{offset=3, metadata=''},
demo-9=OffsetAndMetadata{offset=2, metadata=''},
demo-10=OffsetAndMetadata{offset=3, metadata=''},
demo-11=OffsetAndMetadata{offset=2, metadata=''},
demo-12=OffsetAndMetadata{offset=2, metadata=''}} failed: Offset
commit failed with a retriable exception. You should retry committing
offsets. The underlying error was: This is not the correct
coordinator.
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2018-01-09 13:41:22,353] ERROR [Consumer clientId=consumer-1,
groupId=test] Offset commit failed on partition demo-0 at offset 3:
This is not the correct coordinator.
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2018-01-09 13:41:22,354] WARN [Consumer clientId=consumer-1,
groupId=test] Asynchronous auto-commit of offsets
{demo-0=OffsetAndMetadata{offset=3, metadata=''},
demo-1=OffsetAndMetadata{offset=3, metadata=''},
demo-2=OffsetAndMetadata{offset=2, metadata=''},
demo-3=OffsetAndMetadata{offset=2, metadata=''},
demo-4=OffsetAndMetadata{offset=1, metadata=''},
demo-5=OffsetAndMetadata{offset=1, metadata=''},
demo-6=OffsetAndMetadata{offset=3, metadata=''},
demo-7=OffsetAndMetadata{offset=2, metadata=''},
demo-8=OffsetAndMetadata{offset=3, metadata=''},
demo-9=OffsetAndMetadata{offset=2, metadata=''},
demo-10=OffsetAndMetadata{offset=3, metadata=''},
demo-11=OffsetAndMetadata{offset=3, metadata=''},
demo-12=OffsetAndMetadata{offset=2, metadata=''}} failed: Offset
commit failed with a retriable exception. You should retry committing
offsets. The underlying error was: This is not the correct
coordinator.
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
Thanks

Check "offsets.topic.replication.factor" in each server-*.properties file. The internal __consumer_offsets topic (which stores group offsets and hosts the group coordinator) is created with this replication factor; if it is 1, the consumer group stops working whenever the single broker holding it goes down.
For example:
############################# Internal Topic Settings
# The replication factor for the group metadata internal topics
# For anything other than development testing, a value greater than 1 is recommended to ensure availability, such as 3.
offsets.topic.replication.factor=3
http://kafka.apache.org/documentation/#brokerconfigs
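The "node 2147483646" in the warnings above is the consumer's dedicated connection to its group coordinator, not a real broker id: Kafka derives that connection id as Integer.MAX_VALUE minus the coordinator's broker.id, and picks the coordinator from one partition of __consumer_offsets. A small Python sketch of that arithmetic (not Kafka's actual code, just a mirror of its logic; it assumes the default of 50 partitions for __consumer_offsets):

```python
INT_MAX = 2**31 - 1  # Java Integer.MAX_VALUE


def java_string_hashcode(s):
    """Java's String.hashCode(), reduced to a signed 32-bit int."""
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    return h - 2**32 if h >= 2**31 else h


def coordinator_partition(group_id, num_partitions=50):
    """__consumer_offsets partition whose leader acts as the group's coordinator.

    50 is the default offsets.topic.num.partitions.
    """
    return abs(java_string_hashcode(group_id)) % num_partitions


def coordinator_connection_id(broker_id):
    """Id of the consumer's dedicated coordinator connection in NetworkClient logs."""
    return INT_MAX - broker_id


print(coordinator_partition("test"))   # group "test" maps to partition 48
print(coordinator_connection_id(1))    # 2147483646, as in the log above
```

So the log shows the consumer failing to reach the coordinator on broker 1; with offsets.topic.replication.factor=1 there is no replica left to take over that partition.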

Setting KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR (the container-environment form of offsets.topic.replication.factor) in the yml file solves this issue.
E.g. using 2 workers on docker-swarm:
environment:
  KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 2

Related

KAFKA connect EOFException when running connect-standalone with confluent broker and source connector

I am trying to run a connect-standalone application with a sample source connector (FileSource or JDBC connector).
I am getting constantly repeating error messages like:
[2022-11-14 18:33:09,641] DEBUG [local-file-source|task-0] [Producer clientId=connector-producer-local-file-source-0] Connection with xxxx.westeurope.azure.confluent.cloud/ (channelId=-1) disconnected (org.apache.kafka.common.network.Selector:606)
java.io.EOFException
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576)
at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560)
at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243)
at java.base/java.lang.Thread.run(Thread.java:1589)
[2022-11-14 18:33:09,643] INFO [local-file-source|task-0] [Producer clientId=connector-producer-local-file-source-0] Node -1 disconnected. (org.apache.kafka.clients.NetworkClient:935)
[2022-11-14 18:33:09,645] DEBUG [local-file-source|task-0] [Producer clientId=connector-producer-local-file-source-0] Cancelled in-flight API_VERSIONS request with correlation id 0 due to node -1 being disconnected (elapsed time since creation: 34ms, elapsed time since send: 34ms, request timeout: 30000ms): ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.2.1') (org.apache.kafka.clients.NetworkClient:335)
[2022-11-14 18:33:09,647] WARN [local-file-source|task-0] [Producer clientId=connector-producer-local-file-source-0] Bootstrap broker xxxx.westeurope.azure.confluent.cloud:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient:1063)
[2022-11-14 18:33:09,757] DEBUG [local-file-source|task-0] [Producer clientId=connector-producer-local-file-source-0] Initialize connection to node xxxx.westeurope.azure.confluent.cloud:9092 (id: -1 rack: null) for sending metadata request (org.apache.kafka.clients.NetworkClient:1160)
[2022-11-14 18:33:09,758] DEBUG [local-file-source|task-0] Resolved host xxxx.westeurope.azure.confluent.cloud as (org.apache.kafka.clients.ClientUtils:113)
[2022-11-14 18:33:09,758] DEBUG [local-file-source|task-0] [Producer clientId=connector-producer-local-file-source-0] Initiating connection to node xxxx.westeurope.azure.confluent.cloud:9092 (id: -1 rack: null) using address xxxx.westeurope.azure.confluent.cloud/ (org.apache.kafka.clients.NetworkClient:989)
[2022-11-14 18:33:09,787] DEBUG [local-file-source|task-0] [Producer clientId=connector-producer-local-file-source-0] Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 (org.apache.kafka.common.network.Selector:531)
[2022-11-14 18:33:09,788] DEBUG [local-file-source|task-0] [Producer clientId=connector-producer-local-file-source-0] Completed connection to node -1. Fetching API versions. (org.apache.kafka.clients.NetworkClient:951)
[2022-11-14 18:33:09,789] DEBUG [local-file-source|task-0] [Producer clientId=connector-producer-local-file-source-0] Initiating API versions fetch from node -1. (org.apache.kafka.clients.NetworkClient:965)
[2022-11-14 18:33:09,789] DEBUG [local-file-source|task-0] [Producer clientId=connector-producer-local-file-source-0] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=connector-producer-local-file-source-0, correlationId=1) and timeout 30000 to node -1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.2.1') (org.apache.kafka.clients.NetworkClient:521)
[2022-11-14 18:33:09,817] DEBUG [local-file-source|task-0] [Producer clientId=connector-producer-local-file-source-0] Connection with xxxx.westeurope.azure.confluent.cloud/ (channelId=-1) disconnected
I can create a topic with the kafka-topics.sh command, write messages to the topic through the console producer, and read from the topic through the console consumer as well as with connect-standalone using sink connectors.
If I run the Kafka server and ZooKeeper locally, everything seems to work fine.
Command line:
bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties
connect-standalone.properties
bootstrap.servers=pkc-pj9zy.westeurope.azure.confluent.cloud:9092
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000
ssl.endpoint.identification.algorithm=https
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="name" password="passphrase";
ssl.protocol=TLSv1.2
ssl.enabled.protocols=TLSv1.2
plugin.path=./plugins,./libs
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
connect-file-source.properties
name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=test.txt
topic=test_ksql_af_file_source-test
auto.create=true
auto.evolve=true
Got it sorted out. I was missing the properties
producer.security.protocol=SASL_SSL
producer.sasl.mechanism=PLAIN
producer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="name" password="passphrase";
in my settings.
Unfortunately, the log did not give a hint about this.
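For anyone hitting the same wall: the worker-level security settings cover only the Connect worker's own connections, while each connector's embedded producer or consumer reads the producer.- or consumer.-prefixed copies from the worker config. For sink connectors the analogous consumer-side properties would be (a sketch, assuming the same SASL credentials as above):

```properties
consumer.security.protocol=SASL_SSL
consumer.sasl.mechanism=PLAIN
consumer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="name" password="passphrase";
```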

Kafka consumer disconnected from kafka cli

I am consuming Kafka messages from the Kafka CLI with the following command:
kafka-console-consumer.bat --bootstrap-server kafka:9092 --from-beginning --topic mytopic --consumer.config ....\config\consumer.properties
and I am getting the following errors:
[2022-09-23 15:59:33,175] WARN [Consumer clientId=console-consumer, groupId=group] Bootstrap broker kafka:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2022-09-23 15:59:54,344] WARN [Consumer clientId=console-consumer, groupId=group] Connection to node -1 (kafka:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2022-09-23 15:59:54,345] WARN [Consumer clientId=console-consumer, groupId=group] Bootstrap broker kafka:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2022-09-23 16:00:15,634] WARN [Consumer clientId=console-consumer, groupId=group] Connection to node -1 (kafka:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
I have updated server.properties with the following configuration:
port = 9092
advertised.host.name = kafka
KAFKA_CFG_LISTENERS=PLAINTEXT://kafka:9092
advertised.listeners=PLAINTEXT://kafka:9092
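A note on the configuration above: KAFKA_CFG_LISTENERS is a container-image environment variable (Bitnami-style), not a server.properties key, and advertised.host.name is deprecated in favor of advertised.listeners. In a plain server.properties, the equivalent settings would look roughly like this (a sketch, assuming the broker should bind on all interfaces and be reachable by clients under the kafka hostname on port 9092):

```properties
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://kafka:9092
```

The client machine must also be able to resolve kafka (e.g. via DNS or a hosts-file entry), since that is the name the broker hands back in metadata.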

Steps to delete data inside the Kafka Topic on Windows?

I am working on Spring Batch and Apache Kafka integration. Before posting this question I searched the web, including Is there a way to delete all the data from a topic or delete the topic before every run?, but did not find a solution that works for me.
I am using Kafka version 2.11.
I want to delete all data under the topic without stopping either ZooKeeper or Kafka. How can we do that?
The commands below cause a lot of issues on Windows:
C:\kafka_2.11-2.3.1\bin\windows>kafka-topics.bat --zookeeper localhost:2181 --delete --topic customers
Topic customers is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
C:\kafka_2.11-2.3.1\bin\windows>kafka-topics.bat --zookeeper localhost:2181 --delete --topic test
C:\kafka_2.11-2.3.1\bin\windows>kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic customers --from-beginning
[2020-04-21 10:25:02,812] WARN [Consumer clientId=consumer-1, groupId=console-consumer-65075] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-04-21 10:25:04,886] WARN [Consumer clientId=consumer-1, groupId=console-consumer-65075] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-04-21 10:25:06,996] WARN [Consumer clientId=consumer-1, groupId=console-consumer-65075] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-04-21 10:25:09,267] WARN [Consumer clientId=consumer-1, groupId=console-consumer-65075] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-04-21 10:25:11,744] WARN [Consumer clientId=consumer-1, groupId=console-consumer-65075] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
Processed a total of 0 messages
Terminate batch job (Y/N)?
^C
C:\kafka_2.11-2.3.1\bin\windows>
I am using Kafka version 2.11.
There is no Kafka 2.11. Your command prompt says kafka_2.11-2.3.1: hence, you are using Kafka 2.3.1. The 2.11 part is the Scala version that was used during compilation.
Note: This will have no impact if delete.topic.enable is not set to true.
Did you check in your broker configs whether delete.topic.enable is set to true? If it is, you should be able to delete a topic without stopping ZooKeeper or the brokers. Note though that deleting a topic is asynchronous, i.e., when your command returns the topic is not deleted yet; it takes some time until the deletion is executed.
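Since Kafka 1.0.0 (and therefore in the 2.3.1 in use here) delete.topic.enable already defaults to true, so it only needs attention if it was explicitly overridden. If it was, a minimal server.properties fragment to restore it (brokers read this at startup):

```properties
# Allow topics to be deleted via kafka-topics.sh/.bat --delete
delete.topic.enable=true
```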

Kafka send to azure event hub

I've set up Kafka on my machine and I'm trying to set up MirrorMaker to consume from a local topic and mirror it to an Azure Event Hub, but so far I've been unable to do so and I get the following error:
ERROR Error when sending message to topic dev-eh-kafka-test with key: null, value: 5 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
After some time I realized that this must be on the producer side, so I tried to simply use the kafka-console-producer tool directly against the Event Hub and got the same error.
Here is my producer settings file:
bootstrap.servers=dev-we-eh-feed.servicebus.windows.net:9093
compression.type=none
max.block.ms=0
# for event hub
sasl.mechanism=PLAIN
security.protocol=SASL_SSL
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://dev-we-eh-feed.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=*****";
Here is the command to spin the producer:
kafka-console-producer.bat --broker-list dev-we-eh-feed.servicebus.windows.net:9093 --topic dev-eh-kafka-test
My event hub namespace has an event hub named dev-eh-kafka-test.
Has anyone been able to do this? Eventually the idea would be to secure this with an SSL certificate, but first I need to get the connection working.
I tried using both Apache Kafka 1.1.1 and Confluent Kafka 4.1.3 (the version the client is using).
==== UPDATE 1
Someone showed me how to get more logs, and this seems to be the detailed version of the error:
[2020-02-28 17:32:08,010] DEBUG [Producer clientId=console-producer] Initialize connection to node dev-we-eh-feed.servicebus.windows.net:9093 (id: -1 rack: null) for sending metadata request (org.apache.kafka.clients.NetworkClient)
[2020-02-28 17:32:08,010] DEBUG [Producer clientId=console-producer] Initiating connection to node dev-we-eh-feed.servicebus.windows.net:9093 (id: -1 rack: null) (org.apache.kafka.clients.NetworkClient)
[2020-02-28 17:32:08,010] DEBUG [Producer clientId=console-producer] Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 102400, SO_TIMEOUT = 0 to node -1 (org.apache.kafka.common.network.Selector)
[2020-02-28 17:32:08,010] DEBUG [Producer clientId=console-producer] Completed connection to node -1. Fetching API versions. (org.apache.kafka.clients.NetworkClient)
[2020-02-28 17:32:08,010] DEBUG [Producer clientId=console-producer] Initiating API versions fetch from node -1. (org.apache.kafka.clients.NetworkClient)
[2020-02-28 17:32:08,010] DEBUG [Producer clientId=console-producer] Connection with dev-we-eh-feed.servicebus.windows.net/51.144.238.23 disconnected (org.apache.kafka.common.network.Selector)
java.io.EOFException
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:124)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:93)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:235)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:196)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:559)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:495)
at org.apache.kafka.common.network.Selector.poll(Selector.java:424)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:460)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:239)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:163)
at java.base/java.lang.Thread.run(Thread.java:830)
[2020-02-28 17:32:08,010] DEBUG [Producer clientId=console-producer] Node -1 disconnected. (org.apache.kafka.clients.NetworkClient)
So here is the configuration that worked (it seems I was missing client.id).
Also, it seems you cannot choose the destination topic: it must have the same name as the source.
bootstrap.servers=dev-we-eh-feed.servicebus.windows.net:9093
client.id=mirror_maker_producer
request.timeout.ms=60000
sasl.mechanism=PLAIN
security.protocol=SASL_SSL
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://dev-we-eh-feed.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=******";

Read Docker Kafka messages populated by debezium postgres connect

I am using the Debezium Postgres connector. I have two tables in Postgres named 'publications' and 'comments'. Kafka and ZooKeeper are running in Docker containers as per the standard examples; Postgres is running locally. After running the Debezium Postgres connector, I have the following topics:
$ bin/kafka-topics.sh --list --zookeeper localhost:2181
__consumer_offsets
dbserver1.public.comments
dbserver1.public.publications
my_connect_configs
my_connect_offsets
my_connect_statuses
I would like to see a list of messages in the topic:
$ bin/kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic dbserver1.public.publications
[2019-06-03 21:55:16,180] WARN [Consumer clientId=consumer-1,
groupId=console-consumer-5221] Connection to node -1
(kafka/23.202.231.166:9092) could not be established. Broker may not
be available. (org.apache.kafka.clients.NetworkClient) [2019-06-03
21:55:16,289] WARN [Consumer clientId=consumer-1,
groupId=console-consumer-5221] Connection to node -1
(kafka/23.202.231.166:9092) could not be established. Broker may not
be available. (org.apache.kafka.clients.NetworkClient) [2019-06-03
21:55:16,443] WARN [Consumer clientId=consumer-1,
groupId=console-consumer-5221] Connection to node -1
(kafka/23.202.231.166:9092) could not be established. Broker may not
be available. (org.apache.kafka.clients.NetworkClient) [2019-06-03
21:55:16,721] WARN [Consumer clientId=consumer-1,
groupId=console-consumer-5221] Connection to node -1
(kafka/23.202.231.166:9092) could not be established. Broker may not
be available. (org.apache.kafka.clients.NetworkClient) [2019-06-03
21:55:17,145] WARN [Consumer clientId=consumer-1,
groupId=console-consumer-5221] Connection to node -1
(kafka/23.202.231.166:9092) could not be established. Broker may not
be available. (org.apache.kafka.clients.NetworkClient) [2019-06-03
21:55:18,017] WARN [Consumer clientId=consumer-1,
groupId=console-consumer-5221] Connection to node -1
(kafka/23.202.231.166:9092) could not be established. Broker may not
be available. (org.apache.kafka.clients.NetworkClient) ^CProcessed a
total of 0 messages
$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic dbserver1.public.publications
[2019-06-03 21:55:16,180] WARN [Consumer clientId=consumer-1,
groupId=console-consumer-5221] Connection to node -1
(kafka/23.202.231.166:9092) could not be established. Broker may not
be available. (org.apache.kafka.clients.NetworkClient)
How do I specify the correct value for bootstrap-server? Thanks.
I am assuming you are trying to connect to the Kafka broker from an external server.
Since your Kafka and ZooKeeper instances are running from Docker images, you need to identify the external port corresponding to 9092 as well as the external IP address, and pass those with the --bootstrap-server parameter when executing kafka-console-consumer.sh.
If you are running kafka-console-consumer.sh outside of Docker, you should use the localhost hostname (with the port published by the container). If running it inside a Docker container, make sure that container is on a network that can resolve the kafka hostname.
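One common pattern that makes both cases work is to advertise two listeners, one for the Docker network and one for the host. A sketch in broker-property form (the names INTERNAL/EXTERNAL and the port 29092 are arbitrary choices here; adjust them to the ports the compose file actually publishes):

```properties
listeners=INTERNAL://0.0.0.0:29092,EXTERNAL://0.0.0.0:9092
advertised.listeners=INTERNAL://kafka:29092,EXTERNAL://localhost:9092
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL
```

Containers then bootstrap against kafka:29092, while kafka-console-consumer.sh on the host uses localhost:9092, and each client is handed back an address it can actually reach.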