Why is the Flink Kafka client trying to connect to localhost:9092 when it is set up to connect to 172.17.0.1:9092? - scala

I am trying to set up a Flink jobmanager/taskmanager pair with docker-compose using this config:
version: "3.7"
services:
  jobmanagerconfig:
    image: flink:1.13.2-scala_2.12
    expose:
      - "6133"
      - "6123"
    ports:
      - "8085:8081"
    command: standalone-job --job-classname net.mongerbot.configManager.App
    volumes:
      - ./usrlib/:/opt/flink/usrlib
    environment:
      - |
        FLINK_PROPERTIES=
        jobmanager.rpc.address: jobmanagerconfig
        parallelism.default: 2
        taskmanager.numberOfTaskSlots: 4
      - KAFKA_URI=${KAFKA_URI}
      - KAFKA_PORT=${KAFKA_PORT}
      - KAFKA_groupId=${KAFKA_groupId}
  taskmanagerconfig:
    image: flink:1.13.2-scala_2.12
    depends_on:
      - jobmanagerconfig
    links:
      - jobmanagerconfig
    command: taskmanager
    # scale: 1
    volumes:
      - ./usrlib/:/opt/flink/usrlib
    environment:
      - KAFKA_URI=${KAFKA_URI}
      - KAFKA_PORT=${KAFKA_PORT}
      - KAFKA_groupId=${KAFKA_groupId}
      - |
        FLINK_PROPERTIES=
        jobmanager.rpc.address: jobmanagerconfig
        parallelism.default: 2
        taskmanager.numberOfTaskSlots: 4
volumes:
  usrlib:
networks:
  default:
    external:
      name: mongerbot_network
The environment variables have the correct value in both containers.
As the log shows, the Kafka client is indeed configured to connect to 172.17.0.1:9092:
docker-taskmanagerconfig-1 | 2022-12-08 09:36:56,065 INFO org.apache.kafka.clients.consumer.ConsumerConfig [] - ConsumerConfig values:
docker-taskmanagerconfig-1 | allow.auto.create.topics = true
docker-taskmanagerconfig-1 | auto.commit.interval.ms = 5000
docker-taskmanagerconfig-1 | auto.offset.reset = latest
docker-taskmanagerconfig-1 | bootstrap.servers = [172.17.0.1:9092]
docker-taskmanagerconfig-1 | check.crcs = true
docker-taskmanagerconfig-1 | client.dns.lookup = default
docker-taskmanagerconfig-1 | client.id =
docker-taskmanagerconfig-1 | client.rack =
docker-taskmanagerconfig-1 | connections.max.idle.ms = 540000
docker-taskmanagerconfig-1 | default.api.timeout.ms = 60000
docker-taskmanagerconfig-1 | enable.auto.commit = true
docker-taskmanagerconfig-1 | exclude.internal.topics = true
...
but these are the log lines immediately after the Kafka client config dump:
docker-taskmanagerconfig-1 | value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
docker-taskmanagerconfig-1 |
docker-taskmanagerconfig-1 | 2022-12-08 09:36:56,084 INFO org.apache.kafka.clients.consumer.KafkaConsumer [] - [Consumer clientId=consumer-configManager-7, groupId=configManager] Subscribed to partition(s): config.subscribe-0, config.subscribe-2
docker-taskmanagerconfig-1 | 2022-12-08 09:36:56,090 INFO org.apache.kafka.clients.Metadata [] - [Consumer clientId=consumer-configManager-7, groupId=configManager] Cluster ID: s2iVODWcQ2Kbw4R5jL6RCw
docker-taskmanagerconfig-1 | 2022-12-08 09:36:56,091 INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator [] - [Consumer clientId=consumer-configManager-7, groupId=configManager] Discovered group coordinator localhost:9092 (id: 2147483646 rack: null)
docker-taskmanagerconfig-1 | 2022-12-08 09:36:56,094 WARN org.apache.kafka.clients.NetworkClient [] - [Consumer clientId=consumer-configManager-7, groupId=configManager] Connection to node 2147483646 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
docker-taskmanagerconfig-1 | 2022-12-08 09:36:56,094 INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator [] - [Consumer clientId=consumer-configManager-7, groupId=configManager] Group coordinator localhost:9092 (id: 2147483646 rack: null) is unavailable or invalid, will attempt rediscovery
docker-taskmanagerconfig-1 | 2022-12-08 09:36:56,094 INFO org.apache.kafka.common.utils.AppInfoParser [] - Kafka version: 2.4.1
docker-taskmanagerconfig-1 | 2022-12-08 09:36:56,095 INFO org.apache.kafka.common.utils.AppInfoParser [] - Kafka commitId: c57222ae8cd7866b
docker-taskmanagerconfig-1 | 2022-12-08 09:36:56,095 INFO org.apache.kafka.common.utils.AppInfoParser [] - Kafka startTimeMs: 1670492216094
docker-taskmanagerconfig-1 | 2022-12-08 09:36:56,096 INFO org.apache.kafka.clients.consumer.KafkaConsumer [] - [Consumer clientId=consumer-configManager-8, groupId=configManager] Subscribed to partition(s): config.subscribe-1
docker-taskmanagerconfig-1 | 2022-12-08 09:36:56,101 INFO org.apache.kafka.clients.Metadata [] - [Consumer clientId=consumer-configManager-8, groupId=configManager] Cluster ID: s2iVODWcQ2Kbw4R5jL6RCw
docker-taskmanagerconfig-1 | 2022-12-08 09:36:56,102 INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator [] - [Consumer clientId=consumer-configManager-8, groupId=configManager] Discovered group coordinator localhost:9092 (id: 2147483646 rack: null)
docker-taskmanagerconfig-1 | 2022-12-08 09:36:56,103 WARN org.apache.kafka.clients.NetworkClient [] - [Consumer clientId=consumer-configManager-8, groupId=configManager] Connection to node 2147483646 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
docker-taskmanagerconfig-1 | 2022-12-08 09:36:56,104 INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator [] - [Consumer clientId=consumer-configManager-8, groupId=configManager] Group coordinator localhost:9092 (id: 2147483646 rack: null) is unavailable or invalid, will attempt rediscovery
docker-taskmanagerconfig-1 | 2022-12-08 09:36:56,197 WARN org.apache.kafka.clients.NetworkClient [] - [Consumer clientId=consumer-configManager-7, groupId=configManager] Connection to node 1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
docker-taskmanagerconfig-1 | 2022-12-08 09:36:56,207 WARN org.apache.kafka.clients.NetworkClient
As you can see, it is trying to connect to localhost:9092.

There's actually no problem with what you're doing, as indicated in the logs:
Discovered group coordinator localhost:9092
So it is able to connect successfully. Now, why do you see 172.17.0.1 in the first place, and then localhost inside your Kafka client logs? localhost is simply what ends up in the environment you passed to the application runtime. As for the IP: the raw name localhost that you provided in the configuration needs to be resolved into some IP address, and you're not running this natively on your own machine, you're using Docker. 172.17.0.1 happens to be the gateway address of your machine's Docker daemon, i.e. the Docker host as seen from inside containers. You can verify this in many ways; I'll link a post here to read more.

This problem was not related to Flink or the Kafka consumer; it was the Kafka server itself.
The broker should have been configured to advertise 172.17.0.1, but it was set up to advertise only kafka and localhost:
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
I changed PLAINTEXT_HOST://localhost:9092 to PLAINTEXT_HOST://172.17.0.1:9092 and that fixed it.
(It was confusing because other clients, such as Conduktor, could connect to Kafka at 172.17.0.1:9092 even though this address was not in KAFKA_ADVERTISED_LISTENERS.)
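For reference, the whole fix comes down to this one line in the broker's environment (a minimal sketch; the rest of the compose file stays as above, and 172.17.0.1 is assumed to be the Docker bridge gateway on this particular host):

```yaml
kafka:
  environment:
    # Advertise the bridge-gateway address instead of localhost so that
    # clients bootstrapping through the host can still reach the broker
    # after the initial metadata exchange.
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://172.17.0.1:9092
```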

Related

How to solve Invalid value javax.net.ssl.SSLHandshakeException:for confluent-platform in docker?

Below is my docker-compose file for adding Confluent Platform broker security.
I have tried many approaches, but it is not working.
I created the keys on my local machine and mounted them into Docker; that works fine, but I don't understand the exception.
---
version: "2.3"
services:
  zookeeper1:
    image: confluentinc/cp-zookeeper:5.5.1
    restart: always
    hostname: zookeeper1
    container_name: zookeeper1
    ports:
      - 2181:2181
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  kafka1:
    image: confluentinc/cp-server:5.5.1
    hostname: kafka1
    container_name: kafka1
    depends_on:
      - zookeeper1
    volumes:
      - //d/tmp/keys/:/etc/kafka/secrets
    ports:
      - 8091:8091
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper1:2181
      KAFKA_BROKER_ID: 0
      KAFKA_BROKER_RACK: "r1"
      KAFKA_DELETE_TOPIC_ENABLE: "true"
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
      KAFKA_DEFAULT_REPLICATION_FACTOR: 1
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: SSL:SSL
      KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SSL
      KAFKA_LISTENERS: SSL://kafka1:8091
      KAFKA_ADVERTISED_LISTENERS: SSL://kafka1:8091
      KAFKA_SSL_PROTOCOL: "TLSV1.2"
      KAFKA_SSL_KEYSTORE_FILENAME: kafka.broker.keystore.jks
      KAFKA_SSL_KEYSTORE_CREDENTIALS: password.txt
      KAFKA_SSL_KEY_CREDENTIALS: password.txt
      KAFKA_SSL_TRUSTSTORE_FILENAME: kafka.broker.truststore.jks
      KAFKA_SSL_TRUSTSTORE_CREDENTIALS: password.txt
      KAFKA_SSL_CLIENT_AUTH: "none"
      KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: ""
But in the log I am seeing that the truststore is not detected:
ssl.key.password = [hidden]
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = /etc/kafka/secrets/kafka.broker.keystore.jks
ssl.keystore.password = [hidden]
ssl.keystore.type = JKS
ssl.principal.mapping.rules = DEFAULT
ssl.protocol = TLSV1.2
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
And I am facing the error below:
[2021-03-10 07:45:20,758] INFO Awaiting socket connections on 0.0.0.0:8091. (kafka.network.Acceptor)
[2021-03-10 07:45:21,049] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: org.apache.kafka.common.config.ConfigException: Invalid value javax.net.ssl.SSLHandshakeException: General SSLEngine problem for configuration A client SSLEngine created with the provided settings can't connect to a server SSLEngine created with those settings.
at org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:77)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:157)
at org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:97)
at kafka.network.Processor.<init>(SocketServer.scala:769)
at kafka.network.SocketServer.newProcessor(SocketServer.scala:396)
at kafka.network.SocketServer.$anonfun$addDataPlaneProcessors$1(SocketServer.scala:281)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158)
at kafka.network.SocketServer.addDataPlaneProcessors(SocketServer.scala:280)
at kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1(SocketServer.scala:243)
at kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1$adapted(SocketServer.scala:240)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at kafka.network.SocketServer.createDataPlaneAcceptorsAndProcessors(SocketServer.scala:240)
at kafka.network.SocketServer.startup(SocketServer.scala:123)
at kafka.server.KafkaServer.startup(KafkaServer.scala:406)
at io.confluent.support.metrics.SupportedServerStartable.startup(SupportedServerStartable.java:140)
at io.confluent.support.metrics.SupportedKafka.main(SupportedKafka.java:66)
Caused by: org.apache.kafka.common.config.ConfigException: Invalid value javax.net.ssl.SSLHandshakeException: General SSLEngine problem for configuration A client SSLEngine created with the provided settings can't connect to a server SSLEngine created with those settings.
at org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:111)
at org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:75)
... 17 more
[2021-03-10 07:45:21,051] INFO [KafkaServer id=0] shutting down (kafka.server.KafkaServer)
[2021-03-10 07:45:21,052] INFO [SocketServer brokerId=0] Stopping socket server request processors (kafka.network.SocketServer)
[2021-03-10 07:45:21,053] INFO [SocketServer brokerId=0] Stopped socket server request processors (kafka.network.SocketServer)
[2021-03-10 07:45:21,059] INFO Shutting down. (kafka.log.LogManager)
[2021-03-10 07:45:21,060] INFO Shutting down the log cleaner. (kafka.log.LogCleaner)
[2021-03-10 07:45:21,061] INFO [kafka-log-cleaner-thread-0]: Shutting down (kafka.log.LogCleaner)
[2021-03-10 07:45:21,061] INFO [kafka-log-cleaner-thread-0]: Stopped (kafka.log.LogCleaner)
[2021-03-10 07:45:21,061] INFO [kafka-log-cleaner-thread-0]: Shutdown completed (kafka.log.LogCleaner)
[2021-03-10 07:45:21,066] INFO Shutdown complete. (kafka.log.LogManager)
[2021-03-10 07:45:21,067] INFO [ZooKeeperClient Kafka server] Closing. (kafka.zookeeper.ZooKeeperClient)
[2021-03-10 07:45:21,177] INFO Session: 0x10004be2a200006 closed (org.apache.zookeeper.ZooKeeper)
[2021-03-10 07:45:21,177] INFO EventThread shut down for session: 0x10004be2a200006 (org.apache.zookeeper.ClientCnxn)
[2021-03-10 07:45:21,180] INFO [ZooKeeperClient Kafka server] Closed. (kafka.zookeeper.ZooKeeperClient)
[2021-03-10 07:45:21,181] INFO [ThrottledChannelReaper-Fetch]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2021-03-10 07:45:21,383] INFO [ThrottledChannelReaper-Fetch]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2021-03-10 07:45:21,383] INFO [ThrottledChannelReaper-Fetch]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2021-03-10 07:45:21,383] INFO [ThrottledChannelReaper-Produce]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2021-03-10 07:45:22,383] INFO [ThrottledChannelReaper-Produce]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2021-03-10 07:45:22,383] INFO [ThrottledChannelReaper-Produce]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2021-03-10 07:45:22,384] INFO [ThrottledChannelReaper-Request]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2021-03-10 07:45:22,385] INFO [ThrottledChannelReaper-Request]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2021-03-10 07:45:22,385] INFO [ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2021-03-10 07:45:22,389] INFO [SocketServer brokerId=0] Shutting down socket server (kafka.network.SocketServer)
[2021-03-10 07:45:22,478] INFO [SocketServer brokerId=0] Shutdown completed (kafka.network.SocketServer)
[2021-03-10 07:45:22,485] INFO [KafkaServer id=0] shut down completed (kafka.server.KafkaServer)
[2021-03-10 07:45:22,487] INFO Shutting down SupportedServerStartable (io.confluent.support.metrics.SupportedServerStartable)
[2021-03-10 07:45:22,487] INFO Closing BaseMetricsReporter (io.confluent.support.metrics.BaseMetricsReporter)
[2021-03-10 07:45:22,487] INFO Waiting for metrics thread to exit (io.confluent.support.metrics.SupportedServerStartable)
[2021-03-10 07:45:22,487] INFO Shutting down KafkaServer (io.confluent.support.metrics.SupportedServerStartable)
[2021-03-10 07:45:22,487] INFO [KafkaServer id=0] shutting down (kafka.server.KafkaServer)
Can anyone explain why this exception occurs and how to overcome it?

Failed to send HTTP request to schema-registry

I am trying to set up a local kafka-connect stack with docker-compose, and I have a problem with my Scala producer that's supposed to send Avro messages to a Kafka topic using the Schema Registry.
In my producer (scala) code I do the following:
val kafkaBootstrapServer = "kafka:9092"
val schemaRegistryUrl = "http://schema-registry:8081"
val topicName = "test"
val props = new Properties()
props.put("bootstrap.servers", kafkaBootstrapServer)
props.put("schema.registry.url", schemaRegistryUrl)
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer")
props.put("acks", "1")
and my docker-compose script reads:
---
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.5.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  kafka:
    image: confluentinc/cp-kafka:5.5.0
    hostname: kafka
    container_name: kafka
    depends_on:
      - zookeeper
    ports:
      - 9092:9092
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafka:9092"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_CREATE_TOPICS: "test:1:1"
  schema-registry:
    image: confluentinc/cp-schema-registry:5.5.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - zookeeper
      - kafka
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: zookeeper:2181
  producer:
    image: producer-app:1.0
    depends_on:
      - schema-registry
      - kafka
EDIT: now the schema registry seems to be up:
schema-registry | [2021-01-17 22:27:27,704] INFO HV000001: Hibernate Validator 6.0.17.Final (org.hibernate.validator.internal.util.Version)
kafka | [2021-01-17 22:27:27,918] INFO [Controller id=1001] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
kafka | [2021-01-17 22:27:27,919] TRACE [Controller id=1001] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
kafka | [2021-01-17 22:27:27,923] DEBUG [Controller id=1001] Topics not in preferred replica for broker 1001 Map() (kafka.controller.KafkaController)
kafka | [2021-01-17 22:27:27,924] TRACE [Controller id=1001] Leader imbalance ratio for broker 1001 is 0.0 (kafka.controller.KafkaController)
schema-registry | [2021-01-17 22:27:28,010] INFO JVM Runtime does not support Modules (org.eclipse.jetty.util.TypeUtil)
schema-registry | [2021-01-17 22:27:28,011] INFO Started o.e.j.s.ServletContextHandler#22d6f11{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
schema-registry | [2021-01-17 22:27:28,035] INFO Started o.e.j.s.ServletContextHandler#15eebbff{/ws,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
schema-registry | [2021-01-17 22:27:28,058] INFO Started NetworkTrafficServerConnector#2698dc7{HTTP/1.1,[http/1.1]}{0.0.0.0:8081} (org.eclipse.jetty.server.AbstractConnector)
schema-registry | [2021-01-17 22:27:28,059] INFO Started #4137ms (org.eclipse.jetty.server.Server)
schema-registry | [2021-01-17 22:27:28,059] INFO Server started, listening for requests... (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain)
but prior to this, during the execution of the script I get:
schema-registry | ===> Launching ...
schema-registry | ===> Launching schema-registry ...
producer_1 | [main] ERROR io.confluent.kafka.schemaregistry.client.rest.RestService - Failed to send HTTP request to endpoint: http://schema-registry:8081/subjects/test-value/versions
producer_1 | java.net.ConnectException: Connection refused (Connection refused)
Could this be due to some dependency issue? It looks like the producer is run before the schema-registry has fully started, even though I did put depends_on: - schema-registry for the producer ...
It looks like the cause here was that your app was trying to call the Schema Registry before it had finished starting up. Perhaps your app should include some error handling for this condition and maybe retry after a backoff period on the first x failures?
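A different way to attack the same race, instead of (or in addition to) app-side retries, is to have compose itself wait: give the schema-registry service a healthcheck and gate the producer on it. This is a hedged sketch, not the poster's file; it assumes the cp-schema-registry image ships curl and that the compose file version in use supports depends_on conditions (e.g. "2.1"):

```yaml
services:
  schema-registry:
    image: confluentinc/cp-schema-registry:5.5.0
    healthcheck:
      # The registry only answers REST calls once it has fully started
      test: ["CMD", "curl", "-f", "http://localhost:8081/subjects"]
      interval: 10s
      timeout: 5s
      retries: 10
  producer:
    image: producer-app:1.0
    depends_on:
      schema-registry:
        condition: service_healthy  # wait for the healthcheck, not just container start
```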
For anyone else going through this, I was getting a connection refused error with my schema registry for the longest time. I have my schema-registry running on a docker container with a separate container for a REST API that is trying to use the schema-registry container. What fixed it for me was changing my connection URL in the REST container from http://localhost:8081 -> http://schema-registry-server:8081.
Connecting to the Schema Registry in the REST container:
schema_registry = SchemaRegistryClient(
    url={
        "url": "http://schema-registry-server:8081"
    },
)
Here's the schema registry part of my docker compose file
# docker-compose.yml
schema-registry-server:
  image: confluentinc/cp-schema-registry
  hostname: schema-registry-server
  container_name: schema-registry-server
  depends_on:
    - kafka
    - zookeeper
  ports:
    - 8081:8081
  environment:
    - SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=kafka:9092
    - SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=zookeeper:32181
    - SCHEMA_REGISTRY_HOST_NAME=schema-registry-server
    - SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081
    - SCHEMA_REGISTRY_DEBUG=true

Broker may not be available in connecting kafka connector with kafka broker in docker compose

I am using Docker containers to run ZooKeeper, Kafka and Connect.
version: '2.1'
services:
  zookeeper:
    image: debezium/zookeeper
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
  kafka:
    image: wurstmeister/kafka
    container_name: kafka-multibinder-1
    ports:
      - "9092:9092"
      - "9094:9094"
    environment:
      - KAFKA_ADVERTISED_HOST_NAME=127.0.0.1
      - KAFKA_ADVERTISED_PORT=9092
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_ADVERTISED_LISTENERS=INSIDE://:9094,OUTSIDE://localhost:9092
      - KAFKA_LISTENERS=INSIDE://:9094,OUTSIDE://:9092
      - KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      - KAFKA_INTER_BROKER_LISTENER_NAME=INSIDE
    depends_on:
      - zookeeper
  kafka-connect:
    image: debezium/connect
    hostname: kafka-connect
    ports:
      - 8083:8083
    depends_on:
      - kafka
    environment:
      BOOTSTRAP_SERVERS: kafka:9092
      GROUP_ID: 1
      CONFIG_STORAGE_TOPIC: my_connect_configs
      OFFSET_STORAGE_TOPIC: my_connect_offsets
However, the logs warn me that the connection 'could not be established. Broker may not be available':
kafka-connect_1 | 2020-10-18 04:11:44,671 INFO || Kafka version: 2.5.0 [org.apache.kafka.common.utils.AppInfoParser]
kafka-connect_1 | 2020-10-18 04:11:44,672 INFO || Kafka commitId: 66563e712b0b9f84 [org.apache.kafka.common.utils.AppInfoParser]
kafka-connect_1 | 2020-10-18 04:11:44,674 INFO || Kafka startTimeMs: 1602994304669 [org.apache.kafka.common.utils.AppInfoParser]
kafka-connect_1 | 2020-10-18 04:11:44,806 WARN || [AdminClient clientId=adminclient-1] Connection to node -1 (/127.0.0.1:9092) could not be established. Broker may not be available. [org.apache.kafka.clients.NetworkClient]
kafka-connect_1 | 2020-10-18 04:11:44,918 WARN || [AdminClient clientId=adminclient-1] Connection to node -1 (/127.0.0.1:9092) could not be established. Broker may not be available. [org.apache.kafka.clients.NetworkClient]
kafka-connect_1 | 2020-10-18 04:11:45,024 WARN || [AdminClient clientId=adminclient-1] Connection to node -1 (/127.0.0.1:9092) could not be established. Broker may not be available. [org.apache.kafka.clients.NetworkClient]
kafka-connect_1 | 2020-10-18 04:12:44,713 INFO || [AdminClient clientId=adminclient-1] Metadata update failed [org.apache.kafka.clients.admin.internals.AdminMetadataManager]
kafka-connect_1 | org.apache.kafka.common.errors.TimeoutException: Call(callName=fetchMetadata, deadlineMs=1602994364719) timed out at 9223372036854775807 after 1 attempt(s)
kafka-connect_1 | Caused by: org.apache.kafka.common.errors.TimeoutException: The AdminClient thread has exited.
kafka-connect_1 | 2020-10-18 04:12:44,722 ERROR || Stopping due to error [org.apache.kafka.connect.cli.ConnectDistributed]
kafka-connect_1 | org.apache.kafka.connect.errors.ConnectException: Failed to connect to and describe Kafka cluster. Check worker's broker connection and security properties.
kafka-connect_1 | at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:64)
kafka-connect_1 | at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:45)
kafka-connect_1 | at org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:95)
kafka-connect_1 | at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:78)
kafka-connect_1 | Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Call(callName=listNodes, deadlineMs=1602994364707) timed out at 1602994364708 after 1 attempt(s)
kafka-connect_1 | at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
kafka-connect_1 | at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
kafka-connect_1 | at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
kafka-connect_1 | at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
kafka-connect_1 | at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:58)
kafka-connect_1 | ... 3 more
kafka-connect_1 | Caused by: org.apache.kafka.common.errors.TimeoutException: Call(callName=listNodes, deadlineMs=1602994364707) timed out at 1602994364708 after 1 attempt(s)
kafka-connect_1 | Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
In your Kafka configuration, port 9094 is used for inter-broker communication (the INSIDE listener), but in Kafka Connect you are trying to reach the broker via port 9092.
Because you connect on port 9092, the broker sends localhost:9092 back as metadata for further communication, and localhost:9092 is not reachable from inside the Docker network. Use the Kafka Connect configuration below to connect:
kafka-connect:
  image: debezium/connect
  hostname: kafka-connect
  ports:
    - 8083:8083
  depends_on:
    - kafka
  environment:
    BOOTSTRAP_SERVERS: kafka:9094
    GROUP_ID: 1
    CONFIG_STORAGE_TOPIC: my_connect_configs
    OFFSET_STORAGE_TOPIC: my_connect_offsets
The concept of advertised listeners is a little tricky. Refer to this link if you want to learn more about listeners and advertised listeners.
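As a rough illustration of the distinction (a sketch, not a drop-in file; the INSIDE/OUTSIDE names are arbitrary labels): KAFKA_LISTENERS is where the broker actually binds its sockets, while KAFKA_ADVERTISED_LISTENERS is the set of addresses the broker hands back to clients in metadata, so each advertised address must be reachable by the clients that bootstrap through the matching listener.

```yaml
kafka:
  environment:
    # Bind two sockets: one for traffic inside the compose network, one for the host
    - KAFKA_LISTENERS=INSIDE://:9094,OUTSIDE://:9092
    # What clients are told to use after bootstrap: containers resolve "kafka",
    # host-side clients use localhost with the published port
    - KAFKA_ADVERTISED_LISTENERS=INSIDE://kafka:9094,OUTSIDE://localhost:9092
    - KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
    - KAFKA_INTER_BROKER_LISTENER_NAME=INSIDE
```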

Number of alive brokers '0' does not meet the required replication factor '1' for the offsets topic

I'm using docker-compose to start Kafka and ZooKeeper:
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper:latest
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka:latest
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.1.6
      # KAFKA_ADVERTISED_HOST_NAME: 192.168.3.54
      KAFKA_CREATE_TOPICS: "configs:1:1,domainEvents:1:1,derivedEvents:1:1"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    depends_on:
      - "zookeeper"
but when I look at the Kafka logs I keep getting this error:
[2020-06-11 12:37:00,737] INFO [TransactionCoordinator id=1001] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2020-06-11 12:37:00,749] INFO [TransactionCoordinator id=1001] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2020-06-11 12:37:00,753] INFO [Transaction Marker Channel Manager 1001]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2020-06-11 12:37:01,011] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2020-06-11 12:37:01,133] INFO [SocketServer brokerId=1001] Started data-plane processors for 1 acceptors (kafka.network.SocketServer)
[2020-06-11 12:37:01,141] INFO Kafka version: 2.3.0 (org.apache.kafka.common.utils.AppInfoParser)
[2020-06-11 12:37:01,141] INFO Kafka commitId: fc1aaa116b661c8a (org.apache.kafka.common.utils.AppInfoParser)
[2020-06-11 12:37:01,142] INFO Kafka startTimeMs: 1591879021134 (org.apache.kafka.common.utils.AppInfoParser)
[2020-06-11 12:37:01,144] INFO [KafkaServer id=1001] started (kafka.server.KafkaServer)
creating topics: configs:1:1
creating topics: domainEvents:1:1
creating topics: derivedEvents:1:1
[2020-06-11 12:37:35,825] ERROR [KafkaApi-1001] Number of alive brokers '0' does not meet the required replication factor '1' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)
[2020-06-11 12:37:35,825] ERROR [KafkaApi-1001] Number of alive brokers '0' does not meet the required replication factor '1' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)
[2020-06-11 12:37:43,568] ERROR [KafkaApi-1001] Number of alive brokers '0' does not meet the required replication factor '1' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)
This used to work before, but running it again after a few weeks I keep getting this error. What could have gone wrong?

What is the KAFKA_LISTENERS option?

I'm studying Kafka.
To test producing messages, I wrote this docker-compose file:
version: "2"
services:
  zookeeper:
    image: zookeeper
    ports:
      - 2181:2181
  kafka1:
    image: wurstmeister/kafka
    ports:
      - 19092:9092
    environment:
      KAFKA_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    depends_on:
      - zookeeper
  kafka2:
    image: wurstmeister/kafka
    ports:
      - 29092:9092
    environment:
      KAFKA_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    depends_on:
      - zookeeper
  kafka3:
    image: wurstmeister/kafka
    ports:
      - 39092:9092
    environment:
      KAFKA_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    depends_on:
      - zookeeper
ZooKeeper and Kafka started up successfully, so I tried to produce messages with kafka-console-producer.sh.
> bin/kafka-console-producer.sh --broker-list localhost:19092,localhost:29092,localhost:39092 --topic peter-topic
But I got this error.
[2020-05-06 02:40:41,610] WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:19092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-05-06 02:40:41,610] WARN [Producer clientId=console-producer] Bootstrap broker localhost:19092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2020-05-06 02:40:41,713] WARN [Producer clientId=console-producer] Connection to node -3 (localhost/127.0.0.1:39092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-05-06 02:40:41,713] WARN [Producer clientId=console-producer] Bootstrap broker localhost:39092 (id: -3 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2020-05-06 02:40:41,814] WARN [Producer clientId=console-producer] Connection to node -3 (localhost/127.0.0.1:39092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-05-06 02:40:41,815] WARN [Producer clientId=console-producer] Bootstrap broker localhost:39092 (id: -3 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
...
Why do they try to access localhost:19092, 29092 and 39092?
I set up port forwarding from 19092 (and 29092, 39092) to 9092.
What did I do wrong?
And what should this option be set to?