kafka-manager not able to read from kafka docker JMX port exposed - apache-kafka

I am not able to get kafka-manager to show message data/metrics.
I am running kafka-manager locally with the below command,
bin/kafka-manager -Dkafka-manager.zkhosts="localhost:2181"
I am also checking the "Enable JMX Polling" option.
I am publishing messages to the Kafka broker on a topic: test. The kafka-manager view is able to show the topic "test" but does not show the message count/metrics etc. The kafka-manager application throws an exception which says:
[error] k.m.a.c.OffsetCachePassive - [topic=logstash_topic] An error has occurred while getting topic offsets from broker List((BrokerIdentity(1,kafka-1,9092,9999,false),0))
java.nio.channels.ClosedChannelException: null
at kafka.network.BlockingChannel.send(BlockingChannel.scala:110) ~[org.apache.kafka.kafka_2.11-0.10.0.1.jar:na]
at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:98) ~[org.apache.kafka.kafka_2.11-0.10.0.1.jar:na]
at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:83) ~[org.apache.kafka.kafka_2.11-0.10.0.1.jar:na]
at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:149) ~[org.apache.kafka.kafka_2.11-0.10.0.1.jar:na]
at kafka.manager.actor.cluster.OffsetCache$$anonfun$19$$anonfun$21$$anonfun$22.apply(KafkaStateActor.scala:415) ~[kafka-manager.kafka-manager-1.3.3.7-sans-externalized.jar:na]
at kafka.manager.actor.cluster.OffsetCache$$anonfun$19$$anonfun$21$$anonfun$22.apply(KafkaStateActor.scala:412) ~[kafka-manager.kafka-manager-1.3.3.7-sans-externalized.jar:na]
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24) ~[org.scala-lang.scala-library-2.11.8.jar:na]
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24) ~[org.scala-lang.scala-library-2.11.8.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_161]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_161]
[info] k.m.a.KafkaManagerActor - Updating internal state...
[info] k.m.a.c.BrokerViewCacheActor - Updating broker view...
[info] k.m.a.KafkaManagerActor - Updating internal state...
[error] k.m.j.KafkaJMX$ - Failed to connect to service:jmx:rmi:///jndi/rmi://kafka-1:9999/jmxrmi
java.io.IOException: Failed to retrieve RMIServer stub: javax.naming.ServiceUnavailableException [Root exception is java.rmi.ConnectException: Connection refused to host: kafka-1; nested exception is:
java.net.ConnectException: Operation timed out (Connection timed out)]
at javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:369) ~[na:1.8.0_161]
at javax.management.remote.JMXConnectorFactory.connect(JMXConnectorFactory.java:270) ~[na:1.8.0_161]
at kafka.manager.jmx.KafkaJMX$.doWithConnection(KafkaJMX.scala:57) ~[kafka-manager.kafka-manager-1.3.3.7-sans-externalized.jar:na]
My ZooKeeper and Kafka instances are running via docker-compose up -d.
Below is my docker-compose.yml file.
zookeeper:
  image: confluentinc/cp-zookeeper:latest
  hostname: zookeeper
  environment:
    ZOOKEEPER_SERVER_ID: 1
    ZOOKEEPER_CLIENT_PORT: 2181
    ZOOKEEPER_TICK_TIME: 2000
  ports:
    - "2181:2181"
kafka-1:
  image: confluentinc/cp-kafka:latest
  hostname: kafka-1
  depends_on:
    - zookeeper
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-1:9092
    KAFKA_JMX_HOSTNAME: kafka-1
    KAFKA_JMX_PORT: 9999
    KAFKA_JMX_OPTS: "-Djava.rmi.server.hostname=kafka-1 -
      Dcom.sun.management.jmxremote.local.only=false -
      Dcom.sun.management.jmxremote.rmi.port=9999 -
      Dcom.sun.management.jmxremote.port=9999 -
      Dcom.sun.management.jmxremote.authenticate=false -
      Dcom.sun.management.jmxremote.ssl=false"
  ports:
    - "9092:9092"
    - "9999:9999"
Really stuck on this one. I'd appreciate any kind of help.
Thanks.

Thank you user7222071, I managed to get your code to work after I moved the "-" to the next line for KAFKA_JMX_OPTS:
kafka-1:
  image: confluentinc/cp-kafka:5.1.2
  volumes:
    - kafka-1:/var/lib/kafka/data
  ports:
    - 19092:19092
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:22181,zookeeper-2:32181,zookeeper-3:42181
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-1:19092
    KAFKA_JMX_HOSTNAME: "kafka-1"
    KAFKA_JMX_PORT: 9999
    KAFKA_JMX_OPTS: "-Djava.rmi.server.hostname=kafka-1
      -Dcom.sun.management.jmxremote.local.only=false
      -Dcom.sun.management.jmxremote.rmi.port=9999
      -Dcom.sun.management.jmxremote.port=9999
      -Dcom.sun.management.jmxremote.authenticate=false
      -Dcom.sun.management.jmxremote.ssl=false"
  extra_hosts:
    - "moby:127.0.0.1"
kafka-manager:
  image: kafkamanager/kafka-manager:latest
  ports:
    - 9000:9000
  environment:
    ZK_HOSTS: zookeeper-1:2218
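Note that if kafka-manager runs directly on the host, as in the original question, the JMX hostname kafka-1 must also resolve on the host. With the question's compose file, which publishes port 9999, a host-side /etc/hosts entry plus a quick port check is enough to verify connectivity (a minimal sketch, assuming the JMX port is published to localhost):

# /etc/hosts on the machine running kafka-manager
127.0.0.1   kafka-1

# verify the JMX/RMI port is reachable under that name
nc -vz kafka-1 9999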

FWIW, in my scenario I wanted to monitor Kafka Connect and my connectors specifically:
connect:
  image: confluentinc/cp-kafka-connect-base:latest
  .
  .
  .
  ports:
    - "9999:9999"
  .
  .
  .
  environment:
    KAFKA_JMX_HOSTNAME: "127.0.0.1"
    KAFKA_JMX_PORT: 9999
    KAFKA_JMX_OPTS: "-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=9999"
Then you can access it using a JMX client on the Docker host:
java -jar jmxterm-1.0.2-uber.jar --url 127.0.0.1:9999
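Once connected, a few jmxterm commands confirm the Connect MBeans are actually exposed (a sketch; the exact domain and bean names depend on the component and version):

# list available MBean domains, then inspect the Connect worker metrics
domains
beans -d kafka.connect
get -b kafka.connect:type=connect-worker-metrics connector-count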

Related

Unable to establish connection between Zookeeper and Kafka on Apple M1 using docker-compose

I am trying to run Kafka + Zookeeper through docker-compose.yml
version: '3'
services:
  zookeeper:
    image: zookeeper:3.4.9
    hostname: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOO_MY_ID: 1
      ZOO_PORT: 2181
      # ZOO_SERVERS: server.1=zookeeper:2888:3888;2181
    volumes:
      - ./data/zookeeper/data:/data
      - ./data/zookeeper/datalog:/datalog
  kafka1:
    image: confluentinc/cp-kafka:5.3.0
    hostname: kafka1
    ports:
      - "9091:9091"
    environment:
      KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka1:19091,LISTENER_DOCKER_EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:9091
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
      KAFKA_BROKER_ID: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    volumes:
      - ./data/kafka1/data:/var/lib/kafka/data
    depends_on:
      - zookeeper
However, Kafka is unable to connect to ZooKeeper:
kafka101-kafka1-1 | [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket error occurred: zookeeper/172.19.0.2:2181: Connection refused
kafka101-kafka1-1 | [main-SendThread(zookeeper:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zookeeper/172.19.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
I checked and confirmed that ZooKeeper is running:
ps -ef | grep zookeeper
1 zookeepe 0:19 {java} /usr/bin/qemu-x86_64 /usr/lib/jvm/java-1.8-openjdk/jre/bin/java /usr/lib/jvm/java-1.8-openjdk/jre/bin/java -Dzookeeper.log.dir=. -Dzookeeper.root.logger=INFO,CONSOLE -cp /zookeeper-3.4.9/bin/../build/classes:/zookeeper-3.4.9/bin/../build/lib/*.jar:/zookeeper-3.4.9/bin/../lib/slf4j-log4j12-1.6.1.jar:/zookeeper-3.4.9/bin/../lib/slf4j-api-1.6.1.jar:/zookeeper-3.4.9/bin/../lib/netty-3.10.5.Final.jar:/zookeeper-3.4.9/bin/../lib/log4j-1.2.16.jar:/zookeeper-3.4.9/bin/../lib/jline-0.9.94.jar:/zookeeper-3.4.9/bin/../zookeeper-3.4.9.jar:/zookeeper-3.4.9/bin/../src/java/lib/*.jar:/conf: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /conf/zoo.cfg
The zoo configuration looks like this:
clientPort=2181
dataDir=/data
dataLogDir=/datalog
tickTime=2000
initLimit=5
syncLimit=2
Telnet output directly from the zookeeper container is:
zookeeper-3.4.9 # telnet localhost 2181
telnet: can't connect to remote host (127.0.0.1): Connection refused
Ping to the zookeeper host from kafka1:
> ping zookeeper
PING zookeeper (172.19.0.2) 56(84) bytes of data.
PING zookeeper (172.19.0.2) 56(84) bytes of data.
PING zookeeper (172.19.0.2) 56(84) bytes of data.
I am running on Apple M1 Chipset
What else can I check and do here?
Try using confluentinc's zookeeper image: confluentinc/cp-zookeeper:latest
I was in the exact same boat, also on the M1 chip and this worked for me.
These are the two services from my docker-compose for ref:
zoo:
  image: confluentinc/cp-zookeeper:latest
  restart: unless-stopped
  ports:
    - 2181:2181
  environment:
    ZOO_MY_ID: 1
    ZOO_PORT: 2181
    ZOO_SERVERS: server.1=zoo:2888:3888
    ZOOKEEPER_CLIENT_PORT: 2181
kafka:
  image: confluentinc/cp-kafka:latest
  restart: always
  ports:
    - 9092:9092
  environment:
    KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka:19092,LISTENER_DOCKER_EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:9092
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
    KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
    KAFKA_ZOOKEEPER_CONNECT: 'zoo:2181'
    KAFKA_BROKER_ID: 1
    KAFKA_LOG4J_LOGGERS: 'kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO'
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  depends_on:
    - zoo
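With these services up, a quick way to confirm that ZooKeeper is reachable and that the broker registered itself is to query it from the kafka container (a sketch using the service names above; zookeeper-shell ships inside the cp-kafka image):

# from the kafka container, list the broker IDs registered in ZooKeeper
docker-compose exec kafka zookeeper-shell zoo:2181 ls /brokers/ids
# a healthy registration ends with something like: [1]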

Failed to collect cluster Default info java.lang.IllegalStateException: Error while creating AdminClient for Cluster Default

I would like to use network_mode: bridge for Kafka, to be able to reach Kafka through localhost:9092 from another service.
I'm trying to use provectus/kafka-ui, but when I open the consumers menu I get the following error.
My docker-compose.yml file:
kafka-ui:
  container_name: kafka-ui
  image: provectuslabs/kafka-ui:latest
  ports:
    - 8080:8080
  depends_on:
    - kafka
  environment:
    KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:9092
    KAFKA_CLUSTERS_0_JMXPORT: 9997
kafka:
  image: johnnypark/kafka-zookeeper
  ports:
    - "2181:2181"
    - "9092:9092"
  network_mode: bridge
  environment:
    ADVERTISED_HOST: 127.0.0.1
    NUM_PARTITIONS: 1
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
Error log:
2022-01-13 09:16:50,014 ERROR [parallel-5] c.p.k.u.s.MetricsService: Failed to collect cluster Default info
java.lang.IllegalStateException: Error while creating AdminClient for Cluster Default
provectus/kafka-ui
I was using the johnnypark/kafka-zookeeper image for both Kafka and ZooKeeper. I was able to solve this problem by using two separate images, as in the example below:
zookeeper1:
  image: confluentinc/cp-zookeeper:5.2.4
  environment:
    ZOOKEEPER_CLIENT_PORT: 2181
    ZOOKEEPER_TICK_TIME: 2000
kafka1:
  image: confluentinc/cp-kafka:5.3.1
  depends_on:
    - zookeeper1
  ports:
    - 9093:9093
    - 9998:9998
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: zookeeper1:2181
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka1:29092,PLAINTEXT_HOST://localhost:9093
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    JMX_PORT: 9998
    KAFKA_JMX_OPTS: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=kafka1 -Dcom.sun.management.jmxremote.rmi.port=9998
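To point kafka-ui at this pair of services, its bootstrap and JMX settings have to match the internal listener and JMX port above. A sketch, reusing the environment variable names from the question's kafka-ui service:

kafka-ui:
  image: provectuslabs/kafka-ui:latest
  ports:
    - 8080:8080
  depends_on:
    - kafka1
  environment:
    KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka1:29092
    KAFKA_CLUSTERS_0_JMXPORT: 9998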
being able to reach kafka through localhost:9092 from another service
You can't use localhost to reach Kafka since that would be the Kafka UI container itself.
Changing ADVERTISED_HOST to kafka and using kafka:9092 from other containers is correct for a bridge network. However, this has the side effect of preventing any access to Kafka from outside the Docker network, such as clients directly on the host machine.
Internal and external clients can be configured separately (see bitnami/bitnami-docker-kafka).
Here's an example using Bitnami's Kafka Image - this allows host clients to connect on port 9093 while allowing kafka-ui to connect with the default port.
version: "3"
services:
zookeeper:
image: 'bitnami/zookeeper:latest'
ports:
- '2181:2181'
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
kafka:
image: 'bitnami/kafka:latest'
ports:
- '9092:9092'
- '9093:9093'
environment:
- KAFKA_BROKER_ID=1
- KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
- KAFKA_CFG_LISTENERS=CLIENT://:9092,EXTERNAL://:9093
- KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://kafka:9092,EXTERNAL://localhost:9093
- KAFKA_CFG_INTER_BROKER_LISTENER_NAME=CLIENT
- ALLOW_PLAINTEXT_LISTENER=yes
depends_on:
- zookeeper
kafka-ui:
image: provectuslabs/kafka-ui
container_name: kafka-ui
ports:
- "8081:8081"
restart: always
environment:
- KAFKA_CLUSTERS_0_NAME=local
- KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092
- SERVER_PORT=8081
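With this listener split, containers on the Compose network use kafka:9092 while clients on the Docker host use localhost:9093. For example (a sketch; it assumes the Kafka CLI tools are installed on the host, and uses the .sh scripts bundled in the Bitnami image inside the container):

# from the Docker host, via the EXTERNAL listener
kafka-console-producer --bootstrap-server localhost:9093 --topic demo

# from inside the Compose network, via the CLIENT listener
docker-compose exec kafka kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic demo --from-beginning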

Set up Confluent Metrics Reporter at wurstmeister/kafka

I am running Control Center with wurstmeister/kafka in Docker.
But when I open cp-control-center I can't see the metrics for the broker. There is a message that says "Set up Confluent Metrics Reporter".
Can I set this up and get the metrics for the wurstmeister/kafka image?
My docker-compose file is the following:
kafka:
  image: wurstmeister/kafka
  container_name: kafka
  hostname: kafka
  ports:
    - "9092"
    - "9999"
  environment:
    KAFKA_ADVERTISED_HOST_NAME: kafka
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_ADVERTISED_PORT: 9092
    KAFKA_PORT: 9092
    KAFKA_JMX_OPTS: "-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=kafka -Dcom.sun.management.jmxremote.rmi.port=9999"
    JMX_PORT: 9999
    KAFKA_LISTENERS: PLAINTEXT://:9092
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://172.22.0.4:9092
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  depends_on:
    - zookeeper
control-center:
  image: confluentinc/cp-enterprise-control-center:6.0.0
  hostname: control-center
  container_name: control-center
  depends_on:
    - kafka
  ports:
    - "9021:9021"
  environment:
    CONTROL_CENTER_BOOTSTRAP_SERVERS: kafka:9092
    CONTROL_CENTER_REPLICATION_FACTOR: 1
    CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1
    CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1
    CONFLUENT_METRICS_TOPIC_REPLICATION: 1
    PORT: 9021
The metrics reporters for the brokers aren't on the classpath of the wurstmeister container, and the metrics topic isn't created.
You'd have to download the Confluent Platform to get those reporters, so there is no reason not to use their container instead.
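For reference, Confluent's cp-server image wires the reporter in through broker environment variables along these lines (the same settings appear in the compose file of the next question below); a sketch:

broker:
  image: confluentinc/cp-server:6.0.0
  environment:
    KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
    CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:29092
    CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
    CONFLUENT_METRICS_ENABLE: 'true'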

Kafka connect cannot find broker in docker

I have a docker-compose file to set up a Kafka infrastructure. The problem is, when I run docker-compose for the first time on a machine, all the containers run fine, but when I STOP/REMOVE the containers and re-run docker-compose, the "cnfldemos/cp-server-connect-datagen:0.2.0-5.4.0" container sometimes exits. This problem is not constant and I cannot figure out the root cause.
Could you please help me with the above issue with the "cnfldemos/cp-server-connect-datagen:0.2.0-5.4.0" container? Looking at the logs, I understand that there is an unknown host exception, but I don't know why it is trying to reach the IP 172.18.0.4. It's not even my IP.
p.s. I am a novice at Kafka docker setup, please help!
Following are the error logs from the cnfldemos/cp-server-connect-datagen:0.2.0-5.4.0 container.
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 5.4.0-ccs
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: f4201a82bea68cc7
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1585904457838
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (broker/172.18.0.4:29092) could not be established. Broker may not be available.
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (broker/172.18.0.4:29092) could not be established. Broker may not be available.
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (broker/172.18.0.4:29092) could not be established. Broker may not be available.
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Error connecting to node broker:29092 (id: -1 rack: null)
java.net.UnknownHostException: broker: Name or service not known
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1324)
at java.net.InetAddress.getAllByName0(InetAddress.java:1277)
at java.net.InetAddress.getAllByName(InetAddress.java:1193)
at java.net.InetAddress.getAllByName(InetAddress.java:1127)
at org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:104)
at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.currentAddress(ClusterConnectionStates.java:403)
at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.access$200(ClusterConnectionStates.java:363)
at org.apache.kafka.clients.ClusterConnectionStates.currentAddress(ClusterConnectionStates.java:151)
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:955)
at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:289)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.sendEligibleCalls(KafkaAdminClient.java:969)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1184)
at java.lang.Thread.run(Thread.java:748)
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Error connecting to node broker:29092 (id: -1 rack: null)
java.net.UnknownHostException: broker
Following is the docker-compose.yml file.
I execute the following docker-compose.yml file using the command
docker-compose up -d
version: '2.1'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.4.1
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-server:5.4.1
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:29092
      CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
  schema-registry:
    image: confluentinc/cp-schema-registry:5.4.1
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - zookeeper
      - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: 'zookeeper:2181'
  rest-proxy:
    image: confluentinc/cp-kafka-rest:5.4.1
    depends_on:
      - zookeeper
      - broker
      - schema-registry
    ports:
      - 8082:8082
    hostname: rest-proxy
    container_name: rest-proxy
    environment:
      KAFKA_REST_HOST_NAME: rest-proxy
      KAFKA_REST_BOOTSTRAP_SERVERS: 'broker:29092'
      KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
      KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
  kafka-topics-ui:
    image: landoop/kafka-topics-ui:0.9.4
    hostname: kafka-topics-ui
    ports:
      - "8000:8000"
    environment:
      KAFKA_REST_PROXY_URL: "http://rest-proxy:8082/"
      PROXY: "true"
    depends_on:
      - zookeeper
      - broker
      - schema-registry
      - rest-proxy
  connect:
    image: cnfldemos/cp-server-connect-datagen:0.2.0-5.4.0
    hostname: connect
    container_name: connect
    depends_on:
      - zookeeper
      - broker
      - schema-registry
    ports:
      - "8083:8083"
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 'broker:29092'
      CONNECT_REST_ADVERTISED_HOST_NAME: connect
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
      CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      CLASSPATH: /usr/share/java/monitoring-interceptors/monitoring-interceptors-5.4.1.jar
      CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
      CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR
docker-compose ps
Name Command State Ports
----------------------------------------------------------------------------------------------------------
broker /etc/confluent/docker/run Up 0.0.0.0:9092->9092/tcp
connect /etc/confluent/docker/run Up 0.0.0.0:8083->8083/tcp, 9092/tcp
topics-ui /run.sh Up 0.0.0.0:8000->8000/tcp
rest-proxy /etc/confluent/docker/run Up 0.0.0.0:8082->8082/tcp
schema-registry /etc/confluent/docker/run Up 0.0.0.0:8081->8081/tcp
zookeeper /etc/confluent/docker/run Up 0.0.0.0:2181->2181/tcp, 2888/tcp, 3888/tcp
docker-compose ps output after some time
Name Command State Ports
-------------------------------------------------------------------------------------------------------------
broker /etc/confluent/docker/run Up 0.0.0.0:9092->9092/tcp
connect /etc/confluent/docker/run Exit 137
topics-ui /run.sh Up 0.0.0.0:8000->8000/tcp
rest-proxy /etc/confluent/docker/run Up 0.0.0.0:8082->8082/tcp
schema-registry /etc/confluent/docker/run Up 0.0.0.0:8081->8081/tcp
zookeeper /etc/confluent/docker/run Up 0.0.0.0:2181->2181/tcp, 2888/tcp, 3888/tcp
We can see that the 'connect' container is in Exit 137 state.
I don't know why it is trying to find 172.18.0.4 IP. It's not even my IP.
That is the broker container's IP on the Docker network (the log shows the hostname broker resolving to 172.18.0.4).
Overall, it seems like you might be running into a memory problem, because you are connecting to the correct addresses (CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS could actually be localhost:29092), but the Docker network is becoming unstable. (Run docker-compose rm -sf and docker system prune to get to a clean(er) state.)
Solutions:
Increase your Docker memory in the settings (see the sketch after this list)
Don't run containers you don't need (REST Proxy or Topics UI)
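A quick way to check whether memory is the culprit, and to keep the stack within the available memory, is shown below (a sketch; the heap values are illustrative, and KAFKA_HEAP_OPTS should be honored by the Confluent images):

# watch per-container memory while the stack starts
docker stats

# in docker-compose.yml, cap the heap of the heavier JVMs, e.g. the connect service
connect:
  environment:
    KAFKA_HEAP_OPTS: "-Xms256M -Xmx512M"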

Controller 1's connection to broker was unsuccessful docker confluent

I'm following the setup for provisioning the Kafka cluster, ZooKeeper, and Control Center docker images provided by the Confluent team on my local machine, with the docker-compose.yml file below:
docker-compose.yml
redis:
  image: redis
  container_name: redis
  ports:
    - 6379:6379
mysql:
  image: mysql:5.6
  container_name: mysql
  command: --default-authentication-plugin=mysql_native_password
  restart: always
  environment:
    MYSQL_ROOT_PASSWORD: rootpass
    MYSQL_DATABASE:
    MYSQL_USER:
    MYSQL_PASSWORD:
  ports:
    - 3306:3306
ksql-server:
  image: confluentinc/cp-ksql-server:5.2.1
  hostname: ksql-server
  container_name: ksql-server
  links:
    - kafka
  ports:
    - "8088:8088"
  environment:
    KSQL_CONFIG_DIR: "/etc/ksql"
    KSQL_LOG4J_OPTS: "-Dlog4j.configuration=file:/etc/ksql/log4j-rolling.properties"
    KSQL_BOOTSTRAP_SERVERS: "kafka:29092"
    KSQL_HOST_NAME: ksql-server
    KSQL_APPLICATION_ID: "cp-all-in-one"
    KSQL_LISTENERS: "http://0.0.0.0:8088"
    KSQL_CACHE_MAX_BYTES_BUFFERING: 0
    # KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
    KSQL_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
    KSQL_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
zookeeper:
  image: confluentinc/cp-zookeeper:latest
  container_name: zookeeper
  environment:
    ZOOKEEPER_CLIENT_PORT: 2181
    ZOOKEEPER_TICK_TIME: 2000
kafka:
  image: confluentinc/cp-kafka:latest
  container_name: kafka
  links:
    - zookeeper
  ports:
    - 9092:9092
    - 29092:29092
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
control-center:
  image: confluentinc/cp-enterprise-control-center:5.2.1
  hostname: control-center
  container_name: control-center
  links:
    - zookeeper
    - kafka
    - ksql-server
  ports:
    - 9021:9021
  environment:
    CONTROL_CENTER_BOOTSTRAP_SERVERS: kafka:29092
    CONTROL_CENTER_ZOOKEEPER_CONNECT: zookeeper:2181
    CONTROL_CENTER_KSQL_URL: http://ksql-server:8088
    CONTROL_CENTER_KSQL_ADVERTISED_URL: http://localhost:8088
    CONTROL_CENTER_REPLICATION_FACTOR: 1
    CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1
    CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1
    CONFLUENT_METRICS_TOPIC_REPLICATION: 1
    PORT: 9021
The setup worked well on my old laptop, but after changing to the new one, I cannot make it work.
I tried to run docker logs control-center and saw the logs below.
Logs:
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 2.2.0-cp2
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: c4bca159dd111016
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (kafka/172.17.0.5:29092) could not be established. Broker may not be available.
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (kafka/172.17.0.5:29092) could not be established. Broker may not be available.
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (kafka/172.17.0.5:29092) could not be established. Broker may not be available.
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (kafka/172.17.0.5:29092) could not be established. Broker may not be available.
I checked the logs of the kafka container and saw the errors:
[2019-05-18 11:39:28,721] WARN [RequestSendThread controllerId=1] Controller 1's connection to broker kafka:29092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
java.lang.IllegalStateException: No entry found for connection 1
at org.apache.kafka.clients.ClusterConnectionStates.nodeState(ClusterConnectionStates.java:339)
at org.apache.kafka.clients.ClusterConnectionStates.disconnected(ClusterConnectionStates.java:143)
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:921)
at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:287)
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:65)
at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:279)
at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:233)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
Below are the docker containers running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
94e7a2e4e61a backend_collection-engine-consumer-generator "java -cp collection…" 4 minutes ago Up 4 minutes collection-engine-consumer-generator
791528e51a06 backend_collection-engine-executor "java -DsplitEvents=…" 4 minutes ago Up 4 minutes collection-engine-executor
d2eadae2ce7d confluentinc/cp-kafka:latest "/etc/confluent/dock…" 8 minutes ago Up 8 minutes 0.0.0.0:9092->9092/tcp, 0.0.0.0:29092->29092/tcp kafka
b1230959ceeb confluentinc/cp-zookeeper:latest "/etc/confluent/dock…" 8 minutes ago Up 8 minutes 2181/tcp, 2888/tcp, 3888/tcp zookeeper
0bcb7bc0e2d4 redis "docker-entrypoint.s…" 8 minutes ago Up 8 minutes 0.0.0.0:6379->6379/tcp redis
040fec54d7b8 mysql:5.6 "docker-entrypoint.s…" 8 minutes ago Up 8 minutes 0.0.0.0:3306->3306/tcp mysql
I cannot even access Control Center via http://localhost:9021 as usual.