Kafka cluster on multiple nodes with docker-compose - apache-kafka

I'm trying to set up a Kafka cluster on 3 nodes with docker-compose.
node1:
version: '3.1'
services:
  zookeeper:
    image: zookeeper:3.7.0
    container_name: zookeeper
    hostname: zookeeper
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888;2181 server.2=node2:2888:3888;2181 server.3=node3:2888:3888;2181
  kafka:
    image: wurstmeister/kafka:2.13-2.7.0
    container_name: kafka
    hostname: kafka
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: node1:2181,node2:2181,node3:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'
      KAFKA_ADVERTISED_LISTENERS: INSIDE://:9092,OUTSIDE://node1:9095
      KAFKA_LISTENERS: INSIDE://:9092,OUTSIDE://:9095
      KAFKA_CREATE_TOPICS: "test:4:1"
    ports:
      - 9095:9095
node2:
version: '3.1'
services:
  zookeeper:
    image: zookeeper:3.7.0
    container_name: zookeeper
    hostname: zookeeper
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=node1:2888:3888;2181 server.2=0.0.0.0:2888:3888;2181 server.3=node3:2888:3888;2181
  kafka:
    image: wurstmeister/kafka:2.13-2.7.0
    container_name: kafka
    hostname: kafka
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: node1:2181,node2:2181,node3:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'
      KAFKA_ADVERTISED_LISTENERS: INSIDE://:9092,OUTSIDE://node2:9095
      KAFKA_LISTENERS: INSIDE://:9092,OUTSIDE://:9095
      #KAFKA_CREATE_TOPICS: "test:4:1"
    ports:
      - 9095:9095
node3:
version: '3.1'
services:
  zookeeper:
    image: zookeeper:3.7.0
    container_name: zookeeper
    hostname: zookeeper
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=node1:2888:3888;2181 server.2=node2:2888:3888;2181 server.3=0.0.0.0:2888:3888;2181
  kafka:
    image: wurstmeister/kafka:2.13-2.7.0
    container_name: kafka
    hostname: kafka
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: node1:2181,node2:2181,node3:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'
      KAFKA_ADVERTISED_LISTENERS: INSIDE://:9092,OUTSIDE://node3:9095
      KAFKA_LISTENERS: INSIDE://:9092,OUTSIDE://:9095
      #KAFKA_CREATE_TOPICS: "test:4:1"
    ports:
      - 9095:9095
When I run them (on their respective nodes):
docker-compose -f docker-compose-node1.yml up -d
docker-compose -f docker-compose-node2.yml up -d
docker-compose -f docker-compose-node3.yml up -d
If I check the logs, there are no errors. On the contrary, I can see something like:
INFO Registered broker 1 at path /brokers/ids/1 ...
INFO Registered broker 2 at path /brokers/ids/2 ...
INFO Registered broker 3 at path /brokers/ids/3 ...
Two things are strange. First, on the first node, where I try to create a topic, the logs stop at:
creating topics: test:4:1
Created topic test.
If I set up a single node (localhost), everything works just fine. The second thing is that node2 is always elected as the controller.
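To double-check what ZooKeeper itself has recorded, the zkCli.sh client that ships with the official zookeeper image can be used to list the registered brokers and show which one currently holds the controller role (a quick sketch, run on any of the nodes):
# list the broker IDs registered under /brokers/ids
docker exec -it zookeeper zkCli.sh ls /brokers/ids
# show which broker ID is the current controller
docker exec -it zookeeper zkCli.sh get /controller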
This is what I get with kafkacat:
kafkacat -b node1:9095,node2:9095,node3:9095 -L
Metadata for all topics (from broker 2: node2:9095/2):
 3 brokers:
  broker 2 at node2:9095 (controller)
  broker 3 at node3:9095
  broker 1 at node1:9095
 1 topics:
  topic "test" with 4 partitions:
    partition 0, leader 2, replicas: 2, isrs: 2
    partition 1, leader 3, replicas: 3, isrs: 3
    partition 2, leader 1, replicas: 1, isrs: 1
    partition 3, leader 2, replicas: 2, isrs: 2
But if I run:
docker exec -it kafka kafka-console-consumer.sh --bootstrap-server node1:9095,node2:9095,node3:9095 --topic test --from-beginning
I get loads of warnings:
[2021-04-12 14:31:06,848] WARN [Consumer clientId=consumer-console-consumer-95791-1, groupId=console-consumer-95791] Received unknown topic or partition error in ListOffset request for partition test-2 (org.apache.kafka.clients.consumer.internals.Fetcher)
and
[2021-04-12 14:31:06,848] WARN [Consumer clientId=consumer-console-consumer-95791-1, groupId=console-consumer-95791] Received unknown topic or partition error in ListOffset request for partition test-1 (org.apache.kafka.clients.consumer.internals.Fetcher)
It's as if partitions 1 and 2 don't exist.
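To cross-check what the brokers themselves report for the topic, it may help to describe it from inside one of the containers (a sketch using the kafka-topics.sh script bundled with the wurstmeister image):
# show leader, replicas and ISR for each partition of "test" as the cluster sees them
docker exec -it kafka kafka-topics.sh --bootstrap-server node1:9095 --describe --topic test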
If I try to produce a message:
docker exec -it kafka-tn kafka-console-producer.sh --bootstrap-server tn00:9095,tn01:9095,tn02:9095 --topic test
I get loads of errors:
[2021-04-12 14:40:33,273] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 41 : {test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
Am I doing something crazy here, or is it just some minor misconfiguration?

Related

MirrorMaker2 messages skipped when failing over to cluster where consumer group doesn't exist

Context
We have 2 Kafka clusters in an active/active configuration.
We want to use MirrorMaker 2 to help us with DR by syncing topics and consumer offsets, so that we can have consumers fail over to a secondary cluster in the event of an issue with the primary cluster.
Problem
We would perform this fail-over as follows:
1. Consumers are consuming from both local/remote topics in a Kafka cluster (.*topic1)
2. Producers switch from clusterA to clusterB.
3. Consumers switch from clusterA to clusterB.
The issue occurs in between steps 2 and 3. When the consumer disconnects from clusterA, any messages produced to clusterB (step 2) while it is disconnected are not received by the consumers when they connect to clusterB.
Subsequent messages are received.
If this process were to be repeated, now that the consumer group exists in both clusters, the issue no longer occurs.
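A simple way to confirm whether the group already exists on the secondary cluster before failing over is to list the groups there (a sketch, using the clusterB bootstrap address from the configuration below):
# if "test" is missing from this list, no offsets have been materialised on clusterB yet
docker exec broker1B kafka-consumer-groups --bootstrap-server broker1B:29093 --list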
Steps to reproduce
Save the following file as mm2.properties in a directory named mm2_config:
clusters=clusterA, clusterB
clusterA.bootstrap.servers=broker1A:29092,broker2A:39092,broker3A:49092
clusterB.bootstrap.servers=broker1B:29093,broker2B:29094,broker3B:29095
clusterB.config.storage.replication.factor=3
clusterA.offset.storage.replication.factor=3
clusterB.offset.storage.replication.factor=3
clusterA.status.storage.replication.factor=3
clusterB.status.storage.replication.factor=3
clusterA->clusterB.enabled=true
clusterB->clusterA.enabled=true
offset-syncs.topic.replication.factor=3
heartbeats.topic.replication.factor=3
checkpoints.topic.replication.factor=3
topics=.*
groups=.*
tasks.max=2
replication.factor=3
refresh.topics.enabled=true
sync.topic.configs.enabled=true
refresh.topics.interval.seconds=30
clusterA->clusterB.emit.heartbeats.enabled=true
clusterA->clusterB.emit.checkpoints.enabled=true
clusterB->clusterA.emit.heartbeats.enabled=true
clusterB->clusterA.emit.checkpoints.enabled=true
clusterA->clusterB.sync.group.offsets.enabled=true
clusterB->clusterA.sync.group.offsets.enabled=true
clusterA->clusterB.emit.checkpoints.interval.seconds=10
clusterA->clusterB.sync.group.offsets.interval.seconds=10
clusterB->clusterA.emit.checkpoints.interval.seconds=10
clusterB->clusterA.sync.group.offsets.interval.seconds=10
Save the following docker-compose.yml, which creates 2 Kafka clusters with 3 brokers and 1 ZooKeeper in each:
version: '3'
services:
  zookeeperA:
    image: confluentinc/cp-zookeeper
    hostname: zookeeperA
    container_name: zookeeperA
    ports:
      - 22181:22181
      - 22888:22888
      - 23888:23888
    volumes:
      - ./zookeeperA/data:/data
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 22181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: zookeeperA:22888:23888
  broker1A:
    image: confluentinc/cp-kafka
    hostname: broker1A
    container_name: broker1A
    ports:
      - 9092:9092
      - 29092:29092
    depends_on:
      - zookeeperA
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeperA:22181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:29092,PLAINTEXT_HOST://0.0.0.0:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker1A:29092,PLAINTEXT_HOST://localhost:9092
      ALLOW_PLAINTEXT_LISTENER: 'yes'
      KAFKA_AUTO_OFFSET_RESET: "latest"
  broker2A:
    image: confluentinc/cp-kafka
    hostname: broker2A
    container_name: broker2A
    ports:
      - 9093:9093
      - 39092:39092
    depends_on:
      - zookeeperA
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: zookeeperA:22181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:39092,PLAINTEXT_HOST://0.0.0.0:9093
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker2A:39092,PLAINTEXT_HOST://localhost:9093
      ALLOW_PLAINTEXT_LISTENER: 'yes'
      KAFKA_AUTO_OFFSET_RESET: "latest"
  broker3A:
    image: confluentinc/cp-kafka
    hostname: broker3A
    container_name: broker3A
    ports:
      - 9094:9094
      - 49092:49092
    depends_on:
      - zookeeperA
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: zookeeperA:22181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:49092,PLAINTEXT_HOST://0.0.0.0:9094
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker3A:49092,PLAINTEXT_HOST://localhost:9094
      ALLOW_PLAINTEXT_LISTENER: 'yes'
      KAFKA_AUTO_OFFSET_RESET: "latest"
  zookeeperB:
    image: confluentinc/cp-zookeeper
    hostname: zookeeperB
    container_name: zookeeperB
    ports:
      - 32181:32181
      - 32888:32888
      - 33888:33888
    volumes:
      - ./zookeeperB/data:/data
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 32181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: zookeeperB:32888:33888
  broker1B:
    image: confluentinc/cp-kafka
    hostname: broker1B
    container_name: broker1B
    ports:
      - 8092:8092
      - 29093:29093
    depends_on:
      - zookeeperB
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeperB:32181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:29093,PLAINTEXT_HOST://0.0.0.0:8092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker1B:29093,PLAINTEXT_HOST://localhost:8092
      ALLOW_PLAINTEXT_LISTENER: 'yes'
      KAFKA_AUTO_OFFSET_RESET: "latest"
  broker2B:
    image: confluentinc/cp-kafka
    hostname: broker2B
    container_name: broker2B
    ports:
      - 8093:8093
      - 29094:29094
    depends_on:
      - zookeeperB
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: zookeeperB:32181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:29094,PLAINTEXT_HOST://0.0.0.0:8093
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker2B:29094,PLAINTEXT_HOST://localhost:8093
      ALLOW_PLAINTEXT_LISTENER: 'yes'
      KAFKA_AUTO_OFFSET_RESET: "latest"
  broker3B:
    image: confluentinc/cp-kafka
    hostname: broker3B
    container_name: broker3B
    ports:
      - 8094:8094
      - 29095:29095
    depends_on:
      - zookeeperB
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: zookeeperB:32181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:29095,PLAINTEXT_HOST://0.0.0.0:8094
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker3B:29095,PLAINTEXT_HOST://localhost:8094
      ALLOW_PLAINTEXT_LISTENER: 'yes'
      KAFKA_AUTO_OFFSET_RESET: "latest"
  mirror-maker:
    image: confluentinc/cp-kafka
    hostname: mirror-maker
    container_name: mirror-maker
    volumes:
      - ./mm2_config:/tmp/kafka/config
    ports:
      - 9091:9091
      - 29096:29096
    depends_on:
      - zookeeperA
      - zookeeperB
      - broker1A
      - broker2A
      - broker3A
      - broker1B
      - broker2B
      - broker3B
    environment:
      KAFKA_BROKER_ID: 4
      KAFKA_ZOOKEEPER_CONNECT: zookeeperA:22181,zookeeperB:32181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_LISTENERS: PLAINTEXT://:29096,PLAINTEXT_HOST://:9091
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://mirror-maker:29096,PLAINTEXT_HOST://localhost:9091
      ALLOW_PLAINTEXT_LISTENER: 'yes'
      KAFKA_AUTO_OFFSET_RESET: "latest"
Start the stack:
docker-compose up -d
Create topic topic1 in cluster A:
docker exec broker1A \
kafka-topics --bootstrap-server broker1A:29092 --create --topic topic1 --partitions 3 --replication-factor 3
Create topic topic1 in cluster B:
docker exec broker1B \
kafka-topics --bootstrap-server broker1B:29093 --create --topic topic1 --partitions 3 --replication-factor 3
Start MirrorMaker:
docker exec mirror-maker \
connect-mirror-maker /tmp/kafka/config/mm2.properties
Start a console consumer for cluster A:
docker exec broker1A \
kafka-console-consumer --bootstrap-server broker1A:29092 --include '.*topic1' --group test
Start a console producer for cluster A and publish m1:
docker exec -it broker1A \
kafka-console-producer --bootstrap-server broker1A:29092 --topic topic1
> m1
Verify that the consumer received m1.
Stop the producer, start it again pointing to cluster B, and publish message m2.
docker exec -it broker1B \
kafka-console-producer --bootstrap-server broker1B:29093 --topic topic1
> m2
Verify that the consumer received m2.
Stop the consumer.
Have the producer publish m3.
Start the consumer in cluster B:
docker exec broker1B \
kafka-console-consumer --bootstrap-server broker1B:29093 --include '.*topic1' --group test
Observe that m3 is not received.
Publish m4 from the producer.
Verify that m4 is received.
Message m3 is lost.
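One way to see whether MM2's offset sync ever reached cluster B for this group is to describe it there before restarting the consumer (a sketch; the group and broker names are taken from the steps above):
# if no committed offsets exist for topic1 / clusterA.topic1, the console consumer falls back to
# auto.offset.reset (latest by default) and therefore never sees m3
docker exec broker1B kafka-consumer-groups --bootstrap-server broker1B:29093 --describe --group test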

Avoiding running out of disk storage when running Kafka

I have a very basic Kafka setup: a single broker instance running on an AWS EC2 instance with 100 GB of storage. However, within hours the disk fills up to 100%, and most of the usage comes from the Docker container running Kafka. Here is the simple Kafka service that I am running:
broker:
  image: confluentinc/cp-server:6.2.0
  hostname: broker
  container_name: broker
  depends_on:
    - zookeeper
  ports:
    - "9092:9092"
    - "9101:9101"
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://<ip>:9092
    KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
    KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
    KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR: 1
    KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
    KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
    KAFKA_JMX_PORT: 9101
    KAFKA_JMX_HOSTNAME: <ip>
    KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081
    CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:29092
    CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
    CONFLUENT_METRICS_ENABLE: 'true'
    CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
I suspect it might be because I am not setting the retention and cleanup policy, which I had tried to set up by defining:
KAFKA_LOG_RETENTION_MINUTES: 5
KAFKA_LOG_CLEANUP_POLICY: compact, delete
However, when I set this up, apart from still facing the disk-full issue, I am not able to publish any new messages to a Kafka topic.
Is there something that I am missing?
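For what it's worth, equivalent retention limits can also be applied to a single topic at runtime, which makes it easier to confirm whether retention is taking effect at all (a sketch; my-topic is just a placeholder name):
# set a 5-minute retention and a delete cleanup policy on one topic only
docker exec broker kafka-configs --bootstrap-server broker:29092 --alter \
  --entity-type topics --entity-name my-topic \
  --add-config retention.ms=300000,cleanup.policy=delete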
I fixed the issue by defining Docker volumes for my kafka setup as follows:
# Create dirs for Kafka / ZK data.
mkdir -p /vol1/zk-data
mkdir -p /vol2/zk-txn-logs
mkdir -p /vol3/kafka-data
# Make sure the user has the read and write permissions.
chown -R 1000:1000 /vol1/zk-data
chown -R 1000:1000 /vol2/zk-txn-logs
chown -R 1000:1000 /vol3/kafka-data
I got this from the Confluent documentation and defined it in my docker-compose file as:
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.2.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    volumes:
      - /vol1/zk-data:/var/lib/zookeeper/data
      - /vol2/zk-txn-logs:/var/lib/zookeeper/log
  broker:
    image: confluentinc/cp-server:6.2.0
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "9101:9101"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://ip:9092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: <ip>
      KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:29092
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
    volumes:
      - /vol3/kafka-data:/var/lib/kafka/data
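With the data directory now on a host volume, it is also straightforward to watch how much space the Kafka log segments actually take up (a quick sketch):
# disk usage of the Kafka data directory inside the container
docker exec broker du -sh /var/lib/kafka/data
# free space left on the host volume backing it
df -h /vol3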

Zipkin Docker not consuming kafka "zipkin" topic

I'm currently following the https://github.com/openzipkin/zipkin quickstart for the Docker startup of Zipkin and Kafka.
I set up a docker-compose that starts several services: kafka, zookeeper, neo4j and zipkin.
Through "KAFKA.BOOTSTRAP.SERVERS: broker:19092" I gave Zipkin the same broker address that the working neo4j service uses.
zipkin:
  image: ghcr.io/openzipkin/zipkin:${TAG:-latest}
  container_name: zipkin
  depends_on:
    - broker
  environment:
    KAFKA.BOOTSTRAP.SERVERS: broker:19092
  ports:
    # Port used for the Zipkin UI and HTTP Api
    - 9411:9411
My Kafka docker-compose part looks like this:
broker:
  image: confluentinc/cp-kafka:latest
  hostname: broker
  container_name: broker
  ports:
    - "9092:9092"
    - "9101:9101"
  depends_on:
    - zookeeper
  environment:
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
    KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://broker:19092,LISTENER_DOCKER_EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:9092
    KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
    KAFKA_BROKER_ID: 1
    KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"
    KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
    KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
    KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR: 1
    KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
    KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
The topic "zipkin" does get written to Kafka, since I can consume it with my Spring application.
The Zipkin settings in the application.properties of my Spring implementation look like this:
spring.zipkin.baseUrl= localhost:9411
spring.zipkin.sender.type= kafka
I couldn't find anything about how I can actually debug zipkin itself. It starts up without errors in my docker-compose.
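One way to debug the collector is simply to look at Zipkin's own startup log and see whether a Kafka collector was activated at all (a sketch):
# look for Kafka collector activity (or its absence) in the Zipkin container logs
docker logs zipkin 2>&1 | grep -i kafka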
I actually found the answer myself; in case anyone else encounters this:
KAFKA.BOOTSTRAP.SERVERS: broker:19092
is not a valid environment variable. It needs to be written like this:
KAFKA_BOOTSTRAP_SERVERS: broker:19092
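A quick way to double-check that the variable reaches the container in the expected form (a sketch):
# the corrected variable should appear, with underscores, in the container's configured environment
docker inspect -f '{{.Config.Env}}' zipkin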

Set up Confluent Metrics Reporter at wurstmeister/kafka

I'm running Control Center with the wurstmeister/kafka image in Docker.
But when I open cp-control-center, I can't see the metrics for the broker. There is a message that says "Set up Confluent Metrics Reporter".
Can I set up and collect these metrics for the wurstmeister/kafka image?
My docker-compose file is the following:
kafka:
  image: wurstmeister/kafka
  container_name: kafka
  hostname: kafka
  ports:
    - "9092"
    - "9999"
  environment:
    KAFKA_ADVERTISED_HOST_NAME: kafka
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_ADVERTISED_PORT: 9092
    KAFKA_PORT: 9092
    KAFKA_JMX_OPTS: "-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=kafka -Dcom.sun.management.jmxremote.rmi.port=9999"
    JMX_PORT: 9999
    KAFKA_LISTENERS: PLAINTEXT://:9092
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://172.22.0.4:9092
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  depends_on:
    - zookeeper
control-center:
  image: confluentinc/cp-enterprise-control-center:6.0.0
  hostname: control-center
  container_name: control-center
  depends_on:
    - kafka
  ports:
    - "9021:9021"
  environment:
    CONTROL_CENTER_BOOTSTRAP_SERVERS: kafka:9092
    CONTROL_CENTER_REPLICATION_FACTOR: 1
    CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1
    CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1
    CONFLUENT_METRICS_TOPIC_REPLICATION: 1
    PORT: 9021
The metrics reporters for the brokers aren't on the classpath of the wurstmeister container, and the metrics topic isn't created.
You'd have to download the Confluent Platform to get those reporters, so there's no reason not to use their container instead.
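You can confirm the reporter jar simply isn't present by listing the Kafka libs directory of the wurstmeister container (a sketch; /opt/kafka/libs is where that image keeps its jars, as far as I know):
# no confluent-metrics jar here means there is nothing publishing broker metrics for Control Center
docker exec kafka ls /opt/kafka/libs | grep -i confluent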

Controller 1's connection to broker was unsuccessful docker confluent

I'm following the setup for provisioning the Kafka, ZooKeeper, and Control Center Docker images provided by the Confluent team on my local machine, with the docker-compose.yml file below:
docker-compose.yml
redis:
  image: redis
  container_name: redis
  ports:
    - 6379:6379
mysql:
  image: mysql:5.6
  container_name: mysql
  command: --default-authentication-plugin=mysql_native_password
  restart: always
  environment:
    MYSQL_ROOT_PASSWORD: rootpass
    MYSQL_DATABASE:
    MYSQL_USER:
    MYSQL_PASSWORD:
  ports:
    - 3306:3306
ksql-server:
  image: confluentinc/cp-ksql-server:5.2.1
  hostname: ksql-server
  container_name: ksql-server
  links:
    - kafka
  ports:
    - "8088:8088"
  environment:
    KSQL_CONFIG_DIR: "/etc/ksql"
    KSQL_LOG4J_OPTS: "-Dlog4j.configuration=file:/etc/ksql/log4j-rolling.properties"
    KSQL_BOOTSTRAP_SERVERS: "kafka:29092"
    KSQL_HOST_NAME: ksql-server
    KSQL_APPLICATION_ID: "cp-all-in-one"
    KSQL_LISTENERS: "http://0.0.0.0:8088"
    KSQL_CACHE_MAX_BYTES_BUFFERING: 0
    # KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
    KSQL_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
    KSQL_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
zookeeper:
  image: confluentinc/cp-zookeeper:latest
  container_name: zookeeper
  environment:
    ZOOKEEPER_CLIENT_PORT: 2181
    ZOOKEEPER_TICK_TIME: 2000
kafka:
  image: confluentinc/cp-kafka:latest
  container_name: kafka
  links:
    - zookeeper
  ports:
    - 9092:9092
    - 29092:29092
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
control-center:
  image: confluentinc/cp-enterprise-control-center:5.2.1
  hostname: control-center
  container_name: control-center
  links:
    - zookeeper
    - kafka
    - ksql-server
  ports:
    - 9021:9021
  environment:
    CONTROL_CENTER_BOOTSTRAP_SERVERS: kafka:29092
    CONTROL_CENTER_ZOOKEEPER_CONNECT: zookeeper:2181
    CONTROL_CENTER_KSQL_URL: http://ksql-server:8088
    CONTROL_CENTER_KSQL_ADVERTISED_URL: http://localhost:8088
    CONTROL_CENTER_REPLICATION_FACTOR: 1
    CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1
    CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1
    CONFLUENT_METRICS_TOPIC_REPLICATION: 1
    PORT: 9021
The setup worked well on my old laptop, but after changing to a new one, I cannot make it work.
I ran docker logs control-center and saw the logs below.
Logs:
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 2.2.0-cp2
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: c4bca159dd111016
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (kafka/172.17.0.5:29092) could not be established. Broker may not be available.
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (kafka/172.17.0.5:29092) could not be established. Broker may not be available.
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (kafka/172.17.0.5:29092) could not be established. Broker may not be available.
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (kafka/172.17.0.5:29092) could not be established. Broker may not be available.
I checked the logs of the kafka container and saw these errors:
[2019-05-18 11:39:28,721] WARN [RequestSendThread controllerId=1] Controller 1's connection to broker kafka:29092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
java.lang.IllegalStateException: No entry found for connection 1
at org.apache.kafka.clients.ClusterConnectionStates.nodeState(ClusterConnectionStates.java:339)
at org.apache.kafka.clients.ClusterConnectionStates.disconnected(ClusterConnectionStates.java:143)
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:921)
at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:287)
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:65)
at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:279)
at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:233)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
Below are the running Docker containers:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
94e7a2e4e61a backend_collection-engine-consumer-generator "java -cp collection…" 4 minutes ago Up 4 minutes collection-engine-consumer-generator
791528e51a06 backend_collection-engine-executor "java -DsplitEvents=…" 4 minutes ago Up 4 minutes collection-engine-executor
d2eadae2ce7d confluentinc/cp-kafka:latest "/etc/confluent/dock…" 8 minutes ago Up 8 minutes 0.0.0.0:9092->9092/tcp, 0.0.0.0:29092->29092/tcp kafka
b1230959ceeb confluentinc/cp-zookeeper:latest "/etc/confluent/dock…" 8 minutes ago Up 8 minutes 2181/tcp, 2888/tcp, 3888/tcp zookeeper
0bcb7bc0e2d4 redis "docker-entrypoint.s…" 8 minutes ago Up 8 minutes 0.0.0.0:6379->6379/tcp redis
040fec54d7b8 mysql:5.6 "docker-entrypoint.s…" 8 minutes ago Up 8 minutes 0.0.0.0:3306->3306/tcp mysql
I cannot even access Control Center via http://localhost:9021 as I normally would.
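When Control Center behaves like this, it can help to first check whether the broker actually answers on the listener Control Center is pointed at (a sketch using the kafka-broker-api-versions tool that ships with the cp-kafka image):
# prints the supported API versions if kafka:29092 is reachable from inside the container;
# if it hangs or errors, the advertised listener / Docker network setup is the problem
docker exec kafka kafka-broker-api-versions --bootstrap-server kafka:29092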