I have a very basic Kafka setup: a single broker running on an AWS EC2 instance with 100 GB of storage. However, within hours the disk fills to 100%, and most of the usage comes from the Docker container running Kafka. Here is the simple Kafka service that I am running:
broker:
image: confluentinc/cp-server:6.2.0
hostname: broker
container_name: broker
depends_on:
- zookeeper
ports:
- "9092:9092"
- "9101:9101"
environment:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://<ip>:9092
KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR: 1
KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
KAFKA_JMX_PORT: 9101
KAFKA_JMX_HOSTNAME: <ip>
KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081
CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:29092
CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
CONFLUENT_METRICS_ENABLE: 'true'
CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
I suspect it might be that I am not setting the retention and cleanup policy, which I had tried to set up by defining:
KAFKA_LOG_RETENTION_MINUTES: 5
KAFKA_LOG_CLEANUP_POLICY: compact, delete
However, when I set this up, apart from still facing the disk-full issue, I am not able to publish any new messages to a Kafka topic.
Is there something that I am missing?
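As an aside, the same retention settings can also be applied per topic at runtime rather than broker-wide; a sketch using the stock CLI, where my-topic is a placeholder topic name:
docker exec broker \
kafka-configs --bootstrap-server broker:29092 --alter \
--entity-type topics --entity-name my-topic \
--add-config retention.ms=300000,cleanup.policy=delete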
I fixed the issue by defining Docker volumes for my Kafka setup as follows:
# Create dirs for Kafka / ZK data.
mkdir -p /vol1/zk-data
mkdir -p /vol2/zk-txn-logs
mkdir -p /vol3/kafka-data
# Make sure the container user has read and write permissions.
chown -R 1000:1000 /vol1/zk-data
chown -R 1000:1000 /vol2/zk-txn-logs
chown -R 1000:1000 /vol3/kafka-data
I got this from the Confluent documentation here and defined the volumes in my docker-compose file as:
services:
zookeeper:
image: confluentinc/cp-zookeeper:6.2.0
hostname: zookeeper
container_name: zookeeper
ports:
- "2181:2181"
environment:
ZOOKEEPER_CLIENT_PORT: 2181
ZOOKEEPER_TICK_TIME: 2000
volumes:
- /vol1/zk-data:/var/lib/zookeeper/data
- /vol2/zk-txn-logs:/var/lib/zookeeper/log
broker:
image: confluentinc/cp-server:6.2.0
hostname: broker
container_name: broker
depends_on:
- zookeeper
ports:
- "9092:9092"
- "9101:9101"
environment:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://<ip>:9092
KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR: 1
KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
KAFKA_JMX_PORT: 9101
KAFKA_JMX_HOSTNAME: <ip>
KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081
CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:29092
CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
CONFLUENT_METRICS_ENABLE: 'true'
CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
volumes:
- /vol3/kafka-data:/var/lib/kafka/data
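To confirm the data now lands on the host volumes (and to keep an eye on disk usage), a quick check from the EC2 host might be:
ls /vol3/kafka-data
df -h /vol1 /vol2 /vol3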
Context
We have 2 Kafka clusters in an active/active configuration.
We want to use MirrorMaker 2 to help us with DR by syncing topics and consumer offsets, so that consumers can fail over to the secondary cluster in the event of an issue with the primary cluster.
Problem
We would perform this fail-over as follows:
1. Consumers are consuming from both local and remote topics in a Kafka cluster (.*topic1).
2. Producers switch from clusterA to clusterB.
3. Consumers switch from clusterA to clusterB.
The issue occurs between steps 2 and 3: when the consumer disconnects from clusterA, any messages produced to clusterB (step 2) while it is disconnected are not received when it connects to clusterB.
Subsequent messages are received.
If the process is then repeated, the issue no longer occurs, because by that point the consumer group exists in both clusters.
Steps to reproduce
Save the following file as mm2.properties in directory mm2_config:
clusters=clusterA, clusterB
clusterA.bootstrap.servers=broker1A:29092,broker2A:39092,broker3A:49092
clusterB.bootstrap.servers=broker1B:29093,broker2B:29094,broker3B:29095
clusterB.config.storage.replication.factor=3
clusterA.offset.storage.replication.factor=3
clusterB.offset.storage.replication.factor=3
clusterA.status.storage.replication.factor=3
clusterB.status.storage.replication.factor=3
clusterA->clusterB.enabled=true
clusterB->clusterA.enabled=true
offset-syncs.topic.replication.factor=3
heartbeats.topic.replication.factor=3
checkpoints.topic.replication.factor=3
topics=.*
groups=.*
tasks.max=2
replication.factor=3
refresh.topics.enabled=true
sync.topic.configs.enabled=true
refresh.topics.interval.seconds=30
clusterA->clusterB.emit.heartbeats.enabled=true
clusterA->clusterB.emit.checkpoints.enabled=true
clusterB->clusterA.emit.heartbeats.enabled=true
clusterB->clusterA.emit.checkpoints.enabled=true
clusterA->clusterB.sync.group.offsets.enabled=true
clusterB->clusterA.sync.group.offsets.enabled=true
clusterA->clusterB.emit.checkpoints.interval.seconds=10
clusterA->clusterB.sync.group.offsets.interval.seconds=10
clusterB->clusterA.emit.checkpoints.interval.seconds=10
clusterB->clusterA.sync.group.offsets.interval.seconds=10
Save the following docker-compose.yml, which creates 2 Kafka clusters, each with 3 brokers and 1 ZooKeeper node:
version: '3'
services:
zookeeperA:
image: confluentinc/cp-zookeeper
hostname: zookeeperA
container_name: zookeeperA
ports:
- 22181:22181
- 22888:22888
- 23888:23888
volumes:
- ./zookeeperA/data:/data
environment:
ZOOKEEPER_SERVER_ID: 1
ZOOKEEPER_CLIENT_PORT: 22181
ZOOKEEPER_TICK_TIME: 2000
ZOOKEEPER_INIT_LIMIT: 5
ZOOKEEPER_SYNC_LIMIT: 2
ZOOKEEPER_SERVERS: zookeeperA:22888:23888
broker1A:
image: confluentinc/cp-kafka
hostname: broker1A
container_name: broker1A
ports:
- 9092:9092
- 29092:29092
depends_on:
- zookeeperA
environment:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: zookeeperA:22181
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:29092,PLAINTEXT_HOST://0.0.0.0:9092
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker1A:29092,PLAINTEXT_HOST://localhost:9092
ALLOW_PLAINTEXT_LISTENER: 'yes'
KAFKA_AUTO_OFFSET_RESET: "latest"
broker2A:
image: confluentinc/cp-kafka
hostname: broker2A
container_name: broker2A
ports:
- 9093:9093
- 39092:39092
depends_on:
- zookeeperA
environment:
KAFKA_BROKER_ID: 2
KAFKA_ZOOKEEPER_CONNECT: zookeeperA:22181
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:39092,PLAINTEXT_HOST://0.0.0.0:9093
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker2A:39092,PLAINTEXT_HOST://localhost:9093
ALLOW_PLAINTEXT_LISTENER: 'yes'
KAFKA_AUTO_OFFSET_RESET: "latest"
broker3A:
image: confluentinc/cp-kafka
hostname: broker3A
container_name: broker3A
ports:
- 9094:9094
- 49092:49092
depends_on:
- zookeeperA
environment:
KAFKA_BROKER_ID: 3
KAFKA_ZOOKEEPER_CONNECT: zookeeperA:22181
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:49092,PLAINTEXT_HOST://0.0.0.0:9094
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker3A:49092,PLAINTEXT_HOST://localhost:9094
ALLOW_PLAINTEXT_LISTENER: 'yes'
KAFKA_AUTO_OFFSET_RESET: "latest"
zookeeperB:
image: confluentinc/cp-zookeeper
hostname: zookeeperB
container_name: zookeeperB
ports:
- 32181:32181
- 32888:32888
- 33888:33888
volumes:
- ./zookeeperB/data:/data
environment:
ZOOKEEPER_SERVER_ID: 1
ZOOKEEPER_CLIENT_PORT: 32181
ZOOKEEPER_TICK_TIME: 2000
ZOOKEEPER_INIT_LIMIT: 5
ZOOKEEPER_SYNC_LIMIT: 2
ZOOKEEPER_SERVERS: zookeeperB:32888:33888
broker1B:
image: confluentinc/cp-kafka
hostname: broker1B
container_name: broker1B
ports:
- 8092:8092
- 29093:29093
depends_on:
- zookeeperB
environment:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: zookeeperB:32181
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:29093,PLAINTEXT_HOST://0.0.0.0:8092
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker1B:29093,PLAINTEXT_HOST://localhost:8092
ALLOW_PLAINTEXT_LISTENER: 'yes'
KAFKA_AUTO_OFFSET_RESET: "latest"
broker2B:
image: confluentinc/cp-kafka
hostname: broker2B
container_name: broker2B
ports:
- 8093:8093
- 29094:29094
depends_on:
- zookeeperB
environment:
KAFKA_BROKER_ID: 2
KAFKA_ZOOKEEPER_CONNECT: zookeeperB:32181
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:29094,PLAINTEXT_HOST://0.0.0.0:8093
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker2B:29094,PLAINTEXT_HOST://localhost:8093
ALLOW_PLAINTEXT_LISTENER: 'yes'
KAFKA_AUTO_OFFSET_RESET: "latest"
broker3B:
image: confluentinc/cp-kafka
hostname: broker3B
container_name: broker3B
ports:
- 8094:8094
- 29095:29095
depends_on:
- zookeeperB
environment:
KAFKA_BROKER_ID: 3
KAFKA_ZOOKEEPER_CONNECT: zookeeperB:32181
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:29095,PLAINTEXT_HOST://0.0.0.0:8094
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker3B:29095,PLAINTEXT_HOST://localhost:8094
ALLOW_PLAINTEXT_LISTENER: 'yes'
KAFKA_AUTO_OFFSET_RESET: "latest"
mirror-maker:
image: confluentinc/cp-kafka
hostname: mirror-maker
container_name: mirror-maker
volumes:
- ./mm2_config:/tmp/kafka/config
ports:
- 9091:9091
- 29096:29096
depends_on:
- zookeeperA
- zookeeperB
- broker1A
- broker2A
- broker3A
- broker1B
- broker2B
- broker3B
environment:
KAFKA_BROKER_ID: 4
KAFKA_ZOOKEEPER_CONNECT: zookeeperA:22181,zookeeperB:32181
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_LISTENERS: PLAINTEXT://:29096,PLAINTEXT_HOST://:9091
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://mirror-maker:29096,PLAINTEXT_HOST://localhost:9091
ALLOW_PLAINTEXT_LISTENER: 'yes'
KAFKA_AUTO_OFFSET_RESET: "latest"
Start the stack:
docker-compose up -d
Create topic topic1 in cluster A:
docker exec broker1A \
kafka-topics --bootstrap-server broker1A:29092 --create --topic topic1 --partitions 3 --replication-factor 3
Create topic topic1 in cluster B:
docker exec broker1B \
kafka-topics --bootstrap-server broker1B:29093 --create --topic topic1 --partitions 3 --replication-factor 3
Start MirrorMaker:
docker exec mirror-maker \
connect-mirror-maker /tmp/kafka/config/mm2.properties
Start a console consumer for cluster A:
docker exec broker1A \
kafka-console-consumer --bootstrap-server broker1A:29092 --include '.*topic1' --group test
Start a console producer for cluster A and publish m1:
docker exec -it broker1A \
kafka-console-producer --bootstrap-server broker1A:29092 --topic topic1
> m1
Verify that the consumer received m1.
Stop the producer, start it again pointing to cluster B, and publish message m2:
docker exec -it broker1B \
kafka-console-producer --bootstrap-server broker1B:29093 --topic topic1
> m2
Verify that the consumer received m2.
Stop the consumer.
Have the producer publish m3.
Start the consumer in cluster B:
docker exec broker1B \
kafka-console-consumer --bootstrap-server broker1B:29093 --include '.*topic1' --group test
Observe that m3 is not received.
Publish m4 from the producer.
Verify that m4 is received.
Message m3 is lost.
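To see what MM2 has checkpointed for the consumer group at that point, describing the group on cluster B can help (group test as used above):
docker exec broker1B \
kafka-consumer-groups --bootstrap-server broker1B:29093 --describe --group test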
I need to enable SSL security in Apache Kafka and ZooKeeper. Is there a tutorial? I am facing issues with the truststore path.
You can go through the links below to set up SSL:
https://docs.confluent.io/platform/current/security/security_tutorial.html#generating-keys-certs
https://docs.confluent.io/3.0.0/kafka/ssl.html
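The truststore referenced in those guides is created with keytool by importing the CA certificate; a minimal sketch (file names are placeholders):
keytool -keystore kafka.server.truststore.jks -alias CARoot -import -file ca-cert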
This is the docker-compose file I am currently using:
version: '3'
services:
zookeeper:
image: confluentinc/cp-zookeeper:latest
container_name: zookeeper
hostname: zookeeper
ports:
- 2181:2181
environment:
ZOOKEEPER_SERVER_ID: 1
ZOOKEEPER_CLIENT_PORT: 2181
broker:
image: confluentinc/cp-kafka:latest
container_name: broker
hostname: broker
depends_on:
- zookeeper
environment:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,SSL:SSL
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,SSL://broker:9093
KAFKA_SSL_KEYSTORE_FILENAME: kafka.server.keystore.jks
KAFKA_SSL_KEYSTORE_CREDENTIALS: kafka.key
KAFKA_SSL_KEY_CREDENTIALS: kafka.key
KAFKA_SSL_TRUSTSTORE_FILENAME: kafka.server.truststore.jks
KAFKA_SSL_TRUSTSTORE_CREDENTIALS: kafka.key
KAFKA_MIN_INSYNC_REPLICAS: 1
KAFKA_NUM_PARTITIONS: 1
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 500
KAFKA_DEFAULT_REPLICATION_FACTOR: 1
KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR: 1
KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
CONFLUENT_METRICS_ENABLE: 'false'
volumes:
- ./se:/etc/kafka/secrets
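For the Confluent images, the KAFKA_SSL_*_FILENAME values are resolved relative to /etc/kafka/secrets, and the *_CREDENTIALS values name files in that same directory whose contents are the passwords. A sketch of preparing the mounted ./se directory (the password changeit is a placeholder):
mkdir -p se
cp kafka.server.keystore.jks kafka.server.truststore.jks se/
echo "changeit" > se/kafka.key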
I am trying to add a MongoDB connector and an MQTT connector to my Confluent Connect setup through the following docker-compose:
---
version: '2'
services:
zookeeper:
image: confluentinc/cp-zookeeper:6.1.0
hostname: zookeeper
container_name: zookeeper
ports:
- "2181:2181"
environment:
ZOOKEEPER_CLIENT_PORT: 2181
ZOOKEEPER_TICK_TIME: 2000
broker:
image: confluentinc/cp-server:6.1.0
hostname: broker
container_name: broker
depends_on:
- zookeeper
ports:
- "9092:9092"
- "9101:9101"
environment:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR: 1
KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
KAFKA_JMX_PORT: 9101
KAFKA_JMX_HOSTNAME: localhost
KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081
CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:29092
CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
CONFLUENT_METRICS_ENABLE: 'true'
CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
schema-registry:
image: confluentinc/cp-schema-registry:6.1.0
hostname: schema-registry
container_name: schema-registry
depends_on:
- broker
ports:
- "8081:8081"
environment:
SCHEMA_REGISTRY_HOST_NAME: schema-registry
SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'broker:29092'
SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081
connect:
#image: cnfldemos/cp-server-connect-datagen:0.4.0-6.1.0
# uncomment the following line to build a custom connector
#image: confluentinc/kafka-connect-datagen:latest
build:
context: .
dockerfile: Dockerfile-confluenthub
hostname: connect
container_name: connect
depends_on:
- broker
- schema-registry
ports:
- "8083:8083"
environment:
CONNECT_BOOTSTRAP_SERVERS: 'broker:29092'
CONNECT_REST_ADVERTISED_HOST_NAME: connect
CONNECT_REST_PORT: 8083
CONNECT_GROUP_ID: compose-connect-group
CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
# CLASSPATH required due to CC-2422
CLASSPATH: /usr/share/java/monitoring-interceptors/monitoring-interceptors-6.1.0.jar
CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR
control-center:
image: confluentinc/cp-enterprise-control-center:6.1.0
hostname: control-center
container_name: control-center
depends_on:
- broker
- schema-registry
- connect
- ksqldb-server
ports:
- "9021:9021"
environment:
CONTROL_CENTER_BOOTSTRAP_SERVERS: 'broker:29092'
CONTROL_CENTER_CONNECT_CLUSTER: 'connect:8083'
CONTROL_CENTER_KSQL_KSQLDB1_URL: "http://ksqldb-server:8088"
CONTROL_CENTER_KSQL_KSQLDB1_ADVERTISED_URL: "http://localhost:8088"
CONTROL_CENTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
CONTROL_CENTER_REPLICATION_FACTOR: 1
CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1
CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1
CONFLUENT_METRICS_TOPIC_REPLICATION: 1
PORT: 9021
ksqldb-server:
image: confluentinc/cp-ksqldb-server:6.1.0
hostname: ksqldb-server
container_name: ksqldb-server
depends_on:
- broker
- connect
ports:
- "8088:8088"
environment:
KSQL_CONFIG_DIR: "/etc/ksql"
KSQL_BOOTSTRAP_SERVERS: "broker:29092"
KSQL_HOST_NAME: ksqldb-server
KSQL_LISTENERS: "http://0.0.0.0:8088"
KSQL_CACHE_MAX_BYTES_BUFFERING: 0
KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
KSQL_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
KSQL_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
KSQL_KSQL_CONNECT_URL: "http://connect:8083"
KSQL_KSQL_LOGGING_PROCESSING_TOPIC_REPLICATION_FACTOR: 1
KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: 'true'
KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: 'true'
ksqldb-cli:
image: confluentinc/cp-ksqldb-cli:6.1.0
container_name: ksqldb-cli
depends_on:
- broker
- connect
- ksqldb-server
entrypoint: /bin/sh
tty: true
ksql-datagen:
image: confluentinc/ksqldb-examples:6.1.0
hostname: ksql-datagen
container_name: ksql-datagen
depends_on:
- ksqldb-server
- broker
- schema-registry
- connect
command: "bash -c 'echo Waiting for Kafka to be ready... && \
cub kafka-ready -b broker:29092 1 40 && \
echo Waiting for Confluent Schema Registry to be ready... && \
cub sr-ready schema-registry 8081 40 && \
echo Waiting a few seconds for topic creation to finish... && \
sleep 11 && \
tail -f /dev/null'"
environment:
KSQL_CONFIG_DIR: "/etc/ksql"
STREAMS_BOOTSTRAP_SERVERS: broker:29092
STREAMS_SCHEMA_REGISTRY_HOST: schema-registry
STREAMS_SCHEMA_REGISTRY_PORT: 8081
rest-proxy:
image: confluentinc/cp-kafka-rest:6.1.0
depends_on:
- broker
- schema-registry
ports:
- 8082:8082
hostname: rest-proxy
container_name: rest-proxy
environment:
KAFKA_REST_HOST_NAME: rest-proxy
KAFKA_REST_BOOTSTRAP_SERVERS: 'broker:29092'
KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
where the Dockerfile-confluenthub referenced by the connect service is:
FROM cnfldemos/cp-server-connect-datagen:0.4.0-6.1.0
RUN confluent-hub install --no-prompt hpgrahsl/kafka-connect-mongodb:1.1.0 \
&& confluent-hub install --no-prompt confluentinc/kafka-connect-mqtt:1.4.0 \
&& confluent-hub install --no-prompt debezium/debezium-connector-sqlserver:1.3.1
But with this setup the connect container stops working after a few minutes.
Can you help me?
Personally, I would use individual RUN commands, since each connector installation would then become its own Docker layer.
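A sketch of that approach, using the same connectors as above:
FROM cnfldemos/cp-server-connect-datagen:0.4.0-6.1.0
# one install per RUN so each connector is cached as its own layer
RUN confluent-hub install --no-prompt hpgrahsl/kafka-connect-mongodb:1.1.0
RUN confluent-hub install --no-prompt confluentinc/kafka-connect-mqtt:1.4.0
RUN confluent-hub install --no-prompt debezium/debezium-connector-sqlserver:1.3.1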
Error 137 is a memory issue. It's unclear what your host OS is, but on Windows and Mac you must increase the Docker memory setting to at least 6GB to run all those containers at the same time. It's also unclear whether you plan on running the databases within Docker as well; if so, you'll definitely need even more memory.
If you only want Connect, then remove Control Center, the REST Proxy, and the three ksql containers.
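To see which containers are actually consuming the memory, docker stats is handy:
docker stats --no-stream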
I am using the Confluent Platform Docker images (cp-all-in-one) from GitHub:
https://github.com/confluentinc/cp-all-in-one/tree/5.5.1-post/cp-all-in-one
To avoid Kafka unavailability, how can I change the docker-compose file to configure multiple Kafka brokers without affecting the existing cluster setup?
Here is my docker-compose.yml file:
---
version: "2"
services:
zookeeper:
image: confluentinc/cp-zookeeper:5.5.1
hostname: zookeeper
container_name: zookeeper
ports:
- "2181:2181"
environment:
ZOOKEEPER_CLIENT_PORT: 2181
ZOOKEEPER_TICK_TIME: 2000
broker:
image: confluentinc/cp-server:5.5.1
hostname: broker
container_name: broker
depends_on:
- zookeeper
ports:
- "9092:9092"
environment:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:29092
CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
CONFLUENT_METRICS_ENABLE: "true"
CONFLUENT_SUPPORT_CUSTOMER_ID: "anonymous"
KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081
schema-registry:
image: confluentinc/cp-schema-registry:5.5.1
hostname: schema-registry
container_name: schema-registry
depends_on:
- zookeeper
- broker
ports:
- "8081:8081"
environment:
SCHEMA_REGISTRY_HOST_NAME: schema-registry
SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: "zookeeper:2181"
connect:
image: cnfldemos/cp-server-connect-datagen:0.3.2-5.5.0
hostname: connect
container_name: connect
depends_on:
- zookeeper
- broker
- schema-registry
ports:
- "8083:8083"
environment:
CONNECT_BOOTSTRAP_SERVERS: "broker:29092"
CONNECT_REST_ADVERTISED_HOST_NAME: connect
CONNECT_REST_PORT: 8083
CONNECT_GROUP_ID: compose-connect-group
CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
CONNECT_ZOOKEEPER_CONNECT: "zookeeper:2181"
# CLASSPATH required due to CC-2422
CLASSPATH: /usr/share/java/monitoring-interceptors/monitoring-interceptors-5.5.1.jar
CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR
control-center:
image: confluentinc/cp-enterprise-control-center:5.5.1
hostname: control-center
container_name: control-center
depends_on:
- zookeeper
- broker
- schema-registry
- connect
- ksqldb-server
ports:
- "9021:9021"
environment:
CONTROL_CENTER_BOOTSTRAP_SERVERS: "broker:29092"
CONTROL_CENTER_ZOOKEEPER_CONNECT: "zookeeper:2181"
CONTROL_CENTER_CONNECT_CLUSTER: "connect:8083"
CONTROL_CENTER_KSQL_KSQLDB1_URL: "http://ksqldb-server:8088"
CONTROL_CENTER_KSQL_KSQLDB1_ADVERTISED_URL: "http://localhost:8088"
CONTROL_CENTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
CONTROL_CENTER_REPLICATION_FACTOR: 1
CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1
CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1
CONFLUENT_METRICS_TOPIC_REPLICATION: 1
PORT: 9021
ksqldb-server:
image: confluentinc/cp-ksqldb-server:5.5.1
hostname: ksqldb-server
container_name: ksqldb-server
depends_on:
- broker
- connect
ports:
- "8088:8088"
environment:
KSQL_CONFIG_DIR: "/etc/ksql"
KSQL_BOOTSTRAP_SERVERS: "broker:29092"
KSQL_HOST_NAME: ksqldb-server
KSQL_LISTENERS: "http://0.0.0.0:8088"
KSQL_CACHE_MAX_BYTES_BUFFERING: 0
KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
KSQL_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
KSQL_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
KSQL_KSQL_CONNECT_URL: "http://connect:8083"
ksqldb-cli:
image: confluentinc/cp-ksqldb-cli:5.5.1
container_name: ksqldb-cli
depends_on:
- broker
- connect
- ksqldb-server
entrypoint: /bin/sh
tty: true
ksql-datagen:
image: confluentinc/ksqldb-examples:5.5.1
hostname: ksql-datagen
container_name: ksql-datagen
depends_on:
- ksqldb-server
- broker
- schema-registry
- connect
command: "bash -c 'echo Waiting for Kafka to be ready... && \
cub kafka-ready -b broker:29092 1 40 && \
echo Waiting for Confluent Schema Registry to be ready... && \
cub sr-ready schema-registry 8081 40 && \
echo Waiting a few seconds for topic creation to finish... && \
sleep 11 && \
tail -f /dev/null'"
environment:
KSQL_CONFIG_DIR: "/etc/ksql"
STREAMS_BOOTSTRAP_SERVERS: broker:29092
STREAMS_SCHEMA_REGISTRY_HOST: schema-registry
STREAMS_SCHEMA_REGISTRY_PORT: 8081
rest-proxy:
image: confluentinc/cp-kafka-rest:5.5.1
depends_on:
- zookeeper
- broker
- schema-registry
ports:
- 8082:8082
hostname: rest-proxy
container_name: rest-proxy
environment:
KAFKA_REST_HOST_NAME: rest-proxy
KAFKA_REST_BOOTSTRAP_SERVERS: "broker:29092"
KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
KAFKA_REST_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
Please share your thoughts or suggestions.
If you want another broker, you follow the same instructions as outside of Docker:
copy the existing broker service, keep the same KAFKA_ZOOKEEPER_CONNECT, and use a different KAFKA_BROKER_ID.
You can then raise the replication factors and ISR settings to values higher than 1.
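A minimal sketch of such a second broker service (the host ports and listener addresses are examples to adapt):
broker-2:
  image: confluentinc/cp-server:5.5.1
  hostname: broker-2
  container_name: broker-2
  depends_on:
    - zookeeper
  ports:
    - "9093:9093"
  environment:
    KAFKA_BROKER_ID: 2
    KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker-2:29093,PLAINTEXT_HOST://localhost:9093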
QUESTION:
How do I configure "securityMechanism=9, encryptionAlgorithm=2" for a DB2 database connection in my docker-compose file?
NOTE: When running my local Kafka installation (kafka_2.13-2.6.0) to connect to a DB2 database on the network, I only had to modify the bin/connect-standalone.sh file,
changing the existing "EXTRA_ARGS=" line like this:
(...)
EXTRA_ARGS=${EXTRA_ARGS-'-name connectStandalone -Ddb2.jcc.securityMechanism=9 -Ddb2.jcc.encryptionAlgorithm=2'}
(...)
and it worked fine.
However, when I tried the same idea for a containerized Kafka/broker service (docker-compose.yml),
mounting a volume with the modified connect-standalone file (to replace /usr/bin/connect-standalone in the container), it did not work.
I did verify that the file inside the container was changed.
I receive this exception when I attempt to use the kafka-jdbc-source-connector to connect to the database:
Caused by: com.ibm.db2.jcc.am.SqlInvalidAuthorizationSpecException: [jcc][t4][201][11237][4.25.13] Connection authorization failure occurred.
Reason: Security mechanism not supported. ERRORCODE=-4214, SQLSTATE=28000
So, again: how do I configure the securityMechanism/encryptionAlgorithm settings in a docker-compose.yml?
Thanks for any help
-sairn
Here is the docker-compose.yml. You can see I've tried mounting the volume with the modified connect-standalone file in both the broker (kafka) service and the kafka-connect service; neither achieved the desired effect:
version: '3.8'
services:
zookeeper:
image: confluentinc/cp-zookeeper:6.0.0
container_name: zookeeper
ports:
- "2181:2181"
environment:
ZOOKEEPER_CLIENT_PORT: 2181
ZOOKEEPER_TICK_TIME: 2000
kafka:
image: confluentinc/cp-enterprise-kafka:6.0.0
container_name: kafka
depends_on:
- zookeeper
ports:
- "9092:9092"
environment:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://kafka:9092
KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 100
CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: kafka:29092
CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
CONFLUENT_METRICS_ENABLE: 'true'
CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
JVM_OPTS: "-Ddb2.jcc.securityMechanism=9 -Ddb2.jcc.encryptionAlgorithm=2"
volumes:
- ./connect-standalone:/usr/bin/connect-standalone
schema-registry:
image: confluentinc/cp-schema-registry:6.0.0
container_name: schema-registry
hostname: schema-registry
depends_on:
- zookeeper
- kafka
ports:
- "8081:8081"
environment:
SCHEMA_REGISTRY_HOST_NAME: schema-registry
SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: 'zookeeper:2181'
SCHEMA_REGISTRY_LISTENERS: http://schema-registry:8081
kafka-connect:
image: confluentinc/cp-kafka-connect:6.0.0
container_name: kafka-connect
hostname: kafka-connect
depends_on:
- kafka
- schema-registry
ports:
- "8083:8083"
environment:
CONNECT_BOOTSTRAP_SERVERS: "kafka:29092"
CONNECT_REST_ADVERTISED_HOST_NAME: "kafka-connect"
CONNECT_REST_PORT: 8083
CONNECT_GROUP_ID: kafka-connect
CONNECT_CONFIG_STORAGE_TOPIC: kafka-connect-configs
CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
CONNECT_OFFSET_STORAGE_TOPIC: kafka-connect-offsets
CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
CONNECT_STATUS_STORAGE_TOPIC: kafka-connect-status
CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
CONNECT_ZOOKEEPER_CONNECT: 'zookeeper:2181'
CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR
JVM_OPTS: "-Ddb2.jcc.securityMechanism=9 -Ddb2.jcc.encryptionAlgorithm=2"
volumes:
- ./kafka-connect-jdbc-10.0.1.jar:/usr/share/java/kafka-connect-jdbc/kafka-connect-jdbc-10.0.1.jar
- ./db2jcc-db2jcc4.jar:/usr/share/java/kafka-connect-jdbc/db2jcc-db2jcc4.jar
- ./connect-standalone:/usr/bin/connect-standalone
FWIW, the connector config looks similar to this:
curl -X POST http://localhost:8083/connectors -H "Content-Type: application/json" -d '{
"name": "CONNECTOR01",
"config": {
"connector.class":"io.confluent.connect.jdbc.JdbcSourceConnector",
"connection.url":"jdbc:db2://THEDBURL:50000/XXXXX",
"connection.user":"myuserid",
"connection.password":"mypassword",
"poll.interval.ms":"15000",
"table.whitelist":"YYYYY.TABLEA",
"topic.prefix":"tbl-",
"mode":"timestamp",
"timestamp.initial":"-1",
"timestamp.column.name":"TIME_UPD",
"poll.interval.ms":"15000"
}
}'
Try using KAFKA_OPTS instead of JVM_OPTS; the Kafka startup scripts in the Confluent images pass KAFKA_OPTS through to the JVM, whereas JVM_OPTS is not read.
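For example, in the kafka-connect service this might look like (same system properties as above):
kafka-connect:
  environment:
    KAFKA_OPTS: "-Ddb2.jcc.securityMechanism=9 -Ddb2.jcc.encryptionAlgorithm=2"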