How can I set up a multi-broker Kafka cluster to avoid Kafka failure? - apache-kafka

I am using the Confluent Platform Docker image (cp-all-in-one) from GitHub:
https://github.com/confluentinc/cp-all-in-one/tree/5.5.1-post/cp-all-in-one
To avoid Kafka unavailability, how can I change the docker-compose file to configure multiple Kafka brokers without affecting the existing cluster setup?
Here is my docker-compose.yml file
---
version: "2"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.5.1
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-server:5.5.1
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:29092
      CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: "true"
      CONFLUENT_SUPPORT_CUSTOMER_ID: "anonymous"
      KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081
  schema-registry:
    image: confluentinc/cp-schema-registry:5.5.1
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - zookeeper
      - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: "zookeeper:2181"
  connect:
    image: cnfldemos/cp-server-connect-datagen:0.3.2-5.5.0
    hostname: connect
    container_name: connect
    depends_on:
      - zookeeper
      - broker
      - schema-registry
    ports:
      - "8083:8083"
    environment:
      CONNECT_BOOTSTRAP_SERVERS: "broker:29092"
      CONNECT_REST_ADVERTISED_HOST_NAME: connect
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
      CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_ZOOKEEPER_CONNECT: "zookeeper:2181"
      # CLASSPATH required due to CC-2422
      CLASSPATH: /usr/share/java/monitoring-interceptors/monitoring-interceptors-5.5.1.jar
      CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
      CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR
  control-center:
    image: confluentinc/cp-enterprise-control-center:5.5.1
    hostname: control-center
    container_name: control-center
    depends_on:
      - zookeeper
      - broker
      - schema-registry
      - connect
      - ksqldb-server
    ports:
      - "9021:9021"
    environment:
      CONTROL_CENTER_BOOTSTRAP_SERVERS: "broker:29092"
      CONTROL_CENTER_ZOOKEEPER_CONNECT: "zookeeper:2181"
      CONTROL_CENTER_CONNECT_CLUSTER: "connect:8083"
      CONTROL_CENTER_KSQL_KSQLDB1_URL: "http://ksqldb-server:8088"
      CONTROL_CENTER_KSQL_KSQLDB1_ADVERTISED_URL: "http://localhost:8088"
      CONTROL_CENTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      CONTROL_CENTER_REPLICATION_FACTOR: 1
      CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1
      CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1
      CONFLUENT_METRICS_TOPIC_REPLICATION: 1
      PORT: 9021
  ksqldb-server:
    image: confluentinc/cp-ksqldb-server:5.5.1
    hostname: ksqldb-server
    container_name: ksqldb-server
    depends_on:
      - broker
      - connect
    ports:
      - "8088:8088"
    environment:
      KSQL_CONFIG_DIR: "/etc/ksql"
      KSQL_BOOTSTRAP_SERVERS: "broker:29092"
      KSQL_HOST_NAME: ksqldb-server
      KSQL_LISTENERS: "http://0.0.0.0:8088"
      KSQL_CACHE_MAX_BYTES_BUFFERING: 0
      KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      KSQL_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      KSQL_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      KSQL_KSQL_CONNECT_URL: "http://connect:8083"
  ksqldb-cli:
    image: confluentinc/cp-ksqldb-cli:5.5.1
    container_name: ksqldb-cli
    depends_on:
      - broker
      - connect
      - ksqldb-server
    entrypoint: /bin/sh
    tty: true
  ksql-datagen:
    image: confluentinc/ksqldb-examples:5.5.1
    hostname: ksql-datagen
    container_name: ksql-datagen
    depends_on:
      - ksqldb-server
      - broker
      - schema-registry
      - connect
    command: "bash -c 'echo Waiting for Kafka to be ready... && \
              cub kafka-ready -b broker:29092 1 40 && \
              echo Waiting for Confluent Schema Registry to be ready... && \
              cub sr-ready schema-registry 8081 40 && \
              echo Waiting a few seconds for topic creation to finish... && \
              sleep 11 && \
              tail -f /dev/null'"
    environment:
      KSQL_CONFIG_DIR: "/etc/ksql"
      STREAMS_BOOTSTRAP_SERVERS: broker:29092
      STREAMS_SCHEMA_REGISTRY_HOST: schema-registry
      STREAMS_SCHEMA_REGISTRY_PORT: 8081
  rest-proxy:
    image: confluentinc/cp-kafka-rest:5.5.1
    depends_on:
      - zookeeper
      - broker
      - schema-registry
    ports:
      - 8082:8082
    hostname: rest-proxy
    container_name: rest-proxy
    environment:
      KAFKA_REST_HOST_NAME: rest-proxy
      KAFKA_REST_BOOTSTRAP_SERVERS: "broker:29092"
      KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
      KAFKA_REST_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
Please share your thoughts or suggestions.

If you want another broker, you follow the same instructions as outside of Docker:
Copy the existing broker service, keep the same KAFKA_ZOOKEEPER_CONNECT, and use a different KAFKA_BROKER_ID.
You can then raise the replication factors and min-ISR settings to values higher than 1.
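For example, with the compose file above, a second broker could look like the sketch below (the service name broker2 and the ports 9093/29093 are illustrative assumptions, not part of the original file):

```yaml
  broker2:
    image: confluentinc/cp-server:5.5.1
    hostname: broker2
    container_name: broker2
    depends_on:
      - zookeeper
    ports:
      - "9093:9093"
    environment:
      KAFKA_BROKER_ID: 2                         # must be unique per broker
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"  # same ZooKeeper = same cluster
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      # each broker needs its own advertised host/ports
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker2:29093,PLAINTEXT_HOST://localhost:9093
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 2
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 2
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 2
```

The other services can then bootstrap against both brokers, e.g. CONNECT_BOOTSTRAP_SERVERS: "broker:29092,broker2:29093", and the matching replication-factor settings on the first broker should be raised from 1 as well.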

Related

How to enable SSL in Kafka and ZooKeeper?

I need to enable SSL security in Apache Kafka and ZooKeeper. Is there a tutorial? I am facing issues with the truststore path.
You can go through the links below to set up SSL:
https://docs.confluent.io/platform/current/security/security_tutorial.html#generating-keys-certs
https://docs.confluent.io/3.0.0/kafka/ssl.html
This is the docker-compose file I am currently using:
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    container_name: zookeeper
    hostname: zookeeper
    ports:
      - 2181:2181
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 2181
  broker:
    image: confluentinc/cp-kafka:latest
    container_name: broker
    hostname: broker
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,SSL:SSL
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,SSL://broker:9093
      KAFKA_SSL_KEYSTORE_FILENAME: kafka.server.keystore.jks
      KAFKA_SSL_KEYSTORE_CREDENTIALS: kafka.key
      KAFKA_SSL_KEY_CREDENTIALS: kafka.key
      KAFKA_SSL_TRUSTSTORE_FILENAME: kafka.server.truststore.jks
      KAFKA_SSL_TRUSTSTORE_CREDENTIALS: kafka.key
      KAFKA_MIN_INSYNC_REPLICAS: 1
      KAFKA_NUM_PARTITIONS: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 500
      KAFKA_DEFAULT_REPLICATION_FACTOR: 1
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      CONFLUENT_METRICS_ENABLE: 'false'
    volumes:
      - ./se:/etc/kafka/secrets
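For context on the volume mounted above: the Confluent images resolve KAFKA_SSL_*_FILENAME and KAFKA_SSL_*_CREDENTIALS relative to /etc/kafka/secrets, i.e. the mounted ./se directory, and each credentials file holds just the store password in plain text. A sketch of preparing that directory (the password changeit is an assumed placeholder; the JKS stores themselves come from keytool, per the tutorials linked above):

```shell
# Directory that the compose file mounts at /etc/kafka/secrets
mkdir -p se

# kafka.key is the credentials file named by KAFKA_SSL_KEYSTORE_CREDENTIALS,
# KAFKA_SSL_KEY_CREDENTIALS and KAFKA_SSL_TRUSTSTORE_CREDENTIALS above;
# it contains only the password
echo "changeit" > se/kafka.key

# kafka.server.keystore.jks and kafka.server.truststore.jks (generated
# with keytool) must also be copied into this same directory
ls se
```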

docker-compose.yml with 3 ZooKeepers and 1 broker set up with a public IP - the broker fails to start with no meaningful logs (but works with 1 ZooKeeper)

I have the following docker-compose.yml file:
version: '3.7'
services:
  zookeeper-1:
    image: confluentinc/cp-zookeeper:6.2.1
    hostname: zookeeper-1
    container_name: zookeeper-1
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_PEER_PORT: 2888
      ZOOKEEPER_LEADER_PORT: 3888
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: "localhost:2888:3888;192.168.100.14:12888:13888;192.168.100.14:22888:23888"
    volumes:
      - ./kafka-data/zookeeper-1:/var/lib/zookeeper/data
      - ./kafka-data/zookeeper-logs-1:/var/lib/zookeeper/log
    networks:
      - mynet
  zookeeper-2:
    image: confluentinc/cp-zookeeper:6.2.1
    hostname: zookeeper-2
    container_name: zookeeper-2
    ports:
      - "12181:12181"
      - "12888:12888"
      - "13888:13888"
    environment:
      ZOOKEEPER_SERVER_ID: 2
      ZOOKEEPER_CLIENT_PORT: 12181
      ZOOKEEPER_PEER_PORT: 12888
      ZOOKEEPER_LEADER_PORT: 13888
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: "192.168.100.14:2888:3888;localhost:12888:13888;192.168.100.14:22888:23888"
    volumes:
      - ./kafka-data/zookeeper-2:/var/lib/zookeeper/data
      - ./kafka-data/zookeeper-logs-2:/var/lib/zookeeper/log
    networks:
      - mynet
  zookeeper-3:
    image: confluentinc/cp-zookeeper:6.2.1
    hostname: zookeeper-3
    container_name: zookeeper-3
    ports:
      - "22181:22181"
      - "22888:22888"
      - "23888:23888"
    environment:
      ZOOKEEPER_SERVER_ID: 3
      ZOOKEEPER_CLIENT_PORT: 22181
      ZOOKEEPER_PEER_PORT: 22888
      ZOOKEEPER_LEADER_PORT: 23888
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: "192.168.100.14:2888:3888;192.168.100.14:12888:13888;localhost:22888:23888"
    volumes:
      - ./kafka-data/zookeeper-3:/var/lib/zookeeper/data
      - ./kafka-data/zookeeper-logs-3:/var/lib/zookeeper/log
    networks:
      - mynet
  broker-1:
    image: confluentinc/cp-kafka:6.2.1
    hostname: broker-1
    container_name: broker-1
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENERS: OUTSIDE://0.0.0.0:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: OUTSIDE:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: OUTSIDE://192.168.100.14:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: OUTSIDE
      KAFKA_ZOOKEEPER_CONNECT: "192.168.100.14:2181,192.168.100.14:12181,192.168.100.14:22181"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_LOG_MESSAGE_TIMESTAMP_TYPE: 'LogAppendTime'
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONNECTIONS_MAX_IDLE_MS: 31536000000 # 1 year
    volumes:
      - ./kafka-data/kafka-1:/var/lib/kafka/data
    networks:
      - mynet
  # PORT 8081 reserved for Schema Registry
  kafka-rest-1:
    image: confluentinc/cp-kafka-rest:6.2.1
    hostname: kafka-rest-1
    container_name: kafka-rest-1
    depends_on:
      - broker-1
    ports:
      - "8082:8082"
    environment:
      KAFKA_REST_HOST_NAME: 192.168.100.14
      KAFKA_REST_LISTENERS: http://0.0.0.0:8082
      KAFKA_REST_BOOTSTRAP_SERVERS: 192.168.100.14:9092
    networks:
      - mynet
  # PORT 8083 reserved for Kafka-Connect REST API
  kafka-ui-1:
    image: provectuslabs/kafka-ui:0.2.1
    hostname: kafka-ui-1
    container_name: kafka-ui-1
    depends_on:
      - broker-1
    ports:
      - "8084:8080"
    environment:
      KAFKA_CLUSTERS_0_NAME: lab
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: 192.168.100.14:9092
      KAFKA_CLUSTERS_0_ZOOKEEPER: "192.168.100.14:2181;192.168.100.14:12181;192.168.100.14:22181"
    networks:
      - mynet
  kafdrop-1:
    image: obsidiandynamics/kafdrop:3.27.0
    hostname: kafrop-1
    container_name: kafdrop-1
    depends_on:
      - broker-1
    ports:
      - "8085:9000"
    environment:
      KAFKA_BROKERCONNECT: 192.168.100.14:9092
      JVM_OPTS: "-Xms32M -Xmx64M"
      SERVER_SERVLET_CONTEXTPATH: "/"
    networks:
      - mynet
networks:
  mynet:
    driver: bridge
I also found here on Stack Overflow that a ZooKeeper node's ZOOKEEPER_SERVERS value should not contain the node's own external IP (it should be 'localhost' there), so my 3 ZooKeepers do start and work together.
But broker-1 fails to start, exits with code 1, and the log is always:
# docker-compose up broker-1
zookeeper-1 is up-to-date
Starting broker-1 ... done
Attaching to broker-1
broker-1 | ===> User
broker-1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
broker-1 | ===> Configuring ...
broker-1 | ===> Running preflight checks ...
broker-1 | ===> Check if /var/lib/kafka/data is writable ...
broker-1 | ===> Check if Zookeeper is healthy ...
broker-1 | SLF4J: Class path contains multiple SLF4J bindings.
broker-1 | SLF4J: Found binding in [jar:file:/usr/share/java/cp-base-new/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
broker-1 | SLF4J: Found binding in [jar:file:/usr/share/java/cp-base-new/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
broker-1 | SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
broker-1 | SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
broker-1 | log4j:WARN No appenders could be found for logger (io.confluent.admin.utils.cli.ZookeeperReadyCommand).
broker-1 | log4j:WARN Please initialize the log4j system properly.
broker-1 | log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
broker-1 exited with code 1
But if I comment out the additional ZooKeepers, broker-1 starts fine. This is the config that works:
version: '3.7'
services:
  zookeeper-1:
    image: confluentinc/cp-zookeeper:6.2.1
    hostname: zookeeper-1
    container_name: zookeeper-1
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_PEER_PORT: 2888
      ZOOKEEPER_LEADER_PORT: 3888
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: "localhost:2888:3888" #;192.168.100.14:12888:13888;192.168.100.14:22888:23888"
    volumes:
      - ./kafka-data/zookeeper-1:/var/lib/zookeeper/data
      - ./kafka-data/zookeeper-logs-1:/var/lib/zookeeper/log
    networks:
      - mynet
#  zookeeper-2:
#    image: confluentinc/cp-zookeeper:6.2.1
#    hostname: zookeeper-2
#    container_name: zookeeper-2
#    ports:
#      - "12181:12181"
#      - "12888:12888"
#      - "13888:13888"
#    environment:
#      ZOOKEEPER_SERVER_ID: 2
#      ZOOKEEPER_CLIENT_PORT: 12181
#      ZOOKEEPER_PEER_PORT: 12888
#      ZOOKEEPER_LEADER_PORT: 13888
#      ZOOKEEPER_TICK_TIME: 2000
#      ZOOKEEPER_INIT_LIMIT: 5
#      ZOOKEEPER_SYNC_LIMIT: 2
#      ZOOKEEPER_SERVERS: "192.168.100.14:2888:3888;localhost:12888:13888;192.168.100.14:22888:23888"
#    volumes:
#      - ./kafka-data/zookeeper-2:/var/lib/zookeeper/data
#      - ./kafka-data/zookeeper-logs-2:/var/lib/zookeeper/log
#    networks:
#      - mynet
#
#  zookeeper-3:
#    image: confluentinc/cp-zookeeper:6.2.1
#    hostname: zookeeper-3
#    container_name: zookeeper-3
#    ports:
#      - "22181:22181"
#      - "22888:22888"
#      - "23888:23888"
#    environment:
#      ZOOKEEPER_SERVER_ID: 3
#      ZOOKEEPER_CLIENT_PORT: 22181
#      ZOOKEEPER_PEER_PORT: 22888
#      ZOOKEEPER_LEADER_PORT: 23888
#      ZOOKEEPER_TICK_TIME: 2000
#      ZOOKEEPER_INIT_LIMIT: 5
#      ZOOKEEPER_SYNC_LIMIT: 2
#      ZOOKEEPER_SERVERS: "192.168.100.14:2888:3888;192.168.100.14:12888:13888;localhost:22888:23888"
#    volumes:
#      - ./kafka-data/zookeeper-3:/var/lib/zookeeper/data
#      - ./kafka-data/zookeeper-logs-3:/var/lib/zookeeper/log
#    networks:
#      - mynet
  broker-1:
    image: confluentinc/cp-kafka:6.2.1
    hostname: broker-1
    container_name: broker-1
    depends_on:
      - zookeeper-1
      # - zookeeper-2
      # - zookeeper-3
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENERS: OUTSIDE://0.0.0.0:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: OUTSIDE:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: OUTSIDE://192.168.100.14:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: OUTSIDE
      KAFKA_ZOOKEEPER_CONNECT: "192.168.100.14:2181" #,192.168.100.14:12181,192.168.100.14:22181"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_LOG_MESSAGE_TIMESTAMP_TYPE: 'LogAppendTime'
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONNECTIONS_MAX_IDLE_MS: 31536000000 # 1 year
    volumes:
      - ./kafka-data/kafka-1:/var/lib/kafka/data
    networks:
      - mynet
  # PORT 8081 reserved for Schema Registry
  kafka-rest-1:
    image: confluentinc/cp-kafka-rest:6.2.1
    hostname: kafka-rest-1
    container_name: kafka-rest-1
    depends_on:
      - broker-1
    ports:
      - "8082:8082"
    environment:
      KAFKA_REST_HOST_NAME: 192.168.100.14
      KAFKA_REST_LISTENERS: http://0.0.0.0:8082
      KAFKA_REST_BOOTSTRAP_SERVERS: 192.168.100.14:9092
    networks:
      - mynet
  # PORT 8083 reserved for Kafka-Connect REST API
  kafka-ui-1:
    image: provectuslabs/kafka-ui:0.2.1
    hostname: kafka-ui-1
    container_name: kafka-ui-1
    depends_on:
      - broker-1
    ports:
      - "8084:8080"
    environment:
      KAFKA_CLUSTERS_0_NAME: lab
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: 192.168.100.14:9092
      KAFKA_CLUSTERS_0_ZOOKEEPER: "192.168.100.14:2181" #;192.168.100.14:12181;192.168.100.14:22181"
    networks:
      - mynet
  kafdrop-1:
    image: obsidiandynamics/kafdrop:3.27.0
    hostname: kafrop-1
    container_name: kafdrop-1
    depends_on:
      - broker-1
    ports:
      - "8085:9000"
    environment:
      KAFKA_BROKERCONNECT: 192.168.100.14:9092
      JVM_OPTS: "-Xms32M -Xmx64M"
      SERVER_SERVLET_CONTEXTPATH: "/"
    networks:
      - mynet
networks:
  mynet:
    driver: bridge
What's wrong with the 3-ZooKeeper config, and why is the 1-ZooKeeper config OK for a single Kafka broker?
UPD: Of course I know that 3 ZooKeepers and 3 brokers (one for now) make no sense on the same host :-)
I need to simulate a several-host environment on the single host that I have, using Docker containers.
The plan is then to switch off some of the Docker containers to simulate failures of different "hosts" (ZooKeeper, broker).
That's why I'm using the "public" IP address 192.168.100.14 ("public" from the container's point of view) and different ports in the configs for this simulation.
Bridged Docker networking is used, i.e. it's no problem to reach network hosts, and even Internet hosts, by IP from inside a container - I tested that.
You seem to misunderstand Docker Compose networking. You should always use service names, not IP addresses.
If you use one ZooKeeper server, ZOOKEEPER_SERVERS doesn't do anything; it is used to join a cluster.
So, you're looking for this:
services:
  zookeeper-1:
    image: confluentinc/cp-zookeeper:6.2.1
    hostname: zookeeper-1
    container_name: zookeeper-1
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_PEER_PORT: 2888
      ZOOKEEPER_LEADER_PORT: 3888
      ...
      ZOOKEEPER_SERVERS: "localhost:2888:3888;zookeeper-2:12888:13888;zookeeper-3:22888:23888"
      ...
  zookeeper-2:
    image: confluentinc/cp-zookeeper:6.2.1
    hostname: zookeeper-2
    container_name: zookeeper-2
    ports:
      - "12181:12181"
      - "12888:12888"
      - "13888:13888"
    environment:
      ZOOKEEPER_SERVER_ID: 2
      ZOOKEEPER_CLIENT_PORT: 12181
      ZOOKEEPER_PEER_PORT: 12888
      ZOOKEEPER_LEADER_PORT: 13888
      ...
      ZOOKEEPER_SERVERS: "zookeeper-1:2888:3888;localhost:12888:13888;zookeeper-3:22888:23888"
      ...
  zookeeper-3:
    image: confluentinc/cp-zookeeper:6.2.1
    hostname: zookeeper-3
    container_name: zookeeper-3
    ports:
      - "22181:22181"
      - "22888:22888"
      - "23888:23888"
    environment:
      ZOOKEEPER_SERVER_ID: 3
      ZOOKEEPER_CLIENT_PORT: 22181
      ZOOKEEPER_PEER_PORT: 22888
      ZOOKEEPER_LEADER_PORT: 23888
      ...
      ZOOKEEPER_SERVERS: "zookeeper-1:2888:3888;zookeeper-2:12888:13888;localhost:22888:23888"
      ...
  broker-1:
    image: confluentinc/cp-kafka:6.2.1
    hostname: broker-1
    container_name: broker-1
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
    # ports removed because the listener is internal to the docker network only
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENERS: INSIDE://0.0.0.0:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: INSIDE://broker-1:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper-1:2181,zookeeper-2:12181,zookeeper-3:22181"
      ...
... And so on ...
Also use broker-1:9092 as the bootstrap servers for the other containers.
Keep in mind that you're using a single host, which is a single point of failure, and therefore more than one ZooKeeper server is pointless.
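As a side note, failures like this surface earlier if the broker only starts once ZooKeeper answers. A sketch of a Compose healthcheck (the command and timings are assumptions - nc may not be present in the cp-zookeeper image - and depends_on with condition requires a Compose version that supports it; it was dropped in the v3 file format and restored in the Compose Specification):

```yaml
  zookeeper-1:
    # ...existing zookeeper-1 config...
    healthcheck:
      # "srvr" is a ZooKeeper four-letter-word command; newer ZooKeeper
      # versions require it to be whitelisted via 4lw.commands.whitelist
      test: ["CMD-SHELL", "echo srvr | nc localhost 2181 | grep -q Mode"]
      interval: 10s
      timeout: 5s
      retries: 5
  broker-1:
    # ...existing broker-1 config...
    depends_on:
      zookeeper-1:
        condition: service_healthy
```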
Problem totally resolved for me! :-)
It seems the Docker bridged container network adds problems when interconnecting everything in the cluster.
Adding ZOOKEEPER_QUORUM_LISTEN_ON_ALL_IPS: 'true' to the ZooKeeper environment and extra_hosts: to all containers solved all my problems! :-)
So I added the following extra_hosts lines to the config of each service:
extra_hosts:
  - "kafka-1:192.168.1.11"
  - "kafka-2:192.168.1.12"
  - "kafka-3:192.168.1.13"
Full example:
version: '3.7'
x-zoo: &zoo "kafka-1:2888:3888;kafka-2:2888:3888;kafka-3:2888:3888"
x-kafkaZookeepers: &kafkaZookeepers "kafka-1:2181,kafka-2:2181,kafka-3:2181"
x-kafkaBrokers: &kafkaBrokers "kafka-1:9092"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.2.1
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    extra_hosts:
      - "kafka-1:192.168.1.11"
      - "kafka-2:192.168.1.12"
      - "kafka-3:192.168.1.13"
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_QUORUM_LISTEN_ON_ALL_IPS: 'true'
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_PEER_PORT: 2888
      ZOOKEEPER_LEADER_PORT: 3888
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: *zoo
    volumes:
      - ./kafka-data/zookeeper:/var/lib/zookeeper/data
      - ./kafka-data/zookeeper-logs:/var/lib/zookeeper/log
    networks:
      - mynet
  broker:
    image: confluentinc/cp-kafka:6.2.1
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    extra_hosts:
      - "kafka-1:192.168.1.11"
      - "kafka-2:192.168.1.12"
      - "kafka-3:192.168.1.13"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENERS: OUTSIDE://0.0.0.0:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: OUTSIDE:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: OUTSIDE://192.168.1.11:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: OUTSIDE
      KAFKA_ZOOKEEPER_CONNECT: *kafkaZookeepers
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_LOG_MESSAGE_TIMESTAMP_TYPE: 'LogAppendTime'
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONNECTIONS_MAX_IDLE_MS: 31536000000 # 1 year
    volumes:
      - ./kafka-data/kafka:/var/lib/kafka/data
    networks:
      - mynet
  kafka-ui:
    image: provectuslabs/kafka-ui:0.2.1
    hostname: kafka-ui
    container_name: kafka-ui
    depends_on:
      - broker
    ports:
      - "8084:8080"
    extra_hosts:
      - "kafka-1:192.168.1.11"
      - "kafka-2:192.168.1.12"
      - "kafka-3:192.168.1.13"
    environment:
      KAFKA_CLUSTERS_0_NAME: local_kafka
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: *kafkaBrokers
      KAFKA_CLUSTERS_0_ZOOKEEPER: *kafkaZookeepers
    networks:
      - mynet
networks:
  mynet:
    driver: bridge
The docker-compose.yml files on the other VMs look similar (I have set up only the 3 ZooKeepers for now). On the 2nd VM:
version: '3.7'
x-zoo: &zoo "kafka-1:2888:3888;kafka-2:2888:3888;kafka-3:2888:3888"
x-kafkaZookeepers: &kafkaZookeepers "kafka-1:2181,kafka-2:2181,kafka-3:2181"
x-kafkaBrokers: &kafkaBrokers "kafka-1:9092"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.2.1
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    extra_hosts:
      - "kafka-1:192.168.1.11"
      - "kafka-2:192.168.1.12"
      - "kafka-3:192.168.1.13"
    environment:
      ZOOKEEPER_SERVER_ID: 2
      ZOOKEEPER_QUORUM_LISTEN_ON_ALL_IPS: 'true'
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_PEER_PORT: 2888
      ZOOKEEPER_LEADER_PORT: 3888
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: *zoo
    volumes:
      - ./kafka-data/zookeeper:/var/lib/zookeeper/data
      - ./kafka-data/zookeeper-logs:/var/lib/zookeeper/log
    networks:
      - mynet
networks:
  mynet:
    driver: bridge
and on the 3rd VM:
version: '3.7'
x-zoo: &zoo "kafka-1:2888:3888;kafka-2:2888:3888;kafka-3:2888:3888"
x-kafkaZookeepers: &kafkaZookeepers "kafka-1:2181,kafka-2:2181,kafka-3:2181"
x-kafkaBrokers: &kafkaBrokers "kafka-1:9092"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.2.1
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    extra_hosts:
      - "kafka-1:192.168.1.11"
      - "kafka-2:192.168.1.12"
      - "kafka-3:192.168.1.13"
    environment:
      ZOOKEEPER_SERVER_ID: 3
      ZOOKEEPER_QUORUM_LISTEN_ON_ALL_IPS: 'true'
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_PEER_PORT: 2888
      ZOOKEEPER_LEADER_PORT: 3888
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: *zoo
    volumes:
      - ./kafka-data/zookeeper:/var/lib/zookeeper/data
      - ./kafka-data/zookeeper-logs:/var/lib/zookeeper/log
    networks:
      - mynet
networks:
  mynet:
    driver: bridge
And according to the logs, all Docker containers can now connect to each other.
I also added the hosts kafka-1, kafka-2, kafka-3 to the /etc/hosts file on the VMs:
test@kafka-1:~/Kafka-Docker$ cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 kafka-1
192.168.1.11 kafka-1
192.168.1.12 kafka-2
192.168.1.13 kafka-3
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
Hope it saves somebody's time and nerves - and perhaps this was the answer I was waiting for! :-)
Also note that I'm using the latest (at the moment - see date) Docker images (cp-zookeeper, cp-kafka, etc.), 6.2.1, from Docker Hub, from the official Confluent publisher.
UPD: I forwarded port 4888->8080 on each ZooKeeper and got the simple REST admin page ( https://zookeeper.apache.org/doc/r3.5.9/zookeeperAdmin.html#sc_zkCommands ) where I can see who is the "leader" and who is a "follower".

How can I add multiple connectors to my Confluent Connect with a docker-compose?

I am trying to add a MongoDB connector and an MQTT connector to my Confluent Connect through the following docker-compose:
---
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.1.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-server:6.1.0
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "9101:9101"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:29092
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
  schema-registry:
    image: confluentinc/cp-schema-registry:6.1.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'broker:29092'
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081
  connect:
    #image: cnfldemos/cp-server-connect-datagen:0.4.0-6.1.0
    # decomment following line to build custom connector
    #image: confluentinc/kafka-connect-datagen:latest
    build:
      context: .
      dockerfile: Dockerfile-confluenthub
    hostname: connect
    container_name: connect
    depends_on:
      - broker
      - schema-registry
    ports:
      - "8083:8083"
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 'broker:29092'
      CONNECT_REST_ADVERTISED_HOST_NAME: connect
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
      CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      # CLASSPATH required due to CC-2422
      CLASSPATH: /usr/share/java/monitoring-interceptors/monitoring-interceptors-6.1.0.jar
      CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
      CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR
  control-center:
    image: confluentinc/cp-enterprise-control-center:6.1.0
    hostname: control-center
    container_name: control-center
    depends_on:
      - broker
      - schema-registry
      - connect
      - ksqldb-server
    ports:
      - "9021:9021"
    environment:
      CONTROL_CENTER_BOOTSTRAP_SERVERS: 'broker:29092'
      CONTROL_CENTER_CONNECT_CLUSTER: 'connect:8083'
      CONTROL_CENTER_KSQL_KSQLDB1_URL: "http://ksqldb-server:8088"
      CONTROL_CENTER_KSQL_KSQLDB1_ADVERTISED_URL: "http://localhost:8088"
      CONTROL_CENTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      CONTROL_CENTER_REPLICATION_FACTOR: 1
      CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1
      CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1
      CONFLUENT_METRICS_TOPIC_REPLICATION: 1
      PORT: 9021
  ksqldb-server:
    image: confluentinc/cp-ksqldb-server:6.1.0
    hostname: ksqldb-server
    container_name: ksqldb-server
    depends_on:
      - broker
      - connect
    ports:
      - "8088:8088"
    environment:
      KSQL_CONFIG_DIR: "/etc/ksql"
      KSQL_BOOTSTRAP_SERVERS: "broker:29092"
      KSQL_HOST_NAME: ksqldb-server
      KSQL_LISTENERS: "http://0.0.0.0:8088"
      KSQL_CACHE_MAX_BYTES_BUFFERING: 0
      KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      KSQL_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      KSQL_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      KSQL_KSQL_CONNECT_URL: "http://connect:8083"
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_REPLICATION_FACTOR: 1
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: 'true'
      KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: 'true'
  ksqldb-cli:
    image: confluentinc/cp-ksqldb-cli:6.1.0
    container_name: ksqldb-cli
    depends_on:
      - broker
      - connect
      - ksqldb-server
    entrypoint: /bin/sh
    tty: true
  ksql-datagen:
    image: confluentinc/ksqldb-examples:6.1.0
    hostname: ksql-datagen
    container_name: ksql-datagen
    depends_on:
      - ksqldb-server
      - broker
      - schema-registry
      - connect
    command: "bash -c 'echo Waiting for Kafka to be ready... && \
              cub kafka-ready -b broker:29092 1 40 && \
              echo Waiting for Confluent Schema Registry to be ready... && \
              cub sr-ready schema-registry 8081 40 && \
              echo Waiting a few seconds for topic creation to finish... && \
              sleep 11 && \
              tail -f /dev/null'"
    environment:
      KSQL_CONFIG_DIR: "/etc/ksql"
      STREAMS_BOOTSTRAP_SERVERS: broker:29092
      STREAMS_SCHEMA_REGISTRY_HOST: schema-registry
      STREAMS_SCHEMA_REGISTRY_PORT: 8081
  rest-proxy:
    image: confluentinc/cp-kafka-rest:6.1.0
    depends_on:
      - broker
      - schema-registry
    ports:
      - 8082:8082
    hostname: rest-proxy
    container_name: rest-proxy
    environment:
      KAFKA_REST_HOST_NAME: rest-proxy
      KAFKA_REST_BOOTSTRAP_SERVERS: 'broker:29092'
      KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
      KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
where the Dockerfile-confluenthub referenced by the connect service is:
FROM cnfldemos/cp-server-connect-datagen:0.4.0-6.1.0
RUN confluent-hub install --no-prompt hpgrahsl/kafka-connect-mongodb:1.1.0 \
&& confluent-hub install --no-prompt confluentinc/kafka-connect-mqtt:1.4.0 \
&& confluent-hub install debezium/debezium-connector-sqlserver:1.3.1
But this way, the connect container stops working after a few minutes.
Can you help me?
Personally, I would use individual RUN commands since each connector installation would become its own Docker layer
Error 137 is a memory issue. It's unclear what your host OS is, but on Windows and Mac you must increase the Docker memory setting to at least 6 GB to run all of those containers at the same time. It's also unclear whether you plan on running those databases within Docker as well; if so, you'll definitely need more memory.
If you only want Connect, then remove Control Center, REST Proxy, and the three ksql containers.
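As a side note on reading that error: container exit codes above 128 are 128 plus a signal number, so 137 means the process was killed with signal 9 (SIGKILL, which is what the kernel's OOM killer sends). You can verify the arithmetic and check a suspect container like this (the container name "connect" is just this thread's example):

```shell
# 137 - 128 = 9; ask the shell which signal that is
kill -l $((137 - 128))

# To confirm an out-of-memory kill on a real container:
#   docker inspect --format '{{.State.OOMKilled}} {{.State.ExitCode}}' connect
```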

How do I configure kafka-connect w/ "securityMechanism=9, encryptionAlgorithm=2" for a db2 database connection in my docker-compose file?

QUESTION:
How do I configure "securityMechanism=9, encryptionAlgorithm=2" for a db2 database connection in my docker-compose file?
NOTE: When running my local Kafka installation (kafka_2.13-2.6.0) to connect to a db2 database on the network, I only had to modify the existing "EXTRA_ARGS=" line in the bin/connect-standalone.sh file, like this:
(...)
EXTRA_ARGS=${EXTRA_ARGS-'-name connectStandalone -Ddb2.jcc.securityMechanism=9 -Ddb2.jcc.encryptionAlgorithm=2'}
(...)
it worked fine.
However, when I tried using the same idea for a containerized kafka/broker service (docker-compose.yml), by mounting a volume with the modified "connect-standalone" file (to replace the "/usr/bin/connect-standalone" file in the container), it did not work.
I did verify that the container's file was changed.
...I receive this exception when I attempt to use a kafka-jdbc-source-connector to connect to the database:
Caused by: com.ibm.db2.jcc.am.SqlInvalidAuthorizationSpecException: [jcc][t4][201][11237][4.25.13] Connection authorization failure occurred.
Reason: Security mechanism not supported. ERRORCODE=-4214, SQLSTATE=28000
So, again, how do I configure the securityMechanism/encryptionAlgorithm setting in a docker-compose.yml?
Thanks for any help
-sairn
Here is my docker-compose.yml. You can see I've tried mounting the volume with the modified "connect-standalone" file in both the broker (kafka) service and the kafka-connect service; neither achieved the desired effect.
version: '3.8'
services:
zookeeper:
image: confluentinc/cp-zookeeper:6.0.0
container_name: zookeeper
ports:
- "2181:2181"
environment:
ZOOKEEPER_CLIENT_PORT: 2181
ZOOKEEPER_TICK_TIME: 2000
kafka:
image: confluentinc/cp-enterprise-kafka:6.0.0
container_name: kafka
depends_on:
- zookeeper
ports:
- "9092:9092"
environment:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://kafka:9092
KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 100
CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: kafka:29092
CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
CONFLUENT_METRICS_ENABLE: 'true'
CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
JVM_OPTS: "-Ddb2.jcc.securityMechanism=9 -Ddb2.jcc.encryptionAlgorithm=2"
volumes:
- ./connect-standalone:/usr/bin/connect-standalone
schema-registry:
image: confluentinc/cp-schema-registry:6.0.0
container_name: schema-registry
hostname: schema-registry
depends_on:
- zookeeper
- kafka
ports:
- "8081:8081"
environment:
SCHEMA_REGISTRY_HOST_NAME: schema-registry
SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: 'zookeeper:2181'
SCHEMA_REGISTRY_LISTENERS: http://schema-registry:8081
kafka-connect:
image: confluentinc/cp-kafka-connect:6.0.0
container_name: kafka-connect
hostname: kafka-connect
depends_on:
- kafka
- schema-registry
ports:
- "8083:8083"
environment:
CONNECT_BOOTSTRAP_SERVERS: "kafka:29092"
CONNECT_REST_ADVERTISED_HOST_NAME: "kafka-connect"
CONNECT_REST_PORT: 8083
CONNECT_GROUP_ID: kafka-connect
CONNECT_CONFIG_STORAGE_TOPIC: kafka-connect-configs
CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
CONNECT_OFFSET_STORAGE_TOPIC: kafka-connect-offsets
CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
CONNECT_STATUS_STORAGE_TOPIC: kafka-connect-status
CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
CONNECT_ZOOKEEPER_CONNECT: 'zookeeper:2181'
CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR
JVM_OPTS: "-Ddb2.jcc.securityMechanism=9 -Ddb2.jcc.encryptionAlgorithm=2"
volumes:
- ./kafka-connect-jdbc-10.0.1.jar:/usr/share/java/kafka-connect-jdbc/kafka-connect-jdbc-10.0.1.jar
- ./db2jcc-db2jcc4.jar:/usr/share/java/kafka-connect-jdbc/db2jcc-db2jcc4.jar
- ./connect-standalone:/usr/bin/connect-standalone
Fwiw, the connector looks similar to this...
curl -X POST http://localhost:8083/connectors -H "Content-Type: application/json" -d '{
"name": "CONNECTOR01",
"config": {
"connector.class":"io.confluent.connect.jdbc.JdbcSourceConnector",
"connection.url":"jdbc:db2://THEDBURL:50000/XXXXX",
"connection.user":"myuserid",
"connection.password":"mypassword",
"poll.interval.ms":"15000",
"table.whitelist":"YYYYY.TABLEA",
"topic.prefix":"tbl-",
"mode":"timestamp",
"timestamp.initial":"-1",
"timestamp.column.name":"TIME_UPD"
}
}'
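Another option sometimes used with the IBM JCC driver is to pass these as connection properties in the JDBC URL itself, rather than as JVM system properties: properties follow the database name after a colon, and each ends with a semicolon. A sketch against the example connector (same placeholder host and credentials; verify the URL syntax against the JCC driver documentation for your driver version):

```json
{
  "name": "CONNECTOR01",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:db2://THEDBURL:50000/XXXXX:securityMechanism=9;encryptionAlgorithm=2;",
    "connection.user": "myuserid",
    "connection.password": "mypassword"
  }
}
```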
Try using KAFKA_OPTS instead of JVM_OPTS; the startup scripts in the Confluent images pass KAFKA_OPTS through to the JVM, while JVM_OPTS is not read at all.
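Concretely, that would look something like this in the kafka-connect service (a sketch, assuming the cp-kafka-connect image's launcher exports KAFKA_OPTS to the JVM so the -D system properties reach the JDBC driver):

```yaml
  kafka-connect:
    environment:
      # system properties for the db2 JCC driver, passed via KAFKA_OPTS
      KAFKA_OPTS: "-Ddb2.jcc.securityMechanism=9 -Ddb2.jcc.encryptionAlgorithm=2"
```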

Confluent Control Center not intercepting stream

I'm using Confluent Control Center (CCC) with a Kafka stream, which is populated by the Debezium Postgres connector.
I'm using the following docker-compose.yml:
version: '2'
services:
zookeeper-1:
image: confluentinc/cp-zookeeper:latest
hostname: zookeeper-1
container_name: zookeeper-1
volumes:
- /path/to/something/zk1/zk-data:/var/lib/zookeeper/data
- /path/to/something/zk1/zk-txn-logs:/var/lib/zookeeper/log
ports:
- 22181:22181
environment:
ZOOKEEPER_SERVER_ID: 1
ZOOKEEPER_CLIENT_PORT: 22181
ZOOKEEPER_TICK_TIME: 2000
ZOOKEEPER_INIT_LIMIT: 5
ZOOKEEPER_SYNC_LIMIT: 2
ZOOKEEPER_SERVERS: zookeeper-1:22888:23888;zookeeper-2:32888:33888;zookeeper-3:42888:43888
zookeeper-2:
image: confluentinc/cp-zookeeper:latest
hostname: zookeeper-2
container_name: zookeeper-2
volumes:
- /path/to/something/zk2/zk-data:/var/lib/zookeeper/data
- /path/to/something/zk2/zk-txn-logs:/var/lib/zookeeper/log
ports:
- 32181:32181
environment:
ZOOKEEPER_SERVER_ID: 2
ZOOKEEPER_CLIENT_PORT: 32181
ZOOKEEPER_TICK_TIME: 2000
ZOOKEEPER_INIT_LIMIT: 5
ZOOKEEPER_SYNC_LIMIT: 2
ZOOKEEPER_SERVERS: zookeeper-1:22888:23888;zookeeper-2:32888:33888;zookeeper-3:42888:43888
zookeeper-3:
image: confluentinc/cp-zookeeper:latest
hostname: zookeeper-3
container_name: zookeeper-3
volumes:
- /path/to/something/zk3/zk-data:/var/lib/zookeeper/data
- /path/to/something/zk3/zk-txn-logs:/var/lib/zookeeper/log
ports:
- 42181:42181
environment:
ZOOKEEPER_SERVER_ID: 3
ZOOKEEPER_CLIENT_PORT: 42181
ZOOKEEPER_TICK_TIME: 2000
ZOOKEEPER_INIT_LIMIT: 5
ZOOKEEPER_SYNC_LIMIT: 2
ZOOKEEPER_SERVERS: zookeeper-1:22888:23888;zookeeper-2:32888:33888;zookeeper-3:42888:43888
kafka-1:
image: confluentinc/cp-enterprise-kafka:latest
hostname: kafka-1
container_name: kafka-1
volumes:
- /path/to/something/kafka1/kafka-data:/var/lib/kafka/data
ports:
- 19092:19092
environment:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:22181,zookeeper-2:32181,zookeeper-3:42181
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.0.71:19092
KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 100
CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: kafka-1:19092,kafka-2:19093,kafka-3:19094
CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper-1:22181,zookeeper-2:32181,zookeeper-3:42181
CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
CONFLUENT_METRICS_ENABLE: 'true'
CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
KAFKA_REPLICA_FETCH_MAX_BYTES: 3145728
KAFKA_MESSAGE_MAX_BYTES: 3145728
KAFKA_PRODUCER_MAX_REQUEST_SIZE: 3145728
KAFKA_CONSUMER_MAX_PARTITION_FETCH_BYTES: 3145728
depends_on:
- zookeeper-1
- zookeeper-2
- zookeeper-3
kafka-2:
image: confluentinc/cp-enterprise-kafka:latest
hostname: kafka-2
container_name: kafka-2
volumes:
- /path/to/something/kafka2/kafka-data:/var/lib/kafka/data
ports:
- 19093:19093
environment:
KAFKA_BROKER_ID: 2
KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:22181,zookeeper-2:32181,zookeeper-3:42181
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.0.71:19093
KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 100
CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: kafka-1:19092,kafka-2:19093,kafka-3:19094
CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper-1:22181,zookeeper-2:32181,zookeeper-3:42181
CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
CONFLUENT_METRICS_ENABLE: 'true'
CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
KAFKA_REPLICA_FETCH_MAX_BYTES: 3145728
KAFKA_MESSAGE_MAX_BYTES: 3145728
KAFKA_PRODUCER_MAX_REQUEST_SIZE: 3145728
KAFKA_CONSUMER_MAX_PARTITION_FETCH_BYTES: 3145728
depends_on:
- zookeeper-1
- zookeeper-2
- zookeeper-3
kafka-3:
image: confluentinc/cp-enterprise-kafka:latest
hostname: kafka-3
container_name: kafka-3
volumes:
- /path/to/something/kafka3/kafka-data:/var/lib/kafka/data
ports:
- 19094:19094
environment:
KAFKA_BROKER_ID: 3
KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:22181,zookeeper-2:32181,zookeeper-3:42181
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.0.71:19094
KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 100
CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: kafka-1:19092,kafka-2:19093,kafka-3:19094
CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper-1:22181,zookeeper-2:32181,zookeeper-3:42181
CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
CONFLUENT_METRICS_ENABLE: 'true'
CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
KAFKA_REPLICA_FETCH_MAX_BYTES: 3145728
KAFKA_MESSAGE_MAX_BYTES: 3145728
KAFKA_PRODUCER_MAX_REQUEST_SIZE: 3145728
KAFKA_CONSUMER_MAX_PARTITION_FETCH_BYTES: 3145728
depends_on:
- zookeeper-1
- zookeeper-2
- zookeeper-3
schema-registry:
image: confluentinc/cp-schema-registry:latest
hostname: schema-registry
container_name: schema-registry
ports:
- "8081:8081"
environment:
SCHEMA_REGISTRY_HOST_NAME: schema-registry
SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: zookeeper-1:22181,zookeeper-2:32181,zookeeper-3:42181
connect:
image: confluentinc/cp-kafka-connect:latest
hostname: connect
container_name: connect
depends_on:
- schema-registry
- zookeeper-1
- zookeeper-2
- zookeeper-3
- kafka-1
- kafka-2
- kafka-3
ports:
- "8083:8083"
volumes:
- /path/to/something/postgres-source-connector:/usr/share/java/postgres-source-connector
- /path/to/something/mongodb-sink-connector:/usr/share/java/mongodb-sink-connector
environment:
CONNECT_BOOTSTRAP_SERVERS: kafka-1:19092,kafka-2:19093,kafka-3:19094
CONNECT_REST_ADVERTISED_HOST_NAME: connect
CONNECT_REST_PORT: 8083
CONNECT_GROUP_ID: compose-connect-group
CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
CONNECT_KEY_CONVERTER: io.confluent.connect.avro.AvroConverter
CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
CONNECT_REST_ADVERTISED_HOST_NAME: "kafka-connect"
CONNECT_LOG4J_ROOT_LOGLEVEL: "INFO"
CONNECT_LOG4J_LOGGERS: "org.apache.kafka.connect.runtime.rest=WARN,org.reflections=ERROR"
CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: "1"
CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: "1"
CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: "1"
CONNECT_PLUGIN_PATH: '/usr/share/java'
CLASSPATH: /usr/share/java/monitoring-interceptors/monitoring-interceptors-5.0.0.jar
CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
CONNECT_PRODUCER_MAX_REQUEST_SIZE: 3145728
CONNECT_CONSUMER_MAX_PARTITION_FETCH_BYTES: 3145728
control-center:
image: confluentinc/cp-enterprise-control-center:latest
hostname: control-center
container_name: control-center
depends_on:
- schema-registry
- connect
- ksql-server
- zookeeper-1
- zookeeper-2
- zookeeper-3
- kafka-1
- kafka-2
- kafka-3
ports:
- "9021:9021"
environment:
CONTROL_CENTER_BOOTSTRAP_SERVERS: kafka-1:19092,kafka-2:19093,kafka-3:19094
CONTROL_CENTER_ZOOKEEPER_CONNECT: zookeeper-1:22181,zookeeper-2:32181,zookeeper-3:42181
CONTROL_CENTER_CONNECT_CLUSTER: 'http://connect:8083'
CONTTROL_CENTER_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
CONTROL_CENTER_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
CONTROL_CENTER_KSQL_URL: "http://ksql-server:8088"
CONTROL_CENTER_CONNECT_CLUSTER: "http://connect:8083"
CONTROL_CENTER_KSQL_ADVERTISED_URL: "http://localhost:8088"
CONTROL_CENTER_SCHEMA_REGISTRY_URL: "https://schema-registry:8081"
CONTROL_CENTER_REPLICATION_FACTOR: 1
CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1
CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1
CONFLUENT_METRICS_TOPIC_REPLICATION: 1
CONTROL_CENTER_CUB_KAFKA_TIMEOUT: 300
PORT: 9021
ksql-server:
image: confluentinc/cp-ksql-server:latest
hostname: ksql-server
container_name: ksql-server
depends_on:
- connect
ports:
- "8088:8088"
environment:
KSQL_CUB_KAFKA_TIMEOUT: 300
KSQL_BOOTSTRAP_SERVERS: kafka-1:19092,kafka-2:19093,kafka-3:19094
KSQL_LISTENERS: http://0.0.0.0:8088
KSQL_KSQL_SCHEMA_REGISTRY_URL: http://schema-registry:8081
KSQL_KSQL_SERVICE_ID: confluent_rmoff_01
KSQL_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
KSQL_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
KSQL_KSQL_COMMIT_INTERVAL_MS: 2000
KSQL_KSQL_CACHE_MAX_BYTES_BUFFERING: 10000000
KSQL_KSQL_AUTO_OFFSET_RESET: earliest
ksql-cli:
image: confluentinc/cp-ksql-cli:latest
hostname: ksql-cli
container_name: ksql-cli
depends_on:
- connect
- ksql-server
entrypoint: /bin/sh
tty: true
rest-proxy:
image: confluentinc/cp-kafka-rest:latest
hostname: rest-proxy
container_name: rest-proxy
depends_on:
- schema-registry
ports:
- 8082:8082
environment:
KAFKA_REST_HOST_NAME: rest-proxy
KAFKA_REST_BOOTSTRAP_SERVERS: kafka-1:19092,kafka-2:19093,kafka-3:19094
KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
KAFKA_REST_ACCESS_CONTROL_ALLOW_METHODS: 'GET,POST,PUT,DELETE,OPTIONS'
KAFKA_REST_ACCESS_CONTROL_ALLOW_ORIGIN: '*'
postgres:
image: debezium/postgres
hostname: postgres
container_name: postgres
volumes:
- /path/to/something/postgres:/var/lib/postgresql/data
environment:
POSTGRES_USER: admin
POSTGRES_PASSWORD: admin
POSTGRES_DB: some-db
ports:
- 5432:5432
I've mapped the Postgres Connector into Kafka Connect (via volumes in Compose), and can see it in CCC when creating a new Source Connector.
When I create a Source connector, I can see the log message indicating that the topic for this connector was created. I also see this topic in CCC's Connect area. I also can see that Connect is able to authenticate to Postgres via this Connector.
When I make a change to the table I specify in the Connector, I see Kafka (I have a cluster of 3) figuring out who's going to store this message. Meaning, the Postgres tx log created a message of the appropriate topic in response to my change, so the DB, Connector and Kafka are working correctly.
However, no matter what I do, I cannot get this event to display in the Data Streams or System Health (neither the > Topics nor > Brokers areas) (edit: this works now. Data Streams still does not).
I'm at a loss for what's going wrong. The only indication I get is the initial message saying
Double check to see if monitoring interceptors have been properly configured for any clients producing to or consuming from the cluster controlcenter.cluster
I am under the impression that this means my Control Center container is configured with the *_INTERCEPTOR_CLASSES, which I pasted above. I followed the link in this message to the documentation site, which says to check the response of the web service that serves Kafka data. As the documentation suggests, I get a response of just {}, indicating that Kafka is saying it has no data. But it definitely does.
Is it trying to say that I also need these interceptors configured into the Connector somehow? I don't know what it means to have monitoring interceptors for any consumers/producers -- I don't have any raw Java consumers/producers (yet)... only Source connectors for now.
My connector configuration is as follows (created via the CCC UI) if it matters:
{
"database.server.name": "my-namespace",
"database.dbname": "my-database",
"database.hostname": "my-hostname",
"database.port": "5432",
"database.user": "admin",
"schema.whitelist": "public",
"table.whitelist": "my-database.my-table",
"connector.class": "io.debezium.connector.postgresql.PostgresConnector",
"name": "my-connector",
"database.password": "its correct"
}
When starting all of the services, I see the following in the corresponding logs which I suspect may be of interest (in no particular order below):
control-center | 2018-09-17T20:45:02.748463792Z interceptor.classes = []
kafka-2 | 2018-09-17T20:44:56.293701931Z interceptor.classes = []
schema-registry | 2018-09-17T20:45:34.658065846Z interceptor.classes = []
connect | 2018-09-17T20:48:52.628218936Z [2018-09-17 20:48:52,628] WARN The configuration 'producer.interceptor.classes' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
connect | 2018-09-17T20:48:52.628472218Z [2018-09-17 20:48:52,628] WARN The configuration 'consumer.interceptor.classes' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
Any help is appreciated. Thanks!
You are referencing the 5.1.0 JAR for the interceptors, which does not exist in the latest image. If you docker-compose exec connect bash and go to the path defined, you'll see which version is actually there (currently 5.0.0 in latest). So change your compose to read
CLASSPATH: /usr/share/java/monitoring-interceptors/monitoring-interceptors-5.0.0.jar
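To avoid pinning an exact version that may change between image releases, you could instead use a Java classpath wildcard, which matches every JAR directly inside the directory (a sketch; verify the path inside your image):

```yaml
      CLASSPATH: "/usr/share/java/monitoring-interceptors/*"
```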
Have a look at https://github.com/rmoff/ksql/blob/clickstream-c3/ksql-clickstream-demo/docker-compose.yml for an example of a working Docker Compose with Confluent Control Center and interceptors working with Kafka Connect (and also KSQL, if you're interested).
For debugging further, check:
the Kafka Connect log file; if the interceptors are working, you should see
[2018-03-02 11:39:38,594] INFO ConsumerConfig values:
[...]
interceptor.classes = [io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor]
[2018-03-02 11:39:38,806] INFO ProducerConfig values:
[...]
interceptor.classes = [io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor]
[2018-03-02 11:39:39,455] INFO creating interceptor (io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor:74)
[2018-03-02 11:39:39,456] INFO creating interceptor (io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor:70)
[2018-03-02 11:39:39,486] INFO MonitoringInterceptorConfig values:
confluent.monitoring.interceptor.publishMs = 15000
confluent.monitoring.interceptor.topic = _confluent-monitoring
(io.confluent.monitoring.clients.interceptor.MonitoringInterceptorConfig:223)
[2018-03-02 11:39:39,486] INFO MonitoringInterceptorConfig values:
confluent.monitoring.interceptor.publishMs = 15000
confluent.monitoring.interceptor.topic = _confluent-monitoring
(io.confluent.monitoring.clients.interceptor.MonitoringInterceptorConfig:223)
See the Confluent Control Center troubleshooting doc for details of the control-center-console-consumer you can use for checking the actual interceptor data being received (or not, if things aren't set up correctly).