I am trying to resolve the connection.uri using FileConfigProvider, by following this example:
https://docs.confluent.io/platform/current/connect/security.html#externalizing-secrets
I make the following REST request:
PUT http://localhost:8083/connectors/my-sink/config
{
  "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
  "topics": "topic",
  "database": "my-database",
  "connection.uri": "${file:/home/appuser/my-file.txt:mongo_uri}",
  "config.providers": "file",
  "config.providers.file.class": "org.apache.kafka.common.config.provider.FileConfigProvider"
}
I get the following error:
{
  "error_code": 400,
  "message": "Connector configuration is invalid and contains the following 1 error(s):\nInvalid value ${file:/home/appuser/my-file.txt:mongo_uri} for configuration connection.uri: The connection string is invalid. Connection strings must start with either 'mongodb://' or 'mongodb+srv://'\nYou can also find the above list of errors at the endpoint `/connector-plugins/{connectorType}/config/validate`"
}
It appears that config validation runs before the secret value is resolved, so the literal placeholder "${file:/home/appuser/my-file.txt:mongo_uri}" is rejected because it is not a valid MongoDB connection string.
Is there a way to fix this?
Source code:
/MyFolder
├── kafka-connect
│ └── Dockerfile
└── docker-compose.yml
MyFolder\docker-compose.yml:
version: "3"
services:
zookeeper:
image: confluentinc/cp-zookeeper:6.0.0
container_name: zookeeper
hostname: zookeeper
ports:
- "2181:2181"
environment:
ZOOKEEPER_CLIENT_PORT: 2181
ZOOKEEPER_TICK_TIME: 2000
kafka:
image: confluentinc/cp-kafka:6.0.0
container_name: kafka
hostname: kafka
depends_on:
- zookeeper
ports:
- "29092:29092"
- "9092:9092"
environment:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
KAFKA_offsets_TOPIC_REPLICATION_FACTOR: 1
kafka-connect:
build:
context: ./kafka-connect
dockerfile: Dockerfile
container_name: kafka_connect
depends_on:
- kafka
ports:
- "8083:8083"
mongo:
image: mongo
container_name: mongo
restart: unless-stopped
depends_on:
- kafka-connect
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: password
MyFolder\kafka-connect\Dockerfile:
FROM confluentinc/cp-kafka-connect:6.0.0
COPY ./plugins/ /usr/local/share/kafka/plugins/
ENV CONNECT_BOOTSTRAP_SERVERS=PLAINTEXT://kafka:29092
ENV CONNECT_REST_ADVERTISED_HOST_NAME=kafka_connect
ENV CONNECT_GROUP_ID=kafka-connect-group
ENV CONNECT_CONFIG_STORAGE_TOPIC=kafka-connect-group-config
ENV CONNECT_OFFSET_STORAGE_TOPIC=connect-group-offset
ENV CONNECT_STATUS_STORAGE_TOPIC=kafka-connect-group-status
ENV CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR=1
ENV CONNECT_STATUS_STORAGE_REPLICATION_FACTOR=1
ENV CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR=1
ENV CONNECT_KEY_CONVERTER=org.apache.kafka.connect.storage.StringConverter
ENV CONNECT_VALUE_CONVERTER=org.apache.kafka.connect.storage.StringConverter
ENV CONNECT_PLUGIN_PATH=/usr/local/share/kafka/plugins
EXPOSE 8083
I create the container using docker-compose up.
I configure the MongoSinkConnector using the kafka-connect REST endpoints.
The properties for setting up the provider apply to the Connect worker (the process started by the container), not to a specific connector, so they belong in the worker's environment rather than in the connector's config:
kafka-connect:
  ...
  depends_on:
    - kafka
    - mongo
  ports:
    - "8083:8083"
  environment:
    ...
    CONNECT_CONFIG_PROVIDERS: file
    CONNECT_CONFIG_PROVIDERS_FILE_CLASS: org.apache.kafka.common.config.provider.FileConfigProvider
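With the provider configured on the worker, the config.providers keys can be dropped from the connector JSON. Here is a minimal sketch of the full flow, assuming the secrets file lives inside the Connect container and using the Mongo root credentials from the compose file above:

# Put the secret inside the kafka-connect container (the path in the
# placeholder must exist in that container, not on the host)
docker-compose exec kafka-connect bash -c \
  'echo "mongo_uri=mongodb://root:password@mongo:27017" > /home/appuser/my-file.txt'

# Register the connector without any config.providers keys; the worker
# resolves ${file:...} before the connector validates connection.uri
curl -X PUT -H "Content-Type: application/json" \
  --data '{
    "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
    "topics": "topic",
    "database": "my-database",
    "connection.uri": "${file:/home/appuser/my-file.txt:mongo_uri}"
  }' \
  http://localhost:8083/connectors/my-sink/config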
Related
I would like to use network_mode: bridge for Kafka so that I can reach Kafka through localhost:9092 from another service.
I'm trying to use provectus/kafka-ui, but when I open the consumers menu I get the following error.
My docker-compose.yml file:
kafka-ui:
  container_name: kafka-ui
  image: provectuslabs/kafka-ui:latest
  ports:
    - 8080:8080
  depends_on:
    - kafka
  environment:
    KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:9092
    KAFKA_CLUSTERS_0_JMXPORT: 9997
kafka:
  image: johnnypark/kafka-zookeeper
  ports:
    - "2181:2181"
    - "9092:9092"
  network_mode: bridge
  environment:
    ADVERTISED_HOST: 127.0.0.1
    NUM_PARTITIONS: 1
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
Error log:
2022-01-13 09:16:50,014 ERROR [parallel-5] c.p.k.u.s.MetricsService: Failed to collect cluster Default info
java.lang.IllegalStateException: Error while creating AdminClient for Cluster Default
I was using the johnnypark/kafka-zookeeper image for both Kafka and ZooKeeper. I was able to solve this problem by using two separate images, as in the example below:
zookeeper1:
  image: confluentinc/cp-zookeeper:5.2.4
  environment:
    ZOOKEEPER_CLIENT_PORT: 2181
    ZOOKEEPER_TICK_TIME: 2000
kafka1:
  image: confluentinc/cp-kafka:5.3.1
  depends_on:
    - zookeeper1
  ports:
    - 9093:9093
    - 9998:9998
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: zookeeper1:2181
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka1:29092,PLAINTEXT_HOST://localhost:9093
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    JMX_PORT: 9998
    KAFKA_JMX_OPTS: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=kafka1 -Dcom.sun.management.jmxremote.rmi.port=9998
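As a quick sanity check that the UI can now reach the broker, its backend REST API can be queried from the host (the /api/clusters path is an assumption about provectus/kafka-ui's API):

# Should list the configured cluster instead of the AdminClient
# error shown in the log above
curl -s http://localhost:8080/api/clusters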
being able to reach kafka through localhost:9092 from another service
You can't use localhost to reach Kafka since that would be the Kafka UI container itself.
Changing ADVERTISED_HOST to kafka and using kafka:9092 from other containers is correct for a bridge network. However, this has the side effect of preventing any access to Kafka from outside the Docker network, such as clients running directly on the host machine.
Internal and external clients can be configured separately; see bitnami/bitnami-docker-kafka.
Here's an example using Bitnami's Kafka image: it allows host clients to connect on port 9093 while kafka-ui connects on the default port.
version: "3"
services:
zookeeper:
image: 'bitnami/zookeeper:latest'
ports:
- '2181:2181'
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
kafka:
image: 'bitnami/kafka:latest'
ports:
- '9092:9092'
- '9093:9093'
environment:
- KAFKA_BROKER_ID=1
- KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
- KAFKA_CFG_LISTENERS=CLIENT://:9092,EXTERNAL://:9093
- KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://kafka:9092,EXTERNAL://localhost:9093
- KAFKA_CFG_INTER_BROKER_LISTENER_NAME=CLIENT
- ALLOW_PLAINTEXT_LISTENER=yes
depends_on:
- zookeeper
kafka-ui:
image: provectuslabs/kafka-ui
container_name: kafka-ui
ports:
- "8081:8081"
restart: always
environment:
- KAFKA_CLUSTERS_0_NAME=local
- KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092
- SERVER_PORT=8081
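To verify the listener split, here is a sketch using kcat (the image tag and the compose network name are assumptions; by default the network is named <project>_default):

# From the host, through the EXTERNAL listener published on 9093
docker run --rm --network host edenhill/kcat:1.7.1 -b localhost:9093 -L

# From inside the Docker network, through the CLIENT listener
docker run --rm --network myproject_default edenhill/kcat:1.7.1 -b kafka:9092 -L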
I have integrated the ELK stack with a Node.js application, so Kibana, Kafka, Kafka Manager (CMAK) and ZooKeeper are installed in Docker on my local system. Everything is running in Docker, but I am getting some errors and cannot create a cluster in CMAK. Please find the attachment. Could you help me?
ElasticSearch.yaml
version : "3.7"
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.12.0
container_name: elasticsearch
restart : always
environment :
- xpack.security.enabled=false
- discovery.type=single-node
- ES_JAVA_OPTS=-Xms750m -Xmx750m
ulimits :
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
cap_add:
- IPC_LOCK
volumes:
- elasticsearch-data:/usr/share/elasticsearch/data
ports:
- "9200:9200"
kibana:
container_name: kibana
image: docker.elastic.co/kibana/kibana:7.12.0
restart: always
environment:
- ELASTICSEARCH_URL=http://192.168.29.138:9200
- ELASTICSEARCH_HOST=http://192.168.29.138:9200
ports:
- "5601:5601"
depends_on:
- elasticsearch
volumes:
elasticsearch-data:
Kafka.yaml
version : "3"
services:
zookeeper:
image: zookeeper
restart : always
container_name: zookeeper
hostname: zookeeper
ports:
- 2181:2181
environment:
ZOO_MY_ID: 1
kafka:
image: wurstmeister/kafka
container_name: kafka
ports:
- 9092:9092
environment:
KAFKA_ADVERTISED_HOST_NAME : 192.168.29.138
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
kafka_manager:
image: hlebalbau/kafka-manager:stable
container_name: kafka-manager
restart : always
ports:
- "9000:9000"
environment:
ZK_HOST : "zookeeper:2181"
APPLICATION_SECRET: "random-secret"
command: -Dpidfile.path=/dev/null
I have set my Schema Registry URL in ksqldb-server's /etc/ksqldb/ksql-server.properties file, as the documentation says:
ksql.schema.registry.url=http://myipaddress:8090
but when I go inside my ksqlDB container:
docker exec -it ksqldb-cli ksql http://ksqldb-server:8088
and try to run:
CREATE STREAM tracking WITH (KAFKA_TOPIC='tracking', VALUE_FORMAT='AVRO');
I get the error:
Cannot create topic 'tracking' with format AVRO without configuring 'ksql.schema.registry.url'
I have also tried setting it with the following, even though it's not recommended:
SET 'ksql.schema.registry.url'='http://myipaddress:8090';
but I still get the same error. I'm not sure what I'm doing wrong.
This is my docker-compose file:
version: "3.3"
services:
# Kafka/Zookeeper container
divolte-kafka:
image: krisgeus/docker-kafka
restart: always
environment:
ADVERTISED_HOST: divolte-kafka
LOG_RETENTION_HOURS: 1
AUTO_CREATE_TOPICS: "false"
KAFKA_CREATE_TOPICS: tracking:4:1
ADVERTISED_LISTENERS: PLAINTEXT://divolte-kafka:9092,INTERNAL://localhost:9093
LISTENERS: PLAINTEXT://0.0.0.0:9092,INTERNAL://0.0.0.0:9093
SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,INTERNAL:PLAINTEXT
INTER_BROKER: INTERNAL
# Schema Registry
schema-registry:
image: confluentinc/cp-schema-registry:5.5.3
restart: always
depends_on:
- divolte-kafka
ports:
- 8090:8081
environment:
SCHEMA_REGISTRY_HOST_NAME: schema-registry
SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: divolte-kafka:2181
# ksql server
ksqldb-server:
image: confluentinc/ksqldb-server:0.20.0
restart: always
hostname: ksqldb-server
container_name: ksqldb-server
depends_on:
- divolte-kafka
ports:
- "8088:8088"
environment:
KSQL_LISTENERS: http://0.0.0.0:8088
KSQL_BOOTSTRAP_SERVERS: divolte-kafka:9092
KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: "true"
KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: "true"
# ksql cli
ksqldb-cli:
image: confluentinc/ksqldb-cli:0.20.0
restart: always
container_name: ksqldb-cli
depends_on:
- divolte-kafka
- ksqldb-server
entrypoint: /bin/sh
tty: true
I have set my Schema Registry URL in ksqldb-server's /etc/ksqldb/ksql-server.properties file
Well, you're never using this file inside Docker.
You need to add an environment variable for it instead:
KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
Also, the registry itself should connect to Kafka rather than the deprecated ZooKeeper connection, via the property SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS.
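Once the variable is set and the container recreated, a hedged way to confirm connectivity from inside the ksqlDB container (assuming curl is available in the image; note that 8090 is only the host-side mapping, the in-network port is 8081):

# The registry's subject list should come back (an empty [] on a fresh
# registry), proving the URL ksqlDB uses is actually reachable
docker exec ksqldb-server curl -s http://schema-registry:8081/subjects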
I'm trying to run a Kafka service with ZooKeeper, Kafdrop and Schema Registry. I first made it work by installing Kafka, ZooKeeper and Kafdrop in one stack and Confluent Schema Registry in another (with its own Kafka, ZooKeeper, ksqlDB Server and REST Proxy), but I couldn't get Kafdrop to read that Schema Registry. So now I want a single stack with just Kafka, ZooKeeper, Kafdrop and Schema Registry. Everything installs successfully, but the Schema Registry restarts every 10 seconds or so, and I cannot reach the service (localhost:8085) to add my schema. I'm wondering whether it's even possible to run the Confluent Schema Registry outside the Confluent suite of services. Here is my YAML file:
version: '2'
services:
  kafka:
    image: wurstmeister/kafka
    container_name: kafka
    ports:
      - "9092:9092"
    environment:
      - KAFKA_ADVERTISED_HOST_NAME=127.0.0.1
      - KAFKA_ADVERTISED_PORT=9092
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_LISTENERS=INTERNAL://:29092,EXTERNAL://:9092
      - KAFKA_ADVERTISED_LISTENERS=INTERNAL://kafka:29092,EXTERNAL://localhost:9092
      - KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      - KAFKA_INTER_BROKER_LISTENER_NAME=INTERNAL
      - KAFKA_SCHEMA_REGISTRY_URL=schemaregistry:8085
    depends_on:
      - zookeeper
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      - KAFKA_ADVERTISED_HOST_NAME=zookeeper
  schemaregistry:
    image: confluentinc/cp-schema-registry:6.2.0
    restart: always
    depends_on:
      - zookeeper
    environment:
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: "zookeeper:2181"
      SCHEMA_REGISTRY_HOST_NAME: schemaregistry
      SCHEMA_REGISTRY_LISTENERS: "http://0.0.0.0:8085"
    ports:
      - 8085:8085
  kafdrop:
    image: obsidiandynamics/kafdrop
    container_name: kafdrop
    restart: "no"
    ports:
      - "9000:9000"
    environment:
      KAFKA_BROKERCONNECT: "kafka:29092"
      JVM_OPTS: "-Xms16M -Xmx48M -Xss180K -XX:-TieredCompilation -XX:+UseStringDeduplication -noverify"
      SCHEMAREGISTRY_CONNECT: schemaregistry:8085
    depends_on:
      - "kafka"
So it turned out that the Schema Registry couldn't connect because Kafka was not using 'PLAINTEXT' as the internal broker listener name. Here is the working YAML, with Kafdrop also able to deserialize messages as AVRO:
version: '2'
services:
  kafka:
    image: wurstmeister/kafka
    container_name: kafka
    ports:
      - "9092:9092"
    environment:
      - KAFKA_ADVERTISED_HOST_NAME=127.0.0.1
      - KAFKA_ADVERTISED_PORT=9092
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_LISTENERS=PLAINTEXT://:29092,EXTERNAL://:9092
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:29092,EXTERNAL://localhost:9092
      - KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT
      - KAFKA_INTER_BROKER_LISTENER_NAME=PLAINTEXT
      - KAFKA_SCHEMA_REGISTRY_URL=schemaregistry:8085
    depends_on:
      - zookeeper
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      - KAFKA_ADVERTISED_HOST_NAME=zookeeper
  schemaregistry:
    image: confluentinc/cp-schema-registry:6.2.0
    restart: always
    depends_on:
      - zookeeper
    environment:
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: "zookeeper:2181"
      SCHEMA_REGISTRY_HOST_NAME: schemaregistry
      SCHEMA_REGISTRY_LISTENERS: "http://0.0.0.0:8085"
    ports:
      - 8085:8085
  kafdrop:
    image: obsidiandynamics/kafdrop
    container_name: kafdrop
    restart: "no"
    ports:
      - "9000:9000"
    environment:
      KAFKA_BROKERCONNECT: "kafka:29092"
      JVM_OPTS: "-Xms16M -Xmx48M -Xss180K -XX:-TieredCompilation -XX:+UseStringDeduplication -noverify"
      SCHEMAREGISTRY_CONNECT: http://schemaregistry:8085
    depends_on:
      - "kafka"
Both of your YAML files are incorrect.
In zookeeper you have KAFKA_ADVERTISED_HOST_NAME, which is a Kafka setting, not a ZooKeeper one.
And in the schema registry you point kafkastore at ZooKeeper port 2181 instead of at the Kafka broker.
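For reference, here is a sketch of the registry configured against the broker instead of ZooKeeper (the network name is an assumption; the compose-file equivalent is the same three SCHEMA_REGISTRY_* variables):

docker run -d --name schemaregistry -p 8085:8085 \
  --network my_kafka_net \
  -e SCHEMA_REGISTRY_HOST_NAME=schemaregistry \
  -e SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8085 \
  -e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=PLAINTEXT://kafka:29092 \
  confluentinc/cp-schema-registry:6.2.0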
I have been trying to dockerize my Spring Boot application, which depends on Redis, Kafka and MongoDB.
Following is the docker-compose.yml:
version: '3.3'
services:
  my-service:
    image: my-service
    build:
      context: ../../
      dockerfile: Dockerfile
    restart: always
    container_name: my-service
    environment:
      KAFKA_CONFLUENT_BOOTSTRAP_SERVERS: kafka:9092
      MONGO_HOSTS: mongodb:27017
      REDIS_HOST: redis
      REDIS_PORT: 6379
    volumes:
      - /private/var/log/my-service/:/var/log/my-service/
    ports:
      - 8080:8090
      - 1053:1053
    depends_on:
      - redis
      - kafka
      - mongodb
  portainer:
    image: portainer/portainer
    command: -H unix:///var/run/docker.sock
    restart: always
    container_name: portainer
    ports:
      - 9000:9000
      - 9001:8000
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  redis:
    image: redis
    container_name: redis
    restart: always
    ports:
      - 6379:6379
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - 2181:2181
    container_name: zookeeper
  kafka:
    image: wurstmeister/kafka
    ports:
      - 9092:9092
    container_name: kafka
    environment:
      KAFKA_CREATE_TOPICS: "cms.entity.change:1:1" # topic:partition:replicas
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_PORT: 9092
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - "zookeeper"
  mongodb:
    image: mongo:latest
    container_name: mongodb
    environment:
      MONGO_INITDB_ROOT_USERNAME:
      MONGO_INITDB_ROOT_PASSWORD:
    ports:
      - 27017:27017
    volumes:
      - ./data/db:/data/db
The issue is that this starts up Mongo as a STANDALONE instance, so the APIs in my service that persist data are failing, because Mongo needs to start as a REPLICA_SET.
How can I edit my docker-compose file to start mongo as a REPLICA_SET?
I had the same issue and ended up on this Stack Overflow post.
We had a requirement to use the official MongoDB Docker image (https://hub.docker.com/_/mongo) and couldn't use Bitnami as suggested in Vahid's answer.
This answer isn't exactly what the question asked for, and it comes six months late, but it should give direction to anyone who needs a standalone throwaway replica-set instance for integration testing. If you need it in PROD, you'll have to provide environment variables for volumes and auth as per Vahid's answer.
version: '3.7'
services:
  mongodb:
    image: mongo:latest
    container_name: myservice-mongodb
    networks:
      - myServiceNetwork
    expose:
      - 27017
    command: --replSet singleNodeReplSet
  mongodb-replicaset:
    container_name: mongodb-replicaset-helper
    depends_on:
      - mongodb
    networks:
      - myServiceNetwork
    image: mongo:latest
    command: bash -c "sleep 5 && mongo --host myservice-mongodb --port 27017 --eval \"rs.initiate()\" && sleep 2 && mongo --host myservice-mongodb --port 27017 --eval \"rs.status()\" && sleep infinity"
  my-service:
    depends_on:
      - mongodb-replicaset
    image: myserviceimage
    container_name: myservicecontainer
    networks:
      - myServiceNetwork
    environment:
      myservice__Database__ConnectionString: mongodb://myservice-mongodb:27017/?connect=direct&replicaSet=singleNodeReplSet&readPreference=primary
      myservice__Database__Name: myserviceDb
networks:
  myServiceNetwork:
    driver: bridge
NOTE: Please look at how the connection string is passed as an env variable to the service that depends on the Mongo replica-set instance. You'd have to ensure that the replica-set name used when setting it up (in my case singleNodeReplSet) is the same one passed to the service depending on it.
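A hedged way to confirm the throwaway replica set actually initialised, using the service names from the compose file above:

# 1 means rs.initiate() succeeded and the node joined the set
docker-compose exec mongodb mongo --quiet --eval "rs.status().ok"

# The single member should report itself as primary
docker-compose exec mongodb mongo --quiet --eval "rs.isMaster().ismaster"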
Edited: my previous answer was far off, so I changed it.
I managed to make it work using bitnami/mongodb:4.0. Not sure if that helps you, but maybe it gives you some ideas; they have a docker-compose file ready for replica-set mode:
version: '3'
services:
  mdb-primary:
    image: 'bitnami/mongodb:4.0'
    environment:
      - MONGODB_REPLICA_SET_MODE=primary
      - MONGODB_ROOT_PASSWORD=somepassword
      - MONGODB_REPLICA_SET_KEY=replicasetkey
      - MONGODB_ADVERTISED_HOSTNAME=mdb-primary
  mdb-secondary:
    image: 'bitnami/mongodb:4.0'
    depends_on:
      - mdb-primary
    environment:
      - MONGODB_PRIMARY_HOST=mdb-primary
      - MONGODB_REPLICA_SET_MODE=secondary
      - MONGODB_PRIMARY_ROOT_PASSWORD=somepassword
      - MONGODB_REPLICA_SET_KEY=replicasetkey
      - MONGODB_ADVERTISED_HOSTNAME=mdb-secondary
  mdb-arbiter:
    image: 'bitnami/mongodb:4.0'
    depends_on:
      - mdb-primary
    environment:
      - MONGODB_PRIMARY_HOST=mdb-primary
      - MONGODB_REPLICA_SET_MODE=arbiter
      - MONGODB_PRIMARY_ROOT_PASSWORD=somepassword
      - MONGODB_REPLICA_SET_KEY=replicasetkey
      - MONGODB_ADVERTISED_HOSTNAME=mdb-arbiter
  mongo-cli:
    image: 'bitnami/mongodb:latest'
Don't forget to add volumes and map them to /bitnami on the primary node.
The last container, mongo-cli, is for testing purposes, so you can connect to the replica set using the CLI; there is a discussion about that here if you'd like to read it.
$ docker-compose exec mongo-cli bash
$ mongo "mongodb://mdb-primary:27017/test?replicaSet=replicaset"