Multiple Kafka Schema Registry instances against the same cluster - apache-kafka

I'm trying to run two instances of Kafka Schema Registry against the same Kafka and ZooKeeper cluster, but the schemas are getting mixed up. With both registries running, if I register a schema through the "kafka-schema-registry" API, it shows up in "schema-registry-ui-other" instead of in "kafka-schema-registry-ui" as expected.
My configuration is:
version: '2.1'
services:
  zoo1:
    image: zookeeper:3.4.9
    restart: unless-stopped
    hostname: zoo1
    ports:
      - "2181:2181"
    environment:
      ZOO_MY_ID: 1
      ZOO_PORT: 2181
      ZOO_SERVERS: server.1=zoo1:2888:3888
    volumes:
      - ./full-stack/zoo1/data:/data
      - ./full-stack/zoo1/datalog:/datalog
  kafka1:
    image: confluentinc/cp-kafka:5.3.1
    hostname: kafka1
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka1:19092,LISTENER_DOCKER_EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181"
      KAFKA_BROKER_ID: 1
      KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    volumes:
      - ./full-stack/kafka1/data:/var/lib/kafka/data
    depends_on:
      - zoo1
  kafka-schema-registry:
    image: confluentinc/cp-schema-registry:5.3.1
    hostname: kafka-schema-registry
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: PLAINTEXT://kafka1:19092
      SCHEMA_REGISTRY_HOST_NAME: kafka-schema-registry
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081
      SCHEMA_REGISTRY_KAFKASTORE_GROUP_ID: "schema-registry"
    depends_on:
      - zoo1
      - kafka1
  kafka-schema-registry-other:
    image: confluentinc/cp-schema-registry:5.3.1
    hostname: kafka-schema-registry-other
    ports:
      - "8092:8081"
    environment:
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: PLAINTEXT://kafka1:19092
      SCHEMA_REGISTRY_HOST_NAME: kafka-schema-registry-other
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081
      SCHEMA_REGISTRY_SCHEMA_REGISTRY_ZK_NAMESPACE: schema_registry_other
      SCHEMA_REGISTRY_KAFKASTORE_TOPIC: "_schemas_other"
      SCHEMA_REGISTRY_KAFKASTORE_GROUP_ID: "schema-registry-other"
    depends_on:
      - zoo1
      - kafka1
  schema-registry-ui:
    image: landoop/schema-registry-ui:0.9.4
    hostname: kafka-schema-registry-ui
    ports:
      - "8001:8000"
    environment:
      SCHEMAREGISTRY_URL: http://kafka-schema-registry:8081/
      PROXY: "true"
    depends_on:
      - kafka-schema-registry
  schema-registry-ui-other:
    image: landoop/schema-registry-ui:0.9.4
    hostname: kafka-schema-registry-ui-other
    ports:
      - "8002:8000"
    environment:
      SCHEMAREGISTRY_URL: http://kafka-schema-registry-other:8081/
      PROXY: "true"
    depends_on:
      - kafka-schema-registry-other
  kafka-rest-proxy:
    image: confluentinc/cp-kafka-rest:5.3.1
    hostname: kafka-rest-proxy
    ports:
      - "8082:8082"
    environment:
      # KAFKA_REST_ZOOKEEPER_CONNECT: zoo1:2181
      KAFKA_REST_LISTENERS: http://0.0.0.0:8082/
      KAFKA_REST_SCHEMA_REGISTRY_URL: http://kafka-schema-registry:8081/
      KAFKA_REST_HOST_NAME: kafka-rest-proxy
      KAFKA_REST_BOOTSTRAP_SERVERS: PLAINTEXT://kafka1:19092
    depends_on:
      - zoo1
      - kafka1
      - kafka-schema-registry
  kafka-topics-ui:
    image: landoop/kafka-topics-ui:0.9.4
    hostname: kafka-topics-ui
    ports:
      - "8000:8000"
    environment:
      KAFKA_REST_PROXY_URL: "http://kafka-rest-proxy:8082/"
      PROXY: "true"
    depends_on:
      - zoo1
      - kafka1
      - kafka-schema-registry
      - kafka-rest-proxy
  zoonavigator-web:
    image: elkozmon/zoonavigator-web:0.5.1
    ports:
      - "8004:8000"
    environment:
      API_HOST: "zoonavigator-api"
      API_PORT: 9000
    links:
      - zoonavigator-api
    depends_on:
      - zoonavigator-api
  zoonavigator-api:
    image: elkozmon/zoonavigator-api:0.5.1
    environment:
      SERVER_HTTP_PORT: 9000
    depends_on:
      - zoo1
Is it not possible to have two separate Schema Registries?

You are defining the group ID incorrectly for the Schema Registry servers: they end up in the same consumer group, which means they consider themselves part of the same cluster. The second Schema Registry server becomes the primary and therefore performs all of the writes.
You can fix this by setting the environment variable SCHEMA_REGISTRY_SCHEMA_REGISTRY_GROUP_ID to a different value for each of the Schema Registry servers. They will then be considered two different clusters, and this will work as you expect it to.
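As a hedged sketch of that fix (only the environment blocks of the two registry services are shown; everything else stays as in the question), the Confluent image maps the env var SCHEMA_REGISTRY_SCHEMA_REGISTRY_GROUP_ID onto the `schema.registry.group.id` setting, so giving each registry its own value separates the two clusters:

```yaml
# Sketch only: distinct schema.registry.group.id per registry, so each forms
# its own Schema Registry "cluster". The group-id values are illustrative.
kafka-schema-registry:
  environment:
    SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: PLAINTEXT://kafka1:19092
    SCHEMA_REGISTRY_HOST_NAME: kafka-schema-registry
    SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081
    SCHEMA_REGISTRY_SCHEMA_REGISTRY_GROUP_ID: schema-registry
kafka-schema-registry-other:
  environment:
    SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: PLAINTEXT://kafka1:19092
    SCHEMA_REGISTRY_HOST_NAME: kafka-schema-registry-other
    SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081
    SCHEMA_REGISTRY_KAFKASTORE_TOPIC: _schemas_other
    SCHEMA_REGISTRY_SCHEMA_REGISTRY_GROUP_ID: schema-registry-other
```

Keeping a distinct `kafkastore.topic` for the second registry (as the question already does) also matters, since that topic is where each registry actually stores its schemas.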

Related

How to create two postgres docker containers from a docker-compose file, one for development and one for testing?

So for example this is my docker-compose.yml file:
version: "3.7"
services:
  postgres:
    container_name: mydevdb
    image: postgres:13
    restart: always
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
    volumes:
      - postgres:/var/lib/postgresql/data
    ports:
      - "5432:5432"
volumes:
  postgres:
And I want to have one container for development and another for testing, each with a different container name, POSTGRES_USER, POSTGRES_PASSWORD, POSTGRES_DB, and port.
What is the best approach for that? Thanks.
You just have to give the services different names (and different host ports, since two containers cannot both publish the same host port):
version: "3.7"
services:
  postgres-dev:
    container_name: mydevdb
    image: postgres:13
    restart: always
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
    volumes:
      - postgres-dev:/var/lib/postgresql/data
    ports:
      - "5432:5432"
  postgres-test:
    container_name: mytestdb
    image: postgres:13
    restart: always
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
    volumes:
      - postgres-test:/var/lib/postgresql/data
    ports:
      - "5433:5432"
volumes:
  postgres-dev:
  postgres-test:
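If the two databases also need different credentials, one common pattern (not from the original answer; every file name and value below is a hypothetical assumption) is to keep the variable substitutions as-is and select values per environment with `--env-file`, using separate compose project names to keep the containers and volumes apart:

```shell
# Hypothetical env files; names and values are assumptions, not from the post.
cat > .env.dev <<'EOF'
POSTGRES_USER=devuser
POSTGRES_PASSWORD=devpass
POSTGRES_DB=devdb
EOF

cat > .env.test <<'EOF'
POSTGRES_USER=testuser
POSTGRES_PASSWORD=testpass
POSTGRES_DB=testdb
EOF

# Separate project names (-p) keep containers, networks and volumes apart.
docker compose -p dev  --env-file .env.dev  up -d
docker compose -p test --env-file .env.test up -d
```

This keeps one service definition per file while still giving each environment its own credentials and database name.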

Kafka cluster with 3 Kafka brokers and 3 ZooKeepers installed on 2 machines in Docker

I need to create a Kafka cluster (3 Kafka brokers with 3 ZooKeepers) running in Docker on 2 Linux machines: 2 brokers + 2 ZooKeepers on one, and 1 broker + 1 ZooKeeper on the other.
Server IPs are 192.168.30.35 and 192.168.30.37.
My docker-compose:
**Server 35:**
version: "3"
services:
  zookeeper1:
    image: confluentinc/cp-zookeeper:5.5.0
    hostname: zookeeper1
    container_name: zookeeper1
    restart: always
    ports:
      - "2181:2181"
      - "12888:2888"
      - "13888:3888"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_SERVERS: "localhost:12888:13888;localhost:22888:23888;192.168.30.37:32888:33888"
    volumes:
      - ./volumes/zookeeper-1/data:/var/lib/zookeeper/data
      - ./volumes/zookeeper-1/log:/var/lib/zookeeper/log
  zookeeper2:
    image: confluentinc/cp-zookeeper:5.5.0
    hostname: zookeeper2
    container_name: zookeeper2
    ports:
      - "2182:2181"
      - "22888:2888"
      - "23888:3888"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_SERVER_ID: 2
      ZOOKEEPER_SERVERS: "localhost:12888:13888;localhost:22888:23888;192.168.30.37:32888:33888"
    volumes:
      - ./volumes/zookeeper-2/data:/var/lib/zookeeper/data
      - ./volumes/zookeeper-2/log:/var/lib/zookeeper/log
  kafka1:
    image: confluentinc/cp-kafka:5.5.0
    hostname: kafka1
    container_name: kafka1
    depends_on:
      - zookeeper1
      - zookeeper2
    ports:
      - "9091:9091"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: localhost:2181,localhost:2182,192.168.30.37:2183
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INTERNAL://:29092,OUTSIDE://:9091
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka1:29092,OUTSIDE://localhost:9091
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
    volumes:
      - ./volumes/kafka-1/data:/var/lib/kafka/data
  kafka2:
    image: confluentinc/cp-kafka:5.5.0
    hostname: kafka2
    container_name: kafka2
    depends_on:
      - zookeeper1
      - zookeeper2
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: localhost:2181,localhost:2182,192.168.30.37:2183
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INTERNAL://:29092,OUTSIDE://:9092
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka2:29092,OUTSIDE://localhost:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
    volumes:
      - ./volumes/kafka-2/data:/var/lib/kafka/data
**Server 37:**
version: "3"
services:
  zookeeper3:
    image: confluentinc/cp-zookeeper:5.5.0
    hostname: zookeeper3
    container_name: zookeeper3
    restart: always
    ports:
      - "2183:2181"
      - "32888:2888"
      - "33888:3888"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_SERVER_ID: 3
      ZOOKEEPER_SERVERS: "192.168.30.35:12888:13888;192.168.30.35:22888:23888;zookeeper3:32888:33888"
    volumes:
      - ./volumes/zookeeper-3/data:/var/lib/zookeeper/data
      - ./volumes/zookeeper-3/log:/var/lib/zookeeper/log
  kafka3:
    image: confluentinc/cp-kafka:5.5.0
    hostname: kafka3
    container_name: kafka3
    depends_on:
      - zookeeper3
    ports:
      - "9093:9093"
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: 192.168.30.35:2181,192.168.30.35:2182,zookeeper3:2183
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INTERNAL://:29092,OUTSIDE://:9093
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka3:29092,OUTSIDE://192.168.30.37:9093
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
    volumes:
      - ./volumes/kafka-3/data:/var/lib/kafka/data
It starts and I can create topics, but the ZooKeeper on server 37 seems unable to communicate with the ones on server 35.
As a result, when kafka3 goes down and comes back up, the ISR is never restored. Where am I wrong?
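One thing worth checking, offered as a hedged guess rather than as part of the original thread: inside a container, `localhost` resolves to the container itself, so the `localhost` entries in `ZOOKEEPER_SERVERS` and `KAFKA_ZOOKEEPER_CONNECT` on server 35 point each container at itself rather than at its neighbour. A sketch of how server 35's entries could address the peers via the host IPs and published ports instead (the `0.0.0.0` own-node entry is the usual way to make a ZooKeeper node bind locally in Docker):

```yaml
# Sketch only: replace localhost with the host's LAN IP so each container
# reaches ports published on the host rather than its own loopback.
zookeeper1:
  environment:
    ZOOKEEPER_SERVER_ID: 1
    # own entry binds locally; peers are reached via host IP + published ports
    ZOOKEEPER_SERVERS: "0.0.0.0:2888:3888;192.168.30.35:22888:23888;192.168.30.37:32888:33888"
kafka1:
  environment:
    KAFKA_ZOOKEEPER_CONNECT: 192.168.30.35:2181,192.168.30.35:2182,192.168.30.37:2183
```

The corresponding entries for zookeeper2, kafka2, and the services on server 37 would follow the same pattern.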

docker-compose file for nodejs, mongo, redis, rabbitmq

I need a docker-compose file with Node 12, Mongo 4.4, Redis 4.0.6, and RabbitMQ 3.8.9. This is what I have in my docker-compose right now, and apparently it does not work: the app can't seem to connect to Redis and RabbitMQ.
version: '3'
services:
  app:
    container_name: malllog-main
    restart: always
    build: .
    ports:
      - '3000:3000'
    external_links:
      - mongo
      - redis
      - rabbitmq
  mongo:
    container_name: malllog-mongo
    image: mongo:4.4
    ports:
      - '27017:27017'
  redis:
    container_name: malllog-redis
    image: redis:4.0.6
    ports:
      - '6379:6379'
  rabbitmq:
    container_name: malllog-rabbitmq
    image: rabbitmq:3.8.9
    ports:
      - '15672:15672'
      - '5672:5672'
Below is the docker-compose file I used for my microservice test project, which works for me; you can try it out.
version: '3'
services:
  my-ui:
    container_name: my-ui
    build: ./my-ui
    ports:
      - "80:80"
    depends_on:
      - my-api
    networks:
      - test-network
  my-api:
    container_name: my-api
    restart: always
    build:
      context: my-api
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
    #command: mvn clean spring-boot:run -Dspring-boot.run.profiles=docker
    depends_on:
      - rabbitmq
      - mp-redis
    networks:
      - test-network
  rabbitmq:
    container_name: rabbitmq
    image: rabbitmq:management
    ports:
      - "5672:5672"
      - "15672:15672"
    restart: always
    networks:
      - test-network
  mp-redis:
    container_name: mp-redis
    image: redis:5
    ports:
      - "6379:6379"
    restart: always
    networks:
      - test-network
  mp-mongodb:
    container_name: mp-mongodb
    image: mongo:3.6
    restart: always
    environment:
      MONGO_DATA_DIR: /data/db
      MONGO_LOG_DIR: /dev/null
    volumes:
      - mongo-data:/data/db
    ports:
      - "27017:27017"
    command: mongod --smallfiles --logpath=/dev/null # --quiet
    networks:
      - test-network
volumes:
  mongo-data:
networks:
  test-network:
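A note on the original question's file: since all of the services live in the same compose file, `external_links` is not needed; the app can reach the others by service name on the compose network. A hedged sketch of the app service (the environment variable names here are assumptions, not from the post; whatever names the Node app actually reads must be used):

```yaml
# Sketch: service names resolve via Docker's embedded DNS on the compose
# network, so the app connects to "mongo", "redis" and "rabbitmq" directly.
app:
  container_name: malllog-main
  restart: always
  build: .
  ports:
    - '3000:3000'
  depends_on:
    - mongo
    - redis
    - rabbitmq
  environment:
    MONGO_URL: mongodb://mongo:27017/malllog   # hypothetical variable names
    REDIS_HOST: redis
    REDIS_PORT: 6379
    RABBITMQ_URL: amqp://rabbitmq:5672
```

A common remaining failure mode is the app starting before RabbitMQ is ready, since `depends_on` only orders container startup; a retry loop in the app is the usual workaround.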

Error while running compose up. YAML linter showing no errors

I keep getting a service 'image' error when running docker-compose up for my yml file.
From researching online, this is usually caused by a formatting error.
However, I have run my yml through a YAML linter and it reports no errors.
version: '3.5'
services:
  server:
  image: postgrest/postgrest
  ports:
    - "3000:3000"
  links:
    - db:db
  environment:
    PGRST_DB_URI: postgres://app_user:password@db:5432/app_db
    PGRST_DB_SCHEMA: public
    PGRST_DB_ANON_ROLE: app_user
  depends_on:
    - db
  db:
    image: postgres
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-postgres}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-changeme}
      PGDATA: /data/postgres
    volumes:
      - postgres:/data/postgres
    networks:
      - postgres
    restart: unless-stopped
  pgadmin:
    container_name: pgadmin_container
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: ${PGADMIN_DEFAULT_EMAIL:-pgadmin4@pgadmin.org}
      PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_DEFAULT_PASSWORD:-admin}
    volumes :
      - pgadmin:/root/.pgadmin
    ports:
      -"${PGADMIN_PORT:-5050}:80"
    networks:
      - postgres
    restart: unless-stopped
networks:
  postgres:
    driver: bridge
volumes:
  postgres:
  pgadmin:
  swagger:
  image: swaggerapi/swagger-ui
  ports:
    - "8080:8080"
  expose:
    - "8080"
  environment:
    API_URL: http://localhost:3000/
I expected the images to be downloaded and the containers to start up.
Error is:
ERROR: In file '.\docker-compose.yml', service 'image' must be a mapping not a string.
This issue happens because of indentation: with the wrong indentation, docker-compose treats image as a service rather than as a key of one.
I modified your file and started the configured containers successfully:
version: "3.5"
services:
  server:
    image: postgrest/postgrest
    ports:
      - "3000:3000"
    links:
      - db:db
    environment:
      PGRST_DB_URI: postgres://app_user:password@db:5432/app_db
      PGRST_DB_SCHEMA: public
      PGRST_DB_ANON_ROLE: app_user
    depends_on:
      - db
  db:
    image: postgres
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-postgres}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-changeme}
      PGDATA: /data/postgres
    volumes:
      - postgres:/data/postgres
    networks:
      - postgres
    restart: unless-stopped
  pgadmin:
    container_name: pgadmin_container
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: ${PGADMIN_DEFAULT_EMAIL:-pgadmin4@pgadmin.org}
      PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_DEFAULT_PASSWORD:-admin}
    volumes:
      - pgadmin:/root/.pgadmin
    ports:
      - "${PGADMIN_PORT:-5050}:80"
    networks:
      - postgres
    restart: unless-stopped
  swagger:
    image: swaggerapi/swagger-ui
    ports:
      - "8080:8080"
    expose:
      - "8080"
    environment:
      API_URL: http://localhost:3000/
networks:
  postgres:
    driver: bridge
volumes:
  pgadmin:
  postgres:
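A general tip that follows from this: a YAML linter only checks YAML syntax, while this error is a violation of the Compose schema. `docker-compose config` validates the file against that schema and prints the resolved configuration, so it catches mis-indented services that a plain linter accepts:

```shell
# Parses and validates docker-compose.yml; exits non-zero on schema problems
# such as "service 'image' must be a mapping not a string".
docker-compose -f docker-compose.yml config
```

On newer installations the equivalent is `docker compose config`.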

Slowdown in docker-compose version 2 vs version 1

I have a docker-compose file, with the following general content:
mongo:
  image: mongoimage
  expose:
    - "27017"
  ports:
    - "27017:27017"
mysql:
  image: mysqlimage
  expose:
    - "3306"
  ports:
    - "3306:3306"
data:
  image: dataimage
memcached:
  image: memcached
webserver:
  image: webserverimage
  ports:
    - "80:80"
    - "5000:5000"
    - "12500:12500"
  expose:
    - "5000"
    - "12500"
  links:
    - mongo
    - mysql
    - memcached
  hostname: localtesting.dev
  volumes_from:
    - data
  volumes:
    - localvolume1:/mountedvolume1
    - localvolume2:/mountedvolume2
    - localvolume3:/mountedvolume3
    - localvolume4:/mountedvolume4
    - localvolume5:/mountedvolume5
testing:
  image: testingimage
  ports:
    - "8080:80"
  links:
    - mongo
    - mysql
    - webserver:localtesting.dev
  volumes_from:
    - webserver
When I run this through our local test suite, which consists of a lot of HTTP requests from testing to webserver, it takes about 1 hour.
I converted it to compose file version 2:
version: '2'
services:
  mongo:
    image: mongoimage
    expose:
      - "27017"
    ports:
      - "27017:27017"
  mysql:
    image: mysqlimage
    expose:
      - "3306"
    ports:
      - "3306:3306"
  data:
    image: dataimage
  memcached:
    image: memcached
  webserver:
    image: webserverimage
    ports:
      - "80:80"
      - "5000:5000"
      - "12500:12500"
    expose:
      - "5000"
      - "12500"
    links:
      - mongo
      - mysql
      - memcached
    hostname: localtesting.dev
    volumes_from:
      - data
    volumes:
      - localvolume1:/mountedvolume1
      - localvolume2:/mountedvolume2
      - localvolume3:/mountedvolume3
      - localvolume4:/mountedvolume4
      - localvolume5:/mountedvolume5
  testing:
    image: testingimage
    ports:
      - "8080:80"
    links:
      - mongo
      - mysql
      - webserver:localtesting.dev
    volumes_from:
      - webserver
When I run the test suite now, it suddenly takes about 6 hours, with every HTTP request taking much longer. What network changes should I make to remove the extra latency, and what is the actual cause of it?
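No answer appears in the thread, but as a hedged diagnostic: version 2 files place services on a compose-created user-defined network with Docker's embedded DNS resolver, whereas version 1 used the default bridge with /etc/hosts entries injected by links, so name resolution behaves differently between the two. One way to test whether that difference is the bottleneck is to force the containers back onto the default bridge (a diagnostic sketch only, not a recommended setup, since service-name DNS is lost on the default bridge):

```yaml
# Sketch: opt two services out of the compose-created network to compare
# request latency against the v2 default. Remaining services omitted.
version: '2'
services:
  webserver:
    image: webserverimage
    network_mode: bridge   # attach to the default docker0 bridge, as in v1
  testing:
    image: testingimage
    network_mode: bridge
```

If the run time drops back toward the version 1 figure, the latency is coming from name resolution or routing on the user-defined network rather than from the services themselves.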