Slowdown in docker-compose version 2 vs version 1

I have a docker-compose file with the following general content:
mongo:
  image: mongoimage
  expose:
    - "27017"
  ports:
    - "27017:27017"
mysql:
  image: mysqlimage
  expose:
    - "3306"
  ports:
    - "3306:3306"
data:
  image: dataimage
memcached:
  image: memcached
webserver:
  image: webserverimage
  ports:
    - "80:80"
    - "5000:5000"
    - "12500:12500"
  expose:
    - "5000"
    - "12500"
  links:
    - mongo
    - mysql
    - memcached
  hostname: localtesting.dev
  volumes_from:
    - data
  volumes:
    - localvolume1:/mountedvolume1
    - localvolume2:/mountedvolume2
    - localvolume3:/mountedvolume3
    - localvolume4:/mountedvolume4
    - localvolume5:/mountedvolume5
testing:
  image: testingimage
  ports:
    - "8080:80"
  links:
    - mongo
    - mysql
    - webserver:localtesting.dev
  volumes_from:
    - webserver
When I run this through our local test suite, which consists of a lot of HTTP requests from the testing container to the webserver container, it takes about 1 hour.
I modified it to the version 2 compose file format:
version: '2'
services:
  mongo:
    image: mongoimage
    expose:
      - "27017"
    ports:
      - "27017:27017"
  mysql:
    image: mysqlimage
    expose:
      - "3306"
    ports:
      - "3306:3306"
  data:
    image: dataimage
  memcached:
    image: memcached
  webserver:
    image: webserverimage
    ports:
      - "80:80"
      - "5000:5000"
      - "12500:12500"
    expose:
      - "5000"
      - "12500"
    links:
      - mongo
      - mysql
      - memcached
    hostname: localtesting.dev
    volumes_from:
      - data
    volumes:
      - localvolume1:/mountedvolume1
      - localvolume2:/mountedvolume2
      - localvolume3:/mountedvolume3
      - localvolume4:/mountedvolume4
      - localvolume5:/mountedvolume5
  testing:
    image: testingimage
    ports:
      - "8080:80"
    links:
      - mongo
      - mysql
      - webserver:localtesting.dev
    volumes_from:
      - webserver
When I run the local test suite now, it suddenly takes about 6 hours, with every HTTP request taking much longer. What network changes should I make to remove the extra latency, and what is the actual cause behind it?
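For context on what changed between the two formats: with version 2, Compose attaches all of these services to a project-specific user-defined bridge network, on which service names are resolved by Docker's embedded DNS server, whereas version 1 links were written directly into /etc/hosts. A hedged way to check whether the per-project network and its DNS resolver are the source of the latency is to opt the containers back onto the default bridge with network_mode (shown here only for the testing service; every communicating service would need the same treatment):
  testing:
    image: testingimage
    network_mode: bridge   # rejoin the default bridge instead of the per-project network
    ports:
      - "8080:80"
    links:
      - mongo
      - mysql
      - webserver:localtesting.dev
    volumes_from:
      - webserver
This is a diagnostic sketch rather than a recommended end state: it gives up the v2 service-discovery network, and it is worth verifying on your Compose version that links still behave as they did under v1 once network_mode is set.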

Related

How to create two postgres docker containers from docker compose file, one for development and one for testing?

So for example this is my docker-compose.yml file:
version: "3.7"
services:
postgres:
container_name: mydevdb
image: postgres:13
restart: always
environment:
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- POSTGRES_DB=${POSTGRES_DB}
volumes:
- postgres:/var/lib/postgresql/data
ports:
- "5432:5432"
volumes:
postgres:
And I want to have one instance for development and another for testing, each with a different container name, POSTGRES_USER, POSTGRES_PASSWORD, POSTGRES_DB, and port.
What is the best approach for that? Thanks.
You just have to give the services different names (and maybe different port mappings, depending on how you want to use them):
version: "3.7"
services:
postgres-dev:
container_name: mydevdb
image: postgres:13
restart: always
environment:
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- POSTGRES_DB=${POSTGRES_DB}
volumes:
- postgres-dev:/var/lib/postgresql/data
ports:
- "5432:5432"
postgres-test:
container_name: mytestdb
image: postgres:13
restart: always
environment:
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- POSTGRES_DB=${POSTGRES_DB}
volumes:
- postgres-test:/var/lib/postgresql/data
ports:
- "5432:5432"
volumes:
postgres-dev:
postgres-test:
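If the two instances also need different credentials and database names, as the question asks, a hedged variation is to give each service its own interpolation variables so that a single .env file next to the compose file can hold both sets (the variable names below are only examples):
  postgres-dev:
    environment:
      - POSTGRES_USER=${DEV_POSTGRES_USER}
      - POSTGRES_PASSWORD=${DEV_POSTGRES_PASSWORD}
      - POSTGRES_DB=${DEV_POSTGRES_DB}
  postgres-test:
    environment:
      - POSTGRES_USER=${TEST_POSTGRES_USER}
      - POSTGRES_PASSWORD=${TEST_POSTGRES_PASSWORD}
      - POSTGRES_DB=${TEST_POSTGRES_DB}
Compose reads the .env file automatically for this kind of ${...} substitution, so the dev and test values can live side by side in it.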

How to get docker compose to create pg DB if not found

I am using NestJS with TypeORM and Postgres to define the database.
I would like docker-compose to create the DB in the container if it does not exist.
docker-compose.yaml
version: '3.9'
services:
  backend:
    build: .
    ports:
      - 8000:3000
    volumes:
      - .:/app
    depends_on:
      - db
  db:
    image: postgres:14.1
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    ports:
      - 33066:5432
  pgadmin:
    container_name: pgadmin4_container_shibari
    image: dpage/pgadmin4
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@admin.com
      PGADMIN_DEFAULT_PASSWORD: root
    ports:
      - '5050:80'
  redis:
    image: redis
    ports:
      - 6379:6379
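A hedged sketch of one common approach, assuming the application expects a database named mydb (the real name is not given in the question): the official postgres image creates the database named in POSTGRES_DB the first time it initializes an empty data directory, and at that same first initialization it also runs any *.sql or *.sh files mounted into /docker-entrypoint-initdb.d:
  db:
    image: postgres:14.1
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=mydb                          # hypothetical name; created only on first init
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
      - ./initdb:/docker-entrypoint-initdb.d      # optional: *.sql / *.sh scripts run once at first init
    ports:
      - 33066:5432
Neither mechanism runs again once ./postgres-data already contains an initialized cluster, so an existing volume either needs the database created manually or has to be removed and re-initialized.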

docker-compose file for nodejs, mongo, redis, rabbitmq

I need a docker-compose file that has Node 12, Mongo 4.4, Redis 4.0.6, and RabbitMQ 3.8.9. This is what I have in my docker-compose right now, and apparently it does not work: the app can't seem to connect to Redis and RabbitMQ.
version: '3'
services:
  app:
    container_name: malllog-main
    restart: always
    build: .
    ports:
      - '3000:3000'
    external_links:
      - mongo
      - redis
      - rabbitmq
  mongo:
    container_name: malllog-mongo
    image: mongo:4.4
    ports:
      - '27017:27017'
  redis:
    container_name: malllog-redis
    image: redis:4.0.6
    ports:
      - '6379:6379'
  rabbitmq:
    container_name: malllog-rabbitmq
    image: rabbitmq:3.8.9
    ports:
      - '15672:15672'
      - '5672:5672'
Below is the docker-compose file that I have used for my microservice test project; it works for me, so you can try it out.
version: '3'
services:
  my-ui:
    container_name: my-ui
    build: ./my-ui
    ports:
      - "80:80"
    depends_on:
      - my-api
    networks:
      - test-network
  my-api:
    container_name: my-api
    restart: always
    build:
      context: my-api
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
    #command: mvn clean spring-boot:run -Dspring-boot.run.profiles=docker
    depends_on:
      - rabbitmq
      - mp-redis
    networks:
      - test-network
  rabbitmq:
    container_name: rabbitmq
    image: rabbitmq:management
    ports:
      - "5672:5672"
      - "15672:15672"
    restart: always
    networks:
      - test-network
  mp-redis:
    container_name: mp-redis
    image: redis:5
    ports:
      - "6379:6379"
    restart: always
    networks:
      - test-network
  mp-mongodb:
    container_name: mp-mongodb
    image: mongo:3.6
    restart: always
    environment:
      MONGO_DATA_DIR: /data/db
      MONGO_LOG_DIR: /dev/null
    volumes:
      - mongo-data:/data/db
    ports:
      - "27017:27017"
    command: mongod --smallfiles --logpath=/dev/null # --quiet
    networks:
      - test-network
volumes:
  mongo-data:
networks:
  test-network:
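For the original file in the question, one likely factor (an assumption about the app's configuration, which is not shown) is that external_links only refers to containers started outside this compose project; services defined in the same file should instead be listed under depends_on and reached by their service names on the compose network rather than localhost, for example:
  app:
    container_name: malllog-main
    restart: always
    build: .
    ports:
      - '3000:3000'
    depends_on:
      - mongo
      - redis
      - rabbitmq
    environment:
      # hypothetical variable names; the app's real configuration keys are not shown in the question
      - MONGO_URL=mongodb://mongo:27017/malllog
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - AMQP_URL=amqp://rabbitmq:5672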

multiple kafka schema registry against same cluster

I'm trying to run two instances of the Kafka Schema Registry against the same Kafka and Zookeeper cluster, but the schemas are getting mixed. When both registries are running, if I register a schema through the "kafka-schema-registry" API, it shows up in "schema-registry-ui-other" and not in "kafka-schema-registry-ui" as expected.
My configuration is:
version: '2.1'
services:
  zoo1:
    image: zookeeper:3.4.9
    restart: unless-stopped
    hostname: zoo1
    ports:
      - "2181:2181"
    environment:
      ZOO_MY_ID: 1
      ZOO_PORT: 2181
      ZOO_SERVERS: server.1=zoo1:2888:3888
    volumes:
      - ./full-stack/zoo1/data:/data
      - ./full-stack/zoo1/datalog:/datalog
  kafka1:
    image: confluentinc/cp-kafka:5.3.1
    hostname: kafka1
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka1:19092,LISTENER_DOCKER_EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181"
      KAFKA_BROKER_ID: 1
      KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    volumes:
      - ./full-stack/kafka1/data:/var/lib/kafka/data
    depends_on:
      - zoo1
  kafka-schema-registry:
    image: confluentinc/cp-schema-registry:5.3.1
    hostname: kafka-schema-registry
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: PLAINTEXT://kafka1:19092
      SCHEMA_REGISTRY_HOST_NAME: kafka-schema-registry
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081
      SCHEMA_REGISTRY_KAFKASTORE_GROUP_ID: "schema-registry"
    depends_on:
      - zoo1
      - kafka1
  kafka-schema-registry-other:
    image: confluentinc/cp-schema-registry:5.3.1
    hostname: kafka-schema-registry-other
    ports:
      - "8092:8081"
    environment:
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: PLAINTEXT://kafka1:19092
      SCHEMA_REGISTRY_HOST_NAME: kafka-schema-registry-other
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081
      SCHEMA_REGISTRY_SCHEMA_REGISTRY_ZK_NAMESPACE: schema_registry_other
      SCHEMA_REGISTRY_KAFKASTORE_TOPIC: "_schemas_other"
      SCHEMA_REGISTRY_KAFKASTORE_GROUP_ID: "schema-registry-other"
    depends_on:
      - zoo1
      - kafka1
  schema-registry-ui:
    image: landoop/schema-registry-ui:0.9.4
    hostname: kafka-schema-registry-ui
    ports:
      - "8001:8000"
    environment:
      SCHEMAREGISTRY_URL: http://kafka-schema-registry:8081/
      PROXY: "true"
    depends_on:
      - kafka-schema-registry
  schema-registry-ui-other:
    image: landoop/schema-registry-ui:0.9.4
    hostname: kafka-schema-registry-ui-other
    ports:
      - "8002:8000"
    environment:
      SCHEMAREGISTRY_URL: http://kafka-schema-registry-other:8081/
      PROXY: "true"
    depends_on:
      - kafka-schema-registry-other
  kafka-rest-proxy:
    image: confluentinc/cp-kafka-rest:5.3.1
    hostname: kafka-rest-proxy
    ports:
      - "8082:8082"
    environment:
      # KAFKA_REST_ZOOKEEPER_CONNECT: zoo1:2181
      KAFKA_REST_LISTENERS: http://0.0.0.0:8082/
      KAFKA_REST_SCHEMA_REGISTRY_URL: http://kafka-schema-registry:8081/
      KAFKA_REST_HOST_NAME: kafka-rest-proxy
      KAFKA_REST_BOOTSTRAP_SERVERS: PLAINTEXT://kafka1:19092
    depends_on:
      - zoo1
      - kafka1
      - kafka-schema-registry
  kafka-topics-ui:
    image: landoop/kafka-topics-ui:0.9.4
    hostname: kafka-topics-ui
    ports:
      - "8000:8000"
    environment:
      KAFKA_REST_PROXY_URL: "http://kafka-rest-proxy:8082/"
      PROXY: "true"
    depends_on:
      - zoo1
      - kafka1
      - kafka-schema-registry
      - kafka-rest-proxy
  zoonavigator-web:
    image: elkozmon/zoonavigator-web:0.5.1
    ports:
      - "8004:8000"
    environment:
      API_HOST: "zoonavigator-api"
      API_PORT: 9000
    links:
      - zoonavigator-api
    depends_on:
      - zoonavigator-api
  zoonavigator-api:
    image: elkozmon/zoonavigator-api:0.5.1
    environment:
      SERVER_HTTP_PORT: 9000
    depends_on:
      - zoo1
Is it not possible to have two separate Schema Registries?
You are defining the group ID incorrectly for the Schema Registry servers: they end up in the same group, so they consider themselves part of the same cluster. The second Schema Registry server becomes the primary and therefore performs all of the writes.
You can fix this by setting the environment variable SCHEMA_REGISTRY_SCHEMA_REGISTRY_GROUP_ID on both of the Schema Registry servers, giving each a different value. They will then be treated as two separate clusters, and this will work as you expect.
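A minimal sketch of that change applied to the compose file above (the group ID strings are just example values):
  kafka-schema-registry:
    environment:
      SCHEMA_REGISTRY_SCHEMA_REGISTRY_GROUP_ID: "schema-registry"
      # ...keep the existing environment entries...
  kafka-schema-registry-other:
    environment:
      SCHEMA_REGISTRY_SCHEMA_REGISTRY_GROUP_ID: "schema-registry-other"
      # ...keep the existing environment entries...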

How to get mongo-express to wait for mongo in docker-compose

I'm using the healthcheck/mongo image and have mongo-express waiting on it with condition: service_healthy. Why does it not wait, and why does localhost:8081 return a 404 in the browser? :(
# Docker Engine: 18.06.1-ce
# docker-compose: 1.22.0, build f46880f
version: '2.1'
services:
  mongodb:
    image: healthcheck/mongo:latest
    restart: always
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=rootpass
    ports:
      - "27017:27017"
    volumes:
      - "./mongo/data:/data/db"
  mongo-express:
    image: mongo-express
    restart: always
    depends_on:
      mongodb:
        condition: service_healthy
    ports:
      - "8081:8081"
    environment:
      - ME_CONFIG_MONGODB_ADMINUSERNAME=root
      - ME_CONFIG_MONGODB_ADMINPASSWORD=rootpass
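If the healthcheck/mongo image's built-in check is not reporting healthy in this setup, a hedged alternative sketch is to use the official mongo image and declare the healthcheck explicitly, since condition: service_healthy only waits on whatever healthcheck the container actually defines (the image tag, intervals, and ping command below are just an example):
  mongodb:
    image: mongo:4.0
    restart: always
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=rootpass
    ports:
      - "27017:27017"
    volumes:
      - "./mongo/data:/data/db"
    healthcheck:
      # ping does not require authentication, so this succeeds as soon as mongod is accepting connections
      test: ["CMD", "mongo", "--quiet", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      timeout: 5s
      retries: 5
Compose file version 2.1, as used above, supports the condition form of depends_on, so with an explicit healthcheck mongo-express should only be started once mongod answers the ping.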