I currently have a problem with MongoDB and Docker:
I get this error when I run docker-compose up:
ERROR: Named volume "mongo:/data/db:rw" is used in service "mongo" but no declaration was found in the volumes section.
In fact, I realized that I have a volume rather than a container, but I can't access the data or manipulate it with the mongo shell.
I need to delete an index from one MongoDB collection stored in that volume.
How can I do that, given that I have no container, only a volume?
version: '3.1'
services:
proxy:
image: jwilder/nginx-proxy:alpine
restart: always
ports:
- "80:80"
- "443:443"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- $CERT_DIR:/etc/nginx/certs
app:
image: registry.eksae.fr/talent:${VERSION:-latest}
environment:
- NEXT_PUBLIC_APP_URL
- NEXT_PUBLIC_SENTRY_DSN=${SENTRY_DSN}
- NEXT_SERVER_MONGODB_URI
- NEXT_SERVER_MONGODB_USER
- NEXT_SERVER_MONGODB_PASSWORD
- NEXT_SERVER_MONGODB_DBNAME
- NEXT_SERVER_MONGODB_DEBUG
- NEXT_SERVER_MAILER_HOST
- NEXT_SERVER_MAILER_PORT
- NEXT_SERVER_MAILER_USER
- NEXT_SERVER_MAILER_PASS
- NEXT_SERVER_MAILER_SECURE
- NEXT_SERVER_REDIS_HOST
- NEXT_SERVER_TOKEN_EXPIRE_DAYS
- SENTRY_DSN
- SENTRY_RELEASE
- SENTRY_ENVIRONMENT
- SENTRY_ORG
- SENTRY_PROJECT
- SENTRY_LOG_LEVEL
- VIRTUAL_PORT=3000
- VIRTUAL_HOST
- CERT_NAME
extra_hosts:
- s0003113:10.100.173.107
- s0003112:10.100.173.106
- s0003110:10.100.173.105
restart: always
redis:
image: redis
restart: always
volumes:
- redis:/data
volumes:
redis:
driver: local
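The compose error itself just means the mongo service references a named volume that was never declared: adding a mongo: entry under the top-level volumes: key (next to redis:) clears it. To get at the data that already lives in the volume, one option is to mount it into a throwaway container and use the shell from there. A minimal sketch, assuming the volume shows up as something like <project>_mongo and that mydb, mycollection and myIndex_1 stand in for the real names (the mongo:4.4 tag is also just a guess; it should match the server version that wrote the data files):

# find the exact volume name Compose created
docker volume ls | grep mongo

# start a disposable MongoDB container with that volume mounted as its data directory
docker run -d --name mongo-maintenance -v <project>_mongo:/data/db mongo:4.4

# open a shell in it and drop the index (database, collection and index names are placeholders)
docker exec -it mongo-maintenance mongo
> use mydb
> db.mycollection.dropIndex("myIndex_1")

# remove the helper container; the data stays in the volume
docker rm -f mongo-maintenance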
Related
I'm working with docker-compose.yml files and have done so for about three years now.
In my solution, I have 6 containers that reside on one network, defined as a bridge. One of the containers must have access to a specific internet host, which it can reach without any issue. I'll call this the "original" solution.
My next task is to replicate the original solution on my workstation, ensuring that the replica can't interact with the original. My understanding is that I simply create another network, define it as a "bridge", and I should be good to go.
The issue is that the one container in my replica solution is unable to resolve the internet host when making a cURL call to it. The code is nigh on identical between the original and the replica (I added curl debugging to the replica, which is how I caught the unresolved-host error), and the only difference I can see is that the replica is on its own network (but it's a bridge network, so it should still be able to reach the internet host).
The YAML files that I'm using are as follows:
original-docker-compose.yml
version: '3.8'
services:
console-mysql:
container_name: console-mysql
image: PRIVATE_ECR_ADDRESS/mysql:5.7.34
command: --default-authentication-plugin=mysql_native_password
ports:
- "7306:3306"
volumes:
- ./perImageFiles/console-mysql/db:/var/lib/mysql
- ./perImageFiles/console-mysql/seed:/docker-entrypoint-initdb.d
env_file:
- ./.env
networks:
- dev-vlt-console
console-www:
image: PRIVATE_ECR_ADDRESS/bnoe-console:latest
container_name: console-www
env_file:
- ./.env
ports:
- 7080:80
volumes:
- ../../console-www:/var/www/secure
- ./perImageFiles/console/console-nginx.conf:/etc/nginx/conf.d/default.conf
- ./perImageFiles/console/nginx.conf:/etc/nginx/nginx.conf
- ./perImageFiles/console/logs-nginx:/var/log/nginx/
- ./perImageFiles/console/console-www-entrypoint.sh:/console-www-entrypoint.sh
depends_on:
- console-www-php
networks:
- dev-vlt-console
console-proc:
container_name: console-proc
image: PRIVATE_ECR_ADDRESS/bnoe-processor:latest
env_file:
- ./.env
ports:
- 7082:9001
volumes:
- ../../console-www:/var/www/secure
- ./perImageFiles/proc/supervisord.conf:/etc/supervisor/conf.d/supervisord.conf
- ./perImageFiles/dicom/storage:/var/lib/orthanc/db-v6
- ../../console-www/scripts/docker/console-proc-docker-run.sh:/console-proc-docker-run.sh
entrypoint: /console-proc-docker-run.sh
depends_on:
- console-mysql
networks:
- dev-vlt-console
console-www-php:
container_name: console-www-php
image: PRIVATE_ECR_ADDRESS/bnoe-console-php:latest
volumes:
- ../../console-www:/var/www/secure
- ./perImageFiles/console/php-log.conf:/usr/local/etc/php-fpm.d/zz-log.conf
- ./perImageFiles/console/www-php.ini:/usr/local/etc/php/conf.d/www-php.ini
- ./perImageFiles/console/console-php-entrypoint.sh:/console-php-entrypoint.sh
env_file:
- ./.env
- ./.setupenv
entrypoint: /console-php-entrypoint.sh
depends_on:
- console-mysql
networks:
- dev-vlt-console
console-dicom:
image: jodogne/orthanc-plugins:1.9.7
container_name: console-dicom
depends_on: [console-mysql]
ports: [7084:8042, 10401:10401]
volumes:
- ./perImageFiles/dicom/orthanc.json:/etc/orthanc/orthanc.json
- ./perImageFiles/dicom/storage:/var/lib/orthanc/db-v6
- ./perImageFiles/dicom/plugins:/usr/share/orthanc/plugins
- ./perImageFiles/dicom/lua-scripts:/usr/share/orthanc/lua-scripts
env_file:
- ./.env
networks:
- dev-vlt-console
console-vpacs:
image: PRIVATE_ECR_ADDRESS/vpacs:latest
container_name: console-vpacs
depends_on: [console-mysql]
ports: ["7085:8042"]
secrets:
- vpacs-orthanc.json
networks:
- dev-vlt-console
secrets:
console-orthanc.json:
file: ./perImageFiles/dicom/orthanc.json
vpacs-orthanc.json:
file: ./perImageFiles/vpacs/orthanc.json
networks:
dev-vlt-console:
name: dev-vlt-console
driver: bridge
replica-docker-compose.yml
version: '3.8'
services:
console-mysql-dev3:
container_name: console-mysql-dev3
image: PRIVATE_ECR_ADDRESS/mysql:5.7.34
command: --default-authentication-plugin=mysql_native_password
ports:
- "60001:3306"
volumes:
- ./perImageFiles/console-mysql/db:/var/lib/mysql
- ./perImageFiles/console-mysql/seed:/docker-entrypoint-initdb.d
env_file:
- ./.env
networks:
- dev-vlt-console-dev3
console-www-dev3:
image: PRIVATE_ECR_ADDRESS/bnoe-console:latest
container_name: console-www-dev3
env_file:
- ./.env
ports:
- 60002:80
volumes:
- ../console-www:/var/www/secure
- ./perImageFiles/console/console-nginx.conf:/etc/nginx/conf.d/default.conf
- ./perImageFiles/console/nginx.conf:/etc/nginx/nginx.conf
- ./perImageFiles/console/logs-nginx:/var/log/nginx/
- ./perImageFiles/console/console-www-entrypoint.sh:/console-www-entrypoint.sh
depends_on:
- console-www-php-dev3
networks:
- dev-vlt-console-dev3
console-proc-dev3:
container_name: console-proc-dev3
image: PRIVATE_ECR_ADDRESS/bnoe-processor:latest
env_file:
- ./.env
ports:
- 60003:9001
volumes:
- ../console-www:/var/www/secure
- ./perImageFiles/proc/supervisord.conf:/etc/supervisor/conf.d/supervisord.conf
- ./perImageFiles/dicom/storage:/var/lib/orthanc/db-v6
- ../console-www/scripts/docker/console-proc-docker-run.sh:/console-proc-docker-run.sh
entrypoint: /console-proc-docker-run.sh
depends_on:
- console-mysql-dev3
networks:
- dev-vlt-console-dev3
console-www-php-dev3:
container_name: console-www-php-dev3
image: PRIVATE_ECR_ADDRESS/bnoe-console-php:latest
volumes:
- ../console-www:/var/www/secure
- ./perImageFiles/console/php-log.conf:/usr/local/etc/php-fpm.d/zz-log.conf
- ./perImageFiles/console/www-php.ini:/usr/local/etc/php/conf.d/www-php.ini
- ./perImageFiles/console/console-php-entrypoint.sh:/console-php-entrypoint.sh
env_file:
- ./.env
- ./.setupenv
entrypoint: /console-php-entrypoint.sh
depends_on:
- console-mysql-dev3
networks:
- dev-vlt-console-dev3
console-dicom-dev3:
image: jodogne/orthanc-plugins:1.9.7
container_name: console-dicom-dev3
depends_on: [console-mysql-dev3]
ports: [60004:8042, 60005:10401]
volumes:
- ./perImageFiles/dicom/orthanc.json:/etc/orthanc/orthanc.json
- ./perImageFiles/dicom/storage:/var/lib/orthanc/db-v6
- ./perImageFiles/dicom/plugins:/usr/share/orthanc/plugins
- ./perImageFiles/dicom/lua-scripts:/usr/share/orthanc/lua-scripts
env_file:
- ./.env
networks:
- dev-vlt-console-dev3
console-vpacs-dev3:
image: PRIVATE_ECR_ADDRESS/vpacs:latest
container_name: console-vpacs-dev3
depends_on: [console-mysql-dev3]
ports: ["60006:8042"]
secrets:
- vpacs-orthanc-dev3.json
networks:
- dev-vlt-console-dev3
secrets:
console-orthanc-dev3.json:
file: ./perImageFiles/dicom/orthanc.json
vpacs-orthanc-dev3.json:
file: ./perImageFiles/vpacs/orthanc.json
networks:
dev-vlt-console-dev3:
name: dev-vlt-console-dev3
driver: bridge
Am I missing something here? I've added tens of other containers, all with their own networks configured as "bridge", and they are all able to access the internet without issue.
I've read posts from 5 years ago about this, and the only resolution that seemed to work was rebooting the Docker host, which I've done, but it didn't help.
Any thoughts / comments?!
Thanks
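For what it's worth, a few checks that usually narrow this kind of thing down; console-www-dev3 and example.com are placeholders for the affected container and the internet host, and they assume curl and getent exist in the image:

# confirm the replica network exists and see which containers are attached to it
docker network inspect dev-vlt-console-dev3

# check DNS and outbound connectivity from inside the affected container
docker exec -it console-www-dev3 cat /etc/resolv.conf
docker exec -it console-www-dev3 getent hosts example.com
docker exec -it console-www-dev3 curl -v https://example.com

# compare the two networks' subnets (look at the IPAM section) for overlaps
docker network inspect dev-vlt-console dev-vlt-console-dev3

User-defined bridge networks all use the daemon's embedded DNS and the same outbound NAT, so a second bridge should behave like the first; when it doesn't, overlapping subnets allocated from the default address pool (for example colliding with a VPN route) or per-bridge host firewall rules are the usual suspects.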
I'm trying to set up a local development environment with Docker Compose that has a MongoDB cluster as the database. I chose Mongo Express as the database admin UI so I can look inside the MongoDB database. It takes some time for the cluster to accept connections; I have the 3 db containers listed in depends_on, but it seems I have to do more than that, based on the Docker Compose documentation here. I can't seem to find a good example of waiting for MongoDB clusters. Has anyone figured this out already? Please share, that would be great. Thank you in advance!
Here's the docker-compose.yml file:
version: '3.9'
services:
mongodb-primary:
image: 'bitnami/mongodb:latest'
environment:
- MONGODB_ADVERTISED_HOSTNAME=mongodb-primary
- MONGODB_REPLICA_SET_MODE=primary
- MONGODB_ROOT_PASSWORD=password
- MONGODB_REPLICA_SET_KEY=replicasetkey
volumes:
- 'mongodb_master_data:/bitnami'
mongodb-secondary:
image: 'bitnami/mongodb:latest'
depends_on:
- mongodb-primary
environment:
- MONGODB_ADVERTISED_HOSTNAME=mongodb-secondary
- MONGODB_REPLICA_SET_MODE=secondary
- MONGODB_INITIAL_PRIMARY_HOST=mongodb-primary
- MONGODB_INITIAL_PRIMARY_PORT_NUMBER=27017
- MONGODB_INITIAL_PRIMARY_ROOT_PASSWORD=password
- MONGODB_REPLICA_SET_KEY=replicasetkey
mongodb-arbiter:
image: 'bitnami/mongodb:latest'
depends_on:
- mongodb-primary
environment:
- MONGODB_ADVERTISED_HOSTNAME=mongodb-arbiter
- MONGODB_REPLICA_SET_MODE=arbiter
- MONGODB_INITIAL_PRIMARY_HOST=mongodb-primary
- MONGODB_INITIAL_PRIMARY_PORT_NUMBER=27017
- MONGODB_INITIAL_PRIMARY_ROOT_PASSWORD=password
- MONGODB_REPLICA_SET_KEY=replicasetkey
dbadmin:
image: mongo-express
restart: always
ports:
- 8081:8081
depends_on:
- mongodb-primary
- mongodb-secondary
- mongodb-arbiter
environment:
ME_CONFIG_MONGODB_URL: mongodb://root:password#mongodb-primary:27017,mongodb-secondary:27017,mongodb-arbiter:27017?replicaSet=replicaset
ME_CONFIG_BASICAUTH_USERNAME: admin
ME_CONFIG_BASICAUTH_PASSWORD: mexpress
volumes:
mongodb_master_data:
driver: local
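One pattern that works for this, assuming a Compose/engine version that supports the long form of depends_on with conditions (recent docker compose v2 does), is to put a healthcheck on the primary and make mongo-express wait for it. The mongosh call is an assumption about what the Bitnami image ships; older tags only have the legacy mongo shell:

  mongodb-primary:
    image: 'bitnami/mongodb:latest'
    # ...same environment and volumes as above...
    healthcheck:
      test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      timeout: 5s
      retries: 10
      start_period: 40s

  dbadmin:
    image: mongo-express
    restart: always
    depends_on:
      mongodb-primary:
        condition: service_healthy

A plain ping only proves the mongod process answers; a stricter check could wait for an elected primary, for example via db.hello().isWritablePrimary on recent versions. Even without any of this, restart: always on dbadmin means mongo-express keeps retrying until the replica set is up, so the healthcheck mainly removes the noisy crash loop during start-up.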
I use this docker-compose, which runs MongoDB plus the mongo-express web interface:
https://gist.github.com/adamelliotfields/cd49f056deab05250876286d7657dc4b
How can I run containers with a MongoDB cluster and mongo-express as the web interface?
I have attached my docker-compose file: I got it from Bitnami and added mongo-express, but the web interface does not work.
That is:
version: '3.1'
services:
  mongodb-sharded:
image: docker.io/bitnami/mongodb-sharded:4.4
environment:
- MONGODB_ADVERTISED_HOSTNAME=mongodb-sharded
- MONGODB_SHARDING_MODE=mongos
- MONGODB_CFG_PRIMARY_HOST=mongodb-cfg
- MONGODB_CFG_REPLICA_SET_NAME=cfgreplicaset
- MONGODB_REPLICA_SET_KEY=replicasetkey123
- MONGODB_ROOT_PASSWORD=example
ports:
- "27017:27017"
mongodb-shard0:
image: docker.io/bitnami/mongodb-sharded:4.4
environment:
- MONGODB_ADVERTISED_HOSTNAME=mongodb-shard0
- MONGODB_SHARDING_MODE=shardsvr
- MONGODB_MONGOS_HOST=mongodb-sharded
- MONGODB_ROOT_PASSWORD=example
- MONGODB_REPLICA_SET_MODE=primary
- MONGODB_REPLICA_SET_KEY=replicasetkey123
- MONGODB_REPLICA_SET_NAME=shard0
volumes:
- 'shard0_data:/bitnami'
mongodb-cfg:
image: docker.io/bitnami/mongodb-sharded:4.4
environment:
- MONGODB_ADVERTISED_HOSTNAME=mongodb-cfg
- MONGODB_SHARDING_MODE=configsvr
- MONGODB_ROOT_PASSWORD=example
- MONGODB_REPLICA_SET_MODE=primary
- MONGODB_REPLICA_SET_KEY=replicasetkey123
- MONGODB_REPLICA_SET_NAME=cfgreplicaset
volumes:
- 'cfg_data:/bitnami'
mongo-express:
image: mongo-express
restart: always
ports:
- 8081:8081
# - 27017:27017
environment:
ME_CONFIG_MONGODB_ADMINUSERNAME: root
ME_CONFIG_MONGODB_ADMINPASSWORD: example
XX_CONFIG_MONGODB_URL: mongodb://root:example#mongodb-sharded:27017/
ME_CONFIG_MONGODB_URL: mongodb://root:example#mongodb-cfg:27017/
# ongodb://[username:password#]host1[:port1][,...hostN[:portN]][/[defaultauthdb][?options]]
volumes:
  shard0_data:
    driver: local
  cfg_data:
    driver: local
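For comparison, the shape of mongo-express configuration I would expect to work against the mongos router is sketched below; this is an assumption based on the services above. Note the @ separating the credentials from the host (Mongo connection strings use @, not #) and that it points at mongodb-sharded, i.e. the mongos, rather than the config server:

  mongo-express:
    image: mongo-express
    restart: always
    ports:
      - 8081:8081
    depends_on:
      - mongodb-sharded
    environment:
      ME_CONFIG_MONGODB_URL: mongodb://root:example@mongodb-sharded:27017/
      ME_CONFIG_BASICAUTH_USERNAME: admin
      ME_CONFIG_BASICAUTH_PASSWORD: pass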
I run Airflow on my local machine with docker-compose:
version: '2'
services:
postgresql:
image: bitnami/postgresql:10
volumes:
- postgresql_data:/bitnami/postgresql
environment:
- POSTGRESQL_DATABASE=bitnami_airflow
- POSTGRESQL_USERNAME=bn_airflow
- POSTGRESQL_PASSWORD=bitnami1
- ALLOW_EMPTY_PASSWORD=yes
redis:
image: bitnami/redis:5.0
volumes:
- redis_data:/bitnami
environment:
- ALLOW_EMPTY_PASSWORD=yes
airflow-scheduler:
image: bitnami/airflow-scheduler:1
environment:
- AIRFLOW_FERNET_KEY=46BKJoQYlPPOexq0OhDZnIlNepKFf87WFwLbfzqDDho=
- AIRFLOW_DATABASE_NAME=bitnami_airflow
- AIRFLOW_DATABASE_USERNAME=bn_airflow
- AIRFLOW_DATABASE_PASSWORD=bitnami1
- AIRFLOW_EXECUTOR=CeleryExecutor
- AIRFLOW_LOAD_EXAMPLES=no
volumes:
- airflow_scheduler_data:/bitnami
- ./airflow/dags:/opt/bitnami/airflow/dags
- ./airflow/plugins:/opt/bitnami/airflow/plugins
airflow-worker:
image: bitnami/airflow-worker:1
environment:
- AIRFLOW_FERNET_KEY=46BKJoQYlPPOexq0OhDZnIlNepKFf87WFwLbfzqDDho=
- AIRFLOW_EXECUTOR=CeleryExecutor
- AIRFLOW_DATABASE_NAME=bitnami_airflow
- AIRFLOW_DATABASE_USERNAME=bn_airflow
- AIRFLOW_DATABASE_PASSWORD=bitnami1
- AIRFLOW_LOAD_EXAMPLES=no
volumes:
- airflow_worker_data:/bitnami
- ./airflow/dags:/opt/bitnami/airflow/dags
- ./airflow/plugins:/opt/bitnami/airflow/plugins
airflow:
image: bitnami/airflow:1
environment:
- AIRFLOW_FERNET_KEY=46BKJoQYlPPOexq0OhDZnIlNepKFf87WFwLbfzqDDho=
- AIRFLOW_DATABASE_NAME=bitnami_airflow
- AIRFLOW_DATABASE_USERNAME=bn_airflow
- AIRFLOW_DATABASE_PASSWORD=bitnami1
- AIRFLOW_USERNAME=user
- AIRFLOW_PASSWORD=password
- AIRFLOW_EXECUTOR=CeleryExecutor
- AIRFLOW_LOAD_EXAMPLES=yes
ports:
- '8080:8080'
volumes:
- ./airflow/dags:/opt/bitnami/airflow/dags
- ./airflow/plugins:/opt/bitnami/airflow/plugins
volumes:
airflow_scheduler_data:
driver: local
airflow_worker_data:
driver: local
airflow_data:
driver: local
postgresql_data:
driver: local
redis_data:
driver: local
But when I sign in to the web UI, I see:
"The scheduler does not appear to be running.
The DAGs list may not update, and new tasks will not be scheduled."
Why?
When I use the official Docker images, there is no such problem.
And another problem: unless I toggle AIRFLOW_LOAD_EXAMPLES between yes and no and restart docker-compose, I don't see an updated list of DAGs.
When I used the Puckel docker-compose for Airflow 1, everything worked: https://github.com/puckel/docker-airflow/blob/master/README.md
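That banner means the webserver has not seen a recent scheduler heartbeat in the metadata database, which with this compose file usually comes down to the scheduler container having exited or being unable to reach postgresql or redis. A quick way to check, using the service names from the file above:

docker-compose ps                        # is airflow-scheduler up, or restarting/exited?
docker-compose logs airflow-scheduler    # connection errors to postgresql/redis show up here
docker-compose logs airflow-worker

If the scheduler is indeed down, that would also explain the second problem: new DAG files are never parsed, which lines up with the banner's own warning that the DAGs list may not update.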
I'm using a .yml file to launch an Odoo instance and its PostgreSQL database behind Traefik, and it works perfectly fine. It assigns the domain and subdomain with a Let's Encrypt certificate, perfect.
My issue now is that I wish to launch more of these but I haven't succeeded in doing so.
I've tried:
Changing the ports: in the odoo section
Removing the ports: section from the odoo section
Editing the ports: in the traefik section
The .yml:
version: "3"
services:
odoo:
image: odoo:12.0
depends_on:
- db
restart: unless-stopped
networks:
- internal
ports:
- "8069:8069"
- "8072:8072"
environment:
- HOST=db
- USER=${ODOO_USER}
- PASSWORD=${ODOO_PASS}
volumes:
- ./odoo/odoo-web-data:/var/lib/odoo
- ./odoo/config:/etc/odoo
- ./odoo/addons:/mnt/extra-addons
- ./odoo/logs:/var/log/odoo
labels:
- 'traefik.http.routers.odoo.rule=Host(`${ODOO_TRAEFIK_URL}`)'
- 'traefik.http.routers.odoo.entrypoints=websecure'
- 'traefik.http.routers.odoo.tls.certresolver=odoo'
- 'traefik.port=8069'
- "traefik.http.routers.http-catchall.rule=hostregexp(`{host:.+}`)"
- "traefik.http.routers.http-catchall.entrypoints=web"
- "traefik.http.routers.http-catchall.middlewares=redirect-to-https#docker"
- "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"
db:
image: postgres:10
restart: unless-stopped
networks:
- internal
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=${ODOO_USER}
- POSTGRES_PASSWORD=${ODOO_PASS}
- PGDATA=/var/lib/postgresql/data/pgdata
volumes:
- ./pgdata:/var/lib/postgresql/data/pgdata
traefik:
image: traefik:v2.0
networks:
- internal
- web
ports:
# The HTTP port
- "80:80"
- "443:443"
# The Web UI (enabled by --api.insecure=true)
- "8080:8080"
volumes:
- "./traefik/letsencrypt:/letsencrypt"
- "./traefik/traefik.yml:/etc/traefik.yml"
- "/var/run/docker.sock:/var/run/docker.sock"
command:
- "--log.level=DEBUG"
- "--api.insecure=true"
- "--providers.docker"
- "--providers.docker.defaultRule=Host(`{{ trimPrefix `/` .Name }}.${TRAEFIK_DEFAULT_DOMAIN}`)"
- "--entryPoints.web.address=:80"
- "--entryPoints.websecure.address=:443"
- "--certificatesResolvers.odoo.acme.httpchallenge=true"
- "--certificatesresolvers.odoo.acme.httpchallenge.entrypoint=web"
- "--certificatesresolvers.odoo.acme.email=${ACME_EMAIL}"
- "--certificatesresolvers.odoo.acme.storage=/letsencrypt/acme.json"
networks:
internal:
web:
external: true
Assuming I already have Traefik + Odoo + Postgres running, how could I go about launching more instances of these together?
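A minimal sketch of the pattern that usually works for a second instance; odoo2, db2 and ODOO2_TRAEFIK_URL are placeholders, and it assumes the new services sit on a network the Traefik container is attached to. The key points are to drop the published ports: (Traefik reaches the container over the Docker network, so 8069/8072 only need to be bound on the host once, if at all) and to give every Traefik router a unique name so the labels don't collide with the first instance; with Traefik v2 the backend port is normally set via the loadbalancer.server.port label rather than traefik.port:

  odoo2:
    image: odoo:12.0
    depends_on:
      - db2
    restart: unless-stopped
    networks:
      - internal
    environment:
      - HOST=db2
      - USER=${ODOO_USER}
      - PASSWORD=${ODOO_PASS}
    volumes:
      - ./odoo2/odoo-web-data:/var/lib/odoo
      - ./odoo2/config:/etc/odoo
      - ./odoo2/addons:/mnt/extra-addons
    labels:
      - 'traefik.http.routers.odoo2.rule=Host(`${ODOO2_TRAEFIK_URL}`)'
      - 'traefik.http.routers.odoo2.entrypoints=websecure'
      - 'traefik.http.routers.odoo2.tls.certresolver=odoo'
      - 'traefik.http.services.odoo2.loadbalancer.server.port=8069'
  db2:
    image: postgres:10
    restart: unless-stopped
    networks:
      - internal
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=${ODOO_USER}
      - POSTGRES_PASSWORD=${ODOO_PASS}
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      - ./pgdata2:/var/lib/postgresql/data/pgdata

If the second instance lives in its own compose project instead, the same idea applies, but its services need to join an external network that Traefik is also attached to (like the existing web network) so requests can actually reach them.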