I want to be able to recreate some base data from a dump whenever the mongo-data folder is deleted and docker-compose up is called.
The problem I'm facing is that the app container does not have mongo installed.
These are my files:
docker-compose.yml
version: "3"
services:
app:
build:
context: .
dockerfile: Dockerfile
ports:
- "3000:3000"
volumes:
- .:/testapp
environment:
DB_URL: mongodb://test_mongo/appdb
depends_on:
- mongo
mongo:
image: "mongo:4.4.4"
restart: always
container_name: test_mongo
ports:
- "27017:27017"
- "27018:27018"
volumes:
- ./mongo-data:/data/db
Dockerfile:
FROM node:14.15.5
RUN mkdir -p /testapp
WORKDIR /testapp
EXPOSE 3000
ENTRYPOINT ["./entrypoint.sh"]
entrypoint.sh:
#!/bin/bash
sh ./__backup__/db/restore.sh
sh ./__backup__/app/restore.sh
yarn install
yarn start:dev
__backup__/app/restore.sh:
#!/bin/bash
if [[ ! -d '/testapp/uploads' ]]
then
tar -xvf ./uploads.tar.gz -C /testapp/
fi
__backup__/db/restore.sh:
#!/bin/bash
until mongo --eval "print(\"waited for connection\")"
do
sleep 1
done
if [[ ! -d '/testapp/mongo-data' ]]
then
mongorestore --archive=./db.dump
fi
Is there any way to run these restore.sh files after the mongo service is up, or to run mongo from the app container?
If I understand the question correctly, you want to restore MongoDB to a certain state every time your app launches, and you're asking if there's a way to do it after the MongoDB container launches.
There's a tool called docker-compose-wait; quoting its GitHub README, it is "a small command-line utility to wait for other docker images to be started while using docker-compose".
It's fairly simple to use: add it to the image, run /wait to wait for the services to be up, and then move on to whatever you want to do next.
So according to your current setup, your Dockerfile could be like this:
FROM node:14.15.5
## Add the wait script to the image
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.9.0/wait /wait
RUN chmod +x /wait
RUN mkdir -p /testapp
WORKDIR /testapp
ADD . .
EXPOSE 3000
## Launch the wait tool and then your entrypoint.sh
ENTRYPOINT /wait && /testapp/entrypoint.sh
Your entrypoint.sh already calls the restore scripts. In your docker-compose.yml, add an environment variable that lists the services to wait for:
version: "3"
services:
app:
build:
context: .
dockerfile: Dockerfile
ports:
- "3000:3000"
volumes:
- .:/testapp
environment:
DB_URL: mongodb://test_mongo/appdb
WAIT_HOSTS: mongo:27017
depends_on:
- mongo
mongo:
image: "mongo:4.4.4"
restart: always
container_name: test_mongo
ports:
- "27017:27017"
- "27018:27018"
volumes:
- ./mongo-data:/data/db
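One more thing to keep in mind: since the app container has no local mongod (the problem you mention), the restore script has to target the mongo container over the network. A minimal sketch of the db restore script under that assumption (it requires the mongo shell and mongorestore to be installed in the app image, and uses the test_mongo hostname from your compose file):
#!/bin/bash
# __backup__/db/restore.sh (sketch) -- point the tools at the mongo container
HOST=test_mongo   # container_name of the mongo service in docker-compose.yml
until mongo --host "$HOST" --eval 'print("waited for connection")'
do
  sleep 1
done
# same guard as the original script: only restore when the data folder is missing
if [[ ! -d '/testapp/mongo-data' ]]
then
  mongorestore --host "$HOST" --archive=./db.dump
fi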
Related
I'm using docker-compose version 1.25.5, build 8a1c60f6, and Docker version 19.03.8, build afacb8b.
I'm trying to initialise the database with users in a MongoDB container using docker-entrypoint-initdb.d, but the js scripts aren't being executed when the container starts.
I know /data/db must be empty for them to execute, so I've been deleting it every time before starting, but they still don't run. Going into the container and manually executing mongo mongo-init.js works.
I'm not sure why it's not working when it should.
docker-compose.yml:
version: '3'
services:
mongodb:
container_name: "mongodb"
image: tonyh308/cv-mongodb:1
build: ./mongo
container_name: mongodb
restart: always
ports:
- 27017:27017
environment:
MONGO_INITDB_DATABASE: test
volumes:
- ./docker-entrypoint-initdb.d/mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
- ./mongo/mongodb:/data/db
labels:
com.qa.description: "MongoDb Container for application."
# other services ...
Dockerfile:
FROM mongo:3.6.18-xenial
RUN apt-get update
COPY ./Customers /Customers
COPY ./test /test
WORKDIR /data
ENTRYPOINT ["mongod", "--bind_ip_all"]
EXPOSE 27017
Feb 26 2021: still no solution
This is my first time with Docker. I've been working on this problem for two days; it would make me very happy to find a solution.
I'm running docker-compose.yml file with "docker-compose up":
version: '3.3'
services:
base:
networks:
- brain_storm-network
volumes:
- brain_storm-storage:/usr/src/brain_storm
build: "./brain_storm"
data_base:
image: mongo
volumes:
- brain_storm-storage:/usr/src/brain_storm
networks:
- brain_storm-network
ports:
- '27017:27017'
api:
build: "./brain_storm/api"
volumes:
- brain_storm-storage:/usr/src/brain_storm
networks:
- brain_storm-network
ports:
- 5000:5000
depends_on:
- data_base
- base
restart: on-failure
The base Dockerfile inside ./brain_storm does the following:
FROM brain_storm-base:latest
RUN mkdir -p /usr/src/brain_storm/brain_storm
ADD . /usr/src/brain_storm/brain_storm
and when running the image built from the Dockerfile inside brain_storm/api
FROM brain_storm-base:latest
CMD cd /usr/src/brain_storm \
&& python -m brain_storm.api run-server -h 0.0.0.0 -p 5000 -d mongodb://0.0.0.0:27017
I'm getting this error:
brain_storm_api_1 exited with code 1
api_1 | /usr/local/bin/python: Error while finding module specification for 'brain_storm.api' (ModuleNotFoundError: No module named 'brain_storm')
pwd says I'm in '/' and not in the current directory when running the base Dockerfile, so that might be the problem. But how do I solve it without hard-coding /home/user/brain_storm in the Dockerfile? I want to keep the location of the brain_storm folder general.
How can I make the Dockerfile see and take the files from the current directory (where the Dockerfile itself is)?
You should probably define the WORKDIR instruction in both your Dockerfiles. WORKDIR sets the working directory of the container at that point in the build, and any subsequent RUN, CMD, ADD, COPY, or ENTRYPOINT instruction is executed in that working directory:
base:
FROM brain_storm-base:latest
WORKDIR /usr/src/brain_storm
COPY . .
api:
FROM brain_storm-base:latest
WORKDIR /usr/src/brain_storm
CMD python -m brain_storm.api run-server -h 0.0.0.0 -p 5000 -d mongodb://0.0.0.0:27017
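A quick way to confirm the working directory is what you expect (a sketch, using the api service name from your compose file):
docker-compose build api
docker-compose run --rm api pwd   # should print /usr/src/brain_storm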
This is my Dockerfile:
FROM mongo
WORKDIR /usr/src/app
COPY db /usr/src/app/db
COPY replica.js /usr/src/app/
CMD mongo
The replica.js is as follows:
rs.initiate();
This is my docker-compose file
mongo_server:
image: mongo
hostname: mongo_server.$ENV_NAME
build:
context: ./mongo
dockerfile: Dockerfile
expose:
- 27017
ports:
- "$MONGO_PORT:27017"
restart: always
networks:
localnet:
aliases:
- mongo_server.$ENV_NAME
command: --replSet $MONGO_REPLICA --bind_ip_all
volumes:
- "mongovolume:/data/db"
The problem is that after docker-compose up succeeds, I still need to run two commands manually:
docker exec 2b2 sh -c "mongo < /usr/src/app/replica.js" # 2b2 is id of container mongo
and
docker exec 2b2 sh -c "mongorestore --drop -d mydb /usr/src/app/db"
Now the replica set is initiated and the database is restored. My question is: could I make this happen automatically, for example by moving these commands to an entrypoint.sh called from the Dockerfile, or by configuring something in docker-compose.yml, to reduce the manual work?
There is definitely a way: add another container to your docker-compose file:
mongo_restore:
image: mongo
build:
context: ./mongo
dockerfile: Dockerfile
networks:
localnet:
aliases:
- mongo_server.$ENV_NAME
entrypoint:
- sh
command:
- -c
- |
# Step 1: Wait until mongo_server is fully up and running. Please insert your own code to check.
# Step 2: Execute your restore script but make sure to target mongo_server instead
volumes:
- "mongovolume:/data/db"
There might be some syntax errors here and there, but the idea is the same; I have used this method in some other projects. :)
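As a sketch of what those two placeholder steps could look like, assuming the mongo shell and mongorestore are available in the image and that replica.js and the db dump were copied in by the Dockerfile above:
entrypoint:
  - sh
command:
  - -c
  - |
    # Step 1: wait until mongo_server answers
    until mongo --host mongo_server.$ENV_NAME --eval 'db.runCommand({ ping: 1 })'; do sleep 1; done
    # Step 2: initiate the replica set and restore, targeting mongo_server
    mongo --host mongo_server.$ENV_NAME < /usr/src/app/replica.js
    mongorestore --host mongo_server.$ENV_NAME --drop -d mydb /usr/src/app/db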
I have posted the relevant files below. Everything builds as expected, however when trying to use SQLAlchemy to make a call to the database, I invariably get the following error:
OperationalError: (psycopg2.OperationalError) could not translate host name "db" to address: Name or service not known
The connection string that SQLAlchemy is using (as given in .env.web.dev) is postgres://postgres:postgres@db:5432/spaceofmotion.
What am I doing wrong?
docker-compose.yml:
version: '3'
services:
db:
container_name: db
ports:
- '5432:5432'
expose:
- '5432'
build:
context: ./
dockerfile: Dockerfile.postgres
networks:
- db_web
web:
container_name: web
restart: always
build:
context: ../
dockerfile: Dockerfile.web
ports:
- '5000:5000'
env_file:
- ./.env.web.dev
networks:
- db_web
depends_on:
- db
- redis
- celery
redis:
image: 'redis:5.0.7-buster'
container_name: redis
command: redis-server
ports:
- '6379:6379'
celery:
container_name: celery
build:
context: ../
dockerfile: Dockerfile.celery
env_file:
- ./.env.celery.dev
command: celery worker -A a.celery --loglevel=info
depends_on:
- redis
client:
container_name: react-app
build:
context: ../a/client
dockerfile: Dockerfile.client
volumes:
- '../a/client:/src/app'
- '/src/app/node_modules'
ports:
- '3000:3000'
depends_on:
- "web"
environment:
- NODE_ENV=development
- HOST_URL=http://localhost:5000
networks:
db_web:
driver: bridge
Dockerfile.postgres:
FROM postgres:latest
ENV POSTGRES_DB spaceofmotion
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD postgres
COPY ./spaceofmotion-db.sql /
COPY ./docker-entrypoint-initdb.d/restore-database.sh /docker-entrypoint-initdb.d/
restore-database.sh:
file="/spaceofmotion-db.sql"
psql -U postgres spaceofmotion < "$file"
Dockerfile.web:
FROM python:3.7-slim-buster
RUN apt-get update
RUN apt-get -y install python-pip libpq-dev python-dev && \
pip install --upgrade pip && \
pip install psycopg2
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
CMD ["python", "manage.py", "runserver"]
.env.web.dev:
DATABASE_URL=postgres://postgres:postgres@db:5432/spaceofmotion
... <other config vars> ...
Is this specifically coming from your celery container?
Your db container declares
networks:
- db_web
but the celery container has no such declaration; that means it will be on the default network Compose creates for you. Since the two containers aren't on the same network, they can't connect to each other.
There's nothing wrong with using the Compose-managed default network, especially for routine Web applications, and I'd suggest deleting all of the networks: blocks in the entire file. (You also don't need to specify container_name:, since Compose will come up with reasonable names on its own.)
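If you'd rather keep the named network, the minimal fix is to put the other services that need to reach each other on it too; a sketch for the redis and celery services (so celery can still reach both db and redis):
redis:
  image: 'redis:5.0.7-buster'
  command: redis-server
  networks:
    - db_web
celery:
  build:
    context: ../
    dockerfile: Dockerfile.celery
  env_file:
    - ./.env.celery.dev
  command: celery worker -A a.celery --loglevel=info
  depends_on:
    - redis
  networks:
    - db_web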
Based on Docker's Postgres documentation, I can create any *.sql file inside /docker-entrypoint-initdb.d and have it automatically run.
I have init.sql that contains CREATE DATABASE ronda;
In my docker-compose.yaml, I have
web:
restart: always
build: ./web
expose:
- "8000"
links:
- postgres:postgres
volumes:
- /usr/src/app/static
env_file: .env
command: /usr/local/bin/gunicorn ronda.wsgi:application -w 2 -b :8000
nginx:
restart: always
build: ./nginx/
ports:
- "80:80"
volumes:
- /www/static
volumes_from:
- web
links:
- web:web
postgres:
restart: always
build: ./postgres/
volumes_from:
- data
ports:
- "5432:5432"
data:
restart: always
build: ./postgres/
volumes:
- /var/lib/postgresql
command: "true"
and my postgres Dockerfile:
FROM library/postgres
RUN mkdir -p /docker-entrypoint-initdb.d
COPY init.sql /docker-entrypoint-initdb.d/
Running docker-compose build and docker-compose up work fine, but the database ronda is not created.
This is how I use Postgres in my projects and preload the database.
file: docker-compose.yml
db:
container_name: db_service
build:
context: .
dockerfile: ./Dockerfile.postgres
ports:
- "5432:5432"
volumes:
- /var/lib/postgresql/data/
This Dockerfile loads the file named pg_dump.backup (binary dump) or pg_dump.sql (plain-text dump) if it exists in the root folder of the project.
file: Dockerfile.postgres
FROM postgres:9.6-alpine
ENV POSTGRES_DB DatabaseName
COPY pg_dump.backup .
COPY pg_dump.sql .
# Convert a binary dump to SQL if one is present ([ ] instead of [[ ]] so it runs in the Alpine image's /bin/sh; || true keeps the build going when the file is absent)
RUN [ -e "pg_dump.backup" ] && pg_restore pg_dump.backup > pg_dump.sql || true
# Preload database on init
RUN [ -e "pg_dump.sql" ] && cp pg_dump.sql /docker-entrypoint-initdb.d/ || true
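For reference, the dump files this Dockerfile expects could be produced with something like the following (a sketch; host, user and database name are placeholders):
# plain-text dump
pg_dump -h localhost -U postgres DatabaseName > pg_dump.sql
# or a binary (custom-format) dump, which the pg_restore step above converts to SQL
pg_dump -h localhost -U postgres -Fc DatabaseName > pg_dump.backup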
If you need to retry loading the dump, you can remove the current database container with:
docker-compose rm db
Then run docker-compose up again to reload the database.
If your initialisation requirement is just to create the ronda database, then you can simply use the POSTGRES_DB environment variable as described in the documentation.
The bit of your docker-compose.yml file for the postgres service would then be:
postgres:
restart: always
build: ./postgres/
volumes_from:
- data
ports:
- "5432:5432"
environment:
POSTGRES_DB: ronda
On a side note, do not use restart: always for your data container as this container does not run any service (just the true command). Doing this you are basically telling Docker to run the true command in an infinite loop.
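Following that advice, the data service could be reduced to something like this (a sketch):
data:
  build: ./postgres/
  volumes:
    - /var/lib/postgresql
  command: "true"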
Had the same problem with postgres 11.
Some points that helped me:
run:
docker-compose rm
docker-compose build
docker-compose up
The obvious: don't run compose in detached mode. You want to see the logs.
After adding the docker-compose rm step to the mix, it finally worked.