How to mongorestore a DB with Docker - mongodb

I've got this docker-compose.yml:
version: '2'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - .:/code
    depends_on:
      - db1
    links:
      - db1:mongo
  db1:
    image: mongo
Dockerfile:
FROM node:4.4.2
ADD . /code
WORKDIR /code
RUN npm i
CMD node app.js
I store the dump in the project files, so the folder is shared with the container. What should the process of restoring the dump look like? In the web container I don't have access to the mongorestore command ...
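One possible approach (a sketch only, and the dump/ folder name is an assumption): since the mongo image already ships mongorestore, run it inside the db1 container rather than the web container.
# copy the dump folder from the project into the running db1 container
docker cp ./dump $(docker-compose ps -q db1):/dump
# run mongorestore inside the db1 container against the copied dump
docker-compose exec db1 mongorestore /dump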

Related

Why is my dockerized app's data deleted after a while?

I have a NestJS app. When I deploy it on my Ubuntu server using docker compose up --build, the data is deleted automatically after a few hours.
Dockerfile:
FROM node:14.16.0-alpine3.13
RUN addgroup app && adduser -S -G app app
RUN mkdir /app && chown app:app /app
USER app
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "run", "start:prod"]
And my docker-compose file:
version: '3'
services:
  api:
    depends_on:
      - db
    restart: unless-stopped
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - '3000:3000'
    environment:
      DB_URL: mongodb://db_myapp
      JWP_PRIVATE_KEY: test
      PORT: 3000
  db:
    image: mongo:xenial
    container_name: db_myapp
    volumes:
      - myapp:/data/db
    ports:
      - '27017:27017'
volumes:
  myapp:
I tried attaching the volume to a local directory on my server:
db:
  image: mongo:xenial
  container_name: db_myapp
  volumes:
    - ../myapp_db_data:/data/db
  ports:
    - '27017:27017'
But I faced this error when running the app:
MongooseServerSelectionError: getaddrinfo EAI_AGAIN db_myapp
I'm looking for a way to keep my app's data persistent across docker compose up --build, because I need to update the backend app in production.
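For reference, a named volume like myapp survives docker compose up --build; it only disappears if it is removed explicitly. A quick check sketch (volume and service names taken from the compose file above; Compose prefixes the real volume name with the project name):
docker volume ls | grep myapp    # the named volume should still be listed after a rebuild
docker compose up --build -d     # rebuilds the images but reuses the existing volume
docker compose down              # removes containers, keeps the named volume
docker compose down -v           # careful: this deletes the volume and all data in it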

Mongodb backup using docker-compose

I'm very new to Docker and I'm confused about how to back up the MongoDB database. I'm using 2 containers, both connected through a network:
the express app container
along with the MongoDB container
Based on this, a Docker image/script needs to be written so that the database gets uploaded at intervals to cloud storage such as Google Drive (a sketch of one possible approach follows the Dockerfile below).
My current files are as follows:
docker-compose.yml
version: "3.7"
services:
app:
build: .
container_name: "server-app-service"
#restart: always
ports:
- ${PORT}:${PORT}
working_dir: /oj-server
volumes:
- ./:/oj-server
links:
- mongodb
env_file:
- .env.production
environment:
- MONGO_URI=mongodb://mongodb:27017/beta1
command: ["yarn", "start"]
mongodb:
image: mongo:latest
container_name: "mongodb-service"
ports:
- 27017:27017
volumes:
- mongodb-data:/data/db
volumes:
mongodb-data:
Dockerfile
FROM node:16.14.2-alpine
RUN apk add --no-cache libc6-compat build-base g++ openjdk11 python3
WORKDIR /oj-server
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
# expose the port
EXPOSE ${PORT}
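A rough sketch of one way to do the periodic backup (not part of the question): run a script like the following from cron on the host, using mongodump inside the mongodb-service container. The rclone remote named gdrive for Google Drive is an assumption and has to be configured separately.
#!/bin/sh
set -e
STAMP=$(date +%Y%m%d-%H%M%S)
# dump the beta1 database from the running container as a gzipped archive on stdout
docker exec mongodb-service mongodump --archive --gzip --db beta1 > "backup-$STAMP.archive.gz"
# upload the archive to Google Drive (assumes an rclone remote called "gdrive")
rclone copy "backup-$STAMP.archive.gz" gdrive:mongo-backups/
A matching restore would pipe the archive back in with docker exec -i mongodb-service mongorestore --archive --gzip < backup-<stamp>.archive.gz.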

Why is my flask server unable to speak to the postgres database using docker-compose?

I have posted the relevant files below. Everything builds as expected; however, when trying to use SQLAlchemy to make a call to the database, I invariably get the following error:
OperationalError: (psycopg2.OperationalError) could not translate host name "db" to address: Name or service not known
The string that SQLAlchemy is using (as given in .env.web.dev) is: postgres://postgres:postgres@db:5432/spaceofmotion.
What am I doing wrong?
docker-compose.yml:
version: '3'
services:
  db:
    container_name: db
    ports:
      - '5432:5432'
    expose:
      - '5432'
    build:
      context: ./
      dockerfile: Dockerfile.postgres
    networks:
      - db_web
  web:
    container_name: web
    restart: always
    build:
      context: ../
      dockerfile: Dockerfile.web
    ports:
      - '5000:5000'
    env_file:
      - ./.env.web.dev
    networks:
      - db_web
    depends_on:
      - db
      - redis
      - celery
  redis:
    image: 'redis:5.0.7-buster'
    container_name: redis
    command: redis-server
    ports:
      - '6379:6379'
  celery:
    container_name: celery
    build:
      context: ../
      dockerfile: Dockerfile.celery
    env_file:
      - ./.env.celery.dev
    command: celery worker -A a.celery --loglevel=info
    depends_on:
      - redis
  client:
    container_name: react-app
    build:
      context: ../a/client
      dockerfile: Dockerfile.client
    volumes:
      - '../a/client:/src/app'
      - '/src/app/node_modules'
    ports:
      - '3000:3000'
    depends_on:
      - "web"
    environment:
      - NODE_ENV=development
      - HOST_URL=http://localhost:5000
networks:
  db_web:
    driver: bridge
Dockerfile.postgres:
FROM postgres:latest
ENV POSTGRES_DB spaceofmotion
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD postgres
COPY ./spaceofmotion-db.sql /
COPY ./docker-entrypoint-initdb.d/restore-database.sh /docker-entrypoint-initdb.d/
restore-database.sh:
file="/spaceofmotion-db.sql"
psql -U postgres spaceofmotion < "$file"
Dockerfile.web:
FROM python:3.7-slim-buster
RUN apt-get update
RUN apt-get -y install python-pip libpq-dev python-dev && \
    pip install --upgrade pip && \
    pip install psycopg2
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
CMD ["python", "manage.py", "runserver"]
.env.web.dev:
DATABASE_URL=postgres://postgres:postgres@db:5432/spaceofmotion
... <other config vars> ...
Is this specifically coming from your celery container?
Your db container declares
networks:
  - db_web
but the celery container has no such declaration; that means that it will be on the default network Compose creates for you. Since the two containers aren't on the same network they can't connect to each other.
There's nothing wrong with using the Compose-managed default network, especially for routine Web applications, and I'd suggest deleting all of the networks: blocks in the entire file. (You also don't need to specify container_name:, since Compose will come up with reasonable names on its own.)
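To confirm the diagnosis before changing anything, a quick sketch (service names from the compose file above; Compose prefixes the real network name with the project name, written here as <project>):
docker network ls                            # look for <project>_db_web and <project>_default
docker network inspect <project>_db_web      # db and web are attached here, celery is not
docker-compose exec celery getent hosts db   # fails until celery and db share a network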

Why isn't my MongoDB image pushed to Azure Container Registry?

The application image of my Node.js app is pushed to Azure Container Registry, but my mongo image isn't. Why?
This is my Dockerfile:
FROM node:8
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8085
CMD [ "npm", "start" ]
And this is my docker-compose file:
version: "3"
services:
app:
image: rocketrcontainerregister.azurecr.io/rocketr_app:dev
build:
context: .
dockerfile: ./Dockerfile
ports:
- "8085:8085"
depends_on:
- mongo
links:
- mongo
container_name: rocketrapi
mongo:
image: mongo
ports:
- "27017:27017"
container_name: mongodb
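The short answer is that the mongo service has no build: section and its image: is just mongo, the public image pulled from Docker Hub; it is never built or tagged with your ACR name, so nothing ends up in the registry for it. If you really want a copy of it in ACR, a rough sketch (registry name taken from the compose file above; assumes you are already logged in, e.g. via az acr login):
docker pull mongo
docker tag mongo rocketrcontainerregister.azurecr.io/mongo:latest
docker push rocketrcontainerregister.azurecr.io/mongo:latest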

docker-compose: run rails db:setup and rake tasks to init data

I can't find a way to execute the following commands from a docker-compose.yml file:
rails db:setup
rails db:init_data
I tried to do that as follows and it failed:
version: '3'
services:
  web:
    build: .
    links:
      - database
      - redis
    ports:
      - "3000:3000"
    volumes:
      - .:/usr/src/app
    env_file:
      - .env/development/database
      - .env/development/web
    command: ["rails", "db:setup"]
    command: ["rails", "db:init_data"]
  redis:
    image: redis
  database:
    image: postgres
    env_file:
      - .env/development/database
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
Any idea what's going wrong here? Thank you.
The source code is on GitHub.
You can do two things in my opinion:
Change command: to the following, because two command: entries are not allowed in a Compose file:
command:
  - /bin/bash
  - -c
  - |
    rails db:setup
    rails db:init_data
Use the supervisord app (see the supervisord web page).
The solution that worked for me was to remove the CMD command from the Dockerfile, because using the command option in docker-compose.yml would have overridden the CMD command anyway.
So the Dockerfile will look like this:
FROM ruby:2.5.1
LABEL maintainer="DECATHLON"
RUN apt-get update -yqq
RUN apt-get install -yqq --no-install-recommends nodejs
COPY Gemfile* /usr/src/app/
WORKDIR /usr/src/app
RUN bundle install
COPY . /usr/src/app/
Then add the command option to the docker-compose file:
version: '3'
services:
  web:
    build: .
    links:
      - database
      - redis
    ports:
      - "3000:3000"
    volumes:
      - .:/usr/src/app
    env_file:
      - .env/development/database
      - .env/development/web
    command:
      - |
        rails db:reset
        rails db:init_data
        rails s -p 3000 -b '0.0.0.0'
  redis:
    image: redis
  database:
    image: postgres
    env_file:
      - .env/development/database
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
If the above solution does not work for somebody, there is an alternative solution:
Create a shell script in the project root and name it entrypoint.sh, for example:
#!/bin/bash
set -e
bundle exec rails db:reset
bundle exec rails db:migrate
exec "$#"
Declare the entrypoint option in the docker-compose file:
version: '3'
services:
  web:
    build: .
    entrypoint:
      - /bin/sh
      - ./entrypoint.sh
    depends_on:
      - database
      - redis
    ports:
      - "3000:3000"
    volumes:
      - .:/usr/src/app
    env_file:
      - .env/development/database
      - .env/development/web
    command: ['./wait-for-it.sh', 'database:5432', '--', 'bundle', 'exec', 'rails', 's', '-p', '3000', '-b', '0.0.0.0']
  database:
    image: postgres:9.6
    env_file:
      - .env/development/database
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
I also use the wait-for-it script to ensure the DB is started before Rails.
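wait-for-it.sh is not something Rails or Compose provide; it is a small bash script usually copied from the vishnubob/wait-for-it GitHub project (URL assumed here) and committed next to the compose file:
curl -fsSL -o wait-for-it.sh https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh
chmod +x wait-for-it.sh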
Hope this helps. I pushed the modifications to the GitHub repo.