MongoDB backup using docker-compose

I'm very new to Docker and I'm confused about how to back up the MongoDB database. I'm using two containers, both connected through a network:
the Express app container
the MongoDB container
Based on this, a Docker image or script needs to be written so that the database gets uploaded at intervals to cloud storage such as Google Drive.
My current files are as follows:
docker-compose.yml
version: "3.7"
services:
  app:
    build: .
    container_name: "server-app-service"
    #restart: always
    ports:
      - ${PORT}:${PORT}
    working_dir: /oj-server
    volumes:
      - ./:/oj-server
    links:
      - mongodb
    env_file:
      - .env.production
    environment:
      - MONGO_URI=mongodb://mongodb:27017/beta1
    command: ["yarn", "start"]
  mongodb:
    image: mongo:latest
    container_name: "mongodb-service"
    ports:
      - 27017:27017
    volumes:
      - mongodb-data:/data/db
volumes:
  mongodb-data:
Dockerfile
FROM node:16.14.2-alpine
RUN apk add --no-cache libc6-compat build-base g++ openjdk11 python3
WORKDIR /oj-server
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
# expose the port
EXPOSE ${PORT}
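A common approach is to run mongodump inside the running container on a schedule and push the resulting archive to the cloud. A minimal sketch, assuming the Compose service is named mongodb as in the file above, and that an rclone remote called gdrive has already been configured (the remote name, database name, and paths are illustrative):

```shell
# Sketch of a periodic backup helper. "gdrive" is an assumed, pre-configured
# rclone remote; "mongodb" is the compose service name from the file above.
backup_mongo() {
    stamp=$(date +%Y%m%d-%H%M%S)
    # Run mongodump inside the container and stream a gzipped archive to the host
    docker compose exec -T mongodb \
        mongodump --archive --gzip --db beta1 > "beta1-${stamp}.archive.gz"
    # Copy the archive to Google Drive via rclone
    rclone copy "beta1-${stamp}.archive.gz" gdrive:mongo-backups/
}

# Schedule it from the host's crontab, e.g. every 6 hours:
# 0 */6 * * * /path/to/backup.sh
```

A matching restore would stream the archive back in: docker compose exec -T mongodb mongorestore --archive --gzip < beta1-STAMP.archive.gz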

Related

Why my dockerized app data will be deleted after a while?

I have a NestJS app; when I deploy it on my Ubuntu server using docker compose up --build, after a few hours the data is deleted automatically.
Dockerfile:
FROM node:14.16.0-alpine3.13
RUN addgroup app && adduser -S -G app app
RUN mkdir /app && chown app:app /app
USER app
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "run", "start:prod"]
And my docker-compose file:
version: '3'
services:
  api:
    depends_on:
      - db
    restart: unless-stopped
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - '3000:3000'
    environment:
      DB_URL: mongodb://db_myapp
      JWP_PRIVATE_KEY: test
      PORT: 3000
  db:
    image: mongo:xenial
    container_name: db_myapp
    volumes:
      - myapp:/data/db
    ports:
      - '27017:27017'
volumes:
  myapp:
I tried attaching the volume to a local directory in my server:
db:
  image: mongo:xenial
  container_name: db_myapp
  volumes:
    - ../myapp_db_data:/data/db
  ports:
    - '27017:27017'
But I faced this error when running the app:
MongooseServerSelectionError: getaddrinfo EAI_AGAIN db_myapp
I'm looking for a way to keep my app's data persistent across docker compose up --build, because I need to update the backend app in production.
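The named volume in the original file is already the right persistence mechanism; a bind mount to a relative host path isn't required. Note that docker compose up --build does not delete named volumes by itself; only docker compose down -v (or docker volume rm) removes them. A sketch that keeps the named volume and addresses Mongo by its service name, which Compose always resolves on the project network (avoiding errors like the EAI_AGAIN above if the container name ever changes):

```yaml
services:
  api:
    environment:
      DB_URL: mongodb://db/myapp   # "db" is the service name, always resolvable
  db:
    image: mongo:xenial
    volumes:
      - myapp:/data/db             # named volume survives `up --build`
volumes:
  myapp:
```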

docker-entrypoint-initdb.d not executing scripts

I'm using docker-compose version 1.25.5, build 8a1c60f6 &
Docker version 19.03.8, build afacb8b
I'm trying to initialise the database with users in the MongoDB container using docker-entrypoint-initdb.d, but the JS scripts aren't executed when the container starts.
I know /data/db must be empty for them to execute, so I've been deleting it every time before starting, but they still don't run. Going into the container and manually executing mongo mongo-init.js works.
Not sure why it's not working when it should.
docker-compose.yml:
version: '3'
services:
  mongodb:
    container_name: "mongodb"
    image: tonyh308/cv-mongodb:1
    build: ./mongo
    container_name: mongodb
    restart: always
    ports:
      - 27017:27017
    environment:
      MONGO_INITDB_DATABASE: test
    volumes:
      - ./docker-entrypoint-initdb.d/mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
      - ./mongo/mongodb:/data/db
    labels:
      com.qa.description: "MongoDb Container for application."
  # other services ...
Dockerfile:
FROM mongo:3.6.18-xenial
RUN apt-get update
COPY ./Customers /Customers
COPY ./test /test
WORKDIR /data
ENTRYPOINT ["mongod", "--bind_ip_all"]
EXPOSE 27017
Feb 26 2021: still no solution
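One likely cause is the custom ENTRYPOINT in the Dockerfile: the official mongo image processes /docker-entrypoint-initdb.d scripts from its own docker-entrypoint.sh, and replacing the entrypoint with mongod directly bypasses that logic entirely. A sketch of a Dockerfile that keeps the stock entrypoint so the init scripts run on first start against an empty /data/db:

```dockerfile
FROM mongo:3.6.18-xenial
COPY ./Customers /Customers
COPY ./test /test
# Don't override ENTRYPOINT: the image's own docker-entrypoint.sh is what
# executes /docker-entrypoint-initdb.d/*.js during first initialisation.
# To pass extra flags, override CMD instead:
CMD ["mongod", "--bind_ip_all"]
```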

Why is my flask server unable to speak to the postgres database using docker-compose?

I have posted the relevant files below. Everything builds as expected, however when trying to use SQLAlchemy to make a call to the database, I invariably get the following error:
OperationalError: (psycopg2.OperationalError) could not translate host name "db" to address: Name or service not known
The string that SQLAlchemy is using is (as given in .env.web.dev): postgres://postgres:postgres@db:5432/spaceofmotion.
What am I doing wrong?
docker-compose.yml:
version: '3'
services:
  db:
    container_name: db
    ports:
      - '5432:5432'
    expose:
      - '5432'
    build:
      context: ./
      dockerfile: Dockerfile.postgres
    networks:
      - db_web
  web:
    container_name: web
    restart: always
    build:
      context: ../
      dockerfile: Dockerfile.web
    ports:
      - '5000:5000'
    env_file:
      - ./.env.web.dev
    networks:
      - db_web
    depends_on:
      - db
      - redis
      - celery
  redis:
    image: 'redis:5.0.7-buster'
    container_name: redis
    command: redis-server
    ports:
      - '6379:6379'
  celery:
    container_name: celery
    build:
      context: ../
      dockerfile: Dockerfile.celery
    env_file:
      - ./.env.celery.dev
    command: celery worker -A a.celery --loglevel=info
    depends_on:
      - redis
  client:
    container_name: react-app
    build:
      context: ../a/client
      dockerfile: Dockerfile.client
    volumes:
      - '../a/client:/src/app'
      - '/src/app/node_modules'
    ports:
      - '3000:3000'
    depends_on:
      - "web"
    environment:
      - NODE_ENV=development
      - HOST_URL=http://localhost:5000
networks:
  db_web:
    driver: bridge
Dockerfile.postgres:
FROM postgres:latest
ENV POSTGRES_DB spaceofmotion
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD postgres
COPY ./spaceofmotion-db.sql /
COPY ./docker-entrypoint-initdb.d/restore-database.sh /docker-entrypoint-initdb.d/
restore-database.sh:
file="/spaceofmotion-db.sql"
psql -U postgres spaceofmotion < "$file"
Dockerfile.web:
FROM python:3.7-slim-buster
RUN apt-get update
RUN apt-get -y install python-pip libpq-dev python-dev && \
pip install --upgrade pip && \
pip install psycopg2
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
CMD ["python", "manage.py", "runserver"]
.env.web.dev:
DATABASE_URL=postgres://postgres:postgres@db:5432/spaceofmotion
... <other config vars> ...
Is this specifically coming from your celery container?
Your db container declares
networks:
- db_web
but the celery container has no such declaration; that means that it will be on the default network Compose creates for you. Since the two containers aren't on the same network they can't connect to each other.
There's nothing wrong with using the Compose-managed default network, especially for routine Web applications, and I'd suggest deleting all of the networks: blocks in the entire file. (You also don't need to specify container_name:, since Compose will come up with reasonable names on its own.)
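Applying that advice, the file can be trimmed to something like the following sketch (only the services relevant to the error are shown):

```yaml
version: '3'
services:
  db:
    build:
      context: ./
      dockerfile: Dockerfile.postgres
    ports:
      - '5432:5432'
  web:
    build:
      context: ../
      dockerfile: Dockerfile.web
    env_file:
      - ./.env.web.dev
    ports:
      - '5000:5000'
    depends_on:
      - db
      - redis
      - celery
  celery:
    build:
      context: ../
      dockerfile: Dockerfile.celery
    env_file:
      - ./.env.celery.dev
    command: celery worker -A a.celery --loglevel=info
    depends_on:
      - redis
# no networks: blocks -- every service joins the default network Compose
# creates, so the hostname "db" now resolves from web and celery alike
```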

Docker run not working

When I use docker compose the application runs perfectly; however, when I use docker run nothing happens.
I have a REST API (Express & MongoDB) behind an nginx proxy_pass.
Docker file:
FROM node:8-alpine
EXPOSE 3000
ARG NODE_ENV
ENV NODE_ENV $NODE_ENV
RUN mkdir /app
WORKDIR /app
ADD package.json yarn.lock /app/
RUN yarn --pure-lockfile
ADD . /app
CMD ["yarn", "start"]
Docker compose:
version: "2"
services:
  api:
    build: .
    environment:
      - NODE_ENV=production
    command: yarn start
    volumes:
      - .:/app
    ports:
      - "3000:3000"
    tty: true
    depends_on:
      - mongodb
    restart: always
  nginx:
    image: nginx
    depends_on:
      - api
    ports:
      - "80:80"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - mongodb
    restart: always
  mongodb:
    image: mongo
    ports:
      - "27017:27017"
    restart: always
When I use docker compose it performs perfectly the application, however, when I use docker run nothing happens
That is expected: docker run starts a single container from one image, whereas docker compose runs the whole multi-container application.
You need all of the containers running, started in the right order, for anything to happen.
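To reproduce the Compose behaviour with plain docker run, each container has to be started by hand on a shared network. A minimal sketch; the network name ("api-net") and the API image name ("api-image") are illustrative, not taken from the question:

```shell
# Hand-rolled equivalent of the compose file above. "api-net" and
# "api-image" are assumed names for illustration only.
run_stack() {
    docker network create api-net
    docker run -d --name mongodb --network api-net mongo
    docker run -d --name api --network api-net -p 3000:3000 \
        -e NODE_ENV=production api-image
    docker run -d --name nginx --network api-net -p 80:80 \
        -v "$PWD/nginx/nginx.conf:/etc/nginx/nginx.conf:ro" nginx
}
```

On a user-defined network like this, containers can reach each other by name (e.g. the API can connect to the host "mongodb"), which is what Compose sets up automatically.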

How to mongorestore a DB with Docker

I got that docker-compose.yml:
version: '2'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - .:/code
    depends_on:
      - db1
    links:
      - db1:mongo
  db1:
    image: mongo
Dockerfile:
FROM node:4.4.2
ADD . /code
WORKDIR /code
RUN npm i
CMD node app.js
I store the dump in the project files, so the folder is shared with the container. What should the process of restoring the dump look like? In the web container I don't have access to the mongorestore command ...
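Since the mongorestore binary ships in the mongo image rather than the node image, one way is to copy the dump into the db1 container and run mongorestore there. A sketch, assuming the Compose v2 CLI and a ./dump directory on the host produced by mongodump:

```shell
# Sketch: restore a dump directory into the "db1" service from the
# compose file above. Assumes the Compose v2 CLI ("docker compose").
restore_dump() {
    # copy the dump into the running mongo container
    docker compose cp ./dump db1:/dump
    # run mongorestore with the binary that ships in the mongo image
    docker compose exec db1 mongorestore /dump
}
```

Alternatively, mounting the dump folder as a volume on the db1 service (e.g. ./dump:/dump) avoids the copy step entirely.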