I have a NestJS app. When I deploy it on my Ubuntu server using docker compose up --build, the data is deleted automatically after a few hours.
Dockerfile:
FROM node:14.16.0-alpine3.13
RUN addgroup app && adduser -S -G app app
RUN mkdir /app && chown app:app /app
USER app
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "run", "start:prod"]
And my docker-compose file:
version: '3'
services:
  api:
    depends_on:
      - db
    restart: unless-stopped
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - '3000:3000'
    environment:
      DB_URL: mongodb://db_myapp
      JWP_PRIVATE_KEY: test
      PORT: 3000
  db:
    image: mongo:xenial
    container_name: db_myapp
    volumes:
      - myapp:/data/db
    ports:
      - '27017:27017'
volumes:
  myapp:
I tried attaching the volume to a local directory on my server:
db:
  image: mongo:xenial
  container_name: db_myapp
  volumes:
    - ../myapp_db_data:/data/db
  ports:
    - '27017:27017'
But I faced this error when running the app:
MongooseServerSelectionError: getaddrinfo EAI_AGAIN db_myapp
I'm looking for a way to keep my app's data persistent across docker compose up --build, because I need to update the backend app in production.
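For reference, a bind mount with a path relative to the compose file (./ rather than ../) keeps the data in a known host directory while leaving the service definitions, and therefore the Compose network, untouched (a sketch; the host path is an assumption):

db:
  image: mongo:xenial
  container_name: db_myapp
  volumes:
    - ./myapp_db_data:/data/db
  ports:
    - '27017:27017'

Either way, note that docker compose up --build only rebuilds images; it does not remove named volumes or bind-mounted directories, so both approaches should survive a redeploy.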
I'm very new to Docker and I'm confused about how to back up the MongoDB database. I'm using two containers, both connected through a network:
the Express app container
the MongoDB container
Based on this, a Docker image/script needs to be written so that the database gets uploaded at intervals to cloud storage such as Google Drive.
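As a rough starting point, here is a minimal sketch of a backup sidecar service that runs mongodump on a schedule; the service name, interval, and ./backups host path are assumptions that match the files below, and actually uploading the dumps to Google Drive would need an extra tool such as rclone, which is not shown:

backup:
  image: mongo:latest
  depends_on:
    - mongodb
  volumes:
    - ./backups:/backups   # dumps land in this host directory (assumed path)
  entrypoint: >
    sh -c 'while true; do
      mongodump --host mongodb --port 27017 --out /backups/$$(date +%F-%H%M);
      sleep 86400;
    done'

The $$ escapes the dollar sign so Compose doesn't try to interpolate it; mongodump reaches the database through its service name on the shared Compose network.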
My current files are as follows:
docker-compose.yml
version: "3.7"
services:
app:
build: .
container_name: "server-app-service"
#restart: always
ports:
- ${PORT}:${PORT}
working_dir: /oj-server
volumes:
- ./:/oj-server
links:
- mongodb
env_file:
- .env.production
environment:
- MONGO_URI=mongodb://mongodb:27017/beta1
command: ["yarn", "start"]
mongodb:
image: mongo:latest
container_name: "mongodb-service"
ports:
- 27017:27017
volumes:
- mongodb-data:/data/db
volumes:
mongodb-data:
Dockerfile
FROM node:16.14.2-alpine
RUN apk add --no-cache libc6-compat build-base g++ openjdk11 python3
WORKDIR /oj-server
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
# expose the port
EXPOSE ${PORT}
I have posted the relevant files below. Everything builds as expected; however, when trying to use SQLAlchemy to make a call to the database, I invariably get the following error:
OperationalError: (psycopg2.OperationalError) could not translate host name "db" to address: Name or service not known
The string that SQLAlchemy is using is (as given in .env.web.dev): postgres://postgres:postgres@db:5432/spaceofmotion.
What am I doing wrong?
docker-compose.yml:
version: '3'
services:
  db:
    container_name: db
    ports:
      - '5432:5432'
    expose:
      - '5432'
    build:
      context: ./
      dockerfile: Dockerfile.postgres
    networks:
      - db_web
  web:
    container_name: web
    restart: always
    build:
      context: ../
      dockerfile: Dockerfile.web
    ports:
      - '5000:5000'
    env_file:
      - ./.env.web.dev
    networks:
      - db_web
    depends_on:
      - db
      - redis
      - celery
  redis:
    image: 'redis:5.0.7-buster'
    container_name: redis
    command: redis-server
    ports:
      - '6379:6379'
  celery:
    container_name: celery
    build:
      context: ../
      dockerfile: Dockerfile.celery
    env_file:
      - ./.env.celery.dev
    command: celery worker -A a.celery --loglevel=info
    depends_on:
      - redis
  client:
    container_name: react-app
    build:
      context: ../a/client
      dockerfile: Dockerfile.client
    volumes:
      - '../a/client:/src/app'
      - '/src/app/node_modules'
    ports:
      - '3000:3000'
    depends_on:
      - "web"
    environment:
      - NODE_ENV=development
      - HOST_URL=http://localhost:5000
networks:
  db_web:
    driver: bridge
Dockerfile.postgres:
FROM postgres:latest
ENV POSTGRES_DB spaceofmotion
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD postgres
COPY ./spaceofmotion-db.sql /
COPY ./docker-entrypoint-initdb.d/restore-database.sh /docker-entrypoint-initdb.d/
restore-database.sh:
file="/spaceofmotion-db.sql"
psql -U postgres spaceofmotion < "$file"
Dockerfile.web:
FROM python:3.7-slim-buster
RUN apt-get update
RUN apt-get -y install python-pip libpq-dev python-dev && \
pip install --upgrade pip && \
pip install psycopg2
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
CMD ["python", "manage.py", "runserver"]
.env.web.dev:
DATABASE_URL=postgres://postgres:postgres@db:5432/spaceofmotion
... <other config vars> ...
Is this specifically coming from your celery container?
Your db container declares
networks:
  - db_web
but the celery container has no such declaration; that means it will be on the default network Compose creates for you. Since the two containers aren't on the same network, they can't connect to each other.
There's nothing wrong with using the Compose-managed default network, especially for routine Web applications, and I'd suggest deleting all of the networks: blocks in the entire file. (You also don't need to specify container_name:, since Compose will come up with reasonable names on its own.)
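Concretely, a trimmed sketch of the same file with the networks: blocks and container_name: lines removed (only the affected services shown; ports, env files, and build contexts as in the original):

version: '3'
services:
  db:
    build:
      context: ./
      dockerfile: Dockerfile.postgres
    ports:
      - '5432:5432'
  web:
    build:
      context: ../
      dockerfile: Dockerfile.web
    ports:
      - '5000:5000'
    env_file:
      - ./.env.web.dev
    depends_on:
      - db
      - redis
      - celery
  redis:
    image: 'redis:5.0.7-buster'
  celery:
    build:
      context: ../
      dockerfile: Dockerfile.celery
    env_file:
      - ./.env.celery.dev
    command: celery worker -A a.celery --loglevel=info
    depends_on:
      - redis

With no networks: blocks at all, every service joins the project's default network, and both web and celery can resolve the database by the service name db.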
My application image of my Node.js app is pushed to the Azure container registry, but my mongo image isn't. Why?
This is my Dockerfile:
FROM node:8
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8085
CMD [ "npm", "start" ]
And this is my docker-compose file:
version: "3"
services:
app:
image: rocketrcontainerregister.azurecr.io/rocketr_app:dev
build:
context: .
dockerfile: ./Dockerfile
ports:
- "8085:8085"
depends_on:
- mongo
links:
- mongo
container_name: rocketrapi
mongo:
image: mongo
ports:
- "27017:27017"
container_name: mongodb
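One likely explanation (a guess, since the push commands aren't shown): only images whose image: name points at your registry can end up in ACR. The app service is tagged rocketrcontainerregister.azurecr.io/rocketr_app:dev, so it can be pushed there, but the mongo service uses the stock mongo image from Docker Hub, and nothing is ever tagged for your registry. Usually you don't need mongo in ACR at all, since it can be pulled from Docker Hub at deploy time, but if you do want it there you can retag and push it manually:

docker pull mongo
docker tag mongo rocketrcontainerregister.azurecr.io/mongo:dev
docker push rocketrcontainerregister.azurecr.io/mongo:dev

and then reference rocketrcontainerregister.azurecr.io/mongo:dev as the image: of the mongo service.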
I'm new to Docker, and I'm trying the simplest of setups with docker-compose, but I can't manage to connect to MongoDB.
My docker-compose.local.yaml file:
version: "2"
services:
posts-api:
build:
dockerfile: Dockerfile.local
context: ./
volumes:
- ".:/app"
ports:
- "6820:6820"
depends_on:
- mongodb
mongodb:
image: mongo:3.5
ports:
- "27018:27018"
command: mongod --port 27018
My Dockerfile:
FROM node:7.8.0
MAINTAINER Livefeed 'project.livefeed@gmail.com'
RUN mkdir /app
VOLUME /app
WORKDIR /app
ADD package.json yarn.lock ./
RUN eval rm -rf node_modules && \
yarn
ADD server.js .
RUN mkdir config src
ADD config config/
ADD src src/
EXPOSE 6820
EXPOSE 27018
CMD yarn run local
In server.js I try to connect with:
mongoose.connect('mongodb://localhost:27018');
I also tried:
mongoose.connect('mongodb://mongodb:27018');
To run docker-compose:
docker-compose -f docker-compose.local.yaml up --build
And I receive the error:
connection error: { MongoError: failed to connect to server [localhost:27018] on first connect [MongoError: connect ECONNREFUSED 127.0.0.1:27018]
What am I missing?
In server.js use mongodb instead of localhost:
mongoose.connect('mongodb://mongodb:27018');
Because containers in the same network can communicate using their service name.
Bear in mind that each container and your host have their own localhost. Each localhost is a different host: container A, container B, your host (each one has its own network interface).
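A quick way to see this in practice (service names as in the compose file above) is to resolve the name from inside the app container:

docker-compose -f docker-compose.local.yaml exec posts-api getent hosts mongodb

That prints the mongodb container's address on the shared network, whereas localhost inside posts-api only ever points back at posts-api itself.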
Edit:
Be sure to get your mongo up:
docker-compose logs mongodb
docker-compose ps
Sometimes it doesn't come up because of disk space.
Edit 2:
With newer versions of mongo, you need to specify to listen to all interfaces too:
command: mongod --port 27018 --bind_ip_all
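Putting both edits together (and assuming a newer image, since --bind_ip_all was added in MongoDB 3.6), the mongodb service would look roughly like this:

mongodb:
  image: mongo:3.6
  ports:
    - "27018:27018"
  command: mongod --port 27018 --bind_ip_all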
I think you should add the links option to your config, like this:
ports:
  - "6820:6820"
depends_on:
  - mongodb
links:
  - mongodb
Update: as promised, here's a fuller example:
version: '2.1'
services:
  pm2:
    image: keymetrics/pm2-docker-alpine:6
    restart: always
    container_name: pm2
    volumes:
      - ./pm2:/app
    links:
      - redis_db
      - db
    environment:
      REDIS_CONNECTION_STRING: redis://redis_db:6379
  nginx:
    image: firesh/nginx-lua
    restart: always
    volumes:
      - ./nginx:/etc/nginx
      - /var/run/docker.sock:/tmp/docker.sock:ro
    ports:
      - 80:80
    links:
      - pm2
  s3: # mock for development
    image: lphoward/fake-s3:latest
  redis_db:
    container_name: redis_db
    image: redis
    ports:
      - 6379:6379
  db: # for scorebig-syncer
    image: mysql:5.7
    ports:
      - 3306:3306
I've got this docker-compose.yml:
version: '2'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - .:/code
    depends_on:
      - db1
    links:
      - db1:mongo
  db1:
    image: mongo
Dockerfile:
FROM node:4.4.2
ADD . /code
WORKDIR /code
RUN npm i
CMD node app.js
I store the dump in my project files, so the folder is shared with the container. What should the process of restoring the dump look like? In the web container I don't have access to the mongorestore command ...
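One sketch of a restore path, assuming the dump lives in ./dump in the project root: bind-mount it into the db1 container, where mongorestore is available (the official mongo image ships the database tools), and run it there:

db1:
  image: mongo
  volumes:
    - ./dump:/dump

Then, with the stack up:

docker-compose exec db1 mongorestore /dump

Nothing needs to be installed in the web container; mongorestore runs in the container where the server already lives.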