Docker-compose mongoose - mongodb

I'm new to Docker, and I'm trying the simplest of setups with docker-compose, but I can't connect to MongoDB.
My docker-compose.local.yaml file:
version: "2"
services:
posts-api:
build:
dockerfile: Dockerfile.local
context: ./
volumes:
- ".:/app"
ports:
- "6820:6820"
depends_on:
- mongodb
mongodb:
image: mongo:3.5
ports:
- "27018:27018"
command: mongod --port 27018
My Dockerfile:
FROM node:7.8.0
MAINTAINER Livefeed 'project.livefeed@gmail.com'
RUN mkdir /app
VOLUME /app
WORKDIR /app
ADD package.json yarn.lock ./
RUN eval rm -rf node_modules && \
    yarn
ADD server.js .
RUN mkdir config src
ADD config config/
ADD src src/
EXPOSE 6820
EXPOSE 27018
CMD yarn run local
In server.js I try to connect with:
mongoose.connect('mongodb://localhost:27018');
I also tried:
mongoose.connect('mongodb://mongodb:27018');
To run docker-compose:
docker-compose -f docker-compose.local.yaml up --build
And I receive the error:
connection error: { MongoError: failed to connect to server [localhost:27018] on first connect [MongoError: connect ECONNREFUSED 127.0.0.1:27018]
What am I missing?

In server.js use mongodb instead of localhost:
mongoose.connect('mongodb://mongodb:27018');
Containers in the same network can communicate using their service name.
Bear in mind that each container and your host have their own localhost: container A, container B, and your host are each a different host, each with its own network interface.
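A quick way to confirm that the service name resolves is to look it up from inside the app container (a sketch, assuming the compose file above is up):
docker-compose exec posts-api getent hosts mongodb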
Edit:
Make sure your mongo container actually came up:
docker-compose logs mongodb
docker-compose ps
Sometimes it fails to come up because of insufficient disk space.
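Also note that depends_on only controls start order, not readiness, so server.js can try to connect before mongod accepts connections. A minimal retry sketch (the retry wrapper is my own illustration, not from the question):
function connectWithRetry() {
  // service name and port come from the compose file above
  mongoose.connect('mongodb://mongodb:27018', function (err) {
    if (err) {
      console.error('mongo not ready yet, retrying in 2s');
      setTimeout(connectWithRetry, 2000);
    }
  });
}
connectWithRetry();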
Edit 2:
With newer versions of mongo, you also need to tell mongod to listen on all interfaces:
command: mongod --port 27018 --bind_ip_all
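Putting both fixes together, the mongodb service would look like this (a sketch; --bind_ip_all requires mongo 3.6 or newer, so the 3.5 tag from the question is bumped here as an assumption):
mongodb:
  image: mongo:3.6
  ports:
    - "27018:27018"
  command: mongod --port 27018 --bind_ip_all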

I think you should add the links option to your config, like this:
ports:
  - "6820:6820"
depends_on:
  - mongodb
links:
  - mongodb
Update:
As I promised, here is an example:
version: '2.1'
services:
  pm2:
    image: keymetrics/pm2-docker-alpine:6
    restart: always
    container_name: pm2
    volumes:
      - ./pm2:/app
    links:
      - redis_db
      - db
    environment:
      REDIS_CONNECTION_STRING: redis://redis_db:6379
  nginx:
    image: firesh/nginx-lua
    restart: always
    volumes:
      - ./nginx:/etc/nginx
      - /var/run/docker.sock:/tmp/docker.sock:ro
    ports:
      - 80:80
    links:
      - pm2
  s3: # mock for development
    image: lphoward/fake-s3:latest
  redis_db:
    container_name: redis_db
    image: redis
    ports:
      - 6379:6379
  db: # for scorebig-syncer
    image: mysql:5.7
    ports:
      - 3306:3306
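With the links above, the pm2 container reaches redis through the hostname redis_db. A minimal sketch of consuming REDIS_CONNECTION_STRING in the Node app (the redis client library and its v3-style API are assumptions, not part of the compose file):
// REDIS_CONNECTION_STRING is set in the compose file above to
// redis://redis_db:6379; the redis_db name resolves inside the network.
const redis = require('redis');
const client = redis.createClient(process.env.REDIS_CONNECTION_STRING);
client.on('error', function (err) {
  console.error('redis error:', err);
});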

Related

Why is my dockerized app's data deleted after a while?

I have a NestJS app. When I deploy the app on my Ubuntu server using docker compose up --build, the data is automatically deleted after a few hours.
Dockerfile:
FROM node:14.16.0-alpine3.13
RUN addgroup app && adduser -S -G app app
RUN mkdir /app && chown app:app /app
USER app
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "run", "start:prod"]
And my docker-compose file:
version: '3'
services:
  api:
    depends_on:
      - db
    restart: unless-stopped
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - '3000:3000'
    environment:
      DB_URL: mongodb://db_myapp
      JWP_PRIVATE_KEY: test
      PORT: 3000
  db:
    image: mongo:xenial
    container_name: db_myapp
    volumes:
      - myapp:/data/db
    ports:
      - '27017:27017'
volumes:
  myapp:
I tried attaching the volume to a local directory on my server:
db:
  image: mongo:xenial
  container_name: db_myapp
  volumes:
    - ../myapp_db_data:/data/db
  ports:
    - '27017:27017'
But I faced this error when running the app:
MongooseServerSelectionError: getaddrinfo EAI_AGAIN db_myapp
I'm looking for a way to keep my app's data persistent across docker compose up --build, because I need to update the backend app in production.
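For what it's worth, a named volume like myapp is managed by Docker and survives docker compose up --build, and the EAI_AGAIN error on db_myapp is a DNS failure rather than a volume problem. A quick way to check the named volume between deploys (a sketch; Compose prefixes the volume name with the project name, so list first to find the exact name):
docker volume ls
docker volume inspect <project>_myapp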

How do I connect my Docker setup to a MongoDB on a remote server?

Hi, I have an application where I am using docker-compose. I have a large amount of data on a remote MongoDB server, which runs on port 28107. How can I connect from my docker-compose setup to this remote server?
Below is my docker-compose.yml file:
version: '3'
services:
  myapp:
    # container_name: myapp
    restart: always
    build: .
    ports:
      - '52000:52000'
      # - '8080:8080'
      # - '4300:4300'
      # - '4301:4301'
    environment:
      - MONGO_URL=mongodb://test:test@ip_address:28107/test
    # command: ["./wait-for-it.sh", "mongo:28107", "--", "npm", "start"]
    links:
      - redis
      - mongo
  mongo:
    # container_name: myapp-mongo
    image: 'mongo:latest'
    ports:
      - '28107:28107'
      # - '27017:27017'
    volumes:
      # - ~/Downloads/db_dump_09_01_2020:/data/db
      - /data/db
      # - /data/configdb
    # command: mongod --auth
  redis:
    # container_name: myapp-redis
    restart: always
    image: 'redis:4.0.11'
    # command: ["redis-server", "--appendonly", "yes"]
    depends_on:
      - helper
    sysctls:
      - net.core.somaxconn=511
    ports:
      - '6379:6379'
  helper:
    image: alpine
    command: sh -c "echo never > /sys/kernel/mm/transparent_hugepage/enabled"
    privileged: true
In the environment parameter above, I have put the remote MongoDB server URL; all the data lives at that URL. I don't want to export that data to my localhost and mount it in my docker container; instead, I would like to point my docker container directly at that remote MongoDB server.
How can I do it? I am new to Docker.
It seems like you are trying to connect to an external MongoDB server on a network different from your docker network. In that case you don't need this service at all:
mongo:
  # container_name: myapp-mongo
  image: 'mongo:latest'
  ports:
    - '28107:28107'
    # - '27017:27017'
  volumes:
    # - ~/Downloads/db_dump_09_01_2020:/data/db
    - /data/db
    # - /data/configdb
  # command: mongod --auth
You only need to provide the necessary environment settings for the remote MongoDB server to myapp.
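A sketch of the trimmed compose file (the mongo service and its links entry removed; ip_address is the placeholder from the question, standing in for the real host):
version: '3'
services:
  myapp:
    restart: always
    build: .
    ports:
      - '52000:52000'
    environment:
      - MONGO_URL=mongodb://test:test@ip_address:28107/test
    links:
      - redis
  redis:
    restart: always
    image: 'redis:4.0.11'
    depends_on:
      - helper
    sysctls:
      - net.core.somaxconn=511
    ports:
      - '6379:6379'
  helper:
    image: alpine
    command: sh -c "echo never > /sys/kernel/mm/transparent_hugepage/enabled"
    privileged: true
Containers can reach any host that is routable from the Docker host, so myapp connects straight to the remote server through MONGO_URL.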

Why is my flask server unable to speak to the postgres database using docker-compose?

I have posted the relevant files below. Everything builds as expected; however, when I try to use SQLAlchemy to make a call to the database, I invariably get the following error:
OperationalError: (psycopg2.OperationalError) could not translate host name "db" to address: Name or service not known
The string that sqlalchemy is using is (as given in .env.web.dev): postgres://postgres:postgres@db:5432/spaceofmotion.
What am I doing wrong?
docker-compose.yml:
version: '3'
services:
  db:
    container_name: db
    ports:
      - '5432:5432'
    expose:
      - '5432'
    build:
      context: ./
      dockerfile: Dockerfile.postgres
    networks:
      - db_web
  web:
    container_name: web
    restart: always
    build:
      context: ../
      dockerfile: Dockerfile.web
    ports:
      - '5000:5000'
    env_file:
      - ./.env.web.dev
    networks:
      - db_web
    depends_on:
      - db
      - redis
      - celery
  redis:
    image: 'redis:5.0.7-buster'
    container_name: redis
    command: redis-server
    ports:
      - '6379:6379'
  celery:
    container_name: celery
    build:
      context: ../
      dockerfile: Dockerfile.celery
    env_file:
      - ./.env.celery.dev
    command: celery worker -A a.celery --loglevel=info
    depends_on:
      - redis
  client:
    container_name: react-app
    build:
      context: ../a/client
      dockerfile: Dockerfile.client
    volumes:
      - '../a/client:/src/app'
      - '/src/app/node_modules'
    ports:
      - '3000:3000'
    depends_on:
      - "web"
    environment:
      - NODE_ENV=development
      - HOST_URL=http://localhost:5000
networks:
  db_web:
    driver: bridge
Dockerfile.postgres:
FROM postgres:latest
ENV POSTGRES_DB spaceofmotion
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD postgres
COPY ./spaceofmotion-db.sql /
COPY ./docker-entrypoint-initdb.d/restore-database.sh /docker-entrypoint-initdb.d/
restore-database.sh:
#!/bin/bash
file="/spaceofmotion-db.sql"
psql -U postgres spaceofmotion < "$file"
Dockerfile.web:
FROM python:3.7-slim-buster
RUN apt-get update
RUN apt-get -y install python-pip libpq-dev python-dev && \
    pip install --upgrade pip && \
    pip install psycopg2
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
CMD ["python", "manage.py", "runserver"]
.env.web.dev:
DATABASE_URL=postgres://postgres:postgres@db:5432/spaceofmotion
... <other config vars> ...
Is this specifically coming from your celery container?
Your db container declares
networks:
  - db_web
but the celery container has no such declaration; that means that it will be on the default network Compose creates for you. Since the two containers aren't on the same network they can't connect to each other.
There's nothing wrong with using the Compose-managed default network, especially for routine Web applications, and I'd suggest deleting all of the networks: blocks in the entire file. (You also don't need to specify container_name:, since Compose will come up with reasonable names on its own.)
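For illustration, a sketch of the same compose file with those blocks removed (only the db and celery services shown; web, redis, and client follow the same pattern):
version: '3'
services:
  db:
    ports:
      - '5432:5432'
    build:
      context: ./
      dockerfile: Dockerfile.postgres
  celery:
    build:
      context: ../
      dockerfile: Dockerfile.celery
    env_file:
      - ./.env.celery.dev
    command: celery worker -A a.celery --loglevel=info
    depends_on:
      - redis
With no networks: blocks at all, Compose puts every service on its default network, and web, celery, and db can all resolve each other by service name.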

Docker run not working

When I use docker-compose the application runs perfectly; however, when I use docker run nothing happens.
I have a REST API (Express & MongoDB) behind an nginx proxy_pass.
Dockerfile:
FROM node:8-alpine
EXPOSE 3000
ARG NODE_ENV
ENV NODE_ENV $NODE_ENV
RUN mkdir /app
WORKDIR /app
ADD package.json yarn.lock /app/
RUN yarn --pure-lockfile
ADD . /app
CMD ["yarn", "start"]
Docker compose:
version: "2"
services:
api:
build: .
environment:
- NODE_ENV=production
command: yarn start
volumes:
- .:/app
ports:
- "3000:3000"
tty: true
depends_on:
- mongodb
restart: always
nginx:
image: nginx
depends_on:
- api
ports:
- "80:80"
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
depends_on:
- mongodb
restart: always
mongodb:
image: mongo
ports:
- "27017:27017"
restart: always
When I use docker-compose the application runs perfectly; however, when I use docker run nothing happens
That is expected, since docker run runs a single image, as opposed to docker-compose, which runs a multi-container Docker application.
You need all of the containers running, started in the right order, for anything to happen.
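For illustration, this is roughly what docker-compose does for the file above, spelled out as individual commands (a sketch; the network and container names are chosen for the example):
# create a shared network so the containers can reach each other by name
docker network create mynet
docker run -d --network mynet --name mongodb -p 27017:27017 mongo
docker build -t api .
docker run -d --network mynet --name api -p 3000:3000 \
  -e NODE_ENV=production api
docker run -d --network mynet --name nginx -p 80:80 \
  -v "$PWD/nginx/nginx.conf:/etc/nginx/nginx.conf:ro" nginx
A single docker run starts only one of these containers, which is why "nothing happens" without the others.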

Cannot connect to postgres db using docker build

I am trying to build an image and deploy it to a VPS.
I am running the app successfully with
docker-compose up
Then I build it with
docker build -t mystore .
When I try to run it for a test locally, or on the VPS through Docker Cloud:
docker run -p 4000:8000 mystore
The container works fine, but when I hit http://0.0.0.0:4000/
I am getting:
OperationalError at /
could not translate host name "db" to address: Name or service not known
I have changed listen_addresses in postgresql.conf to "*"; nothing changes. The postgresql logs are empty. I am running macOS.
Here is my DATABASE config:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'postgres',
        'USER': 'user',
        'PASSWORD': 'password',
        'HOST': 'db',
        'PORT': '5432',
    }
}
This is the Dockerfile
FROM python:3.5
ENV PYTHONUNBUFFERED 1
RUN \
    apt-get -y update && \
    apt-get install -y gettext && \
    apt-get clean
ADD requirements.txt /app/
RUN pip install -r /app/requirements.txt
ADD . /app
WORKDIR /app
EXPOSE 8000
ENV PORT 8000
CMD ["uwsgi", "/app/saleor/wsgi/uwsgi.ini"]
This is the docker-compose.yml file:
version: '2'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
    ports:
      - '5432:5432'
  redis:
    image: redis
    ports:
      - '6379:6379'
  celery:
    build:
      context: .
      dockerfile: Dockerfile
    env_file: common.env
    command: celery -A saleor worker --app=saleor.celeryconf:app --loglevel=info
    volumes:
      - .:/app:Z
    links:
      - redis
    depends_on:
      - redis
  search:
    image: elasticsearch:5.4.3
    mem_limit: 512m
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      - '127.0.0.1:9200:9200'
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    env_file: common.env
    depends_on:
      - db
      - redis
      - search
    ports:
      - '8000:8000'
    volumes:
      - .:/app:Z
  makemigrations:
    build: .
    command: python manage.py makemigrations --noinput
    volumes:
      - .:/app:Z
  migration:
    build: .
    command: python manage.py migrate --noinput
    volumes:
      - .:/app:Z
You forgot to add links to your web service:
web:
  build: .
  command: python manage.py runserver 0.0.0.0:8000
  env_file: common.env
  depends_on:
    - db
    - redis
    - search
  links: # <- here
    - db
    - redis
    - search
  ports:
    - '8000:8000'
  volumes:
    - .:/app:Z
Check the available networks. There are 3 by default:
$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
db07e84f27a1        bridge              bridge              local
6a1bf8c2d8e2        host                host                local
d8c3c61003f1        none                null                local
I've made a simplified version of your docker-compose setup, with only postgres:
version: '2'
services:
  postgres:
    image: postgres
    container_name: postgres
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
    ports:
      - '5432:5432'
    networks:
      - random
networks:
  random:
I gave the postgres container the name postgres and called the service postgres; I created a network called random (the networks: block at the end) and added the postgres service to it. If you don't specify a network, you will see that docker-compose creates a self-named network for you.
After starting docker-compose, you will have four networks: a new bridge network called random in addition to the three defaults.
Check which network your docker-compose environment was created in by inspecting, for example, your postgres container:
$ docker inspect postgres
Mine was created in the network random:
"Networks": {
    "random": {..
Now start your mystore container in the same network:
$ docker run -p 4000:8000 --network=random mystore
You can check again with docker inspect. To be sure, you can exec inside your mystore container and try to ping postgres. They are deployed inside the same network, so this should be possible, and your container should be able to resolve the name postgres to an address.
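For example (a sketch; --name is added so the container is easy to exec into, and newer Compose versions may prefix the network with the project name, so check docker network ls for the exact name):
$ docker run -d -p 4000:8000 --network=random --name mystore mystore
$ docker exec -it mystore ping -c 1 postgres
# if ping isn't installed in the image, getent hosts postgres
# also shows whether the name resolves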
In your docker-compose.yml, add a network and attach your containers to it, like so. To each container definition add:
networks:
  - mynetwork
and then, at the end of the file, add:
networks:
  mynetwork:
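Applied to the services from the question, that would look roughly like this (a sketch; every other service gets the same networks: entry):
version: '2'
services:
  db:
    image: postgres
    networks:
      - mynetwork
  web:
    build: .
    networks:
      - mynetwork
networks:
  mynetwork: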