How to run multiple webservers inside docker-compose

docker-compose.yml
version: "3"
services:
consumerhub:
container_name: consumerhub
build: ./
# command: python manage.py runserver 0.0.0.0:8000
command: "bash -c 'python manage.py runserver react 0.0.0.0:8000 && npm start --prefix frontend/'"
working_dir: /usr/src/consumerhub
ports:
- "8000:8000"
volumes:
- ./:/usr/src/consumerhub
Here I am trying to run two servers inside docker-compose. My backend is Python and my frontend is React. When I run "docker-compose up --build", only the Python server starts, not the React server. Please have a look.
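Two things are likely at play here: with bash -c 'A && B', the second command only starts once the first exits, and runserver never exits, so npm start is never reached (the stray react token in the runserver command also looks like a typo, since Django's runserver expects a single addrport argument). The usual fix is one service per server; a minimal sketch, where the frontend service name and port 3000 are assumptions based on the create-react-app default:

version: "3"
services:
  backend:
    build: ./
    working_dir: /usr/src/consumerhub
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    volumes:
      - ./:/usr/src/consumerhub
  frontend:
    build: ./
    working_dir: /usr/src/consumerhub
    command: npm start --prefix frontend/
    ports:
      - "3000:3000"
    volumes:
      - ./:/usr/src/consumerhub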

Related

localhost not accessible to load flask app via docker container

I have a Flask-MongoDB project where I use docker-compose to create a container for my Flask app, with the MongoDB data kept as a backup volume. I can load my app fine in my Linux VM browser by typing localhost:5000, but on Windows 10 the connection is refused when I type the same thing in Chrome.
My docker-compose.yml file:
version: '2'
services:
  mongodb:
    image: mongo
    restart: always
    container_name: mongodb
    ports:
      - 27017:27017
    volumes:
      - ./mongodb/data:/data/db
  flask-service:
    build:
      context: ./flask
    restart: always
    container_name: flask
    depends_on:
      - mongodb
    ports:
      - 5000:5000
    environment:
      - "MONGO_HOSTNAME=mongodb"
My Dockerfile:
FROM ubuntu:16.04
MAINTAINER user <user@gmail.com>
RUN apt-get update
RUN apt-get install -y python3 python3-pip
RUN apt-get install -y bcrypt
RUN pip3 install Flask-PyMongo py-bcrypt
RUN mkdir /app
RUN mkdir -p /app/templates
COPY webservice.py /app/webservice.py
ADD templates /app/templates
EXPOSE 5000
WORKDIR /app
ENTRYPOINT ["python3", "-u", "webservice.py"]
How I connect to MongoDB from my Flask app:
import os  # needed for os.environ.get below
from pymongo import MongoClient

mongodb_hostname = os.environ.get("MONGO_HOSTNAME", "localhost")
client = MongoClient('mongodb://' + mongodb_hostname + ':27017/')
db = client['MovieFlixDB']
# more code ...
if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=5000)
I would appreciate your help with this. Thank you in advance.
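One common cause of exactly this symptom: if Docker on Windows 10 is running through Docker Toolbox, containers live inside a VirtualBox VM, so published ports are reachable on the VM's address rather than on localhost. A quick check, assuming the default machine name:

docker-machine ip default
# then browse to http://<printed-ip>:5000 instead of localhost:5000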

Why is my flask server unable to speak to the postgres database using docker-compose?

I have posted the relevant files below. Everything builds as expected, however when trying to use SQLAlchemy to make a call to the database, I invariably get the following error:
OperationalError: (psycopg2.OperationalError) could not translate host name "db" to address: Name or service not known
The string that SQLAlchemy is using (as given in .env.web.dev) is: postgres://postgres:postgres@db:5432/spaceofmotion.
What am I doing wrong?
docker-compose.yml:
version: '3'
services:
  db:
    container_name: db
    ports:
      - '5432:5432'
    expose:
      - '5432'
    build:
      context: ./
      dockerfile: Dockerfile.postgres
    networks:
      - db_web
  web:
    container_name: web
    restart: always
    build:
      context: ../
      dockerfile: Dockerfile.web
    ports:
      - '5000:5000'
    env_file:
      - ./.env.web.dev
    networks:
      - db_web
    depends_on:
      - db
      - redis
      - celery
  redis:
    image: 'redis:5.0.7-buster'
    container_name: redis
    command: redis-server
    ports:
      - '6379:6379'
  celery:
    container_name: celery
    build:
      context: ../
      dockerfile: Dockerfile.celery
    env_file:
      - ./.env.celery.dev
    command: celery worker -A a.celery --loglevel=info
    depends_on:
      - redis
  client:
    container_name: react-app
    build:
      context: ../a/client
      dockerfile: Dockerfile.client
    volumes:
      - '../a/client:/src/app'
      - '/src/app/node_modules'
    ports:
      - '3000:3000'
    depends_on:
      - "web"
    environment:
      - NODE_ENV=development
      - HOST_URL=http://localhost:5000
networks:
  db_web:
    driver: bridge
Dockerfile.postgres:
FROM postgres:latest
ENV POSTGRES_DB spaceofmotion
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD postgres
COPY ./spaceofmotion-db.sql /
COPY ./docker-entrypoint-initdb.d/restore-database.sh /docker-entrypoint-initdb.d/
restore-database.sh:
file="/spaceofmotion-db.sql"
psql -U postgres spaceofmotion < "$file"
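As a side note: scripts in /docker-entrypoint-initdb.d only run when the data directory is empty, and .sh files there are executed (or sourced) by the postgres image's entrypoint. A common hardening sketch is to add a shebang and set -e so a failed restore aborts the init:

#!/bin/bash
set -e  # abort initialization if the restore fails
file="/spaceofmotion-db.sql"
psql -U postgres spaceofmotion < "$file"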
Dockerfile.web:
FROM python:3.7-slim-buster
RUN apt-get update
RUN apt-get -y install python-pip libpq-dev python-dev && \
    pip install --upgrade pip && \
    pip install psycopg2
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
CMD ["python", "manage.py", "runserver"]
.env.web.dev:
DATABASE_URL=postgres://postgres:postgres@db:5432/spaceofmotion
... <other config vars> ...
Is this specifically coming from your celery container?
Your db container declares
networks:
  - db_web
but the celery container has no such declaration; that means that it will be on the default network Compose creates for you. Since the two containers aren't on the same network they can't connect to each other.
There's nothing wrong with using the Compose-managed default network, especially for routine Web applications, and I'd suggest deleting all of the networks: blocks in the entire file. (You also don't need to specify container_name:, since Compose will come up with reasonable names on its own.)
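If you would rather keep the named network, a minimal sketch of that fix is to attach celery, and redis (which celery depends on), to db_web as well, leaving everything else unchanged:

  redis:
    image: 'redis:5.0.7-buster'
    command: redis-server
    ports:
      - '6379:6379'
    networks:
      - db_web
  celery:
    build:
      context: ../
      dockerfile: Dockerfile.celery
    env_file:
      - ./.env.celery.dev
    command: celery worker -A a.celery --loglevel=info
    depends_on:
      - redis
    networks:
      - db_web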

How to convert from docker-compose to Dockerrun.aws.json

I have a project that needs to run multiple containers, and I want to deploy it on Elastic Beanstalk. I already have a docker-compose.yml and I want to convert it to Dockerrun.aws.json.
Here is my docker-compose.yml file:
version: '2.0'
services:
  djangoserverdocker:
    build: ./django_server
    command: bash -c "python3 manage.py makemigrations && python3 manage.py migrate --fake && python3 manage.py runserver 0.0.0.0:80"
    volumes:
      - .:/djangoserverdocker
    ports:
      - "80:80"
  socketserverdocker:
    build:
      context: ./needsocket
    ports:
      - '8002:8002'
How do I convert it?
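For Elastic Beanstalk's multicontainer platform the file uses version 2. Note that Dockerrun.aws.json cannot build from source the way compose's build: does, so both images have to be built and pushed to a registry first. A rough sketch under that assumption; the image names and memory limits below are placeholders you must replace:

{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "djangoserverdocker",
      "image": "your-registry/django_server:latest",
      "essential": true,
      "memory": 512,
      "portMappings": [
        { "hostPort": 80, "containerPort": 80 }
      ],
      "command": ["bash", "-c", "python3 manage.py makemigrations && python3 manage.py migrate --fake && python3 manage.py runserver 0.0.0.0:80"]
    },
    {
      "name": "socketserverdocker",
      "image": "your-registry/needsocket:latest",
      "essential": true,
      "memory": 256,
      "portMappings": [
        { "hostPort": 8002, "containerPort": 8002 }
      ]
    }
  ]
}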

Can't link mongo docker container

I have an image (gepick:latest) with a Node app, created from this Dockerfile:
FROM centos:7
# Create app directory
WORKDIR /usr/src/app
RUN curl --silent --location https://rpm.nodesource.com/setup_8.x | bash -
RUN yum install -y nodejs
RUN curl --silent --location https://dl.yarnpkg.com/rpm/yarn.repo | tee /etc/yum.repos.d/yarn.repo
RUN rpm --import https://dl.yarnpkg.com/rpm/pubkey.gpg
RUN yum install -y yarn
RUN yarn
COPY . .
EXPOSE 8080
CMD [ "yarn", "test-matches-collecting-job"]
My goal is to run the tests in Docker, but they require MongoDB.
docker run gepick:latest:
...
Mongoose default connection error: MongoError: failed to connect to server [localhost:27017] on first connect [MongoError: connect ECONNREFUSED 127.0.0.1:27017]
...
I tried linking a mongo:4 image's container (docker run --link 0d24c3a35d5a gepick:latest) but I get the same error.
When you launch your containers from a docker-compose YAML file, Docker bridges them together on a shared network and lets you start the mongo container before the other containers that rely on it being active. Try something like this:
cat my-docker-compose.yml
version: '3'
services:
  my-gepick:
    image: gepick:latest
    container_name: blah_gepick
    restart: always
    depends_on:
      - loudmongo
    volumes:
      - /cryptdata5/var/log/blobs:/blobs
      - /webapp/enduser/bundle:/tmp
    environment:
      - MONGO_SERVICE_HOST=loudmongo
      - MONGO_SERVICE_PORT=$GKE_MONGO_PORT
      - MONGO_URL=mongodb://loudmongo:$GKE_MONGO_PORT/test
      - METEOR_SETTINGS=${METEOR_SETTINGS}
      - MAIL_URL=smtp://support@${GKE_DOMAIN_NAME}:blah@loudmail:587/
    links:
      - loudmongo
    ports:
      - 127.0.0.1:3000:3000
    working_dir: /tmp
    command: /usr/bin/supervisord -c /etc/supervisor/conf.d/supervisord.conf
  loudmongo:
    image: mongo
    container_name: loud_mongo
    restart: always
    ports:
      - 127.0.0.1:$GKE_MONGO_PORT:$GKE_MONGO_PORT
    volumes:
      - /cryptdata7/var/data/db:/data/db
So your launch sequence may look like:
docker-compose -f /somedir/my-docker-compose.yml pull
docker-compose -f /somedir/my-docker-compose.yml up -d
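If compose is more than you need for a one-off test run, a minimal sketch with plain docker and a user-defined network, assuming the tests read their connection string from the same MONGO_URL variable used above:

docker network create test-net
docker run -d --name loudmongo --network test-net mongo:4
docker run --rm --network test-net \
  -e MONGO_URL=mongodb://loudmongo:27017/test \
  gepick:latest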

Cannot connect to postgres db using docker build

I am trying to build an image and deploy it to a VPS.
I am running the app successfully with
docker-compose up
Then I build it with
docker build -t mystore .
When I try to run it for a test locally, or on the VPS through Docker Cloud:
docker run -p 4000:8000 mystore
The container works fine, but when I hit http://0.0.0.0:4000/
I am getting:
OperationalError at /
could not translate host name "db" to address: Name or service not known
I have changed listen_addresses in postgresql.conf to "*"; nothing changes. The postgresql logs are empty. I am running macOS.
Here is my DATABASE config:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'postgres',
        'USER': 'user',
        'PASSWORD': 'password',
        'HOST': 'db',
        'PORT': '5432',
    }
}
This is the Dockerfile
FROM python:3.5
ENV PYTHONUNBUFFERED 1
RUN \
    apt-get -y update && \
    apt-get install -y gettext && \
    apt-get clean
ADD requirements.txt /app/
RUN pip install -r /app/requirements.txt
ADD . /app
WORKDIR /app
EXPOSE 8000
ENV PORT 8000
CMD ["uwsgi", "/app/saleor/wsgi/uwsgi.ini"]
This is the docker-compose.yml file:
version: '2'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
    ports:
      - '5432:5432'
  redis:
    image: redis
    ports:
      - '6379:6379'
  celery:
    build:
      context: .
      dockerfile: Dockerfile
    env_file: common.env
    command: celery -A saleor worker --app=saleor.celeryconf:app --loglevel=info
    volumes:
      - .:/app:Z
    links:
      - redis
    depends_on:
      - redis
  search:
    image: elasticsearch:5.4.3
    mem_limit: 512m
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      - '127.0.0.1:9200:9200'
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    env_file: common.env
    depends_on:
      - db
      - redis
      - search
    ports:
      - '8000:8000'
    volumes:
      - .:/app:Z
  makemigrations:
    build: .
    command: python manage.py makemigrations --noinput
    volumes:
      - .:/app:Z
  migration:
    build: .
    command: python manage.py migrate --noinput
    volumes:
      - .:/app:Z
You forgot to add links to your web service:
web:
  build: .
  command: python manage.py runserver 0.0.0.0:8000
  env_file: common.env
  depends_on:
    - db
    - redis
    - search
  links: # <- here
    - db
    - redis
    - search
  ports:
    - '8000:8000'
  volumes:
    - .:/app:Z
Check the available networks. There are 3 by default:
$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
db07e84f27a1        bridge              bridge              local
6a1bf8c2d8e2        host                host                local
d8c3c61003f1        none                null                local
Here is a simplified version of your docker-compose setup, with only postgres:
version: '2'
services:
  postgres:
    image: postgres
    container_name: postgres  # compose v2 has no service-level "name:" key; container_name names the container
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
    ports:
      - '5432:5432'
    networks:
      - random
networks:
  random:
I gave the container the name postgres (via container_name) and called the service postgres, I created a network called 'random' (the last two lines), and I added the postgres service to that network. If you don't specify a network, you will see that docker-compose creates its own self-named network.
After starting docker-compose you will have 4 networks: the new bridge network called random, plus the three defaults.
Check which network your docker-compose environment was created in by inspecting, for example, your postgres container.
$ docker inspect postgres
Mine is in the network 'random':
"Networks": {
    "random": {..
Now start your mystore container in the same network:
$ docker run -p 4000:8000 --network=random mystore
You can check again with docker inspect. To be sure, you can exec inside your mystore container and try to reach postgres by name; both are deployed inside the same network, so your container should be able to resolve the name postgres to an address.
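For example (the container name here is a placeholder for whatever name or ID your mystore run was given; ping is often missing from slim images, so getent hosts is a safer probe on Debian-based images):

$ docker exec -it <mystore-container> sh
# getent hosts postgres   # prints an address if the name resolves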
In your docker-compose.yml, define a network and attach your containers to it.
To each service definition, add:
networks:
  - mynetwork
and then, at the end of the file, add:
networks:
  mynetwork:
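Applied to the compose file from the question, a minimal sketch (unrelated keys omitted):

version: '2'
services:
  db:
    image: postgres
    networks:
      - mynetwork
  web:
    build: .
    networks:
      - mynetwork
networks:
  mynetwork: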