Run MongoDB and RabbitMQ in Dockerfile - mongodb

I'm trying to run MongoDB and RabbitMQ in Docker using a Dockerfile to test my Python app. What's the best way to do that?
I did:
FROM python:latest
RUN apt-get update
RUN apt-get install -y rabbitmq-server wget
RUN wget -qO - https://www.mongodb.org/static/pgp/server-5.0.asc | sudo apt-key add -
RUN touch /etc/apt/sources.list.d/mongodb-org-5.0.list
RUN apt-get install -y mongodb-org
RUN sudo apt-get update
RUN sudo apt-get install -y mongodb-org
but it doesn't seem to work.

Using a Dockerfile you can only run one service at a time. If you want to run two services at the same time, you have to use docker-compose.
Here is a docker-compose.yaml you can use to run MongoDB and RabbitMQ at the same time:
version: '3.7'
services:
  mongodb_container:
    image: mongo:latest
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: rootpassword
    ports:
      - 27017:27017
    volumes:
      - mongodb_data_container:/data/db
  rabbitmq3:
    container_name: "rabbitmq"
    image: rabbitmq:3.8-management-alpine
    environment:
      - RABBITMQ_DEFAULT_USER=myuser
      - RABBITMQ_DEFAULT_PASS=mypassword
    ports:
      # AMQP protocol port
      - '5672:5672'
      # HTTP management UI
      - '15672:15672'
volumes:
  mongodb_data_container:
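Once the stack is up (docker compose up -d), the Python app under test can reach both services through the published ports on the host. A minimal sketch, assuming pymongo and pika as the app's client libraries (the queue name is only a placeholder):
import pika
from pymongo import MongoClient

# MongoDB: root credentials come from the compose environment above
mongo = MongoClient("mongodb://root:rootpassword@localhost:27017/")
print(mongo.admin.command("ping"))

# RabbitMQ: AMQP on the published port 5672
credentials = pika.PlainCredentials("myuser", "mypassword")
connection = pika.BlockingConnection(
    pika.ConnectionParameters(host="localhost", port=5672, credentials=credentials)
)
channel = connection.channel()
channel.queue_declare(queue="test")   # placeholder queue name
connection.close()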

Related

Error 111 connection refused, topology type: unknown

I am trying to containerise a Python Flask application which uses MongoDB as its database.
The error I am getting is the same whether I run the project's Dockerfile or the docker-compose file.
It works fine when I run it locally on my machine.
My Dockerfile:
FROM python:3
COPY requirements.txt ./
WORKDIR /
RUN apt update -y
RUN apt install build-essential libdbus-glib-1-dev libgirepository1.0-dev -y
RUN apt-get install python-dev -y
RUN apt-get install libcups2-dev -y
RUN apt install libgirepository1.0-dev -y
RUN pip install pycups
RUN pip install cmake
RUN pip install dbus-python
RUN pip install reportlab
RUN pip install PyGObject
RUN pip install -r requirements.txt
COPY . .
CMD ["python3","main.py"]
My docker-compose.yml:
version: '2.0'
networks:
  app-tier:
    driver: bridge
services:
  myapp:
    image: 'chatapp'
    networks:
      - app-tier
    links:
      - mongodb
    ports:
      - 8000:8000
    depends_on:
      - mongodb
  mongodb:
    image: 'mongo'
    networks:
      - app-tier
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - 27018:27017
I tried linking the two containers via --link but I am unable to figure out what the actual problem is.
Your app is trying to reach MongoDB at localhost:27017.
For your app, localhost is the container the app itself is running in.
To access the MongoDB container you must use the service name from your docker-compose.yaml, in your case: mongodb.
So the connection to the db should be: mongodb:27017.
To access MongoDB directly from your host machine you use localhost:27018; in that case localhost refers to your host system (your PC).
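As a minimal sketch of what that looks like from inside the myapp container (the database name here is only an assumption about the chatapp application):
from pymongo import MongoClient

# Use the compose service name, not localhost; inside the network the internal port 27017 applies
client = MongoClient("mongodb://mongodb:27017/")
db = client["chatapp"]   # hypothetical database name
print(db.list_collection_names())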
Your docker-compose file is also a bit outdated. You can update it like so:
version: '3.9'
networks:
  app-tier:
    driver: bridge
services:
  myapp:      # this is a service name
    image: 'chatapp'
    networks:
      - app-tier
    ports:
      - 8000:8000
    depends_on:
      - mongodb
  mongodb:    # this is the service name to connect to from the app container
    image: 'mongo'
    networks:
      - app-tier
    ports:
      - 27018:27017
You can also remove the ALLOW_EMPTY_PASSWORD variable.

What is a proper way of installing dependencies in Airflow?

I am trying to install git inside the Airflow scheduler with apt-get install -y git (see docker-compose.yml below) and I get sudo: no tty present and no askpass program specified.
Is installing this package in "command" even a good direction here?
docker-compose.yml
services:
  postgres:
    ...
  init:
    ...
  webserver:
    ...
  scheduler:
    image: *airflow_image
    restart: always
    depends_on:
      - postgres
    volumes:
      - ./dags:/opt/airflow/dags
    entrypoint: ["/bin/sh"]
    command: ["-c",
      "apt-get install -y git \
      && pip install -r /opt/airflow/tmp/requirements.txt \
      && airflow scheduler"]
volumes:
  logs:

What is the path for application.properties (or similar file) in docker container?

I am dockerizing a Spring Boot application (with PostgreSQL). I want to overwrite the application.properties in the Docker container with my own application.properties.
My docker-compose.yml file looks like this:
version: '2'
services:
  API:
    image: 'api-docker.jar'
    ports:
      - "8080:8080"
    depends_on:
      - PostgreSQL
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://PostgreSQL:5432/postgres
      - SPRING_DATASOURCE_USERNAME=postgres
      - SPRING_DATASOURCE_PASSWORD=password
      - SPRING_JPA_HIBERNATE_DDL_AUTO=update
  PostgreSQL:
    image: postgres
    volumes:
      - C:/path/to/my/application.properties:/path/of/application.properties/in/container
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=password
      - POSTGRES_USER=postgres
      - POSTGRES_DB=postgres
I am doing this to overwrite the application.properties in the container with my own application.properties file so that the data gets stored on localhost.
I tried the path /opt/application.properties but it didn't work.
You have two solutions:
1) First solution
Create application.properties with env variables:
mycustomproperties1: ${MY_CUSTOM_ENV1}
mycustomproperties2: ${MY_CUSTOM_ENV2}
I advise you to create different application.properties files (application-test, application-prod, etc.)
2) Second solution
Create a Dockerfile:
FROM debian:buster
RUN apt-get update --fix-missing && apt-get dist-upgrade -y
RUN apt install wget -y
RUN apt install apt-transport-https ca-certificates wget dirmngr gnupg software-properties-common -y
RUN wget -qO - https://adoptopenjdk.jfrog.io/adoptopenjdk/api/gpg/key/public | apt-key add -
RUN add-apt-repository --yes https://adoptopenjdk.jfrog.io/adoptopenjdk/deb/
RUN apt update
RUN apt install adoptopenjdk-8-hotspot -y
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","-Dspring.config.location=file:///config/application.properties","/app.jar"]
Or add an env variable in docker-compose:
SPRING_CONFIG_LOCATION=file:///config/application.properties
and modify docker-compose:
version: '2'
services:
  API:
    image: 'api-docker.jar'
    ports:
      - "8080:8080"
    depends_on:
      - PostgreSQL
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://PostgreSQL:5432/postgres
      - SPRING_DATASOURCE_USERNAME=postgres
      - SPRING_DATASOURCE_PASSWORD=password
      - SPRING_JPA_HIBERNATE_DDL_AUTO=update
      - SPRING_CONFIG_LOCATION=file:///config/application.properties
    volumes:
      - C:/path/to/my/application.properties:/config/application.properties
In case anybody comes across the same problem, here is the solution.
I am trying to use my localhost database instead of an in-memory database (stored in the container). This is my docker-compose.yml configuration:
version: '2'
services:
  API:
    image: 'api-docker.jar' # (your jar file name)
    volumes:
      - path/to/new/application.properties:/config/env
    ports:
      - "8080:8080"
You need to provide a new application.properties file which contains the configuration for storing the data in your local database (it could be a copy of your actual application.properties). This file is mounted over the container's config location, and the path to that is /config/env (as mentioned in the yml file).

Connecting my flask with the mongodb Docker

I am trying to connect my Docker container running gunicorn with another container running MongoDB.
This is my Dockerfile for building the container
FROM python:3.8.10-buster
COPY requirements.txt /
RUN apt-get update
RUN apt-get -y install build-essential libpoppler-cpp-dev pkg-config
RUN apt install -y libsm6 libxext6
RUN apt-get install -y libxrender-dev
RUN pip3 install -r /requirements.txt
COPY . /app
WORKDIR /app
RUN ["chmod", "+x", "./gunicorn.sh"]
EXPOSE 4444
ENTRYPOINT ["./gunicorn.sh"]
I created the following docker-compose.yml for running the built container
version: '3.7'
services:
  web:
    build: .
    image: 'flask/flask_docker'
    container_name: 'xyz'
    ports:
      - 4444:4444
Following is the docker-compose for MongoDB:
version: '3.7'
services:
  database:
    image: 'mongo:3.6.8'
    container_name: 'transactionsDB'
    environment:
      - MONGO_INITDB_DATABASE=abc
      - MONGO_INITDB_ROOT_USERNAME=abc
      - MONGO_INITDB_ROOT_PASSWORD=abc#v_1
    ports:
      - '5555:27017'
    volumes:
      - /home/ubuntu/abc/:/data/db
I used the following as the connection string from Flask to connect to MongoDB:
mongodb://abc:abc#v_1#transactionsDB:5555/abc
But every time I get the following error: pymongo.errors.ServerSelectionTimeoutError
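Not a definitive fix, but two things are worth checking here: the password contains '#', which is a reserved character and has to be percent-encoded in a MongoDB URI, and the Flask container can only reach transactionsDB:27017 by name if both containers share a Docker network (from the host it would be localhost:5555 instead). A minimal pymongo sketch under those assumptions:
from urllib.parse import quote_plus
from pymongo import MongoClient

user = quote_plus("abc")
password = quote_plus("abc#v_1")   # '#' must be percent-encoded in the URI

# Container name and internal port, assuming both containers are on the same network
uri = f"mongodb://{user}:{password}@transactionsDB:27017/abc?authSource=admin"
client = MongoClient(uri, serverSelectionTimeoutMS=5000)
print(client.admin.command("ping"))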

Docker/Mongodb data not persistent

I am running a Rails API server with MongoDB; everything worked perfectly fine, and I started to move my server into Docker.
Unfortunately, whenever I stop my server (docker-compose down) and restart it, all data is lost and the db is completely empty.
This is my docker-compose file:
version: '2'
services:
  mongodb:
    image: mongo:3.4
    command: mongod
    ports:
      - "27017:27017"
    environment:
      - MONGOID_ENV=test
    volumes:
      - /data/db
  api:
    build: .
    depends_on:
      - 'mongodb'
    ports:
      - "3001:3001"
    command: bundle exec rails server -p 3001 -b '0.0.0.0'
    environment:
      - RAILS_ENV=test
    links:
      - mongodb
And this is my dockerfile:
FROM ruby:2.5.1
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
ENV APP_HOME /app
RUN mkdir $APP_HOME
WORKDIR $APP_HOME
COPY Gemfile* $APP_HOME/
RUN bundle install
COPY . $APP_HOME
RUN chown -R nobody:nogroup $APP_HOME
USER nobody
ENV RACK_ENV test
ENV MONGOID_ENV test
EXPOSE 3001
Any idea what's missing here?
Thanks,
Michael
In docker-compose, I think your "volumes" field in the mongodb service isn't quite right. I think
volumes:
  - /data/db
should be:
volumes:
  - ./localFolder:/data/db