How to dockerize my dotnet core + postgresql app?

I have a dotnet core application, created with the Angular template, that communicates with a PostgreSQL database.
On my local machine, I run the following command on my terminal to run the database container:
docker run -p 5432:5432 --name accman-postgresql -e POSTGRES_PASSWORD=mypass -d -v 'accman-postgresql-volume:/var/lib/postgresql/data' postgres:10.4
Then, by pressing F5 in VS Code, I see that my application works great.
To dockerize my application, I added this file to the root of the project.
Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build-env
# install nodejs for angular, webpack middleware
RUN apt-get update
RUN apt-get -f install
RUN apt-get install -y wget
RUN wget -qO- https://deb.nodesource.com/setup_11.x | bash -
RUN apt-get install -y build-essential nodejs
WORKDIR /app
# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore
# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out
# Build runtime image
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "Web.dll"]
Now I think I have to create a docker-compose file. Would you please help me create my docker-compose.yml file?
Thanks,

I figured it out. Here is the final version of my docker-compose.yml file:
version: '3'
services:
  web:
    container_name: 'accman-web-app'
    image: 'accman-web'
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - '8090:80'
    depends_on:
      - 'postgres'
    networks:
      - accman-network
  postgres:
    ports:
      - '5432:5432'
    container_name: accman-postgresql
    environment:
      - POSTGRES_PASSWORD=mypass
    volumes:
      - 'accman-postgresql-volume:/var/lib/postgresql/data'
    image: 'postgres:10.4'
    networks:
      - accman-network
volumes:
  accman-postgresql-volume:
networks:
  accman-network:
    driver: bridge
You can use composerize to convert docker run commands into services for your docker-compose file.
Now you can run the following commands, one after the other:
docker-compose build
docker-compose up
And voila!
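One detail worth double-checking: inside the compose network, the web app has to reach PostgreSQL through the service name postgres instead of localhost. A minimal sketch, assuming the connection string is passed in as an environment variable (the variable name and database name here are illustrative, not taken from the project above):
  web:
    # ... same web service as above ...
    environment:
      # Host is the compose service name, not localhost; 'accman' is a placeholder DB name
      - ConnectionStrings__DefaultConnection=Host=postgres;Port=5432;Database=accman;Username=postgres;Password=mypass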

Related

Connecting my Flask app with the MongoDB Docker container

I am trying to connect my Docker container running Gunicorn with another container running MongoDB.
This is my Dockerfile for building the container
FROM python:3.8.10-buster
COPY requirements.txt /
RUN apt-get update
RUN apt-get -y install build-essential libpoppler-cpp-dev pkg-config
RUN apt install -y libsm6 libxext6
RUN apt-get install -y libxrender-dev
RUN pip3 install -r /requirements.txt
COPY . /app
WORKDIR /app
RUN ["chmod", "+x", "./gunicorn.sh"]
EXPOSE 4444
ENTRYPOINT ["./gunicorn.sh"]
I created the following docker-compose.yml for running the built container
version: '3.7'
services:
  web:
    build: .
    image: 'flask/flask_docker'
    container_name: 'xyz'
    ports:
      - 4444:4444
The following is the docker-compose file for MongoDB:
version: '3.7'
services:
  database:
    image: 'mongo:3.6.8'
    container_name: 'transactionsDB'
    environment:
      - MONGO_INITDB_DATABASE=abc
      - MONGO_INITDB_ROOT_USERNAME=abc
      - MONGO_INITDB_ROOT_PASSWORD=abc#v_1
    ports:
      - '5555:27017'
    volumes:
      - /home/ubuntu/abc/:/data/db
I used the following connection string from Flask to connect to MongoDB:
mongodb://abc:abc#v_1@transactionsDB:5555/abc
But every time I get the following error: pymongo.errors.ServerSelectionTimeoutError
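A possible explanation, offered as an assumption rather than a confirmed fix: the two separate compose files create separate default networks, so the web container cannot resolve the transactionsDB host at all, and even on a shared network the connection has to target the container port 27017 rather than the published 5555. The # in the password also needs to be percent-encoded in the URI. A minimal sketch combining both services into one compose file (the MONGO_URI variable name is illustrative; the Flask code would need to read it):
version: '3.7'
services:
  web:
    build: .
    image: 'flask/flask_docker'
    container_name: 'xyz'
    ports:
      - 4444:4444
    depends_on:
      - database
    environment:
      # service name as host, container port 27017, '#' encoded as %23,
      # authSource=admin because the root user is created in the admin DB
      - MONGO_URI=mongodb://abc:abc%23v_1@database:27017/abc?authSource=admin
  database:
    image: 'mongo:3.6.8'
    container_name: 'transactionsDB'
    environment:
      - MONGO_INITDB_DATABASE=abc
      - MONGO_INITDB_ROOT_USERNAME=abc
      - MONGO_INITDB_ROOT_PASSWORD=abc#v_1
    volumes:
      - /home/ubuntu/abc/:/data/db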

Node app cannot connect to mongo image on my ubuntu instance

MongoServerSelectionError: getaddrinfo ENOTFOUND mongo
is the error I'm getting when I try to run docker-compose build.
This is my docker-compose.yml file
version: '3'
services:
  app:
    container_name: node-app
    build: .
    ports:
      - "3000:3000"
    restart: always
    volumes:
      - ./uploads:/app/uploads
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    volumes:
      - ./data:/data/db
    ports:
      - "27017:27017"
    command: mongod
as well as my Dockerfile
# https://docs.docker.com/samples/library/node/
ARG NODE_VERSION=12.10.0
# https://github.com/Yelp/dumb-init/releases
ARG DUMB_INIT_VERSION=1.2.2
# Build container
FROM node:${NODE_VERSION}-alpine AS build
ARG DUMB_INIT_VERSION
WORKDIR /home/node
RUN apk add --no-cache build-base python2 yarn && \
wget -O dumb-init -q https://github.com/Yelp/dumb-init/releases/download/v${DUMB_INIT_VERSION}/dumb-init_${DUMB_INIT_VERSION}_amd64 && \
chmod +x dumb-init
ADD . /home/node
RUN yarn install && yarn build && yarn cache clean
# Runtime container
FROM node:${NODE_VERSION}-alpine
WORKDIR /home/node
COPY --from=build /home/node /home/node
EXPOSE 3000
CMD ["./dumb-init", "yarn", "start"]
My connection string in the code is
mongodb://mongo:27017/{db_name}
When I run docker ps -a, I can clearly see that my mongo container is there. I've googled this issue to no end, and tried ridiculous combinations of connection strings to try to connect to mongo, but does anyone have any supplemental information or debugging advice to overcome this?
The issue is most probably that you start mongod without passing the --bind_ip_all parameter. By default, mongod only binds to 127.0.0.1 as also stated by other SO posts.
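A minimal sketch of that change, applied to the mongo service from the compose file above:
  mongo:
    container_name: mongo
    image: mongo
    volumes:
      - ./data:/data/db
    ports:
      - "27017:27017"
    # keep mongod listening on all interfaces when overriding the default command
    command: mongod --bind_ip_all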

Postgres Initialize script not working Docker version 3.4

Trying to dockerize an application, and in my application I have the following
docker-compose.yml
version: '3.4'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - database
    ports:
      - "3000:3000"
    volumes:
      - .:/app
      - gem_cache:/usr/local/bundle/gems
    env_file: .env
    environment:
      RAILS_ENV: development
  database:
    image: postgres:10.12
    volumes:
      - ./init.sql/:/docker-entrypoint-initdb.d/init.sql
      - db_data:/var/lib/postgresql/data
volumes:
  gem_cache:
  db_data:
In my init.sql file
CREATE USER user1 WITH PASSWORD 'password';
ALTER USER user1 WITH SUPERUSER;
I have already run chmod +x init.sql.
In my .env file I have the following:
DATABASE_NAME=tools_development
DATABASE_USER=user1
DATABASE_PASSWORD=password
DATABASE_HOST=database
And this is my Dockerfile
FROM ruby:2.7.0
ENV BUNDLER_VERSION=2.1.4
RUN apt-get -y update --fix-missing
RUN apt-get install -y bash git build-essential nodejs libxml2-dev openssh-server libssl-dev libreadline-dev zlib1g-dev postgresql-client libcurl4-openssl-dev libxml2-dev libpq-dev tzdata
RUN gem install bundler -v 2.1.4
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle check || bundle install
COPY . ./
ENTRYPOINT ["./entrypoint.sh"]
But each time I run docker-compose run --build and try to run my application, I get the error:
could not translate host name "database" to address: Name or service not known
I have tried everything possible but still the same error.
Does anyone have any idea on how to fix this issue?
I know the issue is happening because the Postgres initialization scripts are not running. I have seen a lot of options online and I have tried everything, but I am still facing the same error.
Any help is appreciated
Thanks
From the postgres Docker image documentation you can see that POSTGRES_USER and POSTGRES_PASSWORD are the environment variables needed to set up the postgres container.
You could add these environment variables to your .env file, so the file will be as follows:
.env
DATABASE_NAME=tools_development
DATABASE_USER=user1
DATABASE_PASSWORD=password
DATABASE_HOST=database
POSTGRES_USER=user1
POSTGRES_PASSWORD=password
POSTGRES_DB=tools_development
These environment variables will be used by the postgres container to initialize the DB and create the user, so you can get rid of the init.sql file.
After that, you need to add a reference to the .env file in the database (postgres:10.12) service.
So your docker-compose file should be as follows:
docker-compose.yml
...
  database:
    image: postgres:10.12
    volumes:
      - db_data:/var/lib/postgresql/data
    env_file: .env
...
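One caveat worth adding, offered as an assumption about why the scripts never seemed to run: the postgres image only applies the POSTGRES_* variables (and anything in docker-entrypoint-initdb.d) when the data directory is empty, so the existing db_data volume has to be removed before the change takes effect:
docker-compose down -v
docker-compose up --build
Note that down -v deletes the db_data volume and therefore any existing database contents.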

can't link mongo docker container

I have an image (gepick:latest) with a Node app, created from this Dockerfile:
FROM centos:7
# Create app directory
WORKDIR /usr/src/app
RUN curl --silent --location https://rpm.nodesource.com/setup_8.x | bash -
RUN yum install -y nodejs
RUN curl --silent --location https://dl.yarnpkg.com/rpm/yarn.repo | tee /etc/yum.repos.d/yarn.repo
RUN rpm --import https://dl.yarnpkg.com/rpm/pubkey.gpg
RUN yum install -y yarn
RUN yarn
COPY . .
EXPOSE 8080
CMD [ "yarn", "test-matches-collecting-job"]
My goal is to run tests in Docker, but it requires MongoDB.
docker run gepick:latest:
...
Mongoose default connection error: MongoError: failed to connect to server [localhost:27017] on first connect [MongoError: connect ECONNREFUSED 127.0.0.1:27017]
...
I tried linking a mongo:4 image's container with docker run --link 0d24c3a35d5a gepick:latest but got the same error.
When you launch your containers using a docker-compose yaml file, Docker bridges the containers together and lets you launch the mongo container before the other containers that rely on mongo being active ... try something like this:
cat my-docker-compose.yml
version: '3'
services:
  my-gepick:
    image: gepick:latest
    container_name: blah_gepick
    restart: always
    depends_on:
      - loudmongo
    volumes:
      - /cryptdata5/var/log/blobs:/blobs
      - /webapp/enduser/bundle:/tmp
    environment:
      - MONGO_SERVICE_HOST=loudmongo
      - MONGO_SERVICE_PORT=$GKE_MONGO_PORT
      - MONGO_URL=mongodb://loudmongo:$GKE_MONGO_PORT/test
      - METEOR_SETTINGS=${METEOR_SETTINGS}
      - MAIL_URL=smtp://support@${GKE_DOMAIN_NAME}:blah@loudmail:587/
    links:
      - loudmongo
    ports:
      - 127.0.0.1:3000:3000
    working_dir: /tmp
    command: /usr/bin/supervisord -c /etc/supervisor/conf.d/supervisord.conf
  loudmongo:
    image: mongo
    container_name: loud_mongo
    restart: always
    ports:
      - 127.0.0.1:$GKE_MONGO_PORT:$GKE_MONGO_PORT
    volumes:
      - /cryptdata7/var/data/db:/data/db
so your launch sequence may look like
docker-compose -f /somedir/my-docker-compose.yml pull
docker-compose -f /somedir/my-docker-compose.yml up -d
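Adapted to the setup in the question, a minimal sketch might look like this (whether the gepick app actually reads MONGO_URL is an assumption; the key point is that the connection string must use the mongo service name instead of localhost):
version: '3'
services:
  gepick:
    image: gepick:latest
    depends_on:
      - mongo
    environment:
      # hostname is the compose service name, not 127.0.0.1
      - MONGO_URL=mongodb://mongo:27017/test
  mongo:
    image: mongo:4
    volumes:
      - ./data/db:/data/db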

Docker/Mongodb data not persistent

I am running a Rails API server with MongoDB. Everything worked perfectly fine, and then I started to move my server into Docker.
Unfortunately, whenever I stop my server (docker-compose down) and restart it, all data is lost and the DB is completely empty.
This is my docker-compose file:
version: '2'
services:
  mongodb:
    image: mongo:3.4
    command: mongod
    ports:
      - "27017:27017"
    environment:
      - MONGOID_ENV=test
    volumes:
      - /data/db
  api:
    build: .
    depends_on:
      - 'mongodb'
    ports:
      - "3001:3001"
    command: bundle exec rails server -p 3001 -b '0.0.0.0'
    environment:
      - RAILS_ENV=test
    links:
      - mongodb
And this is my dockerfile:
FROM ruby:2.5.1
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
ENV APP_HOME /app
RUN mkdir $APP_HOME
WORKDIR $APP_HOME
COPY Gemfile* $APP_HOME/
RUN bundle install
COPY . $APP_HOME
RUN chown -R nobody:nogroup $APP_HOME
USER nobody
ENV RACK_ENV test
ENV MONGOID_ENV test
EXPOSE 3001
Any idea what's missing here?
Thanks,
Michael
In docker-compose, I think your "volumes" field in the mongodb service isn't quite right:
volumes:
  - /data/db
should be:
volumes:
  - ./localFolder:/data/db
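An alternative worth mentioning: a bare /data/db entry creates an anonymous volume, and after docker-compose down the recreated container gets a fresh anonymous volume, so the old data is never reattached. A named volume also fixes the persistence problem without binding to a host folder; a minimal sketch:
version: '2'
services:
  mongodb:
    image: mongo:3.4
    volumes:
      - mongo_data:/data/db
volumes:
  mongo_data: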