Having some issues with docker-compose - postgresql

I'm pretty new to Flask, Postgres, and Docker. I am trying to dockerize our application so that a new developer does not have to worry about installing Python dependencies before they can start developing. I currently have two containers, one for the Flask app and one for the Postgres DB.
The issue is that when I put the two in a Compose file, they cannot connect to each other. I have defined my own network to ensure that they are running on the same one, but that still does not solve the problem. When I run the Flask app outside of a container and try to connect to the containerized Postgres DB, it connects with no trouble and works fine.
Any help would be much appreciated
Thanks in advance
Dockerfile for the Flask app
FROM python:3.7-alpine3.7
LABEL maintainer="uwblueprint"
LABEL org.label-schema.schema-version="1.0"
LABEL org.label-schema.name="elevate-api"
LABEL org.label-schema.vcs-url="https://gitlab.com/uwblueprint/elevate-api/"
LABEL org.label-schema.vendor="UW Blueprint"
## Copy source and environment-config files.
COPY app/ ./app/
WORKDIR /app
COPY Pipfile Pipfile.lock ./
## Install external dependencies.
RUN apk add --no-cache libpq
## Install application dependencies.
RUN apk add --no-cache --virtual build-deps \
        gcc musl-dev postgresql-dev libffi-dev && \
    pip3 install --upgrade pip pipenv gunicorn && \
    pipenv --python 3.7 && \
    pipenv install --system --deploy && \
    apk del build-deps
EXPOSE 5000
CMD ["flask", "run"]
Docker Compose File
version: '3.7'
services:
  database:
    image: registry.gitlab.com/uwblueprint/elevate-api/postgres:latest
    build:
      context: ./external/postgres
      cache_from:
        - registry.gitlab.com/uwblueprint/elevate-api/postgres:latest
        - postgres:10.5-alpine
    container_name: database
    env_file: ./external/postgres/configs/.env
    volumes:
      - postgres.data:/var/lib/postgresql/data # persist data
    ports:
      - "5432:5432"
    networks:
      - "api_net"
  api:
    build: .
    ports:
      - "5000:5000"
    networks:
      - "api_net"
networks:
  api_net:
volumes:
  postgres.data:
Python SQLAlchemy Code
import os
from flask_sqlalchemy import SQLAlchemy
from app import app
# Make sure that DB_PASS is an environment variable:
##if 'DB_PASS' not in os.environ:
## raise EnvironmentError("Could not find environment variable 'DB_PASS',")
##user = os.environ.get("DB_USER", "robot")
user ="fakeUser"
password = "fakePassword"
##password = os.environ["DB_PASS"]
host = os.environ.get("DB_HOST", "0.0.0.0")
port = os.environ.get("DB_PORT", "5432")
name = os.environ.get("DB_NAME", "elevate")
# Configure flask-sqlalchemy.
app.config["SQLALCHEMY_DATABASE_URI"] = "postgresql://%s:%s#%s:%s/%s" % (
user, password, host, port, name
)
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False
# TODO: Find the optimal pool recycle time for PostgreSQL.
app.config["SQLALCHEMY_POOL_RECYCLE"] = 7200 # in milliseconds
# The db object is aware of the Flask application lifecycle, and will do things
# like close the database session when a Flask request ends.
#
# This saves a lot of headaches compared to the plain SQLAlchemy library, where
# I kept running into issues about database-session related objects trying
# to access the session when I had closed it prematurely (due to SQLAlchemy's
# lazy-loading data access methods).
db = SQLAlchemy(app)

Set DB_HOST to "database", which is the name of the database service (and its container).

Inside a docker-compose project, services can connect to each other by service name. You can read about this in the documentation:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
Set host (or the DB_HOST environment variable) to database. 127.0.0.1 won't work, because inside the api container it refers to that container itself.
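Putting it together, a minimal sketch of the corrected connection block, assuming the compose file above (where the Postgres service is named database); the fallback values are the placeholders from the question, and app is the Flask app imported in the question's module:
import os

user = os.environ.get("DB_USER", "fakeUser")
password = os.environ.get("DB_PASS", "fakePassword")
host = os.environ.get("DB_HOST", "database")  # compose service name, not 0.0.0.0
port = os.environ.get("DB_PORT", "5432")
name = os.environ.get("DB_NAME", "elevate")

# note the "@" between the credentials and the host
app.config["SQLALCHEMY_DATABASE_URI"] = "postgresql://%s:%s@%s:%s/%s" % (
    user, password, host, port, name
)
You can then set DB_HOST explicitly under the api service's environment in docker-compose.yml, so the same code works both inside and outside a container.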

Related

Can't connect with docker-compose to Postgres database

I'm trying to build a docker-compose file that will spin up my EF Core web API project, connecting to my Postgres database.
I'm having a hard time getting the EF project to connect to the database.
This is what I currently have for my docker-compose.yml:
version: '3.8'
services:
  web:
    container_name: 'mybackendcontainer'
    image: 'myuser/mybackend:0.0.6'
    build:
      context: .
      dockerfile: backend.dockerfile
    ports:
      - 8080:80
    depends_on:
      - postgres
    networks:
      - mybackend-network
  postgres:
    container_name: 'postgres'
    image: 'postgres:latest'
    environment:
      - POSTGRES_USER=username
      - POSTGRES_PASSWORD=MySuperSecurePassword!
      - POSTGRES_DB=MyDatabase
    networks:
      - mybackend-network
    expose:
      - 5432
    volumes:
      - ./db-data/:/var/lib/postgresql/data/
  pgadmin:
    image: dpage/pgadmin4
    ports:
      - 15433:80
    env_file:
      - .env
    depends_on:
      - postgres
    networks:
      - mybackend-network
    volumes:
      - ./pgadmin-data/:/var/lib/pgadmin/
networks:
  mybackend-network:
    driver: bridge
And my web project docker file looks like this:
# Get base SDK image from Microsoft
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build-env
WORKDIR /app
# Copy the CSPROJ file and restore any dependencies (via NUGET)
COPY *.csproj ./
RUN dotnet restore
# Copy the project files and build our release
COPY . ./
RUN dotnet publish -c Release -o out
# Generate runtime image - do not include the whole SDK to save image space
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
EXPOSE 80
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "MyBackend.dll"]
And my connection string looks like this:
User ID =bootcampdb;Password=MySuperSecurePassword!;Server=postgres;Port=5432;Database=MyDatabase; Integrated Security=true;Pooling=true;
Currently I have two problems:
I'm getting Npgsql.PostgresException (0x80004005): 57P03: the database system is starting up when I run docker-compose up. I tried to add a healthcheck to my Postgres DB, but that did not work. When I go to the Docker Desktop app and start my backend again, that message goes away and I get my second problem...
Secondly, after the DB has started, it says: FATAL: password authentication failed for user "username". It looks like it's not creating my user for the database. I even changed it to not use .env files and put the values directly in my docker-compose file, but it's still not working. I've tried docker-compose down -v to ensure my volumes get deleted.
Sorry if these are silly questions; I'm still new to containerization and trying to get this to work.
Any help will be appreciated!
Problem 1: Having depends_on only means that docker-compose waits until your postgres container has started before starting the web container. The postgres container needs some time to become ready to accept connections, and if you attempt to connect before it's ready, you get the error you're seeing. You need to code your backend to wait until Postgres is ready, by retrying the connection with a delay (see the sketch after Problem 2).
Problem 2: Postgres only creates the user and database if no database exists yet. You probably have an existing database in ./db-data/ on the host. Delete ./db-data/ and Postgres should create the user and database using the environment variables you've set.
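The backend in this question is C#, but the retry idea for Problem 1 is language-agnostic; here is a minimal sketch in Python using only the standard library, with a hypothetical helper wait_for_port and the host name and port taken from the compose file above. Note that a plain TCP check is only a rough proxy for readiness, since Postgres can accept a connection and still answer that it is starting up; a driver-level retry or pg_isready is more accurate.
import socket
import time

def wait_for_port(host, port, timeout=60.0):
    """Block until host:port accepts TCP connections, or raise after timeout."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            with socket.create_connection((host, port), timeout=2):
                return  # the port is accepting connections
        except OSError:
            if time.monotonic() > deadline:
                raise TimeoutError("%s:%s not ready after %ss" % (host, port, timeout))
            time.sleep(1)

# inside the compose network, the hostname is the service name
wait_for_port("postgres", 5432)
In Compose you can get similar behavior declaratively with a healthcheck on the postgres service plus depends_on with condition: service_healthy, where your Compose version supports it.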

Running psql command from a docker container

I'm having trouble connecting a Postgres DB to my app using docker-compose.
My understanding of Docker Compose is that it lets me combine two containers so that they can communicate with each other. Suppose I have an app in the app container that runs the psql command (just a one-liner Python script calling os.system("psql")). Since the app container does not have Postgres installed, it won't be able to run psql by itself. However, I thought combining the two containers in docker-compose.yml would let me run psql, but apparently not.
What am I missing here?
I am using two postgres images because I'm trying to find regression bugs between the two DBMS versions.
version: "3"
services:
app:
image: "app:1.0"
depends_on:
- postgres9
- postgres12
ports:
- 8080:80
postgres9:
image: postgres:9.6
environment:
POSTGRES_PASSWORD: mysecretpassword
POSTGRES_USER: postgres
POSTGRES_DB: test_bd
ports:
- '5432:5432'
postgres12:
image: postgres:12
environment:
POSTGRES_PASSWORD: mysecretpassword
POSTGRES_USER: postgres
POSTGRES_DB: test_bd
ports:
- '5435:5435'
Each Docker container has a self-contained filesystem. You can never directly run commands from the host or from other containers' filesystems; anything you want to run needs to be installed in the container (really, in its image's Dockerfile).
If you want to run a tool like psql, it needs to be installed in your image. You don't say what your base image is, but if it's based on Debian or Ubuntu, you need to install the postgresql-client package:
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get install --no-install-recommends --assume-yes \
      postgresql-client
The right approach here is to add a standard Python PostgreSQL client library, like psycopg2, to your project's Python Pipfile, setup.py, and/or requirements.txt, and use that library instead of shelling out to psql, as sketched below. You will also need the PostgreSQL C library header files to install that package; instead of postgresql-client, install the Debian libpq-dev package.
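For example, a minimal sketch using psycopg2 (assuming psycopg2 or psycopg2-binary is installed, and reusing the service name and credentials from the compose file above):
import psycopg2

conn = psycopg2.connect(
    host="postgres12",           # the compose service name, not localhost
    port=5432,                   # the in-container port, not the published one
    user="postgres",
    password="mysecretpassword",
    dbname="test_bd",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone()[0])
conn.close()
With psycopg2-binary, libpq comes bundled; building psycopg2 from source needs the libpq-dev headers mentioned above.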
In your case, the two containers with a postgres instance in each are, from the app container's point of view, separate hosts. What you need is to specify the correct host in the psql command. It might look like this (for the postgres12 container):
PGPASSWORD="mysecretpassword" psql -h postgres12 -d test_bd -U postgres

not able to access Postgres container in docker compose

I have a docker-compose.yml with two services, a custom image (the service called code) and a Postgres server. Below I attach the Dockerfile used to build the image called app for the first service, followed by the docker-compose.yml:
# Dockerfile of custom image
FROM ubuntu:latest
RUN apt-get update \
 && apt-get install -y python3-pip python3-dev \
 && cd /usr/local/bin \
 && ln -s /usr/bin/python3 python \
 && pip3 install --upgrade pip
WORKDIR /usr/app
COPY ./* ${PWD}/
ADD ./requirements.txt ./
RUN pip install -r requirements.txt
ADD ./ ./
# docker-compose.yml
version: '3.2'
services:
  code:
    image: app:latest
    ports:
      - 5001:5001
    networks:
      - docker-elk_elk
  postgres:
    image: postgres:9.5-alpine
    environment:
      POSTGRES_USER: postgres     # define credentials
      POSTGRES_PASSWORD: postgres # define credentials
      POSTGRES_DB: postgres       # define database
    ports:
      - 5432:5432 # Postgres port
    networks:
      - docker-elk_elk
networks:
  docker-elk_elk:
    external: true
Here docker-elk_elk points to a network created by another docker-compose stack (an Elasticsearch-Logstash-Kibana stack), which I want this docker-compose project to join as well. However, when I run docker-compose run code bash and obtain a shell in the code service, curl https://postgres:5432 gives the following message: curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to postgres:5432. I've also tried curl http://postgres:5432, which returned curl: (52) Empty reply from server. Furthermore, docker network ls shows the docker-elk_elk network (clearly created by the Elasticsearch-Logstash-Kibana stack):
NETWORK ID     NAME             DRIVER    SCOPE
8a54fe394fe8   docker-elk_elk   bridge    local
I'm really lost and confused; can someone help me out? If there is any piece of info that might be necessary or helpful and wasn't included above, please let me know.
I forgot to mention that app is just a simple Python application (not a web app, and it doesn't use sophisticated Python libraries).
P.S. Something that perhaps I should have mentioned above: what I want to do is use the Ubuntu container with the app inside to query (and send data to) both the Postgres DB and Elasticsearch (which lives in the other docker-compose stack).

docker-compose succeed but server does not response when request

I have built a RESTful API web service using the Flask framework, with Redis as the main database, MongoDB as a backup store, and Celery as a task queue that stores data into MongoDB in the background.
Then I dockerized my application using docker-compose. Here is my docker-compose.yml:
version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/app
  redis:
    image: "redis:alpine"
    ports:
      - "6379:6379"
  mongo:
    image: "mongo:3.6.5"
    ports:
      - "27017:27017"
    environment:
      MONGO_INITDB_DATABASE: syncapp
Here is my Dockerfile:
# base image
FROM python:3.5-alpine
MAINTAINER xhoix <145giakhang@gmail.com>
# copy just the requirements.txt first to leverage Docker cache
# install all dependencies for Python app
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
# install dependencies in requirements.txt
RUN pip install -r requirements.txt
# copy all content to work directory /app
COPY . /app
# specify the port number the container should expose
EXPOSE 5000
# run the application
CMD ["python", "/app/app.py"]
After running docker-compose up, the app server, Redis, and Mongo all run fine. But when I use Postman or curl to call the API, for example http://127.0.0.1:5000/sync/api/v1.0/users, which should return JSON for all users, the result is Could not get any response: There was an error connecting to http://127.0.0.1:5000/sync/api/v1.0/users.
I have no idea why this happens.
Thanks for any help and suggestions!
I found the cause of the issue:
After an hour of debugging, it turned out that I only needed to change the app host to 0.0.0.0. When mapping ports, Docker binds to 0.0.0.0 by default: when I run docker-compose ps, the PORTS column of each container has the format 0.0.0.0:<port> -> <port>. I'm not sure whether that is the precise cause, but making the change solved the problem.
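For reference, a minimal sketch of the relevant part of app.py; the route and payload are illustrative, not taken from the question:
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/sync/api/v1.0/users")
def get_users():
    return jsonify(users=[])  # placeholder response

if __name__ == "__main__":
    # 0.0.0.0 accepts connections arriving through Docker's published port;
    # the default 127.0.0.1 is only reachable from inside the container.
    app.run(host="0.0.0.0", port=5000)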
If your operating system is Linux, use:
ifconfig -a
If your operating system is Windows, use:
ipconfig /all
Then look for the Docker (or other virtualization) interface and use its IPv4 (inet) address.
Or just use the Docker command:
docker network inspect bridge
and use the gateway IP listed under IPAM.

docker-compose mongodb phoenix, [error] failed to connect: ** (Mongo.Error) tcp connect: connection refused - :econnrefused

Hi, I am getting this error when I try to run docker-compose up with my YAML file.
This is my docker-compose.yml file
version: '3.6'
services:
  phoenix:
    # tell docker-compose which Dockerfile it needs to build
    build:
      context: .
      dockerfile: Dockerfile.development
    # map the port of phoenix to the local dev port
    ports:
      - 4000:4000
    # mount the code folder inside the running container for easy development
    volumes:
      - . .
    # make sure we start mongodb when we start this service
    depends_on:
      - db
  db:
    image: mongo:latest
    volumes:
      - ./data/db:/data/db
    ports:
      - 27017:27017
This is my Dockerfile:
# base Elixir image to start with
FROM elixir:1.6
# install hex package manager
RUN mix local.hex --force
RUN mix local.rebar --force
# install the latest phoenix
RUN mix archive.install https://github.com/phoenixframework/archives/raw/master/phx_new.ez --force
# create app folder
COPY . .
WORKDIR ./
# install dependencies
RUN mix deps.get
# run phoenix in *dev* mode on port 4000
CMD mix phx.server
Is this a problem with my dev.exs setup, or something to do with the compatibility of Docker and Phoenix / Docker and MongoDB?
https://docs.docker.com/compose/compose-file/#depends_on explicitly says:
There are several things to be aware of when using depends_on:
depends_on does not wait for db and redis to be “ready” before starting web - only until they have been started. If you need to wait for a service to be ready, …
and it advises you to implement the logic that waits for mongodb to spin up and be ready to accept connections yourself: https://docs.docker.com/compose/startup-order/
In your case it could be something like:
CMD wait-for-db.sh && mix phx.server
where wait-for-db.sh can be as simple as
#!/bin/bash
until nc -z db 27017; do echo "waiting for db"; sleep 1; done
for which you need nc and wait-for-db.sh available in the container (note the host db, i.e. the service name from docker-compose.yml, rather than localhost).
There are plenty of other alternative tools to test if db container is listening on the target port.
UPDATE:
The network connection between containers is described at https://docs.docker.com/compose/networking/:
When you run docker-compose up, the following happens:
A network called myapp_default is created, where myapp is the name of the directory where docker-compose.yml is stored.
A container is created using phoenix’s configuration. It joins the network myapp_default under the name phoenix.
A container is created using db’s configuration. It joins the network myapp_default under the name db.
Each container can now look up the hostname phoenix or db and get back the appropriate container’s IP address. For example, phoenix’s application code could connect to the URL mongodb://db:27017 and start using the Mongodb database.
It was an issue with my dev environment not connecting to the MongoDB URL specified in docker-compose: instead of localhost, the host should be db, as named in my docker-compose.yml file.
For clarity, for the dev environment: modify config/dev.exs to something like this (replace the app and repo names with your own, and the vars with those for the DB service you use):
config :my_app, MyApp.Repo,
  username: System.get_env("PGUSER"),
  password: System.get_env("PGPASSWORD"),
  database: System.get_env("PGDATABASE"),
  hostname: System.get_env("PGHOST"),
  port: System.get_env("PGPORT")
create a .env file in the root folder of your project (replace with the relevant vars for the DB service used):
PGUSER=some_user
PGPASSWORD=some_password
PGDATABASE=some_database
PGPORT=5432
PGHOST=db
Note that we have added the port. The host can be localhost when running locally, but it should be the service name (mongodb, db, or a full URL) when working with docker-compose, a server, or k8s.
Will update this answer for the prod config...