Docker with PostgreSQL in a Flask web application (part 2)

I am building a Flask application in Python, using SQLAlchemy to connect to PostgreSQL.
In the Flask application, I'm using this to connect SQLAlchemy to PostgreSQL:
engine = create_engine('postgresql://postgres:[mypassword]@db:5432/employee-manager-db')
And this is my docker-compose.yml:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 8000:8000
    volumes:
      - .:/app
    links:
      - db:db
    depends_on:
      - pgadmin
  db:
    image: postgres:14.5
    restart: always
    volumes:
      - .dbdata:/var/lib/postgresql
    hostname: postgres
    environment:
      POSTGRES_PASSWORD: [mypassword]
      POSTGRES_DB: employee-manager-db
  pgadmin:
    image: 'dpage/pgadmin4'
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: [myemail]
      PGADMIN_DEFAULT_PASSWORD: [mypassword]
    ports:
      - "5050:80"
    depends_on:
      - db
I can do "docker build -t employee-manager ." to build the image. However, when I do "docker run -p 5000:5000 employee-manager" to run the image, I get an error saying
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not translate host name "db" to address: Try again
Does anybody know how to fix this? Thank you so much for your help.

Your containers are on different networks and that is why they don't see each other.
When you run docker-compose up, docker-compose creates a separate network and puts all the services defined inside docker-compose.yml on that network. You can see that with docker network ls.
When you run a container with docker run, it is attached to the default bridge network, which is isolated from other networks.
There are several ways to fix this, but this one will serve you in many other scenarios:
Run docker container ls and identify the name or ID of the db container that was started with docker-compose
Then run your container with:
# ID_or_name from the previous point
docker run -p 5000:5000 --network container:<ID_or_name> employee-manager
This attaches the new container to the same network as your database container.
Other ways include creating a network manually and defining that network as default in the docker-compose.yml. Then you can use docker run --network <network_name> ... to attach other containers to that network.
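For example, a minimal sketch of that second approach (the network name shared-net is an assumption):
# created once on the host: docker network create shared-net
# then, in docker-compose.yml, make it the project's default network:
networks:
  default:
    external:
      name: shared-net
After docker-compose up, an ad-hoc container can join the same network with docker run --network shared-net -p 5000:5000 employee-manager and resolve the db service by name.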

docker run doesn't read any of the information in the docker-compose.yml file, and it doesn't see things like the Docker network that Compose automatically creates.
In your case you already have the service fully defined in the docker-compose.yml file, so you can use Compose commands to build and restart it:
docker-compose build
docker-compose up -d # will delete and recreate changed containers
(If the name of the image is important to you – maybe you're pushing to a registry – you can specify image: alongside build:. links: are obsolete and you should remove them. I'd also avoid replacing the image's content with volumes:, since this misses any setup or modification that's done in the Dockerfile and it means you're running untested code if you ever deploy the image without the mount.)
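For illustration, a sketch of the backend service following that advice; the explicit image name reuses the employee-manager name from the question, while the tag and the dropped links:/volumes: are my assumptions:
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    image: employee-manager:latest  # name used by docker push; the tag is an assumption
    ports:
      - 8000:8000
    depends_on:
      - db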

Related

How can multiple containers communicate with each other in Docker?

I'm trying to containerize my application. I use MongoDB and two more microservices.
As you can see in the docker compose file below, I have some problems.
My requirements:
main_image needs to connect to MongoDB.
gui_image needs to connect to MongoDB.
gui_image needs to show its GUI on port 8080 (Can use another port as well)
gui_image has to read and write to a file inside my computer.
MongoDB has to access a volume inside my computer.
main_image needs access to the internet.
Here are my questions:
1. Is exposing ports in a Dockerfile and in docker-compose the same thing?
2. What is the best practice for mounting a volume for MongoDB?
3. How do I accomplish the requirements above in docker-compose?
Here is my docker-compose file:
version: "3"
services:
  mongo:
    image: mongo:latest
    ports:
      - 27017:27017
  main_image:
    build:
      context: .
      dockerfile: .\my_project\dockerfile
    depends_on:
      - mongo
  gui_image:
    build:
      context: .
      dockerfile: .\my_gui\dockerfile
    ports:
      - 8080:8080
      - 27017:27017
    depends_on:
      - mongo
Here is my dockerfile under my_gui directory:
FROM continuumio/miniconda3
WORKDIR /app
COPY . .
RUN pip install dash
EXPOSE 8080
EXPOSE 27017
ENTRYPOINT [ "python","gui_script.py"]
And lastly, here is my dockerfile under my_project directory:
FROM continuumio/miniconda3
WORKDIR /app
COPY . .
EXPOSE 27017
ENTRYPOINT [ "python","main_script.py"]
1. The EXPOSE instruction in a Dockerfile informs Docker that the container listens on the specified network ports at runtime (as when using the docker run -p command).
Using ports in Compose is a more dynamic way of specifying these ports. Images like nginx or apache, which are always supposed to run on port 80 inside the container, use EXPOSE in the Dockerfile itself, while an image whose port is dynamic (for example, controlled by an environment variable) specifies it in docker run or in the compose file:
some_webapi:
  environment:
    - ASPNETCORE_URLS=http://*:80
  build:
    context: .
    dockerfile: ./Dockerfile
2. As documented on the Docker Hub page for the mongo image (https://hub.docker.com/_/mongo/), you can use
volumes:
  - '/path/to/your/pc/folder:/path/inside/docker'
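For the mongo image specifically, the database files live under /data/db inside the container, so a minimal sketch looks like this (the host folder name is an assumption):
mongo:
  image: mongo:latest
  ports:
    - 27017:27017
  volumes:
    - ./mongo_data:/data/db  # host folder name is an assumption; a named volume also works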
3. And for the last question, you might want to use networking in Compose.
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
Services can join networks like this:
gui_image:
  build:
    context: .
    dockerfile: .\my_gui\dockerfile
  ports:
    - 8080:8080
    - 27017:27017
  depends_on:
    - mongo
  networks:
    - gui
You also have to define all the networks used by services at the top level of the compose file:
version: '3'
services:
  # ... your services ...
networks:
  gui:
After that, containers will be able to see each other even by their container_name, which you can define in services:
mongo:
  image: mongo:latest
  ports:
    - 27017:27017
  container_name: gui_mogno
Then you will be able to connect to mongo with a connection string like this: mongodb://gui_mogno:27017/
You can get more information about networking in the Compose networking documentation (https://docs.docker.com/compose/networking/).

Scala JDBC project won't run outside of Docker Container?

com.zaxxer.hikari.pool.HikariPool$PoolInitializationException: Failed to initialize pool: The connection attempt failed.
I get the above error when running sbt run. However, inside my Docker containers everything works fine.
Inside the first container I have a Postgres database. In the second container I have an image built from my project folders. When I run docker-compose up --build, everything works fine.
I suspect the project (the actual codebase) can't see the Postgres database in the docker-compose container.
Do I need another Postgres database outside the docker-compose containers to go with my project code outside the containers?
docker-compose.yml file:
version: '3.6'
services:
  # App Backend PostgreSQL
  postgres:
    container_name: sportsAppApiDb
    image: postgres:11.7-alpine
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: password
      POSTGRES_URL: postgres://admin:password@localhost:5432/sportsappapi
      POSTGRES_DB: sportsappapi
      POSTGRES_HOST: postgres
    ports:
      - "5432:5432"
  # App Backend
  sports-app-api:
    container_name: sportsAppApi
    build: ./
    volumes:
      - ./:/usr/src/sports-app-api
    command: sbt run
    working_dir: /usr/src/sports-app-api
    ports:
      - "8000:8000"
    environment:
      POSTGRES_URI: postgres://admin:password@postgres:5432/sportsappapi
Entrypoint for the Scala project:
object SportsAppApiStartup extends App {
  SportsAppApiDb(SportsAppApiConfig.appDb).init
  WebServer(Endpoints.handler, 8000).start()
  println(s"Running sports-app-api on port: 8000")
}
Your database is not accessible under postgres:5432 from outside docker-compose. Try to connect to it through psql or pgcli or another client and you'll see.
When you run docker-compose ps or docker ps, you'll see how to connect to the Postgres container from outside Docker (under PORTS) - most likely it will be something like 0.0.0.0:5432.
E.g. if I have:
> docker ps
CONTAINER ID   IMAGE      COMMAND                  CREATED       STATUS      PORTS                    NAMES
d90871418bcb   postgres   "docker-entrypoint.s…"   2 weeks ago   Up 4 days   0.0.0.0:7766->5432/tcp   postgres_container
it means that Postgres was available under 0.0.0.0:7766 from outside Docker.
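For this question's compose file (ports "5432:5432", user admin, password password), connecting from the host would look like this sketch:
psql postgres://admin:password@localhost:5432/sportsappapi
If that succeeds while sbt run on the host still fails, the code outside Docker is simply using the in-network hostname postgres where it needs localhost.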
This has nothing to do with Scala, sbt and slick as far as I can tell.

docker-compose - Application can't communicate with postgres container

I have a Scrapy application which I'm trying to containerize. Basically, this is my docker-compose.yml file:
version: '3'
services:
  scrapper:
    container_name: scrapper
    build: .
    ports:
      - 80:80
    depends_on:
      - db
    links:
      - db
  db:
    volumes:
      - ./scrapper/sql:/docker-entrypoint-initdb.d
    image: postgres
    container_name: postgres
    restart: always
    ports:
      - 5432:5432
And this is my Dockerfile:
FROM python:3
WORKDIR /usr/app
COPY requirements.txt .
RUN pip3 install -r requirements.txt
COPY . .
But when I try to execute my application using the following command: docker run -it scrapper_scrapper scrapy crawl angeloni, I'm receiving this message:
File "/usr/local/lib/python3.7/site-packages/scrapy/crawler.py", line 88, in crawl
yield self.engine.open_spider(self.spider, start_requests)
psycopg2.OperationalError: could not translate host name "db" to address: Name or service not known
Why is this happening? When I execute the docker-compose ps command, it shows:
Name        Command                         State    Ports
------------------------------------------------------------------------
postgres    docker-entrypoint.sh postgres   Up       0.0.0.0:5432->5432/tcp
scrapper    python3                         Exit 0
When running docker-compose up to start db, that container runs on a network that docker-compose creates for the project. A container started with docker run ... is not on that network, so it cannot connect to that instance. But you can specify the network with:
docker run --network $network_name
To list the available Docker networks, run:
docker network ls
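As a sketch: Compose names the project network <project>_default, and judging by the scrapper_scrapper image name the project is scrapper, so something like the following should work (the network name is an assumption; confirm it with docker network ls):
docker run -it --network scrapper_default scrapper_scrapper scrapy crawl angeloni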
I think you have to explicitly define a user network and put your containers on it:
https://docs.docker.com/network/bridge/
Under the section:
User-defined bridges provide automatic DNS resolution between containers.
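A minimal sketch of that idea for this compose file (the network name app-net is an assumption; note that Compose prefixes it as <project>_app-net, which is the name an ad-hoc docker run --network container would join):
version: '3'
services:
  scrapper:
    build: .
    networks:
      - app-net
  db:
    image: postgres
    networks:
      - app-net
networks:
  app-net:
    driver: bridge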

Docker compose doesn't connect two containers

I have two containers that don't connect to each other:
I made a postgres image that gets its data from dump.sql.
Here is the Dockerfile:
FROM postgres:11.1-alpine
COPY restore_db.sh /docker-entrypoint-initdb.d/
COPY db.sql /backup/
ENV PGDATA=/data
Then I created a container with docker run --name db -p 5432:5432 db
Next, I made an image with the app. The Dockerfile for the app looks like:
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY build/libs/ /app/
# Make port 8085 available to the world outside this container
EXPOSE 8085
# Define environment variable
ENV NAME app
# Run app when the container launches
CMD java -jar /app/olympic-0.0.1-SNAPSHOT.jar
I made a container with docker run.
Then I use docker-compose up with a file that looks like this:
version: '3'
services:
  db:
    image: db-data
    container_name: postgres
    ports:
      - 5432:5432
    volumes:
      - ./pg_data:/data
    environment:
      POSTGRES_DB: innovation
      POSTGRES_USER: postgres
      PGDATA: /data
    restart: always
  web:
    image: app
    container_name: roc
    environment:
      POSTGRES_HOST: db
    ports:
      - 8085:8085
    restart: always
    links:
      - db
Here is the property file:
spring.datasource.url=jdbc:postgresql://db:5432/innovation
spring.datasource.username=postgres
spring.jpa.properties.hibernate.jdbc.lob.non_contextual_creation=true
logging.level.org.hibernate.SQL=DEBUG
logging.level.root=INFO
spring.output.ansi.enabled=ALWAYS
logging.level.org.hibernate.type.descriptor.sql.BasicBinder=TRACE
spring.liquibase.change-log=classpath:liqubase/db.changelog-master.xml
spring.liquibase.url=jdbc:postgresql://db:5432/innovation
spring.liquibase.user=postgres
They are not able to connect.
I always get an error:
org.postgresql.util.PSQLException: Connection to db:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
First of all, I don't see any network defined in your Docker files for either container, so I assume they are on the $project-default network.
docker network inspect $project-default
will give you the list of all containers using the default network.
Now, coming to the containers: let's assume the DB is container 1 (10.1.1.2) and the Spring app is container 2 (10.1.1.3).
You can get a running container's IP by running:
docker inspect containerName
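As a sketch, you can extract just the IP with a Go template (containerName is a placeholder):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' containerName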
You are exposing ports 5432 and 8085 for the DB and the Spring app respectively.
Inside the Spring app container's property file, the spring.datasource.url host localhost:5432 (or db:5432 - it is not clear what the db hostname is mapped to) is not accessible, because the DB is in a different container.
You can try 10.1.1.2:5432.
Since you publish ports 5432 and 8085, you can access them from the host machine - e.g. in Docker for Windows it would be 192.168.99.100:5432 - but the same address can't be reached from inside a container.
spring.datasource.url=jdbc:postgresql://10.1.1.2:5432/innovation should work, assuming the DB is up and running.

docker-compose external_link mongo network not reachable

I am in a strange situation where I cannot connect to my running mongo DB from my docker-compose services. My compose file:
version: '3'
services:
  app:
    image: myimage:latest
    ports:
      - "8080:8080"
    external_links:
      - myname:mongo
    environment:
      - MONGO_URL=mongodb://myname:27017/test
I have found a few hints on this, but none of them solved my issue. E.g. I tried:
1)
Create a custom network:
docker network create mongonet
Then start mongo with the --network mongonet flag and add to the compose file:
networks:
  default:
    external:
      name: mongonet
Got nothing there either.
I looked into the /etc/hosts file in my compose container, and it did not list any DNS entry.
If I do a docker inspect, grab the mongo IP and add it to my compose file, that works like a charm.
I start mongo like this:
docker run -d -p 27017:27017 -v ~/mongo_data:/data/db mongo
I am really rather confused, as I believed this to be an out-of-the-box kind of thing. Strangely I can't make it work. I have found examples using internal links (vs external_links), but that does not work for me, as I have many services that I would like to run like this and not all of them should run at the same time.
I start my docker compose as this:
docker-compose up --force-recreate
My versions are:
docker-compose version 1.17.1, build 6d101fb
Docker version 17.05.0-ce, build 89658be
My question: How do I successfully link a running mongo container as an external link into my application containers such that they can connect to them?
My docker ps:
CONTAINER ID   IMAGE   COMMAND                  CREATED             STATUS             PORTS                      NAMES
5cf6e08d6fde   mongo   "docker-entrypoint..."   About an hour ago   Up About an hour   0.0.0.0:27017->27017/tcp   gallant_feynman
Links are deprecated, use networks instead.
Notes:
If you’re using the version 2 or above file format, the
externally-created containers must be connected to at least one of the
same networks as the service which is linking to them. Links are a
legacy option. We recommend using networks instead.
The network way should work. I think you are missing some pieces. Make sure to give the mongo container a name, and make sure to attach the app container to the network in the compose file:
docker network create mongonet
docker run -d -p 27017:27017 --network mongonet --name mongo -v ~/mongo_data:/data/db mongo
version: '3'
services:
  app:
    image: myimage:latest
    ports:
      - "8080:8080"
    environment:
      - MONGO_URL=mongodb://mongo:27017/test
    networks:
      - mongonet
networks:
  mongonet:
    external: true
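To verify that both containers ended up on the same network, a quick check (a sketch; docker network inspect accepts Go templates):
docker network inspect -f '{{range .Containers}}{{.Name}} {{end}}' mongonet
It should list both the mongo container and the app container.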