Failing to connect to a Postgres Container with psycopg2?

I have a docker-compose file that looks like this
version: "3.7"
services:
  app:
    stdin_open: true
    tty: true
    build:
      context: .
      dockerfile: app.Dockerfile
    volumes:
      - ${HOST_SAVE_DIRC}:${CONTAINER_SAVE_DIRC}
    depends_on:
      - postgres
  postgres:
    image: 'postgres'
    environment:
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_HOST_AUTH_METHOD=trust
    restart: always
    expose:
      - "5432"
where variables like POSTGRES_USER are entries from an env file. app.Dockerfile looks like
FROM python:3.8.3-slim-buster
COPY src /src/
COPY init.sql .
COPY .env .
COPY run.sh run.sh
COPY requirements.txt .
RUN ls -a
RUN pip install --no-cache-dir -r requirements.txt
The containers are created, then the user is logged into the app container and the program's main function is called; this is when the database calls are made.
From the app container I am attempting to connect to the postgres container via psycopg2. However, when I attempt to do so, I receive the following error:
psycopg2.OperationalError: could not connect to server: No route to host
Is the server running on host "postgres" (172.22.0.2) and accepting
TCP/IP connections on port 5432?
using a psycopg2 call that looks like
with psy.connect(host='postgres', port=5432, user='postgres', password='postgres') as conn:
...
the entries of this psycopg2 call match the env file given to the docker-compose file.
My understanding is that Postgres uses port 5432 by default, and that when docker-compose creates the two containers it creates a Docker network for them named DIR_default, where DIR is the name of the directory the docker-compose file lives in. On that network each container can be reached using the service name listed in the docker-compose file ('postgres' and 'app' in this case).
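For anyone reproducing this, here is a minimal sketch of what the connection attempt from the app container can look like. It resolves the service name first, which separates DNS problems from TCP problems, and passes dbname explicitly (dbname='project_name' mirrors the POSTGRES_DB entry in the .env file shown further down; the retry loop is just defensive, since depends_on only orders startup):

import socket
import time

import psycopg2 as psy

# Resolve the compose service name first: this checks DNS on the
# DIR_default network independently of whether Postgres accepts TCP.
print("postgres resolves to", socket.gethostbyname("postgres"))

# Retry a few times: depends_on only guarantees the container started,
# not that Postgres is ready to accept connections.
for attempt in range(10):
    try:
        conn = psy.connect(
            host="postgres",
            port=5432,
            dbname="project_name",  # POSTGRES_DB from the .env file
            user="postgres",
            password="postgres",
        )
        break
    except psy.OperationalError as exc:
        print(f"attempt {attempt}: {exc}")
        time.sleep(2)
else:
    raise RuntimeError("could not reach postgres after 10 attempts")

with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])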
Among the various things I've tried:
I've checked that the database isn't going down between the container being created and the user being exec'd in.
I've tried various little changes like changing the container names, postgres login info, etc.
I've tried linking the postgres container name explicitly with links: "postgres:postgres".
Other solutions suggested here
Any help would be greatly appreciated! I see no reason why something this simple should be failing, but here I am.
Edit:
Pinging the Postgres container from the app container appears to work when running docker exec app ping postgres_container_name. Is this a sign that the Docker network is set up correctly and the issue is something on my end?
Edit 2:
Tried clearing all images and containers, then restarting the Docker daemon and afterwards my PC. No change in either case.
For reference, the ping command looked like
docker exec python-app ping name_given_to_postgres_container
returning various statements which looked like
64 bytes from name_given_to_postgres_container.project_name_default (172.18.0.3): icmp_seq=1 ttl=64 time=0.090 ms
which, unless I am mistaken, signals a successful ping.
The top level .env file provided to docker-compose
HOST_SAVE_DIRC=~/python_projects/project_directory/directory_in_project
CONTAINER_SAVE_DIRC=/pdfs
POSTGRES_DB=project_name # same as project_directory
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_PORT=5432
Here is the requirements.txt file for the Python app as well
certifi==2020.4.5.1
chardet==3.0.4
idna==2.9
psycopg2-binary==2.8.5
read-env==1.1.0
requests==2.23.0
urllib3==1.25.9
Exec-ing into the Postgres container with docker exec -it container_id bash and running psql -U postgres appears to be successful - even with restart: always removed. I can also see the database named in the docker-compose file is also created. I feel confident in saying this container isn't dying spontaneously.
However, hitting the 5432 port on the Postgres container with netcat via nc name_given_to_postgres_container 5432-5433 returns an error similar to the one returned by psycopg2
arxivist_postgres_1 [172.22.0.3] 5433 (?) : No route to host
arxivist_postgres_1 [172.22.0.3] 5432 (postgresql) : No route to host
The same error is also returned with curl. So my guess is that the issue isn't with the Postgres container directly, psycopg2, or the host name, but something with the port?
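Worth noting: ping only exercises ICMP, while psycopg2, nc, and curl all need TCP, so a successful ping combined with these failures already points at something dropping TCP specifically. The same check can be run from inside the app container with nothing but Python's standard library (a sketch equivalent to nc -z postgres 5432):

import socket

# Open a plain TCP connection to the postgres service: this fails the
# same way psycopg2 does if a firewall drops the traffic, even though
# ICMP pings still succeed.
try:
    with socket.create_connection(("postgres", 5432), timeout=3):
        print("TCP connection to postgres:5432 succeeded")
except OSError as exc:
    print(f"TCP connection failed: {exc}")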
Edit 3:
As a last attempt to fix this project, the full project this post is referring to is posted at this link. If anyone would like to download the repo and try building the docker containers themselves via ./start.sh - that might be just what is needed to find a solution!

I thought I had Docker set up on my machine, which runs Fedora 32. However, as I came to realize from this article, setting up Docker on Fedora 32 requires some extra steps I was not previously aware of.
Specifically for this issue, the article lists a command that whitelists Docker on the local network's firewall by enabling masquerading on the relevant zone:
sudo firewall-cmd --permanent --zone=FedoraWorkstation --add-masquerade
(As a --permanent rule, it takes effect after sudo firewall-cmd --reload or a restart of firewalld.)
So I believe the root cause of my issue was simply that the firewall was blocking my app container from reaching the postgres container. Making the above change finally made the program work!

Related

Why isn't Docker Compose honoring my POSTGRES_USER environment variable?

I know lots of questions sound like this, and they all have the same answer: delete your volumes to force it to reinitialize.
The problem is, I'm being careful to delete my volumes, but it's consistently spinning up the container incorrectly every time.
My docker-compose.yml
version: "3.1"
services:
  db:
    environment:
      - POSTGRES_DB=mydb
      - POSTGRES_PASSWORD=changeme
      - POSTGRES_USER=myuser
    image: postgres
My process:
$ docker volume ls
DRIVER VOLUME NAME
$ docker-compose up -v # or docker-compose up --force-recreate
yet it always creates the "postgres" user instead of myuser. The output when it starts up shows that it "will be owned by user 'postgres'" and I can only docker exec as postgres, not my user.
The instructions seem very straightforward. Am I missing something, or is this a bug?
What happens when you use the compose file above?
I can only docker exec as postgres, not myuser
The environment variable POSTGRES_USER controls the database user, not the linux user. Take a look at the chapter Arbitrary --user Notes in the documentation to learn how to change the linux user.
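If you want to see from code which roles Postgres actually created, a small psycopg2 sketch works too (it assumes you temporarily publish 5432:5432 on the db service so the host can reach it; credentials come from the compose file above):

import psycopg2

# Connect as the user from POSTGRES_USER; host/port assume the db
# service temporarily publishes 5432:5432 to the host.
conn = psycopg2.connect(
    host="localhost",
    port=5432,
    dbname="mydb",
    user="myuser",
    password="changeme",
)
with conn, conn.cursor() as cur:
    # "myuser" should appear here as a database role even though
    # `docker exec` only knows the linux user "postgres".
    cur.execute("SELECT rolname FROM pg_roles;")
    print([row[0] for row in cur.fetchall()])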

Set up a PostgreSQL connection to an already existing project in Docker

I had never used PostgreSQL or Docker before. I set up an already developed project that uses these two technologies in order to modify it.
To get the project running on my Linux (Pop!_OS 20.04) machine I was given these instructions (sorry if some of this is irrelevant, but I don't know what is and isn't important for stating my problem):
Installed Docker CE and Docker Compose.
Cloned the project with git and ran the commands git submodule init and git submodule update.
Initialized the container with: docker-compose up -d
Generated the application configuration file: ./init.sh
After all of that the app was available at http://localhost:8080/app/, and the project's directory contained several subdirectories, among them dbdata (screenshots of the directory listings omitted).
Now I need to modify the DB, and that's where the difficulty arose, since I don't know how to set up the connection with PostgreSQL inside Docker.
In a project without Docker which uses MySQL I would
Create the local project's database "dbname".
Import the project's DB: mysql -u username -ppassword dbname < /path/to/dbdata.sql
Connect a DB client (DBeaver in my case) to the local DB and perform the necessary modifications.
In an endeavour to do something like that with PostgreSQL, I have read that I need to
Install and configure an Ubuntu 20.04 server.
Install PostgreSQL.
Configure Postgres “roles” to handle authentication and authorization.
Create a new Database.
And then what?
How can I set up the connection in order to be able to modify the DB from DBeaver and see the changes reflected on http://localhost:8080/app/ when Docker is involved?
Do I really need an Ubuntu server?
Do I need other program than psql to connect to Postgres from the command line?
I have found many articles about setting up PostgreSQL with Docker locally, but all of them address the topic from scratch; none of them explain how to connect to the DB of an "old" project inside Docker. I hope someone here can give a newbie directions on what to do, or recommend an article explaining from scratch how to configure PostgreSQL and then connect to a DB in Docker. Thanks in advance.
Edit:
Here's the output of docker ps (screenshot omitted; it shows the postgres container publishing port 5433 on the host).
You have 2 options to get into known waters pretty fast:
Publish the postgres port on the docker host machine, install any postgres client you like on the host, and connect to the database hosted in the container as you would have done traditionally. You will use localhost:5433 to reach the DB (update: 5433 is the port where the postgres container is published on your host, according to the screenshot).
Another option is to add another service in your docker-compose file to host the client itself in a container.
Here's a minimal example in which I am launching two containers: the postgres and an adminer that is exposed on the host machine on port 9999.
version: '3'
services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: example
  adminer:
    image: adminer
    restart: always
    ports:
      - 9999:8080
then I can access adminer at localhost:9999 (password is example; screenshot omitted).
Once I'm connected to my postgres through adminer, I can import and execute any SQL query I need (screenshot omitted).
A kind piece of advice is to read thoroughly about how data is persisted in a Docker context. Performance and security are also topics that you might want to get under your belt as a novice in the field, better sooner than later.
If you're running your PostgreSQL container inside your own machine you don't need anything else to connect using a database client. That's because to the host machine, all the containers are accessible using their own subnet.
That means that if you do this:
docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' 341164c5050f
it will output a list of IPs that you can configure in your DBeaver to access the container instance directly.
If you're not fond of doing that (or you prefer to use cli) you can always use the psql inside the installation of the PostgreSQL container to achieve something like you described in mysql point nº2:
docker exec -i 341164c5050f bash -c 'psql -U $POSTGRES_USER' < /path/to/your/schema.sql
It's important to pass -i, otherwise it will not read the schema from stdin. If you're looking for psql in interactive mode, use -it instead.
Last but not least, you can always edit the docker-compose.yml file to publish the port and connect to the instance using the loopback device or the host's public IP.
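Whichever route you pick (published port or container IP), any client library can then connect from the host. A minimal psycopg2 sketch, with hypothetical credentials you would substitute from the project's compose file:

import psycopg2

# Hypothetical connection settings: port 5433 is the published host
# port from `docker ps`; dbname/user/password must come from the
# project's docker-compose configuration.
conn = psycopg2.connect(
    host="localhost",
    port=5433,
    dbname="projectdb",
    user="postgres",
    password="secret",
)
with conn.cursor() as cur:
    cur.execute("SELECT current_database(), current_user;")
    print(cur.fetchone())
conn.close()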

docker-compose mongodb phoenix, [error] failed to connect: ** (Mongo.Error) tcp connect: connection refused - :econnrefused

Hi I am getting this error when I try to run docker-compose up on my yml file.
This is my docker-compose.yml file
version: '3.6'
services:
  phoenix:
    # tell docker-compose which Dockerfile it needs to build
    build:
      context: .
      dockerfile: Dockerfile.development
    # map the port of phoenix to the local dev port
    ports:
      - 4000:4000
    # mount the code folder inside the running container for easy development
    volumes:
      - . .
    # make sure we start mongodb when we start this service
    depends_on:
      - db
  db:
    image: mongo:latest
    volumes:
      - ./data/db:/data/db
    ports:
      - 27017:27017
This is my Dockerfile:
# base image elixir to start with
FROM elixir:1.6
# install hex package manager
RUN mix local.hex --force
RUN mix local.rebar --force
# install the latest phoenix
RUN mix archive.install https://github.com/phoenixframework/archives/raw/master/phx_new.ez --force
# create app folder
COPY . .
WORKDIR ./
# install dependencies
RUN mix deps.get
# run phoenix in *dev* mode on port 4000
CMD mix phx.server
Is this a problem with my dev.exs setup or something to do with the compatibility of docker and phoenix / docker and mongodb?
https://docs.docker.com/compose/compose-file/#depends_on explicitly says:
There are several things to be aware of when using depends_on:
depends_on does not wait for db and redis to be “ready” before starting web - only until they have been started. If you need to wait for a service to be ready, …
and advises you to implement the logic to wait for mongodb to spinup and be ready to accept connections by yourself: https://docs.docker.com/compose/startup-order/
In your case it could be something like:
CMD wait-for-db.sh && mix phx.server
where wait-for-db.sh can be as simple as
#!/bin/bash
# from inside the phoenix container the database is reachable as "db" (the compose service name), not localhost
until nc -z db 27017; do echo "waiting for db"; sleep 1; done
for which you need nc and wait-for-db.sh installed in the container.
There are plenty of other alternative tools to test if db container is listening on the target port.
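For example, a dependency-free variant of the same loop in Python (a sketch; it assumes a Python interpreter is available in the image, which the elixir base image does not necessarily ship, and uses the db service name and port from the compose file above):

import socket
import sys
import time

# Poll until the db service accepts TCP connections, giving up after
# a deadline instead of hanging forever.
deadline = time.monotonic() + 60
while time.monotonic() < deadline:
    try:
        with socket.create_connection(("db", 27017), timeout=1):
            print("db is accepting connections")
            sys.exit(0)
    except OSError:
        print("waiting for db")
        time.sleep(1)
sys.exit("db did not come up within 60 seconds")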
UPDATE:
The network connection between containers is described at https://docs.docker.com/compose/networking/:
When you run docker-compose up, the following happens:
A network called myapp_default is created, where myapp is the name of the directory where docker-compose.yml is stored.
A container is created using phoenix’s configuration. It joins the network myapp_default under the name phoenix.
A container is created using db’s configuration. It joins the network myapp_default under the name db.
Each container can now look up the hostname phoenix or db and get back the appropriate container’s IP address. For example, phoenix’s application code could connect to the URL mongodb://db:27017 and start using the Mongodb database.
It was an issue with my dev environment not connecting to the mongodb URL specified in docker-compose. Instead of localhost, the host should be db, as named in my docker-compose.yml file.
For clarity, for the dev environment:
modify config/dev.exs to the following (replace with the correct vars):
username: System.get_env("PGUSER"),
password: System.get_env("PGPASSWORD"),
database: System.get_env("PGDATABASE"),
hostname: System.get_env("PGHOST"),
port: System.get_env("PGPORT"),
create a dotenv file in the root folder of your project (replace with the vars relevant to the db service used):
PGUSER=some_user
PGPASSWORD=some_password
PGDATABASE=some_database
PGPORT=5432
PGHOST=db
Note that we have added the port. The host can be localhost for plain local development, but it should be the service name (mongodb, db, or whatever the compose file calls it) when working with docker-compose, or the appropriate URL on a server or k8s.
will update answer for prod config...
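For comparison, the same env-driven pattern in Python (a sketch; the variable names mirror the dotenv file above, and the defaults are placeholders):

import os

# Mirror of the dev.exs snippet: read connection settings from the
# environment so the same code works locally and under docker-compose.
db_config = {
    "user": os.environ.get("PGUSER", "some_user"),
    "password": os.environ.get("PGPASSWORD", "some_password"),
    "database": os.environ.get("PGDATABASE", "some_database"),
    "host": os.environ.get("PGHOST", "db"),  # service name, not localhost
    "port": int(os.environ.get("PGPORT", "5432")),
}
print(db_config)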

Networking using docker-compose in docker executor in circleci

this is a circleci question I guess.
I am quite happy with circleci but now I ran into a problem and I don't know what I'm doing wrong.
Maybe this is something very easy, but I don't see it.
In short
I can't make containers talk to each other on circleci.
Problem
Basically what I wanted to do is start a server container and a client container, and then let them talk to each other.
I created a minimal example here: https://github.com/mRcSchwering/circleci-integration-test
The README.md basically explains the desired outcome.
I have a .circleci/config.yml like this:
version: 2
jobs:
  build:
    docker:
      - image: docker:18.03.0-ce-git
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install docker-compose
          command: |
            apk --update add py2-pip
            /usr/bin/pip2 install docker-compose
            docker-compose --version
      - run:
          name: Start Container
          command: |
            docker-compose up -d
            docker-compose ps
      - run:
          name: Let client talk to server
          command: |
            docker-compose run client psql -h server -p 5432 -U postgres -c "\l"
In a docker container, docker-compose is installed, which is then used to start a server and a client (postgres here). In the last step I am telling the client to query the server. However, it cannot find the server:
#!/bin/sh -eo pipefail
docker-compose run client psql -h server -p 5432 -U postgres -c "\l"
Starting project_server_1 ...
psql: could not connect to server: Connection refused
Is the server running on host "server" (172.18.0.2) and accepting
TCP/IP connections on port 5432?
Exited with code 2
Files
The docker-compose.yml looks like this
version: '2'
services:
server:
image: postgres:9.5.12-alpine
networks:
- internal
expose:
- '5432'
client:
build:
context: .
networks:
- internal
depends_on:
- server
networks:
internal:
driver: bridge
where the client is built from a dockerfile like this
FROM alpine:3.7
RUN apk --no-cache add postgresql-client && rm -rf /var/cache/apk/*
Note
If I repeat everything on my Linux (also with docker-in-docker) it works.
But I guess some things work completely different on circleci.
I found some people mentioning that on circleci networking and bind mounts can be tricky but I didn't find anything that can help me.
There is this doc but I thought I was already doing that.
Then there is this project where someone seems to do the same thing on circleci successfully.
But I cannot figure out what's different there...
Anyway I would really appreciate your help. So far I have given up on this.
Best
Marc
Ok, in the meantime I (no, actually it was halfer from the CircleCI forum) noticed that docker-compose run client psql -h server -p 5432 -U postgres -c "\l" was run before the server was up and running. A simple sleep 5 after docker-compose up -d fixes the problem, though a retry loop that polls the port (like the wait-for-db.sh approach in the previous question) is more robust than a fixed sleep.

What is the difference between docker-machine and docker-compose?

I think I don't get it. First, I created docker-machine:
$ docker-machine create -d virtualbox dev
$ eval $(docker-machine env dev)
Then I wrote Dockerfile and docker-compose.yml:
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
version: '2'
services:
  db:
    image: postgres
  web:
    build: .
    restart: always
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    links:
      - db
Finally, I built and started the image:
$ docker-compose build --no-cache
$ docker-compose start
I checked the IP of my virtual machine
$ docker-machine ip dev
and successfully opened the site in my browser. But when I made some changes in my code, nothing happened. So I logged into the "dev" machine:
$ docker-machine ssh dev
and I didn't find my code! So I logged into the docker "web" container:
$ docker exec -it project_web_1 bash
and the code was there, but unchanged.
What is docker-machine for? What is the point? Why doesn't docker sync files after changes? It looks like docker + docker-machine + docker-compose are a pain in the a...s for local development :-)
Thanks.
Docker is the command-line tool that uses containerization to manage multiple images, containers, volumes, and such. A container is, roughly, a lightweight virtual machine: an isolated process environment that shares the host's kernel. See https://docs.docker.com/ for extensive documentation.
Until recently Docker didn't run on native Mac or Windows OS, so another tool was created, Docker-Machine, which creates a virtual machine (using yet another tool, e.g. Oracle VirtualBox), runs Docker on that VM, and helps coordinate between the host OS and the Docker VM.
Since Docker isn't running on your actual host OS, docker-machine needs to deal with IP addresses and ports and volumes and such. And its settings are saved in environment variables, which means you have to run commands like this every time you open a new shell:
eval $(docker-machine env default)
docker-machine ip default
Docker-Compose is essentially a higher-level scripting interface on top of Docker itself, making it easier (ostensibly) to manage launching several containers simultaneously. Its config file (docker-compose.yml) is confusing since some of its settings are passed down to the lower-level docker process, and some are used only at the higher level.
I agree that it's a mess; my advice is to start with a single Dockerfile and get it running either with docker-machine or with the new beta native Mac/Windows Docker, and ignore docker-compose until you feel more comfortable with the lower-level tools.