Connect to Postgres as SystemD service from docker-compose - postgresql

I need to connect to a Postgres server instance that is running as a SystemD service from within a docker-compose file.
docker-compose containers ---> postgres as systemd
This is about setting up Airflow with an external Postgres DB that is on localhost.
I've taken the docker-compose example with:
curl -LfO 'https://airflow.apache.org/docs/apache-airflow/2.2.3/docker-compose.yaml'
However, that file defines a Postgres container, and Airflow connects to it by resolving the postgres hostname within the Docker network.
But I already have Postgres running on the machine via SystemD, I can check its status with:
# make sure the service is up and running
systemctl list-units --type=service | grep 'postgres.*12'
# check the process
ps aux | grep 'postgres.*12.*config_file'
# check the service details
systemctl status postgresql@12-main.service
AFAIU, inside the docker-compose YAML file I need to use the special hostname host.docker.internal so that the containers can find their way out of the Docker network and reach localhost on the host, where the SystemD services (e.g. Postgres) are listening.
I've setup the Airflow YAML file for docker-compose with:
---
version: '3'
x-airflow-common:
  &airflow-common
  image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.2.3}
  environment:
    &airflow-common-env
    AIRFLOW__CORE__EXECUTOR: LocalExecutor
    AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@host.docker.internal/airflow
  ...
  extra_hosts:
    - "host.docker.internal:host-gateway"
There's a lot of stuff going on there, but the point is that the SQLAlchemy connection string is using host.docker.internal as host.
Now when I invoke the command docker-compose -f airflow-local.yaml up airflow-init, I see in the output logs that Airflow complains it cannot find the Postgres server:
airflow-init_1 | psycopg2.OperationalError: connection to server at "host.docker.internal" (172.17.0.1), port 5432 failed: Connection refused
airflow-init_1 | Is the server running on that host and accepting TCP/IP connections?
It might be an issue with DNS resolution between the special Docker network and the host OS network, but I'm not sure how to troubleshoot this.
How do I make a Docker container reach SystemD services that are serving on localhost?

Turns out I just need to use network_mode: host in the YAML code for the Docker container definition (a container is a service in docker-compose terminology).
This way the container is bound directly to the host's networking layer ("localhost" or "127.0.0.1") instead of the Docker virtual network. This setup is not encouraged by the Docker people, but sometimes things are messy when dealing with legacy systems, so you have to work around what has been done in the past.
Then you can use localhost to reach the Postgres DB running as a SystemD service.
The only caveat is that you cannot use port mappings together with network_mode: host, otherwise docker-compose complains with the error message:
"host" network_mode is incompatible with port_bindings
So you have to remove the YAML part similar to:
ports:
  - 9999:8080
and sort out the ports (TCP sockets) in a different way.
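For reference, this is a minimal sketch of what an adjusted service definition can look like; the service name is a placeholder, not taken from the original file:

services:
  my-service:                    # placeholder name
    image: apache/airflow:2.2.3
    network_mode: host           # share the host's network stack
    # no "ports:" section here: port mappings are incompatible
    # with network_mode: host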
In my specific scenario (Airflow stuff), I've done the following:
For the host/networking that makes the Airflow webserver (docker container/service) reach the Postgres DB (SystemD service/daemon) on localhost:
# see the use of "localhost"
AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@localhost/airflow
For the TCP port, in the docker-compose YAML service definition for the Airflow webserver I specified the port:
command: webserver -p 9999
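Putting both changes together, the webserver service ends up looking roughly like this (a sketch based on the official file's &airflow-common anchor, not the complete service definition):

airflow-webserver:
  <<: *airflow-common
  command: webserver -p 9999   # serve on a port that is free on the host
  network_mode: host
  # the "ports:" mapping from the original file is removed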

Related

Can't connect to DB located in docker container

I'm trying to create a PostgreSQL DB inside a docker container and connect to it from my local machine. Running docker-compose up -d with this inside docker-compose.yml:
version: '3.5'
services:
  db:
    image: postgres:12.2
    restart: always
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: db
      POSTGRES_USER: root
      POSTGRES_PASSWORD: root
ended successfully, with no crashes or errors. But when I try to connect to it with pgAdmin4 using these credentials:
Host name/address: localhost
Port: 5432
Maintenance database: db
Username: root
Password: root
it says to me:
Unable to connect to server:
FATAL: password authentication failed for user "root"
My OS: Windows 10 build(1809)
PostgreSQL version (installed on local machine): 12
Docker version: 19.03.13, build 4484c46d9d
UPD 1:
After re-creating the container with different ports (now it is 5433:5433), the pgAdmin4 error changed:
Unable to connect to server:
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
Host name/address: localhost
Port: 5432
You are trying to connect to port 5432 on localhost. Are you sure your container is taking the host IP?
To make the container run with the host IP, run the container with the --network host option:
docker run --network host <rest of the command>
Note that if you use the '--network host' option, the port-mapping '-p' option is not needed.
Read https://docs.docker.com/network/host/ for more information.
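The docker-compose equivalent is network_mode on the service; a sketch, reusing the image from the question:

services:
  db:
    image: postgres:12.2
    network_mode: host   # same effect as docker run --network host
    # note: no "ports:" section; port mappings are not allowed in host mode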
Have you checked you've cleaned away any old instances running locally and that you're not trying to access an old instance?
You can wipe out all local docker containers with: docker rm -f $(docker ps -aq)
Once you've got a clean environment you can try to spin up the containers again locally and see if you can access the service. I copy/pasted what you have into a clean docker-compose.yaml and ran docker-compose up against the file - it worked, and I logged in and was able to view the pg_user table.
If it still fails you can try to find the IP using netstat -in | grep en0, which will show something like:
en0 1500 192.168.1 192.168.1.163 15301832 - 9001208 - - -
This shows the externally accessible IP. Try using the address shown (something similar to 192.168.1.163) instead of localhost.

Using Docker-compose, how to access the container database via the same port in container and in host

I'm running a docker-compose container for postgresql. The postgresql database in the container is running on the standard port 5432, and I am publishing that port out to port 5444 on the host (since the host's postgresql default port is in use).
I am using the same configuration on the host and in the container (a .env file that provides config settings for cli commands and the app as a whole). Unfortunately, whichever port I choose, one system will lose access. For example, I cannot run:
[k@host]$ psql -p 5444 # Connects
on the host and still have the same command work inside the container:
[k@db-container]$ psql -p 5444 # Errors in container
The container's postgresql server is running on 5432:
[k@db-container]$ psql -p 5432 # Connects successfully in container
and the port is published via the docker-compose.yml with:
ports:
  - "5444:5432"
So currently I don't know how to simply configure the same port everywhere via the docker-compose.yml! The expose option exposes the port but does not allow remapping, and ports forwards host<-->container but does not remap the internal port. I have thought of remapping the postgresql default port inside the postgresql container configuration, but fully reconfiguring postgresql on every docker-compose up seems non-trivial.
How can I remap the ports inside the container so that I can use port 5444 everywhere, in the host & container?
The standard PostgreSQL client library supports several environment variables that tell it where the server is. In the same way that you can configure the host using $PGHOST, you can configure the port using $PGPORT.
In a Docker Compose context, it should be straightforward to set these:
version: '3'
services:
  postgres:
    image: postgres:11
    ports: ['5444:5432']
    volumes: ['./postgres:/var/lib/postgresql/data']
  myapp:
    build: .
    ports: ['8888:8888']
    environment:
      PGHOST: postgres
      # default PGPORT=5432 will work fine
Similarly, if you're running the application in a development environment on the host, you can set
PGHOST=localhost PGPORT=5444 ./myapp
You can't use the same port on the host.
Nothing prevents you from running multiple instances if they have different IP addresses.
psql -p 5444
defaults to psql --host=127.0.0.1 -p 5444. If you want several instances, you obviously must make them differ in some way.

Postgresql via Docker - postgres is not running automatically

The main problem is that I cannot connect to postgresql, even on the VM; I get the error:
root@a2c8a58d4e0e:/# psql -h localhost -U psqluser -W
Password for user psqluser:
psql: could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Cannot assign requested address
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?
To get it working, I run these commands inside the VM:
pg_createcluster 9.6 main --start
/etc/init.d/postgresql start
And then it works properly on the VM, but that's a manual step.
I configured everything according to the official docker repo docs.
This is my docker compose file:
version: "3.3"
services:
postgresql:
build:
context: .
dockerfile: postgresql
container_name: Postgres
restart: always
ports:
- "5432:5432"
environment:
POSTGRES_DB: 'psqldb'
POSTGRES_USER: 'psqluser'
POSTGRES_PASSWORD: 'temp123'
volumes:
- /home/VOLUMES/DB/Postgresql:/var/lib/postgresql
I derived my image from the original repo's image because I want the postgresql service to start automatically; otherwise it does not run.
postgresql file:
FROM postgres:9.6.11
RUN pg_createcluster 9.6 main --start
RUN /etc/init.d/postgresql start
It still does not start Postgres automatically - only manually inside the VM.
What's wrong?
Mind the difference between ports and expose in a compose file (https://docs.docker.com/compose/compose-file/): ports publishes container ports on the host, while expose only makes them reachable from other containers on the compose network.
Also note that docker-compose run, unlike docker-compose up, does not apply the ports mappings by default.
If you start the service with docker-compose run, add the --service-ports flag so the mapped ports are published on the host:
docker-compose run --service-ports postgresql (see doc)
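For reference, a sketch of the two options side by side (values are illustrative):

services:
  postgresql:
    image: postgres:9.6.11
    ports:
      - "5432:5432"   # published on the host
    expose:
      - "5432"        # only reachable from other services on the compose network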

Could not connect to Postgres from Symfony 4 + Docker

I've run into a strange problem. I've created a docker-compose file to build php + nginx + postgres services:
version: '2'
services:
  db:
    image: orchardup/postgresql
    ports:
      - "5433:5432"
    environment:
      LC_ALL: C.UTF-8
      POSTGRESQL_USER: postgres
      POSTGRESQL_DB: db
      POSTGRESQL_PASS: postgres
  php:
    build: .docker/php-fpm
    ports:
      - "9002:9002"
    volumes:
      - .:/var/www/symfony:cached
      - ./var/log/symfony:/var/www/symfony/var/log:cached
    links:
      - db
  nginx:
    build: .docker/nginx
    ports:
      - "8001:80"
    links:
      - php
    volumes_from:
      - php
    volumes:
      - ./var/log/nginx/:/var/log/nginx:cached
After that I created the DB schema by running bin/console doctrine:schema:update --force. The tables and migrations were created just fine, so the DB connection seems OK. I checked this by connecting to the db from my machine through psql with the credentials from .env; the tables are there.
But when I go to the web page and try to authorize, I get an error telling me the connection is not OK:
Connection refused Is the server running on host "127.0.0.1" and accepting TCP/IP connections on port 5433?"
I checked that I have the dev environment in both cases - from the web page and from the console. I tried ports 5433 and 5432 with no success. I tried everything I could find for 3 hours.
This is the output from the postgres container:
# netstat -tlpn | grep 5432
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 0.0.0.0:5432 0.0.0.0:* LISTEN 12/postgres
tcp6 0 0 :::5432 :::* LISTEN 12/postgres
# grep listen /etc/postgresql/9.3/main/postgresql.conf
listen_addresses = '*' # what IP address(es) to listen on;
The only way for containers to talk to each other is through IPs. By linking multiple containers together through --link (or links in docker-compose), docker creates a secure tunnel between those two containers so that we don't need to expose any ports externally.
If you try to connect to your database from your local environment through a database client, you will be able to connect to it from 127.0.0.1:5433 as the port is exposed to your host through the docker-compose file. This is the reason why your schema update command succeeded.
Docker exposes connectivity information for the source container to
the recipient container in two ways:
Environment variables,
Updating the /etc/hosts file.
Ref: https://docs.docker.com/network/links/#communication-across-links
In order to connect to your database (which is running in the db container) from the php container, you will need to get the host of your db container through the environment variable DB_PORT_5432_TCP_ADDR (I might be wrong on this, but type env in your php container's terminal to verify. You will need to SSH into your php container).
Alternatively, you can use the second method: use db as the hostname instead of 127.0.0.1. Docker updates the /etc/hosts file in the php container to map the linked container's name to its IP, so the hostname db resolves to the same address as the one stored in the environment variable DB_PORT_5432_TCP_ADDR.
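In Symfony terms, that means the DATABASE_URL read by the php container should point at the db hostname and the container-internal port 5432, not at 127.0.0.1:5433. A sketch, assuming the stock Symfony DATABASE_URL convention and reusing the credentials from the compose file above:

# .env as read by the php container (sketch)
DATABASE_URL=postgresql://postgres:postgres@db:5432/db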

How to connect Postgresql Docker Container with another Docker Container

I want to connect a mysoft docker container to a postgresql docker container.
But I get these errors:
ERROR: for mysoft_db_1 Cannot start service db: driver failed programming external connectivity on endpoint mysoft_db_1 (XXX):
Error starting userland proxy: listen tcp 0.0.0.0:5432: bind: address already in use
ERROR: for db Cannot start service db: driver failed programming external connectivity on endpoint mysoft_db_1 (XXX):
Error starting userland proxy: listen tcp 0.0.0.0:5432: bind: address already in use
here is my docker-compose.yml
version: '2'
services:
  mysoft:
    image: mysoft/mysoft:1.2.3
    ports:
      - "80:8080"
    environment:
      - DATABASE_URL=postgres://mysoft:PASSWORD@db/mysoft?sslmode=disable
  db:
    image: postgresql
    environment:
      - POSTGRES_USER=mysoft
      - POSTGRES_PASSWORD=PASSWORD
      - POSTGRES_DB=mysoft
    ports:
      - 5432:5432
I also want to use another, already-running docker pg server to connect the new soft - one pg docker server for multiple projects.
Is that possible?
You should add links to the definition of the mysoft service in docker-compose.yml. Then your db service will be accessible from the mysoft container.
After that your service definition will look like this:
mysoft:
  image: mysoft/mysoft:1.2.3
  ports:
    - "80:8080"
  environment:
    - DATABASE_URL=postgres://mysoft:PASSWORD@db/mysoft?sslmode=disable
  links:
    - db
Now about the binding error: you probably receive it because you have a local postgresql running on port 5432, or you already have a running docker container with port 5432 mapped to the local machine.
ports:
  - 5432:5432
This mapping publishes the container's port on your local machine. If you don't need to access the container's db from the host, just remove it.
I also want to use another, already-running docker pg server to connect the new soft - one pg docker server for multiple projects. Is that possible?
Yes, it's possible. Use external_links.
If you choose this option:
Remove the db service and the links entry in the mysoft service definition from your docker-compose.yml.
Add external_links with the correct container name to the mysoft service definition.
Update the host and port in DATABASE_URL according to that container name and the postgresql port inside it, as in the sketch below.
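A sketch of that variant; pg-docker is a placeholder for the name of your already-running Postgres container:

mysoft:
  image: mysoft/mysoft:1.2.3
  ports:
    - "80:8080"
  environment:
    # host = the external container's name, port = Postgres port inside it
    - DATABASE_URL=postgres://mysoft:PASSWORD@pg-docker:5432/mysoft?sslmode=disable
  external_links:
    - pg-docker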
You might want to check whether you already have a local postgres running on port 5432. If you do, you cannot use the 5432:5432 mapping, but have to publish the inner port on a different outer port, e.g. 5555:5432 - at least if you are using native docker (running on localhost)...
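That is, something like this in the compose file (5555 is an arbitrary free host port):

db:
  image: postgresql
  ports:
    - "5555:5432"   # host port 5555 -> container port 5432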