Handling multiple PostgreSQL instances on the same port between local & Docker container - postgresql

I have an issue where my Docker container is producing an error message, database_1 | 2021-05-03 23:33:49.552 UTC [33] FATAL: role "myname" does not exist, for the postgres container I am running. I'm under the impression that it is possibly tied to the fact that it's running on the same port as the Postgres instance that runs locally on my computer as a background service. I'm not completely certain, but it seems strange, as the role (or, I assume, username) is present when I connect to a database with my local instance running. Is there something I can do to debug further? When I run a local node server for the application, the credentials work without any issue.
Here is my docker-compose.yml setup:
version: "3.9"
services:
  redis:
    image: redis:alpine
  database:
    image: postgres
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: ${DB_DATABASE}
    volumes:
      - nextjs_auth_template:/var/lib/postgresql/data/ # persist data even if the container shuts down
  app:
    image: nextjs-auth-boilerplate
    build: .
    depends_on:
      - redis
      - database
    ports:
      - "3000:3000"
    environment:
      - REDIS_HOST=redis
      - DB_HOSTNAME=database
volumes:
  nextjs_auth_template:
Here is my .env file:
DB_USERNAME=myname
DB_PASSWORD=''
DB_DATABASE=nextjs_auth_template
DB_HOSTNAME=127.0.0.1
DB_URL=postgresql://myname@127.0.0.1/nextjs_auth_template

If both database instances listen on the same host and port, only one of them can actually be bound to it, and every client will reach whichever instance owns the port. While the instance where the user exists is running, everything works; while the other instance is running, you'll receive the error.
You'll have to figure out where the client connects from. For that, add %h to log_line_prefix to get the client IP address logged (change the parameter in the postgresql.conf file and reload the server). If no IP address shows up in the log, the connection came from your local computer.
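As a sketch, the change could look like this (the exact prefix format around %h is up to you):

```ini
# postgresql.conf — add %h so the connecting client's address is logged
log_line_prefix = '%m [%p] %u@%d %h '
```

Reload without a restart using SELECT pg_reload_conf(); from psql (or pg_ctl reload), then watch the log for the client address.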

Related

I'm trying to build a Docker image of my Strapi backend connecting to Postgres. The image builds, but running the container fails and I get an error [duplicate]

I'm building an app running on NodeJS using PostgreSQL.
I'm using SequelizeJS as ORM.
To avoid installing a real Postgres daemon and Node.js on my own device, I'm using containers with docker-compose.
When I run docker-compose up, it starts the pg database
database system is ready to accept connections
and the nodejs server.
But the server can't connect to the database:
Error: connect ECONNREFUSED 127.0.0.1:5432
If I run the server without containers (with Node.js and Postgres installed on my machine), it works.
But I want it to work correctly with containers. I don't understand what I'm doing wrong.
here is the docker-compose.yml file
web:
  image: node
  command: npm start
  ports:
    - "8000:4242"
  links:
    - db
  working_dir: /src
  environment:
    SEQ_DB: mydatabase
    SEQ_USER: username
    SEQ_PW: pgpassword
    PORT: 4242
    DATABASE_URL: postgres://username:pgpassword@127.0.0.1:5432/mydatabase
  volumes:
    - ./:/src
db:
  image: postgres
  ports:
    - "5432:5432"
  environment:
    POSTGRES_USER: username
    POSTGRES_PASSWORD: pgpassword
Could someone help me please?
(someone who likes docker :) )
Your DATABASE_URL refers to 127.0.0.1, which is the loopback adapter. This means "connect to myself".
When running both applications (without using Docker) on the same host, they are both addressable on the same adapter (also known as localhost).
When running both applications in containers they are not both on localhost as before. Instead you need to point the web container to the db container's IP address on the docker0 adapter - which docker-compose sets for you.
Change 127.0.0.1 to the service name (e.g. db):
DATABASE_URL: postgres://username:pgpassword@127.0.0.1:5432/mydatabase
to
DATABASE_URL: postgres://username:pgpassword@db:5432/mydatabase
This works thanks to Docker links: the web container has a file (/etc/hosts) with a db entry pointing to the IP that the db container is on. This is the first place a system (in this case, the container) will look when trying to resolve hostnames.
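As a minimal sketch, the fix boils down to building the URL with the service name as the host. The helper below is hypothetical (not part of the original code); the values mirror the compose file above:

```javascript
// Assemble a connection string; inside the Compose network the host must be
// the service name ("db"), not 127.0.0.1.
function buildDatabaseUrl({ user, password, host, port = 5432, database }) {
  return `postgres://${user}:${password}@${host}:${port}/${database}`;
}

console.log(buildDatabaseUrl({
  user: 'username',
  password: 'pgpassword',
  host: 'db', // the service name from docker-compose.yml
  database: 'mydatabase',
}));
// → postgres://username:pgpassword@db:5432/mydatabase
```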
For future readers: if you're using Docker Desktop for Mac, use host.docker.internal instead of localhost or 127.0.0.1, as suggested in the docs. I ran into the same connection refused problem; the backend api service couldn't connect to Postgres using localhost/127.0.0.1. Below are my docker-compose.yml and environment variables for reference:
version: "2"
services:
  api:
    container_name: "be"
    image: <image_name>:latest
    ports:
      - "8000:8000"
    environment:
      DB_HOST: host.docker.internal
      DB_USER: <your_user>
      DB_PASS: <your_pass>
    networks:
      - mynw
  db:
    container_name: "psql"
    image: postgres
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: <your_postgres_db_name>
      POSTGRES_USER: <your_postgres_user>
      POSTGRES_PASSWORD: <your_postgres_pass>
    volumes:
      - ~/dbdata:/var/lib/postgresql/data
    networks:
      - mynw
networks:
  mynw:
If you pass the database variables separately, you can set the database host directly:
DB_HOST=<POSTGRES_SERVICE_NAME>  # in your case "db", from the docker-compose file
I had two containers, one called postgresdb and another called node.
I changed my node queries.js from:
const pool = new Pool({
  user: 'postgres',
  host: 'localhost',
  database: 'users',
  password: 'password',
  port: 5432,
})
to:
const pool = new Pool({
  user: 'postgres',
  host: 'postgresdb',
  database: 'users',
  password: 'password',
  port: 5432,
})
All I had to do was change the host to my container name ("postgresdb") and that fixed it for me. I'm sure this can be done better, but I just learned the docker compose / node.js stuff in the last 2 days.
If none of the other solutions worked for you, consider manually wrapping PgPool.connect() with a retry on ECONNREFUSED:
const pgPool = new Pool(pgConfig);
const pgPoolWrapper = {
  async connect() {
    for (let nRetry = 1; ; nRetry++) {
      try {
        const client = await pgPool.connect();
        if (nRetry > 1) {
          console.info('Now successfully connected to Postgres');
        }
        return client;
      } catch (e) {
        if (e.toString().includes('ECONNREFUSED') && nRetry < 5) {
          console.info('ECONNREFUSED connecting to Postgres, ' +
            'maybe container is not ready yet, will retry ' + nRetry);
          // Wait 1 second
          await new Promise(resolve => setTimeout(resolve, 1000));
        } else {
          throw e;
        }
      }
    }
  }
};
(See this issue in node-postgres for tracking.)
As mentioned in the Docker Compose networking documentation:
Each container can now look up the hostname web or db and get back the appropriate container’s IP address. For example, web’s application code could connect to the URL postgres://db:5432 and start using the Postgres database.
It is important to note the distinction between HOST_PORT and CONTAINER_PORT. In the docs' example (ports: "8001:5432"), for db the HOST_PORT is 8001 and the container port is 5432 (the postgres default). Networked service-to-service communication uses the CONTAINER_PORT. When HOST_PORT is defined, the service is accessible outside the swarm as well.
Within the web container, your connection string to db would look like postgres://db:5432, and from the host machine, the connection string would look like postgres://{DOCKER_IP}:8001.
So DATABASE_URL should be postgres://username:pgpassword@db:5432/mydatabase
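The HOST_PORT vs CONTAINER_PORT distinction can be sketched as a tiny helper (illustrative only; the function and its defaults are made up for this example, matching the "8001:5432" mapping from the docs):

```javascript
// With a mapping like "8001:5432", which address you dial depends on where
// you are: containers on the Compose network use the service name and the
// container port; the host machine uses localhost and the host port.
function connectionString({ fromContainer, service = 'db', hostPort = 8001, containerPort = 5432 }) {
  return fromContainer
    ? `postgres://${service}:${containerPort}`
    : `postgres://localhost:${hostPort}`;
}

console.log(connectionString({ fromContainer: true }));  // → postgres://db:5432
console.log(connectionString({ fromContainer: false })); // → postgres://localhost:8001
```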
I'd like to add a tiny modification to this.
As Andy says in his answer, "you need to point the web container to the db container's" address.
Taking into consideration the official documentation on docker-compose links:
"Links are not required to enable services to communicate - by default, any service can reach any other service at that service’s name."
Because of that, you can keep your docker-compose.yml this way:
docker-compose.yml
version: "3"
services:
  web:
    image: node
    command: npm start
    ports:
      - "8000:4242"
    # links:
    #   - db
    working_dir: /src
    environment:
      SEQ_DB: mydatabase
      SEQ_USER: username
      SEQ_PW: pgpassword
      PORT: 4242
      # DATABASE_URL: postgres://username:pgpassword@127.0.0.1:5432/mydatabase
      DATABASE_URL: "postgres://username:pgpassword@db:5432/mydatabase"
    volumes:
      - ./:/src
  db:
    image: postgres
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: username
      POSTGRES_PASSWORD: pgpassword
That said, keeping the links is a nice way to be explicit while coding, so your approach is fine too.

Postgres and Docker Compose; password authentication fails and role 'postgres' does not exist. Cannot connect from pgAdmin4

I have a docker-compose file that brings up the psql database as below, and I'm currently trying to connect to it with pgAdmin4 (not in a Docker container) to view it. I've been having trouble authenticating with the DB and I don't understand why.
docker-compose
version: "3"
services:
  # nginx and server also have an override, but not important for this q.
  nginx:
    ports:
      - 1234:80
      - 1235:443
  server:
    build: ./server
    ports:
      - 3001:3001 # app server port
      - 9230:9230 # debugging port
    env_file: .env
    command: yarn dev
    volumes:
      # Mirror local code but not node_modules
      - /server/node_modules/
      - ./server:/server
  database:
    container_name: column-db
    image: 'postgres:latest'
    restart: always
    ports:
      - 5432:5432
    environment:
      POSTGRES_USER: postgres # The PostgreSQL user (useful to connect to the database)
      POSTGRES_PASSWORD: root # The PostgreSQL password (useful to connect to the database)
      POSTGRES_DB: postgres # The PostgreSQL default database (automatically created at first launch)
    volumes:
      - ./db-data/:/var/lib/postgresql/data/
networks:
  app-network:
    driver: bridge
I do docker-compose up, then check the logs, and it says it is ready for connections. I go to pgAdmin and enter the connection details, where the password is root. I then get this error:
FATAL: password authentication failed for user "postgres"
I check the docker logs and I see
DETAIL: Role "postgres" does not exist.
I'm not sure what I'm doing wrong, according to the docs the super user should be created with those specifications. Am I missing something? Been banging my head against this for an hour now. Any help is appreciated!
@jjanes solved it in a comment: I had used a mapped volume and never properly set up the DB. Removed the volume and we're good to go.
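For reference, a sketch of that reset against the compose file above (destructive: it deletes the database files; the POSTGRES_* variables only apply on a fresh initialization):

```shell
docker-compose down            # stop and remove the containers
rm -rf ./db-data               # remove the host directory mapped to /var/lib/postgresql/data
docker-compose up database     # recreate; postgres re-runs initdb with POSTGRES_USER/POSTGRES_PASSWORD
```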

How to compose Java and Postgres in Docker [duplicate]

The Dockerfile of my spring-boot app:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
COPY target/media-0.0.1-SNAPSHOT.jar app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
application.yml
spring:
  datasource:
    url: jdbc:postgresql://localhost:5432/media
    username: postgres
    password: postgres
    hikari:
      connectionTimeout: 30000
and here is the docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: media
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
  app:
    build:
      context: ./
      dockerfile: Dockerfile
    depends_on:
      - db
    ports:
      - "8080:8080"
Running docker-compose up --build results in:
app_1 | org.postgresql.util.PSQLException: Connection to 0.0.0.0:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
My guess is that the spring app tries to connect to postgres before postgres is ready, but I get the following log:
db_1 | 2019-05-18 19:05:53.692 UTC [1] LOG: database system is ready to accept connections
The main purpose of Docker Compose is to spin up a set of Docker containers, which will then function as independent entities. By default, all containers will have a virtual network connection to all others, though you can change that if you wish; you will get that feature, since you have not specified a custom configuration.
Each of the containers will get a virtual IP address inside the virtual network set up by Docker. Since these are dynamic, Docker Compose makes it easier for you by creating internal DNS entries corresponding to each service. So, you will have two containers, which can be addressed as app and db respectively, either from themselves or the other. If you have ping installed, you can ping these names too, either via docker-compose exec, or via a manually-created shell.
Thus, as we discovered in the comments, you can connect from app to jdbc:postgresql://db:5432/media, and it should work.
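Concretely, the datasource URL in application.yml changes from localhost to the service name:

```yaml
spring:
  datasource:
    url: jdbc:postgresql://db:5432/media   # "db" is the Compose service name
    username: postgres
    password: postgres
```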


Connecting to Postgres Docker server - authentication failed

I have a PostgreSQL container set up that I can successfully connect to with Adminer, but I'm getting an authentication error when trying to connect via something like DBeaver using the same credentials.
I have tried exposing port 5432 in the Dockerfile and can see in Docker for Windows that the port is correctly bound. Since it is an authentication error, I'm guessing the issue isn't that the server can't be seen, but rather something with the username or password.
Docker Compose file and Dockerfile look like this.
version: "3.7"
services:
  db:
    build: ./postgresql
    image: postgresql
    container_name: postgresql
    restart: always
    environment:
      - POSTGRES_DB=trac
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=1234
    ports:
      - 5432:5432
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
  nginx:
    build: ./nginx
    image: nginx_db
    container_name: nginx_db
    restart: always
    ports:
      - "8004:8004"
      - "8005:8005"
Dockerfile: (Dockerfile will later be used to copy ssl certs and keys)
FROM postgres:9.6
EXPOSE 5432
Wondering if there is something else I should be doing to enable this to work via some other utility?
Any help would be great.
Thanks in advance.
Update:
Tried accessing the database through the IP of the postgresql container 172.28.0.3 but the connection times out which suggests that PostgreSQL is correctly listening on 0.0.0.0:5432 and for some reason the user and password are not usable outside of Docker even from the host machine using localhost.
Check your pg_hba.conf file in the Postgres data folder.
The default configuration is that you can only log in from localhost (which I assume Adminer is doing) but not from external IPs.
In order to allow access from all external addresses via password authentication, add the following line to your pg_hba.conf:
host all all 0.0.0.0/0 md5
Then you can connect to your Postgres DB running in the Docker container from outside, given you expose the port (5432).
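A sketch of the relevant pg_hba.conf entries (the address field takes a CIDR mask or the keyword all; 0.0.0.0/0 and ::0/0 match any IPv4/IPv6 client, so tighten this outside local development):

```ini
# TYPE  DATABASE  USER  ADDRESS      METHOD
host    all       all   0.0.0.0/0    md5
host    all       all   ::0/0        md5
```

Apply the change without a restart via SELECT pg_reload_conf(); in psql, or pg_ctl reload.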
Use the command docker container inspect <container_id>; this will tell you which IP address and ports are exposed outside the container.
The command docker container ls will help identify the container ID.
After updating my default db name, I also had to update my docker-compose by explicitly exposing the ports, as the OP did:
db:
  image: postgres:13-alpine
  volumes:
    - dev-db-data:/var/lib/postgresql/data
  environment:
    - POSTGRES_DB=devdb
    - POSTGRES_USER=user
    - POSTGRES_PASSWORD=1234
  ports:
    - 5432:5432
But the key here was restarting the server! DBeaver has connected to localhost:5432 :)