How do I solve the PostgreSQL error "connection attempt failed"?

I wanted to build my Spring Boot project and then dockerize it. But when I built it, I got an error. I think this is caused by the PostgreSQL settings, but I could not find the reason.
Could you please help me?
docker-compose.yml file:
version: '2'
services:
  web:
    build: .
    ports:
      - 8080:8080
  db:
    container_name: productdb
    image: postgres:9.5
    volumes:
      - sample_db:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=bright
      - POSTGRES_USER=postgres
      - POSTGRES_DB=productdb
      - PGDATA=/var/lib/postgresql/data/pgdata
    ports:
      - "5432:5432"
volumes:
  productdb: {}
application.yml file:
server:
  port: 8761
eureka:
  client:
    registerWithEureka: false
    fetchRegistry: false
  server:
    enableSelfPreservation: false
    waitTimeInMsWhenSyncEmpty: 0
spring:
  application:
    name: product-service
  datasource:
    url: jdbc:postgresql://db:5432/productdb
    username: postgres
    password: xxxx
    initialization-mode: always
  jpa:
    show-sql: true
    hibernate:
      ddl-auto:
    properties:
      hibernate:
        temp:
          use_jdbc_metadata_defaults: false
The error looks like:
org.postgresql.util.PSQLException: The connection attempt failed.
Thank you

If your docker-compose.yml file is well configured, it should start two containers:
docker ps
(see example output at https://intelligentbee.com/2017/09/18/setup-docker-symfony-project/)
One for the app and one for the db.
These containers run on the same host, so if your web app needs to connect to the database, you must use the host's IP instead of localhost, 127.0.0.1 or 0.0.0.0.
You can get the IP with this:
hostname -I | awk '{printf $1}'
If your web app and your database were on different hosts, you could use the public IP where the database is hosted. But as you are using docker-compose, this is not the case.
I suggest you test whether your database is ready and available before using it in your web app.
In order to test your database, you can follow one of these approaches:
Check db status with telnet
There are several ways, but the easiest option is the telnet command. For instance, to test whether a mysql container is ready to use on the same machine where it was started:
telnet localhost 3306
If your mysql is ready, telnet shows a successful connection.
Any other result indicates that your mysql container has exited or is misconfigured.
Note: replace 3306 with the correct PostgreSQL port (5432 here).
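For the PostgreSQL container from the question, a quick check might look like this (pg_isready ships with the PostgreSQL client tools; the port matches the mapping in the docker-compose.yml above):
# check that something is listening on the published port
telnet localhost 5432
# or, with the PostgreSQL client tools installed
pg_isready -h localhost -p 5432 -U postgres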
Check db status with a Database IDE
Another option, for UI users, is testing the database connection with a database IDE. Just download one of the several PostgreSQL client IDEs and test your database.
Don't hardcode parameters
It is good practice to externalize configuration using environment variables; both Spring and Docker support them.
So, modify your application.yml:
From
datasource:
  url: jdbc:postgresql://db:5432/productdb
To
datasource:
  url: jdbc:postgresql://${DATABASE_HOST}:5432/productdb
For development, in Eclipse use Run As > Run Configurations > Environment section.
For production you can:
export the variable before running
pass it to your docker run command...
docker run -d \
--name my_funny_api \
-p 8080:8080 \
-e "DATABASE_HOST=10.10.01.52" \
-i -t my_funny_api_image
or
export DATABASE_HOST=$(hostname -I | awk '{printf $1}')
docker run -d \
--name my_funny_api \
-p 8080:8080 \
-e "DATABASE_HOST=${DATABASE_HOST}" \
-i -t my_funny_api_image
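Since the question uses docker-compose rather than plain docker run, the same variable can also be set there. A minimal sketch for the web service from the question (DATABASE_HOST=db works because compose puts both services on the same network and db is the service name):
web:
  build: .
  ports:
    - 8080:8080
  environment:
    - DATABASE_HOST=db
  depends_on:
    - db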
Finally, to avoid managing these variables manually, you can use: http://github.com/jrichardsz/tachikoma-ops

Using DataGrip software with a DB in DigitalOcean, I got the error:
[08001] The connection attempt failed. java.net.SocketTimeoutException: connect timed out.
I made sure my current IP was one of the allowed inbound connections, and that worked. (Even though the error should probably have been different.)
Hope this is useful to someone eventually.

Your DB should accept connections from outside the container:
sudo docker run --name pg -p 5432:5432 -v pg_data:/var/lib/postgresql/data -e POSTGRES_DB=mydb -e POSTGRES_USER=pg_user -e POSTGRES_PASSWORD=pg_password -d postgres -c "listen_addresses=*"
"listen_addresses=*" makes PostgreSQL accept connections from outside the container.
You can use the following credentials to connect from your Spring Boot project:
db_user=pg_user
db_password=pg_password
db_url=jdbc:postgresql://localhost:5432/mydb
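In application.yml form, matching the layout from the question (localhost applies when the Spring Boot app runs outside Docker; if it runs in the same compose network, use the db service name as the host instead):
spring:
  datasource:
    url: jdbc:postgresql://localhost:5432/mydb
    username: pg_user
    password: pg_password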

Related

Error: P1001: Can't reach database server at `localhost`:`5432`

I'm having a problem when running the npx prisma migrate dev command. Docker Desktop tells me that the database is running correctly on port 5432, but I still can't see the problem.
I tried adding connect_timeout=300 to the connection string and tried many versions of Postgres and Docker, but I can't get it to work.
I'm leaving the link to the repo so you can see the details of the code.
I would greatly appreciate your help, since I have been lost on this for a long time.
Repo: https://github.com/gabrielmcreynolds/prisma-vs-typeorm/tree/master/prisma-project
Docker-compose.yml
version: "3.1"
services:
postgres:
image: postgres
container_name: postgresprisma
restart: always
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=santino2002
ports:
- "5432:5432"
volumes:
- postgres:/var/lib/postgresql/data
volumes:
postgres:
Error:
Error: P1001: Can't reach database server at localhost:5432
Please make sure your database server is running at localhost:5432.
docker ps shows this (screenshot in the original post; the postgres container's ports column reads just 5432/tcp):
It looks like the application and the database are running in two separate containers. So, in this case, connecting to localhost:5432 from the application container will try to connect to port 5432 within that container, not on the Docker host's localhost.
To connect to the database from the application container, use postgres:5432 (if they are on the same network) or <dockerhost>:5432.
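For example, if the app runs as a service in the same compose file, the Prisma connection string in .env might look like this (the database name here is a placeholder, since it is not shown in the question):
DATABASE_URL="postgresql://postgres:santino2002@postgres:5432/mydb?schema=public"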
Your docker ps output shows that your postgres container has no ports published to your local network.
It should look something similar to this in the ports column:
0.0.0.0:5432->5432/tcp, :::5432->5432/tcp
But yours is just 5432/tcp.
You need to publish ports for your postgres container.
The docker-compose.yml file you posted in the question is correct. Probably you started the postgres container with no ports first, then changed your docker-compose.yml file to add them, so you just need to recreate the container now.
Use docker compose down && docker compose up --build -d to do that.

cannot access postgres db running docker container from local machine

I have been spending 3-4 hours on this and still have not found a solution.
I can successfully run the docker container and use psql from the container's bash; however, when I try to call the db from my local machine, I keep getting this error message:
error role "postgres" does not exist
I have already tried editing "listen_addresses" in the postgresql.conf file from the container's bash.
My setup:
I am using a MacBook - Monterey 12.4
my docker compose file:
version: '3.4'
services:
  postgres:
    image: postgres:latest
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_DB=postgres_db
      - POSTGRES_USER=testUser
      - POSTGRES_PASSWORD=testPW
    volumes:
      - postgres-data:/var/lib/postgresql/db
but this issue occurs if I do it through the standard CLI command as well, i.e.:
docker run -d -p 5432:5432 --name my-postgres -e POSTGRES_PASSWORD=mysecretpassword postgres
I tried to follow this tutorial but it didn't work:
https://betterprogramming.pub/connect-from-local-machine-to-postgresql-docker-container-f785f00461a7
When I try this command:
psql -h localhost -p 5432 -U postgres -W
it doesn't work:
psql: error: connection to server at "localhost" (::1), port 5432 failed: FATAL: role "postgres" does not exist
Also for reference, the user "postgres" does exist in Postgres - as a superuser.
Replace POSTGRES_USER=testUser with POSTGRES_USER=postgres in the compose configuration. Also use the password defined in POSTGRES_PASSWORD. Delete the old container and create a new one.
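Alternatively, keep the compose file as it is and connect with the user it actually creates (credentials taken from the compose file in the question):
psql -h localhost -p 5432 -U testUser -d postgres_db
# enter testPW when prompted for the password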
Thank you all for your help on this.
It turns out the issue was that I was also running Postgres on my local machine.
Once I turned that off, I was able to connect.
I appreciate your time!

How to connect to a Postgres database running in local Docker container through locally-run psql command?

I'm running a docker container with the vanilla Postgres image on my local machine. I'd like to connect to the database from my local machine (i.e., not from within the container). However, on trying to connect, I get an error.
Here's my docker-compose.yml file:
version: "3.8"
services:
db:
image: postgres
restart: always
ports:
- 5432:5432
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: mypassword
Here's how I start up:
docker-compose run db
Here's how I connect:
psql -h localhost -p 5432 -U postgres
This produces the error:
could not connect to server: Connection refused Is the server running on host "localhost" (127.0.0.1) and accepting TCP/IP connections on port 5432?
If I spin up the database without Docker Compose, the same connection command works as expected:
docker run --name mypg -p 5432:5432 -e POSTGRES_PASSWORD=password postgres
I could just go with the flow and use the command above. But this seems to be pointing to a flaw in how I think about Docker/Compose. For example, maybe Docker Compose's internal DNS resolver makes this approach fail.
Any ideas?
Version info:
psql --version
psql (PostgreSQL) 13.3
I have read through several SO posts, including these, but they don't address or fix the problem I'm seeing:
docker-compose: accessing postgres' shell (psql)
Can't connect to postgres when using docker-compose
Try docker-compose up db instead of run. Using run will run a one-off command against your container, whereas up will turn on the container and leave it running, so another application should be able to access it. (Also, docker-compose run does not publish the service's ports unless you pass --service-ports, which is why psql could not reach port 5432.)
https://docs.docker.com/compose/faq/#whats-the-difference-between-up-run-and-start
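With the compose file from the question, that looks like:
docker-compose up -d db
psql -h localhost -p 5432 -U postgres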

How to connect to postgres from a golang app when using docker-compose?

My docker-compose file
version: "2"
services: db:
restart: always
image: postgres:latest
ports:
- "5435:5432"
environment:
POSTGRES_PASSWORD: password
POSTGRES_USER: user
POSTGRES_DB: db adminer:
web:
image: golang:1.7
working_dir: /go/src/app
command: go run bot.go
ports:
- "3000:3000"
volumes:
- ./bot:/go/src/app
links:
- db
environment:
PORT: 3000
CONNECTION_STRING_DEV: postgres://user:password#db/db
and my bot.go, where I try to connect:
db, err = sql.Open("postgres", "user=user password=password host=db dbname=db port=5432 sslmode=verify-full ")
When I bring up my containers, I see errors:
panic: dial tcp 5.61.14.99:5432: getsockopt: connection refused
I changed the port to 5432 and tried to connect like this:
db, err = sql.Open("postgres", "postgres://user:password@db/db")
but I get the same errors.
What's wrong with my docker-compose setup?
Your docker-compose looks a little messy but that's probably from copy and pasting. It's likely that postgres is not yet up and running when Go tries to connect. To test if that's the problem, first:
docker-compose up -d db
Then wait until postgres is ready by checking:
docker-compose logs -f db
and look out for a log line like:
db_1 | LOG: database system is ready to accept connections
When that line appears, quit the log command (Ctrl+C) and run your bot:
docker-compose up web
If it is now working, that was indeed your problem.
Solution: Wait until postgres is ready. Easy ways to achieve this are:
sleep for an amount of time (e.g. 1 min) before running web
sleep inside web before connecting
when connecting fails, sleep for 5 seconds and retry indefinitely (a sketch of this approach follows the script below)
The disadvantage of these is that you don't know when postgres will be ready, so you could wait too long or not long enough. A better solution is to run your bot only after a successful connection to postgres has been made.
Example from https://docs.docker.com/compose/startup-order/:
#!/bin/bash
# wait-for-postgres.sh
set -e
host="$1"
shift
cmd="$@"
until psql -h "$host" -U "postgres" -c '\l'; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done
>&2 echo "Postgres is up - executing command"
exec $cmd
Add this script as wait-for-postgres.sh and in your docker-compose.yml change the command for web like so:
command: ["./wait-for-postgres.sh", "db", "go", "run", "bot.go"]
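If you prefer to retry inside the Go code instead of using a wrapper script, here is a minimal sketch using the lib/pq driver from the question (the connection string matches the compose file above; sslmode=disable is assumed for a local container):
package main

import (
	"database/sql"
	"log"
	"time"

	_ "github.com/lib/pq"
)

// waitForDB pings the database until it responds or the retry budget runs out.
func waitForDB(dsn string) (*sql.DB, error) {
	db, err := sql.Open("postgres", dsn)
	if err != nil {
		return nil, err
	}
	var pingErr error
	for i := 0; i < 30; i++ {
		if pingErr = db.Ping(); pingErr == nil {
			return db, nil
		}
		log.Println("postgres not ready yet, retrying:", pingErr)
		time.Sleep(2 * time.Second)
	}
	return nil, pingErr
}

func main() {
	// "db" is the compose service name, so it resolves inside the compose network
	db, err := waitForDB("postgres://user:password@db:5432/db?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	log.Println("connected to postgres")
}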
I found the answer: I already had a container with postgres running from another app. I didn't think about this because docker-compose showed no errors when building the db container. I used docker ps, then docker stop xxxxx to stop the db container from the other app, then built and brought up my app, and the problem was solved.

How to connect to Cloud SQL (2nd Generation) via MySQL Proxy Docker Container over TCP

Running on Mac OS X, I have been trying to connect to a Cloud SQL instance via the proxy using these directions. Once you have installed the MySQL client, gce-proxy container, and have created a service account in Google Cloud Platform, you get down to running these two commands specified in the documentation:
docker run -d -v /cloudsql:/cloudsql \
-v [LOCAL_CERTIFICATE_FILE_PATH]:[LOCAL_CERTIFICATE_FILE_PATH] \
b.gcr.io/cloudsql-docker/gce-proxy /cloud_sql_proxy \
-instances=[INSTANCE_CONNECTION_NAME]=tcp:3306 -credential_file=[CLOUD_KEY_FILE_PATH]
mysql -h127.0.0.1 -uroot -p
First, I don't understand how this should ever work, since the container is not exposing a port. So unsurprisingly, when I attempted to connect, I got the following error from the MySQL client:
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (61)
But if I do expose the port by adding -p 3306:3306 to the docker run command, I still can't connect. Instead I get the following error from MySQL client:
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0
I have successfully connected to the proxy by running cloud_sql_proxy on my docker host machine following that documentation, so I am confident my credential file and my mysql client are configured correctly. The logs of the container do not state that any connection was attempted. I have no problem connecting to a normal mysql container via docker. What am I missing here?
I was able to figure out how to use cloudsql-proxy in my local docker environment by using docker-compose. You will need to pull down your Cloud SQL instance credentials and have them ready. I keep them in my project root as credentials.json and add it to my .gitignore in the project.
The key part I found was using =tcp:0.0.0.0:5432 after the GCP instance ID so that the port can be forwarded. Then, in your application, use cloudsql-proxy instead of localhost as the hostname. Make sure the rest of your db creds are valid in your application secrets so that it can connect through the local proxy supplied by the cloudsql-proxy container.
Note: Keep in mind I'm writing a tomcat java application and my docker-compose.yml reflects that.
docker-compose.yml:
version: '3'
services:
  cloudsql-proxy:
    container_name: cloudsql-proxy
    image: gcr.io/cloudsql-docker/gce-proxy:1.11
    command: /cloud_sql_proxy --dir=/cloudsql -instances=<YOUR INSTANCE ID HERE>=tcp:0.0.0.0:5432 -credential_file=/secrets/cloudsql/credentials.json
    ports:
      - 5432:5432
    volumes:
      - ./credentials.json:/secrets/cloudsql/credentials.json
    restart: always

  tomcatapp-api:
    container_name: tomcatapp-api
    build: .
    volumes:
      - ./build/libs:/usr/local/tomcat/webapps
    ports:
      - 8080:8080
      - 8000:8000
    env_file:
      - ./secrets.env
    restart: always
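For the tomcatapp-api service above, the datasource host then becomes the proxy's service name. A sketch of what the relevant entries in secrets.env might look like (the variable names, database name, user, and password are placeholders, since they are not shown here):
DB_URL=jdbc:postgresql://cloudsql-proxy:5432/[DB_NAME]
DB_USER=[DB_USER]
DB_PASSWORD=[DB_PASSWORD]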
It does look like there are some omissions in the documentation.
1) As you point out, you need to expose the port from the container. You'll want to make sure you only expose it to the local machine by specifying -p 127.0.0.1:3306:3306.
2) Then, when running the container, you'll want the proxy to listen on all interfaces inside the container by specifying -instances=[INSTANCE_CONNECTION_NAME]=tcp:0.0.0.0:3306
I tried @Vadim's suggestion, which is basically this:
docker run -d -v /cloudsql:/cloudsql \
-p 127.0.0.1:3306:3306 \
-v [LOCAL_CERTIFICATE_FILE_PATH]:[LOCAL_CERTIFICATE_FILE_PATH] \
b.gcr.io/cloudsql-docker/gce-proxy /cloud_sql_proxy \
-instances=[INSTANCE_CONNECTION_NAME]=tcp:0.0.0.0:3306 -credential_file=[CLOUD_KEY_FILE_PATH]
I was still unable to get a connection, as I still got this error:
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0
However, the logs of the docker container showed a connection, like so:
2016/10/16 07:52:32 New connection for "[INSTANCE_CONNECTION_NAME]"
2016/10/16 07:52:32 couldn't connect to "[INSTANCE_CONNECTION_NAME]": Post https://www.googleapis.com/sql/v1beta4/projects/[PROJECT_NAME]/instances/[CLOUD_SQL_INSTANCE_NAME]/createEphemeral?alt=json: oauth2: cannot fetch token: Post https://accounts.google.com/o/oauth2/token: x509: failed to load system roots and no roots provided
So now it appeared that the proxy was getting the traffic, but it could not find the certificates it needed for SSL. I had used OpenSSL's cert.pem export of my certificates and mounted it to the same location in the docker container. It makes sense that an arbitrary mapping of [LOCAL_CERTIFICATE_FILE_PATH]:[LOCAL_CERTIFICATE_FILE_PATH] wasn't helping the proxy figure out where the certificates were. So I used a clue from this Kubernetes setup guide and changed the mounted volume to -v [LOCAL_CERTIFICATE_FILE_PATH]:/etc/ssl/certs. Mercifully, that worked.
TL;DR - Here is the final syntax for getting the Docker Container to run over TCP:
docker run -d \
-p 127.0.0.1:3306:3306 \
-v [SERVICE_ACCOUNT_PRIVATE_KEY_DIRECTORY]:[SERVICE_ACCOUNT_PRIVATE_KEY_DIRECTORY] \
-v [LOCAL_CERTIFICATE_DIRECTORY]:/etc/ssl/certs \
b.gcr.io/cloudsql-docker/gce-proxy /cloud_sql_proxy \
-instances=[INSTANCE_CONNECTION_NAME]=tcp:0.0.0.0:3306 \
-credential_file=[SERVICE_ACCOUNT_PRIVATE_KEY_JSON_FILE]