How to debug Postgres connection issues on a Google Cloud Compute Engine VM - postgresql

When trying to connect to Postgres using psql like this: psql -d DATABASE_NAME -U postgres -h PUBLIC_VM_IP
I get the following error: psql: error: could not connect to server: Operation timed out
My configuration is
Google Compute Engine VM instance, e2-micro (2 vCPUs, 1 GB memory).
The container image is set to gcr.io/google/postgresql12:latest
I have two env vars set up on the VM via the console: POSTGRES_DB=MY_DATABASE_NAME and POSTGRES_PASSWORD=MY_PASSWORD
I have network tags set to allow_postgres.
I have a firewall rule named allow_posgres with the following properties:
logs: Off
Network: default
Priority: 1000
Direction: Ingress
Action on match: Allow
Targets: Target tag allow-postgres
Source filters: IP ranges: 0.0.0.0/0 and my computer's IP
Protocols and ports: tcp:5432
Enforcement: Enabled
Insights: None
If I SSH into the instance and run docker ps I see gcr.io/stackdriver-agents/stackdriver-logging-agent:1.8.4 as the only running container, which doesn't seem right. I should see the Postgres container running too, right?
If so, how do I change my configuration in the console to ensure that container is running and how can I further debug the connection issue?

When I tried running the image you specified I always got an error (seen after I logged in to this VM):
#########################[ Error ]#########################
#  The startup agent encountered errors. Your container   #
#  was not started. To inspect the agent's logs use       #
#  'sudo journalctl -u konlet-startup' command.           #
###########################################################
The logs displayed by journalctl also gave no indication what's wrong.
So I went to the Marketplace and looked for PostgreSQL 12. After clicking the Show Pull Command button I got a link to this image: marketplace.gcr.io/google/postgresql12:latest.
I created a new VM (default settings) and used the above image URL to deploy a container. I assigned the proper network tag to allow connections on port 5432, clicked the Create button, and created a firewall rule just as you described in your question.
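For reference, the same firewall rule can also be created from the gcloud CLI; this is just a sketch using the tag and port from the question, so adjust names to your project:

```shell
# Sketch: recreate the question's firewall rule from the CLI.
# Rule name and target tag are taken from the question.
gcloud compute firewall-rules create allow-postgres \
  --network=default \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:5432 \
  --target-tags=allow-postgres \
  --source-ranges=0.0.0.0/0
```

Remember that the VM must carry the exact same network tag (allow-postgres) for the rule to apply to it.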
After I logged in to the new VM I ran docker ps and saw that the container was running:
wb@pg1 ~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
37b651f9c383 marketplace.gcr.io/google/postgresql12:latest "docker-entrypoint.s…" 28 seconds ago Up 19 seconds klt-pg1-maok
After that I tried connecting from the outside:
wb@cloudshell:~$ psql -h xx.xxx.xxx.xxx -U postgres
psql (13.3 (Debian 13.3-1.pgdg100+1), server 12.6 (Debian 12.6-1.pgdg90+1))
Type "help" for help.
postgres=#
I was able to list all databases:
postgres=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+------------+------------+-----------------------
postgres | postgres | UTF8 | en_US.utf8 | en_US.utf8 |
template0 | postgres | UTF8 | en_US.utf8 | en_US.utf8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.utf8 | en_US.utf8 | =c/postgres +
| | | | | postgres=CTc/postgres
(3 rows)
As you can see I got in with no issues, so you may try doing the same. Note that I didn't configure any variables or a database password, since this was just for testing.
Use the link I found in the Marketplace and it should work.
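Independently of the container setup, it helps to check whether port 5432 is reachable at all before digging into Postgres itself: a hang or timeout at the TCP level means a firewall is dropping packets (or nothing is listening), not an authentication problem. A minimal probe, assuming bash and the coreutils timeout command are available:

```shell
# probe_port HOST PORT: prints "open" if something accepted the TCP
# connection, "closed-or-filtered" if it was refused or timed out.
probe_port() {
  host=$1; port=$2
  if timeout 5 bash -c "cat < /dev/null > /dev/tcp/$host/$port" 2>/dev/null; then
    echo "open"
  else
    echo "closed-or-filtered"
  fi
}

probe_port 127.0.0.1 5432   # replace with your VM's public IP
```

If this prints "open" against PUBLIC_VM_IP, the firewall rule and tag are fine and the problem is inside the container or its auth config.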

Related

How do I connect to a Docker PostgreSQL container from outside the host

I have a Docker container running PostgreSQL, connected to another container (a web mapping server) on an Ubuntu host. I would like to connect to my database from outside the host (another machine on the same network).
I'm able to connect to the database from the host (pgAdmin) and from the other container, but not from outside the host (through the public IP).
error screenshot in pgAdmin
How should I fix this?
Many thanks, Thomas
I solved it myself: I had to publish the port as 0.0.0.0:5432:5432 to make it reachable from any IP, public or local.
Many thanks to Hans for helping
You need to map port 5432 on the container to a port on the host and connect through that.
Let's say you want to use port 55432 on the host. Then you'd add the parameter -p 55432:5432 to your docker run command and connect to port 55432 on your Docker host machine.
You also need to allow incoming connections on port 55432 on your host machine's firewall to be able to connect from outside the host.
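Putting the above together, a full invocation might look like this (a sketch; the container name and password are illustrative, and the image tag is up to you):

```shell
# Publish container port 5432 on host port 55432, on all interfaces.
docker run -d --name pg \
  -e POSTGRES_PASSWORD=secret \
  -p 0.0.0.0:55432:5432 \
  postgres

# Then, from another machine on the network:
#   psql -h HOST_IP -p 55432 -U postgres
```

Binding to 0.0.0.0 explicitly makes the mapping reachable from other machines, not just from the host itself.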
Using the pgcli utility on the host:
Public IP address: xx.xx.xx.xx
pgcli -h xx.xx.xx.xx -p 5432 -U postgres    (use your port, e.g. 5416, if you defined a different one)
Server: PostgreSQL 15.1 (Debian 15.1-1.pgdg110+1)
Version: 3.5.0
Home: http://pgcli.com
postgres@10:postgres>
+-----------+----------+----------+------------+------------+-------------------+
| Name      | Owner    | Encoding | Collate    | Ctype      | Access privileges |
|-----------+----------+----------+------------+------------+-------------------|
| postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 |                   |
| template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres       |

airflow postgresql backend: (psycopg2.OperationalError) FATAL: Ident authentication failed for user "airflow"

I'm trying to use PostgreSQL as the backend for Airflow (v1.10.5) on a CentOS 7 machine (following this article: https://www.ryanmerlin.com/2019/07/apache-airflow-installation-on-ubuntu-18-04-18-10/) and seeing this error:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) FATAL: Ident authentication failed for user "airflow"
My settings on the machine are...
[airflow@airflowetl airflow]$ psql airflow
psql (9.2.24)
Type "help" for help.
airflow=> \du
List of roles
Role name | Attributes | Member of
-----------+------------------------------------------------+-----------
airflow | | {}
postgres | Superuser, Create role, Create DB, Replication | {}
airflow-> \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
airflow | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres +
| | | | | postgres=CTc/postgres+
| | | | | airflow=CTc/postgres
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
airflow=> \c airflow
You are now connected to database "airflow" as user "airflow".
airflow=> \dt
No relations found.
airflow=> \conninfo
You are connected to database "airflow" as user "airflow" via socket in "/var/run/postgresql" at port "5432".
[root@airflowetl airflow]# cat /var/lib/pgsql/data/pg_hba.conf
....
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all peer
# IPv4 local connections:
#host all all 127.0.0.1/32 ident
host all all 0.0.0.0/0 trust
# IPv6 local connections:
host all all ::1/128 ident
# Allow replication connections from localhost, by a user with the
# replication privilege.
#local replication postgres peer
#host replication postgres 127.0.0.1/32 ident
#host replication postgres ::1/128 ident
[root@airflowetl airflow]# cat /var/lib/pgsql/data/postgresql.conf
....
#------------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#------------------------------------------------------------------------------
# - Connection Settings -
#listen_addresses = 'localhost'    # what IP address(es) to listen on;
listen_addresses = '*'             # for Airflow connection
[airflow@airflowetl airflow]$ cat airflow.cfg
....
[core]
....
# The executor class that airflow should use. Choices include
# SequentialExecutor, LocalExecutor, CeleryExecutor, DaskExecutor, KubernetesExecutor
#executor = SequentialExecutor
executor = LocalExecutor
# The SqlAlchemy connection string to the metadata database.
# SqlAlchemy supports many different database engine, more information
# their website
#sql_alchemy_conn = sqlite:////home/airflow/airflow/airflow.db
sql_alchemy_conn = postgresql+psycopg2://airflow:mypassword@localhost:5432/airflow
and I'm not quite sure what could be going wrong here. Using the password from the sql_alchemy_conn string, I am able to run psql -U airflow --password and log in successfully, so I'm not sure what the auth failure is about.
One odd thing I notice is that the pg_hba.conf line has:
# IPv4 local connections:
#host all all 127.0.0.1/32 ident
host all all 0.0.0.0/0 trust
yet it appears that Postgres is still trying to use ident authentication (despite my having run service postgresql restart multiple times at this point).
Anyone have any further debugging suggestions or can see the error here?
You seem to be matching against
host all all ::1/128 ident
If you are not using IPv6, it's best to just comment out that line and try again.
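An alternative to commenting the line out is switching the loopback lines from ident to a password-based method, so the connection still matches a rule but skips the ident lookup. A pg_hba.conf sketch:

```
# pg_hba.conf (sketch): password auth instead of ident for loopback
host    all    all    127.0.0.1/32    md5
host    all    all    ::1/128         md5
```

After editing, reload the configuration, e.g. with sudo systemctl reload postgresql, or SELECT pg_reload_conf(); from a superuser psql session; pg_hba.conf changes are not picked up until a reload or restart.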
I'm comparing with my local setup, and one of the differences is that the owner of my airflow database is the user "airflow", while in your case it is "postgres". Please run this command:
ALTER DATABASE airflow OWNER TO airflow ;
Regards
xavy

Cannot connect to PostgreSQL from client - Error timed out

After many days of trying to connect to my PostgreSQL instance, I decided the time has come to ask for help.
I am trying to connect to my PostgreSQL db from a Windows machine.
I am trying pgAdmin 4 and DBeaver, but both fail to connect. Below is the screenshot of the error I receive when connecting using DBeaver.
The connection I am creating is like so:
My users are (\du):
List of roles
Role name | Attributes | Member of
-----------+------------------------------------------------------------+-----------
postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
umberto | Superuser, Create role, Create DB | {}
My databases (\l):
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+---------+---------+-----------------------
postgres | postgres | UTF8 | C.UTF-8 | C.UTF-8 |
template0 | postgres | UTF8 | C.UTF-8 | C.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | C.UTF-8 | C.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
umberto | umberto | UTF8 | C.UTF-8 | C.UTF-8 |
wondermap | postgres | UTF8 | C.UTF-8 | C.UTF-8 |
I don't know exactly where to look for logs to dig into this problem on the server machine. The only thing I could find was the folder /var/log/postgresql, where I see only two non-gzipped files, but the messages refer to days before my connection attempts.
Finally, my pg_hba.conf:
# Database administrative login by Unix domain socket
local all postgres peer
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all peer
# IPv4 local connections:
host all all 127.0.0.1/32 md5
host all all 0.0.0.0/0 md5
# IPv6 local connections:
host all all ::1/128 md5
host all all ::0/0 md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
local replication all peer
host replication all 127.0.0.1/32 md5
host replication all ::1/128 md5
host all all ::/0 md5
What could be the problem?
I wouldn't generally look at the Postgres logs for troubleshooting a connection timeout. If Postgres were rejecting the connections, they would be rejected right away rather than timing out; a timeout typically means the connection never reached Postgres at all, so there will be nothing relevant in the logs.
In my experience, a connection timeout is typically a Windows/networking issue: for example, a firewall on (or in front of) the server doesn't allow access on port 5432, or nothing is actually listening on port 5432 (Postgres isn't running, or it's configured to listen on a different port, etc.).
My favourite tool for troubleshooting these sorts of connectivity issues on Windows is portqry. Usage is portqry -n [hostname] -e [port number]. It will try to connect to [hostname] on port [port number] and give you the results:
Listening: portqry was able to connect to the host on the specified port, and an application was listening on that port. This is what you want.
Not listening: portqry was able to reach the host on the specified port, but nothing was listening on that port. In the case of postgres, this may be because the service isn't running, or is listening on a different port.
Filtered: portqry was unable to reach the host on the specified port. This means it was actually blocked from connecting, and this is generally caused by a firewall on the host or in between the client and host, which is preventing access to the host on that port.
In my case, I was trying to reach a database on AWS RDS. After whitelisting my IP address, the error was resolved.
If you are hosting pgAdmin on the same server as your Postgres database, try entering "localhost" in the server/host field when setting up the pgAdmin connection to that database; also try "127.0.0.1". After ruling out all network issues, this, oddly enough, was what worked for me.
I got this exact error. The issue was resolved after public accessibility was granted to the AWS database. It can be checked in the AWS console.
Just to add another solution here. First, I should clarify that I'm using Rocky Linux with Xfce on the host and Windows 10 on the client. I was having this same issue, and what I did was:
On the host, go to Network (taskbar) > right click > Edit connections
Double click on the connection your machine is in.
On the "General" tab, check the Firewall zone, and then change it to "trusted" then Save.
Now that solved my problem, hope it helps someone.

Docker-compose environment variables

I am trying to setup a postgres container and want to setup the postgres login with:
POSTGRES_USER: docker
POSTGRES_PASSWORD: docker
So I have created the docker-compose.yml like so
web:
  build: .
  ports:
    - "62576:62576"
  links:
    - redis
    - db
db:
  image: postgres
  environment:
    POSTGRES_PASSWORD: docker
    POSTGRES_USER: docker
redis:
  image: redis
I have also tried the other syntax for environment variable declaring the db section as:
db:
  image: postgres
  environment:
    - POSTGRES_PASSWORD=docker
    - POSTGRES_USER=docker
However neither of these options seem to work because for whatever reason whenever I try to connect to the postgres database using the various connection strings:
postgres://postgres:postgres@db:5432/users
postgres://postgres:docker@db:5432/users
postgres://docker:docker@db:5432/users
They all give me auth failures, as opposed to complaining that there is no users database.
I struggled with this for a while and wasn't having luck with the accepted answer. I finally got it to work by removing the container:
docker-compose rm postgres
And then the volume as well:
docker volume rm myapp_postgres
Then when I did a fresh docker-compose up I saw CREATE ROLE fly by, which I'm assuming is what was missed on the initial up.
The reasons for this are elaborated on here, on the Git repo for the Docker official image for postgres.
If you're using Docker:
Check whether a local PostgreSQL instance is also running, because it often conflicts with the Docker one; if so, stop it, change its port, or uninstall it to avoid the conflict.
I had the same problem, and in my case problem was fixed with a single command:
docker-compose up --force-recreate
The authentication error you got would help a lot!
I fired up the postgres image with your arguments:
docker run --name db -d -e POSTGRES_PASSWORD=docker -e POSTGRES_USER=docker postgres
Then I exec'ed in :
docker exec -it db psql -U docker user
psql: FATAL: database "user" does not exist
I get the error message you are expecting because I have trust authentication :
docker exec -it db cat /var/lib/postgresql/data/pg_hba.conf | grep -v '^#'
local all all trust
host all all 127.0.0.1/32 trust
host all all ::1/128 trust
host all all 0.0.0.0/0 md5
To simulate your web container, I'll run another instance of the postgres container and link the db container and then connect back to the db container:
core@ku1 /tmp/i $ docker run --rm --name web --link db:db -it postgres psql -h db -Udocker user
Password for user docker:
psql: FATAL: password authentication failed for user "docker"
I get an authentication error if I enter the incorrect password. But, if I enter the correct password:
core@ku1 /tmp/i $ docker run --rm --name web --link db:db -it postgres psql -h db -Udocker user
Password for user docker:
psql: FATAL: database "user" does not exist
It all seems to be working correctly. I put it all in a yaml file and tested it that way as well:
web:
  image: postgres
  command: sleep 999
  ports:
    - "62576:62576"
  links:
    - db
db:
  image: postgres
  environment:
    POSTGRES_PASSWORD: docker
    POSTGRES_USER: docker
then fired it up with docker-compose:
core@ku1 /tmp/i $ docker-compose -f dc.yaml up
Creating i_db_1...
Creating i_web_1...
Attaching to i_db_1, i_web_1
db_1 | ok
db_1 | creating template1 database in /var/lib/postgresql/data/base/1 ... ok
db_1 | initializing pg_authid ... ok
db_1 | initializing dependencies ... ok
db_1 | creating system views ... ok
db_1 | loading system objects' descriptions ... ok
db_1 | creating collations ... ok
db_1 | creating conversions ... ok
db_1 | creating dictionaries ... ok
db_1 | setting privileges on built-in objects ... ok
db_1 | creating information schema ... ok
db_1 | loading PL/pgSQL server-side language ... ok
db_1 | vacuuming database template1 ... ok
db_1 | copying template1 to template0 ... ok
db_1 | copying template1 to postgres ... ok
db_1 | syncing data to disk ... ok
db_1 |
db_1 | WARNING: enabling "trust" authentication for local connections
db_1 | You can change this by editing pg_hba.conf or using the option -A, or
db_1 | --auth-local and --auth-host, the next time you run initdb.
db_1 |
db_1 | Success. You can now start the database server using:
db_1 |
db_1 | postgres -D /var/lib/postgresql/data
db_1 | or
db_1 | pg_ctl -D /var/lib/postgresql/data -l logfile start
db_1 |
db_1 |
db_1 | PostgreSQL stand-alone backend 9.4.1
db_1 | backend> statement: CREATE DATABASE "docker" ;
db_1 |
db_1 | backend>
db_1 |
db_1 | PostgreSQL stand-alone backend 9.4.1
db_1 | backend> statement: CREATE USER "docker" WITH SUPERUSER PASSWORD 'docker' ;
db_1 |
db_1 | backend>
db_1 | LOG: database system was shut down at 2015-04-12 22:01:12 UTC
db_1 | LOG: database system is ready to accept connections
db_1 | LOG: autovacuum launcher started
^Z
[1]+ Stopped docker-compose -f dc.yaml up
core@ku1 /tmp/i $ bg
you can see that the user and password were created. I exec in:
core@ku1 /tmp/i $ docker exec -it i_web_1 psql -Udocker -h db user
Password for user docker:
psql: FATAL: password authentication failed for user "docker"
core@ku1 /tmp/i $
db_1 | FATAL: password authentication failed for user "docker"
db_1 | DETAIL: Connection matched pg_hba.conf line 95: "host all all 0.0.0.0/0 md5"
core@ku1 /tmp/i $ docker exec -it i_web_1 psql -Udocker -h db user
Password for user docker:
psql: FATAL: database "user" does not exist
db_1 | FATAL: database "user" does not exist
So the only thing I can think of is that you are trying to connect to the database from your host, not from the web container? Or your web container is not using 'db' as the host to connect to? Your definition for the web container does not contain any errors that I can see.
Thanks to Bryan and the docker-compose exec containername env tip, I discovered that the volumes also need to be deleted. Since docker volume rm volumename requires knowing the exact name, it is easier to just delete everything with:
docker-compose down --volumes
This helped me
docker stop $(docker ps -qa) && docker system prune -af --volumes && docker compose up
In my case, running postgres:13-alpine on Windows 10 under WSL2, none of the above solutions did the trick.
My mistake was that the docker network name I was using was shared with another project. Let's say I have projects A and B, both with the following structure:
myappfolder
  - docker-compose.yml
  - services
    - app (depends on db)
    - db
It happens that, by default, docker-compose derives the network name from the parent directory of the docker-compose.yml file. Therefore both projects, A and B, were trying to connect to the same network: myappfolder_default.
To solve this:
ensure network names are unique among projects:
a. either change the name of the root folder to be unique
b. or edit your docker-compose.yml to set an explicit network name
do docker-compose down -v; this will reset all the databases you had defined in that network, so make sure you take a pg_dump before proceeding
do docker-compose up
More networking docs here: https://docs.docker.com/compose/networking/
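For option (b), recent Compose versions let you pin the default network's name explicitly in docker-compose.yml; the name below is illustrative:

```yaml
networks:
  default:
    name: project_a_net   # a unique name per project avoids cross-project clashes
```

With an explicit name set, renaming or moving the project folder no longer changes which network the services join.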
I had a similar situation. Following the answer from @Greg, I did a docker-compose up, and it picked up the environment variable.
Prior to that, I had just been using docker-compose run, and it wasn't picking up the environment variable, as proven by running docker-compose exec task env. Strangely, docker-compose run task env showed the environment variable I was expecting.

pgAdmin III: Access to database denied

I'm trying to connect to a remote database from pgAdmin III. I have created a "New Server Registration". When I connect to the database I get "access to database denied".
I think I set everything up correctly. These are my PostgreSQL settings:
pg_hba.conf >
# PostgreSQL Client Authentication Configuration File
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all postgres trust
local all all md5
# IPv4 local connections:
host all all 127.0.0.1/32 md5
# IPv6 local connections:
host all all ::1/128 md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
host all all 192.168.0.0/16 md5
postgresql.conf > I allowed all incoming connections: listen_addresses = '*'
Using SSH I can connect to the database:
[fuiba@test]$ psql -h localhost -p 26888 -d postgres
psql (9.1.11)
Type "help" for help.
postgres=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
--------------+---------+----------+---------+-------+---------------------
postgres | fuiba | UTF8 | C | C |
template0 | fuiba | UTF8 | C | C | =c/fuiba +
| | | | | fuiba=CTc/fuiba
template1 | fuiba | UTF8 | C | C | =c/fuiba +
| | | | | fuiba=CTc/fuiba
(3 rows)
What am I doing wrong? Any help would be highly appreciated. Thank you!
P.S.: I'm running pgAdmin III on Windows 7 and PostgreSQL on Linux CentOS.
pgAdmin connects to PostgreSQL from a different host than the one you use when logging in to the database server via SSH. The IP address mentioned in the error message (starting with 93.39) is not mentioned in your pg_hba.conf.
Either add the public IP address (the one starting with 93.39) of the host that runs pgAdmin to pg_hba.conf, or connect via an SSH tunnel. Remember to reload PostgreSQL's configuration, or restart PostgreSQL, after modifying pg_hba.conf.
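The added line would follow the same pattern as the existing entries; CLIENT_PUBLIC_IP here is a placeholder for the pgAdmin machine's address (the one starting with 93.39 in the error message):

```
# pg_hba.conf (sketch): allow password auth from the pgAdmin machine
host    all    all    CLIENT_PUBLIC_IP/32    md5
```

Then reload, e.g. with SELECT pg_reload_conf(); as a superuser, so the new rule takes effect.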
I once struggled with this; it worked after changing the IP mask for the entries in pg_hba.conf, but I can't quite remember the details, and besides, that configuration is different for every network. The point is that you most likely have an error in one of those entries. Above, they even indicate that the error message hints at which entry is wrong. If the entries are indeed correct, I'd check the auth method (see whether the password is being passed as an MD5 hash, for example).
I hope this can help you =)