Can't connect to PostgreSQL server from AKS - postgresql

I've created an AKS cluster and a Flexible Server PostgreSQL database.
The PostgreSQL database is public, and while setting up the DB I added a firewall entry to allow connections from my local IP, which worked fine.
When I then tried to connect from AKS, I couldn't; I was getting timeouts.
Eventually, I enabled the setting that allows access from any Azure service.
That fixed the timeouts, and I can now connect, but I get a new error:
no pg_hba.conf entry for host "**.**.***.203"
What am I doing wrong?
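For reference, one way to reproduce the connection attempt from inside the cluster is a throwaway psql pod; the server name, user, and database below are placeholders, and sslmode=require is included only on the assumption that the Flexible Server enforces TLS:
kubectl run psql-test --rm -it --restart=Never --image=postgres:16 -- psql "host=<server-name>.postgres.database.azure.com port=5432 dbname=postgres user=<admin-user> sslmode=require"
If this pod reaches a password prompt, pg_hba matching succeeded for that combination of source address, SSL setting, and user, which helps narrow down whether the application's client configuration is the difference.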

Related

Problem logging in to a PostgreSQL database on AWS

I have a server on the AWS platform running an app. The database is PostgreSQL, and authentication is done with .pem files.
A few days ago I was told that the database was approaching its assigned storage quota, so I increased the quota through the RDS module in the console.
Since doing that, I can't log in through pgAdmin to check the database. Nothing else was modified.
When I try to log in to the database I get this message: "Failed to create the SSH tunnel. Error: Could not establish session to SSH gateway"
The login configuration has always been like this (before increasing the quota I was able to log in without any problems):
Use SSH tunneling: yes
Tunnel host: the IP assigned is the same and is still available.
Tunnel port: the port number is specified
Username: the username is specified
Authentication: Identity File
Identity file: it exists at the local path on my PC.
How can I fix this problem?
Thanks in advance.
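One way to isolate whether pgAdmin or the gateway is at fault is to open the same tunnel manually from a terminal, using the values from the configuration above (tunnel host, port, username, key path, and database host are placeholders here):
ssh -v -i /path/to/identity.pem -p <tunnel-port> <username>@<tunnel-host> -N -L 5433:<database-host>:5432
If this command also fails, the problem is on the SSH side (security group, key permissions, or the instance itself) rather than in pgAdmin; if it succeeds, pgAdmin can be pointed at localhost:5433 directly, and the -v output shows where the session is being dropped.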

Can't connect to Amazon RDS

I just set up AWS RDS and I'm trying to connect pgAdmin to it. I put in the endpoint and the port shown in the RDS dashboard, and the username and password I set. When I try connecting, I get an error message saying: Unable to connect to server "host name" port "port" failed: timeout expired.
I also tried connecting Prisma to it by running npx prisma migrate dev --name init, and I get an error saying P1001: Can't reach database server at "host name".
I made sure to set Publicly accessible to Yes, but it's still not working. What am I doing wrong, and how can I fix it?
(For the settings, I used the default free tier settings.)
In the question thread, the security group is defined to accept All Traffic only from all IPv6 addresses (::/0). A permission for IPv4 addresses should be added as well, e.g. All Traffic from the IPv4 range 0.0.0.0/0.
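For example, assuming the default PostgreSQL port and restricting the rule to TCP, the missing IPv4 rule could be added with the AWS CLI like this (the security group ID is a placeholder):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 5432 --cidr 0.0.0.0/0
Opening 0.0.0.0/0 is convenient for testing, but narrowing the CIDR to your own IP is the safer long-term setting.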

Connect to Postgres Cloud SQL through the Cloud SQL proxy

I created a single-zone Postgres instance on Cloud SQL, and I am trying to connect through the Cloud SQL proxy.
./cloud_sql_proxy -instances=<PROJECT_ID>:us-central1:staging=tcp:5432 -credential_file=./<SERVICE_ACCOUNT_KEY_FILE>
The proxy starts fine. But when I run the command below,
psql "host=127.0.0.1 sslmode=disable dbname=postgres user=postgres"
the proxy shows this error:
2019/11/14 15:20:10 using credential file for authentication; email=<SERVICE_ACCOUNT_EMAIL>
2019/11/14 15:20:13 Listening on 127.0.0.1:5432 for <PROJECT_ID>:us-central1:staging
2019/11/14 15:20:13 Ready for new connections
2019/11/14 15:20:34 New connection for "<PROJECT_ID>:us-central1:staging"
2019/11/14 15:22:45 couldn't connect to "<PROJECT_ID>:us-central1:staging": dial tcp 34.70.245.249:3307: connect: connection timed out
Why is this happening?
I am doing this from my local machine.
I've just followed this tutorial step by step and it worked perfectly for me.
I did not have to do any extra steps (whitelisting IPs, opening ports, etc.), and this was done in a clean project.
Are you trying to do this from your local machine with the SDK, or from Cloud Shell? Do you have any firewall restrictions in place?
Any further information about your specific setup that might be relevant would help.
Let us know.
EDIT:
Make sure outbound traffic to port 3307 is not blocked by anything.
Have a look at this official documentation, which calls out that requirement.
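A quick way to check this from the same machine is a plain TCP probe against the instance IP that appears in the proxy log, assuming netcat is installed:
nc -vz 34.70.245.249 3307
If this also times out, something between your machine and that IP (local firewall, corporate proxy, VPC egress rules) is dropping traffic to 3307, and the Cloud SQL proxy cannot work until that path is open.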
Make sure you have all the required IAM roles attached to the service account before you connect with it.
For instance, the list of Cloud SQL roles can be retrieved from gcloud with:
$ gcloud iam roles list --filter 'name~"roles/cloudsql"' --format 'table(name, description)'
NAME DESCRIPTION
roles/cloudsql.admin Full control of Cloud SQL resources.
roles/cloudsql.client Connectivity access to Cloud SQL instances.
roles/cloudsql.editor Full control of existing Cloud SQL instances excluding modifying users, SSL certificates or deleting resources.
roles/cloudsql.instanceUser Role allowing access to a Cloud SQL instance
roles/cloudsql.serviceAgent Grants Cloud SQL access to services and APIs in the user project
roles/cloudsql.viewer Read-only access to Cloud SQL resources.
If your service account is missing the appropriate roles, it won't be able to connect to the instance and IAM authentication won't work.
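If a role is missing, it can be granted to the service account with something along these lines (project ID and service account email are placeholders; roles/cloudsql.client is usually enough for the proxy):
gcloud projects add-iam-policy-binding <PROJECT_ID> --member="serviceAccount:<SERVICE_ACCOUNT_EMAIL>" --role="roles/cloudsql.client"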
The issue is probably that you are not in the VPC network, which is the case when you connect from localhost, so the Cloud SQL proxy reports that it cannot reach the remote IP.
Read this carefully if you use a private IP:
https://cloud.google.com/sql/docs/postgres/private-ip
Note that the Cloud SQL instance is in a Google-managed network, and the proxy is meant to simplify connections to the DB from within the VPC network.
In short: with a private IP, running cloud_sql_proxy from a local machine will not work, because that machine is not in the VPC network. It should work from a Compute Engine VM that is connected to the same VPC as the DB.
What I usually do as a workaround is use gcloud compute ssh from a local machine and port-forward through a small Compute Engine VM, like:
gcloud beta compute ssh --zone "europe-north1-b" "instance-1" --project "my-project" -- -L 5432:cloud_sql_server_ip:5432
Then you can connect to localhost:5432 (make sure nothing else is listening there, or change the first port number to one that is free locally).
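With the tunnel above in place, the earlier psql command works unchanged against the forwarded port; if 5432 was already taken locally and the first number in the -L flag was changed, adjust the port here to match:
psql "host=127.0.0.1 port=5432 sslmode=disable dbname=postgres user=postgres"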
What should also work is to set up a VPN connection to the VPC network and then run the Cloud SQL proxy on a node in that network.
I have to say I found this really confusing, because it gives the impression the proxy does the same kind of magic gcloud does. It's beyond me why Google engineers haven't wired that together yet; it can't be too hard.
I had this issue previously when I didn't specify the port argument to psql. Try this:
psql "host=127.0.0.1 port=5432 sslmode=disable user=postgres"
Don't specify the db, and see if that lets you get to the prompt.

Error no pg_hba.conf entry for host when using PG client, but not when connecting with psql?

I'm pointing my application from one PG cluster to another (changing the team that manages the PG cluster), and I'm getting connection errors when trying to connect with a JavaScript language binding:
no pg_hba.conf entry for host xxx.x.x.x, user "blah", database "blah", SSL off
I'm led to believe that this is a pg_hba.conf configuration error, but I can connect to the cluster with psql, from the same machine, with the same credentials.
psql: 9.5.7
PG cluster: 10.5
Client: pg-promise 9.3.3
How can this be possible? Is this still definitely an issue with how pg_hba.conf is set up, or is there something wrong with how I configured my client? This is not the first time I've used it; I've been connecting to a PG cluster v9.5.7 for the last couple of years.
The issue was that the new cluster uses SSL for the connection. The old cluster did not, so the connection worked with my existing config, but when switching to the new cluster I had to specify that SSL should be used.
https://github.com/vitaly-t/pg-promise/wiki/Connection-Syntax
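In pg-promise the change can be as small as enabling SSL in the connection, for example via the connection string (host, credentials, and database are placeholders; whether a plain ssl=true is enough or a full ssl object with certificate options is required depends on how the new cluster's certificates are set up):
postgres://blah:password@new-cluster-host:5432/blah?ssl=true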

AWS RDS "pg_hba.conf rejects connection for host"

I am working on setting up a Postgres instance on AWS through RDS. It has been placed into a VPC with a private subnet where the subnet CIDRs are: ["10.0.21.0/24", "10.0.22.0/24", "10.0.23.0/24"].
I have a public subnet and have successfully connected to postgres through a bastion node from public to private subnet and run queries through SSH port forwarding.
However, now I am trying to set up a connection from a Lambda that lives in the same private subnet of the VPC. The Lambda has access according to the security group, but I receive the following error:
OperationalError: (psycopg2.OperationalError) FATAL: PAM authentication failed for user "service_worker"
FATAL: pg_hba.conf rejects connection for host "10.0.23.73", user "service_worker", database "myDB", SSL off
I have connected successfully as service_worker through the bastion, but for some reason I can't do so through the Lambda. It seems like Postgres is rejecting this particular host, and I can't find any configuration or documentation that explains how RDS manages this information in the pg_hba.conf file.
Does anyone have any insight into telling Postgres that a connection from a host in the same subnet is OK? I'm assuming there is some security policy that I'm missing in the mix of all this.
Thanks!
It turns out that "Role-based authentication is currently not supported for Amazon RDS for PostgreSQL and Aurora PostgreSQL."
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html
And because Lambdas inherently use role-based auth, this fails.
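As a sanity check, whether IAM database authentication is even turned on for the instance can be confirmed from the AWS CLI (the instance identifier is a placeholder):
aws rds describe-db-instances --db-instance-identifier my-db-instance --query "DBInstances[0].IAMDatabaseAuthenticationEnabled"
If it returns true and the Lambda is relying on its execution role rather than a database password, the quoted limitation above is the likely culprit.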