I'm unable to get IAM user authentication working with Google Cloud's PostgreSQL offering (Cloud SQL).
Here's my thinking process:
I've set the respective flag on my PostgreSQL instance on Google Cloud:
$ gcloud sql instances describe [MY_DB_INSTANCE] --format json | jq '.settings.databaseFlags'
[
{
"name": "cloudsql.iam_authentication",
"value": "on"
}
]
$
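For reference, a flag like this can also be set from the command line; a sketch using gcloud sql instances patch (note that --database-flags replaces any flags previously set on the instance):
$ gcloud sql instances patch [MY_DB_INSTANCE] --database-flags=cloudsql.iam_authentication=on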
I have an IAM user, which I've created in the database instance:
gcloud sql users create [MY_EMAIL] --instance=[MY_DB_INSTANCE] --type=CLOUD_IAM_USER
$ gcloud sql users list --instance [MY_DB_INSTANCE] | grep CLOUD_IAM_USER
[MY_EMAIL] CLOUD_IAM_USER
$
I get an authentication error when I try to connect to the DB using either of the commands below. In both cases I use the output of gcloud auth print-access-token as my password:
This method adds my IP to the allowlist:
$ gcloud sql connect [MY_DB_INSTANCE] --database=[DB_NAME] --user=[MY_EMAIL]
Allowlisting your IP for incoming connection for 5 minutes...done.
Connecting to database with SQL user [MY_EMAIL]. Password:
psql: error: FATAL: Cloud SQL IAM user authentication failed for user "[MY_EMAIL]"
FATAL: pg_hba.conf rejects connection for host "100.200.300.400", user "[MY_EMAIL]", database "[MY_EMAIL]", SSL off
$
This may or may not be related to the failure, but the error message is confusing here. The last line states ...database "[MY_EMAIL]", while clearly I am not attempting to connect to a database with the same name as my email; I am connecting to a database with a very specific name, namely [DB_NAME].
Upd. As of right now the "Known issues" page lists an acknowledgement of this:
The following only works with the default user ('postgres'): gcloud sql connect --user
This method uses the Cloud SQL Proxy:
$ gcloud beta sql connect [MY_DB_INSTANCE] --database=[DB_NAME] --user=[MY_EMAIL]
Starting Cloud SQL Proxy: [/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/bin/cloud_sql_proxy -instances my-project-id:europe-west1:[MY_DB_INSTANCE]=tcp:9470 -credential_file /Users/eugene/.config/gcloud/legacy_credentials/[MY_EMAIL]/adc.json]]
2021/01/26 16:35:03 Rlimits for file descriptors set to {&{8500 9223372036854775807}}
2021/01/26 16:35:03 using credential file for authentication; path="/Users/eugene/.config/gcloud/legacy_credentials/[MY_EMAIL]/adc.json"
2021/01/26 16:35:04 Listening on 127.0.0.1:9470 for my-project-id:europe-west1:[MY_DB_INSTANCE]
2021/01/26 16:35:04 Ready for new connections
Connecting to database with SQL user [MY_EMAIL].Password:
psql: error: FATAL: Cloud SQL IAM user authentication failed for user "[MY_EMAIL]"
$
If I check the access logs from Cloud Console, for both login attempts I see the same error message:
2021-01-26 14:20:11.988 UTC [594848]: [2-1] db=[DB_NAME],user=[MY_EMAIL] DETAIL: Request is missing required authentication credential. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.
At this point I am quite lost.
Perhaps my expectations are not aligned with how connecting to a DB should work. I expected that:
there would be no need to enter a password (e.g. the value of gcloud auth print-access-token) at all in the first place, as gcloud would generate and use a password for me automagically,
in case entering the password manually (by copy-pasting the output of the gcloud auth print-access-token command from the paste buffer) is necessary, it would work (while it doesn't).
I was hoping that by relying on the IAM mechanism of authenticating to the DB, I would be able to avoid having to create a user and set a password for it using psql (or, similarly, using gcloud sql users create ... --type=BUILT_IN).
What is it that I am possibly missing?
Upd. I am able to successfully connect if, instead of using the gcloud sql command, I run the proxy and use psql directly:
$ cloud_sql_proxy -instances my-project-id:europe-west1:[MY_DB_INSTANCE]=tcp:9470
2021/01/26 17:29:56 Rlimits for file descriptors set to {&{8500 9223372036854775807}}
2021/01/26 17:29:56 Listening on 127.0.0.1:9470 for my-project-id:europe-west1:[MY_DB_INSTANCE]
2021/01/26 17:29:56 Ready for new connections
$ env PGPASSWORD=(gcloud auth print-access-token) psql --host 127.0.0.1 --port 9470 --username=[MY_EMAIL] --dbname=[MY_DB]
psql (13.1, server 13.0)
Type "help" for help.
[MY_DB]=>
I was having the same issue. The key piece of information that is not well documented is that the Cloud SQL Proxy tool will automatically request fresh tokens for you behind the scenes. So you don't need to pass the token in manually; you just need to point whatever client you want to use at the Cloud SQL Proxy service.
To activate this mode, you need to specify the -enable_iam_login command line option, like so:
./cloud_sql_proxy -instances=[project]:[zone]:[server]=tcp:5432 -enable_iam_login
It will obtain access tokens for whatever user is currently authenticated via gcloud auth login.
Be sure to disable sslmode when connecting to the Cloud SQL Proxy service. It handles its own encryption, so if the postgres client also tries to encrypt, the connection will time out. Since the Cloud SQL Proxy service handles authentication, you only need to specify the user in your postgres client.
psql "host=127.0.0.1 dbname=postgres user=[the IAM email account] sslmode=disable"
It shouldn't ask for a password. If it does, just leave it blank.
I found the Cloud SQL Proxy readme more useful than the official documentation: https://github.com/GoogleCloudPlatform/cloudsql-proxy/blob/main/README.md
After playing around with it, I was able to figure out the proper command for connecting. Here it is:
env PGPASSWORD=(gcloud auth print-access-token) gcloud beta sql connect [MY_DB_INSTANCE] --user=[MY_EMAIL] --database=[MY_DB]
The key thing here, it seems, was specifying the PGPASSWORD variable for the process and then, when prompted for the password, just hitting enter 🤯
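Note that the (gcloud auth print-access-token) part above appears to be fish-shell command substitution; assuming a bash or zsh shell, the equivalent would be:
$ PGPASSWORD="$(gcloud auth print-access-token)" gcloud beta sql connect [MY_DB_INSTANCE] --user=[MY_EMAIL] --database=[MY_DB]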
What I also tried, and what appears to be a bug in gcloud sql, is connecting via a non-beta gcloud sql connect:
$ env PGPASSWORD=(gcloud auth print-access-token) gcloud sql connect [MY_DB_INSTANCE] --user=[MY_EMAIL] --database=[MY_DB]
Allowlisting your IP for incoming connection for 5 minutes...done.
Connecting to database with SQL user [MY_EMAIL].Password:
psql: error: FATAL: database "[MY_EMAIL]" does not exist
$
Note how it says database "[MY_EMAIL]" does not exist, while the database is specified as a command line flag --database=[MY_DB]. Seems like a bug to me.
Related
I want to connect to AWS RDS PostgreSQL in dev from my own computer.
I followed all the steps on how to do it from a bunch of articles:
https://aws.amazon.com/premiumsupport/knowledge-center/rds-postgresql-connect-using-iam/
https://aws.amazon.com/blogs/database/using-iam-authentication-to-connect-with-pgadmin-amazon-aurora-postgresql-or-amazon-rds-for-postgresql/.
The problem is that if I create the database in the AWS console interface, I am able to log in ONLY once.
psql -h database.xxxxxxxx.us-west-2.rds.amazonaws.com -U user_name -d database
Any other time I try to log in, with the same or any other command, I get
psql: FATAL: PAM authentication failed for user "user_name"
The first and only time I log in, I create a user:
CREATE USER user_name WITH LOGIN;
GRANT rds_iam TO user_name;
On all other attempts, including the other steps such as logging in with the IAM token etc., I get an error:
psql: FATAL: PAM authentication failed for user "user_name"
If I delete the database from the AWS console interface and then create a brand new one, I am able to log in only ONCE and then get the error no matter what I do.
The nc command reports "Connection succeeded" every time I run it:
nc -zv DB-instance-endpoint port
The commands I am using:
export RDSHOST="database.xxxxxxxx.us-west-2.rds.amazonaws.com"
export PGPASSWORD="$(aws rds generate-db-auth-token --hostname $RDSHOST --port 5432 --region us-west-2 --username user_name)"
I get the error if I use the PGPASSWORD in the pgAdmin window.
Also, when I try to connect from the terminal, either my own or by SSHing into EC2, I use this command:
psql "host=$RDSHOST port=5432 sslmode=verify-full sslrootcert=./rds-combined-ca-bundle.pem dbname=database user=user_name"
and I still get the same error
psql: FATAL: PAM authentication failed for user "user_name"
or
If I use another command, without the .pem certificate
psql --host=database.xxxxxxxx.us-west-2.rds.amazonaws.com --port=5432 --username=user_name --password --dbname=database
Then it asks me for a password and then I get this error:
psql: error: FATAL: PAM authentication failed for user "user_name"
FATAL: pg_hba.conf rejects connection for host "222.22.22.22", user "user_name", database "database", SSL off
"222.22.22.22" is My Ip, I changed it of course.
I attached all the required RDS access policies to my user and am still getting this error.
I am just not sure what to do at this point, as I went through every single article and cannot find a solution.
I had a similar problem, and after some playing around with the psql utility I found the reason for these errors. You must export your temporary database password/token in the shell of the machine/service etc. where the connection will be initiated from.
So, if the psql connection is initiated from a bastion, the command below should also be run on that same bastion server.
export PGPASSWORD="$(aws rds generate-db-auth-token --hostname $RDSHOST --port 5432 --region us-west-2 --username user_name)"
or generate it elsewhere and export its value as
export PGPASSWORD="temporary_token_generated_for_user_name"
With this exported $PGPASSWORD variable, psql should connect straight away, without prompting for any additional passwords.
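For example, all in one session on the machine you will connect from (these are the same commands from the question, just run together on the same host):
export RDSHOST="database.xxxxxxxx.us-west-2.rds.amazonaws.com"
export PGPASSWORD="$(aws rds generate-db-auth-token --hostname $RDSHOST --port 5432 --region us-west-2 --username user_name)"
psql "host=$RDSHOST port=5432 sslmode=verify-full sslrootcert=./rds-combined-ca-bundle.pem dbname=database user=user_name"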
I finally found the solution. So if anyone has the same issue and is going nuts over it, here it is:
If everything is set up as I described above and the only error you get is the PAM one, then:
your config file is not properly set up: it is missing the profile for the user you are trying to connect as, the region, and the keys.
~/.aws/config
[profile PROFILE_NAME]
output=json
region=us-west-1
aws_access_key_id=foo
aws_secret_access_key=bar
Here is the link to the question on how to set it up:
AWS : The config profile (MyName) could not be found
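Once the profile exists, make sure the token is actually generated with it, for example by passing --profile (or exporting AWS_PROFILE) when requesting it; PROFILE_NAME here is whatever you named the profile in ~/.aws/config:
export PGPASSWORD="$(aws rds generate-db-auth-token --hostname $RDSHOST --port 5432 --region us-west-2 --username user_name --profile PROFILE_NAME)"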
I'm trying to get my GCP Compute Engine instance, Ubuntu 16.04, connected to a GCP Cloud SQL PostgreSQL database.
I've followed all the instructions in the documentation, but when I enter the command to connect to the database:
psql -h [CLOUD_SQL_PUBLIC_IP_ADDR] -U postgres
The result is:
psql: FATAL: Peer authentication failed for user "postgres"
I've done the authentication on both the Cloud SQL side and the Compute Engine side, so I'm not sure why this is going wrong.
The database I'm trying to connect to is in the same project, and the command
gcloud sql instances list
shows the database in the listings. However, the command
sudo -u postgres psql my-db
returns
psql: FATAL: database "my-db" does not exist
The expected result is that a psql connection opens, but instead I get a psql: FATAL: Peer authentication failed for user "postgres".
I've followed the instructions from the documentation you posted and I was able to connect successfully from my Compute Engine instance (Ubuntu 16.04) using a public IP address.
The steps I've followed are documented in "Connecting using a public IP address":
1- Added a static IPv4 address to the Compute Engine instance. To do this, navigate to the Cloud console > VPC Network > External IP addresses and click the "Reserve static address" button.
2- Authorize the static IP address of the Compute Engine instance as a network that can connect to the Cloud SQL instance (a gcloud sketch of this step follows these instructions).
3- Connect to your Compute Engine instance via the SSH button.
4- Install the psql client:
$ sudo apt-get update
$ sudo apt-get install postgresql-client
5- Find the CLOUD_SQL_PUBLIC_IP_ADDR:
$ gcloud sql instances list
6- And connect to the Cloud SQL instance with the psql client, making sure both the user and the database exist:
$ psql -h [CLOUD_SQL_PUBLIC_IP_ADDR] -U [USER] -d [DATABASE]
Also, the command below worked fine for me:
$ psql [USER] -h [CLOUD_SQL_PUBLIC_IP_ADDR] -d [DATABASE]
Then you will be asked for the user's password and voilà.
Could you please try following the instructions above to verify if it works fine for you?
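For step 2, the same authorization can be done from the command line; a sketch, assuming a single static IP and that no other authorized networks need to be preserved (the flag replaces the existing list):
$ gcloud sql instances patch [CLOUD_SQL_INSTANCE_NAME] --authorized-networks=[STATIC_IP]/32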
Did you configure an encrypted connection before using SSL?
Normally, I would create a PostgreSQL user like this:
sudo -u postgres psql
create user deploy_sample with password 'secret';
create database deploy_sample_production owner deploy_sample;
I tried to create the user through an Ansible script with this task:
- name: Create database user
  become: yes
  become_user: postgres
  postgresql_user:
    user: user123
    password: password123
    encrypted: yes
    state: present
This does create a user, but I can't log in using the creds.
I tried to log in with the command psql --username=user123 --password. I get a peer authentication failure error.
The Ansible configuration looks correct and may have nothing to do with the problem.
From the message we can see that it is trying to log in with the peer authentication method. This means that the OS user is being used to connect to the database instead of the provided password (see: https://www.postgresql.org/docs/10/auth-methods.html).
Two things you should look at:
What does your auth method configuration look like?
It is in the file: {data dir}/pg_hba.conf
It is possible that all local connections are configured to use peer (notice that there are two types of local connections: one is called local = connection through a Unix socket, the other is host 127.0.0.1/32 = using the network to reach localhost).
I would change the second one to use the md5 method; this way you will be able to connect with user/pass over the network, but still use peer for local connections - useful for the system user postgres.
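For illustration, the relevant pg_hba.conf entries might end up looking like this (exact defaults vary by distribution; reload PostgreSQL after editing, e.g. with pg_ctl reload or systemctl reload postgresql):
# TYPE  DATABASE  USER  ADDRESS       METHOD
local   all       all                 peer    # unix-socket connections keep peer auth (for the postgres system user)
host    all       all   127.0.0.1/32  md5     # TCP connections to localhost authenticate with a password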
Connect with the application user using network
psql --username=user123 -> psql will try to use a local (socket) connection by default, meaning that peer authentication is used. You probably don't have a user123 user on the system, so this will fail!
psql -h localhost --username=user123 -d <database> -> This way you will connect to the local machine over the network, which allows authenticating with a password.
I've set up a PostgreSQL instance on Google Cloud SQL and have set it up now to only allow SSL connections. I'm able to connect from my workstation via psql and from some apps like R Studio.
However, I'm trying to connect via Cloud Shell and don't see any options to connect with SSL. There are options to manage certificates, and I've created another client key and downloaded its files into my Cloud Shell account; I just don't see options for using them to make a connection. Without them, it just tells me there isn't an HBA entry for a "No SSL" connection.
Here is what I see (some things obfuscated):
don#cloudshell:~ (xxx)$ gcloud sql connect foo --user=postgres
Whitelisting your IP for incoming connection for 5 minutes...done.
Connecting to database with SQL user [postgres].Password for user postgres:
psql: FATAL: connection requires a valid client certificate
FATAL: pg_hba.conf rejects connection for host "a.b.c.d", user "postgres", database "postgres", SSL off
As per Cloud SQL GCP docs:
Cloud Shell connections do not support SSL. Connections from Cloud Shell fail if the instance is configured to accept only SSL connections.
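If you want to try the downloaded certificate files with psql directly rather than through gcloud sql connect, the standard libpq SSL parameters accept them; a sketch, with file names assumed to match what the Cloud SQL console exports, and still subject to the Cloud Shell limitation quoted above:
psql "sslmode=verify-ca sslrootcert=server-ca.pem sslcert=client-cert.pem sslkey=client-key.pem hostaddr=[CLOUD_SQL_PUBLIC_IP] user=postgres dbname=postgres"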
I've been trying to configure PostgreSQL with PAM on a Red Hat server so that I can get remote access to the server via pgAdmin and use local (server) authentication with PAM.
I have edited the pg_hba.conf file and changed the appropriate line:
host postgres all 0.0.0.0/0 md5
and added this one:
host pam_testing all 0.0.0.0/0 pam pamservice=postgresql95
Moreover, I created a database user with the same username I use to log in with PuTTY (no password, simply CREATE USER xxx).
When I try to log in remotely with pgAdmin to the postgres database (using md5) with my database user, everything works smoothly.
But when I try to connect (also remotely, with pgAdmin) to the pam_testing database with my server username (the one I log in with via SSH using PuTTY) and give the password, I get the following error:
Error connecting to the server: FATAL: PAM authentication failed for user XXX
BUT! When I log in locally to pam_testing while connected via PuTTY, it works! My system user gets logged in and authenticated without any problems. And it only happens for users which I added to the database using CREATE USER.
I'm guessing it must be some kind of authentication issue (with the server maybe? It belongs to the company and I don't know what other authentication methods it uses), but I'm not sure. Any ideas?
System: Red Hat 6.8,
PostgreSQL: 9.5
Thanks in advance!
Check the system logs for unix_chkpwd (e.g. journalctl | grep unix_chkpwd, or /var/log/secure), and if you see lines like these
unix_chkpwd[13081]: check pass; user unknown
unix_chkpwd[13081]: password check failed for user (<username>)
then you've encountered the same problem I did.
To solve it, you need to give the postgres user read permission on the /etc/shadow file. You can do this via an ACL: setfacl -m g:postgres:r /etc/shadow, or by creating some group, giving it this permission and then adding postgres to it. Then do systemctl restart postgresql.service.
The underlying mechanics of authenticating with PAM are described in this post. The key point is the following: unix_chkpwd runs under the UID of the process that wants to authenticate someone, so if that's not root (and /etc/shadow is used, which I believe is the common case), it can't do its job.
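For completeness, the group-based alternative mentioned above might look roughly like this (the group name shadow-readers is just an example):
groupadd shadow-readers
chgrp shadow-readers /etc/shadow
chmod g+r /etc/shadow
usermod -aG shadow-readers postgres
systemctl restart postgresql.service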