I have created a Citus DB cluster using the Cloud Formation template here:
Multi-Machine AWS Citus Cloud Formation
I can log in to the DB from the CLI once I SSH to the host in PuTTY. This does not require a username/password, and the following runs successfully:
/usr/pgsql-9.6/bin/psql -h localhost -d postgres
select * from master_get_active_worker_nodes();
I set the inbound rules for port 5432 to 0.0.0.0/0 just to allow my remote connection to the DB.
Yet now, when I try to connect from a remote host, I don't know what username/password to enter into the PostgreSQL JDBC URL. Is there a default user/password to use?
The default username is ec2-user and there is no default password. As you can see from the pg_hba.conf file, the authentication method for localhost is defined as "trust". You can check the details of the authentication methods here. So, to connect remotely, you need to allow remote hosts to access the DB by changing the pg_hba.conf file.
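A minimal sketch of what those changes might look like (the address range and auth method are examples; restrict the range to your own network where possible). Since the default setup uses "trust" with no password, you would also need to set a password for the role you put in the JDBC URL:

# postgresql.conf - make the server listen on more than just localhost
listen_addresses = '*'

# pg_hba.conf - allow password (md5) authentication from remote hosts
# TYPE  DATABASE  USER  ADDRESS     METHOD
host    all       all   0.0.0.0/0   md5

-- run via psql on the host: set a password for the role used in the JDBC URL
ALTER USER postgres WITH PASSWORD 'choose-a-strong-password';

Reload or restart PostgreSQL afterwards so the changes take effect.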
Related
I created a single-zone PostgreSQL instance on Cloud SQL, and I am trying to connect via the Cloud SQL proxy.
/cloud_sql_proxy -instances=<PROJECT_ID>:us-central1:staging=tcp:5432 -credential_file=./<SERVICE_ACCOUNT_KEY_FILE>
This is running well. But when I run the command below,
psql "host=127.0.0.1 sslmode=disable dbname=postgres user=postgres"
the proxy shows this error:
2019/11/14 15:20:10 using credential file for authentication; email=<SERVICE_ACCOUNT_EMAIL>
2019/11/14 15:20:13 Listening on 127.0.0.1:5432 for <PROJECT_ID>:us-central1:staging
2019/11/14 15:20:13 Ready for new connections
2019/11/14 15:20:34 New connection for "<PROJECT_ID>:us-central1:staging"
2019/11/14 15:22:45 couldn't connect to "<PROJECT_ID>:us-central1:staging": dial tcp 34.70.245.249:3307: connect: connection timed out
Why is this happening?
I am doing this from my local machine.
I've just followed this tutorial step by step and it worked perfectly for me.
I did not have to do any extra steps (whitelisting IPs, opening ports, etc.), and this was done in a clean project.
Are you trying to do this from local with the SDK or from Cloud Shell? Do you have any firewall restrictions in place?
Any further information about your specific setup that might affect this will surely help.
Let us know.
EDIT:
Make sure your port 3307 is not blocked by anything.
Have a look at this official documentation specifying that.
Make sure you have all the required IAM roles attached to the service account before you connect to it:
For instance, the list of roles for cloudsql can be retrieved from gcloud with:
$ gcloud iam roles list --filter 'name~"roles/cloudsql"' --format 'table(name, description)'
NAME DESCRIPTION
roles/cloudsql.admin Full control of Cloud SQL resources.
roles/cloudsql.client Connectivity access to Cloud SQL instances.
roles/cloudsql.editor Full control of existing Cloud SQL instances excluding modifying users, SSL certificates or deleting resources.
roles/cloudsql.instanceUser Role allowing access to a Cloud SQL instance
roles/cloudsql.serviceAgent Grants Cloud SQL access to services and APIs in the user project
roles/cloudsql.viewer Read-only access to Cloud SQL resources.
If your service account lacks the appropriate roles, it won't be able to connect to the instance when IAM authentication is used.
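For instance, granting the client role to the service account used by the proxy might look like this (PROJECT_ID and SERVICE_ACCOUNT_EMAIL are placeholders for your own values):

# grant the Cloud SQL Client role to the proxy's service account
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:SERVICE_ACCOUNT_EMAIL" \
    --role="roles/cloudsql.client"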
The issue is probably that you are not in the VPC network (which is the case when you connect from localhost), so the Cloud SQL proxy reports that it cannot connect to the remote IP.
Read this carefully if you use a private IP:
https://cloud.google.com/sql/docs/postgres/private-ip
Note that the Cloud SQL instance is in a Google-managed network, and the proxy is meant to simplify connections to the DB within the VPC network.
In short: running cloud-sql-proxy from a local machine will not work, because it's not in the VPC network. It should work from a Compute Engine VM that is connected to the same VPC as the DB.
What I usually do as a workaround is use gcloud ssh from a local machine and port-forward through a small VM in Compute Engine, like:
gcloud beta compute ssh --zone "europe-north1-b" "instance-1" --project "my-project" -- -L 5432:cloud_sql_server_ip:5432
Then you can connect to localhost:5432 (make sure nothing else is listening on that port, or change the first port number to one that is free locally).
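With the tunnel up, a connection from the local machine might look like this (the user and database names are placeholders for your own):

psql "host=127.0.0.1 port=5432 user=postgres dbname=postgres"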
What should also work is to set up a VPN connection to the VPC network and then run the Cloud SQL proxy on a node in that network.
I have to say I found this really confusing, because it gives the impression the proxy does similar magic to what gcloud does. It's beyond me why some Google engineers have not wired that together yet; it can't be too hard.
I had this issue previously when I didn't specify the port argument to psql. Try this:
psql "host=127.0.0.1 port=5432 sslmode=disable user=postgres"
Don't specify the db, and see if that lets you get to the prompt.
Normally, I would create a PostgreSQL user like this:
sudo -u postgres psql
create user deploy_sample with password 'secret';
create database deploy_sample_production owner deploy_sample;
I tried to create the user through an Ansible script with this task:
- name: Create database user
  become: yes
  become_user: postgres
  postgresql_user:
    user: user123
    password: password123
    encrypted: yes
    state: present
This does create a user, but I can't log in using those credentials.
I tried to log in with the command psql --username=user123 --password and I get a peer authentication failure error.
Your Ansible configuration looks correct, and it may have nothing to do with the problem.
From the message we can see that it is trying to log in with the peer authentication method. This means that the OS user is being used to connect to the database instead of the provided password (see: https://www.postgresql.org/docs/10/auth-methods.html).
Two things you should look at:
How is your auth method configured?
It is in the file {data dir}/pg_hba.conf.
It is possible that all local connections are configured to use peer (notice that there are two types of local connections: one is called local, a connection through a Unix socket; the other is host 127.0.0.1/32, which uses the network to reach localhost).
I would change the second one to use the md5 method; this way you will be able to connect with user/password over the network, but still use peer for local connections, which is useful for the system user postgres.
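A sketch of what those two pg_hba.conf lines could look like after the change (your existing file may have additional entries):

# TYPE  DATABASE  USER  ADDRESS        METHOD
local   all       all                  peer    # Unix-socket connections keep peer auth
host    all       all   127.0.0.1/32   md5     # TCP connections to localhost use password auth

Reload PostgreSQL afterwards (for example with pg_ctl reload, or systemctl reload postgresql on Debian/Ubuntu) so the new rules are picked up.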
Connect with the application user over the network
psql --username=user123 -> the psql program will try to use a local (Unix socket) connection by default, meaning that peer authentication is used. You probably don't have a user user123 on the system, so this will fail!
psql -h localhost --username=user123 -d <database> -> This way you will connect to the local machine over the network, which allows you to authenticate with a password.
I've set up a PostgreSQL instance on Google Cloud SQL and have set it up now to only allow SSL connections. I'm able to connect from my workstation via psql and from some apps like R Studio.
However, I'm trying to connect via Cloud Shell and don't see any option to connect with SSL. There are options to manage certificates, and I've created another client key and downloaded its files in my Cloud Shell account; I just don't see any option for using them to make a connection. Without them, it just tells me there isn't an HBA entry for a "No SSL" connection.
Here is what I see (some things obfuscated):
don#cloudshell:~ (xxx)$ gcloud sql connect foo --user=postgres
Whitelisting your IP for incoming connection for 5 minutes...done.
Connecting to database with SQL user [postgres].Password for user postgres:
psql: FATAL: connection requires a valid client certificate
FATAL: pg_hba.conf rejects connection for host "a.b.c.d", user "postgres", database "postgres", SSL off
As per the Cloud SQL GCP docs:
Cloud Shell connections do not support SSL. Connections from Cloud Shell fail if the instance is configured to accept only SSL connections.
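If you still need to connect from Cloud Shell, the practical consequence is that the instance must accept non-SSL connections. Relaxing the requirement is a judgment call and the exact flag may vary by gcloud version, but it might look like:

gcloud sql instances patch foo --no-require-ssl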
Ubuntu 16.04 LTS
I have followed the guides, which all say the same thing: to enable remote connections to a PostgreSQL server, update the postgresql.conf file, update the pg_hba.conf file, and make sure the port (5432) is open and the firewall is not blocking it.
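The changes those guides describe typically look something like this (a sketch; the address range you allow should be restricted to your own network rather than 0.0.0.0/0 where possible):

# postgresql.conf - listen on all interfaces, not only localhost
listen_addresses = '*'

# pg_hba.conf - allow password-authenticated remote connections
host    all    all    0.0.0.0/0    md5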
When I attempt to connect to my server from the remote machine using the following command, I receive no response at all (not even, for example, 'Connection refused...'). It hangs as if the firewall has a DROP policy, but I checked and the host's firewall is set to ACCEPT all. Here is the command:
psql -h 45.67.82.123 -U postgres -p 5432 -d mydatabase
I have googled extensively and can't find anyone else whose psql request sits with no response from the host server.
Edit: I should mention I have been able to connect locally on the host machine. I should also mention that the data directory on the host machine is in a non-default location; I have my cluster on a mounted drive, in case this could affect the remote connection.
Solution:
This is my first AWS instance and I didn't know AWS applies its own firewall rules on the platform, so I was highly confused by the fact that all my policies were ACCEPT on my server. It turns out you are behind the AWS firewall and you have to go onto the platform to add/change security groups etc. In the past, when I've used DigitalOcean droplets or Linodes, the firewall policy on the VPS was all I needed to change. AWS threw me another curveball there.
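For anyone else hitting this, the equivalent change via the AWS CLI might look like the following (the security group ID and source CIDR are placeholders; the same rule can be added in the EC2 console under Security Groups):

# allow inbound PostgreSQL traffic from a specific address
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 5432 \
    --cidr 203.0.113.10/32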
I'm unable to connect to a new PostgreSQL in AWS RDS.
I have a Heroku app and I would like to use Amazon RDS for my database instead of Heroku. For that I've been following this guide: https://www.reinteractive.net/posts/128-heroku-app-backed-by-an-aws-rds-postgres-database
I've made a backup from my current Heroku DB and want to load it on the new database.
My security group for the database allows all inbound connections on port 5432 (0.0.0.0/0), and I've made a new VPC to have my DB set as Publicly Accessible (DNS hostnames and DNS resolution enabled). I created the database on PostgreSQL version 9.4.9.
However when I do:
psql -f latest.sql --host=xxx.xxx.us-west-2.rds.amazonaws.com --port=5432 --username=awsuser --password --dbname=mydatabase
from my computer, I only get a connection timeout error:
psql: could not connect to server: Connection timed out
Is the server running on host "xxx.xxx.us-west-2.rds.amazonaws.com" (1.2.3.4) and accepting
TCP/IP connections on port 5432?
The server is indeed running. In this case latest.sql is the backup I made. After this I also edited the database security group rules to accept all connections (0.0.0.0/0). From what I've read this should not be necessary because I already have the VPC security group, but the result is the same.
Is there any way to trace what's going on and see why my connection is getting blocked?