psql server SSH tunneling authentication failure - PostgreSQL

I'm trying to connect to a PostgreSQL database on a remote server.
So I SSH into the remote server with port forwarding, as below:
ssh -L 7777:psqlServerHost.com:5432 me@remoteServerHost.com
Unfortunately, with this method I'm unable to authenticate when I run the command below in a separate terminal on my local machine:
psql -h localhost -p 7777 -U user database
I would get
FATAL: password authentication failed for user
However, if I SSH directly into me@remoteServerHost.com, I am able to connect to the database with the same credentials using the command below:
psql -h psqlServerHost.com -U user database
I imagine the culprit is probably a configuration file somewhere that I've missed, but I can't seem to find any similar questions online that are helpful.
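One way to narrow down which configuration rule is involved is to watch the PostgreSQL server log while reproducing the failure; on an authentication error the server logs which pg_hba.conf line the connection matched. The log path below is a typical Debian-style location and the matched line is only an illustration:
$ tail -f /var/log/postgresql/postgresql-[VERSION]-main.log
FATAL:  password authentication failed for user "user"
DETAIL:  Connection matched pg_hba.conf line 92: "host all all 0.0.0.0/0 md5"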

Related

PgAdmin 4 password-less connection to local database via unix socket

I have initialized a pg database as such:
user_name@my_machine$ sudo -u postgres createuser -s user_name
user_name@my_machine$ createdb -T template0 db_name
I can now connect to it with user_name@my_machine$ psql db_name
and everything works well with the CLI tooling.
The relevant auth line of /etc/postgresql/13/main/pg_hba.conf is:
local all all peer
Now I'd like to connect to it via PgAdmin 4, and I can't find a way to tell the interface that I want to connect via unix socket and don't need a password.
The sanest way I can think of is to leave the password field empty, but the connection is still rejected with a fe_sendauth: no password supplied.
I know I could configure a password for my user and give it, but I'd like to know if I can make PgAdmin behave properly.
Short answer: put /var/run/postgresql in host name/address.
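With that setting, the connection dialog looks something like this (port, database, and username as appropriate for your setup; password left empty):
Host name/address: /var/run/postgresql
Port: 5432
Maintenance database: db_name
Username: user_name
libpq treats a host value that starts with a slash as a Unix socket directory, so the connection goes over the socket and the local peer rule applies, with no password needed.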

pgAdmin4: remote access to PostgreSQL server

I can access the remote PostgreSQL server in the lab from both the terminal and pgAdmin4 when connected to the same private network.
From Terminal:
PC:~$ psql -h 193.13x.xx.xx -U myusername -W dbname
# then enter the password at the prompt
Using pgAmin4:
Host name/address: 193.13x.xx.xx
Port: 5432
Maintenance database: dbname
Username: myusername
Password: *******
However, when I switch to another network (eduroam), I can only connect to the remote server via the terminal, not from pgAdmin4, so it is not very easy to work away from the lab.
Is there a workaround to enable connecting via pgAdmin4?
Yes. The generic answer to that is: use an SSH tunnel. This is very common when you cannot access a database directly from outside a certain network, which appears to be your case based on the error message in the comments.
If you need specific advice for pgAdmin4, consider searching for "pgadmin4 ssh tunnel"; there are many tutorials available online.
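For example, assuming there is some lab gateway machine you can SSH into from eduroam (gateway.lab.example.com is a placeholder), the tunnel would look like:
$ ssh -L 5433:193.13x.xx.xx:5432 myusername@gateway.lab.example.com
Then, in pgAdmin4, set Host name/address to localhost and Port to 5433, keeping the same maintenance database, username, and password.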

Connecting GCP Compute engine to GCP Cloud SQL with PostgreSQL

I'm trying to get my GCP Compute Engine instance, Ubuntu 16.04, connected to a GCP Cloud SQL PostgreSQL database.
I've followed all the instructions in the documentation, but when I enter the command to connect to the database:
psql -h [CLOUD_SQL_PUBLIC_IP_ADDR] -U postgres
The result is:
psql: FATAL: Peer authentication failed for user "postgres"
I've done the authentication on both the Cloud SQL side and the Compute Engine side, so I'm not sure why this is going wrong.
The database I'm trying to connect to is in the same project, and the command
gcloud sql instances list
shows the database in the listings. However, the command
sudo -u postgres psql my-db
returns
psql: FATAL: database "my-db" does not exist
The expected result is that a psql connection opens, but instead I get a psql: FATAL: Peer authentication failed for user "postgres".
I've followed the instructions from the documentation you posted and I was able to connect successfully from my Compute Engine instance (Ubuntu 16.04) using a public IP address.
The steps I've followed are documented in "Connecting using a public IP address":
1- Add a static IPv4 address to the Compute Engine instance. To do this, navigate to Cloud Console > VPC Network > External IP addresses and click the "Reserve static address" button.
2- Authorize the static IP address of the Compute Engine instance as a network that can connect to the Cloud SQL instance.
3- Connect via SSH button to your Compute Engine instance.
4- Install the psql client:
$ sudo apt-get update
$ sudo apt-get install postgresql-client
5- Find the CLOUD_SQL_PUBLIC_IP_ADDR:
$ gcloud sql instances list
6- And connect to the Cloud SQL instance with the psql client, making sure both the user and the database exist:
$ psql -h [CLOUD_SQL_PUBLIC_IP_ADDR] -U [USER] -d [DATABASE]
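If the user's password or the database has not been set up yet, both can be handled from the gcloud CLI first ([INSTANCE_NAME] is a placeholder for your Cloud SQL instance name):
$ gcloud sql users set-password postgres --instance=[INSTANCE_NAME] --password=[PASSWORD]
$ gcloud sql databases create [DATABASE] --instance=[INSTANCE_NAME]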
Also, the command below worked fine for me:
$ psql [USER] -h [CLOUD_SQL_PUBLIC_IP_ADDR] -d [DATABASE]
Then you will be asked for the user's password and voilà.
Could you please try following the instructions above to verify if it works fine for you?
Did you configure an encrypted connection before using SSL?

How to connect to an alternative local PostgreSQL cluster for the first time?

In Ubuntu 16.04 I created a second Postgres database cluster, called cmg, with a local user as the admin user:
pg_createcluster -u "local_username" -g "local_usergroup" -d /path/to/data/dir 9.5 cmg
The cluster was started with:
pg_ctlcluster 9.5 cmg start
which ran successfully (pg_lsclusters shows both clusters are online).
The problem is I cannot connect to the cluster using psql as is normally done.
I tried using:
psql -h 127.0.0.1 -w -p5433 -U local_username
which fails with:
psql: fe_sendauth: no password supplied
Is there any way to connect to the specific cluster?
Use psql -h your_socket_dir -p 5433 -U postgres to connect locally (this uses peer auth by default, so there is a good chance you can log in without a password).
Once logged in, set up a password (create the user if needed) and use it when connecting over TCP:
psql -h 127.0.0.1 -p5433 -U local_username
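Setting up the password from inside that first session might look like this ('secret' is a placeholder):
CREATE USER local_username WITH PASSWORD 'secret';  -- if the role does not exist yet
ALTER USER local_username WITH PASSWORD 'secret';   -- if it already exists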
In your connect string you had -w, which means "never ask for a password" (https://www.postgresql.org/docs/current/static/app-psql.html); by default that only works for local connections.
I think the default pg_hba.conf when you start up a new cluster expects you to authenticate with peer connections, so you need to switch to your local user and connect over the Unix socket (peer auth does not apply to TCP connections such as -h 127.0.0.1):
[root@server ~]# su - local_username
Password:
[local_username@server ~]$ psql -p 5433
You can check your pg_hba.conf file in /path/to/data/dir/pg_hba.conf to see how it expects you to authenticate.
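For reference, a freshly created cluster on Debian/Ubuntu typically ships with entries along these lines (your file may differ):
local   all   all                  peer
host    all   all   127.0.0.1/32   md5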
Alternatively, if you cannot get access as your 'local_username', then su to the postgres user in the instructions above instead and it should work.

Connecting to database through ssh tunnel

Our production databases are only accessible from the production application servers. I am able to log in to the production app servers and psql to the db, but I would like to set up an SSH tunnel to allow me to access the production db from my work box.
Ideally, it would be a single command that I could run from my workbox that would set up the tunnel/proxy on the production app server.
Here is what I have come up with, but it doesn't work:
user@workbox $ ssh -fNT -L 55555:db.projectX.company.com:5432 app.projectX.company.com
user@workbox $ psql -h app.projectX.company.com -p 55555
psql: could not connect to server: No route to host
Is the server running on host "app.projectX.company.com" (10.1.1.55) and accepting
TCP/IP connections on port 55555?
The host you pass to psql is incorrect. When connecting through the tunnel, the hostname is your local host, since that's where the forwarded port is exposed:
ssh -fNT -L 55555:db.projectX.company.com:5432 app.projectX.company.com
psql -h localhost -p 55555
BTW, PgAdmin-III provides SSH tunnel automation. On the other hand, it's a big GUI app without psql's handy \commands.
It's pretty trivial to write an sshpsql bash script that fires up the SSH tunnel, stores the pid of the ssh process, launches psql, lets you do what you want, and on exit kills the tunnel. You'll also want to trap "kill $sshpid" EXIT so you kill the tunnel on unclean exits.
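A minimal sketch of that wrapper, using the hostnames from the question (the sleep is a crude stand-in for properly waiting until the tunnel is up):
#!/usr/bin/env bash
# Open the tunnel in the background and remember its pid.
ssh -NT -L 55555:db.projectX.company.com:5432 app.projectX.company.com &
sshpid=$!
# Kill the tunnel when the script exits, cleanly or otherwise.
trap 'kill "$sshpid"' EXIT
sleep 2  # give the tunnel a moment to come up
psql -h localhost -p 55555 "$@"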