Postgres SSL Termination at AWS ELB - postgresql

Is it possible to terminate SSL at an AWS ELB to front a Postgres Server so as to make the following command succeed?
PGSSLMODE=verify-full \
PGSSLROOTCERT=/path/to/go-daddy-root-ca.pem \
PGCONNECT_TIMEOUT=5 \
psql -h my-postgres.example.com -p 5432 -U test_username test-database -c 'select 1';
I'm able to use the TCP protocol at the load balancer and TCP at the instance protocol if I set my SSL certificate and key on the Postgres service with the following arguments:
-c ssl=on
-c ssl_cert_file=/var/lib/postgresql/my-postgres.example.com.crt
-c ssl_key_file=/var/lib/postgresql/my-postgres.example.com.key
However, I would like to handle SSL at the load balancer level if possible, so as not to pass certs and keys into the instance running the Postgres service. I've tried the following ELB configurations:
LB Proto   Instance Proto   Postgres SSL   Result
TCP        TCP              on             success
SSL        TCP              on             failure
TCP        SSL              on             failure
SSL        SSL              on             failure
SSL        TCP              off            failure
The failure message is the following:
psql: error: could not connect to server: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
Maybe an NLB would work, or perhaps terminating SSL end-to-end at the Postgres service level is the only way for verify-full to succeed?
If it helps for an alternative approach, postgres is running in an EKS cluster.
Thank you for any info you can provide!
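For context on why a generic SSL listener at the load balancer breaks psql: PostgreSQL does not speak TLS-on-connect. The client opens a plain TCP connection, sends a cleartext 8-byte SSLRequest message, and only starts the TLS handshake after the server answers "S". An ELB SSL listener expects a TLS ClientHello as the very first bytes, so it drops the connection, which matches the "server closed the connection unexpectedly" error. A minimal sketch of that wire format (from the PostgreSQL frontend/backend protocol docs; `server_supports_ssl` is a made-up helper name, not part of psql):

```python
import socket
import struct

# SSLRequest message: int32 length (8) followed by the magic request code
# 80877103 (0x04D2162F). libpq sends this in cleartext on a fresh TCP
# connection; the server replies b"S" (TLS supported) or b"N" (not).
def ssl_request_packet() -> bytes:
    return struct.pack("!ii", 8, 80877103)

def server_supports_ssl(host: str, port: int = 5432, timeout: float = 5.0) -> bool:
    """Probe a Postgres endpoint the way psql does before its TLS handshake."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(ssl_request_packet())
        reply = sock.recv(1)  # an SSL-mode ELB listener closes instead of replying
    return reply == b"S"

print(ssl_request_packet().hex())  # -> 0000000804d2162f
```

This is also why verify-full can only succeed against whatever actually terminates TLS: the certificate the client checks is presented by the endpoint that answers the SSLRequest.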

We are using an NLB in front of an RDS Postgres database. Unfortunately, it just "proxies" the requests without SSL termination - the NLB is configured on TCP port 5432. Have you looked at the relatively new AWS RDS Proxy? According to the docs it should be able to do the SSL termination; what I'm not sure about is whether you can use it without RDS. The other way to go is of course to set up such a proxy on your own, with all the advantages and disadvantages of that solution.
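Since the original poster mentioned EKS: a TCP-passthrough NLB can be provisioned straight from a Service manifest, leaving TLS termination on the Postgres pod. A sketch, assuming the in-tree AWS cloud provider annotation (the Service name and selector are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-postgres            # hypothetical name
  annotations:
    # Ask the AWS cloud provider for an NLB instead of a classic ELB.
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: my-postgres           # hypothetical selector
  ports:
    - name: postgres
      port: 5432
      targetPort: 5432
      protocol: TCP            # plain passthrough; Postgres still terminates TLS
```

With this setup verify-full works because the pod presents the certificate, but it also means the cert and key still have to live with the Postgres service, which is the trade-off the question is trying to avoid.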

Related

Haproxy Postgresql SSL configuration

I need an SSL connection between PostgreSQL (a Patroni cluster) and HAProxy, but I didn't find any related docs. Is it possible to configure HAProxy with SSL without using pgbouncer or pgpool?
I can connect directly to the database server with the SSL configuration,
but I can't connect through HAProxy:
-bash-4.2$ psql -d "host=x.x.x.x port=7010 dbname=postgres user=test"
psql: error: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
-bash-4.2$
There is no log record on the PostgreSQL side.
Thanks.
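One common way to make this work is to keep HAProxy out of the TLS conversation entirely: run the listener in `mode tcp`, so the cleartext SSLRequest and the subsequent TLS handshake pass straight through to Postgres. A minimal sketch, assuming a Patroni pair on port 5432 (addresses, server names, and timeouts are placeholders, not a tested production config):

```
# haproxy.cfg - TCP passthrough; Postgres terminates TLS itself
listen postgres
    bind *:7010
    mode tcp                  # no HTTP parsing, no TLS termination
    option tcplog
    timeout client 30m        # database sessions can be long-lived
    timeout server 30m
    server pg1 10.0.0.11:5432 check
    server pg2 10.0.0.12:5432 check backup
```

The absence of Postgres log entries in the question suggests the connection never reached the database, which is consistent with HAProxy running in `mode http` (the default in many example configs) and failing to parse the Postgres wire protocol.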

Connect to remote PostgreSQL cluster via TLS/SSL from behind HTTP proxy

I've got a Flexible PostgreSQL Cluster hosted on MS Azure Database for PostgreSQL. This server requires TLS/SSL access as shown in the docs, e.g.:
psql "sslmode=verify-full sslrootcert=c:\ssl\DigiCertGlobalRootCA.crt.pem host=mydemoserver.postgres.database.azure.com dbname=postgres user=myadmin"
or without "verify-full" it works as well:
psql "sslmode=require host=mydemoserver.postgres.database.azure.com dbname=postgres user=myadmin"
Everything works fine when I connect directly, without a proxy server. But at the office we have a corporate HTTP proxy / firewall configured like http://192.168.X.X:3128, and when the proxy is active, the connection to Azure fails.
I was trying to use ssh to set up a proxy tunnel, like so:
ssh -p 5432 <Azure DB username>@<Azure DB host: ....database.azure.com> -L 3128:192.168.X.X:5432
But that didn't work (connection timed out). I also tried to configure the connection via PuTTY using some examples from the web, but to no avail.
Question is: is it at all possible to connect to a remote cluster (both Azure and Google Cloud enforce SSL access) from behind an HTTP proxy?
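The ssh attempt above has the tunnel backwards: the managed database host does not run an SSH server at all, so you cannot ssh to it. An HTTP proxy can only carry raw TCP via the CONNECT method, so one approach is a local forwarder that speaks CONNECT to the proxy. A sketch using socat (assumes the corporate proxy actually allows CONNECT to port 5432, which many lock down to 443; hostnames and the local port 15432 are placeholders):

```shell
# Local forwarder: connections to 127.0.0.1:15432 are tunneled through the
# corporate HTTP proxy (CONNECT method) to the Azure Postgres endpoint.
socat TCP-LISTEN:15432,reuseaddr,fork \
  PROXY:192.168.X.X:mydemoserver.postgres.database.azure.com:5432,proxyport=3128 &

# hostaddr= makes libpq open the TCP connection to the local forwarder while
# verify-full still validates the certificate against the real host name.
psql "sslmode=verify-full sslrootcert=DigiCertGlobalRootCA.crt.pem \
      host=mydemoserver.postgres.database.azure.com hostaddr=127.0.0.1 \
      port=15432 dbname=postgres user=myadmin"
```

The host/hostaddr split is the key detail: pointing `host=127.0.0.1` at the tunnel would break verify-full's hostname check, whereas hostaddr only overrides where the TCP connection goes.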

Can't connect remotely to postgres, no response from psql request

Ubuntu 16.04 LTS
I have followed the guides which all say the same thing; to enable remote connection to a postgres server, update the postgresql.conf file, update the pg_hba.conf file and make sure the port (5432) is open and firewall is not blocking.
When I attempt to connect to my server from the remote machine using the following command, I receive no response at all (not even an error such as 'Connection refused'). It hangs as if the firewall had a DROP policy, but I checked and the host's firewall is ACCEPT all. Here is the command:
psql -h 45.67.82.123 -U postgres -p 5432 -d mydatabase
I have googled extensively and can't find anyone else whose psql request just sits there with no response from the host server.
Edit: I should mention I have been connecting locally on the host machine. I should also mention that the data directory on the host machine is in a non-default location. I have my cluster on a mounted drive, in case this could affect the remote connection.
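For reference, the two settings those guides refer to look roughly like this (the database name and the deliberately wide 0.0.0.0/0 range are illustrative; narrow the range in practice). With a non-default data directory, make sure you edit the postgresql.conf the running cluster actually loads - `SHOW config_file;` in psql tells you which one that is:

```
# postgresql.conf - listen on all interfaces, not just localhost
listen_addresses = '*'
port = 5432

# pg_hba.conf - allow password auth from remote hosts
# TYPE  DATABASE    USER      ADDRESS       METHOD
host    mydatabase  postgres  0.0.0.0/0     md5
```

Note that neither setting produces a silent hang: a wrong listen_addresses gives "Connection refused" and a missing pg_hba.conf entry gives an explicit rejection message, so a request that hangs with no response points at a packet-dropping firewall somewhere in between.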
Solution:
It is my first AWS instance and I didn't know they have their own firewall rules on the platform, so I was highly confused by the fact that all my policies were ACCEPT on my server. It turns out you are behind the AWS firewall, and you have to go into the platform to add/change security groups etc. In the past, when I've used DigitalOcean droplets or Linodes, the firewall policy on the VPS was all I needed to change. AWS threw me another curveball there.

Can't connect with cloud_sql_proxy over TCP

I created a Cloud SQL instance and am trying to connect from my laptop running OSX El Capitan.
I followed the instructions for running the proxy. I am able to connect if I use a socket file as follows:
sudo ./cloud_sql_proxy -dir=/cloudsql -instances=my-project:us-central1:mysql-instance -credential_file=mycredentials.json
mysql -u root -p -S /cloudsql/my-project:us-central1:mysql-instance
Now I'd like to connect to the Cloud SQL instance from a local python application. So I tried creating the proxy over tcp using =tcp:3306 and testing using the mysql client as follows:
sudo ./cloud_sql_proxy -dir=/cloudsql -instances=my-project:us-central1:mysql-instance=tcp:3306 -credential_file=/web/visi/api/resources/keys/visi-staging-ec040759d57a.json
mysql -u root --host 127.0.0.1 --password
But I'm getting this error:
2016/04/06 23:09:58 Got a connection for "my-project:us-central1:mysql-instance"
2016/04/06 23:09:59 to "my-project:us-central1:mysql-instance" via 111.111.111.111:3307: read tcp 127.0.0.1:3306->127.0.0.1:49518: use of closed network connection
ERROR 2026 (HY000): SSL connection error: error:00000005:lib(0):func(0):DH lib
Try specifying --skip-ssl as an option to your mysql client.
We have a fix for this in progress, and it should be rolled out in the near future.
The reason this happens is that we reject connections over the proxy that request MySQL SSL. The connection between the proxy and Cloud SQL is already done over SSL so there's no need to use SSL at the MySQL level.

mongodb client - ssh connection from localhost php

I have been using rockmongo as my client for mongodb on localhost for testing.
For production I DON'T want a client online, as this might reduce security.
Is there a client that will allow me to connect over SSH, kind of like MySQL Workbench?
or
Can rockmongo stay on my local computer while I connect to the EC2 instance that runs mongodb, for production viewing?
or
Is there a better alternative to all of this?
My setup is a standard LAMP stack; I'm willing to make any changes necessary.
MongoHub has the option to connect over ssh, but the app kind of sucks. It crashes a lot.
A more generic approach would be to just create your own ssh tunnel to your production server, and then connect over that through whatever client you want. The client won't care as long as it can make the connection.
On OSX/Linux, creating an ssh tunnel might look like this:
ssh -L 8080:127.0.0.1:27017 -f -C -q -N username@domain.com
This would open a local port 8080 which will forward the traffic to the localhost interface at the mongodb default port 27017 on the remote side. You would point your client at 127.0.0.1:8080 as if mongodb were running there locally.
Check some of these out - http://www.mongodb.org/display/DOCS/Admin+UIs
One workaround would be to put that file in a separate folder and add a .htaccess file that restricts access to only your IP address. Any request not from your IP address would be denied access.