MySQL Workbench won't work with Google Cloud SQL Proxy

I'm using Google Cloud SQL Second Generation and installed cloud-sql-proxy on my local machine.
On my local machine I simply connect to 127.0.0.1:3306, and this has been working fine in Node.js, PHP, and the mysql command-line client.
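For example, this kind of connection works (a minimal Python sketch using mysql-connector-python; the user and password are placeholders, and the Node.js and PHP clients point at the same host/port):

import mysql.connector

# Plain TCP connection to the proxy listening on the local port.
conn = mysql.connector.connect(
    host='127.0.0.1',
    port=3306,
    user='root',
    password='MY_PASSWORD',
)
print(conn.is_connected())  # True when the proxy forwards successfully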
On a Google App Engine Managed VM (flexible environment) I'm using unix_socket or socketPath '/cloudsql/MY_PROJECT_ID:us-central1:SQL_INSTANCE', and this has been working fine too, in both PHP and Node.js.
What doesn't work is MySQL Workbench; I can't figure out how to get it to connect. Does it use another protocol, or is cloud-sql-proxy for the command line only?
Here is how I start cloud-sql-proxy; this works:
./cloud_sql_proxy \
-instances=MY_PROJECT:us-central1:MY_SQL_INSTANCE=tcp:3306 \
-credential_file='/Users/ME/SomeFolder/MY_SERVICE_ACC_KEY.json'
After that I'd use MySQL Workbench to try to connect to 127.0.0.1:3306, but I always get an error:
SSL connection error: socket layer receive error
Local PHP, Node.js, and the mysql client work, though.
Any help would be appreciated.

OK, I got it working and believe it might be useful for others too:
I couldn't get it working over the TCP connection, but I figured out how to use the socket method, running without FUSE:
sudo ./cloud_sql_proxy \
-dir=/cloudsql \
-instances=MY_PROJECT:MY_SQL_REGION:MY_SQL_INSTANCE \
-credential_file='/Users/ME/some_folder/MY_SERVICE_ACC_KEY.json'
A couple of things here:
The folder /cloudsql has to already exist, e.g. sudo mkdir /cloudsql
Don't mistype that folder name, really don't
Don't specify the TCP port after -instances, or the proxy will use a TCP connection instead
sudo is necessary
In MySQL Workbench:
Select Database > Manage Connections...
Under Connection > Connection Method choose Standard (TCP/IP)
Under Parameters set Host: localhost, Port: 3306 (although I don't think it matters)
Go to Advanced > Others: enter socket=/cloudsql/MY_PROJECT:MY_SQL_REGION:MY_SQL_INSTANCE
Click Test Connection; it should ask for your username and password. Success.
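To sanity-check the socket outside of Workbench, the same connection can be made from Python (a sketch assuming mysql-connector-python; user and password are placeholders):

import mysql.connector

# Connect through the proxy's unix socket rather than TCP.
conn = mysql.connector.connect(
    unix_socket='/cloudsql/MY_PROJECT:MY_SQL_REGION:MY_SQL_INSTANCE',
    user='root',
    password='MY_PASSWORD',
)
print(conn.is_connected())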

This error indicates that MySQL Workbench is requesting an SSL connection, which is not supported via the proxy. The proxy always uses SSL between the local machine and the instance so there's no need to enable SSL at the MySQL protocol level.
Can you try turning off SSL in MySQL Workbench connection settings?

Related

Connect to remote PostgreSQL cluster via TLS/SSL from behind HTTP proxy

I've got a Flexible PostgreSQL Cluster hosted on MS Azure Database for PostgreSQL. This server requires TLS/SSL access, as shown in the docs, e.g.:
psql "sslmode=verify-full sslrootcert=c:\ssl\DigiCertGlobalRootCA.crt.pem host=mydemoserver.postgres.database.azure.com dbname=postgres user=myadmin"
or without "verify-full" it works as well:
psql "sslmode=require host=mydemoserver.postgres.database.azure.com dbname=postgres user=myadmin"
Everything works fine when I connect directly, not using a proxy server. But at the office we have a corporate HTTP proxy / firewall configured like http://192.168.X.X:3128. When the proxy is activated, the connection to Azure fails.
I was trying to use ssh to set up a proxy tunnel, like so:
ssh -p 5432 <Azure DB username>@<Azure DB host: ....database.azure.com> -L 3128:192.168.X.X:5432
But that didn't work (connection timed out error). I also tried to configure the connection via PuTTY using some examples from the web, but to no avail.
The question is: is it at all possible to connect to a remote cluster (both Azure and Google Cloud enforce TLS/SSL access) from behind an HTTP proxy?

Golang and accessing postgres via a client in an Ubuntu VPS?

I'm trying to follow the DigitalOcean tutorial on configuring pgAdmin 4 in server mode, but damn, it is long, and I have to first configure an Apache server, Python, and virtualenv (via two other tutorials).
I don't want to install so many dependencies on my server just to access Postgres via pgAdmin 4.
How do you guys do it?
I'm running a Go web server via HTTPS, listening on port 443 and redirecting 80 to 443.
Seeing your other answer I would like to offer a more secure alternative.
What's wrong with the current approach?
Your PostgreSQL instance is accessible from the internet. Generally you should limit access to only where it is required. Especially if you are not using SSL to connect to PostgreSQL, an open port like this is a target for traffic interception and brute-force attacks.
Alternative
Seeing that you are using JetBrains IDEs, you only need one more step to access your data: setting up an SSH tunnel.
This encrypts all your connections between the development host and the server with SSH, without exposing PostgreSQL to the outside world.
In the connection settings for your database in the JetBrains IDE, select the SSH/SSL tab and check "Use SSH tunnel". Enter your server's details and the SSH user plus password/SSH key (use SSH keys for better security) into the relevant input fields.
Undo the settings changes you made to open the firewall and to make PostgreSQL listen on all addresses.
Connections to your database are now possible over encrypted tunnels without exposing your database to any unwanted attacks.
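The same tunnel can also be built outside the IDE; here is a minimal sketch using Python's sshtunnel and psycopg2 packages (host, user, key path, and database credentials are all placeholders):

from sshtunnel import SSHTunnelForwarder
import psycopg2

# Forward an ephemeral local port through SSH to PostgreSQL on the server's loopback.
with SSHTunnelForwarder(
    ('my-vps.example.com', 22),
    ssh_username='deploy',
    ssh_pkey='/home/me/.ssh/id_rsa',
    remote_bind_address=('127.0.0.1', 5432),
) as tunnel:
    conn = psycopg2.connect(
        host='127.0.0.1',
        port=tunnel.local_bind_port,  # local end of the tunnel
        dbname='my_database_name',
        user='my_user',
        password='MY_PASSWORD',
    )
    print(conn.server_version)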
So this is what I did to connect from my laptop to my Ubuntu VPS via WebStorm (I suppose any IntelliJ IDE works; it should also work with other IDEs):
0. Log in to your server.
1. Locate postgresql.conf, usually under /etc/postgresql/10/main.
2. sudo nano postgresql.conf
3. Locate and change the line at connections:
listen_addresses = '*'
4. Then in the same dir edit: sudo nano pg_hba.conf and add:
#TYPE DATABASE USER ADDRESS METHOD
host all all 0.0.0.0/0 md5
(md5 means I connect with a user and their password)
5. Don't forget to allow it through ufw (the firewall):
sudo ufw allow 5432/tcp
6. Open WebStorm > Database (tab) > click + to add a PostgreSQL source (fill in the relevant info: user name, password, database name, host and port, etc.):
jdbc:postgresql://example.com:5432/my_database_name
7. Press on schemas and synchronize, OR press:
Source > Settings > Schemas tab > [check] All Databases > refresh
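To verify the open port outside the IDE, a quick psycopg2 check mirrors the JDBC URL above (a sketch; user and password are placeholders):

import psycopg2

# Direct TCP connection to the port opened in the steps above.
conn = psycopg2.connect(
    host='example.com',
    port=5432,
    dbname='my_database_name',
    user='my_user',
    password='MY_PASSWORD',
)
print(conn.server_version)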

How to connect datalab with Google Cloud SQL?

Trying to connect from a Datalab notebook to a PostgreSQL database hosted on Google Cloud SQL. We tried both the direct IP and the instance connection ways, but both give us an exception.
direct connection URI:
"{engine}://{user}:{password}#{host}:{port}/{database}"
using gcloud sql connect
"{engine}://{user}:{password}#/{database}?host=/cloudsql/{instance_connection_name}"
both give us this exception:
OperationalError: (psycopg2.OperationalError) could not connect to server: Connection timed out
Is the server running on host "***.***.***.***" and accepting TCP/IP connections on port ****?
Any idea if it needs a Cloud SQL Proxy, as in the Colab proxy connection? And if it is needed, how to do it with Datalab libraries?
I finally got it.
Assuming that the Datalab VM is already authenticated with gcloud, I use cloud_sql_proxy to connect without the auth Python commands that appear in the Colab proxy connection, and fix the error that still appears by creating the missing directory. So I got this:
!wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
!mkdir -p /cloudsql
!chmod +x cloud_sql_proxy
!./cloud_sql_proxy --instances=project-id:europe-west1:posty --dir /cloudsql
As with the Colab solution, we need to leave this notebook running in a separate window to keep the proxy alive. From other notebooks on the same machine we finally obtain access to the database.
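With the proxy up, the socket-style URI from the question works from any other notebook; a sketch with SQLAlchemy and psycopg2 (user, password, and database name are placeholders):

from sqlalchemy import create_engine, text

# Same URI shape as above, pointing at the proxy's socket directory.
engine = create_engine(
    'postgresql+psycopg2://my_user:MY_PASSWORD@/my_database'
    '?host=/cloudsql/project-id:europe-west1:posty'
)
with engine.connect() as conn:
    print(conn.execute(text('SELECT 1')).scalar())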
Note: probably a better solution would be to edit the Docker image of the Datalab machines to include this behaviour, as noted here.
It may be that your VM's IP isn't whitelisted on the database.
You can view the list and add new IPs in the Google Cloud console under SQL > your_database > Authorization.
Check this link for details: https://cloud.google.com/sql/docs/mysql/connect-external-app#appaccessIP

Can't connect with cloud_sql_proxy over TCP

I created a Cloud SQL instance and am trying to connect from my laptop running OS X El Capitan.
I followed the instructions for creating a proxy to run it. I am able to connect if I use a socket file, as follows:
sudo ./cloud_sql_proxy -dir=/cloudsql -instances=my-project:us-central1:mysql-instance -credential_file=mycredentials.json
mysql -u root -p -S /cloudsql/my-project:us-central1:mysql-instance
Now I'd like to connect to the Cloud SQL instance from a local Python application. So I tried creating the proxy over TCP using =tcp:3306 and testing with the mysql client, as follows:
sudo ./cloud_sql_proxy -dir=/cloudsql -instances=my-project:us-central1:mysql-instance=tcp:3306 -credential_file=/web/visi/api/resources/keys/visi-staging-ec040759d57a.json
mysql -u root --host 127.0.0.1 --password
But I'm getting this error:
2016/04/06 23:09:58 Got a connection for "my-project:us-central1:mysql-instance"
2016/04/06 23:09:59 to "my-project:us-central1:mysql-instance" via 111.111.111.111:3307: read tcp 127.0.0.1:3306->127.0.0.1:49518: use of closed network connection
ERROR 2026 (HY000): SSL connection error: error:00000005:lib(0):func(0):DH lib
Try specifying --skip-ssl as an option to your mysql client.
We have a fix for this in progress, and it should be rolled out in the near future.
The reason this happens is that we reject connections over the proxy that request MySQL SSL. The connection between the proxy and Cloud SQL is already done over SSL so there's no need to use SSL at the MySQL level.
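For the local Python application, the same workaround applies at the client level; a sketch with mysql-connector-python that turns off MySQL-level SSL (user and password are placeholders):

import mysql.connector

# SSL is already handled between the proxy and Cloud SQL,
# so disable it at the MySQL protocol level.
conn = mysql.connector.connect(
    host='127.0.0.1',
    port=3306,
    user='root',
    password='MY_PASSWORD',
    ssl_disabled=True,
)
print(conn.is_connected())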

MongoDB client - SSH connection from localhost PHP

I have been using RockMongo as my client for MongoDB on localhost for testing.
For production I DON'T want a client online, as this might reduce security.
Is there a client which will allow me to connect over SSH? Kind of like MySQL Workbench?
or
Can RockMongo stay on my local computer while I connect to the EC2 instance which has MongoDB, for production viewing?
or
Is there a better alternative to all of this?
My setup is a standard LAMP stack; I'm willing to make any changes necessary.
MongoHub has the option to connect over SSH, but the app kind of sucks. It crashes a lot.
A more generic approach would be to just create your own ssh tunnel to your production server, and then connect over that through whatever client you want. The client won't care as long as it can make the connection.
On OSX/Linux, creating an ssh tunnel might look like this:
ssh -L 8080:127.0.0.1:27017 -f -C -q -N username@domain.com
This would open local port 8080, which forwards the traffic to the localhost interface at the MongoDB default port 27017 on the remote side. You would point your client at 127.0.0.1:8080 as if MongoDB were running there locally.
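Any driver can then use the tunnel; for example, a quick check with Python's pymongo (a sketch; the tunnel from the previous command is assumed to be up):

from pymongo import MongoClient

# The tunnel makes the remote mongod look like a local server on port 8080.
client = MongoClient('mongodb://127.0.0.1:8080')
print(client.server_info()['version'])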
Check some of these out - http://www.mongodb.org/display/DOCS/Admin+UIs
One workaround would be to put that file in a separate folder and add a .htaccess file that restricts access to only your IP address. Any requests not from your IP address would be denied access...