Can't connect to Cloud SQL via SQL Proxy on Dataproc - google-cloud-sql

I am trying to access Cloud SQL from Dataproc via the Cloud SQL Proxy (without using Hive).
After much tinkering based on the instructions here:
https://github.com/GoogleCloudPlatform/dataproc-initialization-actions/tree/master/cloud-sql-proxy
I got to the point where at least the cluster gets created with no errors and the proxy seems to be installed. However, my Java Spark jobs can't connect to the database, failing with this error:
Exception in thread "main" java.sql.SQLException: Access denied for user 'root'@'localhost' (using password: NO)
I deliberately created an instance with NO user password, but it doesn't work for instances with a password either.
What I find strange is that when I access the same database from my local computer, also using a locally running Cloud SQL Proxy, everything works well. But when I try to force a similar error by deliberately submitting the wrong password, I get a similar but DIFFERENT error, like this:
Exception in thread "main" java.sql.SQLException: Access denied for user 'root'@'cloudsqlproxy~217.138.38.242' (using password: YES)
So, in the Dataproc error it says root@localhost, whereas with my local proxy the error says root@cloudsqlproxy~IP address. Why is that? It's exactly the same code running in both places. It seems like it's trying to connect to something local within the Dataproc master machine?
What further confirms this is that I don't see this error logged on the server side when the attempt fails on Dataproc, but the error IS logged when I force the failure from my local machine. So Dataproc's proxy doesn't seem to be pointing at the SQL server?
I created the cluster with the following flags:
--scopes sql-admin \
--initialization-actions gs://bucket_name/cloud-sql-proxy.sh \
--metadata 'enable-cloud-sql-hive-metastore=false' \
--metadata 'additional-cloud-sql-instances=project_id:europe-west2:sql_instance_id' \
And the connection string that I specify inside the Spark code is like this:
jdbc:mysql://127.0.0.1:3306/database_name
Thanks for your help!
**** Update:
Based on the below suggestion, I modified my connection string to be as follows:
"jdbc:mysql://google/DATABASE_NAME?cloudSqlInstance=INSTANCE_NAME&socketFactory=com.google.cloud.sql.mysql.SocketFactory&useSSL=false&user=root"
However, in this case I get the following error:
Exception in thread "main" java.sql.SQLNonTransientConnectionException: Cannot connect to MySQL server on google:3,306.
Make sure that there is a MySQL server running on the machine/port you are trying to connect to and that the machine this software is running on is able to connect to this host/port (i.e. not firewalled). Also make sure that the server has not been started with the --skip-networking flag.
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:108)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:95)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:87)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:61)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:71)
at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:458)
at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:230)
at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:226)
How/where is it supposed to get the driver for 'google'? Also, note that it seems to malform the default port 3306 and shows it as 3,306. (I tried supplying the port explicitly, but that didn't help...)

I followed the instructions in the tutorial you shared, and both the Cloud SQL instance and the Dataproc cluster were created. The validation process was also carried out:
$ gcloud dataproc jobs submit pyspark --cluster githubtest pyspark_metastore_test.py
Job [63d2e1ef8c9f45ae818c135c775dcf93] submitted.
Waiting for job output...
18/08/22 17:21:51 INFO org.spark_project.jetty.util.log: Logging initialized @3074ms
...
Successfully found table table_mdhw in Cloud SQL Hive metastore
18/08/22 17:22:53 INFO org.spark_project.jetty.server.AbstractConnector: Stopped Spark@5061d2ce{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
Job [63d2e1ef8c9f45ae818c135c775dcf93] finished successfully.
I only got the same error as yours when I set a different password for root. Could you update the root password and then try the following command again from the master?
mysql -u root -h 127.0.0.1 -p
In my environment, the command above connects successfully. If that works, please check this link for further steps to connect your Java application. Authentication and the mysql-connector-java connector are required as additional steps.
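For what it's worth, once mysql-connector-java is on the job's classpath, a minimal sketch of the Java side could look like the following (DATABASE_NAME and ROOT_PASSWORD are placeholders):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CloudSqlProxyCheck {
    public static void main(String[] args) throws Exception {
        // The proxy on the Dataproc master listens on 127.0.0.1:3306, so the
        // JDBC URL targets localhost rather than the Cloud SQL instance's IP.
        String url = "jdbc:mysql://127.0.0.1:3306/DATABASE_NAME?useSSL=false";
        try (Connection conn = DriverManager.getConnection(url, "root", "ROOT_PASSWORD");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            rs.next();
            System.out.println("Connected, SELECT 1 returned " + rs.getInt(1));
        }
    }
}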
Hope it helps!

I ran into the same issues, with the exact same symptoms (Access Denied on localhost instead of cloudsqlproxy~*, and google:3,306).
SSH-ing in and looking at /var/log/cloud-sql-proxy/cloud-sql-proxy.log, I saw that cloud-sql-proxy wasn't actually starting; port 3306 was apparently already in use for some reason. I added =tcp:3307 to the end of the instance connection name in additional-cloud-sql-instances, and I was up and running.
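For reference, a sketch of the modified metadata flag and the matching JDBC URL, reusing the placeholder project/region/instance names from the question:
--metadata 'additional-cloud-sql-instances=project_id:europe-west2:sql_instance_id=tcp:3307'
jdbc:mysql://127.0.0.1:3307/database_name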
I never managed to get the SocketFactory URIs working. If changing the port doesn't work, others elsewhere have suggested using VPC.


Spring Data JPA app connection to Google Cloud Run Postgres

Google has an example to connect Cloud SQL MySQL from a Spring JPA/Boot app (commit 9ecdc1111e3da388a750ace41a125287d9620534 is used). The example uses Spring Data and works fine with MySQL. But it does not work when the profile is changed to postgres (after starting the right Postgres database in the same account and with the same steps as in #2)
spring.profiles.active=postgres
and replacing
<artifactId>spring-cloud-gcp-starter-sql-mysql</artifactId>
with
<artifactId>spring-cloud-gcp-starter-sql-postgresql</artifactId>
and
replacing src/main/resources/application-mysql.properties
with
src/main/resources/application-postgres.properties
but the application still fails with
Error creating bean with name 'entityManagerFactory' defined in class
path resource
[org/springframework/boot/autoconfigure/orm/jpa/HibernateJpaConfiguration.class]:
Invocation of init method failed; nested exception is
org.hibernate.service.spi.ServiceException: Unable to create requested
service [org.hibernate.engine.jdbc.env.spi.JdbcEnvironment]
I could not find any sample.
application.properties is appended to have
spring.profiles.active=postgres
spring.cloud.gcp.sql.instance-connection-name= xyzprj:us-central1:postgres-instance
spring.datasource.username=xyzuser
spring.datasource.password=password
application-postgres.properties is replaced as follows
spring.datasource.username=xyzuser
spring.datasource.password=passord
spring.sql.init.mode=always
spring.cloud.gcp.sql.database-name=petclinic
spring.cloud.gcp.sql.instance-connection-name=xyzprj:us-central1:postgres-instance
Later, both of these properties files were also changed so that
spring.datasource.username=root
and
spring.datasource.password=root
but the same issue persists.
The sample is tried on Cloud Shell within Google Cloud:
gcloud auth application-default login
You are running on a Google Compute Engine virtual machine. The
service credentials associated with this virtual machine will
automatically be used by Application Default Credentials, so it is not
necessary to use this command.
If you decide to proceed anyway, your user credentials may be visible
to others with access to this virtual machine. Are you sure you want
to authenticate with your personal account?
Do you want to continue (Y/n)? n
ERROR: (gcloud.auth.application-default.login) Aborted by user.
I tried to reproduce the issue on my side, but I was able to deploy the application successfully.
Here are the steps I followed:
Step 1: Created the PostgreSQL instance using the command below
gcloud sql instances create postgres-instance \
--database-version=POSTGRES_13 \
--cpu=1 \
--memory=4GB \
--region=us-central \
--root-password=root
Step 2: Created the database using
gcloud sql databases create petclinic --instance postgres-instance
Step 3: Connected to the PostgreSQL instance to verify whether the connection is established
gcloud sql connect postgres-instance
Step 4: Replaced the following, as you did
In application.properties
spring.profiles.active=postgres
and replacing
<artifactId>spring-cloud-gcp-starter-sql-mysql</artifactId>
with
<artifactId>spring-cloud-gcp-starter-sql-postgresql</artifactId>
and
replacing src/main/resources/application-mysql.properties
with
src/main/resources/application-postgres.properties
Step 5: In addition to the above changes
In application.properties replaced
spring.cloud.gcp.sql.instance-connection-name= POSTGRESQL_CONNECTION_NAME
In src/main/resources/application-postgres.properties added
spring.datasource.username=USERNAME
spring.datasource.password=PASSWORD
In the pom.xml file, added the following dependency
<dependency>
<groupId>com.google.cloud.sql</groupId>
<artifactId>postgres-socket-factory</artifactId>
<version>1.1.0</version>
</dependency>
In the build.gradle file, add
dependencies {
compile 'com.google.cloud.sql:postgres-socket-factory:1.1.0'
}
Note: run gcloud auth application-default login to obtain default credentials for communicating with the Cloud SQL API.
For more information, check this document.

cannot connect to mongo cluster - user is not allowed to do action [getLog] on [admin.]

I have created a user and added my IP to the whitelist.
When trying to connect to a cluster through the mongo shell, I am required to enter the following line: mongo "mongodb+srv://cluster0.****.mongodb.net/" --username --password
I have filled in the credentials for username and password and replaced dbname with my database name (I tried using a non-existing one as well, in case that was the problem). It connects to the shell, but then crashes with the following error:
Error while trying to show server startup warnings: user is not allowed to do action [getLog] on [admin.]
MongoDB Enterprise atlas-7cwf8s-shard-0:PRIMARY>
I tried googling and YouTubing the issue, but cannot find a match on how to fix it.
Many thanks
That message says that the shell is unable to show you server startup warnings. It's expected in the Atlas environment.
Supposing that's your own cluster, then:
Check the user in Atlas > Database Access
Check the MongoDB Roles header in the table.
If it's not atlas Admin, you can't issue this command:
db.adminCommand({getLog:"startupWarnings"})
Or any other admin command, which is issued or tested automatically on connection; hence the error.
Edit MongoDB Roles to the highest privileges (atlas Admin).
But you can still work anyway.
If you're accessing someone else's cluster, then there isn't much to do.

pgadmin error while connecting to docker postgres instance: "The server encountered an internal error and was unable to complete your request."

I'm running a postgres image from a docker container. While trying to access it from the pgadmin 4 GUI client, I'm getting the error: "The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application."
After connecting to the docker instance, the pgadmin GUI displays the default postgres database, but there is a cross on the other databases which I had created from within the container.
After refreshing the connection multiple times, I get a message along the lines of "connection made to database", but it doesn't actually load in the GUI.
How do I connect to those databases?
(I'm running docker in Windows 10 powershell with admin privileges)
Made changes as per the suggestion here.
Restarted pgAdmin and started the container in non-admin mode.
Able to retrieve and commit data.

Getting org.postgresql.util.PSQLException: FATAL: database "null" does not exist in SpringCloudDataFlow

I'm getting "org.postgresql.util.PSQLException: FATAL: database "null" does not exist" when connecting PostgreSQL to my Spring Cloud Data Flow server app in a PCF environment.
I have successfully performed the following steps:
Deployed the SCDF (Spring Cloud Data Flow) server in PCF (version 1.7.3)
Created a PostgreSQL service instance with the 'Standalone' plan. Note: I don't have any other database service available in the PCF marketplace.
Connected to that instance (using the host (IP) and autogenerated credentials) with a third-party tool and created the database using the script 'CREATE DATABASE scdf'
Bound the 'PostgreSQL service instance' to the 'SCDF server app'.
Set the following environment variables:
spring_datasource_driver_class_name = org.postgresql.Driver
spring_datasource_username [PostgreSQL_Instance_Autogenerated_Username]
spring_datasource_password [PostgreSQL_Instance_Autogenerated_Password]
spring_datasource_url "jdbc:postgresql://10.254.48.231:5432/scdf"
After setting the environment variables, when I restart the SCDF server app, it throws an exception and crashes the app:
org.postgresql.util.PSQLException: FATAL: database "null" does not exist
Can anyone help, please?
A good first step is to make sure the PostgreSQL service-instance is functional on PCF.
Perhaps you could connect to the host/user/pass from outside of PCF via a DB client tool or from other applications. If this is successful standalone, then there's something wrong in supplying the credentials to the SCDF-server.
It is unclear how you're supplying the database properties to SCDF. You may have to wrap those "datasource" properties as well-defined JSON and provide them as the value of the SPRING_APPLICATION_JSON property attached to the SCDF server, as sketched below. If you continue to see issues, please update the description with your manifest.yml and other information about the environment.
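For example, a sketch of such a wrapper using the cf CLI (the app name scdf-server is a placeholder, the username and password are the autogenerated credentials from the service instance, and the URL is the one from the description):
cf set-env scdf-server SPRING_APPLICATION_JSON '{"spring.datasource.url":"jdbc:postgresql://10.254.48.231:5432/scdf","spring.datasource.username":"<username>","spring.datasource.password":"<password>","spring.datasource.driver-class-name":"org.postgresql.Driver"}'
cf restage scdf-server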

Google Cloud SQL: SQLSTATE[HY000] [2013] Lost connection to MySQL server at 'reading initial communication packet', system error: 0

I'm desperate since my Google Cloud SQL instance went down. I could connect to it yesterday without a problem, but since this morning I'm unable to connect to it in any way; it produces the following error: The database server returned this error: SQLSTATE[HY000] [2013] Lost connection to MySQL server at 'reading initial communication packet', system error: 0
This is what I did to try to fix this:
restart instance
added authorized ip-addresses in CIDR notation
reset root password
restored backup
pinged the IP address and got a response
All these actions completed, but I'm still unable to connect through:
PHP
MySQL workbench
Ubuntu MySQL command line
All without luck. What can I do to repair my Cloud SQL instance? Is anyone else having this problem?
I'm from the Cloud SQL team. We are looking into this issue, it should be resolved soon. See https://groups.google.com/forum/#!topic/google-cloud-sql-announce/SwomB2zuRDo. Updates will be posted on that thread (and if there's anything particularly important I'll edit this post).
The problem seems to only affect connections from outside Google Cloud. Clients connecting from App Engine and Compute Engine should work fine.
Our company has the same problem.
We are unable to connect through both MySQL Workbench and the MySQL command line.
Our Google App Engine application has no problems connecting, since it's not using an external IP.
Hi there. I encountered the same problem. You need to find out your public IP address; for that, type "my public ip" into Google. Now click on the Cloud SQL instance that you created, click on the ACCESS CONTROL tab, and then click on the Authorization tab under that. Under Authorized networks, give the network any name you want and copy your public IP address into it. Now save the changes and try to run the command from the console again. It should work fine.
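If you prefer the command line to the console, the same authorization can be applied with gcloud (the instance name and IP below are placeholders; note that this flag replaces any previously authorized networks):
gcloud sql instances patch INSTANCE_NAME --authorized-networks=203.0.113.25/32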