Spring Data JPA app connection to Google Cloud Run Postgres - spring-data-jpa

Google has an example of connecting a Spring JPA/Boot app to Cloud SQL for MySQL (commit 9ecdc1111e3da388a750ace41a125287d9620534 is used). The example uses Spring Data and works fine with MySQL, but it does not work when the profile is changed to postgres (after starting the corresponding Postgres database in the same account and with the same steps as in #2) by setting
spring.profiles.active=postgres
and replacing
<artifactId>spring-cloud-gcp-starter-sql-mysql</artifactId>
with
<artifactId>spring-cloud-gcp-starter-sql-postgresql</artifactId>
and
replacing src/main/resources/application-mysql.properties
with
src/main/resources/application-postgres.properties
but the application still fails with
Error creating bean with name 'entityManagerFactory' defined in class
path resource
[org/springframework/boot/autoconfigure/orm/jpa/HibernateJpaConfiguration.class]:
Invocation of init method failed; nested exception is
org.hibernate.service.spi.ServiceException: Unable to create requested
service [org.hibernate.engine.jdbc.env.spi.JdbcEnvironment]
I could not find any sample.
application-postgres.properties was appended to contain
spring.profiles.active=postgres
spring.cloud.gcp.sql.instance-connection-name= xyzprj:us-central1:postgres-instance
spring.datasource.username=xyzuser
spring.datasource.password=password
application-postgres.properties was then replaced as follows
spring.datasource.username=xyzuser
spring.datasource.password=passord
spring.sql.init.mode=always
spring.cloud.gcp.sql.database-name=petclinic
spring.cloud.gcp.sql.instance-connection-name=xyzprj:us-central1:postgres-instance
Later, both of these properties files were also changed so that
spring.datasource.username=root
and
spring.datasource.password=root
but the same issue occurs.
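For comparison, a minimal application-postgres.properties for the spring-cloud-gcp-starter-sql-postgresql starter would look roughly like the sketch below (the instance connection name, database name and credentials are the placeholders from above; the starter builds the JDBC URL from these, so no spring.datasource.url is needed, and spring.profiles.active normally belongs in application.properties rather than in the profile-specific file):
spring.cloud.gcp.sql.database-name=petclinic
spring.cloud.gcp.sql.instance-connection-name=xyzprj:us-central1:postgres-instance
spring.datasource.username=xyzuser
spring.datasource.password=password
spring.sql.init.mode=always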
The sample was tried on Cloud Shell within Google Cloud:
gcloud auth application-default login
You are running on a Google Compute Engine virtual machine. The
service credentials associated with this virtual machine will
automatically be used by Application Default Credentials, so it is not
necessary to use this command.
If you decide to proceed anyway, your user credentials may be visible
to others with access to this virtual machine. Are you sure you want
to authenticate with your personal account?
Do you want to continue (Y/n)? n
ERROR: (gcloud.auth.application-default.login) Aborted by user.

I tried to reproduce the issue on my side, but I was able to deploy the application successfully.
Here are the steps I followed:
Step 1: Created the PostgreSQL instance using the command below
gcloud sql instances create postgres-instance \
--database-version=POSTGRES_13 \
--cpu=1 \
--memory=4GB \
--region=us-central1 \
--root-password=root
Step 2: Created the database using
gcloud sql databases create petclinic --instance postgres-instance
Step 3: Connected to the PostgreSQL instance to verify whether the connection is established
gcloud sql connect postgres-instance
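As a sanity check at this point, the exact instance connection name expected by the properties later on can be read back from the instance created in Step 1 (a hedged example; the instance name is the one used above):
gcloud sql instances describe postgres-instance --format='value(connectionName)'
This should print something like PROJECT_ID:us-central1:postgres-instance.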
Step 4: Replaced the following, as you did.
In application.properties
spring.profiles.active=postgres
and replacing
<artifactId>spring-cloud-gcp-starter-sql-mysql</artifactId>
with
<artifactId>spring-cloud-gcp-starter-sql-postgresql</artifactId>
and
replacing src/main/resources/application-mysql.properties
with
src/main/resources/application-postgres.properties
Step 5: In addition to the above changes
In application.properties, replaced
spring.cloud.gcp.sql.instance-connection-name= POSTGRESQL_CONNECTION_NAME
In src/main/resources/application-postgres.properties added
spring.datasource.username=USERNAME
spring.datasource.password=PASSWORD
In the pom.xml file, added the following dependency:
<dependency>
<groupId>com.google.cloud.sql</groupId>
<artifactId>postgres-socket-factory</artifactId>
<version>1.1.0</version>
</dependency>
In the build.gradle file, add:
dependencies {
compile 'com.google.cloud.sql:postgres-socket-factory:1.1.0'
}
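As an alternative to the starter's auto-configuration, the Postgres socket factory added above can also be wired up explicitly through a plain JDBC URL. A rough sketch, reusing the placeholder instance and database names from the question (this is not the tutorial's own configuration):
spring.datasource.url=jdbc:postgresql:///petclinic?cloudSqlInstance=xyzprj:us-central1:postgres-instance&socketFactory=com.google.cloud.sql.postgres.SocketFactory
spring.datasource.username=xyzuser
spring.datasource.password=password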
Note: run gcloud auth application-default login to obtain default credentials for communicating with the Cloud SQL API.
For more information, check this document.

Related

Connect Postgres db hosted in azure storage using docker

I am trying to connect to a Postgres database hosted in an Azure storage account from within Flyway; Flyway is running as a Docker image in a Docker container:
docker run --rm flyway/flyway -url=jdbc:postgresql://postgres-azure-db:5432/postgres -user=user -password=password info
but I am getting the error: ERROR: Unable to obtain connection from database
Any idea or doc link would be helpful.
You have a similar error (different context, same general solution) in this flyway issue
My missing piece for reaching private Cloud SQL instances from private worker pools in Cloud Build was a missing network route.
The fix is ensuring the Service Networking VPC peer has the "Export custom routes" setting enabled, and that the Cloud Router advertises the route.
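If it helps, enabling that setting from the CLI looks roughly like this (the Service Networking peering is usually named servicenetworking-googleapis-com; the VPC name here is a placeholder):
gcloud compute networks peerings update servicenetworking-googleapis-com \
  --network=my-vpc \
  --export-custom-routes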
In your context (Azure), see "Quickstart: Use Azure Data Studio to connect and query PostgreSQL"
You can also try with a local Postgres instance first, and Azure Data Studio, for testing.
After exploring a few options, I implemented Flyway using an Azure Container Instance (ACI): I created an ACI to run the Flyway Docker image and to execute the commands inside the ACI, and also created a file share to keep the config file and SQL scripts.
All these resources (storage, ACI, file share) were created via Terraform scripts triggered from Jenkins.
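For reference, a rough sketch of the kind of command the ACI ends up running, with the file share mounted into the standard Flyway image locations (the mount paths, server name and credentials are illustrative, and Azure Database for PostgreSQL typically requires sslmode=require):
docker run --rm \
  -v /mnt/flyway/conf:/flyway/conf \
  -v /mnt/flyway/sql:/flyway/sql \
  flyway/flyway \
  -url='jdbc:postgresql://myserver.postgres.database.azure.com:5432/postgres?sslmode=require' \
  -user=user@myserver -password=password migrate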

Move my hasura cloud schema, relations, tables etc. and put into my offline docker file using docker-compose

So basically I have my Hasura Cloud project with its existing schema, relations, tables, etc., and I want to run it offline using Docker. I tried using metadata export and import, but that does not seem to work. How can I do it, or are there other ways to do it?
This is the Docker setup I want to run offline.
This is the cloud project whose schemas or metadata I want to get.
Or maybe I should just manually recreate the tables and relations?
When using the steps outlined in the Hasura Quickstart with Docker page, the following steps will help get all the table definitions, relationships, etc., set up on the local instance just like they are set up on the Hasura Cloud instance.
Migrate all the database schema and metadata using the steps mentioned in Setting up migrations.
Since you want to migrate from Hasura Cloud, use the URL of the cloud instance in step 2. Perform steps 3-6 as described in the above link.
Bring up the local Docker environment. Ideally, edit the docker-compose.yaml file to set HASURA_GRAPHQL_ENABLE_CONSOLE: "false" before running docker-compose up -d.
Resume the process of applying migrations from step 7, using the endpoint of the local instance. For example,
$ hasura metadata apply --endpoint http://localhost:8080
$ hasura migrate apply --endpoint http://localhost:8080
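For completeness, the export side of the process (steps 2-6 of that guide, run against the cloud instance) looks roughly like this sketch; the endpoint and admin secret are placeholders, and exact flags vary slightly between CLI versions:
$ hasura init hasura-project --endpoint https://my-project.hasura.app
$ cd hasura-project
$ hasura metadata export --admin-secret <ADMIN_SECRET>
$ hasura migrate create init --from-server --admin-secret <ADMIN_SECRET>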

Heroku: importing from S3 failing

I'm trying to import a local Postgresql database to Heroku and I'm following these steps https://devcenter.heroku.com/articles/heroku-postgres-import-export#import-to-heroku-postgres.
I have successfully:
created a dump
uploaded it to an S3 Bucket
created a signed link from the AWS CLI
ran the command heroku pg:backups:restore '<SIGNED URL>' DATABASE_URL (adding -a with my app name).
The process to restore a backup starts correctly but then exits with this code:
! An error occurred and the backup did not finish.
!
! Could not initialize transfer
!
! Run heroku pg:backups:info r011 for more details.
Opening the log shows:
Database: BACKUP
Finished at: 2020-01-09 18:49:30 +0000
Status: Failed
Type: Manual
Backup Size: 0.00B (0% compression)
=== Backup Logs
2020-01-09 18:49:30 +0000 Could not initialize transfer
I've tried:
re-uploading the file to the bucket,
generating a new signed link,
putting the app in maintenance mode,
creating a user in my IAM management service with full S3 access and saving the credentials in the app environment, as described in https://devcenter.heroku.com/articles/s3
Not sure where to go from here but would appreciate any help. (I'm on the hobby plan therefore I can't ask Heroku's support for help)
Edit: I also tried:
deleting and recreating the S3 Bucket
installing version 1 of the AWS CLI to see if by chance the structure of a presigned link had changed
Edit 2: Since I could not find a solution, I've opted to migrate the hosting entirely to AWS for the moment.
Make sure that the credentials stored on your machine in ~/.aws/ have the default profile set to the credentials you created for your Heroku config. Then also make sure the signed URL is created with those credentials and that configuration. I had to set my default credentials to the credentials I put in my Heroku config, and I also had to set my default region in ~/.aws/config to match the bucket location. It should work after that.
Here are some instructions if you are on mac or linux.
Sorry Windows people. I would assume it is something similar.
Create a new access key ID and secret key in IAM on AWS
Set Heroku config vars to use those credentials: heroku config:set AWS_ACCESS_KEY_ID=xxx AWS_SECRET_ACCESS_KEY=yyy
Optional (You may have to set the bucket name in heroku config too)
On your machine set your credentials you just created to the default in ~/.aws/credentials
On your machine set your default region that corresponds to your bucket in ~/.aws/config
Create a signed URL: aws s3 presign s3://your-bucket-address/your-object
Run the restore: heroku pg:backups:restore '<SIGNED URL>' DATABASE_URL
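One more detail worth checking on the presign step: the signed URL expires (one hour by default) and the CLI region has to match the bucket's region, so being explicit about both is a reasonable sketch:
aws s3 presign s3://your-bucket-address/your-object --expires-in 3600 --region us-east-1
heroku pg:backups:restore '<SIGNED URL>' DATABASE_URL -a your-app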
Had the exact same error and made these 2 adjustments. In the S3 console, click on the file you want to use for the backup. You should see the name of your file followed by 4 tabs. In the General information tab, do the following:
Click on Make public to make the file available for download.
Get the URL for that object where it says URL of object
(it should be something like https://mybucket.s3.amazonaws.com/my.file; you can test whether it works by pasting that URL into a new browser tab, which should trigger the download of your file)
Once the previous check is working, you can proceed to
heroku pg:backups restore 'https://mybucket.s3.amazonaws.com/my.file' DATABASE_URL
I ran into the same issue and discovered the issue was that I had my bucket's region set as us-east rather than us-east-1.

Can't connect to Cloud SQL via SQL Proxy on Dataproc

I am trying to access Cloud SQL from Dataproc via Cloud SQL Proxy (without using Hive)
After much tinkering based on instructions here:
https://github.com/GoogleCloudPlatform/dataproc-initialization-actions/tree/master/cloud-sql-proxy
I got to the point where at least the cluster gets created with no errors and the proxy seems to be installed. However, my Java Spark jobs can't connect to the cluster with this error:
Exception in thread "main" java.sql.SQLException: Access denied for user 'root'@'localhost' (using password: NO)
I deliberately created an instance with NO user password, but it doesn't work for instances with the password either.
What I find strange is that when I access the same database from my local computer, also using a locally running Cloud SQL Proxy, everything works well, but when I try to force a similar error by deliberately submitting the wrong password, I get a similar, but DIFFERENT error, like this:
Exception in thread "main" java.sql.SQLException: Access denied for user 'root'@'cloudsqlproxy~217.138.38.242' (using password: YES)
So, in the Dataproc error it says root@localhost, whereas in my local proxy setup the error says root@cloudsqlproxy~<IP address>. Why is it doing this? It's exactly the same code running in both places. It seems like it's trying to connect to something local within the Dataproc master machine?
What further confirms this is that I don't see this error logged on the server side when the attempt fails on Dataproc, but the error IS logged when I force the failure from my local machine. So Dataproc's proxy doesn't seem to be pointing at the SQL server?
I created the cluster with the following instructions:
--scopes sql-admin \
--initialization-actions gs://bucket_name/cloud-sql-proxy.sh \
--metadata 'enable-cloud-sql-hive-metastore=false' \
--metadata 'additional-cloud-sql-instances=project_id:europe-west2:sql_instance_id' \
And the connection string that I specify inside the Spark code is like this:
jdbc:mysql://127.0.0.1:3306/database_name
Thanks for your help!
Update:
Based on the below suggestion, I modified my connection string to be as follows:
"jdbc:mysql://google/DATABASE_NAME?cloudSqlInstance=INSTANCE_NAME&socketFactory=com.google.cloud.sql.mysql.SocketFactory&useSSL=false&user=root"
However, in this case I get the following error:
Exception in thread "main" java.sql.SQLNonTransientConnectionException: Cannot connect to MySQL server on google:3,306.
Make sure that there is a MySQL server running on the machine/port you are trying to connect to and that the machine this software is running on is able to connect to this host/port (i.e. not firewalled). Also make sure that the server has not been started with the --skip-networking flag.
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:108)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:95)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:87)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:61)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:71)
at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:458)
at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:230)
at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:226)
How/where is it supposed to get the driver for 'google'? Also, note that it seems to mis-format the default port 3306 and shows it as 3,306. (I tried supplying the port explicitly, but that didn't help...)
I followed the instructions in the tutorial you shared, and both the Cloud SQL instance and the Dataproc cluster were created. The validation process was also carried out:
$ gcloud dataproc jobs submit pyspark --cluster githubtest pyspark_metastore_test.py
Job [63d2e1ef8c9f45ae818c135c775dcf93] submitted.
Waiting for job output...
18/08/22 17:21:51 INFO org.spark_project.jetty.util.log: Logging initialized #3074ms
...
Successfully found table table_mdhw in Cloud SQL Hive metastore
18/08/22 17:22:53 INFO org.spark_project.jetty.server.AbstractConnector: Stopped Spark@5061d2ce{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
Job [63d2e1ef8c9f45ae818c135c775dcf93] finished successfully.
I only got the same error as yours when I set a different password for root. Could you update the root password and try the following command again from the master?
mysql -u root -h 127.0.0.1 -p
In my environment, the command above connects successfully. If that works, please check this link for further steps to connect your Java application. Authentication and the mysql-connector-java connector are required as additional steps.
Hope it helps!
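For what it's worth, here is a minimal standalone sketch of that socket-factory connection from Java, assuming mysql-connector-java plus the com.google.cloud.sql:mysql-socket-factory artifact matching your Connector/J version are on the classpath; the instance connection name, database name and password below are the placeholders from the question:
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class CloudSqlCheck {
    public static void main(String[] args) throws Exception {
        // No host in the URL: the Cloud SQL socket factory opens the connection to the instance.
        String jdbcUrl = "jdbc:mysql:///database_name";
        Properties props = new Properties();
        props.setProperty("user", "root");
        props.setProperty("password", "my-password");
        props.setProperty("cloudSqlInstance", "project_id:europe-west2:sql_instance_id");
        props.setProperty("socketFactory", "com.google.cloud.sql.mysql.SocketFactory");
        props.setProperty("useSSL", "false");
        try (Connection conn = DriverManager.getConnection(jdbcUrl, props)) {
            // Print the server version to confirm the connection works.
            System.out.println("Connected to " + conn.getMetaData().getDatabaseProductVersion());
        }
    }
}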
I ran into the same issues, with the exact same symptoms (Access Denied on localhost instead of cloudsqlproxy~*, and google:3,306).
SSH-ing in and looking at /var/log/cloud-sql-proxy/cloud-sql-proxy.log, I saw that cloud-sql-proxy wasn't actually starting; port 3306 was apparently already in use for some reason. I added =tcp:3307 to the end of the instance connection name in additional-cloud-sql-instances, and I was up and running.
I never managed to get the SocketFactory URIs working. If changing the port doesn't work, others elsewhere have suggested using VPC.
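In case it helps, the two adjustments described above would look roughly like this (project, region and instance IDs are the question's placeholders):
--metadata 'additional-cloud-sql-instances=project_id:europe-west2:sql_instance_id=tcp:3307' \
and then pointing the Spark code's connection string at the new port:
jdbc:mysql://127.0.0.1:3307/database_name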

gcloud Export to Google Storage Bucket from Cloud SQL instance

Running this command:
gcloud sql instances export myinstance gs://my_bucket_name/filename.csv -d "mydatabase" -t "mytable"
Giving me the following error:
ERROR: (gcloud.sql.instances.import) ERROR_RDBMS
I have manually run console uploads to the bucket, which work fine. I am able to log in to the SQL instance and run queries, which makes me think that there are no permission issues. Has anybody ever seen this type of error and found a way around it?
Note: I have googled for possible causes, and most of them point to either SQL or bucket permission issues.
Never mind. I figured out that I need to make an OAuth connection (using the JSON token generated from the gcloud API/credentials section) to the instance before interacting with it.
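For anyone hitting ERROR_RDBMS on a CSV export more recently, two things are worth checking (a hedged sketch using the bucket, database and table names from the question): the instance's service account needs write access to the target bucket, and the CSV export has since moved to the gcloud sql export csv command.
SA=$(gcloud sql instances describe myinstance --format='value(serviceAccountEmailAddress)')
gsutil acl ch -u "$SA":W gs://my_bucket_name
gcloud sql export csv myinstance gs://my_bucket_name/filename.csv \
  --database=mydatabase --query='SELECT * FROM mytable'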