How to securely connect to Cloud SQL from Cloud Run?

How do I connect to the database on Cloud SQL without having to add my credentials file inside the container?

UPDATE: to connect to Cloud SQL from Cloud Run, see the official documentation.
Cloud SQL is now supported by the fully managed version of Cloud Run (Cloud Run on GKE users were already able to reach Cloud SQL over a private IP).
To get started:
if you do not have one already, create a Cloud SQL instance.
make sure that the Cloud SQL Admin API is enabled
deploy a new revision of your Cloud Run service with the following flag:
$ gcloud run services update [SERVICE] --add-cloudsql-instances [INSTANCE_CONNECTION_NAME]
where INSTANCE_CONNECTION_NAME is of the form project:region:instancename.
When you do this, Cloud Run will activate and configure the Cloud SQL proxy for you. You should then connect to it via the /cloudsql/[INSTANCE_CONNECTION_NAME] Unix socket.
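For illustration, a minimal Java sketch of connecting over that socket, assuming the Cloud SQL JDBC socket factory (com.google.cloud.sql:mysql-socket-factory) is on the classpath; the instance name, database, and credentials below are hypothetical:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CloudSqlSocketExample {
  public static void main(String[] args) throws SQLException {
    // Hypothetical values; the connection name has the form project:region:instancename.
    String instanceConnectionName = "my-project:us-central1:my-instance";
    String dbName = "my_database";

    // Point the socket factory at the Unix socket Cloud Run exposes under /cloudsql.
    String jdbcUrl = String.format(
        "jdbc:mysql:///%s"
            + "?socketFactory=com.google.cloud.sql.mysql.SocketFactory"
            + "&cloudSqlInstance=%s"
            + "&unixSocketPath=/cloudsql/%s",
        dbName, instanceConnectionName, instanceConnectionName);

    try (Connection conn = DriverManager.getConnection(jdbcUrl, "db_user", "db_pass");
        ResultSet rs = conn.createStatement().executeQuery("SELECT 1")) {
      rs.next();
      System.out.println("Connected: " + rs.getInt(1));
    }
  }
}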

CONNECTING FROM CLOUD RUN (fully managed) TO CLOUD SQL USING UNIX DOMAIN SOCKETS (Java)
At this time Cloud Run (fully managed) does not support connecting to
the Cloud SQL instance using TCP. Your code should not try to access the instance
using an IP address such as 127.0.0.1 or 172.17.0.1.
1. Install and initialize the Cloud SDK.
2. Update components:
gcloud components update
3. Create a new project:
gcloud projects create run-to-sql
gcloud config set project run-to-sql
gcloud projects describe run-to-sql
4. Enable billing:
gcloud alpha billing projects link run-to-sql --billing-account XXXXXX-XXXXXX-XXXX
5. Set the compute project-info metadata:
gcloud compute project-info describe --project run-to-sql
gcloud compute project-info add-metadata --metadata google-compute-default-region=europe-west2,google-compute-default-zone=europe-west2-b
6. Enable the Cloud SQL Admin API:
gcloud services enable sqladmin.googleapis.com
7. Create a Cloud SQL instance with a public IP:
#Create the SQL instance in the same region as the App Engine application
gcloud --project=run-to-sql beta sql instances create database-external --region=europe-west2
#Set the password for the "root@%" MySQL user:
gcloud sql users set-password root --host=% --instance database-external --password root
#Create a user
gcloud sql users create user_name --host=% --instance=database-external --password=user_password
#Create a database
gcloud sql databases create user_database --instance=database-external
gcloud sql databases list --instance=database-external
gcloud sql instances list
Cloud Run (fully managed) uses a service account to authorize your
connections to Cloud SQL. This service account must have the correct
IAM permissions to successfully connect. Unless otherwise configured,
the default service account is in the format
PROJECT_NUMBER-compute@developer.gserviceaccount.com.
8. Ensure that the service account for your service has one of the following IAM roles: Cloud SQL Client (preferred).
gcloud iam service-accounts list
gcloud projects add-iam-policy-binding run-to-sql --member serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com --role roles/cloudsql.client
9. Clone the java-docs-samples repository:
git clone https://github.com/GoogleCloudPlatform/java-docs-samples.git
cd java-docs-samples/cloud-sql/mysql/servlet/
ls
#Dockerfile pom.xml README.md src
10. Inspect the file that handles the connection to Cloud SQL:
cat src/main/java/com/example/cloudsql/ConnectionPoolContextListener.java
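Its core is a HikariCP pool configured for the Cloud SQL socket factory, roughly along these lines (a simplified sketch of the sample; the env var names match those set in step 13):
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import javax.sql.DataSource;

public class ConnectionPoolSketch {
  static DataSource createConnectionPool() {
    HikariConfig config = new HikariConfig();
    // Database name, user, and password come from the service's environment.
    config.setJdbcUrl(String.format("jdbc:mysql:///%s", System.getenv("DB_NAME")));
    config.setUsername(System.getenv("DB_USER"));
    config.setPassword(System.getenv("DB_PASS"));
    // Route connections through the Cloud SQL socket factory instead of TCP.
    config.addDataSourceProperty("socketFactory", "com.google.cloud.sql.mysql.SocketFactory");
    config.addDataSourceProperty("cloudSqlInstance", System.getenv("CLOUD_SQL_CONNECTION_NAME"));
    config.setMaximumPoolSize(5);
    return new HikariDataSource(config);
  }
}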
11. Containerize the app and upload it to Container Registry:
gcloud builds submit --tag gcr.io/run-to-sql/run-mysql
12. Deploy the service to Cloud Run:
gcloud run deploy run-mysql --image gcr.io/run-to-sql/run-mysql
13. Configure the service for use with Cloud SQL:
gcloud run services update run-mysql --add-cloudsql-instances run-to-sql:europe-west2:database-external --set-env-vars CLOUD_SQL_CONNECTION_NAME=run-to-sql:europe-west2:database-external,DB_USER=user_name,DB_PASS=user_password,DB_NAME=user_database
14. Test it:
curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" https://run-mysql-xxxxxxxx-xx.x.run.app
SUCCESS!

I was facing an issue connecting from a dockerized FastAPI application to Cloud SQL via private IP. The following three steps resolved it:
Ensure your application is using the proper database connection string.
Sanity check; always do this first. You don't want to spend hours researching a solution without first ruling out a wrong connection string.
When testing (and only when testing): consider logging the DB connection string on app init so you can explicitly confirm it is correct.
Grant the Cloud SQL Client role to the Cloud Run default service account.
This role contains the following permissions:
cloudsql.instances.connect
cloudsql.instances.get
Create a VPC connector within the network of the database (documentation), and assign the VPC connector to the Cloud Run service (see the sketch below).
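With those three steps in place, the service reaches the instance over plain TCP on its private IP. A minimal sketch of such a connection, shown in Java for consistency with the earlier sample (the env var names are hypothetical):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class PrivateIpExample {
  public static void main(String[] args) throws SQLException {
    // Hypothetical env vars; DB_HOST is the instance's private IP, which is
    // reachable only because the service egresses through the VPC connector.
    String jdbcUrl = String.format("jdbc:postgresql://%s:5432/%s",
        System.getenv("DB_HOST"), System.getenv("DB_NAME"));
    try (Connection conn = DriverManager.getConnection(
        jdbcUrl, System.getenv("DB_USER"), System.getenv("DB_PASS"))) {
      System.out.println("Connected over private IP");
    }
  }
}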

Related

Use Hasura with Google Cloud Run and Google Cloud SQL

The docs say that Hasura needs the Postgres connection string in the HASURA_GRAPHQL_DATABASE_URL env var.
Example:
docker run -d -p 8080:8080 \
-e HASURA_GRAPHQL_DATABASE_URL=postgres://username:password@hostname:port/dbname \
hasura/graphql-engine:latest
It looks like my problem is that the instance connection name for Google Cloud SQL (PROJECT_ID:REGION:INSTANCE_ID) is not a TCP hostname.
From the Cloud Run docs (https://cloud.google.com/sql/docs/postgres/connect-run) I got this example:
postgres://<db_user>:<db_pass>@/<db_name>?unix_sock=/cloudsql/<cloud_sql_instance_name>/.s.PGSQL.5432 but it does not seem to work. Ideas?
I'm currently adding the cloud_sql_proxy to the container as a workaround so that I can connect to TCP 127.0.0.1:5432, but I'm looking for a direct connection to google-cloud-sql.
// EDIT: Thanks for the comments, beta8 did most of the trick, but I also missed the set-cloudsql-instances parameter: https://cloud.google.com/sdk/gcloud/reference/beta/run/deploy#--set-cloudsql-instances
My full cloud-run command:
gcloud beta run deploy \
--image gcr.io/<PROJECT_ID>/graphql-server:latest \
--region <CLOUD_RUN_REGION> \
--platform managed \
--set-env-vars HASURA_GRAPHQL_DATABASE_URL="postgres://<DB_USER>:<DB_PASS>@/<DB_NAME>?host=/cloudsql/<PROJECT_ID>:<CLOUD_SQL_REGION>:<INSTANCE_ID>" \
--timeout 900 \
--set-cloudsql-instances <PROJECT_ID>:<CLOUD_SQL_REGION>:<INSTANCE_ID>
As of v1.0.0-beta.8, which has better support for Postgres connection string parameters, I've managed to make the Unix socket connection work from Cloud Run to Cloud SQL without embedding the proxy into the container.
The connection string should look something like this:
postgres://<user>:<password>@/<database>?host=/cloudsql/<instance_name>
Notice that the client will add the suffix /.s.PGSQL.5432 for you.
Also make sure you have added the Cloud SQL Client role.
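For instance, a small Java sketch that assembles that string (the helper name and placeholder values are hypothetical):
public class ConnectionString {
  // Hypothetical helper that builds the connection string shown above.
  static String cloudSqlSocketUrl(String user, String password, String db, String instance) {
    // The client appends the /.s.PGSQL.5432 suffix itself, so only the
    // socket directory is given here.
    return String.format("postgres://%s:%s@/%s?host=/cloudsql/%s", user, password, db, instance);
  }

  public static void main(String[] args) {
    System.out.println(cloudSqlSocketUrl(
        "user", "secret", "mydb", "my-project:europe-west1:my-instance"));
  }
}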
If the Hasura database requires that exact connection string format, you can use it. However, you cannot use Cloud Run's Cloud SQL support. You will need to whitelist the entire Internet so that your Cloud Run instance can connect. Cloud Run does not publish a CIDR block of addresses. This method is not recommended.
The Unix socket method is for the Cloud SQL Proxy that Cloud Run supports. This is the connection method used internally to your container when Cloud Run is managing the connection to Cloud SQL. Note that for this method, IP-based hostnames are not supported when your client connects to Cloud Run's Cloud SQL Proxy.
You can embed the Cloud SQL Proxy directly in your container. Then you can use 127.0.0.1 as the hostname part for the connection string. This will require that you create a shell script as your Cloud Run entrypoint to launch both the proxy and your application. Based on your scenario, I recommend this method.
The Cloud SQL Proxy is written in Go and the source code is published.
If you choose to embed the proxy, don't forget to add the Cloud SQL Client role to the Cloud Run service account.
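The answer above describes a shell-script entrypoint; as a rough sketch of the same idea in Java (an illustration, not the answer's actual script; the binary path and sleep-based wait are assumptions), the proxy can be launched as a subprocess before the application opens connections:
import java.io.IOException;

public class ProxyLauncher {
  public static void main(String[] args) throws IOException, InterruptedException {
    // Hypothetical path; the proxy binary must be copied into the container image.
    Process proxy = new ProcessBuilder(
            "/cloud_sql_proxy",
            "-instances=my-project:us-central1:my-instance=tcp:5432")
        .inheritIO()
        .start();
    // Naive wait for the proxy to start listening; a real entrypoint would
    // poll the port instead of sleeping.
    Thread.sleep(2000);
    if (!proxy.isAlive()) {
      throw new IOException("Cloud SQL proxy exited early");
    }
    // ...start the application here; it can now reach 127.0.0.1:5432...
  }
}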

Unable to create repository on IBM Cloud

I'm able to log in successfully with: ibmcloud cr login
but when I try to create a repository in the registry, I get the following error:
docker push registry.eu-gb.bluemix.net/fdutreg/ksrepo
The push refers to repository [registry.eu-gb.bluemix.net/fdutreg/ksrepo]
428c97da766c: Preparing
unauthorized: The login credentials are not valid, or your IBM Cloud account is not active.
Any idea?
Replace registry.eu-gb.bluemix.net with registry.eu-de.bluemix.net and it works.
Two years later, but someone may be experiencing the same issue. The problem is that you are not authenticated to the registry. You can authenticate with an API key using:
docker login -u iamapikey -p apikey registry_url
For the apikey field, you can create an API key under Manage > IAM > API keys > Create an IBM Cloud API key at cloud.ibm.com.
It is important to know that using --password via the CLI is insecure; use --password-stdin. You can find alternatives at https://cloud.ibm.com/docs/Registry?topic=Registry-registry_access
To log your local Docker daemon into IBM Cloud Container Registry, run the following command:
ibmcloud cr login

How to access Cloud SQL from dataproc?

I have a dataproc cluster and I'd like to have the cluster access a Cloud SQL instance. When I created the cluster I assigned scope --scopes sql-admin but after reading the Cloud SQL documentation it looks like I need to connect through a proxy. How can I configure this for access from dataproc?
UPDATE:
Until integration comes out of the box (@vadim's answer), I can get this working by using the Cloud SQL proxy in my Dataproc initialization script:
wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64
mv cloud_sql_proxy.linux.amd64 cloud_sql_proxy
chmod +x cloud_sql_proxy
nohup ./cloud_sql_proxy -dir=/cloudsql --instances=my-project:us-central1:mysql-instance=tcp:3307 > cloud_proxy_nohup.log &
(note: port 3306 is already in use, so I'm using 3307 here)
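Jobs running on the cluster can then reach the instance through the proxy's local TCP port; a minimal Java sketch (database name and credentials are hypothetical):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ProxyConnectionExample {
  public static void main(String[] args) throws SQLException {
    // The init script above leaves the proxy listening on 127.0.0.1:3307.
    String jdbcUrl = "jdbc:mysql://127.0.0.1:3307/my_database";
    try (Connection conn = DriverManager.getConnection(jdbcUrl, "db_user", "db_pass")) {
      System.out.println("Connected through the Cloud SQL proxy");
    }
  }
}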
Using a VPC with a private IP between Cloud SQL and Dataproc seems to be a good option. A proxy should no longer be needed.
There's a pending pull request for a dataproc initialization action that will install the Cloud SQL Proxy on all the nodes in the cluster:
https://github.com/GoogleCloudPlatform/dataproc-initialization-actions/pull/47/commits/ade93cc25d72c33e176840ddaa50671e5ed8ed4a

IBM Bluemix : Bulk load data into MongoDB

I have created a MongoDB service in Bluemix and I can successfully access it in an app deployed on Bluemix. I can create data in the MongoDB instance programmatically through my app, but what I want to do is load data into MongoDB from my laptop.
I am not able to ping the MongoDB web address from my laptop, so I cannot connect to it from a standalone Java program.
What is the way ahead to bulk load data into MongoDB on Bluemix?
You cannot connect to this experimental service from outside of Bluemix.
mongodb: If you want to use your standalone Java program to interact with this service on Bluemix, consider pushing the program as another application to Bluemix.
cf push mystandaloneapp -p standalone.jar --no-route
Then, bind the same mongodb instance to this application. When you restage the application, it should get the credentials in the VCAP_SERVICES environment variable.
mongolab: Assuming you created the mongolab service, from your Bluemix Dashboard, find and click on your MongoLab instance. From there, launch the MongoLab Dashboard. Click on your deployment (IbmCloud_***). You should see instructions on how to connect to mongo from shell, as well as import/export commands.
mongoimport -h ds049570.mongolab.com:49570 -d IbmCloud_ee4rm8hq_ecl23uf8 -c <collection> -u <user> -p <password> --file <input file>
You should also be able to connect to this from your Java program, as shown below.
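For example, with the MongoDB Java driver (mongodb-driver-sync), using the same host and database as the mongoimport command; the credentials are placeholders:
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

public class MongoLabExample {
  public static void main(String[] args) {
    // Placeholder credentials; use the values from the MongoLab dashboard.
    String uri = String.format(
        "mongodb://%s:%s@ds049570.mongolab.com:49570/IbmCloud_ee4rm8hq_ecl23uf8",
        "dbuser", "secret");
    try (MongoClient client = MongoClients.create(uri)) {
      MongoDatabase db = client.getDatabase("IbmCloud_ee4rm8hq_ecl23uf8");
      db.getCollection("mycollection").insertOne(new Document("hello", "world"));
    }
  }
}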
Finally, check out the MongoDB by Compose service, which is an IBM provided MongoDB service, with a dashboard.

Install Chef Server 11 with AWS RDS

Now that AWS has a PostgreSQL service in RDS, I tried to install Chef Server 11 with a PostgreSQL RDS instance by editing attributes in /opt/chef-server/embedded/cookbooks/chef-server/attributes/default.rb
default['chef_server']['postgresql']['vip'] = "rds instance endpoint"
and importing the database with the following command:
/opt/chef-server/embedded/bin/psql -h "rds instance endpoint" -p 5432 -U "user_name" "database_name" < /opt/chef-server/embedded/service/erchef/lib/chef_db-f086a97/priv/pgsql_schema.sql
But I am not able to achieve that; chef-server-ctl reconfigure gives an error:
curl -sf http://127.0.0.1:8000/_status returned 7
Please help me configure Chef Server with an RDS instance.
I think I was able to solve my problem. It was caused by the encrypted password in the erchef config file. I edited
"/opt/chef-server/embedded/cookbooks/chef-server/templates/default/echef.config.rb"
accordingly, and it seems to be working perfectly fine now.
Thanks
The chef-server-rds cookbook on github can be used to install Chef Server 11 with AWS RDS.
Given an IAM key and secret, it will provision the RDS instance if it doesn't exist in the account, initialize the Chef schema, install the appropriate platform-specific chef-server Omnibus package, and perform the initial configuration of Chef Server on an AWS EC2 Ubuntu instance.
Using Postgres on Amazon RDS offloads DB resource use away from the chef-server host. It also enables various DB functions, like scaling, backup, and restore, to be done independently of the chef-server installation. Similar configurations can be written for other DB service providers.