Use Hasura with Google Cloud Run and Google Cloud SQL - PostgreSQL

The docs state that Hasura needs the Postgres connection string in the HASURA_GRAPHQL_DATABASE_URL env var.
Example:
docker run -d -p 8080:8080 \
-e HASURA_GRAPHQL_DATABASE_URL=postgres://username:password@hostname:port/dbname \
hasura/graphql-engine:latest
It looks like my problem is that the instance connection name for Google Cloud SQL, which looks like PROJECT_ID:REGION:INSTANCE_ID, is not a TCP hostname.
From the cloud run docs (https://cloud.google.com/sql/docs/postgres/connect-run) I got this example:
postgres://<db_user>:<db_pass>@/<db_name>?unix_sock=/cloudsql/<cloud_sql_instance_name>/.s.PGSQL.5432 but it does not seem to work. Ideas?
As a workaround, I'm currently adding the cloud_sql_proxy to the container so that I can connect to TCP 127.0.0.1:5432, but I'm looking for a direct connection to Google Cloud SQL.
// EDIT: Thanks for the comments; beta.8 did most of the trick, but I had also missed the set-cloudsql-instances parameter: https://cloud.google.com/sdk/gcloud/reference/beta/run/deploy#--set-cloudsql-instances
My full cloud-run command:
gcloud beta run deploy \
--image gcr.io/<PROJECT_ID>/graphql-server:latest \
--region <CLOUD_RUN_REGION> \
--platform managed \
--set-env-vars HASURA_GRAPHQL_DATABASE_URL="postgres://<DB_USER>:<DB_PASS>@/<DB_NAME>?host=/cloudsql/<PROJECT_ID>:<CLOUD_SQL_REGION>:<INSTANCE_ID>" \
--timeout 900 \
--set-cloudsql-instances <PROJECT_ID>:<CLOUD_SQL_REGION>:<INSTANCE_ID>
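After deploying, you can confirm the Cloud SQL instance is attached by describing the service (graphql-server is a hypothetical service name here):
gcloud beta run services describe graphql-server --region <CLOUD_RUN_REGION> --platform managed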

As of v1.0.0-beta.8, which has better support for Postgres connection string parameters, I've managed to get the Unix socket connection working from Cloud Run to Cloud SQL, without embedding the proxy in the container.
The connection should look something like this:
postgres://<user>:<password>@/<database>?host=/cloudsql/<instance_name>
Notice that the client will add the suffix /.s.PGSQL.5432 for you.
Also make sure you have added the Cloud SQL Client permission.
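For reference, granting that role could look something like this (project ID and service account are placeholders):
# Hypothetical: grant the Cloud SQL Client role to the Cloud Run runtime service account
gcloud projects add-iam-policy-binding <PROJECT_ID> \
  --member="serviceAccount:<PROJECT_NUMBER>-compute@developer.gserviceaccount.com" \
  --role="roles/cloudsql.client"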

If the Hasura database requires that exact connection string format, you can use it. However, you cannot use Cloud Run's Cloud SQL support. You will need to whitelist the entire Internet so that your Cloud Run instance can connect. Cloud Run does not publish a CIDR block of addresses. This method is not recommended.
The Unix socket method is for the Cloud SQL Proxy that Cloud Run supports. This is the connection method used inside your container when Cloud Run manages the connection to Cloud SQL. Note that with this method, IP-based hostnames are not supported for your client to connect to Cloud Run's Cloud SQL Proxy.
You can embed the Cloud SQL Proxy directly in your container. Then you can use 127.0.0.1 as the hostname part of the connection string. This requires that you create a shell script as your Cloud Run entrypoint to launch both the proxy and your application. Based on your scenario, I recommend this method.
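A minimal entrypoint sketch for that approach, assuming the proxy binary is copied into the image and the instance connection name arrives via an env var:
#!/bin/sh
# Start the Cloud SQL Proxy in the background, listening on 127.0.0.1:5432
/usr/local/bin/cloud_sql_proxy -instances="${INSTANCE_CONNECTION_NAME}"=tcp:5432 &
# Give the proxy a moment to open the local port before the app connects
sleep 2
# Hand off to the main process (Hasura in this scenario)
exec graphql-engine serve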
The Cloud SQL Proxy is written in Go and the source code is published.
If you choose to embed the proxy, don't forget to add the Cloud SQL Client role to the Cloud Run service account.

Related

Invoking the pg_repack extension in GCP Cloud SQL

We have installed the pg_repack extension in Cloud SQL by following the guide:
https://cloud.google.com/sql/docs/postgres/extensions#pg_repack
The installation of the extension works fine and it shows up in the list of extensions when running \dx.
We then want to invoke the extension, but it is unclear from where this should be done. The docs just say to "run the command":
pg_repack -h <hostname> -d testdb -U csuper1 -k -t t1
We can't find anywhere in our project from which this command can be invoked, though. Do we have to set up a Compute Engine instance for this, or is there some other way?
We only use Cloud Run for running our code at the moment and would like to keep things as small/simple as possible.
Our solution: we built a Docker image that wrapped pg_repack with HTTP, and then deployed it as a Cloud Run service. This enabled us to invoke pg_repack periodically using Cloud Scheduler.
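For anyone curious, the periodic trigger could be wired up roughly like this (job name, URL path, and schedule are made up for illustration):
# Hypothetical: call the pg_repack wrapper service every Sunday at 03:00
gcloud scheduler jobs create http pg-repack-weekly \
  --schedule="0 3 * * 0" \
  --uri="https://pg-repack-runner-xxxxxxxx-xx.a.run.app/repack" \
  --http-method=POST \
  --oidc-service-account-email=<PROJECT_NUMBER>-compute@developer.gserviceaccount.com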

Why am I getting "unsupported network unix" with Cloud SQL Proxy, when I'm specifying TCP?

I'm having issues when trying to connect to my Cloud SQL instance. I created a SQL Server instance, downloaded the Cloud SQL Proxy, and everything seems to start connecting, but I keep getting the following error:
errors parsing config:
invalid "instance-connection-name": unsupported network: unix
I'm specifying the TCP port to use, but it still complains about Unix. Here is the command I'm using when trying to connect (I replaced the actual instance connection name for privacy/security):
./cloud_sql_proxy.exe -instances=[instance-connection-name]=tcp:3306
Any help would be appreciated.
Thanks!
I tried this and it works
Rename cloud_sql_proxy_xxx to cloud_sql_proxy
Open cmd in your cloud_sql_proxy's location
Run the following command, without the [ ]: cloud_sql_proxy -instances=[project:region:instance-name]=tcp:1433
From Connecting to a Cloud SQL for SQL Server using a Cloud SQL Proxy:
Depending on your language and environment, you can start the proxy using either TCP sockets or Unix sockets.
TCP sockets:
Copy your instance connection name from the Instance details page
For example: myproject:us-central1:myinstance.
If you are using a service account to authenticate the proxy, note the location on your client machine of the private key file that was created when you created the service account.
Start the proxy.
Some possible proxy invocation strings:
a) Using Cloud SDK authentication:
./cloud_sql_proxy -instances=<INSTANCE_CONNECTION_NAME>=tcp:1433
The specified port must not already be in use, for example, by a local database server.
b) Using a service account and explicit instance specification (recommended for production environments):
./cloud_sql_proxy -instances=<INSTANCE_CONNECTION_NAME>=tcp:1433 \
-credential_file=<PATH_TO_KEY_FILE> &
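Once the proxy is listening, a client can connect to 127.0.0.1 as if SQL Server were local; a hypothetical sqlcmd invocation (user and password are placeholders):
sqlcmd -S tcp:127.0.0.1,1433 -U <DB_USER> -P <DB_PASS>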

How to securely connect to Cloud SQL from Cloud Run?

How do I connect to the database on Cloud SQL without having to add my credentials file inside the container?
UPDATE: to connect to Cloud SQL from Cloud Run see the official documentation
Cloud SQL is now supported by the fully managed version of Cloud Run (Cloud Run on GKE users were already able to use Cloud SQL using a private IP)
To get started:
if you do not have one already, create a Cloud SQL instance.
make sure that the Cloud SQL admin API is enabled
deploy a new revision of your Cloud Run service with gcloud alpha and the following flag:
$ gcloud run services update --add-cloudsql-instances [INSTANCE_CONNECTION_NAME]
where INSTANCE_CONNECTION_NAME is of the form project:region:instancename.
When you do this, Cloud Run will activate and configure the Cloud SQL proxy for you. You should then connect to it via the /cloudsql/[INSTANCE_CONNECTION_NAME] Unix socket.
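For example, you could sanity-check that socket path with psql (all values are placeholders):
psql "host=/cloudsql/[INSTANCE_CONNECTION_NAME] user=<DB_USER> dbname=<DB_NAME>"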
CONNECTING FROM CLOUD RUN (fully managed) TO CLOUD SQL USING UNIX DOMAIN SOCKETS (Java)
At this time Cloud Run (fully managed) does not support connecting to the Cloud SQL instance using TCP. Your code should not try to access the instance using an IP address such as 127.0.0.1 or 172.17.0.1.
link
1. Install and initialize the Cloud SDK
2. Update components:
gcloud components update
3. Create a new project
gcloud projects create run-to-sql
gcloud config set project run-to-sql
gcloud projects describe run-to-sql
4. Enable billing
gcloud alpha billing projects link run-to-sql --billing-account XXXXXX-XXXXXX-XXXX
5. Set the compute project-info metadata:
gcloud compute project-info describe --project run-to-sql
gcloud compute project-info add-metadata --metadata google-compute-default-region=europe-west2,google-compute-default-zone=europe-west2-b
6. Enable the Cloud SQL Admin API:
gcloud services enable sqladmin.googleapis.com
7. Create a Cloud SQL instance with a public IP
#Create the SQL instance in the same region as the App Engine application
gcloud --project=run-to-sql beta sql instances create database-external --region=europe-west2
#Set the password for the "root@%" MySQL user:
gcloud sql users set-password root --host=% --instance database-external --password root
#Create a user
gcloud sql users create user_name --host=% --instance=database-external --password=user_password
#Create a database
gcloud sql databases create user_database --instance=database-external
gcloud sql databases list --instance=database-external
gcloud sql instances list
Cloud Run (fully managed) uses a service account to authorize your connections to Cloud SQL. This service account must have the correct IAM permissions to successfully connect. Unless otherwise configured, the default service account is in the format PROJECT_NUMBER-compute@developer.gserviceaccount.com.
8. Ensure that the service account for your service has one of the following IAM roles: Cloud SQL Client (preferred)
gcloud iam service-accounts list
gcloud projects add-iam-policy-binding run-to-sql --member serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com --role roles/cloudsql.client
9. Clone the java-docs-samples repository
git clone https://github.com/GoogleCloudPlatform/java-docs-samples.git
cd java-docs-samples/cloud-sql/mysql/servlet/
ls
#Dockerfile pom.xml README.md src
10. Inspect the file that handles the connection to Cloud SQL
cat src/main/java/com/example/cloudsql/ConnectionPoolContextListener.java
11. Containerize the app and upload it to Container Registry
gcloud builds submit --tag gcr.io/run-to-sql/run-mysql
12. Deploy the service to Cloud Run
gcloud run deploy run-mysql --image gcr.io/run-to-sql/run-mysql
13. Configure the service for use with Cloud SQL
gcloud run services update run-mysql --add-cloudsql-instances run-to-sql:europe-west2:database-external --set-env-vars CLOUD_SQL_CONNECTION_NAME=run-to-sql:europe-west2:database-external,DB_USER=user_name,DB_PASS=user_password,DB_NAME=user_database
14. Test it
curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" https://run-mysql-xxxxxxxx-xx.x.run.app
SUCCESS!
I was facing an issue connecting from a dockerized FastAPI application to Cloud SQL via private IP. I took the following three steps to resolve my issue:
Ensure your application is using the proper database connection string.
Sanity check; always do this first. You don't want to spend hours researching a solution without first ruling out a wrong connection string.
When testing (and only when testing): consider logging the DB connection string on app init so you can explicitly confirm your connection string is correct.
Grant the Cloud SQL Client role to the Cloud Run default service account.
This role contains the following permissions:
cloudsql.instances.connect
cloudsql.instances.get
Create a VPC connector within the network of the database (see the documentation), and assign the VPC connector to the Cloud Run service; a sketch follows below.
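A rough sketch of step 3 with gcloud (connector name, region, and IP range are assumptions, not values from my setup):
# Create a Serverless VPC Access connector in the database's network
gcloud compute networks vpc-access connectors create my-connector \
  --region=us-central1 --network=default --range=10.8.0.0/28
# Attach the connector to the Cloud Run service
gcloud run services update my-service --vpc-connector my-connector --region us-central1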

How to access Cloud SQL from Dataproc?

I have a Dataproc cluster and I'd like the cluster to access a Cloud SQL instance. When I created the cluster I assigned the scope --scopes sql-admin, but after reading the Cloud SQL documentation it looks like I need to connect through a proxy. How can I configure this for access from Dataproc?
UPDATE:
Until integration comes out of the box (@vadim's answer), I can get this working by using the Cloud SQL Proxy in my Dataproc initialization script:
wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64
mv cloud_sql_proxy.linux.amd64 cloud_sql_proxy
chmod +x cloud_sql_proxy
nohup ./cloud_sql_proxy -dir=/cloudsql --instances=my-project:us-central1:mysql-instance=tcp:3307 > cloud_proxy_nohup.log &
(note: port 3306 is already in use, so I'm using 3307 here)
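Jobs on the cluster can then treat the database as if it were local, e.g. with the stock client (credentials are placeholders):
mysql --host=127.0.0.1 --port=3307 --user=<DB_USER> -p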
Using a VPC with a private IP between Cloud SQL and Dataproc seems to be a good option. A proxy should no longer be needed.
There's a pending pull request for a Dataproc initialization action that will install the Cloud SQL Proxy on all the nodes in the cluster:
https://github.com/GoogleCloudPlatform/dataproc-initialization-actions/pull/47/commits/ade93cc25d72c33e176840ddaa50671e5ed8ed4a

Install Chef Server 11 with AWS RDS

Now that AWS has a PostgreSQL service in RDS, I tried to install Chef Server 11 with a PostgreSQL RDS instance by editing attributes in /opt/chef-server/embedded/cookbooks/chef-server/attributes/default.rb:
default['chef_server']['postgresql']['vip'] = "rds instance endpoint"
and importing the database with the following command:
/opt/chef-server/embedded/bin/psql -h "rds instance endpoint" -p 5432 -U "user_name" "database_name" < /opt/chef-server/embedded/service/erchef/lib/chef_db-f086a97/priv/pgsql_schema.sql
But I am not able to achieve that; chef-server-ctl reconfigure gives an error:
curl -sf http://127.0.0.1:8000/_status returned 7
Please help me to configure chef server with RDS instance.
I think I was able to solve my problem. It was because of the encrypted password in the erchef config file. I edited
"/opt/chef-server/embedded/cookbooks/chef-server/templates/default/echef.config.rb"
accordingly, and it seems to be working perfectly fine now.
Thanks
The chef-server-rds cookbook on github can be used to install Chef Server 11 with AWS RDS.
Given an IAM key and secret, it will provision the RDS instance if it doesn't exist in the account, initialize the Chef schema, install the appropriate platform-specific chef-server Omnibus package, and perform the initial configuration of Chef Server on an AWS EC2 Ubuntu instance.
Using Postgres on Amazon RDS offloads DB resource use away from the chef-server host. It also enables various DB functions like scaling, backup, and restore to be done independently of the chef-server installation. Similar configurations can be written for other DB service providers.