How can I connect a Cloud Function to Cloud SQL?
import psycopg2

def hello_gcs(event, context):
    print("Imported")
    conn = psycopg2.connect("dbname='db_bio' user='postgres' host='XXXX' password='aox199'")
    print("Connected")
    file = event
    print(f"Processing file: {file['name']}.")
I could not connect to Cloud SQL for PostgreSQL; please help.
Google Cloud Functions provides a Unix socket that automatically authenticates connections to your Cloud SQL instance if it is in the same project. This socket is located at /cloudsql/[instance_connection_name].
conn = psycopg2.connect(host='/cloudsql/[instance_connection_name]', dbname='my-db', user='my-user', password='my-password')
You can find the full documentation page (including instructions for authentication from a different project) here.
You could use the Python example mentioned on this public issue tracker, or use the Node.js code shown in this document, to connect to Cloud SQL from Cloud Functions.
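Putting that together with the code from the question, a minimal sketch might look like this (the instance connection name, database name, and credentials are placeholders to replace with your own):

import psycopg2

def hello_gcs(event, context):
    # Connect over the Unix socket that Cloud Functions exposes for Cloud SQL
    # instances in the same project; no IP whitelisting is needed.
    conn = psycopg2.connect(
        host='/cloudsql/PROJECT_ID:REGION:INSTANCE_NAME',
        dbname='db_bio',
        user='postgres',
        password='my-password',
    )
    print("Connected")
    file = event
    print(f"Processing file: {file['name']}.")
    conn.close()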
Related
I have a PostgreSQL Google Cloud SQL instance and I'm trying to connect a FastAPI application running in Google Cloud Run to it; however, I'm getting ConnectionRefusedError: [Errno 111] Connection refused errors.
My application uses the databases package for async database connections:
database = databases.Database(sqlalchemy_database_uri)
Which then tries to connect on app startup through:
@app.on_event("startup")
async def startup() -> None:
    if not database.is_connected:
        await database.connect()  # <--- this is where the error is raised
The documentation here suggests forming the connection string like so:
"postgresql+psycopg2://user:pass#/dbname?unix_sock=/cloudsql/PROJECT_ID:REGION:INSTANCE_NAME/.s.PGSQL.5432"
I've tried several different variations of the url, with host instead of unix_sock as the sqlalchemy docs seem to suggest, as well as removing the .s.PGSQL.5432 at the end as I've seen some other SO posts suggest, all to no avail.
I've added the Cloud SQL connection to the instance in the Cloud Run dashboard and added a Cloud SQL Client role to the service account.
I'm able to connect to the databases locally with the Cloud SQL Auth Proxy.
I'm at a bit of a loss on how to fix this, or even how to debug it, as there doesn't seem to be any easy way to SSH into the container and try things out. Any help would be greatly appreciated, thanks!
UPDATE
I'm able to connect directly with sqlalchemy with:
from sqlalchemy import create_engine
engine = create_engine(url)
engine.connect()
Where url is any of these formats:
"postgresql://user:pass#/db_name?host=/cloudsql/PROJECT_ID:REGION:INSTANCE_NAME"
"postgresql+psycopg2://user:pass#/db_name?host=/cloudsql/PROJECT_ID:REGION:INSTANCE_NAME"
"postgresql+pg8000://user:pass#/db_name?unix_sock=/cloudsql/PROJECT_ID:REGION:INSTANCE_NAME/.s.PGSQL.5432"
Is there something about the databases package's async nature that's causing issues?
Turns out this was a bug in the databases package. It should now be resolved by https://github.com/encode/databases/pull/423
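For reference, once the installed version of databases includes that fix, a minimal FastAPI sketch using the unix-socket URL from the update above might look like this (my own sketch, not code from the thread; the project, credentials, and database name are placeholders):

import databases
from fastapi import FastAPI

DATABASE_URL = (
    "postgresql://user:pass@/db_name"
    "?host=/cloudsql/PROJECT_ID:REGION:INSTANCE_NAME"
)

database = databases.Database(DATABASE_URL)
app = FastAPI()

@app.on_event("startup")
async def startup() -> None:
    # Connect once per process; Cloud Run reuses the container between requests.
    if not database.is_connected:
        await database.connect()

@app.on_event("shutdown")
async def shutdown() -> None:
    if database.is_connected:
        await database.disconnect()

@app.get("/healthz")
async def healthz():
    # Simple round trip to confirm the connection works.
    row = await database.fetch_one(query="SELECT 1 AS ok")
    return {"db": row["ok"]}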
I've built a Vapor app and am trying to deploy it to Google Cloud Run. At the same time I'm trying to connect my app to a Cloud SQL instance using unix sockets, as documented here. There was also an issue opened on Vapor's postgres library here that mentions connecting successfully from Cloud Run over a unix socket.
Setup
My code looks like this:
let postgresConfig = PostgresConfiguration(
    unixDomainSocketPath: Environment.get("DB_SOCKET_PATH") ?? "/cloudsql",
    username: Environment.get("DATABASE_USERNAME") ?? "vapor_username",
    password: Environment.get("DATABASE_PASSWORD") ?? "vapor_password",
    database: Environment.get("DATABASE_NAME") ?? "vapor_database")

app.databases.use(.postgres(configuration: postgresConfig), as: .psql)
I've also tested to see if the environment variables are there using this snippet, which didn't throw the fatalError.
if Environment.get("DB_SOCKET_PATH") == nil {
    fatalError("No environment variables found...")
}
My DB_SOCKET_PATH looks like cloudsql/project-id:us-central1:sql-instance-connection-name/.s.PGSQL.5432
I've also set up the correct user and database in the Cloud SQL instance, as well as enabled public ip connections.
What's happening?
When I deploy this image to Google Cloud Run, it throws the error: Fatal error: Error raised at top level: connect(descriptor:addr:size:): No such file or directory (errno: 2): file Swift/ErrorType.swift, line 200
What have I tried?
I tried testing unix socket connections on my local machine, and I found that when I used an incorrect unix socket path, it would throw this same error.
I also tried commenting out all migration and connection code, which fixed the error, and narrowed it down to the PostgresConfiguration code.
What am I trying to do?
I'm trying to figure out how to connect my Cloud Run app to a Cloud SQL instance. Am I missing a configuration somewhere on my instances, or am I making a mistake with my unix socket path or Vapor implementation?
When using Google App Engine Flexible + Python, how can you use psycopg2 directly (without SQLAlchemy) to access a Cloud SQL PostgreSQL database?
Hello myselfhimself,
Here is a solution:
In your app.yaml, add an environment variable that imitates the SQLAlchemy URI from the Google App Engine Flexible Python Cloud SQL documentation, but without the +psycopg2 dialect marker (i.e. a plain postgresql:// scheme):
env_variables:
  PSYCOPG2_POSTGRESQL_URI: postgresql://user:password@/databasename?host=/cloudsql/project-name:region:database-instance-name
In any Python file to be deployed and run, pass that environment variable directly to psycopg2's connect call. This leverages psycopg2.connect's ability to hand the URI straight to the underlying PostgreSQL client library, libpq (this might not work with older PostgreSQL versions).
import os
import psycopg2
conn = psycopg2.connect(os.environ['PSYCOPG2_POSTGRESQL_URI'])
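From there, usage is ordinary psycopg2. A small sketch (the table name is hypothetical):

import os
import psycopg2

conn = psycopg2.connect(os.environ['PSYCOPG2_POSTGRESQL_URI'])
with conn, conn.cursor() as cur:
    # 'my_table' is a made-up table name; substitute one of your own.
    cur.execute("SELECT COUNT(*) FROM my_table")
    print(cur.fetchone()[0])
conn.close()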
When working locally with the Cloud SQL Proxy tool, make sure you set the URI environment variable first if your local server is not aware of app.yaml:
export PSYCOPG2_POSTGRESQL_URI="postgresql://user:password@/databasename?host=/cloudsql/project-name:region:database-instance-name"
./cloud_sql_proxy -instances=project-name:region:database-instance-name=tcp:5432
# somewhat later:
python myserver.py
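One caveat, which is my own assumption rather than part of the answer above: started in TCP mode like this, the proxy listens on localhost:5432 and does not create the /cloudsql socket directory, so for purely local runs you may need to point the URI at localhost instead. A minimal sketch:

import psycopg2

# Local development only: the Cloud SQL Auth Proxy is listening on tcp:5432.
# The credentials and database name are placeholders.
conn = psycopg2.connect("postgresql://user:password@localhost:5432/databasename")
print("Connected via the local proxy")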
I hope it will work for you too :)
I'm trying to connect to my Heroku DB and I'm getting the following series of errors related to SSL:
SSL connection to data store using host matching failed. Retrying without host matching.
SSL connection to data store failed. Retrying without SSL.
Check that your connection definition references your JDBC database with correct URL syntax, username, and password. org.postgresql.util.PSQLException: Connection attempt timed out.
I managed to connect to the DB with DBeaver and had similar SSL problems until I set the SSL Factory to org.postgresql.ssl.NonValidatingFactory, but Glue doesn't offer any SSL options.
The DB is actually hosted on AWS; the connection URL is:
jdbc:postgresql://ec2-52-19-160-2.eu-west-1.compute.amazonaws.com:5432/something
(P.S. the AWS Glue forums are useless! They don't seem to be answering anyone's questions.)
I was having the same issue, and it seems that Heroku requires a newer JDBC driver than the one that Amazon provides. See this thread:
AWS Data Pipelines with a Heroku Database
Also, it seems that you can use the JDBC driver directly from your Python scripts. See here:
https://dzone.com/articles/extract-data-into-aws-glue-using-jdbc-drivers-and
So it seems like you need to download a newer driver, upload it to S3, and then use it manually in your scripts as mentioned here:
https://gist.github.com/saiteja09/2af441049f253d90e7677fb1f2db50cc
Good luck!
UPDATE: I was able to use the following code snippet in a Glue Job to connect to the data. I had to upload the Postgres driver to S3 and then add it to the path for my Glue Job. Also, make sure that either the JARs are public or you've configured the IAM user's policy so that it has access to the bucket.
%pyspark
import sys
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.transforms import *

glueContext = GlueContext(SparkContext.getOrCreate())
spark = glueContext.spark_session

source_df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://<hostname>:<port>/<database>")
    .option("dbtable", "<table>")
    .option("driver", "org.postgresql.Driver")
    .option("sslfactory", "org.postgresql.ssl.NonValidatingFactory")
    .option("ssl", "true")
    .option("user", "<username>")
    .option("password", "<password>")
    .load()
)

dynamic_dframe = DynamicFrame.fromDF(source_df, glueContext, "dynamic_df")
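From there you can treat dynamic_dframe like any other DynamicFrame, for example writing it out to S3 as Parquet (a sketch with a made-up bucket path):

# Hypothetical output location; replace with your own bucket and prefix.
glueContext.write_dynamic_frame.from_options(
    frame=dynamic_dframe,
    connection_type="s3",
    connection_options={"path": "s3://my-bucket/postgres-export/"},
    format="parquet",
)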
I have exported my Google Cloud SQL instance to Google Cloud Storage. I exported the file in compressed format (.gz) to a Cloud Storage bucket, then downloaded it to my system and extracted it using 7-Zip. How can I open it in MySQL Workbench to see the database and values? Its file type is shown as the instance name.
The exported data from Cloud SQL is similar to what you get from mysqldump. It's basically a series of SQL statements that, when run on another server, will execute all the commands needed to go from a clean state to the exported state.
I'm not very familiar with MySQL Workbench, but from what I've read it allows you to manage your MySQL database, browsing tables and data. So you may need to upload your exported data to another MySQL server, for example a local one running on your computer.
Note that you could also connect directly from MySQL Workbench to your Cloud SQL instance by requesting an IP for your instance and authorizing the network that you'll connect from.
You can connect directly to your Cloud SQL instance. All you need to do is whitelist your IP address and connect through MySQL Workbench as if it's a normal database instance.
You can whitelist your IP by:
1. Navigate to https://console.cloud.google.com/sql and select your project.
2. Go to the Connections tab and Add Network in the Public IP section.
3. Use the connection details on the Overview tab to connect.
Then you can browse your database through Workbench as if it was a local instance.
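If you want to sanity-check the connection outside Workbench, here is a minimal Python sketch using mysql-connector-python (the library choice, IP placeholder, and credentials are my own assumptions):

import mysql.connector

# Replace the placeholders with your instance's public IP and credentials.
conn = mysql.connector.connect(
    host="INSTANCE_PUBLIC_IP",
    user="root",
    password="my-password",
    database="my_database",
)
cur = conn.cursor()
cur.execute("SHOW TABLES")
for (table_name,) in cur:
    print(table_name)
conn.close()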