Prisma DATABASE_URL error (Cloud Run + Cloud SQL)

I use Prisma with Cloud Run & Cloud SQL. After providing DATABASE_URL in schema.prisma, it throws an error at runtime:
Can't reach database server at `(/cloudsql/project-name:us-east1:database-id)`:`5432`
Please make sure your database server is running at `(/cloudsql/project-name:us-east1:database-id)`:`5432`
Database: Postgres
Provided DATABASE_URL: postgresql://username:password@localhost/databasename?host=(/cloudsql/project-name:us-east1:database-id)
What is wrong with the connection? Did I fail to construct DATABASE_URL correctly?

I removed the brackets () around the host parameter /cloudsql/project-name:us-east1:database-id and everything started to work as expected.
Before (with brackets)
postgresql://username:password@localhost/databasename?host=(/cloudsql/project-name:us-east1:database-id)
After (without brackets)
postgresql://username:password@localhost/databasename?host=/cloudsql/project-name:us-east1:database-id
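For reference, a minimal Node/TypeScript sketch of assembling this socket-based URL (the env var names are hypothetical); note there are no brackets around the socket path:
// DB_USER, DB_PASS, and DB_NAME are hypothetical env vars; substitute your own
const user = process.env.DB_USER;
const password = process.env.DB_PASS;
const dbName = process.env.DB_NAME;
const instance = 'project-name:us-east1:database-id'; // your Cloud SQL connection name
// "localhost" is required by Prisma but ignored; the real target is the ?host= socket path
export const databaseUrl =
  `postgresql://${user}:${password}@localhost/${dbName}?host=/cloudsql/${instance}`;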

Related

Prisma DB Can't connect to AWS RDS

I have a Next.js project that's using Prisma for the ORM. I'm able to connect just fine to my local Postgres db, but I'm getting this error when running npx prisma migrate:
Error: P1001: Can't reach database server at db-name.*.us-west-2.rds.amazonaws.com:5432.
schema.prisma:
datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
  // url   = "postgresql://master_username:master_password@aws_host:5432/db_name"
}
The RDS db is currently public and I'm positive that I've copied over the RDS credentials correctly. There doesn't seem to be anything else I should be including for the connection to work, but I'm not getting any other info as to why I can't reach the db server.
Seems like you have to replace db-name.*.us-west-2.rds.amazonaws.com with the host of your actual database, unless you replaced it for the purpose of asking this question. Specifically the part where it says db-name.*.
Docs: https://www.prisma.io/docs/reference/api-reference/error-reference#common
P1001 indicates that it couldn't find the database given the connection string, NOT necessarily that the credentials you provided were wrong. Make sure you're specifying the correct database name/host and whatever else you need to make it work for AWS.
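To rule out network reachability before suspecting credentials, a minimal TCP check along these lines can help (node:net is built in; the hostname is a placeholder):
import net from 'node:net';
// attempt a raw TCP connection to the RDS endpoint on the Postgres port
const socket = net.connect({ host: 'db-name.xxxx.us-west-2.rds.amazonaws.com', port: 5432 }, () => {
  console.log('TCP connection succeeded; host and port are reachable');
  socket.end();
});
socket.setTimeout(5000, () => {
  console.error('Timed out; check security groups and public accessibility');
  socket.destroy();
});
socket.on('error', (err) => console.error('TCP connection failed:', err.message));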
Somehow I was able to connect to RDS after deleting and re-creating the DB for the third time. I confirmed the connection through pgAdmin, then tried again with my app deployed to Vercel.

Apache Airflow Init Db

I am trying to initialize a database for my project, which is based on Apache Airflow. I am not too familiar with what happened, but I changed the value in my airflow.cfg file to sql_alchemy_conn = postgresql+psycopg2:////Users/gabeuy/airflow/airflow.db. Then when I saved the changes and ran airflow db init, an error occurred and the database was not initialized.
I tried looking up different ways to change it, and ensured that I had Postgres and psycopg2 installed, but the command still resulted in an error. I was expecting it to run so that I could access the Airflow db on localhost with the DAGs.
Your sql_alchemy_conn is pointing to a local file path (indicating a SQLite DB), but the protocol is indicating a PostgreSQL DB. The error is telling you it's missing a password, which is required by PostgreSQL.
For PostgreSQL, the expected URL format is:
postgresql+psycopg2://<user>:<password>@<host>/<db>
And for a SQLite DB, the expected URL format is:
sqlite:////<path/to/airflow.db>
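For example, using the path from the question and hypothetical Postgres credentials, the two settings would look like:
sql_alchemy_conn = postgresql+psycopg2://airflow_user:airflow_pass@localhost/airflow
sql_alchemy_conn = sqlite:////Users/gabeuy/airflow/airflow.db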
A SQLite DB is convenient for testing purposes. A SQLite DB is stored as a single file on your computer which makes it easy to set up (airflow db init will generate the file if it doesn't exist). A PostgreSQL DB takes a bit more work to set up, but is generally advised for a production scenario.
For more information about Airflow database configuration, see: https://airflow.apache.org/docs/apache-airflow/stable/howto/set-up-database.html.
And for more information about airflow db CLI commands, see: https://airflow.apache.org/docs/apache-airflow/stable/cli-and-env-variables-ref.html#db.

What is the Postgres DATABASE_URL to connect Cloud Run to Postgres on Cloud SQL?

I am trying to connect my Cloud Run app to Cloud SQL; here is my cloudbuild.yaml:
steps:
  - name: 'gcr.io/cloud-builders/npm'
    env:
      - DATABASE_URL=$_DATABASE_URL
    entrypoint: npx
    dir: './server'
    args:
      - 'prisma'
      - 'migrate'
      - 'deploy'
However, I keep on getting the error Please make sure your database server is running at '/cloudsql/learninfra001:us-central1:learninfra001-postgres':'5432'.
Here is the _DATABASE_URL I use for the substitution variable: postgresql://postgres:password@localhost/db?schema=public&host=/cloudsql/learninfra001:us-central1:learninfra001-postgres
I have made sure the following:
The default cloud run service account has Cloud SQL Client role
The database db is created
Within the cloudrun service, under connections, the Cloud SQL connections is pointing to the correct instance (learninfra001:us-central1:learninfra001-postgres)
Using a whitelisted public IP, I am able to connect to the DB. However, I just can't seem to get Cloud Run to work. Is there anything else I could check? Or is there a way to get more logging to see why it is not connecting?
In short, you'll need to enable a Cloud SQL connection to your Cloud SQL instance from Cloud Build.
See https://cloud.google.com/sql/docs/mysql/connect-build for details.
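One common pattern (a sketch, not verified against your project) is to download and start the Cloud SQL Auth Proxy inside the build step, so the /cloudsql socket exists when the migration runs:
steps:
  - name: 'gcr.io/cloud-builders/npm'
    entrypoint: bash
    dir: './server'
    env:
      - DATABASE_URL=$_DATABASE_URL
    args:
      - '-c'
      - |
        # fetch the (v1) Cloud SQL Auth Proxy and expose the instance as a unix socket
        wget -q https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O /workspace/cloud_sql_proxy
        chmod +x /workspace/cloud_sql_proxy
        mkdir -p /cloudsql
        /workspace/cloud_sql_proxy -dir=/cloudsql -instances=learninfra001:us-central1:learninfra001-postgres &
        sleep 5  # give the proxy a moment to start
        npx prisma migrate deploy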

Connecting to postgres on supabase via pgBouncer from prisma

Problem
running out of database connections in prod, leading to errors like this:
Error:
Invalid `prisma.queryRaw()` invocation:
Timed out fetching a new connection from the connection pool. More info: http://pris.ly/d/connection-pool (Current connection pool timeout: 10, connection limit: 5)
at Object.request (/usr/src/app/node_modules/@prisma/client/runtime/index.js:45629:15)
at async Proxy._request (/usr/src/app/node_modules/@prisma/client/runtime/index.js:46456:18)
Situation
multiple API containers in Google Cloud Run, running a node/express/prisma API
using Supabase's hosted postgres. In supabase Settings > Database, Connection Pooling is enabled.
in db connection string, using :6543/postgres?pgbouncer=true
Attempt to diagnose
In Supabase, under Database > Roles, I can see a list of roles and the number of connections for each. pgBouncer has 0, and the role my application uses has several.
If I query pg_stat_activity, I can see connections for the usename used by my application, and client_addr values representing IP addresses for a couple of different container instances. Are these "forwarded on" from pgBouncer, or have they bypassed pgBouncer entirely?
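A minimal version of that check looks something like this (the role name is hypothetical):
select usename, client_addr, application_name, state
from pg_stat_activity
where usename = 'app_role';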
I am not familiar with what either of these should look like if I were using pgBouncer correctly so it's hard for me to tell what's going on.
I assume this means that I either haven't configured pgBouncer correctly, or I'm not connecting to it properly, or both. I'd be really grateful if someone could point out how I could either check or fix my connection to pgBouncer and clarify what I should see in pg_stat_activity if I was correctly connected to pgBouncer. Thanks.
Figured out what's going wrong here, so writing out how I fixed it in case anyone else runs into this issue.
Better understanding of the problem
in my prisma schema file I'm getting my database url from the env
datasource db {
  provider = "postgresql"
  url      = env("SUPABASE_POSTGRES_URL")
}
and when I'm instantiating the prisma client I'm using the same variable
export const prisma = new PrismaClient({
  datasources: {
    db: {
      url: process.env.SUPABASE_POSTGRES_URL,
    },
  },
});
I have a build trigger in Google Cloud Build that builds containers when branches are merged into certain other branches, e.g. when PRs are merged into master, new containers are built and deployed to prod.
In the build trigger, the SUPABASE_POSTGRES_URL value is set in the env, using :5432 which connects directly to postgres, bypassing pgBouncer. This is a requirement for prisma migrations which can't be run through pgBouncer.
The Google Cloud Run container env vars specify a different value for SUPABASE_POSTGRES_URL; however, it looks like this is not being used, and instead the direct-to-postgres :5432 value is used while the app is running to connect to the db and run queries - so pgBouncer was permanently bypassed.
Solution
Where the Prisma client is instantiated, I'm now using a second env var. It turns out that Prisma uses the env var referenced in the schema file for migrations, and the db url given at client instantiation for queries while the app is running, and you can happily have two completely separate values for these two URLs.
export const prisma = new PrismaClient({
  datasources: {
    db: {
      url: process.env.SUPABASE_PGBOUNCER_URL,
    },
  },
});
Now, SUPABASE_POSTGRES_URL is still populated from the build trigger, but it doesn't get used at runtime; instead I set SUPABASE_PGBOUNCER_URL in the Google Cloud Run env vars, and that gets used during the Prisma client instantiation, so queries are run through pgBouncer.
Result
Effective Prisma migrations direct to postgres
Effective connection pooling by running queries through pgBouncer
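For reference, the two URLs differ only in the port and the pgbouncer flag, roughly like this (host and credentials are placeholders):
SUPABASE_POSTGRES_URL=postgresql://postgres:password@db.xxxx.supabase.co:5432/postgres
SUPABASE_PGBOUNCER_URL=postgresql://postgres:password@db.xxxx.supabase.co:6543/postgres?pgbouncer=true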

What could I be missing with Prisma client, Cloud Run, and Cloud SQL - my Prisma client can't socket-connect to my Cloud SQL instance DB?

Background
I have a NestJS project with Prisma ORM, and I am continually receiving the error:
PrismaClientInitializationError: Can't reach database server at `localhost`:`5432`
This is happening during the Cloud Build Deploy step.
Since this is a containerized application (attempting to) run in a Cloud Run instance, I'm supposed to use a socket connection. Here's the documentation from Prisma on connecting to a Postgres DB through a socket connection: https://www.prisma.io/docs/concepts/database-connectors/postgresql#connecting-via-sockets
Connecting via sockets
To connect to your PostgreSQL database via sockets, you must add a host field as a query parameter to the connection URL (instead of setting it as the host part of the URI). The value of this parameter then must point to the directory that contains the socket, e.g.: postgresql://USER:PASSWORD@localhost/database?host=/var/run/postgresql/
Note that localhost is required, the value itself is ignored and can be anything.
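Applied to Cloud Run, that format would look something like this (all values are placeholders, mirroring the working URLs elsewhere on this page):
postgresql://USER:PASSWORD@localhost/DATABASE?host=/cloudsql/PROJECT:REGION:INSTANCE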
I've done this to the letter, as described in the Cloud SQL documentation, with the exception that I percent-encoded the path to the directory containing the socket. I've tried both including and excluding the trailing slash.
So my host var looks like this, mapped from the percent-encoded values:
/cloudsql/<MY CLOUD SQL CONNECTION NAME>/<DB>
I've read over the Cloud Run documentation, and in my mind, I should expect a different error if the instance itself can't connect to the Cloud SQL instance. I've followed the "Make sure you have the appropriate permissions and connection" section from the documentation a few times now.
Is there anything obvious that I'm missing? Am I wrong to expect a different error if the Cloud Run instance simply can't connect to the Cloud SQL instance?
Things I've tried & things I know
I CAN connect directly to the Cloud SQL instance locally through psql
I CAN run a local server with the Cloud SQL instance public IP and establish a client connection & interact with the database
I CAN successfully create an image and run a container from that image locally
My big concern
It doesn't make sense to me in which order things should connect to the Cloud SQL instance. To me, the Cloud Run - Cloud SQL connection MUST be established before the application running inside the Cloud Run instance can establish its connection through the socket to the Cloud SQL instance. Am I thinking through that correctly?