I have a server built with sam local start-api, using a PostgreSQL database running in a Docker container.
I'd like to invoke a Lambda function that sends emails via AWS SES when a trigger fires in the PostgreSQL database. Is this possible to achieve?
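On RDS/Aurora this kind of thing is typically done by calling the aws_lambda extension from a trigger function, but that extension isn't available in a plain Docker Postgres container. A minimal sketch of one possible local substitute (the signups table and email_events channel are hypothetical): the trigger emits a NOTIFY, and a small listener process outside the database receives it and POSTs to the endpoint that sam local start-api exposes.
-- Trigger function: publish the new row as JSON on a notification channel.
CREATE OR REPLACE FUNCTION notify_email_event() RETURNS trigger AS $$
BEGIN
  PERFORM pg_notify('email_events', row_to_json(NEW)::text);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Fire after every insert; a client running LISTEN email_events picks these up.
CREATE TRIGGER email_on_insert
AFTER INSERT ON signups
FOR EACH ROW EXECUTE FUNCTION notify_email_event();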
Overall I'm trying to create a Datastream Connection to a Postgres database in Cloud SQL.
As I'm trying to configure it all through Terraform, I'm stuck on how I should create a replication slot. This guide explains how to do it through the Postgres client by running SQL commands, but I thought there might be a way to do it in the Terraform configuration directly.
Example SQL that I would like to replicate in Terraform:
ALTER USER [CURRENT_USER] WITH REPLICATION;
CREATE PUBLICATION [PUBLICATION_NAME] FOR ALL TABLES;
SELECT PG_CREATE_LOGICAL_REPLICATION_SLOT('[REPLICATION_SLOT_NAME]', 'pgoutput');
If not, does anyone know how to run the Postgres SQL commands against the Cloud SQL database through Terraform?
I have set up the Datastream and Postgres connection for all other parts. I'm expecting that there is a Terraform setting I'm missing, or a way to run Postgres commands against the Google Cloud SQL Postgres database.
Unfortunately, there is no Terraform resource for specifying a replication slot on a google_sql_database_instance.
I was trying to figure out how to schedule a refresh of a materialized view on Azure Database for PostgreSQL Single Server. One solution is the pg_cron extension, but it seems to be available only on Azure Database for PostgreSQL Flexible Server, not on Single Server. I have not found any other option; any suggestion in this regard would be really helpful.
I did not find any Postgres scheduler extension for a DB hosted on Azure, so I created a microservice to schedule the DB functions.
Example Link
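Whatever does the scheduling, the statement it ultimately runs against the database is plain SQL. A minimal sketch (the view name sales_summary is hypothetical):
-- Run by the external scheduler on each tick. CONCURRENTLY avoids blocking
-- readers, but requires a unique index on the materialized view.
REFRESH MATERIALIZED VIEW CONCURRENTLY sales_summary;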
Background
I have a NestJS project with Prisma ORM, and I am continually receiving the error:
PrismaClientInitializationError: Can't reach database server at `localhost`:`5432`
This is happening during the Cloud Build Deploy step.
Since this is a containerized application running (or attempting to run) in a Cloud Run instance, I'm supposed to use a socket connection. Here's the documentation from Prisma on connecting to a Postgres DB through a socket connection: https://www.prisma.io/docs/concepts/database-connectors/postgresql#connecting-via-sockets
Connecting via sockets
To connect to your PostgreSQL database via sockets, you must add a host field as a query parameter to the connection URL (instead of setting it as the host part of the URI). The value of this parameter then must point to the directory that contains the socket, e.g.: postgresql://USER:PASSWORD@localhost/database?host=/var/run/postgresql/
Note that localhost is required, the value itself is ignored and can be anything.
I've done this to the letter, as described in the Cloud SQL documentation, except that I percent-encoded the path to the directory containing the socket. I've tried it both with and without the trailing slash.
So my host var looks like this, mapped from the percent-encoded values:
/cloudsql/<MY CLOUD SQL CONNECTION NAME>/<DB>
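For reference, Cloud Run mounts Cloud SQL sockets at /cloudsql/<CONNECTION NAME>, and in the Prisma URL the database name goes in the path while the host query parameter points only at the socket directory. An assembled URL would look something like this (all values hypothetical):
postgresql://USER:PASSWORD@localhost/mydb?host=/cloudsql/my-project:us-central1:my-instance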
I've read over the Cloud Run documentation, and in my mind, I should expect a different error if the instance itself can't connect to the Cloud SQL instance. I've been through the "Make sure you have the appropriate permissions and connection" steps from the documentation a few times now.
Is there anything obvious that I'm missing? Am I wrong to expect a different error if the Cloud Run instance simply can't connect to the Cloud SQL instance?
Things I've tried & things I know
I CAN connect directly to the Cloud SQL instance locally through psql
I CAN run a local server with the Cloud SQL instance public IP and establish a client connection & interact with the database
I CAN successfully create an image and run a container from that image locally
My big concern
I'm unclear on the order in which things should connect to the Cloud SQL instance. To me, the Cloud Run-to-Cloud SQL connection MUST be established before the application running inside the Cloud Run instance can establish its own connection through the socket to the Cloud SQL instance. Am I thinking through that correctly?
We have an AWS RDS Postgres database cluster (v13.7) from which we want to be able to invoke a Lambda function. We have carefully followed the directions laid out in this AWS document and paid special attention to our RDS security group allowing TCP traffic on port 443 to any IPv4 address (0.0.0.0/0).
Using psql to connect to the database, we have been running the following command to try to invoke a Lambda function:
SELECT * FROM aws_lambda.invoke(
    aws_commons.create_lambda_function_arn(
        'arn:aws:lambda:ca-central-1:123456789012:function:lambda-postgres-test',
        'ca-central-1'),
    '{"body": "Hello from Postgres!"}'::json,
    'Event');
However, after about 30 seconds, the result is the following error:
ERROR: invoke API failed
DETAIL: AWS Lambda client returned 'curlCode: 28, Timeout was reached'.
CONTEXT: SQL function "invoke" statement 1
It feels like this is some simple inbound or outbound rules issue, but so far nothing we have tried has changed the outcome. Does anyone know of another change that may be required to permit RDS Postgres to invoke a Lambda function?
Also: our RDS Postgres cluster is publicly accessible, so we've followed the security-setup directions from the AWS document labelled "To configure connectivity to AWS Lambda for a public DB instance".
I have an AWS RDS PostgreSQL instance. Is there any way to schedule a job that calls a PostgreSQL function or procedure, the way a SQL Server Agent job can call a stored procedure?
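One option, sketched below on the assumption that your engine version supports it: RDS for PostgreSQL supports the pg_cron extension on PostgreSQL 12.5 and later. After adding pg_cron to shared_preload_libraries in the instance's parameter group, jobs can be scheduled in plain SQL (the job name and procedure here are hypothetical):
CREATE EXTENSION pg_cron;
-- Run a stored procedure every night at 03:00 (cron syntax: min hour dom mon dow).
SELECT cron.schedule('nightly-cleanup', '0 3 * * *', $$CALL cleanup_proc()$$);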