Unable to create Postgres trigger on AWS serverless database

I am trying to create a trigger on an AWS serverless Postgres database. The problem I am having is that AWS Query Editor breaks the trigger's CREATE FUNCTION statement up into three partial statements.
When I submit this create function statement:
create or replace function init_vstamp()
returns trigger as $body$
begin
new.vstamp := 1;
end;
$body$ language plpgsql;
It is getting broken up by Query Editor into these three partial statements, one ending at each semicolon:
1. create or replace function init_vstamp() returns trigger as $body$ begin new.vstamp := 1;
2. end;
3. $body$ language plpgsql;
Query Editor is forcing each statement to terminate at the semicolon, and the semicolon is required by Postgres syntax.
The error for the first statement is:
Unterminated dollar quote started at position 60 in SQL create or replace function init_vstamp() returns trigger as $body$ begin new.vstamp := 1;DISCARD ALL. Expected terminating $$
I have been unable to find any AWS Query Editor options to run the contents of the editor window as a single statement.
Similar problems on a different platform were noted here. I tried some of the solutions there to no avail.

I did not find a way to do this from the AWS Query Editor UI, but I came up with the following workaround. Note that AWS serverless databases do not support public access, so you must reach the database from an EC2 instance in the same VPC.
Establish an EC2 instance on the same VPC.
Install docker on your EC2 instance. Instructions here.
Run a docker container with psql client. Container here. This container runs psql 10.3 which seems to work fine with a 10.7 database.
syntax:
docker run -it --rm jbergknoff/postgresql-client postgresql://{username}@{aws-endpoint}/{database}
Most likely your {username} and {database} will be 'postgres'. The {aws-endpoint} is the endpoint name assigned to your RDS cluster by AWS. The default port is 5432.
If connectivity is good, you will immediately be prompted for a password. From there you can run traditional psql commands.
If connectivity is bad, the command line will appear to hang with a flashing cursor. The most likely connectivity problems are:
EC2 and RDS are not on the same VPC.
RDS inbound traffic is not enabled for Postgres port 5432 from EC2.
The database is paused due to inactivity (this one will self-correct).
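Once connected through psql, the original statement runs as a single unit. Separately from the Query Editor issue, note that a row-level trigger function must return a row, so the body also needs a return new; before end;. A corrected sketch, with a hypothetical table name for attaching the trigger:

```sql
create or replace function init_vstamp()
returns trigger as $body$
begin
  new.vstamp := 1;
  return new;  -- required: a BEFORE row trigger must return the (possibly modified) row
end;
$body$ language plpgsql;

-- hypothetical table name: attach the function as a BEFORE INSERT trigger
create trigger trg_init_vstamp
  before insert on mytable
  for each row execute procedure init_vstamp();
```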

Related

How to create multiple databases with Postgres in pgAdmin4

I am trying to run the following query in pgAdmin:
CREATE DATABASE abc;
CREATE DATABASE xyz;
And I get the following error:
ERROR: current transaction is aborted, commands ignored until end of transaction block
SQL state: 25P02
I'm relatively new to postgres.
With SQL Server it's possible to create multiple databases in a single query with the "GO" statement in between if necessary.
I've tried to google this error, and most answers are to simply run each line separately.
That would work, but I'm curious why this doesn't work.
It may also be a setting in pgAdmin.
The "autocommit" is currently on. I've tried it off, and same result.
I'm using Postgres 14.5 (on AWS).
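A likely explanation: CREATE DATABASE is one of the statements Postgres refuses to run inside a transaction block, and a multi-statement script sent as one query executes as a single implicit transaction. The first CREATE DATABASE then fails, the transaction is aborted, and the second statement produces the 25P02 error. Running each statement as its own execution avoids this:

```sql
-- run this on its own, outside any explicit transaction:
CREATE DATABASE abc;

-- then run this as a second, independent execution:
CREATE DATABASE xyz;
```

This is also why psql handles the same script fine when each line is submitted separately: each statement autocommits on its own.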

Use Terraform on Google Cloud SQL Postgres to create a Replication Slot

Overall I'm trying to create a Datastream Connection to a Postgres database in Cloud SQL.
As I'm trying to configure it all through Terraform, I'm stuck on how I should create a Replication Slot. This guide explains how to do it through the Postgres Client and running SQL commands, but I thought there might be a way to do it in the Terraform configuration directly.
Example SQL that I would like to replicate in Terraform:
ALTER USER [CURRENT_USER] WITH REPLICATION;
CREATE PUBLICATION [PUBLICATION_NAME] FOR ALL TABLES;
SELECT PG_CREATE_LOGICAL_REPLICATION_SLOT('[REPLICATION_SLOT_NAME]', 'pgoutput');
If not, does anyone know how to run the Postgres SQL commands against the Cloud SQL database through Terraform?
I have setup the Datastream and Postgres connection for all other parts. I'm expecting that there is a Terraform setting I'm missing or a way to run Postgres commands against the Google Cloud SQL Postgres database.
Unfortunately, there is no Terraform resource for specifying a replication slot on a google_sql_database_instance.
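As a workaround, the SQL can be run out of band, for example from a provisioner or a CI step with network access to the instance. With hypothetical names filled in for the bracketed placeholders, the statements from the guide look like:

```sql
-- hypothetical user, publication, and slot names
ALTER USER datastream_user WITH REPLICATION;
CREATE PUBLICATION datastream_publication FOR ALL TABLES;
SELECT pg_create_logical_replication_slot('datastream_slot', 'pgoutput');
```

If a community provider is acceptable, the cyrilgdn/postgresql Terraform provider reportedly exposes replication-slot and publication resources, though that means managing those objects outside the google provider.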

Create a stored procedure in Aurora serverless postgres using aws cli in windows 10

I am trying to create stored procedure in Aurora serverless postgres db.
I tried using the below command from windows cmd:
aws rds-data execute-statement --resource-arn "arn:aws:rds:eu-west-1:xxxx:cluster:democluster" --database "demo" --secret-arn "arn:aws:secretsmanager:eu-west-1:xxxx:secret:rds-db-credentials/cluster-U5TWNYQ2UGIBDE64D3ENAxxxx/postgres-xxxx" --sql && type storedProc.sql
Below is storedProc.sql:
CREATE PROCEDURE simpleproc (OUT param1 INT)
BEGIN
SELECT COUNT(*) INTO param1 FROM myrecords;
END
Error:
aws.cmd: error: argument --sql-statements: expected one argument
Sometimes I get an error saying the procedure keyword does not exist.
I found this question on Stack Overflow, but it's for Linux:
How to add a Stored Procedure to AWS RDS Aurora-Serverless?
Any help with a Windows-based AWS CLI solution would be appreciated.
I realized from the Postgres documentation that stored procedures are not supported in Postgres v10 (CREATE PROCEDURE was added in v11). It is better to use a function instead.
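A plpgsql function equivalent of the MySQL-style procedure above might look like this (a sketch; it assumes the myrecords table from the question):

```sql
CREATE OR REPLACE FUNCTION simpleproc()
RETURNS integer AS $$
BEGIN
  -- return the row count instead of writing it to an OUT parameter
  RETURN (SELECT count(*) FROM myrecords);
END;
$$ LANGUAGE plpgsql;

-- invoke it with SELECT rather than CALL:
SELECT simpleproc();
```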

pgAdmin 4 does not create functions with geometry

I started recently using pgAdmin 4 after having used pgAdmin 3 for a long time.
The problem I have now is that I cannot replace existing functions that reference geometry objects. I am on Windows 10, and the current version of pgAdmin 4 is 2.1.
The PL/pgSQL function already exists in the database and it was created with flyway, psql or pgAdmin 3, and it works.
The PostGIS extension is enabled.
Now I go into pgAdmin 4 with the same login as always, choose "Scripts" -> "Create script" for the function, and then press F5 ("Run Script") to create it.
What I get is an error:
ERROR: type "geometry" does not exist
LINE 22: v_location geometry;
The same operation in pgAdmin 3 gives no error.
The function has the search_path set correctly and can be created in pgAdmin3 and psql.
Actually, I can create a dummy function in the Query Tool in pgAdmin 4 and it compiles fine. See below:
set search_path=res_cc_01,public;
create or replace function test() returns text
LANGUAGE plpgsql VOLATILE SECURITY DEFINER
COST 100
set search_path=res_cc_01,public
AS
$BODY$
DECLARE
v_g geometry := ST_GeometryFromText('POINT(142.279859 -9.561480)',4326);
begin
return 'xxx'::text;
end;
$BODY$
Only when recreating it through Scripts -> Create Script and then F5 (Execute) do I get the error.
What is the problem here?
I could not find a similar problem on the net, nor in the manuals. But I haven't read them all.
You're most likely missing the PostGIS extension in the current database. If it is already installed on your server, just execute the following command to create the extension and with it the missing data type:
CREATE EXTENSION postgis;
If it happens to be that you don't have it installed in your database server, you can find the installation instructions to many operating systems here.
The problem is caused by the way pgAdmin 4 renders the plpgsql code when using Scripts -> Create Script.
It adds unneeded quotes to the schemas in the search_path directive, which makes the entire search_path invalid. With an invalid search_path, the compiler cannot find the installed extensions.
There is now an issue filed against pgAdmin 4 for this Scripts -> Create Script malfunction.
In pgAdmin 3, the search_path directive used in the function was shown as an ALTER FUNCTION statement, but in a correct form.
Many thanks to Jim Jones for the precious analysis of this problem.
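To illustrate the quoting difference (a hypothetical reconstruction, not the exact text pgAdmin 4 emitted):

```sql
-- valid: two separate schemas on the search path
SET search_path = res_cc_01, public;

-- invalid: the whole list quoted as one identifier, so Postgres looks for a
-- single schema literally named "res_cc_01, public", finds nothing, and
-- therefore cannot resolve the postgis geometry type
SET search_path = "res_cc_01, public";
```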

Heroku Postgres delete trigger not replicating to Salesforce over Heroku Connect

A Heroku Postgres delete trigger is triggering and successfully deleting its target Postgres rows, however the delete isn't syncing over to Salesforce via Heroku Connect. When the same delete is done manually it syncs to Salesforce without issue.
Are there any special considerations for Postgres delete triggers that need to sync over to Salesforce via Heroku Connect, e.g., setting certain fields before performing the delete?
From Heroku support...
Connect uses an almost-entirely-unused Postgres session variable called xmlbinary as a flag to prevent Connect's own writes to a table from triggering additional _trigger_log entries. If these deletes are happening as a result of Connect activity (and therefore executing in the context of a Connect database connection), Connect is likely seeing this xmlbinary variable and ignoring the change.
If you can change the value of xmlbinary to base64 for the duration of your trigger, it should fix it. That would look like this:
CREATE OR REPLACE FUNCTION foo() RETURNS TRIGGER AS $$
DECLARE
oldxmlbinary varchar;
BEGIN
-- Save old value
oldxmlbinary := get_xmlbinary();
-- Change value to ensure writing to _trigger_log is enabled
SET SESSION xmlbinary TO 'base64';
-- Perform your trigger functions here
-- Reset the value
EXECUTE 'SET SESSION xmlbinary TO ' || oldxmlbinary;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
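The function above is only the trigger body; it still has to be attached to the table whose deletes should sync (hypothetical table and trigger names below). Note also that in a BEFORE DELETE trigger NEW is NULL, so the function would need to RETURN OLD instead for the delete to proceed:

```sql
-- hypothetical: attach foo() to the table being synced by Heroku Connect
CREATE TRIGGER foo_delete_trigger
  BEFORE DELETE ON my_synced_table
  FOR EACH ROW EXECUTE PROCEDURE foo();
```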