A Heroku Postgres delete trigger is firing and successfully deleting its target Postgres rows; however, the delete isn't syncing over to Salesforce via Heroku Connect. When the same delete is performed manually, it syncs to Salesforce without issue.
Are there any special considerations for Postgres delete triggers that need to sync over to Salesforce via Heroku Connect, e.g., setting certain fields before performing the delete?
From Heroku support...
Connect uses an almost-entirely-unused Postgres session variable called xmlbinary as a flag to prevent Connect's own writes to a table from triggering additional _trigger_log entries. If these deletes are happening as a result of Connect activity (and therefore executing in the context of a Connect database connection), Connect is likely seeing this xmlbinary variable and ignoring the change.
If you can change the value of xmlbinary to base64 for the duration of your trigger, it should fix it. That would look like this:
CREATE OR REPLACE FUNCTION foo() RETURNS TRIGGER AS $$
DECLARE
  oldxmlbinary varchar;
BEGIN
  -- Save old value
  oldxmlbinary := get_xmlbinary();
  -- Change value to ensure writing to _trigger_log is enabled
  SET SESSION xmlbinary TO 'base64';
  -- Perform your trigger functions here
  -- Reset the value
  EXECUTE 'SET SESSION xmlbinary TO ' || oldxmlbinary;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
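The support reply only shows the function body; it still has to be attached as a trigger on the table whose deletes should be re-logged. As a sketch, with a placeholder table and trigger name (not anything from the support reply):

-- Hypothetical wiring: "some_table" and the trigger name are placeholders.
-- An AFTER trigger is used here, so the RETURN NEW above is simply ignored.
CREATE TRIGGER some_table_after_delete
AFTER DELETE ON some_table
FOR EACH ROW EXECUTE PROCEDURE foo();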
Related
I'm trying to use the boto3 redshift-data client to execute transactional SQL for an external table (Redshift Spectrum) with the following statement:
ALTER TABLE schema.table ADD IF NOT EXISTS
PARTITION(key=value)
LOCATION 's3://bucket/prefix';
After submitting it with execute_statement, I received the error "ALTER EXTERNAL TABLE cannot run inside a transaction block".
I tried running VACUUM and COMMIT commands before the statement, but they just fail with the message that VACUUM or COMMIT cannot run inside a transaction block.
How may I successfully execute such statement?
This has to do with the settings of your bench: you have an open transaction at the start of every statement you run. Just add "END;" before the statement that needs to run outside of a transaction and things should work; make sure you launch both commands at the same time from your bench.
Like this:
END; VACUUM;
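Applied to the ALTER statement from the question, the combined batch would look like this:

END;
ALTER TABLE schema.table ADD IF NOT EXISTS
PARTITION(key=value)
LOCATION 's3://bucket/prefix';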
Running this kind of SQL through boto3 doesn't seem easy. However, I found a workaround using the redshift_connector library.
import redshift_connector

# Connect directly to the cluster instead of going through the Data API
connection = redshift_connector.connect(
    host=host, port=port, database=database, user=user, password=password
)
# With autocommit enabled, the statement runs outside a transaction block
connection.autocommit = True
cursor = connection.cursor()
cursor.execute(transactional_sql)
connection.autocommit = False
Reference - https://docs.aws.amazon.com/redshift/latest/mgmt/python-connect-examples.html#python-connect-enable-autocommit
I have a PostgreSQL database running on an Azure machine. When I try to create a table in a database, I get the error "cannot execute CREATE TABLE in a read-only transaction". The SQL query is being executed by a Python script using a SQLAlchemy engine, but I tried a similar query in pgAdmin installed on my machine and got the same error. I also noticed that I do not have this issue when I connect to the database from a colleague's machine.
After further research, I found that if I execute SELECT pg_is_in_recovery(); in my PGAdmin, it returns true. And false on my colleague's machine.
Let me know if there is any way to correct this.
SELECT pg_is_in_recovery() returning true means the database only allows read access.
Can you check your permissions?
You can check the postgresql.conf file and the default_transaction_read_only attribute.
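A quick way to confirm both from a live session (just standard Postgres commands, as a sketch):

-- true means the server is a standby / in recovery and therefore read-only
SELECT pg_is_in_recovery();
-- on means new transactions default to read-only
SHOW default_transaction_read_only;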
or try this:
begin;
set transaction read write;
alter database exercises set default_transaction_read_only = off;
commit;
The issue was that our PostgreSQL machine is an HA machine, and I was connecting to an IP address rather than the domain.
I have the following:
CREATE OR REPLACE TRIGGER trigger_commands
AFTER INSERT ON commands
FOR EACH ROW EXECUTE PROCEDURE on_commands_change();
it works on a local Postgres instance, but fails on AWS with:
[42601] ERROR: syntax error at or near "TRIGGER"
however, if I remove the 'OR REPLACE' part:
CREATE TRIGGER trigger_commands
AFTER INSERT ON commands
FOR EACH ROW EXECUTE PROCEDURE on_commands_change();
then RDS accepts it. Why is that, and what would be the best way to update the trigger at every code restart?
It's most likely a PostgreSQL version mismatch; CREATE OR REPLACE TRIGGER only works on PostgreSQL 14 and later (see the documentation for 14 and 13).
I'd suggest using DROP TRIGGER IF EXISTS followed by CREATE TRIGGER for backwards compatibility.
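With the trigger from the question, that pattern would look roughly like this:

-- Works on PostgreSQL versions before 14, which lack CREATE OR REPLACE TRIGGER
DROP TRIGGER IF EXISTS trigger_commands ON commands;
CREATE TRIGGER trigger_commands
AFTER INSERT ON commands
FOR EACH ROW EXECUTE PROCEDURE on_commands_change();

Both statements can be run in a single transaction so the trigger is never missing between the drop and the re-create.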
I'm trying to learn transaction management in PostgreSQL (I'm an Oracle PL/SQL programmer); using DBeaver I've run into the following problem:
I run this script:
DO
$$
BEGIN
DELETE FROM my_table;
END;
$$
LANGUAGE plpgsql;
COMMIT;
It runs smoothly, but when I look at pending transactions I find two that still need to be committed (I can only get rid of them by using the "COMMIT" button in the IDE).
I expected to have no pending transactions after the COMMIT in the script; where am I going wrong?
I am trying to create a trigger on an AWS serverless Postgres database. The problem I am having is that AWS Query Editor is breaking up the create trigger statement into 3 partial statements.
When I submit this create function statement:
create or replace function init_vstamp()
returns trigger as $body$
begin
new.vstamp := 1;
end;
$body$ language plpgsql;
It is getting broken up into these three partial statements by Query Editor:
create or replace function init_vstamp()
returns trigger as $body$
begin
new.vstamp := 1;

end;

$body$ language plpgsql;
Query Editor is forcing each statement to terminate at a semicolon, but the semicolons are required by Postgres syntax.
The error for the first statement is:
Unterminated dollar quote started at position 60 in SQL create or replace function init_vstamp() returns trigger as $body$ begin new.vstamp := 1;DISCARD ALL. Expected terminating $$
I have been unable to find any AWS Query Editor options to run the contents of the editor window as a single statement.
Similar problems on a different platform were noted here. I tried some of the solutions there to no avail.
I did not find a way to do this from the AWS Query Editor UI, but I came up with this workaround. Note that AWS serverless databases do not support public access, so you must access the database from an EC2 instance on the same VPC.
Establish an EC2 instance on the same VPC.
Install docker on your EC2 instance. Instructions here.
Run a docker container with psql client. Container here. This container runs psql 10.3 which seems to work fine with a 10.7 database.
syntax:
docker run -it --rm jbergknoff/postgresql-client postgresql://{username}@{aws-endpoint}/{database}
Most likely your {username} and {database} will be 'postgres'. The {aws-endpoint} is the endpoint name assigned to your RDS cluster by AWS. The default port is 5432.
If connectivity is good, you will immediately be prompted for a password. From there you can run traditional psql commands.
If connectivity is bad, the command line will appear to hang with a flashing cursor. The most likely connectivity problems are:
EC2 and RDS are not on the same VPC.
RDS inbound traffic is not enabled for Postgres port 5432 from EC2.
The database is paused due to inactivity (this one will self-correct).
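Once connected, psql handles the dollar quoting that Query Editor mishandles, so the create function statement from the question can be run in one piece. A sketch (note that a plpgsql trigger function must also end with a RETURN, which the original snippet omits):

create or replace function init_vstamp()
returns trigger as $body$
begin
  new.vstamp := 1;
  return new;  -- needed so a BEFORE trigger applies the modified row
end;
$body$ language plpgsql;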