If any of you have created or used triggers on Greenplum, please help me resolve this.
I have a table with an "id" column, and I want to add a trigger: before any data is inserted into this table, a trigger function should check
a) whether data for the given "id" exists in the parent table, and
b) whether a row already exists for the given "id".
--Table DDL
create table test_trigger(id integer, details text);
--Trigger Function
create or replace function insert_row_trigger() returns trigger as $$
begin
    if exists (select 1 from test_trigger where id = NEW.id) then
        return NULL;
    else
        return NEW;
    end if;
end;
$$ language plpgsql;
--Trigger Creation
create trigger my_trigger before insert on test_trigger for each row execute procedure insert_row_trigger();
--Drop Trigger
drop trigger my_trigger on test_trigger;
ERROR
ERROR: function cannot execute on segment because it accesses relation
"jiodba.test_trigger" (functions.c:151) (seg1 SRDCB0002GPM02:40001
pid=11366) (cdbdisp.c:1477)
SQL state: XX000
DETAIL: SQL statement "SELECT exists (SELECT 1 FROM test_trigger WHERE id = $1 )"
PL/pgSQL function "insert_row_trigger" line 2 at if
Please help me with this.
I also read somewhere that triggers are not supported in Greenplum.
A trigger is a function executed at the segment level for each input row. The issue is that in Greenplum you cannot execute a query from the segment level, because each segment would have to reconnect to the master to run it separately, which causes connection bloat on large systems.
One way to overcome this is the following:
Have a unique index on the parent table.
In a single transaction, execute two statements: first, insert into the parent table all incoming rows that do not yet exist there; second, insert into the target table all input rows whose keys were just inserted into the parent table (see the sketch below).
In general, you keep the same logic, but without a trigger.
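A minimal sketch of that load pattern, assuming a parent table parent(id) with a unique index and a hypothetical staging_rows table holding the incoming batch (both names are placeholders, not from the original post):
begin;
-- 1) add any missing keys to the parent table (parent/staging_rows are hypothetical)
insert into parent (id)
select distinct s.id
from staging_rows s
where not exists (select 1 from parent p where p.id = s.id);
-- 2) load the target table, skipping ids that already exist there
insert into test_trigger (id, details)
select s.id, s.details
from staging_rows s
where not exists (select 1 from test_trigger t where t.id = s.id);
commit;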
A CREATE RULE statement is an alternative to triggers in Greenplum.
Try this:
https://gpdb.docs.pivotal.io/5280/ref_guide/sql_commands/CREATE_RULE.html#:~:text=The%20Greenplum%20Database%20rule%20system,used%20on%20views%20as%20well.
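For the duplicate check in the question (point b), a rule-based sketch might look like the following; note that this is untested on Greenplum and, like the trigger approach, it does not protect against two concurrent sessions inserting the same id:
create rule skip_existing_id as
    on insert to test_trigger
    where exists (select 1 from test_trigger where id = NEW.id)
    do instead nothing;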
Related
In PostgreSQL 13, I am inserting data from a table into a temporary table at run time. The query works fine when executed directly, but when I try to create a stored procedure on top of the query, it fails with the error: ERROR: "temp" is not a known variable
Can someone please help me understand what I am missing?
CREATE OR REPLACE PROCEDURE dbo.<proc-name>()
LANGUAGE plpgsql
AS $$
BEGIN
DROP TABLE IF EXISTS child;
SELECT Id, Name
into TEMP TABLE child
FROM dbo.master;
COMMIT;
END;
$$;
Thanks
In PL/pgSQL, SELECT ... INTO is the syntax for storing a query result in a variable, so TEMP is parsed as a (non-existent) variable name. Use CREATE TABLE ... AS SELECT instead.
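A sketch of the corrected procedure under that change (the procedure name is a placeholder; dbo.master and the column list are taken from the question):
CREATE OR REPLACE PROCEDURE dbo.load_child()  -- placeholder name
LANGUAGE plpgsql
AS $$
BEGIN
    DROP TABLE IF EXISTS child;
    CREATE TEMP TABLE child AS   -- instead of SELECT ... INTO TEMP TABLE child
    SELECT Id, Name
    FROM dbo.master;
    COMMIT;
END;
$$;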
I have the following trigger function & trigger in PostgreSQL 12.1:
create or replace function constraint_for_present()
returns trigger
as $$
BEGIN
if
new.present_status = 'viewing'
and new.name not in (select viewable_item from sourcing)
then raise exception 'a present_status of "viewing" requires that the viewable item is in sourcing';
end if;
return new;
END;
$$ language plpgsql;
create trigger constraint_for_present
before insert or update of present_status on viewable_item
for each row
execute function constraint_for_present();
These work as expected during data entry in the psql and TablePlus clients. However, the function throws an error when accessing the database via LibreOffice Base:
pq_driver: [PGRES_FATAL_ERROR]ERROR: relation "sourcing" does not exist
LINE 2: and new.name not in (select viewable_item from sourcing)
QUERY: SELECT new.present_status = 'viewing'
and new.name not in (select viewable_item from sourcing)
CONTEXT: PL/pgSQL function viewing.constraint_for_present() line 3 at IF
(caused by statement 'UPDATE "viewing"."viewable_item" SET "present_status" = 'none' WHERE "name" = 'test4'')
In Base I have a simple form set up for the trigger's table, with each foreign-key column set to a list box and the Type of list contents set to Sql (I also tried Sql [Native]). The List content of each is (with the appropriate table and primary-key column):
select name from viewing.cv_present_status order by name
(This database is using natural keys for now, for organizational political reasons.) The Bound field is set to 0, which is the displayed and primary key column.
So ... 2 questions:
Why is this problem happening only in Base, and how might I fix it (or at least better trouble-shoot it)?
Since Bound field appears to take only a single integer, does that in effect mean that you can't use list boxes for tables with multi-column primary keys, at least if there is a single displayed column?
In the trigger function, you can fully qualify the table name (Base most likely connects with a different search_path, so the unqualified name is not resolved):
...
and new.name not in (select viewable_item from viewing.sourcing)
...
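As an alternative (a sketch using standard PostgreSQL, untested with Base), you can pin the function's search_path so the unqualified name resolves the same way for every client:
ALTER FUNCTION viewing.constraint_for_present()
    SET search_path = viewing, pg_temp;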
I'm trying to, somehow, trigger an automatic function drop when a table is dropped, and I can't figure out how to do it.
TL;DR: Is there a way to trigger a function drop when a specific table is dropped? (POSTGRESQL 11.7)
Detailed explanation
I'll try to explain my problem using a simplified use case with dummy names.
I have three tables: sensor1, sensor2 and sumSensors;
A function (sumdata) was created to INSERT data into the sumSensors table. Inside this function I fetch data from the sensor1 and sensor2 tables and insert their sum into sumSensors;
A trigger was created for each sensor table, like this:
CREATE TRIGGER trig1
AFTER INSERT ON sensor1
FOR EACH ROW
EXECUTE FUNCTION sumdata();
Now, when a new row is inserted into table sensor1 OR sensor2, the function sumdata is executed and inserts the sum of the latest values from both into table sumSensors.
If I ran DROP FUNCTION sumdata CASCADE;, the triggers would be automatically removed from tables sensor1 and sensor2. Up to here everything is fine! But that's not what I want.
My problem is:
Q: What if I just DROP TABLE sumSensors CASCADE;? What happens to the function that was meant to insert into this table?
A: As expected, since there is no dependency between the sumSensors table and the sumdata function, the function is not dropped (it still exists). The same goes for the triggers that use it (they still exist). This means that when a new row is inserted into a sensor table, the function sumdata is executed and fails (even the INSERT that fired the trigger is rolled back).
Is there a way to trigger a function drop when a specific table is dropped?
Thank you in advance
There is no dependency tracking for functions in PostgreSQL (as of version 12).
You can use event triggers to maintain the dependencies yourself.
A full example follows.
More information: the documentation on the event trigger feature and its support functions.
BEGIN;
CREATE TABLE _testtable ( id serial primary key, payload text );
INSERT INTO _testtable (payload) VALUES ('Test data');
CREATE FUNCTION _testfunc(integer) RETURNS integer
LANGUAGE SQL AS $$ SELECT $1 + count(*)::integer FROM _testtable; $$;
SELECT _testfunc(100);
CREATE FUNCTION trg_drop_dependent_functions()
    RETURNS event_trigger
    LANGUAGE plpgsql AS $$
DECLARE
    _dropped record;
BEGIN
    FOR _dropped IN
        SELECT schema_name, object_name
        FROM pg_catalog.pg_event_trigger_dropped_objects()
        WHERE object_type = 'table'
    LOOP
        IF _dropped.schema_name = 'public' AND _dropped.object_name = '_testtable' THEN
            EXECUTE 'DROP FUNCTION IF EXISTS _testfunc(integer)';
        END IF;
    END LOOP;
END;
$$;
CREATE EVENT TRIGGER trg_drop_dependent_functions ON sql_drop
EXECUTE FUNCTION trg_drop_dependent_functions();
DROP TABLE _testtable;
ROLLBACK;
After some aggravation, I found (IMO) odd behavior when one function calls another. If the outer function creates a temporary table, and the inner function creates a temporary table with the same name, the inner function "wins." Is this intended? FWIW, I am proficient in SQL Server, where temporary tables do not act this way: temporary tables (#temp) are scoped to the procedure, so an equivalent SQL Server stored procedure would return "7890", not "1234".
drop function if exists inner_function();
drop function if exists outer_function();
create function inner_function()
returns integer
as
$$
begin
drop table if exists tempTable;
create temporary table tempTable (
inner_id int
);
insert into tempTable (inner_id) values (1234);
return 56;
end;
$$
language plpgsql;
create function outer_function()
returns table (
return_id integer
)
as
$$
declare intReturn integer;
begin
drop table if exists tempTable; -- note that inner_function() also declares tempTable
create temporary table tempTable (
outer_id integer
);
insert into tempTable (outer_id) values (7890);
intReturn = inner_function(); -- the inner_function() function recreates tempTable
return query
select * from tempTable; -- returns "1234", not "7890" like I expected
end;
$$
language plpgsql;
select * from outer_function(); -- returns "1234", not "7890" like I expected
There is no problem with this behaviour; in PostgreSQL a temp table can have two scopes:
- session (default)
- transaction
To use the "transaction" scope, add ON COMMIT DROP at the end of the CREATE TEMP statement, e.g.:
CREATE TEMP TABLE foo(bar INT) ON COMMIT DROP;
Either way, your two functions are executed in one transaction, so when you call inner_function from outer_function you are in the same transaction; PostgreSQL detects that "tempTable" already exists in the current session, and inner_function drops it and creates it again...
Is this intended?
Yes, these are tables in the database, similar to permanent tables.
They exist in a special schema, and are automatically dropped at the end of a session or transaction. If you create a temporary table with the same name as a permanent table, then you must prefix the permanent table with its schema name to reference it while the temporary table exists.
If you want to emulate the SQL Server implementation then you might consider using particular prefixes for your temporary tables.
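For example (a minimal sketch with hypothetical names), giving each function its own temp table name avoids the collision shown above:
-- inside inner_function(): use a name the caller will not touch
drop table if exists inner_temptable;
create temporary table inner_temptable (inner_id int);
insert into inner_temptable (inner_id) values (1234);
-- inside outer_function():
drop table if exists outer_temptable;
create temporary table outer_temptable (outer_id integer);
insert into outer_temptable (outer_id) values (7890);
-- selecting from outer_temptable now returns 7890 even after calling inner_function()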
I'm trying to dynamically partition log entries in Postgres. I have 53 child tables (1 for each week's worth of log entries), and would like to route INSERTs to a child table using a trigger.
If I run the function with INSERT INTO log5 VALUES (NEW.*), it works.
If I run it with an EXECUTE statement instead, it fails: inside the EXECUTE'd query string, NEW is treated as a table name rather than the row variable passed to the trigger function. Any ideas on how to fix this? Thanks!
The error:
QUERY: INSERT INTO log5 VALUES (NEW.*)
CONTEXT: PL/pgSQL function log_roll_test() line 6 at EXECUTE statement
ERROR: missing FROM-clause entry for table "new" SQL state: 42P01
My function:
CREATE FUNCTION log_roll_test() RETURNS trigger AS $body$
DECLARE t text;
BEGIN
t := 'log' || extract(week FROM NEW.updt_ts); --child table name
--INSERT INTO log5 VALUES (NEW.*);
EXECUTE format('INSERT INTO %I VALUES (NEW.*);', t);
RETURN NULL;
END;
$body$ LANGUAGE plpgsql;
My trigger:
CREATE TRIGGER log_roll_test
BEFORE INSERT ON log FOR EACH ROW
EXECUTE PROCEDURE log_roll_test();
CREATE FUNCTION log_roll_test()
RETURNS trigger
LANGUAGE plpgsql AS
$func$
BEGIN
EXECUTE format('INSERT INTO %I SELECT ($1).*' -- !
, to_char(NEW.updt_ts, '"log"WW')) -- child table name
USING NEW; -- !
RETURN NULL;
END
$func$;
You cannot reference NEW inside the query string: NEW is visible in the function body, but not inside the query executed by EXECUTE. The best solution is to pass the value with the USING clause.
I also substituted the equivalent to_char(NEW.updt_ts, '"log"WW') for building the table name; to_char() is faster and simpler here.
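For illustration (a hypothetical value, not from the original post), the format string simply builds the child-table name from the timestamp's week number:
SELECT to_char(timestamp '2020-06-01 12:00', '"log"WW');  -- returns 'log22'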