CREATE TRIGGER AFTER INSERT WHEN (condition) DB2 - triggers

CREATE TRIGGER hundert
AFTER INSERT ON Leistung
FOR EACH ROW MODE DB2SQL
WHEN(SELECT modulNR, SUM(Prozentanteil) AS summe
FROM Leistung
GROUP BY modulNr
HAVING SUM(prozentanteil) > 100)
BEGIN ATOMIC
SIGNAL SQLSTATE '23506'
SET MESSAGE_TEXT = ('The Sum is bigger then 100');
END
How do I write the WHEN clause so that it checks all "prozentanteil" values and raises the error if the sum is bigger than 100?

If you look at the syntax diagram for the CREATE TRIGGER statement in the manual, you'll see that the WHEN clause needs a search condition that returns a boolean value. A subselect by itself cannot return a boolean value. You probably meant to use the EXISTS predicate there:
...
WHEN EXISTS (SELECT ...)
...
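For illustration, a sketch of the corrected trigger (column and table names are taken from the question; restricting the check to the newly inserted row's modulNr via REFERENCING NEW is an assumption about the intent):
CREATE TRIGGER hundert
AFTER INSERT ON Leistung
REFERENCING NEW AS n
FOR EACH ROW MODE DB2SQL
WHEN (EXISTS (SELECT 1
              FROM Leistung
              WHERE modulNr = n.modulNr
              GROUP BY modulNr
              HAVING SUM(Prozentanteil) > 100))
BEGIN ATOMIC
    -- raise the error when the inserted row pushes the module's total over 100
    SIGNAL SQLSTATE '23506'
        SET MESSAGE_TEXT = 'The sum is bigger than 100';
END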

Related

Execute select statement conditionally

I'm using PostgreSQL 9.6 and I need to create a query that performs a SELECT depending on the result of an IF condition.
Basically I've tried:
DO $$
BEGIN
IF exists ( SELECT 1 FROM TABLE WHERE A = B ) THEN
SELECT *
FROM A
ELSE
SELECT *
FROM B
END IF
END $$
And that returns me an error:
ERROR: query has no destination for result data
HINT: If you want to discard the results of a SELECT, use PERFORM
instead.
CONTEXT: PL/pgSQL function inline_code_block line 15 at SQL statement
Then I switched "SELECT" for "PERFORM", but that doesn't actually execute the SELECT statement for me.
I read that I need to call a void function to perform a "dynamic" query, but I couldn't make that work either. I'm new to writing queries in PostgreSQL. Is there a better way of doing this?
DO statements neither take parameters nor return anything. See:
Returning values for Stored Procedures in PostgreSQL
You may want a function instead. Create once:
CREATE FUNCTION foo()
RETURNS SETOF A -- or B, all the same
LANGUAGE plpgsql AS
$func$
BEGIN
IF EXISTS (SELECT FROM ...) THEN -- some meaningful test
RETURN QUERY
SELECT *
FROM A;
ELSE
RETURN QUERY
SELECT *
FROM B;
END IF;
END
$func$
Call:
SELECT * FROM foo();
But the function has one declared return type. So both tables A and B must share the same columns (at least columns with compatible data types in the same order; names are no problem).
The same restriction applies to a plain SQL statement. SQL is strictly typed.
Anonymous code blocks just can't return anything - you would need a function instead.
But I think you don't need PL/pgSQL to do what you want. Assuming that a and b have the same number of columns with compatible data types, you can use UNION ALL and NOT EXISTS:
select a.* from a where exists (select 1 from mytable where ...)
union all
select b.* from b where not exists (select 1 from mytable where ...)

How can I set a timeout for a locked table in PostgreSQL?

I want to set a timeout for this query. How can I do that?
CREATE OR REPLACE FUNCTION public."testlock"()
RETURNS TABLE
(
id integer,
name character varying
)
LANGUAGE 'plpgsql'
AS $BODY$
BEGIN
LOCK TABLE public."lock" IN ROW EXCLUSIVE MODE;
UPDATE public."lock" as l set name = 'deneme' WHERE l."id" = 4;
return query
select l."id",l."name" from public."lock" as l, pg_sleep(10) where l."id" = 4;
END;
$BODY$;
As suggested, you should merge UPDATE and SELECT into a single statement. UPDATE will lock the updated rows in ROW EXCLUSIVE MODE. Thus the LOCK statement is unnecessary. The code in the function then looks like this:
RETURN QUERY UPDATE public."lock" as l set name = 'deneme' WHERE l."id" = 4
RETURNING l."id", l."name";
You can't set and use a statement timeout inside of a function. See How we can make “statement_timeout” work inside a function?
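If the goal is simply to bound how long the call may wait or run, one option (an assumption, not taken from the linked answer) is to set the timeouts in the calling session or transaction instead of inside the function:
BEGIN;
SET LOCAL lock_timeout = '2s';        -- give up if a lock cannot be acquired within 2 seconds
SET LOCAL statement_timeout = '15s';  -- cancel any statement that runs longer than 15 seconds
SELECT * FROM public."testlock"();
COMMIT;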

Postgresql trigger syntax error at or near "NEW"

Here is what I'm trying to do:
ALTER TABLE publishroomcontacts ADD COLUMN IF NOT EXISTS contactorder integer NOT NULL default 1;
CREATE OR REPLACE FUNCTION publishroomcontactorder() RETURNS trigger AS $publishroomcontacts$
BEGIN
IF (TG_OP = 'INSERT') THEN
with newcontactorder as (SELECT contactorder FROM publishroomcontacts WHERE publishroomid = NEW.publishroomid ORDER BY contactorder limit 1)
NEW.contactorder = (newcontactorder + 1);
END IF;
RETURN NEW;
END;
$publishroomcontacts$ LANGUAGE plpgsql;
CREATE TRIGGER publishroomcontacts BEFORE INSERT OR UPDATE ON publishroomcontacts
FOR EACH ROW EXECUTE PROCEDURE publishroomcontactorder();
I've been looking at a lot of examples and they all look like this, though most of them are a couple of years old. Has this changed, or why doesn't NEW work? And do I have to do the insert in the function, or does Postgres do the insert with the returned NEW object after the function is done?
I'm not sure what you're trying to do, but your syntax is wrong here:
with newcontactorder as (SELECT contactorder FROM publishroomcontacts WHERE publishroomid = NEW.publishroomid ORDER BY contactorder limit 1)
NEW.contactorder = (newcontactorder + 1);
Do not use a CTE if no SELECT follows it. If you want to increment the contactorder column for a particular publishroomid whenever a new row is added, and this is your sequence (auto increment) mechanism, then you should replace it with:
NEW.contactorder = COALESCE((
SELECT max(contactorder) + 1
FROM publishroomcontacts
WHERE publishroomid = NEW.publishroomid
), 1);
Note the changes:
there's no CTE, just variable assignment with SELECT query
use MAX() aggregate function instead of ORDER BY + LIMIT
wrapped it in COALESCE(x, 1) to properly insert the first contact for a room; it returns 1 if the query returns NULL
Your trigger function should look like this:
CREATE OR REPLACE FUNCTION publishroomcontactorder() RETURNS trigger AS $publishroomcontacts$
BEGIN
IF (TG_OP = 'INSERT') THEN
NEW.contactorder = COALESCE((
SELECT max(contactorder) + 1
FROM publishroomcontacts
WHERE publishroomid = NEW.publishroomid
), 1);
END IF;
RETURN NEW;
END;
$publishroomcontacts$ LANGUAGE plpgsql;
Postgres will insert the row itself; you don't have to do anything in the function, because returning NEW from a BEFORE trigger hands the (modified) row back to be inserted.
This solution does not take care of concurrent inserts, which makes it unsafe in a multi-user environment! You can work around this by performing an UPSERT!
WITH is not an assignment in PL/pgSQL.
PL/pgSQL interprets the line as an SQL statement, but that is invalid SQL because the WITH clause is followed by NEW.contactorder rather than by SELECT or another CTE.
Hence the error; it has nothing to do with NEW as such.
You probably want something like
SELECT contactorder INTO newcontactorder
FROM publishroomcontacts
WHERE publishroomid = NEW.publishroomid
ORDER BY contactorder DESC -- you want the biggest one, right?
LIMIT 1;
You'll have to declare newcontactorder in the DECLARE section.
Warning: If there are two concurrent inserts, they might end up with the same newcontactorder.
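Putting those pieces together, a minimal sketch of the trigger function with the DECLARE section (using COALESCE for the first row per publishroomid is an assumption, mirroring the first answer):
CREATE OR REPLACE FUNCTION publishroomcontactorder() RETURNS trigger AS $publishroomcontacts$
DECLARE
    newcontactorder integer;
BEGIN
    IF (TG_OP = 'INSERT') THEN
        SELECT contactorder INTO newcontactorder
        FROM publishroomcontacts
        WHERE publishroomid = NEW.publishroomid
        ORDER BY contactorder DESC   -- the biggest existing value
        LIMIT 1;
        NEW.contactorder := COALESCE(newcontactorder, 0) + 1;
    END IF;
    RETURN NEW;
END;
$publishroomcontacts$ LANGUAGE plpgsql;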

Reusing json parsed input in postgres plpgsql function

I have a plpgsql function that takes a jsonb input, and uses it to first check something, and then again in a query to get results. Something like:
CREATE OR REPLACE FUNCTION public.my_func(
a jsonb,
OUT inserted integer)
RETURNS integer
LANGUAGE 'plpgsql'
COST 100.0
VOLATILE NOT LEAKPROOF
AS $function$
BEGIN
-- fail if there's something already there
IF EXISTS(
select t.x from jsonb_populate_recordset(null::my_type, a) f inner join some_table t
on f.x = t.x and
f.y = t.y
) THEN
RAISE EXCEPTION 'concurrency violation... already present.';
END IF;
-- straight insert, and collect number of inserted
WITH inserted_rows AS (
INSERT INTO some_table (x, y, z)
SELECT f.x, f.y, f.z
FROM jsonb_populate_recordset(null::my_type, a) f
RETURNING 1
)
SELECT count(*) from inserted_rows INTO inserted
;
END
$function$;
Here, I'm using jsonb_populate_recordset(null::my_type, a) both in the IF check, and also in the actual insert. Is there a way to do the parsing once - perhaps via a variable of some sort? Or would the query optimiser kick in and ensure the parse operation happens only once?
If I understand correctly, you are looking for something like this:
CREATE OR REPLACE FUNCTION public.my_func(
a jsonb,
OUT inserted integer)
RETURNS integer
LANGUAGE 'plpgsql'
COST 100.0
VOLATILE NOT LEAKPROOF
AS $function$
BEGIN
WITH checked_rows AS (
SELECT f.x, f.y, f.z, t.x IS NOT NULL as present
FROM jsonb_populate_recordset(null::my_type, a) f
LEFT join some_table t
on f.x = t.x and f.y = t.y
), violated_rows AS (
SELECT count(*) AS violated FROM checked_rows AS c WHERE c.present
), inserted_rows AS (
INSERT INTO some_table (x, y, z)
SELECT c.x, c.y, c.z
FROM checked_rows AS c
WHERE (SELECT violated FROM violated_rows) = 0
RETURNING 1
)
SELECT count(*) from inserted_rows INTO inserted
;
IF inserted = 0 THEN
RAISE EXCEPTION 'concurrency violation... already present.';
END IF;
END;
$function$;
A JSONB value does not need to be parsed more than once; it is parsed at assignment:
while jsonb data is stored in a decomposed binary format that makes it slightly slower to input due to added conversion overhead, but significantly faster to process, since no reparsing is needed.
Link
The jsonb_populate_recordset function is declared STABLE:
STABLE indicates that the function cannot modify the database, and that within a single table scan it will consistently return the same result for the same argument values, but that its result could change across SQL statements.
Link
I am not sure about this. On the one hand, a UDF call is considered a single statement; on the other hand, a UDF can contain multiple statements. Clarification needed.
Finally, if you want to cache such things, you could use arrays:
CREATE OR REPLACE FUNCTION public.my_func(
a jsonb,
OUT inserted integer)
RETURNS integer
LANGUAGE 'plpgsql'
COST 100.0
VOLATILE NOT LEAKPROOF
AS $function$
DECLARE
d my_type[]; -- variable used to cache the parsed rows
BEGIN
select array_agg(f) into d from jsonb_populate_recordset(null::my_type, a) as f;
-- fail if there's something already there
IF EXISTS(
select *
from some_table t
where (t.x, t.y) in (select x, y from unnest(d)))
THEN
RAISE EXCEPTION 'concurrency violation... already present.';
END IF;
-- straight insert, and collect number of inserted
WITH inserted_rows AS (
INSERT INTO some_table (x, y, z)
SELECT f.x, f.y, f.z
FROM unnest(d) f
RETURNING 1
)
SELECT count(*) from inserted_rows INTO inserted;
END $function$;
If you actually want to reuse a result set repeatedly, the general solution would be a temporary table. Example:
Using temp table in PL/pgSQL procedure for cleaning tables
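A rough sketch of that approach inside the function body (the name tmp_input is made up; ON COMMIT DROP assumes the function is not called twice in the same transaction):
-- materialize the parsed rows once, then reuse them in both statements
CREATE TEMP TABLE tmp_input ON COMMIT DROP AS
SELECT * FROM jsonb_populate_recordset(null::my_type, a);

IF EXISTS (
    SELECT FROM tmp_input f
    JOIN some_table t ON f.x = t.x AND f.y = t.y
) THEN
    RAISE EXCEPTION 'concurrency violation... already present.';
END IF;

INSERT INTO some_table (x, y, z)
SELECT x, y, z FROM tmp_input;
GET DIAGNOSTICS inserted = ROW_COUNT;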
However, that's rather expensive. Looks like all you need is a UNIQUE constraint or index:
Simple and safe with UNIQUE constraint
ALTER TABLE some_table ADD CONSTRAINT some_table_x_y_uni UNIQUE (x,y);
As opposed to your procedural attempt, this is also concurrency-safe (no race conditions). Much faster, too.
Then the function can be dead simple:
CREATE OR REPLACE FUNCTION public.my_func(a jsonb, OUT inserted integer) AS
$func$
BEGIN
INSERT INTO some_table (x, y, z)
SELECT f.x, f.y, f.z
FROM jsonb_populate_recordset(null::my_type, a) f;
GET DIAGNOSTICS inserted = ROW_COUNT; -- OUT param, we're done here
END
$func$ LANGUAGE plpgsql;
If any (x,y) is already present in some_table you get your exception. Choose an instructive name for the constraint, as it is reported in the error message.
And we can just read the command tag with GET DIAGNOSTICS, which is substantially cheaper than running another count query.
Related:
How does PostgreSQL enforce the UNIQUE constraint / what type of index does it use?
UNIQUE constraint not possible?
For the unlikely case that a UNIQUE constraint should not be feasible, you can still have it rather simple:
CREATE OR REPLACE FUNCTION public.my_func(a jsonb, OUT inserted integer) AS
$func$
BEGIN
INSERT INTO some_table (x, y, z)
SELECT f.x, f.y, f.z -- empty result set if there are any violations
FROM (
SELECT f.x, f.y, f.z, count(t.x) OVER () AS conflicts
FROM jsonb_populate_recordset(null::my_type, a) f
LEFT JOIN some_table t USING (x,y)
) f
WHERE f.conflicts = 0;
GET DIAGNOSTICS inserted = ROW_COUNT;
IF inserted = 0 THEN
RAISE EXCEPTION 'concurrency violation... already present.';
END IF;
END
$func$ LANGUAGE plpgsql;
Count the number of violations in the same query. (count() only counts non-null values). Related:
Best way to get result count before LIMIT was applied
You should have at least a simple index on some_table (x,y) anyway.
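For example (the index name is illustrative):
CREATE INDEX some_table_x_y_idx ON some_table (x, y);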
It's important to know that plpgsql does not return results before control exits the function. The exception cancels the return; the user never gets results, only the error message. We added a code example to the manual.
Note, however, that there are race conditions here under concurrent write load. Related:
Is SELECT or INSERT in a function prone to race conditions?
Would the query planner avoid repeated evaluation?
Certainly not between multiple SQL statements.
Even if the function itself is defined STABLE or IMMUTABLE (jsonb_populate_recordset() in the example is STABLE), the query planner does not know that values of input parameters are unchanged between calls. It would be expensive to keep track and make sure of it.
Actually, since plpgsql treats SQL statements like prepared statements, that's plain impossible, since the query is planned before parameter values are fed to the planned query.

PostgreSQL Immutable function usage

I'm trying to speed up a PostgreSQL SELECT that uses a function by declaring the function IMMUTABLE or STABLE. So I have a function:
CREATE OR REPLACE FUNCTION get_data(uid uuid)
RETURNS integer AS $$
BEGIN
RAISE NOTICE 'UUID %', $1;
-- DO SOME STUFF
RETURN 0;
END;
$$ LANGUAGE plpgsql IMMUTABLE STRICT;
When I call it like:
SELECT get_data('3642e529-b098-4db4-b7e7-6bb62f8dcbba'::uuid)
FROM table
WHERE true LIMIT 100;
I get 100 results and only one notice raised.
When I call it this way:
SELECT get_data(table.hash)
FROM table
WHERE 1 = 1 AND table.hash = '3642e529-b098-4db4-b7e7-6bb62f8dcbba' LIMIT 100;
I get 100 results and 100 notices raised.
The condition (table.hash = '3642e529-b098-4db4-b7e7-6bb62f8dcbba') was added to make sure that the input parameter is the same for every row.
table.hash is of type uuid.
The question is:
How can I force PG to somehow cache the result of the function (if that's possible)?
I want only one notice (one function call) to be raised in the second case.
In your first example get_data('3642e529-b098-4db4-b7e7-6bb62f8dcbba'::uuid) is a constant, independent of table rows, so it is evaluated once.
In the second example get_data(table.hash) depends on a column value, therefore it is evaluated once per row.
If you want to evaluate the function once, it cannot depend on a value from a column (when more than one row is processed).
After discussion in the comments, here is an example of how to call the function only once per hash:
SELECT *, get_data(x.hash) AS some_data_once_per_hash
FROM (
SELECT hash, count(*) AS ct
FROM table
WHERE table.hash = '3642e529-b098-4db4-b7e7-6bb62f8dcbba'
GROUP BY 1
) x
If Erwin's answer does not work for your case, you can either create a materialized view or use a trigger to maintain a "computed column".
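For instance, a hedged sketch of the materialized-view route (object names are illustrative; the view must be refreshed when the base table changes):
-- precompute get_data() once per distinct hash
CREATE MATERIALIZED VIEW hash_data AS
SELECT hash, get_data(hash) AS data
FROM mytable
GROUP BY hash;

-- refresh after the base table changes
REFRESH MATERIALIZED VIEW hash_data;

-- queries then join against the precomputed values instead of calling the function per row
SELECT t.*, h.data
FROM mytable t
JOIN hash_data h USING (hash);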