Four triggers: each works alone, but combining them raises "subquery returned more than 1 value"

Hello everybody,
apologies for my poor English.
I have been trying to solve this problem for more than four days:
each trigger works well on its own, but when I combine them I get the error:
the subquery returned more than 1 value.
I tried to follow all the tips on this website and others, but I could not make it work.
The tables concerned are: PIECES, COMPOSITIONSGAMMES, NOMENCLATURES and SITUATIONS.
What I want the triggers to do:
1. When the user inserts a new row into SITUATIONS with 'nomstrategie' = 'DST' (the name of a strategy; the detail does not really matter for people who will help me), I need further rows to be inserted with the same reference ('referencepiece') and the same strategy ('nomstrategie'). Only 'ancienposte' and 'nouveauposte' change: 'ancienposte' must take every 'NumeroPoste' from the table COMPOSITIONSGAMMES, and 'nouveauposte' must be set to '???'.
2. When I insert a new row with 'nomstrategie' = 'DST', I also need rows to be inserted for every 'piecesfilles' in the table NOMENCLATURES whose reference matches the 'referencepiece' of the inserted row, again with 'numeroposte' from COMPOSITIONSGAMMES in 'ancienposte'.
3. When the user inserts a new row with 'nomstrategie' = 'Delestage', another row must be inserted, for example:
inserted row: Ref A, ancienposte: P01, nouveauposte: P02, nomstrategie: Delestage
row to be inserted: Ref A, ancienposte: P02, nouveauposte: NULL, nomstrategie: Delestage
4. For every row in the table SITUATIONS, a value 'charge' must be calculated: charge = (TS / TailleLot) + Tu.
Here are the triggers I have written:
create trigger [dbo].[ALLDST]
on [dbo].[SITUATIONS]
after insert /* no update */
as
begin
    set nocount on;
    insert into SITUATIONS (ReferencePiece, nomstrategie, AncienPoste, nouveauposte, DateStrategie)
    select distinct i.referencepiece, i.nomstrategie, COMPOSITIONSGAMMES.NumeroPoste, '???', i.DateStrategie
    from inserted i, PIECES, COMPOSITIONSGAMMES, SITUATIONS s
    where i.ReferencePiece is not null
      and i.NomStrategie = 'DST'
      and i.ReferencePiece = PIECES.ReferencePiece
      and PIECES.CodeGamme = COMPOSITIONSGAMMES.CodeGamme
      and i.AncienPoste <> COMPOSITIONSGAMMES.NumeroPoste
      and i.DateStrategie = s.DateStrategie
end
create trigger [dbo].[Calcul_Charge]
on [charges].[dbo].[SITUATIONS]
after insert
as
begin
    update situations
    set charge = (select (cg.TS / pieces.TailleLot) + cg.tu
                  from situations s
                  inner join COMPOSITIONSGAMMES cg on cg.NumeroPoste = SITUATIONS.AncienPoste
                  inner join pieces on SITUATIONS.ReferencePiece = pieces.ReferencePiece
                  inner join inserted i on s.DateStrategie = i.DateStrategie
                  where cg.CodeGamme = pieces.CodeGamme
                    and NumeroPoste = situations.AncienPoste)
end
create trigger [dbo].[Duplicate_SITUATIONS]
on [dbo].[SITUATIONS]
after insert
as
begin
    set nocount on;
    declare @ref varchar(50)
    declare @strategie varchar(50)
    declare @ancienposte varchar(50)
    declare @datestrategie date
    declare @pourcentage decimal(18,3)
    declare @coeff decimal(18,3)
    declare @charge decimal(18,3)
    /* while (select referencepiece from situations where ReferencePiece) is not null */
    select @ref = referencepiece, @strategie = nomstrategie, @ancienposte = NouveauPoste,
           @datestrategie = datestrategie, @pourcentage = PourcentageStrategie,
           @coeff = coeffameliorationposte, @charge = charge
    from inserted, POSTESDECHARGE
    where ReferencePiece is not null
      and POSTESDECHARGE.NumeroPoste = inserted.AncienPoste
    if @strategie = 'delestage' and @ancienposte is not null
    /* if GETDATE() >= (select datestrategie from SITUATIONS) */
    begin
        insert into SITUATIONS (ReferencePiece, nomstrategie, AncienPoste, DateStrategie,
                                StatutStrategie, DateModification, PourcentageStrategie, charge)
        values (@ref, @strategie, @ancienposte, @datestrategie, 1, getdate(), @pourcentage, @charge * @coeff)
    end
end

I'm mostly familiar with T-SQL (MS SQL Server), so I'm not sure this will cover your whole case, but I usually avoid updates that use a subquery, and I would rewrite your update:
update situations
set charge= (select (cg.TS/pieces.TailleLot)+cg.tu from situations s
inner join COMPOSITIONSGAMMES cg on cg.NumeroPoste=SITUATIONS.AncienPoste
inner join pieces on SITUATIONS.ReferencePiece=pieces.ReferencePiece
inner join inserted i on s.DateStrategie=i.DateStrategie
where cg.CodeGamme=pieces.CodeGamme and NumeroPoste=situations.AncienPoste
)
as follows (joining on the alias s rather than the table name):
update s
set charge = (cg.TS / pieces.TailleLot) + cg.tu
from situations s
inner join COMPOSITIONSGAMMES cg on cg.NumeroPoste = s.AncienPoste
inner join pieces on s.ReferencePiece = pieces.ReferencePiece
inner join inserted i on s.DateStrategie = i.DateStrategie
where cg.CodeGamme = pieces.CodeGamme
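The "subquery returned more than 1 value" error itself most likely comes from the Duplicate_SITUATIONS trigger: it copies columns from the inserted pseudo-table into scalar variables, which breaks as soon as one statement (for example the multi-row insert performed by the ALLDST trigger) puts more than one row into inserted. A set-based sketch of that trigger with no variables at all (column names copied from the question; untested against your schema):

```sql
create trigger [dbo].[Duplicate_SITUATIONS]
on [dbo].[SITUATIONS]
after insert
as
begin
    set nocount on;
    -- one follow-up row per inserted 'delestage' row,
    -- however many rows arrive in a single statement
    insert into SITUATIONS (ReferencePiece, nomstrategie, AncienPoste, DateStrategie,
                            StatutStrategie, DateModification, PourcentageStrategie, charge)
    select i.ReferencePiece, i.NomStrategie, i.NouveauPoste, i.DateStrategie,
           1, getdate(), i.PourcentageStrategie, i.charge * p.coeffameliorationposte
    from inserted i
    inner join POSTESDECHARGE p on p.NumeroPoste = i.AncienPoste
    where i.NomStrategie = 'delestage'
      and i.NouveauPoste is not null
      and i.ReferencePiece is not null;
end
```

The same principle applies to all your triggers: always join against the whole inserted table instead of copying one of its rows into variables.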


How to return different format of records from a single PL/pgSQL function?

I am a frontend developer, but I have started to write backend code. I have spent quite some time trying to figure out how to solve this, and I really need some help.
Here are the simplified definitions and relations of two tables:
Relationship between tables
CREATE TABLE IF NOT EXISTS items (
item_id uuid NOT NULL DEFAULT gen_random_uuid() ,
parent_id uuid DEFAULT NULL ,
parent_table parent_tables NOT NULL
);
CREATE TABLE IF NOT EXISTS collections (
collection_id uuid NOT NULL DEFAULT gen_random_uuid() ,
parent_id uuid DEFAULT NULL
);
Our product is an online document collaboration tool; a page can have nested pages.
I have a piece of PostgreSQL code for getting all of its ancestor records for given item_ids.
WITH RECURSIVE ancestors AS (
SELECT *
FROM items
WHERE item_id in ( ${itemIds} )
UNION
SELECT i.*
FROM items i
INNER JOIN ancestors a ON a.parent_id = i.item_id
)
SELECT * FROM ancestors
It works fine for nesting regular pages. But if I am going to support nesting collection pages, meaning some items' parent_id may refer to the "collections" table's collection_id, this code will no longer work. From my limited experience, I don't think pure SQL can solve this. I think writing a PL/pgSQL function might be a solution, but I need to return all ancestor records for the given itemIds, which means returning a mix of items and collections records.
So how do I return records of different formats from a single PL/pgSQL function? I did some research but haven't found any example.
You can make it work by returning a superset row comprising both an item and a collection; one of the two will be NULL in each result row.
WITH RECURSIVE ancestors AS (
SELECT 0 AS lvl, i.parent_id, i.parent_table, i AS _item, NULL::collections AS _coll
FROM items i
WHERE item_id IN ( ${itemIds} )
UNION ALL -- !
SELECT lvl + 1, COALESCE(i.parent_id, c.parent_id), COALESCE(i.parent_table, 'i'), i, c
FROM ancestors a
LEFT JOIN items i ON a.parent_table = 'i' AND i.item_id = a.parent_id
LEFT JOIN collections c ON a.parent_table = 'c' AND c.collection_id = a.parent_id
WHERE a.parent_id IS NOT NULL
)
SELECT lvl, _item, _coll
FROM ancestors
-- ORDER BY ?
UNION ALL, not UNION.
Assuming a collection's parent is always an item, while an item can go either way.
We need LEFT JOIN on both potential parent tables to stay in the race.
I added an optional lvl to keep track of the level of hierarchy.
About decomposing row types:
Combine postgres function with query
Record returned from function has columns concatenated

postgresql function error ERROR: query has no destination for result data

I have created a function in PostgreSQL, but when I try to return data I get the error below:
ERROR: query has no destination for result data
HINT: If you want to discard the results of a SELECT, use PERFORM instead.
CONTEXT: PL/pgSQL function "fn_GetAllCountData"() line 27 at SQL statement
SQL state: 42601
Below is my PostgreSQL function. In this function I am collecting task status counts, each in its own query.
CREATE OR REPLACE FUNCTION public."fn_GetAllCountData"() RETURNS setof "AssignDetails" AS $BODY$
DECLARE
total_draft text;
total_pending text;
total_rejected text;
total_approved text;
total_prev_pending text;
"AssignDetails" text;
BEGIN
--Total pending application no by the user
Select k."UserCode" as "UserCode",count(S."taskAssignTo") as "TotalPending" into total_pending
from user
left Outer Join public."tbl_task" S
on k."UserCode"=S."taskAssignTo" and s.Status='P'
And to_char(S."assignDate"::date, 'dd-mm-yyyy') = to_char(current_Date, 'dd-mm-yyyy')
group by k."UserCode";
--Previous Pending
Select k."UserCode" as "UserCode",count(S."taskAssignTo") as "TotalPrevPending" into total_prev_pending
from kyc k
left Outer Join public."tbl_task" S
on k."UserCode"=S."taskAssignTo" and s.Status='P'
And S."assignDate" < CONCAT(current_Date, ' 00:00:00'):: timestamp
group by k."UserCode";
-- Total Objection raised by the user
Select k."UserCode" as "UserCode",count(S."taskAssignTo") as "TotalRejected" into total_rejected
from kyc k
left Outer Join tbl_task S
on k."UserCode"=S."taskAssignTo" and s.Status='R'
And to_char(S."objectionDate"::date, 'dd-mm-yyyy') = to_char(current_Date, 'dd-mm-yyyy')
group by k."UserCode";
-- Total Approved application no by the user
Select k."UserCode" as "UserCode",count(S."taskAssignTo") as "TotalApproved" into total_approved
from kyc k
left Outer Join public."tbl_task" S
on k."UserCode"=S."taskAssignTo" and s.Status='A'
And S."assignDate" < CONCAT(current_Date, ' 00:00:00'):: timestamp
group by k."UserCode";
--Application no with start Time and total time
Select K."UserCode",K."Status", K."AppType",ST."taskNo" as "TaskId", ST."startTime" as "StartTime",
case
when COALESCE(ST."endTime",'')=''
then (SELECT DATEDIFF('second', ST."startTime":: timestamp, current_timestamp::timestamp))
else (SELECT DATEDIFF('second', ST."startTime":: timestamp, ST."endTime"::timestamp))
end as "Totaltime"
into "Final"
from kyc K
left outer join public."tbl_task_details" ST
On K."UserCode"=ST."empCode";
--Total Checked In Draft application no through by the user
Select k."UserCode" as "UserCode",count(S."taskAssignTo") as "Status_Count" into total_draft
from kyc k
left Outer Join public."tbl_task" S
on k."UserCode"=S."taskAssignTo" and s.Status='D'
And S."assignDate" < CONCAT(current_Date, ' 00:00:00'):: timestamp
group by k."UserCode";
Select distinct K."UserCode",K."Status",K."AppType",K."LoginTime",K."LogoutTime",
F."TaskId",F."StartTime",F."Totaltime",
TP."TotalPending" as "Pending",
TR."TotalRejected" as "Objection",
TA."TotalApproved" as "Approved",
TS."TotalAssign" as "Total_Assigned",
TD."Status_Count" as "Draft_Count",
TPP."TotalPrevPending" As "Prev_Pending"
into "AssignDetails"
From "Final" F
Right outer join kyc K On K."UserCode"=F."UserCode"
left outer join total_scrutiny TS On K."UserCode"=Ts."UserCode"
left outer join total_draft TD On TD."UserCode"=K."UserCode"
left outer join total_pending TP On TP."UserCode"=K."UserCode"
left outer join total_rejected TR On TR."UserCode"=K."UserCode"
left outer join total_approved TA On TA."UserCode"=K."UserCode"
Left Outer Join total_prev_pending TPP On TPP."UserCode"=K."UserCode"
order by TS."TotalAssign" desc;
Select * From "AssignDetails";
END
$BODY$ LANGUAGE plpgsql;
I tried to return a table with RETURN QUERY but it is still not working. I don't know what I am doing wrong. Please help me with this.
Please note that PostgreSQL only reports one error at a time. In fact there is a great deal wrong with your function; so much that it would take too long to correct everything here.
I have therefore given you a cut-down version, which should point you in the right direction. I will give the code first and then explain the points.
CREATE OR REPLACE FUNCTION public.fn_getallcountdata() RETURNS TABLE (usercode text, totalpending integer) AS $BODY$
BEGIN
CREATE TEMP TABLE total_pending
(
usercode text,
totalpending int
) ON COMMIT DROP;
--Total pending application no by the user
INSERT INTO total_pending
Select k.usercode, count(s.taskassignto)::integer
from public.user k
left Outer Join public.tbl_task s
on k.usercode=s.taskassignto and s.status='P'
And s.assigndate::date = current_date
group by k.usercode;
RETURN QUERY
select t.usercode, t.totalpending From total_pending t;
END;
$BODY$ LANGUAGE plpgsql;
Points to note:
Firstly, please avoid using mixed-case names in PostgreSQL. They force you to double-quote everything, which is a real pain!
Secondly, you were declaring variables as text when in fact they were holding table data. This you cannot do (you can only put a single value in any variable). Instead you need to create temporary tables in the way I have done. Note in particular the use of ON COMMIT DROP: a useful PostgreSQL idiom that saves you from having to remember to drop temporary tables when you are finished with them.
Thirdly, your alias k does not refer to anything in your first SELECT. Note also that user is a reserved word. If you insist on having user as a table name, you will need to access it as public.user (assuming it is in the public schema).
(As an aside, using the public schema is generally considered a security risk because of guest access.)
Fourthly, there is no need to convert a date to a string in order to compare it. Casting a timestamp to a date and comparing it directly to another date is far faster than converting both dates to strings and comparing those.
Fifthly, COUNT in PostgreSQL returns a bigint, which is why I generally cast it to integer; an integer usually suffices.
I have defined the function to return a table with named columns. You can use setof, but then it has to be a known table type.
For the final SELECT I have put the required RETURN QUERY first. Note also that I am using a table alias: the column names in the returned table match those in the temporary table, so you need to be explicit about which you mean.
I strongly recommend that you experiment with a shorter function first (as in my cut-down version) and increase the complexity once you have it compiling and running. To this end, note that in PostgreSQL a function that compiles may still contain runtime errors. Also, if you change the return columns between compilations, you will need to drop the previous version.
Hope this points you in the right direction, but please feel free to get back with any further issues.

Postgresql function executed much longer than the same query

I'm using PostgreSQL 9.2.9 and have the following problem.
There is a function:
CREATE OR REPLACE FUNCTION report_children_without_place(text, date, date, integer)
RETURNS TABLE (department_name character varying, kindergarten_name character varying, a1 bigint) AS $BODY$
BEGIN
RETURN QUERY WITH rh AS (
SELECT (array_agg(status ORDER BY date DESC))[1] AS status, request
FROM requeststatushistory
WHERE date <= $3
GROUP BY request
)
SELECT
w.name,
kgn.name,
COUNT(*)
FROM kindergarten_request_table_materialized kr
JOIN rh ON rh.request = kr.id
JOIN requeststatuses s ON s.id = rh.status AND s.sysname IN ('confirmed', 'need_meet_completion', 'kindergarten_need_meet')
JOIN workareas kgn ON kr.kindergarten = kgn.id AND kgn.tree <# CAST($1 AS LTREE) AND kgn.active
JOIN organizationforms of ON of.id = kgn.organizationform AND of.sysname IN ('state','municipal','departmental')
JOIN workareas w ON w.tree #> kgn.tree AND w.active
JOIN workareatypes mt ON mt.id = w.type AND mt.sysname = 'management'
WHERE kr.requestyear = $4
GROUP BY kgn.name, w.name
ORDER BY w.name, kgn.name;
END
$BODY$ LANGUAGE PLPGSQL STABLE;
EXPLAIN ANALYZE SELECT * FROM report_children_without_place('83.86443.86445', '14-04-2015', '14-04-2015', 2014);
Total runtime: 242805.085 ms.
But query from function's body executes much faster:
EXPLAIN ANALYZE WITH rh AS (
SELECT (array_agg(status ORDER BY date DESC))[1] AS status, request
FROM requeststatushistory
WHERE date <= '14-04-2015'
GROUP BY request
)
SELECT
w.name,
kgn.name,
COUNT(*)
FROM kindergarten_request_table_materialized kr
JOIN rh ON rh.request = kr.id
JOIN requeststatuses s ON s.id = rh.status AND s.sysname IN ('confirmed', 'need_meet_completion', 'kindergarten_need_meet')
JOIN workareas kgn ON kr.kindergarten = kgn.id AND kgn.tree <# CAST('83.86443.86445' AS LTREE) AND kgn.active
JOIN organizationforms of ON of.id = kgn.organizationform AND of.sysname IN ('state','municipal','departmental')
JOIN workareas w ON w.tree #> kgn.tree AND w.active
JOIN workareatypes mt ON mt.id = w.type AND mt.sysname = 'management'
WHERE kr.requestyear = 2014
GROUP BY kgn.name, w.name
ORDER BY w.name, kgn.name;
Total runtime: 2156.740 ms.
Why does the function take so much longer than the same query? Thanks!
Your query runs faster because the "variables" are not actually variable; they are static values (i.e. strings in quotes). This means the execution planner can leverage indexes. Within your stored function, the variables are actual variables, and the planner cannot make assumptions about indexes. For example, you might have a partial index on requeststatushistory where "date" <= '2012-12-31'. That index can only be used if $3 is known; since $3 might hold a date from 2015, the partial index would be of no use. In fact, it would be detrimental.
I frequently construct a string within my functions where I concatenate my variables as literals and then execute the function using something like the following:
DECLARE
my_dynamic_sql TEXT;
BEGIN
my_dynamic_sql := $$
SELECT *
FROM my_table
WHERE $$ || quote_literal($3) || $$::TIMESTAMPTZ BETWEEN start_time
AND end_time;$$;
/* You can only see this if client_min_messages = DEBUG */
RAISE DEBUG '%', my_dynamic_sql;
RETURN QUERY EXECUTE my_dynamic_sql;
END;
The dynamic SQL is VERY useful because you can actually get an explain of the query when I have set client_min_messages=DEBUG; I can scrape the query from the screen and paste it back in after EXPLAIN or EXPLAIN ANALYZE and see what the execution planner is doing. This also allows you to construct very different queries as needed to optimize for variables (IE exclude unnecessary tables if warranted) and maintain a common API for your clients.
You may be tempted to avoid the dynamic SQL for fear of performance issues (I was at first) but you will be amazed at how LITTLE time is spent in planning compared to some of the cost of a couple of table scans on your seven-table join!
Good luck!
Follow-up: You might experiment with Common Table Expressions (CTEs) for performance as well. If you have a table that has a low signal-to-noise ratio (has many, many more records in it than you actually want to return) then a CTE can be very helpful. PostgreSQL executes CTEs early in the query, and materializes the resulting rows in memory. This allows you to use the same result set multiple times and in multiple places in your query. The benefit can really be surprising if you design it correctly.
sql_txt := $$
WITH my_cte as (
select fk1 as moar_data1
, field1
, field2 /*do not need all other fields taking up RAM!*/
from my_table
where field3 between $$ || quote_literal(input_start_ts) || $$::timestamptz
and $$ || quote_literal(input_end_ts) || $$::timestamptz
),
keys_cte as ( select key_field
from big_look_up_table
where look_up_name = ANY($$ ||
QUOTE_LITERAL(input_array_of_names) || $$::VARCHAR[])
)
SELECT field1, field2, moar_data1, moar_data2
FROM moar_data_table
INNER JOIN my_cte
USING (moar_data1)
WHERE moar_data_table.moar_data_key in (select key_field from keys_cte) $$;
An execution plan is likely to show that it chooses to use an index on moar_data_table.moar_data_key. This would appear to go against what I said above, except for the fact that the keys_cte results are materialized (and therefore cannot be changed by another transaction in a race condition): you have your own little copy of the data for use in this query.
Oh - and CTEs can use other CTEs that are declared earlier in the same query. I have used this "trick" to replace sub-queries in very complex joins and seen great improvements.
Happy Hacking!

In DB2, perform an update based on insert for large number of rows

In DB2, I need to do an insert, then, using results/data from that insert, update a related table. I need to do it on a million plus records and would prefer not to lock the entire database. So, 1) how do I 'couple' the insert and update statements? 2) how can I ensure the integrity of the transaction (without locking the whole she-bang)?
some pseudo-code should help clarify
STEP 1
insert into table1 (neededId, id) select DYNAMICVALUE, id from tableX where needed value is null
STEP 2
update table2 set neededId = (GET THE DYNAMIC VALUE JUST INSERTED) where id = (THE ID JUST INSERTED)
Note: in table1, the ID column is not unique, so I can't just filter on that to find the new DYNAMICVALUE.
This should make it clearer (FTR, this works, but I don't like it, because I'd have to lock the tables to maintain integrity. It would be great if I could run these statements together and let the update refer to the newAddressNumber value.)
/****RUNNING TOP INSERT FIRST****/
--insert a new address for each order that does not have a address id
insert into addresses
(customerId, addressNumber, address)
select
cust.Id,
--get next available addressNumber
ifNull((select max(addy2.addressNumber) from addresses addy2 where addy2.customerId = cust.id),0) + 1 as newAddressNumber,
cust.address
from customers cust
where exists (
--find all customers with at least 1 order where addressNumber is null
select 1 from orders ord
where 1=1
and ord.customerId = cust.id
and ord.addressNumber is null
)
/*****RUNNING THIS UPDATE SECOND*****/
update orders ord1
set addressNumber = (
select max(addressNumber) from addresses addy3
where addy3.customerId = ord1.customerId
)
where 1=1
and ord1.addressNumber is null
The IDENTITY_VAL_LOCAL function is a non-deterministic function that returns the most recently assigned value for an identity column, where the assignment occurred as a result of a single INSERT statement using a VALUES clause.
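Assuming neededId in table1 is an identity column, a minimal sketch of how this applies (table and column names taken from the question; the literal 42 is a stand-in for one id from tableX). Because IDENTITY_VAL_LOCAL only reflects a single-row INSERT ... VALUES, a million-row load would loop over the rows, committing in batches to avoid locking everything at once:

```sql
-- Hedged sketch: works only when table1.neededId is an identity column
-- and the insert is a single-row INSERT ... VALUES.
INSERT INTO table1 (id)
VALUES (42);

-- IDENTITY_VAL_LOCAL() now returns the neededId DB2 just generated,
-- so the related table can be updated in the same unit of work:
UPDATE table2
SET neededId = IDENTITY_VAL_LOCAL()
WHERE id = 42;

COMMIT;  -- commit each pair (or each batch) so the whole table is never locked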

How to refactor this sql query

I have a lengthy query here, and I am wondering whether it could be refactored.
Declare @A1 as int
Declare @A2 as int
...
Declare @A50 as int
SET @A1 = (Select id from table where code='ABC1')
SET @A2 = (Select id from table where code='ABC2')
...
SET @A50 = (Select id from table where code='ABC50')
Insert into tableB
Select
Case when @A1='somevalue' Then 'x' else 'y' End,
Case when @A2='somevalue' Then 'x' else 'y' End,
...
Case when @A50='somevalue' Then 'x' else 'y' End
From tableC inner join ......
From tableC inner join ......
As you can see, there is quite a lot of redundant code, but I cannot think of a way to make it simpler.
Any help is appreciated.
If you need the variables assigned, you could pivot your table...
SELECT *
FROM
(
SELECT Code, Id
FROM Table
) t
PIVOT
(MAX(Id) FOR Code IN ([ABC1],[ABC2],[ABC3],[ABC50])) p /* List them all here */
;
...and then assign them accordingly.
SELECT @A1 = [ABC1], @A2 = [ABC2]
FROM
(
SELECT Code, Id
FROM Table
) t
PIVOT
(MAX(Id) FOR Code IN ([ABC1],[ABC2],[ABC3],[ABC50])) p /* List them all here */
;
But I doubt you actually need to assign them at all. I just can't really picture what you're trying to achieve.
Pivotting may help you, as you can still use the CASE statements.
Rob
Without taking the time to develop a full answer, I would start by trying:
select id from table where code in ('ABC1', ... ,'ABC50')
then pivot that, to get one row result set of columns ABC1 through ABC50 with ID values.
Join that row in the FROM.
If 'somevalue', 'x' and 'y' are constant for all fifty expressions, then start from:
select case id when 'somevalue' then 'x' else 'y' end as XY
from table
where code in ('ABC1', ... ,'ABC50')
I am not entirely sure from your example, but it looks like you should be able to do one of a few things:
1. Create a nice lookup table that tells you, for a given value of the select statement, what should be placed there. This would be much shorter and should be insanely fast.
2. Create a simple for loop in your code and generate a list of 50 small queries.
3. Use sub-selects, or generate a list of selects with one round trip to retrieve your @A1-@A50 values, and then generate the query with them already in place.
Jacob
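The first option could be sketched like this (a hedged sketch only: the source table name, the target columns of tableB, and the value compared against are hypothetical stand-ins, and the PIVOT column lists would need all fifty codes):

```sql
-- Compute 'x'/'y' once per code, then pivot the results into a single row.
WITH xy AS (
    SELECT code,
           CASE WHEN id = 'somevalue' THEN 'x' ELSE 'y' END AS val
    FROM codes                               -- stands in for the question's "table"
    WHERE code IN ('ABC1', 'ABC2', 'ABC3')   -- list all fifty codes here
)
INSERT INTO tableB (col1, col2, col3)        -- hypothetical target columns
SELECT p.[ABC1], p.[ABC2], p.[ABC3]
FROM xy
PIVOT (MAX(val) FOR code IN ([ABC1], [ABC2], [ABC3])) p;
```

One pass over the source table replaces the fifty scalar assignments, and the CASE logic is written once instead of once per variable.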