Trigger Compilation Error, Oracle 11g - triggers

I've been banging my head against this one for a while. I'm building a database on Oracle 11g, and am attempting to insert a record into a "registry" table whenever a record is created on a "data product" table. The registry table needs to auto-increment the product_id, and that product_id is then used as a foreign key on the data product table. Here is my trigger code:
CREATE OR REPLACE TRIGGER "TR_CAMERA_DP_DPR_CREATE"
BEFORE INSERT ON "DD1"."CAMERA_DP"
FOR EACH ROW
BEGIN
  :new.product_id := ID_SEQ.NEXTVAL;
  insert into dd1.dp_registry
    ( product_id,
      fs_location,
      parent_group_id,
      product_name,
      shortdes,
      createdate,
      revision )
  values
    ( :new.product_id,
      'placeholder',
      0,
      '_image',
      'description placeholder',
      sysdate,
      0 );
END;
So, ideally, an insert into dd1.camera_dp without providing a product_id will first insert a record into dd1.dp_registry, and then use that incremented product_id as the key field for dd1.camera_dp.
The insert statement works when run with a hard-coded value for :new.product_id, and ID_SEQ.NEXTVAL is also working properly. I get the feeling I'm missing something obvious.
Thanks!

Your code works perfectly well for me. If you're getting an error, there must be something different about the code you are actually running compared to the code you posted.
SQL> create table CAMERA_DP(
2 product_id number,
3 name varchar2(10)
4 );
Table created.
SQL> create sequence id_seq;
Sequence created.
SQL> ed
Wrote file afiedt.buf
1 create table dp_registry
2 ( product_id number,
3 fs_location varchar2(100),
4 parent_group_id number,
5 product_name varchar2(100),
6 shortdes varchar2(100),
7 createdate date,
8* revision number)
SQL> /
Table created.
SQL> ed
Wrote file afiedt.buf
1 CREATE OR REPLACE TRIGGER "TR_CAMERA_DP_DPR_CREATE"
2 BEFORE INSERT ON "CAMERA_DP"
3 FOR EACH ROW
4 BEGIN
5 :new.product_id := ID_SEQ.NEXTVAL;
6 insert into dp_registry
7 ( product_id,
8 fs_location,
9 parent_group_id,
10 product_name,
11 shortdes,
12 createdate,
13 revision )
14 values
15 ( :new.product_id,
16 'placeholder',
17 0,
18 '_image',
19 'description placeholder',
20 sysdate,
21 0
22 );
23* END;
24 /
Trigger created.
SQL> insert into camera_dp( name ) values( 'Foo' );
1 row created.
SQL> ed
Wrote file afiedt.buf
1* select product_id from dp_registry
SQL> /
PRODUCT_ID
----------
1
If you're getting an error that a table doesn't exist, the common culprits would be:
You actually have a typo in the name of your table.
You don't have permission to insert into the table. Note that if, in your actual code, not everything is in the same schema, my guess would be that the user that owns the trigger has privileges to INSERT into the DP_REGISTRY table via a role rather than via a direct grant. Since privileges granted through a role are not available in a definer's-rights stored PL/SQL block, that would explain why you can do something at the command line but not in PL/SQL.
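For anyone who wants to poke at the pattern without an Oracle instance, here is a rough SQLite analogue (run via Python's sqlite3). This is an illustration only, not Oracle semantics: SQLite has no sequences and cannot assign to NEW in a BEFORE trigger, so an INTEGER PRIMARY KEY stands in for ID_SEQ and an AFTER INSERT trigger mirrors the generated id into the registry.

```python
import sqlite3

# Illustrative SQLite analogue of the Oracle trigger: an INTEGER PRIMARY KEY
# stands in for ID_SEQ, and an AFTER INSERT trigger (SQLite cannot assign to
# NEW in a BEFORE trigger) mirrors the generated id into the registry table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE camera_dp (product_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dp_registry (
    product_id INTEGER, fs_location TEXT, parent_group_id INTEGER,
    product_name TEXT, shortdes TEXT, createdate TEXT, revision INTEGER
);
CREATE TRIGGER tr_camera_dp_dpr_create
AFTER INSERT ON camera_dp
FOR EACH ROW
BEGIN
    INSERT INTO dp_registry
    VALUES (NEW.product_id, 'placeholder', 0, '_image',
            'description placeholder', datetime('now'), 0);
END;
""")
conn.execute("INSERT INTO camera_dp (name) VALUES ('Foo')")
row = conn.execute("SELECT product_id FROM dp_registry").fetchone()
print(row[0])  # the id generated for camera_dp is mirrored into dp_registry
```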

Related

How to return a count result of only one column?

I have a table like this:
I'd like to make a selection by period (a date field) that returns the matching records, plus a column with the total count of records matched on a single column.
For example:
If I use this query:
SELECT date, product_type, operation, unit
FROM table
WHERE date BETWEEN '2019-08-26 00:00:00' AND '2019-08-26 23:59:59';
It must return:
But I wish I could return one more column, with the total of operations regardless of period, like this:
Where in this case, 6 is the frequency that "ajoy" appears in the table.
IMPORTANT! If I select a period where two or more operations are returned, the query must be able to return their frequencies as well.
I used part of your data (I'll include the data I had in my table), but I believe this is what you want.
create table mytab (job int, operations char(8), prod char(8),
ts_date datetime year to minute, unit int) lock mode row;
insert into mytab values(22, "ajoy","arrow","2020-05-11 08:51", 20);
insert into mytab values(22, "ajoy","arrow","2020-05-11 08:51", 20);
insert into mytab values(22, "ajoy","arrow","2020-05-11 08:51", 20);
insert into mytab values(22, "ajoy","arrow","2020-04-11 14:15", 20);
insert into mytab values(22, "ajoy","arrow","2020-04-11 14:15", 20);
insert into mytab values(22, "ajoy","arrow","2020-04-11 14:15", 20);
insert into mytab values(23, "dinn","curve","2020-05-11 08:51",1);
insert into mytab values(23, "dinn","point","2020-05-11 08:51",1);
insert into mytab values(23, "dinn","arrow","2020-04-11 08:51",1);
The query:
select job, operations, prod, ts_date, unit,
       (select count(*) from mytab b
         where b.operations = a.operations) total_operation
  from mytab a
 where a.ts_date between "2020-05-11 08:50" and "2020-05-11 08:59"
The above query gave me the following results which is I think what you were asking for:
job operations prod ts_date unit total_operation
22 ajoy arrow 2020-05-11 08:51 20 6
22 ajoy arrow 2020-05-11 08:51 20 6
22 ajoy arrow 2020-05-11 08:51 20 6
23 dinn curve 2020-05-11 08:51 1 3
23 dinn point 2020-05-11 08:51 1 3
This example is small and doesn't include/account for indexes you may wish to put on the table to speed up query performance.
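The correlated subquery runs once per outer row and counts every row in the whole table sharing the same operations value, ignoring the date filter. The original is Informix; here is a sanity check of the same query in SQLite via Python's sqlite3, using the answer's sample data.

```python
import sqlite3

# Reproduce the answer's data and the correlated-subquery count in SQLite.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytab (job INT, operations TEXT, prod TEXT,"
             " ts_date TEXT, unit INT)")
rows = (
    [(22, "ajoy", "arrow", "2020-05-11 08:51", 20)] * 3
    + [(22, "ajoy", "arrow", "2020-04-11 14:15", 20)] * 3
    + [(23, "dinn", "curve", "2020-05-11 08:51", 1),
       (23, "dinn", "point", "2020-05-11 08:51", 1),
       (23, "dinn", "arrow", "2020-04-11 08:51", 1)]
)
conn.executemany("INSERT INTO mytab VALUES (?, ?, ?, ?, ?)", rows)
# total_operation counts ALL rows with the same operations value,
# regardless of the ts_date filter on the outer query.
result = conn.execute("""
    SELECT a.job, a.operations, a.prod, a.ts_date, a.unit,
           (SELECT COUNT(*) FROM mytab b
             WHERE b.operations = a.operations) AS total_operation
      FROM mytab a
     WHERE a.ts_date BETWEEN '2020-05-11 08:50' AND '2020-05-11 08:59'
""").fetchall()
for row in result:
    print(row)
```

Five rows come back for the May window: the three "ajoy" rows each carry a total of 6, and the two "dinn" rows each carry a total of 3, matching the answer's output.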

sql - Postgresql, increment column based on other column values

I have table like
create table test(employee integer NOT NULL, code character varying(200), number integer)
I want to auto-increment the 'number' column on every inserted record:
insert into test(employee, code) values(17,'bangalore')
insert into test(employee, code) values(17,'bangalore')
insert into test(employee, code) values(17,'mumbai')
I want a result like:
employee code number
17 bangalore 1
17 bangalore 2
17 bangalore 3
17 mumbai 1
17 mumbai 2
17 bangalore 4
17 mumbai 3
18 bangalore 1
18 bangalore 2
18 mumbai 1
18 mumbai 2
For a batch upload of data, see if the approach below is useful.
create a temporary table test2
create table test2(employee integer NOT NULL, code character varying(200))
insert into test2(employee, code) values(17,'bangalore')
insert into test2(employee, code) values(17,'bangalore')
insert into test2(employee, code) values(17,'mumbai')
Insert into the actual table along with the incremental number:
insert into test(employee, code, number)
select employee, code, row_number() over (partition by employee, code) from test2
You could include an order by clause, using the primary key or another column like created_date:
over (partition by employee, code order by created_date)
Alternatively, for row-by-row inserts, you can compute the next number inline:
create table test (employee integer NOT NULL, code character varying(200), number integer)
insert into test(employee, code, number)
values (17, 'bangalore',
        (select coalesce(max(number) + 1, 1)
           from test where employee = 17 and code = 'bangalore'));
insert into test(employee, code, number)
values (17, 'bangalore',
        (select coalesce(max(number) + 1, 1)
           from test where employee = 17 and code = 'bangalore'));
insert into test(employee, code, number)
values (17, 'mumbai',
        (select coalesce(max(number) + 1, 1)
           from test where employee = 17 and code = 'mumbai'));
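The inline coalesce(max(number) + 1, 1) trick from the answer can be demonstrated end to end in SQLite via Python's sqlite3 (the SQL is the same as the PostgreSQL version). Note this pattern is not safe under concurrent inserts without locking; it is shown here as a single-session illustration.

```python
import sqlite3

# Demonstrate the coalesce(max(number)+1, 1) per-(employee, code) counter.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (employee INT NOT NULL, code TEXT, number INT)")
data = [(17, "bangalore"), (17, "bangalore"), (17, "mumbai")]
for emp, code in data:
    # The subquery finds the current max for this (employee, code) pair;
    # COALESCE turns the empty-table NULL into a starting value of 1.
    conn.execute("""
        INSERT INTO test (employee, code, number)
        VALUES (?, ?, (SELECT COALESCE(MAX(number) + 1, 1)
                         FROM test WHERE employee = ? AND code = ?))
    """, (emp, code, emp, code))
rows = conn.execute(
    "SELECT employee, code, number FROM test ORDER BY code, number"
).fetchall()
for row in rows:
    print(row)
```

The two 'bangalore' rows come out numbered 1 and 2, and 'mumbai' restarts at 1, matching the desired output.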

PostgreSQL - dynamic INSERT on column names

I'm looking to dynamically insert a set of columns from one table to another in PostgreSQL. What I think I'd like to do is read in a 'checklist' of column headings (those columns which exist in table 1, the storage table), and if they exist in the export table (table 2), insert them all in at once. Table 2's columns will vary, though: once imported, I'll drop it and import new data with a potentially different column structure. So I need to do the insert based on the column names.
e.g.
Table 1. - The storage table
ID NAME YEAR LITH_AGE PROV_AGE SIO2 TIO2 CAO MGO COMMENTS
1 John 1998 2000 3000 65 10 5 5 comment1
2 Mark 2005 2444 3444 63 8 2 3 comment2
3 Luke 2001 1000 1500 77 10 2 2 comment3
Table 2. - The export table
ID NAME MG# METHOD SIO2 TIO2 CAO MGO
1 Amy 4 Method1 65 10 5 5
2 Poe 3 Method2 63 8 2 3
3 Ben 2 Method3 77 10 2 2
As you can see the export table may include columns which do not exist in the storage table, so these would be ignored.
I want to insert all of these columns at once, because I've found that if I do it column by column, each insert extends the number of rows (maybe someone can solve this issue instead? Currently I've written a function to check whether a column name exists in table 2 and, if it does, insert it, but as said this extends the rows of the table every time and NULLs the rest of the columns).
The INSERT line from my function:
EXECUTE format('INSERT INTO %s (%s) (SELECT %s::%s FROM %s);',_tbl_import, _col,_col,_type,_tbl_export);
As a type of 'code example' for my question:
EXECUTE FORMAT('INSERT INTO table1 (%s) (SELECT (%s) FROM table2)',columns)
where 'columns' would be some variable denoting the columns that exist in the export table that need to go into the storage table. This will be variable as table 2 will be different every time.
This would ideally update Table 1 as:
ID NAME YEAR LITH_AGE PROV_AGE SIO2 TIO2 CAO MGO COMMENTS
1 John 1998 2000 3000 65 10 5 5 comment1
2 Mark 2005 2444 3444 63 8 2 3 comment2
3 Luke 2001 1000 1500 77 10 2 2 comment3
4 Amy NULL NULL NULL 65 10 5 5 NULL
5 Poe NULL NULL NULL 63 8 2 3 NULL
6 Ben NULL NULL NULL 77 10 2 2 NULL
UPDATED answer
As my original answer did not meet a requirement that came out later, I was asked to post an alternative example of an information_schema-style solution, so here it is.
I made two versions for solutions:
V1 - is equivalent to the already-given example using information_schema. But that solution relies on table1's column DEFAULTs. Meaning, if a table1 column that does not exist in table2 does not have DEFAULT NULL, it will be filled with whatever its default is.
V2 - is modified to force NULL when the two tables' columns mismatch, and does not inherit table1's own DEFAULTs.
Version1:
CREATE OR REPLACE FUNCTION insert_into_table1_v1()
RETURNS void AS $main$
DECLARE
    columns text;
BEGIN
    SELECT string_agg(c1.attname, ',')
      INTO columns
      FROM pg_attribute c1
      JOIN pg_attribute c2
        ON c1.attrelid = 'public.table1'::regclass
       AND c2.attrelid = 'public.table2'::regclass
       AND c1.attnum > 0
       AND c2.attnum > 0
       AND NOT c1.attisdropped
       AND NOT c2.attisdropped
       AND c1.attname = c2.attname
       AND c1.attname <> 'id';

    -- Following is the actual result of the query above, based on the given data examples:
    -- -[ RECORD 1 ]----------------------
    -- string_agg | name,si02,ti02,cao,mgo

    EXECUTE format(
        'INSERT INTO table1 ( %1$s )
         SELECT %1$s
           FROM table2',
        columns
    );
END;
$main$ LANGUAGE plpgsql;
Version2:
CREATE OR REPLACE FUNCTION insert_into_table1_v2()
RETURNS void AS $main$
DECLARE
    t1_cols text;
    t2_cols text;
BEGIN
    SELECT string_agg( c1.attname, ',' ),
           string_agg( COALESCE( c2.attname, 'NULL' ), ',' )
      INTO t1_cols,
           t2_cols
      FROM pg_attribute c1
      LEFT JOIN pg_attribute c2
        ON c2.attrelid = 'public.table2'::regclass
       AND c2.attnum > 0
       AND NOT c2.attisdropped
       AND c1.attname = c2.attname
     WHERE c1.attrelid = 'public.table1'::regclass
       AND c1.attnum > 0
       AND NOT c1.attisdropped
       AND c1.attname <> 'id';

    -- Following is the actual result of the query above, based on the given data examples:
    -- t1_cols                                                | t2_cols
    -- -------------------------------------------------------+--------------------------------------------
    -- name,year,lith_age,prov_age,si02,ti02,cao,mgo,comments | name,NULL,NULL,NULL,si02,ti02,cao,mgo,NULL
    -- (1 row)

    EXECUTE format(
        'INSERT INTO table1 ( %s )
         SELECT %s
           FROM table2',
        t1_cols,
        t2_cols
    );
END;
$main$ LANGUAGE plpgsql;
Also link to documentation about pg_attribute table columns if something is unclear: https://www.postgresql.org/docs/current/static/catalog-pg-attribute.html
Hopefully this helps :)
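The core idea (introspect both tables' catalogs, take the intersection of column names, and build the INSERT dynamically) can be sketched outside PostgreSQL too. Below is a small SQLite illustration via Python's sqlite3, using PRAGMA table_info in place of pg_attribute; the table and column names mirror the question's example, and this is a sketch of the technique, not a drop-in replacement for the plpgsql functions above.

```python
import sqlite3

# Sketch of the dynamic column-matching insert: find columns common to
# both tables (excluding 'id'), then build the INSERT ... SELECT from them.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (id INTEGER PRIMARY KEY, name TEXT, year INT,
                     sio2 INT, comments TEXT);
CREATE TABLE table2 (id INTEGER PRIMARY KEY, name TEXT, method TEXT,
                     sio2 INT);
INSERT INTO table2 (name, method, sio2) VALUES ('Amy', 'Method1', 65);
""")

def shared_columns(conn, t1, t2, exclude=("id",)):
    # PRAGMA table_info plays the role of pg_attribute here.
    def cols(t):
        return [r[1] for r in conn.execute(f"PRAGMA table_info({t})")]
    c2 = set(cols(t2))
    return [c for c in cols(t1) if c in c2 and c not in exclude]

cols = shared_columns(conn, "table1", "table2")
col_list = ", ".join(cols)
conn.execute(f"INSERT INTO table1 ({col_list}) SELECT {col_list} FROM table2")
row = conn.execute("SELECT name, year, sio2, comments FROM table1").fetchone()
print(row)  # columns missing from table2 are left NULL
```

In real PostgreSQL code, prefer format() with %I (or quote_ident) over plain string interpolation so column names are safely quoted.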

Converting Daily Snapshots to Ranges in PostgreSQL

I have a very large table with years' worth of daily snapshots, showing what the data looks like each day. For the sake of illustration the table looks something like this:
Part Qty Snapshot
---- ---- --------
A 5 1/1/2015
B 10 1/1/2015
A 5 1/2/2015
B 10 1/2/2015
A 6 1/3/2015
B 10 1/3/2015
A 5 1/4/2015
B 10 1/4/2015
I would like to implement a slowly changing data methodology and collapse this data into a form that would look like this (assume current date is 1/4/15)
Part Qty From Thru Active
---- ---- -------- -------- ------
A 5 1/1/2015 1/2/2015 I
B 10 1/1/2015 1/4/2015 A
A 6 1/3/2015 1/3/2015 I
A 5 1/4/2015 1/4/2015 A
I have a function that runs daily so when I capture the latest snapshot, I convert it to this methodology. This function runs once the data is actually loaded into the table with an active flag of 'C' (current), from the giant table (which is actually in DB2).
This works for me moving forward (once I have all past dates loaded), but I'd like to have a way to do this in one fell swoop, for all existing dates and convert the individual snapshot dates into ranges.
For what it's worth, my current method is to run this function for every possible date value. While it works, it's quite slow, as I have several years' worth of history to process, looping one day at a time.
Tables:
create table main.history (
part varchar(25) not null,
qty integer not null,
from_date date not null,
thru_date date not null,
active_flag char(1)
);
create table stage.history as select * from main.history where false;
create table partitioned.history_active (
constraint history_active_ck1 check (active_flag in ('A', 'C'))
) inherits (main.history);
create table partitioned.history_inactive (
constraint history_inactive_ck1 check (active_flag = 'I')
) inherits (main.history);
Function to process a day's worth of new data:
CREATE OR REPLACE FUNCTION main.capture_history(new_date date)
RETURNS void AS
$BODY$
DECLARE
rowcount integer := 0;
BEGIN
-- partitioned.history_active already has a current snapshot for new_date
truncate table stage.history;
insert into stage.history
select
part, qty,
min (from_date), max (thru_date),
case when max (thru_date) = new_date then 'A' else 'I' end
FROM
partitioned.history_active
group by
part, qty;
truncate table partitioned.history_active;
insert into partitioned.history_active
select * from stage.history
where active_flag = 'A';
insert into partitioned.history_inactive
select * from stage.history
where active_flag = 'I';
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
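One way to do the whole collapse in a single pass is a gaps-and-islands query: for each (part, qty), consecutive snapshot days share the value of date-as-number minus row_number(), so grouping on that difference yields one row per unbroken run. Below is a hedged sketch in SQLite via Python's sqlite3, using the sample data from the question (SQLite >= 3.25 for window functions; the same row_number() idea works in PostgreSQL, with date arithmetic in place of julianday).

```python
import sqlite3

# Gaps-and-islands collapse of daily snapshots into from/thru ranges.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE snapshots (part TEXT, qty INT, snapshot TEXT)")
conn.executemany("INSERT INTO snapshots VALUES (?, ?, ?)", [
    ("A", 5, "2015-01-01"), ("B", 10, "2015-01-01"),
    ("A", 5, "2015-01-02"), ("B", 10, "2015-01-02"),
    ("A", 6, "2015-01-03"), ("B", 10, "2015-01-03"),
    ("A", 5, "2015-01-04"), ("B", 10, "2015-01-04"),
])
rows = conn.execute("""
    SELECT part, qty,
           MIN(snapshot) AS from_date,
           MAX(snapshot) AS thru_date,
           CASE WHEN MAX(snapshot) = (SELECT MAX(snapshot) FROM snapshots)
                THEN 'A' ELSE 'I' END AS active_flag
      FROM (SELECT part, qty, snapshot,
                   -- consecutive days in the same (part, qty) run share grp
                   julianday(snapshot)
                     - ROW_NUMBER() OVER (PARTITION BY part, qty
                                          ORDER BY snapshot) AS grp
              FROM snapshots)
     GROUP BY part, qty, grp
     ORDER BY part, from_date
""").fetchall()
for row in rows:
    print(row)
```

This reproduces the desired four ranges, including the A/5 part that goes inactive on 1/3 and reappears on 1/4 as a separate active row.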

TSQL passing 2 values array to stored procedure

I'm using SQL Server 2012 and C#.
Imagine have something similar to a shopping cart and now need to create an order with the following items:
productA - 4 (qty)
productB - 1 (qty)
productC - 9 (qty)
In my C# code I have a list that looks like this:
id : "productA" , qty : "4"
id : "productB" , qty : "1"
id : "productV" , qty : "9"
Questions:
How can I pass the list of 2 values to the stored procedure?
How can I have the stored procedure run 3 while loops, one running 4 times, then one once, then one 9 times, in order to physically create one record per item?
Note: In my case I don't have a QTY column in the table; I need to specifically create one record per item on the order.
You can do this with a table-valued parameter (TVP) in SQL.
Sql Authority
MSDN
You can do this by passing the TVP; the example below simulates it with a @table variable:
declare @table table(product varchar(10), qty int)
insert into @table
select 'product1', 4 union
select 'product2', 2
;WITH cte AS (
    SELECT product, qty FROM @table
    UNION ALL
    SELECT product, qty - 1 FROM cte WHERE qty > 1
)
SELECT t.product, t.qty
FROM cte c
JOIN @table t ON c.product = t.product
ORDER BY 1
Reference for the CTE : Creating duplicate records for a given table row
To pass a table into the stored procedure, use a table-valued parameter.
First, create a type:
CREATE TYPE [dbo].[ProductsTableType] AS TABLE(
[ID] [varchar](50) NOT NULL,
[qty] [int] NOT NULL
)
Then use this type in the stored procedure. The @ParamProducts parameter is a table and can be used in all queries where a table can be used.
CREATE PROCEDURE [dbo].[AddProducts]
@ParamProducts ProductsTableType READONLY
AS
BEGIN
...
END
To actually insert the required number of rows I would use a table of numbers: http://web.archive.org/web/20150411042510/http://sqlserver2000.databases.aspfaq.com/why-should-i-consider-using-an-auxiliary-numbers-table.html
In my database I have a table called Numbers with a column Number that contains numbers from 1 to 100,000. Once you have such table it is trivial to get the set that you need.
DECLARE @T TABLE (ID varchar(50), qty int);
INSERT INTO @T (ID, qty) VALUES ('productA', 4);
INSERT INTO @T (ID, qty) VALUES ('productB', 1);
INSERT INTO @T (ID, qty) VALUES ('productV', 9);
SELECT *
FROM
    @T AS Products
    INNER JOIN dbo.Numbers ON Products.qty >= dbo.Numbers.Number
;
Result set
ID qty Number
productA 4 1
productA 4 2
productA 4 3
productA 4 4
productB 1 1
productV 9 1
productV 9 2
productV 9 3
productV 9 4
productV 9 5
productV 9 6
productV 9 7
productV 9 8
productV 9 9
This is an example. In your case you would have this SELECT inside INSERT INTO YourFinalTable.
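The numbers-table join generalizes beyond SQL Server. Here is a quick SQLite check via Python's sqlite3, with a recursive CTE standing in for the persistent dbo.Numbers table; the product data matches the answer's example.

```python
import sqlite3

# Expand each product into qty rows by joining against a numbers table
# (generated here with a recursive CTE instead of a stored dbo.Numbers).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id TEXT, qty INT)")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [("productA", 4), ("productB", 1), ("productV", 9)])
rows = conn.execute("""
    WITH RECURSIVE numbers(number) AS (
        SELECT 1
        UNION ALL
        SELECT number + 1 FROM numbers WHERE number < 100
    )
    SELECT p.id, p.qty, n.number
      FROM products p
      JOIN numbers n ON p.qty >= n.number
     ORDER BY p.id, n.number
""").fetchall()
print(len(rows))  # 4 + 1 + 9 rows, one per physical item
```

Wrapping this SELECT in an INSERT INTO your final table is then the one-statement equivalent of the nested while loops asked about.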