I am working with T-SQL and my task is to write a trigger that fires after every insert into the Role table and checks whether the number of main roles exceeds 3. If that is the case, the insert has to be stopped.
Here are the given tables:
Actor(AID, name) (AID is the primary key)
Film(FID, name, dateRecorded, numberOfMainRoles) (FID is the primary key)
RoleType(RTID, nameOfType) (RTID is the primary key)
Role(FID, AID, RTID) (FID and AID form the primary key; RTID is a foreign key)
Here is my trigger so far:
CREATE TRIGGER addRole
ON Role
AFTER INSERT
AS
BEGIN
    DECLARE @FID INT
    DECLARE @AID INT
    DECLARE @RTID INT
    DECLARE @NUMOFROLES INT
    SET @NUMOFROLES = 0

    DECLARE CR1 CURSOR LOCAL FOR
        SELECT * FROM INSERTED

    OPEN CR1
    FETCH NEXT FROM CR1 INTO @FID, @AID, @RTID
    WHILE @@FETCH_STATUS = 0
    BEGIN
        ...
        FETCH NEXT FROM CR1 INTO @FID, @AID, @RTID
    END
    CLOSE CR1
    DEALLOCATE CR1
END
For the WHILE loop I have an idea, but I don't know how to realize it. I would like to use the following query inside it:
SELECT rt.RTID
FROM (SELECT r.FID, r.AID, r.RTID
      FROM Film f, Role r
      WHERE f.FID = r.FID) AS t,
     RoleType rt
WHERE t.RTID = rt.RTID AND rt.nameOfType = 'main'
And then my idea is to check whether the RTID of the inserted Role matches one of the RTIDs returned by the query above, and if so increment @NUMOFROLES.
Also, I would probably need to check whether @NUMOFROLES exceeds the numberOfMainRoles attribute in the Film table, but I am not sure how to do that.
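For illustration, the kind of check I have in mind (untested, set-based, using the schema above) is roughly the following, though I have not managed to get it working inside the trigger:

-- Untested sketch: for every film touched by the insert, count its main roles
-- and compare against Film.numberOfMainRoles; undo the insert if the limit is exceeded.
IF EXISTS (
    SELECT f.FID
    FROM Film f
    JOIN Role r      ON r.FID = f.FID
    JOIN RoleType rt ON rt.RTID = r.RTID
    WHERE rt.nameOfType = 'main'
      AND f.FID IN (SELECT FID FROM INSERTED)
    GROUP BY f.FID, f.numberOfMainRoles
    HAVING COUNT(*) > f.numberOfMainRoles
)
BEGIN
    ROLLBACK TRANSACTION
END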
Would be very thankful if someone could solve this for me.
I am copying (importing) data from table tmp_header into the as_solution2 table. First, IdNumber and Date need to be checked against the destination table so that repeated values are not copied: if the date and idNumber are found in the destination table, the row is skipped; if not, the row is copied into as_solution2.
The source table has 800,000 records and the destination table already contains 200,000 records.
Caveat: the id_solution PK in the "as_solution2" table is not serial, so I created a sequence that starts from the last id.
v_max_cod_solicitud := (select max(id_solution)+1 from municipalidad.as_solution2);
CREATE SEQUENCE increment START v_max_cod_solicitud;
This raises an error.
tmp_header (id, cod_cause, idNumber , date_sol(2012-05-12), glosa_desc)
as_solution2(id_solution, cod_cause, idNumber, date_sol, desc )
CREATE OR REPLACE FUNCTION municipalidad.as_importar()
RETURNS integer AS
$$
DECLARE
v_max_cod_solicitud numeric;
id_solution numeric;
begin
v_max_cod_solicitud := (select max(id_solution)+1 from municipalidad.as_solution2);
CREATE SEQUENCE increment START v_max_cod_solicitud;
INSERT INTO municipalidad.as_solution2(
id_solution,
cod_cause,
idNumber,
date_sol,
desc
)
SELECT
(SELECT nextval('increment')), -- when saving I need to start from the last sequence number
cod_causingreso,
idNumber,
date_sol,
glosa_atenc
FROM municipalidad.tmp_header as tmp_e
WHERE(SELECT count(*)
FROM municipalidad.as_solution2 as s2
WHERE s2.idNumber = tmp_e.idNumber AND s2.date_sol::date = tmp_e.date_sol::date)=0;
drop sequence increment;
return 1;
end
$$
LANGUAGE plpgsql;
Thanks in advance.
You can brute-force the creation of the sequence with the computed start value by building the statement dynamically and running it with EXECUTE, since plain DDL inside PL/pgSQL cannot take a variable for the START parameter:
execute (format ('CREATE SEQUENCE incremento start %s', v_max_cod_solicitud));
Unrelated, but I think you will gain efficiency by changing your insert to use an anti-join instead of the WHERE (SELECT COUNT(*) ...) = 0:
INSERT INTO as_solution2(
id_solution,
cod_cause,
idNumber,
date_sol,
description
)
SELECT
nextval('incremento'), -- when saving i need to start from the last sequence number
cod_causingreso,
idNumber,
date_sol,
glosa_atenc
FROM tmp_header as tmp_e
WHERE not exists (
select null
from as_solution2 s2
where
s2.idNumber = tmp_e.idNumber AND
s2.date_sol::date = tmp_e.date_sol::date
)
This will scale very nicely as your dataset increases in size.
Even though it's not listed as a reserved keyword in https://www.postgresql.org/docs/9.5/sql-keywords-appendix.html, the name increment in your CREATE SEQUENCE statement might not be allowed here:
CREATE SEQUENCE increment START v_max_cod_solicitud;
because the parser expects this:
CREATE SEQUENCE name [ INCREMENT [ BY ] increment ]
and it probably thinks you forgot the name.
I've tried to figure my way around this, but I'm relatively new to T-SQL.
These are my two tables:
This is my dbo.UsersAccountLink table:
This is my Company.Token table:
Right now the UsersAccountLink.CorporationId is blank and I need to populate it based on what is in the Company.Token table.
So, I need to loop through each record in the dbo.UsersAccountLink table, get its TokenId value, query the Company.Token table with that TokenId, and lastly update the record in the dbo.UsersAccountLink table with the CorporationId.
Ultimately I want to update the dbo.UsersAccountLink.CorporationId with the value from Company.Token.CorporationId.
I hope that makes sense.
Well, here is what I have so far... It's not much but I'm struggling.
USE SuburbanPortal
go
-- Get the number of rows in the looping table
DECLARE @RowCount INT
SET @RowCount = (SELECT COUNT(*) FROM dbo.UsersAccountLink)
-- Declare an iterator
DECLARE @I INT
-- Initialize the iterator
SET @I = 1
-- Loop through the rows of a table #myTable
WHILE (@I <= @RowCount)
BEGIN
-- Declare variables to hold the data which we get after looping each record
DECLARE @CorpId UNIQUEIDENTIFIER, @TokenId UNIQUEIDENTIFIER
-- Get the data from table and set to variables
SET @TokenId = (SELECT [TokenId] FROM [SuburbanPortal].[dbo].[UsersAccountLink])
SET @CorpId = (SELECT [CorporationId] FROM [SuburbanPortal].[Company].[Token] WHERE @TokenId = ???)
-- Increment the iterator
SET @I = @I + 1
END
Welcome to SQL Server. Your code indicates that you are coming from a programming background; this pattern is called "row-by-agonizing-row" (RBAR). The first order of business is to replace the "loop" thinking with "join" thinking. Instead of looping through one table and then searching for a match in the other, use a join:
UPDATE UAL
SET UAL.CorporationId = T.CorporationId
FROM dbo.UsersAccountLink UAL
INNER JOIN Company.Token T ON UAL.TokenId = T.TokenId
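If you want to sanity-check the mapping first, the same join can be run as a read-only SELECT (column names as in the question):

-- Preview of what the UPDATE above would change
SELECT UAL.TokenId,
       UAL.CorporationId AS CurrentCorporationId,
       T.CorporationId   AS NewCorporationId
FROM dbo.UsersAccountLink UAL
INNER JOIN Company.Token T ON UAL.TokenId = T.TokenId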
I have a table which doesn't have a unique ID. I want to write a stored procedure that assigns each row its row number as the ID, but I don't know how to get the current row number. This is what I have done so far:
CREATE OR ALTER PROCEDURE INSERTID_MYTABLE
returns (
cnt integer)
as
declare variable rnaml_count integer;
begin
/* Procedure Text */
Cnt = 1;
for select count(*) from MYTABLE r into :rnaml_count do
while (cnt <= rnaml_count) do
begin
update MYTABLE set id=:cnt
where :cnt = /*how should I get the rownumber here from select??*/
Cnt = Cnt + 1;
suspend;
end
end
I think a better way would be:
1. Add a new nullable column (let's call it ID).
2. Create a generator/sequence (let's call it GEN_ID).
3. Create a before insert/update trigger that fetches a new value from the sequence whenever NEW.ID is null. Example.
4. Do update table set ID = ID. (This will populate the keys.)
5. Change the ID column to not null.
As a bonus, the trigger can be left in place, because it will generate the value for newly inserted rows. A rough sketch of the whole sequence of steps follows below.
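An untested sketch in Firebird syntax (object names are illustrative; the final NOT NULL step as written needs Firebird 3+, older versions require a different route):

ALTER TABLE MYTABLE ADD ID INTEGER;                 -- 1. new nullable column

CREATE SEQUENCE GEN_MYTABLE_ID;                     -- 2. the generator/sequence

SET TERM ^ ;
CREATE TRIGGER MYTABLE_BIU FOR MYTABLE              -- 3. fill ID when it is null
ACTIVE BEFORE INSERT OR UPDATE
AS
BEGIN
  IF (NEW.ID IS NULL) THEN
    NEW.ID = NEXT VALUE FOR GEN_MYTABLE_ID;
END^
SET TERM ; ^

UPDATE MYTABLE SET ID = ID;                         -- 4. populate existing rows

ALTER TABLE MYTABLE ALTER COLUMN ID SET NOT NULL;   -- 5. enforce NOT NULL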
I have a question about copying rows in PostgreSQL. My table hierarchy is quite complex, where many tables are linked to each other via foreign keys. For the sake of simplicity, I will explain my question with two tables, but please bear in mind that my actual case requires a lot more complexity.
Say I have the following two tables:
table A
(
    identifier integer primary key,
    ... -- other fields
);
table B
(
    identifier integer primary key,
    a integer references A (identifier),
    ... -- other fields
);
Say A and B hold the following rows:
A(1)
B(1, 1)
B(2, 1)
My question is: I would like to create a copy of a row in A such that the related rows in B are also copied as new rows referencing the new A row. This would give:
A(1) -- the old row
A(2) -- the new row
B(1, 1) -- the old row
B(2, 1) -- the old row
B(3, 2) -- the new row
B(4, 2) -- the new row
Basically I am looking for a COPY/INSERT CASCADE.
Is there a neat trick to achieve this more or less automatically? Maybe by using temporary tables?
I believe that if I have to write all the INSERT INTO ... FROM ... queries myself in the correct order and stuff, I might go mental.
update
Let's answer my own question ;)
I did some try-outs with the RULE mechanisms in PostgreSQL and this is what I came up with:
First, the table definitions:
drop table if exists A cascade;
drop table if exists B cascade;
create table A
(
identifier serial not null primary key,
name varchar not null
);
create table B
(
identifier serial not null primary key,
name varchar not null,
a integer not null references A (identifier)
);
Next, for each table, we create a function and corresponding rule which translates UPDATE into INSERT.
create function A(in A, in A) returns integer as
$$
declare
r integer;
begin
-- A
if ($1.identifier <> $2.identifier) then
insert into A (identifier, name) values ($2.identifier, $2.name) returning identifier into r;
else
insert into A (name) values ($2.name) returning identifier into r;
end if;
-- B
update B set a = r where a = $1.identifier;
return r;
end;
$$ language plpgsql;
create rule A as on update to A do instead select A(old, new);
create function B(in B, in B) returns integer as
$$
declare
r integer;
begin
if ($1.identifier <> $2.identifier) then
insert into B (identifier, name, a) values ($2.identifier, $2.name, $2.a) returning identifier into r;
else
insert into B (name, a) values ($2.name, $2.a) returning identifier into r;
end if;
return r;
end;
$$ language plpgsql;
create rule B as on update to B do instead select B(old, new);
Finally, some testing:
insert into A (name) values ('test_1');
insert into B (name, a) values ('test_1_child', (select identifier from a where name = 'test_1'));
update A set name = 'test_2', identifier = identifier + 50;
update A set name = 'test_3';
select * from A, B where B.a = A.identifier;
This seems to work quite fine. Any comments?
This will work. One thing I note you wisely avoided is DO ALSO rules on inserts and updates. DO ALSO with insert and update is pretty dangerous, so avoid that at pretty much all cost.
On further reflection, however, triggers are not going to perform any worse and they have fewer sharp corners.
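For reference, a rough, untested sketch of the trigger-based variant for table A (it ignores the identifier-override branch handled by the rule above):

create or replace function a_copy_on_update() returns trigger as
$$
declare
    r integer;
begin
    -- insert a copy of the updated row and remember its new identifier
    insert into A (name) values (NEW.name) returning identifier into r;
    -- repoint the dependent B rows at the copy, as the rule's function did
    update B set a = r where a = OLD.identifier;
    return NULL;  -- returning NULL from a BEFORE trigger suppresses the original UPDATE
end;
$$ language plpgsql;

create trigger a_copy_before_update
    before update on A
    for each row execute procedure a_copy_on_update();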
I have two tables, let's say tblA and tblB.
I need to insert a row into tblA and use the returned id as a value to be inserted in one of the columns of tblB.
I tried to find this in the documentation but could not figure it out. Is it possible to write a statement (intended to be used as a prepared statement) like
INSERT INTO tblB VALUES
(DEFAULT, (INSERT INTO tblA (DEFAULT, 'x') RETURNING id), 'y')
like we do for SELECT?
Or should I do this by creating a stored procedure? I'm not sure if I can create a prepared statement out of a stored procedure.
Please advise.
Regards,
Mayank
You'll need to wait for PostgreSQL 9.1 for this:
with
ids as (
insert ...
returning id
)
insert ...
from ids;
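For example, with the tables from the question it should then look roughly like this (untested; the column names a_id and val in tblB are placeholders, and tblB.id is assumed to have a default):

-- data-modifying CTE (PostgreSQL 9.1+): insert the parent row, reuse its id for the child
with ids as (
    insert into tblA values (default, 'x')
    returning id
)
insert into tblB (a_id, val)
select id, 'y'
from ids;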
In the meanwhile, you need to use plpgsql, a temporary table, or some extra logic in your app...
This is possible with 9.0 and the new DO for anonymous blocks:
do $$
declare
new_id integer;
begin
insert into foo1 (id) values (default) returning id into new_id;
insert into foo2 (id) values (new_id);
end$$;
This can be executed as a single statement. I haven't tried creating a PreparedStatement out of that though.
Edit
Another approach would be to simply do it in two steps, first run the insert into tableA using the returning clause, get the generated value through JDBC, then fire the second insert, something like this:
PreparedStatement stmt_1 = con.prepareStatement("INSERT INTO tblA VALUES (DEFAULT, ?) returning id");
stmt_1.setString(1, "x");
stmt_1.execute(); // important! Do not use executeUpdate()!
ResultSet rs = stmt_1.getResultSet();
long newId = -1;
if (rs.next()) {
newId = rs.getLong(1);
}
PreparedStatement stmt_2 = con.prepareStatement("INSERT INTO tblB VALUES (default,?,?)");
stmt_2.setLong(1, newId);
stmt_2.setString(2, "y");
stmt_2.executeUpdate();
You can do this in two inserts, using currval() to retrieve the foreign key (provided that key is serial):
create temporary table tb1a (id serial primary key, t text);
create temporary table tb1b (id serial primary key,
tb1a_id int references tb1a(id),
t text);
begin;
insert into tb1a values (DEFAULT, 'x');
insert into tb1b values (DEFAULT, currval('tb1a_id_seq'), 'y');
commit;
The result:
select * from tb1a;
id | t
----+---
3 | x
(1 row)
select * from tb1b;
id | tb1a_id | t
----+---------+---
2 | 3 | y
(1 row)
Using currval in this way is safe whether in or outside of a transaction. From the Postgresql 8.4 documentation:
currval: Return the value most recently obtained by nextval for this sequence in the current session. (An error is reported if nextval has never been called for this sequence in this session.) Because this is returning a session-local value, it gives a predictable answer whether or not other sessions have executed nextval since the current session did.
You may want to use an AFTER INSERT trigger for that. Something along the lines of:
create function dostuff() returns trigger as $$
begin
insert into table_b(field_1, field_2) values ('foo', NEW.id);
return new; --values returned by after triggers are ignored, anyway
end;
$$ language 'plpgsql';
create trigger trdostuff after insert on table_name for each row execute procedure dostuff();
AFTER INSERT is needed because the id has to exist before you can reference it. Hope this helps.
Edit
A trigger is called in the same "block" as the command that triggered it, even if you are not using explicit transactions; in other words, it effectively becomes part of that command. Therefore, there is no risk of something changing the referenced id between the inserts.