DB2 query: insert data into the history table if it does not already exist

I have a history table, a transaction table, and a reference table.
If the status in the reference table is CLOSE, I need to take those records, check whether they already exist in the history table, and if not, insert them from the transaction table. I have written a query like this and am wondering whether there is a better approach. Please advise: can this query be used for huge volumes of data?
INSERT INTO LIB1.HIST_TBL
( SELECT R.ACCT, R.STATUS, R.DATE
  FROM LIB2.HIST_TBL R
  JOIN LIB1.REF_TBL C
    ON R.ACCT = C.ACCT
  WHERE C.STATUS = '5'
    AND R.ACCT NOT IN (SELECT ACTNO FROM LIB1.HIST_TBL) );

If you're on a current release of DB2 for i, take a look at the MERGE statement:
MERGE INTO hist_tbl h
USING (SELECT * FROM ref_tbl
       WHERE status = 'S') r
   ON h.actno = r.actno
WHEN NOT MATCHED THEN
   INSERT (actno, histcol2, histcol3)
   VALUES (r.actno, r.refcol2, r.refcol3)
-- if needed
WHEN MATCHED THEN
   UPDATE SET (actno, histcol2, histcol3) = (r.actno, r.refcol2, r.refcol3);
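If MERGE isn't available, a hedged alternative to the posted query is to replace NOT IN with NOT EXISTS, which avoids the empty-result surprise NOT IN produces when the subquery returns NULLs and typically lets the optimizer use an index on the history table. This is only a sketch built from the column names in the question; it assumes ACCT and ACTNO refer to the same key and that the explicit column list matches LIB1.HIST_TBL:

INSERT INTO LIB1.HIST_TBL (ACCT, STATUS, DATE)
SELECT R.ACCT, R.STATUS, R.DATE
FROM LIB2.HIST_TBL R
JOIN LIB1.REF_TBL C
  ON R.ACCT = C.ACCT
WHERE C.STATUS = '5'
  AND NOT EXISTS (SELECT 1
                  FROM LIB1.HIST_TBL H
                  WHERE H.ACTNO = R.ACCT);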

AS400 index configuration table

How can I view the indexes of a particular table in AS400? In which table is the index description of a table stored?
If your "index" is really a logical file, you can see a list of these using:
select * from qsys2.systables
where table_schema = 'YOURLIBNAME' and table_type = 'L'
To complete the previous answers: if your AS400/IBM i files are "IBM old-style" physical and logical files, then qsys2.syskeys and qsys2.sysindexes are empty.
==> In that case you retrieve the index information from the QADBXREF table (file-level info) and the key field lists from QADBKFLD:
select * from QSYS.QADBXREF where DBXFIL = 'YOUR_LOGICAL_FILE_NAME' and DBXLIB = 'YOUR_LIBRARY'
select * from QSYS.QADBKFLD where DBKFIL = 'YOUR_LOGICAL_FILE_NAME' and DBKLB2 = 'YOUR_LIBRARY'
WARNING: YOUR_LOGICAL_FILE_NAME is not your "table name" but the name of the file! You have to join another table, QSYS.QADBFDEP, to match LOGICAL_FILE_NAME to TABLE_NAME.
To find the indexes from your table's name:
Select r.*
from QSYS.QADBXREF r, QSYS.QADBFDEP d
where d.DBFFDP = r.DBXFIL and d.DBFLIB=r.DBXLIB
and d.DBFFIL = 'YOUR_TABLE_NAME' and d.DBFLIB = 'YOUR_LIBRARY'
To find all index fields for your table:
Select DBXFIL , f.DBKFLD, DBKPOS , t.DBXUNQ
from QSYS.QADBXREF t
INNER JOIN QSYS.QADBKFLD f on DBXFIL = DBKFIL and DBXLIB = DBKLIB
INNER JOIN QSYS.QADBFDEP d on d.DBFFDP = t.DBXFIL and d.DBFLIB=t.DBXLIB
where d.DBFFIL = 'YOUR_TABLE_NAME' and d.DBFLIB = 'YOUR_LIBRARY'
order by DBXFIL, DBKPOS
If your indexes are created with SQL, you can see the list of indexes in the sysindexes system view:
SELECT * FROM qsys2.sysindexes WHERE TABLE_SCHEMA='YOURLIBNAME' and
TABLE_NAME = 'YOURTABLENAME'
If you want the detail columns for an index, you can join the syskeys table:
SELECT KEYS.INDEX_NAME, KEYS.COLUMN_NAME
FROM qsys2.syskeys KEYS
JOIN qsys2.sysindexes IX ON KEYS.ixname = IX.name
WHERE TABLE_SCHEMA='YOURLIBNAME' and TABLE_NAME = 'YOURTABLENAME'
order by INDEX_NAME
You could also use commands to get the information. The command DSPDBR FILE(LIBNAME/FILENAME) will show a list of the objects dependent on a physical file. The objects that show a data dependency can then be explored further by running DSPFD FILE(LIBNAME/FILENAME), which will show the access paths of the logical file.
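For example, with placeholder library and file names:

DSPDBR FILE(MYLIB/MYPF)                     /* list objects dependent on the physical file */
DSPFD  FILE(MYLIB/MYLF) TYPE(*ACCPTH)       /* show the access path of a dependent logical */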

Cannot insert NULL value into column error

I have an issue where I want to update a column in a table, with a trigger updating the same column in another table. It says I cannot insert NULL, but I can't seem to understand where it gets that NULL value from. This is the trigger:
CREATE TRIGGER Custom_WF_Update_WF_DefinitionSteps_DefinitionId ON WF.Definition
AFTER UPDATE AS BEGIN
IF UPDATE(DefinitionId)
IF TRIGGER_NESTLEVEL() < 2
BEGIN
ALTER TABLE WF.DefinitionSteps NOCHECK CONSTRAINT ALL
UPDATE WF.DefinitionSteps
SET DefinitionId =
(SELECT i.DefinitionId
FROM inserted i,
deleted d
WHERE WF.DefinitionSteps.DefinitionId = d.DefinitionId
AND i.oldPkCol = d.DefinitionId)
WHERE WF.DefinitionSteps.DefinitionId IN
(SELECT DefinitionId FROM deleted)
ALTER TABLE WF.DefinitionSteps CHECK CONSTRAINT ALL
END
END
This update statement works just fine:
UPDATE [CCHMergeIntermediate].[WF].[Definition]
SET DefinitionId = source.DefinitionId + 445
FROM [CCHMergeIntermediate].[WF].[Definition] source
But this one fails:
UPDATE [CCHMergeIntermediate].[WF].[Definition]
SET DefinitionId = target.DefinitionId
FROM [CCHMergeIntermediate].[WF].[Definition] source
INNER JOIN [centralq3].[WF].[Definition] target
ON (((source.Name = target.Name) OR (source.Name IS NULL AND target.Name IS NULL)))
I get the following error:
Msg 515, Level 16, State 2, Procedure Custom_WF_Update_WF_DefinitionSteps_DefinitionId, Line 7
Cannot insert the value NULL into column 'DefinitionId', table 'CCHMergeIntermediate.WF.DefinitionSteps'; column does not allow nulls. UPDATE fails.
If I do a select instead of the update statement, like this:
SELECT source.DefinitionId, target.DefinitionId
FROM [CCHMergeIntermediate].[WF].[Definition] source
INNER JOIN [centralq3].[WF].[Definition] target
ON (((source.Name = target.Name) OR (source.Name IS NULL AND target.Name IS NULL)))
I get this result:
http://i.stack.imgur.com/3cZsM.png (sorry for the external link, I don't have enough reputation to post an image here)
What am I doing wrong? What don't I see? What am I missing?
The problem was in the trigger, in the subquery condition. I changed the second predicate from i.oldPkCol = d.DefinitionId to i.oldPkCol = d.oldPkCol and it worked:
UPDATE WF.DefinitionSteps
SET DefinitionId =
(SELECT i.DefinitionId
FROM inserted i,
deleted d
WHERE WF.DefinitionSteps.DefinitionId = d.DefinitionId
AND i.oldPkCol = d.oldPkCol)
WHERE WF.DefinitionSteps.DefinitionId IN
(SELECT DefinitionId FROM deleted)

TSQL CTE Error: Incorrect syntax near ')'

I am developing a TSQL stored proc using SSMS 2008 and am receiving the above error while generating a CTE. I want to add logic to this SP to return every day, not just the days with data. How do I do this? Here is my SP so far:
ALTER Proc [dbo].[rpt_rd_CensusWithChart]
@program uniqueidentifier = NULL,
@office uniqueidentifier = NULL
AS
DECLARE @a_date datetime
SET @a_date = case when MONTH(GETDATE()) >= 7 THEN '7/1/' + CAST(YEAR(GETDATE()) AS VARCHAR(30))
ELSE '7/1/' + CAST(YEAR(GETDATE())-1 AS VARCHAR(30)) END
if exists (
select * from tempdb.dbo.sysobjects o where o.xtype in ('U') and o.id = object_id(N'tempdb..#ENROLLEES')
) DROP TABLE #ENROLLEES;
if exists (
select * from tempdb.dbo.sysobjects o where o.xtype in ('U') and o.id = object_id(N'tempdb..#DISCHARGES')
) DROP TABLE #DISCHARGES;
declare @sum_enrollment int
set @sum_enrollment =
(select sum(1)
from enrollment_view A
join enrollment_info_expanded_view C on A.enrollment_id = C.enroll_el_id
where
(@office is NULL OR A.group_profile_id = @office)
AND (@program is NULL OR A.program_info_id = @program)
and (C.pe_end_date IS NULL OR C.pe_end_date > @a_date)
AND C.pe_start_date IS NOT NULL and C.pe_start_date < @a_date)
select
A.program_info_id as [Program code],
A.[program_name],
A.profile_name as Facility,
A.group_profile_id as Facility_code,
A.people_id,
1 as enrollment_id,
C.pe_start_date,
C.pe_end_date,
LEFT(datename(month,(C.pe_start_date)),3) as a_month,
day(C.pe_start_date) as a_day,
@sum_enrollment as sum_enrollment
into #ENROLLEES
from enrollment_view A
join enrollment_info_expanded_view C on A.enrollment_id = C.enroll_el_id
where
(@office is NULL OR A.group_profile_id = @office)
AND (@program is NULL OR A.program_info_id = @program)
and (C.pe_end_date IS NULL OR C.pe_end_date > @a_date)
AND C.pe_start_date IS NOT NULL and C.pe_start_date >= @a_date
;WITH #ENROLLEES AS (
SELECT '7/1/11' AS dt
UNION ALL
SELECT DATEADD(d, 1, pe_start_date) as dt
FROM #ENROLLEES s
WHERE DATEADD(d, 1, pe_start_date) <= '12/1/11')
The most obvious issue (and probably the one causing the error message) is the absence of the actual statement to which the last CTE is supposed to apply. I presume it should be a SELECT statement, one that combines the result set of the CTE with the data from the #ENROLLEES table.
And that's where another issue emerges.
Apart from the fact that a name starting with a single # is hardly advisable for anything that is not a local temporary table (and a CTE is not a table), you've also chosen for your CTE a name that already belongs to an existing table, namely the #ENROLLEES temporary table you intend to pull data from. You should not reuse an existing table's name for a CTE, or you will not be able to join the table with the CTE because of the name conflict.
It also appears that, based on its code, the last CTE is an unfinished implementation of the logic you say you want to add to the SP. I can suggest an approach, but note that there are actually two different requests in your post: one is about finding the cause of the error message, the other is about code for new logic. You are generally better off separating such requests into distinct questions, and that probably applies here as well.
Anyway, here's my suggestion:
build a complete list of dates you want to be accounted for in the result set (that's what the CTE will be used for);
left-join that list with the #ENROLLEES table to pick data for the existing dates and some defaults or NULLs for the non-existing ones.
It might be implemented like this:
… /* all your code up until the last WITH */
;
WITH cte AS (
SELECT CAST('7/1/11' AS date) AS dt
UNION ALL
SELECT DATEADD(d, 1, dt) as dt
FROM cte
WHERE dt < '12/1/11'
)
SELECT
cte.dt,
tmp.[Program code],
tmp.[program_name],
… /* other columns as necessary; you might also consider
enveloping some or all of the "tmp" columns in ISNULLs,
like in
ISNULL(tmp.[Program code], '(none)') AS [Program code]
to provide default values for absent data */
FROM cte
LEFT JOIN #ENROLLEES tmp ON cte.dt = tmp.pe_start_date
OPTION (MAXRECURSION 0)  -- the date range spans more than the default limit of 100 recursions
;

Metadata about a column in SQL Server 2008 R2?

I'm trying to figure out a way to store metadata about a column without repeating myself.
I'm currently working on a generic dimension-loading SSIS package that will handle all my dimensions. It currently does the following:
Create a temporary table identical to the table whose name is given as a parameter (this is a generic stored procedure that receives the table name as a parameter and then does: select top 0 * into ##[INSERT ORIGINAL TABLE NAME HERE] from [INSERT ORIGINAL TABLE NAME HERE]).
==> Here we insert custom code for this particular dimension that first queries the data from a data source to get my delta, then transforms the data, and finally loads it into my temporary table.
Merge the temporary table into my original table with a T-SQL MERGE, handling type 1 and type 2 fields accordingly.
My problem right now is that I have to maintain a table with all the fields in it just to store metadata telling my scripts whether a particular field is type 1 or type 2... this is nonsense, since I can get the same data (minus the type 1/type 2 flag) from sys.columns/sys.types.
I was ultimately thinking about renaming my fields to include the type in the name, such as:
FirstName_T2, LastName_T2, Sex_T1 (well, I know this one could be type 2, let's not fall into that debate here).
What would you do with that? My solution (a table holding that metadata) is currently in place and working, but it's obvious that duplicating information from the system tables into a custom table is nonsense, just for a simple type 1/type 2 flag.
UPDATE: I also thought about creating user-defined types like varchar => t1_varchar, t2_varchar, etc. That sounds a bit kludgy too...
Everything you need should already be in INFORMATION_SCHEMA.COLUMNS.
I can't follow your reasoning for not using the provided tables/views...
Edit: as scarpacci mentioned, this is somewhat portable if needed.
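For example, a minimal query against the standard view looks like this (the table name DimDivision is borrowed from elsewhere in this thread, purely for illustration):

SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'dbo'
  AND TABLE_NAME = 'DimDivision'
ORDER BY ORDINAL_POSITION;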
I know this is bad, but I will post an answer to my own question... Thanks to GBN for the help, though!
I am now storing "flags" in the "description" field of my columns. For example, I can store a flag this way: "TYPE_2_DATA".
Then I use this query to get the flag back for each and every column:
select columns.name as [column_name]
,types.name as [type_name]
,extended_properties.value as [column_flags]
from sys.columns
inner join sys.types
on columns.system_type_id = types.system_type_id
left join sys.extended_properties
on extended_properties.major_id = columns.object_id
and extended_properties.minor_id = columns.column_id
and extended_properties.name = 'MS_Description'
where object_id = ( select id from sys.sysobjects where name = 'DimDivision' )
and is_identity = 0
order by column_id
Now I can store metadata about columns without having to create a separate table. I use what's already in place and I don't repeat myself. I'm not sure this is the best possible solution yet, but it works and is far better than duplicating information.
In the future, I will be able to use this field to store more metadata, such as: "TYPE_2_DATA|ANOTHER_FLAG|ETC|OH BOY!".
UPDATE:
I now store the information in separate extended properties. You can manage extended properties with the sp_addextendedproperty and sp_updateextendedproperty stored procedures. I have created a simple stored procedure that helps me update those values regardless of whether they currently exist or not:
create procedure [dbo].[UpdateSCDType]
    @tablename nvarchar(50),
    @fieldname nvarchar(50),
    @scdtype char(1),
    @dbschema nvarchar(25) = 'dbo'
as
begin
    declare @already_exists int;
    if ( @scdtype = '1' or @scdtype = '2' )
    begin
        -- the property name checked here must match the N'Scd_Type' used in the calls below
        select @already_exists = count(1)
        from sys.columns
        inner join sys.extended_properties
            on extended_properties.major_id = columns.object_id
            and extended_properties.minor_id = columns.column_id
            and extended_properties.name = 'Scd_Type'
        where object_id = (select sysobjects.id from sys.sysobjects where sysobjects.name = @tablename)
          and columns.name = @fieldname
        if ( @already_exists = 0 )
        begin
            exec sys.sp_addextendedproperty
                @name = N'Scd_Type',
                @value = @scdtype,
                @level0type = N'SCHEMA',
                @level0name = @dbschema,
                @level1type = N'TABLE',
                @level1name = @tablename,
                @level2type = N'COLUMN',
                @level2name = @fieldname
        end
        else
        begin
            exec sys.sp_updateextendedproperty
                @name = N'Scd_Type',
                @value = @scdtype,
                @level0type = N'SCHEMA',
                @level0name = @dbschema,
                @level1type = N'TABLE',
                @level1name = @tablename,
                @level2type = N'COLUMN',
                @level2name = @fieldname
        end
    end
end
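As a usage example, the procedure can then be called like this (DimDivision and FirstName are names taken from elsewhere in this thread, used purely for illustration):

EXEC dbo.UpdateSCDType
    @tablename = N'DimDivision',
    @fieldname = N'FirstName',
    @scdtype   = '2';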
Thanks again

Oracle sequence question

There are two inserts in my trigger, which is fired by an update. My Vendor_Hist table has a field called thID, which is the primary key in the Task_History table. thID gets its value from mySeq.nextval.
INSERT INTO TASK_HISTORY
( thID, phId, LABOR, VERSION )
( select mySeq.NEXTVAL, mySeq2.CurrVal, LABOR, tmpVersion
from tasks t
where t.project_id = :new.project_ID );
select mySeq.currval into tmpTHID from dual; -- problem here!
INSERT INTO VENDOR_HIST
( vhID, thID, Amount, Position, version )
( select mySeq3.NEXTVAL, tmpTHID,
Amount, Position, tmpVersion
from vendors v2, tasks t2
where v2.myID = t2.myID
and t2.project_id = :new.project_ID );
Now, my problem is that tmpTHID is always the latest value of mySeq.nextval. So if thID in task_history is 1, 2, 3, I get three inserts into the vendor_hist table with 3, 3, 3. It has to be 1, 2, 3. I also tried
INSERT INTO TASK_HISTORY
( thID, phId, LABOR, VERSION )
( select mySeq.NEXTVAL, mySeq2.CurrVal, LABOR, tmpVersion
from tasks t
where t.project_id = :new.project_ID ) returning thID into :tmpTHID;
but then I get a "warning: compiled with errors" message when I compile the trigger. How do I make sure that the thID in the first insert is also the same in my second insert?
Hope it makes sense.
for i in (select * from tasks t
          where t.project_id = :new.project_id)
loop
  insert into task_history
    ( thID, phId, LABOR, VERSION )
  values
    ( mySeq.NEXTVAL, mySeq2.CurrVal, i.LABOR, i.tmpVersion );

  for j in (select * from vendors v
            where i.myId = v.myId)
  loop
    insert into vendor_hist
      ( vhID, thID, Amount, Position, version )
    values
      ( mySeq3.NEXTVAL, mySeq.CURRVAL, j.Amount, j.Position, j.tmpVersion );
  end loop;
end loop;
I'm assuming the columns inserted in the second insert are from the VENDORS table; if not, the referencing cursor (i or j) should be used as appropriate.
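If you would rather make the link between the two inserts explicit, here is a hedged sketch of the same loop using RETURNING INTO; l_thid is a hypothetical variable you would declare in the trigger, and tmpVersion is assumed to be whatever variable or column your original trigger already references:

-- l_thid is hypothetical: declare it in the trigger's declaration section,
-- e.g.  l_thid task_history.thID%TYPE;
for i in (select * from tasks t
          where t.project_id = :new.project_id)
loop
  insert into task_history ( thID, phId, LABOR, VERSION )
  values ( mySeq.NEXTVAL, mySeq2.CURRVAL, i.LABOR, tmpVersion )
  returning thID into l_thid;

  insert into vendor_hist ( vhID, thID, Amount, Position, version )
  select mySeq3.NEXTVAL, l_thid, v.Amount, v.Position, tmpVersion
  from vendors v
  where v.myID = i.myID;
end loop;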
Instead of using currval, it works with the following subselect:
( select min(thID) from task_history t3
where t3.project_id = t2.project_id
and t3.myID = t2.myID
and t3.version = tmpVersion ),