Can you please help me convert the SQL Server function below to DB2 format?
CREATE FUNCTION [dbo].[GetKeyStructureXml]
(
@pf_wkstn_oid_sh smallint
)
RETURNS varchar(max)
AS
BEGIN
DECLARE @RtKeys varchar(max)
set @RtKeys = (SELECT rt.rbase_field_name,
pf_wkstn_oid_sh,
pf_wkstn_oid_lng,
'N' status_indc,
rt.field_data_type,
rt.field_size,
'Rate Key ' DisplayType,
'' Author,
0 DateCreated,
rb.field_level_indc,
rb.field_scope_indc
FROM rt_tmplt_key rt inner join rbase_field_dict rb
on rb.rbase_field_name=rt.rbase_field_name
where pf_wkstn_oid_sh = @pf_wkstn_oid_sh
order by rt_key_sqnc_num asc
FOR XML AUTO, BINARY BASE64, ROOT('TableKeys'))
RETURN @RtKeys;
END;
Please provide some guidance for the above conversion; it is quite confusing how to reproduce XML AUTO and BINARY BASE64 in DB2.
Here is an example of a scalar function constructing XML output from relational data.
I cast the XML result to text with XMLSERIALIZE just as an example; some third-party tools can't work with the DB2 XML data type.
create or replace function test_xml(p_CustomerID int)
returns XML
return
with Customer (CustomerID, CustomerType) as (values
(1, 'S')
, (2, 'A')
)
, SalesOrderHeader (CustomerID, SalesOrderID, Status) as (values
(1, 11, '5')
, (1, 12, '5')
, (1, 13, '5')
, (1, 14, '5')
, (2, 21, '6')
, (2, 22, '6')
, (2, 23, '6')
, (2, 24, '6')
)
SELECT
XMLELEMENT(NAME "Cust", XMLATTRIBUTES(Cust.CustomerID as "CustomerID", Cust.CustomerType as "CustomerType"), OrderHeader.ord)
as col
FROM Customer Cust,
(
select CustomerID
, XMLAGG(XMLELEMENT(NAME "OrderHeader", XMLATTRIBUTES(CustomerID AS "CustomerID", SalesOrderID AS "SalesOrderID", Status as "Status"))) ord
from SalesOrderHeader
group by CustomerID
) OrderHeader
WHERE Cust.CustomerID = OrderHeader.CustomerID
and Cust.CustomerID=p_CustomerID;
values xmlserialize(test_xml(1) as clob(1k));
CREATE FUNCTION GetKeyStructureXml(v_pf_wkstn_oid_sh smallint)
RETURNS xml
LANGUAGE SQL
BEGIN ATOMIC
DECLARE v_RtKeys xml;
SET v_RtKeys=(SELECT XMLELEMENT(
NAME "TableKeys",
XMLAGG(XMLELEMENT( NAME "rt", XMLAttributes( rt.rbase_field_name AS "rbase_field_name",
rt.wkstn_oid_sh AS "wkstn_oid_sh",
rt.wkstn_oid_lng AS "wkstn_oid_lng",
rt_key_sqnc_num AS "rt_key_sqnc_num",
rt.rt_key_rtrvl_cd AS "rt_key_rtrvl_cd",
pf_wkstn_oid_sh AS "pf_wkstn_oid_sh",
pf_wkstn_oid_lng AS "pf_wkstn_oid_lng",
'N' AS "status_indc",
rt.field_data_type AS "field_data_type",
rt.field_size AS "field_size",
'Rate Key ' AS "DisplayType",
'' AS "Author",
0 AS "DateCreated"),
XMLELEMENT( NAME "rb", XMLAttributes( rb.field_level_indc AS "field_level_indc",
rb.field_scope_indc AS "field_scope_indc"))) ORDER BY rt_key_sqnc_num ASC) OPTION NULL ON NULL)
FROM rt_tmplt_key rt inner join rbase_field_dict rb
on rb.rbase_field_name=rt.rbase_field_name
where pf_wkstn_oid_sh = v_pf_wkstn_oid_sh );
RETURN v_RtKeys;
END
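Once created, you can inspect the result the same way as in the example above, serializing to character data for clients that can't handle the XML type (the CLOB(32K) size and the literal key value are just illustrative assumptions):
-- hypothetical call; smallint(1) stands in for a real pf_wkstn_oid_sh value
values xmlserialize(GetKeyStructureXml(smallint(1)) as clob(32k));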
I got a problem regarding missing rows in a table that is giving me a headache.
As base data, I have the following table:
declare @table table
(
id1 int,
id2 int,
ch char(1) not null,
val int
)
insert into @table values (1112, 121, 'A', 12)
insert into @table values (1351, 121, 'A', 13)
insert into @table values (1411, 121, 'B', 81)
insert into @table values (1312, 7, 'C', 107)
insert into @table values (1401, 2, 'A', 107)
insert into @table values (1454, 2, 'D', 107)
insert into @table values (1257, 6, 'A', 1)
insert into @table values (1269, 6, 'B', 12)
insert into @table values (1335, 6, 'C', 12)
insert into @table values (1341, 6, 'D', 5)
insert into @table values (1380, 6, 'A', 3)
The output should be ordered by id2 and follow a fixed sequence of ch, which should repeat until next id2 begins.
Sequence:
'A'
'B'
'C'
'D'
If the sequence or the pattern is interrupted, it should fill the missing rows with NULL, so that I get this result table:
id1 id2 ch val
----------------------------
1112 121 'A' 12
NULL 121 'B' NULL
NULL 121 'C' NULL
NULL 121 'D' NULL
1351 121 'A' 13
1411 121 'B' 81
NULL 121 'C' NULL
NULL 121 'D' NULL
NULL 7 'A' NULL
NULL 7 'B' NULL
1312 7 'C' 107
NULL 7 'D' NULL
1401 2 'A' 107
NULL 2 'B' NULL
NULL 2 'C' NULL
1454 2 'D' 107
and so on...
What I'm looking for is a way to do this without iterations.
I hope someone can help!
Thanks in advance!
A solution might be this:
declare @table table ( id1 int, id2 int, ch char(1) not null, val int )
insert into @table values (1112, 121, 'A', 12)
,(1351, 121, 'A', 13),(1411, 121, 'B', 81),(1312, 7, 'C', 107),(1401, 2, 'A', 107)
,(1454, 2, 'D', 107),(1257, 6, 'A', 1),(1269, 6, 'B', 12),(1335, 6, 'C', 12)
,(1341, 6, 'D', 5),(1380, 6, 'A', 3)
;with foo as
(select
*
,row_number() over (partition by id2 order by id1) rwn
,ascii(isnull(lag(ch,1) over (partition by id2 order by id1),'A'))-ascii('A') prev
,count(*) over (partition by id2,ch) nr
,ascii(ch)-ascii('A') cur
from @table
)
,bar as
(
select
*,case when cur<=prev and rwn>1 then 4 else 0 end + cur-prev step
from foo
)
,foobar as
(
select *,sum(step) over (partition by id2 order by id1 rows unbounded preceding) rownum
from bar
)
,iterations as
(
select id2,max(nr) nr from foo
group by id2
)
,blanks as
(
select
id2, ch chnr, char(ch + ascii('A')) ch, ROW_NUMBER() over (partition by id2 order by c.nr, ch) - 1 rownum, c.nr
from iterations a
inner join (values (1),(2),(3),(4),(5),(6),(7),(8),(9),(10)) c(nr)
on c.nr<=a.nr
cross join (values (0),(1),(2),(3)) b(ch)
)
select
b.id1,a.id2,a.ch,b.val
from blanks a
left join foobar b
on a.id2=b.id2 and a.rownum=b.rownum
order by a.id2,a.rownum
I first make the query "foo" which looks at the row number and gets the previous value for ch for each id2.
"bar" then finds how many missing values there are between the rows. For instance If the previous was an A and the current is a c then there are 2. If the previous was an A and the current is an A, then there are 4!
"foobar" then adds the steps, thus numbering the original rows, where they should be in the final output.
"iterations" counts the number of times the "ABCD" rows should appear.
"BLANKS" then is all the final rows, that is for each id2, it outputs all the "ABCD" rows that should be in the final output, and numbers them in rownum
Finally I left join "foobar" with "BLANKS" on id2 and rownum. Thus we get the correct number of rows, and the places where there are values in the original is output.
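To see the step formula from "bar" in isolation, here is a tiny standalone check with hard-coded values (purely illustrative, not part of the original query):
-- prev = 'A' (0), cur = 'C' (2), rwn = 2  ->  0 + 2 - 0 = 2 (one B placeholder in between)
-- prev = 'A' (0), cur = 'A' (0), rwn = 2  ->  4 + 0 - 0 = 4 (a full B, C, D block in between)
select
 case when 2 <= 0 and 2 > 1 then 4 else 0 end + 2 - 0 as step_a_to_c,
 case when 0 <= 0 and 2 > 1 then 4 else 0 end + 0 - 0 as step_a_to_a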
If you can manage to add an extra column to your table that defines which [id2] rows are part of the same sequence, you can try this:
declare @table table
(
id1 int,
id2 int,
ch char(1) not null,
val int,
category int -- extra column
)
insert into @table values (1112, 121, 'A', 12, 1)
insert into @table values (1351, 121, 'A', 13, 2)
insert into @table values (1411, 121, 'B', 81, 2)
insert into @table values (1312, 7, 'C', 107, 3)
insert into @table values (1401, 2, 'A', 107, 4)
insert into @table values (1454, 2, 'D', 107, 4)
insert into @table values (1257, 6, 'A', 1, 5)
insert into @table values (1269, 6, 'B', 12, 5)
insert into @table values (1335, 6, 'C', 12, 5)
insert into @table values (1341, 6, 'D', 5, 5)
insert into @table values (1380, 6, 'A', 3, 5)
DECLARE @sequence table (seq varchar(1))
INSERT INTO @sequence values ('A'), ('B'), ('C'), ('D')
SELECT b.id1, a.id2, a.seq, b.val, a.category
INTO #T1
FROM (
SELECT *
FROM @table
CROSS JOIN @sequence
) A
LEFT JOIN (
SELECT * FROM @table
) B
ON 1=1
AND a.id1 = b.id1
AND a.id2 = b.id2
AND a.seq = b.ch
AND a.val = b.val
;WITH rem_duplicates AS (
SELECT *, dup = ROW_NUMBER() OVER (PARTITION by id2, seq, category ORDER BY id1 DESC)
FROM #T1
) DELETE FROM rem_duplicates WHERE dup > 1
SELECT * FROM #T1 ORDER BY id2 DESC, category ASC, seq ASC
DROP TABLE #T1
I'm a little confused by your output, but try this:
Update
DECLARE @table TABLE
(
row INT IDENTITY(1, 1) ,
id1 INT ,
id2 INT ,
ch CHAR(1) NOT NULL ,
val INT
);
DECLARE @Sequence TABLE ( ch3 CHAR(1) NOT NULL );
INSERT INTO @Sequence
VALUES ( 'A' );
INSERT INTO @Sequence
VALUES ( 'B' );
INSERT INTO @Sequence
VALUES ( 'C' );
INSERT INTO @Sequence
VALUES ( 'D' );
INSERT INTO @table
VALUES ( 1112, 121, 'A', 12 );
INSERT INTO @table
VALUES ( 1351, 121, 'A', 13 );
INSERT INTO @table
VALUES ( 1411, 121, 'B', 81 );
INSERT INTO @table
VALUES ( 1312, 7, 'C', 107 );
INSERT INTO @table
VALUES ( 1401, 2, 'A', 107 );
INSERT INTO @table
VALUES ( 1454, 2, 'D', 107 );
INSERT INTO @table
VALUES ( 1257, 6, 'A', 1 );
INSERT INTO @table
VALUES ( 1269, 6, 'B', 12 );
INSERT INTO @table
VALUES ( 1335, 6, 'C', 12 );
INSERT INTO @table
VALUES ( 1341, 6, 'D', 5 );
INSERT INTO @table
VALUES ( 1380, 6, 'A', 3 );
SELECT r.id1 ,
fin.id2 ,
ch3 ,
r.val
FROM ( SELECT *
FROM ( SELECT CASE WHEN r.chd - l.chd = 1 THEN 0
ELSE 1
END [gap in sq] ,
l.*
FROM ( SELECT id2 ,
ASCII(ch) chd ,
ch ,
val ,
id1 ,
row
FROM @table
) AS l
LEFT JOIN ( SELECT id2 ,
ASCII(ch) chd ,
row
FROM @table
) AS r ON l.row = r.row - 1
) AS temp ,
@Sequence s
WHERE temp.[gap in sq] = 1
OR ( temp.[gap in sq] = 0
AND s.ch3 = temp.ch
)
) AS fin
LEFT JOIN @table r ON r.id2 = fin.id2
AND r.id1 = fin.id1
AND r.ch = fin.ch3
This is a T-SQL related question. I am using SQL Server 2012.
I have a table like this:
I would like to have output like this:
Explanation:
For each employee, there will be a row. An employee has one or more assignments. Batch Id specifies this. Based on the batch Id, the column names will change (e.g. Country 1, Country 2 etc.).
Approach so far:
Un-pivot the source table like the following:
select
EmpId, 'Country ' + cast(BatchId as varchar) as [ColumnName],
Country as [ColumnValue]
from
SourceTable
UNION
select
EmpId, 'Pass ' + cast(BatchId as varchar) as [ColumnName],
Pass as [ColumnValue]
from
SourceTable
which gives each column's values as rows. Then, this result can be pivoted to get the desired output.
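For a fixed number of batches (I assume at most two here, purely for illustration), that unpivoted result can be fed into a static PIVOT along these lines:
;WITH unpvt AS
(
    select EmpId, 'Country ' + cast(BatchId as varchar(10)) as ColumnName, Country as ColumnValue
    from SourceTable
    UNION ALL
    select EmpId, 'Pass ' + cast(BatchId as varchar(10)), Pass
    from SourceTable
)
SELECT EmpId, [Country 1], [Pass 1], [Country 2], [Pass 2]
FROM unpvt
PIVOT (MAX(ColumnValue) FOR ColumnName IN ([Country 1], [Pass 1], [Country 2], [Pass 2])) p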
Questions:
Is there a better way of doing this?
At the moment, I know there will be fixed amount of batches, but, for future, if I like to make the pivoting part dynamic, what is the best approach?
Using tools like SSIS or SSRS, is it easier to handle the pivot dynamically?
Screw doing it in SQL.
Let SSRS do the work for you with a MATRIX. It will PIVOT for you without having to create dynamic SQL to handle the terrible limitation of needing to know all the columns.
For your data, you would have EMP ID as the ROW Group and PASS as your column grouping.
https://msdn.microsoft.com/en-us/library/dd207149.aspx
There are many possible solutions to achieve what you want (search for Dynamic Pivot on multiple columns)
SqlFiddleDemo
Warning: I assume that columns Country and Pass are NOT NULL
CREATE TABLE SourceTable(EmpId INT, BatchId INT,
Country NVARCHAR(100) NOT NULL, Pass NVARCHAR(5) NOT NULL);
INSERT INTO SourceTable(EmpId, BatchId, Country, Pass)
VALUES
(100, 1, 'UK', 'M'), (200, 2, 'USA', 'U'),
(100, 2, 'Romania', 'M'), (100, 3, 'India', 'MA'),
(100, 4, 'Hongkong', 'MA'), (300, 1, 'Belgium', 'U'),
(300, 2, 'Poland', 'U'), (200, 1, 'Australia', 'M');
/* Get Number of Columns Groups Country1..Country<MaxCount> */
DECLARE @max_count INT
,@sql NVARCHAR(MAX) = ''
,@columns NVARCHAR(MAX) = ''
,@i INT = 0
,@i_s NVARCHAR(10);
WITH cte AS
(
SELECT EmpId
,[cnt] = COUNT(*)
FROM SourceTable
GROUP BY EmpId
)
SELECT @max_count = MAX(cnt)
FROM cte;
WHILE @i < @max_count
BEGIN
SET @i += 1;
SET @i_s = CAST(@i AS NVARCHAR(10));
SET @columns += N',MAX(CASE WHEN [row_no] = ' + @i_s + ' THEN Country END) AS Country' + @i_s +
',MAX(CASE WHEN [row_no] = ' + @i_s + ' THEN Pass END) AS Pass' + @i_s;
END
SELECT @sql =
N';WITH cte AS (
SELECT EmpId, Country, Pass, [row_no] = ROW_NUMBER() OVER (PARTITION BY EmpId ORDER BY BatchId)
FROM SourceTable)
SELECT EmpId ' + @columns + N'
FROM cte
GROUP BY EmpId';
/* Debug */
/* SELECT @sql */
EXEC(@sql);
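For the sample data above @max_count works out to 4, so the generated statement (visible via the /* SELECT @sql */ debug line) looks roughly like this:
;WITH cte AS (
    SELECT EmpId, Country, Pass, [row_no] = ROW_NUMBER() OVER (PARTITION BY EmpId ORDER BY BatchId)
    FROM SourceTable)
SELECT EmpId
    ,MAX(CASE WHEN [row_no] = 1 THEN Country END) AS Country1
    ,MAX(CASE WHEN [row_no] = 1 THEN Pass END) AS Pass1
    ,MAX(CASE WHEN [row_no] = 2 THEN Country END) AS Country2
    ,MAX(CASE WHEN [row_no] = 2 THEN Pass END) AS Pass2
    -- ... and so on up to Country4 / Pass4
FROM cte
GROUP BY EmpId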
Or:
SQLFiddleDemo2
DECLARE @cols NVARCHAR(MAX),
@sql NVARCHAR(MAX) = '';
;WITH cte(col_name, rn) AS(
SELECT DISTINCT col_name = col_name + CAST(BatchId AS VARCHAR(10)),
rn = ROW_NUMBER() OVER(PARTITION BY EmpId ORDER BY BatchId)
FROM SourceTable
CROSS APPLY (VALUES ('Country', Country), ('Pass', Pass)) AS c(col_name, val)
)
SELECT @cols = STUFF((SELECT ',' + QUOTENAME(col_name)
FROM cte
ORDER BY rn /* If column order is important for you */
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
, 1, 1, '');
SET @sql =
N';WITH cte AS
(
SELECT EmpId, col_name = col_name + CAST(BatchId AS VARCHAR(10)), val
FROM SourceTable
CROSS APPLY (VALUES (''Country'', Country), (''Pass'', Pass)) AS c(col_name, val)
)
SELECT *
FROM cte
PIVOT
(
MAX(val)
FOR col_name IN (' + @cols + ')
) piv';
EXEC(@sql);
Basically, I would like to get a recordset similar to this:
CustomerID, CustomerName, OrderNumbers
1 John Smith 112, 113, 114, 115
2 James Smith 116, 117, 118
Currently I am using a SQL Server UDF to concatenate the order numbers.
Are there more efficient solutions?
1) There are a lot of solutions.
2) Example (SQL2005+):
DECLARE @Customer TABLE (
CustomerID INT PRIMARY KEY,
Name NVARCHAR(50) NOT NULL
);
INSERT @Customer VALUES (1, 'Microsoft');
INSERT @Customer VALUES (2, 'Macrosoft');
INSERT @Customer VALUES (3, 'Appl3');
DECLARE @Order TABLE (
OrderID INT PRIMARY KEY,
CustomerID INT NOT NULL -- FK
);
INSERT @Order VALUES (1, 1);
INSERT @Order VALUES (2, 2);
INSERT @Order VALUES (3, 2);
INSERT @Order VALUES (4, 1);
INSERT @Order VALUES (5, 2);
SELECT c.CustomerID, c.Name, oa.OrderNumbers
FROM @Customer AS c
/*CROSS or */OUTER APPLY (
SELECT STUFF((
SELECT ',' + CONVERT(VARCHAR(11), o.OrderID) FROM @Order AS o
WHERE o.CustomerID = c.CustomerID
-- ORDER BY o.OrderID -- Uncomment if order numbers must be sorted
FOR XML PATH(''), TYPE
).value('.', 'VARCHAR(MAX)'), 1, 1, '') AS OrderNumbers
) oa;
Output:
CustomerID Name OrderNumbers
----------- --------- ------------
1 Microsoft 1,4
2 Macrosoft 2,3,5
3 Appl3 NULL
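If you happen to be on SQL Server 2017 or later, STRING_AGG avoids the FOR XML PATH trick; a minimal sketch against the same table variables (customers with no orders still get NULL, matching the output above):
SELECT c.CustomerID, c.Name,
       (SELECT STRING_AGG(CONVERT(VARCHAR(11), o.OrderID), ',') WITHIN GROUP (ORDER BY o.OrderID)
        FROM @Order AS o
        WHERE o.CustomerID = c.CustomerID) AS OrderNumbers
FROM @Customer AS c;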
I want to select from an enumeration that is not in the database.
E.g. SELECT id FROM my_table returns values like 1, 2, 3
I want to display 1 -> 'chocolate', 2 -> 'coconut', 3 -> 'pizza' etc. SELECT CASE works, but it is too complicated and hard to keep an overview of when there are many values. I am thinking of something like
SELECT id, array['chocolate','coconut','pizza'][id] FROM my_table
But I couldn't get it to work with arrays. Is there an easy solution? It should be a simple query, not a plpgsql script or something like that.
with food (fid, name) as (
values
(1, 'chocolate'),
(2, 'coconut'),
(3, 'pizza')
)
select t.id, f.name
from my_table t
join food f on f.fid = t.id;
or without a CTE (but using the same idea):
select t.id, f.name
from my_table t
join (
values
(1, 'chocolate'),
(2, 'coconut'),
(3, 'pizza')
) f (fid, name) on f.fid = t.id;
This is the correct syntax:
SELECT id, (array['chocolate','coconut','pizza'])[id] FROM my_table
But you should rather create a reference table with those values.
What about creating another table that enumerates all the cases, and doing a join?
CREATE TABLE table_case
(
case_id bigserial NOT NULL,
case_name character varying,
CONSTRAINT table_case_pkey PRIMARY KEY (case_id)
)
WITH (
OIDS=FALSE
);
and when you select from your table:
SELECT id, case_name FROM my_table
inner join table_case on case_id = my_table.id;
I use SQL Server 2008 R2.
I have a weird problem as following. I have a table as shown in
I need to write such a query like:
SELECT DISTINCT Field1
FROM MYTABLE
WHERE Field2 IN (96,102)
in this query, WHERE Field2 IN (96,102) gives me 96 or 102 or both!
Moreover, I would like to return only the rows that contain both 96 and 102 at the same time!
Is there any suggestion? Please be result-oriented...
I have made a sqlfiddle for this..
create table a (id int, val int)
go
insert into a select 1, 22
insert into a select 1, 122
insert into a select 2, 22
insert into a select 3, 122
insert into a select 4, 22
insert into a select 4, 122
then select like this
select count(distinct id), id
from a
where val in (22, 122)
group by id
having count(id) > 1
EDIT: count(distinct id) will only show distinct counts..
EDIT:
Here's a sqlfiddle example (thanks to Mark Kremers):
http://sqlfiddle.com/#!3/df201/1
create table mytable (field1 int, field2 int)
go
insert into mytable values (199201, 84)
insert into mytable values (199201, 96)
insert into mytable values (199201, 102)
insert into mytable values (199201, 103)
insert into mytable values (581424, 96)
insert into mytable values (581424, 84)
insert into mytable values (581424, 106)
insert into mytable values (581424, 122)
insert into mytable values (687368, 79)
insert into mytable values (687368, 96)
insert into mytable values (687368, 102)
insert into mytable values (687368, 104)
insert into mytable values (687368, 106)
Here's the query:
select distinct a.field1 from
( select field1 from mytable where field2=96) a
inner join
( select field1 from mytable where field2=102) b
on a.field1 = b.field1
And here are the results:
FIELD1
199201
687368
Finally, here's a simplified version of the query (thanks to pst):
select distinct a.field1 from mytable a
inner join mytable b
on a.field1 = b.field1
where a.field2=96 and b.field2=102
Use a self-join? Not the tidiest, but I think it works well for 2 values:
SELECT *
FROM T R1
JOIN T R2 -- join table with itself
ON R1.F1 = R2.F1 -- where the first field is the same
WHERE R1.F2 = 96 AND R2.F2 = 102 -- and each has one of the required values
(T = Table, Rx = Relation Alias, Fx = Field)
If there can be an arbitrary number of values, this can be solved as follows:
CREATE TABLE #T (id int, val int)
GO
INSERT INTO #T (id, val)
VALUES
(1, 22), (1, 22), -- no, only 22 (but 2 records)
(2, 22), (2, 122), -- yes, both values (only)
(3, 122), -- no, only 122
(4, 22), (4,122), -- yes, both values ..
(4, 444), (4, null), -- and extra values
(5, 555) -- no, neither value
GO
-- Using DISTINCT over filtered results first, as
-- SQL Server 2008 does not support HAVING COUNT(DISTINCT F1, F2)
SELECT id
FROM (SELECT DISTINCT id, val
FROM #T
WHERE val IN (22, 122)) AS R1
GROUP BY id
HAVING COUNT(id) >= 2 -- or 3 or ..
GO
-- Or a similar variation, as can COUNT(DISTINCT ..)
-- in the SELECT of a GROUP BY
SELECT id
FROM (SELECT id, COUNT(DISTINCT val) as ct
FROM #T
WHERE val IN (22, 122)
GROUP BY id) AS R1
WHERE ct >= 2 -- or 3 or ..
GO
For larger IN (..) sizes, say above 20 values, it may be advisable to use a separate table or table-value and a JOIN for performance reasons.
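A minimal sketch of that table-value variant (the @required name is mine, just for illustration); joining against the value list lets the HAVING compare to its size instead of a hard-coded 2 or 3:
DECLARE @required TABLE (val int PRIMARY KEY);
INSERT INTO @required (val) VALUES (22), (122);

SELECT t.id
FROM #T AS t
JOIN @required AS r ON r.val = t.val
GROUP BY t.id
HAVING COUNT(DISTINCT t.val) = (SELECT COUNT(*) FROM @required)
GO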
Try this, based on your original query:
SELECT DISTINCT Field1
FROM MYTABLE
WHERE rtrim(ltrim(cast(Field2 as varchar))) IN ('96','102')