jdbc data comparison - db2

I want to compare the content of 15 columns in two rows.
I am using DB2 9 with JDBC.
Can I use SQL to get a result like "match" or "not match"?
And how can I get the columns that differ?

You can use the EXCEPT operator to do this.
In the example below, I'm using common table expressions to fetch the two single rows (assuming, in this case, that id is the primary key).
with r1 as (select c1, c2, ..., c15 from t where id = 1),
r2 as (select c1, c2, ..., c15 from t where id = 2)
select * from r1
except
select * from r2
If this returns 0 rows, then the rows are identical. If it returns a row, then the two rows differ.
If you really want the result to be 'MATCH' or 'NOT MATCH':
with r1 as (select c1, c2, ..., c15 from t where id = 1),
r2 as (select c1, c2, ..., c15 from t where id = 2),
rs as (select * from r1 except select * from r2)
select
case when count(*) = 0 then 'MATCH'
else 'NOT MATCH'
end as comparison
from
rs;
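If you also need to know which of the 15 columns differ, one option is to put the two rows side by side and build a list of the differing column names. This is a minimal sketch, reusing the same table and keys as above; the <> comparisons would need extra handling (e.g. COALESCE) if any of the columns can be NULL:
with r1 as (select c1, c2, ..., c15 from t where id = 1),
r2 as (select c1, c2, ..., c15 from t where id = 2)
select
-- one CASE expression per column; repeat the pattern for c3 .. c14
case when r1.c1 <> r2.c1 then 'c1 ' else '' end ||
case when r1.c2 <> r2.c2 then 'c2 ' else '' end ||
case when r1.c15 <> r2.c15 then 'c15' else '' end as differing_columns
from r1, r2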


Select top 5 and bottom 5 columns from a list of columns based on their values

I have a requirement where I need to select the top 5 and bottom 5 columns from a list of columns based on their values.
If more than one column has the same value, then select any one of them.
E.g.:
CREATE TABLE #b(Company VARCHAR(10),A1 INt,A2 INt,A3 INt,A4 INt,B1 INt,G1 INt,G2 INt,G3 INt,HH5 INt,SS6 INt)
INSERT INTo #b
SELECT 'test_A',8,10,6,10,0,6,0,6,13,4 UNION ALL
SELECT 'test_B',17,7,0,1,3,18,0,6,9,5 UNION ALL
SELECT 'test_C',0,0,6,1,2,6,3,4,3,2 UNION ALL
SELECt 'test_D',13,1,4,1,4,1,9,0,0,5
SELECT * FROM #b
Desired Output:
Company   Top5                Bottom5
test_A    HH5,A2,A1,A3,SS6    B1,SS6,A3,A1,A2
test_B    G1,A1,HH5,A2,G3     A3,A4,B1,SS6,G3
I am able to find the top values but not the column names.
Here is where I am stuck: I can find the max scores, but I'm not sure how to find the column that holds that max value.
SELECT Company,(
SELECT MAX(myval)
FROM (VALUES (A1),(A2),(A3),(A4),(B1),(G1),(G2),(G3),(HH5)) AS temp(myval))
AS MaxOfColumns
FROM #b
As Larnu suggested, the first step would be to UNPIVOT the data into a form like (Company, ColumnName, Value). You can then use the ROW_NUMBER() window function to assign ordinals 1 - 10 to each value for each company based on the sorted value.
Next, you can wrap the above in a Common Table Expression (CTE) to feed a query that, for each Company, uses conditional aggregation with STRING_AGG() to selectively combine the top 5 and bottom 5 column names to produce the desired result.
Something like:
;WITH Data AS (
SELECT
Company,
ColumnName,
Value,
ROW_NUMBER() OVER(PARTITION BY Company ORDER BY Value DESC, ColumnName) AS Ord
FROM #b
UNPIVOT (
Value FOR ColumnName IN (A1, A2, A3, A4, B1, G1, G2, G3, HH5, SS6)
) U
)
SELECT
D.Company,
STRING_AGG(CASE WHEN D.Ord BETWEEN 1 AND 5 THEN D.ColumnName END, ', ')
WITHIN GROUP (ORDER BY D.ORD) AS Top5,
STRING_AGG(CASE WHEN D.Ord BETWEEN 6 AND 10 THEN D.ColumnName END, ', ')
WITHIN GROUP (ORDER BY D.ORD) AS Bottom5
FROM Data D
GROUP BY D.Company
ORDER BY D.Company
For older SQL Server versions that don't support STRING_AGG(), the FOR XML PATH(''),TYPE construct can be used to concatenate text. The .value('text()[1]', 'varchar(max)') function is then used to safely extract the result from the XML, and finally the STUFF() function is used to strip out the leading separator (comma-space).
;WITH Data AS (
SELECT
Company,
ColumnName,
Value,
ROW_NUMBER() OVER(PARTITION BY Company ORDER BY Value DESC, ColumnName) AS Ord
FROM #b
UNPIVOT (
Value FOR ColumnName IN (A1, A2, A3, A4, B1, G1, G2, G3, HH5, SS6)
) U
)
SELECT B.Company, C.Top5, C.Bottom5
FROM #b B
CROSS APPLY (
SELECT
STUFF((
SELECT ', ' + D.ColumnName
FROM Data D
WHERE D.Company = B.Company
AND D.Ord BETWEEN 1 AND 5
ORDER BY D.ORD
FOR XML PATH(''),TYPE
).value('text()[1]', 'varchar(max)'), 1, 2, '') AS Top5,
STUFF((
SELECT ', ' + D.ColumnName
FROM Data D
WHERE D.Company = B.Company
AND D.Ord BETWEEN 6 AND 10
ORDER BY D.ORD
FOR XML PATH(''),TYPE
).value('text()[1]', 'varchar(max)'), 1, 2, '') AS Bottom5
) C
ORDER BY B.Company
See this db<>fiddle for a demo.
If you also want lists of the top 5 and bottom 5 values, you can repeat the aggregations above while substituting CONVERT(VARCHAR, D.Value) for D.ColumnName where appropriate.
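For example, with the STRING_AGG() version, two more expressions along these lines could be added to the select list (a sketch; the Top5Values and Bottom5Values aliases are illustrative):
-- Additional select-list expressions: aggregate the values instead of the column names
STRING_AGG(CASE WHEN D.Ord BETWEEN 1 AND 5 THEN CONVERT(VARCHAR, D.Value) END, ', ')
    WITHIN GROUP (ORDER BY D.Ord) AS Top5Values,
STRING_AGG(CASE WHEN D.Ord BETWEEN 6 AND 10 THEN CONVERT(VARCHAR, D.Value) END, ', ')
    WITHIN GROUP (ORDER BY D.Ord) AS Bottom5Values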

Query Transform PostgreSQL

I get an error with this query:
UPDATE schema1.table1 t
SET (a1, a2, a3, a4, a5) = (
SELECT DISTINCT tt.a1, tt.a2, tt.a3, tt.a4, tt.a5
FROM schema1.table2 tt
WHERE tt.is_flg = 1
AND tt.some_cnt > 0
AND (t.date1 BETWEEN tt.date1 AND tt.date2)
AND tt.id = t.id
);
More than one row returned by a subquery used as an expression
How can I transform the query to catch the rows which give this error?
UPD: I transformed it using WITH, but I'm not sure the result set is correct, because the UPDATE query affects 251 rows while the WITH query without DISTINCT returns 465 rows (with WITH and DISTINCT, 51 rows).
with h as (select * from schema1.table1)
select DISTINCT tt.a1, tt.a2, tt.a3, tt.a4, tt.a5
FROM schema1.table2 tt INNER JOIN h ON tt.id = h.id
WHERE tt.is_flg = 1
AND tt.some_cnt > 0
AND (h.date1 BETWEEN tt.date1 AND tt.date2)
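To see which table1 rows make the subquery return more than one row, a diagnostic query along these lines might help (a sketch, reusing the same filters as the UPDATE; it lists the ids whose correlated subquery yields more than one distinct (a1..a5) tuple):
-- Sketch: rows of table1 for which the UPDATE's subquery would return multiple rows
SELECT t.id, COUNT(DISTINCT (tt.a1, tt.a2, tt.a3, tt.a4, tt.a5)) AS candidate_rows
FROM schema1.table1 t
JOIN schema1.table2 tt
  ON tt.id = t.id
 AND tt.is_flg = 1
 AND tt.some_cnt > 0
 AND t.date1 BETWEEN tt.date1 AND tt.date2
GROUP BY t.id
HAVING COUNT(DISTINCT (tt.a1, tt.a2, tt.a3, tt.a4, tt.a5)) > 1;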

SSRS Expression Split string in rows and column

I am working with SQL Server 2008 Reporting Services. I am trying to split string values into different columns in the same row in an expression, but I can't get the expected output. I have provided input and output details. I have to split the values by space (" ") and hyphen ("-").
Input:
Sample 1:
ASY-LOS,SLD,ME,A1,A5,J4A,J4B,J4O,J4P,J4S,J4T,J7,J10,J2A,J2,S2,S3,S3T,S3S,E2,E2F,E6,T6,8,SB1,E1S,OTH AS2-J4A,J4B,J4O,J4P,J4S,J4T,J7,J1O,J2A,S2,S3,J2,T6,T8,E2,E4,E6,SLD,SB1,OTH
Sample 2:
A1 A2 A3 A5 D2 D3 D6 E2 E4 E5 E6 EOW LH LL LOS OTH P8 PH PL PZ-1,2,T1,T2,T3 R2-C,E,A RH RL S1 S2-D S3
Output should be:
Thank you.
I wrote this before I saw your comment about having to do it in the report. If you can explain why you cannot do this in the dataset query then there may be a way around that.
Anyway, here's one way of doing this using SQL:
DECLARE @t table (RowN int identity (1,1), sample varchar(500))
INSERT INTO @t (sample) SELECT 'ASY-LOS,SLD,ME,A1,A5,J4A,J4B,J4O,J4P,J4S,J4T,J7,J10,J2A,J2,S2,S3,S3T,S3S,E2,E2F,E6,T6,8,SB1,E1S,OTH AS2-J4A,J4B,J4O,J4P,J4S,J4T,J7,J1O,J2A,S2,S3,J2,T6,T8,E2,E4,E6,SLD,SB1,OTH'
INSERT INTO @t (sample) SELECT 'A1 A2 A3 A5 D2 D3 D6 E2 E4 E5 E6 EOW LH LL LOS OTH P8 PH PL PZ-1,2,T1,T2,T3 R2-C,E,A RH RL S1 S2-D S3'
drop table if exists #s1
SELECT RowN, sample, SampleIdx = idx, SampleValue = [Value]
into #s1
from @t t
CROSS APPLY
spring..fn_Split(sample, ' ') as x
drop table if exists #s2
SELECT
s1.*
, s2idx = Idx
, s2Value = [Value]
into #s2
FROM #s1 s1
CROSS APPLY spring..fn_Split(SampleValue, '-')
SELECT SampleKey = [1],
Output = [2] FROM #s2
PIVOT (
MAX(s2Value)
FOR s2Idx IN ([1],[2])
) p
This produced the following results:
If you do not have a split function, here is the script to create the one I use
CREATE FUNCTION [dbo].[fn_Split]
/* Define I/O parameters WARNING! DO NOT USE MAX DATA-TYPES HERE! IT WILL KILL PERFORMANCE! */
(@pString VARCHAR(8000)
,@pDelimiter CHAR(1)
)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
/*"Inline" CTE Driven "Tally Table" produces values from 1 up to 10,000: enough to cover VARCHAR(8000)*/
WITH E1(N) AS (
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
)--10E+1 or 10 rows
,E2(N) AS (SELECT 1 FROM E1 a,E1 b)--10E+2 or 100 rows
,E4(N) AS (SELECT 1 FROM E2 a,E2 b)--10E+4 or 10,000 rows max
/* This provides the "base" CTE and limits the number of rows right up front
for both a performance gain and prevention of accidental "overruns" */
,cteTally(N) AS (
SELECT TOP (ISNULL(DATALENGTH(@pString), 0)) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM E4
)
/* This returns N+1 (starting position of each "element" just once for each delimiter) */
,cteStart(N1) AS (
SELECT 1 UNION ALL
SELECT t.N + 1 FROM cteTally t WHERE SUBSTRING(@pString, t.N, 1) = @pDelimiter
)
/* Return start and length (for use in SUBSTRING later) */
,cteLen(N1, L1) AS (
SELECT s.N1
,ISNULL(NULLIF(CHARINDEX(@pDelimiter, @pString, s.N1), 0) - s.N1, 8000)
FROM cteStart s
)
/* Do the actual split.
The ISNULL/NULLIF combo handles the length for the final element when no delimiter is found. */
SELECT
idx = ROW_NUMBER() OVER (ORDER BY l.N1)
,value = SUBSTRING(@pString, l.N1, l.L1)
FROM cteLen l

PostgreSQL SQL query for traversing an entire undirected graph and returning all edges found

I have an edges table in my PostgreSQL database that represents the edges of a directed graph, with two columns: node_from and node_to (value is a node's id).
Given a single node (initial_node) I'd like to be able to traverse the entire graph, but in an undirected way.
What I mean is, for instance, for the following graph:
(a->b)
(c->b)
(c->d)
If initial_node is a, b, c, or d, in any case, I would get [a, b, c, d].
I used the following SQL query (based on http://www.postgresql.org/docs/8.4/static/queries-with.html ):
WITH RECURSIVE search_graph(uniq, depth, path, cycle) AS (
SELECT
CASE WHEN g.node_from = 'initial_node' THEN g.node_to ELSE g.node_from END,
1,
CASE WHEN g.node_from = 'initial_node' THEN ARRAY[g.node_from] ELSE ARRAY[g.node_to] END,
false
FROM edges g
WHERE 'initial_node' in (node_from, node_to)
UNION ALL
SELECT
CASE WHEN g.node_from = sg.uniq THEN g.node_to ELSE g.node_from END,
sg.depth + 1,
CASE WHEN g.node_from = sg.uniq THEN path || g.node_from ELSE path || g.node_to END,
g.node_to = ANY(path) OR g.node_from = ANY(path)
FROM edges g, search_graph sg
WHERE sg.uniq IN (g.node_from, g.node_to) AND NOT cycle
)
SELECT * FROM search_graph
It worked fine... until I had a case with 12 nodes that are all connected together, in all directions (for each pair I have both (a->b) and (b->a)), which makes the query loop indefinitely. (Changing UNION ALL to UNION doesn't eliminate the looping.)
Has anyone any piece of advice to handle this issue?
Cheers,
Antoine.
I got to this, it should not get into infinite loops with any kind of data:
--create temp table edges ("from" text, "to" text);
--insert into edges values ('initial_node', 'a'), ('a', 'b'), ('a', 'c'), ('c', 'd');
with recursive graph(points) as (
select array(select distinct "to" from edges where "from" = 'initial_node')
union all
select g.points || e1.p || e2.p
from graph g
left join lateral (
select array(
select distinct "to"
from edges
where "from" =any(g.points) and "to" <>all(g.points) and "to" <> 'initial_node') AS p) e1 on (true)
left join lateral (
select array(
select distinct "from"
from edges
where "to" =any(g.points) and "from" <>all(g.points) and "from" <> 'initial_node') AS p) e2 on (true)
where e1.p <> '{}' OR e2.p <> '{}'
)
select distinct unnest(points)
from graph
order by 1
Recursive queries are very limiting in terms of what can be selected, and since they don't allow using the recursive results inside a subselect, one can't use NOT IN (select * from recursive where...). Storing results in an array, using LEFT JOIN LATERAL and using =ANY() and <>ALL() solved this conundrum.
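For illustration, here is a sketch of the disallowed form, written against the question's node_from/node_to columns; PostgreSQL rejects it with an error about the recursive reference appearing within a subquery, which is why the array/path tracking above is needed:
-- Not allowed: the recursive CTE may not be referenced inside a sub-select
WITH RECURSIVE reached(node) AS (
    SELECT node_to FROM edges WHERE node_from = 'initial_node'
    UNION ALL
    SELECT e.node_to
    FROM edges e
    JOIN reached r ON e.node_from = r.node
    WHERE e.node_to NOT IN (SELECT node FROM reached)  -- rejected by PostgreSQL
)
SELECT * FROM reached;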

Common records for 2 fields in a table?

I have a Table which has 2 fields say A,B. Suppose A has values a1,a2.
Corresponding records for a1 in B are 1,2,3,x,y,z.
Corresponding records for a2 in B are 1,2,3,4,d,e,f.
I need a query to be written in DB2, so that it will fetch the common records in B for each record in A (a1 and a2).
So here the output would be :
A B
a1 1
a1 2
a1 3
a2 1
a2 2
a2 3
Can someone please help on this?
Try something like:
SELECT A, B
FROM Table t1
WHERE (SELECT COUNT(*) FROM Table t2 WHERE t2.B = t1.B)
= (SELECT COUNT(DISTINCT t3.A) FROM Table t3)
ORDER BY A, B
This might not be 100% accurate as I can't test it out in DB2 so you might have to tweak the query a little bit to make it work.
with t(num) as (select count(distinct A) from table)
select t1.A, t1.B
from table t1, table t2, t
where t1.B = t2.B
group by t1.A, t1.B, num
having count(*) = num
Basically, the idea is to join the table to itself on column B and keep only the B values that appear exactly as many times as there are distinct A values, which indicates the record is common to all A values. With the sample data there are 2 distinct values of A, and only B = 1, 2, 3 occur for both a1 and a2, so only those rows survive the HAVING clause.