Correlated subquery with null - tsql

I need to use a correlated subquery inside a CASE WHEN expression. The problem is that in some cases the correlated subquery returns NULL (or no row at all). I concatenate the subquery result with another column, and because the subquery returns NULL, the whole "result" column ends up NULL.
Here's the example:
create table ##sygnatura (ID int, syg_numer varchar(50))
create table ##sprawa (ID int, sp_numer varchar(50))
create table ##dluznik (ID int, nazwa varchar(max))
insert into ##sygnatura
select null,null
insert into ##sprawa
select 1,'abc'
insert into ##dluznik
select 1,'XYZ'
select sp_numer,
case when nazwa='XYZ' then (select isnull(syg_numer,'') from ##sygnatura where ##sygnatura.ID=##sprawa.ID)+', '+isnull(nazwa, '') end as result
From ##sprawa
join ##dluznik on ##sprawa.ID=##dluznik.ID
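Running this against the sample data shows the problem: the subquery finds no matching row in ##sygnatura (its ID is NULL), so the whole concatenation collapses to NULL:
sp_numer  result
abc       NULL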

You need to move the ISNULL to encapsulate the subquery because you are getting a NULL result due to the subquery not returning any rows:
select sp_numer,
case when nazwa='XYZ' then ISNULL((select isnull(syg_numer,'') from ##sygnatura where ##sygnatura.ID=##sprawa.ID) + ', ','') + isnull(nazwa, '') end as result
From ##sprawa
join ##dluznik on ##sprawa.ID=##dluznik.ID

Below is one way to do it, although there is probably a better way to solve this than using EXISTS:
select sp_numer,
case when nazwa='XYZ'
and exists (select 1 from ##sygnatura where ##sygnatura.ID=##sprawa.ID)
then (select isnull(syg_numer,'') from ##sygnatura where ##sygnatura.ID=##sprawa.ID)+', '+isnull(nazwa, '')
else isnull(nazwa, '')
end as result
From ##sprawa
join ##dluznik on ##sprawa.ID=##dluznik.ID

I think you can easily get the expected result using another JOIN, like this:
select sp_numer,
case when nazwa='XYZ' then isnull(syg_numer + ', ', '') + isnull(nazwa, '') end as result
From ##sprawa
join ##dluznik on ##sprawa.ID=##dluznik.ID
left join ##sygnatura on ##sprawa.ID=##sygnatura.ID
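With the sample data, each of the rewrites above should return the same row; the missing ##sygnatura match simply contributes an empty string:
sp_numer  result
abc       XYZ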

Related

T-SQL - Pivot/Crosstab - variable number of values

I have a simple data set that looks like this:
Name Code
A A-One
A A-Two
B B-One
C C-One
C C-Two
C C-Three
I want to output it so it looks like this:
Name Code1 Code2 Code3 Code4 Code...n ...
A A-One A-Two
B B-One
C C-One C-Two C-Three
For each of the 'Name' values, there can be an undetermined number of 'Code' values.
I have been looking at various examples of PIVOT SQL (both simple PIVOT statements and SQL using the XML functions), but I have not been able to figure this out, or to work out whether it is even possible.
I would appreciate any help or pointers.
Thanks!
Try it like this:
DECLARE @tbl TABLE([Name] VARCHAR(100),Code VARCHAR(100));
INSERT INTO @tbl VALUES
('A','A-One')
,('A','A-Two')
,('B','B-One')
,('C','C-One')
,('C','C-Two')
,('C','C-Three');
SELECT p.*
FROM
(
SELECT *
,CONCAT('Code',ROW_NUMBER() OVER(PARTITION BY [Name] ORDER BY Code)) AS ColumnName
FROM @tbl
)t
PIVOT
(
MAX(Code) FOR ColumnName IN (Code1,Code2,Code3,Code4,Code5 /*add as many as you need*/)
)p;
This line
,CONCAT('Code',ROW_NUMBER() OVER(PARTITION BY [Name] ORDER BY Code)) AS ColumnName
uses a partitioned ROW_NUMBER to create numbered column names per code. The rest is a plain PIVOT...
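To illustrate, with the sample data the derived table t feeds rows like these into the PIVOT (shown here for names A and B):
Name  Code   ColumnName
A     A-One  Code1
A     A-Two  Code2
B     B-One  Code1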
UPDATE: A dynamic approach to reflect the maximum number of codes per group
CREATE TABLE TblTest([Name] VARCHAR(100),Code VARCHAR(100));
INSERT INTO TblTest VALUES
('A','A-One')
,('A','A-Two')
,('B','B-One')
,('C','C-One')
,('C','C-Two')
,('C','C-Three');
DECLARE @cols VARCHAR(MAX);
WITH GetMaxCount(mc) AS(SELECT TOP 1 COUNT([Code]) FROM TblTest GROUP BY [Name] ORDER BY COUNT([Code]) DESC)
SELECT @cols=STUFF(
(
SELECT CONCAT(',Code',Nmbr)
FROM
(SELECT TOP((SELECT mc FROM GetMaxCount)) ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) FROM master..spt_values) t(Nmbr)
FOR XML PATH('')
),1,1,'');
DECLARE @sql VARCHAR(MAX)=
'SELECT p.*
FROM
(
SELECT *
,CONCAT(''Code'',ROW_NUMBER() OVER(PARTITION BY [Name] ORDER BY Code)) AS ColumnName
FROM TblTest
)t
PIVOT
(
MAX(Code) FOR ColumnName IN (' + @cols + ')
)p;';
EXEC(@sql);
GO
DROP TABLE TblTest;
As you can see, the only part that changes to reflect the actual number of columns is the list in PIVOT's IN() clause.
You can create a string that looks like Code1,Code2,Code3,...CodeN and build the statement dynamically, then execute it with EXEC().
I'd prefer the first approach. Dynamically created SQL is very powerful, but it can be a pain in the neck too...
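For reference, with the sample data the GetMaxCount CTE returns 3 (name C has three codes), so the dynamically built column list should come out as:
Code1,Code2,Code3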

How to put variable in JSONB using PostgreSQL?

I am working through an example of indexing with JSONB in PostgreSQL and want to add a random UUID to a piece of JSON, as below. However, I can't get the syntax quite right; the closest I have got is "{"lookup_id": " || uuid || "}".
But I require
{"lookup_id": "92b3b21a-a87c-1798-5d91-3dbf3043c209"}
My code is:
INSERT INTO test (id, json)
SELECT x.id, '{
"lookup_id": " || uuid || "
}'::jsonb
FROM generate_series(1,100) AS x(id),
uuid_in(md5(now()::text)::cstring) AS uuid;
You can use the row_to_json function:
select x.id, row_to_json(r.*)::jsonb
from generate_series(1,100) AS x(id)
cross join (select uuid_in(md5(now()::text)::cstring) as lookup_id) as r;
Update:
First, you can use the uuid-ossp extension to generate unique UUIDs:
CREATE EXTENSION "uuid-ossp";
with cte as (
select
*, uuid_generate_v4() as uuid
from generate_series(1,5) AS x(id)
)
select distinct uuid from cte
------------------------------------------------
"e980c784-8aae-493f-90fb-1091280fe4f7"
"45a80660-3be8-4538-a039-13d97d6306af"
"5380f285-5d6b-467a-a83a-7fdc5c0ebc4c"
"7a435b36-95d3-49fc-808f-359838a866ed"
"3164a544-a2c9-4cd0-b0c4-199a99986cea"
Next, merge this into your existing JSON. The crude but easiest way for now could be something like this:
with cte as (
select
'{"a":1}'::json as j, uuid_generate_v4() as uuid
from generate_series(1,5) AS x(id)
)
select
left(j::text, length(j::text) - 1) || ', "uuid":' || to_json(uuid) || '}'
from cte
But you can also write a function to merge JSON values, or use the hstore extension to do the merge:
with cte as (
select
id, '{"a":1, "b":2}'::json as data, uuid_generate_v4() as uuid
from generate_series(1,5) AS x(id)
), cte2 as (
select
id,
(
select hstore(array_agg(r.key), array_agg(r.value))
from (
select *
from json_each_text(c.data) as j
union all
select 'uuid', c.uuid::text
) as r
) as data
from cte as c
)
select
id, hstore_to_json(data)
from cte2
And I'm sure more experienced PostgreSQL users could suggest a more elegant way to merge JSON values together.
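For readers on newer PostgreSQL versions (9.5 or later, where jsonb_build_object is available), a more direct sketch of the original goal, reusing the table and column names from the question:
INSERT INTO test (id, json)
SELECT x.id,
       jsonb_build_object('lookup_id', uuid_generate_v4())  -- builds {"lookup_id": "<uuid>"} directly; needs uuid-ossp
FROM generate_series(1,100) AS x(id);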

tsearch2 add resultset to index

How can I add a resultset (more than one entry) to a tsvector? I use postgres 8.3.
I have an m-n relationship and I'd like to have all values from one column of the n-side in the tsvector of the m-side.
This statement works if I limit the subselect to one row, but not without the limit.
UPDATE mytable
SET mytsvector=to_tsvector('english',
coalesce(column_a, '') ||' '||
coalesce((SELECT item FROM other_table WHERE id = other_id LIMIT 1), '')
)
ERROR: more than one row returned by a subquery used as an expression
Under Postgres 8.3 I first have to create an aggregate function to build an array from a select:
CREATE AGGREGATE array_accum (
sfunc = array_append,
basetype = anyelement,
stype = anyarray,
initcond = '{}'
);
Since 8.4 there is the built-in function array_agg().
The whole statement looks like this:
UPDATE mytable
SET mytsvector=to_tsvector('english',
coalesce(column_a, '') ||' '||
coalesce(
(SELECT array_to_string(array_accum(item), ' ')
FROM mytable m, other_table o
WHERE o.id = m.other_id AND m.id = mytable.id GROUP BY m.id), '')
)
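On 8.4 or later you can skip the custom aggregate and use the built-in array_agg() directly; a sketch assuming the same table and column names as above:
UPDATE mytable
SET mytsvector = to_tsvector('english',
    coalesce(column_a, '') || ' ' ||
    coalesce(
        (SELECT array_to_string(array_agg(o.item), ' ')  -- collect all n-side values into one string
         FROM other_table o
         WHERE o.id = mytable.other_id), '')
);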

Dynamic field definitions - can this be done in T-SQL?

I have a requirement that I'm struggling to implement. If possible, I'd like to achieve this with native T-SQL.
I have the following tables:
CUSTOMER
========
ID,
Name
FIELDDEF
========
ID,
Name
FieldType (Char T, N, D for Text, Number or Date)
CUSTOMERFIELD
=============
ID,
CustomerID,
FieldDefID,
CaptureDate,
ValueText,
ValueNumber,
ValueDate
Basically, the purpose of these tables is to provide an extensible custom field system. The idea is that the user creates new field definitions that can be a text, number or date field. Then, they create values for these fields in the ValueText, ValueNumber OR ValueDate field.
Example:
*Customer*
1,BOB
2,JIM
*FieldDef*
1,Mobile,T
2,DateOfBirth,D
*CustomerField*
ID,CustomerID,FieldDefID,CaptureDate,ValueText,ValueNumber,ValueDate
1,1,1,2011-01-1,07123456789,NULL,NULL
2,1,2,2011-01-1,NULL,NULL,09-DEC-1980
3,1,1,2011-01-2,07123498787,NULL,NULL
I need to create a view that looks like this:
*CustomerView*
ID,Name,Mobile,DateOfBirth
1,BOB,07123498787,09-DEC-1980
Note that Bob's mobile is the second one in the list, because it uses the most recent capture date.
Ideally, I need this to be extensible, so if I create a new field def in the future, it is automatically picked up in the CustomerView.
Is this possible in T-SQL at all?
Thanks,
Simon.
This would not be possible with a plain view, because view schemas are locked in at creation time; the view would have to be recreated on the fly every time FieldDef changes. However, it may be possible with a stored procedure, depending on how you are using it.
Edit 1
Here is a sample query that works just for your current field names, and would have to be modified by dynamic SQL to work in general:
Edit 2
Modified to grab the newest values from the customer field table
with CustomerFieldNewest as (
select
cf1.*
from
customerfield cf1
inner join
(
select
customerid,
fielddefid,
max(capturedate) as maxcapturedate
from
customerfield cf2
group by
customerid,
fielddefid
) cf2 on cf1.customerid = cf2.customerid
and cf1.fielddefid = cf2.fielddefid
and cf1.capturedate = cf2.maxcapturedate
)
,CustomerFieldPivot as (
select
C.ID as ID
,max(case when F.Name = 'Mobile' then CF.ValueText end) as Mobile
,max(case when F.Name = 'DateOfBirth' then CF.ValueDate end) as DateOfBirth
from
Customer C
left join
CustomerFieldNewest CF on C.ID = CF.CustomerID
left join
FieldDef F on F.ID = CF.FieldDefID
group by
C.ID
)
select
C.*
,P.Mobile
,P.DateOfBirth
from
Customer C
left join
CustomerFieldPivot P on C.ID = P.ID
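With the sample data, this should return both customers; JIM simply gets NULLs because he has no field values, and BOB's mobile comes from the newest capture date:
ID  Name  Mobile       DateOfBirth
1   BOB   07123498787  09-DEC-1980
2   JIM   NULL         NULL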
Edit 3
Here is T-SQL code to generate the view on the fly based on the current set of fields in FieldDef (this assumes the view CustomerView already exists, so you will need to create it first as a blank definition or you will get an error). I'm not sure about the performance of all this, but it should work correctly.
declare @sql varchar(max)
declare @fielddef varchar(max)
declare @fieldlist varchar(max)
select
@fielddef = coalesce(@fielddef + ', ' + CHAR(13) + CHAR(10), '') +
' max(case when F.Name = ''' + F.Name + ''' then CF.' +
case F.FieldType
when 'T' then 'ValueText'
when 'N' then 'ValueNumber'
when 'D' then 'ValueDate'
end
+ ' end) as [' + F.Name + ']'
,@fieldlist = coalesce(@fieldlist + ', ' + CHAR(13) + CHAR(10), '') +
' [' + F.Name + ']'
from
FieldDef F
set @sql = '
alter view [CustomerView] as
with CustomerFieldNewest as (
select
cf1.*
from
customerfield cf1
inner join
(
select
customerid,
fielddefid,
max(capturedate) as maxcapturedate
from
customerfield cf2
group by
customerid,
fielddefid
) cf2 on cf1.customerid = cf2.customerid
and cf1.fielddefid = cf2.fielddefid
and cf1.capturedate = cf2.maxcapturedate
)
,CustomerFieldPivot as (
select
C.ID as ID,
' + @fielddef + '
from
Customer C
left join
CustomerFieldNewest CF on C.ID = CF.CustomerID
left join
FieldDef F on F.ID = CF.FieldDefID
group by
C.ID
)
select
C.*,
' + @fieldlist + '
from
Customer C
left join
CustomerFieldPivot P on C.ID = P.ID
'
print @sql
exec(@sql)
select * from CustomerView
You need to build a crosstab, which you do with the PIVOT statement in T-SQL. Here's an article that talks about how to build the pivot dynamically:
http://sqlserver-qa.net/blogs/t-sql/archive/2008/08/27/4809.aspx
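As a rough static sketch of that idea (hard-coded column list, text fields only, and ignoring the newest-CaptureDate handling from the answer above), a PIVOT over the sample tables might look like this:
SELECT Name, [Mobile]
FROM (
    SELECT c.Name, f.Name AS FieldName, cf.ValueText
    FROM Customer c
    JOIN CustomerField cf ON cf.CustomerID = c.ID
    JOIN FieldDef f ON f.ID = cf.FieldDefID
    WHERE f.FieldType = 'T'  -- text fields only for this illustration
) AS src
PIVOT (MAX(ValueText) FOR FieldName IN ([Mobile])) AS p;
The linked article covers turning the IN ([Mobile], ...) list into a dynamically built string, which is what makes the approach extensible.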
Just for completeness there is sql_variant:
declare @t table (typ varchar(1), yuk sql_variant)
insert @t values ('d', getdate())
insert @t values ('i', 1234)
insert @t values ('s', 'bleep bloop')
select
yuk,
case typ
when 'd' then convert(datetime, yuk, 106)+50
when 'i' then cast(yuk as int) * 2
when 's' then reverse(cast(yuk as varchar))
else yuk
end
from @t

Set operation on TSQL (SQL 2005/2008)

Given a set, say {1,2,3,4,5,6}, the task is to enumerate every two-element subset:
{1,2},
{1,3},
{1,4},
{1,5},
{1,6},
{2,3},
{2,4},
{2,5},
{2,6},
{3,4},
{3,5},
{3,6},
{4,5},
{4,6},
{5,6}
So when I have a table
Table Element
1
2
3
4
5
6
What is the way to list out all possible pairs as comma-separated subsets?
(Duplicates can be ignored, i.e. {1,2} is identical to {2,1}.)
SELECT T1.elem, T2.elem
FROM MyTable T1
INNER JOIN MyTable T2
ON T2.elem > T1.elem
...gets you most of the way there - if you want these shown as sets then...
SELECT '{' + CAST(T1.elem AS VARCHAR(12)) + ', ' + CAST(T2.elem AS VARCHAR(12)) + '}'
FROM MyTable T1
INNER JOIN MyTable T2
ON T2.elem > T1.elem
...is what you're after.
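With the six-row table from the question, that second query should return the 15 distinct pairs, for example:
{1, 2}
{1, 3}
{1, 4}
...
{5, 6}
(15 rows in total)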
Here is a solution to the problem using a CTE. It isn’t particularly elegant, but it gets the job done.
DECLARE @set TABLE (Element INT);
INSERT INTO @set(Element) VALUES (1);
INSERT INTO @set(Element) VALUES (2);
INSERT INTO @set(Element) VALUES (3);
INSERT INTO @set(Element) VALUES (4);
INSERT INTO @set(Element) VALUES (5);
INSERT INTO @set(Element) VALUES (6);
;WITH array (Element1, Element2, Row)
AS
(
SELECT t.Element
, t2.Element
, ROW_NUMBER() OVER(ORDER BY t.Element)
FROM @set AS t
CROSS JOIN @set AS t2
WHERE t.Element <> t2.Element
)
SELECT a.Element1
, a.Element2
, '{' + CONVERT(VARCHAR(5),a.Element1) + ',' + CONVERT(VARCHAR(5),a.Element2) + '}' AS 'Subset'
FROM array AS a
WHERE NOT EXISTS (SELECT *
FROM array AS sa
WHERE sa.Element1 = a.Element2
AND sa.Element2 = a.Element1
AND sa.Row < a.Row
);