TSQL: how to produce a UNION result without an actual UNION command

Can I produce results like in my example below without an actual UNION command? In my real scenario I have 1000 categories and would like to save typing and learn how to make this smarter without hard-coding the WHERE clauses. I appreciate your hints; I'm not sure whether PIVOT can do this. Thanks, M
My setup: SQL Server 2017 (RTM-CU22)
My reproducible test source and the sample code I'd like to modify:
-- reproducible setup
SELECT * INTO #t FROM (
SELECT 'A ' Cat, 101 Score UNION ALL
SELECT 'A ' Cat, 102 Score UNION ALL
SELECT 'A ' Cat, 103 Score UNION ALL
SELECT 'BB' Cat, 2001 Score UNION ALL
SELECT 'BB' Cat, 2002 Score UNION ALL
SELECT 'CCC' Cat, 3333 Score
) b -- select * from #t
-- this is the desired output, currently made with UNION:
SELECT 'A ' Cat, COUNT(1) CCount FROM #t WHERE Cat = 'A' UNION
SELECT 'BB ' Cat, COUNT(1) CCount FROM #t WHERE Cat = 'BB' UNION
SELECT 'CCC' Cat, COUNT(1) CCount FROM #t WHERE Cat NOT IN ('A','BB')
And this is my desired output:

Cat | CCount
----+-------
A   | 3
BB  | 2
CCC | 1

If all you are looking for is a count of each Cat, you can do the following:
SELECT Cat, COUNT(*) CCount FROM [#t]
GROUP BY [Cat]
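If you also need the original behaviour of collapsing every category outside a known list into a single catch-all row (the NOT IN ('A','BB') branch in the UNION version), one way is to group on a CASE expression. This is only a sketch against the same #t table; the 'Other' label is illustrative:
SELECT CASE WHEN Cat IN ('A', 'BB') THEN Cat ELSE 'Other' END AS Cat,
       COUNT(*) AS CCount
FROM #t
GROUP BY CASE WHEN Cat IN ('A', 'BB') THEN Cat ELSE 'Other' END;
-- 'Other' stands in for the hard-coded 'CCC' bucket of the UNION query;
-- with 1000 categories you would more likely drive this from a mapping table.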

Related

Postgresql, combine different columns counts into one result?

I have a Car table. Car has is_sold and is_shipped columns. A Car belongs to a dealership via dealership_id (FK).
I want to run a query that tells me the count of sold cars and the count of shipped cars for a given dealership all in one result.
sold_count | shipped_count
10 | 4
The single queries I have look like this:
select count(*) as sold_count
from car
where dealership_id=25 and is_sold=true;
and
select count(*) as shipped_count
from car
where dealership_id=25 and is_shipped=true;
How do I combine the two to get both counts in one result?
This will do:
select dealership_id,
sum(case when is_sold is true then 1 else 0 end),
sum(case when is_shipped is true then 1 else 0 end)
from cars group by dealership_id;
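Restricted to the single dealership from the question and with column aliases added, a sketch of the same idea (using the car table named in the question):
select dealership_id,
       sum(case when is_sold    then 1 else 0 end) as sold_count,
       sum(case when is_shipped then 1 else 0 end) as shipped_count
from car
where dealership_id = 25
group by dealership_id;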
You can use the FILTER clause of an aggregate function:
select dealership_id
, count(*) filter (where is_sold) cars_sold
, count(*) filter (where is_shipped) cars_shipped
from cars
where dealership_id = 25
group by dealership_id;
You can also use a cross join.
select 'hello' as col1, 'world' as col2;
return:
col1 | col2
-------+-------
hello | world
(1 row)
Similarly,
with a as (
    select count(*) as a1 from emp where empid > 5
), b as (
    select count(*) as a2 from emp where salary > 6000
)
select * from a, b;
Or you can even apply it to different tables, like:
with a as (
    select count(*) as a1 from emp where empid > 5
), b as (
    select count(*) as a2 from ab
)
select * from a, b;
with a as
(
select count(*) as sold_count
from car
where dealership_id=25 and is_sold=true
),
b as
(
select count(*) as shipped_count
from car
where dealership_id=25 and is_shipped=true
)
select * from a, b;
Further reading: https://www.postgresql.org/docs/current/queries-table-expressions.html
https://stackoverflow.com/a/26369295/15603477

TSQL - in a string, replace a character with a fixed one every 2 characters

I can't manage to replace every third character of a string with a '.'.
select STUFF('abcdefghi', 3, 1, '.') c3,
       STUFF('abcdefghi', 5, 1, '.') c5,
       STUFF('abcdefghi', 7, 1, '.') c7,
       STUFF('abcdefghi', 9, 1, '.') c9
If I use STUFF I then have to overlap the strings c3, c5, c7 and c9, but I can't find a method. Can you help me?
Initial string:
abcdefghi
The result I would like is:
ab.de.gh.
The string can be up to 50 characters.
Create a numbers / tally / digits table, if you don't have one already, then you can use this to target each character position:
with digits as ( /* This would be a real table, here it's just to test */
select n from (values(1),(2),(3),(4),(5),(6),(7),(8),(9),(10))x(n)
), t as (
select 'abcdefghi' as s
)
select String_Agg(case when d.n % 3 = 0 then '.' else Substring(t.s, d.n, 1) end, '')
       within group (order by d.n)
from t
cross apply digits d
where d.n <= Len(t.s)
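The inline VALUES list above only covers positions 1 to 10, while the question allows strings up to 50 characters. A sketch of a larger ad-hoc tally that can be dropped in as the digits CTE instead (sys.all_objects is just a convenient row source; any table with enough rows works):
with digits as (
    select top (100) row_number() over (order by (select null)) as n
    from sys.all_objects
), t as (
    select 'abcdefghijklmnopqrstuvwxyz' as s -- illustrative longer string
)
select String_Agg(case when d.n % 3 = 0 then '.' else Substring(t.s, d.n, 1) end, '')
       within group (order by d.n)
from t
cross apply digits d
where d.n <= Len(t.s)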
Using FOR XML with an existing table:
with digits as (
select n from (values(1),(2),(3),(4),(5),(6),(7),(8),(9),(10))x(n)
),
r as (
select t.id, d.n, case when d.n%3=0 then '.' else Substring(t.s, d.n, 1) end ch
from t
cross apply digits d
where d.n <= Len(t.s)
)
select result=(select '' + ch
from r r2
where r2.id=r.id
order by r2.n
for xml path('')
)
from r
group by r.id
You can try it like this. The easiest might be a "quirky update" like here:
DECLARE @string VARCHAR(100)='abcdefghijklmnopqrstuvwxyz';
SELECT @string = STUFF(@string,3*A.pos,1,'.')
FROM (SELECT TOP(LEN(@string)/3) ROW_NUMBER() OVER(ORDER BY (SELECT NULL))
FROM master..spt_values) A(pos);
SELECT @string;
Better/cleaner/prettier would be a recursive CTE:
We use a declared table variable to provide some tabular sample data.
DECLARE @tbl TABLE(ID INT IDENTITY, SomeString VARCHAR(200));
INSERT INTO @tbl VALUES('')
,('a')
,('ab')
,('abc')
,('abcd')
,('abcde')
,('abcdefghijklmnopqrstuvwxyz');
--the query
WITH recCTE AS
(
SELECT ID
,SomeString
,(LEN(SomeString)+1)/3 AS CountDots
,1 AS OccuranceOfDot
,SUBSTRING(SomeString,4,LEN(SomeString)) AS RestString
,CAST(LEFT(SomeString,2) AS VARCHAR(MAX)) AS Growing
FROM @tbl
UNION ALL
SELECT t.ID
,r.SomeString
,r.CountDots
,r.OccuranceOfDot+2
,SUBSTRING(RestString,4,LEN(RestString))
,CONCAT(Growing,'.',LEFT(r.RestString,2))
FROM @tbl t
INNER JOIN recCTE r ON t.ID=r.ID
WHERE r.OccuranceOfDot/2<r.CountDots-1
)
SELECT TOP 1 WITH TIES ID,Growing
FROM recCTE
ORDER BY ROW_NUMBER() OVER(PARTITION BY ID ORDER BY OccuranceOfDot DESC);
--the result
1
2 a
3 ab
4 ab
5 ab
6 ab.de
7 ab.de.gh.jk.mn.pq.st.vw.yz
The idea in short:
We use a recursive CTE to walk along the string.
We add the needed portion together with a dot.
We stop when the remaining length is too short to continue.
A little magic is the ORDER BY ROW_NUMBER() OVER() together with TOP 1 WITH TIES: it lets exactly the rows ranked first (one per ID) appear (see the minimal sketch below).
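Isolated from the recursive CTE, the TOP 1 WITH TIES trick looks like this (a minimal sketch with throwaway sample data, not part of the original answer):
DECLARE @demo TABLE (ID int, Occurrence int, Val varchar(10));
INSERT INTO @demo VALUES (1, 1, 'a'), (1, 2, 'b'), (2, 1, 'x'), (2, 3, 'y');
SELECT TOP (1) WITH TIES ID, Val
FROM @demo
ORDER BY ROW_NUMBER() OVER (PARTITION BY ID ORDER BY Occurrence DESC);
-- every row whose ROW_NUMBER() evaluates to 1 ties with the top row,
-- so exactly one row per ID comes back: (1, 'b') and (2, 'y').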

Conversion of tabular data to JSON in Redshift

I am unable to figure out how to convert tabular data to JSON format and store it in another table in Redshift. For example, I have a "DEMO" table with four columns: pid,stid,item_id,trans_id.
For each combination of pid,stid,item_id there exist many trans_ids.
pid | stid | item_id | trans_id
1   | AB   | P1      | T1
1   | AB   | P1      | T2
1   | AB   | P1      | T3
1   | AB   | P1      | T4
2   | ABC  | P2      | T5
2   | ABC  | P2      | T6
2   | ABC  | P2      | T7
2   | ABC  | P2      | T8
I want to store this data in another table called "SAMPLE" as:
pid | stid | item_id | trans_id
1   | AB   | P1      | {"key1":"T1","key2":"T2","key3":"T3","key4":"T4"}
2   | ABC  | P2      | {"key1":"T5","key2":"T6","key3":"T7","key4":"T8"}
I am unable to figure out how to load the data from "DEMO" to "SAMPLE" in JSON format only for column "trans_id" using a SQL query in Redshift. I don't want to use any intermediate files.
There is the LISTAGG aggregate function, which lets you concatenate text values within groups. It allows effective construction of JSON objects:
SELECT
pid
,stid
,item_id
,'{'||listagg(
'"key'||row_number::varchar||'":'||trans_id::varchar
,',') within group (order by row_number)
||'}'
FROM (
SELECT *, row_number() over (partition by pid,stid,item_id order by trans_id)
FROM "DEMO"
)
GROUP BY 1,2,3;
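If the goal is to actually populate SAMPLE rather than just select the JSON, the query can be wrapped in an INSERT ... SELECT. A sketch, assuming "SAMPLE"(pid, stid, item_id, trans_id) already exists with trans_id wide enough for the concatenated string (double quotes around the values are added here so the result is closer to valid JSON):
INSERT INTO "SAMPLE" (pid, stid, item_id, trans_id)
SELECT
    pid
    ,stid
    ,item_id
    ,'{'||listagg('"key'||row_number::varchar||'":"'||trans_id::varchar||'"', ',')
          within group (order by row_number)
     ||'}'
FROM (
    SELECT *, row_number() over (partition by pid, stid, item_id order by trans_id)
    FROM "DEMO"
) x
GROUP BY 1, 2, 3;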
As a side note, in this particular case an array of transaction IDs might work better; you'd be able to request an element at a specific position easily, without the keyN keys:
WITH tran_arrays as (
SELECT
pid
,stid
,item_id
,listagg(trans_id::varchar,',') within group (order by trans_id) as tran_array
FROM "DEMO"
GROUP BY 1,2,3
)
SELECT *
,split_part(tran_array,',',1) as first_element
FROM tran_arrays;
Very similar to the existing answer, however slightly different: this example is run on an Oracle database instead. I put the work in and felt like sharing in case it helps someone else out.
/* Oracle Example */
WITH demo_data AS
(
SELECT 1 AS pid, 'AB' AS stid, 'P1' AS item_id, 'T1' AS trans_id FROM dual UNION ALL
SELECT 1 AS pid, 'AB' AS stid, 'P1' AS item_id, 'T2' AS trans_id FROM dual UNION ALL
SELECT 1 AS pid, 'AB' AS stid, 'P1' AS item_id, 'T3' AS trans_id FROM dual UNION ALL
SELECT 1 AS pid, 'AB' AS stid, 'P1' AS item_id, 'T4' AS trans_id FROM dual UNION ALL
SELECT 2 AS pid, 'ABC' AS stid, 'P2' AS item_id, 'T5' AS trans_id FROM dual UNION ALL
SELECT 2 AS pid, 'ABC' AS stid, 'P2' AS item_id, 'T6' AS trans_id FROM dual UNION ALL
SELECT 2 AS pid, 'ABC' AS stid, 'P2' AS item_id, 'T7' AS trans_id FROM dual UNION ALL
SELECT 2 AS pid, 'ABC' AS stid, 'P2' AS item_id, 'T8' AS trans_id FROM dual
)
, transformData AS
(
SELECT pid, stid, item_id, trans_id, rownum AS keyNum FROM demo_data
)
SELECT pid, stid, item_id
, '{'||
LISTAGG(CHR(34)||'key'||keynum||CHR(34)||':'||CHR(34)||trans_id||CHR(34), ' ')
WITHIN GROUP (ORDER BY pid)
||'}' AS trans_id
FROM transformData
GROUP BY pid, stid, item_id
;
The output has the same {"keyN":"value"} shape as above, with the keys numbered by ROWNUM across all rows.

Make a column's values headers for the rest of the columns using TSQL

I have the following table:
ID | Group    | Type   | Product
1  | Dairy    | Milk   | Fresh Milk
2  | Dairy    | Butter | Butter Cream
3  | Beverage | Coke   | Coca cola
4  | Beverage | Diet   | Dew
5  | Beverage | Juice  | Fresh Juice
I need the following output/query result:
ID | Group    | Type   | Product
1  | Dairy    |        |
1  |          | Milk   | Fresh Milk
2  |          | Butter | Butter Cream
2  | Beverage |        |
1  |          | Coke   | Coca cola
2  |          | Diet   | Dew
3  |          | Juice  | Fresh Juice
For the above sample a hard-coded script can do the job, but I am looking for a dynamic script that works for any number of groups. I have no idea how it can be done, so I do not have a sample query yet. I need ideas or examples that at least point me in the right direction. PIVOT looks like a close option but does not seem to fully fit this case.
Here's a possible way. It basically unions the "Group-Headers" and the "Group-Items". The difficulty was to order them correctly.
WITH CTE AS
(
SELECT ID,[Group],Type,Product,
ROW_NUMBER() OVER (PARTITION BY [Group] Order By ID)AS RN
FROM Drink
)
SELECT ID,[Group],Type,Product
FROM(
SELECT RN AS ID,[Group],[Id]AS OriginalId,'' As Type,'' As Product, 0 AS RN, 'Group' As RowType
FROM CTE WHERE RN = 1
UNION ALL
SELECT RN AS ID,'' AS [Group],[Id]AS OriginalId,Type,Product, RN, 'Item' As RowType
FROM CTE
)X
ORDER BY OriginalId ASC
, CASE WHEN RowType='Group' THEN 0 ELSE 1 END ASC
, RN ASC
Here's a demo-fiddle: http://sqlfiddle.com/#!6/ed6ca/2/0
A slightly simplified approach:
With Groups As
(
Select Distinct Min(Id) As Id, [Group], '' As [Type], '' As Product
From dbo.Source
Group By [Group]
)
Select Coalesce(Cast(Z.Id As varchar(10)),'') As Id
, Coalesce(Z.[Group],'') As [Group]
, Z.[Type], Z.Product
From (
Select Id As Sort, Id, [Group], [Type], Product
From Groups
Union All
Select G.Id, Null, Null, S.[Type], S.Product
From dbo.Source As S
Join Groups As G
On G.[Group] = S.[Group]
) As Z
Order By Sort
It should be noted that the use of Coalesce is purely for aesthetic reasons. You could simply return null in these cases.
And an approach with ROW_NUMBER:
IF OBJECT_ID('dbo.grouprows') IS NOT NULL DROP TABLE dbo.grouprows;
CREATE TABLE dbo.grouprows(
ID INT,
Grp NVARCHAR(MAX),
Type NVARCHAR(MAX),
Product NVARCHAR(MAX)
);
INSERT INTO dbo.grouprows VALUES
(1,'Dairy','Milk','Fresh Milk'),
(2,'Dairy','Butter','Butter Cream'),
(3,'Beverage','Coke','Coca cola'),
(4,'Beverage','Diet','Dew'),
(5,'Beverage','Juice','Fresh Juice');
SELECT
CASE WHEN gg = 0 THEN dr1 END GrpId,
CASE WHEN gg = 1 THEN rn1 END TypeId,
ISNULL(Grp,'')Grp,
CASE WHEN gg = 1 THEN Type ELSE '' END Type,
CASE WHEN gg = 1 THEN Product ELSE '' END Product
FROM(
SELECT *,
DENSE_RANK()OVER(ORDER BY Grp DESC) dr1
FROM(
SELECT *,
ROW_NUMBER()OVER(PARTITION BY Grp ORDER BY type,gg) rn1,
ROW_NUMBER()OVER(ORDER BY type,gg) rn0
FROM(
SELECT Grp,Type,Product, GROUPING(Grp) gg, GROUPING(type) tg FROM dbo.grouprows
GROUP BY Product, Type, Grp
WITH ROLLUP
)X1
WHERE tg = 0
)X2
WHERE gg=1 OR rn1 = 1
)X3
ORDER BY rn0

t-sql "LIKE" and Pattern Matching

I've found a small annoyance that I was wondering how to get around...
In a simplified example, say I need to return "TEST B-19" and "TEST B-20"
I have a where clause that looks like:
where [Name] LIKE 'TEST B-[12][90]'
and it works... unless there's a "TEST B-10" or "TEST-B29" value that I don't want.
I'd rather not resort to doing both cases, because in more complex situations that would become prohibitive.
I tried:
where [Name] LIKE 'TEST B-[19-20]'
but of course that doesn't work because it is looking for single characters...
Thoughts? Again, this is a very simple example, I'd be looking for ways to grab ranges from 16 to 32 or 234 to 459 without grabbing all the extra values that could be created.
EDITED to include test examples...
You might see "TEXAS 22" or "THX 99-20-110-B6" or "E-19" or "SOUTHERN B" or "122 FLOWERS" in that field. The presense of digits is common, but not a steadfast rule, and there are absolutely no general patterns for hypens, digits, characters, order, etc.
I would divide the Name column into the text parts and the number parts, and convert the number parts into an integer, and then check if that one was between the values. Something like:
where cast(substring([Name], 7, 2) as integer) between 19 and 20
And, of course, if the possible structure of [Name] is much more complex, you'd have to calculate the values for 7 and 2, not hardcode them....
EDIT: If you want to filter out the ones not conforming to the pattern first, do the following:
where [Name] LIKE '%TEST B-__%'
and cast(substring([Name], CHARINDEX('TEST B-', [Name]) + LEN('TEST B-'), 2) as integer) between 19 and 20
Maybe it's faster using CHARINDEX in place of the LIKE in the top line too, especially if you put an index on the computed value (a sketch of that follows after the test below), but that is only an optimization... :)
EDIT: Tested the procedure. Given the following data:
jajajajajajajTEST B-100
jajajajajajajTEST B-85
jajajajjTEST B-100
jajjajajTEST B-100
jajajajajajajTEST B-00
jajajajaTEST B-100
jajajajajajajEST B-99
jajajajajajajTEST B-100
jajajajajajajTEST B-19
jajajajjTEST B-100
jajjajajTEST B-120
jajajajajajajTEST B-00
jajajajaTEST B-150
jajajajajajajEST B-20
TEST B-20asdfh asdfkh
The query returns the following rows:
jajajajajajajTEST B-19
TEST B-20asdfh asdfkh
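Regarding the earlier remark about putting an index on the computed value: a sketch of what that could look like (dbo.Tests and NameNumber are illustrative names, not from the original post; verify that the expression is accepted as a persisted, indexable computed column on your version):
ALTER TABLE dbo.Tests
    ADD NameNumber AS TRY_CAST(
            SUBSTRING([Name], CHARINDEX('TEST B-', [Name]) + LEN('TEST B-'), 2) AS int)
        PERSISTED;
CREATE INDEX IX_Tests_NameNumber ON dbo.Tests (NameNumber);
-- the range filter then becomes a plain, index-friendly predicate:
-- WHERE NameNumber BETWEEN 19 AND 20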
Wildcards or no, you still have to edit the query every time you want to change the range definition. If you're always dealing with a range (and it's not always the same range), you might use parameters. For example:
DECLARE @min varchar(5)
DECLARE @max varchar(5)
SET @min = 'B-19'
SET @max = 'B-20'
SELECT
...
WHERE NAME BETWEEN @min AND @max
You should avoid reformatting [NAME] as others have suggested (using a function on it) -- this way, your search can benefit from an index on it.
In any case -- you might reconsider your table structure. It sounds like 'TEST B-19' is a composite (non-normalized) value of category ('TEST') + sub-category ('B') + instance ('19'). Put it in a lookup table with 4 columns (id being the first), and then join it by id in whatever query needs to output the composite value. This will make searching and indexing much easier and faster.
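A sketch of that normalized layout (all names and types are illustrative):
CREATE TABLE dbo.TestName (
    id       int IDENTITY(1,1) PRIMARY KEY,
    category varchar(20) NOT NULL, -- e.g. 'TEST', 'TEXAS', 'THX 99-20-110'
    subcat   varchar(20) NULL,     -- e.g. 'B'
    instance int         NULL      -- e.g. 19
);
-- range searches then become trivial and index-friendly, for example:
-- SELECT * FROM dbo.TestName
-- WHERE category = 'TEST' AND subcat = 'B' AND instance BETWEEN 19 AND 20;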
In the absence of test data, I generated my own. I just removed the 'TEST B-' prefix, converted to int, and did a BETWEEN.
With Numerals As
(
Select top 100 row_number() over (order by name) TestNumeral
from sys.columns
),
TestNumbers AS
(
Select 'TEST B-' + Convert (VarChar, TestNumeral) TestNumber
From Numerals
)
Select *
From TestNumbers
Where Cast (Replace (TestNumber, 'TEST B-', '') as Integer) between 1 and 16
This gave me
TestNumber
-------------------------------------
TEST B-1
TEST B-2
TEST B-3
TEST B-4
TEST B-5
TEST B-6
TEST B-7
TEST B-8
TEST B-9
TEST B-10
TEST B-11
TEST B-12
TEST B-13
TEST B-14
TEST B-15
TEST B-16
This means, however, that if you have different strategies for naming tests, you would have to remove all different kinds of prefixes.
Now, on the other hand, if your Test numbers are in the TEST-Space-TestType-Hyphen-TestNumber format, you could use PatIndex and SubString
With Numerals As
(
Select top 100 row_number() over (order by name) TestNumeral
from sys.columns
),
TestNumbers AS
(
Select 'TEST B-' + Convert (VarChar, TestNumeral) TestNumber
From Numerals
Where TestNumeral Between 10 and 19
UNION
Select 'TEST A-' + Convert (VarChar, TestNumeral) TestNumber
From Numerals
Where TestNumeral Between 20 and 29
)
Select *
From TestNumbers
Where Cast (SubString (TestNumber, PATINDEX ('%-%', TestNumber)+1, Len (TestNumber) - PATINDEX ('%-%', TestNumber)) as Integer) between 16 and 26
That should yield the following
TestNumber
-------------------------------------
TEST A-20
TEST A-21
TEST A-22
TEST A-23
TEST A-24
TEST A-25
TEST A-26
TEST B-16
TEST B-17
TEST B-18
TEST B-19
All of your examples seem to have the test numbers at the end. So if you can create a table of patterns and then JOIN using a LIKE statement, you may be able to make it work. Here is an example:
;
With TestNumbers As
(
select 'E-1' TestNumber
union select 'E-2'
union select 'E-3'
union select 'E-4'
union select 'E-5'
union select 'E-6'
union select 'E-7'
union select 'SOUTHERN B1'
union select 'SOUTHERN B2'
union select 'SOUTHERN B3'
union select 'SOUTHERN B4'
union select 'SOUTHERN B5'
union select 'SOUTHERN B6'
union select 'SOUTHERN B7'
union select 'Southern CC'
union select 'Southern DD'
union select 'Southern EE'
union select 'TEST B-1'
union select 'TEST B-2'
union select 'TEST B-3'
union select 'TEST B-4'
union select 'TEST B-5'
union select 'TEST B-6'
union select 'TEST B-7'
union select 'TEXAS 1'
union select 'TEXAS 2'
union select 'TEXAS 3'
union select 'TEXAS 4'
union select 'TEXAS 5'
union select 'TEXAS 6'
union select 'TEXAS 7'
union select 'THX 99-20-110-B1'
union select 'THX 99-20-110-B2'
union select 'THX 99-20-110-B3'
union select 'THX 99-20-110-B4'
union select 'THX 99-20-110-B5'
union select 'THX 99-20-110-B6'
union select 'THX 99-20-110-B7'
union select 'Southern AA'
union select 'Southern CC'
union select 'Southern DD'
union select 'Southern EE'
),
Prefixes as
(
Select 'TEXAS ' TestPrefix
Union Select 'THX 99-20-110-B'
Union Select 'E-'
Union Select 'SOUTHERN B'
Union Select 'TEST B-'
)
Select TN.TestNumber
From TestNumbers TN, Prefixes P
Where 1=1
And TN.TestNumber Like '%' + P.TestPrefix + '%'
And Cast (REPLACE (Tn.TestNumber, p.TestPrefix, '') AS INTEGER) between 4 and 6
This will give you
TestNumber
----------------
E-4
E-5
E-6
SOUTHERN B4
SOUTHERN B5
SOUTHERN B6
TEST B-4
TEST B-5
TEST B-6
TEXAS 4
TEXAS 5
TEXAS 6
THX 99-20-110-B4
THX 99-20-110-B5
THX 99-20-110-B6
(15 row(s) affected)
Is this acceptable:
WHERE [Name] IN ( 'TEST B-19', 'TEST B-20' )
The list of values can come from a subquery, e.g.:
WHERE [Name] IN ( SELECT [Name] FROM Elsewhere WHERE ... )