I've run into an issue that I can't seem to solve without a lot of changes deep in the code, and I think there must be a simpler solution that I'm simply not aware of.
I have a table of product names, product locations and various statuses (numbered 1 to 10). I have data for all products and locations but only some of the statuses (for example, product X in city XX has data for categories 1 and 3, and product Y in city YY has data for categories 1 to 6).
I'd like to always display 10 rows for each product/location combination, with the corresponding data (if there is any) or nulls. This makes the report I'm planning to create much easier to read and understand.
I'm using SSMS 2017 on SQL Server 2016. My current query is:
SELECT
[Product],
[Location],
[Category],
[Week1],
[Week2],
[Week3]
FROM MyView
Naturally it will only return data that I have, but I'd like to always return all 10 rows for each product/location combination (with nulls in Week columns if I have no data there).
Your question is not very clear, but I think my magic crystal ball gave me a good guess:
I think you are looking for LEFT JOIN and CROSS JOIN:
--Next time please create a stand-alone sample like this yourself
--I create 3 dummy tables with sample data
DECLARE @tblStatus TABLE(ID INT IDENTITY, StatusName VARCHAR(100));
INSERT INTO @tblStatus VALUES('Status 1')
,('Status 2')
,('Status 3')
,('Status 4')
,('Status 5');
DECLARE @tblGroup TABLE(ID INT IDENTITY, GroupName VARCHAR(100));
INSERT INTO @tblGroup VALUES ('Group 1')
,('Group 2')
,('Group 3')
,('Group 4')
,('Group 5');
DECLARE @tblProduct TABLE(ID INT IDENTITY, ProductName VARCHAR(100), StatusID INT, GroupID INT);
INSERT INTO @tblProduct VALUES ('Product 1, Status 1, Group 2',1,2)
,('Product 2, Status 1, Group 3',1,3)
,('Product 3, Status 3, Group 4',3,4)
,('Product 4, Status 3, Group 3',3,3)
,('Product 5, Status 1, Group 5',1,5);
--This will return each status (independent of product values), together with the products (if there is a corresponding row)
SELECT s.StatusName
,p.*
FROM @tblStatus s
LEFT JOIN @tblProduct p ON s.ID=p.StatusID;
--This will first use CROSS JOIN to create an each-with-each cartesian product.
--The LEFT JOIN works as above.
SELECT s.StatusName
,g.GroupName
,p.*
FROM @tblStatus s
CROSS JOIN @tblGroup g
LEFT JOIN @tblProduct p ON s.ID=p.StatusID AND g.ID=p.GroupID;
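Applied to your question's setup, a sketch might look like this (assuming MyView and its column names from your post, and assuming the categories are simply the numbers 1 to 10):
--A sketch only: generate the 10 categories, pair them with every product/location,
--then LEFT JOIN the real data so missing categories show up with NULL week values.
WITH Categories AS (
    SELECT v.Category
    FROM (VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10)) AS v(Category)
), ProductLocations AS (
    SELECT DISTINCT [Product], [Location]
    FROM MyView
)
SELECT pl.[Product]
      ,pl.[Location]
      ,c.Category
      ,mv.[Week1]
      ,mv.[Week2]
      ,mv.[Week3]
FROM ProductLocations pl
CROSS JOIN Categories c
LEFT JOIN MyView mv
       ON mv.[Product]  = pl.[Product]
      AND mv.[Location] = pl.[Location]
      AND mv.[Category] = c.Category
ORDER BY pl.[Product], pl.[Location], c.Category;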
If this is not what you need, please try to set up an example like mine and provide the expected output.
Related
I am trying to run a query against a table in AWS Redshift (i.e., postgresql). Below is a simplified definition of the table:
CREATE TABLE some_schema.some_table (
row_id int
,productid_level1 char(1)
,productid_level2 char(1)
,productid_level3 char(1)
)
;
INSERT INTO some_schema.some_table
VALUES
(1, 'a', 'b', 'c')
,(2, 'd', 'c', 'e')
,(3, 'c', 'f', 'g')
,(4, 'e', 'h', 'i')
,(5, 'f', 'j', 'k')
,(6, 'g', 'l', 'm')
;
I need to return a de-duped, single-column table of a given productid and all of its children. "Children" means any productid with a higher "level" than the given product (in a given row), plus those children's own children, and so on.
For example, for productid 'c', I expect to return...
'c' (because it's found in rows 1, 2, and 3)
'e' (because it's a child of 'c' in row 2)
'f' and 'g' (because they're children of 'c' in row 3)
'h' and 'i' (because they're children of 'e' in row 4)
'j' and 'k' (because they're children of 'f' in row 5)
and 'l' and 'm' (because they're children of 'g' in row 6)
Visually, I expect to return the following:
productid
---------
c
e
f
g
h
i
j
k
l
m
The actual table has about 3M rows and has about 20 "levels".
I think there are 2 parts to this query -- (1) a recursive CTE to build out the hierarchy and (2) an unpivot operation.
I have not attempted (1) yet. For (2), I have tried a query like the following, but it hasn't returned even after 3 minutes. As this will be used for an operational report, I need it to return in < 15 seconds.
select
b.productid
,b.product_level
from
some_schema.some_table as a
cross join lateral (
values
(a.productid_level1, 1)
,(a.productid_level2, 2)
...
,(a.productid_level20, 20)
) as b(productid, product_level)
How can I write the query to achieve (1) and (2) and be very performant?
I would avoid using the term Hierarchy, as that "usually" implies each node having at most a single parent.
I admit I'm lost as to the nature of the graph/network this table represents. But you might benefit from a little brute force and code repetition.
Whatever eventually works for you, I think you'll need to persist/materialise/cache the results, as repeating this at report time is unlikely to ever be a good idea.
I'm a data engineer by trade, and I'm sure they have good reasons for what they've done (or, like me, they maybe screwed up). Either way, there are many good reasons to ask them to materialise the graph in more than just one form, each suited to different use cases. So, asking them for a traditional adjacency list, as well as the table you already have, is a reasonable request. Or, at the very least, a good starting point for a conversation.
So, a brute force approach?
WITH
adjacency AS
(
SELECT level01, level02 FROM some_table WHERE level02 IS NOT NULL
UNION
SELECT level02, level03 FROM some_table WHERE level03 IS NOT NULL
UNION
...
UNION
SELECT level19, level20 FROM some_table WHERE level20 IS NOT NULL
)
The WHERE clause eliminates any sparse data before it enters the map.
The UNION (without ALL) ensures duplicate links are eliminated. You should also test UNION ALL with a SELECT DISTINCT wrapped around it (or similar).
Then you can use that adjacency list in the usual recursive walk, to find all children of a given node. (Taking care that there aren't any cyclic paths.)
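A minimal sketch of that recursive walk (assuming the adjacency pairs above were persisted into a table adjacency(parent, child), starting from productid 'c', and assuming no cyclic paths):
--Sketch only: walk the persisted adjacency list from 'c' downwards.
--Without cycle protection this will not terminate if the graph has loops.
WITH RECURSIVE descendants (productid) AS (
    SELECT 'c'::char(1) AS productid
    UNION ALL
    SELECT a.child
    FROM adjacency a
    JOIN descendants d ON a.parent = d.productid
)
SELECT DISTINCT productid
FROM descendants;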
Does any one know how to create crosstab queries in PostgreSQL?
For example I have the following table:
Section Status Count
A Active 1
A Inactive 2
B Active 4
B Inactive 5
I would like the query to return the following crosstab:
Section Active Inactive
A 1 2
B 4 5
Is this possible?
Install the additional module tablefunc once per database, which provides the function crosstab(). Since Postgres 9.1 you can use CREATE EXTENSION for that:
CREATE EXTENSION IF NOT EXISTS tablefunc;
Improved test case
CREATE TABLE tbl (
section text
, status text
, ct integer -- "count" is a reserved word in standard SQL
);
INSERT INTO tbl VALUES
('A', 'Active', 1), ('A', 'Inactive', 2)
, ('B', 'Active', 4), ('B', 'Inactive', 5)
, ('C', 'Inactive', 7); -- ('C', 'Active') is missing
Simple form - not fit for missing attributes
crosstab(text) with 1 input parameter:
SELECT *
FROM crosstab(
'SELECT section, status, ct
FROM tbl
ORDER BY 1,2' -- needs to be "ORDER BY 1,2" here
) AS ct ("Section" text, "Active" int, "Inactive" int);
Returns:
Section | Active | Inactive
---------+--------+----------
A | 1 | 2
B | 4 | 5
C | 7 | -- !!
No need for casting and renaming.
Note the incorrect result for C: the value 7 is filled in for the first column. Sometimes, this behavior is desirable, but not for this use case.
The simple form is also limited to exactly three columns in the provided input query: row_name, category, value. There is no room for extra columns like in the 2-parameter alternative below.
Safe form
crosstab(text, text) with 2 input parameters:
SELECT *
FROM crosstab(
'SELECT section, status, ct
FROM tbl
ORDER BY 1,2' -- could also just be "ORDER BY 1" here
, $$VALUES ('Active'::text), ('Inactive')$$
) AS ct ("Section" text, "Active" int, "Inactive" int);
Returns:
Section | Active | Inactive
---------+--------+----------
A | 1 | 2
B | 4 | 5
C | | 7 -- !!
Note the correct result for C.
The second parameter can be any query that returns one row per attribute matching the order of the column definition at the end. Often you will want to query distinct attributes from the underlying table like this:
'SELECT DISTINCT attribute FROM tbl ORDER BY 1'
That's in the manual.
Since you have to spell out all columns in a column definition list anyway (except for pre-defined crosstabN() variants), it is typically more efficient to provide a short list in a VALUES expression like demonstrated:
$$VALUES ('Active'::text), ('Inactive')$$)
Or (not in the manual):
$$SELECT unnest('{Active,Inactive}'::text[])$$ -- short syntax for long lists
I used dollar quoting to make quoting easier.
You can even output columns with different data types with crosstab(text, text) - as long as the text representation of the value column is valid input for the target type. This way you might have attributes of different kind and output text, date, numeric etc. for respective attributes. There is a code example at the end of the chapter crosstab(text, text) in the manual.
db<>fiddle here
Effect of excess input rows
Excess input rows are handled differently - duplicate rows for the same ("row_name", "category") combination - (section, status) in the above example.
The 1-parameter form fills in available value columns from left to right. Excess values are discarded.
Earlier input rows win.
The 2-parameter form assigns each input value to its dedicated column, overwriting any previous assignment.
Later input rows win.
Typically, you don't have duplicates to begin with. But if you do, carefully adjust the sort order to your requirements - and document what's happening.
Or get fast arbitrary results if you don't care. Just be aware of the effect.
Advanced examples
Pivot on Multiple Columns using Tablefunc - also demonstrating mentioned "extra columns"
Dynamic alternative to pivot with CASE and GROUP BY
\crosstabview in psql
Postgres 9.6 added this meta-command to its default interactive terminal psql. You can run the query you would use as first crosstab() parameter and feed it to \crosstabview (immediately or in the next step). Like:
db=> SELECT section, status, ct FROM tbl \crosstabview
Similar result as above, but it's a representation feature on the client side exclusively. Input rows are treated slightly differently, hence ORDER BY is not required. Details for \crosstabview in the manual. There are more code examples at the bottom of that page.
Related answer on dba.SE by Daniel Vérité (the author of the psql feature):
How do I generate a pivoted CROSS JOIN where the resulting table definition is unknown?
SELECT section,
SUM(CASE status WHEN 'Active' THEN count ELSE 0 END) AS active, --here you pivot each status value as a separate column explicitly
SUM(CASE status WHEN 'Inactive' THEN count ELSE 0 END) AS inactive --here you pivot each status value as a separate column explicitly
FROM t
GROUP BY section
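On Postgres 9.4 or later, the same conditional aggregation can also be written with the FILTER clause - a sketch against the tbl test case defined earlier:
SELECT section
      ,SUM(ct) FILTER (WHERE status = 'Active')   AS active
      ,SUM(ct) FILTER (WHERE status = 'Inactive') AS inactive
FROM tbl
GROUP BY section
ORDER BY section;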
You can use the crosstab() function of the additional module tablefunc - which you have to install once per database. Since PostgreSQL 9.1 you can use CREATE EXTENSION for that:
CREATE EXTENSION tablefunc;
In your case, I believe it would look something like this:
CREATE TABLE t (Section CHAR(1), Status VARCHAR(10), Count integer);
INSERT INTO t VALUES ('A', 'Active', 1);
INSERT INTO t VALUES ('A', 'Inactive', 2);
INSERT INTO t VALUES ('B', 'Active', 4);
INSERT INTO t VALUES ('B', 'Inactive', 5);
SELECT row_name AS Section,
category_1::integer AS Active,
category_2::integer AS Inactive
FROM crosstab('select section::text, status, count::text from t',2)
AS ct (row_name text, category_1 text, category_2 text);
DB Fiddle here:
Everything works: https://dbfiddle.uk/iKCW9Uhh
Without CREATE EXTENSION tablefunc; you get this error: https://dbfiddle.uk/j8W1CMvI
ERROR: function crosstab(unknown, integer) does not exist
LINE 4: FROM crosstab('select section::text, status, count::text fro...
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
Solution with JSON aggregation:
CREATE TEMP TABLE t (
section text
, status text
, ct integer -- don't use "count" as column name.
);
INSERT INTO t VALUES
('A', 'Active', 1), ('A', 'Inactive', 2)
, ('B', 'Active', 4), ('B', 'Inactive', 5)
, ('C', 'Inactive', 7);
SELECT section,
(obj ->> 'Active')::int AS active,
(obj ->> 'Inactive')::int AS inactive
FROM (SELECT section, json_object_agg(status,ct) AS obj
FROM t
GROUP BY section
)X
Sorry this isn't complete because I can't test it here, but it may point you in the right direction. I'm translating from something I use that makes a similar query:
select mt.section, mt1.count as Active, mt2.count as Inactive
from mytable mt
left join (select section, count from mytable where status='Active')mt1
on mt.section = mt1.section
left join (select section, count from mytable where status='Inactive')mt2
on mt.section = mt2.section
group by mt.section,
mt1.count,
mt2.count
order by mt.section asc;
The code I'm working from is:
select m.typeID, m1.highBid, m2.lowAsk, m1.highBid - m2.lowAsk as diff, 100*(m1.highBid - m2.lowAsk)/m2.lowAsk as diffPercent
from mktTrades m
left join (select typeID,MAX(price) as highBid from mktTrades where bid=1 group by typeID)m1
on m.typeID = m1.typeID
left join (select typeID,MIN(price) as lowAsk from mktTrades where bid=0 group by typeID)m2
on m1.typeID = m2.typeID
group by m.typeID,
m1.highBid,
m2.lowAsk
order by diffPercent desc;
which will return a typeID, the highest price bid and the lowest price asked and the difference between the two (a positive difference would mean something could be bought for less than it can be sold).
There's a different dynamic method that I've devised, one that employs a dynamic record type (a temp table, built via an anonymous procedure) and JSON. This may be useful for an end user who can't install the tablefunc/crosstab extension, but can still create temp tables or run anonymous procedures.
The example assumes all the crosstab columns are the same type (INTEGER), but the number of columns is data-driven and variadic. That said, JSON aggregate functions do allow for mixed data types, so there's potential for innovation via the use of embedded composite (mixed) types.
The real meat of it can be reduced to one step if you want to statically define the record type inside the JSON recordset function (via nested SELECTs that emit a composite type).
dbfiddle.uk
https://dbfiddle.uk/N1EzugHk
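For readers who don't want to open the fiddle, a rough sketch of that idea (my guess at its shape, using the tbl test case from earlier; xtab_type is a hypothetical temp table that defines the output record):
--Sketch only: build a temp table whose columns (one per distinct status) define the record type.
DO $$
DECLARE
    cols text;
BEGIN
    SELECT string_agg(format('%I integer', status), ', ' ORDER BY status)
    INTO cols
    FROM (SELECT DISTINCT status FROM tbl) s;

    EXECUTE format('CREATE TEMP TABLE xtab_type (%s)', cols);
END
$$;

--Aggregate each section into a JSON object and expand it through that record type.
SELECT j.section, r.*
FROM (SELECT section, json_object_agg(status, ct) AS obj
      FROM tbl
      GROUP BY section) j
CROSS JOIN LATERAL json_populate_record(NULL::xtab_type, j.obj) r
ORDER BY j.section;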
The crosstab function is available in the tablefunc extension. You'll have to create this extension once per database.
CREATE EXTENSION tablefunc;
You can use the code below to create a pivot table using crosstab:
create table test_crosstab( section text,
                            status text,
                            count numeric);

insert into test_crosstab values ( 'A','Active',1)
,( 'A','Inactive',2)
,( 'B','Active',4)
,( 'B','Inactive',5);

select * from crosstab(
          'select section
                  ,status
                  ,count
           from test_crosstab
           order by 1,2'
           )as ctab ("Section" text,"Active" numeric,"Inactive" numeric);
I have an employee/manager hierarchy which could end up being circular.
Ex:
28397468N>88518119N>87606705N>28397468N
Create Table emp_manager ( Emp_id varchar(30), Manager_id varchar(30));
Insert into emp_manager values ('28397468N','88518119N');
Insert into emp_manager values ('88518119N','87606705N');
Insert into emp_manager values ('87606705N','28397468N');
My requirement is:
When my proc is called and there are circular hierarchies in the emp_manager table, we should return an error listing the employees in the hierarchy.
The below link contains some useful info:
https://mccalljt.io/blog/2017/01/postgres-circular-references/
I have modified it as below:
select * from (
WITH RECURSIVE circular_managers(Emp_id, Manager_id, depth, path, cycle) AS (
SELECT u.Emp_id, u.Manager_id, 1,
ARRAY[u.Emp_id],
false
FROM emp_manager u
UNION ALL
SELECT u.Emp_id, u.Manager_id, cm.depth + 1,
(path || u.Emp_id)::character varying(32)[],
u.Emp_id = ANY(path)
FROM emp_manager u, circular_managers cm
WHERE u.Emp_id = cm.Manager_id AND NOT cycle
)
select
distinct (path) d
FROM circular_managers
WHERE cycle
AND path[1] = path[array_upper(path, 1)]) cm
BUT, the problem is, it is returning all combinations of the hierarchy:
{28397468N,88518119N,87606705N,28397468N}
{87606705N,28397468N,88518119N,87606705N}
{88518119N,87606705N,28397468N,88518119N}
I need a simple answer like this:
28397468N>88518119N>87606705N>28397468N
even this will do:
28397468N>88518119N>87606705N
Please help!
So all references:
{28397468N,88518119N,87606705N,28397468N}
{87606705N,28397468N,88518119N,87606705N}
{88518119N,87606705N,28397468N,88518119N}
are correct but just start from a different element.
I need a simple answer like this: 28397468N>88518119N>87606705N>28397468N
So what's needed is a filter for the same circle refs.
Let's do it this way:
sort the distinct items in each array
aggregate them back - so for all three references the result is '{28397468N,87606705N,88518119N}'
use the produced value for DISTINCT with FIRST_VALUE
WITH D (circle_ref ) AS (
VALUES
('{28397468N,88518119N,87606705N,28397468N}'::text[]),
('{87606705N,28397468N,88518119N,87606705N}'::text[]),
('{88518119N,87606705N,28397468N,88518119N}'::text[])
), ordered AS (
SELECT
D.circle_ref,
(SELECT ARRAY_AGG(DISTINCT el ORDER BY el) FROM UNNEST(D.circle_ref) AS el ) AS ordered_circle
FROM
D
)
SELECT DISTINCT
FIRST_VALUE (circle_ref) OVER (PARTITION BY ordered_circle ORDER BY circle_ref) AS circle_ref
FROM
ordered;
circle_ref
{28397468N,88518119N,87606705N,28397468N}
DB Fiddle: https://www.db-fiddle.com/f/6ytb2v11s8T95PPLoTZZed/0
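To get that single row directly from emp_manager, the same de-duplication can be bolted onto the recursive query from the question - a sketch (not tested against real data; the path is cast to text[] to keep both branches of the recursion the same type):
WITH RECURSIVE circular_managers(emp_id, manager_id, depth, path, cycle) AS (
    SELECT u.emp_id, u.manager_id, 1, ARRAY[u.emp_id::text], false
    FROM emp_manager u
    UNION ALL
    SELECT u.emp_id, u.manager_id, cm.depth + 1,
           cm.path || u.emp_id::text,
           u.emp_id = ANY(cm.path)
    FROM emp_manager u
    JOIN circular_managers cm ON u.emp_id = cm.manager_id
    WHERE NOT cm.cycle
), cycles AS (
    SELECT DISTINCT path
    FROM circular_managers
    WHERE cycle
    AND path[1] = path[array_upper(path, 1)]
), ordered AS (
    SELECT path,
           (SELECT array_agg(DISTINCT el ORDER BY el) FROM unnest(path) AS el) AS ordered_path
    FROM cycles
)
SELECT DISTINCT
       first_value(path) OVER (PARTITION BY ordered_path ORDER BY path) AS circle_ref
FROM ordered;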
To prevent circular references, you can use a closure table and a trigger - as explained in https://stackoverflow.com/a/38701519/5962802
The closure table will also allow you to easily get all subordinates for a given supervisor (no matter how deep in the hierarchy) - or all direct bosses of a given employee (up to the root).
Before using the rebuild_tree stored procedure you will have to remove all circular references from the hierarchy.
I need to use STRING_SPLIT in my stage table and import the results into another table.
Stage table:
DECLARE @stage TABLE(ID INT, Code VARCHAR(500))
INSERT INTO @stage
SELECT 1, '123_Potato_Orange_Fish'
UNION ALL
SELECT 2, '456_Tomato_Banana_Chicken'
UNION ALL
SELECT 3, '789_Onion_Mango_Lamb'
Final table:
DECLARE @Final TABLE
(
ID INT,
code VARCHAR(500),
Unit VARCHAR(100),
Vegetable VARCHAR(100),
Fruit VARCHAR(100),
Meat VARCHAR(100)
)
I am using an SSIS execute task to transform the stage table data and insert it into the final table. The Code column in the stage table is a string and '_' is used as the delimiter. I need to split the string and populate the final table as shown below:
ID code Unit Vegetable Fruit Meat
------------------------------------------------------------------
1 123_Potato_Orange_Fish 123 Potato Orange Fish
2 456_Tomato_Banana_Chicken 456 Tomato Banana Chicken
3 789_Onion_Mango_Lamb 789 Onion Mango Lamb
I am trying to use SQL Server 2016 built-in String_Split() function as shown here:
SELECT
ID,
Code, f.value AS Vegetable
FROM
@stage AS s
CROSS APPLY
(SELECT
value,
ROW_NUMBER() OVER(PARTITION BY s.ID ORDER BY s.ID) AS rn
FROM
String_Split(s.Code, '_')) AS f
WHERE
s.ID = 1 AND f.rn = 2
But it only splits one string at a time. As my stage data contains millions of records, I need to split all the strings in the Code column and store them in the respective columns.
Note: I don't want to use a temporary table.
Thanks.
You can add a Derived Column and, assuming that the format is consistent with what you listed, use the TOKEN function to split the input based on the "_" delimiter and the position of each string. From there, you can map each of the outputs to the appropriate destination column. The expressions below split your Code column based on the sample data in your question. Note that the output data type of TOKEN is DT_WSTR (Unicode). If you need non-Unicode data, you'll have to cast it back to DT_STR, which can also be done within the same Derived Column by adding (DT_STR,50,1252) (adjust the length as necessary) before each expression.
TOKEN(Code,"_",1)
TOKEN(Code,"_",2)
TOKEN(Code,"_",3)
Like @userfl89's answer, here is another SSIS solution, using a Script Component:
Add the 4 output columns to your Output0. Make sure you select Code as an input column.
// Split the Code column on '_' and map each part to its output column.
// (Assumes Unit was added as a numeric output column; drop the parse if it is a string.)
string[] col = Row.Code.ToString().Split('_');
Row.Unit = int.Parse(col[0]);
Row.Vegetable = col[1];
Row.Fruit = col[2];
Row.Meat = col[3];
Since the accepted answer uses TOKEN(), which is bound to SSIS, I want to provide a SQL Server solution too.
You are using v2016, which allows for OPENJSON. When you use this on a JSON array you'll get a column [key] indicating the position in the array and a column [value] providing the actual content.
It is very easy to transform a delimited string into a JSON array. The rest is pivoting by conditional aggregation. Try it out:
DECLARE @stage TABLE(ID INT, Code VARCHAR(500))
INSERT INTO @stage
SELECT 1, '123_Potato_Orange_Fish'
UNION ALL
SELECT 2, '456_Tomato_Banana_Chicken'
UNION ALL
SELECT 3, '789_Onion_Mango_Lamb'
SELECT ID
,Code
,MAX(CASE WHEN [key]=0 THEN CAST([value] AS INT) END) AS Unit
,MAX(CASE WHEN [key]=1 THEN [value] END) AS Vegetable
,MAX(CASE WHEN [key]=2 THEN [value] END) AS Fruit
,MAX(CASE WHEN [key]=3 THEN [value] END) AS Meat
FROM @stage
CROSS APPLY OPENJSON('["' + REPLACE(Code,'_','","') + '"]') A
GROUP BY ID,Code
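If the goal is to load these results straight into the final table, a sketch (assuming the @Final and @stage table variables declared in the question, all running in the same batch):
INSERT INTO @Final (ID, code, Unit, Vegetable, Fruit, Meat)
SELECT ID
      ,Code
      ,MAX(CASE WHEN [key]=0 THEN [value] END)
      ,MAX(CASE WHEN [key]=1 THEN [value] END)
      ,MAX(CASE WHEN [key]=2 THEN [value] END)
      ,MAX(CASE WHEN [key]=3 THEN [value] END)
FROM @stage
CROSS APPLY OPENJSON('["' + REPLACE(Code,'_','","') + '"]') A
GROUP BY ID, Code;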