Showing only TOP 1 value result from join duplicates - tsql

I have 3 tables like below. You will see how they are joined.
Orders Table
+---------+------------+
| Orderid | LocationId |
+---------+------------+
|      36 |         14 |
|      38 |         13 |
+---------+------------+
OrdersDetails Table
+-----------+---------+
| Detailsid | OrderId |
+-----------+---------+
|        38 |      36 |
|        39 |      36 |
|        40 |      38 |
+-----------+---------+
OrderLocations
+------------+------------+
| Locationid | DistanceKM |
+------------+------------+
|         13 |        550 |
|         14 |        245 |
+------------+------------+
When doing an inner join of the 3 tables we get:
Orderid | Detailsid | DistanceKM
--------+-----------+-----------
     36 |        38 |        245
     36 |        39 |        245
     38 |        40 |        550
I don't want to have a duplicate DistanceKM (e.g. 245 repeated). I would like a 0 instead for line item 2, like this:
Orderid | Detailsid | DistanceKM
--------+-----------+-----------
     36 |        38 |        245
     36 |        39 |          0
     38 |        40 |        550

Here is my solution:
Creating tables:
CREATE TABLE #Orders
(
    Orderid INT,
    LocationId INT
);
INSERT INTO #Orders
VALUES (36, 14),
       (38, 13);

CREATE TABLE #OrdersDetails
(
    Detailsid INT,
    OrderId INT
);
INSERT INTO #OrdersDetails
VALUES (38, 36),
       (39, 36),
       (40, 38);

CREATE TABLE #OrderLocations
(
    Locationid INT,
    DistanceKM INT
);
INSERT INTO #OrderLocations
VALUES (13, 550),
       (14, 245);
The actual query:
;WITH cte AS
(
    SELECT o.Orderid,
           d.Detailsid,
           l.DistanceKM,
           ROW_NUMBER() OVER (PARTITION BY l.DistanceKM ORDER BY o.Orderid) AS rn
    FROM #Orders AS o
    INNER JOIN #OrdersDetails AS d
        ON o.Orderid = d.OrderId
    INNER JOIN #OrderLocations AS l
        ON o.LocationId = l.Locationid
)
SELECT cte.Orderid,
       cte.Detailsid,
       CASE
           WHEN cte.rn > 1 THEN 0
           ELSE cte.DistanceKM
       END AS DistanceKM
FROM cte;
And here are the results:
Orderid | Detailsid | DistanceKM
--------+-----------+-----------
     36 |        38 |        245
     36 |        39 |          0
     38 |        40 |        550

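A side note (my own, not from the original post): partitioning by DistanceKM alone would also zero out rows from different orders that happen to share the same distance. Partitioning by the order is more robust; a sketch against the same temp tables:
;WITH cte AS
(
    SELECT o.Orderid,
           d.Detailsid,
           l.DistanceKM,
           -- one partition per order, so only repeats within an order are zeroed
           ROW_NUMBER() OVER (PARTITION BY o.Orderid ORDER BY d.Detailsid) AS rn
    FROM #Orders AS o
    INNER JOIN #OrdersDetails AS d
        ON o.Orderid = d.OrderId
    INNER JOIN #OrderLocations AS l
        ON o.LocationId = l.Locationid
)
SELECT Orderid,
       Detailsid,
       CASE WHEN rn > 1 THEN 0 ELSE DistanceKM END AS DistanceKM
FROM cte;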
Related

CTE RECURSIVE optimization, how to?

I need to optimize the performance of a common WITH RECURSIVE query... We can limit the depth of the tree and decompose it into many updates, and we can also change the representation (use an array)... I tried some options, but perhaps there is a "classic optimization solution" that I'm not seeing.
All details
There is a t_up table to be updated, with a composite primary key (pk1, pk2), one attribute attr, and an array of references to primary keys... and an unnested representation t_scan with the references, like this:
pk1 | pk2 | attr | ref_pk1 | ref_pk2
----+-----+------+---------+--------
 n  | 123 |  1   |         |
 n  | 456 |  2   |         |
 r  | 123 |  1   | w       | 123
 w  | 123 |  5   | n       | 456
 r  | 456 |  2   | n       | 123
 r  | 123 |  1   | n       | 111
 n  | 111 |  4   |         |
 ...| ... | ...  | ...     | ...
There are no loops.
UPDATE t_up SET x = pairs
FROM (
WITH RECURSIVE tree as (
SELECT pk1, pk2, attr, ref_pk1, ref_pk2,
array[array[0,0]]::bigint[] as all_refs
FROM t_scan
UNION ALL
SELECT c.pk1, c.pk2, c.attr, c.ref_pk1, c.ref_pk2
,p.all_refs || array[c.attr,c.pk2]
FROM t_scan c JOIN tree p
ON c.ref_pk1=p.pk1 AND c.ref_pk2=p.pk2 AND c.pk2!=p.pk2
AND array_length(p.all_refs,1)<5 -- 5 or 6 avoiding endless loops
)
SELECT pk1, pk2, array_agg_cat(all_refs) as pairs
FROM (
SELECT distinct pk1, pk2, all_refs
FROM tree
WHERE array_length(all_refs,1)>1 -- ignores initial array[0,0].
) t
GROUP BY 1,2
ORDER BY 1,2
) rec
WHERE rec.pk1=t_up.pk1 AND rec.pk2=t_up.pk2
;
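Note (mine, not from the original question): array_agg_cat is not a built-in Postgres aggregate; the query assumes a custom aggregate that concatenates arrays, definable for example as:
-- Hypothetical definition matching the usage above:
CREATE AGGREGATE array_agg_cat(anyarray) (
    SFUNC = array_cat,  -- append each input array onto the accumulated state
    STYPE = anyarray
);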
To test:
CREATE TABLE t_scan(
pk1 char,pk2 bigint, attr bigint,
ref_pk1 char, ref_pk2 bigint
);
INSERT INTO t_scan VALUES
('n',123, 1 ,NULL,NULL),
('n',456, 2 ,NULL,NULL),
('r',123, 1 ,'w' ,123),
('w',123, 5 ,'n' ,456),
('r',456, 2 ,'n' ,123),
('r',123, 1 ,'n' ,111),
('n',111, 4 ,NULL,NULL);
Running only the rec subquery, you will obtain:
pk1 | pk2 | pairs
-----+-----+-----------------
r | 123 | {{0,0},{1,123}}
r | 456 | {{0,0},{2,456}}
w | 123 | {{0,0},{5,123}}
But, unfortunately, to appreciate the "Big Data performance problem" you need to see it in a real database... I am preparing a public GitHub repository that runs with OpenStreetMap Big Data.

Ignore the first and last row in a query result

I'm trying to write a query that ignores the first and the last row of the result.
My query retrieves the sum of all medium values in the last hour, grouped into 5-minute buckets.
To ignore the first record I'm using OFFSET 1, and to ignore the last I was trying to exclude the id returned by a LIMIT 1 subquery on my id field, ordered by timestamp descending.
My query:
ws_controller_hist=>
SELECT to_timestamp(floor((extract('epoch' FROM TIMESTAMP) / 300)) * 300)
AS timestamp_min,
TYPE,
floor(sum(medium[1]))
FROM default_dataset
WHERE TYPE LIKE 'ap_clients.wlan0'
AND TIMESTAMP > CURRENT_TIMESTAMP - interval '85 minutes'
AND organization_id = '9fc02db4-c3df-4890-93ac-8dd575ca5638'
AND id NOT IN
(SELECT id
FROM default_dataset
ORDER BY TIMESTAMP DESC
LIMIT 1)
GROUP BY timestamp_min,
TYPE
ORDER BY timestamp_min ASC
OFFSET 1;
timestamp_min | type | floor
------------------------+------------------+-------
2017-12-19 14:20:00+00 | ap_clients.wlan0 | 38
2017-12-19 14:25:00+00 | ap_clients.wlan0 | 37
2017-12-19 14:30:00+00 | ap_clients.wlan0 | 39
2017-12-19 14:35:00+00 | ap_clients.wlan0 | 42
2017-12-19 14:40:00+00 | ap_clients.wlan0 | 43
2017-12-19 14:45:00+00 | ap_clients.wlan0 | 44
2017-12-19 14:50:00+00 | ap_clients.wlan0 | 45
2017-12-19 14:55:00+00 | ap_clients.wlan0 | 45
2017-12-19 15:00:00+00 | ap_clients.wlan0 | 43
2017-12-19 15:05:00+00 | ap_clients.wlan0 | 43
2017-12-19 15:10:00+00 | ap_clients.wlan0 | 50
2017-12-19 15:15:00+00 | ap_clients.wlan0 | 52
2017-12-19 15:20:00+00 | ap_clients.wlan0 | 50
2017-12-19 15:25:00+00 | ap_clients.wlan0 | 53
2017-12-19 15:30:00+00 | ap_clients.wlan0 | 49
2017-12-19 15:35:00+00 | ap_clients.wlan0 | 39
2017-12-19 15:40:00+00 | ap_clients.wlan0 | 16
This is not ignoring the last record: I get exactly the same rows whether or not I use the subquery "AND id NOT IN (SELECT id FROM default_dataset ORDER BY timestamp DESC LIMIT 1)".
Wrap your query in an outer query and use lag and OFFSET to do the trick: lag shifts each row's values down by one (so the last row's own values are never emitted), and OFFSET 2 then skips the leading NULL row plus the original first row.
SELECT lag(timestamp_min) OVER (ORDER BY timestamp_min) AS timestamp_min,
       lag(type) OVER (ORDER BY timestamp_min) AS type,
       lag(sum_first_medium) OVER (ORDER BY timestamp_min) AS sum_first_medium
FROM (SELECT to_timestamp(
floor(
(extract('epoch' FROM TIMESTAMP) / 300)
) * 300
) AS timestamp_min,
type,
floor(sum(medium[1])) AS sum_first_medium
FROM default_dataset
WHERE type = 'ap_clients.wlan0'
AND timestamp > current_timestamp - INTERVAL '85 minutes'
AND organization_id = '9fc02db4-c3df-4890-93ac-8dd575ca5638'
GROUP BY timestamp_min, type) lagme
OFFSET 2;
This is probably a bit long, but it will do exactly what you requested:
SELECT z.*
FROM
(SELECT y.*, min(row_number) OVER(), max(row_number) OVER()
FROM
(SELECT x.*, row_number() OVER(ORDER BY timestamp_min)
FROM
(SELECT to_timestamp(floor((extract('epoch' FROM TIMESTAMP) / 300)) * 300)
AS timestamp_min,
TYPE,
floor(sum(medium[1]))
FROM default_dataset
WHERE TYPE LIKE 'ap_clients.wlan0'
AND TIMESTAMP > CURRENT_TIMESTAMP - interval '85 minutes'
AND organization_id = '9fc02db4-c3df-4890-93ac-8dd575ca5638'
AND id NOT IN
(SELECT id
FROM default_dataset
ORDER BY TIMESTAMP DESC
LIMIT 1)
GROUP BY timestamp_min,
TYPE
ORDER BY timestamp_min ASC
) AS x
) AS y
) AS z WHERE row_number NOT IN (min, max)
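A compact variant of the same idea (my sketch, not from the answer): compute both a row number and the total row count as window functions, then keep only the strictly interior rows.
SELECT timestamp_min, type, sum_first_medium
FROM (
    SELECT x.*,
           row_number() OVER (ORDER BY timestamp_min) AS rn,
           count(*) OVER () AS cnt
    FROM (
        SELECT to_timestamp(floor(extract('epoch' FROM timestamp) / 300) * 300) AS timestamp_min,
               type,
               floor(sum(medium[1])) AS sum_first_medium
        FROM default_dataset
        WHERE type = 'ap_clients.wlan0'
          AND timestamp > current_timestamp - interval '85 minutes'
          AND organization_id = '9fc02db4-c3df-4890-93ac-8dd575ca5638'
        GROUP BY timestamp_min, type
    ) AS x
) AS y
WHERE rn > 1 AND rn < cnt;  -- drops the first and the last row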

Select limited set of fields from inner query with preserved order

I've got a SQL query which involves one-to-many relationships with ORDER BY clause:
SELECT
s0_.id,
s0_.created_at,
s5_.sort_order
FROM
surveys_submits s0_
INNER JOIN surveys_answers s3_ ON s0_.id = s3_.submit_id
INNER JOIN surveys_questions s4_ ON s3_.question_id = s4_.id
INNER JOIN surveys_questions_references s5_ ON s4_.id = s5_.question_id
ORDER BY
s0_.created_at DESC,
s5_.sort_order ASC
This query returns following results:
id | created_at | sort_order
----+---------------------+-----------
218 | 2014-03-18 12:21:09 | 1
218 | 2014-03-18 12:21:09 | 2
218 | 2014-03-18 12:21:09 | 3
218 | 2014-03-18 12:21:09 | 4
218 | 2014-03-18 12:21:09 | 5
217 | 2014-03-18 12:20:57 | 1
217 | 2014-03-18 12:20:57 | 2
217 | 2014-03-18 12:20:57 | 3
...
214 | 2014-03-18 12:18:01 | 4
214 | 2014-03-18 12:18:01 | 5
213 | 2014-03-18 12:17:48 | 1
213 | 2014-03-18 12:17:48 | 2
213 | 2014-03-18 12:17:48 | 3
213 | 2014-03-18 12:17:48 | 4
213 | 2014-03-18 12:17:48 | 5
Now, I need to modify this query so that it returns the first 25 distinct ids from the beginning, with the order preserved.
I've tried something like this:
SELECT DISTINCT id
FROM (
SELECT ... ORDER BY ...
) inner_query
ORDER BY created_at DESC, sort_order ASC
LIMIT 25 OFFSET 0;
But obviously it doesn't work:
ERROR: for SELECT DISTINCT, ORDER BY expressions must appear in select list
LINE 16: created_at DESC,
^
********** Error **********
...and I can't add the created_at and sort_order columns to the SELECT clause because it would result in duplicated ids, just like the first query.
select *
from (
SELECT distinct on (s0_.id)
s0_.id,
s0_.created_at,
s5_.sort_order
FROM
surveys_submits s0_
INNER JOIN surveys_answers s3_ ON s0_.id = s3_.submit_id
INNER JOIN surveys_questions s4_ ON s3_.question_id = s4_.id
INNER JOIN surveys_questions_references s5_ ON s4_.id = s5_.question_id
ORDER BY
s0_.id,
s0_.created_at DESC,
s5_.sort_order ASC
) s
order by
created_at desc,
sort_order ASC
limit 25
From the manual:
SELECT DISTINCT ON ( expression [, ...] ) keeps only the first row of each set of rows where the given expressions evaluate to equal. The DISTINCT ON expressions are interpreted using the same rules as for ORDER BY (see above). Note that the "first row" of each set is unpredictable unless ORDER BY is used to ensure that the desired row appears first.
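A minimal illustration of that rule (my sketch, with a hypothetical events table): the ORDER BY is what makes "the first row per id" deterministic.
-- Keeps exactly one row per id; ORDER BY ensures it is the latest one.
SELECT DISTINCT ON (id) id, created_at
FROM events
ORDER BY id, created_at DESC;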

Find Parent Recursively using Query

I am using PostgreSQL. I have a table like the one below:
parent_id | child_id
----------+---------
      101 |      102
      103 |      104
      104 |      105
      105 |      106
I want to write a SQL query that gives the final (root) parent of an input.
I.e., suppose I pass 106 as input; then its output will be 103.
(106 --> 105 --> 104 --> 103)
Here's a complete example. First the DDL:
test=> CREATE TABLE node (
test(> id SERIAL,
test(> label TEXT NOT NULL, -- name of the node
test(> parent_id INT,
test(> PRIMARY KEY(id)
test(> );
NOTICE: CREATE TABLE will create implicit sequence "node_id_seq" for serial column "node.id"
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "node_pkey" for table "node"
CREATE TABLE
...and some data...
test=> INSERT INTO node (label, parent_id) VALUES ('n1',NULL),('n2',1),('n3',2),('n4',3);
INSERT 0 4
test=> INSERT INTO node (label) VALUES ('garbage1'),('garbage2'), ('garbage3');
INSERT 0 3
test=> INSERT INTO node (label,parent_id) VALUES ('garbage4',6);
INSERT 0 1
test=> SELECT * FROM node;
id | label | parent_id
----+----------+-----------
1 | n1 |
2 | n2 | 1
3 | n3 | 2
4 | n4 | 3
5 | garbage1 |
6 | garbage2 |
7 | garbage3 |
8 | garbage4 | 6
(8 rows)
This performs a recursive query on every id in node:
test=> WITH RECURSIVE nodes_cte(id, label, parent_id, depth, path) AS (
SELECT tn.id, tn.label, tn.parent_id, 1::INT AS depth, tn.id::TEXT AS path
FROM node AS tn
WHERE tn.parent_id IS NULL
UNION ALL
SELECT c.id, c.label, c.parent_id, p.depth + 1 AS depth,
(p.path || '->' || c.id::TEXT)
FROM nodes_cte AS p, node AS c
WHERE c.parent_id = p.id
)
SELECT * FROM nodes_cte AS n ORDER BY n.id ASC;
id | label | parent_id | depth | path
----+----------+-----------+-------+------------
1 | n1 | | 1 | 1
2 | n2 | 1 | 2 | 1->2
3 | n3 | 2 | 3 | 1->2->3
4 | n4 | 3 | 4 | 1->2->3->4
5 | garbage1 | | 1 | 5
6 | garbage2 | | 1 | 6
7 | garbage3 | | 1 | 7
8 | garbage4 | 6 | 2 | 6->8
(8 rows)
This gets all of the descendents WHERE node.id = 1:
test=> WITH RECURSIVE nodes_cte(id, label, parent_id, depth, path) AS (
SELECT tn.id, tn.label, tn.parent_id, 1::INT AS depth, tn.id::TEXT AS path FROM node AS tn WHERE tn.id = 1
UNION ALL
SELECT c.id, c.label, c.parent_id, p.depth + 1 AS depth, (p.path || '->' || c.id::TEXT) FROM nodes_cte AS p, node AS c WHERE c.parent_id = p.id
)
SELECT * FROM nodes_cte AS n;
id | label | parent_id | depth | path
----+-------+-----------+-------+------------
1 | n1 | | 1 | 1
2 | n2 | 1 | 2 | 1->2
3 | n3 | 2 | 3 | 1->2->3
4 | n4 | 3 | 4 | 1->2->3->4
(4 rows)
The following will get the path of the node with id 4:
test=> WITH RECURSIVE nodes_cte(id, label, parent_id, depth, path) AS (
SELECT tn.id, tn.label, tn.parent_id, 1::INT AS depth, tn.id::TEXT AS path
FROM node AS tn
WHERE tn.parent_id IS NULL
UNION ALL
SELECT c.id, c.label, c.parent_id, p.depth + 1 AS depth,
(p.path || '->' || c.id::TEXT)
FROM nodes_cte AS p, node AS c
WHERE c.parent_id = p.id
)
SELECT * FROM nodes_cte AS n WHERE n.id = 4;
id | label | parent_id | depth | path
----+-------+-----------+-------+------------
4 | n4 | 3 | 4 | 1->2->3->4
(1 row)
And let's assume you want to limit your search to descendants with a depth less than three (note that depth hasn't been incremented yet):
test=> WITH RECURSIVE nodes_cte(id, label, parent_id, depth, path) AS (
SELECT tn.id, tn.label, tn.parent_id, 1::INT AS depth, tn.id::TEXT AS path
FROM node AS tn WHERE tn.id = 1
UNION ALL
SELECT c.id, c.label, c.parent_id, p.depth + 1 AS depth,
(p.path || '->' || c.id::TEXT)
FROM nodes_cte AS p, node AS c
WHERE c.parent_id = p.id AND p.depth < 2
)
SELECT * FROM nodes_cte AS n;
id | label | parent_id | depth | path
----+-------+-----------+-------+------
1 | n1 | | 1 | 1
2 | n2 | 1 | 2 | 1->2
(2 rows)
I'd recommend using an ARRAY data type instead of a string for demonstrating the "path", but the arrow is more illustrative of the parent<=>child relationship.
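For instance, a sketch of the same CTE with an integer-array path (my variant, reusing the node table above):
WITH RECURSIVE nodes_cte(id, label, parent_id, depth, path) AS (
    SELECT tn.id, tn.label, tn.parent_id, 1::INT, ARRAY[tn.id]
    FROM node AS tn
    WHERE tn.parent_id IS NULL
    UNION ALL
    SELECT c.id, c.label, c.parent_id, p.depth + 1,
           p.path || c.id   -- append the child's id to the array
    FROM nodes_cte AS p
    JOIN node AS c ON c.parent_id = p.id
)
SELECT * FROM nodes_cte ORDER BY id;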
Use WITH RECURSIVE to create a Common Table Expression (CTE). For the non-recursive term, get the rows in which the child is immediately below the parent:
SELECT
c.child_id,
c.parent_id
FROM
mytable c
LEFT JOIN
mytable p ON c.parent_id = p.child_id
WHERE
p.child_id IS NULL
child_id | parent_id
----------+-----------
102 | 101
104 | 103
For the recursive term, you want the children of these children.
WITH RECURSIVE tree(child, root) AS (
SELECT
c.child_id,
c.parent_id
FROM
mytable c
LEFT JOIN
mytable p ON c.parent_id = p.child_id
WHERE
p.child_id IS NULL
UNION
SELECT
child_id,
root
FROM
tree
INNER JOIN
mytable on tree.child = mytable.parent_id
)
SELECT * FROM tree;
child | root
-------+------
102 | 101
104 | 103
105 | 103
106 | 103
You can filter the children when querying the CTE:
WITH RECURSIVE tree(child, root) AS (...) SELECT root FROM tree WHERE child = 106;
root
------
103
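An alternative (my sketch, against the mytable layout from the question) is to walk upward from the given child instead of computing roots for every row; the root is the one ancestor that never appears as a child:
WITH RECURSIVE walk_up(parent_id, child_id) AS (
    SELECT parent_id, child_id
    FROM mytable
    WHERE child_id = 106                        -- start at the input node
    UNION ALL
    SELECT t.parent_id, t.child_id
    FROM mytable t
    JOIN walk_up w ON t.child_id = w.parent_id  -- step up to the parent's own row
)
SELECT parent_id AS root
FROM walk_up
WHERE parent_id NOT IN (SELECT child_id FROM mytable);  -- returns 103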

Equivalent to unpivot() in PostgreSQL

Is there a unpivot equivalent function in PostgreSQL?
Create an example table:
CREATE TEMP TABLE foo (id int, a text, b text, c text);
INSERT INTO foo VALUES (1, 'ant', 'cat', 'chimp'), (2, 'grape', 'mint', 'basil');
You can 'unpivot' or 'uncrosstab' using UNION ALL:
SELECT id,
'a' AS colname,
a AS thing
FROM foo
UNION ALL
SELECT id,
'b' AS colname,
b AS thing
FROM foo
UNION ALL
SELECT id,
'c' AS colname,
c AS thing
FROM foo
ORDER BY id;
This runs 3 different subqueries on foo, one for each column we want to unpivot, and returns, in one table, every record from each of the subqueries.
But that will scan the table N times, where N is the number of columns you want to unpivot. This is inefficient, and a big problem when, for example, you're working with a very large table that takes a long time to scan.
Instead, use:
SELECT id,
unnest(array['a', 'b', 'c']) AS colname,
unnest(array[a, b, c]) AS thing
FROM foo
ORDER BY id;
This is easier to write, and it will only scan the table once.
array[a, b, c] returns an array object, with the values of a, b, and c as its elements.
unnest(array[a, b, c]) breaks the results into one row for each of the array's elements.
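For the sample foo data above, both forms should produce (modulo row order within each id):
 id | colname | thing
----+---------+-------
  1 | a       | ant
  1 | b       | cat
  1 | c       | chimp
  2 | a       | grape
  2 | b       | mint
  2 | c       | basil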
You could use VALUES() and JOIN LATERAL to unpivot the columns.
Sample data:
CREATE TABLE test(id int, a INT, b INT, c INT);
INSERT INTO test(id,a,b,c) VALUES (1,11,12,13),(2,21,22,23),(3,31,32,33);
Query:
SELECT t.id, s.col_name, s.col_value
FROM test t
JOIN LATERAL(VALUES('a',t.a),('b',t.b),('c',t.c)) s(col_name, col_value) ON TRUE;
Using this approach it is possible to unpivot multiple groups of columns at once.
EDIT
Using Zack's suggestion:
SELECT t.id, col_name, col_value
FROM test t
CROSS JOIN LATERAL (VALUES('a', t.a),('b', t.b),('c',t.c)) s(col_name, col_value);
which is equivalent to:
SELECT t.id, col_name, col_value
FROM test t
,LATERAL (VALUES('a', t.a),('b', t.b),('c',t.c)) s(col_name, col_value);
Great article by Thomas Kellerer, reproduced below:
Unpivot with Postgres
Sometimes it's necessary to normalize de-normalized tables - the opposite of a "crosstab" or "pivot" operation. Postgres does not support an UNPIVOT operator like Oracle or SQL Server, but simulating it is very simple.
Take the following table that stores aggregated values per quarter:
create table customer_turnover
(
customer_id integer,
q1 integer,
q2 integer,
q3 integer,
q4 integer
);
And the following sample data:
customer_id | q1 | q2 | q3 | q4
------------+-----+-----+-----+----
1 | 100 | 210 | 203 | 304
2 | 150 | 118 | 422 | 257
3 | 220 | 311 | 271 | 269
But we want the quarters to be rows (as they should be in a normalized data model).
In Oracle or SQL Server this could be achieved with the UNPIVOT operator, but that is not available in Postgres. However, Postgres' ability to use the VALUES clause like a table makes this actually quite easy:
select c.customer_id, t.*
from customer_turnover c
cross join lateral (
values
(c.q1, 'Q1'),
(c.q2, 'Q2'),
(c.q3, 'Q3'),
(c.q4, 'Q4')
) as t(turnover, quarter)
order by customer_id, quarter;
will return the following result:
customer_id | turnover | quarter
------------+----------+--------
1 | 100 | Q1
1 | 210 | Q2
1 | 203 | Q3
1 | 304 | Q4
2 | 150 | Q1
2 | 118 | Q2
2 | 422 | Q3
2 | 257 | Q4
3 | 220 | Q1
3 | 311 | Q2
3 | 271 | Q3
3 | 269 | Q4
The equivalent query with the standard UNPIVOT operator would be:
select customer_id, turnover, quarter
from customer_turnover c
UNPIVOT (turnover for quarter in (q1 as 'Q1',
q2 as 'Q2',
q3 as 'Q3',
q4 as 'Q4'))
order by customer_id, quarter;
FYI for those of us looking for how to unpivot in Redshift:
the long-form solution given by Stew appears to be the only way to accomplish this.
For those who cannot see it there, here is the text, pasted below:
We do not have built-in functions that will do pivot or unpivot. However,
you can always write SQL to do that.
create table sales (regionid integer, q1 integer, q2 integer, q3 integer, q4 integer);
insert into sales values (1,10,12,14,16), (2,20,22,24,26);
select * from sales order by regionid;
regionid | q1 | q2 | q3 | q4
----------+----+----+----+----
1 | 10 | 12 | 14 | 16
2 | 20 | 22 | 24 | 26
(2 rows)
unpivot query (columns to rows; the post names the result table sales_pivoted):
create table sales_pivoted (regionid, quarter, sales)
as
select regionid, 'Q1', q1 from sales
UNION ALL
select regionid, 'Q2', q2 from sales
UNION ALL
select regionid, 'Q3', q3 from sales
UNION ALL
select regionid, 'Q4', q4 from sales
;
select * from sales_pivoted order by regionid, quarter;
regionid | quarter | sales
----------+---------+-------
1 | Q1 | 10
1 | Q2 | 12
1 | Q3 | 14
1 | Q4 | 16
2 | Q1 | 20
2 | Q2 | 22
2 | Q3 | 24
2 | Q4 | 26
(8 rows)
pivot query (rows back to columns):
select regionid, sum(Q1) as Q1, sum(Q2) as Q2, sum(Q3) as Q3, sum(Q4) as Q4
from
(select regionid,
case quarter when 'Q1' then sales else 0 end as Q1,
case quarter when 'Q2' then sales else 0 end as Q2,
case quarter when 'Q3' then sales else 0 end as Q3,
case quarter when 'Q4' then sales else 0 end as Q4
from sales_pivoted)
group by regionid
order by regionid;
regionid | q1 | q2 | q3 | q4
----------+----+----+----+----
1 | 10 | 12 | 14 | 16
2 | 20 | 22 | 24 | 26
(2 rows)
Hope this helps, Neil
Pulling slightly modified content from the link in the comment from #a_horse_with_no_name into an answer because it works:
Installing Hstore
If you don't have hstore installed and are running PostgreSQL 9.1+, you can use the handy
CREATE EXTENSION hstore;
For lower versions, look for the hstore.sql file in share/contrib and run in your database.
Assuming that your source (e.g., wide data) table has one 'id' column, named id_field, and any number of 'value' columns, all of the same type, the following will create an unpivoted view of that table.
CREATE VIEW vw_unpivot AS
SELECT id_field, (h).key AS column_name, (h).value AS column_value
FROM (
SELECT id_field, each(hstore(foo) - 'id_field'::text) AS h
FROM zcta5 as foo
) AS unpiv ;
This works with any number of 'value' columns. All of the resulting values will be text, unless you cast, e.g., (h).value::numeric.
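For example (my sketch, using the vw_unpivot view defined above):
-- Cast the text values back to numbers; this assumes every unpivoted
-- column of the source table holds numeric-looking data.
SELECT id_field, column_name, column_value::numeric AS column_value
FROM vw_unpivot;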
Just use JSON:
with data (id, name) as (
values (1, 'a'), (2, 'b')
)
select t.*
from data, lateral jsonb_each_text(to_jsonb(data)) with ordinality as t
order by data.id, t.ordinality;
This yields
|key |value|ordinality|
|----|-----|----------|
|id |1 |1 |
|name|a |2 |
|id |2 |1 |
|name|b |2 |
I wrote a horrible unpivot function for PostgreSQL. It's rather slow, but it at least returns results the way you'd expect an unpivot operation to.
https://cgsrv1.arrc.csiro.au/blog/2010/05/14/unpivotuncrosstab-in-postgresql/
Hopefully you can find it useful.
Depending on what you want to do... something like this can be helpful.
with wide_table as (
select 1 a, 2 b, 3 c
union all
select 4 a, 5 b, 6 c
)
select unnest(array[a,b,c]) from wide_table
You can use FROM UNNEST() array handling to unpivot a dataset, in tandem with a correlated subquery (works with PG 9.4).
FROM UNNEST() is more powerful and flexible than the typical method of using FROM (VALUES ....) to unpivot datasets, because FROM UNNEST() is variadic (n-ary arity). By using a correlated subquery, the need for the lateral WITH ORDINALITY clause is eliminated, and Postgres keeps the resulting parallel columnar sets in the proper ordinal sequence.
This is, by the way, fast: in practical use it spawns 8 million rows in under 15 seconds on a 24-core system.
WITH _students AS ( /** CTE **/
SELECT * FROM
( SELECT 'jane'::TEXT ,'doe'::TEXT , 1::INT
UNION
SELECT 'john'::TEXT ,'doe'::TEXT , 2::INT
UNION
SELECT 'jerry'::TEXT ,'roe'::TEXT , 3::INT
UNION
SELECT 'jodi'::TEXT ,'roe'::TEXT , 4::INT
) s ( fn, ln, id )
) /** end WITH **/
SELECT s.id
, ax.fanm -- field labels, now expanded to two rows
, ax.anm -- field data, now expanded to two rows
, ax.someval -- manually incl. data
, ax.rankednum -- manually assigned ranks
,ax.genser -- auto-generate ranks
FROM _students s
,UNNEST /** MULTI-UNNEST() BLOCK **/
(
( SELECT ARRAY[ fn, ln ]::text[] AS anm -- expanded into two rows by outer UNNEST()
/** CORRELATED SUBQUERY **/
FROM _students s2 WHERE s2.id = s.id -- outer relation
)
,( /** ordinal relationship preserved in variadic UNNEST() **/
SELECT ARRAY[ 'first name', 'last name' ]::text[] -- exp. into 2 rows
AS fanm
)
,( SELECT ARRAY[ 'z','x','y'] -- only 3 rows gen'd, but ordinal rela. kept
AS someval
)
,( SELECT ARRAY[ 1,2,3,4,5 ] -- 5 rows gen'd, ordinal rela. kept.
AS rankednum
)
,( SELECT ARRAY( /** you may go wild ... **/
SELECT generate_series(1, 15, 3 )
AS genser
)
)
) ax ( anm, fanm, someval, rankednum , genser )
;
RESULT SET:
+--------+----------------+-----------+----------+---------+-------
| id | fanm | anm | someval |rankednum| [ etc. ]
+--------+----------------+-----------+----------+---------+-------
| 2 | first name | john | z | 1 | .
| 2 | last name | doe | y | 2 | .
| 2 | [null] | [null] | x | 3 | .
| 2 | [null] | [null] | [null] | 4 | .
| 2 | [null] | [null] | [null] | 5 | .
| 1 | first name | jane | z | 1 | .
| 1 | last name | doe | y | 2 | .
| 1 | | | x | 3 | .
| 1 | | | | 4 | .
| 1 | | | | 5 | .
| 4 | first name | jodi | z | 1 | .
| 4 | last name | roe | y | 2 | .
| 4 | | | x | 3 | .
| 4 | | | | 4 | .
| 4 | | | | 5 | .
| 3 | first name | jerry | z | 1 | .
| 3 | last name | roe | y | 2 | .
| 3 | | | x | 3 | .
| 3 | | | | 4 | .
| 3 | | | | 5 | .
+--------+----------------+-----------+----------+---------+-------
Here's a way that combines the hstore and CROSS JOIN approaches from other answers.
It's a modified version of my answer to a similar question, which is itself based on the method at https://blog.sql-workbench.eu/post/dynamic-unpivot/ and another answer to that question.
-- Example wide data with a column for each year...
WITH example_wide_data("id", "2001", "2002", "2003", "2004") AS (
VALUES
(1, 4, 5, 6, 7),
(2, 8, 9, 10, 11)
)
-- that is tided to have "year" and "value" columns
SELECT
id,
r.key AS year,
r.value AS value
FROM
example_wide_data w
CROSS JOIN
each(hstore(w.*)) AS r(key, value)
WHERE
-- This chooses columns that look like years
-- In other cases you might need a different condition
r.key ~ '^[0-9]{4}$';
It has a few benefits over other solutions:
By using hstore and not jsonb, it hopefully minimises issues with type conversions (although hstore does convert everything to text)
The columns don't need to be hard coded or known in advance. Here, columns are chosen by a regex on the name, but you could use any SQL logic based on the name, or even the value.
It doesn't require PL/pgSQL - it's all SQL