I have data like the following:
Id | Data                               | ParentId
---+------------------------------------+---------
1  | IceCream # Chocolate # SoftDrink   | 0
2  | Amul,Havemore#Cadbary,Nestle#Pepsi | 1
3  | Party#Wedding                      | 0
I want to split this data into the format below, where row 2 is dependent on row 1. I have added ParentId, which is used to find the dependency.
IceCream | Amul | Party
IceCream | Havemore | Party
IceCream | Amul | Wedding
IceCream | Havemore | Wedding
Chocolate | Cadbary | Party
Chocolate | Nestle | Party
Chocolate | Cadbary | Wedding
Chocolate | Nestle | Wedding
SoftDrink | Pepsi | Party
SoftDrink | Pepsi | Wedding
I have used unnest(string_to_array) to split the strings, but I am unable to loop through the results to produce this combination.
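For reference, the splitting step described might look like this (table and column names assumed from the sample above):

SELECT id, parent_id, unnest(string_to_array(data, '#')) AS part
FROM mytable;

The hard part, addressed in the answer below, is recombining the split rows across the parent/child dependency.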
This is very "unstable", like sitting on a knife edge, and could easily fall apart. It depends on assigning row numbers to each delimited value and then joining on those values. Maybe those flags that are known to you (but unfortunately not to us) can stabilize it. But it does match your indicated expectations. It uses the function regexp_split_to_table rather than unnest to split on the delimiters.
with base (num, list) as
( values (1,'IceCream#Chocolate#SoftDrink')
, (2,'Amul,Havemore#Cadbary,Nestle#Pepsi')
, (3,'Party#Wedding')
)
, product as
(select p, row_number() over () pn
from (
select regexp_split_to_table(list,'#') p
from base
where num=1
) x
)
, maker as
(select regexp_split_to_table(m, ',') m, row_number() over () mn
from (
select regexp_split_to_table(list,'#') m
from base
where num=2
) y
)
, event as
( select regexp_split_to_table(regexp_split_to_table(list,'#'), ',') e
from base
where num=3
)
select p as product
, m as maker
, e as event
from (product join maker on pn = mn) cross join event e
order by pn, e, m;
Hope it helps.
I am using PostgreSQL and applying a window function. Previously I had to find the first g_id with the same last name and address (street_address and city), so I simply put the last name in the PARTITION BY clause of the window function.
But now I have a requirement to find the first g_id whose last name is not the same, while the address is the same. How can I do it?
This is what I was doing previously:
SELECT g_id AS g_id,
       FIRST_VALUE(g_id)
         OVER (PARTITION BY l_name, street_address, city
               ORDER BY last_date DESC NULLS LAST) AS c_id,
       street_address AS street_address
FROM mytable;
Let's say this is my table:
g_id | l_name | street_address | city | last_date
_________________________________________________
x1 | bar | abc road | khi | 11-6-19
x2 | bar | abc road | khi | 12-6-19
x3 | foo | abc road | khi | 19-6-19
x4 | harry | abc road | khi | 17-6-19
x5 | bar | xyz road | khi | 11-6-19
_________________________________________________
In the previous scenario:
If I compute c_id for the first row, it should return 'x2', as it considers these rows:
_________________________________________________
g_id | l_name | street_address | city | last_date
_________________________________________________
x1 | bar | abc road | khi | 11-6-19
x2 | bar | abc road | khi | 12-6-19
_________________________________________________
and returns the row with the latest last_date.
What I want now is to select these rows (rows with the same street_address and city but a different l_name):
g_id | l_name | street_address | city | last_date
_________________________________________________
x1 | bar | abc road | khi | 11-6-19
x3 | foo | abc road | khi | 19-6-19
x4 | harry | abc road | khi | 17-6-19
_________________________________________________
and the output will be x3.
Somehow I want to compare the l_name column for inequality with the current row's last name, and then partition by the address fields. And if no rows satisfy the condition, c_id should be equal to the current row's g_id.
Looking at your expected output, it's not clear whether you want the earliest or the latest row for each group. You may change the ORDER BY on last_date accordingly in this query, which uses DISTINCT ON:
SELECT DISTINCT ON ( street_address, city, l_name) *
FROM mytable
ORDER BY street_address,
city,
l_name,
last_date --change this to last_date desc if you want latest
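With the sample data above and ascending last_date, this should return one row per (street_address, city, l_name) group:

 g_id | l_name | street_address | city | last_date
------+--------+----------------+------+----------
 x1   | bar    | abc road       | khi  | 11-6-19
 x3   | foo    | abc road       | khi  | 19-6-19
 x4   | harry  | abc road       | khi  | 17-6-19
 x5   | bar    | xyz road       | khi  | 11-6-19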
After discussing the details in chat, here is another approach:
SELECT DISTINCT ON (t1.g_id)
       t1.*,
       COALESCE(t2.g_id, t1.g_id) AS c_id
FROM mytable t1
LEFT JOIN mytable t2
       ON t1.street_address = t2.street_address
      AND t1.city = t2.city
      AND t1.l_name != t2.l_name
ORDER BY t1.g_id, t2.last_date DESC
Here is how I solved it using a subquery.
First, create the example table:
CREATE TABLE mytable
("g_id" varchar(2), "l_name" varchar(5), "street_address" varchar(8), "city" varchar(3), "last_date" date)
;
INSERT INTO mytable
("g_id", "l_name", "street_address", "city", "last_date")
VALUES
('x1', 'bar', 'abc road', 'khi', '11-6-19'),
('x2', 'bar', 'abc road', 'khi', '12-6-19'),
('x3', 'foo', 'abc road', 'khi', '19-6-19'),
('x4', 'harry', 'abc road', 'khi', '17-6-19'),
('x5', 'bar', 'xyz road', 'khi', '11-6-19')
;
Query to get the g_ids:
SELECT *,
       (SELECT b.g_id
        FROM mytable b
        WHERE (base.g_id = b.g_id)
           OR (base.l_name <> b.l_name
               AND base.street_address = b.street_address
               AND base.city = b.city)
        ORDER BY b.last_date DESC
        LIMIT 1)
FROM mytable base
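With the sample data, the subquery column (call it c_id) should come out as:

 g_id | c_id
------+------
 x1   | x3
 x2   | x3
 x3   | x3
 x4   | x3
 x5   | x5

x3 wins for the abc road rows because it has the latest last_date among qualifying rows, and x5 falls back to itself because no row on xyz road has a different l_name.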
How can I ensure my full text search results are ordered by exact matches and THEN prefixed matches?
SELECT ticker, name, ts_rank(document, to_tsquery('english', 'MAT:*')) AS rank
FROM (
SELECT *, setweight(to_tsvector('english', ticker), 'A') || setweight(to_tsvector('english', name), 'B') AS document
FROM ( VALUES
('MATI-R' , 'MATICHON PCL.NVDR')
,('MATCH-R', 'MATCHING MAXIMIZE SLN. NVDR')
,('MATV' , 'MATAV-CABLE SYS.MEDIA SPN.ADR 1:2 DEAD - DELIST.03/07/06')
,('MAT' , 'MATISSE HOLDINGS DEAD - 03/10/06')
,('MAT' , 'MATTEL')
) data (ticker,name)
) ss ORDER BY rank DESC
I tried a few of the suggestions on https://www.postgresql.org/docs/9.5/static/datatype-textsearch.html like to_tsquery('english', 'MAT:A & MAT:*B') but none seem to give me the ordering I'm looking for. The current output is
ticker | name | rank
---------+----------------------------------------------------------+----------
MATI-R | MATICHON PCL.NVDR | 1.45903
MATCH-R | MATCHING MAXIMIZE SLN. NVDR | 1.27665
MATV | MATAV-CABLE SYS.MEDIA SPN.ADR 1:2 DEAD - DELIST.03/07/06 | 1.09427
MAT | MATISSE HOLDINGS DEAD - 03/10/06 | 0.851098
MAT | MATTEL | 0.851098
when I want something more like
ticker | name | rank
---------+----------------------------------------------------------+----------
MAT | MATTEL | ??
MAT | MATISSE HOLDINGS DEAD - 03/10/06 | ??
MATCH-R | MATCHING MAXIMIZE SLN. NVDR | ??
MATI-R | MATICHON PCL.NVDR | ??
MATV | MATAV-CABLE SYS.MEDIA SPN.ADR 1:2 DEAD - DELIST.03/07/06 | ??
Use LIKE or ILIKE:
SELECT ticker, name, ts_rank(document, to_tsquery('english', 'MAT:*')) AS rank
FROM (
SELECT *, setweight(to_tsvector('english', ticker), 'A') || setweight(to_tsvector('english', name), 'B') AS document
FROM ( VALUES
('MATI-R' , 'MATICHON PCL.NVDR')
,('MATCH-R', 'MATCHING MAXIMIZE SLN. NVDR')
,('MATV' , 'MATAV-CABLE SYS.MEDIA SPN.ADR 1:2 DEAD - DELIST.03/07/06')
,('MAT' , 'MATISSE HOLDINGS DEAD - 03/10/06')
,('MAT' , 'MATTEL')
) data (ticker,name)
) ss ORDER BY name LIKE concat('%', ticker, '%') desc, rank DESC
ticker | name | rank
---------+----------------------------------------------------------+----------
MAT | MATISSE HOLDINGS DEAD - 03/10/06 | 0.851098
MAT | MATTEL | 0.851098
MATI-R | MATICHON PCL.NVDR | 1.45903
MATCH-R | MATCHING MAXIMIZE SLN. NVDR | 1.27665
MATV | MATAV-CABLE SYS.MEDIA SPN.ADR 1:2 DEAD - DELIST.03/07/06 | 1.09427
(5 rows)
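A further note, not from the original answer: if the goal is specifically to float exact matches on the search term itself to the top, a boolean sort key is a more direct variant (sketch, with the term 'MAT' hard-coded):

) ss ORDER BY (ticker = 'MAT') DESC, rank DESC

Since booleans sort false before true, DESC puts the exact ticker matches first.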
I have data in a self-join hierarchical table where Continents have many Countries have many Regions have many States have many Cities.
Self-joining table structure:
|-------------------------------------------------------------|
| ID | Name | Type | ParentID | IsTopLevel |
|-------------------------------------------------------------|
| 1 | North America | Continent | NULL | 1 |
| 12 | United States | Country | 1 | 0 |
| 113 | Midwest | Region | 12 | 0 |
| 155 | Kansas | State | 113 | 0 |
| 225 | Topeka | City | 155 | 0 |
| 2 | South America | Continent | NULL | 1 |
| 22 | Argentina | Country | 2 | 0 |
| 223 | Southern | Region | 22 | 0 |
| 255 | La Pampa | State | 223 | 0 |
| 777 | Santa Rosa | City | 255 | 0 |
|-------------------------------------------------------------|
I have been able to successfully use a recursive CTE to get the tree structure and depth of each node. Where I am failing is using a pivot to create a nice list of all bottom locations and their corresponding parents at each level.
The expected results:
|------------------------------------------------------------------------------------|
| Continent | Country | Region | State | City | Bottom_Level_ID |
|------------------------------------------------------------------------------------|
| North America | United States | Midwest  | Kansas   | Topeka     | 225             |
| South America | Argentina | Southern | La Pampa | Santa Rosa | 777 |
|------------------------------------------------------------------------------------|
There are a few key points I should clarify.
Every single entry has a bottom level and a top level. There are no
cases where all five Types are not present for a given location.
If I filled out this data, I'd have 50 entries for North America at the
State level, so you can imagine how immense this table is at the
City level for every continent on the planet. Billions of rows.
The reason this is a necessity is because I need to be able to join onto a historical table of all addresses a person has lived at, and journey up the tree. I figure if I have the LocationID from that table, I can just LEFT JOIN onto a View of this query and nab the appropriate columns.
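For instance, that join might look something like this sketch (the history table, its columns, and the view name are hypothetical):

SELECT ah.PersonID, loc.Continent, loc.Country, loc.Region, loc.State, loc.City
FROM AddressHistory ah
LEFT JOIN vw_LocationHierarchy loc
    ON loc.Bottom_Level_ID = ah.LocationID;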
This is an old database, 2005, and I don't have sysadmin or control of the schema.
My CTE Code
--CTE
;WITH Tree
AS (
SELECT ID, Name, ParentID, Type, 1 as Depth
FROM LocationTable
WHERE IsTopLevel = 1
UNION ALL
SELECT L.ID, L.Name, L.ParentID, L.Type, T.Depth+1
FROM Tree T
JOIN LocationTable L
ON L.ParentID = T.ID
)
Good, solid data in a mostly useful format. BUT then I got to thinking about it: isn't the table structure already in this format, so why would I bother doing a depth tree search if I wasn't going to join the entries together at the same time?
Anyway, here was the rest.
The Pivot Attempt
;WITH Tree
AS (
SELECT ID, Name, ParentID, Type
FROM LocationTable
WHERE IsTopLevel = 1
UNION ALL
SELECT L.ID, L.Name, L.ParentID, L.Type
FROM Tree T
JOIN LocationTable L
ON L.ParentID = T.ID
)
select *
from Tree
pivot (
max(Name)
for Type in ([Continent],[Country],[Region],[State],[City])
) pvt
And now I have everything by Type in a column, with nulls for everything else. As I have struggled with before, I need to filter/join the CTE data before I attempt my pivot, but I have no idea where to start with that piece. Everything I have tried is soooooooooo sloooooooow.
Every time I think I understand CTEs and PIVOT, something new leaves me extremely humbled. Please help me.
If your structure is as clean as you describe it (no gaps, always 5 levels), you might go the easy way:
This data really calls for a classical 1:n table tree, where your Countries, States, etc. live in their own tables and link to their parent record.
Make sure there's an index on ParentID and ID!
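For instance (index name assumed, against the real LocationTable from the question; ID is presumably already the primary key):

CREATE INDEX IX_LocationTable_ParentID ON LocationTable (ParentID);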
DECLARE @tbl TABLE(ID INT,Name VARCHAR(100),Type VARCHAR(100),ParentID INT,IsTopLevel BIT);
INSERT INTO @tbl VALUES
(1,'North America','Continent',NULL,1)
,(12,'United States','Country',1,0)
,(113,'Midwest','Region',12,0)
,(155,'Kansas','State',113,0)
,(225,'Topeka','City',155,0)
,(2,'South America','Continent',NULL,1)
,(22,'Argentina','Country',2,0)
,(223,'Southern','Region',22,0)
,(255,'La Pampa','State',223,0)
,(777,'Santa Rosa','City',255,0);
SELECT Level1.Name AS Continent
,Level2.Name AS Country
,Level3.Name AS Region
,Level4.Name AS State
,Level5.Name AS City
,Level5.ID AS Bottom_Level_ID
FROM @tbl AS Level1
INNER JOIN @tbl AS Level2 ON Level1.ID=Level2.ParentID
INNER JOIN @tbl AS Level3 ON Level2.ID=Level3.ParentID
INNER JOIN @tbl AS Level4 ON Level3.ID=Level4.ParentID
INNER JOIN @tbl AS Level5 ON Level4.ID=Level5.ParentID
WHERE Level1.ParentID IS NULL
The result
Continent Country Region State City Bottom_Level_ID
North America United States Midwest Kansas Topeka 225
South America Argentina Southern La Pampa Santa Rosa 777
Another solution with a CTE could be:
;WITH Tree
AS (
SELECT cast(NULL as varchar(100)) as C1, cast(NULL as varchar(100)) as C2, cast(NULL as varchar(100)) as C3, cast(NULL as varchar(100)) as C4, Name as C5, ID as B_Level
FROM LocationTable
WHERE IsTopLevel = 1
UNION ALL
SELECT T.C2, T.C3, T.C4, T.C5, L.Name, L.ID
FROM Tree T
JOIN LocationTable L
ON L.ParentID = T.B_Level
)
select *
from Tree
where C1 is not null
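Each recursive step shifts the names accumulated so far one column to the left and appends the current level's Name as C5, so after five levels C1 through C5 hold Continent through City. The filter on C1 IS NOT NULL keeps only the fully populated bottom-level rows.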
Is there an unpivot equivalent function in PostgreSQL?
Create an example table:
CREATE TEMP TABLE foo (id int, a text, b text, c text);
INSERT INTO foo VALUES (1, 'ant', 'cat', 'chimp'), (2, 'grape', 'mint', 'basil');
You can 'unpivot' or 'uncrosstab' using UNION ALL:
SELECT id,
'a' AS colname,
a AS thing
FROM foo
UNION ALL
SELECT id,
'b' AS colname,
b AS thing
FROM foo
UNION ALL
SELECT id,
'c' AS colname,
c AS thing
FROM foo
ORDER BY id;
This runs 3 different subqueries on foo, one for each column we want to unpivot, and returns, in one table, every record from each of the subqueries.
But that will scan the table N times, where N is the number of columns you want to unpivot. This is inefficient, and a big problem when, for example, you're working with a very large table that takes a long time to scan.
Instead, use:
SELECT id,
unnest(array['a', 'b', 'c']) AS colname,
unnest(array[a, b, c]) AS thing
FROM foo
ORDER BY id;
This is easier to write, and it will only scan the table once.
array[a, b, c] returns an array object, with the values of a, b, and c as its elements.
unnest(array[a, b, c]) breaks the results into one row for each of the array's elements.
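For the foo table above, the query should produce:

 id | colname | thing
----+---------+-------
  1 | a       | ant
  1 | b       | cat
  1 | c       | chimp
  2 | a       | grape
  2 | b       | mint
  2 | c       | basil

One caveat: keep the parallel arrays the same length. Before PostgreSQL 10, multiple set-returning functions in the select list expand to the least common multiple of their row counts; from PostgreSQL 10 on, the shorter ones are padded with NULLs.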
You could use VALUES() and JOIN LATERAL to unpivot the columns.
Sample data:
CREATE TABLE test(id int, a INT, b INT, c INT);
INSERT INTO test(id,a,b,c) VALUES (1,11,12,13),(2,21,22,23),(3,31,32,33);
Query:
SELECT t.id, s.col_name, s.col_value
FROM test t
JOIN LATERAL(VALUES('a',t.a),('b',t.b),('c',t.c)) s(col_name, col_value) ON TRUE;
Using this approach it is possible to unpivot multiple groups of columns at once.
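For example, here is a sketch with a hypothetical table holding two parallel sets of columns (a reading and its unit per quantity):

CREATE TABLE readings(id int, temp int, temp_unit text, pres int, pres_unit text);

SELECT r.id, s.quantity, s.value, s.unit
FROM readings r
JOIN LATERAL (VALUES ('temp', r.temp, r.temp_unit),
                     ('pres', r.pres, r.pres_unit)) s(quantity, value, unit) ON TRUE;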
EDIT
Using Zack's suggestion:
SELECT t.id, col_name, col_value
FROM test t
CROSS JOIN LATERAL (VALUES('a', t.a),('b', t.b),('c',t.c)) s(col_name, col_value);
which is equivalent to:
SELECT t.id, col_name, col_value
FROM test t
,LATERAL (VALUES('a', t.a),('b', t.b),('c',t.c)) s(col_name, col_value);
Great article by Thomas Kellerer: "Unpivot with Postgres"
Sometimes it’s necessary to normalize de-normalized tables - the opposite of a “crosstab” or “pivot” operation. Postgres does not support an UNPIVOT operator like Oracle or SQL Server, but simulating it is very simple.
Take the following table that stores aggregated values per quarter:
create table customer_turnover
(
customer_id integer,
q1 integer,
q2 integer,
q3 integer,
q4 integer
);
And the following sample data:
customer_id | q1 | q2 | q3 | q4
------------+-----+-----+-----+----
1 | 100 | 210 | 203 | 304
2 | 150 | 118 | 422 | 257
3 | 220 | 311 | 271 | 269
But we want the quarters to be rows (as they should be in a normalized data model).
In Oracle or SQL Server this could be achieved with the UNPIVOT operator, but that is not available in Postgres. However, Postgres’ ability to use the VALUES clause like a table makes this actually quite easy:
select c.customer_id, t.*
from customer_turnover c
cross join lateral (
values
(c.q1, 'Q1'),
(c.q2, 'Q2'),
(c.q3, 'Q3'),
(c.q4, 'Q4')
) as t(turnover, quarter)
order by customer_id, quarter;
will return the following result:
customer_id | turnover | quarter
------------+----------+--------
1 | 100 | Q1
1 | 210 | Q2
1 | 203 | Q3
1 | 304 | Q4
2 | 150 | Q1
2 | 118 | Q2
2 | 422 | Q3
2 | 257 | Q4
3 | 220 | Q1
3 | 311 | Q2
3 | 271 | Q3
3 | 269 | Q4
The equivalent query with the standard UNPIVOT operator would be:
select customer_id, turnover, quarter
from customer_turnover c
UNPIVOT (turnover for quarter in (q1 as 'Q1',
q2 as 'Q2',
q3 as 'Q3',
q4 as 'Q4'))
order by customer_id, quarter;
FYI for those of us looking for how to unpivot in Redshift:
The long-form solution given by Stew appears to be the only way to accomplish this.
For those who cannot see it there, here is the text pasted below:
We do not have built-in functions that will do pivot or unpivot. However,
you can always write SQL to do that.
create table sales (regionid integer, q1 integer, q2 integer, q3 integer, q4 integer);
insert into sales values (1,10,12,14,16), (2,20,22,24,26);
select * from sales order by regionid;
regionid | q1 | q2 | q3 | q4
----------+----+----+----+----
1 | 10 | 12 | 14 | 16
2 | 20 | 22 | 24 | 26
(2 rows)
unpivot query (columns to rows)
create table sales_pivoted (regionid, quarter, sales)
as
select regionid, 'Q1', q1 from sales
UNION ALL
select regionid, 'Q2', q2 from sales
UNION ALL
select regionid, 'Q3', q3 from sales
UNION ALL
select regionid, 'Q4', q4 from sales
;
select * from sales_pivoted order by regionid, quarter;
regionid | quarter | sales
----------+---------+-------
1 | Q1 | 10
1 | Q2 | 12
1 | Q3 | 14
1 | Q4 | 16
2 | Q1 | 20
2 | Q2 | 22
2 | Q3 | 24
2 | Q4 | 26
(8 rows)
pivot query (rows back to columns)
select regionid, sum(Q1) as Q1, sum(Q2) as Q2, sum(Q3) as Q3, sum(Q4) as Q4
from
(select regionid,
case quarter when 'Q1' then sales else 0 end as Q1,
case quarter when 'Q2' then sales else 0 end as Q2,
case quarter when 'Q3' then sales else 0 end as Q3,
case quarter when 'Q4' then sales else 0 end as Q4
from sales_pivoted)
group by regionid
order by regionid;
regionid | q1 | q2 | q3 | q4
----------+----+----+----+----
1 | 10 | 12 | 14 | 16
2 | 20 | 22 | 24 | 26
(2 rows)
Hope this helps, Neil
Pulling slightly modified content from the link in the comment from @a_horse_with_no_name into an answer, because it works:
Installing Hstore
If you don't have hstore installed and are running PostgreSQL 9.1+, you can use the handy
CREATE EXTENSION hstore;
For lower versions, look for the hstore.sql file in share/contrib and run in your database.
Assuming that your source (e.g., wide data) table has one 'id' column, named id_field, and any number of 'value' columns, all of the same type, the following will create an unpivoted view of that table.
CREATE VIEW vw_unpivot AS
SELECT id_field, (h).key AS column_name, (h).value AS column_value
FROM (
SELECT id_field, each(hstore(foo) - 'id_field'::text) AS h
FROM zcta5 as foo
) AS unpiv ;
This works with any number of 'value' columns. All of the resulting values will be text, unless you cast, e.g., (h).value::numeric.
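Usage might then look like this, with the numeric cast mentioned above (assuming the view over zcta5 as defined):

SELECT id_field, column_name, column_value::numeric
FROM vw_unpivot
ORDER BY id_field, column_name;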
Just use JSON:
with data (id, name) as (
values (1, 'a'), (2, 'b')
)
select t.*
from data, lateral jsonb_each_text(to_jsonb(data)) with ordinality as t
order by data.id, t.ordinality;
This yields
|key |value|ordinality|
|----|-----|----------|
|id |1 |1 |
|name|a |2 |
|id |2 |1 |
|name|b |2 |
I wrote a horrible unpivot function for PostgreSQL. It's rather slow but it at least returns results like you'd expect an unpivot operation to.
https://cgsrv1.arrc.csiro.au/blog/2010/05/14/unpivotuncrosstab-in-postgresql/
Hopefully you can find it useful.
Depending on what you want to do... something like this can be helpful.
with wide_table as (
select 1 a, 2 b, 3 c
union all
select 4 a, 5 b, 6 c
)
select unnest(array[a,b,c]) from wide_table
You can use FROM UNNEST() array handling to unpivot a dataset, in tandem with a correlated subquery (works with PG 9.4).
FROM UNNEST() is more powerful and flexible than the typical method of using FROM (VALUES .... ) to unpivot datasets. This is because FROM UNNEST() is variadic (n-ary). By using a correlated subquery, the need for the LATERAL WITH ORDINALITY clause is eliminated, and Postgres keeps the resulting parallel columnar sets in the proper ordinal sequence.
This is, BTW, FAST -- in practical use, spawning 8 million rows in < 15 seconds on a 24-core system.
WITH _students AS ( /** CTE **/
SELECT * FROM
( SELECT 'jane'::TEXT ,'doe'::TEXT , 1::INT
UNION
SELECT 'john'::TEXT ,'doe'::TEXT , 2::INT
UNION
SELECT 'jerry'::TEXT ,'roe'::TEXT , 3::INT
UNION
SELECT 'jodi'::TEXT ,'roe'::TEXT , 4::INT
) s ( fn, ln, id )
) /** end WITH **/
SELECT s.id
, ax.fanm -- field labels, now expanded to two rows
, ax.anm -- field data, now expanded to two rows
, ax.someval -- manually incl. data
, ax.rankednum -- manually assigned ranks
,ax.genser -- auto-generate ranks
FROM _students s
,UNNEST /** MULTI-UNNEST() BLOCK **/
(
( SELECT ARRAY[ fn, ln ]::text[] AS anm -- expanded into two rows by outer UNNEST()
/** CORRELATED SUBQUERY **/
FROM _students s2 WHERE s2.id = s.id -- outer relation
)
,( /** ordinal relationship preserved in variadic UNNEST() **/
SELECT ARRAY[ 'first name', 'last name' ]::text[] -- exp. into 2 rows
AS fanm
)
,( SELECT ARRAY[ 'z','y','x'] -- only 3 rows gen'd, but ordinal rela. kept
AS someval
)
,( SELECT ARRAY[ 1,2,3,4,5 ] -- 5 rows gen'd, ordinal rela. kept.
AS rankednum
)
,( SELECT ARRAY( /** you may go wild ... **/
SELECT generate_series(1, 15, 3 )
AS genser
)
)
) ax ( anm, fanm, someval, rankednum , genser )
;
RESULT SET:
+--------+----------------+-----------+----------+---------+-------
| id | fanm | anm | someval |rankednum| [ etc. ]
+--------+----------------+-----------+----------+---------+-------
| 2 | first name | john | z | 1 | .
| 2 | last name | doe | y | 2 | .
| 2 | [null] | [null] | x | 3 | .
| 2 | [null] | [null] | [null] | 4 | .
| 2 | [null] | [null] | [null] | 5 | .
| 1 | first name | jane | z | 1 | .
| 1 | last name | doe | y | 2 | .
| 1 | | | x | 3 | .
| 1 | | | | 4 | .
| 1 | | | | 5 | .
| 4 | first name | jodi | z | 1 | .
| 4 | last name | roe | y | 2 | .
| 4 | | | x | 3 | .
| 4 | | | | 4 | .
| 4 | | | | 5 | .
| 3 | first name | jerry | z | 1 | .
| 3 | last name | roe | y | 2 | .
| 3 | | | x | 3 | .
| 3 | | | | 4 | .
| 3 | | | | 5 | .
+--------+----------------+-----------+----------+---------+ ----
Here's a way that combines the hstore and CROSS JOIN approaches from other answers.
It's a modified version of my answer to a similar question, which is itself based on the method at https://blog.sql-workbench.eu/post/dynamic-unpivot/ and another answer to that question.
-- Example wide data with a column for each year...
WITH example_wide_data("id", "2001", "2002", "2003", "2004") AS (
VALUES
(1, 4, 5, 6, 7),
(2, 8, 9, 10, 11)
)
-- that is tidied to have "year" and "value" columns
SELECT
id,
r.key AS year,
r.value AS value
FROM
example_wide_data w
CROSS JOIN
each(hstore(w.*)) AS r(key, value)
WHERE
-- This chooses columns that look like years
-- In other cases you might need a different condition
r.key ~ '^[0-9]{4}$';
It has a few benefits over other solutions:
By using hstore and not jsonb, it hopefully minimises issues with type conversions (although hstore does convert everything to text)
The columns don't need to be hard coded or known in advance. Here, columns are chosen by a regex on the name, but you could use any SQL logic based on the name, or even the value.
It doesn't require PL/pgSQL - it's all SQL
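With the example data above, the result should be as follows (values come back as text, in hstore key order; row order may vary without an ORDER BY):

 id | year | value
----+------+-------
  1 | 2001 | 4
  1 | 2002 | 5
  1 | 2003 | 6
  1 | 2004 | 7
  2 | 2001 | 8
  2 | 2002 | 9
  2 | 2003 | 10
  2 | 2004 | 11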