Struggling with what I thought would be a straightforward operation...
EDIT: SQLFiddle available here: http://sqlfiddle.com/#!15/11711/1/0
Using PostgreSQL 9.4, pretend I have a query that returns this two-column set:
CATEGORY | TOTAL
all | 14
soccer | 5
baseball | 6
hockey | 3
However I'd prefer to pivot it into a single-row set:
ALL | SOCCER | BASEBALL | HOCKEY
14 | 5 | 6 | 3
In other words, I want all my "CATEGORY" values to become columns, with the corresponding "TOTAL" value to be placed in the first row under the appropriate column.
I've been trying to use CROSSTAB()... but as of now I'm getting the following error:
ERROR: a column definition list is required for functions returning "record"
For reference, here's what I'm trying to put as my SQL command:
SELECT * FROM crosstab(
$$
WITH "countTotal" AS (
    SELECT text 'all' AS "sportType", COUNT(*) AS "total"
    FROM log
    WHERE type = 'SPORT_EVENT_CREATED'
    GROUP BY "sportType"
),
"countBySportType" AS (
    SELECT sport_type AS "sportType", COUNT(*) AS "total"
    FROM log
    WHERE type = 'SPORT_EVENT_CREATED'
    GROUP BY "sportType"
)
SELECT * FROM "countTotal"
UNION
SELECT * FROM "countBySportType"
$$
)
I think you have to specify the names and types of the output columns. From the Postgres manual on tablefunc:
The crosstab function is declared to return setof record, so the
actual names and types of the output columns must be defined in the
FROM clause of the calling SELECT statement, for example:
SELECT * FROM crosstab('...') AS ct(row_name text, category_1 text, category_2 text);
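Applied to the query in the question, a corrected call could look roughly like this (a sketch only, assuming the same log table; besides the column definition list, note that the one-parameter crosstab(text) form expects the source query to return three columns: row_name, category and value):
SELECT * FROM crosstab(
$$
SELECT 1 AS row_name, "sportType", "total"
FROM (
    SELECT text 'all' AS "sportType", COUNT(*) AS "total"
    FROM log
    WHERE type = 'SPORT_EVENT_CREATED'
    UNION ALL
    SELECT sport_type, COUNT(*)
    FROM log
    WHERE type = 'SPORT_EVENT_CREATED'
    GROUP BY sport_type
) sub
ORDER BY 1, 2   -- categories are filled in order of appearance, here alphabetical
$$
) AS ct(row_name int, "all" bigint, baseball bigint, hockey bigint, soccer bigint);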
To use it with a dynamic number of columns, you have to use crosstabN(text). This post, PostgreSQL Crosstab Query, has a whole lot of details about the crosstab query.
One more post: Dynamic alternative to pivot with CASE and GROUP BY
I have to display a table like this:
Year | Month | Delivered | Not delivered | Not Received
2021 | Jan   | 10        | 86            | 75
2021 | Feb   | 13        | 36            | 96
2021 | March | 49        | 7             | 61
2021 | Apr   | 3         | 21            | 72
Using raw data generated by this query:
SELECT
year,
TO_CHAR( creation_date, 'Month') AS month,
marking,
COUNT(*) AS count
FROM invoices
GROUP BY 1,2,3
I have tried using crosstab(), but I got an error:
SELECT * FROM crosstab('
SELECT
year,
TO_CHAR( creation_date, ''Month'') AS month,
marking,
COUNT(*) AS count
FROM invoices
GROUP BY 1,2,3
') AS ct(year text, month text, marking text)
ERROR: invalid source data SQL statement
DETAIL: The provided SQL must return 3 columns: rowid, category, and values.
I would prefer not to type all the marking values manually because there are a lot of them.
1. Static solution with a limited list of marking values:
SELECT year
, TO_CHAR( creation_date, 'Month') AS month
, COUNT(*) FILTER (WHERE marking = 'Delivered') AS Delivered
, COUNT(*) FILTER (WHERE marking = 'Not delivered') AS "Not delivered"
, COUNT(*) FILTER (WHERE marking = 'Not Received') AS "Not Received"
FROM invoices
GROUP BY 1,2
2. Full dynamic solution with a large list of marking values:
This proposal is an alternative to the crosstab solution as proposed in A and B.
The solution here just requires a dedicated composite type, which can be created dynamically, and then relies on the jsonb type and standard functions.
Starting from your query, which counts the number of rows per year, month and marking value:
Using the jsonb_object_agg function, the resulting rows are first aggregated by year and month into jsonb objects whose jsonb keys correspond to the marking values and whose jsonb values correspond to the counts.
The resulting jsonb objects are then converted into records using the jsonb_populate_record function and the dedicated composite type.
First we dynamically create a composite type which corresponds to the ordered list of marking values:
CREATE OR REPLACE PROCEDURE create_composite_type() LANGUAGE plpgsql AS $$
DECLARE
column_list text ;
BEGIN
SELECT string_agg(DISTINCT quote_ident(marking) || ' bigint', ',' ORDER BY quote_ident(marking) || ' bigint' ASC)
INTO column_list
FROM invoices ;
EXECUTE 'DROP TYPE IF EXISTS composite_type' ;
EXECUTE 'CREATE TYPE composite_type AS (' || column_list || ')' ;
END ;
$$ ;
CALL create_composite_type() ;
Then the expected result is provided by the following query:
SELECT a.year
, TO_CHAR(a.year_month, 'Month') AS month
, (jsonb_populate_record( null :: composite_type
, jsonb_object_agg(a.marking, a.count)
)
).*
FROM
( SELECT year
, date_trunc('month', creation_date) AS year_month
, marking
, count(*) AS count
FROM invoices AS v
GROUP BY 1,2,3
) AS a
GROUP BY 1,2
ORDER BY month
Obviously, if the list of marking values may vary over time, then you have to call the create_composite_type() procedure again just before executing the query. If you don't update composite_type, the query will still work (no error!), but some old marking values may be obsolete (not used anymore) and still appear as columns, while some new marking values may be missing from the query result (not displayed as columns).
See the full demo in dbfiddle.
You need to generate the crosstab() call dynamically.
But since SQL does not allow dynamic return types, you need a two-step workflow:
Generate query
Execute query
If you are unfamiliar with crosstab(), read this first:
PostgreSQL Crosstab Query
It's odd to generate the month from creation_date, but not the year. To simplify, I use a combined column year_month instead.
Query to generate the crosstab() query:
SELECT format(
$f$SELECT * FROM crosstab(
$q$
SELECT to_char(date_trunc('month', creation_date), 'YYYY_Month') AS year_month
, marking
, COUNT(*) AS ct
FROM invoices
GROUP BY date_trunc('month', creation_date), marking
ORDER BY date_trunc('month', creation_date) -- optional
$q$
, $c$VALUES (%s)$c$
) AS ct(year_month text, %s);
$f$, string_agg(quote_literal(sub.marking), '), (')
, string_agg(quote_ident (sub.marking), ' int, ') || ' int'
)
FROM (SELECT DISTINCT marking FROM invoices ORDER BY 1) sub;
If the table invoices is big with only a few distinct values for marking (which seems likely), there are faster ways to get the distinct values. See:
Optimize GROUP BY query to retrieve latest row per user
Generates a query of the form:
SELECT * FROM crosstab(
$q$
SELECT to_char(date_trunc('month', creation_date), 'YYYY_Month') AS year_month
, marking
, COUNT(*) AS ct
FROM invoices
GROUP BY date_trunc('month', creation_date), marking
ORDER BY date_trunc('month', creation_date) -- optional
$q$
, $c$VALUES ('Delivered'), ('Not Delivered'), ('Not Received')$c$
) AS ct(year_month text, "Delivered" int, "Not Delivered" int, "Not Received" int);
The simplified query does not need "extra" columns. See:
Pivot on Multiple Columns using Tablefunc
Note the use of date_trunc('month', creation_date) in GROUP BY and ORDER BY. This produces a valid sort order, and is faster, too. See:
Cumulative sum of values by month, filling in for missing months
How to get rows by max(date) group by Year-Month in Postgres?
Also note the use of dollar-quotes to avoid quoting hell. See:
Insert text with single quotes in PostgreSQL
Months without entries don't show up in the result, and missing markings for an existing month show as NULL. You can adapt either if need be. See:
Join a count query on generate_series() and retrieve Null values as '0'
Then execute the generated query.
db<>fiddle here (reusing Edouard's fiddle, kudos!)
See:
Execute a dynamic crosstab query
In psql
In psql you can use \gexec to immediately execute the generated query. See:
Simulate CREATE DATABASE IF NOT EXISTS for PostgreSQL?
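For illustration, a minimal sketch of the \gexec mechanism (the generated statement here is a trivial, hypothetical example): psql sends each cell of the query result back to the server as a new statement, so the generator query from step 1 can simply be terminated with \gexec instead of a semicolon.
-- in psql: build a statement as text, then execute it with \gexec
SELECT 'SELECT count(*) FROM invoices' AS generated_query \gexec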
In Postgres 9.6 or later, you can also use the meta-command \crosstabview instead of crosstab():
test=> SELECT to_char(date_trunc('month', creation_date), 'YYYY_Month') AS year_month
test-> , marking
test-> , COUNT(*) AS count
test-> FROM invoices
test-> GROUP BY date_trunc('month', creation_date), 2
test-> ORDER BY date_trunc('month', creation_date)\crosstabview
year_month | Not Received | Delivered | Not Delivered
----------------+--------------+-----------+---------------
2020_January | 1 | 1 | 1
2020_March | | 2 | 2
2021_January | 1 | 1 | 2
2021_February | 1 | |
2021_March | | 1 |
2021_August | 2 | 1 | 1
2022_August | | 2 |
2022_November | 1 | 2 | 3
2022_December | 2 | |
(9 rows)
Note that \crosstabview - unlike crosstab() - does not support "extra" columns. If you insist on separate year and month columns, you need crosstab().
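A side note (not part of the answer above): another way to end up with separate year and month columns is to keep the combined year_month inside crosstab() and split it in an outer query, for example with split_part(). A sketch, reusing the generated query from above:
SELECT split_part(year_month, '_', 1) AS year
     , split_part(year_month, '_', 2) AS month
     , "Delivered", "Not Delivered", "Not Received"
FROM   crosstab(
   $q$
   SELECT to_char(date_trunc('month', creation_date), 'YYYY_Month') AS year_month
        , marking
        , COUNT(*) AS ct
   FROM   invoices
   GROUP  BY date_trunc('month', creation_date), marking
   ORDER  BY date_trunc('month', creation_date)
   $q$
 , $c$VALUES ('Delivered'), ('Not Delivered'), ('Not Received')$c$
   ) AS ct(year_month text, "Delivered" int, "Not Delivered" int, "Not Received" int);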
See:
How do I generate a pivoted CROSS JOIN where the resulting table definition is unknown?
I have a many-to-many relation representing containers holding items.
I have a primary key row_id in the table.
I insert four rows with (container_id, item_id) values (1778712425160346751, 4). These rows will be identical except for the aforementioned unique row_id.
I subsequently execute the following query:
delete from contains
where item_id = 4 and
container_id = '1778712425160346751' and
row_id =
(
select max(row_id) from contains
where container_id = '1778712425160346751' and
item_id = 4
)
returning
(
select count(*) from contains
where container_id = '1778712425160346751' and
item_id = 4
);
Now I expected to get 3 returned from this query, but I got a 4. Getting a 4 is the desired behavior, but it is not what was expected.
My question is: can I always expect that the returning clause executes before the delete, or is this an idiosyncrasy of certain versions or specific software?
Using a query in the RETURNING list is allowed but not documented. From the documentation:
output_expression
An expression to be computed and returned by the DELETE command after each row is deleted. The expression can use any column names of the table named by table_name or table(s) listed in USING. Write * to return all columns.
It seems logical that the subquery sees the table in its state before the delete, as the statement has not completed yet.
create temp table test as
select id from generate_series(1, 4) id;
delete from test
returning id, (select count(*) from test);
id | count
----+-------
1 | 4
2 | 4
3 | 4
4 | 4
(4 rows)
The same applies to UPDATE:
create temp table test as
select id from generate_series(1, 4) id;
update test
set id = id+ 1
returning id, (select sum(id) from test);
id | sum
----+-----
2 | 10
3 | 10
4 | 10
5 | 10
(4 rows)
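If the count after the delete is what is actually wanted, one possible workaround (a sketch, not part of the answer above, reusing the table and columns from the question) is to subtract the deleted rows from the snapshot count with a data-modifying CTE:
WITH del AS (
   DELETE FROM contains
   WHERE  item_id = 4
   AND    container_id = '1778712425160346751'
   AND    row_id = (SELECT max(row_id) FROM contains
                    WHERE  container_id = '1778712425160346751'
                    AND    item_id = 4)
   RETURNING row_id
)
SELECT (SELECT count(*) FROM contains
        WHERE  container_id = '1778712425160346751'
        AND    item_id = 4)
     - (SELECT count(*) FROM del) AS remaining_rows;
-- both counts are taken from the statement's snapshot, so the difference
-- reflects the state after the delete (3 in the example above)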
The string_to_array function splits strings without keeping quoted substrings together:
# select unnest(string_to_array('one, "two,three"', ','));
unnest
--------
one
"two
three"
(3 rows)
I would like to have a smarter function, like this:
# select unnest(smarter_string_to_array('one, "two,three"', ','));
unnest
--------
one
two,three
(2 rows)
Purpose: I know that the COPY command does this properly, but I need this feature internally.
I want to parse a text representation of rows of existing table. Example:
# select * from dataset limit 2;
id | name | state
----+-----------------+--------
1 | Smith, Reginald | Canada
2 | Jones, Susan |
(2 rows)
# select dataset::text from dataset limit 2;
dataset
------------------------------
(1,"Smith, Reginald",Canada)
(2,"Jones, Susan","")
(2 rows)
I want to do it dynamically in a plpgsql function that works for different tables. I cannot assume a constant number of columns in a table, nor any particular format of the column values.
There is a nice method to transpose a whole table into a one-column table:
select (json_each_text(row_to_json(t))).value from dataset t;
If the column id is unique then
select id, array_agg(value) arr from (
select row_number() over() rn, id, value from (
select id, (json_each_text(row_to_json(t))).value from dataset t
) alias
order by id, rn
) alias
group by id;
gives you exactly what you want. The additional query level with row_number() is necessary to keep the original order of the columns.
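Since the question asks for a dynamic plpgsql version for arbitrary tables, here is a hedged sketch of how the same json trick could be wrapped (the function name and signature are made up for illustration):
CREATE OR REPLACE FUNCTION table_values(_tbl regclass)
  RETURNS SETOF text
  LANGUAGE plpgsql AS
$$
BEGIN
   -- one output row per column value, in column order, for every table row
   RETURN QUERY EXECUTE format(
      'SELECT (json_each_text(row_to_json(t))).value FROM %s t', _tbl);
END
$$;

-- usage: SELECT * FROM table_values('dataset');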
As a test I created this schema:
CREATE TABLE simple_table (client_id int4, order_id int4);
INSERT INTO simple_table (client_id, order_id)
VALUES
(1,2),(1,3),(1,4),(1,6),(1,8),(1,12),(1,16),(1,18),(1,25),(1,32),(1,33),(1,37),(1,43),
(1,56),(1,57),(1,66),(2,2),(2,3),(2,5),(2,7),(2,9),(2,12),(2,17),(2,19),(2,22),(2,30),
(2,33),(2,38),(2,44),(2,56),(2,58),(2,66)
;
Then used array_agg:
SELECT client_id, array_agg(order_id) FROM simple_table GROUP BY client_id;
to create the arrays for client 1 and client 2:
| CLIENT_ID | ARRAY_AGG |
----------------------------------------------------------
| 1 | 2,3,4,6,8,12,16,18,25,32,33,37,43,56,57,66 |
| 2 | 2,3,5,7,9,12,17,19,22,30,33,38,44,56,58,66 |
Now I would like to compare the two rows and identify the values they have in common. I tried the && overlap operator (have elements in common), e.g. ARRAY[1,4,3] && ARRAY[2,1], from the PostgreSQL documentation, but I am having problems.
Perhaps I am looking at this wrong. Any help or guidance would be appreciated!
The && operator is a predicate that yields a true or false result, not a list of values.
If you're looking for the list of order_id that exist for both client_id=1 and client_id=2, the query would be:
select order_id from simple_table where client_id in (1,2)
group by order_id having count(*)=2;
That's equivalent to the intersection of the two arrays if you consider these arrays as sets (no duplicates, and the positions of the values are irrelevant), except that you don't need to use arrays at all; simple standard SQL is good enough.
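If you do want the common values back as an array after all, a small sketch that simply aggregates the result of the same query:
SELECT array_agg(order_id ORDER BY order_id) AS common_orders
FROM  (
   SELECT order_id
   FROM   simple_table
   WHERE  client_id IN (1, 2)
   GROUP  BY order_id
   HAVING count(*) = 2
) sub;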
Take a look at the "array_intersect" functions here:
Array Intersect
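For reference, such a helper typically looks roughly like this (a sketch; not necessarily the exact function behind that link):
CREATE OR REPLACE FUNCTION array_intersect(anyarray, anyarray)
  RETURNS anyarray
  LANGUAGE sql IMMUTABLE AS
$$
   -- elements present in both input arrays (duplicates collapsed, order not preserved)
   SELECT ARRAY(SELECT unnest($1) INTERSECT SELECT unnest($2))
$$;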
To see elements that are not common to both arrays:
create or replace function arrxor(anyarray,anyarray) returns anyarray as $$
select ARRAY(
(
select r.elements
from (
(select 1,unnest($1))
union all
(select 2,unnest($2))
) as r (arr, elements)
group by 1
having min(arr) = max(arr)
)
)
$$ language sql strict immutable;
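For example (the order of elements in the result is not guaranteed):
SELECT arrxor(ARRAY[1,4,3], ARRAY[2,1]);  -- returns 2, 3 and 4 in some order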
I have a table called _sample_table_delme_data_files which contains some duplicates. I want to copy its records, without duplicates, into data_files:
INSERT INTO data_files (SELECT distinct * FROM _sample_table_delme_data_files);
ERROR: could not identify an ordering operator for type box3d
HINT: Use an explicit ordering operator or modify the query.
The problem is that PostgreSQL cannot compare (or order) box3d types. How do I supply such an ordering operator so I can get only the distinct rows into my destination table?
Thanks in advance,
Adam
If you don't add the operator, you could try translating the box3d data to text using its output function, something like:
INSERT INTO data_files (SELECT distinct othercols,box3dout(box3dcol) FROM _sample_table_delme_data_files);
Edit: The next step is to cast it back to box3d:
INSERT INTO data_files SELECT othercols, box3din(b) FROM (SELECT distinct othercols, box3dout(box3dcol) AS b FROM _sample_table_delme_data_files) AS sub;
(I don't have box3d on my system so it's untested.)
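A related option (not from the answer above, and equally untested): DISTINCT ON compares only the listed expressions, so the box3d column can be compared via its text form while the original value is kept in the output. Column names are placeholders, as above:
INSERT INTO data_files
SELECT DISTINCT ON (othercols, box3dcol::text) *
FROM   _sample_table_delme_data_files;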
The datatype box3d doesn't have an operator for the DISTINCT operation. You have to create the operator, or ask the PostGIS project; maybe somebody has already fixed this problem.
Finally, this was solved by a colleague.
Let's see how many rows there are (duplicates included):
SELECT COUNT(*) FROM _sample_table_delme_data_files ;
count
-------
12728
(1 row)
Now, we shall add another column to the source table to help us differentiate similar rows:
ALTER TABLE _sample_table_delme_data_files ADD COLUMN id2 serial;
We can now see the dups:
SELECT id, id2 FROM _sample_table_delme_data_files ORDER BY id LIMIT 10;
id | id2
--------+------
198748 | 6449
198748 | 85
198801 | 166
198801 | 6530
198829 | 87
198829 | 6451
198926 | 88
198926 | 6452
199062 | 6532
199062 | 168
(10 rows)
And remove them:
DELETE FROM _sample_table_delme_data_files
WHERE id2 IN (SELECT max(id2) FROM _sample_table_delme_data_files
GROUP BY id
HAVING COUNT(*)>1);
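As an aside (not part of the original walkthrough): the max(id2) trick removes exactly one copy per id, which is sufficient here because every duplicated id appears exactly twice (the check below comes back empty). If an id could appear more than twice, a variant that keeps only the row with the lowest id2 per id would be:
DELETE FROM _sample_table_delme_data_files t
WHERE  id2 > (SELECT min(id2)
              FROM   _sample_table_delme_data_files s
              WHERE  s.id = t.id);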
Let's verify that it worked:
SELECT id FROM _sample_table_delme_data_files GROUP BY id HAVING COUNT(*)>1;
id
----
(0 rows)
Remove the auxiliary column:
ALTER TABLE _sample_table_delme_data_files DROP COLUMN id2;
ALTER TABLE
Insert the remaining rows into the destination table:
INSERT INTO data_files (SELECT * FROM _sample_table_delme_data_files);
INSERT 0 6364