Select that splits rows with a range into several rows with smaller ranges? (PostgreSQL)

I have two tables that contain categorized tsrange values. The ranges in each table are non-overlapping per category, but the ranges in b might overlap those in a.
create table a ( id serial primary key, category int, period tsrange );
create table b ( id serial primary key, category int, period tsrange );
What I would like to do is combine these two tables into a CTE for another query. The combined values need to be the tsranges from table a minus any overlapping tsranges in table b with the same category.
The complication is that in the case where an overlapping b.period is contained inside an a.period, the result of the subtraction is two rows. The Postgres range - operator does not support this, so I created a function that will return 1 or 2 rows:
create function subtract_tsrange(a tsrange, b tsrange)
returns table (period tsrange)
language plpgsql as $$
begin
  if a @> b and not isempty(b) and lower(a) <> lower(b) and upper(b) <> upper(a)
  then
    period := tsrange(lower(a), lower(b), '[)');
    return next;
    period := tsrange(upper(b), upper(a), '[)');
    return next;
  else
    period := a - b;
    return next;
  end if;
  return;
end
$$;
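For example (timestamps made up for illustration), subtracting a contained range splits the outer one in two:

select * from subtract_tsrange(
  tsrange('2018-01-01', '2018-02-01'),
  tsrange('2018-01-10', '2018-01-15')
);
--                     period
-- -----------------------------------------------
--  ["2018-01-01 00:00:00","2018-01-10 00:00:00")
--  ["2018-01-15 00:00:00","2018-02-01 00:00:00")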
There can also be several b.periods overlapping one a.period, so one row from a might potentially be split into many rows with shorter periods.
Now I want to create a select that takes each row in a and returns:
The original a.period if there is no overlapping b.period with the same category
or
1 or several rows representing the original a.period minus all overlapping b.periods with the same category.
After reading lots of other posts I figure I should use LATERAL in combination with my function somehow, but I'm still scratching my head as to how. (We're talking Postgres 9.6, btw!)

Notes: your problem can easily be generalized to any range type, therefore I will use the anyrange pseudo-type in my answer, but you don't have to. In fact, because of this I had to create a generic constructor for range types, because PostgreSQL has not defined one (yet):
create or replace function to_range(t anyrange, l anyelement, u anyelement, s text default '[)', out to_range anyrange)
language plpgsql as $func$
begin
  execute format('select %I($1, $2, $3)', pg_typeof(t)) into to_range using l, u, s;
end
$func$;
Of course, you can use the appropriate range constructor instead of to_range() calls.
Also, I will use the numrange type for testing purposes, as it can be created and checked more easily than the tsrange type, but my answer should work with that as well.
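A quick sanity check of to_range() with numrange (values made up):

select to_range(null::numrange, 1.5, 2.5, '(]');
--  to_range
-- -----------
--  (1.5,2.5]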
Answer:
I rewrote your function to handle any type of bounds (inclusive, exclusive, and even unbounded ranges). Also, it will return an empty result set when a <@ b.
create or replace function range_div(a anyrange, b anyrange)
returns setof anyrange
language sql as $func$
select * from unnest(case
  when b is null or a <@ b then '{}'
  when a @> b then array[
    to_range(a, case when lower_inf(a) then null else lower(a) end,
                case when lower_inf(b) then null else lower(b) end,
                case when lower_inc(a) then '[' else '(' end ||
                case when lower_inc(b) then ')' else ']' end),
    to_range(a, case when upper_inf(b) then null else upper(b) end,
                case when upper_inf(a) then null else upper(a) end,
                case when upper_inc(b) then '(' else '[' end ||
                case when upper_inc(a) then ']' else ')' end)
  ]
  else array[a - b]
end)
$func$;
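Two quick checks with numrange (values made up): a contained range splits the input, while a partial overlap yields a single row:

select range_div(numrange(1,10), numrange(3,5));
--  range_div
-- -----------
--  [1,3)
--  [5,10)

select range_div(numrange(1,10), numrange(8,12));
--  range_div
-- -----------
--  [1,8)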
With this in mind, what you need is something like an inverse of aggregation. For example, with sum() one starts with an empty value (0) and keeps adding values to it; here you have your initial value and you need to keep removing parts of it.
One solution to that is to use recursive CTEs:
with recursive r as (
  select *
  from a
  union
  select r.id, r.category, d
  from r
  left join b using (category)
  cross join range_div(r.period, b.period) d -- this is in fact an implicit lateral join
  where r.period && b.period
)
select r.*
from r
left join b on r.category = b.category and r.period && b.period
where not isempty(r.period) and b.period is null
My sample data:
create table a (id serial primary key, category int, period numrange);
create table b (id serial primary key, category int, period numrange);
insert into a (category, period) values (1, '[1,4]'), (1, '[2,5]'), (1, '[3,6]'), (2, '(1,6)');
insert into b (category, period) values (1, '[2,3)'), (1, '[1,2]'), (2, '[3,3]');
The query above produces:

 id | category | period
----+----------+--------
  3 |        1 | [3,6]
  1 |        1 | [3,4]
  2 |        1 | [3,5]
  4 |        2 | (1,3)
  4 |        2 | (3,6)

Related

Concatenate string instead of just replacing it

I have a table with standard columns where I want to perform regular INSERTs.
But one of the columns is of type varchar with special semantics. It's a string that's supposed to behave as a set of strings, where the elements of the set are separated by commas.
E.g. if one row has the value fish,sheep,dove in that varchar column, and I insert the string ,fish,eagle, I want the result to be fish,sheep,dove,eagle (i.e. eagle gets added to the set, but fish doesn't, because it's already in the set).
I have here this Postgres code that does the "set concatenation" that I want:
SELECT string_agg(unnest, ',') AS x FROM (SELECT DISTINCT unnest(string_to_array('fish,sheep,dove' || ',fish,eagle', ','))) AS x;
But I can't figure out how to apply this logic to insertions.
What I want is something like:
CREATE TABLE IF NOT EXISTS t00(
userid int8 PRIMARY KEY,
a int8,
b varchar);
INSERT INTO t00 (userid,a,b) VALUES (0,1,'fish,sheep,dove');
INSERT INTO t00 (userid,a,b) VALUES (0,1,',fish,eagle')
ON CONFLICT (userid)
DO UPDATE SET
a = EXCLUDED.a,
b = SELECT string_agg(unnest, ',') AS x FROM (SELECT DISTINCT unnest(string_to_array(t00.b || EXCLUDED.b, ','))) AS x;
How can I achieve something like that?
Storing comma-separated values is a huge mistake to begin with. But if you really want to make your life harder than it needs to be, you might want to create a function that merges two comma-separated lists:
create function merge_lists(p_one text, p_two text)
  returns text
as
$$
  select string_agg(item, ',')
  from (
    select e.item
    from unnest(string_to_array(p_one, ',')) as e(item)
    where e.item <> '' --< necessary because of the leading , in your data
    union
    select t.item
    from unnest(string_to_array(p_two, ',')) t(item)
    where t.item <> ''
  ) t;
$$
language sql;
If you are using Postgres 14 or later, unnest(string_to_array(..., ',')) can be replaced with string_to_table(..., ',').
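For illustration, a 14+ rewrite might look like this (a sketch with the same semantics; merge_lists_v14 is just a hypothetical name):

create function merge_lists_v14(p_one text, p_two text)
  returns text
as
$$
  select string_agg(item, ',')
  from (
    select item from string_to_table(p_one, ',') as e(item) where item <> ''
    union
    select item from string_to_table(p_two, ',') as t(item) where item <> ''
  ) t;
$$
language sql;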
Then your INSERT statement gets a bit simpler:
INSERT INTO t00 (userid,a,b) VALUES (0,1,',fish,eagle')
ON CONFLICT (userid)
DO UPDATE SET
a = EXCLUDED.a,
b = merge_lists(excluded.b, t00.b);
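With the two INSERTs from the question, the row for userid 0 should then end up with b containing each of fish, sheep, dove and eagle exactly once. Note that the element order within b is not guaranteed, since the string_agg call has no ORDER BY.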
I think I was only missing parentheses around the SELECT statement:
INSERT INTO t00 (userid,a,b) VALUES (0,1,',fish,eagle')
ON CONFLICT (userid)
DO UPDATE SET
a = EXCLUDED.a,
b = (SELECT string_agg(unnest, ',') AS x FROM (SELECT DISTINCT unnest(string_to_array(t00.b || EXCLUDED.b, ','))) AS x);

CTE with bidirectional entries

I want to walk a directed graph. I need all involved nodes.
I want to support both cases (the result should always be 1,2,3):
INSERT INTO foo VALUES (1,2),(2,3);
INSERT INTO foo VALUES (1,2),(3,2);
WITH RECURSIVE traverse(id, path, cycle) AS (
SELECT a, ARRAY[a], false
FROM foo WHERE a = 1
UNION ALL
SELECT GREATEST(a,b), traverse.path || GREATEST(a,b), GREATEST(a,b) = ANY(traverse.path)
FROM traverse
INNER JOIN foo
ON LEAST(a,b) = traverse.id
WHERE NOT cycle
)
SELECT * FROM traverse
Table foo can have up to 50 million records. There is an index on each column (not a multicolumn index). It doesn't work very "fast" - without GREATEST and LEAST it's very fast. Any other solutions?
Update: An iterative solution is not that bad after analyzing the requirements again:
There are 54 million edges and 21 million nodes in the DB; there are distinct graphs in the DB, and each connected graph has 3 to 100 nodes.
It worked well for the question "give me all related nodes" --> 20 msec (graph depth = 13).
DROP TABLE IF EXISTS edges, nodes;
CREATE TABLE nodes
(
id INTEGER PRIMARY KEY
);
CREATE TABLE edges
(
"from" INTEGER NOT NULL REFERENCES nodes(id),
"to" INTEGER NOT NULL REFERENCES nodes(id)
);
INSERT INTO nodes SELECT generate_series(1,5);
INSERT INTO edges VALUES
-- circle
(1,2),(2,3),(3,1),
-- other direction
(4,3);
CREATE OR REPLACE FUNCTION walk_graph(param_node_id INTEGER)
RETURNS TABLE (id INTEGER)
LANGUAGE plpgsql
AS $$
DECLARE
  var_node_ids INTEGER[] := ARRAY[param_node_id];
  var_iteration_node_ids INTEGER[] := ARRAY[param_node_id];
BEGIN
  WHILE array_length(var_iteration_node_ids, 1) > 0 LOOP
    var_iteration_node_ids := ARRAY(
      SELECT DISTINCT "to" FROM edges
      WHERE "from" = ANY(var_iteration_node_ids)
        AND NOT ("to" = ANY(var_node_ids))
      UNION
      SELECT DISTINCT "from" FROM edges
      WHERE "to" = ANY(var_iteration_node_ids)
        AND NOT ("from" = ANY(var_node_ids)));
    var_node_ids := var_node_ids || var_iteration_node_ids;
  END LOOP;
  RETURN QUERY SELECT unnest(var_node_ids);
END $$;
SELECT * FROM walk_graph(2);
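For the sample data above, walk_graph(2) should return all four connected nodes: 2, 1, 3, 4 (the order reflects how they were appended to the accumulator array).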

Get all instances of primary keys of a table

Here is a simple example of what I need: for any given table, I need to get all the instances of its primary key values. This is a little example, but I need a generic way to do it.
create table foo
(
  a numeric
  ,b text
  ,c numeric
  ,constraint pk_foo primary key (a,b)
);

insert into foo(a,b,c) values (1,'a',1),(2,'b',2),(3,'c',3);
select <the magical thing>

result:

 a | b
---+---
 1 | a
 2 | b
 3 | c
 ...
I need to check whether the primary key values are changed by the user, but I don't want to repeat code in too many tables! I need a generic way to do it: I will put <the magical thing> in a function, use it in a BEFORE UPDATE trigger, and blah blah blah...
In PostgreSQL you must always provide a result type for a query. However, you can generate the text of the query you need, and then execute that query from the client:
create or replace function get_key_only_sql(regclass) returns text as $$
  select 'select '|| (
    select string_agg(quote_ident(att.attname), ', ' order by col)
    from pg_index i
    join lateral unnest(indkey) col on (true)
    join pg_attribute att on (att.attrelid = i.indrelid and att.attnum = col)
    where i.indrelid = $1 and i.indisprimary
    group by i.indexrelid
    limit 1) || ' from '||$1::text
$$ language sql;
Here's some client pseudocode using the function above:
sql = pgexecscalar("select get_key_only_sql('mytable'::regclass)");
rs = pgopen(sql);
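If the end goal really is a BEFORE UPDATE trigger, a purely server-side sketch is also possible (not part of the answer above; forbid_pk_update and foo_pk_guard are made-up names, and the comparison uses IS DISTINCT FROM so NULLs are handled):

-- Reject any UPDATE that changes a primary key column,
-- for whatever table the trigger is attached to.
create or replace function forbid_pk_update() returns trigger
language plpgsql as $$
declare
  col text;
  changed boolean;
begin
  for col in
    select att.attname
    from pg_index i
    join pg_attribute att
      on att.attrelid = i.indrelid and att.attnum = any (i.indkey)
    where i.indrelid = tg_relid and i.indisprimary
  loop
    execute format('select ($1).%I is distinct from ($2).%I', col, col)
      into changed using old, new;
    if changed then
      raise exception 'primary key column % must not be changed', col;
    end if;
  end loop;
  return new;
end
$$;

create trigger foo_pk_guard
  before update on foo
  for each row execute procedure forbid_pk_update();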

Get columns that differ between 2 rows

I have a table company with 60 columns. The goal is to create a tool to find, compare and eliminate duplicates in this table.
Example: I find 2 companies that potentially are the same, but I need to know which values (columns) differ between these 2 rows in order to continue.
It would be possible to compare the columns one by one (60 times), but I'm searching for a simpler and more generic solution.
Something like:
SELECT * FROM company where co_id=22
SHOW DIFFERENCE
SELECT * FROM company where co_id=33
The result should be the column names that differ.
For this you may use an intermediate key/value representation of the rows, with JSON functions or alternatively with the hstore extension (now only of historical interest). JSON comes built-in with every reasonably recent version of PostgreSQL, whereas hstore must be installed in the database with CREATE EXTENSION.
Demo:
CREATE TABLE table1 (id int primary key, t1 text, t2 text, t3 text);
Let's insert two rows that differ by the primary key and one other column (t3).
INSERT INTO table1 VALUES
(1,'foo','bar','baz'),
(2,'foo','bar','biz');
Solution with json
First we get a key/value representation of the rows together with their original row number, then we pair the rows based on that row number and filter out those with the same "value" column:
WITH rowcols AS (
  select rn, key, value
  from (select row_number() over () as rn,
               row_to_json(table1.*) as r
        from table1) AS s
  cross join lateral json_each_text(s.r)
)
select r1.key
from rowcols r1
join rowcols r2 on (r1.rn = r2.rn - 1 and r1.key = r2.key)
where r1.value <> r2.value;
Sample result:
key
-----
id
t3
Solution with hstore
SELECT skeys(h1-h2) from
(select hstore(t.*) as h1 from table1 t where id=1) h1
CROSS JOIN
(select hstore(t.*) as h2 from table1 t where id=2) h2;
h1-h2 computes the difference key by key and skeys() outputs the result as a set.
Result:
skeys
-------
id
t3
The select-list might be refined with skeys((h1-h2)-'id'::text) to always remove id which, as the primary key, will obviously always differ between rows.
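Applied to the original company table, the JSON variant might look like this (a sketch; co_id comes from the question, all other columns are picked up dynamically, and IS DISTINCT FROM makes NULLs compare sanely):

WITH rowcols AS (
  select co_id, key, value
  from (select co_id, row_to_json(c.*) as r
        from company c
        where co_id in (22, 33)) s
  cross join lateral json_each_text(s.r)
)
select r1.key
from rowcols r1
join rowcols r2 on r1.key = r2.key
where r1.co_id = 22 and r2.co_id = 33
  and r1.value is distinct from r2.value;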
Here's a stored procedure that should get you most of the way...
While this should work "as is", it has no error checking, which you should add.
It gets all the columns in the table and loops over them. A difference is detected when the count of distinct values for a column is more than one.
Also, the output is:
The count of the number of differences
Messages for each column where there is a difference
It might be more useful to return a rowset of the columns with the differences. Anyway, good luck!
Usage:
SELECT showdifference('public','company','co_id',22,33)
CREATE OR REPLACE FUNCTION showdifference(p_schema text, p_tablename text, p_idcolumn text, p_firstid integer, p_secondid integer)
RETURNS INTEGER AS
$BODY$
DECLARE
  l_diffcount INTEGER;
  l_column text;
  l_dupcount integer;
  column_cursor CURSOR FOR
    select column_name from information_schema.columns
    where table_name = p_tablename and table_schema = p_schema and column_name <> p_idcolumn;
BEGIN
  -- need error checking here, to ensure the table and schema exist and the columns exist
  -- Should also check that the record ids exist.
  -- Should also check that the column type of the id field is integer
  -- Set the number of differences to zero.
  l_diffcount := 0;
  -- use a cursor to iterate over the columns found in information_schema.columns
  OPEN column_cursor;
  LOOP
    FETCH column_cursor INTO l_column;
    EXIT WHEN NOT FOUND;
    -- build a query to see if there is a difference between the columns. If there is, raise a notice
    EXECUTE 'select count(distinct ' || quote_ident(l_column) || ') from '
            || quote_ident(p_schema) || '.' || quote_ident(p_tablename)
            || ' where ' || quote_ident(p_idcolumn) || ' in (' || p_firstid || ',' || p_secondid || ')'
      INTO l_dupcount;
    IF l_dupcount > 1 THEN
      -- increment the counter
      l_diffcount := l_diffcount + 1;
      RAISE NOTICE '% has % differences', l_column, l_dupcount; -- for "real" you might want to return a rowset and could do something here
    END IF;
  END LOOP;
  CLOSE column_cursor;
  RETURN l_diffcount;
END;
$BODY$
LANGUAGE plpgsql VOLATILE STRICT
COST 100;

Join 2 sets based on default order

How do I join 2 sets of records solely based on the default order?
So if I have a table x with a column c1 containing (1,2,3,4,5,6,7) and another table z with a column c2 containing (a,b,c,d,e,f,g),
it will return
c1 c2
-- --
1 a
2 b
3 c
4 d
5 e
6 f
7 g
Actually, I wanted to join a pair of one dimensional arrays from parameters and treat them like columns from a table.
Sample code:
CREATE OR REPLACE FUNCTION "Test"(timestamp without time zone[],
timestamp without time zone[])
RETURNS refcursor AS
$BODY$
DECLARE
curr refcursor;
BEGIN
OPEN curr FOR
SELECT DISTINCT "Start" AS x, "End" AS y, COUNT("A"."id")
FROM UNNEST($1) "Start"
INNER JOIN
(
SELECT "End", ROW_NUMBER() OVER(ORDER BY ("End")) rn
FROM UNNEST($2) "End" ORDER BY ("End")
) "End" ON ROW_NUMBER() OVER(ORDER BY ("Start")) = "End".rn
LEFT JOIN "A" ON ("A"."date" BETWEEN x AND y)
GROUP BY 1,2
ORDER BY "Start";
return curr;
END
$BODY$
Now, to answer the real question that was revealed in comments, which appears to be something like:
Given two arrays 'a' and 'b', how do I pair up their elements so I can get the element pairs as column aliases in a query?
There are a couple of ways to tackle this:
If and only if the arrays are of equal length, use multiple unnest functions in the SELECT clause (a deprecated approach that should only be used for backward compatibility);
Use generate_subscripts to loop over the arrays;
Use generate_series over subqueries against array_lower and array_upper to emulate generate_subscripts if you need to support versions too old to have generate_subscripts;
Relying on the order that unnest returns tuples in and hoping - like in my other answer and as shown below. It'll work, but it's not guaranteed to work in future versions.
Use the WITH ORDINALITY functionality added in PostgreSQL 9.4 to get a row number for unnest, once 9.4 comes out.
Use multiple-array UNNEST, which is SQL-standard but which PostgreSQL doesn't support yet.
So, say we have function arraypair with array parameters a and b:
CREATE OR REPLACE FUNCTION arraypair (a integer[], b text[])
RETURNS TABLE (col_a integer, col_b text) AS $$
-- blah code here blah
$$ LANGUAGE whatever IMMUTABLE;
and it's invoked as:
SELECT * FROM arraypair( ARRAY[1,2,3,4,5,6,7], ARRAY['a','b','c','d','e','f','g'] );
possible function definitions would be:
SRF-in-SELECT (deprecated)
CREATE OR REPLACE FUNCTION arraypair (a integer[], b text[])
RETURNS TABLE (col_a integer, col_b text) AS $$
SELECT unnest(a), unnest(b);
$$ LANGUAGE sql IMMUTABLE;
Will produce bizarre and unexpected results if the arrays aren't equal in length; see the documentation on set returning functions and their non-standard use in the SELECT list to learn why, and what exactly happens.
generate_subscripts
This is likely the safest option:
CREATE OR REPLACE FUNCTION arraypair (a integer[], b text[])
RETURNS TABLE (col_a integer, col_b text) AS $$
SELECT
a[i], b[i]
FROM generate_subscripts(CASE WHEN array_length(a,1) >= array_length(b,1) THEN a::text[] ELSE b::text[] END, 1) i;
$$ LANGUAGE sql IMMUTABLE;
If the arrays are of unequal length, as written it'll return null elements for the shorter, so it works like a full outer join. Reverse the sense of the case to get an inner-join like effect. The function assumes the arrays are one-dimensional and that they start at index 1. If an entire array argument is NULL then the function returns NULL.
A more generalized version would be written in PL/PgSQL and would check array_ndims(a) = 1, check array_lower(a, 1) = 1, test for null arrays, etc. I'll leave that to you.
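For what it's worth, such a generalized version might look roughly like this (a sketch only; arraypair_checked is a made-up name and the checks are not exhaustive):

CREATE OR REPLACE FUNCTION arraypair_checked(a integer[], b text[])
RETURNS TABLE (col_a integer, col_b text) AS $$
BEGIN
    -- treat a missing array as an empty result
    IF a IS NULL OR b IS NULL THEN
        RETURN;
    END IF;
    IF array_ndims(a) <> 1 OR array_ndims(b) <> 1 THEN
        RAISE EXCEPTION 'only one-dimensional arrays are supported';
    END IF;
    IF array_lower(a, 1) <> 1 OR array_lower(b, 1) <> 1 THEN
        RAISE EXCEPTION 'arrays must start at index 1';
    END IF;
    RETURN QUERY
        SELECT a[i], b[i]
        FROM generate_subscripts(
            CASE WHEN array_length(a, 1) >= array_length(b, 1)
                 THEN a::text[] ELSE b::text[] END, 1) i;
END
$$ LANGUAGE plpgsql IMMUTABLE;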
Hoping for pair-wise returns:
This isn't guaranteed to work, but does with PostgreSQL's current query executor:
CREATE OR REPLACE FUNCTION arraypair (a integer[], b text[])
RETURNS TABLE (col_a integer, col_b text) AS $$
WITH
rn_c1(rn, col) AS (
SELECT row_number() OVER (), c1.col
FROM unnest(a) c1(col)
),
rn_c2(rn, col) AS (
SELECT row_number() OVER (), c2.col
FROM unnest(b) c2(col)
)
SELECT
rn_c1.col AS c1,
rn_c2.col AS c2
FROM rn_c1
INNER JOIN rn_c2 ON (rn_c1.rn = rn_c2.rn);
$$ LANGUAGE sql IMMUTABLE;
I would consider using generate_subscripts much safer.
Multi-argument unnest:
This should work, but doesn't because PostgreSQL's unnest doesn't accept multiple input arrays (yet):
SELECT * FROM unnest(a,b);
select x.c1, z.c2
from x
inner join (
  select c2,
         row_number() over (order by c2) rn
  from z
  order by c2
) z on x.c1 = z.rn
order by x.c1
If x.c1 is not 1,2,3... you can do the same thing for x that was done with z.
The middle ORDER BY is not necessary, as pointed out by Erwin. I tested it like this:
create table t (i integer);
insert into t
select ceil(random() * 100000)
from generate_series(1, 100000);
select
i,
row_number() over(order by i) rn
from t
;
And i comes out ordered. Before this simple test (which I had never run before), I thought it was possible that the rows would be numbered in any order.
By "default order" it sounds like you probably mean the order in which the rows are returned by select * from tablename without an ORDER BY.
If so, this ordering is undefined. The database can return rows in any order that it feels like. You'll find that if you UPDATE a row, it probably moves to a different position in the table.
If you're stuck in a situation where you assumed tables had an order and they don't, you can as a recovery option add a row number based on the on-disk ordering of the tuples within the table:
select row_number() OVER (), *
from the_table
order by ctid
If the output looks right, I recommend that you CREATE TABLE a new table with an extra field, then do an INSERT INTO ... SELECT to insert the data ordered by ctid, then ALTER TABLE ... RENAME the tables and finally fix any foreign key references so they point to the new table.
ctid can be changed by autovacuum, UPDATE, CLUSTER, etc, so it is not something you should ever be using in applications. I'm using it here only because it sounds like you don't have any real ordering or identifier key.
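As a concrete sketch of that recovery procedure (table and column names are made up):

BEGIN;

-- New table with an explicit ordering column.
CREATE TABLE legacy_new (
    rn  bigint,
    val text
);

-- Capture the current on-disk order once, while we still can.
INSERT INTO legacy_new (rn, val)
SELECT row_number() OVER (ORDER BY ctid), val
FROM legacy;

ALTER TABLE legacy RENAME TO legacy_old;
ALTER TABLE legacy_new RENAME TO legacy;

COMMIT;

Fixing foreign key references so they point at the new table remains a manual step, as noted above.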
If you need to pair up rows based on their on-disk ordering (an unreliable and unsafe thing to do as noted above), you could per this SQLFiddle try:
WITH
rn_c1(rn, col) AS (
SELECT row_number() OVER (ORDER BY ctid), c1.col
FROM c1
),
rn_c2(rn, col) AS (
SELECT row_number() OVER (ORDER BY ctid), c2.col
FROM c2
)
SELECT
rn_c1.col AS c1,
rn_c2.col AS c2
FROM rn_c1
INNER JOIN rn_c2 ON (rn_c1.rn = rn_c2.rn);
but never rely on this in a production app. If you're really stuck you can use this with CREATE TABLE AS to construct a new table that you can start with when you're working on recovering data from a DB that lacks a required key, but that's about it.
The same approach given above might work with an empty window clause () instead of (ORDER BY ctid) when using sets that lack a ctid, like interim results from functions. It's even less safe in that case, though, and should be a matter of last resort only.
(See also this newer related answer: https://stackoverflow.com/a/17762282/398670)