Postgres unique index on text array - postgresql

How do I add a unique index on a text array column?
I have a column in my Postgres table which contains sections.
+----+------------+
| id | sections   |
+----+------------+
| 1  | ['A', 'B'] |
| 2  | ['A', 'A'] |
+----+------------+
As you can see, for id 2 I can insert two sections with the same text. I do not want duplicate sections in my array.
Is there a way I can add an index on a text array?
I have seen examples for int arrays but can't find anything for text arrays.
I do not want to create a new function; I want to use the existing functions in Postgres.

You can append to the sections column and de-duplicate with distinct and unnest like this:
update class
set sections = array(
    select distinct unnest(
        array_append(
            (select sections from class where id = 2), 'A')))
where id = 2;

I like arrays, and it is not always a good idea to normalize tables :-)
CREATE OR REPLACE FUNCTION all_distinct(a int[]) RETURNS bool AS $f$
    SELECT array_upper(a, 1) = array_upper(
        (SELECT array_agg(DISTINCT u)
         FROM unnest(a) AS u), 1);
$f$ LANGUAGE sql;

CREATE TEMP TABLE a (a int[], CHECK (all_distinct(a)));
Test it:
# INSERT INTO a VALUES (ARRAY[1]);
INSERT 0 1
# INSERT INTO a VALUES (ARRAY[1, 2]);
INSERT 0 1
# INSERT INTO a VALUES (ARRAY[1, 1]);
ERROR: new row for relation "a" violates check constraint "a_a_check"
DETAIL: Failing row contains ({1,1}).
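Since the question asked about text arrays specifically: the same CHECK pattern works for any element type if the helper is declared with the polymorphic anyarray type. A minimal sketch (function and table names here are illustrative, not from the question):

create or replace function elements_are_distinct(a anyarray) returns bool as $f$
    -- true when the array contains no repeated elements
    select array_upper(a, 1) = (select count(distinct u) from unnest(a) as u);
$f$ language sql immutable;

create table sections_demo (id int, sections text[] check (elements_are_distinct(sections)));
-- insert into sections_demo values (1, array['A','B']);  -- accepted
-- insert into sections_demo values (2, array['A','A']);  -- violates the check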

Related

How do I create a column in PgAdmin4 that is based on another column

I want to create a column that gives specific values for words in the previous column.
For example, if column 1 says 'cars' I want column 2 to say 1, and if column 1 says 'trains' I want column 2 to say 2. But if column 1 says neither cars nor trains I want column 2 to return a 0.
ALTER TABLE mytable ADD COLUMN col2 VARCHAR(100);
UPDATE mytable
SET col2 = CASE col1 WHEN 'cars'   THEN '1'
                     WHEN 'trains' THEN '2'
                     ELSE '0' END;
You could create the new column and then use a CASE expression to populate it with the values you want. In case the column new_col does not already exist, create it first:
ALTER TABLE yourTable ADD COLUMN new_col VARCHAR(100);
Then populate it:
UPDATE yourTable
SET new_col = CASE old_col WHEN 'cars'   THEN '1'
                           WHEN 'trains' THEN '2'
                           ELSE '0' END;
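As a quick sanity check, here is a hypothetical end-to-end run of those two steps (table and values invented for illustration):

CREATE TABLE mytable (col1 VARCHAR(100));
INSERT INTO mytable VALUES ('cars'), ('trains'), ('boats');

ALTER TABLE mytable ADD COLUMN col2 VARCHAR(100);

UPDATE mytable
SET col2 = CASE col1 WHEN 'cars'   THEN '1'
                     WHEN 'trains' THEN '2'
                     ELSE '0' END;

-- SELECT * FROM mytable should now give:
--  cars   | 1
--  trains | 2
--  boats  | 0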

How to update and then insert a record with a unique key constraint

I have a table with columns a, b, c, where c has a unique key constraint.
a  | b | c (uk)
---+---+-------
aa | 1 | z
I want to insert a new row with values (bb, 1, z). If z already exists, I first want to update the existing row to
a  | b | c (uk)
---+---+-------
aa | 1 | null
and then insert (bb, 1, z), so that the final rows look as shown below.
a  | b | c (uk)
---+---+-------
aa | 1 | null
bb | 1 | z
How can I do this in a single SQL statement?
Unfortunately, unique constraints are checked per row in Postgres, unless they are created as "deferrable".
So if you want to do the UPDATE and the INSERT in a single statement, you will have to re-create your unique constraint with the attribute deferrable. Postgres will then check it at the end of the statement.
create table the_table
(
    a text,
    b int,
    c text
);

alter table the_table
    add constraint unique_c
    unique (c)
    deferrable --<< this is the "magic"
;
Then you can do it with a data-modifying CTE:
with input (a, b, c) as (
    values ('bb', 1, 'z')
), change_old as (
    update the_table
    set c = null
    from input
    where input.c = the_table.c
)
insert into the_table
select a, b, c
from input;
I think you should use a BEFORE INSERT trigger for this.
In the trigger you would update all the records containing the value new.c, like:
UPDATE my_table SET
    c = NULL
WHERE
    c = new.c;
If c is not present, nothing will happen.
You may still run into a unique constraint violation error if two concurrent transactions insert the same value into c. To avoid that, you can use advisory locks, deriving the lock ID from the value of c.
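A sketch of that trigger, including the advisory lock (the function and trigger names are made up; pg_advisory_xact_lock holds the lock until the end of the transaction, and hashtext turns the text value into a lock ID):

CREATE OR REPLACE FUNCTION clear_conflicting_c() RETURNS trigger AS $$
BEGIN
    -- serialize concurrent inserts of the same value of c
    PERFORM pg_advisory_xact_lock(hashtext(NEW.c));
    -- free up the unique value; does nothing if it is not present
    UPDATE my_table SET c = NULL WHERE c = NEW.c;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER clear_conflicting_c_trg
    BEFORE INSERT ON my_table
    FOR EACH ROW EXECUTE PROCEDURE clear_conflicting_c();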

Check constraint on biggest key of HSTORE in Postgres

I would like to create a check constraint on an HSTORE field that contains data in the following format:
{
    1 => 2020-03-01, 2 => 2020-03-07, etc, etc, etc,
}
where each key is always a positive integer and each value is a date.
The problem is that I want to extract the keys (by akeys), and then somehow get the biggest key and compare it with number_of_episodes (a positive integer).
But it says that I can't use arrays in a check constraint.
The question is: is it possible to somehow extract the biggest key from the HSTORE as an integer and use it in a check constraint afterwards?
Thank you.
alter table archives_seasonmodel
    add constraint test
    check (max((unnest(akeys(episodes))) <= number_of_episodes ));
This doesn't work.
This works for me in PostgreSQL 10:
# create table tvseries
(number_of_episodes int,
episodes hstore,
check (number_of_episodes >= all (akeys(episodes)::int[]))
);
CREATE TABLE
# insert into tvseries values (2, '1=>"a", 2=>"b"');
INSERT 0 1
# insert into tvseries values (1, '1=>"a", 2=>"b"');
ERROR: new row for relation "tvseries" violates check constraint "tvseries_check"
DETAIL: Failing row contains (1, "1"=>"a", "2"=>"b").
# insert into tvseries values (2, '1=>"a"');
INSERT 0 1
# select * from tvseries;
number_of_episodes | episodes
--------------------+--------------------
2 | "1"=>"a", "2"=>"b"
2 | "1"=>"a"
(2 rows)
This answer outlines a couple of ways you can go about this. The first is to use the intarray extension and its sort_desc function, but I think the better approach here is to use a custom function.
testdb=# create extension hstore;
CREATE EXTENSION
testdb=# create table tt0(h hstore, max_n bigint);
CREATE TABLE
testdb=# CREATE OR REPLACE FUNCTION array_greatest(anyarray)
RETURNS anyelement LANGUAGE SQL AS $$
SELECT max(x) FROM unnest($1) as x;
$$;
CREATE FUNCTION
testdb=# alter table tt0 add check((array_greatest(akeys(h)::integer[]))<=max_n);
ALTER TABLE
testdb=# insert into tt0 select hstore(ARRAY[['1','asdf'],['3','fdsa']]), 2;
ERROR: new row for relation "tt0" violates check constraint "tt0_check"
DETAIL: Failing row contains ("1"=>"asdf", "3"=>"fdsa", 2).
testdb=# insert into tt0 select hstore(ARRAY[['1','asdf'],['2','fdsa']]), 2;
INSERT 0 1
testdb=# select * from tt0
testdb-# ;
h | max_n
--------------------------+-------
"1"=>"asdf", "2"=>"fdsa" | 2
(1 row)
testdb=# \d tt0
Table "public.tt0"
Column | Type | Collation | Nullable | Default
--------+--------+-----------+----------+---------
h | hstore | | |
max_n | bigint | | |
Check constraints:
"tt0_check" CHECK (array_greatest(akeys(h)::integer[]) <= max_n)

Update hstore values with other hstore values

I have a summary table that is updated with new data on a regular basis. One of the columns is of type hstore. When I update with new data I want to add the value of a key to the existing value of that key if the key exists; otherwise I want to add the pair to the hstore.
Existing data:
id sum keyvalue
--------------------------------------
1 2 "key1"=>"1","key2"=>"1"
New data:
id sum keyvalue
--------------------------------------------------
1 3 "key1"=>"1","key2"=>"1","key3"=>"1"
Wanted result:
id sum keyvalue
--------------------------------------------------
1 5 "key1"=>"2","key2"=>"2","key3"=>"1"
I want to do this in the ON CONFLICT part of an insert.
The sum part was easy, but I have not found out how to concatenate the hstore values in this way.
There is nothing built in. You have to write a function that accepts two hstore values and merges them in the way you want.
create function merge_and_increment(p_one hstore, p_two hstore)
    returns hstore
as
$$
    select hstore_agg(hstore(k, v))
    from (
        select k, sum(v::int)::text as v
        from (
            select *
            from each(p_one) as t1(k, v)
            union all
            select *
            from each(p_two) as t2(k, v)
        ) x
        group by k
    ) s
$$
language sql;
The hstore_agg() function isn't built-in either, but it's easy to define:
create aggregate hstore_agg(hstore)
(
    sfunc = hs_concat,
    stype = hstore
);
So the result of this:
select merge_and_increment(hstore('"key1"=>"1","key2"=>"1"'), hstore('"key1"=>"1","key2"=>"1","key3"=>"1"'))
is:
          merge_and_increment
---------------------------------------
 "key1"=>"2", "key2"=>"2", "key3"=>"1"
Note that the function will fail miserably if there are values that can't be converted to an integer.
With an insert statement you can use it like this:
insert into the_table (id, sum, data)
values (....)
on conflict (id) do update
set sum = the_table.sum + excluded.sum,
data = merge_and_increment(the_table.data, excluded.data);
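Putting it together with the data from the question (the table definition here is assumed; merge_and_increment and hstore_agg are the functions defined above):

CREATE TABLE the_table (id int PRIMARY KEY, sum int, data hstore);

INSERT INTO the_table VALUES (1, 2, '"key1"=>"1","key2"=>"1"');

INSERT INTO the_table VALUES (1, 3, '"key1"=>"1","key2"=>"1","key3"=>"1"')
ON CONFLICT (id) DO UPDATE
    SET sum  = the_table.sum + excluded.sum,
        data = merge_and_increment(the_table.data, excluded.data);

-- id | sum | data
-- ---+-----+---------------------------------------
--  1 |  5  | "key1"=>"2", "key2"=>"2", "key3"=>"1"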
demo: db<>fiddle
CREATE OR REPLACE FUNCTION sum_hstore(_old hstore, _new hstore) RETURNS hstore
AS $$
DECLARE
    _out hstore;
BEGIN
    SELECT
        hstore(array_agg(key), array_agg(value::text))
    INTO _out
    FROM (
        SELECT
            key,
            SUM(value::int) AS value
        FROM (
            SELECT * FROM each(_old)
            UNION ALL
            SELECT * FROM each(_new)
        ) s
        GROUP BY key
    ) s;
    RETURN _out;
END;
$$
LANGUAGE plpgsql;
each() expands the key/value pairs into one row per pair, with columns key and value.
The text values are cast to int and grouped/summed per key.
The result is aggregated into a new hstore value using the hstore(array, array) function: the array elements are the values of the key column and the values of the value column.
You can do such an update:
UPDATE mytable
SET keyvalue = sum_hstore(keyvalue, '"key1"=>"1","key2"=>"1","key3"=>"1"')
WHERE id = 1;

Query with condition on array items in PostgreSQL

I would like to select rows in a table in which a certain number of items in an array column meet a comparison condition (>= n). Is this possible without using unnest?
unnest() is a natural way to count filtered elements in an array.
However, you can hide this in an sql function like this:
create or replace function number_of_elements(arr int[], val int)
returns bigint language sql
as $$
select count(*)
from unnest(arr) e
where e > val;
$$;
with test(id, arr) as (
values
(1, array[1,2,3,4]),
(2, array[3,4,5,6]))
select id, arr, number_of_elements(arr, 3)
from test;
id | arr | number_of_elements
----+-----------+--------------------
1 | {1,2,3,4} | 1
2 | {3,4,5,6} | 3
(2 rows)
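If you would rather not define a function at all, the same count can be written inline as a scalar subquery (using the same test data as above; the thresholds are just for illustration):

with test(id, arr) as (
    values
        (1, array[1,2,3,4]),
        (2, array[3,4,5,6]))
select id, arr
from test
where (select count(*) from unnest(arr) e where e > 3) >= 2;
-- returns only id 2 ({3,4,5,6}), which has three elements > 3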