Custom per-year sequence with a column value as prefix - PostgreSQL

I need to create a custom sequence whose values are prefixed with a specific column. I know it is possible to customize a sequence and its nextval, but I'm not sure whether a column of a specific table can be used for this.
This is the structure of the table with the essential information:
create table tab
(
    id serial not null
        constraint tab_pkey primary key,
    year varchar(4) not null,
    seq varchar(20) not null
);
create sequence tab_id_seq as integer;
I would like to automatically populate the "seq" column, as happens for normal sequences, according to this format:
{year}_{sequence}
where {year}_ is the prefix, while {sequence} is a counter that restarts from 1 every year.
DESIRED RESULT
| id | year | seq    |
|----|------|--------|
| 10 | 2019 | 2019_1 |
| 11 | 2019 | 2019_2 |
| 12 | 2019 | 2019_3 |
| 13 | 2019 | 2019_4 |
| 14 | 2020 | 2020_1 |  <--- sequence restarting
| 15 | 2020 | 2020_2 |
| 16 | 2020 | 2020_3 |
N.B. there is no direct relationship between the id column and the {sequence} element

For the following test structure:
create table test
(
    id serial primary key
    , year_val int
    , seq varchar(10)
);
create or replace function fn_test() returns trigger language plpgsql as $$
declare
    res_name varchar;
begin
    drop table if exists tmp_test;
    create temporary table tmp_test as select * from test;
    insert into tmp_test values (new.id, new.year_val);
    with cte as
    (
        select *
            , year_val::varchar || '_' || (count(*) over (partition by year_val order by id))::varchar as built_res_name
        from tmp_test
    )
    select built_res_name into res_name
    from cte
    where id = new.id;
    new.seq := res_name;
    return new;
end;
$$;
CREATE TRIGGER tg_test BEFORE INSERT ON test
FOR EACH ROW EXECUTE FUNCTION fn_test();
insert into test (year_val)
values (2019),(2019),(2019),(2019),(2020),(2020),(2020);
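The temp-table approach above rebuilds a copy of test on every insert just to run the window function. A leaner sketch under the same assumptions (single-session inserts, no concurrency) counts the existing rows for the year directly; the function name fn_test_count is hypothetical.

```sql
-- Hypothetical leaner variant: derive the per-year counter by counting
-- existing rows directly, with no temporary table. Like the original,
-- this is NOT safe under concurrent inserts.
create or replace function fn_test_count() returns trigger language plpgsql as $$
begin
    select new.year_val::varchar || '_' || (count(*) + 1)::varchar
      into new.seq
      from test
     where year_val = new.year_val;
    return new;
end;
$$;
```

Wired up with the same BEFORE INSERT ... FOR EACH ROW trigger, it yields the same {year}_{n} values for sequential inserts, since in a BEFORE trigger the new row is not yet visible in test.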

In the end I found a solution using multiple sequences, one per year, created dynamically as records are inserted.
A trigger fires before the insertion and invokes a function that creates the sequence (if it does not exist) and assigns a value to the seq column (if none is set).
WORKFLOW
1. a record is inserted
2. the sequence 'tab_{year}_id_seq' is created if it does not exist
3. if the seq column is empty, it is assigned nextval('tab_{year}_id_seq')
4. insertions and deletions are tested to verify that the column is populated correctly
TABLE STRUCTURE
CREATE TABLE tab (
    id serial not null constraint tab_pkey primary key,
    year varchar(4) not null,
    seq varchar(20)
);
FUNCTION
CREATE FUNCTION tab_sequence_trigger_function() RETURNS trigger AS $$
BEGIN
    IF NEW.seq IS NULL OR NEW.seq = '' THEN
        EXECUTE 'CREATE SEQUENCE IF NOT EXISTS tab_' || NEW.year || '_id_seq AS INTEGER';
        NEW.seq := NEW.year || '_' || nextval('tab_' || NEW.year || '_id_seq');
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
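Since NEW.year is concatenated straight into DDL, a hardened sketch of the same function (my addition, not part of the original solution) can run the dynamic SQL through format() so an unexpected year value cannot break, or inject into, the statement:

```sql
CREATE OR REPLACE FUNCTION tab_sequence_trigger_function() RETURNS trigger AS $$
DECLARE
    seq_name text := 'tab_' || NEW.year || '_id_seq';
BEGIN
    IF NEW.seq IS NULL OR NEW.seq = '' THEN
        -- %I quotes seq_name as an identifier, so a malformed NEW.year
        -- cannot escape the CREATE SEQUENCE statement
        EXECUTE format('CREATE SEQUENCE IF NOT EXISTS %I AS INTEGER', seq_name);
        NEW.seq := NEW.year || '_' || nextval(quote_ident(seq_name));
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```

Behavior is unchanged for well-formed years such as '2019'.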
TRIGGER
CREATE TRIGGER tab_sequence_trigger
BEFORE INSERT ON tab
FOR EACH ROW
EXECUTE PROCEDURE tab_sequence_trigger_function();
TEST
INSERT INTO tab (year) VALUES (2019);
INSERT INTO tab (year) VALUES (2019);
INSERT INTO tab (year) VALUES (2019);
INSERT INTO tab (year) VALUES (2019);
INSERT INTO tab (year) VALUES (2019);
DELETE FROM tab WHERE id=5;
INSERT INTO tab (year) VALUES (2019);
INSERT INTO tab (year) VALUES (2019);
INSERT INTO tab (year) VALUES (2020);
INSERT INTO tab (year) VALUES (2020);
INSERT INTO tab (year) VALUES (2021);
DELETE FROM tab WHERE id=8;
DELETE FROM tab WHERE id=9;
INSERT INTO tab (year) VALUES (2021);
INSERT INTO tab (year) VALUES (2020);
RESULT
SELECT * FROM tab;
| id | year | seq    |
|----|------|--------|
| 1  | 2019 | 2019_1 |
| 2  | 2019 | 2019_2 |
| 3  | 2019 | 2019_3 |
| 4  | 2019 | 2019_4 |
| 6  | 2019 | 2019_6 |
| 7  | 2019 | 2019_7 |
| 10 | 2021 | 2021_1 |
| 11 | 2021 | 2021_2 |
| 12 | 2020 | 2020_3 |


PostgreSQL - Loop Over Rows to Fill NULL Values

I have a table named players which has the following data
+------+------------+
| id | username |
|------+------------|
| 1 | mike93 |
| 2 | james_op |
| 3 | will_sniff |
+------+------------+
desired result:
+------+------------+------------+
| id | username | uniqueId |
|------+------------+------------|
| 1 | mike93 | PvS3T5 |
| 2 | james_op | PqWN7C |
| 3 | will_sniff | PHtPrW |
+------+------------+------------+
I need to create a new column called uniqueId. This value is different from the default serial numeric value. uniqueId is a unique, NOT NULL, 6-character text value with the prefix "P".
In my migration, here's the code I have so far:
ALTER TABLE players ADD COLUMN uniqueId varchar(6) UNIQUE;
(loop comes here)
ALTER TABLE players ALTER COLUMN uniqueId SET NOT NULL;
and here's the SQL code I use to generate these unique IDs
SELECT CONCAT('P', string_agg (substr('abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789', ceil (random() * 62)::integer, 1), ''))
FROM generate_series(1, 5);
So, in other words, I need to create the new column without the NOT NULL constraint, loop over every already existing row, fill the NULL value with a valid ID and eventually add the NOT NULL constraint.
In theory it should be enough to run:
update players
set unique_id = (SELECT CONCAT('P', string_agg ...))
;
However, Postgres will not re-evaluate the expression in the SELECT for every row (it is an uncorrelated scalar subquery, so it is computed once), and this generates a unique constraint violation. One workaround is to create a function (which you might want to do anyway) that generates these fake IDs:
create function generate_fake_id()
returns text
as
$$
    -- floor(random() * 62)::integer + 1 yields an index in 1..62;
    -- ceil(random() * 62) could return 0 when random() is exactly 0
    SELECT CONCAT('P', string_agg(substr('abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789', floor(random() * 62)::integer + 1, 1), ''))
    FROM generate_series(1, 5)
$$
language sql
volatile;
Then you can update your table using:
update players
set unique_id = generate_fake_id()
;
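One caveat worth adding: with only 62^5 suffixes, generate_fake_id() can occasionally produce a value that already exists, making the UPDATE fail on the unique constraint. A hypothetical retry helper (the name generate_unused_fake_id and the unique_id column follow the answer above):

```sql
-- Hypothetical: regenerate until the candidate is unused among committed rows.
-- Note this does not catch two identical candidates produced within the same
-- bulk UPDATE; if that statement still fails, simply rerun it.
create function generate_unused_fake_id()
returns text
as $$
declare
    candidate text;
begin
    loop
        candidate := generate_fake_id();
        exit when not exists (select 1 from players where unique_id = candidate);
    end loop;
    return candidate;
end;
$$ language plpgsql volatile;
```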

Postgres: does updating column value to the same value marks page as dirty?

Consider following scenario in PostgreSQL (any version from 10+):
CREATE TABLE users(
id serial primary key,
name text not null unique,
last_seen timestamp
);
INSERT INTO users(name, last_seen)
VALUES ('Alice', '2019-05-01'),
('Bob', '2019-04-29'),
('Dorian', '2019-05-11');
CREATE TABLE inactive_users(
user_id int primary key references users(id),
last_seen timestamp not null);
INSERT INTO inactive_users(user_id, last_seen)
SELECT id as user_id, last_seen FROM users
WHERE users.last_seen < '2019-05-04'
ON CONFLICT (user_id) DO UPDATE SET last_seen = excluded.last_seen;
Now let's say that I want to insert the same values (execute last statement) multiple times, every now and then. In practice from the database point of view, on conflicting values 90% of the time last_seen column will be updated to the same value it already had. The values of the rows stay the same, so there's no reason to do I/O writes, right? But is this really the case, or will postgres perform corresponding updates even though the actual value didn't change?
In my case the destination table has dozens of millions of rows, but only few hundreds/thousands of them will be really changing on each of the insert calls.
Any UPDATE to a row will actually create a new row (marking the old row deleted/dirty), regardless of the before/after values:
[root@497ba0eaf137 /]# psql
psql (12.1)
Type "help" for help.
postgres=# create table foo (id int, name text);
CREATE TABLE
postgres=# insert into foo values (1,'a');
INSERT 0 1
postgres=# select ctid,* from foo;
ctid | id | name
-------+----+------
(0,1) | 1 | a
(1 row)
postgres=# update foo set name = 'a' where id = 1;
UPDATE 1
postgres=# select ctid,* from foo;
ctid | id | name
-------+----+------
(0,2) | 1 | a
(1 row)
postgres=# update foo set id = 1 where id = 1;
UPDATE 1
postgres=# select ctid,* from foo;
ctid | id | name
-------+----+------
(0,3) | 1 | a
(1 row)
postgres=# select * from pg_stat_user_tables where relname = 'foo';
-[ RECORD 1 ]-------+-------
relid | 16384
schemaname | public
relname | foo
seq_scan | 5
seq_tup_read | 5
idx_scan |
idx_tup_fetch |
n_tup_ins | 1
n_tup_upd | 2
n_tup_del | 0
n_tup_hot_upd | 2
n_live_tup | 1
n_dead_tup | 2
<...>
And according to your example:
postgres=# select ctid,* FROM inactive_users ;
ctid | user_id | last_seen
-------+---------+---------------------
(0,1) | 1 | 2019-05-01 00:00:00
(0,2) | 2 | 2019-04-29 00:00:00
(2 rows)
postgres=# INSERT INTO inactive_users(user_id, last_seen)
postgres-# SELECT id as user_id, last_seen FROM users
postgres-# WHERE users.last_seen < '2019-05-04'
postgres-# ON CONFLICT (user_id) DO UPDATE SET last_seen = excluded.last_seen;
INSERT 0 2
postgres=# select ctid,* FROM inactive_users ;
ctid | user_id | last_seen
-------+---------+---------------------
(0,3) | 1 | 2019-05-01 00:00:00
(0,4) | 2 | 2019-04-29 00:00:00
(2 rows)
Postgres does not do any data validation against the column values -- if you are looking to prevent unnecessary write activity, you will need to surgically craft your WHERE clauses.
Disclosure: I work for EnterpriseDB (EDB)
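Following that advice, a sketch of such a "surgically crafted" clause for the example above: the DO UPDATE arm accepts its own WHERE, so the row is only rewritten when the incoming value actually differs (IS DISTINCT FROM also handles NULLs):

```sql
INSERT INTO inactive_users(user_id, last_seen)
SELECT id AS user_id, last_seen FROM users
WHERE users.last_seen < '2019-05-04'
ON CONFLICT (user_id) DO UPDATE
    SET last_seen = excluded.last_seen
    -- skip the write (and the new row version) when nothing changed
    WHERE inactive_users.last_seen IS DISTINCT FROM excluded.last_seen;
```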

PostgreSQL add SERIAL column to existing table with values based on ORDER BY

I have a large table (6+ million rows) that I'd like to add an auto-incrementing integer column sid, where sid is set on existing rows based on an ORDER BY inserted_at ASC. In other words, the oldest record based on inserted_at would be set to 1 and the latest record would be the total record count. Any tips on how I might approach this?
Add a sid column and UPDATE SET ... FROM ... WHERE:
UPDATE test
SET sid = t.rownum
FROM (SELECT id, row_number() OVER (ORDER BY inserted_at ASC) as rownum
FROM test) t
WHERE test.id = t.id
Note that this relies on there being a primary key, id.
(If your table did not already have a primary key, you would have to make one first.)
For example,
-- create test table
DROP TABLE IF EXISTS test;
CREATE TABLE test (
id int PRIMARY KEY GENERATED BY DEFAULT AS IDENTITY
, foo text
, inserted_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
);
INSERT INTO test (foo, inserted_at) VALUES
('XYZ', '2019-02-14 00:00:00-00')
, ('DEF', '2010-02-14 00:00:00-00')
, ('ABC', '2000-02-14 00:00:00-00');
-- +----+-----+------------------------+
-- | id | foo | inserted_at |
-- +----+-----+------------------------+
-- | 1 | XYZ | 2019-02-13 19:00:00-05 |
-- | 2 | DEF | 2010-02-13 19:00:00-05 |
-- | 3 | ABC | 2000-02-13 19:00:00-05 |
-- +----+-----+------------------------+
ALTER TABLE test ADD COLUMN sid INT;
UPDATE test
SET sid = t.rownum
FROM (SELECT id, row_number() OVER (ORDER BY inserted_at ASC) as rownum
FROM test) t
WHERE test.id = t.id
yields
+----+-----+------------------------+-----+
| id | foo | inserted_at | sid |
+----+-----+------------------------+-----+
| 3 | ABC | 2000-02-13 19:00:00-05 | 1 |
| 2 | DEF | 2010-02-13 19:00:00-05 | 2 |
| 1 | XYZ | 2019-02-13 19:00:00-05 | 3 |
+----+-----+------------------------+-----+
Finally, make sid SERIAL (or, better, an IDENTITY column):
ALTER TABLE test ALTER COLUMN sid SET NOT NULL;
-- IDENTITY fixes certain issue which may arise with SERIAL
ALTER TABLE test ALTER COLUMN sid ADD GENERATED BY DEFAULT AS IDENTITY;
-- (note: "ALTER TABLE test ALTER COLUMN sid SERIAL" is not valid syntax; SERIAL is only accepted in a column definition)
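One caveat with that last step: the identity column's backing sequence starts at 1, which would collide with the sid values just assigned. A sketch to align it with the existing data, assuming the test table above:

```sql
-- Advance the identity sequence past the existing sid values,
-- so the next inserted row gets max(sid) + 1.
SELECT setval(pg_get_serial_sequence('test', 'sid'),
              (SELECT max(sid) FROM test));
```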

Use array of IDs to insert records into table if it does not already exist

I have created a postgresql function that takes a comma separated list of ids as input parameter. I then convert this comma separated list into an array.
CREATE FUNCTION myFunction(csvIDs text)
RETURNS void AS $$
DECLARE ids INT[];
BEGIN
ids = string_to_array(csvIDs,',');
-- INSERT INTO tableA
END; $$
LANGUAGE PLPGSQL;
What I want to do now is to INSERT a record for each of the IDs in the array into TableA, if the ID does not already exist in the table. The new records should have the value field set to 0.
Table is created like this
CREATE TABLE TableA (
id int PRIMARY KEY,
value int
);
Is this possible to do?
You can use the unnest() function to get each element of your array.
create table tableA (id int);
insert into tableA values(13);
select t.ids
from (select unnest(string_to_array('12,13,14,15', ',')::int[]) ids) t
| ids |
| --: |
| 12 |
| 13 |
| 14 |
| 15 |
Now you can check if ids value exists before insert a new row.
CREATE FUNCTION myFunction(csvIDs text)
RETURNS int AS
$myFunction$
DECLARE
r_count int;
BEGIN
insert into tableA
select t.ids
from (select unnest(string_to_array(csvIDs,',')::int[]) ids) t
where not exists (select 1 from tableA where id = t.ids);
GET DIAGNOSTICS r_count = ROW_COUNT;
return r_count;
END;
$myFunction$
LANGUAGE PLPGSQL;
select myFunction('12,13,14,15') as inserted_rows;
| inserted_rows |
| ------------: |
| 3 |
select * from tableA;
| id |
| -: |
| 13 |
| 12 |
| 14 |
| 15 |
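Since id is the primary key in the question's TableA, an alternative sketch replaces the NOT EXISTS anti-join with ON CONFLICT DO NOTHING, which also holds up under concurrent callers, and additionally sets the value column to 0 as the question requires:

```sql
CREATE OR REPLACE FUNCTION myFunction(csvIDs text)
RETURNS int AS
$myFunction$
DECLARE
    r_count int;
BEGIN
    INSERT INTO TableA (id, value)
    SELECT t.ids, 0                      -- new rows get value = 0
    FROM (SELECT unnest(string_to_array(csvIDs, ',')::int[]) AS ids) t
    ON CONFLICT (id) DO NOTHING;         -- relies on the PRIMARY KEY on id
    GET DIAGNOSTICS r_count = ROW_COUNT;
    RETURN r_count;
END;
$myFunction$
LANGUAGE PLPGSQL;
```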

postgresql trigger for filling new column on insert

I have a table Table_A:
\d "Table_A";
Table "public.Table_A"
Column | Type | Modifiers
----------+---------+-------------------------------------------------------------
id | integer | not null default nextval('"Table_A_id_seq"'::regclass)
field1 | bigint |
field2 | bigint |
and now I want to add a new column. So I run:
ALTER TABLE "Table_A" ADD COLUMN "newId" BIGINT DEFAULT NULL;
now I have:
\d "Table_A";
Table "public.Table_A"
Column | Type | Modifiers
----------+---------+-------------------------------------------------------------
id | integer | not null default nextval('"Table_A_id_seq"'::regclass)
field1 | bigint |
field2 | bigint |
newId | bigint |
And I want newId to be filled with the same value as id for new/updated rows.
I created the following function and trigger:
CREATE OR REPLACE FUNCTION autoFillNewId() RETURNS TRIGGER AS $$
BEGIN
NEW."newId" := NEW."id";
RETURN NEW;
END $$ LANGUAGE plpgsql;
CREATE TRIGGER "newIdAutoFill" AFTER INSERT OR UPDATE ON "Table_A" EXECUTE PROCEDURE autoFillNewId();
Now if I insert something with:
INSERT INTO "Table_A" values (97, 1, 97);
newId is not filled:
select * from "Table_A" where id = 97;
id | field1 | field2 | newId
----+----------+----------+-------
97 | 1 | 97 |
Note: I also tried with FOR EACH ROW, following some answers here on SO.
What am I missing?
You need a BEFORE INSERT OR UPDATE ... FOR EACH ROW trigger to make this work:
CREATE TRIGGER "newIdAutoFill"
BEFORE INSERT OR UPDATE ON "Table_A"
FOR EACH ROW EXECUTE PROCEDURE autoFillNewId();
A BEFORE trigger fires before the new row is inserted or updated, so you can still make changes to the field values. An AFTER trigger is useful for side effects, like auditing changes or cascading them to other tables.
By default, triggers are FOR EACH STATEMENT, in which case the NEW record is not defined (because the trigger does not operate on a single row). So you have to specify FOR EACH ROW.
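With the corrected trigger in place, a quick sanity check (assuming the Table_A and autoFillNewId() definitions above):

```sql
-- drop the old AFTER trigger of the same name before recreating it
DROP TRIGGER IF EXISTS "newIdAutoFill" ON "Table_A";
CREATE TRIGGER "newIdAutoFill"
    BEFORE INSERT OR UPDATE ON "Table_A"
    FOR EACH ROW EXECUTE PROCEDURE autoFillNewId();

INSERT INTO "Table_A" (field1, field2) VALUES (2, 98);
-- the BEFORE ROW trigger copied id into "newId"
SELECT id, "newId" FROM "Table_A" ORDER BY id DESC LIMIT 1;
```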