Postgres 10: do rows automatically move between partitions? - postgresql

Assume I have a parent table with child partitions that are created based on the value of a field.
If the value of that field changes, is there a way to have Postgres automatically move the row into the appropriate partition?
For example:
create table my_table(name text)
partition by list (left(name, 1));
create table my_table_a
partition of my_table
for values in ('a');
create table my_table_b
partition of my_table
for values in ('b');
In this case, if I change the value of name in a row from aaa to bbb, how can I get Postgres to automatically move that row into my_table_b?
When I try to do that (i.e. update my_table set name = 'bbb' where name = 'aaa';), I get the following error:
ERROR: new row for relation "my_table_a" violates partition constraint

https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=f0e44751d7175fa3394da2c8f85e3ceb3cdbfe63
As noted in the commit above, the partitioning feature does not handle updates that cross partition boundaries, so you have to build that row movement yourself. Here is your setup with some sample data:
t=# insert into my_table select 'abc';
INSERT 0 1
t=# insert into my_table select 'bcd';
INSERT 0 1
t=# select tableoid::regclass,* from my_table;
tableoid | name
------------+------
my_table_a | abc
my_table_b | bcd
(2 rows)
Here are the function and the rule:
t=# create or replace function puf(_j json, _o text) returns void as $$
begin
  raise info '%', ': '||left(_j->>'name',1);
  -- insert the NEW row (passed as json) into the partition matching its new key
  execute format('insert into %I select * from json_populate_record(null::my_table, %L)', 'my_table_'||left(_j->>'name',1), _j);
  -- remove the original row from the partition it used to live in
  execute format('delete from %I where name = %L', 'my_table_'||left(_o,1), _o);
end;
$$ language plpgsql;
CREATE FUNCTION
t=# create rule psr AS ON update to my_table do instead select puf(row_to_json(n),OLD.name) from (select NEW.*) n;
CREATE RULE
Here is the update:
t=# update my_table set name = 'bbb' where name = 'abc';
INFO: : b
puf
-----
(1 row)
UPDATE 0
Checking the result:
t=# select tableoid::regclass,* from my_table;
tableoid | name
------------+------
my_table_b | bcd
my_table_b | bbb
(2 rows)
once again:
t=# update my_table set name = 'a1' where name = 'bcd';
INFO: : a
puf
-----
(1 row)
UPDATE 0
t=# select tableoid::regclass,* from my_table;
tableoid | name
------------+------
my_table_a | a1
my_table_b | bbb
(2 rows)
Of course, using json to pass the NEW record looks ugly, and it is ugly indeed. But I have not had time to study the new partitioning feature of 10, so I don't know a more elegant way to do this. Hopefully this gives you the general idea of how you can solve the problem, and you can produce neater code from it.
update
It is probably a good idea to limit such a rule to ON update to my_table where left(NEW.name,1) <> left(OLD.name,1) do instead ..., so that the heavy manipulation only runs when the row actually has to change partition.
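A sketch of that restricted rule, assuming the puf() function from above (untested; rows whose first letter does not change are then handled by the ordinary UPDATE path):
-- recreate the rule so it only fires when the partition key (first letter) changes
drop rule psr on my_table;
create rule psr as on update to my_table
  where left(NEW.name,1) <> left(OLD.name,1)
  do instead
  select puf(row_to_json(n), OLD.name) from (select NEW.*) n;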

Related

PostgreSQL and temp table triggers

I am looking for the best practice to define triggers and sequences on temporary tables with PostgreSQL.
When creating a temp table, PostgreSQL automatically creates a temporary schema with the name "pg_temp_nnn" (alias: "pg_temp").
It appears that one can create user functions and objects in this temporary schema.
I wonder if this is really valid SQL for PostgreSQL or just working by accident?
Tested with various PostgreSQL versions from 10 to 14.
Note: Triggers created on temp tables automatically land in the temp schema because the trigger inherits the schema of its table.
Thanks!
CREATE TEMP TABLE tt1 (pk INTEGER NOT NULL, name VARCHAR(50));
CREATE SEQUENCE pg_temp.tt1_seq START 1;
CREATE FUNCTION pg_temp.tt1_srl() RETURNS TRIGGER AS
'DECLARE ls BIGINT;
BEGIN
  SELECT INTO ls nextval(''pg_temp.tt1_seq'');
  IF new.pk ISNULL OR new.pk=0 THEN
    new.pk:=ls;
  ELSE
    IF new.pk>=ls THEN
      PERFORM setval(''pg_temp.tt1_seq'',new.pk);
    END IF;
  END IF;
  RETURN new;
END;'
LANGUAGE 'plpgsql';
CREATE TRIGGER tt1_srlt BEFORE INSERT ON tt1 FOR EACH ROW EXECUTE PROCEDURE pg_temp.tt1_srl();
INSERT INTO tt1 (name) VALUES ('aaaa');
SELECT 'Insert #1:', currval('pg_temp.tt1_seq');
INSERT INTO tt1 VALUES (0,'bbbb');
SELECT 'Insert #2:', currval('pg_temp.tt1_seq');
INSERT INTO tt1 VALUES (100,'cccc');
SELECT 'Insert #3:', currval('pg_temp.tt1_seq');
INSERT INTO tt1 (name) VALUES ('dddd');
SELECT 'Insert #4:', currval('pg_temp.tt1_seq');
SELECT * FROM tt1 ORDER BY pk;
Output:
CREATE TABLE
CREATE SEQUENCE
CREATE FUNCTION
CREATE TRIGGER
INSERT 0 1
?column? | currval
------------+---------
Insert #1: | 1
(1 row)
INSERT 0 1
?column? | currval
------------+---------
Insert #2: | 2
(1 row)
INSERT 0 1
?column? | currval
------------+---------
Insert #3: | 100
(1 row)
INSERT 0 1
?column? | currval
------------+---------
Insert #4: | 101
(1 row)
pk | name
-----+------
1 | aaaa
2 | bbbb
100 | cccc
101 | dddd
(4 rows)
Yes, that works and is supported.
Creating objects in schema pg_temp creates temporary objects that will be removed when the session ends. CREATE TEMP TABLE x (...) is the same as CREATE TABLE pg_temp.x (...).
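For illustration, a minimal sketch of that equivalence (the table names x and y are arbitrary):
-- both create a temporary table in the session's pg_temp schema,
-- dropped automatically at session end
CREATE TEMP TABLE x (id integer);
CREATE TABLE pg_temp.y (id integer);
-- both are marked temporary in the catalog ('t' = temporary)
SELECT relname, relpersistence
FROM pg_class
WHERE relname IN ('x', 'y');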

How can I automatically fix my Postgres sequence?

I want to update a sequence in Postgres, which I can do semi-manually, like so:
SELECT MAX(id) as highest_id FROM users;
ALTER SEQUENCE users_id_seq RESTART WITH 11071;
In this case I have to take the result of the first query, which turns out to be 11070, and insert it into the next query, incremented by 1. I'd rather have a single query that does all of this in one fell swoop.
The "two fell swoops" approach would be like so, if it worked, but this fails:
ALTER SEQUENCE users_id_seq RESTART WITH (SELECT MAX(id) as highest_id FROM users);
ALTER SEQUENCE users_id_seq INCREMENT BY 1;
Even better would be if I could use + 1 in the first ALTER SEQUENCE statement and skip the second one.
Is there any way to fix this so it works? (Either as two steps or one, but without manual intervention by me.)
You can easily do this with:
SELECT setval('users_id_seq',(SELECT max(id) FROM users));
This sets the sequence to the current value, so that when you call nextval(), you'll get the next one:
edb=# create table foo (id serial primary key, name text);
CREATE TABLE
edb=# insert into foo values (generate_series(1,10000),'johndoe');
INSERT 0 10000
edb=# select * from foo_id_seq ;
last_value | log_cnt | is_called
------------+---------+-----------
1 | 0 | f
(1 row)
edb=# select setval('foo_id_seq',(SELECT max(id) FROM foo));
setval
--------
10000
(1 row)
edb=# select * from foo_id_seq ;
last_value | log_cnt | is_called
------------+---------+-----------
10000 | 0 | t
(1 row)
edb=# insert into foo values (default,'bob');
INSERT 0 1
edb=# select * from foo order by id desc limit 1;
id | name
-------+------
10001 | bob
(1 row)
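If the table can be empty, max(id) comes back NULL, which is not what you want to feed to setval(); a common variant (a sketch using setval's three-argument form, assuming the same users table) guards against that:
-- third argument false means the next nextval() returns the value itself,
-- so an empty table starts the sequence at 1
SELECT setval('users_id_seq',
              COALESCE((SELECT max(id) FROM users), 1),
              (SELECT max(id) IS NOT NULL FROM users));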

How does postgres db_link_build_sql_insert work?

I can't seem to figure out how this function is supposed to work for pushing data from one table in my local database to another table in a separate database. I have looked at the documentation and still don't understand the example given. I am working with Postgres 9.2, so dblink is available.
Here is some example code where I am creating a test database and pushing values from my local table to the table on the test database. Can someone please fill in the missing part of the dblink_build_sql_insert function?
--drop database if exists testdb;
--create database testdb;
drop table if exists t;
create table t ( a integer, b text);
insert into t values (1,'10'), (2,'10'), (3,'30'), (4,'30');
create extension if not exists dblink;
select dblink_connect('dbname=testdb');
select dblink('drop table if exists t;');
select dblink('create table t ( a integer, b text);');
select dblink_build_sql_insert('t', ????);
select * from dblink('select * from t;') as (a integer, b text);
from docs:
Synopsis
dblink_build_sql_insert(text relname,
int2vector primary_key_attnums,
integer num_primary_key_atts,
text[] src_pk_att_vals_array,
text[] tgt_pk_att_vals_array) returns text
You don't have a PK specified, so I assume it is on (a), which means primary_key_attnums = 1 (PK on the first column) and num_primary_key_atts = 1 (a one-column PK). The remaining two values are set to the same thing, to prepare a statement that "replicates" the row with a = 1 as-is:
b=# select dblink_build_sql_insert('t',
'1'::int2vector,
1::int2, -- num of pkey values
'{1}'::text[], -- old pkey
'{1}'::text[] -- new pkey
)
;
dblink_build_sql_insert
-------------------------------------
INSERT INTO t(a,b) VALUES('1','10')
(1 row)
b=# select dblink($$INSERT INTO t(a,b) VALUES('1','10')$$);
dblink
----------------
("INSERT 0 1")
(1 row)
b=# select * from dblink('select * from t;') as (a integer, b text);
a | b
---+----
1 | 10
(1 row)
b=# select dblink_disconnect();
dblink_disconnect
-------------------
OK
(1 row)
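To push every local row across (rather than just the one with a = 1), one possible sketch is to loop over the local primary keys, build each INSERT with dblink_build_sql_insert(), and run it on the remote side with dblink_exec(); this assumes the unnamed connection opened with dblink_connect() above is still available:
DO $$
DECLARE
    r record;
    stmt text;
BEGIN
    FOR r IN SELECT a FROM t LOOP
        -- build an INSERT statement for the row whose PK equals r.a
        stmt := dblink_build_sql_insert('t', '1'::int2vector, 1::int2,
                                        ARRAY[r.a::text], ARRAY[r.a::text]);
        -- execute it on the remote database over the unnamed connection
        PERFORM dblink_exec(stmt);
    END LOOP;
END;
$$;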

How to Auto Increment Alpha-Numeric value in postgresql?

I am using "PostgreSQL 9.3.5"
I have a Table(StackOverflowTable) with columns (SoId,SoName,SoDob).
I want a Sequence generator for column SoId which is a Alpha-numeric value.
I want to auto increment a Alpha-Numeric Value in postgresql.
For eg : SO10001, SO10002, SO10003.....SO99999.
Edit:
If tomorrow i need to generate a Sequence which can be as SO1000E100, SO1000E101,... and which has a good performance. Then what is the best solution!
Use a sequence and a default value for the id:
postgres=# CREATE SEQUENCE xxx;
CREATE SEQUENCE
postgres=# SELECT setval('xxx', 10000);
setval
--------
10000
(1 row)
postgres=# CREATE TABLE foo(id text PRIMARY KEY
CHECK (id ~ '^SO[0-9]+$' )
DEFAULT 'SO' || nextval('xxx'),
b integer);
CREATE TABLE
postgres=# insert into foo(b) values(10);
INSERT 0 1
postgres=# insert into foo(b) values(20);
INSERT 0 1
postgres=# SELECT * FROM foo;
id | b
---------+----
SO10001 | 10
SO10002 | 20
(2 rows)
You can define the default value of your column as the concatenation of 'S' and a normal sequence, as below:
CREATE SEQUENCE sequence_for_alpha_numeric
INCREMENT 1
MINVALUE 1
MAXVALUE 9223372036854775807
START 1
CACHE 1;
CREATE TABLE table1
(
alpha_num_auto_increment_col character varying NOT NULL,
sample_data_col character varying,
CONSTRAINT table1_pkey PRIMARY KEY (alpha_num_auto_increment_col)
)
;
ALTER TABLE table1 ALTER COLUMN alpha_num_auto_increment_col SET DEFAULT TO_CHAR(nextval('sequence_for_alpha_numeric'::regclass),'"S"fm000000');
Test:
insert into table1 (sample_data_col) values ('test1');
insert into table1 (sample_data_col) values ('test2');
insert into table1 (sample_data_col) values ('test3');
select * from table1;
alpha_num_auto_increment_col | sample_data_col
------------------------------+-----------------
S000001 | test1
S000002 | test2
S000003 | test3
(3 rows)
How to use sequences
How to use to_char function.
Create a sequence like below:
CREATE SEQUENCE seq_autoid
INCREMENT 1
MINVALUE 1
MAXVALUE 9223372036854775807
START 10000;
Create a function to generate the alpha-numeric id:
create or replace function auto_id () returns varchar as $$
select 'SO'||nextval('seq_autoid')
$$ language sql;
and try it with this example table:
create table AAA(id text, namez text);
insert into AAA values (auto_id(),'MyName');
insert into AAA values (auto_id(),'MyName1');
insert into AAA values (auto_id(),'MyName2');
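If you would rather not call auto_id() explicitly in every INSERT, the same function can also be attached as a column default (a small sketch building on the sequence and function above; the table name BBB is just for illustration):
-- the default calls auto_id() for every row that omits id
create table BBB(id text default auto_id(), namez text);
insert into BBB(namez) values ('MyName3');
-- id is filled in automatically, e.g. SO10004
select * from BBB;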

How can I get a result by using "execute 'delete from table1'"?

When I use the EXECUTE command to run a SQL statement, I want to get its result.
As we know, I can get a total count into the variable sc when I use:
execute 'select count(*) from table1' into sc;
But how can I get a result by using:
execute 'delete from table1'?
When I use INTO, it fails with:
ERROR: "INTO used with a command that cannot return data"
execute 'WITH row_deleted AS (DELETE FROM table1 RETURNING *) SELECT count(*) FROM row_deleted' into c;
You can use it inside a PL/pgSQL function as follows:
--Drop the table and the function if they exist:
DROP TABLE IF EXISTS table1;
DROP FUNCTION if exists _deleted_rows();
--Create the table for the example:
CREATE TABLE table1
(
row_id serial NOT NULL,
col1 character varying,
CONSTRAINT table1_pkey PRIMARY KEY (row_id)
);
--Insert some rows:
insert into table1 (col1) values ('test1');
insert into table1 (col1) values ('test2');
insert into table1 (col1) values ('test3');
--Create the function that counts the number of deleted rows of table1:
CREATE OR REPLACE FUNCTION _deleted_rows()
RETURNS character varying AS
$BODY$declare
nbr_deleted integer;
begin
execute 'WITH row_deleted AS (DELETE FROM table1 RETURNING *) SELECT count(*) FROM row_deleted' into nbr_deleted;
return (nbr_deleted);
end;$BODY$
LANGUAGE plpgsql VOLATILE;
Test that function (I had problems building the schema on SQL Fiddle):
select * from _deleted_rows();
_deleted_rows
---------------
3
(1 row)
Execute command
DELETE command
It's a little unclear to me what you are trying to do, but you should be able to use "RETURNING". Here I am just returning the rows that were deleted:
CREATE TEMP TABLE foo(id int, description text);
INSERT INTO foo VALUES
(1, 'HELLO'),
(2, 'WORLD');
DELETE FROM foo returning *;
+----+-------------+
| id | description |
+----+-------------+
| 1 | HELLO |
| 2 | WORLD |
+----+-------------+
(2 rows)
Also, if you need them moved "into" a table (for example), you could do something like:
DROP TABLE IF EXISTS foo;
DROP TABLE IF EXISTS deleted_foo;
CREATE TEMP TABLE foo(id int, description text);
INSERT INTO foo VALUES
(1, 'HELLO'),
(2, 'WORLD');
CREATE TEMP TABLE deleted_foo(id int, description text);
WITH x AS (DELETE FROM foo RETURNING *)
INSERT INTO deleted_foo
SELECT * FROM x;
SELECT * FROM deleted_foo;
+----+-------------+
| id | description |
+----+-------------+
| 1 | HELLO |
| 2 | WORLD |
+----+-------------+
(2 rows)
Assuming that you are doing this from inside a PL/pgSQL function, you could also use GET DIAGNOSTICS with ROW_COUNT. For example:
GET DIAGNOSTICS integer_var = ROW_COUNT;
This would give you the number of rows that were deleted.
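A minimal sketch of that approach, reusing the table1 from the function example above (the function name is just illustrative):
CREATE OR REPLACE FUNCTION _deleted_rows_count()
RETURNS integer AS
$BODY$
DECLARE
  nbr_deleted integer;
BEGIN
  -- run the dynamic DELETE, then read how many rows it affected
  EXECUTE 'DELETE FROM table1';
  GET DIAGNOSTICS nbr_deleted = ROW_COUNT;
  RETURN nbr_deleted;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;

SELECT _deleted_rows_count();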