Does anyone know what pg_catalog.setval does?
I just dumped a PostgreSQL database and got lots of lines containing that call. Not sure what it's for.
You might want to check the fine manual:
setval(regclass, bigint) → bigint — set a sequence's current value
Example usage:
# create sequence x;
CREATE SEQUENCE
# select nextval('x');
nextval
---------
1
(1 row)
# select nextval('x');
nextval
---------
2
(1 row)
# select nextval('x');
nextval
---------
3
(1 row)
# select setval('x', 10000);
setval
--------
10000
(1 row)
# select nextval('x');
nextval
---------
10001
(1 row)
# select nextval('x');
nextval
---------
10002
(1 row)
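setval() also has a three-argument form with an is_called flag, and that's the variant pg_dump emits to restore each sequence to its pre-dump position (the sequence name in the last line below is illustrative):
-- With is_called = false, the next nextval() returns the value itself;
-- with true (the default), it returns value + 1:
SELECT setval('x', 500, false);  -- next nextval('x') -> 500
SELECT setval('x', 500, true);   -- next nextval('x') -> 501
-- This is the kind of line you saw in the dump:
SELECT pg_catalog.setval('public.users_id_seq', 11070, true);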
Related
I am looking for the best practice to define triggers and sequences on temporary tables with PostgreSQL.
When creating a temp table, PostgreSQL automatically creates a temporary schema with the name "pg_temp_nnn" (alias: "pg_temp")
It appears that one can create user functions and objects in this temporary schema.
I wonder whether this is actually supported by PostgreSQL or just works by accident.
Tested with various PostgreSQL versions from 10 to 14.
Note: Triggers created on temp tables automatically land in the temp schema because the trigger inherits the schema of its table.
Thanks!
CREATE TEMP TABLE tt1 (pk INTEGER NOT NULL, name VARCHAR(50));
CREATE SEQUENCE pg_temp.tt1_seq START 1;
CREATE FUNCTION pg_temp.tt1_srl() RETURNS TRIGGER AS
$$
DECLARE
  ls BIGINT;
BEGIN
  SELECT INTO ls nextval('pg_temp.tt1_seq');
  IF new.pk IS NULL OR new.pk = 0 THEN
    -- no key supplied: take the next sequence value
    new.pk := ls;
  ELSE
    -- explicit key supplied: keep the sequence ahead of it
    IF new.pk >= ls THEN
      PERFORM setval('pg_temp.tt1_seq', new.pk);
    END IF;
  END IF;
  RETURN new;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER tt1_srlt BEFORE INSERT ON tt1 FOR EACH ROW EXECUTE PROCEDURE pg_temp.tt1_srl();
INSERT INTO tt1 (name) VALUES ('aaaa');
SELECT 'Insert #1:', currval('pg_temp.tt1_seq');
INSERT INTO tt1 VALUES (0,'bbbb');
SELECT 'Insert #2:', currval('pg_temp.tt1_seq');
INSERT INTO tt1 VALUES (100,'cccc');
SELECT 'Insert #3:', currval('pg_temp.tt1_seq');
INSERT INTO tt1 (name) VALUES ('dddd');
SELECT 'Insert #4:', currval('pg_temp.tt1_seq');
SELECT * FROM tt1 ORDER BY pk;
Output:
CREATE TABLE
CREATE SEQUENCE
CREATE FUNCTION
CREATE TRIGGER
INSERT 0 1
?column? | currval
------------+---------
Insert #1: | 1
(1 row)
INSERT 0 1
?column? | currval
------------+---------
Insert #2: | 2
(1 row)
INSERT 0 1
?column? | currval
------------+---------
Insert #3: | 100
(1 row)
INSERT 0 1
?column? | currval
------------+---------
Insert #4: | 101
(1 row)
pk | name
-----+------
1 | aaaa
2 | bbbb
100 | cccc
101 | dddd
(4 rows)
Yes, that works and is supported.
Creating objects in schema pg_temp creates temporary objects that will be removed when the session ends. CREATE TEMP TABLE x (...) is the same as CREATE TABLE pg_temp.x (...).
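A quick way to see the equivalence (a minimal sketch; the numbered temp schema name varies per session):
-- Both statements create session-local tables:
CREATE TEMP TABLE t1 (id int);
CREATE TABLE pg_temp.t2 (id int);
-- Both land in the same per-session temp schema:
SELECT relname, relnamespace::regnamespace AS schema
FROM pg_class
WHERE relname IN ('t1', 't2');
-- t1 | pg_temp_3   (the suffix varies)
-- t2 | pg_temp_3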
I want to update a sequence in Postgres, which I can do semi-manually, like so:
SELECT MAX(id) as highest_id FROM users;
ALTER SEQUENCE users_id_seq RESTART WITH 11071;
In this case I have to take the result of the first query, which turns out to be 11070, and insert it into the next query, incremented by 1. I'd rather have a single query that does all of this in one fell swoop.
The "two fell swoops" approach would be like so, if it worked, but this fails:
ALTER SEQUENCE users_id_seq RESTART WITH (SELECT MAX(id) as highest_id FROM users);
ALTER SEQUENCE users_id_seq INCREMENT BY 1;
Even better would be if I could use + 1 in the first ALTER SEQUENCE statement and skip the second one.
Is there any way to fix this so it works? (Either as two steps or one, but without manual intervention by me.)
You can easily do this with:
SELECT setval('users_id_seq',(SELECT max(id) FROM users));
This sets the sequence to the current value, so that when you call nextval(), you'll get the next one:
edb=# create table foo (id serial primary key, name text);
CREATE TABLE
edb=# insert into foo values (generate_series(1,10000),'johndoe');
INSERT 0 10000
edb=# select * from foo_id_seq ;
last_value | log_cnt | is_called
------------+---------+-----------
1 | 0 | f
(1 row)
edb=# select setval('foo_id_seq',(SELECT max(id) FROM foo));
setval
--------
10000
(1 row)
edb=# select * from foo_id_seq ;
last_value | log_cnt | is_called
------------+---------+-----------
10000 | 0 | t
(1 row)
edb=# insert into foo values (default,'bob');
INSERT 0 1
edb=# select * from foo order by id desc limit 1;
id | name
-------+------
10001 | bob
(1 row)
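One caveat: if users is empty, max(id) is NULL and the setval() call won't position the sequence. A sketch of a variant that handles the empty-table case with the three-argument form (is_called = false makes the next nextval() return the value itself):
SELECT setval('users_id_seq',
              COALESCE(max(id), 1),
              max(id) IS NOT NULL)
FROM users;
-- empty table:     setval(..., 1, false) -> first nextval() returns 1
-- non-empty table: setval(..., max, true) -> next nextval() returns max + 1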
How to preserve spaces in char?
I create two tables:
create table test_1 (a int, b char(10)) ;
create table test_2 (a int, b varchar(255));
and insert one row into test_1, then copy it into test_2 and check the length:
insert into test_1 values (1 ,' ');
insert into test_2 select * from test_1;
select a, length(b) from test_2;
and it returns

 a | length(b)
---+-----------
 1 |         0

I expect the following, like Oracle does:

 a | length(b)
---+-----------
 1 |        10

Is there any option I can try?
There is no way to change this behavior.
Observe also the following:
CREATE TABLE test_1 (a int, b char(10));
INSERT INTO test_1 VALUES (1, 'x');
SELECT length(b) FROM test_1;
length
--------
1
(1 row)
SELECT 'y' || b FROM test_1;
?column?
----------
yx
(1 row)
All this is working as required by the SQL standard.
Do yourself a favor and never use char. If you need the value to always have 10 characters, use a check constraint or a trigger that pads values with blanks.
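A minimal sketch of the check-constraint approach (test_3 is an illustrative name): store the value as varchar, pad it with rpad(), and let the constraint enforce the fixed width:
CREATE TABLE test_3 (
    a int,
    b varchar(10) CHECK (length(b) = 10)  -- reject anything not exactly 10 characters
);
-- rpad() pads with blanks to 10 characters, and varchar keeps them:
INSERT INTO test_3 VALUES (1, rpad('x', 10));
SELECT a, length(b) FROM test_3;  -- length is 10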
Split column data into two columns and insert it into an existing table in PostgreSQL.
Email
-----
xyz#outlook.com,xyz2#outlook.com
I want to segregate it like below:

Email1          | Email2
----------------+------------------
xyz#outlook.com | xyz2#outlook.com
My attempt:
insert into s_mas_enrich (email_1, Email_2)
select *,
split_part(email::TEXT,',', 1) Email_1,
split_part(email::TEXT,',', 2) Email_2
from s_mas_enrich
Use a union query:
insert into Table_Name (Email)
select split_part(email::TEXT,',', 1) from s_mas_enrich
union all
select split_part(email::TEXT,',', 2) from s_mas_enrich;
This assumes that your target table Table_Name only has one destination email column, but you want to include both CSV emails from the s_mas_enrich table.
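If instead the goal is to fill two columns (email_1, email_2) on the existing rows of s_mas_enrich, as the question's own attempt suggests, an UPDATE may be closer to what's wanted. A sketch, assuming those columns already exist on the table:
update s_mas_enrich
set email_1 = split_part(email::text, ',', 1),
    email_2 = split_part(email::text, ',', 2);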
We can use string_to_array to perform the split, as below:
postgres=# select string_to_array('yz#outlook.com,xyz2#outlook.com',',');
string_to_array
-----------------------------------
{yz#outlook.com,xyz2#outlook.com}
(1 row)
postgres=# select (string_to_array('yz#outlook.com,xyz2#outlook.com',','))[1] as email1;
email1
----------------
yz#outlook.com
(1 row)
postgres=# select (string_to_array('yz#outlook.com,xyz2#outlook.com',','))[2] as email2;
email2
------------------
xyz2#outlook.com
(1 row)
postgres=# select (string_to_array('yz#outlook.com,xyz2#outlook.com',','))[1] as email1,(string_to_array('yz#outlook.com,xyz2#outlook.com',','))[2] as email2;
email1 | email2
----------------+------------------
yz#outlook.com | xyz2#outlook.com
(1 row)
postgres=# insert into table_name (email_1, email_2)
select
(string_to_array(email::varchar,','))[1],
(string_to_array(email::varchar,','))[2]
from
s_mas_enrich;
Note: PostgreSQL array indexes start at 1, not 0.
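As an aside, on PostgreSQL 14 or later, split_part() also accepts negative field numbers, counting from the end of the string, which avoids the array detour for the last element:
postgres=# select split_part('yz#outlook.com,xyz2#outlook.com', ',', -1);
    split_part
------------------
 xyz2#outlook.com
(1 row)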
Assuming I have a parent table with child partitions that are created based on the value of a field.
If the value of that field changes, is there a way to have Postgres automatically move the row into the appropriate partition?
For example:
create table my_table(name text)
partition by list (left(name, 1));
create table my_table_a
partition of my_table
for values in ('a');
create table my_table_b
partition of my_table
for values in ('b');
In this case, if I change the value of name in a row from aaa to bbb, how can I get it to automatically move that row into my_table_b?
When I tried to do that (i.e. update my_table set name = 'bbb' where name = 'aaa';), I got the following error:
ERROR: new row for relation "my_table_a" violates partition constraint
As the commit message for the version 10 partitioning feature notes
(https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=f0e44751d7175fa3394da2c8f85e3ceb3cdbfe63),
it doesn't handle updates that cross partition boundaries, so you need to implement the row movement yourself. (PostgreSQL 11 and later move rows across partitions on UPDATE natively, so this workaround only applies to version 10.) Here's your set:
t=# insert into my_table select 'abc';
INSERT 0 1
t=# insert into my_table select 'bcd';
INSERT 0 1
t=# select tableoid::regclass,* from my_table;
tableoid | name
------------+------
my_table_a | abc
my_table_b | bcd
(2 rows)
Here's the rule and the function:
t=# create or replace function puf(_j json, _o text) returns void as $$
begin
  raise info '%', ': '||left(_j->>'name',1);
  -- insert the NEW row (passed in as json) into the partition matching its first letter
  execute format('insert into %I select * from json_populate_record(null::my_table, %L)',
                 'my_table_'||left(_j->>'name',1), _j);
  -- remove the original row from its old partition
  execute format('delete from %I where name = %L', 'my_table_'||left(_o,1), _o);
end;
$$ language plpgsql;
CREATE FUNCTION
t=# create rule psr AS ON update to my_table do instead select puf(row_to_json(n),OLD.name) from (select NEW.*) n;
CREATE RULE
Here's the update:
t=# update my_table set name = 'bbb' where name = 'abc';
INFO: : b
puf
-----
(1 row)
UPDATE 0
Checking the result:
t=# select tableoid::regclass,* from my_table;
tableoid | name
------------+------
my_table_b | bcd
my_table_b | bbb
(2 rows)
Once again:
t=# update my_table set name = 'a1' where name = 'bcd';
INFO: : a
puf
-----
(1 row)
UPDATE 0
t=# select tableoid::regclass,* from my_table;
tableoid | name
------------+------
my_table_a | a1
my_table_b | bbb
(2 rows)
Of course, using json to pass the NEW record looks ugly, and it is ugly indeed. But I did not have time to study the new partitioning feature in version 10 in depth, so I don't know a more elegant way to do this. Hopefully this gives you the general idea of how you can solve the problem, and you will produce neater code.
Update
It's probably a good idea to limit the rule with ON UPDATE TO my_table WHERE left(NEW.name,1) <> left(OLD.name,1) DO INSTEAD, so that the heavy manipulation only runs when the row actually has to move to a different partition.
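A sketch of that narrowed rule, reusing the puf() function from above (same caveats as the original):
create or replace rule psr as on update to my_table
where left(NEW.name,1) <> left(OLD.name,1)
do instead
select puf(row_to_json(n), OLD.name) from (select NEW.*) n;
Updates that keep the first letter unchanged then run as ordinary in-partition updates.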