I'm using PostgreSQL 9.4 with the pgAdmin III 1.20 client. When running an INSERT on a particular table, I get an error message saying: Detail: Key (gid)=(31509) already exists. (SQL state: 23505).
I do not enter a gid value in the command in order to let the sequence do the job:
INSERT INTO geo_section (idnum, insee, ident) VALUES (25, '015233', '') ;
The sequence is defined as this:
CREATE SEQUENCE geo_section_gid_seq
INCREMENT 1
MINVALUE 1
MAXVALUE 9223372036854775807
START 31509
CACHE 1;
ALTER TABLE geo_section_gid_seq
OWNER TO postgres;
The following query returns 34502:
SELECT max(gid) FROM geo_section ;
Therefore, I've tried to alter the sequence so that it starts from 34503:
ALTER SEQUENCE geo_section_gid_seq START 34503 ;
I get a success message saying that the query was executed properly, but the sequence's START parameter stays at 31509...
To change the next value of the sequence, use the setval function:
select setval('geo_section_gid_seq'::regclass, 34503, false)
false: the next value will be 34503
true: the next value will be 34504
You should execute this command:
SELECT setval('geo_section_gid_seq', (SELECT MAX(gid) FROM geo_section), true);
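For completeness: ALTER SEQUENCE ... START only changes the value that a later RESTART would fall back to; it does not move the sequence itself, which is why the command above appeared to do nothing. On 9.4 you can also move the sequence directly with RESTART WITH (a sketch, equivalent in effect to setval(..., 34503, false)):
-- Makes the next nextval('geo_section_gid_seq') return 34503
ALTER SEQUENCE geo_section_gid_seq RESTART WITH 34503;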
I have a table that uses a sequence as a default value in one of the columns. Whenever there is an insert into this table, I would like to insert the value of the sequence into another table. However when doing this, I get null values for the values generated by the sequence. Here is example code:
create sequence usq_test
as bigint
increment by 1
start 1
minvalue 1
maxvalue 9223372036854775807
no cycle
owned by none;
create table test1(
id bigint default nextval('usq_test') primary key,
name text not null);
insert into test1(name) values ('test_name');
insert into test1(name) values ('test_name');
insert into test1(name) values ('test_name');
select * from test1;
| id | name |
|----|-----------|
| 1 | test_name |
| 2 | test_name |
| 3 | test_name |
Now when I add the second table, and define the trigger function:
create table test2(id bigint primary key);
create or replace function ufn_insert_trg() returns trigger as
$$
begin
insert into test2(id)
values (NEW.id);
end;
$$ language plpgsql
;
create trigger utr_test1_insert
after insert
on test1
execute function ufn_insert_trg();
insert into test1(name) values ('test_name');
I get:
[2019-04-04 14:00:45] [23502] ERROR: null value in column "id" violates not-null constraint
[2019-04-04 14:00:45] Detail: Failing row contains (null).
[2019-04-04 14:00:45] Where: SQL statement "insert into test2(id)
[2019-04-04 14:00:45] values (NEW.id)"
[2019-04-04 14:00:45] PL/pgSQL function ufn_insert_trg() line 3 at SQL statement
What am I doing wrong?
The problem is in your trigger:
create trigger utr_test1_insert
after insert
on test1
execute function ufn_insert_trg();
Because you didn't explicitly state whether the trigger is row-level or statement-level, by default it becomes a statement-level trigger.
From the documentation:
FOR EACH ROW
FOR EACH STATEMENT
This specifies whether the trigger function should be fired once for every row affected by the trigger event, or just once per SQL statement. If neither is specified, FOR EACH STATEMENT is the default.
And also per the documentation, the NEW and OLD variables in a statement-level trigger are NULL. This makes sense: any number of rows could have been affected by the statement, so it wouldn't make sense for NEW or OLD to refer to any specific one; you're dealing with a statement rather than with specific rows.
That's why NEW.id is NULL. By changing the trigger to a row-level trigger, it will be fired for every row affected, and the NEW/OLD variables will be set as expected.
So:
create trigger utr_test1_insert
after insert
on test1
for each row
execute function ufn_insert_trg();
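One more detail worth pointing out (a side note, not part of the question's error): as written, ufn_insert_trg() contains no RETURN statement, so once the row-level trigger actually fires, PL/pgSQL will complain that control reached the end of the trigger procedure without RETURN. A sketch of the function with an explicit return (the return value of an AFTER trigger is ignored anyway):
create or replace function ufn_insert_trg() returns trigger as
$$
begin
    insert into test2(id)
    values (NEW.id);
    return null;  -- return value is ignored for AFTER triggers
end;
$$ language plpgsql;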
I am copying (importing) data from the table tmp_header into the table as_solution2. First, idNumber and date need to be checked against the destination table so that repeated values are not copied: if the date and idNumber are found in the destination table, I don't copy the row; if they are not found, the row is copied into as_solution2.
The source table has 800,000 records and the destination table already contains 200,000 records.
Caveat: the id_solution primary key in the as_solution2 table is not serial, so I created a sequence starting from the last id:
v_max_cod_solicitud := (select max(id_solution)+1 from municipalidad.as_solution2);
CREATE SEQUENCE increment START v_max_cod_solicitud;
This raises an error. My tables look like this:
tmp_header (id, cod_cause, idNumber, date_sol(2012-05-12), glosa_desc)
as_solution2 (id_solution, cod_cause, idNumber, date_sol, desc)
CREATE OR REPLACE FUNCTION municipalidad.as_importar()
RETURNS integer AS
$$
DECLARE
v_max_cod_solicitud numeric;
id_solution numeric;
begin
v_max_cod_solicitud := (select max(id_solution)+1 from municipalidad.as_solution2);
CREATE SEQUENCE increment START v_max_cod_solicitud;
INSERT INTO municipalidad.as_solution2(
id_solution,
cod_cause,
idNumber,
date_sol,
desc
)
SELECT
(SELECT nextval('increment')), -- when saving I need to start from the last sequence number
cod_causingreso,
idNumber,
date_sol,
glosa_atenc
FROM municipalidad.tmp_header as tmp_e
WHERE(SELECT count(*)
FROM municipalidad.as_solution2 as s2
WHERE s2.idNumber = tmp_e.idNumber AND s2.date_sol::date = tmp_e.date_sol::date)=0;
drop sequence increment;
return 1;
end
$$
LANGUAGE plpgsql;
thanks in advance
You can brute-force the creation of the sequence with the start parameter by running it as dynamic SQL:
execute format('CREATE SEQUENCE incremento START %s', v_max_cod_solicitud);
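In context, a minimal sketch of how that fits into the function body (everything else stays as in the question). The underlying issue is that PL/pgSQL does not substitute variables into utility/DDL statements such as CREATE SEQUENCE, so the statement has to be built and executed dynamically:
-- v_max_cod_solicitud is assumed to already hold max(id_solution) + 1
EXECUTE format('CREATE SEQUENCE incremento START %s', v_max_cod_solicitud);
-- ... the INSERT ... SELECT nextval('incremento'), ... goes here ...
DROP SEQUENCE incremento;  -- static DDL with no variables needs no dynamic SQL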
Unrelated, but I think you will gain efficiency by changing your INSERT to use an anti-join instead of the WHERE (SELECT count(*) ...) = 0:
INSERT INTO as_solution2(
id_solution,
cod_cause,
idNumber,
date_sol,
description
)
SELECT
nextval('incremento'), -- when saving i need to start from the last sequence number
cod_causingreso,
idNumber,
date_sol,
glosa_atenc
FROM tmp_header as tmp_e
WHERE not exists (
select null
from as_solution2 s2
where
s2.idNumber = tmp_e.idNumber AND
s2.date_sol::date = tmp_e.date_sol::date
)
This will scale very nicely as your dataset increases in size.
Even though it's not listed as a reserved keyword in https://www.postgresql.org/docs/9.5/sql-keywords-appendix.html, the increment in your CREATE SEQUENCE statement might not be allowed here:
CREATE SEQUENCE increment START v_max_cod_solicitud;
As the parser expects this:
CREATE SEQUENCE name [ INCREMENT [ BY ] increment ] ...
it probably thinks you forgot the name.
I have a table with various columns, some of which can be NULL (there is no default value for them).
So I've created a trigger, and each time a new value is inserted into the table I want to fire a pg_notify. (Hint: I'm a total noob about SQL.)
The problem is that if just one of the nullable columns has a NULL value, then the whole payload emitted by pg_notify is null.
A really simple example:
postgres=# create table example(id serial, name varchar);
CREATE TABLE
postgres=# create function new_example() RETURNS trigger AS $$
DECLARE
BEGIN
PERFORM pg_notify('example', 'id: ' || NEW.id || ' name: ' || NEW.name);
RETURN new;
END;
$$ LANGUAGE plpgsql;
CREATE FUNCTION
postgres=# create trigger new_ex AFTER INSERT ON example FOR EACH ROW EXECUTE PROCEDURE new_example();
CREATE TRIGGER
postgres=# LISTEN example;
LISTEN
postgres=# INSERT into example(name) VALUES ('a');
INSERT 0 1
Asynchronous notification "example" with payload "id : 1 name: a" received from server process with PID 22349.
This is correct - I've inserted a new row and the notify is exactly what I expect
postgres=# INSERT into example(name) VALUES (NULL);
INSERT 0 1
Asynchronous notification "example" received from server process with PID 22349.
This does not make any sense. I would expect something like Asynchronous notification "example" with payload "id : 1 name: NULL" received from server process with PID 22349. or Asynchronous notification "example" with payload "id : 1 name:" received from server process with PID 22349.
What am I doing wrong?
When you concatenate strings with the || operator and any of the arguments is NULL, the entire expression evaluates to NULL.
To get around that, use the CONCAT() function, which handles NULL values by treating them as empty strings.
So, to the code:
PERFORM pg_notify('example', CONCAT('id: ', NEW.id, ' name: ', NEW.name));
This will, however, leave that part of the message empty, so you may want to use the COALESCE() function and come up with some marker string that tells you the value is really NULL rather than just missing from the text. One way to do that would be:
PERFORM pg_notify('example', CONCAT('id: ', NEW.id, ' name: ', COALESCE(NEW.name, '[#NULL#]')));
But you would still not be able to distinguish the actual string [#NULL#] from a real NULL value, because they would look the same in the message returned by NOTIFY. See below for a different approach.
Personally, I'd probably go with a slightly different approach: create an audit table and insert the ids together with the schema and table name. This would also serve your purpose across multiple tables, without having to worry about NULL values in various columns.
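A minimal sketch of that audit-table idea (the table and column names here are assumptions, not taken from the question):
-- Hypothetical audit table: the trigger records only where the new row came from.
CREATE TABLE example_audit (
    audit_id    bigserial PRIMARY KEY,
    schema_name text NOT NULL,
    table_name  text NOT NULL,
    row_id      integer NOT NULL
);

CREATE OR REPLACE FUNCTION new_example_audit() RETURNS trigger AS $$
DECLARE
    v_audit_id bigint;
BEGIN
    INSERT INTO example_audit (schema_name, table_name, row_id)
    VALUES (TG_TABLE_SCHEMA, TG_TABLE_NAME, NEW.id)
    RETURNING audit_id INTO v_audit_id;
    -- The payload carries only the audit id, so NULLs in other columns
    -- of the source row no longer affect the notification.
    PERFORM pg_notify('example', v_audit_id::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
The listener then looks up the audit row (and, via row_id, the original row) when the notification arrives.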
I am trying to create a trigger function in PostgreSQL that should check for records with the same id (i.e. compare by id with existing records) before inserting or updating a record. If the function finds records with the same id, then the existing record's time_dead should be set. Let me explain with this example:
INSERT INTO persons (id, time_create, time_dead, name)
VALUES (1, 'now();', ' ', 'james');
I want to have a table like this:
| id | time_create | time_dead | name  |
|----|-------------|-----------|-------|
| 1  | 06:12       |           | henry |
| 2  | 07:12       |           | muka  |
id 1 had a time_create of 06:12 but time_dead was NULL, and the same goes for id 2. But the next time I run the insert query with the same id and a different name, I should get a table like this:
| id | time_create | time_dead | name  |
|----|-------------|-----------|-------|
| 1  | 06:12       | 14:35     | henry |
| 2  | 07:12       |           | muka  |
| 1  | 14:35       |           | waks  |
henry and waks share the same id 1. After running the insert query, henry's time_dead is equal to waks' time_create. If another entry were made with id 1, let's say for james, then waks' time_dead would be set to james' time_create, and so on.
So far my function looks like this. But it's not working:
CREATE FUNCTION tr_function() RETURNS trigger AS '
BEGIN
IF tg_op = ''UPDATE'' THEN
UPDATE persons
SET time_dead = NEW.time_create
Where
id = NEW.id
AND time_dead IS NULL
;
END IF;
RETURN new;
END
' LANGUAGE plpgsql;
CREATE TRIGGER sofgr BEFORE INSERT OR UPDATE
ON persons FOR each ROW
EXECUTE PROCEDURE tr_function();
When I run this, it says time_dead is not supposed to be NULL. Is there a way to write a trigger function that automatically enters the time upon insert or update, but gives me results like the tables above when I run a select query?
What am I doing wrong?
My two tables:
CREATE TABLE temporary_object
(
id integer NOT NULL,
time_create timestamp without time zone NOT NULL,
time_dead timestamp without time zone,
PRIMARY KEY (id, time_create)
);
CREATE TABLE persons
(
name text
)
INHERITS (temporary_object);
Trigger function
CREATE FUNCTION tr_function()
RETURNS trigger AS
$func$
BEGIN
UPDATE persons p
SET time_dead = NEW.time_create
WHERE p.id = NEW.id
AND p.time_dead IS NULL
AND p.name <> NEW.name;
RETURN NEW;
END
$func$ LANGUAGE plpgsql;
Your trigger function only handled the UPDATE case (IF tg_op = ''UPDATE''), so nothing happened on INSERT. But there is no need to check TG_OP at all, since the trigger only fires ON INSERT OR UPDATE, assuming you don't use the same function in other triggers. So I removed the cruft.
Note that you don't have to escape single quotes inside a dollar-quoted string.
Also added:
AND p.name <> NEW.name
... to prevent INSERT's from terminating themselves instantly (and causing an infinite recursion). This assumes that a row can never succeed another row with the same name.
Aside: the setup is still not bullet-proof. UPDATEs could mess with your system: I could keep updating the id of a row, thereby terminating other rows but not leaving a successor. Consider disallowing updates on id. Of course, that would make the trigger ON UPDATE pointless, and I doubt you need it to begin with.
now() as DEFAULT
If you want to use now() as default for time_create just make it so. Read the manual about setting a column DEFAULT. Then skip time_create in INSERTs and it is filled automatically.
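For example, a sketch (assuming time_create is a timestamp column and that the default should live on persons itself):
ALTER TABLE persons ALTER COLUMN time_create SET DEFAULT now();
-- From now on, INSERTs that omit time_create get the current timestamp automatically:
INSERT INTO persons (id, name) VALUES (3, 'james');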
If you want to force it (prevent everyone from entering a different value) create a trigger ON INSERT or add the following at the top of your trigger:
IF TG_OP = 'INSERT' THEN
NEW.time_create := now(); -- type timestamp or timestamptz!
RETURN NEW;
END IF;
This assumes your misleadingly named column "time_create" is actually a timestamp type.
That would force the current timestamp for new rows.
I have two tables. Let's say tblA and tblB.
I need to insert a row in tblA and use the returned id as a value to be inserted as one of the columns in tblB.
I tried to find this in the documentation but couldn't. Is it possible to write a statement (intended to be used as a prepared statement) like
INSERT INTO tblB VALUES
(DEFAULT, (INSERT INTO tblA (DEFAULT, 'x') RETURNING id), 'y')
like we do for SELECT?
Or should I do this by creating a stored procedure? I'm not sure if I can create a prepared statement out of a stored procedure.
Please advise.
Regards,
Mayank
You'll need to wait for PostgreSQL 9.1 for this:
with
ids as (
insert ...
returning id
)
insert ...
from ids;
In the meantime, you need to use plpgsql, a temporary table, or some extra logic in your app...
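For reference, a sketch of how that 9.1-style data-modifying CTE could look with the tables from the question (the column names other than id are assumptions):
WITH new_a AS (
    INSERT INTO tblA (x_val)      -- hypothetical column; tblA.id comes from its default
    VALUES ('x')
    RETURNING id
)
INSERT INTO tblB (tbla_id, y_val) -- hypothetical columns; tblB.id comes from its default
SELECT id, 'y'
FROM new_a;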
This is possible with 9.0 and the new DO for anonymous blocks:
do $$
declare
new_id integer;
begin
insert into foo1 (id) values (default) returning id into new_id;
insert into foo2 (id) values (new_id);
end$$;
This can be executed as a single statement. I haven't tried creating a PreparedStatement out of that though.
Edit
Another approach would be to simply do it in two steps: first run the insert into tblA using the RETURNING clause, get the generated value through JDBC, then fire the second insert. Something like this:
PreparedStatement stmt_1 = con.prepareStatement("INSERT INTO tblA VALUES (DEFAULT, ?) returning id");
stmt_1.setString(1, "x");
stmt_1.execute(); // important! Do not use executeUpdate()!
ResultSet rs = stmt_1.getResultSet();
long newId = -1;
if (rs.next()) {
newId = rs.getLong(1);
}
PreparedStatement stmt_2 = con.prepareStatement("INSERT INTO tblB VALUES (default,?,?)");
stmt_2.setLong(1, newId);
stmt_2.setString(2, "y");
stmt_2.executeUpdate();
You can do this in two inserts, using currval() to retrieve the foreign key (provided that key is serial):
create temporary table tb1a (id serial primary key, t text);
create temporary table tb1b (id serial primary key,
tb1a_id int references tb1a(id),
t text);
begin;
insert into tb1a values (DEFAULT, 'x');
insert into tb1b values (DEFAULT, currval('tb1a_id_seq'), 'y');
commit;
The result:
select * from tb1a;
id | t
----+---
3 | x
(1 row)
select * from tb1b;
id | tb1a_id | t
----+---------+---
2 | 3 | y
(1 row)
Using currval in this way is safe whether in or outside of a transaction. From the Postgresql 8.4 documentation:
currval
Return the value most recently obtained by nextval for this sequence in the current session. (An error is reported if nextval has never been called for this sequence in this session.) Because this is returning a session-local value, it gives a predictable answer whether or not other sessions have executed nextval since the current session did.
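As a side note (not part of the original answer), the sequence name doesn't have to be hard-coded; pg_get_serial_sequence() can look it up from the table and column:
insert into tb1b values (DEFAULT, currval(pg_get_serial_sequence('tb1a', 'id')), 'y');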
You may want to use an AFTER INSERT trigger for that. Something along the lines of:
create function dostuff() returns trigger as $$
begin
insert into table_b(field_1, field_2) values ('foo', NEW.id);
return new; --values returned by after triggers are ignored, anyway
end;
$$ language 'plpgsql';
create trigger trdostuff after insert on table_name for each row execute procedure dostuff();
AFTER INSERT is needed because the id has to exist before you can reference it. Hope this helps.
Edit
A trigger is called in the same "block" as the command that triggered it, even when not using transactions; in other words, it effectively becomes part of that command. Therefore, there is no risk of something changing the referenced id between the inserts.
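To tie this back to the question's tables, a sketch adapting the trigger above (the non-id column names are assumptions):
create table tblA (id serial primary key, a_val text);
create table tblB (id serial primary key,
                   tbla_id int references tblA(id),
                   b_val text);

create or replace function dostuff() returns trigger as $$
begin
    insert into tblB (tbla_id, b_val) values (NEW.id, 'y');
    return new;
end;
$$ language 'plpgsql';

create trigger trdostuff after insert on tblA
    for each row execute procedure dostuff();

-- A single insert into tblA now also produces the matching row in tblB:
insert into tblA (a_val) values ('x');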