PostgreSQL function for last inserted ID - postgresql
In PostgreSQL, how do I get the last id inserted into a table?
In MS SQL there is SCOPE_IDENTITY().
Please do not advise me to use something like this:
select max(id) from table
( tl;dr : goto option 3: INSERT with RETURNING )
Recall that in PostgreSQL there is no "id" concept for tables, just sequences (which are typically, but not necessarily, used as default values for surrogate primary keys, via the SERIAL pseudo-type).
If you are interested in getting the id of a newly inserted row, there are several ways:
Option 1: CURRVAL(<sequence name>);.
For example:
INSERT INTO persons (lastname,firstname) VALUES ('Smith', 'John');
SELECT currval('persons_id_seq');
The name of the sequence must be known here, which is somewhat arbitrary; in this example we assume that the table persons has an id column created with the SERIAL pseudo-type. To avoid relying on the naming convention, and for cleaner code, you can use pg_get_serial_sequence instead:
INSERT INTO persons (lastname,firstname) VALUES ('Smith', 'John');
SELECT currval(pg_get_serial_sequence('persons','id'));
Caveat: currval() only works after an INSERT (which has executed nextval()), in the same session.
Option 2: LASTVAL();
This is similar to the previous option, except that you don't need to specify the sequence name: it looks at the most recently modified sequence (always inside your session; same caveat as above).
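A minimal sketch of the same insert using lastval(), assuming the persons table from the examples above:

```sql
INSERT INTO persons (lastname, firstname) VALUES ('Smith', 'John');
-- No sequence name needed; returns the value just generated by the INSERT above
SELECT lastval();
```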
Both CURRVAL and LASTVAL are completely concurrency-safe. Sequences in PostgreSQL are designed so that different sessions do not interfere, so there is no risk of race conditions (if another session inserts another row between my INSERT and my SELECT, I still get my correct value).
However, they do have a subtle potential problem. If the database has a TRIGGER (or RULE) that, on insertion into the persons table, makes extra insertions into other tables... then LASTVAL will probably give us the wrong value. The problem can even happen with CURRVAL, if the extra insertions are made into the same persons table (this is less common, but the risk still exists).
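To illustrate the caveat, here is a hypothetical audit trigger (the audit_log table and log_person() function are made up for this sketch) that would make lastval() return the audit table's sequence value instead of the persons id:

```sql
CREATE TABLE audit_log (id serial PRIMARY KEY, note text);

CREATE FUNCTION log_person() RETURNS trigger AS $$
BEGIN
    -- This insert runs nextval() on audit_log_id_seq behind the scenes
    INSERT INTO audit_log (note) VALUES ('inserted ' || NEW.lastname);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER person_audit AFTER INSERT ON persons
    FOR EACH ROW EXECUTE PROCEDURE log_person();

INSERT INTO persons (lastname, firstname) VALUES ('Smith', 'John');
SELECT lastval();  -- returns audit_log's id, NOT the persons id!
```

INSERT ... RETURNING id is immune to this, because it reports the id of the row the statement itself inserted.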
Option 3: INSERT with RETURNING
INSERT INTO persons (lastname,firstname) VALUES ('Smith', 'John') RETURNING id;
This is the cleanest, most efficient, and safest way to get the id. It has none of the risks of the previous options.
Drawbacks? Almost none: you might need to modify the way you call your INSERT statement (in the worst case, perhaps your API or DB layer does not expect an INSERT to return a value); it's not standard SQL (who cares); and it's only available since PostgreSQL 8.2 (Dec 2006...).
Conclusion: if you can, go for option 3. Otherwise, prefer option 1.
Note: all these methods are useless if you intend to get the last inserted id globally (not necessarily by your session). For this, you must resort to SELECT max(id) FROM table (of course, this will not read uncommitted inserts from other transactions).
Conversely, you should never use SELECT max(id) FROM table instead of one of the three options above to get the id just generated by your INSERT statement, because (apart from performance) it is not concurrency-safe: between your INSERT and your SELECT, another session might have inserted another record.
See the RETURNING clause of the INSERT statement. Basically, the INSERT doubles as a query and gives you back the value that was inserted.
Leonbloy's answer is quite complete. I would only add the special case in which one needs to get the last inserted value from within a PL/pgSQL function where OPTION 3 doesn't fit exactly.
For example, if we have the following tables:
CREATE TABLE person(
id serial,
lastname character varying (50),
firstname character varying (50),
CONSTRAINT person_pk PRIMARY KEY (id)
);
CREATE TABLE client (
id integer,
CONSTRAINT client_pk PRIMARY KEY (id),
CONSTRAINT fk_client_person FOREIGN KEY (id)
REFERENCES person (id) MATCH SIMPLE
);
If we need to insert a client record we must refer to a person record. But let's say we want to devise a PL/pgSQL function that inserts a new record into client but also takes care of inserting the new person record. For that, we must use a slight variation of leonbloy's OPTION 3:
INSERT INTO person(lastname, firstname)
VALUES (lastn, firstn)
RETURNING id INTO [new_variable];
Note that there are two INTO clauses (the INSERT INTO and the RETURNING ... INTO). The PL/pgSQL function would therefore be defined as:
CREATE OR REPLACE FUNCTION new_client(lastn character varying, firstn character varying)
RETURNS integer AS
$BODY$
DECLARE
v_id integer;
BEGIN
-- Inserts the new person record and retrieves the last inserted id
INSERT INTO person(lastname, firstname)
VALUES (lastn, firstn)
RETURNING id INTO v_id;
-- Inserts the new client and references the inserted person
INSERT INTO client(id) VALUES (v_id);
-- Return the new id so we can use it in a select clause or return the new id into the user application
RETURN v_id;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;
Now we can insert the new data using:
SELECT new_client('Smith', 'John');
or
SELECT * FROM new_client('Smith', 'John');
And we get the newly created id.
new_client
integer
----------
1
You can use the RETURNING clause in an INSERT statement, like the following:
wgzhao=# create table foo(id int,name text);
CREATE TABLE
wgzhao=# insert into foo values(1,'wgzhao') returning id;
id
----
1
(1 row)
INSERT 0 1
wgzhao=# insert into foo values(3,'wgzhao') returning id;
id
----
3
(1 row)
INSERT 0 1
wgzhao=# create table bar(id serial,name text);
CREATE TABLE
wgzhao=# insert into bar(name) values('wgzhao') returning id;
id
----
1
(1 row)
INSERT 0 1
wgzhao=# insert into bar(name) values('wgzhao') returning id;
id
----
2
(1 row)
INSERT 0 1
The other answers don't show how one might use the value(s) returned by RETURNING. Here's an example where the returned value is inserted into another table.
WITH inserted_id AS (
INSERT INTO tbl1 (col1)
VALUES ('foo') RETURNING id
)
INSERT INTO tbl2 (other_id)
VALUES ((select id from inserted_id));
See the example below:
CREATE TABLE users (
-- make the "id" column a primary key; this also creates
-- a UNIQUE constraint and a b+-tree index on the column
id SERIAL PRIMARY KEY,
name TEXT,
age INT4
);
INSERT INTO users (name, age) VALUES ('Mozart', 20);
Then, to get the last inserted id, use this (for table "users" with serial column "id"):
SELECT currval(pg_get_serial_sequence('users', 'id'));
SELECT CURRVAL(pg_get_serial_sequence('my_tbl_name','id_col_name'))
You need to supply the table name and column name of course.
This will be for the current session / connection
http://www.postgresql.org/docs/8.3/static/functions-sequence.html
For those who need the whole inserted record, you can add
returning *
to the end of your query to get the entire row, including the id.
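For example, a sketch using the persons table from earlier on this page:

```sql
INSERT INTO persons (lastname, firstname)
VALUES ('Smith', 'John')
RETURNING *;  -- returns the full inserted row: id, lastname, firstname
```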
You can use RETURNING id after insert query.
INSERT INTO distributors (id, name) VALUES (DEFAULT, 'ALI') RETURNING id;
and result:
id
----
1
In the above example, id is an auto-increment field.
The better way is to use INSERT with RETURNING. Though similar answers already exist, I just want to add that if you want to save the id into a variable (inside a PL/pgSQL function), you can do this:
INSERT INTO my_table(name) VALUES ('a name') RETURNING id INTO _my_id;
Postgres has a built-in mechanism for this, which returns the id (or whatever else you ask for) in the same query. Here is an example: consider a table with two columns, id and name, where you want id returned after every insert.
# create table users_table(id serial not null primary key, name character varying);
CREATE TABLE
#insert into users_table(name) VALUES ('Jon Snow') RETURNING id;
id
----
1
(1 row)
# insert into users_table(name) VALUES ('Arya Stark') RETURNING id;
id
----
2
(1 row)
Try this:
select nextval('my_seq_name'); -- returns the next value
If this returns 1 (or whatever the start_value of your sequence is), reset the sequence back to the original value, passing the false flag:
select setval('my_seq_name', 1, false);
Otherwise,
select setval('my_seq_name', next_value - 1, true); -- next_value is the value returned by the first select
This restores the sequence to its original state, and setval() returns the sequence value you are looking for.
I had this issue with Java and Postgres.
I fixed it by updating to a newer version of the PostgreSQL JDBC driver.
postgresql-9.2-1002.jdbc4.jar
https://jdbc.postgresql.org/download.html:
Version 42.2.12
https://jdbc.postgresql.org/download/postgresql-42.2.12.jar
Based on #ooZman's answer above, this seems to work for PostgreSQL v12 when you need to INSERT with the next value of a "sequence" (akin to auto_increment) without goofing up anything in your table(s) counter(s). (Note: I haven't tested it in more complex DB cluster configurations, though...)
Pseudo code:
$insert_next_id = $return_result->query("select setval('your_id_seq', (select nextval('your_id_seq')) - 1, true) + 1");
Related
How to update a table inside a trigger function in PostgreSQL?
I would like to create a trigger function inside my database which checks if the newly "inserted" value (max_bid) is at least +1 greater than the largest max_bid value currently in the table. If so, the max_bid value inside the table should be updated, although not with the newly "inserted" value; instead it should be increased by 1. For instance, if max_bid is 10 and the newly "inserted" max_bid is 20, the max_bid value inside the table should be increased by +1 (in this case 11). I tried to do it with a trigger, but unfortunately it doesn't work. Please help me to solve this problem. Here is my code:

CREATE TABLE bidtable (
    mail_buyer VARCHAR(80) NOT NULL,
    auction_id INTEGER NOT NULL,
    max_bid INTEGER,
    PRIMARY KEY (mail_buyer)
);

CREATE OR REPLACE FUNCTION max_bid()
RETURNS TRIGGER
LANGUAGE PLPGSQL
AS $$
DECLARE
    current_maxbid INTEGER;
BEGIN
    SELECT MAX(max_bid) INTO current_maxbid
    FROM bidtable
    WHERE NEW.auction_id = OLD.auction_id;

    IF (NEW.max_bid < (current_maxbid + 1)) THEN
        RAISE EXCEPTION 'error';
        RETURN NULL;
    END IF;

    UPDATE bidtable
    SET max_bid = (current_maxbid + 1)
    WHERE NEW.auction_id = OLD.auction_id
      AND NEW.mail_buyer = OLD.mail_buyer;

    RETURN NEW;
END;
$$;

CREATE OR REPLACE TRIGGER max_bid_trigger
BEFORE INSERT ON bidtable
FOR EACH ROW
EXECUTE PROCEDURE max_bid();

Thank you very much for your help.
In a trigger function that is called for an INSERT operation, the OLD implicit record variable is null, which is probably the cause of "unfortunately it doesn't work".

Trigger function

In a case like this there is a much easier solution. First of all, disregard the value for max_bid upon input, because you require a specific value in all cases. Instead, set it to that specific value in the function. The trigger function can then be simplified to:

CREATE OR REPLACE FUNCTION set_max_bid() -- Function name different from column name
RETURNS TRIGGER
LANGUAGE PLPGSQL
AS $$
BEGIN
    SELECT MAX(max_bid) + 1 INTO NEW.max_bid
    FROM bidtable
    WHERE auction_id = NEW.auction_id;
    RETURN NEW;
END;
$$;

That's all there is to it for the trigger function. Update the trigger to the new function name and it should work.

Concurrency

As several comments to your question pointed out, you run the risk of getting duplicates. This will currently not generate an error because you do not have an appropriate constraint on your table. Avoiding duplicates requires a table constraint like:

UNIQUE (auction_id, max_bid)

You cannot deal with any concurrency issue in the trigger function, because the INSERT operation takes place after the trigger function completes with a RETURN NEW statement. The most appropriate way to deal with this depends on your application. Your options are table locking to block any concurrent inserts, or looping in a function until the insert succeeds.

Avoid the concurrency issue altogether

If you can change the structure of the bidtable table, you can get rid of the whole concurrency issue by changing your business logic to not require the max_bid column. The max_bid column appears to indicate the order in which bids were placed for each auction_id. If that is the case, you could add a serial column to the table and use that to indicate the order of bids being placed (for all auctions). That serial column could then also be the PRIMARY KEY, making your table more agile (no indexing on a large text column). The table would look something like this:

CREATE TABLE bidtable (
    id SERIAL PRIMARY KEY,
    mail_buyer VARCHAR(80) NOT NULL,
    auction_id INTEGER NOT NULL
);

You can drop your trigger and trigger function and just rely on the proper id value being supplied by the system. The bids for a specific auction can then be extracted using a straightforward SELECT:

SELECT id, mail_buyer
FROM bidtable
WHERE auction_id = xxx
ORDER BY id;

If you require a max_bid-like value (the id values increment over the full set of auctions), you can use a simple window function:

SELECT mail_buyer,
       row_number() OVER (PARTITION BY auction_id ORDER BY id) AS max_bid
FROM bidtable
WHERE auction_id = xxx;
Postgres Sequence Value Not Incrementing After Inserting Records
I have the following table:

DROP TABLE IF EXISTS TBL_CACL;
CREATE TABLE TBL_CACL (
    ID INT GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
    NAME VARCHAR(250) UNIQUE NOT NULL
);

I am able to query Postgres like this:

SELECT c.relname FROM pg_class c WHERE c.relkind = 'S';

to determine that the default name for the sequence on the table is tbl_cacl_id_seq. I can then read its nextval like this:

SELECT nextval('tbl_cacl_id_seq');

Directly after creating the table, the value is '1'. However, if I insert several rows of data like this:

INSERT INTO TBL_CACL VALUES
(1, 'CACL_123'),
(2, 'CACL_234'),
(3, 'CACL_345');

and then read nextval for the table, it returns '2'. I would have thought that tbl_cacl_id_seq would be '4'. Clearly I'm misunderstanding how the insert is related to nextval. Why is the sequence out of sync with the inserts, and how do I get them back in sync? Thanks for any advice.
The tbl_cacl_id_seq is incremented only when nextval('tbl_cacl_id_seq') is called. That call never happened, because you provided explicit values for the ID column during the inserts (so there was no need to compute the default value). The first manual SELECT nextval(...) is what moved it from 1 to 2.
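To bring the sequence back in sync after inserting explicit ids, a common fix is to set it from the table's current maximum (a sketch using the question's table; setval and pg_get_serial_sequence are standard functions):

```sql
-- Advance the sequence to the current maximum id, so the next
-- default-generated value will be max(id) + 1
SELECT setval(pg_get_serial_sequence('tbl_cacl', 'id'),
              (SELECT max(id) FROM tbl_cacl));
```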
Sequence in postgresql
Converting the SQL Server procedure and table below, which store and generate a sequence, to PostgreSQL. Can anyone guide me on how to do this in Postgres (via a table and a function) and not via a sequence or nextval or currval?

Sequence table:

IF NOT EXISTS (SELECT name FROM sys.tables WHERE name = 'testtable')
    CREATE TABLE dbo.testtable (Sequence int NOT NULL)
go
IF NOT EXISTS (SELECT * FROM testtable)
    INSERT INTO testtable VALUES (-2147483648)
go

Sequence-generating proc:

CREATE PROCEDURE test_proc
AS
SET NOCOUNT ON
DECLARE @iReturn int
BEGIN TRANSACTION
SELECT @iReturn = Sequence FROM schema.test (TABLOCKX) -- set exclusive table lock
UPDATE schema.test SET Sequence = (Sequence + 1)
COMMIT TRANSACTION
SELECT @iReturn
RETURN @iReturn
go
grant execute on schema.test to public
go
Disclaimer: using a sequence is the only scalable and efficient way to generate unique numbers. Having said that, it is possible to implement your own sequence generator. The only situation where that makes any sense is if you are required to generate gapless numbers. If you do not have such a requirement, use a sequence.

You need one table that stores the values of the sequences. I usually use one table with a row for each "generator", which avoids costly table locks.

create table seq_generator
(
  entity varchar(30) not null primary key,
  seq_value integer default 0 not null
);

insert into seq_generator (entity) values ('testsequence');

Then create a function to increment the sequence value:

create or replace function next_value(p_entity varchar)
  returns integer
as
$$
  update seq_generator
     set seq_value = seq_value + 1
   where entity = lower(p_entity)
  returning seq_value;
$$
language sql;

To obtain the next sequence value, e.g. inside an insert:

insert into some_table (id, ...) values (next_value('testsequence'), ...);

Or make it a default value:

create table some_table
(
   id integer not null primary key default next_value('testsequence'),
   ...
);

The UPDATE increments and locks the row in a single statement, returning the new value for the sequence. If the calling transaction commits, the update to seq_generator will also be committed. If the calling transaction rolls back, the update will roll back as well.

If a second transaction calls next_value() for the same entity, it has to wait until the first transaction commits or rolls back. So access to the generator is serialized through this function: only one transaction at a time can do this. If you need a second gapless sequence, just insert a new row into the seq_generator table.

This will seriously affect performance when used in an environment with many concurrent inserts. The only thing that justifies it is a legal requirement to have gapless numbers. In every other case you should really, really use a native Postgres sequence.
Postgresql function not working as expected with INSERT INTO
I have a function to insert data from one table into another:

$BODY$
BEGIN
    INSERT INTO backups.calls2 (uid, queue_id, connected, callerid2)
    SELECT DISTINCT (c.uid), c.queue_id, c.connected, c.callerid2
    FROM public.calls c
    WHERE c.connected IS NOT NULL;
    RETURN;
EXCEPTION WHEN unique_violation THEN
    NULL;
END;
$BODY$

And the structure of the table:

CREATE TABLE backups.nc_calls_id
(
  uid character(30) NOT NULL,
  queue_id integer,
  callerid2 text,
  connected timestamp without time zone,
  id serial NOT NULL,
  CONSTRAINT calls2_pkey PRIMARY KEY (uid)
)
WITH (
  OIDS=FALSE
);

When I first executed this query, everything went OK: 200000 rows were inserted into the new table with unique ids. But now, when I execute it again, no rows are being inserted.
From the rather minimalist description given (no PostgreSQL version, no CREATE FUNCTION statement showing params etc., no other table structure, no function invocation), I'm guessing that you're attempting to do a merge, where you insert a row only if it doesn't already exist, skipping rows that do.

What the above function actually does is skip all rows if any row already exists. You need to either use a loop that does each insert within an individual BEGIN ... EXCEPTION block (slow), or LOCK the table and do an INSERT INTO ... SELECT ... FROM newtable WHERE NOT EXISTS (SELECT 1 FROM oldtable WHERE oldtable.key = newtable.key).

The INSERT INTO ... SELECT ... WHERE NOT EXISTS method will perform a lot better, but will fail if more than one copy runs concurrently or if anything else inserts into the destination table at the same time. LOCKing the destination table before running it will make sure it's safe.

The PL/PgSQL looping BEGIN ... EXCEPTION method sounds nice and safe at first glance. Then you think about what happens when you run two of them at once. One will insert some keys first, the other will insert other keys first, so they split the values between them. That's OK; together they make up the full set. But what if only one of them commits and the other fails for some reason? You'll have an interesting sparsely inserted result. For that reason it's probably best to lock the destination table with this approach too... in which case you might as well use the vastly more efficient single-pass INSERT with a subquery-based uniqueness-violation check.
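A sketch of the locked WHERE NOT EXISTS variant described above, using the question's tables (treating uid as the key is an assumption taken from the primary key shown in the question):

```sql
BEGIN;
-- Block concurrent writers so the NOT EXISTS check stays valid until COMMIT
LOCK TABLE backups.calls2 IN SHARE ROW EXCLUSIVE MODE;

INSERT INTO backups.calls2 (uid, queue_id, connected, callerid2)
SELECT DISTINCT c.uid, c.queue_id, c.connected, c.callerid2
FROM public.calls c
WHERE c.connected IS NOT NULL
  AND NOT EXISTS (SELECT 1 FROM backups.calls2 b WHERE b.uid = c.uid);

COMMIT;
```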
Returning multiple SERIAL values from Postgres batch insert
I'm working with Postgres, using SERIAL as my primary key. After I insert a row, I can get the generated key either by using RETURNING or currval(). Now my problem is that I want to do a batch insert inside a transaction and get ALL the generated keys. All I get with RETURNING and currval() is the last generated id; the rest of the results get discarded. How can I get it to return all of them? Thanks.
You can use RETURNING with multiple values:

psql=> create table t (id serial not null, x varchar not null);
psql=> insert into t (x) values ('a'),('b'),('c') returning id;
 id
----
  1
  2
  3
(3 rows)

So you want something more like this:

INSERT INTO AutoKeyEntity (Name,Description,EntityKey) VALUES
('AutoKey 254e3c64-485e-42a4-b1cf-d2e1e629df6a','Testing 5/4/2011 8:59:43 AM',DEFAULT)
returning EntityKey;

INSERT INTO AutoKeyEntityListed (EntityKey,Listed,ItemIndex) VALUES
(CURRVAL('autokeyentity_entityKey_seq'),'Test 1 AutoKey 254e3c64-485e-42a4-b1cf-d2e1e629df6a', 0),
(CURRVAL('autokeyentity_entityKey_seq'),'Test 2 AutoKey 254e3c64-485e-42a4-b1cf-d2e1e629df6a', 1),
(CURRVAL('autokeyentity_entityKey_seq'),'Test 3 AutoKey 254e3c64-485e-42a4-b1cf-d2e1e629df6a', 2)
returning EntityKey;
-- etc.

And then you'll have to gather the returned EntityKey values from each statement in your transaction.

You could try to grab the sequence's current value at the beginning and end of the transaction and use those to figure out which sequence values were used, but that is not reliable (quoting the documentation):

Furthermore, although multiple sessions are guaranteed to allocate distinct sequence values, the values might be generated out of sequence when all the sessions are considered. For example, with a cache setting of 10, session A might reserve values 1..10 and return nextval=1, then session B might reserve values 11..20 and return nextval=11 before session A has generated nextval=2. Thus, with a cache setting of one it is safe to assume that nextval values are generated sequentially; with a cache setting greater than one you should only assume that the nextval values are all distinct, not that they are generated purely sequentially. Also, last_value will reflect the latest value reserved by any session, whether or not it has yet been returned by nextval.

So, even if your sequences have cache values of one, you can still have non-contiguous sequence values in your transaction. However, you might be safe if the sequence's cache value matches the number of INSERTs in your transaction, but I'd guess that's going to be far too large to make sense.

UPDATE: I just noticed (thanks to the questioner's comments) that there are two tables involved; I got a bit lost in the wall of text. In that case, you should be able to use the current INSERTs:

INSERT INTO AutoKeyEntity (Name,Description,EntityKey) VALUES
('AutoKey 254e3c64-485e-42a4-b1cf-d2e1e629df6a','Testing 5/4/2011 8:59:43 AM',DEFAULT)
returning EntityKey;

INSERT INTO AutoKeyEntityListed (EntityKey,Listed,ItemIndex) VALUES
(CURRVAL('autokeyentity_entityKey_seq'),'Test 1 AutoKey 254e3c64-485e-42a4-b1cf-d2e1e629df6a', 0),
(CURRVAL('autokeyentity_entityKey_seq'),'Test 2 AutoKey 254e3c64-485e-42a4-b1cf-d2e1e629df6a', 1),
(CURRVAL('autokeyentity_entityKey_seq'),'Test 3 AutoKey 254e3c64-485e-42a4-b1cf-d2e1e629df6a', 2);
-- etc.

And grab the EntityKey values one at a time from the INSERTs on AutoKeyEntity. Some sort of script might be needed to handle the RETURNING values. You could also wrap the AutoKeyEntity and related AutoKeyEntityListed INSERTs in a function, then use INTO to grab the EntityKey value and return it from the function:

INSERT INTO AutoKeyEntity /*...*/ RETURNING EntityKey INTO ek;
/* AutoKeyEntityListed INSERTs ... */
RETURN ek;
You can pre-assign consecutive ids using this:

SELECT setval(seq, nextval(seq) + num_rows - 1, true) AS stop

It should be a faster alternative to calling nextval() gazillions of times.

You could also store the ids in a temporary table:

create temporary table blah (id int) on commit drop;

with ids as (
  insert into table1 (...) values (...)
  returning id
)
insert into blah select id from ids;

In PostgreSQL 9.1 and later, you can use CTEs directly:

with ids as (
  insert into table1 (...) values (...)
  returning id
)
insert into table2 (...)
select ... from ids;
In your application, gather values from the sequence:

SELECT nextval( ... ) FROM generate_series( 1, number_of_values ) n

Create your rows using those values, and simply insert them (using a multi-row INSERT). It's safe (SERIAL works as you'd expect: no reuse of values, concurrency-proof, etc.) and fast (you insert all the rows at once, without many client-server round trips).
Replying to Scott Marlowe's comment in more detail:

Say you have a tree table with the usual parent_id reference to itself, and you want to import a large tree of records. The problem is that you need the parent's PK value to be known before you can insert the children, so potentially this can require lots of individual INSERT statements. So a solution could be:

- build the tree in the application
- grab as many sequence values as there are nodes to insert, using "SELECT nextval( ... ) FROM generate_series( 1, number_of_values ) n" (the order of the values does not matter)
- assign those primary key values to the nodes
- do a bulk insert (or COPY) traversing the tree structure, since the PKs used for relations are known
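The steps above can be sketched as follows (the tree table is illustrative, and the sequence name follows the SERIAL naming convention):

```sql
CREATE TABLE tree (
    id serial PRIMARY KEY,
    parent_id int REFERENCES tree(id),
    label text
);

-- 1-2. Reserve three ids up front (order of the returned values does not matter)
SELECT nextval('tree_id_seq') FROM generate_series(1, 3) n;
-- suppose this returned 1, 2, 3

-- 3-4. Assign the ids in the application, then insert the whole tree
-- in one multi-row statement, parents and children together
INSERT INTO tree (id, parent_id, label) VALUES
    (1, NULL, 'root'),
    (2, 1,    'child A'),
    (3, 1,    'child B');
```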
There are three ways to do this: use currval(), use RETURNING, or write a stored procedure to wrap either of those methods in a nice little blanket that keeps you from doing it half in the client and half in Postgres.

Currval method:

begin;
insert into a (col1, col2) values ('val1','val2');
select currval('a_id_seq');
 123 -- returned value
-- client code creates the next statement with the value from select currval
insert into b (a_fk, col3, col4) values (123, 'val3','val4');
-- repeat the above as many times as needed, then...
commit;

Returning method:

begin;
insert into a (col1, col2) values
('val1','val2'),
('val1','val2'),
('val1','val2')
returning a_id;
-- note we inserted three rows
 123 -- returned values
 124
 126
insert into b (a_fk, col3, col4) values
(123, 'val3','val4'),
(124, 'val3','val4'),
(126, 'val3','val4');
commit;
Perform a FOR loop and process the records one by one. It might be less performant, but it is concurrency-safe. Example code:

DO $$
DECLARE
    r record;
BEGIN
    FOR r IN SELECT id FROM {table} WHERE {condition}
    LOOP
        WITH idlist AS (
            INSERT INTO {anotherTable} ({columns})
            VALUES ({values})
            RETURNING id
        )
        UPDATE {table} c
        SET {column} = (SELECT id FROM idlist)
        WHERE c.id = r.id;
    END LOOP;
END $$;