I created a primary key called t_id in a database table using t_id SERIAL PRIMARY KEY.
The very first time, it started correctly at 1. Then I deleted all the data, and since then it starts at 2 even though I set it to 1.
Here's a screenshot of the sequence definition in pgAdmin 4:
Does anyone have an idea where the problem is?
Thanks a lot!!
The current value is 1, so the next value that will be served is 2. This is expected.
The documentation is helpful on this topic.
Remember that the sequence will always give you a value that was not used before. So if you insert 10 rows and then delete them, the next sequence value will still be 11 (the last served value + 1).
To reset the sequence so that the next time it is called it returns 1, you would run:
SELECT setval('my_sequence_name', 1, false);
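If you don't want to hard-code the sequence name, pg_get_serial_sequence() can look it up for you. A minimal sketch, assuming the table is named t (the column t_id is from the question, the table name is an assumption):
SELECT setval(pg_get_serial_sequence('t', 't_id'), 1, false);
-- The next INSERT now receives t_id = 1 again.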
Your t_id is an auto-incremented serial primary key. These are backed by sequences; you can set the next value using the sequence manipulation functions.
Postgres sequence manipulation functions documentation
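A quick sketch of the three most useful ones, assuming the implicit sequence got the default name t_t_id_seq (the <table>_<column>_seq pattern; adjust to your table name):
SELECT nextval('t_t_id_seq');           -- advance the sequence and return the new value
SELECT currval('t_t_id_seq');           -- last value served to this session
SELECT setval('t_t_id_seq', 1, false);  -- make the next nextval() return 1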
Whenever I create a table in Postgres, I like using SERIAL as the primary key so that I don't get duplicates. But I have not been able to set a starting value for it. Let's say I am creating student IDs that all have to be 8 digits; SERIAL always starts from 1, so how can I choose the starting value and then just increment from there? I have looked through the answered questions but couldn't find the answer. Thanks!
Use setval() to change the sequence and pg_get_serial_sequence() to obtain the name of the sequence:
select setval(pg_get_serial_sequence('table_name', 'column_name'), 9999999);
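For the 8-digit student ID case, a minimal sketch (the table and column names students/id are assumptions):
-- Create the table, then push the sequence just below the first 8-digit value.
CREATE TABLE students (id SERIAL PRIMARY KEY, name text);
SELECT setval(pg_get_serial_sequence('students', 'id'), 9999999);
INSERT INTO students (name) VALUES ('Alice');  -- receives id 10000000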
I'm using Postgres 9.5 and seeing some weird things here.
I have a cron job running every 5 minutes that fires a SQL statement adding a list of records if they don't already exist.
INSERT INTO
sometable (customer, balance)
VALUES
(:customer, :balance)
ON CONFLICT (customer) DO NOTHING
sometable.customer is a primary key (text)
sometable structure is:
id: serial
customer: text
balance: bigint
Now it seems like every time this job runs, the id field is silently incremented by 1. So the next time I really add a row, it is thousands of numbers above my last value. I thought this query checks for conflicts and, if one is found, does nothing, but currently it seems like it tries to insert the record, increments the id, and then stops.
Any suggestions?
The reason this feels weird to you is that you are thinking of the increment on the counter as part of the insert operation, and therefore the "DO NOTHING" ought to mean "don't increment anything". You're picturing this:
Check values to insert against constraint
If duplicate detected, abort
Increment sequence
Insert data
But in fact, the increment has to happen before the insert is attempted. A SERIAL column in Postgres is implemented as a DEFAULT which executes the nextval() function on a bound SEQUENCE. Before the DBMS can do anything with the data, it's got to have a complete set of columns, so the order of operations is like this:
Resolve default values, including incrementing the sequence
Check values to insert against constraint
If duplicate detected, abort
Insert data
This can be seen intuitively if the duplicate key is in the autoincrement field itself:
CREATE TABLE foo ( id SERIAL NOT NULL PRIMARY KEY, bar text );
-- Insert row 1
INSERT INTO foo ( bar ) VALUES ( 'test' );
-- Reset the sequence
SELECT setval(pg_get_serial_sequence('foo', 'id'), 1, false);
-- Attempt to insert row 1 again
INSERT INTO foo ( bar ) VALUES ( 'test 2' )
ON CONFLICT (id) DO NOTHING;
Clearly, the INSERT can't know whether there's a conflict without first incrementing the sequence, so the "do nothing" has to come after that increment.
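A quick check in the same session confirms that the sequence moved even though no row was inserted:
SELECT currval(pg_get_serial_sequence('foo', 'id'));  -- 1: nextval() ran during the failed insert
SELECT * FROM foo;                                    -- still only the original row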
As already said by @a_horse_with_no_name and @Serge Ballesta, serials are always incremented even if the INSERT fails.
You can try to "roll back" the serial value to the maximum id actually used by changing the corresponding sequence:
SELECT setval('sometable_id_seq', MAX(id), true) FROM sometable;
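A variant that also works when the table is empty (it assumes the default sequence name sometable_id_seq):
-- If there are no rows, start over at 1; otherwise continue after MAX(id).
SELECT setval('sometable_id_seq', COALESCE(MAX(id), 1), MAX(id) IS NOT NULL)
FROM sometable;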
As said by @a_horse_with_no_name, that is by design. Serial fields are implemented under the hood through sequences, and for good reason: once you have obtained a new value from a sequence, you cannot roll it back. Imagine the following scenario:
sequence is at n
transaction A requires a new value: it gets n+1
a concurrent transaction B requires a new value: it gets n+2
for any reason, A rolls back its transaction. Would it be safe to reset the sequence while B still holds n+2?
That is why the documentation for sequences (and serial fields) warns that rolled-back transactions can leave holes in the returned values. Only uniqueness is guaranteed.
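A quick demonstration of the documented behavior, reusing the foo table from the earlier answer:
BEGIN;
INSERT INTO foo (bar) VALUES ('will be rolled back');  -- consumes a sequence value
ROLLBACK;
INSERT INTO foo (bar) VALUES ('kept');  -- its id leaves a gap behind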
Well, there is a technique that allows you to do that kind of thing. It is called an insert mutex. It is old, but it works.
https://www.percona.com/blog/2011/11/29/avoiding-auto-increment-holes-on-innodb-with-insert-ignore/
The general idea is that you do an INSERT ... SELECT, and if your values would be duplicates, the SELECT returns no rows, which of course prevents the INSERT, so the sequence is not incremented. A bit mind-boggling, but perfectly valid and performant.
This of course completely bypasses ON CONFLICT, but you get back control over the sequence.
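A sketch of the same idea in Postgres syntax, with made-up literal values in place of the :customer/:balance parameters. Note that two concurrent runs can still race; the unique constraint on customer remains the real guard:
-- nextval() is only evaluated per row actually produced by the SELECT,
-- so a duplicate customer consumes no sequence value.
INSERT INTO sometable (customer, balance)
SELECT 'acme', 100
WHERE NOT EXISTS (
    SELECT 1 FROM sometable WHERE customer = 'acme'
);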
Here is my situation:
I have a table that gets truncated once a week and new values are placed in it.
What I want to do:
I want to add a primary key that starts at 1 and increments by 1 for each row inserted. When the table gets truncated, I want the count to start back at 1.
Is this possible?
Use a serial column and TRUNCATE with the RESTART IDENTITY option:
truncate table foo restart identity
http://www.postgresql.org/docs/current/static/sql-truncate.html
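A minimal end-to-end sketch of the weekly reset (the table name foo is just an example):
CREATE TABLE foo (id SERIAL PRIMARY KEY, payload text);
INSERT INTO foo (payload) VALUES ('a'), ('b');  -- ids 1 and 2
TRUNCATE TABLE foo RESTART IDENTITY;
INSERT INTO foo (payload) VALUES ('c');         -- id starts over at 1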
I am getting a duplicate key error (DB2 SQL Error: SQLCODE=-803, SQLSTATE=23505) when I try to INSERT records. The primary key is one column: a generated INTEGER (4 bytes), and it is the first column of the table.
The insert looks like this: INSERT INTO SCHEMA.TABLE1 VALUES (DEFAULT, ?, ?, ...)
It's my understanding that using DEFAULT will let DB2 auto-generate the key at insert time, which is what I want. This works most of the time, but sometimes, seemingly at random, I get the duplicate key error. Thoughts?
More specifically, I'm running against DB2 9.7.0.3, using Scriptella to copy a bunch of records from one database to another. Sometimes I can process a batch with no problems; other times I'll get the error right away, or after 2, 20, or 30 records. There does not seem to be a pattern, nor is it the same record every time. If I change the job to copy 1 record instead of a batch, sometimes I'll get the error once and then it's fine the next time.
I thought maybe some other process was inserting records during my batch program and creating keys at the same time. However, the tables I'm copying to should not have any other users or processes trying to INSERT records during this time frame, although there could be reads happening.
Edit: adding create info:
Create table SCHEMA.TABLE1 (
  SYSTEM_USER_KEY INTEGER NOT NULL
    generated by default as identity (start with 1 increment by 1 cache 20),
  COL2 ...
)
alter table SCHEMA.TABLE1
add constraint SYSTEM_USER_SYSTEM_USER_KEY_IDX
Primary Key (SYSTEM_USER_KEY);
You most likely have records in your table with IDs that are bigger than the next value of your identity sequence. To find out the current value of the sequence, run the following query.
select s.nextcachefirstvalue - s.cache, s.nextcachefirstvalue - s.increment
from syscat.colidentattributes as a
inner join syscat.sequences as s on a.seqid = s.seqid
where a.tabschema='SCHEMA'
and a.TABNAME='TABLE1'
and a.COLNAME='SYSTEM_USER_KEY'
So basically what happened is that somehow you got records in your table with IDs bigger than the current last value of your identity sequence. Sooner or later those IDs will collide with newly generated identity values.
There are different ways this could have happened. One possibility is that data was loaded that already contained values for the ID column, or that records were inserted with explicit ID values. Another option is that the identity sequence was reset to start at a lower value than the max ID in the table.
Whatever the cause, you may also want the fix:
SELECT MAX(<primary_key_column>) FROM <schema>.<table>;
ALTER TABLE <table> ALTER COLUMN <primary_key_column> RESTART WITH <number from previous query + 1>;
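Applied to the table from the question, with a made-up result of 4000 from the first query:
SELECT MAX(SYSTEM_USER_KEY) FROM SCHEMA.TABLE1;  -- suppose this returns 4000
ALTER TABLE SCHEMA.TABLE1 ALTER COLUMN SYSTEM_USER_KEY RESTART WITH 4001;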
I have a Postgres 9.0.4 database with a table in it called Versions:
CREATE TABLE tracking."Versions"
(
"ObjectId" UUID NOT NULL,
"From" BIGINT NOT NULL,
"To" BIGINT,
"DataTypeId" INTEGER NOT NULL REFERENCES tracking."DataTypes" ( "DataTypeId" ),
CONSTRAINT "Versions_pkey" PRIMARY KEY ("ObjectId", "DataTypeId")
);
There is also a sequence defined in the database that is used by the From & To columns:
CREATE SEQUENCE tracking."dbVersion"
INCREMENT 1
MINVALUE 1
MAXVALUE 9223372036854775807
START 1
CACHE 1;
The Versions table is actually keeping track of changes made to other tables. Without going into the details:
When a row is created in one of these other tables, a row is added to the Versions table and the From column is supposed to be set to the next value of the sequence.
If an existing row in one of those tables is updated, the From value of the corresponding row in the Versions table has to be set to the next value of the sequence.
When a row in one of these other tables is deleted, the To column has to be set to the next value of the sequence.
Rather than setting the default value of the From column to nextval('tracking."dbVersion"'), I implemented a stored function that returns the result of calling nextval:
CREATE OR REPLACE FUNCTION tracking."NextVersion"() RETURNS BIGINT
AS $$
SELECT nextval('tracking."dbVersion"'::regclass);
$$ LANGUAGE SQL;
All my code for inserting rows into the tables is implemented in C# using Entity Framework 4. All of the C# code is working fine. The weird thing is that when I look at the data in the Versions table, the values in the From column are all even. When I look at the sequence's properties in pgAdmin, the current value is odd. But the next time a row is inserted, the value stored is even.
What am I doing wrong? How does Postgres manage to use every value when the nextval call is in the column's default?
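For reference, the sequence can be inspected directly from psql between application inserts (names taken from the definitions above):
SELECT last_value FROM tracking."dbVersion";  -- check after each application insert
SELECT tracking."NextVersion"();              -- advances it exactly one step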
Well, time for me to feel sheepish.
I looked over my C# code for inserting rows into the Versions table & found that I was actually calling the NextVersion stored function twice. That explains why the sequence value was always even when it was written to the From field. I've removed the second call & the problem is solved.
Tony