Here is my situation:
I have a table that gets truncated once a week and new values are placed in it.
What I want to do:
I want to add a primary key that starts at 1 and increases by 1 for each row inserted. When the table is truncated, I want the count to start back at 1.
Is this possible?
Use a serial column, and truncate with the RESTART IDENTITY option:
truncate table foo restart identity
http://www.postgresql.org/docs/current/static/sql-truncate.html
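For example, a minimal sketch (the table foo and its columns are illustrative):
CREATE TABLE foo (
    id   serial PRIMARY KEY,  -- backed by an implicit sequence starting at 1
    data text
);
INSERT INTO foo (data) VALUES ('a'), ('b');  -- ids 1 and 2
TRUNCATE TABLE foo RESTART IDENTITY;         -- empties the table and resets the sequence
INSERT INTO foo (data) VALUES ('c');         -- id starts over at 1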
I created a primary key called t_id in my table using t_id SERIAL PRIMARY KEY.
At first it started at 1 as expected. Then I deleted all the data, and since then it starts at 2 even though I set it back to 1.
Here's a screenshot of the sequence definition in pgAdmin4 (it shows the current value set to 1):
Does anyone have an idea where the problem is?
Thanks a lot!!
The current value is 1, so the next value that will be served is 2. This is expected.
The doc is helpful on this topic.
Remember that the sequence will always give you a value that was not used before. So if you insert 10 rows, then delete them, the next sequence value will still be 11 (the last served value + 1).
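For example (a hypothetical table my_table whose only column is t_id SERIAL PRIMARY KEY):
INSERT INTO my_table DEFAULT VALUES;  -- run 10 times: t_id = 1 .. 10
DELETE FROM my_table;                 -- removes the rows but does not touch the sequence
INSERT INTO my_table DEFAULT VALUES;  -- t_id = 11, not 1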
To reset the sequence so the next time it is called it returns 1, you would do
SELECT setval('my_sequence_name', 1, false);
Your t_id is an auto-incrementing serial primary key. Serial columns are backed by sequences, so you can set the next value using the sequence manipulation functions.
Postgres sequence manipulation functions documentation
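If you don't know the generated sequence's name, pg_get_serial_sequence can look it up from the table and column; a minimal sketch, assuming the table is called my_table:
SELECT pg_get_serial_sequence('my_table', 't_id');                    -- e.g. public.my_table_t_id_seq
SELECT setval(pg_get_serial_sequence('my_table', 't_id'), 1, false);  -- next value served will be 1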
I have existing entries in a table with primary keys starting with 300 and ending at 400. Now I would like to insert new entries with an auto generated primary key. The generation should skip the existing ones. Is there an easier way to do this instead of using triggers or a key table?
If you didn't change the seed, the new rows will continue from 401, so there shouldn't be any problem.
But if you want to change it, use ALTER SEQUENCE:
ALTER SEQUENCE yourTable_something_seq RESTART 500;
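If you'd rather not hard-code 500, a sketch that derives the restart point from the data (assuming the key column is named id):
SELECT setval('yourTable_something_seq', (SELECT MAX(id) FROM yourTable));
-- the next generated value will be MAX(id) + 1, i.e. 401 here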
I made a mistake when creating my table. The primary key was incorrect. I deleted the constraint, and now I have no primary key in my table, only the field with the data. Now I want to make this field an auto-incrementing primary key again without losing my data. How can I do this?
I tried this:
ALTER TABLE name_table ADD COLUMN name_column serial primary key;
But this creates a new column and loses my data, which I don't want.
Try this:
ALTER TABLE table_name ADD CONSTRAINT some_name primary key (name_column);
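That restores the primary key but not the auto-increment. A rough sketch for re-attaching a sequence as the column's default while keeping the data (the sequence name is illustrative):
CREATE SEQUENCE name_table_name_column_seq OWNED BY name_table.name_column;
ALTER TABLE name_table ALTER COLUMN name_column SET DEFAULT nextval('name_table_name_column_seq');
-- align the sequence with the existing rows so newly generated ids don't collide:
SELECT setval('name_table_name_column_seq', (SELECT COALESCE(MAX(name_column), 0) + 1 FROM name_table), false);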
My suggestion:
Back up your database first, in SQL, CSV, XML, Excel, or some other restorable format.
Then alter your table structure (the column data type) from the UI or the command line.
Then, if the data recorded in your table is lost or gone, restore only your backed-up data (not the table structure).
After that you will have the changed column data type and your required data back. I hope it works.
Hi guys, I was trying several ways and found this one, which maybe somebody can use later:
Create a sequence: a sequence is the way PostgreSQL implements auto_increment fields. An auto_increment field is usually also a primary key. It doesn't have to be, it's not a rule, but in most cases an auto_increment field is a primary key.
Creating a sequence looks like this:
CREATE SEQUENCE exemplo_id_seq
INCREMENT 1 --each nextval() call advances the value by 1
MINVALUE 1
NO MAXVALUE
START 1 --counting starts at 1
CACHE 1;
After this, all that's left is to assign the sequence to the affected field as its default, using NEXTVAL, like this:
ALTER TABLE table_name ALTER COLUMN id SET DEFAULT NEXTVAL('exemplo_id_seq'::regclass);
It works well, without losing the data from the old mistakes.
I am getting a duplicate key error, DB2 SQL Error: SQLCODE=-803, SQLSTATE=23505, when I try to INSERT records. The primary key is one column, INTEGER 4, Generated, and it is the first column.
The insert looks like this: INSERT INTO SCHEMA.TABLE1 VALUES (DEFAULT, ?, ?, ...)
It's my understanding that using the value DEFAULT will just let DB2 auto-generate the key at the time of insert, which is what I want. This works most of the time, but sometimes/randomly I get the duplicate key error. Thoughts?
More specifically, I'm running against DB2 9.7.0.3, using Scriptella to copy a bunch of records from one database to another. Sometimes I can process a bunch with no problems, other times I'll get the error right away, other times after 2 records, or 20 records, or 30 records, etc. Does not seem to be a pattern, nor is it the same record every time. If I change the data to copy 1 record instead of a bunch, sometimes I'll get the error one time, then it's fine the next time.
I thought maybe some other process was inserting records during my batch program, and creating keys at the same time. However, the tables I'm copying TO should not have any other users/processes trying to INSERT records during this same time frame, although there could be READS happening.
Edit: adding create info:
Create table SCHEMA.TABLE1 (
SYSTEM_USER_KEY INTEGER NOT NULL
generated by default as identity (start with 1 increment by 1 cache 20),
COL2...,
)
alter table SCHEMA.TABLE1
add constraint SYSTEM_USER_SYSTEM_USER_KEY_IDX
Primary Key (SYSTEM_USER_KEY);
You most likely have records in your table with IDs that are bigger than the next value in your identity sequence. To find out the current value of your sequence, run the following query:
select s.nextcachefirstvalue-s.cache, s.nextcachefirstvalue-s.increment
from syscat.COLIDENTATTRIBUTES as a inner join syscat.sequences as s on a.seqid=s.seqid
where a.tabschema='SCHEMA'
and a.TABNAME='TABLE1'
and a.COLNAME='SYSTEM_USER_KEY'
So basically what happened is that somehow you got records in your table with IDs that are bigger than the current last value of your identity sequence, so sooner or later these IDs will collide with identity-generated IDs.
There are different reasons how this could have happened. One possibility is that data was loaded which already contained values for the ID column, or that records were inserted with an explicit value for the ID. Another option is that the identity sequence was reset to start at a lower value than the max ID in the table.
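As an illustration (a hypothetical two-column version of the table; the values are invented): with GENERATED BY DEFAULT, an explicit key bypasses the identity sequence, so a later DEFAULT insert can collide once the sequence catches up.
INSERT INTO SCHEMA.TABLE1 VALUES (450, 'loaded explicitly'); -- explicit key; sequence not advanced
-- many inserts later, the identity sequence itself reaches 450:
INSERT INTO SCHEMA.TABLE1 VALUES (DEFAULT, 'new row');       -- fails: SQLCODE=-803, SQLSTATE=23505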
Whatever the cause, you may also want the fix:
SELECT MAX(<primary_key_column>) FROM <table>;
ALTER TABLE <table> ALTER COLUMN <primary_key_column> RESTART WITH <number from previous query + 1>;
I have a PostgreSQL DB and a table with almost a billion rows.
When I try to add a new column with a default value:
ALTER TABLE big_table
ADD COLUMN some_flag integer NOT NULL DEFAULT 0;
The transaction runs for 30+ minutes, and the DB log starts emitting warnings.
Any way to optimize the query?
Besides doing it in batches (which will still take a while):
You could dump the table as COPY statements and write a script to edit the contents of the COPY statements to insert another column (COPY can be CSV IIRC).
Then you just reload your altered COPY dump and it should in theory be faster than the ALTER because COPY will not log transactions.
The other option is to turn off fsync while you run the command... just remember to turn it back on.
You can also do both of the above in batches.
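A rough sketch of the COPY route (paths and names are illustrative; server-side COPY needs file access on the server, and psql's \copy is the client-side equivalent):
COPY big_table TO '/tmp/big_table.csv' (FORMAT csv);
-- outside the database, append ',0' to every line of the CSV, then:
CREATE TABLE big_table_new (LIKE big_table INCLUDING ALL);
ALTER TABLE big_table_new ADD COLUMN some_flag integer NOT NULL DEFAULT 0;
COPY big_table_new FROM '/tmp/big_table_edited.csv' (FORMAT csv);
-- swap the tables:
BEGIN;
ALTER TABLE big_table RENAME TO big_table_old;
ALTER TABLE big_table_new RENAME TO big_table;
COMMIT;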
Starting from PostgreSQL 11, this behaviour will change.
Waiting for PostgreSQL 11 – Fast ALTER TABLE ADD COLUMN with a non-NULL default:
So, for the longest time, when you did:
alter table x add column z text;
it was virtually instantaneous. Get a lock on table, add information about new column to system catalogs, and it's done.
But when you tried:
alter table x add column z text default 'some value';
then it took a long time. How long depended on the size of the table.
This was because PostgreSQL was actually rewriting the whole table, adding the column to each row, and filling it with the default value.
"What happens if you want to set the column to NOT NULL also? Are we back to the slow version in that case or does this handle that as well?"
Not null doesn't change anything; it is a constraint for new rows. So adding a column with "not null default 'xxx'" will be fast.
I'd consider creating the column without the default and manually updating the rows in batches with intermittent commits to apply the default.
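A minimal sketch of that approach (pre-PostgreSQL 11, assuming an integer key column id to drive the batches):
ALTER TABLE big_table ADD COLUMN some_flag integer;  -- no default: metadata-only, effectively instant
-- backfill in chunks so each transaction stays short; repeat per range until covered
UPDATE big_table SET some_flag = 0 WHERE id >= 1 AND id < 1000000 AND some_flag IS NULL;
UPDATE big_table SET some_flag = 0 WHERE id >= 1000000 AND id < 2000000 AND some_flag IS NULL;
ALTER TABLE big_table ALTER COLUMN some_flag SET DEFAULT 0;  -- applies to new rows only
ALTER TABLE big_table ALTER COLUMN some_flag SET NOT NULL;   -- validates with one full scan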