Reset postgres sequence to take unused primary key ids - postgresql

I am using postgres 9.5. As part of application initialization I make some inserts into the database at startup with random ids, something like insert into student values(1,'abc'), insert into student values(10,'xyz'). Then I have some REST APIs which insert new rows programmatically. Is there any way to tell postgres to skip the ids that are already taken?
It tries to reuse already taken ids. I noticed that the sequence is not updated to account for the initial inserts.
Here is how I create the table
CREATE TABLE student(
id SERIAL PRIMARY KEY,
name VARCHAR(64) NOT NULL UNIQUE
);

It sounds like you might be better served with UUIDs as your primary key values, if your data is distributed.
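A minimal sketch of that approach (on 9.5, gen_random_uuid() comes from the pgcrypto extension):
-- Hedged sketch: UUID primary keys side-step the "taken id" problem entirely.
CREATE EXTENSION IF NOT EXISTS pgcrypto;

CREATE TABLE student (
    id   uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    name varchar(64) NOT NULL UNIQUE
);

-- Seed rows and API inserts can both omit the id and never collide:
INSERT INTO student (name) VALUES ('abc'), ('xyz');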

You can advance the sequence that is populating the id column to the highest value:
insert into student (id, name)
values
(1, 'abc'),
(2, 'xyz');
select setval(pg_get_serial_sequence('student', 'id'), (select max(id) from student));
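After that, inserts that omit the id continue from the new sequence position. A small usage sketch, assuming only the two rows above exist:
-- The sequence now stands at max(id) = 2, so the default picks up at 3.
insert into student (name) values ('pqr');  -- gets id 3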

Related

Replacing two columns (first name, last name) with an auto-increment id

I have a time-series location data table containing the following columns (time, first_name, last_name, loc_lat, loc_long) with the first three columns as the primary key. The table has more than 1M rows.
I notice that first_name and last_name duplicate quite often: there are only 100 combinations in 1M rows. Therefore, to save disk space, I am thinking about creating a separate people table with columns (id, first_name, last_name), where (first_name, last_name) is a unique constraint, and simplifying the time-series location table to (time, person_id, loc_lat, loc_long), where person_id is a foreign key referencing the people table.
I want to first create a new table from my existing 1M-row table to test whether this change really gives meaningful disk space savings. The task feels quite doable, but I cannot find a concrete way to do it yet. Any suggestions?
That's a basic step of database normalization.
If you can afford to do so, it will be faster to write a new table exchanging full names for IDs than to alter the schema of the existing table and update all rows. Basically:
BEGIN; -- wrap in single transaction (optional, but safer)
CREATE TABLE people (
people_id integer GENERATED ALWAYS AS IDENTITY PRIMARY KEY
, first_name text NOT NULL
, last_name text NOT NULL
, CONSTRAINT full_name_uni UNIQUE (first_name, last_name)
);
INSERT INTO people (first_name, last_name)
SELECT DISTINCT first_name, last_name
FROM tbl
ORDER BY 1, 2; -- optional
ALTER TABLE tbl RENAME TO tbl_old; -- free up original table name
CREATE TABLE tbl AS
SELECT t.time, p.people_id, t.loc_lat, t.loc_long
FROM tbl_old t
JOIN people p USING (first_name, last_name);
-- ORDER BY ??
ALTER TABLE tbl ADD CONSTRAINT people_id_fk FOREIGN KEY (people_id) REFERENCES people(people_id);
-- make sure the new table is complete. indexes? constraints?
-- Finally:
DROP TABLE tbl_old;
COMMIT;
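To verify the disk-space savings the question asks about, one simple check (a sketch; run it before the final DROP TABLE tbl_old) is to compare total on-disk sizes:
-- Total size including indexes and TOAST, old layout vs. new layout plus lookup table.
SELECT pg_size_pretty(pg_total_relation_size('tbl_old'))  AS old_size,
       pg_size_pretty(pg_total_relation_size('tbl')
                    + pg_total_relation_size('people'))    AS new_size;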
Related:
Best way to populate a new column in a large table?
Add new column without table lock?
Updating database rows without locking the table in PostgreSQL 9.2
DISTINCT is simple. But for only 100 distinct full names - and with the right index support! - there are more sophisticated, (much) faster ways. See:
Optimize GROUP BY query to retrieve latest row per user
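As an illustration of such a technique, here is a hedged sketch of an emulated loose index scan, assuming an index on (first_name, last_name) exists on tbl:
-- Jump from one distinct (first_name, last_name) to the next using the index,
-- instead of scanning all 1M rows.
WITH RECURSIVE distinct_names AS (
   (SELECT first_name, last_name
    FROM   tbl
    ORDER  BY first_name, last_name
    LIMIT  1)                                -- smallest name pair
   UNION ALL
   SELECT n.first_name, n.last_name
   FROM   distinct_names d
   CROSS  JOIN LATERAL (
      SELECT first_name, last_name
      FROM   tbl
      WHERE  (first_name, last_name) > (d.first_name, d.last_name)
      ORDER  BY first_name, last_name
      LIMIT  1                               -- next distinct pair
   ) n
)
SELECT * FROM distinct_names;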

Unable to insert table in Postgres due to sequence being out of order

I have a table called person with a primary key on id.
I am trying to insert into this table with:
insert into person (first_name, last_name, email, gender, date_of_birth, country_of_birth) values ('Ellissa', 'Gordge', 'ggordge0#gnu.org', 'Male', '2022-03-19', 'Fiji');
There should not be any ID constraint being violated, since id is a BIGSERIAL, yet I am getting this:
It says Key (id)=(8) already exists, and the id in the error increments on each attempt to run this command. How can the ID already exist? And why is it not continuing from the end of the existing ids?
If I specify the id in the insert statement with a number which I know is unique, it works. I just don't understand why it is not done automatically, since I am using BIGSERIAL.
Your sequence is apparently out of sync with the values in the column. This can happen when someone did INSERT INTO person(id, …) VALUES (8, …) (or a CSV COPY import, or anything else that provided values for the id column instead of using the default), or when someone reset the sequence after data had been inserted.
You can set the sequence value to fix this. Note that ALTER SEQUENCE person_id_seq RESTART WITH ... accepts only a constant, not a subquery, so use setval() instead:
SELECT setval('person_id_seq', MAX(id)) FROM person;
The next nextval() call will then return MAX(id) + 1.
Also note that it is recommended to use an identity column rather than a serial one; with GENERATED ALWAYS, explicit id values are rejected unless the insert says OVERRIDING SYSTEM VALUE, which makes this kind of problem much harder to trigger.
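A minimal sketch of that, reusing the column names from the insert above (the types are assumptions about your actual schema):
-- Identity column instead of BIGSERIAL; GENERATED ALWAYS rejects explicit ids
-- unless the insert says OVERRIDING SYSTEM VALUE, so the sequence stays in sync.
CREATE TABLE person (
    id               bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    first_name       text,
    last_name        text,
    email            text,
    gender           text,
    date_of_birth    date,
    country_of_birth text
);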
SELECT pg_catalog.setval(pg_get_serial_sequence('table_name', 'id'), MAX(id)) FROM table_name;
This should bring your sequence back in sync with the table, which should fix everything. Make sure to change 'table_name' to the actual table name (the key column is assumed to be named id). Cheers!

Firebird dump tables

I have a database.gdb running with Firebird 3.0.
This database has two tables: Table1 and Table2.
Every day I add records to these tables, and when I have finished my work I need to export the two tables to another, newer database.
I need a procedure which dumps the two tables into a script, so that I can import the data into the newer database using that script.
So far I am only able to create a script for tables which always have the same number of records (no records added every day).
This script should include:
CREATE TABLE
All records of the two tables, exported as INSERT statements
I do not need code, just a hint. I will study how to write the code by myself.
I have created a handmade script.
CREATE TABLE TABYEARS (
ID INTEGER NOT NULL,
YEARS INTEGER,
/* Keys */
PRIMARY KEY (ID)
);
CREATE TABLE TABCODE (
ID INTEGER NOT NULL,
NAME VARCHAR(50),
CODE VARCHAR(50),
/* Keys */
PRIMARY KEY (ID)
);
COMMIT;
INSERT INTO TABYEARS (ID, YEARS) VALUES (1, 2021);
INSERT INTO TABYEARS (ID, YEARS) VALUES (2, 2022);
INSERT INTO TABCODE (ID, NAME, CODE) VALUES (1, 'Robert', '10');
INSERT INTO TABCODE (ID, NAME, CODE) VALUES (2, 'Paul', '87');
COMMIT;
I do not add records very often to these tables; the first one gets just one record every year.
How can I create (not manually) a script like this, but for two tables to which I add 50 records every day?
I can use FlameRobin or IBExpert or similar.
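One possible hint, as a rough sketch only (not a tested procedure): Firebird's isql can redirect query output to a file with OUTPUT, so a SELECT can build the INSERT statements for you by string concatenation:
-- Run inside isql against database.gdb; writes the generated INSERTs to a file.
-- Caveats: isql adds column headers that must be trimmed, NULL columns would
-- need COALESCE handling, and quotes inside NAME/CODE would need escaping.
OUTPUT tabcode_dump.sql;
SELECT 'INSERT INTO TABCODE (ID, NAME, CODE) VALUES ('
       || ID || ', ''' || NAME || ''', ''' || CODE || ''');'
FROM TABCODE;
OUTPUT;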

PostgreSQL self referential table - how to store parent ID in script?

I have the following table:
DROP SEQUENCE IF EXISTS CATEGORY_SEQ CASCADE;
CREATE SEQUENCE CATEGORY_SEQ START 1;
DROP TABLE IF EXISTS CATEGORY CASCADE;
CREATE TABLE CATEGORY (
ID BIGINT NOT NULL DEFAULT nextval('CATEGORY_SEQ'),
NAME CHARACTER VARYING(255) NOT NULL,
PARENT_ID BIGINT
);
ALTER TABLE CATEGORY
ADD CONSTRAINT CATEGORY_PK PRIMARY KEY (ID);
ALTER TABLE CATEGORY
ADD CONSTRAINT CATEGORY_SELF_FK FOREIGN KEY (PARENT_ID) REFERENCES CATEGORY (ID);
Now I need to insert the data. So I start with parent:
INSERT INTO CATEGORY (NAME) VALUES ('PARENT_1');
And now I need the ID of the just inserted parent to add children to it:
INSERT INTO CATEGORY (NAME, PARENT_ID) VALUES ('CHILDREN_1_1', <what_goes_here>);
INSERT INTO CATEGORY (NAME, PARENT_ID) VALUES ('CHILDREN_1_2', <what_goes_here>);
How can I get and store the ID of the parent to later use it in the subsequent inserts?
You can use a data modifying CTE with the returning clause:
with parent_cat (parent_id) as (
INSERT INTO CATEGORY (NAME) VALUES ('PARENT_1')
returning id
)
INSERT INTO CATEGORY (NAME, PARENT_ID)
VALUES
('CHILDREN_1_1', (select parent_id from parent_cat) ),
('CHILDREN_1_2', (select parent_id from parent_cat) );
The answer is to use RETURNING along with WITH
WITH inserted AS (
INSERT INTO CATEGORY (NAME) VALUES ('PARENT_1')
RETURNING id
) INSERT INTO CATEGORY (NAME, PARENT_ID) VALUES
('CHILD_1_1', (SELECT inserted.id FROM inserted)),
('CHILD_2_1', (SELECT inserted.id FROM inserted));
( tl;dr : goto option 3: INSERT with RETURNING )
Recall that in postgresql there is no "id" concept for tables, just sequences (which are typically but not necessarily used as default values for surrogate primary keys, with the SERIAL pseudo-type).
If you are interested in getting the id of a newly inserted row, there are several ways:
Option 1: CURRVAL(<sequence name>)
For example:
INSERT INTO persons (lastname,firstname) VALUES ('Smith', 'John');
SELECT currval('persons_id_seq');
The name of the sequence must be known, which is really somewhat arbitrary; in this example we assume that the table persons has an id column created with the SERIAL pseudo-type. To avoid relying on this, and to feel cleaner, you can instead use pg_get_serial_sequence:
INSERT INTO persons (lastname,firstname) VALUES ('Smith', 'John');
SELECT currval(pg_get_serial_sequence('persons','id'));
Caveat: currval() only works after an INSERT (which has executed nextval() ), in the same session.
Option 2: LASTVAL();
This is similar to the previous option, except that you don't need to specify the sequence name: it looks at the most recently modified sequence (always inside your session, same caveat as above).
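For example, reusing the persons table from option 1:
INSERT INTO persons (lastname,firstname) VALUES ('Smith', 'John');
SELECT lastval();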
Both CURRVAL and LASTVAL are totally concurrency-safe. The behaviour of sequences in PG is designed so that different sessions will not interfere, so there is no risk of race conditions (if another session inserts another row between my INSERT and my SELECT, I still get my correct value).
However, they do have a subtle potential problem. If the database has some TRIGGER (or RULE) that, on insertion into the persons table, makes some extra insertions in other tables, then LASTVAL will probably give us the wrong value. The problem can even happen with CURRVAL, if the extra insertions are done into the same persons table (this is much less usual, but the risk still exists).
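A hedged illustration of that caveat, with a hypothetical audit table and trigger (EXECUTE FUNCTION assumes PostgreSQL 11+):
-- The trigger's extra INSERT advances audit_log's sequence, so lastval()
-- afterwards returns the audit row's id, not the persons row's id.
CREATE TABLE audit_log (id serial PRIMARY KEY, note text);

CREATE FUNCTION log_person() RETURNS trigger AS $$
BEGIN
   INSERT INTO audit_log (note) VALUES ('new person: ' || NEW.lastname);
   RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER person_audit AFTER INSERT ON persons
FOR EACH ROW EXECUTE FUNCTION log_person();

INSERT INTO persons (lastname, firstname) VALUES ('Doe', 'Jane');
SELECT lastval();   -- audit_log.id, not persons.id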
Option 3: INSERT with RETURNING
INSERT INTO persons (lastname,firstname) VALUES ('Smith', 'John') RETURNING id;
This is the cleanest, most efficient and safest way to get the id. It doesn't have any of the risks of the previous options.
Drawbacks? Almost none: you might need to modify the way you call your INSERT statement (in the worst case, perhaps your API or DB layer does not expect an INSERT to return a value); it's not standard SQL (who cares); it has been available since PostgreSQL 8.2 (Dec 2006...).
Conclusion: If you can, go for option 3. Otherwise, prefer option 1.
Note: all these methods are useless if you intend to get the last globally inserted id (not necessarily in your session). For this, you must resort to select max(id) from table (of course, this will not read uncommitted inserts from other transactions).

How to AUTO_INCREMENT in db2?

I thought this would be simple, but I can't seem to use AUTO_INCREMENT in my DB2 database. I did some searching and people seem to be using GENERATED BY DEFAULT, but this doesn't work for me.
If it helps, here's the table I want to create with the sid being auto incremented.
create table student(
sid integer NOT NULL <auto increment?>,
sname varchar(30),
PRIMARY KEY (sid)
);
Any pointers are appreciated.
What you're looking for is called an IDENTITY column:
create table student (
sid integer not null GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1)
,sname varchar(30)
,PRIMARY KEY (sid)
);
A sequence is another option for doing this, but you need to determine which one is proper for your particular situation. Read this for more information comparing sequences to identity columns.
You will have to create an auto-increment field with a sequence object (this object generates a number sequence).
Use the following CREATE SEQUENCE syntax:
CREATE SEQUENCE seq_person
MINVALUE 1
START WITH 1
INCREMENT BY 1
CACHE 10
The code above creates a sequence object called seq_person that starts with 1 and increments by 1. It will also cache up to 10 values for performance; the cache option specifies how many sequence values are kept in memory for faster access.
To insert a new record into the "Persons" table, use the NEXT VALUE FOR expression, which retrieves the next value from the seq_person sequence:
INSERT INTO Persons (P_Id, FirstName, LastName)
VALUES (NEXT VALUE FOR seq_person, 'Lars', 'Monsen');
The SQL statement above inserts a new record into the "Persons" table, with the "P_Id" column assigned the next number from the seq_person sequence, "FirstName" set to "Lars" and "LastName" set to "Monsen".
If you are still not able to make the column auto-increment while creating the table, as a workaround you can first create the table:
create table student(
sid integer NOT NULL,
sname varchar(30),
PRIMARY KEY (sid)
);
and then explicitly alter the column using the following:
alter table student alter column sid set GENERATED BY DEFAULT AS IDENTITY
Or
alter table student alter column sid set GENERATED BY DEFAULT AS IDENTITY (START WITH 100)
Here are a few optional parameters for creating "future safe" sequences:
CREATE SEQUENCE <NAME>
START WITH 1
INCREMENT BY 1
NO MAXVALUE
NO CYCLE
CACHE 10;