I have existing entries in a table with primary keys starting at 300 and ending at 400. Now I would like to insert new entries with an auto-generated primary key. The generation should skip the existing keys. Is there an easier way to do this than using triggers or a key table?
If you didn't change the seed, the new rows will continue from 401, so there shouldn't be any problem.
But if you want to change it, use ALTER SEQUENCE:
ALTER SEQUENCE yourTable_something_seq RESTART 500;
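If you don't know the sequence name, or you'd rather derive the restart point from the data, something like this should work (a sketch, assuming a table yourtable with a serial column id):
-- Look up the sequence Postgres created for the serial column
SELECT pg_get_serial_sequence('yourtable', 'id');  -- e.g. yourtable_id_seq
-- Move the sequence past the highest existing key; the next insert then gets MAX(id) + 1
SELECT setval('yourtable_id_seq', (SELECT MAX(id) FROM yourtable));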
I am trying to populate some tables using data that I extracted from Google BigQuery. For that purpose I essentially normalized a flattened table into multiple tables, carrying the primary key of each row into those tables. The important point is that I need to load those primary keys in order to satisfy foreign key references.
Having inserted this data into tables, I then try to add new rows to these tables. I don't specify the primary key, presuming that Postgres will auto-generate those key values.
However, I always get a 'duplicate key value violates unique constraint "xxx_pkey" ' type error, e.g.
"..duplicate key value violates unique constraint "collection_pkey" DETAIL: Key (id)=(1) already exists.
It seems this is triggered by including the primary key in the data when initializing the tables. That is, explicitly setting primary keys somehow seems to disable or reset the expected auto-generation of the primary key; i.e., I was expecting new rows to be assigned primary keys starting from the highest value already in a table.
Interestingly I get the same error whether I try to add a row via SQLAlchemy or from the psql console.
So, is this expected? And if so, is there some way to get the system to auto-generate keys again? There must be some hidden state that controls this: the schema is unchanged by directly inserting keys, but the database's behavior is changed by that action.
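For what it's worth, here is a minimal reproduction of what I'm seeing (a hypothetical table, not my real schema):
CREATE TABLE collection (id serial PRIMARY KEY, name text);
INSERT INTO collection (id, name) VALUES (1, 'a');  -- explicit key; the sequence is not advanced
INSERT INTO collection (name) VALUES ('b');         -- fails: the sequence hands out 1 again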
I am happy to provide additional information.
Thanks
I have a table without primary keys, and I need to perform some update or delete operations on it while other processes might add new rows and change other rows. In order to assess problems with data integrity and deadlocks, I want to know whether anything, or everything, is locked when there are no primary keys.
I hope you can help me.
Let me start by saying that I'm a Linux/Unix admin. That being said, my manager has tasked me with moving older PostgreSQL databases to a RedHat server running 8.4.20. I was successful moving a 7.2.1 db, but I'm running into issues moving a 7.4.20 db.
I use pg_dump -c filename and psql < filename. For the problematic db everything runs until I get to a CREATE CONSTRAINT TRIGGER statement. If I run it as it is in the file I get:
NOTICE: ignoring incomplete trigger group for constraint "" FOREIGN KEY data(ups) REFERENCES upsinfo(ups)
DETAIL: Found referenced table's DELETE trigger.
CREATE TRIGGER
If I run set schema 'pg_catalog'; I get:
ERROR: relation "upsinfo" does not exist
The tables (I think) involved are:
CREATE TABLE upsinfo (
ups text NOT NULL,
ipaddr inet,
rcomm text,
wcomm text,
reachable boolean,
managed boolean,
comments text,
region text
);
CREATE TABLE data (
date timestamp with time zone,
ups text,
mib text,
value text
);
The problem trigger statement:
CREATE CONSTRAINT TRIGGER "<unnamed>"
AFTER DELETE ON upsinfo
FROM data
NOT DEFERRABLE INITIALLY IMMEDIATE
FOR EACH ROW
EXECUTE PROCEDURE "RI_FKey_cascade_del"('<unnamed>', 'data', 'upsinfo', 'UNSPECIFIED', 'ups', 'ups');
I know that the RI_FKey_cascade_del function is defined differently in the different versions of pg_catalog. Note that search_path is set to 'public, pg_catalog', so I'm also confused about why I have to set the schema.
Again I’m not a real PostgreSQL DBA so try to be kind.
Oof, those are really old postgres versions, including the version you're upgrading to (8.4 was released in 2009, and support ended in 2014).
The short answer is that, as long as upsinfo and data are being created and populated, you're probably fine, and good to go. But one of your foreign key relationships is broken.
The long answer, well, let me see if I can explain what is going on (or, at least, what I think is going on).
I'm guessing that the original table definition of data included something like FOREIGN KEY (ups) REFERENCES upsinfo (ups) ON DELETE CASCADE. That causes Postgres to automatically create a pair of constraint triggers: (1) every time there's a new row in data, make sure its ups column matches an existing row in upsinfo, and (2) every time you delete a row from upsinfo, delete the corresponding rows in data, based on the matching ups value.
That (not very informative) error message can come up when the foreign key relationship doesn't work. In order for a foreign key to make sense, the referenced value needs to be unique -- there should be only one row in upsinfo for each distinct value of ups. In order for postgres to know that, there needs to be a unique index or primary key on upsinfo.ups.
In this case, one of a couple things could be breaking it:
There's no primary key or unique index on upsinfo.ups (postgres should not have allowed a foreign key, but may have in very old versions)
There used to be a unique index, but it hadn't properly enforced uniqueness, so it didn't get successfully imported (a bug, again likely from a very old version)
In either case, if that foreign key relationship is important, you can try to fix it once the import is complete. Start by trying to make a unique index on upsinfo.ups, and see if you have problems. If you do, resolve the duplicate entries, and try again till it works. Then issue something like:
ALTER TABLE data
ADD FOREIGN KEY (ups) REFERENCES upsinfo (ups) ON DELETE CASCADE;
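For the first step, something along these lines may help (the index name here is just an illustration):
-- Try to enforce uniqueness; this fails if duplicates exist
CREATE UNIQUE INDEX upsinfo_ups_key ON upsinfo (ups);
-- If it does fail, list the duplicated values so you can clean them up
SELECT ups, count(*) FROM upsinfo GROUP BY ups HAVING count(*) > 1;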
Of course, if things are working, it's possible you don't need to fix the foreign key, in which case you're probably able to ignore those errors and just move forward.
Hope that helps, and good luck!
This seems to be part of an ON DELETE CASCADE constraint. If I were you I would delete all such statements and replace them with a proper constraint definition on the target table.
The table definition would then look like this:
CREATE TABLE bookings (
  boo_id serial NOT NULL,
  boo_hotelid character varying NOT NULL,
  boo_roomid integer NOT NULL,
  CONSTRAINT pk_bookings
    PRIMARY KEY (boo_id),
  CONSTRAINT fk_bookings_boo_roomid
    FOREIGN KEY (boo_roomid)
    REFERENCES rooms (roo_id) MATCH SIMPLE
    ON UPDATE CASCADE ON DELETE CASCADE
) WITHOUT OIDS;
And this part is what will internally create the trigger:
CONSTRAINT fk_bookings_boo_roomid
  FOREIGN KEY (boo_roomid)
  REFERENCES rooms (roo_id) MATCH SIMPLE
  ON UPDATE CASCADE ON DELETE CASCADE
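Since your tables already exist after the import, the equivalent for an existing table would be an ALTER TABLE along these lines (the constraint name is just an example, and upsinfo.ups needs a unique index or primary key first):
ALTER TABLE data
  ADD CONSTRAINT fk_data_ups  -- hypothetical name
  FOREIGN KEY (ups) REFERENCES upsinfo (ups)
  ON DELETE CASCADE;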
But, to be honest, I do not understand upgrading to an unsupported version. You know that Postgres is at version 9.5 now, right?
I made a mistake when creating my table: the primary key was incorrect. I deleted the constraint, and now I don't have a primary key in my table, only the field with the data. Now I want to make this field an auto-increment primary key again without losing my data. How can I do this?
I tried this:
ALTER TABLE name_table ADD COLUMN name_column serial primary key;
But with this I lose my data and create a new column, which I don't want.
Try this:
ALTER TABLE table_name ADD CONSTRAINT some_name PRIMARY KEY (name_column);
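Note that this restores the primary key but not the auto-increment default. To get that back as well, attach a sequence as the column's default, as the sequence-based answer below spells out; in short (sequence name assumed for illustration):
CREATE SEQUENCE table_name_name_column_seq OWNED BY table_name.name_column;
ALTER TABLE table_name ALTER COLUMN name_column SET DEFAULT nextval('table_name_name_column_seq');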
My suggestion:
First, back up your database in some restorable form (SQL, CSV, XML, Excel, whatever).
Then alter your table structure or column data type, from the UI or the command line.
If the data in your table is then lost or gone, restore only your backed-up data (not the table structure).
After that you will have the changed column data type and still have your data. I hope it works.
Hi guys, I was trying several ways and I found this one; maybe somebody else can use it later:
Create a sequence: a sequence is the mechanism PostgreSQL uses to implement auto-increment fields. An auto-increment field is usually also a primary key; that is not a rule, but in most cases it is.
Creating a sequence looks like this:
CREATE SEQUENCE exemplo_id_seq
  INCREMENT 1  -- the counter goes up by 1 each time
  MINVALUE 1
  NO MAXVALUE
  START 1      -- counting starts at 1
  CACHE 1;
After this, you only have to give this sequence to the affected field as its default, using NEXTVAL, like this:
ALTER TABLE table_name ALTER COLUMN id SET DEFAULT nextval('exemplo_id_seq'::regclass);
This works well, without losing the data that was already there.
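One thing to watch: if the table already holds rows, you probably also want to move the sequence past the existing maximum, otherwise the first generated values can collide with old ones (assuming the column is id):
SELECT setval('exemplo_id_seq', (SELECT MAX(id) FROM table_name));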
I'm currently working with Firebird and attempting to use its UPDATE OR INSERT functionality to solve a particular new case in our software. Basically, we need to pull data off of a source and put it into an existing table, then update that data at regular intervals, adding any new references. The source is not a database, so it isn't a matter of using MERGE to link two tables (unless we make a separate table and then merge it, but that seems unnecessary).
The problem rests on the fact that we cannot use the primary key of the existing table for matching, because we need to match on the ID we get from the source. We can use the MATCHING clause no problem, but the issue is that the primary key of the existing table gets updated to the next key every time, because it has to be in the query in case an insert happens. Here is the query (along with the C# parameter additions) to demonstrate the problem.
UPDATE OR INSERT INTO existingtable (PrimaryKey, UniqueSourceID, Data) VALUES (?,?,?) MATCHING (UniqueSourceID);
this.AddInParameter("PrimaryKey", FbDbType.Integer, itemID);
this.AddInParameter("UniqueSourceID", FbDbType.Integer, source.id);
this.AddInParameter("Data", FbDbType.SmallInt, source.data);
The problem is that every time the UPDATE fires, the primary key also changes to the next incremented key. I need a way to leave the primary key alone when updating, but still supply it when inserting.
Do not generate the primary key manually; let a trigger generate it when necessary:
CREATE SEQUENCE seq_existingtable;

SET TERM ^ ;

CREATE TRIGGER Gen_PK FOR existingtable
ACTIVE BEFORE INSERT
AS
BEGIN
  IF (NEW.PrimaryKey IS NULL) THEN
    NEW.PrimaryKey = NEXT VALUE FOR seq_existingtable;
END^

SET TERM ; ^
Now you can omit the PK field from your statement:
UPDATE OR INSERT INTO existingtable (UniqueSourceID, Data) VALUES (?,?) MATCHING (UniqueSourceID);
and when the statement ends up inserting, the trigger will take care of creating the PK. If you need to know the generated PK, use the RETURNING clause of the UPDATE OR INSERT statement.
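For example, something like this should hand the generated key back (parameters as in your C# snippet):
UPDATE OR INSERT INTO existingtable (UniqueSourceID, Data)
VALUES (?, ?)
MATCHING (UniqueSourceID)
RETURNING PrimaryKey;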