Precondition:
I am using Liquibase with SQL scripts in my app. I started testing with Oracle DB, but now I need to switch to PostgreSQL DB.
Problem:
When I added constraints, I didn't add the names of the constraints.
The Liquibase changelog contains a script for dropping the unique and primary key constraints:
alter table SCENARIO drop primary key/
alter table SCENARIO drop unique (OWNER_ID)/
This syntax doesn't work with PostgreSQL.
Could you give some advice on how to resolve this problem?
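For reference, PostgreSQL only lets you drop a constraint by its name, so the equivalent statements would have to look something like this (the names below are just what PostgreSQL generates by default; I have not verified them in my schema):
alter table SCENARIO drop constraint scenario_pkey;
alter table SCENARIO drop constraint scenario_owner_id_key;
-- the actual names can be looked up with:
select conname, contype from pg_constraint where conrelid = 'scenario'::regclass;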
Let me start by saying that I'm a Linux/Unix admin. That being said, my manager has tasked me with moving older PostgreSQL databases to a RedHat server running 8.4.20. I was successful moving a 7.2.1 db but I'm running into issues moving a 7.4.20 db.
I use pg_dump -c filename and psql < filename. For the problematic db everything runs until I get to a CREATE CONSTRAINT TRIGGER statement. If I run it as it is in the file I get:
NOTICE: ignoring incomplete trigger group for constraint "" FOREIGN KEY data(ups) REFERENCES upsinfo(ups)
DETAIL: Found referenced table's DELETE trigger.
CREATE TRIGGER
If I run set schema 'pg_catalog'; I get:
ERROR: relation "upsinfo" does not exist
The tables (I think) involved are:
CREATE TABLE upsinfo (
ups text NOT NULL,
ipaddr inet,
rcomm text,
wcomm text,
reachable boolean,
managed boolean,
comments text,
region text
);
CREATE TABLE data (
date timestamp with time zone,
ups text,
mib text,
value text
);
The problem trigger statement:
CREATE CONSTRAINT TRIGGER "<unnamed>"
AFTER DELETE ON upsinfo
FROM data
NOT DEFERRABLE INITIALLY IMMEDIATE
FOR EACH ROW
EXECUTE PROCEDURE "RI_FKey_cascade_del"('<unnamed>', 'data', 'upsinfo', 'UNSPECIFIED', 'ups', 'ups');
I know that the RI_FKey_cascade_del function is defined differently in the different versions of pg_catalog. Note that search_path is set to ‘public, pg_catalog’ so I’m also confused why I have to set the schema.
Again I’m not a real PostgreSQL DBA so try to be kind.
Oof, those are really old postgres versions, including the version you're upgrading to (8.4 was released in 2009, and support ended in 2014).
The short answer is that, as long as upsinfo and data are being created and populated, you're probably fine, and good to go. But one of your foreign key relationships is broken.
The long answer, well, let me see if I can explain what is going on (or, at least, what I think is going on).
I'm guessing that the original table definition of data included something like FOREIGN KEY (ups) REFERENCES upsinfo (ups) ON DELETE CASCADE. That causes postgres to automatically create some constraint triggers: 1- every time there's a new row in data, make sure that its ups column matches an existing row in upsinfo, and 2- every time you delete a row from upsinfo, delete the corresponding rows in data, based on the matching ups value.
That (not very informative) error message can come up when the foreign key relationship doesn't work. In order for a foreign key to make sense, the referenced value needs to be unique -- there should be only one row in upsinfo for each distinct value of ups. In order for postgres to know that, there needs to be a unique index or primary key on upsinfo.ups.
In this case, one of a couple things could be breaking it:
There's no primary key or unique index on upsinfo.ups (postgres should not have allowed a foreign key, but may have in very old versions)
There used to be a unique index, but it hadn't properly enforced uniqueness, so it didn't get successfully imported (a bug, again likely from a very old version)
In either case, if that foreign key relationship is important, you can try to fix it once the import is complete. Start by trying to make a unique index on upsinfo.ups, and see if you have problems. If you do, resolve the duplicate entries, and try again till it works. Then issue something like:
ALTER TABLE data
ADD FOREIGN KEY (ups) REFERENCES upsinfo (ups) ON DELETE CASCADE;
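For the unique index step, something along these lines should work (just a sketch, reusing the names from your dump; the index name is made up):
CREATE UNIQUE INDEX upsinfo_ups_key ON upsinfo (ups);
-- if that fails, these are the duplicate rows to clean up first:
SELECT ups, count(*) FROM upsinfo GROUP BY ups HAVING count(*) > 1;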
Of course, if things are working, it's possible you don't need to fix the foreign key, in which case you're probably able to ignore those errors and just move forward.
Hope that helps, and good luck!
This seems to be part of an ON DELETE CASCADE foreign key constraint. If I were you I would delete all such statements and replace them with a proper constraint definition on the target table.
The table definition would then look like this:
CREATE TABLE bookings (
boo_id serial NOT NULL,
boo_hotelid character varying NOT NULL,
boo_roomid integer NOT NULL,
CONSTRAINT pk_bookings
PRIMARY KEY (boo_id),
CONSTRAINT fk_bookings_boo_roomid
FOREIGN KEY (boo_roomid)
REFERENCES rooms (roo_id) MATCH SIMPLE
ON UPDATE CASCADE ON DELETE CASCADE
) WITHOUT OIDS;
And this part is what will internally create the trigger:
CONSTRAINT fk_bookings_boo_roomid
FOREIGN KEY (boo_roomid)
REFERENCES rooms (roo_id) MATCH SIMPLE
ON UPDATE CASCADE ON DELETE CASCADE
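Translated to the tables from your dump it would be roughly the following (only a sketch: the constraint names are made up, and upsinfo.ups needs to be unique for the foreign key to be accepted):
ALTER TABLE upsinfo
  ADD CONSTRAINT pk_upsinfo PRIMARY KEY (ups);
ALTER TABLE data
  ADD CONSTRAINT fk_data_ups FOREIGN KEY (ups)
  REFERENCES upsinfo (ups)
  ON DELETE CASCADE;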
But, to be honest, I don't understand the point of upgrading to an unsupported version. You know that Postgres is at version 9.5 now, right?
I have a PostgreSQL 9.5 back-end and with LibreOffice Base v 5.1.3.2 (x64) I am trying to create some data entry forms for various tables, all with 1:many relationships. These tables all have UUID auto-generated primary keys.
LibreOffice does not like these PostgreSQL auto-generated primary keys. It keeps giving me errors when I try to create a new record, sometimes when I try to edit a new record, and won't give me access to the sub-form after I try to create a new parent record. It's like it cannot commit the records and isn't getting an "update" from PSQL when a new record is created.
I have discovered on the net that this is a known problem with all PostgreSQL auto-generated primary keys (UUID, SERIAL, etc.) and the LibreOffice native PostgreSQL drivers.
Does anyone have a solution to this problem?
Phil
We have two copies of a simple application that is based on SQLite. The application has 10 tables with a variety of relations between them. We would like to merge the databases into a single Postgres database with the same schema. We can use Talend to facilitate this, but the issue is that there would be duplicate keys (as the two source databases are independent). Is there a systematic method to insert the data into Postgres using the original keys plus an offset, so they don't collide with the keys already loaded from the first database?
Step 1. Restore the first database.
Step 2. Change the foreign keys of all tables by adding the on update cascade option.
For example, if the column table_b.a_id refers to the column table_a.id:
alter table table_b
drop constraint table_b_a_id_fkey,
add constraint table_b_a_id_fkey
foreign key (a_id) references table_a(id)
on update cascade;
Step 3. Update the primary keys of the tables by adding the desired offset, e.g.:
update table_a
set id = 10000 + id;
Thanks to the on update cascade option from step 2, the referencing column table_b.a_id is updated automatically.
Step 4. Restore the second database.
If you can edit the script containing the database schema (or do the transfer manually with your own script), you can merge steps 1 and 2 by editing the script before the restore (adding the on update cascade option to the foreign keys in the table declarations).
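One caveat, with a small sketch (assuming plain integer keys): the offset in step 3 has to be at least as large as the biggest primary key in the second database, otherwise the second restore will still produce duplicates. You can check this before picking a value:
-- run against the second database before restoring it:
select max(id) from table_a;
-- then use an offset in step 3 that is >= the largest value found across the tables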
I designed an entity data model with two entities between which there exists a many to many relationship. When I auto-generate SQL code to generate the database for this model, it has generated a table (two columns) to keep track of this many-to-many association. However, this table has a PRIMARY KEY NONCLUSTERED on both columns.
Since I want this to work on SQL Azure which doesn't like tables with only nonclustered indices, I was wondering whether there is a good way of telling the code generation to generate clustered indices? Thanks!
I have another file called Model.indexes.sql that contains scripts to create additional indexes beyond the basic ones EF generates, such as those for performance optimizations.
Although this is not ideal, I added into it an index drop and create for each EF association to convert the non-clustered primary keys into clustered ones:
ALTER TABLE [dbo].[MyAssociation]
DROP CONSTRAINT [PK_MyAssociation]
GO
ALTER TABLE [dbo].[MyAssociation]
ADD CONSTRAINT [PK_MyAssociation]
PRIMARY KEY CLUSTERED ([Table1_Id], [Table2_Id] ASC);
GO
This is executed after every "Generate Database from Model...". I would love a more elegant solution.
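In the meantime, a quick way to see which association tables still need a drop/create pair in Model.indexes.sql is to list the primary keys that come out non-clustered after generation (a T-SQL sketch):
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name AS pk_name
FROM sys.indexes AS i
WHERE i.is_primary_key = 1
  AND i.type_desc = 'NONCLUSTERED';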