Dropping primary key from a materialized view but unable to recreate it - why? - oracle10g

I have created a materialized view with fast refresh. It has a primary key (created with a USING INDEX clause) which I want to alter. I ran the following statements in sqlplus:
SQL> alter table
2 MV
3 drop constraint PK_MV;
Table altered.
SQL> alter table
2 MV
3 add constraint PK_MV primary key
4 (
5 A_ID
6 , B_ID
7 )
8 using index
9 tablespace IDX;
alter table
*
ERROR at line 1:
ORA-00955: name is already being used by existing object
It seems that the primary key PK_MV still exists. However, isn't it dropped by the first statement?
Oracle version is Enterprise Edition Release 10.2.0.5.0 - 64bit.

Oracle tends to do certain things in an odd way, as if out of pure spite, causing odd errors; to make things worse, when errors occur, it tends to give error messages that range from useless to outright misleading.
In your case, dropping the constraint PK_MV does not also drop the index behind it, so you are still left with a PK_MV index. Then, when you try to re-create the constraint, Oracle insists on also creating an index for it, and it will not tolerate the possibility that an index with that name already exists.
To make matters worse, the error message gives you no hint about the nature of the existing object, so it leaves you with the impression that the existing object is a constraint, since that is what you are trying to create, while in fact it is an index, which you never dealt with, have no use for, and probably don't want to know anything about.
Ah, lovely Oracle. My condolences for having to use it.
So, try the following:
alter table MV drop constraint PK_MV drop index;
The drop index clause causes the index behind the constraint to be dropped along with the constraint itself. (The cascade keyword, despite its promising name, only drops other constraints that depend on this one, not the index.) If your version rejects that syntax, drop the leftover index explicitly with drop index PK_MV; before re-creating the constraint.
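Incidentally, if you want to see for yourself what the clashing object is, the data dictionary will tell you what the error message won't. A generic query against the standard user_objects view:
-- list any object that already uses the name PK_MV
select object_name, object_type
from user_objects
where object_name = 'PK_MV';
After the plain drop constraint, this should show PK_MV still hanging around as an INDEX.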

Related

A single Postgresql table is not allowing deletes and foreign keys are not working

We have a PostgreSQL database with many tables. All of the tables seem to be functioning perfectly except for one. In the last day or two it has stopped performing row deletes. When we try something simple like
delete from bad_table where id_foo = 123;
It acts as if it has successfully deleted the row. But when we do
select * from bad_table where id_foo = 123;
the row is still there.
The same type of queries work fine on all the other tables we tried.
In addition, the foreign keys on this table are not working. There is a foreign key constraint on a column that references a different table. There is an id in the "bad_table", but that id does not exist in the referenced table. Again, foreign key constraints appear to be working fine in all other tables; it is just this one. We tried dropping and recreating the foreign key (which seemed to be successful), but it had no effect.
Between my coworkers and myself we probably have 80 years of relational database experience across Oracle, SQL Server, Postgres, etc., and none of us has ever seen anything like this. We've been banging our heads against a wall and are now reaching out to the wider world to see if anyone has any ideas of what we could try. Has anyone else ever seen something like this in Postgres?
It turned out that the issue with foreign keys was solved by dropping the foreign key constraint and then immediately adding it again.
The issue with not being able to delete rows was fixed by dropping a trigger that was called on row delete and then immediately recreating the same trigger.
I don't know what happened, but it looks as though a constraint and a trigger on that particular table were corrupted.
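In other words, the repair amounted to something like this (all object and column names below are placeholders; the question doesn't name the real ones):
-- recreate the foreign key (hypothetical names)
ALTER TABLE bad_table DROP CONSTRAINT bad_table_other_id_fkey;
ALTER TABLE bad_table
  ADD CONSTRAINT bad_table_other_id_fkey
  FOREIGN KEY (other_id) REFERENCES other_table (id);
-- recreate the delete trigger (hypothetical names)
DROP TRIGGER bad_table_delete_trg ON bad_table;
CREATE TRIGGER bad_table_delete_trg
  BEFORE DELETE ON bad_table
  FOR EACH ROW EXECUTE PROCEDURE bad_table_delete_fn();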

Unexpected creation of duplicate unique constraints in Postgres

I am writing an idempotent schema change script for a Postgres 12 database. However, I noticed that if I include IF NOT EXISTS in an ADD COLUMN statement, then even when the column already exists it adds a duplicate index for the uniqueness constraint that already exists. Simple example:
-- set up base table
CREATE TABLE IF NOT EXISTS test_table
(id SERIAL PRIMARY KEY
);
-- statement intended to be idempotent
ALTER TABLE test_table
ADD COLUMN IF NOT EXISTS name varchar(50) UNIQUE;
Running this script creates a new index test_table_name_key[n] each time it is run. I can't find anything about this in the Postgres documentation and don't understand why it is allowed to happen. If I break it into two parts, e.g.:
ALTER TABLE test_table
ADD COLUMN IF NOT EXISTS name varchar(50);
ALTER TABLE test_table
ADD CONSTRAINT test_table_name_key UNIQUE (name);
Then the transaction fails because Postgres rejects the creation of a constraint that already exists (which I can then catch in a DO ... EXCEPTION block). As far as I can tell this is because, with this approach, I am forced to give the constraint a name. This contrasts with ALTER COLUMN ... SET NOT NULL, which can be run multiple times without error or side effects as far as I can tell.
Question: why does it add a duplicate unique constraint and are there any problems with having multiple identical indexes on a table column? (I think this is a subtle 'error' and only spotted it by chance so am concerned it may arise in a production situation)
You can create multiple unique constraints on the same column as long as they have different names, simply because there is nothing in the PostgreSQL code that forbids that. Each unique constraint will create a unique index with the same name, because that is how unique constraints are implemented.
This can be a valid use case: for example, if the index is bloated, you could create a new constraint and then drop the old one.
But normally, it is useless and does harm, because each index will make data modifications on the table slower.
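If the goal is an idempotent script, one workaround is to check the catalog before adding the constraint. A sketch, assuming the default constraint name test_table_name_key from the question:
DO $$
BEGIN
    -- only add the constraint if no constraint of that name exists on the table
    IF NOT EXISTS (
        SELECT 1
        FROM pg_constraint
        WHERE conname  = 'test_table_name_key'
          AND conrelid = 'test_table'::regclass
    ) THEN
        ALTER TABLE test_table
            ADD CONSTRAINT test_table_name_key UNIQUE (name);
    END IF;
END
$$;
This sidesteps both the duplicate indexes from ADD COLUMN ... UNIQUE and the error from a blind ADD CONSTRAINT.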

Is it safe to drop a table column constraint in postgres

I'm looking at a production table in postgres with the following constraint which due to third party collaboration we need to remove.
"customer_email_unique" UNIQUE CONSTRAINT, btree (customer_email)
This is a production table, what risks are there if I remove the constraint? If it causes problems can it be recreated after to an existing table, with existing data in it?
It looks like the command to drop the constraint is
ALTER TABLE your_table DROP CONSTRAINT customer_email_unique;
We're on a React/Node stack and I can see what the code does with regard to what will happen if the constraint is dropped; my lack of knowledge is more about the data and what happens when you drop a constraint.
Thanks,
This is a production table, what risks are there if I remove the constraint? If it causes problems can it be recreated after to an existing table, with existing data in it?
The risk is that you'll drop the constraint and non-unique entries will be inserted. You won't be able to reapply the unique constraint without deleting the duplicate rows or updating them to be unique. Another risk is that you'll drop the wrong constraint, or reapply the constraint incorrectly. Finally, there may be code which assumes that column is unique.
To mitigate this risk, write a script to drop the constraint ("up"), and one to restore uniqueness and reapply the constraint ("down"). Test it on an equivalent table on a non-production database.
This is the general idea of schema migrations. Every schema change is done by two scripts, an "up" script to apply the change and a "down" script to undo the change. Many ORMs, such as typeorm, support migrations. They make schemas reproducible so all environments know they have the same schemas, schemas can be tested, and in general mitigate the risk of schema changes.
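As a sketch of such a pair of scripts (table and constraint names are taken from the question; the rule for which duplicate row to keep is an assumption you will want to adjust):
-- up: drop the constraint
ALTER TABLE your_table DROP CONSTRAINT customer_email_unique;
-- down: remove duplicates first (this keeps one arbitrary row per
-- email; decide deliberately which rows should survive), then reapply
DELETE FROM your_table a
USING your_table b
WHERE a.customer_email = b.customer_email
  AND a.ctid < b.ctid;
ALTER TABLE your_table
  ADD CONSTRAINT customer_email_unique UNIQUE (customer_email);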

How to ensure validity of foreign keys in Postgres

Using Postgres 10.6
The issue:
Some data in my tables violates the foreign key constraints (not sure how). The constraints are ON DELETE CASCADE ON UPDATE CASCADE
On a pg_dump of the database, those foreign keys are dropped (due to being in an invalid state?)
A pg_restore is done into a blank database, which no longer has the foreign keys
The new database has all its primary keys updated to valid keys not used in a second database. Tables which had invalid data do not have their foreign keys updated, due to the now missing constraint.
A pg_dump of the new database is done, then the database is deleted
On a pg_restore into a second database which has the foreign key constraints, the data gets imported in an invalid state, and corrupts the new database.
What I want to do is this: every few hours (or once a day, depending on how long the query takes), verify that all data in all the tables which have foreign keys is valid.
I have read about ALTER TABLE ... VALIDATE CONSTRAINT, but this wouldn't fix my issue, as the constraints are not currently marked as NOT VALID. I know I could do statements like:
DELETE FROM a WHERE a.b_id NOT IN (SELECT b.id FROM b);
However, I have 144 tables with foreign keys, so this would be rather tedious. I would also maybe not want to delete the data immediately, but rather log the issue and inform the user about a correction which will happen.
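One way around the tedium, rather than writing 144 checks by hand, is to generate them from the catalogs. A sketch (it assumes single-column foreign keys, as in the example table below; composite keys would need more work):
-- emit one orphan-check query per single-column foreign key
SELECT format(
         'SELECT count(*) FROM %s c WHERE c.%I IS NOT NULL AND NOT EXISTS (SELECT 1 FROM %s p WHERE p.%I = c.%I);',
         con.conrelid::regclass, a.attname,
         con.confrelid::regclass, af.attname, a.attname
       ) AS check_query
FROM pg_constraint con
JOIN pg_attribute a  ON a.attrelid  = con.conrelid  AND a.attnum  = con.conkey[1]
JOIN pg_attribute af ON af.attrelid = con.confrelid AND af.attnum = con.confkey[1]
WHERE con.contype = 'f'
  AND array_length(con.conkey, 1) = 1;
Each generated statement counts the orphaned rows behind one foreign key; any non-zero count points at a corrupted table.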
Of course, I'd like to know how the original corruption occurred, and prevent that; however at the moment I'm just trying to prevent it from spreading.
Example table:
CREATE TABLE dependencies (
...
from_task int references tasks(id) ON DELETE CASCADE ON UPDATE CASCADE NOT NULL,
to_task int references tasks(id) ON DELETE CASCADE ON UPDATE CASCADE NOT NULL,
...
);
Dependencies will end up with values for to_task and from_task which do not exist in the tasks table.
Note:
Have tried EXPLAIN ANALYZE; nothing odd.
pg_tablespace has just two records: pg_default and pg_global.
relforcerowsecurity and relispartition are both 'false' on both tables.
Arguments to pg_dump (from a C++ call): arguments << "--file=" + fileName << "--username=" + connection.userName() << databaseName << "--format=c"
This is either an index (or table) corruption problem, or the constraint was created NOT VALID to defer the validity check until later.
pg_dump will never silently "drop" a constraint; perhaps there was an error while restoring the dump that you didn't notice.
The proper fix is to clean up the data that violate the constraint and re-create it.
If it is a data corruption problem, check your hardware.
There is no need to regularly check for data corruption; PostgreSQL is not in the habit of corrupting data by itself.
The best test would be to take a pg_dump regularly and see if restoring the dump causes any errors.
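A minimal sketch of that test (the database names are placeholders; --exit-on-error makes pg_restore stop at the first problem instead of ploughing on):
pg_dump --format=c --file=check.dump mydb
createdb mydb_check
pg_restore --exit-on-error --dbname=mydb_check check.dump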

Errors creating constraint trigger

Let me start by saying that I’m a Linux/Unix admin. That being said my manager has tasked me with moving older PostgreSQL databases to a RedHat server running 8.4.20. I was successful moving a 7.2.1 db but I’m running into issues moving a 7.4.20 db.
I use pg_dump -c filename and psql < filename. For the problematic db everything runs until I get to a CREATE CONSTRAINT TRIGGER statement. If I run it as it is in the file I get:
NOTICE: ignoring incomplete trigger group for constraint "" FOREIGN KEY data(ups) REFERENCES upsinfo(ups)
DETAIL: Found referenced table's DELETE trigger.
CREATE TRIGGER
If I run set schema 'pg_catalog'; I get:
ERROR: relation "upsinfo" does not exist
The tables (I think) involved are:
CREATE TABLE upsinfo (
ups text NOT NULL,
ipaddr inet,
rcomm text,
wcomm text,
reachable boolean,
managed boolean,
comments text,
region text
);
CREATE TABLE data (
date timestamp with time zone,
ups text,
mib text,
value text
);
The problem trigger statement:
CREATE CONSTRAINT TRIGGER "<unnamed>"
AFTER DELETE ON upsinfo
FROM data
NOT DEFERRABLE INITIALLY IMMEDIATE
FOR EACH ROW
EXECUTE PROCEDURE "RI_FKey_cascade_del"('<unnamed>', 'data', 'upsinfo', 'UNSPECIFIED', 'ups', 'ups');
I know that the RI_FKey_cascade_del function is defined differently in the different versions of pg_catalog. Note that search_path is set to ‘public, pg_catalog’, so I'm also confused about why I have to set the schema.
Again I’m not a real PostgreSQL DBA so try to be kind.
Oof, those are really old postgres versions, including the version you're upgrading to (8.4 was released in 2009, and support ended in 2014).
The short answer is that, as long as upsinfo and data are being created and populated, you're probably fine, and good to go. But one of your foreign key relationships is broken.
The long answer, well, let me see if I can explain what is going on (or, at least, what I think is going on).
I'm guessing that the original table definition of data included something like FOREIGN KEY (ups) REFERENCES upsinfo (ups) ON DELETE CASCADE. That causes postgres to automatically create some constraint triggers: (1) every time there's a new row in data, make sure that its ups column matches an existing row in upsinfo, and (2) every time you delete a row from upsinfo, delete the corresponding rows in data, based on the matching ups value.
That (not very informative) error message can come up when the foreign key relationship doesn't work. In order for a foreign key to make sense, the referenced value needs to be unique -- there should be only one row in upsinfo for each distinct value of ups. In order for postgres to know that, there needs to be a unique index or primary key on upsinfo.ups.
In this case, one of a couple things could be breaking it:
There's no primary key or unique index on upsinfo.ups (postgres should not have allowed a foreign key, but may have in very old versions)
There used to be a unique index, but it hadn't properly enforced uniqueness, so it didn't get successfully imported (a bug, again likely from a very old version)
In either case, if that foreign key relationship is important, you can try to fix it once the import is complete. Start by trying to make a unique index on upsinfo.ups, and see if you have problems. If you do, resolve the duplicate entries, and try again till it works. Then issue something like:
ALTER TABLE data
ADD FOREIGN KEY (ups) REFERENCES upsinfo (ups) ON DELETE CASCADE;
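The unique-index step mentioned above might look like this (the index name is my own placeholder):
-- give upsinfo.ups the uniqueness the foreign key requires
CREATE UNIQUE INDEX upsinfo_ups_key ON upsinfo (ups);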
Of course, if things are working, it's possible you don't need to fix the foreign key, in which case you're probably able to ignore those errors and just move forward.
Hope that helps, and good luck!
This trigger seems to be part of an old-style ON DELETE CASCADE foreign key. If I were you I would delete all such statements from the dump and replace them with a proper constraint definition on the target table.
The table definition should then look like this:
CREATE TABLE bookings (
boo_id serial NOT NULL,
boo_hotelid character varying NOT NULL,
boo_roomid integer NOT NULL,
CONSTRAINT pk_bookings
PRIMARY KEY (boo_id),
CONSTRAINT fk_bookings_boo_roomid
FOREIGN KEY (boo_roomid)
REFERENCES rooms (roo_id) MATCH SIMPLE
ON UPDATE CASCADE ON DELETE CASCADE
) WITHOUT OIDS;
And this part is what will internally create the trigger:
CONSTRAINT fk_bookings_boo_roomid
FOREIGN KEY (boo_roomid)
REFERENCES rooms (roo_id) MATCH SIMPLE
ON UPDATE CASCADE ON DELETE CASCADE
But, to be honest, I do not understand upgrading to an unsupported version. You know that Postgres is at version 9.5 now, right?