In PostgreSQL 9.2 / PostGIS 2.0.2 I had an index on a spatial column, created with
CREATE INDEX tiger_data_sld_the_geom_gist ON tiger_data.sld USING gist(the_geom);
Subsequently dropped the index with
DROP INDEX tiger_data_sld_the_geom_gist;
But now, when I try to recreate, I get this error:
# CREATE INDEX tiger_data_sld_the_geom_gist ON tiger_data.sld USING gist(the_geom);
ERROR: relation "tiger_data_sld_the_geom_gist" already exists
Dropping again doesn't work. It says that the index doesn't exist:
# DROP INDEX tiger_data_sld_the_geom_gist;
ERROR: index "tiger_data_sld_the_geom_gist" does not exist
I haven't found the relation "tiger_data_sld_the_geom_gist" in any list of database objects; I have tried DROP TABLE and searched around for solutions.
What is this mystery relation "tiger_data_sld_the_geom_gist", and how do I remove it so that I can create the index?
Edit:
I have also tried restarting the server, and dumping / dropping / reloading the table (dropped with CASCADE).
Unless you are setting the search_path GUC to (or at least including) the tiger_data schema, you need to schema-qualify the index name when issuing the DROP INDEX (I'd do so in any case, for safety):
DROP INDEX tiger_data.tiger_data_sld_the_geom_gist;
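Alternatively, putting tiger_data on the search_path makes the unqualified name resolve (a minimal sketch):
SET search_path TO tiger_data, public;
DROP INDEX tiger_data_sld_the_geom_gist;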
The schema qualification is needed because an index always goes into the same schema as the table it belongs to. If the above doesn't solve your problem, you can check whether this relation name exists, and in which schemas, with the following:
SELECT r.relname, r.relkind, n.nspname
FROM pg_class r INNER JOIN pg_namespace n ON r.relnamespace = n.oid
WHERE r.relname = 'tiger_data_sld_the_geom_gist';
It will return the kind (i for indexes, r for tables, S for sequences, v for views) of any relation named tiger_data_sld_the_geom_gist, along with the name of the schema it belongs to.
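For indexes specifically, the built-in pg_indexes view gives the same information. Note that if the mystery relation is not actually an index (which the CREATE INDEX error suggests), this returns no rows while the pg_class query above still finds it:
SELECT schemaname, tablename, indexname
FROM pg_indexes
WHERE indexname = 'tiger_data_sld_the_geom_gist';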
Though not particularly efficient, this appears to have done the trick (a command-line sketch follows the steps):
Dump the table with pg_dump.
Drop the table.
Dump the database with pg_dump.
Drop the database.
Recreate the database and reload from dump files.
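A minimal sketch of those steps from the shell (database and file names are placeholders):
pg_dump -t tiger_data.sld -f sld_table.sql mydb      # 1. dump the table
psql -c 'DROP TABLE tiger_data.sld CASCADE;' mydb    # 2. drop the table
pg_dump -f mydb.sql mydb                             # 3. dump the database
dropdb mydb                                          # 4. drop the database
createdb mydb                                        # 5. recreate the database
psql -f mydb.sql mydb                                # 6. reload the database dump...
psql -f sld_table.sql mydb                           #    ...then the table dump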
I am having a problem dropping a schema so that I can create it again.
When I run:
drop schema 'schema_name' cascade
I get the error message saying "schema does not exist".
But when I search pg_namespace, the 'schema_name' is still there; even with \dn in SQL shell, the 'schema_name' still exists.
I tried to run:
delete
from pg_namespace pn
where nspname = 'schema_name'
No rows were returned. When I ran it again, I found that the row had been deleted (I ran the SELECT query to check), but then the row was alive again with another oid.
So when I try to create a new schema with the same name, I get an error saying that the duplicate key value violates the unique constraint "pg_namespace_nspname_index": Key (nspname)=(schema_name) already exists.
So I cannot create the new schema with the same name, and in the navigator panel I can still see the schema_name schema.
How can I permanently delete/drop this schema correctly?
Congratulations. By messing with the catalog tables, you have probably destroyed this database beyond recovery. You cannot drop a schema by deleting a row from pg_namespace. This is the time to get your backup.
Before you did that, the problem was probably something simple, like an uppercase character in the schema name and missing double quotes.
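For example (the mixed-case name here is hypothetical): an identifier created with double quotes must be quoted the same way to be found, because unquoted names are folded to lowercase:
CREATE SCHEMA "Schema_Name";
DROP SCHEMA schema_name CASCADE;    -- fails: looks for lowercase schema_name
DROP SCHEMA "Schema_Name" CASCADE;  -- works: the quoted name matches exactly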
I have a table MAIN_SCHEMA.TEST in which I created an index on the column CHECK_ID.
CHECK_ID also has a FOREIGN KEY constraint in the TEST table.
This table contains only 50 records.
By mistake, the index got created in the default schema, as DEFAULT_SCHEMA.CHECK_ID_IDX:
CREATE INDEX DEFAULT_SCHEMA.CHECK_ID_IDX ON MAIN_SCHEMA.TEST(CHECK_ID ASC);
So I am trying to drop this index, but the drop statement gets stuck for a long time:
DROP INDEX DEFAULT_SCHEMA.CHECK_ID_IDX;
There were no locks on this table when I checked.
Instead of dropping and recreating the index in the right schema, could you just try to RENAME the index? It requires the existing SCHEMA.NAME pair together with the new name as input. It will not move any data; it just updates the metadata.
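A sketch of what that could look like, assuming a dialect with a RENAME INDEX statement such as SAP HANA (the question doesn't name the database, so the exact syntax is an assumption):
RENAME INDEX DEFAULT_SCHEMA.CHECK_ID_IDX TO CHECK_ID_IDX_OLD;  -- metadata-only; no data is moved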
In a Postgres database, I have a unique constraint and two unique indexes created for it. I deleted the constraint using the query below:
alter table my_schema.users drop constraint users_dept_uk
It dropped the constraint and one index, but the second index still exists.
The following query is still telling me the index exists:
SELECT r.relname, r.relkind, n.nspname
FROM pg_class r INNER JOIN pg_namespace n ON r.relnamespace = n.oid
WHERE r.relname = 'users_dept_idx';
It gives the following output:
users_dept_idx, i, my_schema
When I execute the query below:
drop index my_schema.users_dept_idx
I am getting the error:
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedObject) index "users_dept_idx" does not exist
What am I missing here? I am not able to delete it, and not able to insert data, because of this index which I no longer want.
It is weird that the index name needs quotation marks around it, but the command below worked absolutely fine and dropped the index:
drop index my_schema."users_dept_idx"
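If quoting matters, the stored name is probably not the plain lowercase string it appears to be (trailing whitespace or a look-alike character, for instance). A query along these lines (a sketch; the LIKE pattern is illustrative) shows the exact stored name and its length:
SELECT relname, length(relname) AS chars, octet_length(relname) AS bytes
FROM pg_class
WHERE relname LIKE '%users_dept%';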
I had a similar problem. It turns out that when you create an index, it is created in the same schema as the underlying table, and at creation time you don't have to use the "schema" part of the name.
When you drop it you have to use the full name of the index:
create index if not exists ah_login_minute on api.acc_history(login,updated_minute);
drop index if exists ah_login_minute; -- does not drop it (name not found without the schema); you have to use the full name
drop index if exists api.ah_login_minute; -- this works
I have many tables in different databases and want to bring them all into one database.
It seems like I have to create a foreign table in the target database (where I want to merge them all) for each source table, matching its schema.
I am sure there is a way to automate this (by the way, I am going to use the psql command), but I do not know where to start.
What I have found so far is that I can use
select * from information_schema.columns
where table_schema = 'public' and table_name = 'mytable'
I have added a more detailed explanation:
I wanted to copy tables from another database
the tables have the same column names and data types
using postgres_fdw, I needed to set up the field names and data types for each table (the table names are also the same)
then, I want to UNION the tables that have the same name, to end up with one single table
for that, I am going to add a prefix to each table name
for instance, mytable in db1, mytable in db2, and mytable in db3 become
db1_mytable, db2_mytable, db3_mytable in my local database.
Thanks to Albe's comment, I managed it, and now I need to figure out how to do the 4th step using the psql command.
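A sketch of how this can be automated with postgres_fdw (server names, credentials, and schema names below are placeholders, not from the question). IMPORT FOREIGN SCHEMA, available since PostgreSQL 9.5, creates all the foreign tables in one statement, so you don't have to spell out every column yourself:
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

-- one server and user mapping per source database
CREATE SERVER db1_srv FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'localhost', dbname 'db1');
CREATE USER MAPPING FOR CURRENT_USER SERVER db1_srv
    OPTIONS (user 'postgres', password 'secret');

-- pull in every table definition from db1's public schema automatically
CREATE SCHEMA db1;
IMPORT FOREIGN SCHEMA public FROM SERVER db1_srv INTO db1;

-- the 4th step: merge the same-named tables into one local table
CREATE TABLE mytable_merged AS
SELECT 'db1' AS source_db, * FROM db1.mytable
UNION ALL
SELECT 'db2' AS source_db, * FROM db2.mytable;
After repeating the server/import block for db2 and db3, the foreign tables are addressable as db1.mytable, db2.mytable, and so on, which serves the same purpose as the db1_mytable prefix.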
I have moved some records from my SOURCE table in DB_1 into an ARCHIVE table in another database, DB_2 (i.e. INSERTED the records from SOURCE into ARCHIVE and then DELETED the records from SOURCE).
My SOURCE table has the following index created as SOURCE_1:
CREATE UNIQUE NONCLUSTERED INDEX SOURCE_1
ON dbo.SOURCE(TRADE_SET_ID, ORDER_ID)
The problem is - when I try to insert the rows back into SOURCE from ARCHIVE, Sybase throws the following error:
Attempt to insert duplicate key row in object 'SOURCE' with unique index 'SOURCE_1'
And, of course, subsequently fails the insertions.
I confirmed that my SOURCE table does not have these duplicates because the following query returned empty:
select * from DB_1.dbo.SOURCE
join DB_2.dbo.ARCHIVE
on DB_1.dbo.SOURCE.TRADE_SET_ID = DB_2.dbo.ARCHIVE.TRADE_SET_ID
AND DB_1.dbo.SOURCE.ORDER_ID = DB_2.dbo.ARCHIVE.ORDER_ID
Since the above query returned nothing, that means I have not violated my unique index constraint on the two columns; however, Sybase complains that I have.
Does anyone have any ideas on why this is happening?
If Sybase is anything like SQL Server in this regard (which I'm more familiar with), I would suspect that the index is blocking the insert. Try disabling the index (along with any other indexes or autoincrement columns) on your archive version before copying over to it, then re-enabling. It's probable that Sybase would try to automatically create IDs for the insertions, which would interfere with the existing records.
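A sketch of that disable/re-enable cycle in SQL Server syntax, which this answer reasons from (Sybase ASE has no ALTER INDEX ... DISABLE, so there you would drop and recreate the index instead):
ALTER INDEX SOURCE_1 ON dbo.SOURCE DISABLE;             -- stop enforcing uniqueness
INSERT INTO dbo.SOURCE SELECT * FROM DB_2.dbo.ARCHIVE;  -- copy the rows back
ALTER INDEX SOURCE_1 ON dbo.SOURCE REBUILD;             -- re-enable; fails if real duplicates remain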