I'm trying to drop a table from the pgAdmin3 GUI, i.e. right-click and then DROP TABLE. Within a few seconds pgAdmin crashes.
Something similar happens if I use psql from the shell; there I write:
myDBname=> DROP TABLE my_table
and it returns,
myDBname->
(note the difference between => and ->)
It seems that nothing happens, but if I run \dt the table is still there.
Can anyone help me?
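For reference, psql's prompt suffix indicates parser state: => means it is ready for a new statement, while -> means it is still waiting for the rest of the current one, here the terminating semicolon. A minimal transcript (table name from the question; assumes nothing else holds a lock on it):

```sql
myDBname=> DROP TABLE my_table    -- no semicolon: psql keeps reading input
myDBname-> ;                      -- the semicolon ends the statement and runs it
DROP TABLE
```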
Related
I'm trying to add a unique constraint to an existing table using the PSQL command line:
ALTER TABLE table_name ADD CONSTRAINT test_key UNIQUE (col1, col2);
The problem is that after I hit enter, nothing happens. I just get a new blank line without the command prompt, but the cursor is still there. I have to eventually hit ctrl+C to cancel the statement. Can anyone tell me what I'm missing?
It turns out it was running the script. There was nothing wrong with my script; it just didn't give me any indication it was running, other than letting me enter new blank lines while the cursor blinked. It took a while, but it eventually created the constraint.
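If this happens again, one way to check whether the statement is genuinely running or stuck behind a lock is to query pg_stat_activity from a second session (the wait_event columns exist in PostgreSQL 9.6 and later):

```sql
-- Run in a second psql session while the ALTER TABLE appears to hang.
SELECT pid, state, wait_event_type, wait_event, query
FROM pg_stat_activity
WHERE query ILIKE 'ALTER TABLE%';
-- state = 'active' with no wait_event: it is working (e.g. building the unique index)
-- wait_event_type = 'Lock': it is blocked by another transaction
```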
DataGrip is connected to a PostgreSQL database running on localhost.
DROP TABLE users returns 'table does not exist'
Dropping the table via the context menu -> drop works just fine, checked on another table (which had the same problems). In preview, exactly the same SQL I'm trying to run in the console is shown.
public.users yields the same results.
herokulocal.public.users yields "cross-database references are not implemented"
Other queries, such as select * from pg_catalog.pg_tables; work just fine.
Additionally, users is not visible in the results yielded by select * from pg_catalog.pg_tables;.
Given that the exact same query generated by WebStorm for context menu -> drop does not work in the console, it makes me think my console's running in some different context. Please note I'm a database layman.
What possibly could be wrong here?
I had connected to the wrong database: postgres instead of herokulocal.
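A quick sanity check that would have caught this is to ask the server which database and search path the console session is actually using:

```sql
SELECT current_database();  -- expected herokulocal, was postgres here
SHOW search_path;           -- which schemas unqualified names resolve against
```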
I've been running a postgres database on an external hard drive and it appears it got corrupted after reconnecting it to a sleeping laptop that THOUGHT the server was still running. After running a bunch of reindex commands to fix some other errors I'm now getting the below error.
ERROR: missing chunk number 0 for toast value 12942 in pg_toast_2618
An example of a command that returns this error is:
select table_name, view_definition from INFORMATION_SCHEMA.views;
I've run the command select 2618::regclass; which identifies the problem table. However, reindexing it doesn't seem to solve the problem. I see a lot of suggestions out there about finding the corrupted row and deleting it. However, the table that appears to have corruption in my instance is pg_rewrite, and it appears to be not a corrupted ROW but a corrupted COLUMN.
I've run the following commands, but they aren't fixing the problem.
REINDEX table pg_toast.pg_toast_$$$$;
REINDEX table pg_catalog.pg_rewrite;
VACUUM ANALYZE pg_rewrite; -- just returns succeeded.
I can run the following SQL statement and it will return data.
SELECT oid, rulename, ev_class, ev_type, ev_enabled, is_instead, ev_qual FROM pg_rewrite;
However, if I add the ev_action column to the above query it throws a similar error of:
ERROR: missing chunk number 0 for toast value 11598 in pg_toast_2618
This error appears to affect all schema-related queries, such as those against INFORMATION_SCHEMA tables. Luckily it seems as though all of my tables and the data in them are fine, but I cannot query the SQL that generates those tables, and any views I have created seem inaccessible (although I've noticed I can create new views).
I'm not familiar enough with PostgreSQL to know exactly what pg_rewrite is, but I'm guessing I can't just truncate the data in the table or set ev_action = NULL.
I'm not sure what to do next with the information I've gathered so far.
(At least) your pg_rewrite catalog has data corruption. This table contains the definition of all views, including system views that are necessary for the system to work.
The best thing to do is to restore a backup.
You won't be able to get the database back to work; the best you can do is to salvage as much of the data as you can.
Try a pg_dump. I don't know off-hand whether that needs any views, but if it works, that is good. You will have to explicitly exclude all views from the dump, or it will most likely fail.
If that doesn't work, try to use COPY for each table to get at least the data out. The metadata will be more difficult.
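A sketch of the COPY approach (table and file names are placeholders): psql's \copy variant writes the file on the client side, which avoids needing superuser file access on the server:

```sql
-- For each table you want to salvage:
\copy my_table TO 'my_table.copy'
-- Later, restore into a freshly created table of the same shape:
\copy my_table FROM 'my_table.copy'
```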
If this is an important database, hire an expert.
In a given schema we can see the following tables:
The search_path is:
aact=# set search_path to public, ctgov ;
SET
The problem is that in the middle of a psql session a number of these tables just started showing zero rows. Then I exited psql and restarted it: those tables are still empty.
The interesting thing is that this is not affecting all tables; see below:
I am not a regular user of postgres and have no clue what happened / is happening here. What should I look for?
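One thing worth checking (an assumption, since the table listings are not shown): with search_path set to public, ctgov, an unqualified table name resolves to public first, so an empty or newly created public table can shadow a populated table of the same name in ctgov. Schema-qualifying the name makes that visible (some_table is a placeholder):

```sql
-- Does the name exist in more than one schema?
SELECT schemaname, tablename FROM pg_catalog.pg_tables
WHERE tablename = 'some_table';
-- An explicit schema bypasses search_path resolution:
SELECT count(*) FROM ctgov.some_table;
```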
We have a table someone created in DB2. I have no idea how they created it. When I edit the table, the edit itself works fine, but afterwards I cannot query the table at all: THE COLUMN CANNOT BE ADDED TO THE TABLE BECAUSE THE TABLE HAS AN EDIT PROCEDURE.
I looked on the IBM site and found this page on how to edit a table using a procedure.
But I have no idea how to do this.
Is there anything I can do to fix this without following the procedure mentioned in the second link?
I restarted the server, but that didn't help. I also can't figure out why I get the error in the first place.
I'm using DB Visualizer and DB2 on linux.
This is sometimes the default behavior of DB2. You need to run the reorgchk command to fix these errors. More info below:
http://publib.boulder.ibm.com/infocenter/db2luw/v9/index.jsp?topic=/com.ibm.db2.udb.admin.doc/doc/r0000888.htm
http://publib.boulder.ibm.com/infocenter/db2luw/v9/index.jsp?topic=/com.ibm.db2.udb.admin.doc/doc/c0023297.htm
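As a rough sketch of what that looks like from the DB2 command line processor (the database and table names are placeholders; on DB2 LUW, a table left in a pending state by an ALTER is typically cleared by a reorg):

```sql
-- From the DB2 CLP, after connecting to the database:
-- reorgchk reports which tables and indexes need reorganization
REORGCHK UPDATE STATISTICS ON TABLE myschema.mytable;
-- reorg rebuilds the table, clearing any reorg-pending state
REORG TABLE myschema.mytable;
```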