PostgreSQL with DataGrip - "table does not exist"

DataGrip is connected to a PostgreSQL database running on localhost.
DROP TABLE users returns 'table does not exist'
Dropping the table via context menu -> Drop works just fine; I checked on another table (which had the same problem). The preview shows exactly the same SQL I'm trying to run in the console.
Qualifying it as public.users yields the same result.
herokulocal.public.users yields "cross-database references are not implemented"
Other queries, such as select * from pg_catalog.pg_tables; work just fine.
Additionally, users is not visible in the results of that query.
Given that the exact same query WebStorm generates for context menu -> Drop fails when I run it in the console, it makes me think my console is running in some different context. Please note I'm a database layman.
What possibly could be wrong here?

It turned out I had connected to the wrong database: postgres instead of herokulocal.
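For anyone checking the same thing, two plain-PostgreSQL probes you can run in the console:
select current_database();   -- the database this console session is actually connected to
select current_schema();     -- the schema an unqualified name like users resolves to first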

Related

Why is this error coming up in the DataGrip IDE?

I downloaded pgAdmin, created a server, and restored the database. Then I connected the DataGrip IDE to PostgreSQL, but this error keeps coming up.
The search path set for the console is postgres.public (see the top-right corner).
The table film seems to exist in database dvdrental, schema public (see the database explorer on the left).
Potential explanations:
the table has a different name (for example "Film")
the schema of the table is not on your search_path (see the snippet after this list)
the table was created in a different database
the transaction that created the table never committed
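To rule out the search_path case, a quick check in the console (SET only affects the current session):
show search_path;            -- the schemas unqualified names resolve against, in order
set search_path to public;   -- or schema-qualify the name: select * from public.film;
Note this only covers the schema explanation. If the table lives in a different database (here the console points at postgres while film is in dvdrental), no search path will help: the console must connect to the other database, since PostgreSQL cannot query across databases in one session.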

After numerous (error-free) inserts to Aurora (PostgreSQL) RDS serverless cluster with SQLAlchemy I can't see the table. What happened to my data?

After changes to some Terraform code, I can no longer access the data I've added into an Aurora (PostgreSQL) database. The data gets added into the database as expected without errors in the logs but I can't find the data after connecting to the database with AWS RDS Query Editor.
I have added thousands of rows with Python code that uses the SQLAlchemy/PostgreSQL engine object to insert a batch of rows from a mappings dictionary, like so:
if (count % batch_size) == 0:
    self.engine.execute(Building.__table__.insert(), mappings)
    self.session.commit()
The logs from this data ingest show no errors, and the commits all appear to have completed successfully. So the data was inserted somewhere; I just can't work out where, as it's not showing up in the AWS Console RDS Query Editor. I run the SQL below to find the table, with zero rows returned:
SELECT * FROM information_schema.tables WHERE table_name = 'buildings';
This has worked as expected before (i.e. I could see the data in the Aurora database via the Query Editor), so I'm trying to work out which of the recently modified Terraform settings caused the issue.
Where else can I look to find where the data was inserted, assuming that it was actually inserted somewhere? If I can work that out it may help reveal the culprit.
I suspect misleading capitalization. Like "Buildings". Search again with:
SELECT * FROM information_schema.tables WHERE table_name ~* 'building';
Or:
SELECT * FROM pg_catalog.pg_tables WHERE tablename ~* 'building';
Or maybe your target wasn't a table? You can "write" to simple views. Check with:
SELECT * FROM pg_catalog.pg_class WHERE relname ~* 'building';
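As a side note on writing to views: in PostgreSQL, a simple single-table view is auto-updatable, so inserts into it land in the underlying table. A minimal sketch with made-up names:
create table building_data (id int, name text);
create view buildings as select id, name from building_data;  -- simple view: auto-updatable
insert into buildings values (1, 'Town Hall');                -- the row is stored in building_data
select * from building_data;                                  -- returns (1, 'Town Hall')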
None of this is specific to RDS. It's the same in plain Postgres.
If the last query returns nothing, you are in the wrong database. (You are aware that there can be multiple databases in one DB cluster?) Or you have a serious problem.
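To check the multiple-databases possibility, listing every database in the cluster is one query:
select datname from pg_database where not datistemplate;   -- all user-facing databases in this cluster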
See:
How to check if a table exists in a given schema
Are PostgreSQL column names case-sensitive?
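To make the capitalization trap concrete, a tiny demonstration with a hypothetical table:
create table "Buildings" (id int);   -- quoted identifier keeps its exact case
select * from buildings;             -- ERROR: relation "buildings" does not exist
select * from "Buildings";           -- works: the quoted name must match exactly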
Once I logged more information about the connection, I discovered that the database name being used was incorrect: I had been querying the Aurora instance with the wrong database name. Once I worked this out and used the correct database name, the SELECT statements in the AWS RDS Query Editor worked as expected.

PostgreSQL: corrupted pg_catalog table

I've been running a PostgreSQL database on an external hard drive, and it appears it got corrupted after reconnecting it to a sleeping laptop that THOUGHT the server was still running. After running a bunch of REINDEX commands to fix some other errors, I'm now getting the error below.
ERROR: missing chunk number 0 for toast value 12942 in pg_toast_2618
An example of a command that returns this error is:
select table_name, view_definition from INFORMATION_SCHEMA.views;
I've run "select 2618::regclass;", which identifies the problem table. However, reindexing it doesn't seem to solve the problem. I see a lot of suggestions out there about finding the corrupted row and deleting it; in my instance, though, the table with the corruption is pg_rewrite, and it appears to be not a corrupted row but a corrupted COLUMN.
I've run the following commands, but they aren't fixing the problem.
REINDEX table pg_toast.pg_toast_$$$$;
REINDEX table pg_catalog.pg_rewrite;
VACUUM ANALYZE pg_rewrite; -- just returns succeeded.
I can run the following SQL statement and it will return data.
SELECT oid, rulename, ev_class, ev_type, ev_enabled, is_instead, ev_qual FROM pg_rewrite;
However, if I add the ev_action column to the above query it throws a similar error of:
ERROR: missing chunk number 0 for toast value 11598 in pg_toast_2618
This error appears to affect all schema-related queries, such as those against the INFORMATION_SCHEMA tables. Luckily, all of my tables and the data in them seem fine, but I cannot query the SQL that generates those tables, and any views I have created seem inaccessible (although I've noticed I can create new views).
I'm not familiar enough with PostgreSQL to know exactly what pg_rewrite is, but I'm guessing I can't just truncate the data in the table or set ev_action = null.
I'm not sure what to do next with the information I've gathered so far.
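One way to narrow down exactly which rows are damaged, a minimal read-only sketch (it forces each row's ev_action to be detoasted one at a time and reports the entries that fail):
do $$
declare
  r record;
begin
  for r in select oid, rulename from pg_rewrite loop
    begin
      perform length(ev_action::text) from pg_rewrite where oid = r.oid;  -- detoasting triggers the error on corrupt rows
    exception when others then
      raise notice 'damaged pg_rewrite entry: oid=%, rule=%', r.oid, r.rulename;
    end;
  end loop;
end $$;
Knowing which rules are damaged helps gauge the blast radius, but deleting catalog rows by hand is risky; restoring or salvaging, as discussed below, is the safer path.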
(At least) your pg_rewrite catalog has data corruption. This table contains the definition of all views, including system views that are necessary for the system to work.
The best thing to do is to restore a backup.
You won't be able to get the database back to work; the best you can do is salvage as much of the data as you can.
Try a pg_dump. I don't know off-hand if that needs any views, but if it works, that is good. You will have to explicitly exclude all views from the dump, else it will most likely fail.
If that doesn't work, try to use COPY for each table to get at least the data out. The metadata will be more difficult.
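If you go the per-table route, COPY looks like this for each table (table name and path are placeholders):
copy my_table to '/some/backup/path/my_table.csv' with (format csv, header);  -- server-side: writes on the DB host
-- from psql, the client-side variant writes to your local machine instead:
-- \copy my_table to 'my_table.csv' with (format csv, header)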
If this is an important database, hire an expert.

SQL Developer displays no tables under connection

I am using Oracle DB in my application and SQL Developer as a query browser for the UI.
Today I faced a very strange issue: I ran a query in the query browser and it returned records successfully, but under the connection -> Tables tree hierarchy, no tables are displayed.
From Stack Overflow I found a possible solution here: SQLDeveloper displays no tables under connections where it says tables
But the "Other Users" tree displays a large list of schemas, so I am confused about which schema is used by my query.
Any suggestions?
Assuming that you are running a query that involves an unqualified identifier
SELECT *
FROM some_object
and that you haven't done something odd like changing the current_schema of the session, the object is either something that exists in your schema or is a public synonym.
SELECT owner, object_type, object_name
FROM all_objects
WHERE object_name = 'SOME_OBJECT'
will show you all the objects that you have access to with that name.
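If the object turns out to be a synonym rather than something in your own schema, you can resolve what it points to (SOME_OBJECT is again a placeholder):
SELECT owner, table_owner, table_name
FROM all_synonyms
WHERE synonym_name = 'SOME_OBJECT';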

How to View Execution Plan for Query Containing a Temp Table in Toad for SQL Server?

I am trying to tune the performance of a stored procedure that contains a temp table in Toad for SQL Server. After selecting "Include Actual Execution Plan" from the 'Editor' menu, I run the query. The Results Set returns values as expected, however, the Execution Plan tab shows the following error:
Invalid object name '#temp'.
I have tried creating the temp tables first and then executing just the SELECT statement that references them; I tried creating the temp tables as global temp tables and running the SELECT statement in another window; and I have experimented with SHOWPLAN_TEXT and STATISTICS PROFILE (as mentioned in this question), but I keep receiving the same error. The only thing I have not tried is a table variable, but the changes I will be making cannot be done on table variables, so that is not really an option for me at this time.
Has anyone else come across this or have any ideas as to what I might be doing wrong?
You'll want to use the ISQL command-line utility on a machine that has the SQL Server client package installed, or any other utility that can submit a query to SQL Server.
ISQL Docs and How to get an execution plan (2nd part of the post)
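As a fallback that works from any client, the session option the question already mentions can produce a text plan without Toad's plan tab. Unlike SHOWPLAN_TEXT, which compiles the batch without executing it (so #temp never gets created), STATISTICS PROFILE actually runs the statements, so the temp table exists when the SELECT is planned. A minimal sketch (GO is the batch separator understood by sqlcmd-style tools):
SET STATISTICS PROFILE ON;      -- return actual plan rows alongside each result set
GO
CREATE TABLE #temp (id INT);
INSERT INTO #temp VALUES (1);
SELECT * FROM #temp;            -- the plan for this statement follows its results
GO
SET STATISTICS PROFILE OFF;
GO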