How to find relationships of a removed column in pgAdmin 4 - postgresql

I have a PostgreSQL database on AWS RDS which I open in my pgAdmin 4 client.
Recently I removed one column from a table in this database.
Now my app shows an error from the backend: "column student.scale does not exist".
I know it's because I deleted this column.
But how do I find out what is calling this column?
There is nothing in the backend code that could reference it, so it must be something in the database, some relationship or key.
Is there an easy way to find that out?
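If nothing in the backend code references the column, a simple place to start (a sketch, not a complete dependency check) is to text-search view definitions and function bodies in the system catalogs for the dropped column's name; the names student and scale below are just taken from the error message:

-- Views whose definition mentions the dropped column by name
SELECT schemaname, viewname
FROM pg_views
WHERE definition ILIKE '%scale%'
  AND schemaname NOT IN ('pg_catalog', 'information_schema');

-- Functions (including trigger functions) whose source mentions it
SELECT n.nspname, p.proname
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE p.prosrc ILIKE '%scale%'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');

Expect some false positives on a common word like "scale"; the table name from the error message (student) helps narrow down the hits.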

Related

I am using AWS RDS Postgres as my database. I want to exclude specific columns from RDS audit logging.

Let's say I have a Salary table containing the columns id, created_on, company, and value. Since value is the sensitive information, I do not want it to be audited. How can I do this?
I understand we can disable logging at the table level, but I want to understand how to do that at the column level.
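For what it's worth, one partial approach sometimes taken with the pgaudit extension (an assumption here, since the question doesn't say which audit mechanism RDS is configured with) is object-level audit logging, where what gets logged follows the grants given to the audit role, and those grants can be column-level:

-- Sketch, assuming pgaudit with object-level logging (pgaudit.role) enabled;
-- the role name "auditor" is hypothetical (on RDS the audit role and
-- pgaudit settings are configured through the parameter group).
CREATE ROLE auditor;
-- Grant the audit role privileges only on the non-sensitive columns:
GRANT SELECT (id, created_on, company) ON salary TO auditor;
-- Caveat: a statement that touches "value" together with the granted columns
-- is still logged in full, so this limits which statements get logged rather
-- than redacting the sensitive column from the log text.

If the value column must never appear in logs at all, moving it into a separate table that is excluded from auditing is probably the more reliable route.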

What happens to existing data with psql dbname < pg_dump_file [duplicate]

This question already has an answer here:
will pg_restore overwrite the existing tables?
I have a database on AWS RDS, and I use a pg_dump from a local version of the database, then psql dbname < pg_dump_file with the proper arguments for remote upload to populate the database.
I'd like to know what is expected to happen if that RDS DB already contains data. More specifically:
Data present in the local dump, but absent in RDS
Data present on RDS, but absent in the local dump
Data present in both, but that has been modified
My current understanding:
New data will be added and be present in both after the upload
Data in RDS should be unaffected?
The data from the pg_dump will be present in both (assuming the same PK, but different fields otherwise)
Is that about correct? I've been reading this, but it's a little thin on how the restore is actually performed, so I'm having a hard time figuring that out. Thanks.
EDIT: following #wildplasser's comment, looking at the pg_dump file it appears that the following happens:
CREATE TABLE [....]
ALTER TABLE [setting table owner]
ALTER SEQUENCE [....]
For each table in the db. Then, again one table at a time:
COPY [tablename] (list of cols) FROM stdin;
[data to be copied]
Finally, more ALTER statements to set constraints, foreign keys, etc.
So I guess the ultimate answer is "it depends". One could, I suppose, remove the CREATE TABLE [...], ALTER TABLE, and ALTER SEQUENCE statements if those objects have already been created as they should be. I am not positive yet what happens if one tries CREATE TABLE with an existing table (an error is thrown, perhaps?).
Then I guess the COPY statements would overwrite whatever already exists. Or perhaps throw an error. I'll have to test that. I'll write up an answer once I figure it out.
So the answer is a bit dull. It turns out that even if one removes the initial statements before the COPY, if the table has a primary key (and thus a uniqueness constraint), then it won't work:
ERROR: duplicate key value violates unique constraint
So one gets shut down pretty quickly there. One would, I guess, have to rewrite the dump as a list of UPDATE statements instead, but then one might as well write a script to do so. I'm unsure whether pg_dump is all that useful in that case.
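For completeness, one way to merge dumped rows into a table that already has data (a sketch, not something pg_dump produces; the table mytable and its columns are made up for illustration) is to load the dump's COPY data into a staging table and then upsert:

-- Staging table with the same shape as the target
CREATE TEMP TABLE staging (LIKE mytable INCLUDING DEFAULTS);

-- Point the dump's COPY block at the staging table, e.g.:
-- COPY staging (id, col_a, col_b) FROM stdin;
-- ...rows from the dump...
-- \.

-- Insert new rows, update rows whose primary key already exists
INSERT INTO mytable (id, col_a, col_b)
SELECT id, col_a, col_b FROM staging
ON CONFLICT (id) DO UPDATE
SET col_a = EXCLUDED.col_a,
    col_b = EXCLUDED.col_b;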

Restoring PG database from dump fails due to generated columns

We use our Postgres database dumps as a way to back up/reset our staging DB. As part of that, we frequently remove all rows in the DB and insert the rows from the pg_dump. However, the generated columns are included in the pg_dump with their values instead of the DEFAULT keyword.
Running the dump then triggers "cannot insert into column" errors, since one cannot insert values into a generated column. How do we dump our DB and recreate it from the dump despite the generated columns?
EDIT: Note that we cannot use GENERATED BY DEFAULT or OVERRIDING SYSTEM VALUE, since those are only available for identity columns and not generated columns.
EDIT 2: It seems to be a special case for us that the values are dumped instead of DEFAULT. Any idea why that might be?
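For reference, the behaviour being hit here can be reproduced on a toy table (hypothetical names; PostgreSQL 12+): an INSERT that supplies an explicit value for a GENERATED ALWAYS AS ... STORED column fails, while an INSERT that omits the column or supplies DEFAULT works:

CREATE TABLE demo (
    a int,
    b int GENERATED ALWAYS AS (a * 2) STORED
);

INSERT INTO demo (a, b) VALUES (1, 2);        -- fails: cannot insert into a generated column
INSERT INTO demo (a) VALUES (1);              -- OK, b is computed as 2
INSERT INTO demo (a, b) VALUES (1, DEFAULT);  -- also OK

which is why the fix generally has to happen on the dump side (the generated column absent from the dumped column list, or its values replaced with DEFAULT) rather than on the restore side.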

Odoo 10 is not backing up DB in PostgreSQL 9.5. Shows "SQL state: 22008. Timestamp out of range on account_bank_statement_line."

At our company we had a DB crash a few days ago due to hardware failure. We recovered from that, but since then we have been getting the following error every time we try to back up our DB:
pg_dump: ERROR: timestamp out of range
pg_dump: SQL command to dump the contents of table "account_bank_statement_line"
The error is in the "account_bank_statement_line" table, where we have 5 rows in which only the create_date column has a value, a date in the year 4855(!), while the rest of the columns are null, even the id (primary key). We can't even delete or update those rows using pgAdmin 4 or the PostgreSQL terminal.
We're in a very risky position right now, with no backup of several days of retail sales. Any hints at all would be highly appreciated.
First, if the data are important, hire a specialist.
Second, run your pg_dump with the option --exclude-table=account_bank_statement_line so that you at least have a backup of the rest of your database.
The next thing you should do is to stop the database and take a cold backup of all the files. That way you have something to go back to if you mess up.
The key point to proceed is to find out the ctids (physical addresses) of the problematic rows. Then you can use that to delete the rows.
You can approach that by running queries like
SELECT create_date FROM account_bank_statement_line
WHERE ctid < '(42,0)';
and try to find the ctids where you get an error. Once you have found a row where the following falls over:
SELECT * FROM account_bank_statement_line
WHERE ctid = '(42,14)';
you can delete the row by its ctid.
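For example, with the address found above (the ctid '(42,14)' is just the value from the probing query; use whatever address the probing turns up):

DELETE FROM account_bank_statement_line WHERE ctid = '(42,14)';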
Once you are done, take a pg_dumpall of the database cluster, create a new one and restore the dump. It is dangerous to continue working with a cluster that has experienced corruption, because corruption can remain unseen and spread.
I know what we did might not be the most technically advanced solution, but it solved our issue. We consulted a few experts, and what we did was:
migrated all the data to a new table (account_bank_statement_line2); this transferred all the rows that had valid data.
Then we dropped the old "account_bank_statement_line" table and renamed the new table to "account_bank_statement_line".
After that, the DB backup ran smoothly as always.
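A rough SQL sketch of those steps (the timestamp filter is an assumption about how the valid rows were selected, and foreign keys referencing the table may need extra handling, e.g. DROP ... CASCADE plus recreating the constraints):

-- Copy the rows with valid data into a new table
CREATE TABLE account_bank_statement_line2
    (LIKE account_bank_statement_line INCLUDING ALL);
INSERT INTO account_bank_statement_line2
SELECT *
FROM account_bank_statement_line
WHERE create_date < '3000-01-01';   -- assumed filter to skip the year-4855 rows

-- Swap the tables
DROP TABLE account_bank_statement_line;
ALTER TABLE account_bank_statement_line2 RENAME TO account_bank_statement_line;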
Hope this helps anyone who's in deep trouble like us. Cheers!

Linking MS Access table to PG Admin schema

I would like to link an MS Access table to a table in pgAdmin, if it is possible, for use in a Postgres query. I have searched for an answer, but all I can find are answers for listing Postgres tables in Access, which is almost the opposite of what I want to do.
I want to be able to access the data entered in an Access form without having to continually import the data into a table in pgAdmin.
I'm not even sure that is possible, but any method that is easier than importing the table into pgAdmin every day would be useful.
Thanks
Gary
Try the PostgreSQL OGR Foreign Data Wrapper. It's built for spatial data, but it works perfectly well with non-spatial tables. If you have the PostGIS extension installed, you will already have it.
https://github.com/pramsey/pgsql-ogr-fdw
There are several examples on that page, but the command
ogr_fdw_info -s <pathToAccessFile> -l <tablename>
will return a CREATE SERVER and a CREATE FOREIGN TABLE statement, which you can edit as required and then run in pgAdmin.
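For illustration, the generated statements look roughly like this (the file path, driver/format string, layer name, and columns are placeholders; the real output of ogr_fdw_info for your .mdb/.accdb file will differ):

CREATE SERVER access_server
    FOREIGN DATA WRAPPER ogr_fdw
    OPTIONS (datasource 'C:\data\forms.accdb', format 'ODBC');

CREATE FOREIGN TABLE form_entries (
    fid bigint,
    entry_date date,
    entry_value varchar
)
SERVER access_server
OPTIONS (layer 'form_entries');

Once the foreign table exists, it can be queried and joined like any other Postgres table, and it reads the current contents of the Access file at query time, so there is no daily re-import.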