Backup file and query issue - callback

Implement an SQL script that finds the differences between the contents of a relational table BOOK and a backup table with the same name plus the suffix _DOC (BOOK_DOC).
The script must first list the rows added to the relational table BOOK after the backup was created, then the rows deleted from it, and finally the rows changed in it after the backup was created.
In brief, the script must first list all added rows, then all deleted rows, and finally all changed rows in the relational table BOOK. It is allowed to use more than one SELECT statement to implement this task.
I have created the data for the backup table but I am unable to write the query.
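For reference, a minimal sketch of such a script, assuming the two tables share a primary key column id and payload columns title and author (all hypothetical names):

-- rows added to BOOK since the backup (keys present only in BOOK)
SELECT b.*
FROM BOOK b
WHERE NOT EXISTS (SELECT 1 FROM BOOK_DOC d WHERE d.id = b.id);

-- rows deleted from BOOK since the backup (keys present only in BOOK_DOC)
SELECT d.*
FROM BOOK_DOC d
WHERE NOT EXISTS (SELECT 1 FROM BOOK b WHERE b.id = d.id);

-- rows changed since the backup (same key, different non-key values)
SELECT b.*
FROM BOOK b
JOIN BOOK_DOC d ON d.id = b.id
WHERE (b.title, b.author) IS DISTINCT FROM (d.title, d.author);

IS DISTINCT FROM treats NULLs as comparable values, which a plain <> would not.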

Related

Return name of the Postgres database containing value in a row in a table

I am looking for a way to essentially search the entire Postgres instance, all of its databases, to find the database containing a specific row in a specific table.
So I have a Postgres instance with 170 databases; every database has the exact same schema and table layout. In each database there is a specific table "SM_Project" with a specific column "ProjectName". We have cases where we know a ProjectName, or at least a partial match (we can use LIKE with %), but we have no idea which database it lives in.
I want to script and simplify the ability to enter a ProjectName, search the entire Postgres instance (all the databases in it), and return the name of the database that contains that record.
I foolishly thought this would be a simple task. With my lack of experience I've tried to do this with several SELECT statements, and while I can explicitly connect to a database and search for the record from there, I can't find a way to return the parent database name. I was thinking of clunkily scripting it in bash to iterate through the databases until we get a true return on an EXISTS in a SELECT statement. But I feel like I'm overlooking something fundamental.
So my setup is like this:
Postgres
  db1
    SM_Project
      ProjectName
  db2
    SM_Project
      ProjectName
  db3
    SM_Project
      ProjectName
In short, I'm looking to return the name of the database that contains a record with ProjectName equal to a given string.
Any thoughts would be very welcomed!
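One way to do this entirely server-side, rather than looping in bash, is a PL/pgSQL DO block that probes each database through the dblink extension. This is just a sketch: it assumes dblink is installed, that the current user can connect to every database without a password prompt, that database names are simple (no quoting needed in the connection string), and the LIKE pattern is a placeholder.

CREATE EXTENSION IF NOT EXISTS dblink;

DO $$
DECLARE
    db  text;
    hit boolean;
BEGIN
    -- loop over every connectable, non-template database in the instance
    FOR db IN
        SELECT datname FROM pg_database
        WHERE NOT datistemplate AND datallowconn
    LOOP
        -- probe this database for a matching ProjectName
        SELECT t.found INTO hit
        FROM dblink(
            'dbname=' || db,
            $q$SELECT EXISTS (
                   SELECT 1 FROM "SM_Project"
                   WHERE "ProjectName" LIKE '%some search text%'
               )$q$
        ) AS t(found boolean);

        IF hit THEN
            RAISE NOTICE 'Found in database: %', db;
        END IF;
    END LOOP;
END $$;

The matches are emitted as NOTICE messages; the same loop could instead accumulate the names into an array or a result table.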

Cloudant/Db2 - How to determine if a database table row was read from?

I have two databases, Cloudant and IBM Db2. I have a table in each of these databases that holds static data that is only read from and never updated. These were created a long time ago, and I'm not sure if they are used today, so I wish to do a clean-up.
I want to determine if these tables, or rows from these tables, are still being read from.
Is there a way to record the read timestamp (or at least a flag showing the row was accessed, like a dirty bit) on a row of the table when it is read from?
OR
Record the read timestamp of the entire table (if any record in it is accessed)?
For Db2, there is the SYSCAT.TABLES.LASTUSED system catalog column, which reflects DML statements against the whole table.
There is no way to track read access to individual rows.
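For example, a quick catalog check (a sketch; MYSCHEMA and MY_TABLE stand in for the real schema and table names):

SELECT TABSCHEMA, TABNAME, LASTUSED
FROM SYSCAT.TABLES
WHERE TABSCHEMA = 'MYSCHEMA'
  AND TABNAME = 'MY_TABLE';

If LASTUSED has not moved in a long time, that is a reasonable hint the table is no longer being touched.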

How to get a history of latest postgres table writes regardless of which table it is

Assume I don't know which tables have been written to (not read from, I mean written to). Can I find out the names of the tables that were most recently written to?
I don't want constant reporting, just a query I can run after testing newly added code.
Once I know the names, I'm set; I can just query them using normal SQL and see the records. But I need to know which tables, in a 200-table database. Something like:
select names of last 10 tables that have been written to
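Postgres does not record per-table write timestamps, but the statistics collector keeps cumulative write counters in pg_stat_user_tables. So one approach, sketched here for a single session, is to snapshot the counters before the test run and diff them afterwards:

-- before the test run: snapshot the cumulative write counters
CREATE TEMP TABLE write_snapshot AS
SELECT relid, schemaname, relname, n_tup_ins, n_tup_upd, n_tup_del
FROM pg_stat_user_tables;

-- after the test run: tables whose counters moved, i.e. were written to
SELECT cur.schemaname, cur.relname,
       cur.n_tup_ins - snap.n_tup_ins AS inserted,
       cur.n_tup_upd - snap.n_tup_upd AS updated,
       cur.n_tup_del - snap.n_tup_del AS deleted
FROM pg_stat_user_tables cur
JOIN write_snapshot snap USING (relid)
WHERE (cur.n_tup_ins, cur.n_tup_upd, cur.n_tup_del)
   <> (snap.n_tup_ins, snap.n_tup_upd, snap.n_tup_del);

One caveat: the statistics are reported with a small delay, so allow a moment between the last write and the second query.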

copy csv postgres ignore rows that violate constraints

I have a .csv file with ~300,000 rows, some of which violate certain constraints I set in my postgres database. Is there a way to copy my .csv file into the database and have postgres filter out the rows that violate the constraints? I do not want these rows to show up in the database.
If this is not possible, is there any other way to solve this problem?
What I'm doing right now is:
COPY blocksequences FROM '/tmp/blocksequences.csv' CSV HEADER;
And I get:
ERROR: new row for relation "blocksequences" violates check constraint "blocksequences_partid3_check"
DETAIL: Failing row contains (M001-M049-S186, M001, null, M049, S186).
CONTEXT: COPY blocksequences, line 680: "M001-M049-S186,M001,,M049,S186"
The reason for the error: the column that contains M049 is not allowed to have that string in it. Many other rows have violations like this.
I read a little about "exception when check violation ... do nothing"; am I on the right track here? It seems like it might only be a MySQL thing.
Usually this is done in this way (a sketch follows the list):
1. Create a temporary table with the same structure as the destination one, but without the constraints.
2. Copy the data to the temporary table with the COPY command.
3. Copy the rows that do fulfill the constraints from the temp table to the destination one, using an INSERT command with conditions in the WHERE clause based on the table constraints.
4. Drop the temporary table.
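As a sketch of those steps for this particular case (assuming, from the constraint name, a column named partid3; the predicate is a hypothetical stand-in for the real CHECK expression):

-- 1) staging table: LIKE copies the column layout but, by default, not the CHECK constraints
CREATE TEMP TABLE blocksequences_staging (LIKE blocksequences);

-- 2) load the whole file, unchecked
COPY blocksequences_staging FROM '/tmp/blocksequences.csv' CSV HEADER;

-- 3) move only the conforming rows; mirror the real CHECK expression here
INSERT INTO blocksequences
SELECT * FROM blocksequences_staging
WHERE partid3 <> 'M049';  -- hypothetical: replace with the actual CHECK condition

-- 4) drop the staging table (temp tables also vanish at session end)
DROP TABLE blocksequences_staging;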
When dealing with really large CSV files or very limited server resources, use the file_fdw extension instead of a temporary table. It's a much more efficient way, but it requires that the server have access to the CSV file (while copying to a temporary table can be done over the network).
In Postgres 12 you can use the WHERE clause in COPY FROM.
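That allows the same filtering in a single step; a sketch, again with a hypothetical condition standing in for the real CHECK expression:

COPY blocksequences FROM '/tmp/blocksequences.csv'
WITH (FORMAT csv, HEADER true)
WHERE partid3 <> 'M049';  -- hypothetical: mirror the actual CHECK condition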

How to UPDATE table from csv file?

How to update table from csv file in PostgreSQL? (version 9.2.4)
The COPY command is for inserting. But I need to update a table. How can I update a table from a CSV file without a temp table?
I don't want to copy from the CSV file to a temp table and then update the target table from the temp table.
And is there no MERGE command, like in Oracle?
The simple and fast way is with a temporary staging table, as detailed in this closely related answer:
How to update selected rows with values from a CSV file in Postgres?
If you don't "want" that for some unknown reason, there are more ways:
A foreign data wrapper with file_fdw.
You can run UPDATE commands directly using this one (see the sketch after this list).
pg_read_file(). For special use cases.
Details in this related answer:
Read data from a text file inside a trigger
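To illustrate the file_fdw route, a minimal sketch (the target table books, its columns id and name, and the CSV path are all hypothetical):

CREATE EXTENSION IF NOT EXISTS file_fdw;
CREATE SERVER csv_server FOREIGN DATA WRAPPER file_fdw;

-- foreign table mapped onto the CSV file; columns must match the file layout
CREATE FOREIGN TABLE books_update (
    id   integer,
    name text
) SERVER csv_server
OPTIONS (filename '/tmp/books.csv', format 'csv', header 'true');

-- update the real table directly from the CSV contents
UPDATE books b
SET    name = u.name
FROM   books_update u
WHERE  b.id = u.id;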
There is no MERGE command in Postgres, much less for COPY.
Discussion about whether and how to add it is ongoing. Check out the Postgres Wiki for details.