track changes in database tables - postgresql

I have a large PostgreSQL database, and I want to track all of its tables to see whether a change has been made.
The reason for that is that I can't know the relations between the different tables in the database.
I googled about it but couldn't find anything helpful.
So how can I know if a change has been made to a table?

There isn't currently a global audit function in PostgreSQL.
It'll be possible to build one using the new logical changeset extraction feature in 9.4, and I know some people are working on that.
In the meantime, you need to add some form of audit trigger to every table.
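A minimal sketch of such a trigger, assuming one shared audit_log table and PostgreSQL 9.2+ for row_to_json; the table and column names are illustrative:

    -- Illustrative audit table; adjust the columns to what you need to keep.
    CREATE TABLE audit_log (
        changed_at  timestamptz NOT NULL DEFAULT now(),
        table_name  text        NOT NULL,
        operation   text        NOT NULL,
        row_data    json
    );

    CREATE OR REPLACE FUNCTION audit_trigger_fn() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'DELETE' THEN
            INSERT INTO audit_log (table_name, operation, row_data)
            VALUES (TG_TABLE_NAME, TG_OP, row_to_json(OLD));
            RETURN OLD;
        ELSE
            INSERT INTO audit_log (table_name, operation, row_data)
            VALUES (TG_TABLE_NAME, TG_OP, row_to_json(NEW));
            RETURN NEW;
        END IF;
    END;
    $$ LANGUAGE plpgsql;

    -- Attach the trigger to each table you want to track, e.g. a hypothetical "users" table.
    CREATE TRIGGER users_audit
    AFTER INSERT OR UPDATE OR DELETE ON users
    FOR EACH ROW EXECUTE PROCEDURE audit_trigger_fn();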

Related

Is there a way to give write permissions only in a transaction in Postgres?

I work with a software that is used by a lot of different clients in several countries, with different needs, rules and constraints on their data.
When I make a change to the database's structure, I have a tool to test it on every client's database, obviously with read-only rights. This means that the best way to test a query like UPDATE table SET x = y WHERE condition
is to call the "read-only part" SELECT x FROM table WHERE condition.
It works, but it's not ideal, as sometimes it is the writes themselves that cause problems (mostly deadlocks or timeouts), meaning I can't see the problem until a client suffers from it.
I'm wondering if there is a way to grant write permissions in Postgres, but only when inside a transaction, and force a rollback on every transaction. This way, changes could be tested accurately on real data and still prevent any dev from editing it.
Any ideas?
Edit: the volumes are too large to consider cloning the data for every dev who needs to run a query.
This sounds similar to creating an audit table to record information about transactions. I would consider using a trigger to write a copy of the data to a "rollback" table/row and then copy the "rollback" table/row back on completion of the update.
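A rough sketch of that idea, with made-up table and function names, assuming the rollback table has the same columns as the original:

    -- Keep the pre-update version of each row of a hypothetical "my_table".
    CREATE TABLE my_table_rollback (LIKE my_table);

    CREATE OR REPLACE FUNCTION save_old_row() RETURNS trigger AS $$
    BEGIN
        -- OLD is the row as it was before the UPDATE.
        INSERT INTO my_table_rollback SELECT OLD.*;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER my_table_save_old
    BEFORE UPDATE ON my_table
    FOR EACH ROW EXECUTE PROCEDURE save_old_row();

After the test run, the saved rows can be copied back over the modified ones and the rollback table truncated.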

How to replicate rows into different tables of different database in postgresql?

I use PostgreSQL. I have many databases on a server. There is one database which I use the most, say 'main'. This 'main' database has many tables inside it, and the other databases also have many tables inside them.
What I want to do is: whenever a new row is inserted into the 'main.users' table, I wish to insert the same data into the 'users' table of the other databases. How shall I do that in PostgreSQL? Similarly, I wish to do the same for all actions like UPDATE, DELETE, etc.
I had gone through the "logical replication" concept as suggested. In my case I know the source DB name up front, and I will only come to know the target DB name as part of the query, so it is going to be dynamic.
How can I achieve this? Is there any database concept in PostgreSQL for it? I welcome all other possible ways as well. Please share some ideas on this.
If this is all on the same Postgres instance (aka "cluster"), then I would recommend to use a foreign table to access the tables from the "main" database in the other databases.
Those foreign tables look like "local" tables inside each database, but access the original data in the source database directly, so there is no need to synchronize anything.
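A sketch of that setup using postgres_fdw; the server, user and table names are placeholders, and IMPORT FOREIGN SCHEMA needs PostgreSQL 9.5 or later:

    -- Run this in each database that needs to see main.users.
    CREATE EXTENSION IF NOT EXISTS postgres_fdw;

    CREATE SERVER main_srv FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'localhost', dbname 'main');

    CREATE USER MAPPING FOR CURRENT_USER SERVER main_srv
        OPTIONS (user 'app_user', password 'secret');

    -- Pull in the definition of the users table from the main database.
    IMPORT FOREIGN SCHEMA public LIMIT TO (users)
        FROM SERVER main_srv INTO public;

    -- Queries against "users" here now read the live data in main.users.
    SELECT * FROM users;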
Upgrade to a recent PostgreSQL release and use logical replication.
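A minimal sketch of that, assuming PostgreSQL 10 or later with wal_level = logical on the source; the names and connection string are illustrative:

    -- On the source database 'main':
    CREATE PUBLICATION users_pub FOR TABLE users;

    -- On each target database:
    CREATE SUBSCRIPTION users_sub
        CONNECTION 'host=localhost dbname=main user=repl_user password=secret'
        PUBLICATION users_pub;

This replicates INSERT, UPDATE and DELETE on the published table to the subscribers.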
Add a trigger on the table in the master database that uses dblink to access and write to the other databases.
Be sure to consider what should be done if the row already exists remotely, or if the remote server is unreachable.
Also note that updates propagated using dblink are not rolled back if the invoking transaction is rolled back.
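A rough sketch of such a trigger, handling only the INSERT case and using placeholder connection details:

    CREATE EXTENSION IF NOT EXISTS dblink;

    CREATE OR REPLACE FUNCTION push_user_insert() RETURNS trigger AS $$
    BEGIN
        -- Push the new row to a hypothetical "other_db"; an error here aborts the local insert.
        PERFORM dblink_exec(
            'host=localhost dbname=other_db user=app_user password=secret',
            format('INSERT INTO users (id, name) VALUES (%L, %L)', NEW.id, NEW.name)
        );
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER users_push_remote
    AFTER INSERT ON users
    FOR EACH ROW EXECUTE PROCEDURE push_user_insert();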

Found ddl change for a table in CDC management console

Our target DB is DB2 and the source is Oracle. We found DDL changes in the CDC management console, and I need to get the instance back into proper running condition.
Paul Vernon's answer assumes that what you are looking for is how to replicate DDL changes. I will assume that you don't want to replicate DDL changes, but just want to restart the subscription after minor layout changes (for example, after a column size has been increased, or after a column you are not going to replicate has changed).
If that is the case, right-click the specific table map on your subscription, and update table definition. I am not sure but I think after that, you have to refresh the entire subscription. If the table is very large, you will want to avoid refreshing them all, but that's another question.
Of course, if a column has been added in the table change and you want to deal with it, you can edit the column map and make the specific assignment you want for that column.
I hope this helps.

DB2 updated rows since last check

I want to periodically export data from db2 and load it in another database for analysis.
In order to do this, I would need to know which rows have been inserted/updated since the last time I've exported things from a given table.
A simple solution would probably be to add a timestamp to every table and use that as a reference, but I don't have such a TS at the moment, and I would like to avoid adding it if possible.
Is there any other solution for finding the rows which have been added/updated after a given time (or something else that would solve my issue)?
There is an easy option for a timestamp in Db2 (for LUW) called ROW CHANGE TIMESTAMP.
This column is managed by Db2 and can be defined as IMPLICITLY HIDDEN, so existing SELECT * queries will not retrieve the new column, which would otherwise cause extra costs.
Check out the Db2 CREATE TABLE documentation
This functionality was originally added for optimistic locking but can be used for such situations as well.
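A sketch for Db2 LUW, with an illustrative table and column name; check the exact syntax against the documentation for your Db2 version:

    -- Add a Db2-maintained change timestamp, hidden from SELECT *.
    ALTER TABLE sales
        ADD COLUMN row_changed TIMESTAMP NOT NULL
            GENERATED ALWAYS FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP
            IMPLICITLY HIDDEN;

    -- Export only rows inserted or updated since the last export.
    SELECT * FROM sales
    WHERE row_changed > TIMESTAMP('2024-01-01-00.00.00');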
There is a similar concept for Db2 z/OS - you have to check that out as I have not tried this one.
Of course there are other ways to solve it, like replication etc.
That is not possible if you do not have a timestamp column. With a timestamp, you can tell which rows are new or modified.
You can also use the TimeTravel feature, in order to get the new values, but that implies a timestamp column.
Another option is to put the tables in append mode and then fetch the rows after a given one. However, this option is not reliable after a reorg, and it affects performance and space utilisation.
One possible option is to use SQL replication, but that needs extra tables for staging.
Finally, another option is to read the logs with the db2ReadLog API, but that implies some development work. Also, just applying the archived logs to the new database is possible; however, the database will remain in roll-forward pending state.

Why do all my 'Sum of Units' rows all have the same values in PowerPivot?

I'm pretty new to PowerPivot and have a problem.
I created an SSIS project (.dtsx) to import around 10 million rows of data and an Analysis Services Tabular Project (.bim) to process the data model.
Up until today, everything worked as expected, but after making a schema change to add further columns to a table and updating the model, I now have a problem. When opening the existing connection in Business Intelligence Development Studio (BIDS) to update the schema changes, I was told that I would have to drop and reload the Sales and Returns tables as they were related.
Now, when I try to filter on a particular attribute, the Sales 'Sum of Units' column always displays the total sum of units for every row, instead of the correct values. I remember having this problem once when I was building the system, but it went away after re-processing the tables in BIDS... this time however, no amount of processing is making any difference.
I'm really hoping that this is a common problem and that someone has a nice easy solution for me, but I'll take whatever I can get at this stage. I'd also quite like to understand what is causing this. Many thanks in advance.
For anyone with a similar problem, I found the answer.
Basically, I had made a schema change and BIDS told me that I had to drop my SalesFact and ReturnsFact tables before updating the model with the new database schema. The problem was that I did not realise that relationships had been set up on these tables and so after re-adding them, the model was missing its relationships to the other tables... that's why all rows showed the same value.
The fix was to put the model into design view and to create relationships between the tables by clicking and dragging between them.
I knew it was something simple.