How to find user creation date along with all permissions in PostgreSQL

I am trying to find the users I have created in PostgreSQL, with a date and time stamp, along with the permissions assigned to them. I have searched a few catalog tables like pg_user but couldn't find this information.
Please advise.

Questions like that come up frequently, usually asking for the table creation time, so I'll try to answer it once and for all.
PostgreSQL does not track the creation time of objects or other auditing information like that in the database. Even if we considered such a feature, it would be questionable how that should work. If you restore a pg_dump, would the creation time of a table or role be the time of the restore? If not, there would have to be a way to override the table or role creation time in SQL. But then you cannot prevent people from dropping an object and creating it again with a fake old creation timestamp.
The correct way to approach such a requirement is the log file. Set log_statement = 'ddl' and all DDL statements will get logged.
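For example, a minimal sketch (assuming superuser access; the parameter can equally be set in postgresql.conf):

    -- Log all DDL statements cluster-wide; a configuration reload is
    -- enough, no server restart needed.
    ALTER SYSTEM SET log_statement = 'ddl';
    SELECT pg_reload_conf();

After that, statements such as CREATE ROLE or GRANT show up in the server log, which answers both parts of the question going forward.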

Related

How can I discover which user created an index, and when?

I have a Postgres table with duplicated indexes (called someName and someName1) applied to the same columns. I would like to know which user executed the DDL that created these indexes, and when it happened. Is this possible in Postgres?
If you hadn't already set up some kind of auditing or aggressive logging before this happened, then your options are pretty limited.
If you retain WAL files, you could go exploring through those (with pg_waldump and other tools, or by doing PITR) to pinpoint the time. This will probably not be a quick and painless exercise. By looking at surrounding changes, or at log files from the same time, you might be able to figure out who was logged on at the time and also had permissions to create the index.
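If you do attempt the WAL route, one small SQL step narrows the search: pg_waldump identifies relations by relfilenode rather than by name, so look those up first (a sketch; note that unquoted identifiers are stored lowercased, so the stored names may differ from someName/someName1):

    -- Map the suspect indexes to the relfilenodes that appear in
    -- pg_waldump output.
    SELECT relname, relfilenode
    FROM pg_class
    WHERE relname IN ('somename', 'somename1');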

Optimize the trigger to add audit log

I have a local database which is the production database, on which all operations are done in real time. I store a log of each action in an audit log table in another database via a trigger. Basically, if any change is made to any of a row's columns, the trigger removes that row and adds it AGAIN (which I don't think is a good approach, as it should simply update the row, but for certain reasons I need to delete and re-insert it).
There are some tables on which operations happen rapidly, with hundreds of rows being added to the database. This slows down saving the data into the audit log table. If the trigger has to delete 100 rows and add 100 again, it will obviously affect performance, and as the number of rows increases, performance will degrade further.
What is the best practice to tackle this? I have been looking into read replicas and Foreign Data Wrappers, but a read replica is read-only in PostgreSQL, and I don't see how a Foreign Data Wrapper would help me; it was suggested by one of my colleagues.
I hope someone can guide me in the right direction.
A log is append-only by definition. Loggers should never be modifying or removing existing entries.
Audit logs are no different. Audit triggers should INSERT an entry for each change (however you want to define "change"). They should never UPDATE or DELETE anything*.
The change and the corresponding log entry should be written to the same database within the same transaction, to ensure atomicity/consistency; logging directly to a remote database will always leave you with a window where the log is committed but the change is not (or vice versa).
If you need to aggregate these log entries and push them to a different database, you should do it from an external process, not within the trigger itself. If you need this to happen in real time, you can inform the process of new changes via a notification channel.
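A minimal sketch of such a trigger, with a hypothetical audit_log table and notification channel (PostgreSQL 11+ trigger syntax; all names are illustrative):

    -- Append-only audit table.
    CREATE TABLE audit_log (
        id         bigint      GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        table_name text        NOT NULL,
        operation  text        NOT NULL,
        changed_at timestamptz NOT NULL DEFAULT clock_timestamp(),
        row_data   jsonb       NOT NULL
    );

    CREATE FUNCTION audit_changes() RETURNS trigger
    LANGUAGE plpgsql AS
    $$
    BEGIN
        -- INSERT only: existing log entries are never touched.
        IF TG_OP = 'DELETE' THEN
            INSERT INTO audit_log (table_name, operation, row_data)
            VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(OLD));
        ELSE
            INSERT INTO audit_log (table_name, operation, row_data)
            VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(NEW));
        END IF;
        -- Wake the external aggregation process; NOTIFY is transactional,
        -- so the listener only sees it once the change commits.
        PERFORM pg_notify('audit_events', TG_TABLE_NAME);
        RETURN NULL;  -- return value is ignored for AFTER triggers
    END;
    $$;

    CREATE TRIGGER orders_audit
        AFTER INSERT OR UPDATE OR DELETE ON orders
        FOR EACH ROW EXECUTE FUNCTION audit_changes();

Because the trigger only ever inserts one row per change, it stays cheap even on busy tables; the expensive aggregation work happens in the external process.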
* In fact, you should revoke UPDATE/DELETE privileges on the audit table from the user inserting the logs. Furthermore, the trigger should ideally be a SECURITY DEFINER function owned by a privileged user with INSERT rights on the log table. The user connecting to the database should not be given permission to write to the log table directly.
This ensures that if your client application is compromised (whether due to a malfunction, or a malicious user e.g. exploiting an SQL injection vulnerability), then your audit log retains a complete and accurate record of everything it changed.
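Sketching that privilege setup, where app_user is the hypothetical role the application connects as and audit_owner is a privileged role (following on from the trigger sketch above):

    -- The connecting user gets no direct write access to the log.
    REVOKE ALL ON audit_log FROM app_user;

    -- The trigger function runs with its owner's rights instead of the
    -- caller's, so app_user needs no privileges on audit_log at all.
    ALTER FUNCTION audit_changes() OWNER TO audit_owner;
    ALTER FUNCTION audit_changes() SECURITY DEFINER;
    GRANT INSERT ON audit_log TO audit_owner;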

DB2 updated rows since last check

I want to periodically export data from Db2 and load it into another database for analysis.
In order to do this, I would need to know which rows have been inserted or updated since the last time I exported from a given table.
A simple solution would probably be to add a timestamp column to every table and use that as a reference, but I don't have such a column at the moment, and I would like to avoid adding one if possible.
Is there any other solution for finding the rows which have been added/updated after a given time (or something else that would solve my issue)?
There is an easy option for a timestamp in Db2 (for LUW) called ROW CHANGE TIMESTAMP. This column is managed by Db2 and can be defined as HIDDEN, so existing SELECT * FROM queries will not retrieve the new column, which would otherwise cause extra costs.
Check out the Db2 CREATE TABLE documentation.
This functionality was originally added for optimistic locking but can be used for situations like this as well.
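A minimal sketch of what that could look like (table name is illustrative; check the exact syntax against the Db2 documentation):

    -- Add a Db2-maintained change timestamp, hidden from SELECT *.
    ALTER TABLE orders
        ADD COLUMN changed_at TIMESTAMP NOT NULL
        IMPLICITLY HIDDEN
        GENERATED ALWAYS FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP;

    -- Incremental export: rows inserted or updated since the last run.
    SELECT * FROM orders
    WHERE ROW CHANGE TIMESTAMP FOR orders > ?;   -- time of the last export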
There is a similar concept for Db2 for z/OS; you would have to check that out yourself, as I have not tried it.
Of course there are other ways to solve this, like replication, etc.
That is not possible if you do not have a timestamp column. With a timestamp, you can tell which rows are new or modified.
You can also use the Time Travel feature in order to get the new values, but that implies timestamp columns as well; see the sketch below.
Another option is to put the tables in append mode and then fetch the rows after a given one. However, this approach is not reliable after a reorg, and it affects performance and space utilisation.
One possible option is to use SQL replication, but that needs extra tables for staging.
Finally, another option is to read the logs with the db2ReadLog API, but that implies development work. Also, simply applying the archived logs to the new database is possible; however, that database will remain in roll-forward pending state.
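For the Time Travel option, a rough sketch of a system-period temporal table (all names are illustrative; see the Db2 documentation for details):

    -- Db2 maintains the SYSTEM_TIME period columns automatically.
    CREATE TABLE orders (
        id        INT           NOT NULL PRIMARY KEY,
        amount    DECIMAL(10,2),
        sys_start TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
        sys_end   TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
        tx_start  TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID,
        PERIOD SYSTEM_TIME (sys_start, sys_end)
    );
    CREATE TABLE orders_history LIKE orders;
    ALTER TABLE orders ADD VERSIONING USE HISTORY TABLE orders_history;

    -- Current rows inserted or updated since the last export:
    SELECT * FROM orders WHERE sys_start > ?;   -- time of the last export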

track changes in database tables

I have a large PostgreSQL database, and I want to track all of its tables to detect when a change has been made.
The reason is that I cannot know the relations between the different tables of the database.
I googled it, but I couldn't find anything helpful.
So how can I know if a change has been made to a table?
There isn't currently a global audit function in PostgreSQL.
It'll be possible to build one using the new logical changeset extraction feature in 9.4, and I know some people are working on that.
In the meantime, you need to add some form of audit trigger to every table.
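A sketch of attaching one generic trigger to every table in the public schema, assuming an audit trigger function log_change() already exists (the function name is hypothetical; syntax matches the 9.x era):

    DO $$
    DECLARE
        tbl text;
    BEGIN
        FOR tbl IN
            SELECT format('%I.%I', schemaname, tablename)
            FROM pg_tables
            WHERE schemaname = 'public'
        LOOP
            -- One row-level audit trigger per table.
            EXECUTE format(
                'CREATE TRIGGER track_changes
                     AFTER INSERT OR UPDATE OR DELETE ON %s
                     FOR EACH ROW EXECUTE PROCEDURE log_change()', tbl);
        END LOOP;
    END;
    $$;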

Seamlessly updating a postgres database - schemas, rename, how?

Actually a simple question, but I wasn't able to find any good conclusive answer.
Assuming a production database foo_prd, and a newer version of the same foo_new (on the same server) that is supposed to replace the old one. What is the cleanest way to seamlessly switch from _prd to _new?
Renaming the databases would require disconnecting the current users via their PIDs. That would take down some requests, and new users might connect during the process. I was thinking of creating the tables of the new database in a different SCHEMA and then changing the search_path, e.g. from "$user",prd to "$user",new,prd.
What could possibly go wrong? Do you have any better suggestions? Am I taking the wrong approach altogether?
Do as you suggest: create the tables of the new database as different schema and then change the search_path.
But also create a user with the same name as the new schema, and test everything by logging in as this user with each of your apps before changing the search_path; the new schema will be first in that user's search_path by default because the name matches.
Finally, take care when you come to drop the old schema. I suggest renaming it first in case anything refers to its objects using a qualified reference (e.g. prd.table or prd.function). After a few days/weeks it can then be dropped with confidence.
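A sketch of the whole sequence, assuming both versions live in the same database and using the schema/role names from the question:

    -- Build the new version alongside the old one.
    CREATE SCHEMA new;
    -- ...create tables and load data in schema "new"...

    -- Test role whose default search_path resolves "new" first,
    -- because the schema name matches the role name ("$user").
    CREATE USER new;
    GRANT USAGE ON SCHEMA new, prd TO new;

    -- When testing passes, switch everyone over (new sessions only).
    ALTER DATABASE foo_prd SET search_path = "$user", new, prd;

    -- Later: park the old schema before dropping it for good.
    ALTER SCHEMA prd RENAME TO prd_old;
    -- ...after a few days/weeks...
    DROP SCHEMA prd_old CASCADE;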
I would version my schema, and change my app to point to the new schema when ready.