I wonder if anyone has ever stumbled upon a way of finding the user who made a data change (update) in a table that has CDC (Change Data Capture) enabled.
For example, the table "User" has a corresponding table in which info about changes is stored, called "User_CT", and it has a column called __$start_lsn which can be passed to the function sys.fn_cdc_map_lsn_to_time to get the time of that transaction. Can you somehow find which user executed that transaction?
I really hope that there is an answer to that question.
I am trying to find the users I have created in PostgreSQL, with a date and time stamp, along with the permissions assigned to them. I have searched a few catalog tables like pg_user but couldn't find it.
Please suggest.
Questions like that come up frequently, usually asking for the table creation time, so I'll try to answer that for good.
PostgreSQL does not track the creation time of objects or other auditing information like that in the database. Even if we considered such a feature, it would be questionable how that should work. If you restore a pg_dump, would the creation time of a table or role be the time of the restore? If not, there would have to be a way to override the table or role creation time in SQL. But then you cannot prevent people from dropping an object and creating it again with a fake old creation timestamp.
The correct way to approach such a requirement is the log file. Set log_statement = 'ddl' and all DDL statements will get logged.
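For example, the setting can be changed from SQL (assuming superuser rights) or directly in postgresql.conf:

    -- Log every DDL statement (CREATE, ALTER, DROP, ...) to the server log.
    ALTER SYSTEM SET log_statement = 'ddl';
    SELECT pg_reload_conf();   -- pick up the change without a restart

    -- Equivalent postgresql.conf entry:
    -- log_statement = 'ddl'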
I have a local database, which is the production database, on which all operations are done in real time. I am storing a log of each action in an audit log table in another database via a trigger. Basically, if any change is made to any of a row's columns, the trigger removes that row and adds it AGAIN (which I don't think is a good approach, as it should simply update the row, but for certain reasons I need to delete and re-add it).
There are some tables on which operations happen rapidly, with hundreds of rows being added to the database. This is slowing down the process of saving the data into the audit log table. If the trigger has to delete 100 rows and add 100 again, it will obviously affect performance, and as the number of rows grows, performance will degrade further.
What would be the best practice to tackle this? I have been looking into Read Replicas and Foreign Data Wrappers, but a Read Replica is read-only in PostgreSQL, and I don't really see how a Foreign Data Wrapper would help me; it was suggested by one of my colleagues.
Hope someone can guide me in the right direction.
A log is append-only by definition. Loggers should never be modifying or removing existing entries.
Audit logs are no different. Audit triggers should INSERT an entry for each change (however you want to define "change"). They should never UPDATE or DELETE anything*.
The change and the corresponding log entry should be written to the same database within the same transaction, to ensure atomicity/consistency; logging directly to a remote database will always leave you with a window where the log is committed but the change is not (or vice versa).
If you need to aggregate these log entries and push them to a different database, you should do it from an external process, not within the trigger itself. If you need this to happen in real time, you can inform the process of new changes via a notification channel.
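To make that concrete, here is a minimal PostgreSQL sketch tying the pieces together: an append-only log table, a trigger that INSERTs one entry per change in the same transaction, and a pg_notify() call so an external aggregator can pick entries up in real time. All object names here (audit_log, orders, audit_changes) are illustrative assumptions, not from the question.

    -- Append-only audit table; entries are only ever inserted, never updated or deleted.
    CREATE TABLE audit_log (
        id          bigserial    PRIMARY KEY,
        table_name  text         NOT NULL,
        operation   text         NOT NULL,   -- 'INSERT' / 'UPDATE' / 'DELETE'
        row_data    jsonb,                   -- snapshot of the affected row
        changed_at  timestamptz  NOT NULL DEFAULT now()
    );

    CREATE OR REPLACE FUNCTION audit_row_change() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'DELETE' THEN
            INSERT INTO audit_log (table_name, operation, row_data)
            VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(OLD));
        ELSE
            INSERT INTO audit_log (table_name, operation, row_data)
            VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(NEW));
        END IF;
        -- Wake up the external aggregator; the payload is just a hint,
        -- the process reads the actual entries from audit_log itself.
        PERFORM pg_notify('audit_changes', TG_TABLE_NAME);
        RETURN NULL;   -- return value is ignored for AFTER triggers
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER orders_audit
    AFTER INSERT OR UPDATE OR DELETE ON orders
    FOR EACH ROW EXECUTE FUNCTION audit_row_change();

    -- The external process that pushes entries to the remote database subscribes with:
    -- LISTEN audit_changes;

The log entry and the data change stay in one transaction, while the aggregation and the push to the remote database happen outside the trigger, so the hot tables are not slowed down by cross-database writes.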
* In fact, you should revoke UPDATE/DELETE privileges on the audit table from the user inserting the logs. Furthermore, the trigger should ideally be a SECURITY DEFINER function owned by a privileged user with INSERT rights on the log table. The user connecting to the database should not be given permission to write to the log table directly.
This ensures that if your client application is compromised (whether due to a malfunction, or a malicious user e.g. exploiting an SQL injection vulnerability), then your audit log retains a complete and accurate record of everything it changed.
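A rough sketch of that privilege setup, assuming the trigger function above and two illustrative roles (app_user for the application connection, audit_owner as the privileged owner of the log):

    -- The application role may fire the trigger but may not touch the log directly.
    REVOKE INSERT, UPDATE, DELETE, TRUNCATE ON audit_log FROM app_user;

    -- The trigger function runs with its owner's rights, so the INSERT into
    -- audit_log succeeds even though app_user has no privileges on that table.
    ALTER TABLE audit_log OWNER TO audit_owner;
    ALTER FUNCTION audit_row_change() OWNER TO audit_owner;
    ALTER FUNCTION audit_row_change() SECURITY DEFINER;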
As we know, the details of every job are stored in the RDBMS in the table Hsp_Job_Status. Unfortunately, this table gets truncated each time we restart the services. As per business requirements we needed to keep a record of the BRs (business rules) launched by users and their details. So we developed a workaround and created a trigger on the table that inserts each new row/update into a backup table. This was working fine up till now.
Recently, after a restart, the values of the Job_id (i.e. the primary key) stopped appearing in order: the series restarted from an earlier number. It had been running in the 106XX range, but after the restart the numbering started from 100XX. Since Hsp_Job_Status was truncated during the restart, there was no duplicate-primary-key issue in that table, but it did create duplicate values in the backup table, which has broken the backup table and the procedure that we use.
Usually the series continues even after the table is truncated, so something may have gone wrong during the restart. Can you please suggest what I should check and do to resolve this issue?
Thanks in advance.
Partial answer: the simple solution is to add an instance prefix to the Job_Id and, on service startup, increment the active instance. The instance table can then include details from startup/shutdown events to help drive SLA metrics. Unfortunately, I don't know exactly how you would go about implementing such a scheme, since it's been many years since I've spoken any SQL dialect.
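A rough sketch of what such a scheme could look like (the table and column names here, service_instance and hsp_job_status_bak, are illustrative and not Hyperion objects):

    -- One row per service startup; instance_id increases on every restart.
    CREATE TABLE service_instance (
        instance_id  INTEGER    NOT NULL PRIMARY KEY,
        started_at   TIMESTAMP  NOT NULL,
        stopped_at   TIMESTAMP           -- filled in on shutdown, useful for SLA metrics
    );

    -- Backup table keyed on (instance_id, job_id), so a job_id that is re-used
    -- after a restart no longer collides with rows captured before the restart.
    CREATE TABLE hsp_job_status_bak (
        instance_id  INTEGER  NOT NULL REFERENCES service_instance (instance_id),
        job_id       INTEGER  NOT NULL,
        -- ... remaining columns copied from Hsp_Job_Status ...
        PRIMARY KEY (instance_id, job_id)
    );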
Does anyone here know how to get the name of the table that was changed, updated, or deleted in SQLite? I found the functions changes() and totalChanges(), but they only return the number of database rows that were changed, inserted, or deleted by the most recently completed SQL statement.
In most RDBMSs you have some kind of journaling that captures all database transactions for data backup and recovery. In Oracle, it's called the redo log. That is where you would go to find out which table has changed.
But I'm not familiar enough with SQLite to know if anything like that is available. I did find a thread where a similar question was asked, and it was recommended to implement it yourself. Try reading through this link and see if it satisfies your requirements:
But aside from all of that, I would also recommend that your app use views; that way you protect the model from changes.
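If you do end up implementing it yourself, one plain-SQL option is a trigger per table of interest that records the table name it fires on (users and change_log here are illustrative names; at the C API level, sqlite3_update_hook() also reports the name of the changed table):

    -- Generic change log; each trigger below records which table it fired on.
    CREATE TABLE IF NOT EXISTS change_log (
        table_name  TEXT NOT NULL,
        operation   TEXT NOT NULL,
        changed_at  TEXT NOT NULL DEFAULT (datetime('now'))
    );

    CREATE TRIGGER users_after_update AFTER UPDATE ON users
    BEGIN
        INSERT INTO change_log (table_name, operation) VALUES ('users', 'UPDATE');
    END;

    CREATE TRIGGER users_after_delete AFTER DELETE ON users
    BEGIN
        INSERT INTO change_log (table_name, operation) VALUES ('users', 'DELETE');
    END;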
I am currently in the process of setting up a database structure to manage events.
Events have properties which are stored in separate tables like 'location', 'timeslots', 'files' etc.
This in itself is not so difficult to set up. However, the tool needs to be able to host multiple events at the same time. So, for example, a user can manage the ABC event, which occurs simultaneously with the DEF event. Obviously the database needs to be able to differentiate between these events.
My first idea would be to add a table with a unique identifier describing each event (name: ABC) and then add a field with this identifier to all my other tables.
This would, however, mean that the tool could become a bit slow, because it has to query tables that contain data completely irrelevant to that particular event.
Are there any other solutions or should I just not worry about the bloat?
Answering a pretty old question, but it comes out 6th in a Google query for "postgre database events", so it could be helpful to others: no, don't worry about it. Just create indexes on the foreign key in the referencing tables to speed up the lookups.
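A minimal sketch of that layout in PostgreSQL (event and timeslot are illustrative names standing in for the tables mentioned in the question):

    CREATE TABLE event (
        event_id  serial PRIMARY KEY,
        name      text   NOT NULL          -- e.g. 'ABC', 'DEF'
    );

    CREATE TABLE timeslot (
        timeslot_id  serial       PRIMARY KEY,
        event_id     integer      NOT NULL REFERENCES event (event_id),
        starts_at    timestamptz  NOT NULL,
        ends_at      timestamptz  NOT NULL
    );

    -- Index on the foreign key so per-event lookups stay fast as the shared tables grow.
    CREATE INDEX timeslot_event_id_idx ON timeslot (event_id);

    -- Typical per-event query, which only touches one event's rows via the index:
    -- SELECT * FROM timeslot WHERE event_id = 1;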