I have to investigate who or what caused table rows to disappear.
So, I am thinking about creating a BEFORE DELETE trigger that logs whatever invokes the deletion. Is this possible? Can I get the db client name or, even better, the script that invokes the delete query, and log it to a separate, temporarily created log table?
I am open to other solutions, too.
Thanks in advance!
You can't get "the script" which issued the delete statement, but you can get various other information:
current_user will return the current Postgres user that initiated the delete statement
inet_client_addr() will return the IP address of the client's computer
current_query() will return the complete statement that caused the trigger to fire
More details about these functions are available in the manual:
http://www.postgresql.org/docs/current/static/functions-info.html
The Postgres Wiki contains two examples of such an audit trigger:
https://wiki.postgresql.org/wiki/Audit_trigger_91plus
https://wiki.postgresql.org/wiki/Audit_trigger (somewhat outdated)
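For illustration, here is a minimal sketch that combines these functions in a BEFORE DELETE trigger; my_table stands in for the affected table, and delete_log and log_delete are names made up for this example:

CREATE TABLE delete_log (
    deleted_at  timestamptz NOT NULL DEFAULT now(),
    db_user     text,
    client_addr inet,
    query       text,
    old_row     text
);

CREATE OR REPLACE FUNCTION log_delete() RETURNS trigger AS $$
BEGIN
    -- record who deleted, from where, with which statement, and what the row was
    INSERT INTO delete_log (db_user, client_addr, query, old_row)
    VALUES (current_user, inet_client_addr(), current_query(), OLD::text);
    RETURN OLD;  -- returning OLD lets the DELETE proceed
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER log_delete_trg
BEFORE DELETE ON my_table
FOR EACH ROW EXECUTE PROCEDURE log_delete();

Every deleted row then ends up in delete_log together with the user, the client address, and the full statement that caused the deletion.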
We are trying to debug a very old web application that uses DB2.
I would like to run a trace to see what happens when I click on a button but as soon as I try I receive this error:
create event monitor ........ for statement where AUTH_ID='.......' write to table
"USER" does not have privilege to perform operation "CREATE EVENT MONITOR".. SQLCODE=-552, SQLSTATE=42502,
It is evident to me that our user doesn't have enough privileges to run a trace.
In T-SQL there is a way to impersonate another user:
USE AdventureWorks2019
GO
EXECUTE AS USER = 'Test';
SELECT * FROM Sales.Customer;
REVERT;
I would like to know if there is the same command in DB2.
The goal is to try to run something like SQL Server Profiler for DB2 and sniff the queries.
Yes, I already tried to run GRANT DBADM ON DATABASE TO USER E.....O and of course the system replied:
"E.....O" does not have the privilege to perform operation "GRANT".. SQLCODE=-552, SQLSTATE=42502, DRIVER=3.69.56
We are stuck and we cannot move forward because we cannot see how the queries work. Asking for more privileges for our user is not an option, as we are migrating a customer from a competitor to our side.
What I'm trying to do is a sort of privilege escalation without committing any crime.
I also thought about connecting to the DB2 database from SQL Server and using PolyBase, but as far as I know that feature only lets me run queries; I cannot sniff the parameters.
Db2 has a couple of ways to "impersonate", but all within the security architecture and fully audited.
I would recommend checking out "Trusted Context", basically adding privileges or switching roles based on predefined connection properties.
Another option is to look into SET SESSION AUTHORIZATION (also known as SET SESSION_USER). It switches the SESSION_USER to a different user ID.
As said, both options only work with the proper privileges and with the security admin involved.
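A minimal sketch of the SET SESSION AUTHORIZATION route; MYUSER and TARGETUSR are hypothetical user IDs, and the grant has to be issued by someone with SECADM authority:

GRANT SETSESSIONUSER ON USER TARGETUSR TO USER MYUSER;
-- afterwards, in a session connected as MYUSER:
SET SESSION AUTHORIZATION = TARGETUSR;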
Depending on what you want to inspect, db2trc and other commands could be of use, too.
I want to monitor all queries to my PostgreSQL instance. Following these steps, I created a custom DB parameter group, set log_statement to all and log_min_duration_statement to 1, applied the parameter group to my instance, and rebooted it. Then I fired a POST request to the instance, but no record of the query was to be found in the Recent Events & Logs tab of my instance. Doing a SELECT * FROM table query in psql, however, shows that the resource was created and the POST request worked. What am I missing in order to see the logs?
Setting log_min_duration_statement to 1 tells Postgres to only log queries that take longer than 1 ms. If you set it to 0 instead, all queries will be logged (the default, no logging, is -1).
You followed the right steps; all that is left is to make sure the parameter group is properly applied to your Postgres instance. Look at the Parameter Group entry in the Configuration Details tab of your instance, and make sure it shows the right group name followed by "in-sync".
A reboot is usually required when changing the parameter group of an instance; this may be what is (was?) missing in your case.
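As a quick sanity check, you can also ask the running instance for the effective values from psql:

SHOW log_statement;              -- should return 'all'
SHOW log_min_duration_statement; -- '0' logs every statement, '1' only those over 1 ms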
I am using Perl's getpwnam to check whether an entry exists in the LDAP database I'm using to store user information. This is used in a script that deletes LDAP entries.
The thing is, when I run the script once, it succeeds: I can no longer see the entry via the Unix command getent passwd, and it's deleted from the LDAP database as well. The problem is that when I run the script again and ask it to delete the same user entry (to check that it's idempotent), the getpwnam test still returns success (and prints the entry that was just deleted), which causes the script to throw an error about attempting to delete a non-existent entry.
Why is Perl's getpwnam behaving like this? Is there a more robust test for LDAP entry existence short of binding to the LDAP server and querying it?
Apparently, the nscd cache is not keeping track of your deletions.
I'm reluctant to call this an "answer" since I don't know if nscd is supposed to stay synchronized with deletions, or how to fix it. The only thing I've ever done with nscd is remove it.
I am trying to validate one field through postgres trigger.
If the targeted field has a value with decimals, I need to throw a warning while still allowing the user to save the record.
I tried the options RAISE EXCEPTION and RAISE ... USING, but they throw an error in the UI and the transaction is aborted.
I also tried RAISE NOTICE and RAISE WARNING, but then no warning is shown and the record is simply saved.
It would be great if anyone could help with this.
Thanks in advance!
You need to set client_min_messages to a level that'll show NOTICEs and WARNINGs. You can do this:
At the transaction level with SET LOCAL
At the session level with SET
At the user level with ALTER USER
At the database level with ALTER DATABASE
Globally in postgresql.conf
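For example (app_user and app_db are hypothetical names):

SET LOCAL client_min_messages = notice;                 -- transaction level (inside a transaction block)
SET client_min_messages = notice;                       -- session level
ALTER USER app_user SET client_min_messages = notice;   -- user level
ALTER DATABASE app_db SET client_min_messages = notice; -- database level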
You must then check for messages from the server after running queries and display them to the user or otherwise handle them. How to do that depends on the database driver you're using, which you haven't specified. PgJDBC? libpq? other?
Note that raising a notice or warning will not cause the transaction to pause and wait for user input. You really don't want to do that. Instead RAISE an EXCEPTION that aborts the transaction. Tell the user about the problem, and re-run the transaction if they approve it, possibly with a flag set to indicate that an exception should not be raised again.
It would be technically possible to have a PL/Perlu, PL/Pythonu, or PL/Java trigger pause execution while it asked the client via a side-channel (like a TCP socket) to approve an action. It'd be a really bad idea, though.
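To make the warning variant concrete, here is a minimal sketch for a hypothetical table items with a numeric column price; the warning reaches the client (given a suitable client_min_messages) but the row is still saved:

CREATE OR REPLACE FUNCTION warn_on_decimals() RETURNS trigger AS $$
BEGIN
    IF NEW.price <> trunc(NEW.price) THEN
        -- a WARNING is sent to the client but does not abort the transaction
        RAISE WARNING 'price % contains decimals', NEW.price;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER warn_on_decimals_trg
BEFORE INSERT OR UPDATE ON items
FOR EACH ROW EXECUTE PROCEDURE warn_on_decimals();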
Hi,
I am trying to insert a batch of records at a time. When any of the records fails to insert, I need to trap that record, log it to my failed-record maintenance table, and then the insert should continue. Kindly help on how to do this.
If you are using a Spring or EJB container, there is a simple trick which works very well: provide a LogService with a logWarning(String message) method. The method must be annotated/configured with the REQUIRES_NEW transaction setting.
If not, you'll have to simulate it using API calls: open a separate connection for the logging, begin a transaction when you enter the method, and commit it before leaving.
When not using transactions for the insert, there is actually nothing special you need to do, as by default most databases run in autocommit mode and commit after every statement.
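If the rows are inserted inside the database itself, the same per-record trapping can be done there. Here is a minimal PL/pgSQL sketch (PostgreSQL assumed, since the question doesn't name the database; staging_table, target_table, and failed_records are hypothetical):

DO $$
DECLARE
    rec record;
BEGIN
    FOR rec IN SELECT id, payload FROM staging_table LOOP
        BEGIN
            INSERT INTO target_table (id, payload) VALUES (rec.id, rec.payload);
        EXCEPTION WHEN others THEN
            -- the failed insert is rolled back in its own subtransaction,
            -- and the loop carries on with the next record
            INSERT INTO failed_records (id, payload, error_message)
            VALUES (rec.id, rec.payload, SQLERRM);
        END;
    END LOOP;
END;
$$;

Each BEGIN ... EXCEPTION block runs as a subtransaction, so one failing row is logged and skipped without aborting the whole batch.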