I'd like to see a transactional history of operations that have been executed on one of my tables, and which user executed each operation. Does PostgreSQL offer any tools that allow that kind of historical lookup?
Others may be able to point you to good utilities that handle this for you, but I know triggers can be used to create audit logs of tables. If you need more complex logic for how and what you want to audit, you can also write procedural functions and call them from your triggers. Example: Postgres trigger function
See this link: http://wiki.postgresql.org/wiki/Audit_trigger_91plus
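As a rough sketch of the idea (the table, column, and function names below are made up, and the generic trigger behind the wiki link above is far more complete), a minimal row-level audit setup might look like this:

CREATE TABLE my_table_audit (
    audit_id   bigserial   PRIMARY KEY,
    operation  text        NOT NULL,                      -- 'INSERT', 'UPDATE' or 'DELETE'
    changed_by text        NOT NULL DEFAULT current_user, -- who ran the statement
    changed_at timestamptz NOT NULL DEFAULT now(),
    old_row    text,                                      -- textual copy of the old row, if any
    new_row    text                                       -- textual copy of the new row, if any
);

CREATE OR REPLACE FUNCTION my_table_audit_fn() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO my_table_audit (operation, old_row) VALUES (TG_OP, OLD::text);
    ELSIF TG_OP = 'UPDATE' THEN
        INSERT INTO my_table_audit (operation, old_row, new_row) VALUES (TG_OP, OLD::text, NEW::text);
    ELSE
        INSERT INTO my_table_audit (operation, new_row) VALUES (TG_OP, NEW::text);
    END IF;
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER my_table_audit_trg
    AFTER INSERT OR UPDATE OR DELETE ON my_table
    FOR EACH ROW EXECUTE PROCEDURE my_table_audit_fn();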
I need to implement transparent data encryption (TDE) in Postgres. To do this, I am finding out which functions are called when INSERT and SELECT are executed. I used LLDB (from LLVM) on SELECT.
When I try to do the same with INSERT, it does not work:
the backend process stops and the insertion never completes. I followed this guide: https://eax.me/lldb/.
What could be wrong? How can I find out which functions are called on insertion (in the case of SELECT it was secure_read, etc.)? And, if anyone knows, how can I change the function code in the source?
To clarify: the client and server are on the same machine, and the same user both inserts and reads the data.
Unfortunately I do not have enough reputation to add screenshots.
The SQL statements are the wrong level to start debugging. You should look at the code where blocks are read and written. That would be in src/backend/storage/smgr.
Look at the functions mdread and mdwrite in md.c. This is probably where you'd start hacking.
PostgreSQL v12 has introduced “pluggable storage”, so you can write your own storage manager. See the documentation. If you don't want to patch PostgreSQL, but have an extension that will work with standard PostgreSQL, that would be the direction to take.
So far I have only covered block storage, but you must not forget WAL. Encrypting that will require hacking PostgreSQL.
This is a complex question which you should post to the PostgreSQL hackers mailing list: https://www.postgresql.org/list/pgsql-hackers/.
You could start by setting a GDB breakpoint on ExecutorStart in execMain.c.
We are planning rolling upgrades to our cloud environments and are considering leveraging CREATE OR ALTER statements for our procedures, triggers, views, and functions.
There is a gap between
DROP <OBJECT_CLASS> IF EXISTS <theThing>
GO
and the eventual
CREATE <OBJECT_CLASS> <theNewThing>
GO
that will mean clients won't find the object, and in the case of triggers, the app may not even be able to detect an error. I am hoping that
CREATE OR ALTER <OBJECT_CLASS> <theOldOrNewThing>
can provide a means to perform this otherwise non-transactional operation in the middle of other transactions, perhaps locking out all new transactions until it is done (which would be ideal!).
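Concretely, the shape of deployment statement we have in mind is something like the sketch below (the procedure, table, and column names are made up):

CREATE OR ALTER PROCEDURE dbo.usp_GetOrdersForCustomer
    @CustomerId int
AS
BEGIN
    SET NOCOUNT ON;
    -- the new body replaces the old one in place; the object never stops existing
    SELECT OrderId, OrderDate, TotalDue
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId;
END;
GO

The assumption, which is exactly what I would like confirmed, is that this takes the same schema modification lock as a plain ALTER, so concurrent callers block briefly instead of hitting a missing object.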
What are the risks or real-life experiences that you see?
Does anyone have authoritative technical insight into how SQL Server (2016 SP1+) will react to the CREATE OR ALTER statement on a running database?
After researching triggers, I've only come up with material showing how to handle UPDATE, INSERT, and DELETE. It seems like that's even part of the syntax itself. DB2 Docs on Triggers
Is there any kind of trigger, or something similar, which would let me track a larger set of actions, things like SELECT and ALTER TABLE?
We (unfortunately) share a database with some teams that we don't entirely trust not to do things like run insane SELECT statements (locking up the database) or ALTER TABLE without us knowing. We'd like to be able to track when these happen and which user made the change.
Please, no suggestions recommending we get our database separated in some way. We're working towards that in the long term, but we need this in the short term.
The link for DB2 docs given in your post points to IBM i. Is your database DB2 for i?
For IBM i, you can use the detailed database monitor to capture all SQL statements, including DDL commands like ALTER TABLE. However, running the detailed database monitor for all users causes performance problems.
We were in the same situation as you, with multiple teams using the same database server. In our case we ended up writing custom user exit programs to capture all SQL statements (with user details).
Link to database monitor:
https://www.ibm.com/support/knowledgecenter/en/ssw_ibm_i_72/rzajq/strdbmon.htm
Is it possible to get the table structure, as db2look does, from SQL?
Or is the command line the only way? I could wrap db2look in an external stored procedure written in C, but that is not what I am looking for.
Clarification added later:
I want to know, from SQL, which tables have the NOT LOGGED option.
It is possible to reconstruct the table structure from regular SQL and the public DB2 catalog; however, it is complex and requires some deeper skills.
The metadata is available in the DB2 catalog views in the SYSCAT schema. For a regular table you would first start off by looking into the values in SYSCAT.TABLES and SYSCAT.COLUMNS. From there you would need to branch off to other views depending on what table and column options you are after, whether time-travel tables, special partitioning rules, or many other options are involved.
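As a small sketch of where to start (the schema and table names are placeholders), the basic column layout of one table can be pulled from the catalog like this:

SELECT c.colno,
       c.colname,
       c.typename,
       c.length,
       c.scale,
       c.nulls
FROM   syscat.columns c
WHERE  c.tabschema = 'MYSCHEMA'
  AND  c.tabname   = 'MYTABLE'
ORDER BY c.colno;

From output like that you can assemble the column list of a CREATE TABLE statement; keys, constraints, and the more exotic options live in further SYSCAT views.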
Serge Rielau published an article on developerWorks called Backup and restore SQL schemas for DB2 Universal Database that provides a set of stored procedures that will do exactly what you're looking for.
The article is quite old (2006) so you may need to put some time in to update the procedures to be able to handle features that were added to DB2 since the date of publication, but the procedures may work for you now and are a nice jumping off point.
As there is no support for user-defined functions or stored procedures in Redshift, how can I achieve an UPSERT mechanism in Redshift, which is based on ParAccel, a PostgreSQL 8.0.2 fork?
Currently, I'm trying to achieve the UPSERT mechanism using an IF...THEN...ELSE... statement,
e.g.:
IF NOT EXISTS(SELECT...WHERE(SELECT..))
THEN INSERT INTO tblABC() SELECT... FROM tblXYZ
ELSE UPDATE tblABC SET.,.,.,. FROM tblXYZ WHERE...
which gives me an error, since I'm writing this code on its own, not inside a function or stored procedure.
So, is there any solution to achieve UPSERT?
Thanks
You should probably read this article on upsert by depesz. You can't rely on SERIALIZABLE for this since, AFAIK, ParAccel doesn't support full serializability like Pg 9.1+. As outlined in that post, you can't really do what you want purely in the DB anyway.
The short version is that even on current PostgreSQL versions that support writable CTEs it's still hard. On an 8.0 based ParAccel, you're pretty much out of luck.
I'd do a staged merge. COPY the new data to a temporary table on the server, LOCK the destination table, then do an UPDATE ... FROM followed by an INSERT INTO ... SELECT. Doing the data uploads in big chunks and locking the table for the upserts is reasonably in keeping with how Redshift is used anyway.
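A rough sketch of that staged merge, assuming tblXYZ already holds the freshly loaded rows (e.g. via COPY) and that id is the column used to match rows (adjust the table, key, and column names to your schema):

BEGIN;

LOCK tblABC;

-- update the rows that already exist in the target ...
UPDATE tblABC
SET    col1 = x.col1,
       col2 = x.col2
FROM   tblXYZ x
WHERE  tblABC.id = x.id;

-- ... then insert the rows that do not exist yet
INSERT INTO tblABC
SELECT x.*
FROM   tblXYZ x
LEFT JOIN tblABC t ON t.id = x.id
WHERE  t.id IS NULL;

COMMIT;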
Another approach is to externally co-ordinate the upserts via something local to your application cluster. Have all your tools communicate via an external tool where they take an "insert-intent lock" before doing an insert. You want a distributed locking tool appropriate to your system. If everything's running inside one application server, it might be as simple as a synchronized singleton object.