Convert simple INSERT to multi-table transaction inside Postgres? - postgresql

I have an application that uses a library that sends simple INSERTs to Postgres.
Here is an illustrative example:
INSERT INTO COMPANIES (company_name)
VALUES ('Acme, Inc.');
However, I need to take this a bit further and embed the simple INSERT into a transaction that also inserts into other tables, like this:
BEGIN;
-- some insert to another table first
INSERT INTO COMPANIES (company_name)
VALUES ('Acme, Inc.');
-- some insert to yet another table last
COMMIT;
The simplest solution would be to modify the library's code to do what I need. However, I don't really want to do that as it makes patch management for the library very difficult.
An alternative would be to catch and modify the INSERT via the ORM just before it gets handed over to Postgres. However, this is also not a great approach as it requires dealing with undocumented internals of the ORM, which might change with new versions of the ORM.
I have therefore been looking for a way to do this inside Postgres.
My focus has been on a TRIGGER that would fire on INSERT to the relevant table, using the INSTEAD OF specifier.
This is where I'm stuck, as INSTEAD OF seems to work only for views.
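For illustration, this is roughly the kind of thing I was hoping to attach to COMPANIES directly (the trigger function, the trigger name, and the extra table below are placeholders I made up); written as a plain BEFORE INSERT trigger it at least parses on a table, unlike INSTEAD OF:
CREATE OR REPLACE FUNCTION companies_insert_extras() RETURNS trigger AS $$
BEGIN
    -- placeholder for the "insert to another table first" part
    INSERT INTO company_audit (company_name, created_at)
    VALUES (NEW.company_name, now());
    RETURN NEW;  -- let the original INSERT into COMPANIES proceed
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER companies_insert_extras_trg
    BEFORE INSERT ON companies
    FOR EACH ROW
    EXECUTE PROCEDURE companies_insert_extras();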
Is what I'm trying to do feasible? Might somebody have a code example?
Thank you very much.

Related

Is it possible to change a field value (+1) in a Postgres table on each request?

I decided to make a visit counter and change the value of a table field on each request. I know that PostgreSQL has sequences ("CREATE SEQUENCE...bla-bla-bla.."), but I don't know how to hook one up to each request to a database table. Is it possible to change a table value this way? Does somebody know how to do this?
Thank you so much, my kind Samaritan friend )
It is not a good idea to turn all read operations into write operations just to have some stats. Ideally you should have a dedicated table for stats, and enqueue updates to that table so it is updated asynchronously.
That said, you can replace your plain SELECT queries with stored procedures, so inside the procedure you can do whatever you want and return whatever you want. But I do not recommend that in production for this use case.
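A minimal sketch of that stored-procedure idea, assuming a hypothetical pages table and a separate page_stats table (both names made up here); keep in mind the advice above about doing this asynchronously instead:
CREATE OR REPLACE FUNCTION get_page(p_page_id integer)
RETURNS SETOF pages AS $$
BEGIN
    -- record the visit in a dedicated stats table
    UPDATE page_stats SET visits = visits + 1 WHERE page_id = p_page_id;
    -- then return the data the caller actually asked for
    RETURN QUERY SELECT * FROM pages WHERE id = p_page_id;
END;
$$ LANGUAGE plpgsql;

-- the application then calls this instead of a plain SELECT
SELECT * FROM get_page(42);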

DB2 "Triggers" on actions beyond update, insert and delete

After researching triggers, I've only come up with things showing how to handle update, insert and delete. It seems like that's even part of the syntax itself. DB2 Docs on Triggers
Is there any kind of trigger, or something similar, which would let me track a larger set of actions, things like SELECT and ALTER TABLE?
We (unfortunately) share a database with some teams which we don't strictly trust to not do things like run insane SELECT statements (locking up the databases) or ALTER TABLE without us knowing. We'd like to be able to track when these happen and what user made the change.
Please, no suggestions recommending we get our database separated in some way. We're working towards that in the long term, but we need this in the short term.
The link for DB2 docs given in your post points to IBM i. Is your database DB2 for i?
For IBM i, you can use the detailed database monitor to capture all SQL statements, including DDL commands like ALTER TABLE. However, running the detailed database monitor for all users causes performance problems.
We were in the same situation as you, with multiple teams using the same server as a database. In our case, we ended up writing custom user exit programs to capture all SQL statements (with user details).
Link to database monitor:
https://www.ibm.com/support/knowledgecenter/en/ssw_ibm_i_72/rzajq/strdbmon.htm

PostgreSQL: OK to allow errors?

Before I try to insert a row into a PostgreSQL table, should I query whether the insert would violate a constraint?
I do check beforehand when a failed insert would cause unwanted side effects (e.g., a sequence getting auto-incremented).
But, if there are no possible side effects, is it OK to just blindly try to insert into a table? Or, is it better practice to prevent errors by anticipating them when possible (as advised in Objective-C)?
Also, when performing the insert inside an SQL function, will other queries (e.g., CTEs) inside the function get rolled back if the insert fails?
In general, testing beforehand is not a good idea, because it requires you to explicitly lock tables to prevent other clients from changing or inserting data between your test and your insert. Explicit locking is bad for concurrency.
Serials getting auto-incremented by failed inserts is generally not a problem. Just don't assume the values inserted into the database are consecutive.
A database and Objective-C are two completely different things. Let the database check for problems; it is much easier to add the appropriate constraints to your schema than it is to check everything in your client program.
The default is to roll back to the start of the transaction, but you can control that with savepoints and ROLLBACK TO SAVEPOINT. However, a CTE is part of the query, and queries are always rolled back completely when part of them fails. You might be able to work around that by splitting the CTE off into a separate query that creates a temp table; then you can use the temp table instead of the CTE.
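A minimal sketch of both points, assuming a hypothetical users table with a unique constraint on email (and PostgreSQL 9.0+ for the DO block):
-- plain SQL: just try it; the statement either succeeds or raises an error
INSERT INTO users (email) VALUES ('a@example.com');

-- in PL/pgSQL, a BEGIN ... EXCEPTION block uses an implicit savepoint,
-- so only the work inside the block is rolled back when the INSERT fails
DO $$
BEGIN
    INSERT INTO users (email) VALUES ('a@example.com');
EXCEPTION WHEN unique_violation THEN
    RAISE NOTICE 'row already exists, continuing';
END;
$$;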

Writing scripts for PostgreSQL to update database?

I need to write an update script that will check to see if certain tables, indexes, etc. exist in the database, and if not, create them. I've been unable to figure out how to do these checks, as I keep getting "syntax error at IF" messages when I type them into a query window in pgAdmin.
Do I have to do something like write a stored procedure in the public schema that does these updates using Pl/pgSQL and execute it to make the updates? Hopefully, I can just write a script that I can run without creating extra database objects to get the job done.
If you are on PostgreSQL 9.1 or later, you can use CREATE TABLE IF NOT EXISTS.
On 9.0 you can wrap your IF condition code into a DO block: http://www.postgresql.org/docs/current/static/sql-do.html
For anything before that, you will have to write a function to achieve what you want.
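A minimal sketch of the DO-block approach on 9.0+, with made-up table and index names:
DO $$
BEGIN
    -- create the table only if it does not exist yet
    IF NOT EXISTS (SELECT 1 FROM pg_tables
                   WHERE schemaname = 'public' AND tablename = 'my_table') THEN
        CREATE TABLE public.my_table (id serial PRIMARY KEY, name text);
    END IF;

    -- same idea for an index
    IF NOT EXISTS (SELECT 1 FROM pg_indexes
                   WHERE schemaname = 'public' AND indexname = 'my_table_name_idx') THEN
        CREATE INDEX my_table_name_idx ON public.my_table (name);
    END IF;
END;
$$;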
Have you looked into pg_tables?
select * from pg_tables;
This will return (among other things) the schemas and tables that exist in the database. Without knowing more of what you're looking for, this seems like a reasonable place to start.
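For example, to filter out the system catalogs and list only user tables:
select schemaname, tablename
from pg_tables
where schemaname not in ('pg_catalog', 'information_schema');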

Is it possible to query data from tables by object_id?

I was wondering whether it is possible to query tables by specifying their object_id instead of table names in SELECT statements.
The reason for this is that some tables are created dynamically, so their structure (and names) are not known in advance, yet I would like to be able to write sprocs that can query these tables and work on their content.
I know I can create dynamic statements and execute them, but maybe there are better ways, and I would be grateful if someone could share how to approach this.
Thanks.
You have to query sys.columns and build a dynamic query based on that.
There are no better ways: SQL isn't designed for ad hoc or unknown structures.
In 20 years I've never worked on an application where I didn't know what my data looks like. Either your data is persisted, or, if it's transient, it should be in XML or JSON or some such.
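Assuming this is SQL Server (sys.columns and object_id suggest it), a minimal sketch of the dynamic-SQL approach, with the object id as a placeholder:
DECLARE @object_id int = 123456789;   -- placeholder: id of the dynamically created table
DECLARE @sql nvarchar(max);

-- resolve the table name from the catalog and build the query text;
-- sys.columns can be queried the same way if you need the column list
SET @sql = N'SELECT * FROM '
         + QUOTENAME(OBJECT_SCHEMA_NAME(@object_id)) + N'.'
         + QUOTENAME(OBJECT_NAME(@object_id));

EXEC sys.sp_executesql @sql;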