Postgres AUTONOMOUS_TRANSACTION equivalent on the same DB - postgresql

I'm currently working on a Spring Batch application that should insert some logs in case a certain type of error happens. The problem is that if the batch job fails, it automatically rolls back everything done, and that's perfect, but it also rolls back the error logs.
I need to achieve something similar to the AUTONOMOUS_TRANSACTION of Oracle while using PostgreSQL (14).
I've looked at dblink, and it seems to be the only thing close to an alternative, but I have found some problems:
I need to avoid the connection string because the database host/port/name changes across environments. Is that possible? I need to persist the data in the same database, so technically I don't need to connect to any other database, just reuse the calling connection.
Is it possible to create a function/procedure that takes care of all of this so that I only have to call it from the Java side? Maybe that way I could pass the connection data as a parameter, in case it is not possible to avoid it entirely.
In a best case scenario I would be able to do something like:
dblink_exec(text sql);
which, without connection arguments, would target the same database where it is being executed.
The problem is that I need this to be done without specifying any connection data. This will live inside a function on the executing database, in the same schema. That function will be promoted from one environment to the next, and the code needs to be identical, so any host/user/password must be avoided since they change per environment. And since it all happens in the same database and schema, they can technically be inferred.
Thanks in advance!
At the moment I haven't tried anything; I'm trying to get some information first.
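For what it's worth, dblink can build a loopback connection string at runtime from the server's own settings (current_database(), the port GUC, current_user), so nothing environment-specific has to be hardcoded. A minimal sketch, assuming the dblink extension is available; error_log and log_autonomous are hypothetical names, and pg_hba.conf must allow the loopback connection (as a non-superuser you typically also have to supply a password, e.g. via a .pgpass file on the server or a foreign server plus user mapping):

-- Hypothetical autonomous logging function: the INSERT runs on a second,
-- loopback connection and commits there, so it survives the caller's rollback.
CREATE EXTENSION IF NOT EXISTS dblink;

CREATE OR REPLACE FUNCTION log_autonomous(p_message text)
RETURNS void
LANGUAGE plpgsql
AS $$
DECLARE
    -- Built from the server's own settings: nothing hardcoded per environment.
    v_conn text := format('dbname=%s port=%s user=%s',
                          current_database(),
                          current_setting('port'),
                          current_user);
BEGIN
    PERFORM dblink_exec(
        v_conn,
        format('INSERT INTO error_log(message, logged_at) VALUES (%L, now())',
               p_message));
END;
$$;

From the Java side you would then just execute SELECT log_autonomous(?), and the inserted row persists even if the surrounding Spring Batch transaction rolls back.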

Related

How to create warning message in trigger?

Is it possible to create a warning message in a trigger in Firebird 2.5?
I know I can create an exception message which will stop the user from saving the record changes, but in this instance I don't mind if the user continues.
Could I call a procedure that generates the message?
There is no mechanism in Firebird to produce warnings in PSQL code; you can only raise exceptions, which in triggers will cause the effect of the statement that fired the trigger to be undone.
In short, this is not possible.
There are workarounds possible, but those would require 'external' protocols, for example inserting the warning message into a global temporary table and requiring the calling code to explicitly select from that temporary table after execution.
The SQL model does not provide a way to put a query on pause and then wait for extra input from the client to either unfreeze it or fail it. SQL is not a user-interactive service and there are no confirmation dialogs. You have to rethink your application design.
One possible avenue, nominally staying within the 2-tier client-server framework, would be creating temporary tables for all the data you want to save (for example, transaction-scoped GTTs) and then having TWO stored procedures. One SP would do the sanity checking and return a list of warnings, if any. The other SP would then dump the data from the GTTs into the main, persistent tables without doing those checks.
Your client app would select warnings from the check-SP first; if it returns any, show them to the user, and then either call the save-SP and commit, or roll back without calling the save-SP.
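A minimal sketch of that shape in Firebird syntax, with hypothetical table/procedure names (the GTT is transaction-scoped, so staged rows disappear on commit or rollback):

-- Staging table for the data to be saved.
CREATE GLOBAL TEMPORARY TABLE gtt_orders (
    id     INTEGER NOT NULL,
    amount NUMERIC(15,2)
) ON COMMIT DELETE ROWS;

SET TERM ^ ;

-- Check-SP: returns one row per warning and modifies nothing.
CREATE PROCEDURE sp_check_orders
RETURNS (warning_msg VARCHAR(200))
AS
BEGIN
    FOR SELECT 'Suspicious amount on order ' || CAST(id AS VARCHAR(12))
        FROM gtt_orders
        WHERE amount > 10000
        INTO :warning_msg
    DO
        SUSPEND;
END^

-- Save-SP: dumps the staged rows into the persistent table, no checks.
CREATE PROCEDURE sp_save_orders
AS
BEGIN
    INSERT INTO orders (id, amount)
        SELECT id, amount FROM gtt_orders;
END^

SET TERM ; ^

The client fills gtt_orders, runs SELECT warning_msg FROM sp_check_orders, shows any rows to the user, and then either runs EXECUTE PROCEDURE sp_save_orders and commits, or rolls back.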
This is abusing the C/S idea, so there would be dragons. First of all, you would have to have several GTTs and two SPs for E-V-E-R-Y pausable data-saving operation in your app. And that can be a lot.
Also notice that the database data may change after you call the check-SP and before you call the save-SP, because some OTHER application running elsewhere could be changing and committing data during that pause. This is especially likely if your transaction is of the READ COMMITTED kind, but it can happen with SNAPSHOT transactions too.
A better approach would be to drop the C/S scheme and go to a 3-tier model, AKA multi-tier, AKA "Application Server". That way your client app sends the "briefcase" of data to the app server; it would be the app server (not SQL triggers) doing all the data validation, and then it would save the data to the storage backend, SQL or any other.
There would, of course, still be the problem that data could have been changed by other users while you paused one user and waited for them to read and decide. But you would have more flexibility for data reconciliation in an app server than you would with plain SQL.

Best way to track the progress of a long-running function (from outside) - PostgreSQL 11?

What is the best way to track the progress of a long-running function in PostgreSQL 11?
Since every function executes in a single transaction, even if the function writes to some "log" table, no other session/transaction can see this output unless the function completes successfully.
I read about some attempts here but they are from 2010.
https://www.endpointdev.com/blog/2010/04/viewing-postgres-function-progress-from/
Also, this approach looks terribly inconvenient.
As of today what is the best way to track progress?
One approach that I know of is to turn the function into a procedure and then do partial commits in the procedure. But what if I want to return a result set from the function? In that case I cannot turn it into a procedure, right? So how do I proceed in that case?
Many thanks in advance.
NOTE: The function is written in PL/pgSQL, the most common procedural SQL language available in PostgreSQL.
I don't know that there's a great way to do it built into Postgres yet, but there are a couple of ways to achieve logging that will be visible outside of a function.
You can use the pg_background extension to run an insert in the background that will be visible outside of the function. This requires compiling and installing the extension.
Use dblink to connect to the same database and insert data. This will most likely require setting up some permissions.
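A minimal sketch of the first option, assuming pg_background is installed (log_progress and progress_log are hypothetical names; the dblink variant would look much like the loopback sketch in the first question above). pg_background_launch runs the statement in a background worker with its own transaction, so the row is visible to other sessions immediately:

-- Hypothetical helper callable from inside a long-running function.
CREATE OR REPLACE FUNCTION log_progress(p_step text)
RETURNS void
LANGUAGE plpgsql
AS $$
DECLARE
    v_pid integer;
BEGIN
    v_pid := pg_background_launch(
        format('INSERT INTO progress_log(step, logged_at) VALUES (%L, now())',
               p_step));
    PERFORM pg_background_detach(v_pid);  -- fire and forget
END;
$$;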
Neither option is ideal, but hopefully one of them works for you. Converting your function to a procedure may also work, but you won't be able to call the procedure from within a transaction.
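For completeness, the procedure route mentioned above could look roughly like this (hypothetical names; in PostgreSQL 11 a procedure may COMMIT as long as CALL is not executed inside an outer transaction block):

-- Each COMMIT ends the current transaction, so earlier progress rows
-- become visible to other sessions while the work continues.
CREATE OR REPLACE PROCEDURE process_rows()
LANGUAGE plpgsql
AS $$
DECLARE
    i integer;
BEGIN
    FOR i IN 1..10 LOOP
        -- ... do one chunk of the real work here ...
        INSERT INTO progress_log(step, logged_at)
            VALUES (format('chunk %s of 10', i), now());
        COMMIT;
    END LOOP;
END;
$$;

-- CALL process_rows();   -- e.g. from psql, with autocommit on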

Cached plan must not change result type

Our service team sometimes gets the error Cached plan must not change result type when I modify the length of a column or add a new column to a table.
I tried solutions mentioned on Stack Overflow like Postgres: "ERROR: cached plan must not change result type"
I have tried autosave=conservative to resolve this issue, but I am still able to reproduce it. I used the JDBC connection string below:
jdbc-url: jdbc:postgresql://172.16.244.10:5432/testdb?autosave=conservative
Why is this property not working in my case?
Also, I tested with prepareThreshold=0 and it works fine. But I think it will impact performance because it will never use server-side prepared statements.
I just want to know the best solution to avoid this error.

Finding all input parameters and the queries corresponding to those input parameters

I have a PostgreSQL DB on my PC and I'm trying to connect different database applications to PostgreSQL. But before that (a research issue), for each application I need to see all the input parameters and all the queries corresponding to those input parameters that the application can issue.
How?
Look in the code of every application and see what calls are being made. In addition, figure out all the parameter values that can be sent, based on an almost infinite combination of characters and numbers the user can select from.
Or, to remain sane, turn on PostgreSQL statement logging, let the users do their thing, and analyse what calls are being made.
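For example, assuming superuser access, statement logging can be toggled without a restart; bind parameter values of prepared statements then show up as DETAIL lines in the server log:

-- Verbose: log every statement the applications send (research use only).
ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();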

How do you specify a local database instance in TSQL with the USE keyword?

I have several database names which exist on local, dev and live servers.
I want to ensure a potentially dangerous T-SQL script will always use the local db and not any other db by accident.
I can't seem to use the [USE] keyword with the local instance name followed by the db name.
It seems pretty trivial but I can't seem to get it to work.
I've tried this but no luck:
USE [MYMACHINE/SQLEXPRESS].[DBNAME]
The instance is going to be determined through your connection/connection string. You connect to a specific instance and then all subsequent T-SQL will be executed against that instance and that instance alone.
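One thing you can do instead is guard the script so it refuses to run anywhere but the local instance, for example (MYMACHINE\SQLEXPRESS and DBNAME are placeholders; RAISERROR with severity 20 terminates the connection but requires sysadmin and WITH LOG; a gentler variant is severity 16 combined with SET NOEXEC ON):

-- Abort unless we are connected to the expected local instance.
IF @@SERVERNAME <> N'MYMACHINE\SQLEXPRESS'
    RAISERROR(N'Not the local instance - aborting.', 20, 1) WITH LOG;
GO

USE [DBNAME];
GO
-- ...potentially dangerous statements follow...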
The current answer is not correct for the question asked, as you can specify a specific LocalDB file via the USE command in T-SQL. You just have to specify the fully qualified path, which is also what you will see in the dropdown for the database list.
USE [C:\MyPath\MyData.mdf]
GO