Does pg-promise automatically close connections?

Does pg-promise automatically close connections, without me needing to explicitly call client.end() when I'm done running/debugging code?
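For context, pg-promise's documented model is a connection pool: each query acquires a connection and releases it back automatically, and pgp.end() is only needed to shut the whole pool down (e.g. so a short-lived script can exit). A minimal sketch, with a placeholder connection string:

// a minimal sketch; the connection string is a placeholder
const pgp = require('pg-promise')();
const db = pgp('postgres://user:password@localhost:5432/mydb');

db.any('SELECT id, name FROM users')
    .then(rows => {
        // the connection used by this query has already been released
        // back to the pool automatically; no per-query end() call
        console.log(rows);
    })
    .catch(console.error)
    .then(() => pgp.end()); // shuts the pool down so the process can exit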

Related

Postgres AUTONOMOUS_TRANSACTION equivalent on the same DB

I'm currently working on a Spring Batch application that should insert some logs when a certain type of error happens. The problem is that if the batch job fails, it automatically rolls back everything done, and that's perfect, but it also rolls back the error logs.
I need to achieve something similar to Oracle's AUTONOMOUS_TRANSACTION while using PostgreSQL (14).
I've seen DBLINK, and it seems to be the only thing close to an alternative, but I have run into some problems:
I need to avoid the connection string, because the database host/port/name changes between environments. Is that possible? I need to persist the data in the same database, so technically I don't need to connect to any other database, just reuse the calling connection.
Is it possible to create a function/procedure that takes care of all of this, so that I only have to call it from the Java side? Maybe that way I could pass the connection data as a parameter, in case it is not possible to avoid it entirely.
In a best case scenario I would be able to do something like:
dblink_exec(text sql);
That is, a call that, without connection arguments, defaults to the same database where it is being executed.
The problem is that this needs to work without specifying any connection data. The call will live inside a function on the executing database, in the same schema. That function will move from one environment to the next, and the code must stay identical, so any hard-coded name/user/password must be avoided, since they change per environment. And since everything runs in the same database and schema, they can technically be inferred.
Thanks in advance!
At the moment I haven't tried anything; I'm trying to gather some information first.
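For reference, the usual workaround is to compute the dblink connection string at runtime, e.g. with current_database(), so nothing environment-specific is hard-coded; whether user/password can be omitted depends on pg_hba.conf (peer/trust) or a .pgpass file. A minimal sketch, where log_autonomous and error_log are made-up names:

-- a minimal sketch; requires CREATE EXTENSION dblink;
-- error_log(message) is a hypothetical logging table
CREATE OR REPLACE FUNCTION log_autonomous(p_message text)
RETURNS void AS $$
BEGIN
    -- current_database() avoids hard-coding the database name;
    -- the separate dblink session commits independently of the
    -- calling transaction, so the log survives a rollback
    PERFORM dblink_exec(
        'dbname=' || current_database(),
        format('INSERT INTO error_log(message) VALUES (%L)', p_message)
    );
END;
$$ LANGUAGE plpgsql;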

Await statement execution completion in Slick

In my tests, I've got some database actions that aren't exposed as Futures at the test level. Sometimes, my tests run fast enough that close() in my cleanup happens before those database actions complete, and then I get ugly errors. Is there a way to detect how many statements are in-flight or otherwise hold off close()?
When you execute a query, you get a Future[A], where A is the result type of the query.
You can compose all your queries using Future.sequence() to get a single future, composedFuture, which completes once all of your queries have returned their results.
You can then use composedFuture.map(_ => close()) to make sure all queries have finished executing before you close the resource.
The best option is to expose the actions as futures and then compose them.
Otherwise you can put in a Thread.sleep(someSensibleTime) and hope your futures complete within someSensibleTime, but this will make your tests slow and error-prone.
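A minimal sketch of the Future.sequence approach; insertUser, insertOrder and close() are stand-ins for the real test actions:

// a minimal sketch with hypothetical stand-ins for the DB actions
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

object CleanupSpec extends App {
  def insertUser: Future[Int]  = Future(1) // hypothetical DB action
  def insertOrder: Future[Int] = Future(1) // hypothetical DB action
  def close(): Unit = println("closed")    // hypothetical resource cleanup

  val composedFuture: Future[Seq[Int]] =
    Future.sequence(Seq(insertUser, insertOrder))

  // close() runs only once every action has completed
  val done: Future[Unit] = composedFuture.map(_ => close())
  Await.result(done, 10.seconds) // block the test until cleanup finishes
}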
I think this may be database-dependent rather than Slick-dependent.
For example, MySQL lets you see currently running queries with SHOW PROCESSLIST, and you can act accordingly.
If that's not an option, I suppose you could poll the DB to observe a chosen side effect, and call close() afterwards?

Processing a row externally that fired a trigger

I'm working on a PostgreSQL 9.3-database on an Ubuntu 14 server.
I'm trying to write a trigger function (AFTER, FOR EACH ROW) that launches an external process that needs to access the row that fired the trigger.
My problem:
Even though I can run queries on the table, including the new row, inside the trigger, the external process does not see the new row (while the trigger function is still running).
Is there a way to manage that?
I thought about starting some kind of asynchronous function call to give the trigger some time to terminate first, but that's of course really ugly.
Also, I read about notify/listen, but that would require some refactoring of my existing code plus an additional listener, which is what I was trying to avoid with my trigger. (I'm also afraid of new problems that may occur down this road.)
Any more thoughts?
Robin
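For what it's worth, the underlying issue is visibility: the trigger runs inside the inserting transaction, so queries inside it see the new row, but the external process connects as a separate session and cannot see the row until that transaction commits. That is also why the NOTIFY route works: the notification is only delivered after commit, exactly when the row becomes visible. A minimal sketch of that route, where row_added and my_table are made-up names:

-- a minimal sketch of the LISTEN/NOTIFY approach (PostgreSQL 9.3 syntax)
CREATE OR REPLACE FUNCTION notify_row() RETURNS trigger AS $$
BEGIN
    -- delivered only after COMMIT, i.e. exactly when the new row
    -- becomes visible to other sessions
    PERFORM pg_notify('row_added', NEW.id::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER my_table_notify
AFTER INSERT ON my_table
FOR EACH ROW EXECUTE PROCEDURE notify_row();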

Activating Autocommit in managed transaction

In an application with a managed context (Play!, EclipseLink) I have a method which uses JPA.withTransaction but must not roll back. It involves external communication, XML marshalling and unmarshalling, and so on, so various exceptions may occur.
The normal behaviour of JPA.withTransaction is to roll back the current transaction on (most) exceptions.
If such an exception is thrown after external resources have been touched, the database must keep the current step to enable a continue/cleanup afterwards.
I did not find a way to achieve autocommit or to disable rollback. I have read that just catching the exception would not do the trick, since the transaction is already marked for rollback.
So what is a correct way to disable rollback and commit every query as soon as possible? I do not want to disturb the rest of the application, so I would like to avoid calling
JPA.em().getTransaction().commit();
JPA.em().getTransaction().begin();
after every write.
How can I simply keep the written data?
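One generic JPA pattern (not Play-specific; StepRecorder and StepLog are made-up names) is to run each durable step in its own short transaction on a separate EntityManager, so it commits immediately and an outer rollback cannot undo it. A minimal sketch, assuming you can get hold of the EntityManagerFactory:

// a minimal sketch; StepLog is a hypothetical entity
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;

public final class StepRecorder {
    private final EntityManagerFactory emf;

    public StepRecorder(EntityManagerFactory emf) {
        this.emf = emf;
    }

    public void record(String step) {
        // independent of JPA.withTransaction and its rollback marker
        EntityManager em = emf.createEntityManager();
        try {
            em.getTransaction().begin();
            em.persist(new StepLog(step)); // hypothetical entity
            em.getTransaction().commit();  // committed immediately,
                                           // survives the outer rollback
        } finally {
            if (em.getTransaction().isActive()) {
                em.getTransaction().rollback();
            }
            em.close();
        }
    }
}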

Is there a Perl POE module for monitoring a database table for changes?

Is there any Wheel/PoCo/option to do this in Perl using the POE module:
I want to monitor a DB table for changed records (deletes/inserts/updates) and react accordingly to those changes.
If so, could someone provide some code or a link that shows this?
Not that I'm aware of, but if you were really industrious you could write one. I can think of two ways to do it.
Better one first: get access to a transaction log / replication feed, e.g. the MySQL binlog. Write a POE::Filter for its format, then use POE::Wheel::FollowTail to get a stream of events, one for each statement that affects the DB. Then you can filter the data to find what you're interested in.
Not-so-good idea: use EasyDBI to run periodic SELECTs against the table and see what changed. If your data is small, it could work (though it's still prone to timing issues); if your data is big, this will be a miserable failure.
If you were using PostgreSQL, you could create a trigger on your table's changes that called NOTIFY and in your client app open a connection and execute a LISTEN for the same notification(s). You can then have POE listen for file events on the DBD::Pg pg_socket file descriptor.
Alternatively you could create a SQL trigger that caused another file or network event to be triggered (write to a file, named pipe or socket) and let POE listen on that.
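To flesh out the PostgreSQL route, here is a minimal non-POE sketch of the LISTEN side with DBD::Pg; in a real POE app you would watch $dbh->{pg_socket} for readability instead of sleeping. The channel name row_added and the connection details are placeholders:

# a minimal sketch; 'row_added' must match the channel used by
# NOTIFY/pg_notify in the trigger
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'user', 'pass',
                       { RaiseError => 1, AutoCommit => 1 });
$dbh->do('LISTEN row_added');

while (1) {
    # pg_notifies returns [channel, sender_pid, payload] or undef
    while (my $notify = $dbh->pg_notifies) {
        my ($channel, $pid, $payload) = @$notify;
        print "change on $channel: $payload\n";
    }
    sleep 1;
}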