Is there a way to configure PostgreSQL to send records to a listener port?

I have a situation where I need to send each new record in a specific table to a TCP port on a server.
Is there a way to configure this in the postgresql.conf file? I am also open to any suggestion on how to send the data to the port from a trigger function.
Can anyone help me?

You can write a trigger in PL/Python that can do just about anything with the new row, but that doesn't sound like a good idea. For example, it would mean that the INSERT fails if the trigger function encounters an error, and it would make the inserting transaction last unduly long.
I think what you want is logical decoding with a plugin like wal2json.
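To sketch the decoding-plus-forwarding idea: wal2json emits each transaction as a JSON document, so a small consumer can pick out the inserts for one table and push them to a TCP listener. The snippet below follows wal2json's format-version-1 payload shape; the table name, port, and the way you obtain the payloads (e.g. a psycopg2 logical replication connection) are assumptions, indicated only in comments.

```python
import json
import socket

def extract_inserts(payload, table):
    """Pull INSERTed rows for one table out of a wal2json (format v1) message."""
    doc = json.loads(payload)
    rows = []
    for change in doc.get("change", []):
        if change["kind"] == "insert" and change["table"] == table:
            rows.append(dict(zip(change["columnnames"], change["columnvalues"])))
    return rows

def forward(rows, host="127.0.0.1", port=9000):
    """Send each row as one JSON line to a TCP listener (hypothetical endpoint)."""
    with socket.create_connection((host, port)) as sock:
        for row in rows:
            sock.sendall((json.dumps(row) + "\n").encode())

# In a real consumer you would stream payloads from a logical replication slot
# created with the wal2json plugin, and for each message call something like:
#   forward(extract_inserts(msg.payload, "my_table"))
```

The key point is that this runs outside the inserting transaction, so a slow or broken listener cannot make the INSERT fail.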

Related

Postgres AUTONOMOUS_TRANSACTION equivalent on the same DB

I'm currently working on a Spring Batch application that should insert some logs when a certain type of error happens. The problem is that if the batch job fails, it automatically rolls back everything it has done, which is perfect, but it also rolls back the error logs.
I need to achieve something similar to Oracle's AUTONOMOUS_TRANSACTION while using PostgreSQL (14).
I've seen dblink, and it seems to be the only thing close to an alternative, but I have found some problems:
I need to avoid the connection string, because the database host/port/name changes between environments. Is that possible? I need to persist the data in the same database, so technically I don't need to connect to any other database, just use the calling connection.
Is it possible to create a function/procedure that takes care of all of this, so that I only have to call it from the Java side? Maybe that way I could pass the connection data as a parameter, in case avoiding it entirely is not possible.
In a best case scenario I would be able to do something like:
dblink_exec(text sql);
That, without arguments, would use the same database where it is being executed.
The problem is that I need this to be done without specifying any connection data. This will be inside a function on the executing database, in the same schema. That function will be promoted from one environment to the next, and the code needs to be the same, so any hard-coded name/user/password must be avoided, since they change per environment; and since it runs in the same database and schema, they can technically be inferred.
Thanks in advance!
At the moment I haven't tried anything; I'm trying to get some information first.
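For reference, the usual workaround for the missing autonomous transaction is a second, independent connection whose commits survive the first connection's rollback — which is what dblink does under the hood, and what you can also do on the Java side with a dedicated DataSource just for logging. A minimal sketch of the pattern, using sqlite3 purely so it runs anywhere (with PostgreSQL you would open a second JDBC/psycopg2 connection instead; table names here are made up):

```python
import os
import sqlite3
import tempfile

# Two independent connections to one database file: "work" plays the batch
# transaction, "logger" plays the autonomous logging connection.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
work = sqlite3.connect(path)
logger = sqlite3.connect(path)

work.execute("CREATE TABLE data (x)")
work.execute("CREATE TABLE error_log (msg)")
work.commit()

# The logger commits on its own; the later rollback on "work" cannot undo it.
# (SQLite allows only one writer at a time, hence the log is written before
# the batch transaction opens; PostgreSQL connections have no such limit.)
logger.execute("INSERT INTO error_log VALUES ('step 1 failed')")
logger.commit()

work.execute("INSERT INTO data VALUES (1)")  # part of the failing batch
work.rollback()                              # batch fails and rolls back

data_rows = work.execute("SELECT count(*) FROM data").fetchone()[0]
log_rows = work.execute("SELECT count(*) FROM error_log").fetchone()[0]
print(data_rows, log_rows)  # 0 1
```

The log row survives even though the batch transaction rolled back, which is exactly the autonomous-transaction behaviour.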

Modbus TCP server with data from a SQL query using pymodbus

I need to build a SQL-to-Modbus TCP server. I looked at the https://pymodbus.readthedocs.io/en/latest/source/example/payload_server.html example, but I'm not sure I correctly understand how to organize the transfer of data from the database into the Modbus registers. The data changes dynamically, and every master must be able to receive it. I would be very grateful if you could tell me where in the code I should execute the SQL query, fetch the result set, and fill the server registers with it. Presumably this code should run in parallel with the main loop and use some synchronization?
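One common shape for this is a background thread that periodically runs the query and writes the results into the server's datastore, while the Modbus server itself just serves whatever is currently in the registers. The conversion of values into 16-bit register words is the part sketched runnably below; the pymodbus wiring (`context`, `setValues`, `StartTcpServer`) and the `fetch_rows` query function are indicated only in comments, and depending on your pymodbus version you may want a lock around datastore access:

```python
import struct
import time

def floats_to_registers(values):
    """Pack floats into 16-bit holding-register words, big-endian, two
    registers per float -- a common Modbus convention for real values."""
    regs = []
    for v in values:
        hi, lo = struct.unpack(">HH", struct.pack(">f", v))
        regs.extend([hi, lo])
    return regs

def registers_to_floats(regs):
    """Inverse of floats_to_registers, e.g. for checking on the master side."""
    return [struct.unpack(">f", struct.pack(">HH", hi, lo))[0]
            for hi, lo in zip(regs[::2], regs[1::2])]

def poll_database(context, fetch_rows, interval=1.0):
    """Sketch of the updater thread: periodically run the SQL query and write
    the result into the server datastore. `context` would be a pymodbus
    ModbusServerContext and `fetch_rows` your own DB-query function (both
    hypothetical names here)."""
    while True:
        regs = floats_to_registers(fetch_rows())
        # function code 3 = holding registers, starting at address 0
        context[0].setValues(3, 0, regs)
        time.sleep(interval)

# Start the updater with threading.Thread(target=poll_database,
# args=(context, fetch_rows), daemon=True).start() before StartTcpServer(...).
```

Every connected master then reads the same registers, so no per-master handling is needed; the only shared state is the datastore the updater writes to.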

Why is my CTP not getting any data for one of the tables I am subscribing to?

I have a few CTPs subscribing to a TP.
The subscription is established with no problems, but data doesn't seem to hit one of my CTPs.
I have 2 CTPs subscribing to the same table: one is getting data, the other isn't.
I checked .u.w and I can see the handles being open for the said table, but when I check the upd on my CTP, it receives all other tables except this one.
The upd on my CTP is a simple insert. I cannot see any data at all for the table; the t parameter is never set to the name of the table I am interested in. I don't know what else to check; any suggestions would be greatly appreciated. The pub logic is the default pub logic.
There are no errors in the TP.
UPDATE 1: I can send other messages and I receive data from the TP for other tables. The issue doesn't seem to appear in DR, just prod, and I cannot debug much in prod.
Without seeing more of your code it's hard to give a good answer.
A couple of things you could try:
Check if you can send a generic message (e.g. h(+;1;2)) from your TP to the CTP via the handle in .u.w; this will make sure the connection is OK.
If you can send a message, then you can check whether the issue is in your CTP. You can see exactly what is being sent by adding some logging to your upd function, or, if you think the message isn't getting that far, to your .z.ps message handler, e.g. .z.ps:{0N!x;value x} will perform some very basic client-side logging.
If you can't send a message down the handle in the TP, then it's possible there are other network issues at play (although I would expect you to be seeing errors in your TP if that were the case). You could check .z.W in your CTP in this case to see whether the corresponding handle for the TP is present there.
You can also send a test update to your tickerplant and add logging along each step of the way if you really want to see the chain of events, but this could be quite invasive.

Processing a row externally that fired a trigger

I'm working on a PostgreSQL 9.3 database on an Ubuntu 14 server.
I am trying to write a trigger function (AFTER EACH ROW) that launches an external process, which needs to access the row that fired the trigger.
My problem:
Even though I can run queries on the table, including the new row, inside the trigger, the external process does not see it (while the trigger function is still running).
Is there a way to manage that?
I thought about starting some kind of asynchronous function call to give the trigger time to terminate first, but that is of course really ugly.
Also, I read about notifications and listeners, but that would require some refactoring of my existing code plus an additional listener, which is what I tried to avoid with my trigger. (I'm also afraid of new problems that may occur down this road.)
Any more thoughts?
Robin

Is there a Perl POE module for monitoring a database table for changes?

Is there any Wheel/POCO/option to do this in Perl using the POE module:
I want to monitor a DB table for changed records (delete/insert/update) and react accordingly to those changes.
If yes could one provide some code or a link that shows this?
Not that I'm aware of, but if you were really industrious you could write one. I can think of two ways to do it.
Better one first: get access to a transaction log / replication feed, e.g. the MySQL binlog. Write a POE::Filter for its format, then use POE::Wheel::FollowTail to get a stream of events, one for each statement that affects the DB. Then you can filter the data to find what you're interested in.
Not-so-good idea: use EasyDBI to run periodic selects against the table and see what changed. If your data is small it could work (but it's still prone to timing issues); if your data is big this will be a miserable failure.
If you were using PostgreSQL, you could create a trigger on your table's changes that calls NOTIFY, and in your client app open a connection and execute LISTEN for the same notification(s). You can then have POE listen for file events on the DBD::Pg pg_socket file descriptor.
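The polling idea boils down to diffing two snapshots of the table, keyed by primary key. A minimal sketch of that diff step, in Python for brevity (the Perl version via EasyDBI would be structurally the same; the snapshot shape is an assumption):

```python
def diff_snapshots(old, new):
    """Compare two {primary_key: row} snapshots of a table and report which
    keys were inserted, deleted, or updated between polls."""
    inserted = set(new) - set(old)
    deleted = set(old) - set(new)
    updated = {k for k in set(old) & set(new) if old[k] != new[k]}
    return inserted, deleted, updated
```

Between polls you would run the select, build the new snapshot, diff it against the previous one, and fire one event per changed key — with the timing caveat that changes occurring between two polls of the same row can be missed.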
Alternatively you could create a SQL trigger that caused another file or network event to be triggered (write to a file, named pipe or socket) and let POE listen on that.
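As a language-agnostic illustration of that last idea (trigger appends a line to a file, an event loop follows the file), here is a minimal tail-follow reader in Python; POE::Wheel::FollowTail does the equivalent on the Perl side, firing one event per appended line. The file name and polling interval are arbitrary:

```python
import os
import time

def follow(path, poll=0.2, from_start=False):
    """Generator yielding each complete line appended to `path`, tail -f style.
    With from_start=True, existing lines are yielded too."""
    with open(path, "r") as f:
        if not from_start:
            f.seek(0, os.SEEK_END)    # only care about lines written from now on
        while True:
            line = f.readline()
            if line.endswith("\n"):
                yield line.rstrip("\n")
            else:
                time.sleep(poll)      # nothing new yet; wait and retry
                f.seek(f.tell())      # clear the EOF condition before re-reading
```

Each yielded line then becomes one "table changed" event for the consumer, with the trigger deciding what to write (e.g. operation and primary key).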