Deleting notifications from postgres message queue

I have a table where, for each insert, I have to notify a channel with a JSON payload. I am using pg_notify in a trigger function to do that. Now I have to manage that message queue so that when its size reaches 80% it deletes older messages until it is back down to 50%. I have searched online but haven't found an answer. Can anyone help?
I am using the trigger function below for notifying.
CREATE OR REPLACE FUNCTION action_notify() RETURNS TRIGGER AS $BODY$
DECLARE
    action_json TEXT;
BEGIN
    RAISE DEBUG 'begin: action_notify';
    -- Build the JSON by hand; note the comma needed between the two fields.
    action_json := '{"action": "' || NEW.action || '", ' ||
                   '"action_params": "' || NEW.action_params || '"}';
    PERFORM pg_notify(TG_TABLE_NAME, action_json);
    RAISE DEBUG 'end: action_notify';
    RETURN NULL;
END;
$BODY$ LANGUAGE plpgsql;
It would be a great help if someone could guide me on how to manage this message queue. I am not using an external message queue like RabbitMQ, just managing it from Postgres. What is the best way to implement this?
Using PostgreSQL 9.3.

Short version: you can't do that with LISTEN and NOTIFY. It's a reliable-delivery queue, and there's no facility for purging old entries.
Instead of passing the message as a notification payload, you should probably insert it into a table, then just NOTIFY that the table has changed. You can manage the size of the table with periodic jobs to truncate old entries.
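A minimal sketch of that pattern, assuming a hypothetical message_queue table and trim threshold (the names and the 50000-row cutoff are illustrative, not prescribed):
-- Hypothetical queue table; the payload lives here, not in the notification.
CREATE TABLE message_queue (
    id      bigserial PRIMARY KEY,
    payload json NOT NULL,
    created timestamptz NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION action_notify() RETURNS trigger AS $BODY$
BEGIN
    INSERT INTO message_queue (payload) VALUES (row_to_json(NEW));
    -- The notification is now just a wake-up call; consumers read the table.
    PERFORM pg_notify(TG_TABLE_NAME, '');
    RETURN NULL;
END;
$BODY$ LANGUAGE plpgsql;

-- Periodic job (cron/pgAgent): trim the queue back to the newest 50000 rows.
DELETE FROM message_queue
WHERE id < (SELECT max(id) FROM message_queue) - 50000;
Consumers LISTEN on the channel and, when woken, SELECT the rows they haven't processed yet; the question's 80%/50% watermarks then become a matter of tuning the DELETE condition.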

Related

Using row data in pg_notify trigger as channel name?

Is it possible to use data from the row a trigger is firing on as the channel of a pg_notify, like this:
CREATE OR REPLACE FUNCTION notify_pricesinserted()
RETURNS trigger AS $$
BEGIN
    PERFORM pg_notify(
        NEW.my_label,
        row_to_json(NEW)::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER notify_pricesinserted
AFTER INSERT ON prices
FOR EACH ROW
EXECUTE PROCEDURE notify_pricesinserted();
EDIT: I found out the reason it was not working is the case of my label: an unquoted channel name in LISTEN is folded to lowercase, while pg_notify() uses its string argument verbatim. If I replace it with lower(NEW.my_label), and do the same for the listener, then it works.
The pg_notify() part would work without throwing an error; PostgreSQL places very few restrictions on what a channel name can be. In practice, though, it is probably useless: to pick up the payload, some session outside the trigger function must already have issued LISTEN some_channel, and doing that for a dynamic channel name is difficult in most situations and inefficient in all of them.
If, in your trigger, NEW.my_label has a small number of well-defined values, you might make it work by establishing listening channels on all possible values. But you are probably better off defining a single channel identifier for your table, or perhaps for this specific trigger, and constructing the payload message so that the appropriate information is easy to extract. If you cannot predict the values of NEW.my_label, it is plainly impossible.
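As a side note on the question's EDIT: the case problem is identifier folding. A small illustration, using a hypothetical mixed-case label:
LISTEN MyLabel;       -- unquoted identifier: actually listens on channel 'mylabel'
LISTEN "MyLabel";     -- quoted identifier: listens on 'MyLabel' exactly
SELECT pg_notify('MyLabel', 'test');  -- reaches only the quoted LISTEN above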
In your specific case you could use a single channel name 'prices' and then do something like:
PERFORM pg_notify('prices', format('%s: %s', NEW.my_label, row_to_json(NEW)::text));
The session with LISTEN prices will receive:
Asynchronous notification "prices" with payload "some_label: {new_row_to_json}" received from server process with PID 8448.
That is a rather verbose response (the "Asynchronous notification ... with payload ..." wrapper and the PID are how psql displays it; client libraries expose the channel, payload and sending PID as separate fields), but you can easily extract the relevant parts and work with them. Since you would have to manipulate the string anyway, stripping the label prefix in the same pass is not a big burden, and keeping everything on a single channel makes management of the trigger actions far easier.
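An alternative sketch, if you are on PostgreSQL 9.4 or later (json_build_object does not exist before 9.4): make the payload itself JSON with the label as a field, so no prefix-stripping is needed:
PERFORM pg_notify(
    'prices',
    json_build_object(
        'label', NEW.my_label,
        'row',   row_to_json(NEW)
    )::text);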

How can a Java client app "catch" (via JDBC) the result produced by a trigger procedure query?

I'm trying to understand how a Java client application that communicates, through JDBC, with a PostgreSQL database (server) can "catch" the result produced by a query that is fired (by a trigger) whenever a record is inserted into a table.
So, to clarify: via JDBC I install a trigger procedure that executes a query whenever a record is inserted into a given table, and this query's execution produces an output (wrapped in a ResultSet, I suppose). My problem is that I have no idea how the client can become aware of those results, which are produced asynchronously.
I wonder if JDBC supports any "callback" mechanism able to catch the results produced by a query that is fired by a trigger procedure on an "INSERT INTO table" condition. And if there is no such mechanism, what is the best approach to achieve this?
Thank you in advance :)
Triggers can't return a resultset.
There's no way to send such a result to the JDBC driver.
There are a few dirty hacks you can use to get results from a trigger to the client, but they're all exactly that. Things like:
DECLARE a cursor for the resultset, then send the cursor name as a NOTIFY payload, so the app can FETCH ALL FROM <cursorname>;
Create a TEMPORARY table and report the name via NOTIFY
It is more typical to append anything the trigger needs to communicate to the app to a table that exists for that purpose, and to have the app SELECT from it after the operation that fired the trigger has run (sketched below).
In most cases if you need to do this, you're probably using a trigger where a regular function is a better fit.
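A minimal sketch of that side-table pattern; the table and function names here are hypothetical:
-- Side table the trigger appends its "results" to.
CREATE TABLE trigger_results (
    id         bigserial PRIMARY KEY,
    source     text NOT NULL,
    detail     json NOT NULL,
    created_at timestamptz NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION record_trigger_result() RETURNS trigger AS $$
BEGIN
    INSERT INTO trigger_results (source, detail)
    VALUES (TG_TABLE_NAME, row_to_json(NEW));
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
The Java client then runs an ordinary SELECT over JDBC after its INSERT, e.g. SELECT * FROM trigger_results WHERE id > ?, remembering the last id it has seen.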

Service Broker tutorial - left with conversations on the Target database

I'm trying to learn SQL Service Broker (SSB), and I'm starting by going through some simple tutorials on MSDN. I'm looking at "Completing a Conversation Between Databases". Request and Reply Messages get set up on an Initiator and a Target database, there's a Contract on both dbs using these messages, a Service on the Initiator db that uses a Queue, and a Service on the Target that uses a Queue and the Contract.
The Initiator sends a message to the Target Service, beginning a dialog. The Target picks up this message and sends a reply (and calls END CONVERSATION), and finally the Initiator picks up the reply and also calls END CONVERSATION.
If I now do SELECT * FROM sys.conversation_endpoints on the Initiator, no rows are returned. However there is a row returned on the Target database; the conversation is in a CLOSED state.
Is this correct (i.e. should the Target db still be storing this conversation)? If not, how do I get rid of the conversation on the Target db? If it is correct, when do these conversations disappear?
This is the code for the Target db picking up the request and sending the reply:
DECLARE @RecvReqDlgHandle UNIQUEIDENTIFIER;
DECLARE @RecvReqMsg NVARCHAR(100);
DECLARE @RecvReqMsgName sysname;

BEGIN TRANSACTION;

WAITFOR
( RECEIVE TOP(1)
    @RecvReqDlgHandle = conversation_handle,
    @RecvReqMsg = message_body,
    @RecvReqMsgName = message_type_name
  FROM TargetQueue2DB
), TIMEOUT 1000;

SELECT @RecvReqMsg AS ReceivedRequestMsg;

IF @RecvReqMsgName = N'//BothDB/2DBSample/RequestMessage'
BEGIN
    DECLARE @ReplyMsg NVARCHAR(100);
    SELECT @ReplyMsg = N'<ReplyMsg>Message for Initiator service.</ReplyMsg>';

    SEND ON CONVERSATION @RecvReqDlgHandle
        MESSAGE TYPE [//BothDB/2DBSample/ReplyMessage] (@ReplyMsg);

    END CONVERSATION @RecvReqDlgHandle;
END

COMMIT TRANSACTION;
GO
The target endpoints are removed with a 30-minute delay in order to prevent replay attacks. Is sys.conversation_endpoints.security_timestamp set on the CLOSED endpoints?
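To check, a query along these lines should show the closed endpoints and their timestamps (state 'CD' marks a closed conversation):
SELECT conversation_handle, state_desc, security_timestamp
FROM sys.conversation_endpoints
WHERE state = 'CD';  -- CD = closed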
Further clarification in addition to Remus: if the conversation cannot complete normally, it can still show up in sys.conversation_endpoints beyond the security_timestamp. In such a case you can use the unique identifier sys.conversation_endpoints.conversation_handle (e.g. 94798AB6-DF37-E211-BF0F-002215A14A37) to remove the conversation manually:
END CONVERSATION '94798AB6-DF37-E211-BF0F-002215A14A37' WITH CLEANUP;
WITH CLEANUP is important; see MSDN: END CONVERSATION (Transact-SQL).
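For bulk cleanup, a sketch of a cursor loop over the closed endpoints; use it only for conversations that are genuinely stuck, since WITH CLEANUP skips the normal end-of-conversation protocol:
DECLARE @handle UNIQUEIDENTIFIER;
DECLARE closed_conversations CURSOR FOR
    SELECT conversation_handle
    FROM sys.conversation_endpoints
    WHERE state = 'CD';  -- CD = closed
OPEN closed_conversations;
FETCH NEXT FROM closed_conversations INTO @handle;
WHILE @@FETCH_STATUS = 0
BEGIN
    END CONVERSATION @handle WITH CLEANUP;
    FETCH NEXT FROM closed_conversations INTO @handle;
END
CLOSE closed_conversations;
DEALLOCATE closed_conversations;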

LISTEN query timeout with node-postgres?

I have an "article" table on a Postgresql 9.1 database and a trigger that notifies a channel on each insert.
I'd like to create a node.js script that catches those inserts and pushes notifications to connected clients using Socket.io. So far I'm using the node-postgres module to LISTEN to the channel but it seems the LISTEN query times out after about 10-15 seconds and stops catching the inserts. I could query a new listen when the timeout happens, but I'm not sure how to properly implement the continuation.
Here's my postgresql notification procedure:
CREATE FUNCTION article_insert_notify() RETURNS trigger AS $$
BEGIN
    NOTIFY "article_watcher";
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;
The trigger:
CREATE TRIGGER article_insert_trigger
AFTER INSERT ON article
FOR EACH ROW EXECUTE PROCEDURE article_insert_notify();
And the node.js code:
var pg = require('pg'),
    pgConnection = "postgres://user:pass@localhost/db";

pg.connect(pgConnection, function(err, client) {
    client.query('LISTEN "article_watcher"');
    client.on('notification', function(data) {
        console.log(data.payload);
    });
});
How can I ensure a full-time LISTEN, or how could I catch those timeouts to reissue a LISTEN query? Or does a module other than node-postgres offer more appropriate tools for this?
I got an answer to my issue on the node-postgres repo. To quote Brianc:
pg.connect is used to create pooled connections. Using a connection pool connection for listen events really isn't supported or a good idea though.
[...]
To 'listen', a connection by definition must stay open permanently. For a connection to stay open permanently it can never be returned to the connection pool.
The correct way to listen in this case is to use a standalone client:
var pg = require('pg'),
    pgConnectionString = "postgres://user:pass@localhost/db";

var client = new pg.Client(pgConnectionString);
client.connect();
client.query('LISTEN "article_watcher"');
client.on('notification', function(data) {
    console.log(data.payload);
});
LISTEN is supposed to last for the session lifetime, or until you issue UNLISTEN. So as long as your code is running, notifications should be delivered. Note that, IIRC, PostgreSQL makes no promise to deliver one notification per NOTIFY: identical notifications signaled within the same transaction may be folded into a single delivery. Not sure about 9.1; the NOTIFY payload is available there, and distinct payloads make notifications distinct, so the folding applies less often.
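A small illustration of that folding, using the article_watcher channel from the question (the duplicate NOTIFY below is deliberate):
BEGIN;
NOTIFY article_watcher, 'changed';
NOTIFY article_watcher, 'changed';  -- identical channel + payload in one transaction
COMMIT;  -- listeners receive this notification only once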
A permanent connection in pg-promise is:
db.connect({direct: true})
    .then(...)
    .catch(...);

Continue insert when exception is raised in postgres

Hi,
I am trying to insert a batch of records, and when any record fails to insert I need to trap that record, log it to my failed-record maintenance table, and then continue the insert. Kindly help on how to do this.
If you are using a Spring or EJB container, there is a simple trick which works very well: provide a LogService with a logWarning(String message) method. The method must be annotated/configured with the REQUIRES_NEW transaction setting, so the log entry commits even when the surrounding insert transaction rolls back.
If not, you'll have to simulate it using API calls: open a different connection for the logging, begin a transaction when you enter the method, and commit it before leaving.
When not using transactions for the insert there is actually nothing special you need to do, as by default most databases run in autocommit mode and commit after every statement.
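If the batch instead runs server-side in PostgreSQL, a database-only alternative (distinct from the container trick above) is a per-row BEGIN ... EXCEPTION sub-block; the table and function names below are hypothetical:
-- Hypothetical table for rows that failed to insert.
CREATE TABLE failed_records (
    raw_data text,
    error    text,
    at       timestamptz DEFAULT now()
);

CREATE OR REPLACE FUNCTION insert_batch(rows text[]) RETURNS void AS $$
DECLARE
    r text;
BEGIN
    FOREACH r IN ARRAY rows LOOP
        BEGIN
            INSERT INTO target_table (data) VALUES (r);  -- target_table is hypothetical
        EXCEPTION WHEN OTHERS THEN
            -- Only this sub-block rolls back; the loop carries on.
            INSERT INTO failed_records (raw_data, error) VALUES (r, SQLERRM);
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
Note that each EXCEPTION block opens a subtransaction, which adds per-row overhead.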