I maintain an application that uses pg_notify to capture data changes from insert, update, and delete events. The problem is that some of the data was not synced properly. I would therefore like to monitor every notification sent by pg_notify to verify that it was successfully delivered to the listening clients/subscribers. My goal is to show that the failure point is not pg_notify, which would mean the application (or some other factor) is failing to process the data correctly. Is this kind of monitoring possible?
Sure, no problem. Write a process that listens on all available channels and writes every notification it gets into a log file. Then you know which notifications were sent. If any client did not react to an event, the client was either not connected at the time or ignored the message.
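The quickest monitor of this kind is a dedicated psql session. Note that Postgres has no wildcard LISTEN, so every channel the application uses has to be registered explicitly; the channel names below are placeholders:

    -- Run in a dedicated psql session; register every channel your application uses.
    LISTEN orders_changed;
    LISTEN inventory_changed;

    -- psql reports pending notifications after each command completes, so an
    -- occasional no-op makes them visible:
    SELECT 1;
    -- Asynchronous notification "orders_changed" with payload "..." received
    -- from server process with PID 12345.

Pipe the session's output into a file (for example psql ... | tee notify.log) and you have a permanent record of every notification the server actually delivered.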
Related
I'm building my first CloudKit application and am using CKFetchRecordZoneChangesOperation on startup to fetch any records that changed while the current device was offline.
I am also calling CKFetchRecordZoneChangesOperation when I receive a subscription notification of changes.
It is possible for the subscription notification to come in before the startup call finishes. I am currently using a lock to prevent the second call from starting until the recordZoneFetchCompletionBlock handler is called, signalling that the first one is done. This works, but it also smells a bit hacky.
I have a huge database where, in some places, we use Postgres notifications. We noticed that the queue usage is increasing; we check it by executing this simple command: select pg_notification_queue_usage();.
When it reaches 100%, all messages are gone. The problem is that I don't know who listens to the notifications or what channels we have. I identified only two services that listen for those notifications, but apparently they are not the only listeners.
My task is to track down the other places where we use notifications (consuming or producing) in order to find the root cause. How can I do that?
The only thing I found is the query select pg_notification_queue_usage(); Postgres doesn't seem to provide any other useful functions related to this feature.
I did some experiments. I launched a local Postgres instance and started publishing notifications there. Everything worked as expected. When I did it again, but without actually consuming the notifications, the queue usage started to grow. That's what I expected, though.
Then, I restarted the process and the queue size dropped to 0. That's exactly what the docs say about it.
A session's listen registrations are automatically cleared when the session ends.
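The whole experiment fits in two psql sessions (a minimal sketch; the channel name is made up):

    -- Session 1: register a listener, then leave the session idle so it
    -- never consumes anything.
    LISTEN test_channel;

    -- Session 2: publish a batch of distinct payloads and watch usage grow.
    SELECT pg_notify('test_channel', 'payload-' || g)
    FROM generate_series(1, 100000) AS g;

    SELECT pg_notification_queue_usage();  -- creeps up while session 1 stays idle

    -- Closing session 1 clears its LISTEN registration and usage drops back to 0.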
In production we did exactly the same: we restarted the known services, but the notification queue didn't drop to 0 as we expected.
That means something else is listening on one of the channels but isn't consuming the notifications, or is consuming them too slowly.
Is there any way of identifying such listeners?
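For what it's worth, pg_listening_channels() only reports the current session's registrations, and there is no catalog view exposing other sessions' LISTENs. One way to narrow it down (a sketch, not a definitive procedure; be careful with pg_terminate_backend in production):

    -- Long-idle sessions are the usual suspects for a stuck listener.
    SELECT pid, usename, application_name, client_addr, state_change
    FROM pg_stat_activity
    WHERE state = 'idle'
    ORDER BY state_change;

    -- Ending a session clears its LISTEN registrations (see the docs quote
    -- above); if queue usage drops after this, that backend was the laggard.
    SELECT pg_terminate_backend(12345);  -- pid of a suspect session
    SELECT pg_notification_queue_usage();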
I have implemented a NOTIFY/LISTEN mechanism: when a special request is sent to the web server, I use NOTIFY to tell the workers (written in Python) that there's a pending request waiting to be processed.
The implementation works fine, but the problem is that if the worker server is restarting, the notification gets lost, since at that particular moment there's no listener.
I could set up a service like RabbitMQ or similar, but my needs are so simple that deploying such a monster would be overkill.
Is there any way, a configuration variable perhaps, that can give some persistence to the notification mechanism?
Thanks in advance
I don't think there is a way to persist notification channels, but you can simply store the pending requests in a table and have the worker check for any missed work on startup.
Either a timestamp or a pending/completed flag would work, depending on what kind of work it's doing.
For consistency, you can have the NOTIFY fire from an INSERT trigger on the queue table, and have the worker always check for any remaining work (not just a specific request) when notified.
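A minimal sketch of that layout, with made-up table, function, and channel names:

    -- Pending-work queue; the done flag lets a restarted worker find missed rows.
    CREATE TABLE request_queue (
        id         bigserial PRIMARY KEY,
        payload    jsonb NOT NULL,
        done       boolean NOT NULL DEFAULT false,
        created_at timestamptz NOT NULL DEFAULT now()
    );

    -- Fire a notification for every inserted request.
    CREATE FUNCTION notify_new_request() RETURNS trigger AS $$
    BEGIN
        PERFORM pg_notify('new_request', NEW.id::text);
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER request_queue_notify
    AFTER INSERT ON request_queue
    FOR EACH ROW EXECUTE FUNCTION notify_new_request();  -- EXECUTE PROCEDURE before PG 11

    -- On startup, and on every notification, the worker scans for all
    -- unfinished work rather than one specific request:
    SELECT id, payload FROM request_queue WHERE NOT done ORDER BY id;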
In my Service Broker design, I need to make asynchronous calls and have some work done in the background (inside SQL Server only, such as updating tables).
There are certain points to take into consideration based on the requirements:
It's a one-way data push: just place a message into the SB queue and forget about it. No acknowledgement is required.
Only one database is involved in the design; there is no need for multiple databases.
Messages will be placed into the SB queue by a stored proc (this SP will be called by an application).
Given the above points, it seems the requirement doesn't call for creating two different SB services, as a single service would suffice. I designed the scenario with only one SB service, and when creating the conversation dialog I assigned the same service name to the FROM and TO clauses. The program pushes data to the SB queue and activation runs the associated stored procedure. It works just fine.
DECLARE @RecordConversationHandle uniqueidentifier;

BEGIN DIALOG CONVERSATION @RecordConversationHandle
    FROM SERVICE UpdateQueueStatus
    TO SERVICE 'UpdateQueueStatus'
    WITH ENCRYPTION = OFF;
Please help me with any suggestions on the proposed design. Any issues, or anything that demands attention to improve the design for better performance and scalability, would be much appreciated.
Service Broker is designed for dialogs, not monologue conversations. Don't design something new; there are tons of good reasons why conversations are always dialogs.
You can create a sending service (Service1), which is used for sending messages and which receives End Dialog messages and ends the dialog, and another service (Service2), which receives messages, does some processing with them, and ends the dialog when the work is done.
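A minimal sketch of that two-service layout (all object names are made up):

    CREATE MESSAGE TYPE [//Demo/UpdateRequest] VALIDATION = WELL_FORMED_XML;
    CREATE CONTRACT [//Demo/UpdateContract]
        ([//Demo/UpdateRequest] SENT BY INITIATOR);

    CREATE QUEUE InitiatorQueue;  -- receives EndDialog/Error messages
    CREATE QUEUE TargetQueue;     -- receives application messages

    CREATE SERVICE Service1 ON QUEUE InitiatorQueue;  -- initiator, no contract
    CREATE SERVICE Service2 ON QUEUE TargetQueue ([//Demo/UpdateContract]);

    -- Sending from Service1 to Service2:
    DECLARE @h uniqueidentifier;
    BEGIN DIALOG CONVERSATION @h
        FROM SERVICE Service1
        TO SERVICE 'Service2'
        ON CONTRACT [//Demo/UpdateContract]
        WITH ENCRYPTION = OFF;
    SEND ON CONVERSATION @h MESSAGE TYPE [//Demo/UpdateRequest] (N'<update/>');

Service2's activation procedure RECEIVEs the message, does the work, and issues END CONVERSATION; Service1's activation procedure RECEIVEs the resulting EndDialog message and ends its side of the dialog as well.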
The main reason for two services in a dialog, and for dialog-oriented conversations in general, is the ability to disable a queue. The initiator's queue may be enabled while, for whatever reason, the target's queue is disabled. In this case, sending messages proceeds without a "disabled queue" error, and the messages wait in the transmission queue until the target queue becomes enabled again.
That is why a contract may contain just one message type, and a service may be created without specifying any contract: such a service can only initiate conversations, and its queue is the initiator's queue.
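This is easy to see with the sketch above (same made-up queue names):

    -- Disable the target queue; SEND on the initiator side still succeeds,
    -- and messages accumulate until the queue is re-enabled.
    ALTER QUEUE TargetQueue WITH STATUS = OFF;

    -- Inspect messages waiting for delivery:
    SELECT to_service_name, enqueue_time, transmission_status
    FROM sys.transmission_queue;

    ALTER QUEUE TargetQueue WITH STATUS = ON;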
There is a caveat: BEGIN CONVERSATION TIMER. It puts the standard message type http://schemas.microsoft.com/SQL/ServiceBroker/DialogTimer into the local queue of the specified conversation dialog.
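For example, reusing the @h handle from the sketch above:

    -- After 600 seconds a DialogTimer message shows up on this side's queue,
    -- independently of anything the other side sends.
    BEGIN CONVERSATION TIMER (@h) TIMEOUT = 600;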
One use case where a dialog on the same service may be useful is a recovery process. In this case, however, there should be a specific message type that is received at a higher priority than ordinary messages. The activation procedure first receives a recovery message, tries to recover, and rolls back if unsuccessful; it then receives ordinary messages and either commits the receipt of both message types or rolls back again if unsuccessful.
Let's say I wanted to have a saga that gets created by some event, then sits and waits for a few hours, and, if nothing happens, sends off some command.
Now, if this Saga was all in-memory and I had to restart the app/server, the saga would be unloaded and never seen again, right?
Would I use Event Sourcing to bring this Saga up to speed once the system is back online?
If so, I would need pretty much a separate Event Store with "active sagas" that can be replayed at system startup, to get my Sagas up to speed. So far it seems good to me, but how would I implement the timeout?
I would need some way of "faking" the timeouts at replay, taking into account there may be several, subsequent timeouts depending on the events going into the saga.
The best way to achieve this capability is with another endpoint that is capable of returning a message back to you at a certain point in time. For example, your saga may dispatch a message to this "timeout manager" and say wake me in 1 hour or 1 day or even 1 year. The message would then be returned to you at that time. Ideally this message would have business meaning that would cause an action to occur.
Perhaps the best example of this is something like customer signup where, if the customer hasn't confirmed their account within 7 days of signup, you notify them via email. The "timeout message" would effectively be: RemindUserToConfirmAccountMessage. When this message is received back by the saga after 7 days, the saga determines, based upon its current state, whether the message needs to be handled and a customer email sent. But if the user has already confirmed the account, the message can be discarded with no action taken.
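One way to make such timeouts durable is a small timeout store that a dispatcher process polls; here is a minimal sketch in SQL (all names invented; the dispatch loop itself would live in the application):

    -- One row per requested wake-up call.
    CREATE TABLE saga_timeouts (
        saga_id      uuid        NOT NULL,
        message_type text        NOT NULL,  -- e.g. 'RemindUserToConfirmAccountMessage'
        due_at       timestamptz NOT NULL,
        payload      jsonb
    );

    -- The dispatcher claims due timeouts atomically and hands the messages
    -- back to the owning sagas; each saga then decides from its current
    -- state whether to act or to discard.
    DELETE FROM saga_timeouts
    WHERE due_at <= now()
    RETURNING saga_id, message_type, payload;

Replaying events at startup then only has to rebuild saga state; pending timeouts survive the restart on their own.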