I'm trying to learn SQL Service Broker (SSB), and I'm starting by going through some simple tutorials on MSDN. I'm looking at "Completing a Conversation Between Databases". Request and Reply Messages get set up on an Initiator and a Target database, there's a Contract on both dbs using these messages, a Service on the Initiator db that uses a Queue, and a Service on the Target that uses a Queue and the Contract.
The Initiator sends a message to the Target Service, beginning a dialog. The Target picks up this message and sends a reply (and calls END CONVERSATION), and finally the Initiator picks up the reply and also calls END CONVERSATION.
If I now do SELECT * FROM sys.conversation_endpoints on the Initiator, no rows are returned. However there is a row returned on the Target database; the conversation is in a CLOSED state.
Is this correct (i.e. should the Target db still be storing this conversation)? If not, how do I get rid of the conversation on the Target db? If it is correct, when do these conversations disappear?
This is the code for the Target db picking up the request and sending the reply:
DECLARE @RecvReqDlgHandle UNIQUEIDENTIFIER;
DECLARE @RecvReqMsg NVARCHAR(100);
DECLARE @RecvReqMsgName sysname;

BEGIN TRANSACTION;

WAITFOR
( RECEIVE TOP(1)
      @RecvReqDlgHandle = conversation_handle,
      @RecvReqMsg = message_body,
      @RecvReqMsgName = message_type_name
  FROM TargetQueue2DB
), TIMEOUT 1000;

SELECT @RecvReqMsg AS ReceivedRequestMsg;

IF @RecvReqMsgName = N'//BothDB/2DBSample/RequestMessage'
BEGIN
    DECLARE @ReplyMsg NVARCHAR(100);
    SELECT @ReplyMsg = N'<ReplyMsg>Message for Initiator service.</ReplyMsg>';

    SEND ON CONVERSATION @RecvReqDlgHandle
        MESSAGE TYPE [//BothDB/2DBSample/ReplyMessage] (@ReplyMsg);

    END CONVERSATION @RecvReqDlgHandle;
END

COMMIT TRANSACTION;
GO
The target endpoints are removed with a 30-minute delay in order to prevent replay attacks. Check whether sys.conversation_endpoints.security_timestamp is set on the CLOSED endpoints.
Further clarification, in addition to Remus's answer: if the conversation cannot complete normally, it can still show up in sys.conversation_endpoints beyond the security_timestamp. In that case you can use the unique identifier in sys.conversation_endpoints.conversation_handle (e.g. 94798AB6-DF37-E211-BF0F-002215A14A37) to remove the conversation manually:
END CONVERSATION '94798AB6-DF37-E211-BF0F-002215A14A37' WITH CLEANUP;
WITH CLEANUP is important; see END CONVERSATION (Transact-SQL) on MSDN.
I have a table where, for each insert, I have to notify a channel with a JSON payload. I am using pg_notify in a trigger function to do that. Now I need to manage that message queue so that when its size reaches 80%, older messages are deleted until it drops back to 50%. I have searched online but haven't found any answers. Can anyone help?
I am using below mentioned trigger for notifying.
CREATE OR REPLACE FUNCTION action_notify() RETURNS TRIGGER AS $BODY$
DECLARE
    action_json TEXT;
BEGIN
    RAISE DEBUG 'begin: action_notify';
    -- Note the comma between the two fields; without it the payload is not valid JSON
    action_json := '{"action": "'||NEW.action||'", '
                 ||'"action_params": "'||NEW.action_params||'"}';
    PERFORM pg_notify(TG_TABLE_NAME, action_json);
    RAISE DEBUG 'end: action_notify';
    RETURN NULL;
END;
$BODY$ LANGUAGE plpgsql;
It would be a great help if someone could guide me on how to manage this message queue. I am not using any other message queue like RabbitMQ; I'm managing it entirely from Postgres. What's the best way to implement this?
Using PostgreSQL 9.3.
Short version: you can't do that with LISTEN and NOTIFY. It's a reliable-delivery queue, and there's no facility for purging old entries.
Instead of passing the message as a notification payload, you should probably insert it into a table, then just NOTIFY that the table has changed. You can manage the size of the table with periodic jobs to truncate old entries.
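A minimal sketch of that table-based approach, using Python with SQLite standing in for PostgreSQL; the `messages` table, its columns, and the capacity numbers are all hypothetical. Each insert checks the high-water mark (80% of capacity) and trims the oldest rows back to the low-water mark (50%):

```python
import sqlite3

CAPACITY = 10                 # hypothetical maximum queue size
HIGH = int(CAPACITY * 0.8)    # trim when the table reaches 80%...
LOW = int(CAPACITY * 0.5)     # ...down to 50%

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE messages (
        id INTEGER PRIMARY KEY AUTOINCREMENT,  -- preserves insertion order
        payload TEXT NOT NULL
    )
""")

def enqueue(payload: str) -> None:
    """Insert a message, then trim the oldest rows if over the high-water mark."""
    conn.execute("INSERT INTO messages (payload) VALUES (?)", (payload,))
    # In PostgreSQL you would also PERFORM pg_notify(...) here to wake listeners.
    (count,) = conn.execute("SELECT COUNT(*) FROM messages").fetchone()
    if count >= HIGH:
        # Keep only the newest LOW messages; delete everything older.
        conn.execute(
            "DELETE FROM messages WHERE id NOT IN "
            "(SELECT id FROM messages ORDER BY id DESC LIMIT ?)",
            (LOW,),
        )
    conn.commit()

for i in range(20):
    enqueue(f"message {i}")
```

In PostgreSQL the trim could equally run as a periodic cron/pgAgent job instead of inline on every insert, which keeps the insert path cheap.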
After reading tons of information on the web about Azure WebJobs, I'm confused: the documentation says a job should be idempotent; on the other hand, blogs say they use WebJobs for actions such as "charging a customer" or "sending an e-mail".
This documentation says that running a continuous WebJob on multiple instances with a queue could result in being called more than once. Do people really ignore the fact that they could charge their customer twice, or send an e-mail twice?
How can I make sure I can run a WebJob with a queue on a scaled web app and messages are processed only once?
I do this using a database, an UPDATE query with a row lock, and a TransactionScope object.
In your Order table, create a column to track the state of the action you are taking in your WebJob, e.g. EmailSent.
In the QueueTrigger function, begin a transaction, then execute an UPDATE on the customer order with the ROWLOCK hint that sets EmailSent = 1 WHERE EmailSent = 0. If the rows-affected count from SqlCommand is 0, exit the function: another WebJob instance has already sent the email. Otherwise, send the email and call Complete() on the TransactionScope object if it was sent successfully.
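The claim-before-act step can be sketched like this, with Python and SQLite standing in for SQL Server and a hypothetical Orders table (the real version wraps the claim and the send in a TransactionScope and uses the ROWLOCK hint):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Orders (Id INTEGER PRIMARY KEY, "
    "EmailSent INTEGER NOT NULL DEFAULT 0)"
)
conn.execute("INSERT INTO Orders (Id) VALUES (42)")
conn.commit()

def process_order(order_id: int) -> bool:
    """Return True if this worker won the claim and should send the email."""
    cur = conn.execute(
        "UPDATE Orders SET EmailSent = 1 WHERE Id = ? AND EmailSent = 0",
        (order_id,),
    )
    if cur.rowcount == 0:
        # Another worker already claimed this order: do nothing.
        return False
    # Claim succeeded: send the email here, exactly once.
    # (In the TransactionScope version you commit only after the send
    # succeeds, so a failed send releases the claim for a retry.)
    conn.commit()
    return True

first = process_order(42)   # claims the row and sends
second = process_order(42)  # a duplicate queue delivery: claim fails
```

If sending the email fails, never call Complete(): the rollback reverts EmailSent to 0 so another delivery of the message can retry.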
That should provide the idempotency you want.
Hope that helps.
I have a PostgreSQL PL/sh function:
CREATE OR REPLACE FUNCTION MDN_REG_LOCATE(MDN VARCHAR(50), CALLID VARCHAR(50)) RETURNS TEXT AS '
#!/bin/sh
/home/infoobjects/Projects/java/execute.sh $1 $2 &
logger -t "data" "$1$2"
' LANGUAGE plsh;
and execute.sh calls a Java process (method) that takes 3 minutes to run. I have tried to make the call asynchronous by appending & to the command in execute.sh.
My problem is that the PostgreSQL function still waits for the result and does not behave asynchronously, even though the shell script itself is backgrounded: the logger line fires immediately after the call to MDN_REG_LOCATE(), yet the function still blocks for the full 3 minutes. I don't know what I'm missing; please help.
Thanks in advance.
Simply backgrounding a process isn't enough; it's still attached to its parent process. See this answer.
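The blocking is easy to demonstrate outside PostgreSQL. PL/sh reads the script's stdout until EOF, and a backgrounded child keeps that pipe open unless its output is redirected away. A Python sketch of the same mechanism (`sleep 2` stands in for the 3-minute Java job):

```python
import subprocess
import time

def timed(cmd: str) -> float:
    """Run cmd under sh and read its stdout to EOF, as PL/sh does.
    Returns the elapsed wall-clock time in seconds."""
    start = time.monotonic()
    p = subprocess.Popen(["sh", "-c", cmd], stdout=subprocess.PIPE)
    p.stdout.read()   # blocks until every writer of the pipe has exited
    p.wait()
    return time.monotonic() - start

# Backgrounded, but the child inherits (and holds open) the stdout pipe,
# so the reader still blocks for the full 2 seconds:
naive = timed("sleep 2 &")

# Output redirected away: no writer keeps the pipe open, returns at once:
detached = timed("sleep 2 >/dev/null 2>&1 &")
```

So in the PL/sh body, `/home/infoobjects/Projects/java/execute.sh $1 $2 >/dev/null 2>&1 &` would return immediately, though the LISTEN/NOTIFY design described next is still the more robust approach.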
You're likely to be much better off reworking your logger program to keep a persistent connection to the database where it LISTENs for tasks. Your client (or a trigger, or whatever would normally invoke your PL/Sh function) sends a NOTIFY with the parameters as a payload, or (for older Pg versions) INSERTs a row into a queue table then sends a NOTIFY to tell the listening client to look at the queue table.
The listening client can then run the background process with no worries about holding up the database. Best of all, the NOTIFY is transactional; it's only delivered when the transaction that sent it commits, at the time it commits.
I want to implement a stored procedure (within a Service Broker infrastructure) that calls a web service. I looked at some examples from Aschenbrenner's book on Service Broker, but I didn't find any with a web service call. Could anyone help?
Thanks
Sqlbs
We had a similar task at my company and found that an optimal solution was to use asynchronous triggers with the External Activator, which calls web services from .NET and dequeues messages after a successful call. What that means is that you create a regular database trigger that sends a message to a Service Broker queue for asynchronous processing, a.k.a. an asynchronous trigger. Here is a sample from chapter 10 of Klaus's book:
-- Create the trigger written with T-SQL
CREATE TRIGGER OnCustomerInserted ON Customers FOR INSERT
AS
DECLARE @conversationHandle UNIQUEIDENTIFIER
DECLARE @fromService SYSNAME
DECLARE @toService SYSNAME
DECLARE @onContract SYSNAME
DECLARE @messageBody XML

SET @fromService = 'CustomerInsertedClient'
SET @toService = 'CustomerInsertedService'
SET @onContract = 'http://ssb.csharp.at/SSB_Book/c10/CustomerInsertContract'

-- Check if there is already an ongoing conversation with the TargetService
SELECT @conversationHandle = ConversationHandle FROM SessionConversations
WHERE SPID = @@SPID
    AND FromService = @fromService
    AND ToService = @toService
    AND OnContract = @onContract

IF @conversationHandle IS NULL
BEGIN
    -- We have to begin a new Service Broker conversation with the TargetService
    BEGIN DIALOG CONVERSATION @conversationHandle
        FROM SERVICE @fromService
        TO SERVICE @toService
        ON CONTRACT @onContract
        WITH ENCRYPTION = OFF;

    -- Create the dialog timer for ending the ongoing conversation
    BEGIN CONVERSATION TIMER (@conversationHandle) TIMEOUT = 5;

    -- Store the ongoing conversation for further use
    INSERT INTO SessionConversations (SPID, FromService, ToService, OnContract, ConversationHandle)
    VALUES
    (
        @@SPID,
        @fromService,
        @toService,
        @onContract,
        @conversationHandle
    )
END

-- Construct the request message
SET @messageBody = (SELECT * FROM INSERTED FOR XML AUTO, ELEMENTS);

-- Send the message to the TargetService
;SEND ON CONVERSATION @conversationHandle
MESSAGE TYPE [http://ssb.csharp.at/SSB_Book/c10/CustomerInsertedRequestMessage] (@messageBody);
Instead of using stored procedures that call web services through managed code (internal activation), we decided it was better to offload that processing outside of SQL Server, and found this nice little tool created by Microsoft, the External Activator, which listens to the activation queue and launches an application when there is a new message in the queue. For implementation details, see chapter 4 of Klaus's book.
I would build a Windows service at the receiving end of Service Broker (and call the web service from it as from any Windows app). Somehow I don't think calling a web service from inside the database is a good idea.
You can find out about the External Activator here, and download the Service Broker Interface / External Activator here. The Service Broker Interface is just great: easy to use.
Hi,
I am trying to insert a batch of records, and when any record fails to insert I need to trap that record, log it to my failed-record maintenance table, and then continue the insert. Kindly help on how to do this.
If using a Spring or EJB container, there is a simple trick which works very well: provide a LogService with a logWarning(String message) method. The method must be annotated/configured with the REQUIRES_NEW transaction setting, so the log entry commits even when the surrounding insert transaction rolls back.
If not, you'll have to simulate it using API calls: open a separate connection for the logging; begin a transaction when you enter the method, and commit it before leaving.
When not using transactions for the insert, there is actually nothing special you need to do, as by default most databases run in autocommit mode and commit after every statement.
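Without a container, the per-record pattern can be sketched directly. This is Python with SQLite as a stand-in, and `records`/`failed_records` are hypothetical table names: each row is inserted in its own transaction, and a failure is rolled back, logged to the maintenance table, and the loop continues.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, value TEXT NOT NULL)")
conn.execute("CREATE TABLE failed_records (id INTEGER, error TEXT)")

# Two rows in this batch will fail: a NULL value and a duplicate key.
batch = [(1, "a"), (2, None), (3, "c"), (1, "dup")]

for rec_id, value in batch:
    try:
        conn.execute("INSERT INTO records (id, value) VALUES (?, ?)",
                     (rec_id, value))
        conn.commit()
    except sqlite3.Error as exc:
        conn.rollback()
        # Log the failed record to the maintenance table and keep going.
        # (With a real container this insert would run under REQUIRES_NEW
        # or on a separate autocommit connection.)
        conn.execute("INSERT INTO failed_records (id, error) VALUES (?, ?)",
                     (rec_id, str(exc)))
        conn.commit()
```

The two good rows land in `records`; the NULL-value row and the duplicate-key row each leave an entry in `failed_records` with the database error message.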