I want to implement a stored procedure (within a Service Broker infrastructure) that calls a web service. I looked at some examples from Aschenbrenner's book on Service Broker, but I couldn't find any with a web service call. Could anyone help?
Thanks
Sqlbs
We had a similar task at my company and found that the optimal solution was to use asynchronous triggers with an external activator, which calls web services from .NET and de-queues messages after a successful call. What that means is that you create a regular database trigger that sends a message to a Service Broker queue for asynchronous processing, a.k.a. an asynchronous trigger. Here is a sample from Chapter 10 of Klaus's book:
-- Create the trigger written with T-SQL
CREATE TRIGGER OnCustomerInserted ON Customers FOR INSERT
AS
DECLARE @conversationHandle UNIQUEIDENTIFIER
DECLARE @fromService SYSNAME
DECLARE @toService SYSNAME
DECLARE @onContract SYSNAME
DECLARE @messageBody XML

SET @fromService = 'CustomerInsertedClient'
SET @toService = 'CustomerInsertedService'
SET @onContract = 'http://ssb.csharp.at/SSB_Book/c10/CustomerInsertContract'

-- Check if there is already an ongoing conversation with the TargetService
SELECT @conversationHandle = ConversationHandle FROM SessionConversations
WHERE SPID = @@SPID
    AND FromService = @fromService
    AND ToService = @toService
    AND OnContract = @onContract

IF @conversationHandle IS NULL
BEGIN
    -- We have to begin a new Service Broker conversation with the TargetService
    BEGIN DIALOG CONVERSATION @conversationHandle
        FROM SERVICE @fromService
        TO SERVICE @toService
        ON CONTRACT @onContract
        WITH ENCRYPTION = OFF;

    -- Create the dialog timer for ending the ongoing conversation
    BEGIN CONVERSATION TIMER (@conversationHandle) TIMEOUT = 5;

    -- Store the ongoing conversation for further use
    INSERT INTO SessionConversations (SPID, FromService, ToService, OnContract, ConversationHandle)
    VALUES
    (
        @@SPID,
        @fromService,
        @toService,
        @onContract,
        @conversationHandle
    )
END

-- Construct the request message
SET @messageBody = (SELECT * FROM INSERTED FOR XML AUTO, ELEMENTS);

-- Send the message to the TargetService
;SEND ON CONVERSATION @conversationHandle
MESSAGE TYPE [http://ssb.csharp.at/SSB_Book/c10/CustomerInsertedRequestMessage] (@messageBody);
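For reference, the trigger assumes the Service Broker objects and the SessionConversations helper table already exist. The object names below come from the sample above, but the column definitions of SessionConversations are my assumption, so treat this as a minimal sketch rather than the book's exact setup script:

-- Minimal setup sketch (SessionConversations column types are assumed)
CREATE MESSAGE TYPE [http://ssb.csharp.at/SSB_Book/c10/CustomerInsertedRequestMessage]
    VALIDATION = WELL_FORMED_XML;

CREATE CONTRACT [http://ssb.csharp.at/SSB_Book/c10/CustomerInsertContract]
(
    [http://ssb.csharp.at/SSB_Book/c10/CustomerInsertedRequestMessage] SENT BY INITIATOR
);

CREATE QUEUE CustomerInsertedClientQueue;
CREATE QUEUE CustomerInsertedServiceQueue;

CREATE SERVICE CustomerInsertedClient ON QUEUE CustomerInsertedClientQueue;
CREATE SERVICE CustomerInsertedService ON QUEUE CustomerInsertedServiceQueue
(
    [http://ssb.csharp.at/SSB_Book/c10/CustomerInsertContract]
);

-- Helper table the trigger uses to reuse one dialog per session
CREATE TABLE SessionConversations
(
    SPID INT NOT NULL,
    FromService SYSNAME NOT NULL,
    ToService SYSNAME NOT NULL,
    OnContract SYSNAME NOT NULL,
    ConversationHandle UNIQUEIDENTIFIER NOT NULL,
    PRIMARY KEY (SPID, FromService, ToService, OnContract)
);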
Instead of using stored procedures that call web services through managed code (internal activation), we decided it was better to offload that processing outside of SQL Server, and found a nice little tool created by Microsoft, the External Activator, that will listen to the activation queue and launch an application when there is a new message in the queue. For implementation details, please refer to Chapter 4 of Klaus's book.
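The External Activator is driven by QUEUE_ACTIVATION event notifications rather than by polling your queue. A minimal sketch of that wiring, where the notification queue and service names are my own placeholders and must match the ones in the External Activator's configuration file:

-- Queue and service that receive the QUEUE_ACTIVATION events
-- (names are placeholders; they must match the External Activator config)
CREATE QUEUE ExternalActivatorQueue;
CREATE SERVICE ExternalActivatorService ON QUEUE ExternalActivatorQueue
(
    [http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]
);

-- Fire an event notification whenever the target queue receives a message
CREATE EVENT NOTIFICATION CustomerInsertedNotification
    ON QUEUE CustomerInsertedServiceQueue
    FOR QUEUE_ACTIVATION
    TO SERVICE 'ExternalActivatorService', 'current database';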
I would build a Windows service that sits at the receiving end of Service Broker (and calls the web service just as any Windows app would). Somehow I don't think calling a web service from the database is a nice idea.
You can find out about the External Activator here, and download the Service Broker Interface/External Activator here. The Service Broker Interface is just great, and easy to use.
I'm new to Google Cloud SQL and Pub/Sub. I couldn't find documentation anywhere about this, but another question's accepted and upvoted answer seems to say it is possible to publish a Pub/Sub message whenever an insert happens in the database. Excerpt from that answer:
2 - The ideal solution would be to create the Pub/Sub topic and publish to it when you insert new data to the database.
But since my question is a different one, I asked a new question here.
Background: I'm using a combination of Google Cloud SQL, Firestore and Realtime Database for my app, each for its own unique strengths.
What I want to do is write into Firestore and Realtime Database once an insert succeeds in Google Cloud SQL. According to the answer above, these are the steps I should follow:
The app calls a Cloud Function to insert data into the Google Cloud SQL database (PostgreSQL). Note: the Postgres tables have some important constraints and trigger Postgres functions; that's why we want to start here.
When the insert is successful I want Google Cloud SQL to publish a message to Pub/Sub.
Then there is another Cloud Function that subscribes to the Pub/Sub topic. This function will write into Firestore / Realtime Database accordingly.
I got steps #1 & #3 all figured out. The solution I'm looking for is for step #2.
The answer in the other question is simply suggesting that your code do both of the following:
Write to Cloud SQL.
If the write is successful, send a message to a pubsub topic.
There isn't anything that will automate or simplify either of these tasks. There are no triggers for Cloud Functions that will respond to writes to Cloud SQL. You write code for task 1, then write the code for task 2. Both of these things should be straightforward and covered in product documentation. I suggest making an attempt at both (separately), and posting again with the code you have that isn't working the way you expect.
If you need to get started with pubsub, there are SDKs for pretty much every major server platform, and the documentation for sending a message is here.
While Google Cloud SQL doesn't manage triggers automatically, you can create a trigger in Postgres:
CREATE OR REPLACE FUNCTION notify_new_record() RETURNS TRIGGER AS $$
BEGIN
    PERFORM pg_notify('on_new_record', row_to_json(NEW)::text);
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER on_insert
    AFTER INSERT ON your_table
    FOR EACH ROW EXECUTE FUNCTION notify_new_record();
Then, in your client, listen to that event:
import pg from 'pg'

const client = new pg.Client()
client.connect()

client.query('LISTEN on_new_record') // same as the channel passed to pg_notify

client.on('notification', msg => {
    console.log(msg.channel) // on_new_record
    console.log(msg.payload) // {"id":"...",...}
    // ... do stuff
})
In the listener, you can either push to pubsub or cloud tasks, or, alternatively, write to firebase/firestore directly (or whatever you need to do).
Source: https://edernegrete.medium.com/psql-event-triggers-in-node-js-ec27a0ba9baa
You could also check out Supabase, which now supports triggering cloud functions (in beta) after a row has been created/updated/deleted (it essentially does what the code above does, but you get a nice UI to configure it).
I have a piece of functionality where, for each insert into a table, I have to notify a channel with a JSON payload. I am using pg_notify in a trigger function to do that. Now I have to manage that message queue so that when its size reaches 80% it deletes older messages until it is back down to 50%. I have searched online but haven't found any answers. Can anyone help?
I am using the trigger below for notifying:
CREATE OR REPLACE FUNCTION action_notify() RETURNS TRIGGER AS $BODY$
DECLARE
    action_json TEXT;
BEGIN
    RAISE DEBUG 'begin: action_notify';
    action_json := '{"action": "'||NEW.action||'", '||
                   '"action_params": "'||NEW.action_params||'"}';
    PERFORM pg_notify(TG_TABLE_NAME, action_json);
    RAISE DEBUG 'end: action_notify';
    RETURN NULL;
END;
$BODY$ LANGUAGE plpgsql;
It would be a great help if someone could guide me on how to manage this message queue. I am not using any other message queue like RabbitMQ; I'm managing it from Postgres. What's the best way to implement this?
Using PostgreSQL 9.3.
Short version: you can't do that with LISTEN and NOTIFY. It's a reliable-delivery queue, and there's no facility for purging old entries.
Instead of passing the message as a notification payload, you should probably insert it into a table, then just NOTIFY that the table has changed. You can manage the size of the table with periodic jobs to truncate old entries.
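A minimal sketch of that approach; the table and channel names are placeholders of mine, and the purge statement would run from a periodic job (cron, a scheduled agent, etc.) rather than from the trigger itself:

-- Durable queue table; the notification only says "look at the table"
CREATE TABLE action_queue (
    id      BIGSERIAL PRIMARY KEY,
    payload TEXT NOT NULL,
    created TIMESTAMPTZ NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION action_notify() RETURNS TRIGGER AS $BODY$
BEGIN
    INSERT INTO action_queue (payload)
    VALUES ('{"action": "'||NEW.action||'", "action_params": "'||NEW.action_params||'"}');
    PERFORM pg_notify(TG_TABLE_NAME, '');  -- the payload now lives in the table
    RETURN NULL;
END;
$BODY$ LANGUAGE plpgsql;

-- Periodic purge: keep roughly the newest 50,000 rows (threshold is arbitrary)
DELETE FROM action_queue
WHERE id < (SELECT max(id) FROM action_queue) - 50000;

The consumer then reads and deletes rows from action_queue when it receives a notification, so a burst of messages can never overflow a notification payload.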
After reading tons of information on the web about Azure WebJobs: the documentation says a job should be idempotent, while on the other hand blogs say they use WebJobs for actions such as "charging a customer" or "sending an e-mail".
This documentation says that running a continuous WebJob on multiple instances with a queue could result in a message being processed more than once. Do people really ignore the fact that they could charge a customer twice, or send an e-mail twice?
How can I make sure I can run a WebJob with a queue on a scaled web app and messages are processed only once?
I do this using a database, an UPDATE query with a row lock, and a TransactionScope object.
In your Order table, create a column to track the state of the action you are taking in your WebJob, e.g. EmailSent.
In the QueueTrigger function, begin a transaction, then execute an UPDATE query on the customer order with ROWLOCK set that sets EmailSent = 1 with a WHERE EmailSent = 0 clause. If the rows-affected count returned by SqlCommand is 0, exit the function: another WebJob instance has already sent the email. Otherwise, send the email and call Complete() on the TransactionScope object if it was sent successfully. A sketch of the guarding UPDATE follows.
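A minimal T-SQL sketch of that guard, assuming a hypothetical [Order] table keyed by OrderId (the transaction, the e-mail send, and the Complete() call live in the WebJob's C# code):

-- Runs inside the WebJob's transaction; @OrderId comes from the queue message.
-- Zero affected rows means another instance has already claimed this order.
UPDATE [Order] WITH (ROWLOCK)
SET EmailSent = 1
WHERE OrderId = @OrderId
    AND EmailSent = 0;

If the statement reports zero affected rows the function simply returns; otherwise it sends the e-mail and commits, so a failure before the commit rolls the flag back and lets another instance retry.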
That should provide the idempotency you want.
Hope that helps.
I have a PostgreSQL PL/sh function:
CREATE OR REPLACE FUNCTION MDN_REG_LOCATE(MDN VARCHAR(50), CALLID VARCHAR(50)) RETURNS TEXT AS '
#!/bin/sh
/home/infoobjects/Projects/java/execute.sh $1 $2 &
logger -t "data" "$1$2"
' LANGUAGE plsh;
and execute.sh calls a Java process (method) that takes 3 minutes to execute. I have made the script asynchronous by appending & to the command in the script (execute.sh).
My problem is that the PostgreSQL function still waits for the result and does not act asynchronously, even though the shell script itself behaves asynchronously: the logger line in the function above logs immediately after a call to MDN_REG_LOCATE(), but the PostgreSQL function (MDN_REG_LOCATE) still waits the full 3 minutes for the process to finish. I don't know what I am missing; please help me with this.
Thanks in advance.
Simply backgrounding a process isn't enough; it's still attached to its parent process. See this answer.
You're likely to be much better off reworking your logger program to keep a persistent connection to the database where it LISTENs for tasks. Your client (or a trigger, or whatever would normally invoke your PL/Sh function) sends a NOTIFY with the parameters as a payload, or (for older Pg versions) INSERTs a row into a queue table then sends a NOTIFY to tell the listening client to look at the queue table.
The listening client can then run the background process with no worries about holding up the database. Best of all, the NOTIFY is transactional; it's only delivered when the transaction that sent it commits, at the time it commits.
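A minimal sketch of the database side of that pattern, with table and channel names invented for illustration (the 3-minute Java job then runs in the listening client instead of blocking a backend):

-- Queue table holding the work items (replaces the blocking PL/sh call)
CREATE TABLE mdn_tasks (
    id     BIGSERIAL PRIMARY KEY,
    mdn    VARCHAR(50) NOT NULL,
    callid VARCHAR(50) NOT NULL
);

CREATE OR REPLACE FUNCTION mdn_reg_locate(p_mdn VARCHAR(50), p_callid VARCHAR(50))
RETURNS VOID AS $$
BEGIN
    INSERT INTO mdn_tasks (mdn, callid) VALUES (p_mdn, p_callid);
    NOTIFY mdn_tasks;  -- delivered only when the enclosing transaction commits
END;
$$ LANGUAGE plpgsql;

The listening client runs LISTEN mdn_tasks, and on each notification pulls the pending rows from mdn_tasks and launches execute.sh itself.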
I'm trying to learn SQL Service Broker (SSB), and I'm starting by going through some simple tutorials on MSDN. I'm looking at "Completing a Conversation Between Databases". Request and Reply Messages get set up on an Initiator and a Target database, there's a Contract on both dbs using these messages, a Service on the Initiator db that uses a Queue, and a Service on the Target that uses a Queue and the Contract.
The Initiator sends a message to the Target Service, beginning a dialog. The Target picks up this message and sends a reply (and calls END CONVERSATION), and finally the Initiator picks up the reply and also calls END CONVERSATION.
If I now do SELECT * FROM sys.conversation_endpoints on the Initiator, no rows are returned. However there is a row returned on the Target database; the conversation is in a CLOSED state.
Is this correct (i.e. should the Target db still be storing this conversation)? If not, how do I get rid of the conversation on the Target db? If it is correct, when do these conversations disappear?
This is the code for the Target db picking up the request and sending the reply:
DECLARE @RecvReqDlgHandle UNIQUEIDENTIFIER;
DECLARE @RecvReqMsg NVARCHAR(100);
DECLARE @RecvReqMsgName sysname;

BEGIN TRANSACTION;

WAITFOR
( RECEIVE TOP(1)
    @RecvReqDlgHandle = conversation_handle,
    @RecvReqMsg = message_body,
    @RecvReqMsgName = message_type_name
  FROM TargetQueue2DB
), TIMEOUT 1000;

SELECT @RecvReqMsg AS ReceivedRequestMsg;

IF @RecvReqMsgName =
   N'//BothDB/2DBSample/RequestMessage'
BEGIN
    DECLARE @ReplyMsg NVARCHAR(100);
    SELECT @ReplyMsg =
        N'<ReplyMsg>Message for Initiator service.</ReplyMsg>';

    SEND ON CONVERSATION @RecvReqDlgHandle
        MESSAGE TYPE
        [//BothDB/2DBSample/ReplyMessage] (@ReplyMsg);

    END CONVERSATION @RecvReqDlgHandle;
END

COMMIT TRANSACTION;
GO
The target endpoints are removed with a 30-minute delay in order to prevent replay attacks. Is sys.conversation_endpoints.security_timestamp set on the CLOSED endpoints?
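You can check this on the target database with a query along these lines (the catalog view and columns are standard; the filter just narrows it to this scenario):

-- Closed endpoints carry the security_timestamp used for the delayed cleanup
SELECT conversation_handle, state_desc, security_timestamp
FROM sys.conversation_endpoints
WHERE state_desc = 'CLOSED';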
Further clarification in addition to Remus: if the conversation cannot complete normally, it can still show up in sys.conversation_endpoints beyond the security_timestamp. In such a case you can use the unique identifier sys.conversation_endpoints.conversation_handle (e.g. 94798AB6-DF37-E211-BF0F-002215A14A37) to remove the conversation manually:
END CONVERSATION conversation_handle WITH CLEANUP
WITH CLEANUP is important; please see MSDN: END CONVERSATION (Transact-SQL).
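If many conversations are stuck, here is a sketch for cleaning them all up in one pass (use with care: WITH CLEANUP removes the endpoint and its messages without notifying the other side):

-- Remove every leftover closed conversation in the current database
DECLARE @handle UNIQUEIDENTIFIER;
DECLARE handles CURSOR LOCAL FAST_FORWARD FOR
    SELECT conversation_handle
    FROM sys.conversation_endpoints
    WHERE state_desc = 'CLOSED';
OPEN handles;
FETCH NEXT FROM handles INTO @handle;
WHILE @@FETCH_STATUS = 0
BEGIN
    END CONVERSATION @handle WITH CLEANUP;
    FETCH NEXT FROM handles INTO @handle;
END
CLOSE handles;
DEALLOCATE handles;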