Service Broker : Same Service in 'From' & 'To' clause in BEGIN DIALOG Statement - tsql

In my Service Broker design, I need to make asynchronous calls and have some work done in the background (inside SQL Server only, such as updating tables).
Based on the requirements, a few points need to be taken into consideration:
It's a one-way data push: just place a message into the SB queue and forget it. No acknowledgement is required.
Only one database is involved in the design; there is no need for multiple databases.
Messages will be placed on the SB queue by a stored procedure (this SP will be called by an application).
Looking at the points above, the requirement doesn't seem to call for creating 2 different SB services; a single service would suffice. So I designed the scenario with only one SB service and, while creating the conversation dialog, assigned the same service name to both the 'From' and 'To' clauses. The program pushes data to the SB queue and activation fires the associated stored procedure. It works just fine.
BEGIN DIALOG CONVERSATION @RecordConversationHandle
FROM SERVICE [UpdateQueueStatus]
TO SERVICE 'UpdateQueueStatus'
WITH ENCRYPTION = OFF;
Any suggestions on the proposed design? Any issues, or anything that demands attention to improve it for better performance and scalability, would be much appreciated.

Service Broker is designed for dialogs, not monologue conversations. Don't design something new (there are tons of good reasons why conversations are always dialogs).
You can create a sending service (Service1), which is used for sending messages and which receives "End Dialog" messages and ends the dialog, and another service (Service2), which receives the messages, does some processing with them, and ends the dialog when the work is done.

The main reason for having two services in a dialog, and for dialog-oriented conversations in general, is the ability to disable a queue. The initiator's queue may be enabled while, for whatever reason, the target's queue is disabled. In that case, sending messages proceeds without a "disabled queue" error, and the messages wait in the transmission queue until the target queue becomes enabled again.
That is also why a contract may contain just one message type, and why a service may be created without specifying any contract: such a service can only act as an initiator.
There is a caveat: BEGIN CONVERSATION TIMER. It puts the standard message type http://schemas.microsoft.com/SQL/ServiceBroker/DialogTimer into the local queue that the specified conversation dialog belongs to.
One use case where a dialog on the same service may be useful is a recovery process. In this case, however, there should be a specific message type received with a higher priority than ordinary messages. The activation procedure first receives a recovery message and tries to recover, rolling back if unsuccessful; it then receives the ordinary messages and commits the receipt of both message types, or rolls back again if unsuccessful.

Related

Is it acceptable to model an event queue as a restful service?

I have been looking at RESTful Web Services and was wondering about modelling an event queue in REST.
Assuming the event queue is accessible at the URL http://my.domain/events, it seems to me that a POST applied to this URL is okay because it will add the event to the end of the list that represents the queue. Further, if I perform a GET on this URL, it seems to me that returning the head of the queue is also okay.
My question is - is it okay for the GET operation to also remove the head of the queue or should this be performed by a separate DELETE operation?
is it okay for the GET operation to also remove the head of the queue
No, it is not, from a REST perspective. GET requests should be safe according to REST best practices: making any number of GET requests to a URL should have the same effect as making no requests at all.
There's one more concern about your design. There are usually two common patterns to retrieve a queue head:
The first one is to just get the head, process it, and then notify the queue to remove the message if it was processed successfully; if not, the message goes back to the queue to be processed again later. This is the more robust approach.
The second one is to get the queue head and remove it at the same time, just as you described in your question.
To support both patterns, I think you should only retrieve (not remove) the message on GET, and implement a DELETE method that returns the deleted message object in its response. This way you comply with the REST uniform interface, and your queue clients will be able to implement both patterns.
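That uniform-interface split can be sketched with an in-memory stand-in for the queue (the names and payloads here are made up; a real service would expose peek and delete over HTTP GET and DELETE):

```python
import collections
import itertools

class EventQueue:
    """In-memory stand-in for the queue behind an events URL.

    GET maps to peek() (safe, repeatable); DELETE maps to delete(),
    which removes the head and returns it as the response body.
    """
    def __init__(self):
        self._events = collections.deque()
        self._ids = itertools.count(1)

    def post(self, payload):
        # POST: append a new event to the tail of the queue.
        event = {"id": next(self._ids), "payload": payload}
        self._events.append(event)
        return event

    def peek(self):
        # Safe GET: calling this any number of times changes nothing.
        return self._events[0] if self._events else None

    def delete(self):
        # DELETE: removes the head and returns it, so the client
        # still receives the message it just consumed.
        return self._events.popleft() if self._events else None

queue = EventQueue()
queue.post("user_signed_up")
queue.post("user_logged_in")
assert queue.peek() == queue.peek()        # GET is repeatable
head = queue.delete()                      # DELETE returns the head
assert head["payload"] == "user_signed_up"
```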
Hope it helps!
Do your integrity requirements allow GET + DELETE in one step?
Events normally should not get lost. What happens if the response retrieval fails after the delete has been executed?
I would GET the head of the queue and then send an acknowledgement containing the ID of the event that was received and successfully processed. That way you guarantee at-least-once delivery.
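A rough sketch of that acknowledgement flow, with an in-memory queue standing in for the service (event ids and payloads are hypothetical):

```python
from collections import deque

class AckQueue:
    """Sketch of at-least-once delivery: GET returns the head but keeps
    it; the consumer must acknowledge the event id before it is removed."""
    def __init__(self):
        self._events = deque()

    def post(self, event_id, payload):
        self._events.append((event_id, payload))

    def get(self):
        # Delivery without removal: if the consumer crashes before
        # acknowledging, the same event is delivered again.
        return self._events[0] if self._events else None

    def ack(self, event_id):
        # Only the acknowledged head is removed.
        if self._events and self._events[0][0] == event_id:
            return self._events.popleft()
        return None

q = AckQueue()
q.post(1, "order_created")
event_id, payload = q.get()
# ... process the event; a crash here means get() returns it again ...
q.ack(event_id)
assert q.get() is None
```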
Depending on the number of events you are processing, a message bus might be the more suitable option here.
Do not become an overzealous REST paradigm worshipper. REST is an architectural style, and it does not necessarily need to convey the contract of the service.
What you describe is perfectly fine as long as the contract between the consumer and the queue is clear and documented.

spring cloud sleuth: manually triggered async services

I have a service A that creates an email and sends it to a customer. The customer will receive the email and will, eventually, click on the link in the body to trigger service B.
How can I correlate two different and completely isolated services that are part of the same business process with sleuth?
Should I leave the span "open", or is there a way to "embed" the trace id somehow in the email?
You can use asynchronous communication (http://cloud.spring.io/spring-cloud-sleuth/spring-cloud-sleuth.html#_asynchronous_communication), for example via a trace-aware representation of the ExecutorService called the TraceableExecutorService (http://cloud.spring.io/spring-cloud-sleuth/spring-cloud-sleuth.html#_executor_executorservice_and_scheduledexecutorservice). You submit a completable future that processes the data in a separate thread; at some point you block and retrieve the result. The trace representation of the ExecutorService takes care of passing the tracing data along.
UPDATE:
If, however, these are two completely separate processes, then I'd close the span and create a completely separate one the moment someone clicks the link. You should never explicitly leave spans open. What will bind the two processes is the trace id. Zipkin doesn't yet support such long-living tasks in the best possible way from the UI point of view, but there is work in progress to improve this (via so-called linked spans).
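One way to carry the trace id across the email hop can be sketched as follows. The confirm URL and the traceId parameter name are assumptions, and in a real Sleuth setup service B would have to feed the recovered id into its tracer explicitly rather than just parse it:

```python
from urllib.parse import urlencode, urlparse, parse_qs
import uuid

# Service A: finish its own span, but embed the trace id in the link
# it puts into the email body.
trace_id = uuid.uuid4().hex  # in Sleuth this would come from the current span
link = "https://example.com/confirm?" + urlencode({"traceId": trace_id})

# Service B: when the link is clicked, recover the trace id from the
# request and start a new span carrying it, binding the two processes.
received = parse_qs(urlparse(link).query)["traceId"][0]
assert received == trace_id
```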

Is there a way to rely on Postgres Notify/Listen mechanism?

I have implemented a Notify/Listen mechanism, so when a special request is sent to the web server, I can use NOTIFY to tell the workers (in Python) that there's a pending request waiting to be processed.
The implementation works fine, but the problem is that if the worker server is restarting, the notification gets lost, since at that particular moment there's no listener.
I could set up a service like RabbitMQ or similar, but my needs are so simple that deploying such a monster is too much.
Is there any way, a configuration variable perhaps, that can give some persistence to the notification mechanism?
Thanks in advance
I don't think there is a way to persist notification channels, but you can simply store the pending requests in a table and have the worker check for any missed work on startup.
Either a timestamp or a pending/completed flag would work, depending on what kind of work it's doing.
For consistency, you can have the NOTIFY fire from an INSERT trigger on the queue table, and have the worker always check for any remaining work (not just a specific request) when notified.
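A minimal sketch of the pending-flag table and the startup catch-up, using sqlite3 as a stand-in for Postgres (sqlite has no LISTEN/NOTIFY, so only the table pattern is shown; the table and column names are made up):

```python
import sqlite3

# Requests are rows with a pending/completed flag, so nothing is lost
# if the worker is down when a notification fires.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE requests (
        id INTEGER PRIMARY KEY,
        payload TEXT NOT NULL,
        completed INTEGER NOT NULL DEFAULT 0
    )
""")

# The web server inserts work; in Postgres an INSERT trigger would
# also fire NOTIFY to wake any listening worker.
conn.execute("INSERT INTO requests (payload) VALUES ('resize image')")
conn.commit()

def catch_up(conn):
    """What the worker runs on startup (and on every notification):
    process anything that was missed while it was down."""
    rows = conn.execute(
        "SELECT id, payload FROM requests WHERE completed = 0"
    ).fetchall()
    for row_id, payload in rows:
        # ... do the actual work here ...
        conn.execute(
            "UPDATE requests SET completed = 1 WHERE id = ?", (row_id,))
    conn.commit()
    return [payload for _, payload in rows]

assert catch_up(conn) == ["resize image"]
assert catch_up(conn) == []   # nothing pending the second time
```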

What triggers UI refresh in CQRS client app?

I am attempting to learn and apply the CQRS design approach (pattern and architecture) to a new project but seem to be missing a key piece.
My client application executes a query and retrieves a list of light-weight, read-only DTOs from the read model. The user selects an item and clicks a button to initiate some action. The action is performed by creating and sending the corresponding command object to the write model (where the command handler carries out the action, updates the data store, etc.) At some point, however, I need to update the UI to reflect changes to the state of the application resulting from the action.
How does the UI know when it is time to refresh the original list?
Additional Info
I have noticed that most articles/blogs discussing CQRS use MVC client apps in their examples. I am working on a Silverlight client right now and am beginning to wonder if the pattern simply doesn't work in that case.
Follow-Up Question
After thinking more about Bartlomiej's response and subsequent discussion, I am wondering about error handling in CQRS. Given that commands are basically fire-and-forget asynchronous operations, how do we report an error condition to the UI?
I see 'refreshing the UI' to take one of two forms:
The operation succeeds, data has changed and the UI should be updated to reflect these changes
The operation fails, data has not changed but the user should be notified of the failure and potential corrective actions.
Even with a Post-Redirect-Get pattern in MVC, you can't really redirect until you know the outcome of the operation. None of the examples I've seen so far address these real-world concerns.
I've been struggling with similar issues for a WPF client. The re-query trigger for any data depends on the data you're updating; commands tend to fall into categories:
The command is a true fire-and-forget method: it informs the back end of a state change, but this change either does not need to be reflected in the UI or simply isn't important to the UI.
The command will alter the result of a single query
The command will alter the result of multiple queries, usually (in my domain at least) in a cascading fashion, that is, changing the state of a single "high level" piece of data will likely affect many "low level" caches.
My first trigger is the page load; very few items are exempt from this, as most pages must assume data has been updated since they were last visited. Some systems, though, may be able to get away with updating only financial and other critical data this way.
For short commands I also update data when 'success' is returned from a command, though this is mostly laziness, as IMHO all CQRS commands should be fired asynchronously. It's still an option I couldn't live without, but one you may have to give up if your implementation expects high latency between command and query.
One pattern I'm starting to make use of is the mediator (most MVVM frameworks come with one). When I fire a command, I also fire a message to the mediator specifying which command was launched. Each cache (a view-model property Retriever<T>) listens for the commands that affect it and updates appropriately. I try to minimise the number of messages while also minimising the number of caches that update unnecessarily from a single message, so I'll (hopefully) eventually end up with a shortlist of update reasons, with each 'reason' updating a list of caches.
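The mediator idea can be sketched like this (the command and cache names are invented; a real MVVM mediator would also carry payloads and marshal callbacks onto the UI thread):

```python
from collections import defaultdict

class Mediator:
    """Toy mediator: view-model caches subscribe to the commands
    that invalidate them, and refresh when one is fired."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, command_name, callback):
        self._subscribers[command_name].append(callback)

    def publish(self, command_name):
        # Fired alongside the command itself; only the caches that
        # registered an interest in this command get refreshed.
        for callback in self._subscribers[command_name]:
            callback()

refreshed = []
mediator = Mediator()
# Hypothetical caches: the customer list cares about both commands,
# the order list only about one.
mediator.subscribe("RenameCustomer", lambda: refreshed.append("customer_list"))
mediator.subscribe("PlaceOrder", lambda: refreshed.append("order_list"))
mediator.subscribe("PlaceOrder", lambda: refreshed.append("customer_list"))

mediator.publish("PlaceOrder")
assert refreshed == ["order_list", "customer_list"]
```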
Another approach is simple honesty: I find that graphically exposing how the system updates itself makes users more willing to be patient with it. On firing a command, show some UI indicating you're waiting for the successful response; on error you could offer to retry and show the error; on success you start updating the relevant fields. Bear in mind that the command could have been fired from another terminal (of which you have no knowledge), so data will eventually need to time out to avoid missing state changes invoked by other machines.
Note the irony that the only efficient method of updating caches and values on a client is to un-separate the commands and queries again, be it through hardcoding or something like a hashmap.
My two cents.
I think MVVM actually fits into CQRS quite well. The ViewModel simply becomes an observable ReadModel.
1 - You initialize your ViewModel state via a query on the ReadModel.
2 - Changes on your ViewModel are automatically reflected on any Views that are bound to it.
3 - Certain changes on your ViewModel trigger a command that is propagated to a message queue; an object responsible for sending those commands to the server takes the messages off the queue and sends them to the WriteModel.
4 - Clients should be well formed, meaning the ViewModel should have performed appropriate validation before it ever triggered the command. Once the command has been triggered, any event notifications can be published onto an event bus for the client to communicate changes to other ViewModels or components in the system interested in those changes. These events should carry the relevant information necessary. Typically, this means that other view models usually don't have to re-query the read model as a result of the change unless they are dependent on other data that needs to be retrieved.
5 - There is an object that connects to the message bus on the server for real-time push notifications when other clients make changes that this client is interested in knowing about, falling back to long-polling if necessary. It propagates those to the internal message bus that ties the components on the client together.
6 - The last part to handle is the fact that clients can be occasionally connected; that should be the only reason a command fails (no internet access at the moment), and it is when the client should be notified of problems.
In my ASP.NET MVC 3 I use 2 techniques depending on use case:
already well-known Post-Redirect-Get pattern which fits nicely with CQRS. Your MVC action that triggers the command returns a redirection to action that performs a query.
in some cases, like real-time updates of other clients, I rely on domain events/messages. I create an event handler that uses SignalR to push changes to all connected and interested clients.
There are two major ways you can go, as far as I know:
1) Design your UI so that the user does not see the changes right away: for instance, show a message telling him his action succeeded, and offer different choices to continue his work. This should buy you enough time to update your read model.
2) More complex, but you might keep the information you have sent to the server and show it in the interface.
Most important of all, I guess: educate your users if you can, so that they know why the data is not there... yet!
I'm only thinking about it now, but all this applies to synchronous command handling; with async handling things get much harder on the brain... the client interface becomes an event eater too.

Silly WebSphere MQ questions

I have two very basic questions on WebSphere MQ. Given that I have been kind of administering it for the past few months, I tend to think these are silly questions.
1. Is there a way to "deactivate" a queue? (for example, through a runmqsc command or through the Explorer interface) - I think not. I think all I can do is just delete it.
2. What will happen if I create a remote queue definition when the real remote queue is not in place? Will it cause any issues on the queue manager? - I think not. I think all I will have are error messages in the logs.
Please let me know your thoughts.
Thanks!
1. Is there a way to "deactivate" a queue?
Yes. You can change the queue attributes like so:
ALTER QLOCAL(QUEUE_NAME) PUT(DISABLED) GET(DISABLED)
Any connected applications will receive a return code on the next API call telling them that the queue is no longer available for PUT/GET. If these are well-behaved programs they will then report the error and either end or go into a retry loop.
2. What will happen if I create a remote queue definition when the real remote queue is not in place?
The QRemote definition will resolve to a transmit queue. If the message can successfully be placed there, your application will receive a return code of zero. (Any unsuccessful PUT will be due to hitting MAXDEPTH or another local problem, not to the fact that the remote queue does not exist.)
The problem will become visible when the channel tries to deliver the message. If the remote QMgr has a Dead Letter Queue, the message will go there; if not, the message will be backed out onto the local XMitQ and the channel will stop.