Best way of implementing a batch-like operation in a RESTful architecture?

We have a predominantly RESTful architecture for our product. This has enabled us to implement almost all of the required functionality nicely, except for this new requirement that's just come in.
I need to implement a page that lets the user run large-scale DB operations synchronously. They should be able to stop an operation partway through if they realize they made a mistake (rather than waiting for it to complete and then doing an undo operation).
I was wondering if someone could give some pointers as to what would be the best way to implement such functionality?
Cheers!
Nirav

How about a resource that encapsulates a set of batch operations? Creating the resource means kicking off the operations (the data indicating what the operations should do is submitted via POST). Updating the resource allows stopping or modifying it while it is processing.
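A minimal sketch of what such a resource could look like, shown here with Express-style routes; the paths, field names, and the in-memory `jobs` map are illustrative assumptions, not an existing API:

```typescript
import express from "express";

type JobStatus = "running" | "cancelled" | "completed";

interface BatchJob {
  id: string;
  operations: unknown[]; // whatever describes the DB work to perform
  status: JobStatus;
  processed: number;
}

const app = express();
app.use(express.json());

const jobs = new Map<string, BatchJob>(); // stand-in for real persistence

// POST /batch-operations -> creating the resource kicks off the work
app.post("/batch-operations", (req, res) => {
  const id = Date.now().toString(36);
  const job: BatchJob = {
    id,
    operations: req.body.operations ?? [],
    status: "running",
    processed: 0,
  };
  jobs.set(id, job);
  // a hypothetical startProcessing(job) would hand the job to a background worker here
  res.status(201).location(`/batch-operations/${id}`).json(job);
});

// GET /batch-operations/:id -> poll the current status
app.get("/batch-operations/:id", (req, res) => {
  const job = jobs.get(req.params.id);
  job ? res.json(job) : res.sendStatus(404);
});

// PATCH /batch-operations/:id -> update the resource, e.g. { "status": "cancelled" }
app.patch("/batch-operations/:id", (req, res) => {
  const job = jobs.get(req.params.id);
  if (!job) return res.sendStatus(404);
  if (req.body.status === "cancelled") job.status = "cancelled"; // the worker checks this flag
  res.json(job);
});

app.listen(3000);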

I would kick off the large operation in a separate thread. Show the user a constantly updated status of the thread, along with a Cancel button. If the user clicks the Cancel button, you kill the thread.
This is the way I've implemented similar things in the past.
The idea is to give the user control back immediately, but not let them do anything other than cancel until the thread is complete.
In generic terms, you need a "job queue" and a way to manage the queue.
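A rough sketch of that idea; the job shape and the cancellation flag here are illustrative assumptions, not any specific product:

```typescript
// A minimal cancellable job queue: work is split into small steps so that
// a cancel request takes effect between steps rather than killing a thread mid-write.
interface Job {
  id: string;
  steps: Array<() => Promise<void>>; // e.g. one DB batch per step
  cancelled: boolean;
  completed: number;
}

class JobQueue {
  private jobs = new Map<string, Job>();

  enqueue(id: string, steps: Array<() => Promise<void>>): Job {
    const job: Job = { id, steps, cancelled: false, completed: 0 };
    this.jobs.set(id, job);
    void this.run(job); // kick off in the background
    return job;
  }

  cancel(id: string): void {
    const job = this.jobs.get(id);
    if (job) job.cancelled = true; // checked before each step below
  }

  status(id: string): { completed: number; total: number; cancelled: boolean } | undefined {
    const job = this.jobs.get(id);
    return job && { completed: job.completed, total: job.steps.length, cancelled: job.cancelled };
  }

  private async run(job: Job): Promise<void> {
    for (const step of job.steps) {
      if (job.cancelled) return; // the "Cancel button" path
      await step();
      job.completed++;
    }
  }
}
```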

You need to integrate a batch manager, or implement your own. There are several products that can help you. As an example, read this article:
http://www.ibm.com/developerworks/websphere/techjournal/0801_vignola/0801_vignola.html

Related

CQRS - Single command handler?

I'm just trying to wrap my head around CQRS(/ES). I have not done anything serious with CQRS, so I'm probably just missing something very fundamental right now. Currently, I'm reading "Exploring CQRS and Event Sourcing". There is one sentence that puzzles me with regard to commands:
"A single recipient processes a command."
I've seen this also in the CQRS sample application from Greg Young (FakeBus.cs), where an exception is thrown when more than one command handler is registered for any command type.
For me, this is an indication that this is a fundamental principle for CQRS (or Commands?). What is the reason? For me, it is somewhat counter-intuitive.
Imagine I have two components that need to perform some action in response to a command (it doesn't matter if I have two instances of the same component or two independent components). Then I would need to create a handler that delegates the command to these components.
In my opinion, this introduces an unnecessary dependency. In terms of CQRS, a command is nothing more than a message that is sent. I don't see why there should be only one handler for this message.
Can someone tell me what I am missing here? There is probably a very good reason for this that I just don't see right now.
Regards
I am by no means an expert myself with CQRS, but perhaps I can help shed some light.
"A single recipient processes a command.", What is the reason?
One of the fundamental reasons for this is transactional consistency. A command needs to be handled in one discrete (and isolated) part of the application so that it can be committed in a single transaction. As soon as you start to have multiple handlers, distributing the application beyond a single process (and maintaining transactional consistency) is nearly impossible. So, while you could design that way, it is not recommended.
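A small sketch of the registration rule being discussed, mirroring the behaviour attributed to FakeBus.cs above (but not its actual code); the class and method names here are made up for illustration:

```typescript
// A toy command bus that refuses more than one handler per command type.
type Handler<T> = (command: T) => void;

class SimpleCommandBus {
  private handlers = new Map<string, Handler<unknown>>();

  register<T>(commandType: string, handler: Handler<T>): void {
    if (this.handlers.has(commandType)) {
      // The single-recipient rule: a second registration is a configuration error.
      throw new Error(`A handler is already registered for ${commandType}`);
    }
    this.handlers.set(commandType, handler as Handler<unknown>);
  }

  send<T>(commandType: string, command: T): void {
    const handler = this.handlers.get(commandType);
    if (!handler) throw new Error(`No handler registered for ${commandType}`);
    handler(command); // exactly one recipient processes the command
  }
}
```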
Hope this helps.
Imagine I have two components that need to perform some action in response to a command (it doesn't matter if I have two instances of the same component or two independent components). Then I would need to create a handler that delegates the command to these components.
That's the responsibility of events.
A command must be handled by one command handler and must change the state for a single aggregate root. The aggregate root then raises one or more events indicating that something happened. These events can have multiple listeners that perform desired actions.
For example, you have a PurchaseGift command. Your command handler loads the Purchase aggregate root and performs the desired operation, raising a GiftPurchased event. You can have one or more listeners for the GiftPurchased event: one that sends an email to the buyer confirming the operation, and another that sends the gift by mail.
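A condensed sketch of that flow, reusing the PurchaseGift / GiftPurchased names from the example; the handler class, dispatch wiring, and listeners are illustrative only:

```typescript
interface PurchaseGift { purchaseId: string; giftId: string; buyerEmail: string; }
interface GiftPurchased { purchaseId: string; giftId: string; buyerEmail: string; }

// One command handler changes the state of one aggregate root and raises an event.
class PurchaseGiftHandler {
  constructor(private publish: (event: GiftPurchased) => void) {}

  handle(command: PurchaseGift): void {
    // load the Purchase aggregate root, perform the operation, persist it...
    this.publish({
      purchaseId: command.purchaseId,
      giftId: command.giftId,
      buyerEmail: command.buyerEmail,
    });
  }
}

// Events, unlike commands, may have any number of listeners.
const listeners: Array<(event: GiftPurchased) => void> = [
  (e) => console.log(`emailing confirmation to ${e.buyerEmail}`),
  (e) => console.log(`scheduling shipment of gift ${e.giftId}`),
];

const handler = new PurchaseGiftHandler((event) => listeners.forEach((l) => l(event)));
handler.handle({ purchaseId: "p-1", giftId: "g-42", buyerEmail: "buyer@example.com" });
```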

Best way to get status when Iron.io task is completed using Laravel 4 queues

I am going to use Laravel 4 queues and integrate them with Iron.io.
All of that is pretty straightforward, and I don't think I am going to have problems with it.
What interests me is: what is the best way to get the status once a task is completed?
Iron.io is going to make a callback to my server to trigger the job, and once that job completes I need to notify the user about it...
How could I store these responses and still know which job each one relates to, given that there will be a number of different job types?
I would like to hear how you implemented this.
Thanks
As Joseph pointed out in the comments, you'll need to:
Hopefully have a job quick enough that it finishes while the user is still in the browser
Use some sort of way to PUSH data back to your web browser.
The popular methods are (see the sketch after this list):
Websockets
Server-sent events
Long polling (with AJAX)
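For example, a bare-bones server-sent events endpoint could stream job updates that your Iron.io callback has recorded. This is a sketch using Node's http module rather than Laravel, purely to show the mechanism; the /job-status path, the `jobStatus` map, and the payload shape are made up:

```typescript
import * as http from "http";

// Pretend store, updated by the callback endpoint that Iron.io hits when a job finishes.
const jobStatus = new Map<string, string>(); // jobId -> "queued" | "done" | ...

http
  .createServer((req, res) => {
    if (req.url?.startsWith("/job-status/")) {
      const jobId = req.url.split("/").pop() ?? "";
      res.writeHead(200, {
        "Content-Type": "text/event-stream",
        "Cache-Control": "no-cache",
        Connection: "keep-alive",
      });
      // Push the current status every second until the job is done.
      const timer = setInterval(() => {
        const status = jobStatus.get(jobId) ?? "unknown";
        res.write(`data: ${JSON.stringify({ jobId, status })}\n\n`);
        if (status === "done") {
          clearInterval(timer);
          res.end();
        }
      }, 1000);
      req.on("close", () => clearInterval(timer));
    } else {
      res.statusCode = 404;
      res.end();
    }
  })
  .listen(3000);

// Browser side: new EventSource("/job-status/123").onmessage = (e) => console.log(e.data);
```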

How do I introduce a new event denormalizer in a CQRS system?

According to CQRS à la Greg Young, event handlers (and the downstream event denormalizers) react to incoming events that were published earlier by the event publisher.
Now let's suppose that at runtime we want to add a new event denormalizer. Basically, this is easy, but it needs to bring its data up to the current state.
What is the best way to do this?
Should I send an out-of-order request to the event store and ask for all previously emitted events?
Or is there a better way to do this?
You can fetch and replay all (required) events against the new handler. This can be done in a separate process since what you essentially want is to get the persisted view models into the proper state.
Have a look at Rinat Abdullin's Lokad.CQRS sample project for a production example. Especially the SaaS.Engine.StartupProjectionRebuilder might be an interesting source even though it's rather complex.
One can also build the projections so that they remember which event they saw last. Then on any startup, they ask for that event and everything after it. Restarting an old projection and building a new one then become roughly the same thing.
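A sketch of that checkpointing idea; the event-store interface, the position field, and the projection class are assumptions for illustration, not taken from any particular framework:

```typescript
interface StoredEvent { position: number; type: string; data: unknown; }

// Assumed event-store interface: fetch every event after a given position.
interface EventStore {
  readAllAfter(position: number): Promise<StoredEvent[]>;
}

// A projection/denormalizer that remembers the last event it has seen.
class GiftSalesProjection {
  private lastSeenPosition = -1; // persisted alongside the read model in practice

  constructor(private store: EventStore) {}

  // On startup (or when the projection is brand new), catch up from the store.
  async catchUp(): Promise<void> {
    const events = await this.store.readAllAfter(this.lastSeenPosition);
    for (const event of events) this.apply(event);
  }

  private apply(event: StoredEvent): void {
    // update the read model / view tables here
    this.lastSeenPosition = event.position;
  }
}
```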
If you embrace complex integration between bounded contexts, you may need to drop the entire read model and rebuild it.

Multiple requests with AFNetworking

I'm trying to make multiple requests in the background to download many JSON files and check data from them, but I don't know how to use AFNetworking in that case.
I tried to do it the way the wiki explains, but when it goes to download the second file the app breaks. I want to do the whole process in the background.
Thanks
AFNetworking will definitely handle this. We use it for exchanging data with a RESTful set of services. The things to keep in mind:
An operation (e.g. AFHTTPRequestOperation) can only be used once.
An operation is asynchronous.
Put your operations in an NSOperationQueue, or use AFHTTPClient (suggested) to manage the operations for you.
When sending multiple requests, always assume that the responses will come back in a random sequence. There is no guarantee that you will get the responses in the same sequence as the requests.
Hope this helps to point you towards a solution to your problem. Without more detail in your question, it's difficult to give you a specific answer.
Check out AFHTTPClient's enqueueBatchOfHTTPRequestOperations:progressBlock:completionBlock:, which lets you enqueue multiple request operations at once, with the added bonus of a completion handler that is called when all of those requests have finished, as well as a block for tracking the progress. Also note that every single operation can still have its own completion handler (useful if you have to process the results of a request, for example).
If you don't need to customize the request operations (and don't need individual completion blocks), you can also use enqueueBatchOfHTTPRequestOperationsWithRequests:progressBlock:completionBlock:, which allows you to pass an array of NSURLRequest objects directly without having to build the operations yourself.

How to stop runaway processes

So I'm working on this application that requests and retrieves webservice content for iPhone. The problem I am running into is this: When I initially request data, it is spawned off as an independent thread so that the application does not become unresponsive due to the network being slow. What this means is that if the user navigates away from the current page before this data finishes downloading, unexpected things can happen.
I have managed to narrow down the problem cases to one relatively simple one: I have some nested tables, so if a user goes down into the "Messages" table, which can sometimes take a little while to download, then back out immediately, and select a different set of messages to view, the previous set of messages ends up loading, because it was still in the queue.
Here are things I have tried:
1) I tried cancelling the operations, but this is futile: since I only allow one operation in the queue at a time, it triggers immediately.
2) I tried validating that the recipient of the data is the same, but this doesn't work because the actual table object is the same between the two selections; it just needs a different data set.
Anyone have any general programming suggestions on how to solve this tricky threading problem?
On an iPhone specific note: if I could just stop the user from being able to back out of the messages table, I wouldn't have this problem, because they would basically be locked into that view until the data has finished loading.
Thanks!
This post has some design advice relating to iOS networking and threading. The basic gist of it is "Don't use explicit threading", and I couldn't agree more. NSURLConnection has great built-in functionality for asynchronously loading data from a URL while managing all of the threading for you. Connections can also be cancelled easily at will.
If you were to use the NSURLConnection paradigm, you can simply cancel any pending request when you back out of the requesting view controller.