GWT request factory: Please help explain a complete end-to-end WriteOperation.DELETE scenario? - gwt

I've been looking at a lot of GWT RequestFactory examples lately, but I still can't quite piece together the complete picture:
GWT RequestFactory's sweet spot is CRUD (create/read/update/delete). Having said that:
Even in the "update" case, it is not clear to me who is responsible for firing an EntityProxyChange event.
I read somewhere (I forget where) that the client-side RequestFactory keeps a local cache of the EntityProxy instances it has 'seen', and, if it 'sees' a new one, it fires an EntityProxyChange event.
Does that mean that, if my "updatePerson()" method returns a (newly updated) PersonProxy, the local client-side RequestFactory infrastructure 'sees' this newly updated person (i.e., by virtue of its updated versionId) and automatically fires an EntityProxyChange event?
In the "delete" case, say I create a function called "deletePerson()" in my request context. I understand how the request arrives at the server, and one might do, e.g., a SQL DELETE to delete the entity, but who is responsible for firing an EntityProxyChange event with WriteOperation=DELETE? Do these events get fired on the server side? The client side?
I've had a look at the listwidget example (http://code.google.com/p/listwidget), but on an 'itemlist' delete it kind of 'cheats' by just doing a brute-force refresh of the entire list (though I do understand that that detail is not necessarily what listwidget is trying to illustrate in the first place). I would have expected to see an EntityProxyChange event handler that listens for WriteOperation.DELETE events and removes just that entity from the ListDataProvider.
Does ServiceLayer/ServiceLayerDecorator.isLive() factor into any of this?

See http://code.google.com/p/google-web-toolkit/wiki/RequestFactoryMovingParts#Flow
The client side doesn't keep a cache (that was probably the case in early iterations a year ago, but it has never been the case in non-milestone releases), and the server side is responsible for "firing" the events (you'll see them in the JSON response payload).
When the wiki page referenced above says:
Entities that can no longer be retrieved ...
What it really means is that isLive has returned false, and isLive's implementation defaults to doing a get by the ID and checking for a non-null result.

Related

How to model a CANCEL action in a RESTful way?

We are currently in the process of wrangling smaller services from our monoliths. Our domain is very similar to a ticketing system. We have decided to start with the cancellation process of the domain.
Our cancel service has a simple endpoint, "Cancel", which takes in the ID of the ticket. Internally, we retrieve the ticket by ID, perform some cancel-related operations on it, and update the state of the entity in the store. From the store's perspective, the only difference between a cancelled ticket and a live ticket is a few properties.
From what I have read, PATCH seems to be the correct verb to use in this case, as I am updating only a single property of the resource.
PATCH /api/tickets/{id}
Payload {isCancelled: true}
But isCancelled is not an actual property of the entity. Is it fair to send properties in the payload that are not part of the entity, or should I think of some other way of modelling this request? I would not want to send the entire entity as part of the payload, since it is large.
I have considered creating a new resource, CancelledTickets, but in our domain we would never need to do a GET on cancelled tickets. Hence I stayed away from creating a new resource.
Exposing the GET interface of a resource is not compulsory.
For example, use
PUT /api/tickets/{id}/actions/cancel
to submit the cancellation request. I choose PUT since there would be no more than one cancellation request in effect.
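For illustration, a minimal sketch of such an action endpoint (ASP.NET Core assumed; the repository and domain method names are hypothetical):
[HttpPut("/api/tickets/{id}/actions/cancel")]
public IActionResult Cancel(long id)
{
    var ticket = _tickets.Find(id);   // _tickets: hypothetical ticket repository
    if (ticket == null) return NotFound();

    ticket.Cancel();                  // domain logic flips the internal state
    _tickets.Save(ticket);

    // PUT is idempotent: repeating the request leaves the ticket cancelled.
    return NoContent();
}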
Hope it is helpful.
Consider what exactly the RESTful way is. It doesn't matter whether you send a PATCH request with isCancelled as the payload, or even a DELETE if you want tickets to disappear; it's still RESTful.
Your move depends on your needs. As you said:
I have considered creating a new resource CancelledTickets, but in our domain we would never need to do a GET on cancelled tickets.
I would just send DELETE. You don't have to remove it physically. If it's possible to un-cancel, then implement an isCancelled mechanism. It's just a question of taste.
REST is basically a generalization of the browser-based Web. Any concepts you apply to the Web can also be applied to REST.
So, how would you design a cancel activity in a Web page? You'd probably have a table row with certain activities like edit and delete, outlined with icons and mouse-over text, that on clicking invoke a URI on the server and lead to a follow-up state. You are not that much interested in what the URI of that button looks like, or whether a PATCH or DELETE command is invoked behind the scenes. You are just interested that the request is processed.
The same holds true if you want to perform the same via REST. Instead of images that hint to the user that an edit or cancel activity can be performed on an entry, a meaningful link-relation name should be used to hint the client about the possibilities. In your case this might be something like reserve new tickets, edit reservation or cancel reservation. Each link-relation name is accompanied by a URL the client can invoke if it wants to perform one of the activities. The exact characters of the URI are not important to the client either. Which method to invoke might either be provided already in the response (as a further accompanying field) or via the media type the response was processed for. If neither the media type nor an accompanying field gives a hint about which HTTP operation to use, an OPTIONS request may be issued on the URI beforehand. The rule of thumb here is: the server should teach a client how to achieve something in the end.
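For illustration, a response carrying such link relations might look like this (a HAL-style sketch; the _links convention, the method hint field, and the URIs are all illustrative, not prescribed by REST):
GET /api/tickets/42
{
  "id": 42,
  "state": "reserved",
  "_links": {
    "self":   { "href": "/api/tickets/42" },
    "cancel": { "href": "/api/tickets/42/cancellation", "method": "PUT" }
  }
}
The client looks up the cancel relation by name and invokes whatever URI accompanies it.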
By following such a concept you decouple a client from the API and make it robust to changes. Instead of the client generating a URI to invoke, it is fed by the server with possible URIs to invoke. If the server ever changes its internal URI structure, a client using one of the provided URIs will still be able to invoke the service, as it simply used one of the URIs provided by the server. Which one to use is determined by analyzing the link-relation name that hints to the client when to invoke such a URI. As mentioned above, such link-relation names need to be defined somewhere. But this is exactly what Fielding claimed back in 2008:
A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types. (Source)
Which HTTP operation to choose for canceling a ticket/reservation may depend on your design. While some of the answers recommend DELETE, RFC 7231 states that only the association between the URI and the resource is removed, and no guarantee is given that the actual resource is removed as well. If you design a system where a cancellation might be undone, then DELETE is not the right choice for you, as the mapping of the URI to the resource should not exist any further after handling a DELETE request. If, however, you consider a cancellation to also lead to a removal of the reservation, then you MAY use DELETE.
If you model your resource in a way that maintains the state as a property within the resource, PATCHing the resource might be a valid option. However, simply sending something like state=canceled is probably not enough here, as PATCH is a calculation of steps done by the client in order to transform a certain resource (or multiple resources) into a desired target state. JSON Patch might give a clue as to how this can be done. A further note needs to be made on the atomicity requirement PATCH has: either all of the instructions succeed or none at all.
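For illustration, a JSON Patch document (RFC 6902) expressing the state transition might look like this (the /state property is an assumption about how the resource is modelled):
PATCH /api/tickets/{id}
Content-Type: application/json-patch+json

[
  { "op": "test", "path": "/state", "value": "reserved" },
  { "op": "replace", "path": "/state", "value": "canceled" }
]
The test operation makes the atomicity requirement concrete: if the ticket is no longer in the expected state, the whole patch fails and nothing is applied.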
PUT was also mentioned in one of the other answers. PUT has the semantics of replacing the current representation available at the given URI with the one given in the request's payload. The server is further allowed to either reject the content or transform it to a more suitable representation, and to affect other resources as well, e.g. if they mimic a version history of the resource.
If none of the above-mentioned operations really satisfies your needs, you should use POST, as this is the all-purpose, swiss-army-knife toolkit of HTTP. While this operation is usually used to create new resources, it isn't limited to that. It should be used in any situation where the semantics of the other operations aren't applicable. According to the HTTP specification:
The POST method requests that the target resource process the representation enclosed in the request according to the resource's own specific semantics.
This is basically the get-out-of-jail-free card. Here you can literally process anything at the server according to your own rules. If you want to cancel or pause something, just do it.
I highly discourage using GET for creating, altering or canceling/removing something. GET is a safe operation and guarantees to any invoking client that it won't alter any state of the invoked resource on the server. Note that it might have minor side effects, e.g. logging, though the actual state should be unaffected by an invocation. This is a property Web crawlers rely on: they will simply invoke any URI via GET and learn the content of the received response. And I assume you don't want Google (or any other crawler) to cancel all of your reservations, do you?
As mentioned above, which HTTP operation you should use depends on your design. DELETE should only be used if you are also going to remove the representation; even though the spec does not strictly require this, once the URI mapping to the resource is gone you basically have no way to invoke this resource any further (unless you have created a further URI mapping first, of course). If you designed your resource to keep the state within a property, I'd probably go for PATCH, but in general I'd opt for POST here, as it leaves all the choices in your hands.
I would suggest having a state resource.
This keeps things RESTful. You have your HTTP method acting as the verb. The state part of the URI is a noun. Then your request body is simple and consistent with the URI.
The only thing I don't like about this is that the value of state requires documentation, i.e. what states are there? This could be solved via the API by providing the possible states on a meta resource, or as part of the ticket body in a meta object.
PUT /api/tickets/:id/state
{ state: "canceled" }

GET /api/meta/tickets/state
// returns
[
  "canceled",
  ...
]

GET /api/tickets/:id
{
  id: ...,
  meta: {
    states: [
      "canceled",
      ...
    ]
  }
}
For modelling a CANCEL action in a RESTful way:
Suppose we have to delete a note in the DB by providing a noteId (the note's ID), where Note is a POJO.
1] At the controller:
@DeleteMapping(value = "/delete/{noteId}")
public ResponseEntity<Note> deleteNote(@PathVariable Long noteId) {
    noteServiceImpl.deleteNote(noteId);
    return ResponseEntity.ok().build();
}
2] Service layer:
@Service
public class NoteServiceImpl {

    @Autowired
    private NotesRepository notesDao;

    public void deleteNote(Long id) {
        notesDao.delete(id);
    }
}
3] Repository layer:
@Repository
public interface NotesRepository extends CrudRepository<Note, Long> {
}
4] In Postman: send DELETE http://localhost:8080/delete/1
So we have deleted the note with ID 1 from the DB, modelling the CANCEL action.

CQRS and REST HATEOAS mismatch

Suppose you have a model Foo.
One business case is to simply create an instance of Foo, so there is a corresponding CreateFooCommand in my model, triggered by invoking a POST request to a given REST endpoint.
There are of course other Commands too.
But now there is a ViewModel, which is derived from my DomainModel. It's simply an SQL table with raw data; each Foo instance from the DomainModel has a corresponding derived ViewModel instance. The two have different IDs (on the DomainModel there is a DomainID; on the ViewModel it's simply a long value).
Now: should I even care about HATEOAS in such a case? In a proper REST implementation, I should at least return a Location URL in the header. But since my view model is only derived from the DomainModel, should I care? I don't even have the view model's ID at the time my DomainModel is created.
Since CQRS means that Queries are separated from Commands, you may not be able to perform a Query right away, because the Command may not yet have been applied (perhaps it never will be).
In order to reconcile that with HATEOAS, instead of returning 200 OK from the POST request, the service can return 202 Accepted:
The request has been accepted for processing, but the processing has not been completed. The request might or might not eventually be acted upon, as it might be disallowed when processing actually takes place. There is no facility for re-sending a status code from an asynchronous operation such as this.
The 202 response is intentionally non-committal. Its purpose is to allow a server to accept a request for some other process (perhaps a batch-oriented process that is only run once per day) without requiring that the user agent's connection to the server persist until the process is completed. The entity returned with this response SHOULD include an indication of the request's current status and either a pointer to a status monitor or some estimate of when the user can expect the request to be fulfilled.
(My emphasis)
That pointer could be a link that the client can query to get the status of the Command. When/if the Command completes and the View is updated, that status resource could then contain a link to the view.
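A minimal sketch of that shape (ASP.NET Core assumed; the command bus, status store, and route names are all hypothetical):
[HttpPost("/api/foos")]
public IActionResult CreateFoo([FromBody] CreateFooDto dto)
{
    var commandId = Guid.NewGuid();                  // identifies the Command, not the view model
    _commandBus.Send(new CreateFooCommand(commandId, dto));

    // 202 Accepted plus a pointer to a status monitor resource.
    return Accepted($"/api/commands/{commandId}");
}

[HttpGet("/api/commands/{commandId}")]
public IActionResult GetCommandStatus(Guid commandId)
{
    var status = _statusStore.Find(commandId);
    if (status == null) return NotFound();

    // Once the Command completes and the View is updated, the status
    // resource includes a link to the resulting view resource.
    return Ok(new { status.State, Links = new { View = status.ViewUrl } });
}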
This is pretty much a workflow straight out of REST in Practice - very reminiscent of its Restbucks example.
Another option to deal with the ID issue is to generate the ID before accepting the Command - perhaps even asking the client to supply the ID. Read more about such options here.
As Greg Young explains, CQRS is nothing more than "splitting one object into two". So assume that you have one domain aggregate and it has an ID. Now you are talking about your view model having another ID. However, you are unable to update your view model unless you have the aggregate ID in your view model as well. From my point of view, your REST POST request should return a result that has the aggregate ID in it. This is your ID; the view model ID is of no interest to anyone except the read model storage.
Whether it should return a command status URI, as Mark suggests, is a topic for another discussion. Many CQRS practitioners currently tend to handle commands synchronously, to avoid an FE/BE mismatch in case of failure and to give the FE the ability to react to errors on the BE. There is no real win in executing commands asynchronously for one user. Commands do mutate the state, and in 99% of cases the user needs to know whether the state was mutated properly.
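A sketch of that synchronous variant (hypothetical names again): the POST blocks until the command handler finishes, errors surface as an error response, and the aggregate ID is returned to the client:
[HttpPost("/api/foos")]
public IActionResult CreateFoo([FromBody] CreateFooDto dto)
{
    var aggregateId = Guid.NewGuid();                        // the domain ID, generated up front
    _handler.Handle(new CreateFooCommand(aggregateId, dto)); // synchronous: throws on failure

    // The client gets the aggregate ID; the view model's long ID stays
    // private to the read model storage.
    return Ok(new { id = aggregateId });
}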

Transactional batch processing with OData

Working with Web API OData, I have $batch processing working; however, the persistence to the database is not transactional. If I include multiple requests in a changeset in my request and one of those items fails, the others still complete, because each separate call to the controller has its own DbContext.
For example, if I submit a batch with two change sets:
Batch 1
- ChangeSet 1
- - Patch valid object
- - Patch invalid object
- End Changeset 1
- ChangeSet 2
- - Insert Valid Object
- End ChangeSet 2
End Batch
I would expect the first valid patch to be rolled back, as the change set could not be completed in its entirety; however, since each call gets its own DbContext, the first patch is committed, the second is not, and the insert is committed.
Is there a standard way to support transactions through a $batch request with OData?
The theory: let's make sure we're talking about the same thing.
In practice: addressing the problem as far as I could (no definitive answer).
In practice, really (update): a canonical way of implementing the backend-specific parts.
Wait, does it solve my problem?: let's not forget that the implementation (3) is bound by the specification (1).
Alternatively: the usual "do you really need it?" (no definitive answer).
The theory
For the record, here is what the OData spec has to say about it (emphasis mine):
All operations in a change set represent a single change unit so a
service MUST successfully process and apply all the requests in the
change set or else apply none of them. It is up to the service
implementation to define rollback semantics to undo any requests
within a change set that may have been applied before another request
in that same change set failed and thereby apply this all-or-nothing
requirement. The service MAY execute the requests within a change set
in any order and MAY return the responses to the individual requests
in any order. (...)
http://docs.oasis-open.org/odata/odata/v4.0/cos01/part1-protocol/odata-v4.0-cos01-part1-protocol.html#_Toc372793753
This is V4, which barely updates V3 regarding Batch Requests, so the same considerations apply for V3 services AFAIK.
To understand this, you need a tiny bit of background:
Batch requests are sets of ordered requests and change sets.
Change Sets themselves are atomic units of work consisting of sets of unordered requests, though these requests can only be Data Modification requests (POST, PUT, PATCH, DELETE, but not GET) or Action Invocation requests.
You might raise an eyebrow at the fact that requests within change sets are unordered, and quite frankly I don't have proper rationale to offer. The examples in the spec clearly show requests referencing each other, which would imply that an order in which to process them must be deduced. In reality, my guess is that change sets must really be thought of as single requests themselves (hence the atomic requirement) that are parsed together and are possibly collapsed into a single backend operation (depending on the backend of course). In most SQL databases, we can reasonably start a transaction and apply each subrequest in a certain order defined by their interdependencies, but for some other backends, it may be required that these subrequests be mangled and made sense of together before sending any change to the platters. This would explain why they aren't required to be applied in order (the very notion might not make sense to some backends).
An implication of that interpretation is that all your change sets must be logically consistent on their own; for example, you can't have a PUT and a PATCH that touch the same properties in the same change set. That would be ambiguous. It's thus the responsibility of the client to merge operations as efficiently as possible before sending the requests to the server. This should always be possible.
I'm now fairly confident that this is the correct interpretation.
While this may seem like an obvious good practice, it's not generally how people think of batch processing. I stress again that all of this applies to requests within change sets, not requests and change sets within batch requests (which are ordered and work pretty much as you'd expect, minus their non-atomic/non-transactional nature).
In practice
To come back to your question, which is specific to ASP.NET Web API, it seems they claim full support of OData batch requests. More information here. It also seems true that, as you say, a new controller instance is created for each subrequest (well, I take your word for it), which in turn brings a new context and breaks the atomicity requirement. So, who's right?
Well, as you rightly point out too, if you're going to have SaveChanges calls in your handlers, no amount of framework hackery will help much. It looks like you're supposed to handle these subrequests yourself with the considerations I outlined above (looking out for inconsistent change sets). Quite obviously, you'd need to (1) detect that you're processing a subrequest that is part of a changeset (so that you can conditionally commit) and (2) keep state between invocations.
Update: See next section for how to do (2) while keeping controllers oblivious to the functionality (no need for (1)). The next two paragraphs may still be of interest if you want more context on the problems that are solved by the HttpMessageHandler solution.
I don't know if you can detect whether you're in a changeset or not (1) with the current APIs they provide. I don't know if you can force ASP.NET to keep the controller alive for (2) either. What you could do for the latter, however (if you can't keep it alive), is to keep a reference to the context elsewhere (for example in Request.Properties) and reuse it conditionally (update: or unconditionally, if you manage the transaction at a higher level; see below). I realize this is probably not as helpful as you might have hoped, but at least now you should have the right questions to direct to the developers/documentation writers of your implementation.
Dangerously rambling: instead of conditionally calling SaveChanges, you could conditionally create and terminate a TransactionScope for every changeset. This doesn't remove the need for (1) or (2); it's just another way of doing things. It sort of follows that the framework could technically implement this automatically (as long as the same controller instance can be reused), but without knowing the internals well enough I won't revisit my statement that the framework doesn't have enough to go on to do everything itself just yet. After all, the semantics of TransactionScope might be too specific, irrelevant or even undesired for certain backends.
Update: This is indeed what the proper way of doing things look like. The next section shows a sample implementation that uses the Entity Framework explicit transaction API instead of TransactionScope, but this has the same end-result. Although I feel there are ways to make a generic Entity Framework implementation, currently ASP.NET doesn't provide any EF-specific functionality so you're required to implement this yourself. If you ever extract your code to make it reusable, please do share it outside of the ASP.NET project if you can (or convince the ASP.NET team that they should include it in their tree).
In practice, really (update)
See snow_FFFFFF's helpful answer, which references a sample project.
To put it in the context of this answer, it shows how to use an HttpMessageHandler to implement requirement #2, which I outlined above (keeping state between invocations of the controllers within a single request). This works by hooking in at a higher level than the controllers and splitting the request into multiple "subrequests", all the while keeping state the controllers are oblivious to (the transactions) and even exposing state to the controllers (the Entity Framework context, in this case via HttpRequestMessage.Properties). The controllers happily process each subrequest without knowing whether they are normal requests, part of a batch request, or even part of a changeset. All they need to do is use the Entity Framework context in the properties of the request instead of using their own.
Note that you actually have a lot of built-in support to achieve this. This implementation builds on top of the DefaultODataBatchHandler, which builds on top of the ODataBatchHandler code, which builds on top of the HttpBatchHandler code, which is an HttpMessageHandler. The relevant requests are explicitly routed to that handler using Routes.MapODataServiceRoute.
How does this implementation map to the theory? Quite well, actually. You can see that each subrequest is either sent to be processed as-is by the relevant controller if it is an "operation" (normal request), or handled by more specific code if it is a changeset. At this level, they are processed in order, but not atomically.
The changeset handling code however does indeed wrap each of its own subrequests in a transaction (one transaction for each changeset). While the code could at this point try to figure out the order in which to execute statements in the transaction by looking at the Content-ID headers of each subrequest to build a dependency graph, this implementation takes the more straightforward approach of requiring the client to order these subrequests in the right order itself, which is fair enough.
Wait, does it solve my problem?
If you can wrap all your operations in a single changeset, then yes, the request will be transactional. If you can't, then you must modify this implementation so that it wraps the entire batch in a single transaction. While the specification supposedly doesn't preclude this, there are obvious performance considerations to take into account. You could also add a non-standard HTTP header to flag whether you want the batch request to be transactional or not, and have your implementation act accordingly.
In any case, this wouldn't be standard, and you couldn't count on it if you ever wanted to use other OData servers in an interoperable manner. To fix this, you need to argue for optional atomic batch requests to the OData committee at OASIS.
Alternatively
If you can't find a way to branch code when processing a changeset, or you can't convince the developers to provide you with a way to do so, or you can't keep changeset-specific state in any satisfactory way, then you may alternatively want to expose a brand new HTTP resource with semantics specific to the operation you need to perform.
You probably know this, and it is likely what you're trying to avoid, but this involves using DTOs (Data Transfer Objects) to populate with the data in the requests. You then interpret these DTOs to manipulate your entities within a single controller action, and hence with full control over the atomicity of the resulting operations.
Note that some people actually prefer this approach (more process-oriented, less data-oriented), though it can be very difficult to model. There's no right answer, it always depends on the domain and use-cases, and it's easy to fall into traps that would make your API not very RESTful. It's the art of API design. Unrelated: The same remarks can be said about data modeling, which some people actually find harder. YMMV.
Summary
There are a few approaches to explore: a canonical implementation technique to use, an opportunity to create a generic Entity Framework implementation, and a non-generic alternative.
It'd be nice if you could update this thread as you gather answers elsewhere (well, if you feel motivated enough) and with what you eventually decide to do, as it seems like something many people would love to have some kind of definitive guidance for.
Good luck ;).
The following link shows the Web API OData implementation that is required to process the changeset in transactions. You are correct that the default batch handler does not do this for you:
http://aspnet.codeplex.com/SourceControl/latest#Samples/WebApi/OData/v3/ODataEFBatchSample/
UPDATE
The original link seems to have gone away - the following link includes similar logic (and for v4) for transactional processing:
https://damienbod.com/2014/08/14/web-api-odata-v4-batching-part-10/
I just learned about the TransactionScope class.
A few points I'd like to make before moving on:
The original question posited that each controller call received its own DbContext. This isn't true: the DbContext lifetime is scoped to the entire request. Review Dependency lifetime in ASP.NET Core for further details.
I suspect that the original poster is experiencing issues because each sub-request in the batch invokes its assigned controller method, and each method calls DbContext.SaveChanges() individually, causing that unit of work to be committed.
My understanding of the question comes from the basis of performing database transactions, i.e. (pseudo-code for SQL expected):
BEGIN TRAN
DO SOMETHING
DO MORE THINGS
DO EVEN MORE THINGS
IF FAILURES OCCURRED ROLLBACK EVERYTHING. OTHERWISE, COMMIT EVERYTHING.
This is a reasonable request that I would expect OData to be able to perform with a single POST operation to [base URL]/odata/$batch.
Batch Execution Order Concerns
For our purposes, we may or may not necessarily care what order work is done against the DbContext. We definitely care though that the work being performed is being done as part of a batch. We want it to all succeed or all be rolled back in the database(s) being updated.
If you are using old-school Web API (in other words, prior to ASP.Net Core), then your batch handler class is likely the DefaultHttpBatchHandler class. According to Microsoft's documentation (Introducing batch support in Web API and Web API OData), batch transactions using the DefaultHttpBatchHandler in OData are sequential by default. It has an ExecutionOrder property that can be set to change this behavior so that operations are performed concurrently.
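For illustration, a minimal sketch based on the configuration shown in that documentation (classic Web API; the route names are illustrative):
// Sub-requests run sequentially by default; this opts into concurrent execution.
var batchHandler = new DefaultHttpBatchHandler(GlobalConfiguration.DefaultServer)
{
    ExecutionOrder = BatchExecutionOrder.NonSequential
};
config.Routes.MapHttpBatchRoute(
    routeName: "batch",
    routeTemplate: "api/batch",
    batchHandler: batchHandler);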
If you are using ASP.Net Core, it appears we have two options:
If your batch operation is using the "old school" payload format, it appears that batch operations are performed sequentially by default (assuming I am interpreting the source code correctly).
ASP.Net Core provides a new option, though. A new DefaultODataBatchHandler has replaced the old DefaultHttpBatchHandler class. Support for ExecutionOrder has been dropped in favor of a model where metadata in the payload communicates whether batch operations should happen in order and/or can be executed concurrently. To utilize this feature, the request payload Content-Type is changed to application/json and the payload itself is in JSON format (see below). Flow control is established within the payload by adding dependency and group directives, so that batch requests can be split into multiple groups of individual requests that can be executed asynchronously and in parallel where no dependencies exist, or in order where dependencies do exist. We can take advantage of this and simply add "id", "atomicityGroup", and "dependsOn" tags to our payload to ensure operations are performed in the appropriate order.
Transaction Control
As stated previously, your code is likely using either the DefaultHttpBatchHandler class or the DefaultODataBatchHandler class. In either case, these classes aren't sealed and we can easily derive from them to wrap the work being done in a TransactionScope. By default, if no unhandled exceptions occurred within the scope, the transaction is committed when it is disposed. Otherwise, it is rolled back:
/// <summary>
/// An OData Batch Handler derived from <see cref="DefaultODataBatchHandler"/> that wraps the work being done
/// in a <see cref="TransactionScope"/> so that if any errors occur, the entire unit of work is rolled back.
/// </summary>
public class TransactionedODataBatchHandler : DefaultODataBatchHandler
{
    public override async Task ProcessBatchAsync(HttpContext context, RequestDelegate nextHandler)
    {
        using (TransactionScope scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
        {
            await base.ProcessBatchAsync(context, nextHandler);
        }
    }
}
Just replace your default class with an instance of this one and you are good to go!
routeBuilder.MapODataServiceRoute("ODataRoutes", "odata",
    modelBuilder.GetEdmModel(app.ApplicationServices),
    new TransactionedODataBatchHandler());
Controlling Execution Order in the ASP.Net Core POST to Batch Payload
The payload for the ASP.Net Core batch handler uses "id", "atomicityGroup", and "dependsOn" tags to control the execution order of the sub-requests. We also gain a benefit in that the boundary parameter on the Content-Type header is no longer necessary, as it was in prior versions:
HEADER
Content-Type: application/json
BODY
{
  "requests": [
    {
      "method": "POST",
      "id": "PIG1",
      "url": "http://localhost:50548/odata/DoSomeWork",
      "headers": {
        "content-type": "application/json; odata.metadata=minimal; odata.streaming=true",
        "odata-version": "4.0"
      },
      "body": { "message": "Went to market and had roast beef" }
    },
    {
      "method": "POST",
      "id": "PIG2",
      "dependsOn": [ "PIG1" ],
      "url": "http://localhost:50548/odata/DoSomeWork",
      "headers": {
        "content-type": "application/json; odata.metadata=minimal; odata.streaming=true",
        "odata-version": "4.0"
      },
      "body": { "message": "Stayed home, stared longingly at the roast beef, and remained famished" }
    },
    {
      "method": "POST",
      "id": "PIG3",
      "dependsOn": [ "PIG2" ],
      "url": "http://localhost:50548/odata/DoSomeWork",
      "headers": {
        "content-type": "application/json; odata.metadata=minimal; odata.streaming=true",
        "odata-version": "4.0"
      },
      "body": { "message": "Did not play nice with the others and did his own thing" }
    },
    {
      "method": "POST",
      "id": "TEnd",
      "dependsOn": [ "PIG1", "PIG2", "PIG3" ],
      "url": "http://localhost:50548/odata/HuffAndPuff",
      "headers": {
        "content-type": "application/json; odata.metadata=minimal; odata.streaming=true",
        "odata-version": "4.0"
      }
    }
  ]
}
And that's pretty much it. With the batch operations being wrapped in a TransactionScope, if anything fails, it all gets rolled back.
There should be only one DbContext for the OData batch request. Both WCF Data Services and Web API support the OData batch scenario and handle it in a transactional manner. You can check this example: http://blogs.msdn.com/b/webdev/archive/2013/11/01/introducing-batch-support-in-web-api-and-web-api-odata.aspx
I used the same approach from V3 of the OData samples. I saw that my transaction rollback was called, but the data did not roll back. Something is lacking, but I can't work out what. It may be an issue with each OData call using SaveChanges: do they actually see the transaction as in scope? We may need a guru from the Entity Framework team to help solve this one.
Although very detailed, Mitselplik's answer didn't work right out of the box for me, as the transaction got rolled back after each request. To commit the transaction and apply it to the database, it is necessary to call scope.Complete() before disposing of/leaving the using scope.
The next issue was that, although everything now ran in a transaction, exceptions/failures of a single request didn't cause the request or transaction to fail: the status code of the batch response was still 200, and all other changes were still applied.
As there was no direct way to read the status codes of the individual requests of the batch request from the HttpContext, I also had to override the ExecuteRequestMessagesAsync method and check the results there. So my final code, which commits the transaction when everything is successful and otherwise rolls back everything, looks like this:
public class TransactionODataBatchHandler : DefaultODataBatchHandler
{
    protected bool Failed { get; set; }

    public override async Task ProcessBatchAsync(HttpContext context, RequestDelegate nextHandler)
    {
        using (var scope = new TransactionScope(
            TransactionScopeOption.Required,
            new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted },
            TransactionScopeAsyncFlowOption.Enabled))
        {
            Failed = false;
            await base.ProcessBatchAsync(context, nextHandler);
            if (!Failed)
            {
                scope.Complete();
            }
        }
    }

    public override async Task<IList<ODataBatchResponseItem>> ExecuteRequestMessagesAsync(IEnumerable<ODataBatchRequestItem> requests, RequestDelegate handler)
    {
        var responses = await base.ExecuteRequestMessagesAsync(requests, handler);
        Failed = responses.Cast<OperationResponseItem>().Any(r => !r.Context.Response.IsSuccessStatusCode());
        return responses;
    }
}

How do you ensure consistent client reads in an eventual consistent system?

I'm digging into CQRS and I am looking for articles on how to solve client reads in an eventually consistent system. Consider, for example, a web shop where users can add items to their cart. How can you ensure that the client displays the items in the cart if the actual processing of the command "AddItemToCart" is done asynchronously? I understand the principles of dispatching commands asynchronously and updating the read model asynchronously based on domain events, but I fail to see how this is handled from the client's perspective.
There are a few different ways of doing it:
Wait at the user until consistent
Just poll the server until you get the read model updated. This is similar to what Ben showed.
Ensure consistency through 2PC
You have a queue that supports DTC, and your commands are put there first. They are then executed, events sent, and the read model updated, all inside a single transaction. You have not actually gained anything with this method, though, so don't do it this way.
Fool the client
Place the read models in local storage at the client and update them when the corresponding event is sent -- but you were expecting this event anyway, so you had already updated the JavaScript view of the shopping cart.
I'd recommend you have a look at the Microsoft Patterns & Practices team's guidance on CQRS. Although this is still work in progress, they have given one solution to the issue you've raised.
Their approach for commands requiring feedback is to submit the command asynchronously, redirect to another controller action, and then poll the read model until the expected change appears or a time-out occurs. This uses the Post-Redirect-Get pattern, which works better with the browser's forward and back navigation buttons, and gives the infrastructure more time to process the command before the MVC controller starts polling.
Example code from the RegistrationController using ASP.NET MVC 4 asynchronous controllers.
[HttpGet]
[OutputCache(Duration = 0, NoStore = true)]
public Task<ActionResult> SpecifyRegistrantAndPaymentDetails(Guid orderId, int orderVersion)
{
    return this.WaitUntilOrderIsPriced(orderId, orderVersion)
        .ContinueWith<ActionResult>(
            ...
        );
}

...

private Task<PricedOrder> WaitUntilOrderIsPriced(Guid orderId, int lastOrderVersion)
{
    return TimerTaskFactory.StartNew<PricedOrder>(
        () => this.orderDao.FindPricedOrder(orderId),
        order => order != null && order.OrderVersion > lastOrderVersion,
        PricedOrderPollPeriodInMilliseconds,
        DateTime.Now.AddSeconds(PricedOrderWaitTimeoutInSeconds));
}
I'd probably use AJAX polling instead of having a blocked web request at the server.
Post-Redirect-Get
You're hoping that the save command executes in time, before the GET is called. What if the command takes 10 seconds to complete in the back end, but the GET is called after 1 second?
Local Storage
By storing the result of the command on the client while the command goes off to execute, you're assuming that the command will go through without errors. What if the back end runs into an error while processing the command? Then what you have locally isn't consistent.
Polling
Polling seems to be the option that is actually in line with eventual consistency; you're not faking or assuming anything. Your polling mechanism can be asynchronous and part of your page, e.g. the shopping cart page component polls until it gets an update, without refreshing the page.
Callbacks
You could introduce something like webhooks to make a callback to the client, if the client is capable of receiving one. By providing a correlation ID once the command is accepted by the back end, the back end can notify the front end of the command's status, along with the correlation ID, once the command has finished processing, indicating whether it went through successfully or not. There is no need for any kind of polling with this approach.
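A rough sketch of that idea, assuming ASP.NET Core SignalR as the push channel (the hub, notifier, and method names are illustrative):
// Hub the client connects to in order to receive command notifications.
public class CommandStatusHub : Hub { }

public class CommandCompletionNotifier
{
    private readonly IHubContext<CommandStatusHub> _hub;

    public CommandCompletionNotifier(IHubContext<CommandStatusHub> hub) => _hub = hub;

    // Called by the back end once the command has finished processing.
    public Task NotifyAsync(string connectionId, Guid correlationId, bool succeeded) =>
        _hub.Clients.Client(connectionId)
            .SendAsync("commandCompleted", correlationId, succeeded);
}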

RIA Services matching a response to the request

I was wondering if someone could provide some advice on the following problem. We are currently developing a Silverlight 4 application based on RIA Services. One of the screens in the application allows users to type in a search string, and after 2 seconds of inactivity the request is submitted to our domain service. This is all handled nicely with Rx.
Now, currently it is possible for a second search to be executed before the original has returned. It's also possible that the second request could return before the first.
Really, I'm just trying to find out what patterns and approaches people are using to match the correct response to the correct request.
Are you using some sort of operation identifier in your requests?
Are you creating new instances of your domain services for each request?
Is there a way to tie the Completed event of a request to the Rx observable monitoring the text-change event?
Any steer would be helpful really,
Dave
It should be quite easy for you to solve this problem.
If I assume you have an observable of strings that initiates the search, and that you have a domain service that returns a Result object when given the string, then this is the kind of code you need:
IObservable<string> searchText = ...;

Func<string, IObservable<Result>> searchRequest =
    Observable.FromAsyncPattern<string, Result>(
        search.BeginInvoke,
        search.EndInvoke);

IObservable<Result> results =
    (from st in searchText
     select searchRequest(st))
    .Switch();
The magic is in the Switch extension method which "switches" to the latest observable returned from the IObservable<IObservable<Result>> - yes, it is a nested observable.
When a new searchText comes in, the query returns a new IObservable<Result> created from the incoming search text. The Switch then switches the results observable to use this latest observable and just ignores any previously created observables.
So the result is that only the latest search results are observed and any previous results are ignored.
Hopefully that makes sense. :-)
Erik Meijer addresses this here (after about 30 minutes): http://channel9.msdn.com/Events/MIX/MIX10/FTL01
He explains the Switch statement after about 36 minutes.
The simplest way, IMO, is to have a subject for requests that you notify before any request is dispatched to WCF. Then, rather than subscribing to the observable created from the Completed event, subscribe to CompletedEventObservable.TakeUntil(RequestsSubject). This way you will never be notified with the response to a previous request.
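A rough sketch of that, reusing the searchRequest function from the earlier answer (the wrapper method is illustrative):
// Notified just before each search is dispatched, so every in-flight
// response observable is cut off as soon as a newer request starts.
var requests = new Subject<string>();

IObservable<Result> ExecuteSearch(string text)
{
    requests.OnNext(text);                          // announce the new request first
    return searchRequest(text).TakeUntil(requests); // stale responses are dropped
}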
Check out Rxx: http://rxx.codeplex.com/
It has tons of extra stuff that will help; particularly in your case, I think dynamic objects and observable object properties might make your life easier.