I was wondering if someone could provide some advice on the following problem. We are currently developing a Silverlight 4 application based on RIA .NET Services. One of the screens in the application allows users to type in a search string, and after 2 seconds of inactivity the request is submitted to our domain service. This is all handled nicely with Rx.
Currently it is possible for a second search to be executed before the original has returned. It's also possible that the second request could return before the first.
Really I'm just trying to find out what patterns and approaches people are using to manage the correct response to the correct request.
Are you using some sort of operation identifier in your requests?
Are you creating new instances of your domain services for each request?
Is there a way to tie the completed event of a request to the Rx observable monitoring the TextChanged event?
Any steer would be really helpful.
Dave
It should be quite easy for you to solve this problem.
If I assume you have an observable of string that initiates the search, and a domain service that returns a Result object when given the string, then this is the kind of code you need:
IObservable<string> searchText = ...;

Func<string, IObservable<Result>> searchRequest =
    Observable.FromAsyncPattern<string, Result>(
        search.BeginInvoke,
        search.EndInvoke);

IObservable<Result> results =
    (from st in searchText
     select searchRequest(st))
    .Switch();
The magic is in the Switch extension method which "switches" to the latest observable returned from the IObservable<IObservable<Result>> - yes, it is a nested observable.
When a new searchText comes in, the query returns a new IObservable<Result> created from the incoming search text. The Switch then switches the results observable to use this latest observable and just ignores any previously created observables.
So the result is that only the latest search results are observed and any previous results are ignored.
Hopefully that makes sense. :-)
Erik Meijer addresses this here (after about 30 minutes): http://channel9.msdn.com/Events/MIX/MIX10/FTL01
He explains the Switch operator after about 36 minutes.
The simplest way, IMO, is to have a subject for requests that you notify before any request is dispatched to WCF. Then, rather than subscribing to the observable created from the completed event, subscribe to CompletedEventObservable.TakeUntil(RequestsSubject). This way you will never be notified with the response to a previous request.
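A minimal sketch of that idea, assuming a helper such as SearchCompletedObservable that wraps the domain service's completed event for one call (all names here are illustrative):

// 'requests' is the subject notified before each dispatch.
var requests = new Subject<Unit>();

IObservable<Result> Search(string searchTerm)
{
    requests.OnNext(Unit.Default);                 // announce the new request first
    return SearchCompletedObservable(searchTerm)   // then dispatch and listen for completion
        .TakeUntil(requests);                      // stop listening once a newer request starts
}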
Check out Rxx: http://rxx.codeplex.com/
It has tons of extra stuff that will help. In your case particularly, I think Dynamic Objects and observable object properties might be something that will make your life easier.
The question I'd like to ask was raised some time ago (FIWARE Orion: How to retrieve the servicePath of an entity?) but, as far as I've seen, there is no final answer.
In short, I'd like to retrieve the service path of entities when I execute a GET query to /v2/entities that returns multiple results.
In our FIWARE instance, we strongly rely on the servicePath element to differentiate between entities with the same id. It is not a good design choice but, unfortunately, we cannot change it as many applications use that id convention at the moment.
There was an attempt three years ago to add a virtual field 'servicePath' to the query result (https://github.com/telefonicaid/fiware-orion/pull/2880) but the pull request was discarded because it didn't include test coverage for that feature and the final NGSIv2 spec didn't include that field.
Is there any plan to implement such a feature in the future? I guess the answer is no, which brings me to the next question: is there any other way to do it that does not involve creating subscriptions? (We found that the initial notification of a subscription does give you that info, but the notification is limited to 1000 results, which is too low for the number of entities we want to retrieve, and it does not allow pagination either.)
Thanks in advance for your responses.
A possible workaround is to use an attribute (provided by the context producer application) to keep the service path. Somehow, this is the same idea of the builtin attribute proposed in PR #2880.
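As a rough sketch of that workaround (the attribute name, entity data, and service headers below are assumptions, not part of the original question), the producer could simply include the service path as an ordinary attribute when creating the entity, so it comes back from /v2/entities like any other attribute:

// Hypothetical producer-side snippet: store the service path as a regular attribute.
using System.Net.Http;
using System.Text;

var client = new HttpClient();
var entity = @"{
  ""id"": ""Room1"",
  ""type"": ""Room"",
  ""servicePath"": { ""type"": ""Text"", ""value"": ""/building1/floor2"" }
}";

var request = new HttpRequestMessage(HttpMethod.Post, "http://orion:1026/v2/entities")
{
    Content = new StringContent(entity, Encoding.UTF8, "application/json")
};
request.Headers.Add("Fiware-Service", "myservice");
request.Headers.Add("Fiware-ServicePath", "/building1/floor2");

var response = await client.SendAsync(request);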
Suppose you have a model Foo.
One business case is to simply create an instance of Foo, so there is a corresponding CreateFooCommand in my model, triggered by invoking a POST request to a given REST endpoint.
There are of course other Commands too.
But now there is a ViewModel, which is derived from my DomainModel. It's simply a SQL table with raw data - each Foo instance from the DomainModel has a corresponding derived ViewModel instance. Both have different IDs (the DomainModel has a DomainID; on the ViewModel it's simply a long value).
Now: should I even care about HATEOAS in such a case? In a proper REST implementation, I should at least return a location URL in the header. But since my view model is only derived from the DomainModel, should I care? I don't even have the view model's ID at the time my DomainModel is created.
Since CQRS means that Queries are separated from Commands, you may not be able to perform a Query right away, because the Command may not yet have been applied (perhaps it never will).
In order to reconcile that with HATEOAS, instead of returning 200 OK from the POST request, the service can return 202 Accepted:
The request has been accepted for processing, but the processing has not been completed. The request might or might not eventually be acted upon, as it might be disallowed when processing actually takes place. There is no facility for re-sending a status code from an asynchronous operation such as this.
The 202 response is intentionally non-committal. Its purpose is to allow a server to accept a request for some other process (perhaps a batch-oriented process that is only run once per day) without requiring that the user agent's connection to the server persist until the process is completed. The entity returned with this response SHOULD include an indication of the request's current status and either a pointer to a status monitor or some estimate of when the user can expect the request to be fulfilled.
(My emphasis)
That pointer could be a link that the client can query to get the status of the Command. When/if the Command completes and the View is updated, that status resource could then contain a link to the view.
This is pretty much a workflow straight out of REST in Practice - very reminiscent of its Restbucks example.
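A rough ASP.NET Web API sketch of that 202 workflow; the command bus, command id, and the "CommandStatus" route are assumptions used only for illustration:

public HttpResponseMessage Post(CreateFooCommand command)
{
    var commandId = Guid.NewGuid();
    _commandBus.Send(commandId, command);   // queue the command; it may still be rejected later

    var response = Request.CreateResponse(HttpStatusCode.Accepted);
    // Pointer to a status monitor; once the command has been applied,
    // that resource can link to the materialized view.
    response.Headers.Location = new Uri(Url.Link("CommandStatus", new { id = commandId }));
    return response;
}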
Another option to deal with the ID issue is to generate the ID before accepting the Command - perhaps even asking the client to supply the ID. Read more about such options here.
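For that ID-generation option, a small hypothetical variant is to let the client supply the identifier, so the resource location is known before the Command is processed (again, the command bus and route name are assumptions):

// The client chooses the Guid, so the Location can be built immediately.
public HttpResponseMessage Put(Guid id, CreateFooCommand command)
{
    _commandBus.Send(id, command);

    var response = Request.CreateResponse(HttpStatusCode.Accepted);
    response.Headers.Location = new Uri(Url.Link("FooView", new { id }));
    return response;
}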
As Greg Young explains, CQRS is nothing more than "splitting one object into two". So assume that you have one domain aggregate and it has an id. Now you are talking about your view model having another id. However, you are unable to update your view model unless you have the aggregate id in your view model as well. From my point of view, your REST POST request should return a result that has the aggregate id in it. This is your id; the view model id is of no interest to anyone except the read model storage.
Whether it should return a command status URI as Mark suggests is a topic for another discussion. Many CQRS practitioners currently tend to handle commands synchronously to avoid FE/BE mismatch in case of failure and to give the FE the ability to react to errors on the BE. There is no real win in executing commands asynchronously for a single user. Commands do mutate the state, and in 99% of cases the user needs to know whether the state was mutated properly.
I'm using Web Api and have a scenario where clients are sending a heartbeat notification every n seconds. There is a heartbeat object which is sent in a POST rather than a PUT, because as I see it they are creating a new heartbeat rather than updating an existing heartbeat.
Additionally, the clients have a requirement that calls for them to retrieve all of the other currently online clients and the number of unread messages that individual client has. It seems to me that I have two options:
Perform the POST followed by a GET, which to me seems cleaner from a pure REST standpoint. I am doing a creation and a retrieval and I think the SOLID principles would prefer to split them accordingly. However, this approach means two round trips.
Have the POST return an object which contains the same information that the GET would otherwise have done. This consolidates everything into a single request, but I'm concerned that this approach would be considered ill-advised. It's not a pure POST.
Option #2 stubbed out looks like this:
public HeartbeatEcho Post(Heartbeat heartbeat)
{
    // record the heartbeat, then build and return a HeartbeatEcho containing
    // the other online clients and this client's unread message count
}
HeartbeatEcho is a class which contains properties for the other online clients and the number of unread messages.
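For illustration, one possible (assumed) shape for that class:

public class HeartbeatEcho
{
    public IEnumerable<string> OnlineClientIds { get; set; }   // the other currently online clients
    public int UnreadMessageCount { get; set; }                // unread messages for the caller
}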
Web Api certainly supports option #2, but just because I can do something doesn't mean I should. Is option #2 an abomination, premature optimization, or pragmatism?
Option 2 is not an abomination at all. A POST request creates a new resource, but it's quite common for the resource itself to be returned to the caller. For example, if your resources are items in a database (e.g., a Person), the POST request would send the members required for the INSERT operation (e.g., name, age, address), and the response would contain a Person object which, in addition to the parameters passed as input, would also have an identifier (the DB primary key) that can be used to uniquely identify the object.
Notice that it's also perfectly valid for the POST request to return only the id of the newly created resource, but that's a choice you have, depending on the requirements of the client.
public HttpResponseMessage Post(Person p)
{
    var id = InsertPersonInDBAndReturnId(p);
    p.Id = id;
    var result = this.Request.CreateResponse(HttpStatusCode.Created, p);
    result.Headers.Location = new Uri("the location for the newly created resource");
    return result;
}
Whichever way solves your business problem will work. You're correct: POST for a new record vs. PUT for an update to an existing record.
SUGGESTION:
One thing you may want to consider is adding Redis to your stack. The apps can POST very fast, and then you could use the Pub/Sub functionality for the echo part, or BLPOP (blocking until a record matches your criteria). It's super fast, may help you scale, and is well suited to what you are trying to do.
See: http://redis.io/topics/pubsub/
See: http://redis.io/commands/blpop
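As a small sketch of the Pub/Sub idea using the StackExchange.Redis client (the channel name and payload are purely illustrative):

using StackExchange.Redis;

var redis = ConnectionMultiplexer.Connect("localhost");
var sub = redis.GetSubscriber();

// Anything interested in the "echo" side listens on the channel...
sub.Subscribe("heartbeats", (channel, message) =>
{
    Console.WriteLine($"Heartbeat received from {message}");
});

// ...and the Web Api action publishes each heartbeat as it arrives.
sub.Publish("heartbeats", "client-42");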
I've used Redis for similar scenarios, but also RabbitMQ, and with RabbitMQ we added a socket.io connection to "stream" the heartbeat in real time without the need for long polling.
I am diving into my first forays with CQRS and Event Sourcing and I have a few points I'd like some guidance on. I would like to implement an SO-style reputation system. This seems a perfect fit for this architecture.
Keeping SO as the example: say a question is upvoted. This generates an UpvoteCommand, which increases the question's total score and fires off a QuestionUpvotedEvent.
It seems like the author's User aggregate should subscribe to the QuestionUpvotedEvent, which could increase the reputation score. But how/when to do this subscription is not clear to me. In Greg Young's example the event/command handling is wired up in global.asax, but this doesn't seem to involve any routing based on aggregate Id.
It seems as though every User aggregate would subscribe to every QuestionUpvotedEvent, which doesn't seem correct. To make such a scheme work, the event handler would have to identify whether that user owned the question that was just upvoted. Greg Young implied this should not be in event handler code, which should merely involve state change.
What am i getting wrong here?
Any guidance much appreciated.
EDIT
I guess what we are talking about here is inter-aggregate communication between the Question & User aggregates. One solution I can see is that the QuestionUpvotedEvent is subscribed to by a ReputationEventHandler, which could then fetch the corresponding User AR and call a corresponding method on this object, e.g. YourQuestionWasUpvoted. This would in turn generate a user-specific UserQuestionUpvoted event, thereby preserving replayability in the future. Is this heading in the right direction?
EDIT 2
See also the discussion on google groups here.
My understanding is that aggregates themselves should not be subscribing to events. The domain model only raises events. It's the query side or other infrastructure components (such as an emailing component) that subscribe to events.
Domain Services are designed to work with use-cases/commands that involve more than one aggregate.
What I would do in this situation:
VoteUpQuestionCommand gets invoked.
The handler for VoteUpQuestionCommand calls:
IQuestionVotingService.VoteUpQuestion(Guid questionId, Guid UserId);
This then fetches both the question & user aggregates, calling the appropriate methods on both, such as user.IncrementReputation(int amount) and question.VoteUp(). This would raise two events, UsersReputationIncreasedEvent and QuestionUpVotedEvent respectively, which would be handled by the query side.
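A rough sketch of how that handler and domain service might fit together (the IHandle/IRepository abstractions and the reputation amount are assumptions, not prescribed by the answer):

public class VoteUpQuestionCommandHandler : IHandle<VoteUpQuestionCommand>
{
    private readonly IQuestionVotingService _votingService;

    public VoteUpQuestionCommandHandler(IQuestionVotingService votingService)
    {
        _votingService = votingService;
    }

    public void Handle(VoteUpQuestionCommand command)
    {
        _votingService.VoteUpQuestion(command.QuestionId, command.UserId);
    }
}

public class QuestionVotingService : IQuestionVotingService
{
    private readonly IRepository _repository;

    public QuestionVotingService(IRepository repository)
    {
        _repository = repository;
    }

    public void VoteUpQuestion(Guid questionId, Guid userId)
    {
        var question = _repository.GetById<Question>(questionId);
        var user = _repository.GetById<User>(userId);

        question.VoteUp();                 // raises QuestionUpVotedEvent
        user.IncrementReputation(5);       // raises UsersReputationIncreasedEvent

        _repository.Save(question);
        _repository.Save(user);
    }
}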
My rule of thumb: if you do inter-AR communication use a saga. It keeps things within the transactional boundary and makes your links explicit => easier to handle/maintain.
The User aggregate should have a QuestionAuthored event... in that event it subscribes to the QuestionUpvotedEvent... similarly it should have a QuestionDeletedEvent and/or QuestionClosedEvent in which it does the proper handling, like unsubscribing from the QuestionUpvotedEvent, etc.
EDIT - as per comment:
I would treat the Question as an external event source and handle it via a gateway. The gateway in turn is the one responsible for handling any replay correctly so the end result stays exactly the same - except for special events like rejection events...
This is an old question and tagged as answered, but I think I can add something to it.
After a few months of reading, practising, and building a small framework and application based on CQRS+ES, I think CQRS tries to decouple component dependencies and responsibilities. Some resources state that each command should change at most one aggregate in its command handler (you can load more than one aggregate in the handler, but only one of them may change).
So in your case I think the best practice is @Tom's answer: you should use a saga. If your framework doesn't support sagas (like my small framework), you can create an event handler such as UpdateUserReputationByQuestionVotedEvent. In that handler, create an UpdateUserReputation(Guid userId, int amount) OR UpdateUserReputation(Guid userId, Guid questionId, int amount) OR UpdateUserReputation(Guid userId, string description, int amount) command. After the command is sent to its handler, the handler loads the user by user id and updates its state and properties. With this type of handling you can create more complex scenarios or workflows.
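A minimal sketch of that non-saga variant (the handler, bus, and event property names are assumptions):

public class UpdateUserReputationByQuestionVotedEvent : IHandle<QuestionUpVotedEvent>
{
    private readonly ICommandBus _commandBus;

    public UpdateUserReputationByQuestionVotedEvent(ICommandBus commandBus)
    {
        _commandBus = commandBus;
    }

    public void Handle(QuestionUpVotedEvent evt)
    {
        // Translate the event into a command against the User aggregate.
        _commandBus.Send(new UpdateUserReputation(evt.AuthorId, evt.QuestionId, amount: 5));
    }
}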
I have a question regarding MSMQ...
I designed an async architecture like this:
Client -> WCF Service (hosted in WinService) -> MSMQ
So basically the WCF service takes the requests, processes them, adds them to an INPUT queue and returns a GUID. The same WCF service (through a listener) takes the first message from the queue (does some stuff...) and then puts it back into another queue (OUTPUT).
The problem is how I can retrieve the result from the OUTPUT queue when a client requests it... because MSMQ does not allow random access to its messages, and the only solution would be to iterate through all messages and push them back in until I find the exact one I need. I do not want to use a DB for this OUTPUT queue, because of some limitations imposed by the client...
You can look in your output queue for your message by using:
var mq = new MessageQueue(outputQueueName);
mq.PeekById(yourId);
Receiving by Id:
mq.ReceiveById(yourId);
A queue is inherently a "first-in-first-out" kind of data structure, while what you want is a "random access" data structure. It's just not designed for what you're trying to achieve here, so there isn't any "clean" way of doing this. Even if there was a way, it would be a hack.
If you elaborate on the limitations imposed by the client perhaps there might be other alternatives. Why don't you want to use a DB? Can you use a local SQLite DB, perhaps, or even an in-memory one?
Edit: If you have a client dictating implementation details to their own detriment then there are really only three ways you can go:
Work around them. In this case, that could involve using a SQLite DB - it's just a file and the client probably wouldn't even think of it as a "database".
Probe deeper and find out just what the underlying issue is, ie. why don't they want to use a DB? What are their real concerns and underlying assumptions?
Accept a poor solution and explain to the client that this is due to their own restriction. This is never nice and never easy, so it's really a last resort.
You could use CorrelationId and set it when you send the message. Then, to receive that same message, you can pick the specific message with ReceiveByCorrelationId as follows:
message = queue.ReceiveByCorrelationId(correlationId);
Moreover, CorrelationId is a string with the following format:
Guid\Number
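A small request/reply sketch of that idea, where the service stamps the reply's CorrelationId with the original message's Id (the queue paths and method names here are illustrative):

using System.Messaging;

// Service side: when publishing the processed result to the OUTPUT queue,
// copy the original message's Id (already in Guid\Number format) into CorrelationId.
void PublishResult(Message original, object resultBody, MessageQueue outputQueue)
{
    var reply = new Message(resultBody) { CorrelationId = original.Id };
    outputQueue.Send(reply);
}

// Client side: block (with a timeout) until the matching reply shows up.
object GetResult(string requestMessageId, MessageQueue outputQueue)
{
    var reply = outputQueue.ReceiveByCorrelationId(requestMessageId, TimeSpan.FromSeconds(30));
    return reply.Body;
}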