Observer design pattern in REST applications

I'm trying to learn design patterns, and I have come across the Observer pattern. I think I understand the concept itself, but I don't see when to use it.
Let me explain. I work mostly with web applications, so, stateless applications. Normally, the client makes a request from the browser (for example, to update a record), and then the operation is completed.
Let us suppose that I want to notify some people every time a record is updated. It seems to me the perfect scenario for the Observer pattern, but when I think it through, it would end up something like:
The user makes an update request.
Get all the people that have to be notified.
Register all the people that have to be notified as observers.
Make the update (which also triggers the notification via the Observer pattern).
But... doing it this way, I have to iterate over all the people I want to notify twice!
And because it's a stateless application, I have to go and fetch all the people that need to be notified on every request!
I don't know if the Observer pattern is more useful for other types of applications, but I can only think of this pattern in a static form, I mean, making the Observer static.
I know I'm missing something; it's a common and accepted pattern, and everyone accepts it as a valid solution to this kind of problem. What am I not understanding?

First, let's straighten out the terminology.
Each person who wants to be notified is an Observer.
Each type of event which can trigger a notification is an Observable.
Each Observer (person) needs to register itself with the server. It sends a request essentially saying, "I'm interested in foo Observables," which in this case would be, "I'm interested in update events." The server maintains mappings of who is interested in which events.
Every time the server makes an update, it iterates over the mapping of update Observers and sends a notification to each of them.
The advantage is that the server and its Observables have no compile-time knowledge of who the Observers are. Observers are free to register (and unregister) themselves at runtime for any event(s) they are interested in.
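To make the registration/notification mechanics concrete, here is a minimal sketch in TypeScript, assuming a simple in-memory registry; the names (Subject, RecordService, "recordUpdated") are hypothetical and not part of the question:

```typescript
// A minimal in-memory Observer registry: observers register for an event type,
// and the subject iterates over the registered observers whenever it emits.

type EventType = "recordUpdated"; // hypothetical event name

interface Observer {
  notify(eventType: EventType, payload: unknown): void;
}

class Subject {
  private observers = new Map<EventType, Set<Observer>>();

  register(eventType: EventType, observer: Observer): void {
    if (!this.observers.has(eventType)) {
      this.observers.set(eventType, new Set());
    }
    this.observers.get(eventType)!.add(observer);
  }

  unregister(eventType: EventType, observer: Observer): void {
    this.observers.get(eventType)?.delete(observer);
  }

  protected emit(eventType: EventType, payload: unknown): void {
    for (const observer of this.observers.get(eventType) ?? []) {
      observer.notify(eventType, payload);
    }
  }
}

// The update operation itself has no compile-time knowledge of who the
// observers are; it only calls emit().
class RecordService extends Subject {
  updateRecord(id: string, fields: Record<string, unknown>): void {
    // ...persist the change here...
    this.emit("recordUpdated", { id, fields });
  }
}
```

In a stateless web application the registry of interested parties would of course live in persistent storage rather than in memory, so registration happens once per subscriber rather than on every request; the update itself then iterates over the already-registered observers only once.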

Related

What are the best practices when working with data from multiple sources in Flutter/Bloc?

The Bloc manual describes the example of a simple Todos app. It works as an example, but I get stuck when trying to make it into a more realistic app. Clearly, a more realistic Todos app needs to keep working when the user temporarily loses network connection, and also needs to occasionally check the server for updates that the user might have added from another device.
So as a basic data model I have:
dataFromServer, which is refreshed every five minutes, and
localData, which describes what changes have been made locally but haven't been synchronized to the server yet.
My current idea is to have three kinds of events:
on<GetTodosFromServer>() which runs every few minutes to check the server for updates and only changes the dataFromServer,
on<TodoAdded>() (and its friends TodoDeleted, TodoChecked, and so on) which get triggered when the user changes the data, and only change the localData, and
on<SyncTodoToServer>() which runs whenever the user changes the todo list, or when network connectivity is restored, and tries to send the changes to the server, retrieves the new value from the server, and then sets the new dataFromServer and localData.
So obviously there's a lot of interaction between these three methods. When a new todo is added after the synchronization to the server starts, but before synchronization is finished, it needs to stay in the local changes object. When GetTodosFromServer and SyncTodoToServer both return server data, they need to find out who has the latest data and keep that. And so on.
Coming from a Redux background, I'm used to having two reducers (one for local data, one for server data) that would only respond to simple actions. E.g. an action { "type": "TodoSuccessfullySyncedToServer", uploadedData: [...], serverResponse: [...] } would be straightforward to parse for both the localData and the dataFromServer reducer. The reducer doesn't contain any of the business logic, it receives actions one by one and all you need to think about inside the reducer is the state before the action, the action itself, and the state after the action. Anything you rely on to handle the action will be in the action itself, not in the context. So different pieces of code that generate those actions can just fire these actions without thinking, knowing that the reducer will handle them correctly.
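To illustrate the two-reducer setup described here, a minimal sketch in TypeScript; the action shape follows the example above, while the Todo type and reducer names are otherwise hypothetical:

```typescript
// Two independent reducers handling the same action, as described above. The
// action carries everything they need; neither reducer contains business logic.

interface Todo { id: string; title: string; done: boolean; }

interface TodoSyncedAction {
  type: "TodoSuccessfullySyncedToServer";
  uploadedData: Todo[];   // the local changes that were just pushed
  serverResponse: Todo[]; // the authoritative state returned by the server
}

function dataFromServerReducer(state: Todo[] = [], action: TodoSyncedAction): Todo[] {
  switch (action.type) {
    case "TodoSuccessfullySyncedToServer":
      return action.serverResponse; // the server response is now the known server state
    default:
      return state;
  }
}

function localDataReducer(state: Todo[] = [], action: TodoSyncedAction): Todo[] {
  switch (action.type) {
    case "TodoSuccessfullySyncedToServer": {
      // drop the pending changes that were just uploaded; keep anything newer
      const uploadedIds = new Set(action.uploadedData.map(t => t.id));
      return state.filter(t => !uploadedIds.has(t.id));
    }
    default:
      return state;
  }
}
```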
Bloc on the other hand seems to mix business logic and updating the state. API calls are made within the event handlers, which will emit a value possibly many seconds later. So every time you return from an asynchronous call in an event handler, you need to think about how the state might have changed while that call was happening and the consequences this has on what you're currently doing. Also, an object in the state can be updated by different events that need to coordinate among themselves how to avoid conflicts while doing so.
Is there a best practice for avoiding the complexity this brings? Is it best practice to split large events into "StartSyncToServer" and "SuccessfullySyncedToServer" events, where the second behaves a lot like a Redux reducer? I don't see any of that in the examples, so is there another way this complexity is typically avoided in Bloc? Or is Bloc entirely unopinionated on these things?
I'm not looking for personal opinions here, only if there's something I missed in the Bloc manual (or other authoritative source) about how this was intended to work.

Axon- Replay event for a particular type or for one particular Id

Using the Axon framework, I was able to replay the entire event store and re-create the view model. But is it possible to replay events for a particular type, or for a particular Id?
Let's say I have customer events and I want to replay all the events of the customer with Id = 100. Does it make sense to do a replay for a particular customer, or does it make more sense to always replay the entire event store?
Thanks in advance
It is OK to do whatever makes sense for you, for this particular ReadModel.
One reason to re-process only one customer is speed. If it's a lot faster than a complete rebuild (e.g. because you have a lot of customers) and the outcome is the same, then do it.
As Constantin points out, this request to replay a specific view makes total sense.
The replay process provided by Axon Framework at this point only lets you trigger a replay for a specific Processing Group, allowing you to set the point in time from which you want to replay.
There are ideas to provide a more fine-grained solution to replaying; I'd however be hard pressed to tell you when that'll happen.
Thus replaying just a single view model, for example for speed, will require some custom code.
Let me know if you'd be interested in some pointers on how to do that.
Update
I'd like to state that with more recent versions of Axon Framework it is possible to tell a TrackingEventProcessor to reset itself, and thus replay a set of events.
The API for this is TrackingEventProcessor#resetTokens(TrackingToken), which allows you to reset a Tracking Event Processor from a given point in time.
This still doesn't give you the option to replay a given instance of a Read Model; that would still require some handiwork on your part.

Why do we need event.stopPropagation() in the DOM? Is it a bad architectural pattern?

In the everyday front-end development I often use DOM as a global event bus that is accessible to every part of my client-side application.
But there is one "feature" in it that can be considered harmful, in my opinion: any listener can prevent propagation of an event emitted via this "bus".
So, I'm wondering when this feature can be helpful. Is it wise to allow one listener to "disable" all the others? What if that listener does not have all the information needed to make the right decision about such an action?
Update
This is not a question about "what is bubbling and capturing", or "how Event.stopPropagation actually works".
This is a question about "is it a good solution to allow any subscriber to affect an event flow?"
We need stopPropagation() (I am talking about current usage in JS) when we want to prevent listeners from interfering with each other. However, it is not mandatory to do so.
Actual reasons to avoid stopPropagation:
Using it usually means that you are aware of code waiting for the same event and interfering with what the current listener does. If that is the case, then there may (see below) be a design problem. We try to avoid managing a single thing in multiple different places.
There may be other listeners waiting for the same type of event, while not interfering with what the current listener does. In this case, stopPropagation() may become a problem.
But let's say that you put a magic listener on a container-element, fired on every click to perform some magic. The magic listener only knows about magic, not about the document (at least not before its magic). It does one thing. In this case, it is a good design choice to leave it knowing only magic.
If one day you need to prevent clicks in a particular zone from firing this magic, it is wise to stop the propagation elsewhere, since it would be bad to expose document-specific distinctions to the magic listener.
An even better solution though might be (I think) to have a single listener which decides if it needs to call the magic function or not, instead of the magic function being a stoppable listener. This way you keep a clean logic while exposing nothing.
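A small sketch of these two options in TypeScript; magic() and the element ids are hypothetical, and in practice you would pick one approach, not both:

```typescript
function magic(): void {
  // ...does one thing, knows nothing about the document...
}

const container = document.getElementById("container")!;

// Option 1: the magic listener stays on the container, and a listener on the
// "quiet" zone stops propagation so the click never reaches the container.
container.addEventListener("click", magic);
document.getElementById("quiet-zone")!.addEventListener("click", (event) => {
  event.stopPropagation();
});

// Option 2 (the alternative suggested above): a single listener decides whether
// to call magic(), so nothing needs to cancel the event flow at all.
container.addEventListener("click", (event) => {
  const target = event.target as HTMLElement;
  if (!target.closest("#quiet-zone")) {
    magic();
  }
});
```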
To provide a way for subscribers to affect the flow (I am talking about API design) is not wrong; it depends on the needs behind this feature. It might be useful to the developers using it. For example, stopPropagation has been (and is) quite useful for lots of people.
Some systems implement a continueX method instead of stopX. In JavaScript, it is very useful when the callees may perform some asynchronous processing like an AJAX request. However, it is not applicable to the DOM, as the DOM needs results in time. I see stopPropagation as a clever design choice for the DOM API.

Post single notification with varying object types

I have a class that acts as a wrapper around AVPlayer, and one of the functions it serves is to post notifications every 1 and 10 seconds during playback (ie make addPeriodicTimeObserverForInterval: more convenient in the general case).
Previously, the object I was sending with this notification was the player wrapper itself (ie ABPlayer.sharedPlayer). Today I had the need to allow for some objects to only receive notifications about a specific media item's playback. This can be accomplished by sending [[someAVURLAsset URL] absoluteString] as the notification object (when the asset in the AVPlayer is an AVURLAsset, of course).
This prompted the question: is it appropriate for a single notification to, in different situations, post with different types of objects? I understand the value in sending specific objects or sending nil (catch-all), but I don't recall seeing a situation where an alternative type of object could be sent. In my case, though, it seems to make sense.
I could simply send two distinct notifications, but since these are always only ever being sent to notify observers of a single event, and they are always being sent from the same place in code, they simply feel like a single notification.
I realize what I have is possible and working, but I'm curious if there's a compelling reason to avoid this pattern.
As long as the scenarios in which the different object types will be sent to the observers are well understood and documented, there's no technical reason why you can't do it. It may make more contextual sense to post a different notification for each object type. It would certainly help any developers who may end up maintaining your code.

What triggers UI refresh in CQRS client app?

I am attempting to learn and apply the CQRS design approach (pattern and architecture) to a new project but seem to be missing a key piece.
My client application executes a query and retrieves a list of light-weight, read-only DTOs from the read model. The user selects an item and clicks a button to initiate some action. The action is performed by creating and sending the corresponding command object to the write model (where the command handler carries out the action, updates the data store, etc.) At some point, however, I need to update the UI to reflect changes to the state of the application resulting from the action.
How does the UI know when it is time to refresh the original list?
Additional Info
I have noticed that most articles/blogs discussing CQRS use MVC client apps in their examples. I am working on a Silverlight client right now and am beginning to wonder if the pattern simply doesn't work in that case.
Follow-Up Question
After thinking more about Bartlomiej's response and subsequent discussion, I am wondering about error handling in CQRS. Given that commands are basically fire-and-forget asynchronous operations, how do we report an error condition to the UI?
I see 'refreshing the UI' to take one of two forms:
The operation succeeds, data has changed and the UI should be updated to reflect these changes
The operation fails, data has not changed but the user should be notified of the failure and potential corrective actions.
Even with a Post-Redirect-Get pattern in an MVC, you can't really Redirect until you know the outcome of the operation. None of the examples I've seen thus far address these real-world concerns.
I've been struggling with similar issues for a WPF client. The re-query trigger for any data depends on the data you're updating; commands tend to fall into categories:
The command is a true fire-and-forget method: it informs the back-end of a state change, but this change does not need to be reflected in the UI, or the change simply isn't important to the UI.
The command will alter the result of a single query
The command will alter the result of multiple queries, usually (in my domain at least) in a cascading fashion, that is, changing the state of a single "high level" piece of data will likely affect many "low level" caches.
My first trigger is the page load; very few items are exempt from this, as most pages must assume data has been updated since they were last visited. Though some systems may be able to get away with only updating financial and other critical data in this way.
For short commands I also update data when 'success' is returned from a command. Though this is mostly laziness, as IMHO all CQRS commands should be fired asynchronously. It's still an option I couldn't live without, but one you may have to do without if your implementation expects high latency between command and query.
One pattern I'm starting to make use of is the mediator (most MVVM frameworks come with one). When I fire a command, I also fire a message to the mediator specifying which command was launched. Each cache (a view model property Retriever<T>) listens for commands which affect it and then updates appropriately. I try to minimise the number of messages while still minimising the number of caches that update unnecessarily from a single message, so I'll (hopefully) eventually end up with a shortlist of update reasons, with each 'reason' updating a list of caches.
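As a language-neutral sketch of that mediator idea (written here in TypeScript rather than the answer's WPF/C# context; Mediator, Retriever and the update reasons are hypothetical names):

```typescript
// Firing a command also publishes an "update reason"; each cache subscribes
// only to the reasons that affect it and re-queries when one is published.

type UpdateReason = "CustomerChanged" | "OrderChanged";

class Mediator {
  private handlers = new Map<UpdateReason, Array<() => void>>();

  subscribe(reason: UpdateReason, handler: () => void): void {
    const list = this.handlers.get(reason) ?? [];
    list.push(handler);
    this.handlers.set(reason, list);
  }

  publish(reason: UpdateReason): void {
    for (const handler of this.handlers.get(reason) ?? []) {
      handler();
    }
  }
}

// A cache (the Retriever<T> view-model property mentioned above) refreshes
// itself whenever a reason it cares about is published.
class Retriever<T> {
  private value: T | undefined;

  constructor(mediator: Mediator, reasons: UpdateReason[], private query: () => Promise<T>) {
    reasons.forEach(r => mediator.subscribe(r, () => void this.refresh()));
  }

  get current(): T | undefined {
    return this.value;
  }

  async refresh(): Promise<void> {
    this.value = await this.query();
  }
}

// Dispatching a command publishes the matching reason alongside it.
async function renameCustomer(mediator: Mediator, send: () => Promise<void>): Promise<void> {
  await send();                        // dispatch the command to the write model
  mediator.publish("CustomerChanged"); // let interested caches re-query
}
```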
Another approach is simple honesty: I find that exposing graphically how the system updates itself makes users more willing to be patient with it. On firing a command, show some UI indicating you're waiting for the successful response; on error you could offer to retry / show the error; on success you start the update of the relevant fields. Bear in mind that this command could have been fired from another terminal (of which you have no knowledge), so data will need to time out eventually to avoid missing state changes invoked by other machines as well.
Noting the irony that the only efficient method of updating caches and values on a client is to un-separate the commands and queries again, be it through hardcoding or something like a hashmap.
My two cents.
I think MVVM actually fits into CQRS quite well. The ViewModel simply becomes an observable ReadModel.
1 - You initialize your ViewModel state via a query on the ReadModel.
2 - Changes on your ViewModel are automatically reflected on any Views that are bound to it.
3 - Certain changes on your ViewModel trigger a command to propagate to a message queue; an object responsible for sending those commands to the server takes those messages off the queue and sends them to the WriteModel.
4 - Clients should be well formed, meaning the ViewModel should have performed appropriate validation before it ever triggered the command. Once the command has been triggered, any event notifications can be published onto an event bus for the client to communicate changes to other ViewModels or components in the system interested in those changes. These events should carry the relevant information necessary. Typically, this means that other view models usually don't have to re-query the read model as a result of the change unless they are dependent on other data that needs to be retrieved.
5 - There is an object that connects to the message bus on the server for real-time push notifications when other clients make changes that this client is interested in knowing about, falling back to long-polling if necessary. It propagates those to the internal message bus that ties the components on the client together.
6 - The last part to handle is the fact that clients can be only occasionally connected, which should be the only reason a command fails (they don't have internet access at the moment); that is when the client should be notified of problems.
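A rough sketch of steps 1-4 in TypeScript (all names are hypothetical; this is only an illustration of the flow, not the answer's implementation): the ViewModel is seeded from a read-model query, validates before triggering a command, and the command goes onto a queue that a separate sender drains.

```typescript
interface TodoReadModel { id: string; title: string; }

interface Command { type: string; payload: unknown; }

class CommandQueue {
  private queue: Command[] = [];
  enqueue(command: Command): void { this.queue.push(command); }
  dequeue(): Command | undefined { return this.queue.shift(); }
}

class TodoViewModel {
  items: TodoReadModel[] = [];

  constructor(private commands: CommandQueue,
              private queryReadModel: () => Promise<TodoReadModel[]>) {}

  // Step 1: initialize state from the read model.
  async load(): Promise<void> {
    this.items = await this.queryReadModel();
  }

  // Steps 3-4: validate locally, update the bound state, then trigger the command.
  rename(id: string, title: string): void {
    if (title.trim().length === 0) return; // client-side validation before the command
    this.items = this.items.map(i => (i.id === id ? { ...i, title } : i));
    this.commands.enqueue({ type: "RenameTodo", payload: { id, title } });
  }
}

// A separate object drains the queue and sends each command to the write model.
async function drain(queue: CommandQueue, send: (c: Command) => Promise<void>): Promise<void> {
  for (let c = queue.dequeue(); c !== undefined; c = queue.dequeue()) {
    await send(c);
  }
}
```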
In my ASP.NET MVC 3 app I use two techniques, depending on the use case:
the already well-known Post-Redirect-Get pattern, which fits nicely with CQRS. Your MVC action that triggers the command returns a redirect to an action that performs a query.
in some cases, like real-time updates of other clients, I rely on domain events/messages. I create an event handler that uses SignalR to push changes to all connected and interested clients.
There are two major routes you can take, as far as I know:
1) Design your UI so that the user does not see their changes right away. For instance, show a message telling them their action was a success, and offer different choices to continue their work. This should buy you enough time to have updated your read model.
2) More complex, but you might keep the information you have sent to the server and show it in the interface.
Most important, I guess: educate your users if you can, so that they know why the data is not there... yet!
I am only thinking about it now, but these are for synchronous command handling, not asynchronous; with async, things get much harder on the brain... the client interface becomes an event eater too.