What core Moodle events will allow me to see when a student navigates between areas of a course? I need to build an event trigger plugin that calls an external REST endpoint every time a student navigates between screens, begins or ends a course module, and so on. The Event API docs have a long list of events, but I cannot find anything that details when each event will actually fire. Also, the only event I can find that seems to relate to what I'm looking for is course_module_viewed, and unfortunately that event name repeats in many areas:
core\event\course_module_viewed
mod_lti\event\course_module_viewed
mod_page\event\course_module_viewed
mod_resource\event\course_module_viewed
mod_url\event\course_module_viewed
I think you're right that you want "course_module_viewed" events - but that event is implemented (or should be, because it ties into the default activity completion options) by each activity/resource type, because it could mean different things for each. So if the activity is a "page", viewing it will trigger mod_page\event\course_module_viewed, and if the activity is a SCORM package, mod_scorm\event\course_module_viewed. All of these should extend the abstract core\event\course_module_viewed class.
There is not a default event for moving between pages within an activity - at that level of granularity you are dependent on whether the implementers of that plugin decided to log events for this or not, or indeed whether the activity even has pages.
Other events you may be interested in are core\event\course_viewed and core\event\course_completed, which only exist in core, plus the various other events fired by each individual activity you're likely to encounter in your course content. E.g. for a SCORM activity, you might be interested in the "mod_scorm\event\status_submitted" event.
The Bloc manual describes the example of a simple Todos app. It works as an example, but I get stuck when trying to make it into a more realistic app. Clearly, a more realistic Todos app needs to keep working when the user temporarily loses network connection, and also needs to occasionally check the server for updates that the user might have added from another device.
So as a basic data model I have:
dataFromServer, which is refreshed every five minutes, and
localData, which describes the changes that have been made locally but haven't been synchronized to the server yet.
My current idea is to have three kinds of events:
on<GetTodosFromServer>() which runs every few minutes to check the server for updates and only changes the dataFromServer,
on<TodoAdded>() (and its friends TodoDeleted, TodoChecked, and so on) which get triggered when the user changes the data, and only change the localData, and
on<SyncTodoToServer>() which runs whenever the user changes the todo list, or when network connectivity is restored, and tries to send the changes to the server, retrieves the new value from the server, and then sets the new dataFromServer and localData.
So obviously there's a lot of interaction between these three handlers. When a new todo is added after the synchronization to the server starts, but before it finishes, the todo needs to stay in the local changes object. When GetTodosFromServer and SyncTodoToServer both return server data, they need to determine which has the latest data and keep that. And so on.
Coming from a Redux background, I'm used to having two reducers (one for local data, one for server data) that would only respond to simple actions. E.g. an action { "type": "TodoSuccessfullySyncedToServer", uploadedData: [...], serverResponse: [...] } would be straightforward to parse for both the localData and the dataFromServer reducer. The reducer doesn't contain any of the business logic, it receives actions one by one and all you need to think about inside the reducer is the state before the action, the action itself, and the state after the action. Anything you rely on to handle the action will be in the action itself, not in the context. So different pieces of code that generate those actions can just fire these actions without thinking, knowing that the reducer will handle them correctly.
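To illustrate what I mean, here's a rough TypeScript sketch of those two reducers; everything except the TodoSuccessfullySyncedToServer action shape is a hypothetical name I made up:

```typescript
// Sketch only: Redux-style reducers where all context travels in the action.
// Types and most action names here are hypothetical illustrations.
interface Todo { id: string; text: string; done: boolean; }

type Action =
  | { type: "TodoAdded"; todo: Todo }
  | { type: "TodoSuccessfullySyncedToServer"; uploadedData: Todo[]; serverResponse: Todo[] };

// Server-data reducer: only cares about what the server returned.
function dataFromServerReducer(state: Todo[] = [], action: Action): Todo[] {
  switch (action.type) {
    case "TodoSuccessfullySyncedToServer":
      return action.serverResponse;
    default:
      return state;
  }
}

// Local-changes reducer: drops exactly the items that were uploaded,
// keeping anything the user added while the sync was in flight.
function localDataReducer(state: Todo[] = [], action: Action): Todo[] {
  switch (action.type) {
    case "TodoAdded":
      return [...state, action.todo];
    case "TodoSuccessfullySyncedToServer": {
      const uploadedIds = new Set(action.uploadedData.map((t) => t.id));
      return state.filter((t) => !uploadedIds.has(t.id));
    }
    default:
      return state;
  }
}
```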
Bloc on the other hand seems to mix business logic and updating the state. API calls are made within the event handlers, which will emit a value possibly many seconds later. So every time you return from an asynchronous call in an event handler, you need to think about how the state might have changed while that call was happening and the consequences this has on what you're currently doing. Also, an object in the state can be updated by different events that need to coordinate among themselves how to avoid conflicts while doing so.
Is there a best practice on how to avoid the complexity that brings? Is it best practice to split large events into "StartSyncToServer" and "SuccessfullySyncedToServer" events where the second behaves a lot like a Redux reducer? I don't see any of that in the examples, so is there another way this complexity is typically avoided in Bloc? Or is Bloc entirely unopinionated on these things?
I'm not looking for personal opinions here, only if there's something I missed in the Bloc manual (or other authoritative source) about how this was intended to work.
I'm reading about Facebook Flux and I like the pattern, but I don't understand why we need to keep the store untouchable by the action creator. Facebook only says that it's part of "separation of concerns" and that only the store should know how to modify itself. Facebook disagrees with store setters like "setAsRead", but isn't triggering an event from the action creator through the dispatcher, which is then captured by the store, almost the same thing? And calling something like "setAsRead" doesn't expose how the store modifies itself.
Some people say it causes coupling between the store and the action creator, but triggering events on the dispatcher causes coupling between the pub/sub mechanism, the store, and the action creator.
Keeping the stores untouchable by the action creator creates the need for waitFor. Don't waitFor chains create more implicit coupling between stores? If some action needs stores to interact in a given order, why not orchestrate that in the action creator already?
Does anyone know the cons of adopting a dispatcher-less approach with Facebook Flux?
If you implement setters in your store, you completely break the uni-directional data flow principle that is arguably the central principle of the Flux pattern. Nothing would stop your components, or any other code, from directly manipulating the store's state. Data mutations would no longer come only from stores reacting to actions, so you would have many "sources of truth".
The uni-directional data flow principle is as follows, very simplified for the purpose of this discussion:
Actions ----> Stores ----> Components
(Components dispatch new Actions, which closes the loop.)
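To make this concrete, here is a rough TypeScript sketch of that flow; the Dispatcher and store below are simplified stand-ins for illustration, not the actual Facebook Flux API:

```typescript
// Illustrative sketch, not the real Flux library API.
type FluxAction = { type: "MARK_AS_READ"; threadId: string };

type Handler = (action: FluxAction) => void;

class Dispatcher {
  private handlers: Handler[] = [];
  register(handler: Handler): void { this.handlers.push(handler); }
  dispatch(action: FluxAction): void { this.handlers.forEach((h) => h(action)); }
}

class UnreadThreadStore {
  private unread = new Set<string>(["t1", "t2"]);

  constructor(dispatcher: Dispatcher) {
    // The store decides for itself how to react to the action.
    dispatcher.register((action) => {
      if (action.type === "MARK_AS_READ") this.unread.delete(action.threadId);
    });
  }

  // Read-only access for components; no public setter like setAsRead().
  isUnread(threadId: string): boolean { return this.unread.has(threadId); }
}

// Action creator: it only knows the action shape, not how stores change.
function markAsRead(dispatcher: Dispatcher, threadId: string): void {
  dispatcher.dispatch({ type: "MARK_AS_READ", threadId });
}
```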
waitFor actually does create coupling between stores, and it leaks into the Dispatcher, which has to provide the waitFor function. It's arguably one of the more contentious points of the "vanilla" Flux pattern; Reflux, for example, implements it differently, without a central dispatcher.
When designing an application's back-end, you will often need to abstract the systems that request that things be done from the systems that actually do them.
There are elements of this in the CQRS and PubSub design patterns.
By way of example:
A new user submits a registration form
Your application receives that data and pushes out a message saying “hey, I have some new user data, please do something with this”
A listener / handler / service grabs the data and processes it
(please let me know if that makes no sense)
In my applications I would usually:
Fire a new Event that a Listener is set up to process: Event::fire('user.new', $data)
Create a new Command with the data, which is bound to a CommandHandler: new NewUserCommand($data)
Call a method in a Service and pass in the data: UserService::newUser($data)
While these are nearly exactly the same, I am just wondering - how do you go about deciding which one to use when you are creating the architecture of your applications?
Fire a new Event that a Listener is set up to process
Event::fire('user.new', $data)
The event pattern implies that there can be many handlers subscribing to the same event, and those handlers are disconnected from the sender. Also, event handlers usually do not return information to the sender (because there can actually be many handlers, so it would be ambiguous whose information to return).
So, this is not your case.
Create a new Command with the data, which is bound to a CommandHandler
new NewUserCommand($data)
Commands are an extended way to perform some operation. They can be dispatched, pipelined, queued, etc. If you don't need all those capabilities, why complicate things?
Call a method in a Service and pass in the data
UserService::newUser($data)
Well, this is the most suitable thing for your case, isn't it?
While these are nearly exactly the same, I am just wondering - how do you go about deciding which one to use when you are creating the architecture of your applications?
Easy. From the many solutions, choose only those which are:
metaphorically suitable (do not use events where your logic does not look like an event)
the simplest (do not go too deep into the depths of programming theories and methods; always choose the solution that lowers your project's development complexity)
When to use command over event?
Command: when I have some single, isolated action with few dependencies which must be called from different parts of the application. The closest analogue is an editor command, which is accessible both from the toolbar and the menu.
Event: when I have several (at least potentially) dependent actions which may be called before/after some other action is executed. For example, if you have a number of services, you can use events to perform cache invalidation for them: the service that changes a particular object emits an "IChangedObject" event, and the other services subscribe to such events and respond by invalidating their caches, as in the sketch below.
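As a rough TypeScript sketch of that contrast (all names below are invented for illustration):

```typescript
// Illustrative sketch; names are made up for the example.

// Command: one isolated action, one handler, callable from several places.
class RenameItemCommand {
  constructor(public readonly itemId: string, public readonly newName: string) {}
}

class RenameItemHandler {
  handle(cmd: RenameItemCommand): void {
    console.log(`renaming ${cmd.itemId} to ${cmd.newName}`);
  }
}

// Event: the sender doesn't know (or care) how many subscribers react.
type ObjectChangedEvent = { objectId: string };
type Subscriber = (e: ObjectChangedEvent) => void;

class EventBus {
  private subscribers: Subscriber[] = [];
  subscribe(s: Subscriber): void { this.subscribers.push(s); }
  publish(e: ObjectChangedEvent): void { this.subscribers.forEach((s) => s(e)); }
}

// Several services invalidate their own caches in response to the same event.
const bus = new EventBus();
const pricingCache = new Map<string, number>();
const searchCache = new Map<string, string[]>();

bus.subscribe((e) => pricingCache.delete(e.objectId));
bus.subscribe((e) => searchCache.delete(e.objectId));

// The service that changed the object just announces the fact:
bus.publish({ objectId: "42" });
```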
I'm wondering exactly what logic should be involved when applying an event to state while replaying events with an event sourcing solution.
Specifically, I'm wondering about validation. Say I've got an entity which can be in one of the following statuses:
Logged
Active
Close
Cancelled
and the progression needs to be Logged->Active->Close or Logged->Active->Cancelled; we cannot jump from Logged to Close directly, for example.
Now, I understand the validation should be contained in commands: UpdateState would check whether the entity's current state allows the transition to the desired one, and would produce the appropriate StatusUpdated event, which would be persisted to the event store.
The question is, what should I do when replaying events back? Should I just update the status, or should I perform the same validation (so that, if the status transition requirements change, it won't be possible to load some previously updated entities unless we add some additional logic) to ensure we won't end up with entities that do not satisfy our current logic?
PS. I think I have problems grasping this because, in my understanding, events are essentially just about 'announcing' something that has already happened (and the sender's state is already modified) so that interested parties can react accordingly. And in the case of loading/replaying events, you need to alter said state instead of actually 'announcing' anything...
You do not need to perform any validation when replaying the event stream.
Commands model things that will be done in the future: you ask the system to do something for you. It's up to the system to decide whether to do it or not, e.g. based on business rules and validation.
Events, in contrast, model things that have already happened. As in reality, you cannot change the past.
So this means that when an event gets persisted, it was the consequence of a command that was considered valid at the point in time when it was processed. Replaying an event stream simply means looking at what happened in the past, and you cannot change this.
Hence, you do not need to run any validation again.
Moreover, this means that if one day your business logic changes, all the things (business accidents!) that happened in the past have still happened, so they must not change. Hence you are not allowed to run any validation logic, as it may be different today from what it was when you stored the events.
And again: You can not (and should not) change the past :-)
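To make the split concrete, here is a minimal TypeScript sketch; it reuses the statuses from the question, but the entity and method names are just assumptions for illustration:

```typescript
// Sketch: validation lives on the command side; applying events never validates.
type Status = "Logged" | "Active" | "Close" | "Cancelled";

interface StatusUpdated { type: "StatusUpdated"; newStatus: Status; }

const allowedTransitions: Record<Status, Status[]> = {
  Logged: ["Active"],
  Active: ["Close", "Cancelled"],
  Close: [],
  Cancelled: [],
};

class Entity {
  status: Status = "Logged";
  private uncommitted: StatusUpdated[] = [];

  // Command side: enforce today's business rules, then record the event.
  updateStatus(newStatus: Status): void {
    if (!allowedTransitions[this.status].includes(newStatus)) {
      throw new Error(`Cannot go from ${this.status} to ${newStatus}`);
    }
    const event: StatusUpdated = { type: "StatusUpdated", newStatus };
    this.apply(event);
    this.uncommitted.push(event);
  }

  // Event side: just mutate state. No checks - the past is the past.
  apply(event: StatusUpdated): void {
    this.status = event.newStatus;
  }

  // Replay: feed stored events through apply(), never through updateStatus().
  static replay(history: StatusUpdated[]): Entity {
    const entity = new Entity();
    history.forEach((e) => entity.apply(e));
    return entity;
  }
}
```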
Example
Suppose you have a way of validating credit card numbers. A customer comes to your shop and pays, you consider their card valid given your current set of rules, and everything is fine.
Then, one day the credit card institute changes the way credit card numbers are calculated, and hence you have another validation algorithm.
When you now play back your past events, the payment has happened, with or without the new validation rules - and you cannot change the fact that it happened! If you wanted to, you would have to create a new transaction to send money back to the customer. Again, this would result in a new event, not in a changed one from the past.
So, to cut a long story short: don't validate events against anything. They are valid by definition, because they have already happened.
Any event stream that's been written to the event store should be valid to be played back without introducing any logic in the event handlers. If you needed to change your transitioning process, you'd need to look at doing some sort of conversion, along the lines of this example.
Regarding your last point. Event sourcing is a technique for persisting and restoring the state of an entity using a historical record of ordered events. It just so happens that when you're saving the entity, you can also publish these events for any interested parties to consume.
I am attempting to learn and apply the CQRS design approach (pattern and architecture) to a new project but seem to be missing a key piece.
My client application executes a query and retrieves a list of light-weight, read-only DTOs from the read model. The user selects an item and clicks a button to initiate some action. The action is performed by creating and sending the corresponding command object to the write model (where the command handler carries out the action, updates the data store, etc.). At some point, however, I need to update the UI to reflect changes to the state of the application resulting from the action.
How does the UI know when it is time to refresh the original list?
Additional Info
I have noticed that most articles/blogs discussing CQRS use MVC client apps in their examples. I am working on a Silverlight client right now and am beginning to wonder if the pattern simply doesn't work in that case.
Follow-Up Question
After thinking more about Bartlomiej's response and subsequent discussion, I am wondering about error handling in CQRS. Given that commands are basically fire-and-forget asynchronous operations, how do we report an error condition to the UI?
I see 'refreshing the UI' as taking one of two forms:
The operation succeeds, data has changed and the UI should be updated to reflect these changes
The operation fails, data has not changed but the user should be notified of the failure and potential corrective actions.
Even with a Post-Redirect-Get pattern in an MVC, you can't really Redirect until you know the outcome of the operation. None of the examples I've seen thus far address these real-world concerns.
I've been struggling with similar issues for a WPF client. The re-query trigger for any data depends on the data you're updating; commands tend to fall into categories:
The command is a true fire and forget method, it informs the back-end of a state change but this change does not need to be reflected in the UI, or the change simply isn't important to the UI.
The command will alter the result of a single query
The command will alter the result of multiple queries, usually (in my domain at least) in a cascading fashion, that is, changing the state of a single "high level" piece of data will likely affect many "low level" caches.
My first trigger is the page load; very few items are exempt from this, as most pages must assume data has been updated since they were last visited. Though some systems may be able to get away with only updating financial and other critical data in this way.
For short commands I also update data when 'success' is returned from a command, though this is mostly laziness, as IMHO all CQRS commands should be fired asynchronously. It's still an option I couldn't live without, but one you may have to live without if your implementation expects high latency between command and query.
One pattern I'm starting to make use of is the mediator (most MVVM frameworks come with one). When I fire a command, I also fire a message to the mediator specifying which command was launched. Each cache (a view model property Retriever<T>) listens for the commands which affect it and then updates appropriately. I try to minimise the number of messages while also minimising the number of caches that update unnecessarily from a single message, so I'll (hopefully) eventually end up with a shortlist of update reasons, with each 'reason' updating a list of caches.
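Roughly what that looks like, as a simplified TypeScript sketch; the Mediator and Cache types below are stand-ins for whatever your MVVM framework actually provides:

```typescript
// Sketch: a client-side mediator that maps fired commands to the caches
// (query results) they invalidate. Names are illustrative stand-ins.
type CommandName = "AddOrder" | "CancelOrder" | "UpdateCustomer";

type Listener = (command: CommandName) => void;

class Mediator {
  private listeners: Listener[] = [];
  subscribe(listener: Listener): void { this.listeners.push(listener); }
  publish(command: CommandName): void { this.listeners.forEach((l) => l(command)); }
}

// A cached query result that knows which commands make it stale.
class Cache<T> {
  private value: T | undefined;
  constructor(
    mediator: Mediator,
    private readonly affectedBy: CommandName[],
    private readonly query: () => Promise<T>,
  ) {
    mediator.subscribe((command) => {
      if (this.affectedBy.includes(command)) void this.refresh();
    });
  }
  async refresh(): Promise<T> {
    this.value = await this.query();
    return this.value;
  }
}

// Firing a command also announces it, so affected caches re-query themselves.
const mediator = new Mediator();
const openOrders = new Cache<string[]>(mediator, ["AddOrder", "CancelOrder"], async () => {
  // ...fetch the open-orders list from the read model here...
  return [];
});
mediator.publish("AddOrder"); // openOrders refreshes itself; unrelated caches do not.
```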
Another approach is simple honesty: I find that exposing graphically how the system updates itself makes users more willing to be patient with it. On firing a command, show some UI indicating you're waiting for the successful response; on error you could offer to retry / show the error; on success you start the update of the relevant fields. Bear in mind that the command could have been fired from another terminal (of which you have no knowledge), so data will need to time out eventually to avoid missing state changes invoked by other machines.
Note the irony that the only efficient method of updating caches and values on a client is to un-separate the commands and queries again, be it through hardcoding or something like a hashmap.
My two cents.
I think MVVM actually fits into CQRS quite well. The ViewModel simply becomes an observable ReadModel.
1 - You initialize your ViewModel state via a query on the ReadModel.
2 - Changes on your ViewModel are automatically reflected on any Views that are bound to it.
3 - Certain changes on your ViewModel trigger a command, which is placed on a message queue; an object responsible for sending those commands to the server takes them off the queue and sends them to the WriteModel (see the sketch after this list).
4 - Clients should be well formed, meaning the ViewModel should have performed appropriate validation before it ever triggered the command. Once the command has been triggered, any event notifications can be published onto an event bus for the client to communicate changes to other ViewModels or components in the system interested in those changes. These events should carry the relevant information necessary. Typically, this means that other view models usually don't have to re-query the read model as a result of the change unless they are dependent on other data that needs to be retrieved.
5 - There is an object that connects to the message bus on the server for real-time push notifications when other clients make changes that this client is interested in knowing about, falling back to long-polling if necessary. It propagates those to the internal message bus that ties the components on the client together.
6 - The last part to handle is the fact that clients can be only occasionally connected; that should be the only reason a command fails (they don't have internet access at the moment), and it is when the client should be notified of problems.
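For illustration, a stripped-down TypeScript sketch of points 3 to 5; the queue, sender, and client-side event bus names are all hypothetical:

```typescript
// Sketch of the client-side plumbing described above; all names are hypothetical.
interface Command { type: string; payload: unknown; }
interface Notification { type: string; payload: unknown; }

// 3 - The ViewModel validates first, then just enqueues commands.
class CommandQueue {
  private items: Command[] = [];
  enqueue(command: Command): void { this.items.push(command); }
  dequeue(): Command | undefined { return this.items.shift(); }
}

class TodoViewModel {
  constructor(private readonly queue: CommandQueue) {}
  rename(id: string, title: string): void {
    if (title.trim().length === 0) return; // client-side validation up front
    this.queue.enqueue({ type: "RenameTodo", payload: { id, title } });
  }
}

// A sender drains the queue and ships commands to the write model.
class CommandSender {
  constructor(private readonly queue: CommandQueue, private readonly endpoint: string) {}
  async flush(): Promise<void> {
    for (let cmd = this.queue.dequeue(); cmd; cmd = this.queue.dequeue()) {
      await fetch(this.endpoint, { method: "POST", body: JSON.stringify(cmd) });
    }
  }
}

// 4/5 - An internal event bus relays changes (local or pushed from the server)
// to any other ViewModels that care, without forcing them to re-query.
class ClientEventBus {
  private handlers = new Map<string, Array<(n: Notification) => void>>();
  subscribe(type: string, handler: (n: Notification) => void): void {
    const list = this.handlers.get(type) ?? [];
    list.push(handler);
    this.handlers.set(type, list);
  }
  publish(n: Notification): void {
    (this.handlers.get(n.type) ?? []).forEach((h) => h(n));
  }
}
```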
In my ASP.NET MVC 3 app I use two techniques, depending on the use case:
the already well-known Post-Redirect-Get pattern, which fits nicely with CQRS. Your MVC action that triggers the command returns a redirect to an action that performs a query.
in some cases, like real-time updates of other clients, I rely on domain events/messages. I create an event handler that uses SignalR to push changes to all connected and interested clients.
There are two major approaches you can take, as far as I know:
1) Design your UI so that the user does not see their changes right away, for instance by showing a message telling them their action was a success and offering different choices to continue their work. This should buy you enough time to update your read model.
2) More complex, but you might keep the information you have sent to the server and show it in the interface.
Most important, I guess: educate your users if you can, so that they know why the data is not there... yet!
I'm only thinking about it now, but these apply to synchronous command handling, not async; with async, things get much harder on the brain... the client interface becomes an event eater too.