I am currently using VIP architecture and I was wondering when I should make an API call.
For example, I have two views. A connection view leading to a list view.
The list can only load once the user is connected.
My question is, where should I make my API call to fetch data for the second view?
Should I make the request as soon as the connection succeeds, and then launch the second view once I get the data back?
Or
Should I launch the second view first and then make the request from that view?
The first solution seems slightly faster, but the second one feels cleaner.
What do you think?
First of all, VIP/MVC/MVVM architecture has nothing to do with your issue; none of these architectures has rules about when you need to make API calls.
Everything depends on your needs and technical requirements.
For me, there are two important considerations:
If your second screen is data-sensitive and you need to be sure it displays the latest data, make the API call after the screen is displayed and update its UI with the fresh data.
If you don't care whether the data you display is the very latest, or the data is not updated very often, or you show static data that rarely changes, but it is important that the user sees the next screen immediately, make the API call as early as possible (preferably on app launch).
If neither of the previous points matters to you, make the API call after the screen is displayed. That guarantees you have the latest data.
But remember that there is no hard rule about this, so make API calls when you really need them.
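For the "call after the screen is displayed" option, here is a rough, language-agnostic sketch (written as JavaScript pseudocode since the idea is not tied to any particular architecture; every name in it is hypothetical, not part of your VIP setup):

// Hedged sketch: show the second view first, then load its data.
async function showListView(navigator, api) {
  const view = navigator.push("listView");      // the second screen appears immediately
  view.showLoadingIndicator();                  // the user sees that data is on its way
  try {
    const items = await api.fetchListItems();   // the API call happens after the view is visible
    view.render(items);                         // update the UI with the latest data
  } catch (error) {
    view.showError("Could not load the list", error);
  } finally {
    view.hideLoadingIndicator();
  }
}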
We are using Xamarin.Forms with Prism. We have simple pages with a small amount of data to be displayed on each page, plus some simple calculations. We are using the Prism navigation service to navigate between pages. We are experiencing some latency from clicking a button to navigating to the next page. The data is fetched inside OnNavigatedTo, since the navigation parameter changes the data. Can someone shed some light on why there is latency? It is close to one second and sometimes two seconds.
Also, it seems like each page is getting rendered twice: once before OnNavigatedTo, and then again when the data changes. OnPropertyChanged or OnCollectionChanged is fired from within OnNavigatedTo and seems to cause the second rendering.
Version 6.3.0 introduced the concept of OnNavigatingTo, while OnNavigatedTo has been around for a while. There is a distinct difference between the two. Understanding the order in which things occur should help you create a nicer user experience.
New Page is created
OnNavigatedFrom is called
OnNavigatingTo is called
New Page is pushed onto the Navigation Stack and becomes visible
OnNavigatedTo is called
Applications that have to reach out and fetch data often experience latency because it takes time to reach the remote service, get the data we want, and parse that data into usable objects. Many developers wanted to cut down the demand on the UI from having to refresh as the bindings were being updated, which led to the introduction of OnNavigatingTo.
While neither one will reduce network latency, OnNavigatingTo gives you the ability to put the calling page into an IsBusy state that displays some sort of loading indicator, which you then set back to false when NavigateAsync completes and your new page is displayed already loaded.
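The real code here would be C# in your Prism view models, but as a rough JavaScript sketch of that IsBusy idea (none of these names are actual Prism or Xamarin APIs):

// Hedged sketch of the "busy while navigating" pattern described above.
async function navigateWithBusyIndicator(callingPageViewModel, navigationService, target, params) {
  callingPageViewModel.isBusy = true;        // show a loading indicator on the current page
  try {
    // the target page loads its data in its "navigating to" hook,
    // so by the time it becomes visible it is already populated
    await navigationService.navigateAsync(target, params);
  } finally {
    callingPageViewModel.isBusy = false;     // hide the indicator once the new page is shown
  }
}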
Assume a page that shows a complex data structure (for example, an article with many details). This view will be reused from time to time by rebinding it to different articles.
Now, I noticed that the ODataModel keeps all used article entities in memory (even if they are no longer bound to any control).
This leads to two issues:
Memory consumption increases over time (as long as the application is not reloaded).
If the application forces a refresh of the data model, all entities are reloaded (even the ones no longer used).
The second issue seems to be the bigger problem, because it slows the application down.
I have not found a solution for this problem. If I use refresh(true, true), it seems all data is reloaded.
Is there a way to clean the model?
Edit
Let's say you have a list of thousands of articles. The user can click on one of the articles and navigate to a detail screen for that article.
The OData model on the client side will cache this. To see it, do something like:
var oModel = this.getModel("modelName");
and look at oModel.oData in the debugger.
If the user now navigates back and chooses the next article, that one is cached as well.
If the user does this 1000 times, all 1000 articles are now in the model.
If you then trigger oModel.refresh(true);, all of this data (all 1000 articles) will be reloaded, not only the one bound to the view.
Now, my application is not really about showing article information; it's a more complex structure with subitems. Each time the user visits this page, more data is cached (and re-fetched in case of a refresh call on the model).
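To make the accumulation visible, you can run something like the following in a controller or the console (assuming a named model "modelName"; oData is an internal structure, so treat this purely as a debugging aid):

// Hedged sketch: watch the client-side cache grow as the user opens detail pages.
var oModel = this.getModel("modelName");
console.log("cached entries:", Object.keys(oModel.oData).length);

// ...after the user has opened a few more article detail pages...
console.log("cached entries:", Object.keys(oModel.oData).length); // keeps increasing

// a full refresh re-requests everything that is cached, not just what is currently bound:
oModel.refresh(true);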
Edit 2
The function updateBindings(bForceUpdate?) seems to help a little bit.
Anyhow, the data accumulation is still there in the ODataModel class.
That means each visited data path stays in memory until the next full page reload (F5). If someone uses such an application over a day, the data accumulates, and a refresh call on the model reads all of that data again, whether it is still bound to a view or not.
Try deleteCreatedEntry(oContext). Even though this is not the intended use case for this method, it might work for deleting an entity from the model without triggering a backend request.
You could also check whether updateBindings(bForceUpdate?) only triggers an update on the entities that are actually bound.
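A minimal sketch of those two suggestions (the model name and the entity path are made up, and deleteCreatedEntry is really meant for entries created via createEntry, so verify the behavior against your UI5 version):

// Hedged sketch, not a verified recipe.
var oModel = this.getModel("modelName");

// try to drop a single cached entity without triggering a backend request:
var oContext = oModel.getContext("/Articles(42)");   // hypothetical entity path
oModel.deleteCreatedEntry(oContext);

// only update what is currently bound instead of refreshing the whole model:
oModel.updateBindings(false);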
1) I do not really understand your problem here. What exactly are you doing? OData always holds the result of your request plus a queue of changes to that request. If you create lots of entries while your application is running, of course memory consumption will increase. If you want to revert to the original request you can use resetChanges(). This way the used memory should decrease again, but you lose all your changes to the model.
2) Maybe you should look into OData filtering (http://www.odata.org/getting-started/basic-tutorial/) so that you only load the entities you really want. If you only want part of each entity loaded, then you should perhaps redesign your entities to avoid a lot of overhead.
It is hard to speculate what your exact problem is.
Well, if you know exactly what you are doing, you can try something like this:
this.getModel("modelname").aBindings = []
A better solution would be to go through the aBindings array and remove only the redundant bindings.
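A very rough sketch of that idea; aBindings is an internal, undocumented array, so this can break between UI5 versions, and the filter condition below is only an example:

// Hedged sketch: drop only the bindings that point into data we no longer display.
var oModel = this.getModel("modelname");
oModel.aBindings = oModel.aBindings.filter(function (oBinding) {
  // example condition: keep everything that is not an article detail binding
  return oBinding.getPath().indexOf("/Articles(") !== 0;
});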
My team and I are rookie Objective-C developers (less than three months in) working on a simple tab-based app with network capabilities; each tab contains a navigation controller with a table view and a corresponding detail view. The target is the iOS 4 SDK.
On the networking side, we have a single class that acts as a singleton and handles the NSURLConnection for each of the views in order to retrieve the data we need for each table view.
The functionality works fine and we can retrieve the data correctly, but only if the user doesn't change views or press the button for the same request (for example, the Login button) again before the request has finished. Otherwise, different mistakes can happen. For example, an error message that should only be displayed on the root view of one of the navigation controllers appears on the detail view, and vice versa.
We suspect that the issue is that we are currently handling only a single delegate on the singleton for the "active view", and that we should change it to support behavior like the native Mail app, where you can change views while the data requested by each view keeps loading and updating correctly and independently.
We have looked over Stack Overflow and other websites and haven't found a proper methodology to follow. We were considering using an NSOperationQueue and wrapping the NSURLConnections in NSOperations, but we are not sure if that's the proper approach.
Does anyone have any suggestions on the proper way to handle multiple asynchronous NSURLConnections to update multiple views, both parent and child, almost simultaneously at the whim of the user's interaction? Ideally, we don't want to block the UI or disable the buttons, as has been recommended to us.
Thank you for your time!
Edit - forgot to add: one of the project restrictions set by our client is that we can only use the native iOS SDK networking framework, not the ASIHTTPRequest framework or similar. We also forgot to add that we are not uploading any information, only retrieving it from the web service.
One suggestion is to use NSOperations and an NSOperationQueue. The nice thing about this arrangement is that you can quickly cancel any in-process or queued work (if, say, the user hits the back button).
There is a project on GitHub, NSOperation-WebFetches-MadeEasy, that makes this about as painless as it can be. You incorporate one class into your classes - OperationsRunner - which comes with a "how-to-use-me" in OperationsRunner.h, plus two skeleton NSOperation classes, one a subclass of the other, with the subclass showing how to fetch an image.
I'm sure others will post other solutions - it's almost a problem getting started, as there are a huge number of libraries and projects doing this. That said, OperationsRunner is a bit over 100 lines of code, and the operations are about the same, so this is really easy to read, understand, use, and modify.
You say that your singleton has a delegate. Delegation is inappropriate when multiple objects are interested in the result. If you wish to continue using a singleton for fetching data, you must switch your pattern to be based on notifications. Your singleton will have responsibility for determining which connection corresponds to which task, and choosing an appropriate notification to be posted.
If you still need help with this, let me know, I'll try to post some sample code.
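Not the Objective-C sample offered above, but a rough, language-agnostic sketch (in JavaScript) of the notification idea: the singleton tags each request with a notification name and posts the result, and each view observes only the notifications it cares about. In the real app this would be NSNotificationCenter plus your NSURLConnection callbacks; all names below are made up.

// Hedged sketch of moving from a single delegate to notifications.
const notificationCenter = {
  observers: {},
  addObserver(name, callback) {
    (this.observers[name] = this.observers[name] || []).push(callback);
  },
  post(name, payload) {
    (this.observers[name] || []).forEach((callback) => callback(payload));
  },
};

const networkManager = {
  // one notification name per logical request, so each view only hears about its own data
  load(notificationName, url) {
    fetch(url)
      .then((response) => response.json())
      .then((data) => notificationCenter.post(notificationName, { data }))
      .catch((error) => notificationCenter.post(notificationName, { error }));
  },
};

// each view subscribes only to the result it asked for
notificationCenter.addObserver("TableDataDidLoad", ({ data, error }) => {
  if (error) { /* show the error on this view only */ }
  else { /* reload this view's table with data */ }
});
networkManager.load("TableDataDidLoad", "https://example.com/api/items"); // hypothetical URL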
I am attempting to learn and apply the CQRS design approach (pattern and architecture) to a new project but seem to be missing a key piece.
My client application executes a query and retrieves a list of lightweight, read-only DTOs from the read model. The user selects an item and clicks a button to initiate some action. The action is performed by creating and sending the corresponding command object to the write model (where the command handler carries out the action, updates the data store, etc.). At some point, however, I need to update the UI to reflect changes to the state of the application resulting from the action.
How does the UI know when it is time to refresh the original list?
Additional Info
I have noticed that most articles/blogs discussing CQRS use MVC client apps in their examples. I am working on a Silverlight client right now and am beginning to wonder if the pattern simply doesn't work in that case.
Follow-Up Question
After thinking more about Bartlomiej's response and subsequent discussion, I am wondering about error handling in CQRS. Given that commands are basically fire-and-forget asynchronous operations, how do we report an error condition to the UI?
I see 'refreshing the UI' as taking one of two forms:
The operation succeeds, data has changed and the UI should be updated to reflect these changes
The operation fails, data has not changed but the user should be notified of the failure and potential corrective actions.
Even with a Post-Redirect-Get pattern in MVC, you can't really redirect until you know the outcome of the operation. None of the examples I've seen so far address these real-world concerns.
I've been struggling with similar issues for a WPF client. The re-query trigger for any data depends on the data you're updating; commands tend to fall into categories:
The command is a true fire and forget method, it informs the back-end of a state change but this change does not need to be reflected in the UI, or the change simply isn't important to the UI.
The command will alter the result of a single query
The command will alter the result of multiple queries, usually (in my domain at least) in a cascading fashion, that is, changing the state of a single "high level" piece of data will likely affect many "low level" caches.
My first trigger is the page load; very few items are exempt from this, as most pages must assume data has been updated since they were last visited. Some systems may be able to get away with updating only financial and other critical data this way.
For short commands I also update data when 'success' is returned from a command, though this is mostly laziness, as IMHO all CQRS commands should be fired asynchronously. It's still an option I couldn't live without, but one you may have to give up if your implementation expects high latency between command and query.
One pattern I'm starting to make use of is the mediator (most MVVM frameworks come with one). When I fire a command, I also fire a message to the mediator specifying which command was launched. Each cache (a view-model property Retriever<T>) listens for the commands that affect it and then updates appropriately. I try to minimise the number of messages while also minimising the number of caches that update unnecessarily from a single message, so I'll (hopefully) eventually end up with a short list of update reasons, with each 'reason' updating a list of caches.
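A rough sketch of that mediator idea in plain JavaScript (the MVVM frameworks mentioned ship their own messenger; the command bus and all other names here are hypothetical):

// Hedged sketch: announce which command was fired, let each cache decide whether to re-query.
const mediator = {
  subscribers: [],
  subscribe(handler) { this.subscribers.push(handler); },
  publish(message) { this.subscribers.forEach((handler) => handler(message)); },
};

function makeCache(affectedBy, requery) {
  let value = null;
  mediator.subscribe((message) => {
    if (affectedBy.includes(message.command)) {
      requery().then((result) => { value = result; });  // only the affected caches update
    }
  });
  return { get: () => value };
}

// firing a command also announces it to the mediator
function sendCommand(commandBus, command, payload) {
  commandBus.send(command, payload);    // hypothetical command bus
  mediator.publish({ command });
}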
Another approach is simple honesty: I find that exposing graphically how the system updates itself makes users more willing to be patient with it. On firing a command, show some UI indicating that you're waiting for the successful response; on error you could offer to retry or show the error; on success you start updating the relevant fields. Bear in mind that a command could also have been fired from another terminal (of which you have no knowledge), so data will need to time out eventually to avoid missing state changes invoked by other machines.
Note the irony that the only efficient method of updating caches and values on a client is to un-separate the commands and queries again, be it through hardcoding or something like a hashmap.
My two cents.
I think MVVM actually fits into CQRS quite well. The ViewModel simply becomes an observable ReadModel.
1 - You initialize your ViewModel state via a query on the ReadModel.
2 - Changes on your ViewModel are automatically reflected on any Views that are bound to it.
3 - Certain changes on your ViewModel trigger a command that is propagated to a message queue; an object responsible for sending those commands to the server takes the messages off the queue and sends them to the WriteModel (see the sketch after this list).
4 - Clients should be well formed, meaning the ViewModel should have performed appropriate validation before it ever triggered the command. Once the command has been triggered, any event notifications can be published onto an event bus for the client to communicate changes to other ViewModels or components in the system interested in those changes. These events should carry the relevant information necessary. Typically, this means that other view models usually don't have to re-query the read model as a result of the change unless they are dependent on other data that needs to be retrieved.
5 - There is an object that connects to the message bus on the server for real-time push notifications when other clients make changes that this client is interested in knowing about, falling back to long-polling if necessary. It propagates those to the internal message bus that ties the components on the client together.
6 - The last part to handle is the fact that clients can be only occasionally connected, which should be the only reason a command fails (they don't have internet access at the moment), and that is when the client should be notified of problems.
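A compressed JavaScript sketch of steps 1 to 3 above (the read model, command queue, and write model objects are all hypothetical):

// Hedged sketch: an observable ViewModel backed by a read model and a command queue.
function createViewModel(readModel, commandQueue) {
  const listeners = [];
  const state = { items: [] };
  const notify = () => listeners.forEach((listener) => listener(state));

  return {
    subscribe: (listener) => listeners.push(listener),       // views bind to changes (step 2)
    async load() {                                            // step 1: initialize from the read model
      state.items = await readModel.query("articles");        // hypothetical query
      notify();
    },
    rename(id, newTitle) {                                    // step 3: a change triggers a command
      const item = state.items.find((i) => i.id === id);
      if (!item) return;
      item.title = newTitle;                                  // already-validated local change
      notify();
      commandQueue.enqueue({ type: "RenameArticle", id, newTitle });
    },
  };
}

// a separate object drains the queue and talks to the WriteModel (second half of step 3)
async function drainQueue(commandQueue, writeModel) {
  let command;
  while ((command = commandQueue.dequeue())) {
    await writeModel.send(command);                            // hypothetical transport
  }
}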
In my ASP.NET MVC 3 application I use two techniques, depending on the use case:
the already well-known Post-Redirect-Get pattern, which fits nicely with CQRS. The MVC action that triggers the command returns a redirect to an action that performs a query.
in some cases, like real-time updates of other clients, I rely on domain events/messages. I create an event handler that uses SignalR to push changes to all connected and interested clients.
There are two major approaches you can take, as far as I know:
1) Design your UI so that the user does not see the changes right away. For instance, show a message telling them their action was a success and offer different choices to continue their work. This should buy you enough time to update your read model.
2) More complex, but you can keep the information you have sent to the server and show it in the interface.
Most important, I guess: educate your users if you can, so that they know why the data is not there... yet!
I am only thinking about it now, but these approaches are for synchronous command handling, not async; with async, things get much harder on the brain... the client interface becomes an event eater too.
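A small sketch of option 2: keep what was sent to the server visible in the UI, flagged as pending, and roll it back or mark it confirmed once the outcome is known (all names are made up):

// Hedged sketch of showing sent-but-unconfirmed data in the interface.
async function submitComment(commentList, api, text) {
  const pending = { text, status: "pending" };   // show it immediately, flagged as pending
  commentList.push(pending);
  try {
    await api.sendCommand({ type: "AddComment", text });  // hypothetical command endpoint
    pending.status = "confirmed";                          // the read model has caught up (or will shortly)
  } catch (error) {
    commentList.splice(commentList.indexOf(pending), 1);   // roll back and tell the user
    console.warn("Adding the comment failed:", error);
  }
}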
Has anyone heard of executing an EF query asynchronously?
I want my items control to be filled as soon as the form loads, and the user should be able to view the list while the rest of the items are still being loaded.
Maybe by automatically splitting the execution into batches of items (i.e. a few queries per execution), all on the same connection.
I posted a feature suggestion to Microsoft; please share your ideas there as well.
Not wanting to sound like a commercial, but I noticed that the latest DevExpress WPF grid offers features like this. Essentially you want to load the visible number of items first, then load the rest in a background thread so your UI isn't freezing up. The background thread should probably load one more page at a time and make the items available to the UI thread.
It's something you would want to think about carefully and make sure you get it right, or simply buy a control that does the hard work for you.
I take from your link that this is a web app. Is that correct?
A query must complete and return data before rendering can begin; an EF feature will not help you here. Rather, look at breaking up your process into several processes that can be done at once.
Keep in mind that ASP.NET cannot return a response to a browser if it is not done rendering the HTML.
Let me assume you are executing a single query, getting the results back and displaying them to a page.
Best option: page your results. If you have 4000 records, show the first 50. If you show 200+ records to a user, they cannot digest that much information.
If that does not fit your needs, look at firing one query for 50 results, then make an Ajax call for the remaining records and build the UI from there, in (reasonably sized) chunks.
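On the client side, that chunked loading could look roughly like this (the /records endpoint and its skip/take parameters are assumptions, not part of any framework mentioned here):

// Hedged sketch: render the first page right away, then pull the rest in chunks.
async function loadRecordsInChunks(renderChunk, pageSize = 50, maxRecords = 4000) {
  for (let skip = 0; skip < maxRecords; skip += pageSize) {
    const response = await fetch("/records?skip=" + skip + "&take=" + pageSize); // hypothetical endpoint
    const records = await response.json();
    if (records.length === 0) break;        // no more data on the server
    renderChunk(records);                   // append this chunk to the UI
    if (records.length < pageSize) break;   // last, partial page
  }
}

// usage: the first 50 appear immediately, the rest keep appending while the user reads
loadRecordsInChunks((records) => appendRowsToTable(records)); // appendRowsToTable is hypothetical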