Advantages of Ionic lazy loading - ionic-framework

Did a simple Google search:
https://www.google.com/search?q=advantages+of+ionic+lazy+loading
And couldn't really find a detailed description of the advantages of lazy loading. Anyone care to explain?

Long story short: (startup)-performance!
The underlying problem:
When you do a cold start of your app (no resume), the webview engine needs to load, parse, and interpret a lot of JavaScript before the app becomes usable. Top high-end devices can mostly do this in an acceptable timeframe, but on hardware that is a few years old, or simply not equipped with enough CPU power, this may take a while.
Another problem (especially when developing PWAs) is network speed: on WiFi or 4G it is no problem at all (though still far from ideal!) to quickly download a few MB of JavaScript, but on a slow 3G connection you can go and drink a coffee while waiting for your app to become interactive.
Lazy-loading to the rescue!
So how can we reduce the work needed before the app becomes interactive? We split our heavy main bundle into many smaller bundles. When we start the app, only the bare minimum of JavaScript needed for the first page has to be fetched and parsed. Every time we need a specific feature (a page), we load it just in time (lazy) instead of ahead of time (eager). By only ever fetching a small chunk of JavaScript when it is needed, the performance gain will be huge on some devices and definitely noticeable on every device.
If you implement lazy loading in Ionic 3, your code also becomes more modular and maintainable, because you create a self-contained Angular module for every IonicPage, and by pushing a string onto the nav stack instead of an actual page instance you remove a lot of dependencies (imports) from your code.
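To make that concrete, here is a minimal sketch of the Ionic 3 lazy-loading pattern described above. The page, file, and selector names are made up for illustration; three files are shown in one block.

```typescript
// src/pages/detail/detail.ts
import { Component } from '@angular/core';
import { IonicPage } from 'ionic-angular';

@IonicPage() // registers the page under the string name 'DetailPage'
@Component({ selector: 'page-detail', templateUrl: 'detail.html' })
export class DetailPage {}

// src/pages/detail/detail.module.ts -- a self-contained module for just this page
import { NgModule } from '@angular/core';
import { IonicPageModule } from 'ionic-angular';
import { DetailPage } from './detail';

@NgModule({
  declarations: [DetailPage],
  imports: [IonicPageModule.forChild(DetailPage)],
})
export class DetailPageModule {}

// src/pages/home/home.ts -- navigate by string, so nothing has to import DetailPage
import { Component } from '@angular/core';
import { NavController } from 'ionic-angular';

@Component({ selector: 'page-home', templateUrl: 'home.html' })
export class HomePage {
  constructor(private navCtrl: NavController) {}

  openDetail() {
    // The chunk for DetailPage is fetched and parsed on demand, not at startup.
    this.navCtrl.push('DetailPage');
  }
}
```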

Related

Ionic 2 / Ionic 3 - Garbage Collection

I'm trying to get a better understanding of Ionic 2 and Ionic 3.
How does garbage collection work in Ionic?
What gets cached and when?
How can we clear this cache?
How do we set up elements for G.C.?
Do we even need to setup elements for G.C?
Can we/Do we need to setup pages for G.C.?
As seen in this picture (source):
Some of the memory gets G.C'd when going to a new page. However the memory is still significantly higher than before any video had been played.
OK I'm gonna give this one a try:
Ionic itself has not much to do with GC; there are no scheduled runs of a task that cleans up behind you. The only thing Ionic (or more specifically the dev team behind Ionic) has to do is design and implement their UI components in a way that they do not eat up too much memory and also release unused memory. Especially with Virtual-Scroll there have been issues with memory leaks and so on.
So let's go a level deeper: Angular! Same point as with Ionic: the Angular devs are responsible for how much memory is used by their framework. But Angular provides a very useful method, ngOnDestroy(). Why is this method important to you as an app developer? Because it gives you the chance to clean up after yourself. This method is called just before your component is destroyed, which means you no longer need your allocated objects, arrays, video elements (set src='' and then call load()), etc., and you can release that memory. This and this are good reads on how to free memory. However, as the docs for ngOnDestroy() mention, you only have to release memory that is not cleaned up by the automatic GC (subscriptions, media elements, ...). Which brings us to the next level:
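As a rough illustration, here is a minimal sketch of that kind of cleanup, assuming an Angular component and RxJS 6-style imports; the component and all names in it are hypothetical.

```typescript
import { Component, ElementRef, OnDestroy, OnInit, ViewChild } from '@angular/core';
import { interval, Subscription } from 'rxjs';

@Component({ selector: 'app-player', template: '<video #player></video>' })
export class PlayerComponent implements OnInit, OnDestroy {
  @ViewChild('player') playerRef!: ElementRef<HTMLVideoElement>;
  private ticker!: Subscription;

  ngOnInit() {
    // Something the automatic GC cannot reclaim while it is still subscribed.
    this.ticker = interval(1000).subscribe(() => { /* ... */ });
  }

  ngOnDestroy() {
    // Unsubscribe so the observable no longer holds a reference to this component.
    this.ticker.unsubscribe();

    // Release the video buffer as described above: clear src and call load().
    const video = this.playerRef.nativeElement;
    video.src = '';
    video.load();
  }
}
```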
JavaScript/Browser: This is where the "real" GC happens. JavaScript uses a mark-and-sweep garbage collector (all modern browsers ship with one); you can read about it here. It runs every now and then and releases every object that is unreachable/no longer referenced, so to make an object eligible for GC you simply drop every remaining reference to it (for example by setting it to null, or by using the delete keyword to remove an object property that holds it). The following image visualizes the mark-and-sweep process:
The image is taken from this article, which explains how JavaScript memory management works in great detail; I strongly recommend reading it.
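A tiny sketch of the "unreachable means collectable" rule; the variable names are made up:

```typescript
// The GC can only reclaim an object once nothing references it any more, so you
// "mark" something for collection simply by dropping your references to it.
let bigBuffer: Uint8Array | null = new Uint8Array(50 * 1024 * 1024); // ~50 MB
useBuffer(bigBuffer);

bigBuffer = null; // no reference left -> eligible for GC on a future sweep

const cache: { [key: string]: object } = { report: { /* large object */ } };
delete cache.report; // delete removes the property, and with it the last reference

function useBuffer(buf: Uint8Array) { /* ... */ }
```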
And of course you always have the native GC of Java/Obj-C, which cleans up the native part of the app.

I think I abuse Resources.Load

OK, so I'm a bit confused about Resources.Load. I actually use it quite a lot, and everyone seems to see this feature as pure evil. In this documentation it's even written "Don't use it". I searched a lot about this and found this post. It mostly says to use Resources.Load only for rare assets; otherwise performance could/will be harmed.
I can see why this could be a "bad" thing to use, but honestly, I don't know how not to use this in my situation.
Let's say I have a game with ~10 different races and a couple of units per race. The user chooses their race and starts the game. At this point, it seems normal to me to Resources.Load only the assets related to this specific race, and not the other ones...
Also, let's say you have a combat scene with many possible environments (e.g. winter, forest, desert). Again, I wouldn't want to load anything other than the one I'm fighting in. So Resources.Load seems like the perfect tool, no? Am I missing something important about Unity, or what?
Thanks a lot
It's true that Unity loads everything it sees referenced through the inspector in the scene. You have no way to stop Unity's loading once you are in the scene. (You can unload later, but it has already taken the toll of loading them all.) The "performance harm" in Unity's terms seems to mean performance while playing: if you connect assets to the scene, everything is loaded from the start and plays smoothly from then on, but if you load dynamically you risk lag while playing.
Don't use it.
This strong recommendation is made for several reasons:
Use of the Resources folder makes fine-grained memory management more difficult.
It's difficult but not impossible. If you are careful, you can still reap the reward of lower memory consumption.
Improper use of Resources folders will increase application startup time and the length of builds. As the number of Resources folders increases, management of the Assets within those folders becomes very difficult.
It can't be helped, but offset against the load time you save at scene start, the increased startup time is probably worth it. Most players won't mind the startup time, in my opinion.
The Resources system degrades a project's ability to deliver custom content to specific platforms and eliminates the possibility of incremental content upgrades. AssetBundle Variants are Unity's primary tool for adjusting content on a per-device basis.
Then only put things that work universally in the Resources folder.
A more modern alternative is to compose your game out of scenes and use LoadSceneMode.Additive to bring in what you want one piece at a time. That is suitable for big chunks like a combat scene, but for lazy loading of something conceptually small (yet potentially containing large data like textures), such as a character, I would still use Resources.Load. The only asset type with delayed loading built in is AudioClip, for which you can untick "Preload Audio Data".
I wrote up the detailed load process and its memory consumption here, if you are interested in reading it.
https://gametorrahod.com/unity-texture-memory-loading-unloading-7054819e4ae8

performance concerns in using Java Advanced Imaging APIs

In our project we use JAI for showing parts of an image, rotating an image and basic zooming in an applet. We now observe that the applet takes a lot of time to load - around 20 seconds for the first time. But subsequently, it takes only 3 seconds (which is also quite high).
JAI development seems to have been frozen since 2007. At least, I could not find any download newer than 2007 on the Java website.
Has anyone encountered loading issues and solved them in the context of JAI?
Is there a performant alternative to JAI?
The images we are using are in TIFF format and they can have multiple images in one physical file.
Any pointers greatly appreciated.
The first application startup (cold startup) can take a lot of time, as you need to load tons of libraries, including JAI. The second and subsequent startups (warm startups) are faster, as runtime classes are cached in classes.jsa.
Then the image processing will require the CPU, and painting it the graphics card. On modern computers, image processing (basic operations!) and handling (zoom, pan) are trivial and fast with JAI.
We have developed an image reviewing application with JAI + Image I/O, and zooming and panning have been extremely fast since we finished it in 2007 (1 Mp images). After the image is loaded, the processing and handling are very fast, so we load the image in background threads to improve the user experience.
The problem with JAI is its current state: frozen and/or dead. But it is mature and quite stable, and other products like Apache Log4j have the same issue: no new development for years, yet people continue using them because there is no alternative (well, Logback!).
There are plenty of alternatives to JAI, like ImageMagick, but I didn't test them.
Be careful when loading and processing images: convert to 8 bit/channel if possible, perform the operations in background threads before painting, ...

Advice on using sandbox vs. caching for UITableView async image download

Apple just released some sample code on lazy loading images in a UITableView a week ago. I checked it out and implemented it into my own UITableView (which is a drawRect one for fast scrolling), to see if there was a difference from what I was already doing.
After implementing I am not sure what is best; the new code or what I already had. I am not seeing much of a speed improvement on my 3GS.
"Sandbox" method: Load images lazily, then save to local tmp folder in the sandbox. Each time the cell is displayed it looks for whether an image with that filename is already located in the sandbox folder. If it is, it retrieves the image and displays it, if not it continues with the download, saves it locally and then displays it. The benefit with this is that the images won't be blank the second time you open the app. They will already be downloaded and ready for displaying.
Caching method: This also loads the images lazily; however, now I include a UIImage on each object in the array that's displayed in the table view. Instead of saving the image locally, I now download the image and put it into the array for the object. Now, instead of checking for the filename every single time, it just checks whether the UIImage != nil and uses the cached image (or downloads it if nil).
A small difference is also that the caching code resizes the image to the exact size displayed in the cell before caching it, whereas the image used in the sandbox code example is actually a bit larger than what it needs to display, which means it also has to be resized on the fly while scrolling. I read months ago that this can be a bit expensive, and I am not sure whether it makes much of a difference compared with what you save by using the cached image instead of the sandbox-stored one.
I guess my question would be whether I should even bother with the caching code? Again, the new code won't immediately load images on a new launch, whereas the old code actually does because it's already in the sandbox. Since I am not reusing images, I have a lot of images to load (from the sandbox or cache) so I am not noticing a huge difference in speed. In fact, on my 3GS it's almost impossible to tell, in my opinion. The scrolling is not silky smooth, and I assume this is due to the large amount of images that I cannot reuse (different image for each cell). I am also wondering whether the sandbox method would get slower once there's 1000+ images in the folder, for example, eventually having it look through many more images than just 100 or so.
I hope I am making sense. I wanted to be pretty thorough with the details, and I am happy to give more details if needed.
Thanks!
If you have code that already works, and there's not a pressing problem, then don't change it.
If your scrolling actually is too slow, then perhaps you could use a mixture of ideas, and try to get the UIImage, and if it's not there, load it from the sandbox, and if it's not there, then download it.
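The question is about iOS (UIImage and the sandbox), but the lookup order suggested above is platform-independent; here is a rough sketch of it in TypeScript, with hypothetical readFromDisk, writeToDisk, and download helpers standing in for whatever your platform provides.

```typescript
// Try the in-memory cache first, then the on-disk copy, then the network.
const memoryCache = new Map<string, Uint8Array>();

async function imageFor(url: string): Promise<Uint8Array> {
  const cached = memoryCache.get(url);
  if (cached) return cached;                 // 1. fastest: already in memory

  const onDisk = await readFromDisk(url);    // 2. cheaper than the network
  if (onDisk) {
    memoryCache.set(url, onDisk);
    return onDisk;
  }

  const downloaded = await download(url);    // 3. last resort: fetch it
  memoryCache.set(url, downloaded);
  await writeToDisk(url, downloaded);        // so the next cold start has it
  return downloaded;
}

// Hypothetical helpers; on iOS these would map to the sandbox and a URL session.
async function readFromDisk(url: string): Promise<Uint8Array | null> {
  return null; // stub: look the file up in your app's cache directory
}
async function writeToDisk(url: string, data: Uint8Array): Promise<void> {
  // stub: write the file into your app's cache directory
}
async function download(url: string): Promise<Uint8Array> {
  const res = await fetch(url);
  return new Uint8Array(await res.arrayBuffer());
}
```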
The only good way to tell if there is any discernible difference in performance is to use profiling tools like Instruments (for measuring things like display framerate for the two techniques) or Shark (to determine hotspots in your code). There could be small differences in your exact implementation that could potentially cause significant differences between any general answer we could give and the actual performance you see in your application.
The thing that primarily concerns me with the "sandbox" method is not performance but disk space usage. Users won't appreciate you filling up their iPhone or iPod Touch with unnecessary files, especially if all the images aren't consistently used or if the set of used images changes often. Without knowing more about your application, it's impossible to guess how often these cached images would be loaded.
If you're testing locally on your own device, you might be on a WiFi network. My recommendation would be to turn WiFi off for part of your testing to see how the two approaches perform when you have to fetch all the images over the cellular network. I would also recommend trying to find an older device (iPhone 3G or worse), because the 3GS does in fact hide potential performance issues that could be annoying for users on older devices.
I have personally used the LazyTableImages technique in my apps many times (provided it hasn't changed drastically between WWDC09 and the recent 'release') and find it to be just what I need. Caching images on disk wouldn't be an option in my case, however, and you shouldn't take my anecdote too strongly into account - profile your own code and use the results it shows.
Edit: The obvious answer is that accessing an in-memory cache is going to be faster than accessing the filesystem, but of course the final word on that is left up to profiling. If the images are already in memory, they don't need to be read from flash and parsed by UIImage. The traditional tradeoff comes into play here though - in-memory caching vs. disk space.
While it may be faster for you to store your images in memory, you need to be very sure that you correctly handle memory warnings in your application (as you should be doing anyway!). Otherwise, long periods of use will lead to many, many images in your in-memory cache, triggering memory warnings, and if your application is not built to handle these it will be killed by the OS due to lack of memory resources.
There are pros and cons in both approaches that you present - I suggest using elements of both in your app.
It's better to keep your images in memory and save them later (perhaps when your app quits). If you have a lot of images, it might be faster to use Core Data to save them, than as regular files.
It's also better to avoid doing any resizing on the fly, i.e. in your tableView:cellForRowAtIndexPath: or tableView:willDisplayCell:forRowAtIndexPath: methods or in any method that has to do with drawing your cells' content view. If you can, ask the image provider (content management?) to supply images at the size that your table view displays.

How to reduce the startup time for a typical iPhone app?

To be clear, this is for a normal iPhone application, and not a game.
I've read around the web a few times that developers were working hard to improve/reduce the startup time of their applications, but never with any good background information on how to do so.
So the question is simple: how can you reduce the startup of iPhone applications?
Same as any other performance issue: Use Shark and/or Instruments to identify bottlenecks in your code, and then focus on how you can speed things up there. Each tool will give you a picture of how much time was spent in what parts of your code, so the general scheme would be to run the tool while you start the app and then pore over the data to see where the performance hits occur.
At app startup time, the most likely candidates for improvement will be deferring data loading until later on when it's actually needed, variously described as "on demand" or "lazy" loading. Essentially, don't load any data at app startup unless it's actually needed right away when the app loads. In practice, lots of stuff that may be needed at some point doesn't have to be immediately available when the app starts. For example, if you have a database of N records but only one is visible at a time, don't load all N into memory at app startup time. Load whatever the current record is and then load the others when you actually need them.
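As a rough sketch of that "load on demand" idea (written in TypeScript rather than an iOS language, with a hypothetical loadRecordFromStore callback standing in for whatever store you actually use): nothing is read at startup, and records are fetched and cached only when they are first requested.

```typescript
class RecordStore<T> {
  private cache = new Map<number, T>();

  constructor(private loadRecordFromStore: (id: number) => Promise<T>) {}

  async record(id: number): Promise<T> {
    const hit = this.cache.get(id);
    if (hit !== undefined) return hit;        // already loaded earlier

    const record = await this.loadRecordFromStore(id); // loaded only when needed
    this.cache.set(id, record);
    return record;
  }
}
```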
James Thomson did a nice blog post documenting his efforts to make PCalc launch faster.
Of particular interest is his use of an image with a screenshot from the last app run, to pull the same trick Default.png does, while the rest of the app loads.