performance concerns in using Java Advanced Imaging APIs - applet

In our project we use JAI for showing parts of an image, rotating an image and basic zooming in an applet. We now observe that the applet takes a long time to load - around 20 seconds the first time. Subsequently it takes only 3 seconds, which is still quite high.
JAI development seems to have been frozen since 2007. At least, I could not find any download newer than 2007 on the Java website.
Has anyone encountered loading issues and solved them in the context of JAI?
Is there a performant alternative to JAI?
The images we are using are in TIFF format and they can have multiple images in one physical file.
Any pointers greatly appreciated.

The first application startup (cold startup) can take a lot of time, because you need to load tons of libraries, including JAI. The second and subsequent startups (warm startups) are faster because the runtime classes are cached in classes.jsa.
After that, the image processing needs the CPU, and painting it needs the graphics card. On modern computers, basic image processing and handling (zoom, pan) is trivial and fast with JAI.
We have developed an image reviewing application with JAI + Image I/O, and zooming and panning have been extremely fast since we finished it in 2007 (1 Mp images). Once the image is loaded, the processing and handling are very fast, so we load the image in background threads to improve the user experience.
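As a rough illustration of the background-loading point (not the actual code from that application), here is a minimal Swing sketch: the BackgroundImageLoader class, the file argument and the JLabel target are made up for the example, and decoding TIFF through ImageIO assumes a TIFF-capable plugin such as JAI Image I/O Tools (or Java 9+) is on the classpath.

    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;
    import javax.swing.ImageIcon;
    import javax.swing.JLabel;
    import javax.swing.SwingWorker;

    public class BackgroundImageLoader {

        // Loads and decodes the image off the Event Dispatch Thread,
        // then hands the result to the given label once it is ready.
        public static void loadAsync(final File file, final JLabel target) {
            new SwingWorker<BufferedImage, Void>() {
                @Override
                protected BufferedImage doInBackground() throws Exception {
                    // Heavy I/O and decoding happen here, off the EDT, so the UI stays responsive.
                    return ImageIO.read(file);
                }

                @Override
                protected void done() {
                    try {
                        // Back on the EDT: safe to touch Swing components.
                        target.setIcon(new ImageIcon(get()));
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }.execute();
        }
    }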
The problem with JAI is its current state: frozen and/or dead. But it is mature and quite stable, and other products like Apache Log4J have the same issue - no new development for years - yet people keep using them because there is no alternative (well, Logback!).
There are plenty of alternatives to JAI, like ImageMagick, but I haven't tested them.
Be careful when loading and processing images: convert to 8 bits per channel if possible, perform the operations in a background thread before painting, and so on.
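For the multi-page TIFF and 8-bits-per-channel points, here is a hedged sketch using plain Image I/O; TiffPageLoader and readPageAs8Bit are names invented for this example, and a TIFF-capable ImageReader (JAI Image I/O Tools, or the built-in one in Java 9+) is assumed to be registered.

    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.util.Iterator;
    import javax.imageio.ImageIO;
    import javax.imageio.ImageReader;
    import javax.imageio.stream.ImageInputStream;

    public class TiffPageLoader {

        // Reads one page of a (possibly multi-page) TIFF and returns it as a
        // plain 8-bit-per-channel RGB image, so later paint/zoom operations
        // don't have to deal with 16-bit or exotic sample models.
        public static BufferedImage readPageAs8Bit(File tiff, int pageIndex) throws Exception {
            try (ImageInputStream in = ImageIO.createImageInputStream(tiff)) {
                Iterator<ImageReader> readers = ImageIO.getImageReaders(in);
                if (!readers.hasNext()) {
                    throw new IllegalStateException("No TIFF reader found - is a TIFF plugin installed?");
                }
                ImageReader reader = readers.next();
                reader.setInput(in);
                BufferedImage page = reader.read(pageIndex); // one image out of the multi-page file
                reader.dispose();

                // Redraw into a standard 8-bit-per-channel RGB buffer.
                BufferedImage rgb = new BufferedImage(
                        page.getWidth(), page.getHeight(), BufferedImage.TYPE_INT_RGB);
                Graphics2D g = rgb.createGraphics();
                g.drawImage(page, 0, 0, null);
                g.dispose();
                return rgb;
            }
        }
    }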

Related

Where does the video player keep network files, and can you keep multiple ones?

I've been working with the video_player package from Flutter. I'm mostly testing videos taken from a URL. My code is very similar to the examples.
Everywhere I read about it, I see that this library does not support caching of videos. But what exactly does that mean? What exactly is happening behind the scenes, and how would the behaviour change if caching were actually implemented? How is this different from buffering? Are the video files simply downloaded to our device?
If yes, then where are those files kept?
One additional question: how can I check the network consumption caused by such a connection? I've tried using DevTools, but the network tab is always empty.
One last thing: is it possible to pre-initialize the next videos, so that when we want to switch between them they are already partially pre-loaded?
You can use a package that helps you manage caching: flutter_cache_manager
https://pub.dev/packages/flutter_cache_manager
You can use this in combination with video_player. However, you would have to download the whole file first to then be able to retrieve it for video_player to consume.
An idea would be to stream the video and also download a copy locally. This however would consume more data than just downloading and caching the video first, then playing it locally.
As for how to check network consumption, I am not sure.
Starting with the theory:
1. A cache is a high-speed storage area, while a buffer is a normal storage area in RAM for temporary storage.
2. A cache is made from static RAM, which is faster than the slower dynamic RAM used for a buffer.
3. A buffer is mostly used for input/output processes, while a cache is used during reading and writing processes from the disk.
4. A cache can also be a section of the disk, while a buffer is only a section of RAM.
5. A buffer can be used in keyboards to edit typing mistakes, while a cache cannot.
When it comes to video buffering, just look at the YouTube app. You can see the buffer being built as the grey line grows ahead of the red line. Information stored this way is mostly not accessible at all, as Android uses a combination of RAM allocation for both caching and buffering as it sees fit for the currently active process.
Technically you could try pre-loading different videos by starting and pausing all of them at once, but I cannot imagine how much tampering with system memory management that would take; even YouTube doesn't work like that.

Advantages of Ionic lazy loading

Did a simple Google search:
https://www.google.com/search?q=advantages+of+ionic+lazy+loading
And couldn't really find a detailed description of the advantages of lazy loading. Anyone care to explain?
Long story short: (startup)-performance!
The underlying problem:
When you do a cold start of your app (not a resume), the webview engine needs to load, parse and interpret a lot of JavaScript before the app becomes usable. Top high-end devices are mostly capable of doing this in a somewhat acceptable timeframe, but on hardware that is a few years old, or simply not equipped with enough CPU power, it may take a while.
Another problem (especially when developing PWAs) is network speed: over WiFi or 4G it is no problem at all (though still far from ideal!) to quickly download a few MB of JavaScript, but on a slow 3G connection you can go and drink a coffee while waiting for your app to become interactive.
Lazy-loading to the rescue!
So how can we minimize the work needed to make the app interactive faster? We split our heavy main bundle into many smaller bundles. When we start the app now, only the bare minimum of JavaScript needed for the first page has to be fetched and parsed. Every time we need a specific feature (a page), we do the loading just in time (lazy) instead of ahead of time (eager). By always fetching just a small chunk of JavaScript when it is needed, the performance gain will be huge on some devices, and definitely noticeable on every device.
If you implement lazy loading in Ionic 3, your code also becomes more modular and maintainable, because you create a self-contained Angular module for every IonicPage, and by pushing a string onto the nav stack instead of an actual page instance you remove a lot of dependencies (imports) from your code.

iPhone and Vertex Buffer Objects

I've just started playing around with OpenGL ES on the iPhone over the past couple of weeks, and I'm looking at refactoring some of my code to use Vertex Buffer Objects (VBOs). Before I do, though, I would like to make sure it'll be worth it. The problem is that, as far as I know, the only reason you create VBOs is to shift a chunk of data onto the graphics card so that it doesn't need to be retrieved from system RAM when it's used. The iPhone, however, does not have any dedicated video RAM that I'm aware of, so I'm struggling to see why I would benefit at all from using VBOs. I have seen conflicting opinions around the internet, and Apple certainly wants devs to use them, so there's probably still a reason to use them, but I just wanted to see if anyone on SO had an opinion to add.
I saw no performance improvement on an iPhone 3G. I moved a bunch of stuff to VBOs, but eventually backed it out as it made it more difficult for me to pursue other performance gains. It's not the quick 25% performance increase that I was hoping for.
I've read somewhere that it can make a difference on the newer hardware (3GS), but I don't have references to back that up.
It depends. (sorry).
Rob didn't see an improvement for his setup, but here is an interesting post that did see a large improvement.
The main reason for the existence of VBOs is the presence of static data in 3D models. The first bottleneck you encounter is the slowness of copying data to video memory every frame (whether through the glBegin/glEnd block, which isn't available here, or through glVertexPointer, glBufferData and friends).
Let's imagine the old "flying toasters" screensaver. All the toasts are static (only their position changes) - so why waste resources copying them from CPU memory to the GPU every frame? Copy them once into a buffer and draw them with a single command. And, depending on how you do the animation, even the animated toasters can be described in a static fashion.
I started my first 2D game without VBOs. When I changed to VBOs, there was no difference (like Rob). But when I refactored to use more static buffers, the FPS went from 20 to 40. Since my goal was to reach 30, I was satisfied. I have some ideas to refactor even further, leaving everything static, but I don't have time now (the game is in review, and the next one is coming).

Advice on using sandbox vs. caching for UITableView async image download

Apple just released some sample code on lazy loading images in a UITableView a week ago. I checked it out and implemented it in my own UITableView (which draws with drawRect for fast scrolling) to see if there was a difference from what I was already doing.
After implementing it, I am not sure which is best: the new code or what I already had. I am not seeing much of a speed improvement on my 3GS.
"Sandbox" method: Load images lazily, then save to local tmp folder in the sandbox. Each time the cell is displayed it looks for whether an image with that filename is already located in the sandbox folder. If it is, it retrieves the image and displays it, if not it continues with the download, saves it locally and then displays it. The benefit with this is that the images won't be blank the second time you open the app. They will already be downloaded and ready for displaying.
Caching method: This also loads the images lazily; however, now I include a UIImage on each object in the array that is displayed in the table view. Instead of saving the image locally, I download the image and put it into the array for the object. Now, instead of checking for the filename every single time, it just checks whether the UIImage != nil and uses the cached image (or downloads it if nil).
A small difference is also that the caching code resizes the image to the exact size displayed in the cell before caching it, whereas the image used in the sandbox code example is actually a bit larger than it needs to be, which means it also has to be resized on the fly while scrolling. I read months ago that this can be a bit expensive, but I am not sure whether the on-the-fly resizing really makes the sandbox approach noticeably more CPU-intensive than using the pre-resized cached image.
I guess my question is whether I should even bother with the caching code. Again, the new code won't immediately show images on a fresh launch, whereas the old code does because they're already in the sandbox. Since I am not reusing images, I have a lot of images to load (from the sandbox or the cache), so I am not noticing a big difference in speed - in fact, on my 3GS it's almost impossible to tell. The scrolling is not silky smooth, and I assume this is due to the large number of images that I cannot reuse (a different image for each cell). I also wonder whether the sandbox method would get slower once there are 1000+ images in the folder, since it would eventually have to look through many more images than just 100 or so.
I hope I am making sense. I wanted to be pretty thorough with the details, and I am happy to give more details if needed.
Thanks!
If you have code that already works, and there's not a pressing problem, then don't change it.
If your scrolling actually is too slow, then perhaps you could use a mixture of the two ideas: try to get the UIImage from memory; if it's not there, load it from the sandbox; and if it's not there either, download it.
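For illustration only, here is a platform-neutral sketch of that layered lookup written in Java rather than Objective-C; LayeredImageStore, the in-memory map and the disk directory are hypothetical stand-ins for UIImage, the sandbox folder and the download step, and a real app would run the network part on a background thread.

    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.net.URL;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.imageio.ImageIO;

    public class LayeredImageStore {

        private final Map<String, BufferedImage> memoryCache = new ConcurrentHashMap<>();
        private final File diskDir;

        public LayeredImageStore(File diskDir) {
            this.diskDir = diskDir;
            diskDir.mkdirs();
        }

        // Memory first, then disk, then network - the "mixture of ideas" from the answer.
        public BufferedImage imageFor(String fileName, URL remoteUrl) throws Exception {
            // 1. In-memory cache: fastest, but lost when the app is killed.
            BufferedImage cached = memoryCache.get(fileName);
            if (cached != null) {
                return cached;
            }

            // 2. Disk ("sandbox"): survives restarts, slower than memory.
            File onDisk = new File(diskDir, fileName);
            if (onDisk.exists()) {
                BufferedImage fromDisk = ImageIO.read(onDisk);
                memoryCache.put(fileName, fromDisk);
                return fromDisk;
            }

            // 3. Network: slowest; store the result in both layers for next time.
            BufferedImage downloaded = ImageIO.read(remoteUrl);
            if (downloaded == null) {
                throw new IllegalStateException("Could not decode " + remoteUrl);
            }
            ImageIO.write(downloaded, "png", onDisk);
            memoryCache.put(fileName, downloaded);
            return downloaded;
        }
    }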
The only good way to tell if there is any discernible difference in performance is to use profiling tools like Instruments (for measuring things like display framerate for the two techniques) or Shark (to determine hotspots in your code). There could be small differences in your exact implementation that could potentially cause significant differences between any general answer we could give and the actual performance you see in your application.
The thing that primarily concerns me with the "sandbox" method is not performance but disk space usage. Users won't appreciate you filling up their iPhone or iPod Touch with unnecessary files, especially if all the images aren't consistently used or if the set of used images changes often. Without knowing more about your application, it's impossible to guess how often these cached images would be loaded.
If you're testing locally on your own device, you might be on a WiFi network. My recommendation would be to turn WiFi off for part of your testing to see how the two approaches perform when you have to fetch all the images over the cellular network. I would also recommend trying to find an older device (iPhone 3G or worse), because the 3GS does in fact hide potential performance issues that could be annoying for users on older devices.
I have personally used the LazyTableImages technique in my apps many times (provided it hasn't changed drastically between WWDC09 and the recent 'release') and find it to be just what I need. Caching images on disk wouldn't be an option in my case, however, and you shouldn't take my anecdote too strongly into account - profile your own code and use the results it shows.
Edit: The obvious answer is that accessing an in-memory cache is going to be faster than accessing the filesystem, but of course the final word on that is left up to profiling. If the images are already in memory, they don't need to be read from flash and parsed by UIImage. The traditional tradeoff comes into play here though - in-memory caching vs. disk space.
While it may be faster for you to store your images in memory, you need to be very sure that you correctly handle memory warnings in your application (as you should be doing anyway!). Otherwise, long periods of use will lead to many, many images in your in-memory cache, which will trigger memory warnings, and if your application is not built to handle them, it will eventually be killed by the OS due to lack of memory resources.
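As a language-neutral illustration of that trade-off (in Java, since the answers here span several platforms), an in-memory cache can hold its entries through soft references so the runtime may reclaim them under memory pressure; on the iPhone the equivalent is emptying the cache when you receive a memory warning. The class name below is made up for the example.

    import java.awt.image.BufferedImage;
    import java.lang.ref.SoftReference;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Image cache whose entries the runtime may discard when memory gets tight.
    public class EvictableImageCache {

        private final Map<String, SoftReference<BufferedImage>> cache = new ConcurrentHashMap<>();

        public void put(String key, BufferedImage image) {
            cache.put(key, new SoftReference<>(image));
        }

        // Returns null if the image was never cached or has been reclaimed under memory pressure.
        public BufferedImage get(String key) {
            SoftReference<BufferedImage> ref = cache.get(key);
            return (ref == null) ? null : ref.get();
        }
    }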
There are pros and cons in both approaches that you present - I suggest using elements of both in your app.
It's better to keep your images in memory and save them later (perhaps when your app quits). If you have a lot of images, it might be faster to save them with Core Data than as regular files.
It's also better to avoid doing any resizing on the fly, i.e. in your tableView:cellForRowAtIndexPath: or tableView:willDisplayCell:forRowAtIndexPath: methods or in any method that has to do with drawing your cells' content view. If you can, ask the image provider (content management?) to supply images at the size that your table view displays.

How to reduce the startup time for a typical iPhone app?

To be clear, this is for a normal iPhone application, and not a game.
I've read around the web a few times about developers mentioning that they were working hard to improve/reduce the startup time of their applications, but never with any good background information on how to do so.
So the question is simple: how can you reduce the startup of iPhone applications?
Same as any other performance issue: Use Shark and/or Instruments to identify bottlenecks in your code, and then focus on how you can speed things up there. Each tool will give you a picture of how much time was spent in what parts of your code, so the general scheme would be to run the tool while you start the app and then pore over the data to see where the performance hits occur.
At app startup time, the most likely candidate for improvement is deferring data loading until it's actually needed, variously described as "on demand" or "lazy" loading. Essentially, don't load any data at startup unless it's needed right away when the app loads. In practice, lots of stuff that may be needed at some point doesn't have to be immediately available when the app starts. For example, if you have a database of N records but only one is visible at a time, don't load all N into memory at startup; load the current record and then load the others when you actually need them.
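A tiny, generic sketch of that on-demand pattern (in Java for illustration; RecordStore, Record and loadRecordFromStore are all hypothetical):

    import java.util.HashMap;
    import java.util.Map;

    public class RecordStore {

        // Placeholder for whatever a "record" is in the real app.
        public static class Record {
            final String id;
            Record(String id) { this.id = id; }
        }

        private final Map<String, Record> loaded = new HashMap<>();

        // Nothing is read at startup; a record is materialized only when first asked for.
        public Record record(String id) {
            return loaded.computeIfAbsent(id, this::loadRecordFromStore);
        }

        private Record loadRecordFromStore(String id) {
            // In a real app this would hit the database or file only for this one record.
            return new Record(id);
        }
    }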
James Thomson did a nice blog post documenting his efforts to make PCalc launch faster.
Of particular interest is his use of an image containing a screenshot from the last app run to pull the same trick Default.png does while the rest of the app loads.