Why does Chrome render images in chunks? - dom

This is more of a theoretical question.
I'm currently working on a web site where all the images take a while to load and the browser renders them in chunks. My question is: why does this happen?
This site exhibits exactly the same behavior I'm facing: http://lab2.ravelrumba.com/cssimages/test1/index5.html

I'd imagine it has something to do with buffering. Images can be very large, and most of the time they arrive compressed, meaning that there's some effort your computer has to put into first decoding the image before it can display it on the screen.
When reading a large file, you usually allocate a buffer, an area of memory you stream the uncompressed data into. In this context, you load a portion of the image, perform the required processing, and repeat until every portion of the image file has been handled. Here, it looks like once a part of the image has been fully decoded it is rendered immediately, whereas in some implementations you'd wait until the whole file had been processed before visualising it.
If a bigger buffer were to be allocated, you'd see larger chunks get rendered, but this places a greater overhead on system memory.
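To make that concrete, here is a rough sketch of the read-a-buffer, decode, render-what-you-have loop I mean (Swift; the file path is arbitrary, and the decoder and renderer are made-up stand-ins for the browser's internals, shown as comments):

    import Foundation

    // Read a large image in fixed-size chunks and render each decoded
    // portion immediately instead of waiting for the whole file.
    let bufferSize = 64 * 1024  // a bigger buffer => bigger visible chunks
    var buffer = [UInt8](repeating: 0, count: bufferSize)

    guard let stream = InputStream(fileAtPath: "/tmp/large-image.png") else { exit(1) }
    stream.open()
    defer { stream.close() }

    while stream.hasBytesAvailable {
        let read = stream.read(&buffer, maxLength: bufferSize)
        if read <= 0 { break }
        // 1. Feed the raw bytes to an incremental decoder (hypothetical):
        // let newRows = decoder.feed(buffer[0..<read])
        // 2. Paint the rows that are now fully decoded right away,
        //    rather than waiting for the entire file:
        // renderer.draw(rows: newRows)
    }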
Anyway, this is just my hunch.

Related

img src with large image size on UIWebView taking time to load and sometimes crashes

I display an HTML string in a UIWebView; the HTML string comes from a server.
The HTML I retrieve from the server contains 10 large images referenced by src attributes; their total size is above 4 MB. When I load this HTML string into the UIWebView, the application sometimes takes 4 minutes to load the images, and sometimes it crashes.
I want to know if there is any way, for the <img src..> tags, to substitute thumbnail images.
Any response will be greatly appreciated.
Thanks
You should be able to find out on the server side, via Java/HTML5, what browser your client is and what features it has. That way the server can determine the appropriate image size to send to the client.
My company has successfully implemented this, with iPhone and Android clients retrieving smaller images while browsers on laptops receive larger ones.
Good luck!
Allocating big chunks of memory can trigger a SIGKILL (signal 9) from SpringBoard under heavy system load (as a protection), even if you don't exceed the 46 MB given to any single running application.
The best way to do this would be to load one image at a time.
For each individual image, create a smaller version.
Release the big image currently loaded, since you now have a smaller version of it.
Do it again for the next one, and so on.
You will reduce the impact of the big loads.
The Nimbus framework does image downloading and resizing that way; have a look.
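For illustration, a minimal Swift sketch of that load-one, shrink, release cycle (the 128-point thumbnail size and URL list are assumptions):

    import UIKit

    func makeThumbnails(from urls: [URL], side: CGFloat = 128) -> [UIImage] {
        var thumbs: [UIImage] = []
        for url in urls {
            // Load ONE full-size image...
            guard let data = try? Data(contentsOf: url),
                  let big = UIImage(data: data) else { continue }
            // ...create a smaller version of it...
            let size = CGSize(width: side, height: side)
            let thumb = UIGraphicsImageRenderer(size: size).image { _ in
                big.draw(in: CGRect(origin: .zero, size: size))
            }
            thumbs.append(thumb)
            // ...and let the big image go out of scope here, so only the
            // small version stays in memory before the next iteration.
        }
        return thumbs
    }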
For the UIWebView, I'm not sure what you can do. Since you get the HTML before displaying it, you could perhaps grep for the <img> tags, create those thumbnails, store them on the iPhone, and replace the src path of the original <img> tags with a local URL of your thumbs.
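Something like this rough Swift sketch, where thumbnailURL(for:) is a hypothetical helper that would do the actual download-and-resize work:

    import Foundation

    // Hypothetical helper: fetch the remote image, build a thumbnail on
    // disk, and return its local file URL. Stubbed for the sketch.
    func thumbnailURL(for remote: URL) -> URL {
        FileManager.default.temporaryDirectory
            .appendingPathComponent(remote.lastPathComponent)
    }

    // Replace each <img src="..."> value with a local thumbnail URL.
    func rewriteImageTags(in html: String) -> String {
        let pattern = "<img[^>]*src=\"([^\"]+)\""
        guard let regex = try? NSRegularExpression(pattern: pattern) else { return html }
        var result = html
        let matches = regex.matches(in: html, range: NSRange(html.startIndex..., in: html))
        for match in matches.reversed() {  // reversed so earlier ranges stay valid
            guard let range = Range(match.range(at: 1), in: result),
                  let remote = URL(string: String(result[range])) else { continue }
            result.replaceSubrange(range, with: thumbnailURL(for: remote).absoluteString)
        }
        return result
    }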
Probably the device you are using cannot support 4 MB of images when decompressed in a UIWebView.
Which device are you using?
4 minutes to load 4 MB of images is a very long time. Is your network very bad? Otherwise it could be another indicator of a wrong approach.
If I were you, I'd try using a native UIImageView to display the images and perform some kind of queued download, maybe with ASIHTTP.
That way you can load and unload the images as you need them and avoid keeping them all in memory.
As Jojas points out, I still think the answer is correct, as the UIWebView drains memory rendering the DOM elements.
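If you go the queued-download route, a minimal sketch might look like this (Swift, using an OperationQueue in place of ASIHTTP; the queue width of 2 is an assumption):

    import UIKit

    let downloadQueue = OperationQueue()
    downloadQueue.maxConcurrentOperationCount = 2  // throttle memory and network

    func enqueueImageDownload(from url: URL, into imageView: UIImageView) {
        downloadQueue.addOperation {
            // A blocking fetch is fine here: we're on a background operation.
            guard let data = try? Data(contentsOf: url),
                  let image = UIImage(data: data) else { return }
            OperationQueue.main.addOperation {
                imageView.image = image  // UIKit work on the main thread
            }
        }
    }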

Disk cache vs Recreating the image

I currently have an app where I do a lot of image manipulation. I basically take an image that is 320x320 (or 640x640 on Retina) and scale it down to 128x128 (or 256x256 on Retina) before rounding off its corners and applying a glossy highlight. Everything is done using Core Graphics drawing.
At any one time there could be around 600 images that need this processing so I do around 40 on app launch using a background thread and cache them into a FIFO queue. When an image not in the cache needs processing, I do so and add it to the end of the cache, discarding the first cached image. If that first image is needed again it goes through the same process.
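For concreteness, the cache logic is roughly this (a Swift sketch; process(_:) stands in for my scale, round-corners, and gloss drawing):

    import UIKit

    final class ProcessedImageCache {
        private var keys: [String] = []              // insertion order (FIFO)
        private var images: [String: UIImage] = [:]
        private let limit = 40

        func image(forKey key: String, source: UIImage) -> UIImage {
            if let cached = images[key] { return cached }
            let processed = process(source)
            keys.append(key)
            images[key] = processed
            if keys.count > limit {                  // discard the first cached image
                images.removeValue(forKey: keys.removeFirst())
            }
            return processed
        }

        private func process(_ image: UIImage) -> UIImage {
            let size = CGSize(width: 128, height: 128)
            return UIGraphicsImageRenderer(size: size).image { _ in
                image.draw(in: CGRect(origin: .zero, size: size))
            }
        }
    }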
What I would like to know is whether it would make more sense, and ultimately be more efficient, to save the discarded images to disk rather than recreate them from scratch the next time they are needed, as I could instead just read them from disk.
These images are also displayed using a CALayer, and therefore there may be an overhead when the layers' contents are set, because of the conversion from UIImage to CGImage. If I store them on disk, I believe they can be read directly as a CGImage?
Any ideas and input on improving the efficiency of this process will be sincerely welcomed.
My personal choice would be to use a disk cache.
However, you say 'which is more efficient' - what do you mean?
If you mean faster then the disk cache is probably going to win.
If you mean more space efficient then recreating them will win.
If you mean lower memory usage then it entirely depends on your implementation!
You will have to try it and see :)
However, the advantage of the disk solution is that the second time your app starts up, it will already have done the processing so will start faster. That's why I'd use the disk.
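As a rough Swift sketch of that disk approach (the key-based PNG layout in the Caches directory is just an assumption):

    import UIKit

    func cachedImage(forKey key: String, generate: () -> UIImage) -> UIImage {
        let dir = FileManager.default.urls(for: .cachesDirectory, in: .userDomainMask)[0]
        let file = dir.appendingPathComponent("\(key).png")

        // From the second launch onwards the processing is already done.
        if let data = try? Data(contentsOf: file), let image = UIImage(data: data) {
            return image
        }
        // First time: generate, then persist for next time.
        let image = generate()
        if let data = image.pngData() {
            try? data.write(to: file, options: .atomic)
        }
        return image
    }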
From my experience, saving to and then reading from disk is faster. I got memory warnings from recreating the images over and over, instead of saving and reading them. But the only way to know for sure is to try. I was using around 1000 images, so it made sense in my case to use the disk.
It's also worth giving a try to the GitHub libraries that download and cache UIImage/NSData from the Internet.
It may be SDWebImage (https://github.com/rs/SDWebImage) or APSmartStorage (https://github.com/Alterplay/APSmartStorage).
APSmartStorage fetches data from the network and automatically caches it on disk or in memory in a smart, configurable way. It should be good enough.

Why do I not get low memory issues until images are drawn on the screen?

I am able to load over 200 UIImage objects into an NSMutableDictionary without any memory warning issues.
When I start displaying them on the screen (after showing about 10-20 images) I get low memory warnings and an eventual crash.
Only about 8 images are displayed at any one time.
Does it take additional memory to actually draw a UIImage on the screen?
No memory leaks are showing up, and I've reviewed the code for leaks many, many times.
It's probable that initially you create only a link to the image; the only data actually read at that point are the dimensions and the image type (in short). The rest of the data is usually not needed until requested, i.e. when the image is displayed on screen. When it is displayed, the actual image data is loaded from the file path and decoded according to its image type. It is then cached so it doesn't have to be loaded again; that caching takes a lot of memory, and if you're using double/triple buffering there's extra memory required for the off-screen drawing.
Try disposing of all image data before loading new images.
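To put rough numbers on it (a Swift sketch): once decoded for display, a UIImage costs about width x height x 4 bytes regardless of its compressed file size, which is why drawing, not loading, is the expensive step.

    import UIKit

    // Approximate memory used by an image once it is decoded to an
    // uncompressed RGBA bitmap for rendering.
    func decodedFootprint(of image: UIImage) -> Int {
        let pixelWidth = Int(image.size.width * image.scale)
        let pixelHeight = Int(image.size.height * image.scale)
        return pixelWidth * pixelHeight * 4
    }

    // e.g. a 2048x1536 photo that is only a few hundred KB as a JPEG on
    // disk decodes to 2048 * 1536 * 4 bytes, i.e. about 12.6 MB, once
    // drawn; 10-20 such images can easily exhaust an early iPhone's memory.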
The documentation says that having a UIImage doesn't necessarily imply that the image is actually stored in memory, because it could be purged from the image cache. But it does say that if you try to draw it, it will be pulled back into memory. Maybe it starts purging from its image cache before it warns you about the low memory condition. That explains everything you're seeing.

Lots of png files being loaded into textures for use with OpenGLES is causing slow load times

I have thousands of PNG files that I am loading using libpng and then creating OpenGL ES textures from for use in the application. Doing this causes a huge load-time lag on the iPhone. Is there any way to speed up the load time?
What I've typically done is build an object that handles lazy loading of my textures through a manager. At startup I register my known textures and resolve the file system attributes I'll need later, to save on simple IO at load time; then, as textures are needed, I pull them in.
To speed things up I have a batch-load mechanism too, where I say "load this array of images and return them to me." This simply removes the overhead of repeated method calls; indeed, my single-load solution is just a thin wrapper around the batch load.
This way, at startup I cache my bookkeeping (object creation, file system attribute discovery, etc.) but defer the heavy work until necessary. As textures are requested at run time, this triggers faults that fill them in from storage to texture memory. If I'm loading a scene with many textures known beforehand, I load the set of very common textures in a prefetch, but defer relatively uncommon textures to runtime.
In practice this tends to work because of the probabilities involved: forcing the load at start time guarantees you pay for all textures at once, whereas loading them sparsely makes the expected user latency drop off, weighted by load latency times the probability of a texture being needed within some window of time from start. If you optimize your startup prefetch to load only the textures likely to be needed soon, you decrease your expected UI latencies dramatically.
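A condensed Swift sketch of such a manager (loadTexture(from:) is a hypothetical stand-in for the libpng decode and GL upload):

    import Foundation

    final class TextureManager {
        private var paths: [String: String] = [:]   // name -> file path, registered at startup
        private var loaded: [String: UInt32] = [:]  // name -> GL texture handle

        // Cheap bookkeeping done at startup.
        func register(name: String, path: String) { paths[name] = path }

        // Fault the heavy work in on first use.
        func texture(named name: String) -> UInt32? {
            if let handle = loaded[name] { return handle }
            guard let path = paths[name] else { return nil }
            let handle = loadTexture(from: path)    // the expensive, deferred part
            loaded[name] = handle
            return handle
        }

        // Batch prefetch for textures known to appear early in a scene.
        func prefetch(_ names: [String]) { names.forEach { _ = texture(named: $0) } }

        private func loadTexture(from path: String) -> UInt32 {
            // Hypothetical: decode the PNG and upload it with glTexImage2D.
            return 0
        }
    }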
Additionally, you may want to consider using NSURLConnection connectionWithRequest:delegate: for loading your textures from storage. It is asynchronous, so you can load your largest textures asynchronously and your smaller ones synchronously, taking advantage of IO/CPU idle time during file system fetches and texture decompression (large files take a long time to load, while small files load fast and can deserialize at the same time). You should test this, though, since the iPhone may not handle asynchronous file system access well.
You can place several textures in one big texture (1024x1024). This requires some recalculation of the texcoords, though. Also try to keep the texture parts close to the actual resolution used when drawing, i.e. if a texture will fill half the screen height (240 px), a 256x256 texture is good enough.
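The texcoord recalculation is just a division by the atlas size; a small Swift sketch with assumed numbers:

    // A sub-image at pixel rect (x, y, w, h) inside a square atlas maps
    // to normalized UVs by dividing by the atlas dimension.
    struct AtlasRegion {
        let u0, v0, u1, v1: Float
        init(x: Float, y: Float, w: Float, h: Float, atlas: Float = 1024) {
            u0 = x / atlas
            v0 = y / atlas
            u1 = (x + w) / atlas
            v1 = (y + h) / atlas
        }
    }

    // e.g. a 256x256 sprite stored at (512, 256) in a 1024x1024 atlas:
    let sprite = AtlasRegion(x: 512, y: 256, w: 256, h: 256)
    // -> u0 = 0.5, v0 = 0.25, u1 = 0.75, v1 = 0.5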
A more advanced method (not tested) could be to concatenate all the files (registering their lengths and positions) and then use mmap(..) to access this png-db file. Then use NSData to feed the texture-loading function, if it is based on UIImage. The good thing about using mmap(..) like this is that you don't open and close a lot of files, and access to the data is handled with the help of the OS VM manager.
[Note: the iPhone's native PNG loader requires some PNG mangling... you might need a non-native reader, or concatenate the mangled files instead]
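A rough Swift sketch of that mmap(..) access (it assumes the per-entry offset and length bookkeeping already exists):

    import Foundation

    // Map the concatenated "png-db" file once and hand out zero-copy
    // slices; the OS VM manager pages the bytes in on demand.
    func mappedSlice(path: String, offset: Int, length: Int) -> Data? {
        let fd = open(path, O_RDONLY)
        guard fd >= 0 else { return nil }
        defer { close(fd) }

        var info = stat()
        guard fstat(fd, &info) == 0 else { return nil }
        let size = Int(info.st_size)

        guard let base = mmap(nil, size, PROT_READ, MAP_PRIVATE, fd, 0),
              base != MAP_FAILED else { return nil }
        return Data(bytesNoCopy: base.advanced(by: offset),
                    count: length,
                    deallocator: .custom { _, _ in munmap(base, size) })
    }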
Do you know what order you're going to need your thousands of images in? Perhaps you can only load a few hundred of them when your app starts, and then load more on a background thread as you go.
Combining small images into fewer larger textures is also a good idea, but again only if there's some pattern to which images are used together.

jpg or png for UIImage -- which is more efficient?

I am grabbing an image from the camera roll, then using it for a while as well as saving it to disk as a PNG, on the iPhone. I am getting the odd crash, presumably due to running out of memory.
Does it make a difference if I save it as PNG or JPG (assuming I choose not to degrade the quality in the JPG case)? Specifically:
is more memory then used by the UIImage after I reload it from disk, if I saved it as a PNG?
is it possible the act of saving as PNG uses up more memory transiently during the saving process?
I had been assuming the UIImage was a format-neutral representation and that it shouldn't matter, but I thought I should verify.
I am getting the odd crash, presumably due to running out of memory
Then STOP WHAT YOU ARE DOING RIGHT NOW and first figure out if that's actually the cause of the crash. Otherwise there's a very good chance that you're chasing the wrong problem here, fixing a memory problem that doesn't exist while ignoring the real cause of the crash. If you want to fix a crash, start by figuring out what caused the crash. Following up on what's "presumably" the problem is a recipe for wasted time and effort.
I have an application on the store that needs to save intermediate versions of an image as it's being edited. In the original version, I used PNG format for saving, to avoid quality loss from loading and saving JPEG multiple times.
Sometime around the 2.2 software release, Apple introduced a change into the PNG writing code, such that it takes many times longer to save PNG data from some images. I ended up having to change to saving in JPEG format, because my application was timing out when trying to save images on exit.
Also, you'll run into issues because saving in PNG format doesn't preserve the "orientation" information in the UIImage, so a picture taken in Portrait orientation with the built-in camera will appear rotated after you save and reload it.
It depends on what type of images you're dealing with. If you're dealing with photographic images, JPEGs will almost always be smaller than PNGs, with no loss of detail discernible to the human eye.
Conversely, if you're dealing with highly non-photographic images such as GUI elements or images with large blocks of solid color, then PNGs and JPEGs will be comparable in size, but the PNG will be lossless whereas the JPEG will be lossy and have very visible artifacts. If you have a really simple image (e.g. very large blocks of constant color), then a PNG will very likely be much smaller than a JPEG, and again will not have any compression artifacts.
The act of saving an image as a PNG or JPEG should not take up very much transient memory. When an image is in memory, it is typically stored uncompressed in memory so that it can be drawn to the screen very quickly, as opposed to having to decompress it every time you want to render it. Compared to the size of the uncompressed image, the amount of extra temporary storage you need to compress it is very small. If you can fit the uncompressed image in memory, you don't have to worry about the memory used while compressing it.
And of course, once you write the image to non-volatile storage in the file system and free the in-memory image, it really doesn't matter how big the compressed image is, because it doesn't take up main memory any more. The size of the compressed image only affects how much flash storage it's using, which can be an issue, but it does not affect how likely your app is to run out of memory.
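If you want to check the size trade-off for your own images, a quick Swift sketch (the 0.9 JPEG quality is an arbitrary choice):

    import UIKit

    // Compare the compressed byte counts of the two formats directly.
    func compareFormats(_ image: UIImage) {
        let png = image.pngData()?.count ?? 0
        let jpg = image.jpegData(compressionQuality: 0.9)?.count ?? 0
        // For photos expect JPEG << PNG; for flat-color UI art PNG often wins.
        print("PNG: \(png) bytes, JPEG: \(jpg) bytes")
    }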
Your crashes may be from a known memory leak in the UIImagePickerController.
This should help you fix that.
I don't have any hard data, but I'd assume that PNGs are preferable because Apple seems to use PNGs virtually everywhere in iPhone OS.
However, if you've already got the code set up for writing PNGs, it shouldn't be too hard to change it to write JPEGs, should it? Just try both methods and see which works better.
Use PNG wherever possible. As part of compilation, Xcode runs all PNG files through a utility (pngcrush) to compress and optimize them.
is more memory then used by the UIImage after I reload it from disk, if I saved it as a PNG?
=> No, it's the same memory size if you load two images that have the same resolution and the same number of channels (such as RGBA).
is it possible the act of saving as PNG uses up more memory transiently during the saving process?
=> No, it only affects your disk usage.