How to load a large local image (works on simulator, not test device)? - iphone

I'm currently trying to load a map image, a large (16 MB) .jpg file, inside a scroll view so you can zoom in and out.
When I launch the app in the simulator, everything runs smoothly. However, once I run it on my test device (iPod 4.1, iOS 6.0), the app shows the launch image and then crashes with no error messages at all.
This is what the code currently looks like:
myImage = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"map.jpg"]];
myScrollView = [[UIScrollView alloc] initWithFrame:CGRectMake(15, 120, 290, 365)];
myScrollView.showsVerticalScrollIndicator = YES;
myScrollView.scrollEnabled = YES;
myScrollView.maximumZoomScale = 10.0;
myScrollView.minimumZoomScale = [myScrollView frame].size.width / myImage.frame.size.width;
myScrollView.clipsToBounds = YES;
myScrollView.bounces = NO;
myScrollView.showsHorizontalScrollIndicator = NO;
myScrollView.showsVerticalScrollIndicator = NO;
[self.view addSubview:whiteFrame];
[self.view addSubview:myScrollView];
myScrollView.contentSize = CGSizeMake(myImage.frame.size.width, myImage.frame.size.height);
myScrollView.delegate = self;
myScrollView.zoomScale = [myScrollView frame].size.width / myImage.frame.size.width;
[myScrollView addSubview:myImage];
Any help is greatly appreciated.
EDIT:
I found this in the docs
If you have a very large image, or are loading image data over the web, you may want to create an incremental image source so that you can draw the image data as you accumulate it. You need to perform the following tasks to load an image incrementally from a CFData object:
Create the CFData object for accumulating the image data.
Create an incremental image source by calling the function CGImageSourceCreateIncremental.
Add image data to the CFData object.
Call the function CGImageSourceUpdateData, passing the CFData object and a Boolean value (bool data type) that specifies whether the data parameter contains the entire image, or just partial image data. In any case, the data parameter must contain all the image file data accumulated up to that point.
If you have accumulated enough image data, create an image by calling CGImageSourceCreateImageAtIndex, draw the partial image, and then release it.
Check to see if you have all the data for an image by calling the function CGImageSourceGetStatusAtIndex. If the image is complete, this function returns kCGImageStatusComplete. If the image is not complete, repeat steps 3 and 4 until it is.
Release the incremental image source.
Does anyone know of sample code for that?
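Roughly, those documented steps might look like the following untested sketch, assuming the data arrives in chunks (names like `IncrementalImageLoader` and `appendChunk:isFinal:` are illustrative, not part of any API):

```objc
#import <UIKit/UIKit.h>
#import <ImageIO/ImageIO.h>

// Untested sketch of the incremental-loading steps above.
// appendChunk:isFinal: would be called by whatever delivers the
// data (an NSURLConnection delegate, an input stream, etc.).
@interface IncrementalImageLoader : NSObject {
    CGImageSourceRef imageSource;   // step 2: the incremental source
    NSMutableData *accumulatedData; // step 1: the accumulating data
}
- (UIImage *)appendChunk:(NSData *)chunk isFinal:(BOOL)isFinal;
@end

@implementation IncrementalImageLoader
- (id)init {
    if ((self = [super init])) {
        imageSource = CGImageSourceCreateIncremental(NULL);
        accumulatedData = [[NSMutableData alloc] init];
    }
    return self;
}

- (UIImage *)appendChunk:(NSData *)chunk isFinal:(BOOL)isFinal {
    // Steps 3-4: the data passed to CGImageSourceUpdateData must be
    // ALL bytes accumulated so far, not just the new chunk.
    [accumulatedData appendData:chunk];
    CGImageSourceUpdateData(imageSource, (CFDataRef)accumulatedData, isFinal);

    // Step 5: try to create a partial image for the caller to draw.
    UIImage *partial = nil;
    CGImageRef cgImage = CGImageSourceCreateImageAtIndex(imageSource, 0, NULL);
    if (cgImage) {
        partial = [UIImage imageWithCGImage:cgImage];
        CGImageRelease(cgImage);
    }
    // Step 6: CGImageSourceGetStatusAtIndex(imageSource, 0) returns
    // kCGImageStatusComplete once all the data has arrived.
    return partial; // may be nil until enough data has accumulated
}

- (void)dealloc {
    if (imageSource) CFRelease(imageSource); // step 7
    [accumulatedData release];
    [super dealloc];
}
@end
```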

A 16 MB JPEG is going to take a pretty decent chunk of RAM when uncompressed. Your app is probably crashing because it is using too much memory. You can check this by reproducing the crash and then checking the device's console log in the Xcode Organizer.
You will need to either:
Reduce the dimensions of the image as much as your design will permit, greatly reducing memory usage. For example, there's no reason to ship a 16 MB JPEG if it will never be shown at full resolution.
Chop the image up into tiles and only load the tiles currently displayed on the screen and the surrounding areas. Then load additional tiles as the user scrolls around. This is how maps apps are able to display extremely large images without running out of RAM.
Remember to also test your app on the supported device with the lowest amount of RAM. These devices will probably kill your app sooner than the newest devices.

At 16 MB, the image is too big for your iPod to handle. The app is crashing because it loads the image directly, asks for too much memory, and the system has had to kill it.
You should create a smaller version of your image, in terms of both canvas size and image quality, for your app.
You could also incrementally load the image instead.
When you're testing on the simulator, it has full access to the gobs of memory on your computer - way more than the 256 MB available to your iPod 4 (there's no such thing as an iPod 4.1).

Have a look at CATiledLayer
https://developer.apple.com/library/ios/documentation/GraphicsImaging/Reference/CATiledLayer_class/Introduction/Introduction.html
There is also an example Apple project using it
https://developer.apple.com/library/ios/samplecode/PhotoScroller/Introduction/Intro.html#//apple_ref/doc/uid/DTS40010080
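A minimal CATiledLayer-backed view might be sketched like this (illustrative only; the per-tile drawing is left as a stub, and the PhotoScroller sample above shows a complete version):

```objc
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// Sketch of a view backed by CATiledLayer. The layer requests tiles
// on demand via drawRect:, so only the visible part of a huge image
// is ever decoded and drawn.
@interface TiledImageView : UIView
@end

@implementation TiledImageView
+ (Class)layerClass {
    return [CATiledLayer class]; // back this view with a tiled layer
}

- (id)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        CATiledLayer *tiledLayer = (CATiledLayer *)self.layer;
        tiledLayer.tileSize = CGSizeMake(256.0, 256.0);
        tiledLayer.levelsOfDetail = 4; // zoomed-out representations
    }
    return self;
}

- (void)drawRect:(CGRect)rect {
    // Called once per tile (possibly on a background thread), with
    // rect covering just that tile. Load and draw only the tile
    // image for this rect here, e.g. pre-cut tiles named by row/column.
}
@end
```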
Good luck

I hope you are not loading this image at app launch. Application launch should be as light as possible; if launch takes more than about 20 seconds, iOS kills the app on its own. The simulator has far more resources, so you may not see the same behavior there as on the device. You may also want to use a lazy-loading technique: load a lighter image first, and load the heavy image only when it is needed while zooming, panning, etc.
Having said that, one important aspect of [UIImage imageNamed:] is that it caches every image loaded that way, and the cached data never gets unloaded, even if the UIImage that was created is deallocated! This is good in some circumstances (e.g. smallish images in UITableView cells), but bad in others (e.g. large images).
The solution is to use [UIImage imageWithData:], which does no such caching and unloads the data when the UIImage is deallocated.
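For example (assuming the image file is map.jpg in the app bundle):

```objc
// Load without going through the imageNamed: cache; the decoded
// bitmap goes away when this UIImage is deallocated.
NSString *path = [[NSBundle mainBundle] pathForResource:@"map" ofType:@"jpg"];
NSData *imageData = [NSData dataWithContentsOfFile:path];
UIImage *mapImage = [UIImage imageWithData:imageData];
```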
Other solutions to this problem are loading a scaled-down version of the image (1) or displaying it at full resolution with the help of CATiledLayer (2):
[UIImage imageWithCGImage:scale:orientation:];
CATiledLayer example

Related

Huge PNG images in IOS

I have a lot of PNG images in my app, which is causing my app to exceed the real memory available on my iPad 2 device. My whole app folder, with lots of sound files and PNG images, is only about 50-60 MB precompiled (90 MB on device), but I still easily go up to 300 MB+ at run time... view controllers on top of former view controllers, etc., which I'm also trying to fix.
What I find strange is that just displaying one background .png image adds 12 MB to real memory usage (seen in Instruments). The image I used to fill a UIImageView in the storyboard is only 700 KB in my project folder. Taking it out, or leaving the image field empty, saves me 12 MB of memory...
I'm using a lot of these background images, as well as other foreground images, in the app, which is eating up way too much memory.
Any suggestions or help is appreciated
Thanks.
Well, a 700 KB image on disk doesn't mean a 700 KB image in memory. It is compressed while stored on disk, but when it is loaded into memory it grows in size (roughly width x height x 4 bytes once decoded).
If you are using a lot of images in your project, I would recommend using the [UIImage imageWithContentsOfFile:] method. It doesn't cache images internally, and you have more control over memory than with [UIImage imageNamed:].
For me, the general rule of thumb is this: if the image is huge and used once in the app, use [UIImage imageWithContentsOfFile:]; if the image is reused in many places across the app, use [UIImage imageNamed:].
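In code, that rule of thumb looks something like this (the file names are made up for illustration):

```objc
// One-off huge image: bypass the cache so the memory is reclaimed
// as soon as the UIImage itself is released.
NSString *bgPath = [[NSBundle mainBundle] pathForResource:@"background"
                                                   ofType:@"png"];
UIImage *background = [UIImage imageWithContentsOfFile:bgPath];

// Small icon reused in many table view cells: let UIImage cache it.
UIImage *icon = [UIImage imageNamed:@"star.png"];
```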
In addition, if you have to use the .png format because you need transparency, you might try giving .webp a chance. This format is not officially supported on iOS, but there is a third-party implementation on GitHub you can take a look at.
UPDATE:
I personally don't use Interface Builder in my apps at all, as I find it extremely time-consuming and slow. Instead I create all views programmatically, which gives me more flexibility, such as choosing between [UIImage imageWithContentsOfFile:] and [UIImage imageNamed:]. You can also just set an outlet to your UIImageView and then assign the actual image in code.
As for PNGs, there is no single preferred image type on iOS. It really depends on your case: if you need transparency, use PNG; if you just need a plain image, use JPG. This is just a simple example.
And as for .webp: this format, as I already mentioned, is not officially supported on iOS, but you can add your own support for it. Basically, .webp lets you replace .png and reduce the size of your project folder without losing transparency in your images.
Hope this helps, cheers!

iOS app : lag issue when display a huge amount of small images

Environment:
I am creating a "photo mosaic" app, and I am trying to display 1024 (32x32) small images (retina size: 30x20 px) on the screen at the same time. In total, that is the same size as a full-screen image.
Issue:
I load 1024 UIImages, create 1024 UIImageViews, and add all of them to a UIView. When I scroll to this view there is a big lag. I tested on an iPhone 4 (iOS 5) and an iPhone 5 (iOS 6): the lag only appears on the iPhone 4; the iPhone 5 is fine. (The iPhone 5 has a much better CPU, so I think that is reasonable.)
What I think:
Since all the images have already been loaded from the local directory into memory (using imageNamed:), I think the problem must be somewhere in the step that displays/renders the images.
So, any ideas about it? Any idea at all will be helpful.
Thanks so much,
UPDATE
It is much better after I took the advice from @Antwan van Houdt. Here is the principle of the code:
- (void)updateCoverImageView:(UIImageView *)smallImage {
    UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, 0.0f);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    [self.coverImageView.layer renderInContext:ctx];
    [smallImage.image drawInRect:smallImage.frame];
    self.coverImageView.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}
Then you just set the alpha value of smallImage to zero so the system won't render it, and the cover image replaces it. That works around the lag caused by displaying a large number of UIViews at the same time.
Is every image in its own view?
In either case, for such a huge number of images you should probably consider drawing them all in one view, and if that drawing method is still too slow, you could draw them once, render the view to an image, and then simply draw that image as the cached display.
There are several guides available from Apple that deal with the performance of custom view drawing and the like; you should give them a read.
Not knowing the full details of your code: anything like this should be done on a background thread. I would read up on graphics contexts and Grand Central Dispatch, then use both of those concepts to create the resulting image on another thread and display it on the main thread.
In fact, since you have many images that can all be processed in parallel, this can be done very fast using GCD.
If you can post the code, I can help you optimize it.

UIImage Performance solution

Is there a more elegant solution for this question?
I have about 70 .png images I want to load and randomly pull from an array when a button is pressed. (Up to 50 images could be on screen at once; each is 40-68 KB in size with dimensions of 150x215. Obviously there will be overlap, with covered images behind foreground images at times.) Should I use the following example to pull that off?
EXAMPLE:
- (void)viewDidLoad {
    UIImage *dogImage = [UIImage imageNamed:@"dog.png"];
    UIImage *catImage = [UIImage imageNamed:@"cat.png"];
    // And so on 68 more times, followed by...
    for (int i = 1; i <= 70; i++) {
        UIImageView *dogView = [[UIImageView alloc] initWithImage:dogImage];
        UIImageView *catView = [[UIImageView alloc] initWithImage:catImage];
        // And so on 68 more times, followed by...
        NSArray *animalArray = [[NSArray alloc] initWithObjects:dogView, catView, nil];
        // And so on 68 more times, ending the array with nil
        // other code and release calls, etc...
    }
}
Is this fine for performance, or am I going to crash the app at launch or soon after?
Are there any alternatives to doing it this way?
Hmmm, that's around 3.8MB of images. That might take some time to load while the user is sitting there waiting for the view to appear. Try it and see how long it takes.
If the image is not needed until the button is pressed, load it then! You're wasting a lot of time loading 70 images if you only use one of them at a time.
I would suggest that you don't try to load all of these in viewDidLoad. If you must have them all in the array at once, then load the images into the array on a separate thread, adding each one as it loads.
Also, you need to remember that the size you're quoting is the image's compressed size. As soon as an image is used in a view, it will be uncompressed to be drawn into the graphics context. The uncompressed size of 70 images is going to be a hell of a lot more than a few megabytes.
If you load them all at the same time, you'll likely crash the app with an out-of-memory error.
I suggest storing only the image filenames in the array and loading an image when it's actually required.
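For instance (hypothetical file names, and assuming imageNames is an ivar), store the names and decode a single image on demand:

```objc
// Keep only the 70 file names in memory, not the decoded images.
- (void)viewDidLoad {
    [super viewDidLoad];
    imageNames = [[NSMutableArray alloc] init];
    for (int i = 1; i <= 70; i++) {
        [imageNames addObject:[NSString stringWithFormat:@"animal-%d.png", i]];
    }
}

// Decode one image only when the button is actually pressed.
- (IBAction)buttonPressed:(id)sender {
    NSUInteger index = arc4random() % [imageNames count];
    UIImage *image = [UIImage imageNamed:[imageNames objectAtIndex:index]];
    // ...create a UIImageView for `image` and add it to the view here
}
```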
Also, there are some differences in the way UIImage loads image data depending on which method you use. I'm not confident enough in my memory of this to write down the exact differences, but the basic idea is that imageNamed: will load and cache the image for you, while (I think) initWithContentsOfFile: will load the image data on demand, saving you some memory until the very last minute when the image is needed.
If you're drawing that many images at once, it's time to look at drawRect:.
I'd say you could use UIImage's imageNamed: method to load the images when the button is pressed. It does some caching, so repeated images will load faster, saving you memory.
Then, for drawing them to the screen, you want to use one view and the Quartz 2D drawing methods to put them on screen. Having them all as separate UIImageViews would chew up more memory and really slow things down if you are doing any scrolling or animation.
As well as loading the images on demand, you might also consider storing them using PowerVR texture compression (PVRTC). You don't need to be using OpenGL to use this compression format - all the graphics APIs on the iPhone are set up to handle it. The files will be larger on disk than with PNG compression, but the beauty of it is that they don't need to be decompressed in memory - the compressed image data is rendered directly.

iphone animating frame rate and number of frames

I am working on an app where the customer wants to animate large images (305x332). The customer wants 50 frames looping over 1.75 seconds. I am finding that the app is very slow with this much processing: it is slow to start, slow to respond to touches, and slow to shut down. On the iPhone itself, the app will often crash or lock up the phone. See the code below. My questions:
Am I doing something to cause the poor performance, or is 50 frames too much to ask?
Are there best practices for the number of frames and the speed of an animation?
Are there best practices for the size of images in an animation?
Please let me know. Here is the code...
NSMutableArray *tempArray = [[NSMutableArray alloc] init];
for (int i = 1; i <= 50; i++) {
    [tempArray addObject:[UIImage imageNamed:[NSString stringWithFormat:@"%@-%d-%04d.JPG",
        [constitution getConstitutionWord], constitution.getAnimationEnum, i]]];
}
backgroundImage.animationImages = tempArray;
[tempArray release];
backgroundImage.animationDuration = 1.75; // seconds
backgroundImage.animationRepeatCount = 0; // 0 = loops forever
[backgroundImage startAnimating];
I ran some tests a while back and managed to max out at around 40 frames of roughly 20 KB PNGs before UIImageView animation gave up and borked.
If you need more than that, you can either switch to video or write your own animation rendering engine. The rendering engine is pretty straightforward: it runs on a timer that fetches an already-loaded UIImage from the head of a queue, updates the view, then releases the image. A separate thread preloads a few frames ahead at the tail end of the queue. This way you have at most N frames in memory at any given time. You can tweak it to find the optimum balance between timer delay and number of frames to preload for your app.
A project I worked on used that technique to display hundreds of large images with no problems.
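The queue idea above might be sketched like this (illustrative names throughout; GCD stands in for the preloading thread, and the frame file-naming scheme is an assumption):

```objc
// Sketch of a streaming frame player: a timer pops decoded frames
// from the head of a queue while a background GCD queue preloads a
// few frames ahead, so at most kMaxQueuedFrames are in memory.
static const NSUInteger kMaxQueuedFrames = 5;

@interface FramePlayer : NSObject {
    NSMutableArray *frameQueue; // decoded UIImages; head = next frame
    NSUInteger nextFrameIndex;  // next frame number to preload
    UIImageView *targetView;
}
@end

@implementation FramePlayer
// Fired by an NSTimer at the desired frame rate (e.g. 1.75 / 50 s).
- (void)tick:(NSTimer *)timer {
    if ([frameQueue count] == 0) return; // preloader fell behind; skip
    targetView.image = [frameQueue objectAtIndex:0];
    [frameQueue removeObjectAtIndex:0];
    [self preloadNextFrame];
}

- (void)preloadNextFrame {
    NSUInteger index = nextFrameIndex++;
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // imageWithContentsOfFile: avoids the imageNamed: cache, so
        // each frame really is released once it leaves the queue.
        NSString *path = [[NSBundle mainBundle]
            pathForResource:[NSString stringWithFormat:@"frame-%04lu",
                             (unsigned long)index]
                     ofType:@"jpg"];
        UIImage *frame = [UIImage imageWithContentsOfFile:path];
        dispatch_async(dispatch_get_main_queue(), ^{
            if (frame && [frameQueue count] < kMaxQueuedFrames)
                [frameQueue addObject:frame];
        });
    });
}
@end
```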
A few tips:
Use CALayers as much as you can.
Don't do ANY scaling when displaying each frame. Try to make each source image the exact size you want displayed.
Try not to cover the animation with other views. It'll slow things down.
You can load the image list from the bundle or by scanning a directory. The disadvantage of the bundle is that the images can't be updated without updating the whole app. The disadvantages of loading from a directory are that on first startup you'll have to copy the images from the bundle to a writable location, and your app's size at runtime gets larger. The main benefit is that you can update the media in that directory over the net.
Sound synchronization becomes a bit dicey: you'll have to come up with your own way to designate when to start/stop sounds. If you have sound and need it to be precise (like lip-syncing), going the video route might be more practical.
Good luck.
I think this is a little too much to ask of the iPhone. Decompressed, your images take up about 21 MB according to my back-of-the-envelope calculation (305 x 332 pixels x 4 bytes x 50 frames is roughly 20 MB). This will likely cause your app to be terminated on the phone through memory usage alone. Shifting that much data into the framebuffer is also going to cause the phone problems.
I think you need to use a more suitable animation technology. ~30 fps at 305x332 sounds like video to me. Video is compressed in such a way that you don't need to hold all of the decompressed frames in memory at once. Sadly, if you want to display video without giving the whole screen over to the built-in control, you are going to have to build your own player - VLC has been ported to the iPhone, so it could make a good starting point.
The way it was explained to me, with the way OS X (including iPhone OS) renders its views, the images would be continually stacked on top of each other. Something you may want to try is one large sprite image that includes all the frames, cropped to show one frame at a time and snapped to each frame as needed. This also reduces image file overhead and rendering time, much like CSS sprites do in web apps. It works well in the HTML5/CSS3 animations I've done on the iPhone and may have the same success with your background-image animation.

Difference between [UIImage imageNamed...] and [UIImage imageWithData...]?

I want to load some images into my application from the file system. There are two easy ways to do this:
[UIImage imageNamed:fullFileName]
or:
NSString *fileLocation = [[NSBundle mainBundle] pathForResource:fileName ofType:extension];
NSData *imageData = [NSData dataWithContentsOfFile:fileLocation];
[UIImage imageWithData:imageData];
I prefer the first one because it's a lot less code, but I have seen some people say that the image is cached and that this method uses more memory. Since I don't trust people on most other forums, I thought I'd ask the question here: is there any practical difference, and if so, which one is 'better'?
I have tried profiling my app using the Object Allocations instrument and I can't see any practical difference, though I have only tried it in the simulator, not on an iPhone itself.
It depends on what you're doing with the image. The imageNamed: method does cache the image, but in many cases that's going to help with memory use. For example, if you load an image 10 times to display along with some text in a table view, UIImage will only keep a single representation of that image in memory instead of allocating 10 separate objects. On the other hand, if you have a very large image and you're not re-using it, you might want to load the image from a data object to make sure it's removed from memory when you're done.
If you don't have any huge images, I wouldn't worry about it. Unless you see a problem (and kudos for checking Object Allocations instead of preemptively optimizing), I would choose fewer lines of code over negligible memory improvements.
In my experience, [UIImage imageNamed:] has dramatically better performance, especially when used in UITableViews.
It's not just the memory but also decoding the image; having it cached is much faster.
As the API reference for UIImage says:
+(UIImage *)imageNamed:(NSString *)name
This method looks in the system caches for an image object with the specified name and returns that object if it exists. If a matching image object is not already in the cache, this method loads the image data from the specified file, caches it, and then returns the resulting object.
+(UIImage *)imageWithContentsOfFile:(NSString *)path
This method does not cache the image object.
So we can see that if you have a lot of identical UI elements (such as UITableViewCells) that use the same image (often as an icon), then for performance reasons you want to reuse that image and save memory for other uses. Images reused like this typically sit in UI elements the user interacts with many times, so it pays to reuse them; you should choose the imageNamed: method.
On the other hand, some UI elements exist for the app's whole life cycle, such as a button or a logo view, so the images they use will also be around for the whole life cycle; you don't need to worry about whether those images are cached or not. Again, the imageNamed: method is fine.
On the contrary, applications often have UI elements that are created dynamically. For example, suppose our application supports a dynamic background the user can choose, and the backgrounds are images. We might have an interface that lists lots of different backgrounds (shown with UIImageViews); call that list view MyBackgroundListView. Once the user chooses a background image, MyBackgroundListView should be destroyed, because it has served its purpose. The next time the user wants to change the background, we can create MyBackgroundListView again. The images used by MyBackgroundListView should not be cached, or our application will run out of memory. In this case you should use the
imageWithContentsOfFile: method.
As Apple's document Supporting High-Resolution Screens In Views says:
On devices with high-resolution screens, the imageNamed:, imageWithContentsOfFile:, and initWithContentsOfFile: methods automatically look for a version of the requested image with the @2x modifier in its name. If it finds one, it loads that image instead. If you do not provide a high-resolution version of a given image, the image object still loads a standard-resolution image (if one exists) and scales it during drawing.
So you don't need to worry about image search paths for Retina screens; iOS handles that for you.
If you don't want your image to be cached, you can also use initWithContentsOfFile: directly:
NSString *fileLocation = [[NSBundle mainBundle] pathForResource:fileName ofType:extension];
UIImage *yourImage = [[[UIImage alloc] initWithContentsOfFile:fileLocation] autorelease];
I've also been told that [UIImage imageNamed:] does a little too much caching, and images are not always released. I was told to be careful with it.
imageWithData: is useful when you store your image binary in a database or are progressively downloading a large image from the web.
I would not use imageNamed: if your app has loads of big images that are not the same; I have experienced apps crashing from using too much of it.
I don't believe the image gets cached at all, and I don't know why you are all saying that. UIImage is a subclass of NSObject, which uses reference counting to keep track of the things it is related to. So when you load an image it does the same thing: if you load the same image multiple times it will (or should) keep only one copy of the image in memory and just increment the reference count each time you use something with that image. By reference counting I mean that when the count reaches 0 the object deletes itself, so alloc and retain are each +1 to the count and release is -1. Not only is this a better way to manage memory, but this style of programming also helps clean up memory leaks.