UIGraphicsBeginImageContext leads to memory leak / overflow - iPhone

In my code I stretch an image to a specified size, and that part works fine so far.
The problem is that "UIGraphicsBeginImageContext()" does not seem to release the memory of the new image, so memory fills up after about 10 minutes and the app is terminated by iOS.
Does anyone have a solution to this problem?
- (CCSprite *)createStretchedSignFromString:(NSString *)string withMaxSize:(CGSize)maxSize withImage:(UIImage *)signImage
{
    // Create a new image that will be stretched with a 10 px cap on each side
    UIImage *stretchableSignImage = [signImage stretchableImageWithLeftCapWidth:10 topCapHeight:10];
    // Set size for the new image
    CGSize newImageSize = CGSizeMake(260.0f, 78.0f);
    // Create a new graphics context with the size of the answer string and some cap
    UIGraphicsBeginImageContext(newImageSize);
    // Stretch the image to the size of the answer string
    [stretchableSignImage drawInRect:CGRectMake(0.0f, 0.0f, newImageSize.width, newImageSize.height)];
    // Create a new image from the context
    UIImage *resizedImage = UIGraphicsGetImageFromCurrentImageContext();
    // End the graphics context
    UIGraphicsEndImageContext();
    // Create a new texture from the stretched image
    CCTexture2D *tex = [[CCTexture2D alloc] initWithImage:resizedImage];
    CCSprite *spriteWithTex = [CCSprite spriteWithTexture:tex];
    [[CCTextureCache sharedTextureCache] removeTexture:tex];
    [tex release];
    // Return the new sprite for the sign with the texture
    return spriteWithTex;
}
Called by this code:
// Create image from image path
UIImage *targetSignImage = [UIImage imageWithContentsOfFile:targetSignFileName];
// Create new sprite for the sign with the texture
CCSprite *plainSign = [self createStretchedSignFromString:answerString withMaxSize:CGSizeMake(260.0f, 78.0f) withImage:targetSignImage];
Thank you so far.

I've found the solution to my problem.
First of all, the code shown above is correct and without leaks.
The problem was caused by the removal of the sprite that has plainSign as a child. The sprite is removed by a timer that runs on a different thread, and therefore uses a different NSAutoreleasePool.
[timerClass removeTarget:targetWithSign] released an empty pool.
[timerClass performSelectorOnMainThread:@selector(removeTarget:) withObject:targetWithSign waitUntilDone:NO]; released the correct pool, which contains the target sprite and its child plainSign.
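For reference, a minimal sketch of that pattern, using the names from this answer (timerClass, targetWithSign and removeTarget: come from the snippets above; the timer callback name is a hypothetical one of my own):

- (void)signTimerFired
{
    // Runs on the timer's background thread; do NOT remove the node here.
    // Forward the removal to the main thread so it happens inside the
    // main thread's autorelease pool, not the timer thread's pool.
    [timerClass performSelectorOnMainThread:@selector(removeTarget:)
                                 withObject:targetWithSign
                              waitUntilDone:NO];
}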
Thanks to SAKrisT and stigi for your suggestions.

Related

Setting frame of CCTexture to fit into a predefined frame CCSprite

I am downloading an image from a server and displaying it in my game scene. I am able to get the CCTexture2D of the image from the server and display it. The problem is that the image from the server may vary in size, but I have to display it in a CCSprite with a predefined frame.
CCSprite *temp = [CCSprite spriteWithTexture:[[CCTexture2D alloc] initWithImage:[UIImage imageWithData:data] resolutionType:kCCResolutioniPhoneFourInchDisplay]];
CCRenderTexture *test=[CCRenderTexture renderTextureWithWidth:70 height:70]; //set proper width and height
[test begin];
[temp draw];
[test end];
UIImage *img=[test getUIImageFromBuffer];
sprite_Temp = [CCSprite spriteWithCGImage:img.CGImage key:@"1"];
sprite_Temp.tag = K_TagUserImage;
sprite_Temp.scale=1;
sprite_Temp.position=ccp(432,273);
[self addChild:sprite_Temp z:1];
I am using this code to resize the CCTexture2D into the predefined CCSprite frame, but the image gets cropped to that frame, which is not what I want. Can someone tell me how to fit the original image from the server into the desired frame without it getting cropped? Thanks.
Try:
CCSprite *temp = [CCSprite spriteWithTexture:[[CCTexture2D alloc] initWithImage:[UIImage imageWithData:data] resolutionType:kCCResolutioniPhoneFourInchDisplay]];
float scaleX = 70./temp.contentSize.width;
float scaleY = 70./temp.contentSize.height;
// if you want to preserve the original texture's aspect ratio
float scale = MIN(scaleX,scaleY);
temp.scale = scale;
// or if you want to 'stretch-n-squeeze' to 70x70
temp.scaleX = scaleX;
temp.scaleY = scaleY;
// then add the sprite *temp
Usual disclaimer: not tested, done from memory, beware of division by zero :)
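Picking up the answer's own division-by-zero warning, here is a hedged sketch of the "stretch-n-squeeze" variant with a guard (temp is the sprite created above; 70x70 is the target size from the question):

CGSize cs = temp.contentSize;
if (cs.width > 0.0f && cs.height > 0.0f) {
    // stretch/squeeze to fill the 70x70 frame exactly
    temp.scaleX = 70.0f / cs.width;
    temp.scaleY = 70.0f / cs.height;
    [self addChild:temp];
}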

Resize UIImage for editing and save at a high resolution?

Is there a way to load a picture from the library or take a new one, resize it to a smaller size to be able to edit it, and then save it at the original size? I'm struggling with this and can't get it to work. I have the resize code set up like this:
firstImage = [firstImage resizedImageWithContentMode:UIViewContentModeScaleAspectFit bounds:CGSizeMake(960, 640) interpolationQuality:kCGInterpolationHigh];
and then I have a UIImageView:
[finalImage setFrame:CGRectMake(10, 100, 300, 232)];
finalImage.image = firstImage;
If I set the CGSizeMake to the original size of the picture, the process is very slow. I've seen other apps that work on a smaller image, and their editing process is fairly quick even for effects. What's the approach here?
You can refer to the Move and Scale Demo. This is a custom control that implements moving, scaling, and cropping an image, which can be really helpful to you.
Also, here is the simplest code for scaling images to a given size. Refer to it here: Resize/Scale of an Image.
You can refer to its code here:
// UIImage+Scale.h
@interface UIImage (scale)
-(UIImage*)scaleToSize:(CGSize)size;
@end
Implementation UIImage Scale Category
With the interface in place, let’s write the code for the method that will be added to the UIImage class.
// UIImage+Scale.m
#import "UIImage+Scale.h"
@implementation UIImage (scale)
-(UIImage*)scaleToSize:(CGSize)size
{
    // Create a bitmap graphics context
    // This will also set it as the current context
    UIGraphicsBeginImageContext(size);
    // Draw the scaled image in the current context
    [self drawInRect:CGRectMake(0, 0, size.width, size.height)];
    // Create a new image from the current context
    UIImage* scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    // Pop the current context from the stack
    UIGraphicsEndImageContext();
    // Return our new scaled image
    return scaledImage;
}
@end
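One possible refinement (my own assumption, not part of the original answer): on Retina devices, UIGraphicsBeginImageContext creates a 1x context, so the scaled image can look blurry. A variant using UIGraphicsBeginImageContextWithOptions with the screen's scale avoids that:

// Hypothetical Retina-aware variant of the category method above
-(UIImage*)scaleToSizeRespectingScreenScale:(CGSize)size
{
    // Passing 0.0 as the scale uses the device's main screen scale (1x or 2x)
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0f);
    [self drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaledImage;
}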
Using the UIImage Scale Method
Calling the new scale method we added to UIImage is as simple as this:
#import "UIImage+Scale.h"
...
// Create an image
UIImage *image = [UIImage imageNamed:@"myImage.png"];
// Scale the image
UIImage *scaledImage = [image scaleToSize:CGSizeMake(25.0f, 35.0f)];
Let me know if you want more help.
Hope this helps you.

Resizing an ALAsset photo takes a long time. Any way around this?

I have a blog application that I'm making. To compose a new entry, there is a "Compose Entry" view where the user can select a photo and input text. For the photo, there is a UIImageView placeholder and upon clicking this, a custom ImagePicker comes up where the user can select up to 3 photos.
This is where the problem comes in. I don't need the full resolution photo from the ALAsset, but at the same time, the thumbnail is too low resolution for me to use.
So what I'm doing at this point is resizing the fullResolution photos to a smaller size. However, this takes some time, especially when resizing up to 3 photos to a smaller size.
Here is a code snippet to show what I'm doing:
ALAssetRepresentation *rep = [[dict objectForKey:@"assetObject"] defaultRepresentation];
CGImageRef iref = [rep fullResolutionImage];
if (iref)
{
    CGRect screenBounds = [[UIScreen mainScreen] bounds];
    UIImage *previewImage;
    UIImage *largeImage;
    if ([rep orientation] == ALAssetOrientationUp) // landscape image
    {
        largeImage = [[UIImage imageWithCGImage:iref] scaledToWidth:screenBounds.size.width];
        previewImage = [[UIImage imageWithCGImage:iref] scaledToWidth:300];
    }
    else // portrait image
    {
        previewImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:300] imageRotatedByDegrees:90];
        largeImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:screenBounds.size.height] imageRotatedByDegrees:90];
    }
}
Here, from the full-resolution image, I am creating two images: a preview image (max 300 px on the long end) and a large image (max 960 px or 640 px on the long end). The preview image is what is shown in the app itself in the "new entry" preview. The large image is what will be used when uploading to the server.
The actual code I'm using to resize I grabbed from somewhere on here:
-(UIImage*)scaledToWidth:(float)i_width
{
    float oldWidth = self.size.width;
    float scaleFactor = i_width / oldWidth;
    float newHeight = self.size.height * scaleFactor;
    float newWidth = oldWidth * scaleFactor;
    UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight));
    [self drawInRect:CGRectMake(0, 0, newWidth, newHeight)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Am I doing something wrong here? As it stands, the ALAsset thumbnail is too low in clarity, but at the same time I don't need the entire full resolution. It's all working now, but the resizing takes some time. Is this just a necessary consequence?
Thanks!
It is a necessary consequence of resizing your image that it will take some amount of time. How much depends on the device, the resolution of the asset, and the format of the asset, and you don't have any control over that. But you do have control over where the resizing takes place. I suspect that right now you are resizing the image on your main thread, which will cause the UI to grind to a halt while the resizing runs. Process enough images and your app will appear hung for long enough that the user will go off and do something else (perhaps check out competing apps in the App Store).
What you should be doing is performing the resizing off the main thread. With iOS 4 and later, this has become much simpler because you can use Grand Central Dispatch to do the resizing. You can take your original block of code from above and wrap it in a block like this:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
    ALAssetRepresentation *rep = [[dict objectForKey:@"assetObject"] defaultRepresentation];
    CGImageRef iref = [rep fullResolutionImage];
    if (iref)
    {
        CGRect screenBounds = [[UIScreen mainScreen] bounds];
        __block UIImage *previewImage;
        __block UIImage *largeImage;
        if ([rep orientation] == ALAssetOrientationUp) // landscape image
        {
            largeImage = [[UIImage imageWithCGImage:iref] scaledToWidth:screenBounds.size.width];
            previewImage = [[UIImage imageWithCGImage:iref] scaledToWidth:300];
        }
        else // portrait image
        {
            previewImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:300] imageRotatedByDegrees:90];
            largeImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:screenBounds.size.height] imageRotatedByDegrees:90];
        }
        dispatch_async(dispatch_get_main_queue(), ^{
            // do whatever you need to do on the main thread here once your images are resized.
            // this is going to be things like setting the UIImageViews to show your new images
            // or adding new views to your view hierarchy
        });
    }
});
You'll have to think about things a little differently this way. For example, you've now broken up what used to be a single step into multiple steps. Code that used to run after this block will now run before the image resize is complete and before you actually do anything with the images, so make sure you don't have any dependencies on those images there or you'll likely crash.
A late answer, but for those stumbling on this question, you might want to consider using the fullScreenImage rather than the fullResolutionImage of the defaultRepresentation. It's usually much smaller, but still large enough to maintain good quality for larger thumbnails.
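A hedged sketch of that suggestion, reusing the variable names from the question's snippet (rep is the ALAssetRepresentation; scaledToWidth: is the question's own category method):

// fullScreenImage is smaller than fullResolutionImage but still large
// enough for good-quality thumbnails, so the resize is much cheaper.
CGImageRef iref = [rep fullScreenImage];
if (iref)
{
    UIImage *previewImage = [[UIImage imageWithCGImage:iref] scaledToWidth:300];
    // ... use previewImage as before
}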

UIImage returned from UIGraphicsGetImageFromCurrentImageContext leaks

The screenshot of Leak Profiling in Instruments Tool: http://i.stack.imgur.com/rthhI.png
I found that my UIImage objects are leaking, using the Instruments tool.
Per Apple's documentation, the object returned from UIGraphicsGetImageFromCurrentImageContext should be autoreleased, and I can also see an "Autorelease" event when profiling (see the first two lines of the history in the attached screenshot). However, the autorelease seems to have no effect. Why?
EDIT:
Code attached. I use the code below to "mix" two UIImages. Later on, I use an NSMutableDictionary to cache the UIImages I have "mixed", and I'm quite sure that I call removeAllObjects on that dictionary to clear the cache, so those UIImages should be cleaned up.
+ (UIImage*) mixUIImage:(UIImage*)i1 :(UIImage*)i2 :(CGPoint)i1Offset :(CGPoint)i2Offset{
    CGFloat width, height;
    if (i1) {
        width = i1.size.width;
        height = i1.size.height;
    } else if (i2) {
        width = i2.size.width;
        height = i2.size.height;
    } else {
        width = 1;
        height = 1;
    }
    // create a new bitmap image context
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, height), NO, i1.scale);
    // get context
    CGContextRef context = UIGraphicsGetCurrentContext();
    // push context to make it current
    // (need to do this manually because we are not drawing in a UIView)
    UIGraphicsPushContext(context);
    // drawing code comes here - look at the CGContext reference
    // for available operations
    //
    // this example draws the input images into the context
    [i2 drawInRect:CGRectMake(i2Offset.x, i2Offset.y, width, height)];
    [i1 drawInRect:CGRectMake(i1Offset.x, i1Offset.y, width, height)];
    // pop context
    UIGraphicsPopContext();
    // get a UIImage from the image context - enjoy!!!
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    // clean up drawing environment
    UIGraphicsEndImageContext();
    return outputImage;
}
I was getting a strange UIImage memory leak using a retained UIImage from UIGraphicsGetImageFromCurrentImageContext(). I was calling it on a background thread (in response to a timer event). The problem turned out to be, as mentioned deep in Apple's documentation, that "you should only call this function from the main thread of your application". Beware.
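A minimal sketch of that fix under the same pre-ARC assumptions as the code above (MyImageUtil, imageA, imageB and useMixedImage: are placeholder names of my own): run the context work on the main thread instead of on the timer's thread.

// Called on the timer's background thread
- (void)timerFired
{
    dispatch_async(dispatch_get_main_queue(), ^{
        // the UIGraphics* calls now run on the main thread, as the docs require
        UIImage *mixed = [MyImageUtil mixUIImage:imageA :imageB :CGPointZero :CGPointZero];
        [self useMixedImage:mixed];
    });
}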

Overlaying a UIImageView over a UIImageView and saving

I'm trying to merge two UIImageViews. The first UIImageView (theimageView) is the background, and the second UIImageView (Birdie) is an image overlaying the first one. You can load the first UIImageView from a map or take a picture. After this you can drag, rotate, and scale the second UIImageView over the first one. I want the output (the saved image) to look the same as what I see on the screen.
I got that working, but I get borders, and the quality and size are bad. I want the size to be the same as that of the chosen image, and the quality to be good. Also, I get a crash if I save it a second time right after the first.
Here is my current code:
//save actual design in photo library
- (void)captureScreen {
    UIImage *myImage = [self addImage:theImageView ToImage:Birdie];
    [myImage retain];
    UIImageWriteToSavedPhotosAlbum(myImage, self, @selector(imageSavedToPhotosAlbum:didFinishSavingWithError:contextInfo:), self);
}
- (UIImage*) addImage:(UIImage*)theimageView toImage:(UIImage*)Birdie{
    CGSize size = CGSizeMake(theimageView.size.height, theimageView.size.width);
    UIGraphicsBeginImageContext(size);
    CGPoint pointImg1 = CGPointMake(0, 0);
    [theimageView drawAtPoint:pointImg1];
    CGPoint pointImage2 = CGPointMake(0, 0);
    [Birdie drawAtPoint:pointImage2];
    UIImage* result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
But I only get errors with this code!
Thanks in advance!
Take a look at Drawing a PNG Image Into a Graphics Context for Blending Mode Manipulation
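One likely source of the errors (an assumption on my part, not confirmed in the thread): the method above takes UIImage parameters, but captureScreen passes the UIImageView objects themselves, and the selector in the call (ToImage:) doesn't match the declared name (toImage:). A hedged sketch of a corrected call might look like this, using the views' image properties:

- (void)captureScreen {
    // pass the views' images, not the views, and match the declared selector name
    UIImage *myImage = [self addImage:theImageView.image toImage:Birdie.image];
    UIImageWriteToSavedPhotosAlbum(myImage, self,
        @selector(imageSavedToPhotosAlbum:didFinishSavingWithError:contextInfo:), nil);
}

Note also that the size in addImage:toImage: swaps width and height (CGSizeMake(height, width)), which could explain the unwanted borders.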