UIGraphicsGetImageFromCurrentImageContext returns unexpected nil - iphone

I have the following code:
CGContextRef ctx = UIGraphicsGetCurrentContext();
UIGraphicsBeginImageContext(size);
screenImageContext = UIGraphicsGetCurrentContext();
ctx = screenImageContext;
UIGraphicsPushContext(UIGraphicsGetCurrentContext());
ctx = UIGraphicsGetCurrentContext();
NSLog(@" %@", screenImageContext);
UIImage * result = UIGraphicsGetImageFromCurrentImageContext(); // Returns nil
UIImageWriteToSavedPhotosAlbum(result, nil, nil, nil);
UIGraphicsPopContext();
result = UIGraphicsGetImageFromCurrentImageContext(); // returns valid result
My problem is that the first call to UIGraphicsGetImageFromCurrentImageContext returns nil, while the second one, after UIGraphicsPopContext, returns the correct result.
The docs clearly state that UIGraphicsGetImageFromCurrentImageContext will return nil when either the current context is nil or it isn't a bitmap-based graphics context, but neither of those appears to be the case here.
If anyone could shed some light on this I'd be very grateful.
Shai.

From my understanding, your issue stems from the following line:
UIGraphicsPushContext(UIGraphicsGetCurrentContext());
With this call you're pushing the context that is already the current context, which doesn't really accomplish anything.
Your code is also a little scrambled: you call UIGraphicsGetCurrentContext(), which returns the current context, and then you call UIGraphicsBeginImageContext(CGSize size), which, as stated in the docs,
Creates a bitmap-based graphics context and makes it the current context
Then you get the current graphics context again, which is now a bitmap-based graphics context thanks to the previous call, and overwrite the original CGContextRef ("ctx") that you just retrieved.
I'm not 100% certain what you were aiming to achieve with your code, but if you were just trying to capture the contents of a bitmap-based context in an image and save it to the photo album, then the following code will do that:
CGSize size = CGSizeMake(320, 480); //Screen Size on iPhone device
UIGraphicsBeginImageContext(size); //Create a new Bitmap-based graphics context (also makes this the current context)
CGContextRef screenImageContext = UIGraphicsGetCurrentContext(); //get a reference to the context we just made above
NSLog(@" %@", screenImageContext);
//NOTE: without any drawing code in here this will just be a blank image (white/alpha),
// or an image filled with whatever the current UIColor is set to,
// so you may want to add some drawing code here. Although TBH I'm not sure what you
// were originally trying to achieve.
UIImage * result = UIGraphicsGetImageFromCurrentImageContext(); // Returns a valid (if blank) image
NSLog(@" %@", result); // just output this to demonstrate that it's non-nil
UIImageWriteToSavedPhotosAlbum(result, nil, nil, nil);
UIGraphicsEndImageContext(); //Removes the current bitmap-based graphics context from the top of the stack
Hope that helps.

Related

Correct GraphicsContext When Using UIGraphicsGetImageFromCurrentImageContext

I have the following method where I'm trying to do some drawing into an image:
- (UIImage*) renderImage
{
UIGraphicsBeginImageContextWithOptions(self.size, NO, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
//drawing code
UIImage *image = [UIGraphicsGetImageFromCurrentImageContext() retain];
UIGraphicsEndImageContext();
return [image autorelease];
}
When I run this code I notice that the performance hit is much bigger than when I simply did this drawing in drawRect of a UIView. Am I drawing into the wrong graphics context here (i.e. CGContextRef context = UIGraphicsGetCurrentContext();)? Or is UIGraphicsGetImageFromCurrentImageContext just that much more expensive than drawing in drawRect?
The main difference is that the context you create requires offscreen rendering; it isn't the same context that is created for -drawRect:. So you are adding extra memory to the heap that stays there until you release the image.
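If you only need the drawing to appear on screen, a minimal sketch of the cheaper route (assuming a hypothetical custom UIView subclass; the drawing calls below are placeholders for your own) is to keep the drawing in -drawRect:, which renders into the view's own backing store instead of an extra offscreen bitmap:
- (void)drawRect:(CGRect)rect
{
    // -drawRect: already has a current graphics context set up by UIKit
    CGContextRef context = UIGraphicsGetCurrentContext();
    // ... the same drawing code you had in renderImage, e.g.:
    CGContextSetRGBFillColor(context, 0.2, 0.4, 0.6, 1.0);
    CGContextFillEllipseInRect(context, self.bounds);
}
If you genuinely need a standalone UIImage (for example to cache or export it), the renderImage approach is correct; the extra memory and time are simply the cost of rendering offscreen.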

UIImage returned from UIGraphicsGetImageFromCurrentImageContext leaks

The screenshot of Leak Profiling in Instruments Tool: http://i.stack.imgur.com/rthhI.png
I found that my UIImage objects are leaking, using the Instruments tool.
Per Apple's documentation, the object returned from UIGraphicsGetImageFromCurrentImageContext should be autoreleased, and I can also see an "Autorelease" event when profiling (see the first 2 lines of the history in my attached screenshot). However, the "autorelease" event seems to have no effect. Why?
EDIT:
Code attached. I use the code below to "mix" two UIImages, and later on I use an NSMutableDictionary to cache the UIImages I've "mixed". I'm quite sure that I've called removeAllObjects on the dictionary to clear the cache, so those UIImages should be cleaned up.
+ (UIImage*) mixUIImage:(UIImage*)i1 :(UIImage*)i2 :(CGPoint)i1Offset :(CGPoint)i2Offset{
CGFloat width , height;
if (i1) {
width = i1.size.width;
height = i1.size.height;
}else if(i2){
width = i2.size.width;
height = i2.size.height;
}else{
width = 1;
height = 1;
}
// create a new bitmap image context
//
UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, height), NO, i1.scale);
// get context
//
CGContextRef context = UIGraphicsGetCurrentContext();
// push context to make it current
// (need to do this manually because we are not drawing in a UIView)
//
UIGraphicsPushContext(context);
// drawing code comes here- look at CGContext reference
// for available operations
//
// this example draws the inputImage into the context
//
[i2 drawInRect:CGRectMake(i2Offset.x, i2Offset.y, width, height)];
[i1 drawInRect:CGRectMake(i1Offset.x, i1Offset.y, width, height)];
// pop context
//
UIGraphicsPopContext();
// get a UIImage from the image context- enjoy!!!
//
UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
// clean up drawing environment
//
UIGraphicsEndImageContext();
return outputImage;
}
I was getting a strange UIImage memory leak using a retained UIImage from UIGraphicsGetImageFromCurrentImageContext(). I was calling it on a background thread (in response to a timer event). The problem turned out to be, as mentioned deep in Apple's documentation, that "you should only call this function from the main thread of your application". Beware.
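A minimal sketch of the workaround (assuming iOS 4+ with GCD available; ImageMixer, image1, image2 and imageCache are placeholder names, not from the code above) is to hop back to the main thread before touching anything in the UIGraphics family:
// In the background timer callback, defer the UIKit drawing to the main thread.
dispatch_async(dispatch_get_main_queue(), ^{
    UIImage *mixed = [ImageMixer mixUIImage:image1 :image2 :CGPointZero :CGPointZero]; // ImageMixer is a placeholder class name
    [imageCache setObject:mixed forKey:@"mixedImageKey"]; // hypothetical cache usage
});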

What's the correct code to save a CGLayer as a PNG file?

Please note that this question is about CGLayer (which you typically use to draw offscreen), it is not about CALayer.
In iOS, what's the correct code to save a CGLayer as a PNG file? Thanks!
Again, that's CGLayer, not CALayer.
Note that you CAN NOT use UIGraphicsGetImageFromCurrentImageContext.
(From the documentation, "You can call UIGraphicsGetImageFromCurrentImageContext only when a bitmap-based graphics context is the current graphics context.")
Note that you CAN NOT use renderInContext:. renderInContext: is strictly for CALayers. CGLayers are totally different.
So, how can you actually convert a CGLayer to a PNG image? Or indeed, how can you render a CGLayer into a bitmap in some way (from which you can then easily save an image)?
Later ... Ken has answered this difficult question. I will paste in a long code example that may help people. Thanks again Ken! Amazing!
-(void)drawingExperimentation
{
// this code uses the ASTOUNDING solution by KENNYTM -- Oct/Nov2010
//
// create a CGLayer for offscreen drawing
// note. for "yourContext", ideally it should be a context from your screen, ie the
// context you "normally get" in one of your drawRect routines associated with
// drawing to the screen normally.
// UIGraphicsGetCurrentContext() also normally works but you could have colorspace woes
// so create the CGLayer called notepad...
CGLayerRef notepad = CGLayerCreateWithContext(yourContext,CGSizeMake(1500,1500), NULL);
CGContextRef notepadContext = CGLayerGetContext(notepad);
// you can for example write an image in to notepad
CGImageRef imageExamp = [[UIImage imageWithContentsOfFile:
[[NSBundle mainBundle] pathForResource:@"smallTestImage" ofType:@"png"] ] CGImage];
CGContextDrawImage( notepadContext, CGRectMake(100,100, 50,50), imageExamp);
// setting the colorspace may or may not be relevant to you
CGColorSpaceRef deviceRGB = CGColorSpaceCreateDeviceRGB();
CGContextSetFillColorSpace( notepadContext, deviceRGB );
CGColorSpaceRelease(deviceRGB); // release the colorspace we created above
// you can draw to notepad as much as you like in the normal way
// don't forget to push its context on and off your work space so you can draw to it
UIGraphicsPushContext(notepadContext);
// set the colors
CGContextSetRGBFillColor(notepadContext, 0.15,0.25,0.35, 0.45);
// draw rects (x, y, w, h here are placeholders for your own values)
UIRectFill(CGRectMake(x,y,w,h));
// draw ovals, filled stroked or whatever you wish
UIBezierPath* d = [UIBezierPath bezierPathWithOvalInRect:CGRectMake(x,y,w,h)];
[d fill];
// draw cubic and other curves (p, q, r, s, etc. below are placeholder CGPoints)
UIBezierPath *longPath = [UIBezierPath bezierPath];
longPath.lineWidth = 42;
longPath.lineCapStyle = kCGLineCapRound;
longPath.lineJoinStyle = kCGLineJoinRound;
[longPath moveToPoint:p];
[longPath addCurveToPoint:q controlPoint1:r controlPoint2:s];
[longPath addCurveToPoint:a controlPoint1:b controlPoint2:c];
[longPath addCurveToPoint:m controlPoint1:n controlPoint2:o];
[longPath closePath];
[longPath stroke];
UIGraphicsPopContext();
// so now you have a nice CGLayer.
// how to save it to a file?
// you can save it to a file using the amazing KENNY-TM-METHOD !!!
UIGraphicsBeginImageContext( CGLayerGetSize(notepad) );
CGContextRef rr = UIGraphicsGetCurrentContext();
CGContextDrawLayerAtPoint(rr, CGPointZero, notepad);
UIImage* ii = UIGraphicsGetImageFromCurrentImageContext();
NSData* pp = UIImagePNGRepresentation(ii);
[pp writeToFile:@"foo.png" atomically:YES];
UIGraphicsEndImageContext();
// you may prefer to look at it like this:
UIGraphicsBeginImageContext( CGLayerGetSize(notepad) );
CGContextDrawLayerAtPoint(UIGraphicsGetCurrentContext(), CGPointZero, notepad);
[UIImagePNGRepresentation(UIGraphicsGetImageFromCurrentImageContext()) writeToFile:@"foo.png" atomically:YES];
UIGraphicsEndImageContext();
// there are three clever steps in the KENNY-TM-METHOD:
// - start a new UIGraphics image context
// - CGContextDrawLayerAtPoint which can, in fact, draw a CGLayer
// - just use the usual UIImagePNGRepresentation to convert to a png
// done! a miracle
// if you are testing on the Mac simulator, you'll find the file
// simply in the main drive directory
return;
}
For iPhone OS, it should be possible to draw a CGLayer into a CGContext and then convert it into a UIImage, which can then be encoded as PNG and saved.
CGSize size = CGLayerGetSize(layer);
UIGraphicsBeginImageContext(size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextDrawLayerAtPoint(ctx, CGPointZero, layer);
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
NSData* pngData = UIImagePNGRepresentation(image);
[pngData writeToFile:... atomically:YES];
UIGraphicsEndImageContext();
(not tested)
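One extra detail, as an illustration only (the filename layer.png is made up): writeToFile: wants an absolute path on a device, so you would typically build one in the app's Documents directory, e.g.
// Illustrative only: build an absolute path in the app's Documents directory.
NSString *documentsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject];
NSString *pngPath = [documentsDir stringByAppendingPathComponent:@"layer.png"];
[pngData writeToFile:pngPath atomically:YES];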

Need to create a snapshot of a view at runtime

I need to take a snapshot of my current iPad view. That view loads and plays a video.
I found this function, which almost works really well.
- (UIImage*)captureView:(UIView *)view {
CGRect rect = [[UIScreen mainScreen] bounds];
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[view.layer renderInContext:context];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
(Source from: LINK)
The problem is that the current frame of the playing video is not captured: only the view is, but without the video content. Did I forget to update the display or anything else before saving the image? Is there a special function that reads the latest screen contents?
Thanks for your time and help.
Have you tried using the UIGetScreenImage() function?
It's private, but Apple allows its use, so your app will still be validated for the App Store even if you use it.
You just need to declare its prototype, so the compiler won't complain:
CGImageRef UIGetScreenImage( void );
Note: on iOS you can create a UIImage from the CGImageRef with the following UIImage method:
+ (UIImage *)imageWithCGImage:(CGImageRef)cgImage;
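Putting that together, a rough sketch (hedged: UIGetScreenImage() is private API, and its exact ownership rules aren't documented, so the CGImageRelease below is an assumption) might look like this:
// Declare the private function's prototype so the compiler won't complain.
CGImageRef UIGetScreenImage(void);

- (UIImage *)screenSnapshot
{
    CGImageRef screen = UIGetScreenImage();          // captures the composited screen contents
    UIImage *snapshot = [UIImage imageWithCGImage:screen];
    CGImageRelease(screen);                          // assumption: the caller owns the returned image
    return snapshot;
}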

iphone drawing context

I am not trying to draw on an existing component; I am simply trying to create a new context (I think) and spit out a UIImage with the contents of my drawing on it. I am using the following code:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, self.textSize.width, self.textSize.height, 8, 4 * self.textSize.width,
colorSpace, kCGImageAlphaPremultipliedFirst);
CGContextSetRGBFillColor(context, 0.0, 0.0, 0.0, 1);
NSString *text = @"Hello world";
[text drawAtPoint:CGPointMake(0, 0) forWidth:maxWidth withFont:font lineBreakMode:UILineBreakModeWordWrap];
CGImageRef imageMasked = CGBitmapContextCreateImage(context);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
UIImage * myRendering = [UIImage imageWithCGImage:imageMasked];
Unfortunately I am getting an "invalid context" error message. Searching Google only seemed to bring up people trying to draw on existing components. I want to spit out a new UIImage.
I tried the example here for creating a bitmap context - http://developer.apple.com/iphone/library/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_context/dq_context.html#//apple_ref/doc/uid/TP30001066-CH203-SW9
I still get:
CGContextGetShouldSmoothFonts: invalid context
CGContextSetFont: invalid context
CGContextSetTextMatrix: invalid context
CGContextSetFontSize: invalid context
etc...
You always draw into a context. The context itself can be linked to the display, a bitmap, or even a PDF.
Depending on what you want to achieve, you either have to create a context or use an existing one. For custom components (I assumed you mean this because you mentioned a JPanel), you just override the drawRect: method of a UIView, like so:
- (void)drawRect:(CGRect)rect {
CGContextRef ctx = UIGraphicsGetCurrentContext();
As you can see, the context has already been created for you, and you get it by calling UIGraphicsGetCurrentContext().
First of all, check if you actually created a context by checking the return value of CGBitmapContextCreate. i.e.
if ( context == NULL ) NSLog(@"glad I always check my return values!");
And check the console for any message.
If you haven't created a context, check whether you are developing on iOS 4.0 or later. If not, don't pass NULL as the first argument; malloc a chunk of memory instead.
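For what it's worth, the "invalid context" messages most likely come from the NSString drawing call: drawAtPoint:forWidth:withFont:lineBreakMode: renders into the current UIKit graphics context, and CGBitmapContextCreate does not make the new context current. A minimal sketch of one way around this (assuming textSize is a CGSize, and that maxWidth and font are defined as in the question) is to let UIKit create and manage the bitmap context:
UIGraphicsBeginImageContext(self.textSize);   // creates a bitmap context and makes it the current context
[[UIColor blackColor] setFill];               // fill color used by the string drawing below
[@"Hello world" drawAtPoint:CGPointZero
                   forWidth:maxWidth
                   withFont:font
              lineBreakMode:UILineBreakModeWordWrap];
UIImage *myRendering = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Alternatively, keep CGBitmapContextCreate and wrap the string drawing in UIGraphicsPushContext(context) / UIGraphicsPopContext() so that the drawing has a current context to target.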
OK, so the problem was that I didn't put my bitmap drawing code in viewDidLoad. I really don't understand this stuff. Why is it so unintuitive?