CGContextFillRects: invalid context - Objective-C - iPhone

I have this piece of code to give my images a color I need:
- (UIImage *)convertToMask:(UIImage *)image
{
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    CGRect imageRect = CGRectMake(0.0f, 0.0f, image.size.width, image.size.height);
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    // Draw a white background (for white mask)
    CGContextSetRGBFillColor(ctx, 1.0f, 1.0f, 1.0f, 0.9f);
    CGContextFillRect(ctx, imageRect);

    // Apply the source image's alpha
    [image drawInRect:imageRect blendMode:kCGBlendModeDestinationIn alpha:1.0f];

    UIImage *outImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outImage;
}
Everything works great in my first view, but when I add this to my detail view, it gives me these errors (it still works):
CGContextSetRGBFillColor: invalid context 0x0. This is a serious error. This application, or a library it uses, is using an invalid context and is thereby contributing to an overall degradation of system stability and reliability. This notice is a courtesy: please fix this problem. It will become a fatal error in an upcoming update.

CGContextFillRects: invalid context 0x0. This is a serious error. This application, or a library it uses, is using an invalid context and is thereby contributing to an overall degradation of system stability and reliability. This notice is a courtesy: please fix this problem. It will become a fatal error in an upcoming update.
Any idea how to get rid of this error?
Thanks.
EDIT:
The method was being called with nil for image. I easily fixed it by adding a condition. Thanks @ipmcc for the comment.
- (UIImage *)convertToMask:(UIImage *)image
{
    if (image != nil) {
        UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
        CGRect imageRect = CGRectMake(0.0f, 0.0f, image.size.width, image.size.height);
        CGContextRef ctx = UIGraphicsGetCurrentContext();

        // Draw a white background (for white mask)
        CGContextSetRGBFillColor(ctx, 1.0f, 1.0f, 1.0f, 0.9f);
        CGContextFillRect(ctx, imageRect);

        // Apply the source image's alpha
        [image drawInRect:imageRect blendMode:kCGBlendModeDestinationIn alpha:1.0f];

        UIImage *outImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return outImage;
    } else {
        return image;
    }
}

Try this:
In Xcode, add a symbolic breakpoint on CGPostError (add a symbolic breakpoint and type CGPostError in the Symbol field).
When the error happens, the debugger will stop execution and you can inspect the call stack and check the parameters.

Another option is to wrap this in a UIImage category:
// UIImage+MyMask.h
@interface UIImage (MyMask)
- (UIImage *)convertToMask;
@end

// UIImage+MyMask.m
@implementation UIImage (MyMask)

- (UIImage *)convertToMask
{
    UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
    CGRect imageRect = CGRectMake(0.0f, 0.0f, self.size.width, self.size.height);
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    // Draw a white background (for white mask)
    CGContextSetRGBFillColor(ctx, 1.0f, 1.0f, 1.0f, 0.9f);
    CGContextFillRect(ctx, imageRect);

    // Apply the source image's alpha (self is the image inside a category)
    [self drawInRect:imageRect blendMode:kCGBlendModeDestinationIn alpha:1.0f];

    UIImage *outImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outImage;
}

@end
Then you can call it like this (no need to check for nil):
UIImage *maskImage = [someImage convertToMask];

As far as I know, this happens when you call UIGraphicsBeginImageContextWithOptions with a size.width or size.height of 0.
Just add a symbolic breakpoint on CGPostError to check.
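As a small sketch of that idea (my own guard, not code from the answers), you can check for an unusable size up front; a nil image reports a size of {0, 0}, which is exactly what produces the invalid-context warnings:

#import <UIKit/UIKit.h>

// Hypothetical guard: only open an image context when the image has a
// drawable, non-zero size. A nil image fails this check too.
static BOOL MyCanBeginImageContext(UIImage *image)
{
    return image != nil && image.size.width > 0.0f && image.size.height > 0.0f;
}

Calling something like that before UIGraphicsBeginImageContextWithOptions covers the zero-size case as well as the nil case fixed in the question's edit.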

Related

UIImage with border. Invalid Context 0x0

I'm fetching an image and saving it to a temporary directory on a background thread. Before saving, I want to add a small white border to this UIImage. I'm doing this:
- (UIImage *)imageWithBorder:(UIImage *)image {
    CGSize newSize = CGSizeMake(image.size.width, image.size.height);
    CGRect rect = CGRectMake(0.0, 0.0, image.size.width, image.size.height);
    UIGraphicsBeginImageContextWithOptions(newSize, NO, image.scale);
    {
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextBeginTransparencyLayer(context, NULL);
        // Draw
        [image drawInRect:rect blendMode:kCGBlendModeNormal alpha:1.0];
        CGContextSetRGBStrokeColor(context, 1.0, 1.0, 1.0, 1.0);
        CGContextStrokeRectWithWidth(context, rect, 10);
        CGContextEndTransparencyLayer(context);
    }
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
Everything is okay: after loading that image from the temporary directory I can see the border on the image, but in the console I get invalid context 0x0 whenever this code runs. What's wrong here?
Update 01
- (UIImage *)imageWithBorder:(UIImage *)image {
    CGSize newSize = CGSizeMake(image.size.width, image.size.height);
    CGRect rect = CGRectMake(0.0, 0.0, image.size.width, image.size.height);
    UIGraphicsBeginImageContextWithOptions(newSize, NO, image.scale);
    {
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextDrawImage(context, rect, [image CGImage]);
        CGContextSetRGBStrokeColor(context, 1.0, 1.0, 1.0, 1.0);
        CGContextStrokeRectWithWidth(context, rect, 10);
    }
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
Try changing:
[image drawInRect:rect blendMode:kCGBlendModeNormal alpha:1.0];
to
CGContextDrawImage (context, rect, [image CGImage]);
Also, why use a transparency layer? You have the alpha set to 1, so you could just overwrite the original image, no?
EDIT: I just ran the same code in my app, on a background thread, and it worked flawlessly:
- (UIImage *)imageWithBorder:(UIImage *)image
{
    NSLog(@"Thread %@", [NSThread currentThread]);
    CGSize newSize = CGSizeMake(image.size.width, image.size.height);
    CGRect rect = CGRectMake(0.0, 0.0, image.size.width, image.size.height);
    UIGraphicsBeginImageContextWithOptions(newSize, NO, image.scale);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Draw
    CGContextDrawImage(context, rect, [image CGImage]);
    CGContextSetRGBStrokeColor(context, 1.0, 1.0, 1.0, 1.0);
    CGContextStrokeRectWithWidth(context, rect, 10);

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    assert(result);
    return result;
}
When I do this in viewDidAppear: (but it's a small image, maybe the size is relevant?)
dispatch_async(dispatch_get_global_queue(0, 0), ^{ [self imageWithBorder:[UIImage imageNamed:@"02-redo.png"]]; });
this is all I see in the log:
2012-08-27 13:48:04.679 Searcher[32825:11103] Thread <NSThread: 0x6d30e30>{name = (null), num = 4}
So something else is going on with your code. I'm building with Xcode 4.4.1, deployment target is 5.1.
EDIT: In your block, you cannot refer to "cell", because the cell is temporary; with recycling it may not even be visible. So route the image name through a local NSString, and at the end you need another method that the image can be posted to, along with the cell's indexPath. That method checks whether that indexPath is among the table's visible cells: if so, update the image; if not, store the image so it can be used the next time the tableView asks for the cell.
EDIT2: This code:
dispatch_async(queue, ^{
    UIImage *image = [_thumbnailMaster thumbnailForItemWithFilename:cell.label.text];
    dispatch_sync(dispatch_get_main_queue(), ^{ cell.thumbnailView.image = image; });
});
should be turned into something that does not involve "cell", as cell may go out of scope or be reused for another index (assuming recycling). Something like this:
NSString *text = cell.label.text;
NSIndexPath *path = ...;
dispatch_async(queue, ^{
    UIImage *image = [_thumbnailMaster thumbnailForItemWithFilename:text];
    dispatch_sync(dispatch_get_main_queue(), ^{ [self updateCellImage:image withPath:path]; });
});
Your method determines whether that cell is visible: if so, it updates the image; if not, it saves the image so you have it the next time you need it.
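A rough sketch of what that method could look like (my own illustration, not from the answer; the imageCache property and the self.tableView reference are assumed names):

// Hypothetical helper, called on the main queue with the finished image and
// the index path it was requested for. Assumes an NSMutableDictionary
// *imageCache property keyed by index path.
- (void)updateCellImage:(UIImage *)image withPath:(NSIndexPath *)path
{
    if (image == nil || path == nil) {
        return;
    }

    // Cache the image so cellForRowAtIndexPath: can reuse it when this row
    // is configured again after recycling.
    [self.imageCache setObject:image forKey:path];

    // cellForRowAtIndexPath: returns nil when the row is not on screen, which
    // is the "is it in visibleCells" check the answer describes.
    UITableViewCell *cell = [self.tableView cellForRowAtIndexPath:path];
    if (cell != nil) {
        cell.imageView.image = image;   // or your custom cell's thumbnailView
        [cell setNeedsLayout];
    }
}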
UIGraphicsGetCurrentContext returns the current graphics context.

CGContextRef UIGraphicsGetCurrentContext(void);

Return Value: The current graphics context.

Discussion: The current graphics context is nil by default. Prior to calling its drawRect: method, view objects push a valid context onto the stack, making it current. If you are not using a UIView object to do your drawing, however, you must push a valid context onto the stack manually using the UIGraphicsPushContext function.

You should call this function from the main thread of your application only.

This is from the UIKit Function Reference.
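To illustrate that last point, here is a minimal sketch (my own, not from the reference) of pushing a hand-built bitmap context so that UIKit drawing methods such as drawInRect: have a current context; the function name and sizes are placeholders:

#import <UIKit/UIKit.h>

// Sketch: draw a UIImage into a manually created CGBitmapContext by pushing
// the context onto UIKit's stack.
static CGImageRef MyCreateFlattenedImage(UIImage *image, CGSize size)
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL,
                                             (size_t)size.width, (size_t)size.height,
                                             8, (size_t)size.width * 4,
                                             colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    // Flip the context so UIKit's top-left-origin drawing comes out upright.
    CGContextTranslateCTM(ctx, 0, size.height);
    CGContextScaleCTM(ctx, 1.0, -1.0);

    UIGraphicsPushContext(ctx);                 // make it the current context
    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIGraphicsPopContext();                     // restore whatever was current before

    CGImageRef result = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    return result;                              // caller releases with CGImageRelease
}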

iOS Drawing App Issue

I am creating a simple drawing application in which I have a UIView in the background and a UIImageView in the foreground. I do my drawing in the UIView and have set an image on the UIImageView. I want to add a transparency effect to the UIImageView so the lines behind the image show through. I know I can do this by reducing the alpha, but I don't want to change the alpha of the image.
I want to do it with CGContextSetBlendMode, but I don't know how. Kindly help me resolve this issue.
Thanks!
IMAGE: http://www.freeimagehosting.net/q3237
UIImage *img = [UIImage imageNamed:@"Image.png"];
UIGraphicsBeginImageContext(self.view.frame.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetBlendMode(ctx, kCGBlendModeMultiply);
[img drawInRect:CGRectMake(0, 0, 768, 1004) blendMode:kCGBlendModeDarken alpha:1];
[imageView.image drawInRect:CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height)];
CGContextSetBlendMode(ctx, kCGBlendModeDarken);
imageView.image = UIGraphicsGetImageFromCurrentImageContext();
No need to keep the image view; do it like this.
Since you are drawing in two different contexts, you won't be able to use blend modes across them. Instead, draw your other stuff in your drawing view first and then draw your image:
- (void)drawRect:(CGRect)rect {
    // Do your drawing stuff first
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(context, 0, self.bounds.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawImage(context, self.bounds, self.image.CGImage);
    CGContextSetBlendMode(context, kCGBlendModeSaturation);
    CGContextSetRGBFillColor(context, 0.0, 0.0, 0.0, 1.0);
    CGContextFillRect(context, rect);
}

CGContextStrokeRect is drawing only a side of rect

I need to draw a rect filled with a color, plus its border...
The rect is filled with the color properly, but the outside border is only partially drawn: just the right side of the rect appears!
The generated UIImage is going to be used in a UITableViewCell's imageView.
- (UIImage *)legendItemWithColor:(UIColor *)color
{
    UIGraphicsBeginImageContext(self.view.frame.size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    CGRect outside = CGRectMake(128, 128, 128, 128);
    CGRect legend = CGRectInset(outside, 1, 1);
    NSLog(@"Outside: %@", NSStringFromCGRect(outside));
    NSLog(@"Legend: %@", NSStringFromCGRect(legend));

    CGContextSetFillColorWithColor(context, color.CGColor);
    CGContextFillRect(context, legend);
    CGContextSetStrokeColorWithColor(context, [UIColor blackColor].CGColor);
    CGContextStrokeRect(context, outside);

    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIGraphicsPopContext();
    return img;
}
The problem is that you pass self.view.frame.size to UIGraphicsBeginImageContext() and the drawn rectangle is then downscaled when the image is shown in the cell, so the border gets blurred away. Try to pass only the size you actually need, i.e. CGSizeMake(2*128+128+2, 2*128+128+2). Then it displays OK.
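Concretely, that means changing only the first line of legendItemWithColor: (the size below is the one suggested above; anything large enough to contain the stroked rect at (128, 128, 128, 128) works):

// Size the bitmap to the drawing itself instead of to the whole view, so the
// thin border survives instead of being blurred away when the image is
// scaled down for the cell.
UIGraphicsBeginImageContext(CGSizeMake(2 * 128 + 128 + 2, 2 * 128 + 128 + 2));

Moving the rects so they start near the origin (e.g. CGRectMake(1, 1, 128, 128)) would let you shrink the image even further.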

how to darken a UIImageView

I need to darken a UIImageView when it gets touched, almost exactly like icons on the springboard (home screen).
Should I add a UIView with 0.5 alpha and a black background? This seems clumsy. Should I be using layers or something (CALayers)?
I would let a UIImageView handle the actual drawing of the image, but toggle the image to one that's been darkened in advance. Here's some code I've used to generate darkened images with alpha maintained:
+ (UIImage *)darkenImage:(UIImage *)image toLevel:(CGFloat)level
{
    // Create a temporary view to act as a darkening layer
    CGRect frame = CGRectMake(0.0, 0.0, image.size.width, image.size.height);
    UIView *tempView = [[UIView alloc] initWithFrame:frame];
    tempView.backgroundColor = [UIColor blackColor];
    tempView.alpha = level;

    // Draw the image into a new graphics context
    UIGraphicsBeginImageContext(frame.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [image drawInRect:frame];

    // Flip the context vertically so we can draw the dark layer via a mask that
    // aligns with the image's alpha pixels (Quartz uses flipped coordinates)
    CGContextTranslateCTM(context, 0, frame.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextClipToMask(context, frame, image.CGImage);
    [tempView.layer renderInContext:context];

    // Produce a new image from this context
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *toReturn = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    UIGraphicsEndImageContext();
    [tempView release];

    return toReturn;
}
How about subclassing UIView and adding a UIImage ivar (called image)? Then you could override -drawRect: with something like this, provided you had a boolean ivar called pressed that is set while touched.
- (void)drawRect:(CGRect)rect
{
    [image drawAtPoint:CGPointMake(0.0, 0.0)];

    // If pressed, fill the rect with a dark translucent color
    if (pressed)
    {
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGContextSaveGState(ctx);
        CGContextSetRGBFillColor(ctx, 0.5, 0.5, 0.5, 0.5);
        CGContextFillRect(ctx, rect);
        CGContextRestoreGState(ctx);
    }
}
You would want to experiment with the RGBA values above. And, of course, non-rectangular shapes would require a bit more work, like a CGMutablePathRef.
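For example, a quick sketch (my own, assuming it runs inside the same drawRect: where rect is in scope) of darkening only an elliptical region with a CGMutablePathRef:

// Sketch: restrict the dark fill to a non-rectangular path instead of the
// whole rect.
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSaveGState(ctx);

CGMutablePathRef path = CGPathCreateMutable();
CGPathAddEllipseInRect(path, NULL, rect);   // any shape could be built here

CGContextAddPath(ctx, path);
CGContextSetRGBFillColor(ctx, 0.5, 0.5, 0.5, 0.5);
CGContextFillPath(ctx);                     // fills only the path's interior

CGContextRestoreGState(ctx);
CGPathRelease(path);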
UIImageView can have multiple images; you could have two versions of the image and switch to the darker one when needed.

How do I create a new image by drawing on top of an existing one using Quartz?

I have a view with a UIImageView, and I assign this UIImageView an image from the camera. Now I want to do some drawing onto the image using Core Graphics. I want to select an area by touching and drawing a line, and when the line closes into something like a circle or any other shape, I want to change that particular area into something else, for example change its color or turn it to grayscale. So far I am able to draw the line; here is an image of a line drawn over the UIImageView.
But I am unable to figure out how to draw onto the image view's image. I mean, how do I modify the image view's image?
I also want to restore the image when a clear button is tapped, something like undo. Does someone know how to achieve this?
And how do I create a rectangle when a crop button is tapped, move the rectangle anywhere on the screen, and then push a button to crop the image and save the cropped image?
These are the steps:
Create a CGBitmapContext matching the image's colorspace and dimensions.
Draw the image into that context.
Draw whatever you want on top of the image.
Create a new image from the context.
Dispose of the context.
Here's a method that takes an image, draws something on top of it and returns a new UIImage with modified contents:
- (UIImage *)modifiedImageWithImage:(UIImage *)uiImage
{
    // Build a bitmap context to draw in
    CGImageRef image = uiImage.CGImage;
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL,
                                             CGImageGetWidth(image), CGImageGetHeight(image),
                                             8, CGImageGetWidth(image) * 4,
                                             colorspace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorspace);

    // Draw the original image
    CGRect r = CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image));
    CGContextSetBlendMode(ctx, kCGBlendModeCopy);
    CGContextDrawImage(ctx, r, image);
    CGContextSetBlendMode(ctx, kCGBlendModeNormal);

    // Draw something on top
    CGContextAddEllipseInRect(ctx, CGRectInset(r, 10, 10));
    CGContextSetRGBStrokeColor(ctx, 1.0f, 1.0f, 1.0f, 0.5f);
    CGContextSetLineWidth(ctx, 16.0f);
    CGContextDrawPath(ctx, kCGPathStroke);

    CGContextAddEllipseInRect(ctx, CGRectInset(r, 10, 10));
    CGContextSetRGBStrokeColor(ctx, 0.7f, 0.0f, 0.0f, 1.0f);
    CGContextSetLineWidth(ctx, 4.0f);
    CGContextDrawPath(ctx, kCGPathStroke);

    // Create the resulting image
    image = CGBitmapContextCreateImage(ctx);
    UIImage *newImage = [[[UIImage alloc] initWithCGImage:image] autorelease];
    CGImageRelease(image);
    CGContextRelease(ctx);

    return newImage;
}
To restore to old image, just keep a reference to it.
The cropping thing is not related to the above and you should create a new question for that.
A lot easier solution would be:
- (UIImage *)modifyImage:(UIImage *)inputImage
{
    UIGraphicsBeginImageContext(inputImage.size);
    [inputImage drawInRect:CGRectMake(0, 0, inputImage.size.width, inputImage.size.height)];
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    // Drawing code using the above context goes here
    /*
     *
     */

    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}
Take a look at Overview of Quartz 2D for information on using Quartz 2D on iPhone.