I am not trying to draw on an existing component; I simply want to create a new context (I think) and spit out a UIImage with the contents of my drawing on it. I am using the following code:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, self.textSize.width, self.textSize.height, 8, 4 * self.textSize.width,
colorSpace, kCGImageAlphaPremultipliedFirst);
CGContextSetRGBFillColor(context, 0.0, 0.0, 0.0, 1);
NSString *text = @"Hello world";
[text drawAtPoint:CGPointMake(0, 0) forWidth:maxWidth withFont:font lineBreakMode:UILineBreakModeWordWrap];
CGImageRef imageMasked = CGBitmapContextCreateImage(context);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
UIImage * myRendering = [UIImage imageWithCGImage:imageMasked];
Unfortunately I am getting an "invalid context" error message. Searching Google only seemed to bring up people trying to draw on existing components. I want to spit out a new UIImage.
I tried the example here for creating a bitmap context - http://developer.apple.com/iphone/library/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_context/dq_context.html#//apple_ref/doc/uid/TP30001066-CH203-SW9
I still get:
CGContextGetShouldSmoothFonts: invalid context
CGContextSetFont: invalid context
CGContextSetTextMatrix: invalid context
CGContextSetFontSize: invalid context
etc...
You always draw against a context. The context itself can be linked to the display, a bitmap, or even a PDF.
Depending on what you want to achieve, you either have to create a context or use an existing one. For custom components (I assume you mean this because you mentioned a JPanel) you just override the drawRect: method of a UIView, like so:
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // ... draw with ctx here ...
}
As you can see, the context has already been created for you; you get it by calling UIGraphicsGetCurrentContext().
First of all, check whether you actually created a context by inspecting the return value of CGBitmapContextCreate, e.g.
if (context == NULL) NSLog(@"glad I always check my return values!");
And check the console for any message.
If you haven't created a context, check whether you are developing on iOS 4.0 or later. If not, don't pass NULL as the first argument; instead, malloc a chunk of memory yourself.
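For reference, a minimal sketch of rendering a string into a brand-new UIImage. The key point is that UIKit string drawing targets the current context, so the bitmap context must be made current; UIGraphicsBeginImageContext does that for you. textSize, maxWidth, and font stand in for the asker's values:
// Minimal sketch: render text into a fresh UIImage.
// Assumes textSize, maxWidth and font are defined as in the question.
UIGraphicsBeginImageContext(textSize); // creates a bitmap context AND makes it the current context
CGContextSetRGBFillColor(UIGraphicsGetCurrentContext(), 0.0, 0.0, 0.0, 1.0);
NSString *text = @"Hello world";
[text drawAtPoint:CGPointMake(0, 0) forWidth:maxWidth withFont:font lineBreakMode:UILineBreakModeWordWrap];
UIImage *myRendering = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();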
OK, so the problem was that I hadn't put my bitmap drawing code in viewDidLoad. I really don't understand this stuff. Why is it so unintuitive?
I have the following method where I'm trying to do some drawing into an image:
- (UIImage*) renderImage
{
UIGraphicsBeginImageContextWithOptions(self.size, NO, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
//drawing code
UIImage *image = [UIGraphicsGetImageFromCurrentImageContext() retain];
UIGraphicsEndImageContext();
return [image autorelease];
}
When I run this code, I've noticed that performance is much worse than when I simply did this drawing in drawRect of a UIView. Am I drawing into the wrong graphics context here (i.e. CGContextRef context = UIGraphicsGetCurrentContext();)? Or is UIGraphicsGetImageFromCurrentImageContext just that much more expensive than drawing in drawRect?
The main difference is that the context you create requires offscreen rendering; it isn't the same context that is created for you in -drawRect. So you are allocating additional heap memory that stays around until you release the image.
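If that extra cost matters, one mitigation (a sketch, assuming the drawing rarely changes; _cachedImage is a hypothetical ivar, not in the original post) is to render once and cache the result:
// Hypothetical caching wrapper around -renderImage; _cachedImage is an
// assumed ivar that you release whenever the drawing becomes stale.
- (UIImage *)cachedRenderedImage {
    if (_cachedImage == nil) {
        _cachedImage = [[self renderImage] retain]; // MRC, matching the question's code
    }
    return _cachedImage;
}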
- (UIImage *)roundedCornerImage:(NSInteger)cornerSize borderSize:(NSInteger)borderSize {
// If the image does not have an alpha layer, add one
UIImage *image = [self imageWithAlpha];
// Build a context that's the same dimensions as the new size
CGContextRef context = CGBitmapContextCreate(NULL,
image.size.width,
image.size.height,
CGImageGetBitsPerComponent(image.CGImage),
0,
CGImageGetColorSpace(image.CGImage),
CGImageGetBitmapInfo(image.CGImage));
// Create a clipping path with rounded corners
CGContextBeginPath(context);
[self addRoundedRectToPath:CGRectMake(borderSize, borderSize, image.size.width - borderSize * 2, image.size.height - borderSize * 2)
context:context
ovalWidth:cornerSize
ovalHeight:cornerSize];
CGContextClosePath(context);
CGContextClip(context);
// Draw the image to the context; the clipping path will make anything outside the rounded rect transparent
CGContextDrawImage(context, CGRectMake(0, 0, image.size.width, image.size.height), image.CGImage);
// Create a CGImage from the context
CGImageRef clippedImage = CGBitmapContextCreateImage(context);
CGContextRelease(context);
// Create a UIImage from the CGImage
UIImage *roundedImage = [UIImage imageWithCGImage:clippedImage];
CGImageRelease(clippedImage);
return roundedImage;
}
I have the method above and am adding rounded corners to Twitter profile images. For most of the images this works awesome. There are a few that cause the following error to occur:
CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; 32 bits/pixel; 3-component color space; kCGImageAlphaLast; 96 bytes/row.
I have done some debugging, and it looks like the only difference between the images causing errors and the ones that are not is the parameter CGImageGetBitmapInfo(image.CGImage) when creating the context. This throws the error and results in the context being null. I tried setting the last parameter to kCGImageAlphaPremultipliedLast, to no avail either: the image is drawn this time, but with much lower quality. Is there a way to get a higher-quality image on par with the rest of them? The images come from Twitter, so I'm not sure if they have different ones you can pull.
I have seen the other questions regarding this error too. None of them have solved this issue. I saw this post, but the errored images are completely blurry after that. And casting the width and height to NSInteger also didn't work. Below is a screenshot of the two profile images and their quality as well. The first one is causing the error.
Does anyone have any idea what the issue is here?
Thanks a ton. This has been killing me.
iOS does not support kCGImageAlphaLast. You need to use kCGImageAlphaPremultipliedLast.
You also need to handle the scale of your initial image. Your current code doesn't, so it downsamples the image if its scale is 2.0.
You can write the entire function more simply by using UIKit functions and classes. UIKit will take care of the scale for you; you just have to pass in the original image's scale when you ask it to create the graphics context.
- (UIImage *)roundedCornerImage:(NSInteger)cornerSize borderSize:(NSInteger)borderSize {
// If the image does not have an alpha layer, add one
UIImage *image = [self imageWithAlpha];
UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale); {
CGRect imageRect = (CGRect){ CGPointZero, image.size };
CGRect borderRect = CGRectInset(imageRect, borderSize, borderSize);
UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:borderRect
byRoundingCorners:UIRectCornerAllCorners
cornerRadii:CGSizeMake(cornerSize, cornerSize)];
[path addClip];
[image drawAtPoint:CGPointZero];
}
UIImage *roundedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return roundedImage;
}
If your imageWithAlpha method itself creates a UIImage from another UIImage, it needs to propagate the scale also.
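The imageWithAlpha helper isn't shown in the post; a minimal sketch of a scale-propagating version (a hypothetical implementation, assuming it lives in a UIImage category) might look like this:
// Hypothetical sketch of imageWithAlpha: redraws the image into an
// alpha-capable context only when needed, preserving the original scale.
- (UIImage *)imageWithAlpha {
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(self.CGImage);
    BOOL hasAlpha = (alphaInfo == kCGImageAlphaFirst ||
                     alphaInfo == kCGImageAlphaLast ||
                     alphaInfo == kCGImageAlphaPremultipliedFirst ||
                     alphaInfo == kCGImageAlphaPremultipliedLast);
    if (hasAlpha) {
        return self;
    }
    // Opaque = NO gives the new context an alpha channel; pass self.scale through.
    UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
    [self drawAtPoint:CGPointZero];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}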
I have the following code:
CGContextRef ctx = UIGraphicsGetCurrentContext();
UIGraphicsBeginImageContext(size);
screenImageContext = UIGraphicsGetCurrentContext();
ctx = screenImageContext;
UIGraphicsPushContext(UIGraphicsGetCurrentContext());
ctx = UIGraphicsGetCurrentContext();
NSLog(@"%@", screenImageContext);
UIImage * result = UIGraphicsGetImageFromCurrentImageContext(); // Returns nil
UIImageWriteToSavedPhotosAlbum(result, nil, nil, nil);
UIGraphicsPopContext();
result = UIGraphicsGetImageFromCurrentImageContext(); // returns valid result
My problem is that the first call to UIGraphicsGetImageFromCurrentImageContext returns nil, while the second one, after UIGraphicsPopContext, returns the correct result.
The docs clearly state that UIGraphicsGetImageFromCurrentImageContext will return nil when either the context is nil or the current context isn't a graphics context, but neither of those problems is happening here.
If anyone could shed some light on this, I'd be very grateful.
Shai.
From my understanding of things, your issue stems from the following line:
UIGraphicsPushContext(UIGraphicsGetCurrentContext());
With this call you're trying to make the current context the current context. This really doesn't make any sense.
Your code is also a little scrambled: you call UIGraphicsGetCurrentContext(), which returns the current context, and then you call UIGraphicsBeginImageContext(CGSize size), which, as stated in the docs:
Creates a bitmap-based graphics context and makes it the current context
Then you get the current graphics context again (which is now a bitmap-based graphics context thanks to the previous call) and overwrite the original CGContextRef ("ctx") that you just retrieved.
I'm not 100% certain what you were aiming to achieve with your code, but if you were just trying to capture the contents of a bitmap-based context in an image and save it to the photo album, then the following code will do that.
CGSize size = CGSizeMake(320, 480); //Screen Size on iPhone device
UIGraphicsBeginImageContext(size); //Create a new Bitmap-based graphics context (also makes this the current context)
CGContextRef screenImageContext = UIGraphicsGetCurrentContext(); //get a reference to the context we just made above
NSLog(@"%@", screenImageContext);
//NOTE: without any drawing code in here this will just be a blank image (white/alpha),
// or an image filled with whatever the current UIColor is set to.
//So you may want to add some drawing code here. Although TBH I'm not sure what you were
// originally trying to achieve.
UIImage * result = UIGraphicsGetImageFromCurrentImageContext(); // now returns a valid image
NSLog(@"%@", result); // just output this to demonstrate that it's non-nil
UIImageWriteToSavedPhotosAlbum(result, nil, nil, nil);
UIGraphicsEndImageContext(); //Removes the current bitmap-based graphics context from the top of the stack
Hope that helps.
I have a few UIImage objects which I want to compose into a single UIImage and then save it to disk. I'm not displaying this on the screen so it doesn't make sense to do it in -drawRect.
Is there a way of creating a context similar to the one in -drawRect: and then just drawing the UIImage objects into it using something like CGContextDrawImage(context, imgRect, img.CGImage);?
I believe you want to use a CGContextRef to draw all the images at their desired positions and then get the resulting image. The code will look something like this:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, desired_width, desired_height,
                                             8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
//This code is to illustrate what you have to do:
for (UIImage *currentImage in yourImages) {
    CGRect drawRect = CGRectMake(0, 0, currentImage.size.width, currentImage.size.height); // position each image where you want it
    CGContextDrawImage(context, drawRect, currentImage.CGImage);
}
CGImageRef mergeResult = CGBitmapContextCreateImage(context);
UIImage *mergedImage = [[UIImage alloc] initWithCGImage:mergeResult];
CGContextRelease(context);
CGImageRelease(mergeResult);
CGContextRefs can be created whenever you wish, and this allows you to do all kinds of image manipulation.
Use CGBitmapContextCreate to create the context and CGBitmapContextCreateImage to get the final result.
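Alternatively, since the goal is just an offscreen composite saved to disk, the UIKit image-context route shown elsewhere in this thread works too. A sketch, where canvasSize, imagesToCompose, and path are hypothetical placeholders for your own data:
// Sketch of UIKit-based compositing; canvasSize, imagesToCompose and path
// are illustrative names, not part of the original answer.
UIGraphicsBeginImageContextWithOptions(canvasSize, NO, 0);
for (UIImage *img in imagesToCompose) {
    [img drawAtPoint:CGPointZero]; // position each image as needed
}
UIImage *merged = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[UIImagePNGRepresentation(merged) writeToFile:path atomically:YES];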
Alright, what I am trying to do is this:
Given an image that contains a "blank" circle, I want to take an existing image from the user's library and mask it so that only a certain part of that image shows through the "blank" area.
I have tried a few masking code snippets, but they all seem to work the other way around... any tips on how to tackle this?
Unfortunately you can't use CoreAnimation to do this (which would make it rather easy).
Looking at Apple's CoreAnimation documentation:
iOS Note: As a performance consideration, iOS does not support the mask property.
Therefore the next best way to do this is to use Quartz 2D (as answered here):
CGContextRef mainViewContentContext;
CGColorSpaceRef colorSpace;
colorSpace = CGColorSpaceCreateDeviceRGB();
// create a bitmap graphics context the size of the image
mainViewContentContext = CGBitmapContextCreate (NULL, targetSize.width, targetSize.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
// free the rgb colorspace
CGColorSpaceRelease(colorSpace);
if (mainViewContentContext==NULL)
return NULL;
CGImageRef maskImage = [[UIImage imageNamed:@"mask.png"] CGImage];
CGContextClipToMask(mainViewContentContext, CGRectMake(0, 0, targetSize.width, targetSize.height), maskImage);
// thumbnailPoint, scaledWidth and scaledHeight are assumed to come from an earlier aspect-fit calculation
CGContextDrawImage(mainViewContentContext, CGRectMake(thumbnailPoint.x, thumbnailPoint.y, scaledWidth, scaledHeight), self.CGImage);
// Create CGImageRef of the main view bitmap content, and then
// release that bitmap context
CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(mainViewContentContext);
CGContextRelease(mainViewContentContext);
// convert the finished resized image to a UIImage
UIImage *theImage = [UIImage imageWithCGImage:mainViewContentBitmapContext];
// the UIImage retains the underlying CGImage, so we can
// release our reference
CGImageRelease(mainViewContentBitmapContext);
// return the image
return theImage;
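For context, this fragment reads like the body of a UIImage category method. Hypothetical usage, assuming it is wrapped as a method named maskedImageWithSize: (a name not in the original answer), might be:
// Hypothetical usage; in practice the photo would come from the user's
// library (e.g. via UIImagePickerController) rather than the bundle.
UIImage *photo = [UIImage imageNamed:@"userPhoto.png"]; // placeholder image
UIImage *masked = [photo maskedImageWithSize:CGSizeMake(100, 100)];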