How can I draw a CGImageRef context on the screen? - iPhone

I have a beautiful bitmap context (a CGContextRef), which I spent the whole day creating so I could get at alpha values ;)
It's defined like this:
CGContextRef context = CGBitmapContextCreate(bitmapData, pixWidth, pixHeight, 8, pixWidth, NULL, kCGImageAlphaOnly);
So as I understand it, that context somehow represents my image, but only "virtually", invisible somewhere in memory.
Can I stuff that into a UIImageView or draw it directly to the screen? I guess the alpha would be converted to grayscale or something like that.

You can create a UIImage by calling:
+ (UIImage *)imageWithCGImage:(CGImageRef)cgImage
and then draw the UIImage using:
- (void)drawAtPoint:(CGPoint)point

Go look at CGBitmapContextCreateImage(), that can give you a CGImageRef from your bitmap context. You can then draw that using the CGContext... functions or make a UIImage using +[UIImage imageWithCGImage:].
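For the alpha-only context from the question, a minimal sketch of that route might look like this (assuming "context" is the CGBitmapContextCreate result from above; how alpha-only pixels actually render on screen is up to Quartz, so treat this as plumbing, not a guarantee about appearance):
// Snapshot the bitmap context into a CGImage; you own it, per the Create Rule
CGImageRef cgImage = CGBitmapContextCreateImage(context);
// Wrap it in a UIImage and release the CGImage
UIImage *uiImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
// Show it in a UIImageView (remember to release under manual reference counting)...
UIImageView *imageView = [[UIImageView alloc] initWithImage:uiImage];
// ...or draw it yourself inside some view's -drawRect:
[uiImage drawAtPoint:CGPointZero];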

CGSize size = ...;
UIGraphicsBeginImageContext(size);
// ... draw into the current context here ...
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// ... later, e.g. inside a view's -drawRect: ...
CGPoint pt = ...;
[img drawAtPoint:pt];

Related

<Error>: CGBitmapContextCreate: unsupported parameter combination vs. lower resolution image

- (UIImage *)roundedCornerImage:(NSInteger)cornerSize borderSize:(NSInteger)borderSize {
    // If the image does not have an alpha layer, add one
    UIImage *image = [self imageWithAlpha];
    // Build a context that's the same dimensions as the new size
    CGBitmapInfo info = CGImageGetBitmapInfo(image.CGImage);
    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 image.size.width,
                                                 image.size.height,
                                                 CGImageGetBitsPerComponent(image.CGImage),
                                                 0,
                                                 CGImageGetColorSpace(image.CGImage),
                                                 CGImageGetBitmapInfo(image.CGImage));
    // Create a clipping path with rounded corners
    CGContextBeginPath(context);
    [self addRoundedRectToPath:CGRectMake(borderSize, borderSize, image.size.width - borderSize * 2, image.size.height - borderSize * 2)
                       context:context
                     ovalWidth:cornerSize
                    ovalHeight:cornerSize];
    CGContextClosePath(context);
    CGContextClip(context);
    // Draw the image to the context; the clipping path will make anything outside the rounded rect transparent
    CGContextDrawImage(context, CGRectMake(0, 0, image.size.width, image.size.height), image.CGImage);
    // Create a CGImage from the context
    CGImageRef clippedImage = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    // Create a UIImage from the CGImage
    UIImage *roundedImage = [UIImage imageWithCGImage:clippedImage];
    CGImageRelease(clippedImage);
    return roundedImage;
}
I have the method above and am adding rounded corners to Twitter profile images. For most of the images this works great. There are a few, though, that cause the following error to occur:
<Error>: CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; 32 bits/pixel; 3-component color space; kCGImageAlphaLast; 96 bytes/row.
I have done some debugging, and it looks like the only difference between the images causing errors and the ones that are not is the CGImageGetBitmapInfo(image.CGImage) parameter when creating the context. That call throws the error and results in the context being null. I tried setting the last parameter to kCGImageAlphaPremultipliedLast, but to no avail: the image is drawn that time, but with much lower quality. Is there a way to get a higher-quality image on par with the rest of them? The images come from Twitter, so I'm not sure whether they have different versions you can pull.
I have seen the other questions regarding this error too; none of them have solved this issue. I saw this post, but the errored images are completely blurry after that. Casting the width and height to NSInteger also didn't work. Below is a screenshot of the two profile images and their quality; the first one is causing the error.
Does anyone have any idea what the issue is here?
Thanks a ton. This has been killing me.
iOS does not support kCGImageAlphaLast. You need to use kCGImageAlphaPremultipliedLast.
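Concretely, the fix is to stop passing the source image's bitmap info straight through and ask for a supported combination instead; a sketch of the corrected context creation, using the same variables as the code above:
CGContextRef context = CGBitmapContextCreate(NULL,
                                             image.size.width,
                                             image.size.height,
                                             8,   // force 8 bits/component
                                             0,   // let Quartz pick bytes/row
                                             CGImageGetColorSpace(image.CGImage),
                                             kCGImageAlphaPremultipliedLast);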
You also need to handle the scale of your initial image. Your current code doesn't, so it downsamples the image if its scale is 2.0.
You can write the entire function more simply by using UIKit functions and classes. UIKit will take care of the scale for you; you just have to pass in the original image's scale when you ask it to create the graphics context.
- (UIImage *)roundedCornerImage:(NSInteger)cornerSize borderSize:(NSInteger)borderSize {
    // If the image does not have an alpha layer, add one
    UIImage *image = [self imageWithAlpha];
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale); {
        CGRect imageRect = (CGRect){ CGPointZero, image.size };
        CGRect borderRect = CGRectInset(imageRect, borderSize, borderSize);
        UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:borderRect
                                                   byRoundingCorners:UIRectCornerAllCorners
                                                         cornerRadii:CGSizeMake(cornerSize, cornerSize)];
        [path addClip];
        [image drawAtPoint:CGPointZero];
    }
    UIImage *roundedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return roundedImage;
}
If your imageWithAlpha method itself creates a UIImage from another UIImage, it needs to propagate the scale also.
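For example, when imageWithAlpha wraps a CGImage in a new UIImage, the scale-preserving variant of the constructor looks like this (a sketch; "original" stands for the source UIImage):
UIImage *result = [UIImage imageWithCGImage:cgImage
                                      scale:original.scale
                                orientation:original.imageOrientation];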

How to compose two UIImage objects into one UIImage outside of -drawRect:?

I have a few UIImage objects which I want to compose into a single UIImage and then save it to disk. I'm not displaying this on the screen so it doesn't make sense to do it in -drawRect.
Is there a way of creating a context similar to the one in -drawRect: and then just drawing the UIImage objects into it using something like CGContextDrawImage(context, imgRect, img.CGImage)?
I believe you want to use a bitmap CGContextRef, draw all the images into it at the desired places, and then grab the resulting image. The code will look something like this:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, desired_width, desired_height,
                                             8, 0, colorSpace,
                                             kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
// This code is to illustrate what you have to do:
for (UIImage *currentImage in yourImages) {
    CGImageRef cgImage = currentImage.CGImage;
    // x and y are wherever you want this particular image placed
    CGContextDrawImage(context,
                       CGRectMake(x, y, CGImageGetWidth(cgImage), CGImageGetHeight(cgImage)),
                       cgImage);
}
CGImageRef mergeResult = CGBitmapContextCreateImage(context);
UIImage *mergedImage = [[UIImage alloc] initWithCGImage:mergeResult];
CGContextRelease(context);
CGImageRelease(mergeResult);
CGContextRefs can be created whenever you wish, and this allows you to do all kinds of image manipulation.
Use CGBitmapContextCreate to create the context and CGBitmapContextCreateImage to get the final result.
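If you'd rather stay in UIKit, the same composition can be sketched without touching Core Graphics directly (hypothetical names: firstImage, secondImage, desiredSize):
CGSize desiredSize = CGSizeMake(320, 480);
UIGraphicsBeginImageContextWithOptions(desiredSize, NO, 0);
// draw each image wherever you want it
[firstImage drawInRect:CGRectMake(0, 0, desiredSize.width, desiredSize.height)];
[secondImage drawAtPoint:CGPointMake(20, 20)];
UIImage *merged = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// save to disk as PNG, since the result never goes on screen
NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
[UIImagePNGRepresentation(merged) writeToFile:[docs stringByAppendingPathComponent:@"merged.png"] atomically:YES];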

Is this kind of masking possible with UIImage or CGImage API in iOS

I have a UIImage with some text and would like to apply a pattern UIImage as a mask. Is this possible?
I understand that with UILabel we can get this kind of gradient using CAGradientLayer. But can this be done if the source is a UIImage?
The image may contain symbols/pictures etc. other than regular characters, hence the UIImage. Also, I could reuse the image by applying a different masking pattern depending on the context.
Is this possible?
Appreciate your help.
EDIT: Thanks for all your answers.
I understand applying the gradient to a text label or creating an image that has text.
But my goal is to get this --> Click here
i.e. I have a PNG with some drawing, like a flower, with a transparent background. I want to apply the gradient to the object inside that picture at runtime, using a gradient.png as shown in the linked picture. Is that possible with masking?
Thanks
Looks like you should be able to use CGImageMaskCreate:
- (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage {
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef masked = CGImageCreateWithMask([image CGImage], mask);
    return [UIImage imageWithCGImage:masked];
}
For a longer discussion check out the comment thread here:
http://iosdevelopertips.com/cocoa/how-to-mask-an-image.html
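One caveat: CGImageMaskCreate builds the mask from the raw pixel data, so for predictable results the mask PNG should be a grayscale image with no alpha channel (black areas show the image, white areas hide it). A hypothetical call site, assuming the method above lives on the same class (asset names are placeholders):
UIImage *photo = [UIImage imageNamed:@"photo.png"];
UIImage *maskPNG = [UIImage imageNamed:@"mask.png"];   // grayscale, no alpha
UIImage *result = [self maskImage:photo withMask:maskPNG];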
Yes, it is :)
textField.textColor = [UIColor colorWithPatternImage:[UIImage imageNamed:@"rainbowGradient.png"]];
If you want heavy control, Jason Whyne's idea might work. But I like this one, because it's about 8 lines shorter.
Here's just another way to draw an image of text masking something. It's based on the kCGBlendModeSourceIn blend mode: you draw the text on a clear background and then draw the fill all over the place.
NSString *theString = ...;
UIFont *theFont = ...;
CGSize stringSize = [theString sizeWithFont:theFont];
// The background must be clear (fully transparent), hence NO as the 2nd argument
UIGraphicsBeginImageContextWithOptions(stringSize, NO, 0);
[theString drawAtPoint:CGPointZero withFont:theFont];
// This effectively colorizes the image. Use a pattern color...
[patternColor set];
UIRectFillUsingBlendMode(CGRectMake(0, 0, stringSize.width, stringSize.height), kCGBlendModeSourceIn);
// ... or an image:
[patternImage drawInRect:CGRectMake(0, 0, stringSize.width, stringSize.height) blendMode:kCGBlendModeSourceIn alpha:1.0f];
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
The sample using a mask is fine and works well, but you are leaking.
CGImageMaskCreate and CGImageCreateWithMask both allocate (following the "Create" -> you-own-it rule),
so you should release the mask and the masked image after use:
CGImageRef masked = CGImageCreateWithMask([image CGImage], mask);
UIImage *result = [UIImage imageWithCGImage:masked];
CGImageRelease(mask);
CGImageRelease(masked);
return result;
As per ADC docs:
...
Return Value
A Quartz bitmap image mask. You are responsible for releasing this object by calling CGImageRelease.

What's the correct code to save a CGLayer as a PNG file?

Please note that this question is about CGLayer (which you typically use to draw offscreen), it is not about CALayer.
In iOS, what's the correct code to save a CGLayer as a PNG file? Thanks!
Again, that's CGLayer, not CALayer.
Note that you CAN NOT use UIGraphicsGetImageFromCurrentImageContext.
(From the documentation, "You can call UIGraphicsGetImageFromCurrentImageContext only when a bitmap-based graphics context is the current graphics context.")
Note that you CAN NOT use renderInContext:. renderInContext: is strictly for CALayers. CGLayers are totally different.
So, how can you actually convert a CGLayer to a PNG image? Or indeed, how can you render a CGLayer into a bitmap in some way (of course you can then easily save it as an image)?
Later ... Ken has answered this difficult question. I will paste in some long example code that may help people. Thanks again, Ken! Amazing!
-(void)drawingExperimentation
{
    // this code uses the ASTOUNDING solution by KENNYTM -- Oct/Nov 2010

    // create a CGLayer for offscreen drawing
    // note. for "yourContext", ideally it should be a context from your screen, i.e. the
    // context you "normally get" in one of your drawRect routines associated with
    // drawing to the screen normally.
    // UIGraphicsGetCurrentContext() also normally works, but you could have colorspace woes

    // so create the CGLayer called notepad...
    CGLayerRef notepad = CGLayerCreateWithContext(yourContext, CGSizeMake(1500, 1500), NULL);
    CGContextRef notepadContext = CGLayerGetContext(notepad);

    // you can for example write an image in to notepad
    CGImageRef imageExamp = [[UIImage imageWithContentsOfFile:
        [[NSBundle mainBundle] pathForResource:@"smallTestImage" ofType:@"png"]] CGImage];
    CGContextDrawImage(notepadContext, CGRectMake(100, 100, 50, 50), imageExamp);

    // setting the colorspace may or may not be relevant to you
    // (create and release it so the colorspace isn't leaked)
    CGColorSpaceRef rgbSpace = CGColorSpaceCreateDeviceRGB();
    CGContextSetFillColorSpace(notepadContext, rgbSpace);
    CGColorSpaceRelease(rgbSpace);

    // you can draw to notepad as much as you like in the normal way.
    // don't forget to push its context on and off your work space so you can draw to it
    UIGraphicsPushContext(notepadContext);

    // set the colors
    CGContextSetRGBFillColor(notepadContext, 0.15, 0.25, 0.35, 0.45);
    // draw rects
    UIRectFill(CGRectMake(x, y, w, h));
    // draw ovals, filled, stroked or whatever you wish
    UIBezierPath *d = [UIBezierPath bezierPathWithOvalInRect:CGRectMake(x, y, w, h)];
    [d fill];
    // draw cubic and other curves (note: the path must actually be created,
    // not left as an uninitialized pointer)
    UIBezierPath *longPath = [UIBezierPath bezierPath];
    longPath.lineWidth = 42;
    longPath.lineCapStyle = kCGLineCapRound;
    longPath.lineJoinStyle = kCGLineJoinRound;
    [longPath moveToPoint:p];
    [longPath addCurveToPoint:q controlPoint1:r controlPoint2:s];
    [longPath addCurveToPoint:a controlPoint1:b controlPoint2:c];
    [longPath addCurveToPoint:m controlPoint1:n controlPoint2:o];
    [longPath closePath];
    [longPath stroke];

    UIGraphicsPopContext();

    // so now you have a nice CGLayer.
    // how to save it to a file?
    // you can save it to a file using the amazing KENNY-TM-METHOD !!!
    UIGraphicsBeginImageContext(CGLayerGetSize(notepad));
    CGContextRef rr = UIGraphicsGetCurrentContext();
    CGContextDrawLayerAtPoint(rr, CGPointZero, notepad);
    UIImage *ii = UIGraphicsGetImageFromCurrentImageContext();
    NSData *pp = UIImagePNGRepresentation(ii);
    [pp writeToFile:@"foo.png" atomically:YES];
    UIGraphicsEndImageContext();

    // you may prefer to look at it like this:
    UIGraphicsBeginImageContext(CGLayerGetSize(notepad));
    CGContextDrawLayerAtPoint(UIGraphicsGetCurrentContext(), CGPointZero, notepad);
    [UIImagePNGRepresentation(UIGraphicsGetImageFromCurrentImageContext()) writeToFile:@"foo.png" atomically:YES];
    UIGraphicsEndImageContext();

    // there are three clever steps in the KENNY-TM-METHOD:
    // - start a new UIGraphics image context
    // - CGContextDrawLayerAtPoint which can, in fact, draw a CGLayer
    // - just use the usual UIImagePNGRepresentation to convert to a png
    // done! a miracle

    // if you are testing on your mac-simulator, you'll find the file
    // simply in the main drive directory
    return;
}
For iPhone OS, it should be possible to draw a CGLayer on a CGContext and then convert it into a UIImage, which can then be encoded as PNG and saved.
CGSize size = CGLayerGetSize(layer);
UIGraphicsBeginImageContext(size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextDrawLayerAtPoint(ctx, CGPointZero, layer);
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
NSData* pngData = UIImagePNGRepresentation(image);
[pngData writeToFile:... atomically:YES];
UIGraphicsEndImageContext();
(not tested)
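The elided path just needs to point somewhere the app can write; on iOS that is typically the Documents directory, e.g. (hypothetical file name):
NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSString *path = [docs stringByAppendingPathComponent:@"layer.png"];
[pngData writeToFile:path atomically:YES];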

mask image via another image

Alright, what I am trying to do is:
Given an image where there is a circle within that image that is "blank", I want to take an existing image from the user's library and mask it so that only a certain part of that image shows through the "blank" circle.
I have tried a few masking code samples, but they all seem to work the other way around... any tips on how to tackle this?
Unfortunately you can't use CoreAnimation to do this (which would make it rather easy).
Looking at Apple's CoreAnimation documentation:
iOS Note: As a performance consideration, iOS does not support the mask property.
Therefore the next best way to do this is to use Quartz 2D (as answered here):
CGContextRef mainViewContentContext;
CGColorSpaceRef colorSpace;
colorSpace = CGColorSpaceCreateDeviceRGB();

// create a bitmap graphics context the size of the image
mainViewContentContext = CGBitmapContextCreate(NULL, targetSize.width, targetSize.height,
                                               8, 0, colorSpace, kCGImageAlphaPremultipliedLast);

// free the rgb colorspace
CGColorSpaceRelease(colorSpace);

if (mainViewContentContext == NULL)
    return nil;

CGImageRef maskImage = [[UIImage imageNamed:@"mask.png"] CGImage];
CGContextClipToMask(mainViewContentContext, CGRectMake(0, 0, targetSize.width, targetSize.height), maskImage);
CGContextDrawImage(mainViewContentContext, CGRectMake(thumbnailPoint.x, thumbnailPoint.y, scaledWidth, scaledHeight), self.CGImage);

// Create a CGImage of the main view bitmap content, and then
// release that bitmap context
CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(mainViewContentContext);
CGContextRelease(mainViewContentContext);

// convert the finished resized image to a UIImage
UIImage *theImage = [UIImage imageWithCGImage:mainViewContentBitmapContext];

// the image data is retained by the UIImage above, so we can
// release the original CGImage
CGImageRelease(mainViewContentBitmapContext);

// return the image
return theImage;
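For context, the snippet above assumes it lives in a UIImage category method (hence self.CGImage), with targetSize, thumbnailPoint, scaledWidth and scaledHeight set up earlier in that method. A hypothetical wrapper, just to show the shape (names are placeholders, not an established API):
// hypothetical category wrapping the Quartz code above
@interface UIImage (Masking)
- (UIImage *)maskedThumbnailWithSize:(CGSize)targetSize;
@end
// usage: mask a photo from the user's library with mask.png
UIImage *photo = ...;   // e.g. from UIImagePickerController
UIImage *composed = [photo maskedThumbnailWithSize:CGSizeMake(120, 120)];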