Is this kind of masking possible with the UIImage or CGImage API in iOS (iPhone)?

I have a UIImage containing some text and would like to apply a pattern UIImage to it as a mask. Is this possible?
I understand that with a UILabel we can get this kind of gradient using CAGradientLayer, but can the same be done when the source is a UIImage?
The image may contain symbols/pictures and other things besides regular characters, hence the UIImage. I could also reuse the image by applying a different masking pattern depending on the context.
Is this possible?
Appreciate your help.
EDIT: Thanks for all your answers.
I understand how to apply a gradient to a text label, or how to create an image that contains text.
But my goal is to get this --> Click here
i.e. I have a PNG with some drawing, like a flower, on a transparent background. I want to apply a gradient to the object inside that picture at runtime, using a gradient.png as shown in the picture linked above. Is that possible with masking?
Thanks

Looks like you should be able to use CGImageMaskCreate:
- (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage {
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef masked = CGImageCreateWithMask([image CGImage], mask);
    return [UIImage imageWithCGImage:masked];
}
For a longer discussion check out the comment thread here:
http://iosdevelopertips.com/cocoa/how-to-mask-an-image.html
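A hedged usage sketch (the asset names and the imageView are hypothetical); note that CGImageMaskCreate expects grayscale mask data without an alpha channel:
    // Hypothetical assets: the picture to be masked and a grayscale mask image
    UIImage *source = [UIImage imageNamed:@"flower.png"];
    UIImage *gradientMask = [UIImage imageNamed:@"gradientMask.png"];
    imageView.image = [self maskImage:source withMask:gradientMask];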

Yes, it is :)
textField.textColor = [UIColor colorWithPatternImage:[UIImage imageNamed:@"rainbowGradient.png"]];
If you want heavy control, Jason Whyne's idea might work. But I like this one, because it's about 8 lines shorter.

Here's another way to draw an image of text masking something. It's based on the kCGBlendModeSourceIn blending mode: you draw the text on a clear background and then draw the fill over the whole area.
NSString *theString = ...;
UIFont *theFont = ...;
CGSize stringSize = [theString sizeWithFont:theFont];
// The background must be clear (fully transparent), hence NO as the 2nd argument
UIGraphicsBeginImageContextWithOptions(stringSize, NO, 0);
[theString drawAtPoint:CGPointZero withFont:theFont];
// This effectively colorizes the image. Use a pattern color...
[patternColor set];
UIRectFillUsingBlendMode(CGRectMake(0, 0, stringSize.width, stringSize.height), kCGBlendModeSourceIn);
// ... or an image:
[patternImage drawInRect:CGRectMake(0, 0, stringSize.width, stringSize.height) blendMode:kCGBlendModeSourceIn alpha:1.0f];
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

The sample using a mask is fine and works well, but you are leaking.
CGImageMaskCreate and CGImageCreateWithMask both allocate (following the Core Foundation "Create" rule: you own what you create),
so you should release the mask and the masked image after use:
CGImageRef masked = CGImageCreateWithMask([image CGImage], mask);
UIImage * result = [UIImage imageWithCGImage:masked];
CGImageRelease(mask);
CGImageRelease(masked);
return result;
As per ADC docs:
...
Return Value
A Quartz bitmap image mask. You are responsible for releasing this object by calling CGImageRelease.

Image mask without transparent part

I've been looking into this for a while now.
I'm trying to put a gray image over a cup.
Currently I'm stretching a UIImageView with a grey background to the amount the slider has moved:
- (void)sliderChanged:(CGFloat)value {
    drinksView.grayArea.frame = CGRectMake(0, -value, 372, value);
}
I know this is very dirty and not what I want...
What I want is for the grey part to cover only the area where there is a cup (i.e. the part where the image is not transparent). The cup image just has a transparent background.
Does anybody have an idea of how to achieve this? I'm a noob with masks, many tutorials have led me nowhere, and I don't even know if it's possible.
P.S.: Drawing a path around the cup is not possible, because the cup image can change to a glass.
The easiest way I can think of is to use a masked CALayer. This is what you need to do:
Instead of using a UIImageView, use a CALayer with gray background as your overlay. Your sliderChanged: method would remain untouched, except that drinksView.grayArea would be a layer instead of a view.
So far the effect will be exactly the same as before. Now, you need to set the grayArea's mask. Do the following:
CALayer *maskLayer = [CALayer new];
maskLayer.contents = (id)myCupImage.CGImage;
grayArea.mask = maskLayer;
I think by default the layer will stretch its contents as its size changes. We don't want that. You can fix this by setting the layer's contentsGravity to, say, kCAGravityTop.
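For reference, a minimal sketch of that one-line fix, continuing the maskLayer from the snippet above:
    // Keep the cup image at its natural size, pinned to the top, instead of stretching it
    maskLayer.contentsGravity = kCAGravityTop;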
That should do what you want.
One caveat: I'm not quite sure how masks cope with changed content gravity. If you have issues on that front, you can fix it by adding a container layer:
Set a fixed frame for grayArea (equal to the size of the cup image).
Instead of adding grayArea directly, introduce a container layer for it:
CALayer * container = [CALayer new];
container.masksToBounds = YES;
[container addSublayer:grayArea];
[drinksView.layer addSublayer:container];
In your sliderChanged:, change the frame of container instead of the grayArea.
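A minimal sketch of that change, assuming the container layer is kept in a property and reusing the same frame math as the question:
- (void)sliderChanged:(CGFloat)value {
    // Same frame math as before, but applied to the clipping container layer;
    // its masksToBounds then crops the gray layer inside it
    self.container.frame = CGRectMake(0, -value, 372, value);
}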
Hope this works.
First of all, you will need to get familiar with this method:
// masks the item based on the MaskImage
- (UIImage *)itemMask:(UIImage *)image withMask:(UIImage *)maskImage
{
    UIImage *afterMasking = nil;
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef masked = CGImageCreateWithMask([image CGImage], mask);
    CFRelease(mask);
    afterMasking = [UIImage imageWithCGImage:masked];
    CFRelease(masked);
    return afterMasking;
}
You feed it your cup image and the mask image together, and it will mask the cup image. It masks only the cup image and nothing underneath, so you don't have to worry about the rest of the view.
The problem you have is that the grey box resizes. How I would approach this is to crop the mask according to the slider value: make a black box and change its size the way you currently do with the grey background, BEFORE you feed it through the method (a rough sketch of that step follows the code below). That should be quick to do, so I won't elaborate, but at the end you will have something like this. Pardon the half-elaborated graphics.
drinksView.grayLayer = [CALayer new];
drinksView.grayLayer.frame = CGRectMake(0, 0, 372, 0);
drinksView.grayLayer.contentsGravity = kCAGravityTop;
CALayer * topLayer = [CALayer new];
topLayer.frame = CGRectMake(0, 0, 372, 367);
UIImage *grayImage = [UIImage imageNamed:@"grayDrink.png"];
topLayer.contents = (id) grayImage.CGImage;
CALayer * maskLayer = [CALayer new];
maskLayer.contents = (id)cupImage.CGImage;
maskLayer.masksToBounds = YES;
maskLayer.bounds = CGRectMake(0, 0, 372, 367);
maskLayer.position = CGPointMake(186,184);
[topLayer setMask:maskLayer];
[drinksView.grayLayer addSublayer:topLayer];
[drinksView.grayLayer setMasksToBounds:YES];
[[drinksView layer]addSublayer:drinksView.grayLayer];
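For the "crop the mask according to the slider value" step mentioned above, a rough sketch; the helper name and the top-down clipping direction are my own assumptions:
// Hypothetical helper: returns the cup mask clipped to `height` points from the top,
// so only that part of the cup ends up covered by the gray overlay.
- (UIImage *)croppedMask:(UIImage *)maskImage toHeight:(CGFloat)height {
    UIGraphicsBeginImageContextWithOptions(maskImage.size, NO, maskImage.scale);
    UIRectClip(CGRectMake(0, 0, maskImage.size.width, height));
    [maskImage drawAtPoint:CGPointZero];
    UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return cropped;
}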

<Error>: CGBitmapContextCreate: unsupported parameter combination vs. lower resolution image

- (UIImage *)roundedCornerImage:(NSInteger)cornerSize borderSize:(NSInteger)borderSize {
// If the image does not have an alpha layer, add one
UIImage *image = [self imageWithAlpha];
// Build a context that's the same dimensions as the new size
CGBitmapInfo info = CGImageGetBitmapInfo(image.CGImage);
CGContextRef context = CGBitmapContextCreate(NULL,
image.size.width,
image.size.height,
CGImageGetBitsPerComponent(image.CGImage),
0,
CGImageGetColorSpace(image.CGImage),
CGImageGetBitmapInfo(image.CGImage));
// Create a clipping path with rounded corners
CGContextBeginPath(context);
[self addRoundedRectToPath:CGRectMake(borderSize, borderSize, image.size.width - borderSize * 2, image.size.height - borderSize * 2)
context:context
ovalWidth:cornerSize
ovalHeight:cornerSize];
CGContextClosePath(context);
CGContextClip(context);
// Draw the image to the context; the clipping path will make anything outside the rounded rect transparent
CGContextDrawImage(context, CGRectMake(0, 0, image.size.width, image.size.height), image.CGImage);
// Create a CGImage from the context
CGImageRef clippedImage = CGBitmapContextCreateImage(context);
CGContextRelease(context);
// Create a UIImage from the CGImage
UIImage *roundedImage = [UIImage imageWithCGImage:clippedImage];
CGImageRelease(clippedImage);
return roundedImage;
}
I have the method above and am adding rounded corners to Twitter profile images. For most of the images this works awesome. There are a few that cause the following error to occur:
CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; 32 bits/pixel; 3-component color space; kCGImageAlphaLast; 96 bytes/row.
I have done some debugging, and it looks like the only difference between the images causing errors and the ones that don't is the CGImageGetBitmapInfo(image.CGImage) parameter passed when creating the context. This throws the error and results in the context being NULL. I tried setting that last parameter to kCGImageAlphaPremultipliedLast, to no avail: the image is drawn this time, but with much lower quality. Is there a way to get a higher-quality image, on par with the rest of them? The images come from Twitter, so I'm not sure whether they offer different versions you can pull.
I have seen the other questions regarding this error too; none of them have solved this issue. I saw this post, but the errored images are completely blurry after that. Casting the width and height to NSInteger also didn't work. Below is a screenshot of the two profile images and their quality; the first one is causing the error.
Does anyone have any idea what the issue is here?
Thanks a ton. This has been killing me.
iOS does not support kCGImageAlphaLast. You need to use kCGImageAlphaPremultipliedLast.
You also need to handle the scale of your initial image. Your current code doesn't, so it downsamples the image if its scale is 2.0.
You can write the entire function more simply by using UIKit functions and classes. UIKit will take care of the scale for you; you just have to pass in the original image's scale when you ask it to create the graphics context.
- (UIImage *)roundedCornerImage:(NSInteger)cornerSize borderSize:(NSInteger)borderSize {
    // If the image does not have an alpha layer, add one
    UIImage *image = [self imageWithAlpha];
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale); {
        CGRect imageRect = (CGRect){ CGPointZero, image.size };
        CGRect borderRect = CGRectInset(imageRect, borderSize, borderSize);
        UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:borderRect
                                                    byRoundingCorners:UIRectCornerAllCorners
                                                          cornerRadii:CGSizeMake(cornerSize, cornerSize)];
        [path addClip];
        [image drawAtPoint:CGPointZero];
    }
    UIImage *roundedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return roundedImage;
}
If your imageWithAlpha method itself creates a UIImage from another UIImage, it needs to propagate the scale also.
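A minimal sketch of what propagating the scale could look like inside a hypothetical imageWithAlpha, assuming it ends up with a newly built CGImageRef called newImageRef:
    // Carry the original scale (and orientation) over to the new UIImage
    UIImage *result = [UIImage imageWithCGImage:newImageRef
                                          scale:self.scale
                                    orientation:self.imageOrientation];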

Image Masking + iPhone SDK

I want to mask two images
Image 1:
Image 2:
Now I want to merge these two images so that image2 sits in the centre of image1.
I've read Any idea why this image masking code does not work? and "How to Mask an Image"
But when I use this function:
- (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage {
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef masked = CGImageCreateWithMask([image CGImage], mask);
    return [UIImage imageWithCGImage:masked];
}
the output I get is this:
As far as I can tell, the problem is not with maskImage:withMask:. The output you posted as wrong is not actually wrong: where the pixels of image1 are black, image2 is not visible.
The problem is probably either in the functions you used to load the image and the mask, or in the code producing the graphical output.
Actually, it seems to me that you got the right output, but in the wrong colorspace (grayscale without alpha). Check whether the image argument you supply is actually in RGBA format, and whether the returned UIImage is being drawn onto some other CGContext that uses DeviceGray as its colorspace.
The other possible cause that comes to mind is that you swapped image1 and image2. According to the documentation, the mask should be scaled up to the size of the image (see below), but you have the small image there. This also seems plausible because image1 is grayscale, although when I tried to swap image1 and image2 I got image1 back unmodified as output.
To run the tests I used the images attached, and then copied & pasted the method -(UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage as it is.
I used a simple iOS project with one view and a UIImageView inside (with tag 1), and the following code in the controller:
- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    UIImageView *imageView = (UIImageView *)[self.view viewWithTag:1];
    UIImage *im1 = [UIImage imageNamed:@"image1"];
    UIImage *im2 = [UIImage imageNamed:@"image2"];
    imageView.image = [self maskImage:im2 withMask:im1];
}
I get the following output (which is right):
If by mistake you swap im1 and im2, you instead get image1 unmodified.

Texture from UIColor?

I am drawing a pie chart, and each slice has a different color. I need to give the slices a textured look, not just a plain color. Any ideas how to do this? I don't want to bundle an image to use as a texture for every possible color, so I need to generate the texture somehow. Any ideas? Thank you!
PS: this is an iPhone project (I can't use Core Image).
Use colorWithPatternImage with UIColor.
Edit: Sorry should have read the question properly.
You will need to use a UIKit bitmap graphics context to create an image you can use with colorWithPatternImage:. I would suggest loading a grayscale texture image, tinting it with a method similar to this, and then using it as a pattern in UIColor.
So you would have a method along the lines of this:
- (UIColor *)texturedPatternWithTint:(UIColor *)tint {
    UIImage *texture = [UIImage imageNamed:@"texture.png"];
    CGRect wholeImage = CGRectMake(0, 0, texture.size.width, texture.size.height);
    UIGraphicsBeginImageContextWithOptions(texture.size, NO, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextDrawImage(context, wholeImage, texture.CGImage);
    // Multiply the tint over the grayscale texture
    CGContextSetBlendMode(context, kCGBlendModeMultiply);
    CGContextSetFillColorWithColor(context, tint.CGColor);
    CGContextFillRect(context, wholeImage);
    UIImage *tintedTexture = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return [UIColor colorWithPatternImage:tintedTexture];
}
(not tested)
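A hedged usage sketch, e.g. inside the pie chart view's drawRect:; slicePath here is a hypothetical UIBezierPath describing one wedge:
    // Tint the shared grayscale texture per slice and fill the wedge with the pattern
    UIColor *sliceFill = [self texturedPatternWithTint:[UIColor redColor]];
    [sliceFill setFill];
    [slicePath fill];   // the pattern tiles across the filled wedge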

How can I draw a CGImageRef / bitmap context on the screen?

I have a beautiful bitmap context, which I spent the whole day creating in order to get alpha values ;)
It's created like this:
CGContextRef context = CGBitmapContextCreate(bitmapData, pixWidth, pixHeight, 8, pixWidth, NULL, kCGImageAlphaOnly);
So to my understanding, that context somehow represents my image, but only "virtually", invisible somewhere in memory.
Can I stuff that into a UIImageView or draw it directly to the screen? I guess the alpha would be converted to grayscale or something like that.
You can create a UIImage by calling:
+ (UIImage *)imageWithCGImage:(CGImageRef)cgImage
and then draw the UIImage using:
- (void)drawAtPoint:(CGPoint)point
Go look at CGBitmapContextCreateImage(), that can give you a CGImageRef from your bitmap context. You can then draw that using the CGContext... functions or make a UIImage using +[UIImage imageWithCGImage:].
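A minimal sketch of that second route, assuming context is the alpha-only bitmap context from the question:
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);              // imageWithCGImage: retains it, so release our reference
    [image drawAtPoint:CGPointZero];      // e.g. inside drawRect:, or assign it to a UIImageView's image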
CGSize size = ...;
UIGraphicsBeginImageContext(size);
...
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
...
CGPoint pt = ...;
[img drawAtPoint:pt];