mask image via another image - iphone

Alright, what I am trying to do is this:
Given an image that contains a "blank" circle, I want to take an existing image from the user's library and mask it so that only a certain part of it shows through the "blank" area of that first image.
I have tried a few pieces of masking code, but they all seem to work the other way around... any tips on how to tackle this?

Unfortunately you can't use CoreAnimation to do this (which would make it rather easy).
Looking at Apple's CoreAnimation documentation:
iOS Note: As a performance consideration, iOS does not support the mask property.
Therefore the next best way to do this is to use Quartz 2D (as answered here):
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

// create a bitmap graphics context the size of the target image
CGContextRef mainViewContentContext = CGBitmapContextCreate(NULL, targetSize.width, targetSize.height,
                                                            8, 0, colorSpace, kCGImageAlphaPremultipliedLast);

// free the RGB colorspace
CGColorSpaceRelease(colorSpace);

if (mainViewContentContext == NULL)
    return NULL;

// clip the context to the mask, then draw the source image through it
CGImageRef maskImage = [[UIImage imageNamed:@"mask.png"] CGImage];
CGContextClipToMask(mainViewContentContext, CGRectMake(0, 0, targetSize.width, targetSize.height), maskImage);
CGContextDrawImage(mainViewContentContext, CGRectMake(thumbnailPoint.x, thumbnailPoint.y, scaledWidth, scaledHeight), self.CGImage);

// create a CGImageRef of the bitmap content, and then release the bitmap context
CGImageRef maskedImageRef = CGBitmapContextCreateImage(mainViewContentContext);
CGContextRelease(mainViewContentContext);

// convert the finished masked image to a UIImage
UIImage *theImage = [UIImage imageWithCGImage:maskedImageRef];

// the UIImage retains the CGImage, so we can release the original
CGImageRelease(maskedImageRef);

// return the image
return theImage;
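(This snippet appears to come from a UIImage category method, hence self.CGImage; targetSize, thumbnailPoint, scaledWidth, and scaledHeight are supplied by the surrounding code.) For the original question, showing a library photo only through the blank circle of a frame image, one way to finish the job is to composite the masked photo under the frame. A sketch, where frameImage and maskedPhoto are hypothetical placeholder names:

UIImage *frameImage = [UIImage imageNamed:@"frame.png"];   // hypothetical: the image with the blank circle
UIImage *maskedPhoto = [UIImage imageNamed:@"masked.png"]; // hypothetical: the result of the masking code above
CGRect fullRect = CGRectMake(0, 0, frameImage.size.width, frameImage.size.height);
UIGraphicsBeginImageContext(frameImage.size);
[maskedPhoto drawInRect:fullRect]; // photo first, so it shows through the hole
[frameImage drawInRect:fullRect];  // frame on top
UIImage *composed = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();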

Related

How to compose two UIImage objects into one UIImage outside of -drawRect:?

I have a few UIImage objects which I want to compose into a single UIImage and then save to disk. I'm not displaying this on screen, so it doesn't make sense to do it in -drawRect:.
Is there a way of creating a context similar to the one in -drawRect: and then just drawing the UIImage objects into it using something like CGContextDrawImage(context, imgRect, img.CGImage)?
I believe you want to use a CGContextRef to draw all the images at their desired positions and then get the resulting image. The code will look something like this:
// create the colorspace separately so it can be released (the original inline
// call would leak it); desired_width and desired_height are yours to supply
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, desired_width, desired_height,
                                             8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
// this code is to illustrate what you have to do: draw each image at its desired
// position (note that Core Graphics uses a bottom-left origin)
for (UIImage *currentImage in images) { // 'images' is your NSArray of UIImage objects
    CGContextDrawImage(context,
                       CGRectMake(0, 0, currentImage.size.width, currentImage.size.height), // place each image where you want it
                       currentImage.CGImage);
}
CGImageRef mergeResult = CGBitmapContextCreateImage(context);
mergedImage = [[UIImage alloc] initWithCGImage:mergeResult];
CGContextRelease(context);
CGImageRelease(mergeResult);
CGContextRefs can be created whenever you wish, which allows you to do all kinds of image manipulation. Use CGBitmapContextCreate to create the context and CGBitmapContextCreateImage to get the final result.
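If you don't need a custom pixel format, a shorter route is UIKit's image context, which keeps the familiar top-left origin. A sketch, reusing the 'images', 'desired_width', and 'desired_height' placeholders from above:

UIGraphicsBeginImageContext(CGSizeMake(desired_width, desired_height));
for (UIImage *currentImage in images) {
    // drawAtPoint: uses UIKit's top-left coordinate system; pick each image's position here
    [currentImage drawAtPoint:CGPointZero];
}
UIImage *mergedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();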

How to get rid of previous reflection when reflecting a UIImageView (with changing pictures)?

I have managed to use Apple's reflection sample app to create a reflection from a UIImageView.
The problem is that when I change the picture inside the UIImageView, the reflection of the previously displayed picture remains on screen. The reflection of the next picture then overlaps the previous one.
How do I ensure that the previous reflection is removed when I change to the next picture?
Thank you so much. I hope my question is not too basic.
Here is the code which I have used so far:
// reflection
self.view.autoresizesSubviews = YES;
self.view.userInteractionEnabled = YES;
// create the reflection view
CGRect reflectionRect = currentView.frame;
// the reflection is a fraction of the size of the view being reflected
reflectionRect.size.height = reflectionRect.size.height * kDefaultReflectionFraction;
// and is offset to be at the bottom of the view being reflected
reflectionRect = CGRectOffset(reflectionRect, 0, currentView.frame.size.height);
reflectionView = [[UIImageView alloc] initWithFrame:reflectionRect];
// determine the size of the reflection to create
NSUInteger reflectionHeight = currentView.bounds.size.height * kDefaultReflectionFraction;
// create the reflection image, assign it to the UIImageView and add the image view to the containerView
reflectionView.image = [self reflectedImage:currentView withHeight:reflectionHeight];
reflectionView.alpha = kDefaultReflectionOpacity;
[self.view addSubview:reflectionView];
Then the code below is used to form the reflection:
CGImageRef CreateGradientImage(int pixelsWide, int pixelsHigh)
{
    CGImageRef theCGImage = NULL;

    // gradient is always black-white, and the mask must be in the gray colorspace
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();

    // create the bitmap context
    CGContextRef gradientBitmapContext = CGBitmapContextCreate(nil, pixelsWide, pixelsHigh,
                                                               8, 0, colorSpace, kCGImageAlphaNone);

    // define the start and end grayscale values (with the alpha, even though
    // our bitmap context doesn't support alpha the gradient requires it)
    CGFloat colors[] = {0.0, 1.0, 1.0, 1.0};

    // create the CGGradient and then release the gray color space
    CGGradientRef grayScaleGradient = CGGradientCreateWithColorComponents(colorSpace, colors, NULL, 2);
    CGColorSpaceRelease(colorSpace);

    // create the start and end points for the gradient vector (straight down)
    CGPoint gradientStartPoint = CGPointZero;
    CGPoint gradientEndPoint = CGPointMake(0, pixelsHigh);

    // draw the gradient into the gray bitmap context
    CGContextDrawLinearGradient(gradientBitmapContext, grayScaleGradient, gradientStartPoint,
                                gradientEndPoint, kCGGradientDrawsAfterEndLocation);
    CGGradientRelease(grayScaleGradient);

    // convert the context into a CGImageRef and release the context
    theCGImage = CGBitmapContextCreateImage(gradientBitmapContext);
    CGContextRelease(gradientBitmapContext);

    // return the imageref containing the gradient
    return theCGImage;
}

CGContextRef MyCreateBitmapContext(int pixelsWide, int pixelsHigh)
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // create the bitmap context
    CGContextRef bitmapContext = CGBitmapContextCreate(nil, pixelsWide, pixelsHigh, 8,
                                                       0, colorSpace,
                                                       // this will give us an optimal BGRA format for the device:
                                                       (kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst));
    CGColorSpaceRelease(colorSpace);
    return bitmapContext;
}
- (UIImage *)reflectedImage:(UIImageView *)fromImage withHeight:(NSUInteger)height
{
    if (!height) return nil;

    // create a bitmap graphics context the size of the image
    CGContextRef mainViewContentContext = MyCreateBitmapContext(fromImage.bounds.size.width, height);

    // offset the context -
    // This is necessary because, by default, the layer created by a view for caching its content is flipped.
    // But when you actually access the layer content and have it rendered, it is inverted. Since we're only creating
    // a context the size of our reflection view (a fraction of the size of the main view), we have to translate the
    // context by the delta in size, and render it.
    CGFloat translateVertical = fromImage.bounds.size.height - height;
    CGContextTranslateCTM(mainViewContentContext, 0, -translateVertical);

    // render the layer into the bitmap context
    CALayer *layer = fromImage.layer;
    [layer renderInContext:mainViewContentContext];

    // create a CGImageRef of the main view bitmap content, and then release that bitmap context
    CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(mainViewContentContext);
    CGContextRelease(mainViewContentContext);

    // create a CGImage containing a gradient that will be used for masking the
    // main view content to create the 'fade' of the reflection. The CGImageCreateWithMask
    // function will stretch the bitmap image as required, so we can create a 1-pixel-wide gradient
    CGImageRef gradientMaskImage = CreateGradientImage(1, height);

    // create an image by masking the bitmap of the mainView content with the gradient,
    // then release the pre-masked content bitmap and the gradient bitmap
    CGImageRef reflectionImage = CGImageCreateWithMask(mainViewContentBitmapContext, gradientMaskImage);
    CGImageRelease(mainViewContentBitmapContext);
    CGImageRelease(gradientMaskImage);

    // convert the finished reflection image to a UIImage
    UIImage *theImage = [UIImage imageWithCGImage:reflectionImage];

    // the UIImage retains the CGImage, so we can release the original
    CGImageRelease(reflectionImage);

    return theImage;
}
To get rid of the previous reflection, just add
reflectionView.image = nil;
before
reflectionView.image = [self reflectedImage:...
and don't forget this line:
if (currentView.image == nil) reflectionView.image = nil;
or else you'll end up with an old reflection after the image has disappeared.
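Putting it together, a minimal sketch of an update method, assuming the currentView and reflectionView instance variables from the question's code:

- (void)showImage:(UIImage *)newImage {
    currentView.image = newImage;
    // drop the stale reflection first
    reflectionView.image = nil;
    if (newImage != nil) {
        NSUInteger reflectionHeight = currentView.bounds.size.height * kDefaultReflectionFraction;
        reflectionView.image = [self reflectedImage:currentView withHeight:reflectionHeight];
    }
}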

mask text inside uitextview/uiwebview

Finally I chose to devote some time to finding a way to mask text inside a UITextView/UIWebView. By now what I'm able to do is:
- add some custom background
- add a UITextView/UIWebView with some text
- add a UIImageView (with a covering PNG) or a CAGradientLayer to create a simple mask effect (*)
Of course this is no magic bullet, and it requires at least one more layer (the one marked with *). Furthermore, it's not so good when you have a fully transparent background, because everyone can recognize the extra view/layer used to fade away the text.
I've searched all over Google but still haven't found a good solution (I've found plenty about masking an image, blah blah)...
Any tips?
Thanks in advance,
marcio
PS: maybe a screenshot will be more straightforward; here you are!
http://grab.by/KzS
Yes! I finally got it. I don't know if it's Apple's way, but it works (maybe they're able to use some private APIs). Anyway, this is a sort of pseudo-algorithm for how I got it working:
1) get a screenshot of the window
2) crop the desired rect with CGImageCreateWithImageInRect
3) apply a gradient mask (stolen from Apple's sample code on reflections)
4) create a UIImageView with the freshly created image
I also noted that it doesn't affect performance, even on the slowest devices.
Hope it will be helpful! And this is a crop of the result.
I've promised myself to implement a category just to make it cleaner; until now the code has been spread across different classes. Just to give a sample (only landscape orientation and only a top mask are supported; see the transform below), in this case I overrode didMoveToWindow of the table that needs to be masked:
- (void)didMoveToWindow {
    if (self.window) {
        UIImageView *reflected = (UIImageView *)[self.superview viewWithTag:TABLE_SHADOW_TOP];
        if (!reflected) {
            UIImage *image = [UIImage screenshot:self.window];
            CGRect croppedRect = CGRectMake(480 - self.frame.size.height, self.frame.origin.x, 16, self.frame.size.width);
            CGImageRef cropImage = CGImageCreateWithImageInRect(image.CGImage, croppedRect);
            UIImage *reflectedImage = [UIImage imageMaskedWithGradient:cropImage];
            CGImageRelease(cropImage);
            // note: assign to the outer 'reflected' rather than redeclaring it,
            // which would shadow the variable tested above
            reflected = [[UIImageView alloc] initWithImage:reflectedImage];
            reflected.transform = CGAffineTransformMakeRotation(-(M_PI / 2));
            reflected.tag = TABLE_SHADOW_TOP;
            CGRect adjusted = reflected.frame;
            adjusted.origin = self.frame.origin;
            reflected.frame = adjusted;
            [self.superview addSubview:reflected];
            [reflected release];
        }
    }
}
and this is the UIImage category:
CGImageRef CreateGradientImage(int pixelsWide, int pixelsHigh)
{
    CGImageRef theCGImage = NULL;

    // gradient is always black-white, and the mask must be in the gray colorspace
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();

    // create the bitmap context
    CGContextRef gradientBitmapContext = CGBitmapContextCreate(NULL, pixelsWide, pixelsHigh,
                                                               8, 0, colorSpace, kCGImageAlphaNone);

    // define the start and end grayscale values (with the alpha, even though
    // our bitmap context doesn't support alpha the gradient requires it)
    CGFloat colors[] = {0.0, 1.0, 1.0, 1.0};

    // create the CGGradient and then release the gray color space
    CGGradientRef grayScaleGradient = CGGradientCreateWithColorComponents(colorSpace, colors, NULL, 2);
    CGColorSpaceRelease(colorSpace);

    // create the start and end points for the gradient vector
    // (horizontal here, unlike the reflection sample, since this is for the top mask)
    CGPoint gradientStartPoint = CGPointZero;
    CGPoint gradientEndPoint = CGPointMake(pixelsWide / 1.75, 0);

    // draw the gradient into the gray bitmap context
    CGContextDrawLinearGradient(gradientBitmapContext, grayScaleGradient, gradientStartPoint,
                                gradientEndPoint, kCGGradientDrawsAfterEndLocation);
    CGGradientRelease(grayScaleGradient);

    // convert the context into a CGImageRef and release the context
    theCGImage = CGBitmapContextCreateImage(gradientBitmapContext);
    CGContextRelease(gradientBitmapContext);

    // return the imageref containing the gradient
    return theCGImage;
}

CGContextRef MyCreateBitmapContext(int pixelsWide, int pixelsHigh)
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // create the bitmap context
    CGContextRef bitmapContext = CGBitmapContextCreate(NULL, pixelsWide, pixelsHigh, 8,
                                                       0, colorSpace,
                                                       // this will give us an optimal BGRA format for the device:
                                                       (kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst));
    CGColorSpaceRelease(colorSpace);
    return bitmapContext;
}
+ (UIImage *)imageMaskedWithGradient:(CGImageRef)image {
    UIDeviceOrientation deviceOrientation = [UIDevice currentDevice].orientation;
    DEBUG(@"need to support deviceOrientation: %i", deviceOrientation);

    float width = CGImageGetWidth(image);
    float height = CGImageGetHeight(image);

    // create a bitmap graphics context the size of the image
    CGContextRef mainViewContentContext = MyCreateBitmapContext(width, height);

    // create a CGImage containing a gradient that will be used for masking the
    // content to create the 'fade'. CGContextClipToMask will stretch the bitmap
    // image as required, so we can create a 1-pixel-high gradient
    CGImageRef gradientMaskImage = CreateGradientImage(width, 1);

    // clip the context to the gradient mask, then release the gradient bitmap
    CGContextClipToMask(mainViewContentContext, CGRectMake(0.0, 0.0, width, height), gradientMaskImage);
    CGImageRelease(gradientMaskImage);

    // draw the image into the bitmap context
    CGContextDrawImage(mainViewContentContext, CGRectMake(0, 0, width, height), image);

    // create a CGImageRef of the bitmap content, and then release the bitmap context
    CGImageRef reflectionImage = CGBitmapContextCreateImage(mainViewContentContext);
    CGContextRelease(mainViewContentContext);

    // convert the finished image to a UIImage
    UIImage *theImage = [UIImage imageWithCGImage:reflectionImage];

    // the UIImage retains the CGImage, so we can release the original
    CGImageRelease(reflectionImage);

    return theImage;
}
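One piece not shown above is the screenshot: category method used in didMoveToWindow. A minimal sketch, assuming a plain renderInContext:-based capture (no retina scale handling):

+ (UIImage *)screenshot:(UIWindow *)window {
    UIGraphicsBeginImageContext(window.bounds.size);
    // render the window's layer tree into the current image context
    // (renderInContext: is a CALayer method, so QuartzCore must be imported)
    [window.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}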
Hope it helps.

Any code/library to scale down an UIImage?

Is there any code or library out there that can help me scale down an image? If you take a picture with the iPhone, it is something like 2000x1000 pixels, which is not very network friendly. I want to scale it down to, say, 480x320. Any hints?
This is what I am using. It works well. I'll definitely be watching this question to see if anyone has anything better/faster. I just added the method below to a category on UIImage.
+ (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
See http://vocaro.com/trevor/blog/2009/10/12/resize-a-uiimage-the-right-way/ - this has a set of code you can download as well as some descriptions.
If speed is a worry, you can experiment with using CGContextSetInterpolationQuality to set a lower interpolation quality than the default.
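For example, inside the category method above you could set the quality right after opening the image context. A sketch; kCGInterpolationLow trades quality for speed:

UIGraphicsBeginImageContext(newSize);
// lower the interpolation quality before drawing the scaled image
CGContextSetInterpolationQuality(UIGraphicsGetCurrentContext(), kCGInterpolationLow);
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];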
Please note, this is NOT my code. I did a little digging and found it here. I figured you'd have to drop into the CoreGraphics layer, but wasn't quite sure of the specifics. This should work. Just be careful about managing your memory.
// ==============================================================
// resizedImage
// ==============================================================
// Return a scaled down copy of the image.
UIImage *resizedImage(UIImage *inImage, CGRect thumbRect)
{
    CGImageRef imageRef = [inImage CGImage];
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);

    // There's a weirdness with kCGImageAlphaNone and CGBitmapContextCreate
    // (see "Supported Pixel Formats" in the Quartz 2D Programming Guide,
    // "Creating a Bitmap Graphics Context" section):
    // only 8-bit RGB images with alpha of kCGImageAlphaNoneSkipFirst, kCGImageAlphaNoneSkipLast,
    // kCGImageAlphaPremultipliedFirst, kCGImageAlphaPremultipliedLast,
    // and a few other oddball image kinds are supported.
    // The images on input here are likely to be PNG or JPEG files.
    if (alphaInfo == kCGImageAlphaNone)
        alphaInfo = kCGImageAlphaNoneSkipLast;

    // Build a bitmap context that's the size of the thumbRect
    CGContextRef bitmap = CGBitmapContextCreate(
        NULL,
        thumbRect.size.width,                 // width
        thumbRect.size.height,                // height
        CGImageGetBitsPerComponent(imageRef), // really needs to always be 8
        4 * thumbRect.size.width,             // bytes per row
        CGImageGetColorSpace(imageRef),
        alphaInfo
    );

    // Draw into the context; this scales the image
    CGContextDrawImage(bitmap, thumbRect, imageRef);

    // Get a CGImage from the context and wrap it in a UIImage
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage *result = [UIImage imageWithCGImage:ref];

    CGContextRelease(bitmap); // ok if NULL
    CGImageRelease(ref);

    return result;
}
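Usage is then a one-liner; the 480x320 target here is just the size from the question:

UIImage *photo = [UIImage imageNamed:@"photo.jpg"]; // e.g. a camera image
UIImage *thumb = resizedImage(photo, CGRectMake(0, 0, 480, 320));

Note that this stretches to exactly thumbRect's size; preserving the aspect ratio is up to the caller.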
Please see the solution I posted to this question. The question involves rotating an image 90 degrees instead of scaling it, but the premise is the same (it's just the matrix transformation that is different).

Capturing EAGLview content WITH alpha channel on iPhone

I have been struggling with this issue for quite some time and couldn't find an answer so far. Basically, what I want to do is capture the content of my EAGLView and then merge it with other images. Anyway, the main problem is that everything transparent in my EAGLView renders opaque when I save it to the photo album or put it into a UIImageView. Let me share some code with you that I found somewhere else:
- (CGImageRef)glToUIImage {
    // read the raw RGBA pixels back from the GL framebuffer
    unsigned char buffer[320 * 480 * 4];
    glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, 320 * 480 * 4, NULL);
    CGImageRef iref = CGImageCreate(320, 480, 8, 32, 320 * 4, colorSpace,
                                    kCGBitmapByteOrderDefault, provider, NULL, true,
                                    kCGRenderingIntentDefault);

    // GL's origin is bottom-left, so draw the image flipped into a bitmap context;
    // passing NULL as the data pointer lets Core Graphics manage the backing store
    size_t width = CGImageGetWidth(iref);
    size_t height = CGImageGetHeight(iref);
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextTranslateCTM(context, 0.0, height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawImage(context, CGRectMake(0.0, 0.0, width, height), iref);

    CGImageRef outputRef = CGBitmapContextCreateImage(context);
    UIImage *outputImage = [UIImage imageWithCGImage:outputRef];
    UIImageWriteToSavedPhotosAlbum(outputImage, nil, nil, nil);

    // release the intermediates; the caller is responsible for releasing outputRef
    CGContextRelease(context);
    CGImageRelease(iref);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    return outputRef;
}
As I already mentioned, this perfectly grabs the content of my EAGLView, but I cannot get the image with its alpha values.
Any help appreciated. Thanks!
Two places I can see that you might be losing your transparency:
1) When you're drawing your scene: does your scene have a transparent background? Make sure you're doing a glClear to something like (0,0,0,0) rather than (0,0,0,1); see the sketch after this list.
2) When you're drawing the image to flip it over: what is the default background color here? It seems likely it's a non-transparent black, and you'll end up with that wherever the transparent parts of your scene used to be.
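For point 1, the clear setup is just the standard GL calls (a sketch):

glClearColor(0.0f, 0.0f, 0.0f, 0.0f); // fully transparent black
glClear(GL_COLOR_BUFFER_BIT);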
You could check whether #2 is your problem by saving the image before you flip it over; if it is, you could avoid the flipping step entirely by flipping the memory in your pixels buffer directly rather than using Core Graphics calls, as in the sketch below.
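A minimal sketch of flipping the buffer in place, assuming the 320x480 RGBA buffer from the code above:

size_t bytesPerRow = 320 * 4;
unsigned char tmp[320 * 4];
// swap row i with row (height - 1 - i), working toward the middle
for (int row = 0; row < 480 / 2; row++) {
    unsigned char *top = buffer + row * bytesPerRow;
    unsigned char *bottom = buffer + (480 - 1 - row) * bytesPerRow;
    memcpy(tmp, top, bytesPerRow);
    memcpy(top, bottom, bytesPerRow);
    memcpy(bottom, tmp, bytesPerRow);
}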