iPhone: Create a screenshot programmatically and then add it as a subview

I would like to call a method which takes a screenshot and then load this screenshot as a subview. I am using Apple's sample code for taking a screenshot (see below) and was trying to use the result (an image) in my code. However, I don't really know how to get the image from the method into my code. This is what I tried; it's obviously wrong, but it's all I could come up with:
// Test Screenshot:
screenShot = [UIImage screenshot]; // THIS DOESN'T WORK
screenShotView = [[UIImageView alloc] initWithImage:screenShot];
[screenShotView setFrame:CGRectMake(0, 0, 320, 480)];
[self.view addSubview:screenShotView];
And this is Apple's sample code for the method:
- (UIImage*)screenshot
{
    NSLog(@"Shot");
    // Create a graphics context with the target size
    // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
    // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
    CGSize imageSize = [[UIScreen mainScreen] bounds].size;
    if (NULL != UIGraphicsBeginImageContextWithOptions)
        UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
    else
        UIGraphicsBeginImageContext(imageSize);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Iterate over every window from back to front
    for (UIWindow *window in [[UIApplication sharedApplication] windows])
    {
        if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
        {
            // -renderInContext: renders in the coordinate space of the layer,
            // so we must first apply the layer's geometry to the graphics context
            CGContextSaveGState(context);
            // Center the context around the window's anchor point
            CGContextTranslateCTM(context, [window center].x, [window center].y);
            // Apply the window's transform about the anchor point
            CGContextConcatCTM(context, [window transform]);
            // Offset by the portion of the bounds left of and above the anchor point
            CGContextTranslateCTM(context,
                                  -[window bounds].size.width * [[window layer] anchorPoint].x,
                                  -[window bounds].size.height * [[window layer] anchorPoint].y);
            // Render the layer hierarchy to the current context
            [[window layer] renderInContext:context];
            // Restore the context
            CGContextRestoreGState(context);
        }
    }
    // Retrieve the screenshot image
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
Any help would be very much appreciated! Thanks.
EDIT: This is the viewDidLoad method in which I create a TextView and then try to capture a screenshot of it:
- (void)viewDidLoad {
    // Setup TextView:
    NSString *someText = @"Some Text";
    CGRect frameText = CGRectMake(0, 0, 320, 480);
    aTextView = [[UITextView alloc] initWithFrame:frameText];
    aTextView.text = someText;
    [self.view addSubview:aTextView];
    // Test Screenshot:
    screenShotView = [[UIImageView alloc] initWithImage:[self screenshot]];
    [screenShotView setFrame:CGRectMake(10, 10, 200, 200)];
    [self.view addSubview:screenShotView];
    [self.view bringSubviewToFront:screenShotView];
    [super viewDidLoad];
}

To use the image, just change this line:
screenShot = [UIImage screenshot];
to:
screenShot = [self screenshot];
(The screenshot method in Apple's sample is an instance method on your view controller, not a class method on UIImage.)
Edit: Check whether [self screenshot] returns a valid image or nil.

- (void)viewDidLoad {
    // Setup TextView:
    NSString *someText = @"Some Text";
    CGRect frameText = CGRectMake(0, 0, 320, 480);
    aTextView = [[UITextView alloc] initWithFrame:frameText];
    aTextView.text = someText;
    [self.view addSubview:aTextView];
    // Test Screenshot:
    UIImage *screenShotImage = [self screenshot];
    if (screenShotImage) {
        screenShotView = [[UIImageView alloc] initWithImage:screenShotImage];
        [screenShotView setFrame:CGRectMake(10, 10, 200, 200)];
        [self.view addSubview:screenShotView];
        [self.view bringSubviewToFront:screenShotView];
    } else {
        NSLog(@"Something went wrong in the screenshot method, the image is nil");
    }
    [super viewDidLoad]; // note: viewDidLoad conventionally calls super first
}
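If all you need is an image of the text view itself rather than the whole screen, you can render just that one view's layer. A minimal sketch, assuming aTextView is already laid out; this helper is not part of Apple's sample, and it needs #import <QuartzCore/QuartzCore.h>:

- (UIImage *)imageOfView:(UIView *)view
{
    // Render a single view's layer instead of iterating over every window.
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

Used as: screenShotView = [[UIImageView alloc] initWithImage:[self imageOfView:aTextView]];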


Face detection is not working properly on resized images, especially on the device. Why?

Here is the code I am using to detect faces in an image:
- (void)detectFaces:(UIImageView *)photo
{
    CIImage *coreImage = [CIImage imageWithCGImage:photo.image.CGImage];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                                                  forKey:CIDetectorAccuracy]];
    NSArray *features = [detector featuresInImage:coreImage];
    for (CIFaceFeature *faceFeature in features)
    {
        NSLog(@"self.view frame %@", NSStringFromCGRect(self.view.frame));
        NSLog(@"self.view bounds %@", NSStringFromCGRect(self.view.bounds));
        NSLog(@"faceFeature.bounds %@", NSStringFromCGRect(faceFeature.bounds));
        CGFloat faceWidth = faceFeature.bounds.size.width;
        UIView *faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];
        faceView.layer.borderWidth = 1;
        faceView.layer.borderColor = [[UIColor redColor] CGColor];
        [self.view addSubview:faceView];
        if (faceFeature.hasLeftEyePosition)
        {
            UIView *leftEyeView = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.leftEyePosition.x - faceWidth * 0.15,
                                                                           faceFeature.leftEyePosition.y - faceWidth * 0.15,
                                                                           faceWidth * 0.3, faceWidth * 0.3)];
            [leftEyeView setBackgroundColor:[[UIColor blueColor] colorWithAlphaComponent:0.3]];
            [leftEyeView setCenter:faceFeature.leftEyePosition];
            leftEyeView.layer.cornerRadius = faceWidth * 0.15;
            [self.view addSubview:leftEyeView];
        }
        if (faceFeature.hasRightEyePosition)
        {
            UIView *rightEyeView = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.rightEyePosition.x - faceWidth * 0.15,
                                                                            faceFeature.rightEyePosition.y - faceWidth * 0.15,
                                                                            faceWidth * 0.3, faceWidth * 0.3)];
            [rightEyeView setBackgroundColor:[[UIColor blueColor] colorWithAlphaComponent:0.3]];
            [rightEyeView setCenter:faceFeature.rightEyePosition];
            rightEyeView.layer.cornerRadius = faceWidth * 0.15;
            [self.view addSubview:rightEyeView];
        }
        if (faceFeature.hasMouthPosition)
        {
            UIView *mouthView = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.mouthPosition.x - faceWidth * 0.2,
                                                                         faceFeature.mouthPosition.y - faceWidth * 0.2,
                                                                         faceWidth * 0.4, faceWidth * 0.4)];
            [mouthView setBackgroundColor:[[UIColor greenColor] colorWithAlphaComponent:0.3]];
            [mouthView setCenter:faceFeature.mouthPosition];
            mouthView.layer.cornerRadius = faceWidth * 0.2;
            [self.view addSubview:mouthView];
        }
    }
}
This is code that I have used to resize an image:
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    //UIGraphicsBeginImageContext(newSize);
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
and finally I call the face detection method like this:
UIImageView *inputImage = [[UIImageView alloc] initWithImage:[self imageWithImage:[UIImage imageNamed:@"facedetectionpic.jpg"] scaledToSize:CGSizeMake(320, 460)]];
[self.view addSubview:inputImage];
[inputImage setTransform:CGAffineTransformMakeScale(1, -1)];
[self.view setTransform:CGAffineTransformMakeScale(1, -1)];
[self performSelectorInBackground:@selector(detectFaces:) withObject:inputImage];
It works properly in the Simulator but not on the device. Can anyone please help me with this?
(Simulator and device screenshots appeared here in the original post.)
When I changed the options in UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0); to UIGraphicsBeginImageContextWithOptions(newSize, NO, 1.0); it started working even on the device. That solved the issue.
Change the options in UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0); to
UIGraphicsBeginImageContextWithOptions(newSize, NO, 1.0); and it will work.
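For reference, here is the resize method with that one-character change applied; the comment explaining why is my reading of the situation, not from the original answers:

- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    // A scale of 1.0 forces a 1x context, so the resulting image's pixel size
    // equals newSize. With 0.0 the context uses the screen scale, so on a
    // retina device the backing CGImage is twice as large and the CIDetector's
    // pixel-based face coordinates no longer match views laid out in points.
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 1.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}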

Subclassing UIView in a map to draw lines

I have an app that displays custom maps. I use a CATiledLayer-backed TilingView to display the maps.
I would like to be able to draw a route over the top of the maps. To do this, I create a UIView and then add it to the scrollView after I add the tiling view, like this:
- (void)displayTiledImageNamed:(NSString *)imageName size:(CGSize)imageSize
{
    // clear the previous imageView
    [imageView removeFromSuperview];
    [imageView release];
    imageView = nil;
    [linesView removeFromSuperview];
    [linesView release];
    linesView = nil;
    // reset our zoomScale to 1.0 before doing any further calculations
    self.zoomScale = 1.0;
    // make a new TilingView for the new image
    imageView = [[TilingView alloc] initWithImageName:imageName size:imageSize];
    linesView = [[LinesView alloc] initWithImageName:imageName size:imageSize];
    linesView.backgroundColor = [UIColor clearColor];
    [self addSubview:imageView];
    [self addSubview:linesView];
    [self configureForImageSize:imageSize];
}
The problem is that the line I create in linesView is not scaling correctly.
It's hard to describe, but the line is scaled as if it were drawn on the device itself rather than drawn on the map. See the following code:
#import "LinesView.h"
#implementation LinesView
#synthesize imageName;
- (id)initWithImageName:(NSString *)name size:(CGSize)size
{
if ((self = [super initWithFrame:CGRectMake(0, 0, size.width, size.height)])) {
self.imageName = name;
}
return self;
}
- (void)dealloc
{
[super dealloc];
}
- (void)drawRect:(CGRect)rect {
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetRGBStrokeColor(context, 1, 0, 0, 1);
CGContextSetLineWidth(context, 20.0);
CGContextMoveToPoint(context, 1.0f, 220.0f);
CGContextAddLineToPoint(context, 340.0f, 80);
CGContextStrokePath(context);
}
#end
I have tried putting the code to draw the line in the drawRect method of the TilingView and it works perfectly: the line width is 20px relative to the map. In the LinesView, the line appears to be 20px wide relative to the device and positioned relative to the scroll view.
Sorry, I'm trying my best to describe the problem...
Figured this out - I added the linesView to the tilingView instead of the scrollView, so it lives in the tiled content's coordinate space and scales with the zoom, like this:
- (void)displayTiledImageNamed:(NSString *)imageName size:(CGSize)imageSize
{
    // clear the previous imageView
    [imageView removeFromSuperview];
    [imageView release];
    imageView = nil;
    [linesView removeFromSuperview];
    [linesView release];
    linesView = nil;
    // reset our zoomScale to 1.0 before doing any further calculations
    self.zoomScale = 1.0;
    // make a new TilingView for the new image
    imageView = [[TilingView alloc] initWithImageName:imageName size:imageSize];
    linesView = [[LinesView alloc] initWithImageName:imageName size:imageSize];
    linesView.backgroundColor = [UIColor clearColor];
    [self addSubview:imageView];
    // add linesView to the tiling view, not the scroll view, so it zooms with the map
    [imageView addSubview:linesView];
    [self configureForImageSize:imageSize];
}

Transition animation on only a part of the screen

I am displaying two images in my iPhone app, one on top, one on the bottom. They cover the entire screen. The user, with a swipe gesture, can change either of the images, depending on where the swipe started.
I want the image to change with an animated transition. It currently works without animation, or with the entire screen transitioning. Is it possible to make the transition occur over part of the screen?
I load the images for the first time (in viewDidLoad) thus:
// top image
UIImage *topImage = [UIImage imageNamed:@"top1.png"];
CGRect topframe = CGRectMake(0.0f, 0.0f, 320.0f, 240.0f);
UIImageView *topView = [[UIImageView alloc] initWithFrame:topframe];
topView.image = topImage;
[self.view addSubview:topView];
[topView release];
// bottom image
UIImage *bottomImage = [UIImage imageNamed:@"bottom1.png"];
CGRect bottomframe = CGRectMake(0.0f, 240.0f, 320.0f, 240.0f);
UIImageView *bottomView = [[UIImageView alloc] initWithFrame:bottomframe];
bottomView.image = bottomImage;
[self.view addSubview:bottomView];
[bottomView release];
When I detect a swipe and determine which of the two images is to be changed, I call a routine like this one:
- (void)changeTopImage:(NSString *)newImage {
    UIImage *topImage = [UIImage imageNamed:newImage];
    CGRect topframe = CGRectMake(0.0f, 0.0f, 320.0f, 240.0f);
    UIImageView *topView = [[UIImageView alloc] initWithFrame:topframe];
    topView.image = topImage;
    [self.view addSubview:topView];
    [topView release];
}
Essentially, I'm continually loading images on top of each other. Is that the best way to do it, especially in terms of memory management?
Everything else I've tried, using techniques such as the one below, makes the entire screen transition:
[UIView beginAnimations:nil context:nil];
...
[UIView commitAnimations];
Thanks for any leads on which way I should be going on this.
Well, an easy way could be this (repeat for the bottomImage): don't release topView when you load it the first time in your method; instead, declare it in your .h file, along with a tempView:
@interface YourClass : UIViewController {
    UIImageView *topView;
    UIImageView *tempView;
}
This way you can reach both views later to move and remove them when you load a new image. Then in the .m:
EDIT: some corrections (see coco's comments) - the corrected lines are shown below, with the originals left commented out:
- (void)changeTopImage:(NSString *)newImage {
    UIImage *topImage = [UIImage imageNamed:newImage];
    //CGRect topframe = CGRectMake(0.0f, 0.0f, (320.0f + 320), 240.0f);
    CGRect topframe = CGRectMake((0.0f + 320), 0.0f, 320.0f, 240.0f);
    tempView = [[UIImageView alloc] initWithFrame:topframe];
    //topView.image = topImage;
    tempView.image = topImage;
    [self.view addSubview:tempView];
    [[UIApplication sharedApplication] beginIgnoringInteractionEvents];
    [UIView beginAnimations:@"animation" context:NULL];
    [UIView setAnimationDuration:0.5];
    [UIView setAnimationDelegate:self];
    tempView.center = topView.center;
    // do this to push the old topView away; comment out the next line if you want the new image to just cover it
    //topView.center = CGPointMake(topView.x - 320, topView.y);
    topView.center = CGPointMake(topView.center.x - 320, topView.center.y);
    // call a method when the animation has finished:
    [UIView setAnimationDidStopSelector:@selector(endOfAnimation:finished:context:)];
    [UIView commitAnimations];
}
- (void)endOfAnimation:(NSString *)animationID finished:(NSNumber *)finished context:(void *)context {
    [topView removeFromSuperview];
    [topView release];  // release the old view now that it is off screen
    topView = tempView; // transfer ownership of tempView to topView
    tempView = nil;     // don't release here, or topView would be over-released in dealloc
    [[UIApplication sharedApplication] endIgnoringInteractionEvents];
}

- (void)dealloc {
    [tempView release]; // safe even when nil
    [topView release];
    [super dealloc];
}

How to make something like iPhone Folders?

I want to know if there's a way I can transform my view to look something like iPhone Folders. In other words, I want my view to split somewhere in the middle and reveal a view underneath it. Is this possible?
EDIT:
Per the suggestion below, I could take a screenshot of my application by doing this:
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Not sure what to do with this, however.
EDIT:2
I've figured out how to add some shadows to my view, and here's what I've achieved (cropped to show relevant part):
EDIT:3
http://github.com/jwilling/JWFolders
The basic idea is to take a picture of your current state, split it somewhere, and then animate both parts by setting new frames. I don't know how to take a screenshot programmatically, so I can't provide sample code…
EDIT: hey hey it's not looking great but it works ^^
// wouldn't be sharp on retina displays, instead use "withOptions" and set scale to 0.0
// UIGraphicsBeginImageContext(self.view.bounds.size);
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0.0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *f = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGRect fstRect = CGRectMake(0, 0, 320, 200);
CGRect sndRect = CGRectMake(0, 200, 320, 260); // was 0,200,320,280
// note: [f CGImage] is measured in pixels, so on a retina display (image scale 2.0)
// these point rects would need to be multiplied by f.scale before cropping
CGImageRef fImageRef = CGImageCreateWithImageInRect([f CGImage], fstRect);
UIImage *fCroppedImage = [UIImage imageWithCGImage:fImageRef];
CGImageRelease(fImageRef);
CGImageRef sImageRef = CGImageCreateWithImageInRect([f CGImage], sndRect);
UIImage *sCroppedImage = [UIImage imageWithCGImage:sImageRef];
CGImageRelease(sImageRef);
UIImageView *first = [[UIImageView alloc] initWithFrame:fstRect];
first.image = fCroppedImage;
//first.contentMode = UIViewContentModeTop;
UIImageView *second = [[UIImageView alloc] initWithFrame:sndRect];
second.image = sCroppedImage;
//second.contentMode = UIViewContentModeBottom;
UIView *blank = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 320, 460)];
blank.backgroundColor = [UIColor darkGrayColor];
[self.view addSubview:blank];
[self.view addSubview:first];
[self.view addSubview:second];
[UIView animateWithDuration:2.0 animations:^{
    second.center = CGPointMake(second.center.x, second.center.y + 75);
}];
You can uncomment the two .contentMode lines and the quality will improve, but in my case the subview had an offset of 10px or so (you can see it by setting a background color on both subviews).
EDIT 2: OK, found that bug. I had used the whole 320x480 screen, but had to cut off the status bar, so it should be 320x460 and all is working great ;)
Instead of taking a snapshot of the view, you could use a separate view for each row of icons. You'll have to do a bit more work with repositioning stuff, but the rows won't be static when the folder is open (in other words, they'll keep redrawing as necessary).
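A rough sketch of that row-based idea (the names here are illustrative, not from any answer above): keep the rows in an array and animate the frames of the rows below the split point.

// Assumes self.rowViews is an NSArray of row container views, ordered top to bottom.
- (void)openFolderBelowRow:(NSUInteger)rowIndex height:(CGFloat)folderHeight {
    [UIView animateWithDuration:0.3 animations:^{
        for (NSUInteger i = rowIndex + 1; i < [self.rowViews count]; i++) {
            UIView *row = [self.rowViews objectAtIndex:i];
            // push every row below the tapped one down, revealing the folder content
            row.frame = CGRectOffset(row.frame, 0, folderHeight);
        }
    }];
}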
I took relikd's code as a base and made it a bit more dynamic.
You can specify the split position and direction when calling the function, and I added a border to the split images.
#define splitAnimationTime 0.5

- (void)split:(SplitDirection)splitDirection
    atYPosition:(int)splitYPosition
    withRevealedViewHeight:(int)revealedViewHeight
{
    // wouldn't be sharp on retina displays, instead use "withOptions" and set scale to 0.0
    // UIGraphicsBeginImageContext(self.view.bounds.size);
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0.0);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *f = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    CGRect fullScreenRect = [self getScreenFrameForCurrentOrientation];
    CGRect upperSplitRect = CGRectMake(0, 0, fullScreenRect.size.width, splitYPosition);
    CGRect lowerSplitRect = CGRectMake(0, splitYPosition, fullScreenRect.size.width,
                                       fullScreenRect.size.height - splitYPosition);

    CGImageRef upperImageRef = CGImageCreateWithImageInRect([f CGImage], upperSplitRect);
    UIImage *upperCroppedImage = [UIImage imageWithCGImage:upperImageRef];
    CGImageRelease(upperImageRef);
    CGImageRef lowerImageRef = CGImageCreateWithImageInRect([f CGImage], lowerSplitRect);
    UIImage *lowerCroppedImage = [UIImage imageWithCGImage:lowerImageRef];
    CGImageRelease(lowerImageRef);

    UIImageView *upperImage = [[UIImageView alloc] initWithFrame:upperSplitRect];
    upperImage.image = upperCroppedImage;
    //first.contentMode = UIViewContentModeTop;
    UIView *upperBorder = [[UIView alloc] initWithFrame:CGRectMake(0, splitYPosition, fullScreenRect.size.width, 1)];
    upperBorder.backgroundColor = [UIColor whiteColor];
    [upperImage addSubview:upperBorder];

    UIImageView *lowerImage = [[UIImageView alloc] initWithFrame:lowerSplitRect];
    lowerImage.image = lowerCroppedImage;
    //second.contentMode = UIViewContentModeBottom;
    UIView *lowerBorder = [[UIView alloc] initWithFrame:CGRectMake(0, 0, fullScreenRect.size.width, 1)];
    lowerBorder.backgroundColor = [UIColor whiteColor];
    [lowerImage addSubview:lowerBorder];

    int revealedViewYPosition = splitYPosition;
    if (splitDirection == SplitDirectionUp) {
        revealedViewYPosition = splitYPosition - revealedViewHeight;
    }
    UIView *revealedView = [[UIView alloc] initWithFrame:CGRectMake(0, revealedViewYPosition,
                                                                    fullScreenRect.size.width, revealedViewHeight)];
    revealedView.backgroundColor = [UIColor scrollViewTexturedBackgroundColor];

    [self.view addSubview:revealedView];
    [self.view addSubview:upperImage];
    [self.view addSubview:lowerImage];

    [UIView animateWithDuration:splitAnimationTime animations:^{
        if (splitDirection == SplitDirectionUp) {
            upperImage.center = CGPointMake(upperImage.center.x, upperImage.center.y - revealedViewHeight);
        } else { // assume down
            lowerImage.center = CGPointMake(lowerImage.center.x, lowerImage.center.y + revealedViewHeight);
        }
    }];
}
This means I can call it like this:
[self split:SplitDirectionUp atYPosition:500 withRevealedViewHeight:200];
I used these convenience functions in the updated split function:
- (CGRect)getScreenFrameForCurrentOrientation {
    return [self getScreenFrameForOrientation:[UIApplication sharedApplication].statusBarOrientation];
}

- (CGRect)getScreenFrameForOrientation:(UIInterfaceOrientation)orientation {
    UIScreen *screen = [UIScreen mainScreen];
    CGRect fullScreenRect = screen.bounds;
    BOOL statusBarHidden = [UIApplication sharedApplication].statusBarHidden;
    // screen.bounds is implicitly in portrait orientation; swap for landscape
    if (orientation == UIInterfaceOrientationLandscapeRight || orientation == UIInterfaceOrientationLandscapeLeft) {
        CGRect temp = CGRectZero;
        temp.size.width = fullScreenRect.size.height;
        temp.size.height = fullScreenRect.size.width;
        fullScreenRect = temp;
    }
    if (!statusBarHidden) {
        CGFloat statusBarHeight = 20;
        fullScreenRect.size.height -= statusBarHeight;
    }
    return fullScreenRect;
}
and this enum:
typedef enum SplitDirection {
    SplitDirectionDown,
    SplitDirectionUp
} SplitDirection;
Adding a return-to-normal function and adding the arrow would be great additions.
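A possible shape for that return-to-normal step, as a sketch only; it assumes upperImage, lowerImage, and revealedView have been promoted from locals to ivars so they can be reached again:

- (void)unsplit:(SplitDirection)splitDirection withRevealedViewHeight:(int)revealedViewHeight
{
    [UIView animateWithDuration:splitAnimationTime animations:^{
        // reverse the original offset so the two halves meet again
        if (splitDirection == SplitDirectionUp) {
            upperImage.center = CGPointMake(upperImage.center.x, upperImage.center.y + revealedViewHeight);
        } else {
            lowerImage.center = CGPointMake(lowerImage.center.x, lowerImage.center.y - revealedViewHeight);
        }
    } completion:^(BOOL finished) {
        [upperImage removeFromSuperview];
        [lowerImage removeFromSuperview];
        [revealedView removeFromSuperview];
    }];
}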

How to tint an image/show a colour?

Two things I want to do, which are related:
Show a block of any colour, so that I can change that colour to something else at any time.
Tint a UIImage to be a different colour. An overlay of colour with the alpha turned down could work here, except when the image has a transparent background and doesn't take up the full square of the image.
Any ideas?
Another option would be to use category methods on UIImage, like this...

// Tint the image, defaulting to half transparency if given an opaque colour.
- (UIImage *)imageWithTint:(UIColor *)tintColor {
    CGFloat white, alpha;
    [tintColor getWhite:&white alpha:&alpha];
    return [self imageWithTint:tintColor alpha:(alpha == 1.0 ? 0.5f : alpha)];
}
// Tint the image
- (UIImage *)imageWithTint:(UIColor *)tintColor alpha:(CGFloat)alpha {
    // Begin drawing
    CGRect aRect = CGRectMake(0.f, 0.f, self.size.width, self.size.height);
    UIGraphicsBeginImageContext(aRect.size);
    // Get the graphics context
    CGContextRef c = UIGraphicsGetCurrentContext();
    // Converting a UIImage to a CGImage flips the image,
    // so apply an upside-down translation
    CGContextTranslateCTM(c, 0, self.size.height);
    CGContextScaleCTM(c, 1.0, -1.0);
    // Draw the image
    [self drawInRect:aRect];
    // Set the fill color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextSetFillColorSpace(c, colorSpace);
    // Set the mask so that only non-transparent pixels are tinted
    CGContextClipToMask(c, aRect, self.CGImage);
    // Set the fill color
    CGContextSetFillColorWithColor(c, [tintColor colorWithAlphaComponent:alpha].CGColor);
    UIRectFillUsingBlendMode(aRect, kCGBlendModeColor);
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Release memory
    CGColorSpaceRelease(colorSpace);
    return img;
}
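Usage would then look something like this (assuming the two methods above are declared in a UIImage category header):

UIImage *tinted = [[UIImage imageNamed:@"foo.png"] imageWithTint:[UIColor redColor]];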
The first one is easy. Make a new UIView and set its background color to whatever color you’d like.
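For example:

UIView *colorBlock = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
colorBlock.backgroundColor = [UIColor redColor];
// later, change it to any other colour at any time:
colorBlock.backgroundColor = [UIColor blueColor];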
The second is more difficult. As you mentioned, you can put a new view on top of it with transparency turned down, but to get it to clip in the same places, you’d want to use a mask. Something like this:
UIImage *myImage = [UIImage imageNamed:@"foo.png"];
UIImageView *originalImageView = [[UIImageView alloc] initWithImage:myImage];
[originalImageView setFrame:CGRectMake(0.0f, 0.0f, 100.0f, 100.0f)];
[parentView addSubview:originalImageView];
UIView *overlay = [[UIView alloc] initWithFrame:[originalImageView frame]];
UIImageView *maskImageView = [[UIImageView alloc] initWithImage:myImage];
[maskImageView setFrame:[overlay bounds]];
[[overlay layer] setMask:[maskImageView layer]];
[overlay setBackgroundColor:[UIColor redColor]];
[parentView addSubview:overlay];
Keep in mind you’ll have to #import <QuartzCore/QuartzCore.h> in the implementation file.
Here is another way to implement image tinting, especially if you are already using QuartzCore for something else.
Import QuartzCore:
#import <QuartzCore/QuartzCore.h>
Create transparent CALayer and add it as a sublayer for the image you want to tint:
CALayer *sublayer = [CALayer layer];
[sublayer setBackgroundColor:[UIColor whiteColor].CGColor];
[sublayer setOpacity:0.3];
[sublayer setFrame:toBeTintedImage.frame];
[toBeTintedImage.layer addSublayer:sublayer];
Add QuartzCore to your project's framework list (if it isn't already there); otherwise you'll get linker errors like this:
Undefined symbols for architecture i386: "_OBJC_CLASS_$_CALayer"
An easy way to achieve 1 is to create a UILabel or even a UIView and change the backgroundColor as you like.
There is a way to multiply colours instead of just overlaying them, and that should work for 2. See this tutorial.
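If you're using the category method above, one way to experiment with a multiply effect is to swap the blend mode; this is an assumption about what the tutorial does, which isn't shown here:

// In -imageWithTint:alpha: above, replace the kCGBlendModeColor fill with:
UIRectFillUsingBlendMode(aRect, kCGBlendModeMultiply);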
Try this:

- (void)viewDidLoad
{
    [super viewDidLoad];
    UIView *maskedView = [self filledViewForPNG:[UIImage imageNamed:@"mask_effect.png"]
                                           mask:[UIImage imageNamed:@"mask_image.png"]
                                      maskColor:[UIColor colorWithRed:.6 green:.2 blue:.7 alpha:1]];
    [self.view addSubview:maskedView];
}
- (UIView *)filledViewForPNG:(UIImage *)image mask:(UIImage *)maskImage maskColor:(UIColor *)maskColor
{
    UIImageView *pngImageView = [[UIImageView alloc] initWithImage:image];
    UIImageView *maskImageView = [[UIImageView alloc] initWithImage:maskImage];
    CGRect bounds;
    if (image) {
        bounds = pngImageView.bounds;
    } else {
        bounds = maskImageView.bounds;
    }
    UIView *parentView = [[UIView alloc] initWithFrame:bounds];
    [parentView setAutoresizesSubviews:YES];
    [parentView setClipsToBounds:YES];
    UIView *overlay = [[UIView alloc] initWithFrame:bounds];
    [[overlay layer] setMask:[maskImageView layer]];
    [overlay setBackgroundColor:maskColor];
    [parentView addSubview:overlay];
    [parentView addSubview:pngImageView];
    return parentView;
}