Crop circular or elliptical image from original UIImage - iphone

I am working with OpenCV to detect faces. I want the face to be cropped once it is detected. So far I have detected the face and drawn a rect/ellipse around it on iPhone.
Please help me crop the face in a circular/elliptical shape.
- (UIImage *) opencvFaceDetect:(UIImage *)originalImage
{
cvSetErrMode(CV_ErrModeParent);
IplImage *image = [self CreateIplImageFromUIImage:originalImage];
// Scaling down
/*
Creates IPL image (header and data) ----------------cvCreateImage
CVAPI(IplImage*) cvCreateImage( CvSize size, int depth, int channels );
*/
IplImage *small_image = cvCreateImage(cvSize(image->width/2,image->height/2),
IPL_DEPTH_8U, 3);
/* Smooths and downsamples the image (Gaussian pyramid) -------- cvPyrDown */
cvPyrDown(image, small_image, CV_GAUSSIAN_5x5);
int scale = 2;
// Load XML
NSString *path = [[NSBundle mainBundle] pathForResource:@"haarcascade_frontalface_default" ofType:@"xml"];
CvHaarClassifierCascade* cascade = (CvHaarClassifierCascade*)cvLoad([path cStringUsingEncoding:NSASCIIStringEncoding], NULL, NULL, NULL);
// Check whether the cascade has loaded successfully. Otherwise, report an error and quit
if( !cascade )
{
NSLog(@"ERROR: Could not load classifier cascade\n");
//return;
}
//Allocate the Memory storage
CvMemStorage* storage = cvCreateMemStorage(0);
// Clear the memory storage which was used before
cvClearMemStorage( storage );
CGColorSpaceRef colorSpace;
CGContextRef contextRef;
CGRect face_rect;
// Find whether the cascade is loaded, to find the faces. If yes, then:
if( cascade )
{
CvSeq* faces = cvHaarDetectObjects(small_image, cascade, storage, 1.1f, 3, 0, cvSize(20, 20));
cvReleaseImage(&small_image);
// Create canvas to show the results
CGImageRef imageRef = originalImage.CGImage;
colorSpace = CGColorSpaceCreateDeviceRGB();
contextRef = CGBitmapContextCreate(NULL, originalImage.size.width, originalImage.size.height, 8, originalImage.size.width * 4,
colorSpace, kCGImageAlphaPremultipliedLast|kCGBitmapByteOrderDefault);
//VIKAS
CGContextDrawImage(contextRef, CGRectMake(0, 0, originalImage.size.width, originalImage.size.height), imageRef);
CGContextSetLineWidth(contextRef, 4);
CGContextSetRGBStrokeColor(contextRef, 1.0, 1.0, 1.0, 0.5);
// Draw results on the image: draw all face components as small rectangles
// Loop the number of faces found.
for(int i = 0; i < faces->total; i++)
{
NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
// Calc the rect of faces
// Create a new rectangle for drawing the face
CvRect cvrect = *(CvRect*)cvGetSeqElem(faces, i);
// CGRect face_rect = CGContextConvertRectToDeviceSpace(contextRef,
// CGRectMake(cvrect.x * scale, cvrect.y * scale, cvrect.width * scale, cvrect.height * scale));
face_rect = CGContextConvertRectToDeviceSpace(contextRef,
CGRectMake(cvrect.x*scale, cvrect.y, cvrect.width*scale, cvrect.height*scale*1.25));
facedetectapp=(FaceDetectAppDelegate *)[[UIApplication sharedApplication]delegate];
facedetectapp.grabcropcoordrect=face_rect;
NSLog(#" FACE off %f %f %f %f",facedetectapp.grabcropcoordrect.origin.x,facedetectapp.grabcropcoordrect.origin.y,facedetectapp.grabcropcoordrect.size.width,facedetectapp.grabcropcoordrect.size.height);
CGContextStrokeRect(contextRef, face_rect);
//CGContextFillEllipseInRect(contextRef,face_rect);
CGContextStrokeEllipseInRect(contextRef,face_rect);
[pool release];
}
}
CGImageRef imageRef = CGImageCreateWithImageInRect([originalImage CGImage],face_rect);
UIImage *returnImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGContextRelease(contextRef);
CGColorSpaceRelease(colorSpace);
cvReleaseMemStorage(&storage);
cvReleaseHaarClassifierCascade(&cascade);
return returnImage;
}
Thanks
Vikas

There are a pile of blend modes to choose from, a few of which are useful for "masking". I believe this should do approximately what you want:
CGContextSaveGState(contextRef);
CGContextSetBlendMode(contextRef,kCGBlendModeDestinationIn);
CGContextFillEllipseInRect(contextRef,face_rect);
CGContextRestoreGState(contextRef);
"approximately" because it'll mask the entire context contents every time, thus doing the wrong thing for more than one face. To handle this case, use CGContextAddEllipseInRect() in the loop and CGContextFillPath() at the end.
You might also want to look at CGContextBeginTransparencyLayerWithRect().
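A minimal sketch of that multi-face variant, assuming contextRef already holds the photo drawn by the code above and that faces/scale come from the detection loop (this is not the original poster's code):
CGContextSaveGState(contextRef);
CGContextSetBlendMode(contextRef, kCGBlendModeDestinationIn);
CGContextSetRGBFillColor(contextRef, 0, 0, 0, 1.0); // only the alpha matters for destination-in
for (int i = 0; i < faces->total; i++) {
    CvRect r = *(CvRect*)cvGetSeqElem(faces, i);
    // accumulate one ellipse per detected face into the current path
    CGContextAddEllipseInRect(contextRef, CGRectMake(r.x * scale, r.y * scale, r.width * scale, r.height * scale));
}
CGContextFillPath(contextRef); // one fill keeps only the pixels inside any ellipse; everything else becomes transparent
CGContextRestoreGState(contextRef);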

Following is the answer I gave in How to crop UIImage on oval shape or circle shape? to make the image circular. It works for me.
Download the Support archive file from URL http://vocaro.com/trevor/blog/2009/10/12/resize-a-uiimage-the-right-way/
#import "UIImage+RoundedCorner.h"
#import "UIImage+Resize.h"
The following lines resize the image and round its corners with a given radius:
UIImage *mask = [UIImage imageNamed:#"mask.jpg"];
mask = [mask resizedImage:CGSizeMake(47, 47) interpolationQuality:kCGInterpolationHigh ];
mask = [mask roundedCornerImage:23.5 borderSize:1];
Hope it helps someone.

Related

Cropping image with transparency in iPhone

I am working on a jigsaw-type game where I have two images for masking.
I have implemented this code for masking:
- (UIImage*) maskImage:(UIImage *)image withMaskImage:(UIImage*)maskImage {
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef maskImageRef = [maskImage CGImage];
CGContextRef mainViewContentContext = CGBitmapContextCreate (NULL, maskImage.size.width, maskImage.size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
if (mainViewContentContext==NULL)
return NULL;
CGFloat ratio = 0;
ratio = maskImage.size.width/ image.size.width;
if(ratio * image.size.height < maskImage.size.height) {
ratio = maskImage.size.height/ image.size.height;
}
CGRect rect1 = {{0, 0}, {maskImage.size.width, maskImage.size.height}};
CGRect rect2 = {{-((image.size.width*ratio)-maskImage.size.width)/2,-((image.size.height*ratio)-maskImage.size.height)/2},{image.size.width*ratio, image.size.height*ratio}};
CGContextClipToMask(mainViewContentContext, rect1, maskImageRef);
CGContextDrawImage(mainViewContentContext, rect2, image.CGImage);
CGImageRef newImage = CGBitmapContextCreateImage(mainViewContentContext);
CGContextRelease(mainViewContentContext);
UIImage *theImage = [UIImage imageWithCGImage:newImage];
CGImageRelease(newImage);
return theImage;
}
[source image] + [mask image] = [masked result]
This is the final result I got after masking.
Now I would like to crop the image into pieces like these (the example piece images are omitted here) and so on, parametrically, i.e. crop an image by its transparency.
If anyone has implemented such code or has any idea about this scenario, please share.
Thanks.
I am using this code, as per Guntis Treulands's suggestion:
int i=1;
for (int x=0; x<=212; x+=106) {
for (int y=0; y<318; y+=106) {
CGRect rect = CGRectMake(x, y, 106, 106);
CGRect rect2x = CGRectMake(x*2, y*2, 212, 212);
UIImage *orgImg = [UIImage imageNamed:#"cat#2x.png"];
UIImage *frmImg = [UIImage imageNamed:[NSString stringWithFormat:#"%d#2x.png",i]];
UIImage *cropImg = [self cropImage:orgImg withRect:rect2x];
UIImageView *tmpImg = [[UIImageView alloc] initWithFrame:rect];
[tmpImg setUserInteractionEnabled:YES];
[tmpImg setImage:[self maskImage:cropImg withMaskImage:frmImg]];
[self.view addSubview:tmpImg];
i++;
}
}
orgImg is the original cat image, frmImg is the frame for holding an individual piece (masked in Photoshop), and cropImg is a 106x106 cropped image of the original cat@2x.png.
My function for cropping is as follows:
- (UIImage *) cropImage:(UIImage*)originalImage withRect:(CGRect)rect {
return [UIImage imageWithCGImage:CGImageCreateWithImageInRect([originalImage CGImage], rect)];
}
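Note that CGImageCreateWithImageInRect works in pixel coordinates while UIImage sizes are in points, so for @2x images the rect usually has to be scaled (which is why the loop above builds rect2x by hand). A hedged alternative that also releases the CGImageRef the original one-liner leaks (a sketch, not from the original post):
- (UIImage *) cropImage:(UIImage*)originalImage withRect:(CGRect)rect {
    // scale the point-based rect into pixel coordinates for the backing CGImage
    CGFloat s = originalImage.scale;
    CGRect pixelRect = CGRectMake(rect.origin.x * s, rect.origin.y * s,
                                  rect.size.width * s, rect.size.height * s);
    CGImageRef cropped = CGImageCreateWithImageInRect(originalImage.CGImage, pixelRect);
    UIImage *result = [UIImage imageWithCGImage:cropped
                                          scale:originalImage.scale
                                    orientation:originalImage.imageOrientation];
    CGImageRelease(cropped);
    return result;
}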
UPDATE 2
I became really curious to find a better way to create a Jigsaw puzzle, so I spent two weekends and created a demo project of Jigsaw puzzle.
It contains:
provide a column/row count and it will generate the necessary puzzle pieces with the correct width/height. The more columns/rows, the smaller the width/height and the outline/inline puzzle forms.
piece sides are generated randomly each time
pieces can be randomly positioned/rotated at launch
each piece can be rotated by tap, or with two fingers (like a real piece), but once released it will snap to 90/180/270/360 degrees
each piece can be moved if touched on its “touchable shape” boundary (which is mostly the same visible puzzle shape, but WITHOUT inline shapes)
Drawbacks:
no checking whether a piece is in its right place
with more than 100 pieces it starts to lag, because when picking up a piece it goes through all subviews until it finds the correct one.
UPDATE
Thanks for updated question.
I managed to get this:
As you can see, the jigsaw item is cropped correctly, and it sits in a square image view (the green color is the UIImageView backgroundColor).
So - what I did was:
CGRect rect = CGRectMake(105, 0, 170, 170); //~ location on the cat image where the second jigsaw item will be
UIImage *originalCatImage = [UIImage imageNamed:@"cat.png"];//original cat image
UIImage *jigSawItemMask = [UIImage imageNamed:@"JigsawItemNo2.png"];//second jigsaw item mask (visible in my answer) (same width/height as the cat image)
UIImage *fullJigSawItemImage = [jigSawItemMask maskImage:originalCatImage];//masking - so that only the second jigsaw item is visible from the full cat image
UIImage *croppedJigSawItemImage = [self cropImage:fullJigSawItemImage withRect:rect];//cropping so that we get a small image with the jigsaw item centered in it
For image masking I am using a UIImage category function (you can probably use your own masking function, but I'll post it anyway):
- (UIImage*) maskImage:(UIImage *)image
{
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
UIImage *maskImage = self;
CGImageRef maskImageRef = [maskImage CGImage];
// create a bitmap graphics context the size of the image
CGContextRef mainViewContentContext = CGBitmapContextCreate (NULL, maskImage.size.width, maskImage.size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
if (mainViewContentContext==NULL)
return NULL;
CGFloat ratio = 0;
ratio = maskImage.size.width/ image.size.width;
if(ratio * image.size.height < maskImage.size.height) {
ratio = maskImage.size.height/ image.size.height;
}
CGRect rect1 = {{0, 0}, {maskImage.size.width, maskImage.size.height}};
CGRect rect2 = {{-((image.size.width*ratio)-maskImage.size.width)/2 , -((image.size.height*ratio)-maskImage.size.height)/2}, {image.size.width*ratio, image.size.height*ratio}};
CGContextClipToMask(mainViewContentContext, rect1, maskImageRef);
CGContextDrawImage(mainViewContentContext, rect2, image.CGImage);
// Create CGImageRef of the main view bitmap content, and then
// release that bitmap context
CGImageRef newImage = CGBitmapContextCreateImage(mainViewContentContext);
CGContextRelease(mainViewContentContext);
UIImage *theImage = [UIImage imageWithCGImage:newImage];
CGImageRelease(newImage);
// return the image
return theImage;
}
PREVIOUS ANSWER
Can you prepare a mask for each piece?
For example, you have that frame image. Can you cut it in Photoshop into 9 separate images, where each image shows only the corresponding piece (delete all the rest)?
Example - second piece mask:
Then you use each of these newly created mask images on the cat image; each mask hides the whole image except one piece. Thus you will have 9 piece images using 9 different masks.
For a larger or different jigsaw frame, again create separate image masks.
This is a basic solution, but not perfect, as you need to prepare each piece mask separately.
Hope it helps.

How can I draw an image?

I'm programming in Objective-C. I have an image of a line (see below), 1 x 30 pixels.
How can I get a UIImage (50 x 30) (see below) from this line?
Create a CGBitmapContext with a size of 50 x 30, then you can just draw that image into the context using CGContextDrawImage.
After that, use CGBitmapContextCreateImage and [UIImage imageWithCGImage:] to create the UIImage.
CGContextRef CreateBitmapContext(int pixelsWide, int pixelsHigh)
{
CGContextRef context = NULL;
CGColorSpaceRef colorSpace;
int bitmapByteCount;
int bitmapBytesPerRow;
bitmapBytesPerRow = (pixelsWide * 4); // RGBA
bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);
colorSpace = CGColorSpaceCreateDeviceRGB();
context = CGBitmapContextCreate (NULL,
pixelsWide,
pixelsHigh,
8, // bits per component
bitmapBytesPerRow,
colorSpace,
kCGImageAlphaPremultipliedLast);
NSCAssert(context != NULL, @"cannot create bitmap context");
CGColorSpaceRelease( colorSpace );
return context;
}
CGContextRef context = CreateBitmapContext(50, 30);
UIImage *yourLineImage = ...;
CGImageRef cgImg = [yourLineImage CGImage];
for (int i = 0; i < 50; i++) {
CGRect rect;
rect.origin.x = i;
rect.origin.y = 0;
rect.size.width = 1;
rect.size.height = 30;
CGContextDrawImage(context, rect, cgImg);
}
CGImageRef output = CGBitmapContextCreateImage(context);
UIImage *result = [UIImage imageWithCGImage:output];
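If the 1-pixel-wide line can simply be stretched horizontally, the per-column loop is not strictly needed, because CGContextDrawImage scales the source image to fill the destination rect. A minimal sketch under that assumption:
CGContextRef context = CreateBitmapContext(50, 30);
// drawing the 1 x 30 image into a 50 x 30 rect stretches it across the full width
CGContextDrawImage(context, CGRectMake(0, 0, 50, 30), [yourLineImage CGImage]);
CGImageRef stretched = CGBitmapContextCreateImage(context);
UIImage *result = [UIImage imageWithCGImage:stretched];
CGImageRelease(stretched);
CGContextRelease(context);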
If your line has a simple color, try this lazy method:
UIImageView *line = [[UIImageView alloc] initWithFrame:CGRectMake(10, 10, 50, 30)];
[line setImage:[UIImage imageNamed:@"your gray line"]];
[self.view addSubview:line];
You can use +[UIColor colorWithPatternImage] in iOS:
NSString *path =
[[NSBundle mainBundle] pathForResource:#"<# the pattern file #>" ofType:#"png"];
UIColor *patternColor = [UIColor colorWithPatternImage:
[UIImage imageWithContentsOfFile:path]];
/* use patternColor anywhere as a regular UIColor instance */
It works better with seamless patterns. For OS X you can use the +[NSColor colorWithPatternImage] method.
If you just want to draw the image, you might want to try UIImage's drawInRect: method.
You'd typically want to call this from your custom UIView's drawRect:.
There are different approaches to drawing in Cocoa (and Cocoa-Touch) so here's Apple's Drawing and Printing Guide for iOS.
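For example, a minimal custom UIView could look like this (a sketch; the image name "line.png" is just a placeholder):
- (void)drawRect:(CGRect)rect
{
    // drawInRect: stretches the 1 x 30 line image to the given rect
    UIImage *line = [UIImage imageNamed:@"line.png"]; // placeholder name
    [line drawInRect:CGRectMake(10, 10, 50, 30)];
}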

Face Tracking in iPhone using OpenCV

I want to create face tracking on iPhone, just like this code. It's Mac OS code, but I want to do the same thing on iPhone.
Any idea about face tracking on iPhone?
You have to use OpenCV to detect the face and import it into your code. In this method I have used a rectangle/ellipse to represent the detected face:
-(UIImage *) opencvFaceDetect:(UIImage *)originalImage {
cvSetErrMode(CV_ErrModeParent);
IplImage *image = [self CreateIplImageFromUIImage:originalImage];
// Scaling down
/*
Creates IPL image (header and data) ----------------cvCreateImage
CVAPI(IplImage*) cvCreateImage( CvSize size, int depth, int channels );
*/
IplImage *small_image = cvCreateImage(cvSize(image->width/2,image->height/2), IPL_DEPTH_8U, 3);
/* Smooths and downsamples the image (Gaussian pyramid) -------- cvPyrDown */
cvPyrDown(image, small_image, CV_GAUSSIAN_5x5);
int scale = 2;
// Load XML
NSString *path = [[NSBundle mainBundle] pathForResource:@"haarcascade_frontalface_default" ofType:@"xml"];
CvHaarClassifierCascade* cascade = (CvHaarClassifierCascade*)cvLoad([path cStringUsingEncoding:NSASCIIStringEncoding], NULL, NULL, NULL);
// Check whether the cascade has loaded successfully. Otherwise, report an error and quit
if( !cascade )
{
NSLog(@"ERROR: Could not load classifier cascade\n");
//return;
}
//Allocate the Memory storage
CvMemStorage* storage = cvCreateMemStorage(0);
// Clear the memory storage which was used before
cvClearMemStorage( storage );
CGColorSpaceRef colorSpace;
CGContextRef contextRef;
CGRect face_rect;
// Find whether the cascade is loaded, to find the faces. If yes, then:
if( cascade )
{
CvSeq* faces = cvHaarDetectObjects(small_image, cascade, storage, 1.1f, 3, 0, cvSize(20, 20));
cvReleaseImage(&small_image);
// Create canvas to show the results
CGImageRef imageRef = originalImage.CGImage;
colorSpace = CGColorSpaceCreateDeviceRGB();
contextRef = CGBitmapContextCreate(NULL, originalImage.size.width, originalImage.size.height,
8, originalImage.size.width * 4, colorSpace,
kCGImageAlphaPremultipliedLast|kCGBitmapByteOrderDefault);
//VIKAS
CGContextDrawImage(contextRef, CGRectMake(0, 0, originalImage.size.width, originalImage.size.height), imageRef);
CGContextSetLineWidth(contextRef, 4);
CGContextSetRGBStrokeColor(contextRef, 1.0, 1.0, 1.0, 0.5);
// Draw results on the image: draw all face components as small rectangles
// Loop the number of faces found.
for(int i = 0; i < faces->total; i++)
{
NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
// Calc the rect of faces
// Create a new rectangle for drawing the face
CvRect cvrect = *(CvRect*)cvGetSeqElem(faces, i);
// CGRect face_rect = CGContextConvertRectToDeviceSpace(contextRef,
// CGRectMake(cvrect.x * scale, cvrect.y * scale, cvrect.width * scale, cvrect.height * scale));
face_rect = CGContextConvertRectToDeviceSpace(contextRef,
CGRectMake(cvrect.x*scale, cvrect.y, cvrect.width*scale, cvrect.height*scale*1.25));
facedetectapp=(FaceDetectAppDelegate *)[[UIApplication sharedApplication]delegate];
facedetectapp.grabcropcoordrect=face_rect;
NSLog(#" FACE off %f %f %f %f",facedetectapp.grabcropcoordrect.origin.x,facedetectapp.grabcropcoordrect.origin.y,facedetectapp.grabcropcoordrect.size.width,facedetectapp.grabcropcoordrect.size.height);
CGContextStrokeRect(contextRef, face_rect);
//CGContextFillEllipseInRect(contextRef,face_rect);
CGContextStrokeEllipseInRect(contextRef,face_rect);
[pool release];
}
}
CGImageRef imageRef = CGImageCreateWithImageInRect([originalImage CGImage],face_rect);
UIImage *returnImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGContextRelease(contextRef);
CGColorSpaceRelease(colorSpace);
cvReleaseMemStorage(&storage);
cvReleaseHaarClassifierCascade(&cascade);
return returnImage;
}
Take a look at this article. It includes a demo project and explains how to get best performance when processing live video.
Computer vision with iOS Part 2: Face tracking in live video

show image in a CGContextRef

What I am doing: I downloaded code for a calendar, and now I want to show images on its tiles (for each date).
What I am trying is shown in the code below:
- (void)drawTextInContext:(CGContextRef)ctx
{
CGContextSaveGState(ctx);
CGFloat width = self.bounds.size.width;
CGFloat height = self.bounds.size.height;
CGFloat numberFontSize = floorf(0.3f * width);
CGContextSetFillColorWithColor(ctx, kDarkCharcoalColor);
CGContextSetTextDrawingMode(ctx, kCGTextClip);
for (NSInteger i = 0; i < [self.text length]; i++) {
NSString *letter = [self.text substringWithRange:NSMakeRange(i, 1)];
CGSize letterSize = [letter sizeWithFont:[UIFont boldSystemFontOfSize:numberFontSize]];
CGContextSaveGState(ctx); // I will need to undo this clip after the letter's gradient has been drawn
[letter drawAtPoint:CGPointMake(4.0f+(letterSize.width*i), 0.0f) withFont:[UIFont boldSystemFontOfSize:numberFontSize]];
if ([self.date isToday]) {
CGContextSetFillColorWithColor(ctx, kWhiteColor);
CGContextFillRect(ctx, self.bounds);
} else {
// CGContextDrawLinearGradient(ctx, TextFillGradient, CGPointMake(0,0), CGPointMake(0, height/3), kCGGradientDrawsAfterEndLocation);
CGDataProviderRef dataProvider = CGDataProviderCreateWithFilename("left-arrow.png");
CGImageRef image = CGImageCreateWithPNGDataProvider(dataProvider, NULL, NO, kCGRenderingIntentDefault);
//UIImage* image = [UIImage imageNamed:@"left-arrow.png"];
//CGImageRef imageRef = image.CGImage;
CGContextDrawImage(ctx, CGRectMake(8.0f+(letterSize.width*i), 0.0f, 5, 5), image);
//im.image=[UIImage imageNamed:@"left-arrow.png"];
}
CGContextRestoreGState(ctx); // get rid of the clip for the current letter
}
CGContextRestoreGState(ctx);
}
In the else condition I want to show images on the tile, so for that I am converting image objects into CGImageRefs.
Please help me.
I am not sure whether this should be done this way or some other way; please suggest your approach.
Thanks a lot.
The file path of the image seems to be problematic. You can retrieve the correct path with the NSBundle methods. Also, you're leaking a lot of memory because you don't release your images and data providers. To make a long story short, try this:
[[UIImage imageNamed:#"left-arrow.png"] drawInRect:...]
or even simpler:
[[UIImage imageNamed:#"left-arrow.png"] drawAtPoint:...]

UIImage created from CGImageRef fails with UIImagePNGRepresentation

I'm using the following code to crop and create a new UIImage out of a bigger one. I've isolated the issue to the function CGImageCreateWithImageInRect(), which seems to not set some CGImage property the way I want. :-) The problem is that a call to the function UIImagePNGRepresentation() fails, returning nil.
CGImageRef origRef = [stillView.image CGImage];
CGImageRef cgCrop = CGImageCreateWithImageInRect( origRef, theRect);
UIImage *imgCrop = [UIImage imageWithCGImage:cgCrop];
...
NSData *data = UIImagePNGRepresentation ( imgCrop);
-- libpng error: No IDATs written into file
Any idea what might be wrong, or an alternative for cropping a rect out of a UIImage?
I had the same problem, but only when testing compatibility on iOS 3.2. On 4.2 it works fine.
In the end I found this http://www.hive05.com/2008/11/crop-an-image-using-the-iphone-sdk/ which works on both, albeit a little more verbose!
I converted this into a category on UIImage:
UIImage+Crop.h
@interface UIImage (Crop)
- (UIImage*) imageByCroppingToRect:(CGRect)rect;
@end
UIImage+Crop.m
@implementation UIImage (Crop)
- (UIImage*) imageByCroppingToRect:(CGRect)rect
{
//create a context to do our clipping in
UIGraphicsBeginImageContext(rect.size);
CGContextRef currentContext = UIGraphicsGetCurrentContext();
//create a rect with the size we want to crop the image to
//the X and Y here are zero so we start at the beginning of our
//newly created context
CGRect clippedRect = CGRectMake(0, 0, rect.size.width, rect.size.height);
CGContextClipToRect( currentContext, clippedRect);
//create a rect equivalent to the full size of the image
//offset the rect by the X and Y we want to start the crop
//from in order to cut off anything before them
CGRect drawRect = CGRectMake(rect.origin.x * -1,
rect.origin.y * -1,
self.size.width,
self.size.height);
//draw the image to our clipped context using our offset rect
CGContextTranslateCTM(currentContext, 0.0, rect.size.height);
CGContextScaleCTM(currentContext, 1.0, -1.0);
CGContextDrawImage(currentContext, drawRect, self.CGImage);
//pull the image from our cropped context
UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();
//pop the context to get back to the default
UIGraphicsEndImageContext();
//Note: this is autoreleased
return cropped;
}
@end
In a PNG there are various chunks present: some contain palette info, some the actual image data, and some other information; it's a very interesting standard. The IDAT chunk is the bit that actually contains the image data. If there's no "IDAT written into file" then libpng has had some issue creating a PNG from the input data.
I don't know exactly what your stillView.image is, but what happens when you pass your code a CGImageRef that is certainly valid? What are the actual values in theRect? If your theRect is beyond the bounds of the image then the cgCrop you're trying to use to make the UIImage could easily be nil - or not nil, but containing no image or an image with width and height 0, giving libpng nothing to work with.
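One quick check along those lines (a sketch, not from the original answer): clamp theRect to the image bounds before cropping, so an out-of-bounds rect is caught before it ever reaches libpng.
CGImageRef origRef = [stillView.image CGImage];
CGRect imageBounds = CGRectMake(0, 0, CGImageGetWidth(origRef), CGImageGetHeight(origRef));
CGRect clampedRect = CGRectIntersection(theRect, imageBounds); // empty if theRect lies completely outside the image
if (CGRectIsEmpty(clampedRect)) {
    NSLog(@"crop rect %@ is outside the image", NSStringFromCGRect(theRect));
} else {
    CGImageRef cgCrop = CGImageCreateWithImageInRect(origRef, clampedRect);
    UIImage *imgCrop = [UIImage imageWithCGImage:cgCrop];
    CGImageRelease(cgCrop);
    NSData *data = UIImagePNGRepresentation(imgCrop);
    // data should no longer be nil for a valid, non-empty crop rect
}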
It seems the solution you are trying should work, but I recommend using this:
CGImageRef image = [stillView.image CGImage];
CGRect cropZone;
size_t cWidth = cropZone.size.width;
size_t cHeight = cropZone.size.height;
size_t bitsPerComponent = CGImageGetBitsPerComponent(image);
size_t bytesPerRow = CGImageGetBytesPerRow(image) / CGImageGetWidth(image) * cWidth;
//Now we build a Context with those dimensions.
CGContextRef context = CGBitmapContextCreate(nil, cWidth, cHeight, bitsPerComponent, bytesPerRow, CGColorSpaceCreateDeviceRGB(), CGImageGetBitmapInfo(image));
CGContextDrawImage(context, cropZone, image);
CGImageRef result = CGBitmapContextCreateImage(context);
UIImage * cropUIImage = [[UIImage alloc] initWithCGImage:result];
CGContextRelease(context);
CGImageRelease(result);
NSData * imgData = UIImagePNGRepresentation ( cropUIImage);
UIImage *croppedImage = [self imageByCropping:yourImageView.image toRect:heredefineyourRect];
CGSize size = CGSizeMake(croppedImage.size.height, croppedImage.size.width);
UIGraphicsBeginImageContext(size);
CGPoint pointImg1 = CGPointMake(0,0);
[croppedImage drawAtPoint:pointImg1 ];
[[UIImage imageNamed:yourImagenameDefine] drawInRect:CGRectMake(0,532, 150,80) ];//here define your rectangle
UIImage* result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
croppedImage = result;
yourCropImageView.image=croppedImage;
[yourCropImageView.image retain];