I was wondering if anyone could provide an example of how to take a screenshot that mixes OpenGL and UIKit elements. Ever since Apple made UIGetScreenImage() private, this has become a pretty difficult task, because the two common replacement methods capture either only UIKit or only OpenGL content.
This similar question references Apple's Technical Q&A QA1714, but the QA only describes how to handle elements from the camera and UIKit. How do you go about rendering the UIKit view hierarchy into an image context and then drawing the image of your OpenGL ES view on top of it like the answer to the similar question suggests?
This should do the trick. Basically, it renders everything through Core Graphics and produces a single UIImage you can do whatever you like with.
// In Your UI View Controller
- (UIImage *)createSavableImage:(UIImage *)plainGLImage
{
    // Wrap the GL snapshot in an image view and flip it right side up
    // (glReadPixels hands back the pixels upside down)
    UIImageView *glImage = [[UIImageView alloc] initWithImage:plainGLImage];
    glImage.transform = CGAffineTransformMakeScale(1, -1);

    UIGraphicsBeginImageContext(self.view.bounds.size);

    // The order of rendering decides what ends up on top;
    // this draws the UIKit view over the GL image
    [glImage.layer renderInContext:UIGraphicsGetCurrentContext()];
    [someUIView.layer renderInContext:UIGraphicsGetCurrentContext()];

    UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Do something with the resulting image
    return finalImage;
}
// In Your GL View
- (UIImage *)drawGlToImage
{
    // Read the OpenGL framebuffer for this view into a pixel buffer
    size_t width = self.frame.size.width;
    size_t height = self.frame.size.height;
    size_t dataLength = width * height * 4;
    GLubyte *buffer = (GLubyte *)malloc(dataLength);
    glReadPixels(0, 0, (GLsizei)width, (GLsizei)height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    // Wrap the pixels in a CFData so Core Graphics owns its own copy
    // and the temporary buffer can be freed safely
    CFDataRef pixelData = CFDataCreate(NULL, buffer, dataLength);
    free(buffer);
    CGDataProviderRef ref = CGDataProviderCreateWithCFData(pixelData);
    CFRelease(pixelData);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorSpace, kCGImageAlphaLast, ref, NULL, true, kCGRenderingIntentDefault);

    // glReadPixels returns the image bottom-up; the caller flips it with
    // the CGAffineTransformMakeScale(1, -1) transform on the image view
    UIImage *im = [[UIImage alloc] initWithCGImage:iref];

    CGImageRelease(iref);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(ref);
    return im;
}
Then, to create a screenshot from the view controller:

UIImage *glImage = [myGlView drawGlToImage];
UIImage *screenshot = [self createSavableImage:glImage];
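If you need the result at Retina resolution, the same approach should work with UIGraphicsBeginImageContextWithOptions (iOS 4 and later); this is just a small, untested tweak, and the glReadPixels call would likewise have to read at the backing-store size for the GL half to stay sharp:

// Pass 0.0 for the scale so the context uses the device's screen scale
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0.0f);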
Related
I am developing an iPad application for iOS 6 that shows house designs with different colors and textures. For that I am using cocos2d, and for showing the textures and colors used on the house I am using UIKit views.
Now I want to take a screenshot of this view, which contains both the cocos2d layer and the UIKit views.
If I take a screenshot using cocos2d, like:
UIImage *screenshot = [AppDelegate screenshotWithStartNode:n];
then it only captures the cocos2d layer.
And if I take a screenshot using UIKit, like:
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
then it only captures the UIKit components and blacks out the cocos2d part.
I want both of them in the same screenshot.
Try this method, and adapt it to your requirements:
- (UIImage *)screenshotUIImage
{
    CGSize displaySize = [self displaySize];
    CGSize winSize = [self winSize];

    // Create buffer for pixels
    GLuint bufferLength = displaySize.width * displaySize.height * 4;
    GLubyte *buffer = (GLubyte *)malloc(bufferLength);

    // Read pixels from OpenGL
    glReadPixels(0, 0, displaySize.width, displaySize.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    // Make data provider with data
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, bufferLength, NULL);

    // Configure image
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * displaySize.width;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef iref = CGImageCreate(displaySize.width, displaySize.height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

    uint32_t *pixels = (uint32_t *)malloc(bufferLength);
    CGContextRef context = CGBitmapContextCreate(pixels, winSize.width, winSize.height, 8, winSize.width * 4, CGImageGetColorSpace(iref), kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

    CGContextTranslateCTM(context, 0, displaySize.height);
    CGContextScaleCTM(context, 1.0f, -1.0f);

    switch (deviceOrientation_)
    {
        case CCDeviceOrientationPortrait:
            break;
        case CCDeviceOrientationPortraitUpsideDown:
            CGContextRotateCTM(context, CC_DEGREES_TO_RADIANS(180));
            CGContextTranslateCTM(context, -displaySize.width, -displaySize.height);
            break;
        case CCDeviceOrientationLandscapeLeft:
            CGContextRotateCTM(context, CC_DEGREES_TO_RADIANS(-90));
            CGContextTranslateCTM(context, -displaySize.height, 0);
            break;
        case CCDeviceOrientationLandscapeRight:
            CGContextRotateCTM(context, CC_DEGREES_TO_RADIANS(90));
            CGContextTranslateCTM(context, displaySize.width * 0.5f, -displaySize.height);
            break;
    }

    CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, displaySize.width, displaySize.height), iref);
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *outputImage = [UIImage imageWithCGImage:imageRef];

    // Dealloc
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGImageRelease(iref);
    CGColorSpaceRelease(colorSpaceRef);
    CGContextRelease(context);
    free(buffer);
    free(pixels);

    return outputImage;
}

- (Texture2D *)screenshotTexture {
    return [[Texture2D alloc] initWithImage:[self screenshotUIImage]];
}
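Assuming the method above lives in a CCDirector category (it uses winSize, displaySize and the deviceOrientation_ ivar, so that seems the natural home, though that is my guess), usage would be roughly:

UIImage *glShot = [[CCDirector sharedDirector] screenshotUIImage];
Texture2D *glTexture = [[CCDirector sharedDirector] screenshotTexture];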
For more info, see the linked discussion; all of its answers and comments are worth reading.
I hope this helps.
I researched this a lot, and at present I can't find any code that can take a screenshot of a screen containing both cocos2d and UIKit. There is some code available, but it is not acceptable on the App Store; if you use it, your app will be rejected.
So for now, I found a temporary solution to achieve this:
First I took a screenshot of my cocos2d layer, put it into a UIImageView, and added that UIImageView to my screen behind all the existing UIViews, so the user can't notice it:
[self.view insertSubview:imgView atIndex:1];
At index 1, because my cocos2d layer is at index 0, so it sits right above that.
Now that the cocos2d picture is part of my UIKit view hierarchy, I took a screenshot of the current screen in the normal UIKit way. And there we are: I now had a screenshot containing both views. A rough sketch of the whole sequence follows.
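For reference, this is roughly how that sequence might look, assuming screenshotWithStartNode: is the cocos2d screenshot helper from the question and self.view is the UIKit view sitting above the GL view:

// 1. Screenshot of the cocos2d layer only
UIImage *cocosShot = [AppDelegate screenshotWithStartNode:n];

// 2. Slip it behind the UIKit views (index 0 is the GL view itself)
UIImageView *imgView = [[UIImageView alloc] initWithImage:cocosShot];
imgView.frame = self.view.bounds;
[self.view insertSubview:imgView atIndex:1];

// 3. Normal UIKit screenshot of the whole hierarchy
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0.0f);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// 4. Remove the temporary image view again
[imgView removeFromSuperview];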
This worked for me for now. If anyone finds a proper solution for this, please share it!
I'll keep looking for a feasible solution in the meantime.
Thanks, everyone, for the help!
I am working on a jigsaw-type game where I have two images for masking.
I have implemented this code for masking:
- (UIImage *)maskImage:(UIImage *)image withMaskImage:(UIImage *)maskImage {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef maskImageRef = [maskImage CGImage];

    CGContextRef mainViewContentContext = CGBitmapContextCreate(NULL, maskImage.size.width, maskImage.size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace); // the context retains the color space

    if (mainViewContentContext == NULL)
        return NULL;

    // Scale the image so it fills the mask
    CGFloat ratio = maskImage.size.width / image.size.width;
    if (ratio * image.size.height < maskImage.size.height) {
        ratio = maskImage.size.height / image.size.height;
    }

    CGRect rect1 = {{0, 0}, {maskImage.size.width, maskImage.size.height}};
    CGRect rect2 = {{-((image.size.width * ratio) - maskImage.size.width) / 2, -((image.size.height * ratio) - maskImage.size.height) / 2}, {image.size.width * ratio, image.size.height * ratio}};

    CGContextClipToMask(mainViewContentContext, rect1, maskImageRef);
    CGContextDrawImage(mainViewContentContext, rect2, image.CGImage);

    CGImageRef newImage = CGBitmapContextCreateImage(mainViewContentContext);
    CGContextRelease(mainViewContentContext);

    UIImage *theImage = [UIImage imageWithCGImage:newImage];
    CGImageRelease(newImage);

    return theImage;
}
The original post shows the source image plus the mask image, and the final masked result I got from this code.
Now I would like to crop the image into pieces like the ones shown in the original post, and so on, parametrically (i.e. crop an image by its transparency).
If anyone has implemented such code, or has any idea on this scenario, please share.
Thanks.
I am using this code, following Guntis Treulands's suggestion:
int i = 1;
for (int x = 0; x <= 212; x += 106) {
    for (int y = 0; y < 318; y += 106) {
        CGRect rect = CGRectMake(x, y, 106, 106);
        CGRect rect2x = CGRectMake(x * 2, y * 2, 212, 212);

        UIImage *orgImg = [UIImage imageNamed:@"cat@2x.png"];
        UIImage *frmImg = [UIImage imageNamed:[NSString stringWithFormat:@"%d@2x.png", i]];
        UIImage *cropImg = [self cropImage:orgImg withRect:rect2x];

        UIImageView *tmpImg = [[UIImageView alloc] initWithFrame:rect];
        [tmpImg setUserInteractionEnabled:YES];
        [tmpImg setImage:[self maskImage:cropImg withMaskImage:frmImg]];
        [self.view addSubview:tmpImg];

        i++;
    }
}
orgImg is the original cat image, frmImg is the frame for holding an individual piece (masked in Photoshop), and cropImg is a 106x106 crop of the original cat@2x.png.
My function for cropping is as follows:

- (UIImage *)cropImage:(UIImage *)originalImage withRect:(CGRect)rect {
    CGImageRef croppedRef = CGImageCreateWithImageInRect([originalImage CGImage], rect);
    UIImage *cropped = [UIImage imageWithCGImage:croppedRef];
    CGImageRelease(croppedRef); // CGImageCreateWithImageInRect returns a +1 reference
    return cropped;
}
UPDATE 2
I became really curious to find a better way to create a jigsaw puzzle, so I spent two weekends and created a jigsaw puzzle demo project.
It contains:
Provide a column/row count and it will generate the necessary puzzle pieces with the correct width/height; the more columns/rows, the smaller the width/height and the outline/inline puzzle forms.
Piece sides are generated randomly each time.
Pieces can be randomly positioned/rotated at launch.
Each piece can be rotated by tap, or with two fingers (like a real piece), but once released it snaps to 90/180/270/360 degrees.
Each piece can be moved if touched on its "touchable shape" boundary (which is mostly the same visible puzzle shape, but WITHOUT the inline shapes).
Drawbacks:
No check whether a piece is in its correct place.
With more than 100 pieces it starts to lag, because picking up a piece iterates through all subviews until the correct piece is found.
UPDATE
Thanks for updated question.
I managed to get this:
As you can see, the jigsaw item is cropped correctly and sits in a square image view (the green color is the UIImageView backgroundColor).
So, what I did was:
CGRect rect = CGRectMake(105, 0, 170, 170); // ~ location on the cat image where the second jigsaw item will be
UIImage *originalCatImage = [UIImage imageNamed:@"cat.png"]; // original cat image
UIImage *jigSawItemMask = [UIImage imageNamed:@"JigsawItemNo2.png"]; // second jigsaw item mask (visible in my answer), same width/height as the cat image
UIImage *fullJigSawItemImage = [jigSawItemMask maskImage:originalCatImage]; // masking, so that only the second jigsaw item of the full cat image is visible
UIImage *croppedJigSawItemImage = [self cropImage:fullJigSawItemImage withRect:rect]; // cropping, so that we get a small image with the jigsaw item centered in it
For image masking I am using a UIImage category method (you can probably use your own masking function, but I'll post it anyway):
- (UIImage *)maskImage:(UIImage *)image
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    UIImage *maskImage = self;
    CGImageRef maskImageRef = [maskImage CGImage];

    // create a bitmap graphics context the size of the mask image
    CGContextRef mainViewContentContext = CGBitmapContextCreate(NULL, maskImage.size.width, maskImage.size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace); // the context retains the color space

    if (mainViewContentContext == NULL)
        return NULL;

    // scale the image so it fills the mask
    CGFloat ratio = maskImage.size.width / image.size.width;
    if (ratio * image.size.height < maskImage.size.height) {
        ratio = maskImage.size.height / image.size.height;
    }

    CGRect rect1 = {{0, 0}, {maskImage.size.width, maskImage.size.height}};
    CGRect rect2 = {{-((image.size.width * ratio) - maskImage.size.width) / 2, -((image.size.height * ratio) - maskImage.size.height) / 2}, {image.size.width * ratio, image.size.height * ratio}};

    CGContextClipToMask(mainViewContentContext, rect1, maskImageRef);
    CGContextDrawImage(mainViewContentContext, rect2, image.CGImage);

    // Create CGImageRef of the main view bitmap content, and then
    // release that bitmap context
    CGImageRef newImage = CGBitmapContextCreateImage(mainViewContentContext);
    CGContextRelease(mainViewContentContext);

    UIImage *theImage = [UIImage imageWithCGImage:newImage];
    CGImageRelease(newImage);

    // return the image
    return theImage;
}
PREVIOUS ANSWER
Can you prepare a mask for each piece?
For example, you have that frame image. Can you cut it up in Photoshop into 9 separate images, where each image shows only the corresponding piece and everything else is deleted?
Example: the mask for the second piece (shown in the original post).
Then you use each of these newly created mask images on the cat image; each mask hides everything except one piece. Thus you get 9 piece images using 9 different masks.
For a larger or different jigsaw frame, again create separate image masks.
This is a basic solution, but not perfect, as you need to prepare each piece mask separately.
Hope it helps.
I'm developing an iPhone app and I need to manipulate an image at run time to give it a border.
I need to combine two UIImages: one as the border or background image, with the other UIImage sitting inside the first. What's the best way to do this?
I read about functions like UIGraphicsGetCurrentContext(), but it seems that has to be done on the main thread, and I need something lighter because I have to call it several times.
Thanks
Image Manipulation
Here's an example (not tested, just to show you an idea):
- (UIImage *)imageWithImage:(UIImage *)image borderImage:(UIImage *)borderImage convertToSize:(CGSize)size {
    UIGraphicsBeginImageContext(size);
    [borderImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
    [image drawInRect:CGRectMake(10, 10, size.width - 20, size.height - 20)];
    UIImage *destImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return destImage;
}
It draws the background image, then your image, which is drawn a bit smaller. Use this if you really want to produce a combined image and your borderImage is not transparent. If it is transparent, you can draw it over your image instead (swap the two drawInRect: lines), as sketched below.
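For instance, a sketch of the transparent-border variant (the same hypothetical method, just with the draw order swapped):

- (UIImage *)imageWithImage:(UIImage *)image borderImage:(UIImage *)borderImage convertToSize:(CGSize)size {
    UIGraphicsBeginImageContext(size);
    // Photo first, then the transparent border frame drawn over it
    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
    [borderImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *destImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return destImage;
}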
Just for Display
Or, if you only need this for display purposes, you can overlay views, or use Core Animation and set a border on the image view's CALayer:
self.imageView.layer.cornerRadius = 3.0;
self.imageView.layer.masksToBounds = YES;
self.imageView.layer.borderColor = [UIColor blackColor].CGColor;
self.imageView.layer.borderWidth = 1.0;
If your border is transparent, you could just create a UIImageView with your image in it, and another UIImageView with your border in it and then add them as subviews to a view, with the border on top of the image.
Why don't you just make the second UIImageView a subview of the first one (the border image view)? For example:
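A minimal sketch of that idea (borderImage, photo and the inset amount are placeholders):

// The border image view holds the frame; the photo is inset inside it
UIImageView *borderImageView = [[UIImageView alloc] initWithImage:borderImage];
UIImageView *photoImageView = [[UIImageView alloc] initWithImage:photo];
photoImageView.frame = CGRectInset(borderImageView.bounds, 10, 10);
[borderImageView addSubview:photoImageView];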
This is a piece of code I found way back when and now use in my UIImage category. Not sure who to credit for it, but here it is:
- (UIImage *)overlayWithImage:(UIImage *)image2 {
    UIImage *image1 = self;
    CGRect drawRect = CGRectMake(0.0, 0.0, image1.size.width, image1.size.height);

    // Create the bitmap context
    CGContextRef bitmapContext = NULL;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;
    CGSize size = CGSizeMake(image1.size.width, image1.size.height);

    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes; 8 bits each of red, green, blue, and
    // alpha.
    bitmapBytesPerRow = (size.width * 4);
    bitmapByteCount = (bitmapBytesPerRow * size.height);

    // Allocate memory for image data. This is the destination in memory
    // where any drawing to the bitmap context will be rendered.
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL) {
        return nil;
    }

    // Create the bitmap context. We want ARGB with 8 bits per component.
    // Regardless of what the source image format is (CMYK, grayscale, and
    // so on) it will be converted to the format specified here by
    // CGBitmapContextCreate.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    bitmapContext = CGBitmapContextCreate(bitmapData, size.width, size.height, 8, bitmapBytesPerRow, colorSpace, kCGImageAlphaNoneSkipFirst);
    CGColorSpaceRelease(colorSpace);
    if (bitmapContext == NULL) {
        // error creating context
        free(bitmapData);
        return nil;
    }

    // Draw the images into the bitmap context. Once we draw, the memory
    // allocated for the context contains the raw image data in the
    // specified color space.
    CGContextDrawImage(bitmapContext, drawRect, [image1 CGImage]);
    CGContextDrawImage(bitmapContext, drawRect, [image2 CGImage]);

    CGImageRef img = CGBitmapContextCreateImage(bitmapContext);
    UIImage *ui_img = [UIImage imageWithCGImage:img];

    CGImageRelease(img);
    CGContextRelease(bitmapContext);
    free(bitmapData);

    return ui_img;
}
I use this on a background thread without complications.
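For instance, a sketch of what that might look like with GCD (photo, borderImage and imageView are placeholders):

// Compose off the main thread, then hop back to the main queue for UI work
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    UIImage *combined = [photo overlayWithImage:borderImage];
    dispatch_async(dispatch_get_main_queue(), ^{
        imageView.image = combined;
    });
});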
I created a masked image using a function from an iPhone blog:

UIImage *imgToSave = [self maskImage:[UIImage imageNamed:@"pic.jpg"] withMask:[UIImage imageNamed:@"sd-face-mask.png"]];
It looks good in a UIImageView:
UIImageView *imgView = [[UIImageView alloc] initWithImage:imgToSave];
imgView.center = CGPointMake(160.0f, 140.0f);
[self.view addSubview:imgView];
I use UIImagePNGRepresentation to save it to disk:
[UIImagePNGRepresentation(imgToSave) writeToFile:[self findUniqueSavePath] atomically:YES];
UIImagePNGRepresentation returns NSData for an image that looks different: the output is the inverse of the mask.
The area that was cut out in the app is now visible in the file, and the area that was visible in the app is now removed; the visibility is reversed.
My mask is designed to remove everything but the face area in the picture. The UIImage looks right in the app, but after I save it to disk the file looks like the opposite: the face is removed but everything else is there.
Please let me know if you can help!
In Quartz you can mask either with an image mask (black lets pixels through and white blocks them) or with a normal image used as a mask (white lets through and black blocks), which is the opposite. It seems that for some reason saving treats the image mask as a normal masking image. One thought is to render into a bitmap context and then create the image to be saved from that, as sketched below.
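A minimal sketch of that idea, assuming maskedImage is the UIImage that looks correct on screen and path is wherever the PNG should be written:

// Re-render the masked image into a plain bitmap context so the saved PNG
// matches what is drawn on screen (flattens the mask into ordinary alpha)
UIGraphicsBeginImageContextWithOptions(maskedImage.size, NO, maskedImage.scale);
[maskedImage drawInRect:CGRectMake(0, 0, maskedImage.size.width, maskedImage.size.height)];
UIImage *flattened = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

[UIImagePNGRepresentation(flattened) writeToFile:path atomically:YES];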
I had the exact same issue, when I saved the file it was one way, but the image returned in memory was the exact opposite.
The culprit & the solution was UIImagePNGRepresentation(). It fixes the in-app image before saving it to disk, so I just inserted that function as the last step in creating the masked image and returning that.
This may not be the most elegant solution, but it works. I copied some code from my app and condensed it; not sure if the code below works as is, but if not, it's close... maybe just some typos.
Enjoy. :)
// MyImageHelperObj.h

@interface MyImageHelperObj : NSObject

+ (UIImage *)createGrayScaleImage:(UIImage *)originalImage;
+ (UIImage *)createMaskedImageWithSize:(CGSize)newSize sourceImage:(UIImage *)sourceImage maskImage:(UIImage *)maskImage;

@end

// MyImageHelperObj.m

#import <QuartzCore/QuartzCore.h>
#import "MyImageHelperObj.h"

@implementation MyImageHelperObj
+ (UIImage *)createMaskedImageWithSize:(CGSize)newSize sourceImage:(UIImage *)sourceImage maskImage:(UIImage *)maskImage
{
    // create image size rect
    CGRect newRect = CGRectZero;
    newRect.size = newSize;

    // draw source image
    UIGraphicsBeginImageContextWithOptions(newRect.size, NO, 0.0f);
    [sourceImage drawInRect:newRect];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();

    // draw mask image
    [maskImage drawInRect:newRect blendMode:kCGBlendModeNormal alpha:1.0f];
    maskImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // create grayscale version of mask image to make the "image mask"
    UIImage *grayScaleMaskImage = [MyImageHelperObj createGrayScaleImage:maskImage];
    CGFloat width = CGImageGetWidth(grayScaleMaskImage.CGImage);
    CGFloat height = CGImageGetHeight(grayScaleMaskImage.CGImage);
    CGFloat bitsPerPixel = CGImageGetBitsPerPixel(grayScaleMaskImage.CGImage);
    CGFloat bytesPerRow = CGImageGetBytesPerRow(grayScaleMaskImage.CGImage);
    CGDataProviderRef providerRef = CGImageGetDataProvider(grayScaleMaskImage.CGImage);
    CGImageRef imageMask = CGImageMaskCreate(width, height, 8, bitsPerPixel, bytesPerRow, providerRef, NULL, false);

    CGImageRef maskedImage = CGImageCreateWithMask(newImage.CGImage, imageMask);
    CGImageRelease(imageMask);
    newImage = [UIImage imageWithCGImage:maskedImage];
    CGImageRelease(maskedImage);

    return [UIImage imageWithData:UIImagePNGRepresentation(newImage)];
}

+ (UIImage *)createGrayScaleImage:(UIImage *)originalImage
{
    // create gray device colorspace.
    CGColorSpaceRef space = CGColorSpaceCreateDeviceGray();
    // create 8-bit bitmap context without alpha channel.
    CGContextRef bitmapContext = CGBitmapContextCreate(NULL, originalImage.size.width, originalImage.size.height, 8, 0, space, kCGImageAlphaNone);
    CGColorSpaceRelease(space);
    // Draw image.
    CGRect bounds = CGRectMake(0.0, 0.0, originalImage.size.width, originalImage.size.height);
    CGContextDrawImage(bitmapContext, bounds, originalImage.CGImage);
    // Get image from bitmap context.
    CGImageRef grayScaleImage = CGBitmapContextCreateImage(bitmapContext);
    CGContextRelease(bitmapContext);
    // image is inverted. UIImage inverts orientation while converting CGImage to UIImage.
    UIImage *image = [UIImage imageWithCGImage:grayScaleImage];
    CGImageRelease(grayScaleImage);
    return image;
}

@end
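Usage might look something like this (pic.jpg and sd-face-mask.png are the file names from the question, savePath is a placeholder):

UIImage *source = [UIImage imageNamed:@"pic.jpg"];
UIImage *mask = [UIImage imageNamed:@"sd-face-mask.png"];
UIImage *masked = [MyImageHelperObj createMaskedImageWithSize:source.size sourceImage:source maskImage:mask];
[UIImagePNGRepresentation(masked) writeToFile:savePath atomically:YES];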
I'm using the following code to crop and create a new UIImage out of a bigger one. I've isolated the issue to the function CGImageCreateWithImageInRect(), which seems not to set some CGImage property the way I want. :-) The problem is that a call to UIImagePNGRepresentation() fails, returning nil.
CGImageRef origRef = [stillView.image CGImage];
CGImageRef cgCrop = CGImageCreateWithImageInRect( origRef, theRect);
UIImage *imgCrop = [UIImage imageWithCGImage:cgCrop];
...
NSData *data = UIImagePNGRepresentation ( imgCrop);
-- libpng error: No IDATs written into file
Any idea what might be wrong, or an alternative way to crop a rect out of a UIImage?
I had the same problem, but only when testing compatibility on iOS 3.2. On 4.2 it works fine.
In the end I found this http://www.hive05.com/2008/11/crop-an-image-using-the-iphone-sdk/ which works on both, albeit a little more verbose!
I converted this into a category on UIImage:
UIImage+Crop.h
@interface UIImage (Crop)

- (UIImage *)imageByCroppingToRect:(CGRect)rect;

@end
UIImage+Crop.m
@implementation UIImage (Crop)

- (UIImage *)imageByCroppingToRect:(CGRect)rect
{
    // create a context to do our clipping in
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef currentContext = UIGraphicsGetCurrentContext();

    // create a rect with the size we want to crop the image to
    // the X and Y here are zero so we start at the beginning of our
    // newly created context
    CGRect clippedRect = CGRectMake(0, 0, rect.size.width, rect.size.height);
    CGContextClipToRect(currentContext, clippedRect);

    // create a rect equivalent to the full size of the image
    // offset the rect by the X and Y we want to start the crop
    // from in order to cut off anything before them
    CGRect drawRect = CGRectMake(rect.origin.x * -1,
                                 rect.origin.y * -1,
                                 self.size.width,
                                 self.size.height);

    // draw the image to our clipped context using our offset rect
    CGContextTranslateCTM(currentContext, 0.0, rect.size.height);
    CGContextScaleCTM(currentContext, 1.0, -1.0);
    CGContextDrawImage(currentContext, drawRect, self.CGImage);

    // pull the image from our cropped context
    UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();

    // pop the context to get back to the default
    UIGraphicsEndImageContext();

    // Note: this is autoreleased
    return cropped;
}

@end
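With the category in place, the failing snippet from the question could then be written as (stillView and theRect are the names used in the question):

UIImage *imgCrop = [stillView.image imageByCroppingToRect:theRect];
NSData *data = UIImagePNGRepresentation(imgCrop);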
In a PNG there are various chunks present, some containing palette info, some actual image data and some other information, it's a very interesting standard. The IDAT chunk is the bit that actually contains the image data. If there's no "IDAT written into file" then libpng has had some issue creating a PNG from the input data.
I don't know exactly what your stillView.image is, but what happens when you pass your code a CGImageRef that is certainly valid? What are the actual values in theRect? If your theRect is beyond the bounds of the image then the cgCrop you're trying to use to make the UIImage could easily be nil - or not nil, but containing no image or an image with width and height 0, giving libpng nothing to work with.
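One quick sanity check along those lines is to clamp theRect to the image bounds before cropping; a sketch, not necessarily the fix:

// Make sure theRect actually lies inside the source image; an empty or
// out-of-bounds rect gives CGImageCreateWithImageInRect nothing to return
CGRect imageBounds = CGRectMake(0, 0, CGImageGetWidth(origRef), CGImageGetHeight(origRef));
CGRect clamped = CGRectIntersection(theRect, imageBounds);
if (CGRectIsEmpty(clamped)) {
    // nothing to crop, so bail out before UIImagePNGRepresentation gets a nil image
    return;
}
CGImageRef cgCrop = CGImageCreateWithImageInRect(origRef, clamped);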
It seems the solution you are trying should work, but I recommend using this:

CGImageRef image = [stillView.image CGImage];
CGRect cropZone = theRect; // the rect you want to crop out of the image

size_t cWidth = cropZone.size.width;
size_t cHeight = cropZone.size.height;
size_t bitsPerComponent = CGImageGetBitsPerComponent(image);
size_t bytesPerRow = CGImageGetBytesPerRow(image) / CGImageGetWidth(image) * cWidth;

// Now we build a context with those dimensions.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(nil, cWidth, cHeight, bitsPerComponent, bytesPerRow, colorSpace, CGImageGetBitmapInfo(image));
CGColorSpaceRelease(colorSpace);

CGContextDrawImage(context, cropZone, image);

CGImageRef result = CGBitmapContextCreateImage(context);
UIImage *cropUIImage = [[UIImage alloc] initWithCGImage:result];

CGContextRelease(context);
CGImageRelease(result);

NSData *imgData = UIImagePNGRepresentation(cropUIImage);
UIImage *croppedImage = [self imageByCropping:yourImageView.image toRect:heredefineyourRect];
CGSize size = CGSizeMake(croppedImage.size.height, croppedImage.size.width);
UIGraphicsBeginImageContext(size);
CGPoint pointImg1 = CGPointMake(0, 0);
[croppedImage drawAtPoint:pointImg1];
[[UIImage imageNamed:yourImagenameDefine] drawInRect:CGRectMake(0, 532, 150, 80)]; // define your rectangle here
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
croppedImage = result;
yourCropImageView.image = croppedImage;
[yourCropImageView.image retain];