How to use CGImageCreateWithImageInRect for iPhone 4 (HD)?

I use the following code to get images from a sprite sheet. It works fine everywhere except on the iPhone 4 (HD version).
- (UIImage *)croppedImage:(CGRect)rect {
    CGImageRef image = CGImageCreateWithImageInRect([self CGImage], rect);
    UIImage *result = [UIImage imageWithCGImage:image];
    CGImageRelease(image);
    return result;
}
The iPhone 4 automatically loads the HD version of the image (sprite@2x.png) instead of sprite.png. The original image has a scale of 2, but the resulting image has a scale of 1 and the wrong size.
How should this behavior be handled, taking into account the different scales of the iPhone 3G[S] and the iPhone 4?
I have read this document, but it says nothing about using CGImageCreateWithImageInRect.

From what I can tell, CGImageCreateWithImageInRect will do the right thing. What you need to change is the UIImage initialization:
http://developer.apple.com/iphone/library/documentation/uikit/reference/UIImage_Class/Reference/Reference.html#//apple_ref/occ/clm/UIImage/imageWithCGImage:scale:orientation:
Change it to [UIImage imageWithCGImage:image scale:self.scale orientation:self.imageOrientation] and it should work just fine. (This assumes the method is a category on UIImage, which it looks like it is.)
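Applied to the method from the question, a minimal sketch of that change might look like this (assuming, as above, that the method lives in a UIImage category):
- (UIImage *)croppedImage:(CGRect)rect {
    CGImageRef image = CGImageCreateWithImageInRect([self CGImage], rect);
    // Preserve the receiver's scale and orientation so an @2x sprite keeps its point size.
    UIImage *result = [UIImage imageWithCGImage:image
                                          scale:self.scale
                                    orientation:self.imageOrientation];
    CGImageRelease(image);
    return result;
}
Note that CGImageCreateWithImageInRect works in pixel coordinates of the underlying CGImage, so a rect expressed in points may still need to be multiplied by the scale, as the next answer describes.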

You should multiply the crop rect by the image scale. In my experience, it's unnecessary to use any different image initialization.
- (UIImage *)_cropImage:(UIImage *)image withRect:(CGRect)cropRect
{
    cropRect = CGRectMake(cropRect.origin.x * image.scale,
                          cropRect.origin.y * image.scale,
                          cropRect.size.width * image.scale,
                          cropRect.size.height * image.scale);
    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], cropRect);
    UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return croppedImage;
}
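For example, cropping a 100x100-point tile out of the sprite sheet would look like this (the image and rect values here are illustrative):
UIImage *sprite = [UIImage imageNamed:@"sprite.png"]; // resolves to sprite@2x.png on retina, so sprite.scale == 2
UIImage *tile = [self _cropImage:sprite withRect:CGRectMake(0, 0, 100, 100)];
// The rect is given in points; the method multiplies it by sprite.scale,
// so on a retina device it extracts 200x200 pixels from the @2x bitmap.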

Related

How to scale an image to 0.25 or resize it to 478*640 in iOS 6

In my application I have an image with dimensions of 1936*2592 and a size of up to 6 MB.
I need to scale the image to 0.25 or resize it to 478*640 and then upload it to the server.
I have tried lots of ways to do this in iOS 6 but have failed:
imageData = UIImagePNGRepresentation(self.cl_PriviewImage); // self.cl_PriviewImage is my image
UIImage *temp = [[UIImage alloc] initWithData:imageData];
ScaledImage = [temp resizedImage:CGSizeMake(478, 640) interpolationQuality:kCGInterpolationHigh];
ScaledData = UIImagePNGRepresentation(ScaledImage);
It says the resizedImage method is not found.
I have also tried the ScaleToSize method, but it shows the same error.
I have also tried:
ScaledImage = [UIImage imageWithCGImage:self.cl_PriviewImage.CGImage scale:0.25f orientation:self.cl_PriviewImage.imageOrientation];
ScaledData = UIImagePNGRepresentation(ScaledImage);
In this case there is no error, but it doesn't achieve my goal: the uploaded image's scale/size doesn't change.
Is there any other way to do this in iOS 6, or am I going wrong somewhere?
Please suggest something.
Thanks in advance.
- (UIImage *)image:(UIImage *)image scaledToSize:(CGSize)newSize
{
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 1);
    CGContextSetInterpolationQuality(UIGraphicsGetCurrentContext(), kCGInterpolationHigh);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
You can also add this method to a UIImage category; that would be a better solution.
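A sketch of what such a category might look like (the category name is illustrative):
// UIImage+Scaling.h (illustrative name)
@interface UIImage (Scaling)
- (UIImage *)scaledToSize:(CGSize)newSize;
@end

// UIImage+Scaling.m
@implementation UIImage (Scaling)
- (UIImage *)scaledToSize:(CGSize)newSize
{
    // Draw the receiver into a fixed-size bitmap context and return the result.
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 1);
    CGContextSetInterpolationQuality(UIGraphicsGetCurrentContext(), kCGInterpolationHigh);
    [self drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
@end
It could then be called directly on the preview image from the question:
ScaledImage = [self.cl_PriviewImage scaledToSize:CGSizeMake(478, 640)];
ScaledData = UIImagePNGRepresentation(ScaledImage);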

Image rotated 90 degrees with AVCam

I've looked through tons of questions here on Stack Overflow and none of them solved my problem.
So, I'm using Apple's AVCam sample:
http://developer.apple.com/library/ios/#samplecode/AVCam/Introduction/Intro.html
When I take a picture and save it to the image library it's fine, but when I show it on screen after cropping it, and when I send it to a server using
NSData* pictureData = UIImageJPEGRepresentation(self.snappedPictureView.image, 0.9);
it comes out rotated 90 degrees!
Here is the code I use to crop it:
UIImage* cropped = [image imageByCroppingRect:CGRectMake(0, 0, (image.size.width * 300)/self.view.frame.size.width, (image.size.height * 300)/self.view.frame.size.height)];
imageByCroppingRect is:
- (UIImage *)imageByCroppingRect:(CGRect)area
{
    UIImage *croppedImage;
    CGImageRef imageRef = CGImageCreateWithImageInRect([self CGImage], area);
    // or use the UIImage wherever you like
    croppedImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return croppedImage;
}
When you crop the image, you are losing the metadata associated with it that tells the system the proper way to rotate it.
Instead, your code should preserve the original orientation of the image, like this:
- (UIImage *)imageByCroppingRect:(CGRect)rect {
    CGImageRef imageRef = CGImageCreateWithImageInRect([self CGImage], rect);
    UIImage *result = [UIImage imageWithCGImage:imageRef scale:self.scale orientation:self.imageOrientation];
    CGImageRelease(imageRef);
    return result;
}
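With that version of imageByCroppingRect:, the crop from the question keeps the original orientation metadata, so the JPEG data sent to the server should no longer come out rotated:
UIImage *cropped = [image imageByCroppingRect:CGRectMake(0, 0,
                        (image.size.width * 300) / self.view.frame.size.width,
                        (image.size.height * 300) / self.view.frame.size.height)];
NSData *pictureData = UIImageJPEGRepresentation(cropped, 0.9); // orientation is carried into the JPEG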

Why is this transparent image showing up with a white background on a device?

I am creating a transparent image on iOS using Quartz. The image shows up with the proper transparency in the Simulator, but shows a white background on a device.
Here is the code I'm using for this:
+ (UIImage *)clearRect:(CGRect)rect inImage:(UIImage *)image {
    if (UIGraphicsBeginImageContextWithOptions != NULL)
        UIGraphicsBeginImageContextWithOptions([image size], NO, 0.0);
    else
        UIGraphicsBeginImageContext([image size]);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [image drawInRect:CGRectMake(0.0, 0.0, [image size].width, [image size].height)];
    CGContextClearRect(context, rect);
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    NSData *data = UIImagePNGRepresentation(result);
    UIImage *img = [UIImage imageWithData:data];
    UIGraphicsEndImageContext();
    return img;
}
On the device, the cleared area appears white instead of transparent (screenshot omitted).
What could I be doing wrong?
Given that the code works in the Simulator but not on your device, you may have fallen victim to the "cruftmonster" described here:
http://weblog.bignerdranch.com/642-beware-the-cruftmonster/
You may have some legacy files that were removed from your Xcode project but not from the application bundle on the device. Did you initially test with opaque images?
Not sure if this is the solution, but it certainly matches the symptoms you describe.

iOS Retina Display Masking Bug

I'm currently using two images for a menu that I've built. I was using this code a while ago on standard-resolution displays and it was working fine, but with the retina display I'm having issues getting CGImageRef to create the right masked image for the background when a button is pressed. I've imported the images using the retina file-name conventions. The images are supplied using:
[UIImage imageNamed:@"filename.png"]
I've provided both a standard and a retina image, using both the filename.png and filename@2x.png names.
The problem comes when choosing the mask for the selected area. The code works fine with lower-resolution resources and a high-resolution main resource, but when I use
CGImageCreateWithImageInRect
and specify the rect that I want to create the image within, the image's scale is increased. The main button's resolution is fine, but the image that is returned and superimposed on the button press is not the correct resolution; it is oddly scaled to twice the pixel density, which looks terrible.
I've tried both
UIImage *img2 = [UIImage imageWithCGImage:cgImg scale:[img scale] orientation:[img imageOrientation]];
UIImage *scaledImage = [UIImage imageWithCGImage:[img2 CGImage] scale:4.0 orientation:UIImageOrientationUp];
and I seem to be getting nowhere when I take the image and draw it with drawInRect:(selected rect).
I have been tearing my hair out for about two hours now and can't seem to find a decent solution. Does anyone have any ideas?
I figured out what needs to be done in this case. I created a helper method that takes the scale of the image into account when building the pressed-state image, and made it scale the CGRect by the image scale, like so:
- (UIImage *)imageFromImage:(UIImage *)image inRect:(CGRect)rect {
    rect.size.height = rect.size.height * [image scale];
    rect.size.width = rect.size.width * [image scale];
    rect.origin.x = rect.origin.x * [image scale];
    rect.origin.y = rect.origin.y * [image scale];
    CGImageRef sourceImageRef = [image CGImage];
    CGImageRef newImageRef = CGImageCreateWithImageInRect(sourceImageRef, rect);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef scale:[image scale] orientation:[image imageOrientation]];
    CGImageRelease(newImageRef);
    return newImage;
}
That should help anyone having similar issues with this kind of mapping.
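For example, cutting a pressed-state region out of the retina-backed background could look like this (the file name and rect are illustrative):
UIImage *background = [UIImage imageNamed:@"filename.png"]; // loads filename@2x.png on retina, scale == 2
UIImage *pressed = [self imageFromImage:background inRect:CGRectMake(0, 60, 320, 44)];
// The rect is given in points; the helper multiplies it by the image scale,
// and the returned UIImage carries scale 2, so drawing it no longer doubles the pixel density.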

UIImagePNGRepresentation and masked images

I created a masked image using a function from an iPhone blog:
UIImage *imgToSave = [self maskImage:[UIImage imageNamed:@"pic.jpg"] withMask:[UIImage imageNamed:@"sd-face-mask.png"]];
It looks good in a UIImageView:
UIImageView *imgView = [[UIImageView alloc] initWithImage:imgToSave];
imgView.center = CGPointMake(160.0f, 140.0f);
[self.view addSubview:imgView];
Then I use UIImagePNGRepresentation to save it to disk:
[UIImagePNGRepresentation(imgToSave) writeToFile:[self findUniqueSavePath] atomically:YES];
UIImagePNGRepresentation returns NSData for an image that looks different.
The output is the inverse of the image mask: the area that was cut out in the app is now visible in the file, and the area that was visible in the app is now removed. The visibility is the opposite.
My mask is designed to remove everything but the face area in the picture. The UIImage looks right in the app, but after I save it to disk the file looks like the opposite: the face is removed but everything else is there.
Please let me know if you can help!
In Quartz you can mask either with an image mask (black lets through and white blocks) or with a normal image (white lets through and black blocks), which is the opposite. It seems that, for some reason, saving treats the image mask as a normal masking image. One thought is to render to a bitmap context and then create the image to be saved from that.
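A minimal sketch of that bitmap-context idea, reusing the names from the question (imgToSave and findUniqueSavePath):
// Flatten the masked image into an ordinary bitmap so the saved PNG
// no longer depends on how the mask is interpreted.
UIGraphicsBeginImageContextWithOptions(imgToSave.size, NO, imgToSave.scale);
[imgToSave drawInRect:CGRectMake(0, 0, imgToSave.size.width, imgToSave.size.height)];
UIImage *flattened = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[UIImagePNGRepresentation(flattened) writeToFile:[self findUniqueSavePath] atomically:YES];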
I had the exact same issue: when I saved the file it was one way, but the image returned in memory was the exact opposite.
The culprit & the solution was UIImagePNGRepresentation(). It fixes the in-app image before saving it to disk, so I just inserted that function as the last step in creating the masked image and returning that.
This may not be the most elegant solution, but it works. I copied some code from my app and condensed it; I'm not sure the code below works as-is, but if not, it's close... maybe just some typos.
Enjoy. :)
// MyImageHelperObj.h
@interface MyImageHelperObj : NSObject
+ (UIImage *)createGrayScaleImage:(UIImage *)originalImage;
+ (UIImage *)createMaskedImageWithSize:(CGSize)newSize sourceImage:(UIImage *)sourceImage maskImage:(UIImage *)maskImage;
@end

// MyImageHelperObj.m
#import <QuartzCore/QuartzCore.h>
#import "MyImageHelperObj.h"

@implementation MyImageHelperObj

+ (UIImage *)createMaskedImageWithSize:(CGSize)newSize sourceImage:(UIImage *)sourceImage maskImage:(UIImage *)maskImage
{
    // create image size rect
    CGRect newRect = CGRectZero;
    newRect.size = newSize;
    // draw source image
    UIGraphicsBeginImageContextWithOptions(newRect.size, NO, 0.0f);
    [sourceImage drawInRect:newRect];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    // draw mask image
    [maskImage drawInRect:newRect blendMode:kCGBlendModeNormal alpha:1.0f];
    maskImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // create grayscale version of mask image to make the "image mask"
    UIImage *grayScaleMaskImage = [MyImageHelperObj createGrayScaleImage:maskImage];
    size_t width = CGImageGetWidth(grayScaleMaskImage.CGImage);
    size_t height = CGImageGetHeight(grayScaleMaskImage.CGImage);
    size_t bitsPerPixel = CGImageGetBitsPerPixel(grayScaleMaskImage.CGImage);
    size_t bytesPerRow = CGImageGetBytesPerRow(grayScaleMaskImage.CGImage);
    CGDataProviderRef providerRef = CGImageGetDataProvider(grayScaleMaskImage.CGImage);
    CGImageRef imageMask = CGImageMaskCreate(width, height, 8, bitsPerPixel, bytesPerRow, providerRef, NULL, false);
    CGImageRef maskedImage = CGImageCreateWithMask(newImage.CGImage, imageMask);
    CGImageRelease(imageMask);
    newImage = [UIImage imageWithCGImage:maskedImage];
    CGImageRelease(maskedImage);
    // run the result through UIImagePNGRepresentation() so the in-memory image matches what gets saved
    return [UIImage imageWithData:UIImagePNGRepresentation(newImage)];
}

+ (UIImage *)createGrayScaleImage:(UIImage *)originalImage
{
    // create gray device colorspace.
    CGColorSpaceRef space = CGColorSpaceCreateDeviceGray();
    // create 8-bit bitmap context without alpha channel.
    CGContextRef bitmapContext = CGBitmapContextCreate(NULL, originalImage.size.width, originalImage.size.height, 8, 0, space, kCGImageAlphaNone);
    CGColorSpaceRelease(space);
    // Draw image.
    CGRect bounds = CGRectMake(0.0, 0.0, originalImage.size.width, originalImage.size.height);
    CGContextDrawImage(bitmapContext, bounds, originalImage.CGImage);
    // Get image from bitmap context.
    CGImageRef grayScaleImage = CGBitmapContextCreateImage(bitmapContext);
    CGContextRelease(bitmapContext);
    // image is inverted. UIImage inverts orientation while converting CGImage to UIImage.
    UIImage *image = [UIImage imageWithCGImage:grayScaleImage];
    CGImageRelease(grayScaleImage);
    return image;
}

@end