UIGraphicsBeginImageContextWithOptions and UIImageJPEGRepresentation not working well together - iPhone

So I have this code to create a UIImage:
UIGraphicsBeginImageContextWithOptions(border.frame.size, YES, 0);
[border.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *thumbnailImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
At this point, the size on the image is correct, 80x100.
Then it runs this code:
NSData *fullImageData = UIImageJPEGRepresentation(thumbnailImage, 1.0f);
And the NSData of the image comes out at 160x200 - twice the size it should be.
It became clear that the reason for this is the line:
UIGraphicsBeginImageContextWithOptions(border.frame.size, YES, 0);
The 0 at the end is the scale; because it's 0, it uses the device's scale factor. I keep it this way to maintain a sharp image. However, when I set the scale to 1, the image stays the size it should be, but it no longer comes out in Retina quality. What I want is to keep Retina quality while also keeping the right size. Is there a way to do this?
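For context (a sketch; the numbers assume a 2x device): UIImage reports its size in points, while UIImageJPEGRepresentation encodes raw pixels, so a scale-2.0 image encodes at double its point dimensions:

// thumbnailImage.size  -> 80 x 100 (points)
// thumbnailImage.scale -> 2.0
// encoded JPEG         -> 160 x 200 pixels
CGFloat pixelWidth  = thumbnailImage.size.width  * thumbnailImage.scale;
CGFloat pixelHeight = thumbnailImage.size.height * thumbnailImage.scale;
NSLog(@"pixels: %.0f x %.0f", pixelWidth, pixelHeight);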

Try resizing the UIImage before calling UIImageJPEGRepresentation:
- (UIImage *)resizeImage:(UIImage *)image newSize:(CGSize)newSize {
    CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
    CGImageRef imageRef = image.CGImage;

    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Set the quality level to use when rescaling
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);

    // CGContextDrawImage draws upside down in UIKit coordinates, so flip vertically
    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height);
    CGContextConcatCTM(context, flipVertical);

    // Draw into the context; this scales the image
    CGContextDrawImage(context, newRect, imageRef);

    // Get the resized image from the context as a UIImage
    CGImageRef newImageRef = CGBitmapContextCreateImage(context);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
    CGImageRelease(newImageRef);

    UIGraphicsEndImageContext();
    return newImage;
}
if ([UIScreen mainScreen].scale > 1) {
    // Halve the point size so the encoded pixel output matches the intended dimensions
    thumbnailImage = [self resizeImage:thumbnailImage
                               newSize:CGSizeMake(thumbnailImage.size.width / [UIScreen mainScreen].scale,
                                                  thumbnailImage.size.height / [UIScreen mainScreen].scale)];
}

- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
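With the signature filled in (the parameter names added above are assumptions), it can be combined with the same scale division as the first answer; a sketch:

UIImage *scaled = [self imageWithImage:thumbnailImage
                          scaledToSize:CGSizeMake(thumbnailImage.size.width / [UIScreen mainScreen].scale,
                                                  thumbnailImage.size.height / [UIScreen mainScreen].scale)];
NSData *fullImageData = UIImageJPEGRepresentation(scaled, 1.0f);

Note that, unlike the CGContextDrawImage approach above, drawInRect: respects the image's orientation, so no vertical flip is needed.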

Related

Convert any uploaded image to 72 DPI

In my application I have an upload-image feature, and I want to convert the uploaded image to 72 DPI if it arrives with more or less than that.
Can this be done using CGImageCreate?
Please suggest an approach.
Thanks.
We can get an image at 72 DPI with the method below; we really just need to get a new image out of the context, as follows.
- (UIImage *)resizeImageFor72DPI:(UIImage *)image newSize:(CGSize)newSize
{
    CGRect newRect = CGRectZero;
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]
        && [[UIScreen mainScreen] scale] == 2.0) {
        // For retina images, halve the point size
        newSize = CGSizeMake(newSize.width / 2.0, newSize.height / 2.0);
    }
    newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
    CGImageRef imageRef = image.CGImage;

    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Set the quality level to use when rescaling
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);

    // CGContextDrawImage draws upside down in UIKit coordinates, so flip vertically
    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height);
    CGContextConcatCTM(context, flipVertical);

    // Draw into the context; this scales the image
    CGContextDrawImage(context, newRect, imageRef);

    // Get the resized image from the context as a UIImage
    CGImageRef newImageRef = CGBitmapContextCreateImage(context);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
    CGImageRelease(newImageRef);

    UIGraphicsEndImageContext();
    return newImage;
}
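Note that the method above reduces the pixel dimensions; it does not touch the DPI tag stored in the exported file. If the goal is literally to mark the file as 72 DPI, ImageIO can write that metadata directly. A minimal sketch (the function name is made up for illustration):

#import <ImageIO/ImageIO.h>
#import <MobileCoreServices/MobileCoreServices.h>

NSData *jpegDataWithDPI(UIImage *image, CGFloat dpi) {
    NSMutableData *data = [NSMutableData data];
    CGImageDestinationRef destination =
        CGImageDestinationCreateWithData((__bridge CFMutableDataRef)data, kUTTypeJPEG, 1, NULL);
    if (!destination) return nil;
    // kCGImagePropertyDPIWidth/Height set the resolution tag in the output file
    NSDictionary *properties = @{ (id)kCGImagePropertyDPIWidth  : @(dpi),
                                  (id)kCGImagePropertyDPIHeight : @(dpi) };
    CGImageDestinationAddImage(destination, image.CGImage, (__bridge CFDictionaryRef)properties);
    CGImageDestinationFinalize(destination);
    CFRelease(destination);
    return data;
}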

UIImage Black and White with Transparency

Below is code for converting an image to black and white. It works fine unless an image with transparency comes in: the transparent area is converted to black. Please help with what is wrong here.
+ (UIImage *)getBlackAndWhiteVersionOfImage:(UIImage *)anImage
{
    UIImage *newImage;
    UIImage *imageToDisplay = nil;
    UIImageOrientation orientation = anImage.imageOrientation;
    if (anImage) {
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
        CGContextRef context = CGBitmapContextCreate(nil,
                                                     anImage.size.width * anImage.scale,
                                                     anImage.size.height * anImage.scale,
                                                     8,
                                                     anImage.size.width * anImage.scale,
                                                     colorSpace,
                                                     kCGImageAlphaNone);
        CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
        CGContextSetShouldAntialias(context, NO);
        CGContextDrawImage(context, CGRectMake(0, 0, anImage.size.width, anImage.size.height), [anImage CGImage]);
        CGImageRef bwImage = CGBitmapContextCreateImage(context);
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);

        UIImage *resultImage = [UIImage imageWithCGImage:bwImage];
        CGImageRelease(bwImage);

        UIGraphicsBeginImageContextWithOptions(anImage.size, NO, anImage.scale);
        [resultImage drawInRect:CGRectMake(0.0, 0.0, anImage.size.width, anImage.size.height)];
        newImage = UIGraphicsGetImageFromCurrentImageContext();
        imageToDisplay = [UIImage imageWithCGImage:[newImage CGImage]
                                             scale:1.0
                                       orientation:orientation];
        UIGraphicsEndImageContext();
    }
    return imageToDisplay;
}
I don't think the gray color space has an alpha component: kCGImageAlphaNone discards transparency, so transparent pixels come out black.
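One way to keep the transparency (a sketch that reuses the method above): render the opaque grayscale version first, then re-apply the original image's alpha channel with a destination-in blend:

+ (UIImage *)blackAndWhiteImagePreservingAlpha:(UIImage *)anImage
{
    CGRect rect = CGRectMake(0, 0, anImage.size.width, anImage.size.height);
    UIGraphicsBeginImageContextWithOptions(anImage.size, NO, anImage.scale);

    // Draw the (opaque) grayscale version first
    [[self getBlackAndWhiteVersionOfImage:anImage] drawInRect:rect];

    // Destination-in keeps already-drawn pixels only where the original
    // image is opaque, restoring the original transparency
    [anImage drawInRect:rect blendMode:kCGBlendModeDestinationIn alpha:1.0];

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}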

How to know if a UIImage is representable in PNG or JPG?

I got a UIImage from UIImagePickerController and used the code from this site to resize the image:
- (UIImage *)resizedImage:(CGSize)newSize
                transform:(CGAffineTransform)transform
           drawTransposed:(BOOL)transpose
     interpolationQuality:(CGInterpolationQuality)quality {
    CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
    CGRect transposedRect = CGRectMake(0, 0, newRect.size.height, newRect.size.width);
    CGImageRef imageRef = self.CGImage;

    // Build a context that's the same dimensions as the new size
    CGContextRef bitmap = CGBitmapContextCreate(NULL,
                                                newRect.size.width,
                                                newRect.size.height,
                                                CGImageGetBitsPerComponent(imageRef),
                                                0,
                                                CGImageGetColorSpace(imageRef),
                                                CGImageGetBitmapInfo(imageRef));

    // Rotate and/or flip the image if required by its orientation
    CGContextConcatCTM(bitmap, transform);

    // Set the quality level to use when rescaling
    CGContextSetInterpolationQuality(bitmap, quality);

    // Draw into the context; this scales the image
    CGContextDrawImage(bitmap, transpose ? transposedRect : newRect, imageRef);

    // Get the resized image from the context as a UIImage
    CGImageRef newImageRef = CGBitmapContextCreateImage(bitmap);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];

    // Clean up
    CGContextRelease(bitmap);
    CGImageRelease(newImageRef);

    return newImage;
}
UIImagePNGRepresentation() failed to return NSData for the re-sized image, but UIImageJPEGRepresentation() succeeded.
How do we know if a UIImage is representable as PNG or JPEG? What is missing in the above code that makes the resized image unrepresentable as PNG?
According to Apple's documentation: "This function may return nil if the image has no data or if the underlying CGImageRef contains data in an unsupported bitmap format."
Which bitmap formats are supported by the PNG representation? How do I get a UIImage into a PNG-supported format?
It was a mistake: in another part of the code the image was rescaled with the following:
CGContextRef context = CGBitmapContextCreate(NULL,
                                             size.width,
                                             size.height,
                                             8,
                                             0,
                                             CGImageGetColorSpace(source),
                                             kCGImageAlphaNoneSkipFirst);
Changing kCGImageAlphaNoneSkipFirst to CGImageGetBitmapInfo(source) fixed the problem.
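More generally, you can inspect the underlying CGImage's alpha info to decide which representation fits; this doesn't guarantee UIImagePNGRepresentation will succeed, but it covers the common alpha/no-alpha split (a sketch):

CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(image.CGImage);
BOOL hasAlpha = (alphaInfo == kCGImageAlphaFirst ||
                 alphaInfo == kCGImageAlphaLast ||
                 alphaInfo == kCGImageAlphaPremultipliedFirst ||
                 alphaInfo == kCGImageAlphaPremultipliedLast);
// Prefer PNG when alpha must survive; JPEG has no alpha channel
NSData *data = hasAlpha ? UIImagePNGRepresentation(image)
                        : UIImageJPEGRepresentation(image, 0.9f);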
Have a look at the following link; it may help:
How to check if downloaded PNG image is corrupt?

Why do I lose UIImage quality when I want to color it

I'm currently using this code to tint a white UIImage with a desired color, but after processing, the image (embedded in a UIImageView at the original image's size) has lost quality.
+ (UIImage *)imageNamed:(NSString *)name withColor:(UIColor *)theColor
{
    UIImage *baseImage = [UIImage imageNamed:name];
    UIGraphicsBeginImageContext(baseImage.size);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGRect area = CGRectMake(0, 0, baseImage.size.width, baseImage.size.height);

    // Flip the coordinate system for Core Graphics drawing
    CGContextScaleCTM(ctx, 1, -1);
    CGContextTranslateCTM(ctx, 0, -area.size.height);

    CGContextSaveGState(ctx);
    CGContextClipToMask(ctx, area, baseImage.CGImage);
    [theColor set];
    CGContextFillRect(ctx, area);
    CGContextRestoreGState(ctx);

    CGContextSetBlendMode(ctx, kCGBlendModeMultiply);
    CGContextDrawImage(ctx, area, baseImage.CGImage);

    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Any suggestions on how to fix this issue?
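A likely cause, given the scale behavior discussed earlier in this thread: UIGraphicsBeginImageContext always creates a context at scale 1.0, so on a Retina device the output has half the pixel density of the source image. A sketch of the fix, replacing the context setup in the method above:

UIImage *baseImage = [UIImage imageNamed:name];
// Passing 0 as the scale means "use the device's scale factor",
// so the context renders at full Retina resolution instead of 1x
UIGraphicsBeginImageContextWithOptions(baseImage.size, NO, 0);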

UIImageJPEGRepresentation giving 2x images on retina display

I have this code, which creates an image, and then adds some effects to it and sizes it down to make largeThumbnail.
UIImage *originalImage = [UIImage imageWithData:self.originalImage];
thumbnail = createLargeThumbnailFromImage(originalImage);
NSLog(#"thumbnail: %f", thumbnail.size.height);
NSData *thumbnailData = UIImageJPEGRepresentation(thumbnail, 1.0);
Later on:
UIImage *image = [UIImage imageWithData:self.largeThumbnail];
NSLog(#"thumbnail 2: %f", image.size.height);
NSLog returns:
thumbnail: 289.000000
thumbnail 2: 578.000000
As you can see, when it converts the image back from data, it makes it 2x the size. Any ideas why this may be happening?
Large thumbnail code:
UIImage *createLargeThumbnailFromImage(UIImage *image) {
    UIImage *resizedImage = [image imageScaledToFitSize:LARGE_THUMBNAIL_SIZE];
    CGRect largeThumbnailRect = CGRectMake(0, 0, resizedImage.size.width, resizedImage.size.height);

    UIGraphicsBeginImageContextWithOptions(resizedImage.size, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Image
    CGContextTranslateCTM(context, 0, resizedImage.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawImage(context, largeThumbnailRect, resizedImage.CGImage);

    // Border
    CGContextSaveGState(context);
    CGRect innerRect = rectForRectWithInset(largeThumbnailRect, 1.5);
    CGMutablePathRef borderPath = createRoundedRectForRect(innerRect, 0);
    CGContextSetStrokeColorWithColor(context, [[UIColor whiteColor] CGColor]);
    CGContextSetLineWidth(context, 3);
    CGContextAddPath(context, borderPath);
    CGContextStrokePath(context);
    CGPathRelease(borderPath); // release the path created above (Create rule)
    CGContextRestoreGState(context);

    UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return thumbnail;
}
Try replacing the part where you load the second image:
UIImage *image = [UIImage imageWithData:self.largeThumbnail];
with this one:
UIImage *jpegImage = [UIImage imageWithData:self.largeThumbnail];
UIImage *image = [UIImage imageWithCGImage:jpegImage.CGImage scale:originalImage.scale orientation:jpegImage.imageOrientation];
What happens here is that the decoded image's scale is never set: JPEG data stores only pixels, so +imageWithData: returns an image with scale 1.0, and a 2x thumbnail of 289 points becomes 578 pixels, reported as 578 points. Re-wrapping the CGImage with the original scale restores the point size.
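If originalImage is not in scope where the data is decoded, the screen scale works too (assuming the thumbnail was rendered with scale 0, i.e. at the device scale):

UIImage *jpegImage = [UIImage imageWithData:self.largeThumbnail];
UIImage *image = [UIImage imageWithCGImage:jpegImage.CGImage
                                     scale:[UIScreen mainScreen].scale
                               orientation:jpegImage.imageOrientation];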