Why do I lose UIImage quality when I color it - iPhone

I'm currently using this code to tint a white UIImage with a desired color. But after processing, the image (embedded in a UIImageView sized to match the original image) loses quality.
+ (UIImage *)imageNamed:(NSString *)name withColor:(UIColor *)theColor
{
    UIImage *baseImage = [UIImage imageNamed:name];
    UIGraphicsBeginImageContext(baseImage.size);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGRect area = CGRectMake(0, 0, baseImage.size.width, baseImage.size.height);
    CGContextScaleCTM(ctx, 1, -1);
    CGContextTranslateCTM(ctx, 0, -area.size.height);
    CGContextSaveGState(ctx);
    CGContextClipToMask(ctx, area, baseImage.CGImage);
    [theColor set];
    CGContextFillRect(ctx, area);
    CGContextRestoreGState(ctx);
    CGContextSetBlendMode(ctx, kCGBlendModeMultiply);
    CGContextDrawImage(ctx, area, baseImage.CGImage);
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Any suggestions on how to fix this issue?
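A likely fix, assuming the quality loss comes from the context's scale: UIGraphicsBeginImageContext always creates a 1.0-scale context, so on a Retina device the @2x pixels are discarded. Opening the context with the image's own scale should preserve them, leaving the rest of the drawing code unchanged:
UIImage *baseImage = [UIImage imageNamed:name];
// NO = keep transparency; baseImage.scale preserves the @2x pixels of a
// Retina image (passing 0.0 would use the main screen's scale instead).
UIGraphicsBeginImageContextWithOptions(baseImage.size, NO, baseImage.scale);
// ... the rest of the drawing code stays exactly as above ...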

Related

UIImage Black and White with Transparency

Below is code for converting an image to black and white. It works fine unless the image has transparency: the transparent area is converted to black. Please help me figure out what is wrong here.
+ (UIImage *)getBlackAndWhiteVersionOfImage:(UIImage *)anImage
{
    UIImage *newImage;
    UIImage *imageToDisplay = nil;
    UIImageOrientation orientation = anImage.imageOrientation;
    if (anImage) {
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
        CGContextRef context = CGBitmapContextCreate(nil, anImage.size.width * anImage.scale, anImage.size.height * anImage.scale, 8, anImage.size.width * anImage.scale, colorSpace, kCGImageAlphaNone);
        CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
        CGContextSetShouldAntialias(context, NO);
        // Draw into the full pixel extent of the context (its size is in pixels, not points)
        CGContextDrawImage(context, CGRectMake(0, 0, anImage.size.width * anImage.scale, anImage.size.height * anImage.scale), [anImage CGImage]);
        CGImageRef bwImage = CGBitmapContextCreateImage(context);
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);
        UIImage *resultImage = [UIImage imageWithCGImage:bwImage];
        CGImageRelease(bwImage);
        UIGraphicsBeginImageContextWithOptions(anImage.size, NO, anImage.scale);
        [resultImage drawInRect:CGRectMake(0.0, 0.0, anImage.size.width, anImage.size.height)];
        newImage = UIGraphicsGetImageFromCurrentImageContext();
        imageToDisplay = [UIImage imageWithCGImage:[newImage CGImage]
                                             scale:1.0
                                       orientation:orientation];
        UIGraphicsEndImageContext();
    }
    return imageToDisplay;
}
I don't think the gray color space has an alpha component, so kCGImageAlphaNone flattens the transparent pixels onto the context's black background.
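One way around that (a sketch, not tested against the exact code above): produce the grayscale bwImage as before, then re-apply the original's alpha by clipping to it - CGContextClipToMask uses an image's alpha channel when handed a regular image, the same trick as the tinting code in the first question:
// Assumes bwImage is the grayscale CGImage produced above and anImage
// is the original UIImage whose transparency we want to keep.
UIGraphicsBeginImageContextWithOptions(anImage.size, NO, anImage.scale);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGRect area = CGRectMake(0, 0, anImage.size.width, anImage.size.height);
// Flip into Core Graphics coordinates so the mask and image line up
CGContextTranslateCTM(ctx, 0, area.size.height);
CGContextScaleCTM(ctx, 1, -1);
// Clip to the original's alpha channel, then draw the grayscale image through it
CGContextClipToMask(ctx, area, anImage.CGImage);
CGContextDrawImage(ctx, area, bwImage);
UIImage *transparentBW = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();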

How to combine UIImage and UILabel into one image and save

I have two UILabels and two images that I need to merge into a single UIImage to save.
I know I could do it with a screenshot, but my main image has rounded corners, so a rectangular capture would still show the sharp edges.
I can do this to combine the images:
//CGSize newImageSize = CGSizeMake(cropImage.frame.size.width, cropImage.frame.size.height);
CGSize newImageSize = CGSizeMake(480, 320);
NSLog(@"CGSize %@", NSStringFromCGSize(newImageSize));
UIGraphicsBeginImageContextWithOptions(newImageSize, NO, 0.0); // 0.0 = device scale, retina res
[self.viewForImg.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
NSData *imgData = UIImageJPEGRepresentation(image, 0.9); // or UIImagePNGRepresentation(image)
UIImage *imagePNG = [UIImage imageWithData:imgData]; // wrap a UIImage around the JPEG data
UIGraphicsEndImageContext();
return imagePNG;
but I'm not sure how to add in the UILabels.
Any reply is much appreciated.
Use [myLabel.layer renderInContext:UIGraphicsGetCurrentContext()]; to draw the label into the current context.
For example:
UIGraphicsBeginImageContextWithOptions(newImageSize, NO, 0.0); //retina res
[self.viewForImg.layer renderInContext:UIGraphicsGetCurrentContext()];
[myLabel.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
Based on your comments, if you want to draw the label in a particular frame, do it as follows:
[myLabel drawTextInRect:CGRectMake(0.0f, 0.0f, 100.0f, 50.0f)];
If you want to color the background, try this:
CGRect drawRect = CGRectMake(rect.origin.x, rect.origin.y,rect.size.width, rect.size.height);
CGContextSetRGBFillColor(context, 100.0f/255.0f, 100.0f/255.0f, 100.0f/255.0f, 1.0f);
CGContextFillRect(context, drawRect);
Or you can check this question: Setting A CGContext Transparent Background. Another option is snapshotting the view with optional edge insets:
UIEdgeInsets insets = UIEdgeInsetsMake(1, 1, 1, 1);
CGSize imageSizeWithBorder = CGSizeMake(view.frame.size.width + insets.left + insets.right, view.frame.size.height + insets.top + insets.bottom);
UIGraphicsBeginImageContextWithOptions(imageSizeWithBorder, UIEdgeInsetsEqualToEdgeInsets(insets, UIEdgeInsetsZero), 0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextClipToRect(context, (CGRect){{insets.left, insets.top}, view.frame.size});
CGContextTranslateCTM(context, -view.frame.origin.x + insets.left, -view.frame.origin.y + insets.top);
[view.layer renderInContext:context];
UIImage *viewCopy = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Try this!
UIGraphicsBeginImageContextWithOptions(newImageSize, NO, 0.0); // 0.0 = device scale, retina res
[COGI.layer renderInContext:UIGraphicsGetCurrentContext()];
[COGI.image drawInRect:CGRectMake(0, 0, 248, 290)];
[iconI.image drawInRect:CGRectMake(4, 20, 240, 240)];
[stampI.image drawInRect:CGRectMake(0, -5, 248, 290)];
[headerL drawTextInRect:CGRectMake(14, 35, 220, 40)];
[detailL drawTextInRect:CGRectMake(16, 200, 215, 65)];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
NSData *imgData = UIImageJPEGRepresentation(image, 1.0); // or UIImagePNGRepresentation(image)
UIImage *imagePNG = [UIImage imageWithData:imgData]; // wrap a UIImage around the JPEG data
UIGraphicsEndImageContext();
return imagePNG;
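Putting the pieces together, a minimal sketch of the whole flow; it reuses viewForImg, headerL and detailL from the snippets above, so adjust frames to your layout. Note that renderInContext: already draws a layer's sublayers, so labels that are subviews of viewForImg are captured by the first call on their own:
- (UIImage *)mergedImage
{
    CGSize newImageSize = self.viewForImg.bounds.size;
    UIGraphicsBeginImageContextWithOptions(newImageSize, NO, 0.0); // 0.0 = device scale
    // Draw the base view (including its rounded-corner image) into the context
    [self.viewForImg.layer renderInContext:UIGraphicsGetCurrentContext()];
    // Draw each label at the position it should occupy in the merged image
    [headerL drawTextInRect:headerL.frame];
    [detailL drawTextInRect:detailL.frame];
    UIImage *merged = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return merged;
}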

UIImageJPEGRepresentation giving 2x images on retina display

I have this code, which creates an image, and then adds some effects to it and sizes it down to make largeThumbnail.
UIImage *originalImage = [UIImage imageWithData:self.originalImage];
thumbnail = createLargeThumbnailFromImage(originalImage);
NSLog(@"thumbnail: %f", thumbnail.size.height);
NSData *thumbnailData = UIImageJPEGRepresentation(thumbnail, 1.0);
Later on:
UIImage *image = [UIImage imageWithData:self.largeThumbnail];
NSLog(@"thumbnail 2: %f", image.size.height);
NSLog returns:
thumbnail: 289.000000
thumbnail 2: 578.000000
As you can see, when the image is converted back from data, it comes out at twice the size. Any ideas why this is happening?
Large thumbnail code:
UIImage *createLargeThumbnailFromImage(UIImage *image) {
    UIImage *resizedImage = [image imageScaledToFitSize:LARGE_THUMBNAIL_SIZE];
    CGRect largeThumbnailRect = CGRectMake(0, 0, resizedImage.size.width, resizedImage.size.height);
    UIGraphicsBeginImageContextWithOptions(resizedImage.size, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Image
    CGContextTranslateCTM(context, 0, resizedImage.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawImage(context, largeThumbnailRect, resizedImage.CGImage);
    // Border
    CGContextSaveGState(context);
    CGRect innerRect = rectForRectWithInset(largeThumbnailRect, 1.5);
    CGMutablePathRef borderPath = createRoundedRectForRect(innerRect, 0);
    CGContextSetStrokeColorWithColor(context, [[UIColor whiteColor] CGColor]);
    CGContextSetLineWidth(context, 3);
    CGContextAddPath(context, borderPath);
    CGContextStrokePath(context);
    CGPathRelease(borderPath); // the created path must be released
    CGContextRestoreGState(context);
    UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return thumbnail;
}
Try replacing the part where you load the second image:
UIImage *image = [UIImage imageWithData:self.largeThumbnail];
with this one:
UIImage *jpegImage = [UIImage imageWithData:self.largeThumbnail];
UIImage *image = [UIImage imageWithCGImage:jpegImage.CGImage scale:originalImage.scale orientation:jpegImage.imageOrientation];
What happens here is that JPEG data carries no scale information, so +imageWithData: defaults to scale 1.0, and a @2x image comes back with doubled point dimensions; re-wrapping the CGImage with the correct scale fixes it.
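To make the relationship concrete (a hypothetical check, assuming an @2x device; point size is pixel size divided by scale):
UIImage *raw = [UIImage imageWithData:self.largeThumbnail];
NSLog(@"raw: %f points at scale %f", raw.size.height, raw.scale);       // 578.000000 at 1.000000
UIImage *fixed = [UIImage imageWithCGImage:raw.CGImage
                                     scale:2.0
                               orientation:raw.imageOrientation];
NSLog(@"fixed: %f points at scale %f", fixed.size.height, fixed.scale); // 289.000000 at 2.000000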

UIGraphicsBeginImageContextWithOptions and UIImageJPEGRepresentation not working well together

So I have this code to create a UIImage:
UIGraphicsBeginImageContextWithOptions(border.frame.size, YES, 0);
[border.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *thumbnailImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
At this point, the size on the image is correct, 80x100.
Then it runs this code:
NSData *fullImageData = UIImageJPEGRepresentation(thumbnailImage, 1.0f);
And the NSData of the image comes back as an image at 160x200 - twice as large as it should be.
It became clear the reason for this is the line:
UIGraphicsBeginImageContextWithOptions(border.frame.size, YES, 0);
The 0 at the end is the scale; because it's 0, the context uses the device's scale factor. I keep it this way to maintain a sharp image. However, when I set the scale to 1, the image stays the size it should be, but it doesn't come out in retina quality. What I want is to keep the retina quality but also keep the right size. Is there a way to do this?
Try resizing the UIImage before calling UIImageJPEGRepresentation:
- (UIImage *)resizeImage:(UIImage *)image newSize:(CGSize)newSize {
    CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
    CGImageRef imageRef = image.CGImage;
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Set the quality level to use when rescaling
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height);
    CGContextConcatCTM(context, flipVertical);
    // Draw into the context; this scales the image
    CGContextDrawImage(context, newRect, imageRef);
    // Get the resized image from the context as a UIImage
    CGImageRef newImageRef = CGBitmapContextCreateImage(context);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
    CGImageRelease(newImageRef);
    UIGraphicsEndImageContext();
    return newImage;
}
if ([UIScreen mainScreen].scale > 1) {
    thumbnailImage = [self resizeImage:thumbnailImage
                               newSize:CGSizeMake(thumbnailImage.size.width / [UIScreen mainScreen].scale,
                                                  thumbnailImage.size.height / [UIScreen mainScreen].scale)];
}
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

CGContextAddEllipseInRect to CGImageRef to CGImageMaskCreate to CGContextClipToMask

I haven't been able to find a single example on the internet that teaches me how to create a circle on the fly and then use this circle to clip a UIImage.
Here's my code; unfortunately it doesn't give me the desired results.
//create a graphics context
UIGraphicsBeginImageContext(CGSizeMake(243, 243));
CGContextRef context = UIGraphicsGetCurrentContext();
//create my object in this context
CGContextAddEllipseInRect(context, CGRectMake(0, 0, 243, 243));
CGContextSetFillColor(context, CGColorGetComponents([[UIColor whiteColor] CGColor]));
CGContextFillPath(context);
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
//create an uiimage from the ellipse
//Get the drawing image
CGImageRef maskImage = [image CGImage];
// Get the mask from the image
CGImageRef maskRef = CGImageMaskCreate(CGImageGetWidth(maskImage)
, CGImageGetHeight(maskImage)
, CGImageGetBitsPerComponent(maskImage)
, CGImageGetBitsPerPixel(maskImage)
, CGImageGetBytesPerRow(maskImage)
, CGImageGetDataProvider(maskImage)
, NULL
, false);
//finally clip the context to the mask.
CGContextClipToMask( context , CGRectMake(0, 0, 243, 243) , maskRef );
//draw the image
[firstPieceView.image drawInRect:CGRectMake(0, 0, 320, 480)];
// [firstPieceView drawRect:CGRectMake(0, 0, 320, 480)];
//extract a new image
UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
NSLog(@"self.firstPieceView is %@", NSStringFromCGRect(self.firstPieceView.frame));
UIGraphicsEndImageContext();
self.firstPieceView.image = outputImage;
I would appreciate any directions.
I suspect you need to rephrase your question better.
There's plenty of example code for whatever you're trying to do out there.
Here's how you could implement a custom UIView subclass to clip an image to an ellipse:
- (void)drawRect:(CGRect)rect {
    UIImage *image = nil; // set/get from somewhere
    CGImageRef imageRef = [image CGImage];
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextAddEllipseInRect(context, self.bounds);
    CGContextClip(context);
    CGContextDrawImage(context, self.bounds, imageRef);
}
caveat emptor
Edit (a day later, free time produces):
- (void)drawRect:(CGRect)rect {
    // we're ignoring rect and drawing the whole view
    CGImageRef imageRef = [_image CGImage]; // ivar: UIImage *_image;
    CGContextRef context = UIGraphicsGetCurrentContext();
    // set the background to black
    [[UIColor blackColor] setFill];
    CGContextFillRect(context, self.bounds);
    // modify the context coordinates:
    // UIKit and Core Graphics are oriented differently
    CGContextSaveGState(context);
    CGContextTranslateCTM(context, 0, CGRectGetHeight(rect));
    CGContextScaleCTM(context, 1, -1);
    // add the clipping path to the context, then execute the clip;
    // this is in effect for all drawing until the GState is restored
    CGContextAddEllipseInRect(context, self.bounds);
    CGContextClip(context);
    // stretch the image to be the size of the view
    CGContextDrawImage(context, self.bounds, imageRef);
    CGContextRestoreGState(context);
}
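If the goal is a clipped UIImage rather than a custom view (as in the original question), the same clipping approach works inside an image context. A sketch, with firstPieceView.image standing in for whatever image is being clipped:
UIImage *sourceImage = firstPieceView.image;
CGRect circleRect = CGRectMake(0, 0, 243, 243);
UIGraphicsBeginImageContextWithOptions(circleRect.size, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
// Clip all subsequent drawing to the ellipse
CGContextAddEllipseInRect(context, circleRect);
CGContextClip(context);
// drawInRect: handles UIKit's flipped coordinates, so no manual CTM flip is needed
[sourceImage drawInRect:circleRect];
UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();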