Below is my code for converting an image to black and white. It works fine unless the image has transparency: the transparent area gets converted to black. Can someone help me understand what is wrong here?
+ (UIImage *)getBlackAndWhiteVersionOfImage:(UIImage *)anImage
{
    UIImage *newImage;
    UIImage *imageToDisplay;
    int orientation = anImage.imageOrientation;
    if (anImage) {
        CGColorSpaceRef colorSapce = CGColorSpaceCreateDeviceGray();
        CGContextRef context = CGBitmapContextCreate(nil, anImage.size.width * anImage.scale, anImage.size.height * anImage.scale, 8, anImage.size.width * anImage.scale, colorSapce, kCGImageAlphaNone);
        CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
        CGContextSetShouldAntialias(context, NO);
        CGContextDrawImage(context, CGRectMake(0, 0, anImage.size.width, anImage.size.height), [anImage CGImage]);

        CGImageRef bwImage = CGBitmapContextCreateImage(context);
        CGContextRelease(context);
        CGColorSpaceRelease(colorSapce);

        UIImage *resultImage = [UIImage imageWithCGImage:bwImage];
        CGImageRelease(bwImage);

        UIGraphicsBeginImageContextWithOptions(anImage.size, NO, anImage.scale);
        [resultImage drawInRect:CGRectMake(0.0, 0.0, anImage.size.width, anImage.size.height)];
        newImage = UIGraphicsGetImageFromCurrentImageContext();
        imageToDisplay = [UIImage imageWithCGImage:[newImage CGImage]
                                             scale:1.0
                                       orientation:orientation];
        UIGraphicsEndImageContext();
    }
    return imageToDisplay;
}
I don't think the gray color space has an alpha component. Because the bitmap context is created with kCGImageAlphaNone, the transparent pixels have nowhere to go and come out black.
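One possible workaround, sketched below as a suggestion rather than a drop-in fix: do the grayscale conversion as above, then re-apply the original image's alpha by clipping a new image context to the source image before drawing the grayscale result. The helper name blackAndWhiteImagePreservingAlpha: is hypothetical, and orientation handling is left out for brevity.
+ (UIImage *)blackAndWhiteImagePreservingAlpha:(UIImage *)anImage // hypothetical helper
{
    size_t pixelWidth  = (size_t)(anImage.size.width  * anImage.scale);
    size_t pixelHeight = (size_t)(anImage.size.height * anImage.scale);
    CGRect pointRect = CGRectMake(0, 0, anImage.size.width, anImage.size.height);

    // 1. Grayscale conversion without alpha, as in the question.
    CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
    CGContextRef grayContext = CGBitmapContextCreate(nil, pixelWidth, pixelHeight, 8,
                                                     pixelWidth, graySpace, kCGImageAlphaNone);
    CGContextDrawImage(grayContext, CGRectMake(0, 0, pixelWidth, pixelHeight), anImage.CGImage);
    CGImageRef grayImage = CGBitmapContextCreateImage(grayContext);
    CGContextRelease(grayContext);
    CGColorSpaceRelease(graySpace);

    // 2. Re-apply the original alpha by clipping to the source image's alpha channel
    //    (the same CGContextClipToMask trick used in the tinting snippet below).
    UIGraphicsBeginImageContextWithOptions(anImage.size, NO, anImage.scale);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(ctx, 0, anImage.size.height); // flip into Core Graphics coordinates
    CGContextScaleCTM(ctx, 1, -1);                      // so the mask and the image line up
    CGContextClipToMask(ctx, pointRect, anImage.CGImage);
    CGContextDrawImage(ctx, pointRect, grayImage);
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    CGImageRelease(grayImage);

    // Orientation handling is omitted here; re-wrap the result with
    // imageWithCGImage:scale:orientation: as in the original code if needed.
    return result;
}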
Related
I'm currently using this code to color a white UIImage with a desired color. But after processing, the image (embedded in a UIImageView sized to match the original image) appears to have lost its quality.
+ (UIImage *)imageNamed:(NSString *)name withColor:(UIColor *)theColor
{
    UIImage *baseImage = [UIImage imageNamed:name];
    UIGraphicsBeginImageContext(baseImage.size);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGRect area = CGRectMake(0, 0, baseImage.size.width, baseImage.size.height);

    CGContextScaleCTM(ctx, 1, -1);
    CGContextTranslateCTM(ctx, 0, -area.size.height);

    CGContextSaveGState(ctx);
    CGContextClipToMask(ctx, area, baseImage.CGImage);
    [theColor set];
    CGContextFillRect(ctx, area);
    CGContextRestoreGState(ctx);

    CGContextSetBlendMode(ctx, kCGBlendModeMultiply);
    CGContextDrawImage(ctx, area, baseImage.CGImage);

    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Any suggestions on how to fix this issue?
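A likely cause (just a suggestion, not from the original code): UIGraphicsBeginImageContext always creates a 1x context, so on a retina device the output has half the resolution and looks blurry when displayed. Below is a minimal sketch of the same method using UIGraphicsBeginImageContextWithOptions with a scale of 0.0, which means the device's own scale.
+ (UIImage *)imageNamed:(NSString *)name withColor:(UIColor *)theColor
{
    UIImage *baseImage = [UIImage imageNamed:name];
    CGRect area = CGRectMake(0, 0, baseImage.size.width, baseImage.size.height);

    // NO keeps transparency; 0.0 uses the device's scale, so the
    // output keeps retina resolution instead of dropping to 1x.
    UIGraphicsBeginImageContextWithOptions(baseImage.size, NO, 0.0);
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    CGContextScaleCTM(ctx, 1, -1);
    CGContextTranslateCTM(ctx, 0, -area.size.height);

    CGContextSaveGState(ctx);
    CGContextClipToMask(ctx, area, baseImage.CGImage);
    [theColor set];
    CGContextFillRect(ctx, area);
    CGContextRestoreGState(ctx);

    CGContextSetBlendMode(ctx, kCGBlendModeMultiply);
    CGContextDrawImage(ctx, area, baseImage.CGImage);

    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}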
I am working on an application in which I have an image view with an image, plus an overlay view with some clear holes cut in it, so the background image shows through the holes. My problem is that I want to capture the whole screen (the image view together with this holes view). I am using the code below:
- (UIImage *)captureView:(UIView *)yourView {
    CGRect rect = [[UIScreen mainScreen] bounds];
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [yourView.layer renderInContext:context];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
But it's not working.
This is how we do it:
UIGraphicsBeginImageContextWithOptions(yourView.bounds.size, yourView.opaque, 0.0);
[yourView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *LastImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Try this to take your screenshot:
- (UIImage *)getScreenShot
{
    CGSize screenSize = [[UIScreen mainScreen] applicationFrame].size;
    //CGSize screenSize = CGSizeMake(1024, 768);
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(nil, screenSize.width, screenSize.height, 8,
                                             4 * (int)screenSize.width, colorSpaceRef,
                                             kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpaceRef);
    // Flip the context so the rendered layer is not upside down.
    CGContextTranslateCTM(ctx, 0.0, screenSize.height);
    CGContextScaleCTM(ctx, 1.0, -1.0);
    [self.yourView.layer renderInContext:ctx];
    CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CGContextRelease(ctx);
    return image;
}
In my application, I want to do the following steps:
1 - Capture the screen. This part is no problem for me; I'm using the following code:
- (UIImage *)captureScreen {
    UIGraphicsBeginImageContextWithOptions(self.view.frame.size, YES, 0.0f);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
2 - I crop the image with this function:
- (UIImage *)cropImage:(UIImage *)image inRect:(CGRect)rect {
    // Note: CGImageCreateWithImageInRect works in pixels, so for a retina
    // screenshot the rect may need to be multiplied by image.scale.
    CGImageRef imageRef = CGImageCreateWithImageInRect(image.CGImage, rect);
    UIImage *resultImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return resultImage;
}
3 - Then I mask the cropped image with a pure black and white mask
- (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage {
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef maskedRef = CGImageCreateWithMask([image CGImage], mask);
    UIImage *resultImage = [UIImage imageWithCGImage:maskedRef];
    CGImageRelease(mask);
    CGImageRelease(maskedRef);
    return resultImage;
}
However, in the resulting image the area outside the mask shape is black instead of transparent. Can anybody help me?
This works for me. Hope it will work for you too.
- (UIImage *)doImageMask:(UIImage *)mainImage :(UIImage *)maskImage {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef maskImageRef = [maskImage CGImage];

    // create a bitmap graphics context the size of the mask image
    CGContextRef mainViewContentContext = CGBitmapContextCreate(NULL, maskImage.size.width, maskImage.size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (mainViewContentContext == NULL) {
        return NULL;
    }

    // scale the main image so it fills the mask (aspect fill)
    CGFloat ratio = maskImage.size.width / mainImage.size.width;
    if (ratio * mainImage.size.height < maskImage.size.height) {
        ratio = maskImage.size.height / mainImage.size.height;
    }

    CGRect rect1 = {{0, 0}, {maskImage.size.width, maskImage.size.height}};
    CGRect rect2 = {{-((mainImage.size.width * ratio) - maskImage.size.width) / 2, -((mainImage.size.height * ratio) - maskImage.size.height) / 2}, {mainImage.size.width * ratio, mainImage.size.height * ratio}};

    CGContextClipToMask(mainViewContentContext, rect1, maskImageRef);
    CGContextDrawImage(mainViewContentContext, rect2, mainImage.CGImage);

    // Create a CGImageRef from the bitmap context, then release the context
    CGImageRef newImage = CGBitmapContextCreateImage(mainViewContentContext);
    CGContextRelease(mainViewContentContext);

    UIImage *theImage = [UIImage imageWithCGImage:newImage];
    CGImageRelease(newImage);

    // return the image
    return theImage;
}
I solved my problem. It was caused by the image to be masked not having an alpha channel, so before masking I create another UIImage with an alpha channel and then continue with my steps.
This is the code for creating a UIImage with an alpha channel (an instance method on UIImage):
- (UIImage *)imageWithAlpha {
    CGImageRef imageRef = self.CGImage;
    CGFloat width = CGImageGetWidth(imageRef);
    CGFloat height = CGImageGetHeight(imageRef);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(nil, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);

    CGImageRef resultImageRef = CGBitmapContextCreateImage(context);
    UIImage *resultImage = [UIImage imageWithCGImage:resultImageRef scale:self.scale orientation:self.imageOrientation];

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(resultImageRef);
    return resultImage;
}
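For reference, the whole pipeline might be wired together like this minimal sketch, where the mask file name and the crop rect are hypothetical placeholders:
// Capture, add alpha, crop, then mask (sketch only; "mask.png" and the rect are placeholders).
UIImage *screenshot = [self captureScreen];
UIImage *screenshotWithAlpha = [screenshot imageWithAlpha];   // UIImage category method above
UIImage *cropped = [self cropImage:screenshotWithAlpha inRect:CGRectMake(0, 0, 200, 200)];
UIImage *masked = [self maskImage:cropped withMask:[UIImage imageNamed:@"mask.png"]];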
So I have this code to create a UIImage:
UIGraphicsBeginImageContextWithOptions(border.frame.size, YES, 0);
[border.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *thumbnailImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
At this point, the size on the image is correct, 80x100.
Then it runs this code:
NSData *fullImageData = UIImageJPEGRepresentation(image, 1.0f);
And the NSData of the image comes out at 160x200, twice the size it should be.
It became clear that the reason for this is the line:
UIGraphicsBeginImageContextWithOptions(border.frame.size, YES, 0);
The 0 at the end is the scale, and because it's 0 it uses the device's scale factor. I keep it this way to maintain a sharp image. However, when I set the scale to 1, although the image stays the size it should be, it doesn't come out in retina quality. What I want is to keep the retina quality but also keep the correct size. Is there a way to do this?
Try resizing the UIImage before calling UIImageJPEGRepresentation
- (UIImage *)resizeImage:(UIImage *)image newSize:(CGSize)newSize {
    CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
    CGImageRef imageRef = image.CGImage;

    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Set the quality level to use when rescaling
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height);
    CGContextConcatCTM(context, flipVertical);

    // Draw into the context; this scales the image
    CGContextDrawImage(context, newRect, imageRef);

    // Get the resized image from the context and make a UIImage
    CGImageRef newImageRef = CGBitmapContextCreateImage(context);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];

    CGImageRelease(newImageRef);
    UIGraphicsEndImageContext();
    return newImage;
}
if ([UIScreen mainScreen].scale > 1)
{
    thumbnailImage = [self resizeImage:thumbnailImage
                               newSize:CGSizeMake(thumbnailImage.size.width / [UIScreen mainScreen].scale,
                                                  thumbnailImage.size.height / [UIScreen mainScreen].scale)];
}
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
I have another Quartz 2D (iPhone) question:
How can I change all the colors of an image view's image to black and white and leave only the red?
DD
UIImage *originalImage = [info objectForKey:UIImagePickerControllerOriginalImage]; // this image comes from UIImagePickerController
CGColorSpaceRef colorSapce = CGColorSpaceCreateDeviceGray();
CGContextRef context = CGBitmapContextCreate(nil, originalImage.size.width, originalImage.size.height, 8, originalImage.size.width, colorSapce, kCGImageAlphaNone);
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
CGContextSetShouldAntialias(context, NO);
CGContextDrawImage(context, CGRectMake(0, 0, originalImage.size.width, originalImage.size.height), [originalImage CGImage]);
CGImageRef bwImage = CGBitmapContextCreateImage(context);
CGContextRelease(context);
CGColorSpaceRelease(colorSapce);
UIImage *resultImage = [UIImage imageWithCGImage:bwImage]; // This is result B/W image.
CGImageRelease(bwImage);
This is something that I found on other forums, and it seems to work!
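The snippet above only produces a fully grayscale image; keeping the red areas requires per-pixel work. Below is a minimal sketch of one approach, where the helper name, the red-detection thresholds, and the luminance weights are my own assumptions rather than anything from the original forum post:
- (UIImage *)desaturateImageKeepingRed:(UIImage *)sourceImage // hypothetical helper
{
    CGImageRef imageRef = sourceImage.CGImage;
    size_t width  = CGImageGetWidth(imageRef);
    size_t height = CGImageGetHeight(imageRef);

    // Draw the image into an RGBA bitmap whose pixels we can read and modify.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                                 colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);

    uint8_t *pixels = CGBitmapContextGetData(context);
    for (size_t i = 0; i < width * height; i++) {
        uint8_t *p = &pixels[i * 4];   // p[0]=R, p[1]=G, p[2]=B, p[3]=A
        uint8_t r = p[0], g = p[1], b = p[2];
        // Crude "is this pixel red?" test; tune the thresholds for your images.
        BOOL isRed = (r > 100) && (r > g + 40) && (r > b + 40);
        if (!isRed) {
            uint8_t gray = (uint8_t)(0.299 * r + 0.587 * g + 0.114 * b); // luminance
            p[0] = p[1] = p[2] = gray;
        }
    }

    CGImageRef resultRef = CGBitmapContextCreateImage(context);
    UIImage *result = [UIImage imageWithCGImage:resultRef
                                          scale:sourceImage.scale
                                    orientation:sourceImage.imageOrientation];
    CGImageRelease(resultRef);
    CGContextRelease(context);
    return result;
}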