iOS - Cannot process image using CIFilter - iPhone

I am trying to process an image using Core Image. I have created a UIImage category to do it.
I have added the QuartzCore and CoreImage frameworks to the project, imported CoreImage/CoreImage.h, and used this code:
CIImage *inputImage = self.CIImage;
CIFilter *exposureAdjustmentFilter = [CIFilter filterWithName:@"CIExposureAdjust"];
[exposureAdjustmentFilter setDefaults];
[exposureAdjustmentFilter setValue:inputImage forKey:@"inputImage"];
[exposureAdjustmentFilter setValue:[NSNumber numberWithFloat:5.0f] forKey:@"inputEV"];
CIImage *outputImage = [exposureAdjustmentFilter valueForKey:@"outputImage"];
CIContext *myContext = [CIContext contextWithOptions:nil];
return [UIImage imageWithCGImage:[myContext createCGImage:outputImage fromRect:outputImage.extent]];
But I get a nil output image from the filter.
I have also tried CIHueAdjust, with the same result.
Thank you in advance.
UPDATE: I have found the solution. It was necessary to alloc a new CIImage rather than just pass the reference from UIImage.CIImage, like this:
CIImage *inputImage = [[CIImage alloc] initWithImage:self];
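For context, a minimal sketch of what the full category method might look like with that fix applied (the method name and the EV value are placeholders, not part of the original question):
// Hypothetical UIImage category method illustrating the fix.
- (UIImage *)imageByAdjustingExposure {
    CIImage *inputImage = [[CIImage alloc] initWithImage:self]; // alloc a new CIImage instead of reading self.CIImage
    CIFilter *filter = [CIFilter filterWithName:@"CIExposureAdjust"];
    [filter setDefaults];
    [filter setValue:inputImage forKey:kCIInputImageKey];
    [filter setValue:[NSNumber numberWithFloat:5.0f] forKey:@"inputEV"];
    CIImage *outputImage = filter.outputImage;
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [context createCGImage:outputImage fromRect:outputImage.extent];
    UIImage *result = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage); // createCGImage returns a +1 reference
    return result;
}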

Try the following code:
CIImage *inputImage = [[CIImage alloc] initWithImage:[UIImage imageNamed:@"old-country-rain.jpg"]];
CIFilter *controlsFilter = [CIFilter filterWithName:@"CIExposureAdjust"];
[controlsFilter setValue:inputImage forKey:kCIInputImageKey];
[controlsFilter setValue:[NSNumber numberWithFloat:2.0f] forKey:@"inputEV"];
NSLog(@"%@", controlsFilter.attributes);
CIImage *displayImage = controlsFilter.outputImage;
UIImage *finalImage = [UIImage imageWithCIImage:displayImage];
CIContext *context = [CIContext contextWithOptions:nil];
if (displayImage == nil || finalImage == nil) {
    // We did not get an output image. Display the original image itself.
    imageView.image = [UIImage imageNamed:@"old-country-rain.jpg"];
} else {
    // We got an output image. Display it.
    imageView.image = [UIImage imageWithCGImage:[context createCGImage:displayImage fromRect:displayImage.extent]];
}

Related

Can't detect faces in images captured by the iPhone camera

I have a problem. I use two images: one downloaded from the internet, the other captured by the iPhone camera.
I use CIDetector to detect faces in both images. It works perfectly on the image downloaded from the internet, but on the other it either can't detect a face or detects it in the wrong place.
I have checked many images, and the result is the same.
Try this
NSDictionary *options = [NSDictionary dictionaryWithObject:CIDetectorAccuracyLow forKey:CIDetectorAccuracy];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:options];
CIImage *ciImage = [CIImage imageWithCGImage:[image CGImage]];
NSNumber *orientation = [NSNumber numberWithInt:[image imageOrientation]+1];
NSDictionary *fOptions = [NSDictionary dictionaryWithObject:orientation forKey:CIDetectorImageOrientation];
NSArray *features = [detector featuresInImage:ciImage options:fOptions];
for (CIFaceFeature *f in features) {
    NSLog(@"left eye found: %@", (f.hasLeftEyePosition ? @"YES" : @"NO"));
    NSLog(@"right eye found: %@", (f.hasRightEyePosition ? @"YES" : @"NO"));
    NSLog(@"mouth found: %@", (f.hasMouthPosition ? @"YES" : @"NO"));
    if (f.hasLeftEyePosition)
        NSLog(@"left eye position x = %f , y = %f", f.leftEyePosition.x, f.leftEyePosition.y);
    if (f.hasRightEyePosition)
        NSLog(@"right eye position x = %f , y = %f", f.rightEyePosition.x, f.rightEyePosition.y);
    if (f.hasMouthPosition)
        NSLog(@"mouth position x = %f , y = %f", f.mouthPosition.x, f.mouthPosition.y);
}
If you're always using the front camera in portrait, add this:
NSDictionary* imageOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:6] forKey:CIDetectorImageOrientation];
NSArray* features = [detector featuresInImage:image options:imageOptions];
For more info
sample: https://github.com/beetlebugorg/PictureMe
iOS Face Detection Issue
Face Detection issue using CIDetector
https://stackoverflow.com/questions/4332868/detect-face-in-iphone?rq=1
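As a side note, the `[image imageOrientation]+1` trick above may not line up for every orientation (it happens to match for UIImageOrientationUp). If you want an explicit mapping from UIImageOrientation to the EXIF value that CIDetectorImageOrientation expects, a sketch might look like this (an illustration, not part of the original answer):
// Hedged helper: maps UIImageOrientation to the EXIF/TIFF orientation value
// used by CIDetectorImageOrientation.
static NSNumber *ExifOrientationForUIImageOrientation(UIImageOrientation orientation) {
    switch (orientation) {
        case UIImageOrientationUp:            return @1;
        case UIImageOrientationDown:          return @3;
        case UIImageOrientationLeft:          return @8;
        case UIImageOrientationRight:         return @6;
        case UIImageOrientationUpMirrored:    return @2;
        case UIImageOrientationDownMirrored:  return @4;
        case UIImageOrientationLeftMirrored:  return @5;
        case UIImageOrientationRightMirrored: return @7;
    }
    return @1;
}
// Example use:
// NSDictionary *fOptions = @{CIDetectorImageOrientation: ExifOrientationForUIImageOrientation(image.imageOrientation)};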
I tried the code above. It can detect faces in images captured by the iPhone, but it can't detect them in images downloaded from the internet. This is my code:
NSDictionary *options = [NSDictionary dictionaryWithObject: CIDetectorAccuracyLow forKey: CIDetectorAccuracy];
CIDetector *detector = [CIDetector detectorOfType: CIDetectorTypeFace context: nil options: options];
CIImage *ciImage = [CIImage imageWithCGImage: [facePicture CGImage]];
NSDictionary* imageOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:6] forKey:CIDetectorImageOrientation];
NSArray *features = [detector featuresInImage:ciImage options:imageOptions];
And when it detects a face, I draw it with this code:
for (CIFaceFeature *feature in features) {
    // Set feature color
    CGRect faceRect = [feature bounds];
    CGContextSetRGBFillColor(context, 0.0f, 0.0f, 0.0f, 0.5f);
    CGContextSetStrokeColorWithColor(context, [UIColor whiteColor].CGColor);
    CGContextSetLineWidth(context, 2.0f * scale);
    CGContextAddRect(context, feature.bounds);
    CGContextDrawPath(context, kCGPathFillStroke);
    CGContextDrawImage(context, faceRect, [imgDraw CGImage]);
}
But the position is not right; it's shifted to the right by some distance.
I had the same problem. You can change the size of the image (redraw it) before detection:
CGSize size = CGSizeMake(cameraCaptureImage.size.width, cameraCaptureImage.size.height);
UIGraphicsBeginImageContext(size);
[cameraCaptureImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
cameraCaptureImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
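Wrapped up as a helper, that workaround might look like this (a sketch assuming the redraw is what fixes detection for your camera captures; the method name is made up):
// Hypothetical helper: redraws the UIImage into a fresh bitmap context, which
// bakes the orientation/scale metadata into the pixels before CIDetector sees them.
- (UIImage *)normalizedImageFromImage:(UIImage *)image {
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    UIImage *normalized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalized;
}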

How to adjust brightness and contrast of CGImageRef

I need to adjust the contrast and brightness of a CGImageRef by means of Core Graphics/Quartz.
Any ideas how to do it?
The Quartz guides and an online search didn't give many results.
Please don't refer me to an OpenGL solution.
You want Core Image. The filter for your purpose is CIColorControls.
Also, if you want to keep the UI responsive, you can use GCD. Enjoy!
CIContext *ctxt63 = [CIContext contextWithOptions:nil];
CIFilter *filter63 = [CIFilter filterWithName:@"CIColorControls"];
[filter63 setDefaults];
[filter63 setValue:input forKey:kCIInputImageKey]; // input is your source CIImage
[filter63 setValue:@1.8 forKey:kCIInputSaturationKey];
[filter63 setValue:[NSNumber numberWithFloat:0.8] forKey:@"inputBrightness"];
[filter63 setValue:[NSNumber numberWithFloat:3.0] forKey:@"inputContrast"];
CIImage *output63 = [filter63 outputImage];
// Render the filtered image on a background queue
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    CGImageRef res63 = [ctxt63 createCGImage:output63 fromRect:[output63 extent]];
    dispatch_async(dispatch_get_main_queue(), ^{
        UIImage *img63 = [UIImage imageWithCGImage:res63];
        CGImageRelease(res63);
        self.photoView.image = img63;
    });
});
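Since the question asks specifically about a CGImageRef, here is a minimal synchronous sketch built on the same CIColorControls filter (the function name and parameter values are mine, not from the answer above):
// Hypothetical helper: takes a CGImageRef and returns a new, adjusted CGImageRef.
// The caller owns the returned image and must CGImageRelease() it.
CGImageRef CreateAdjustedImage(CGImageRef source, float brightness, float contrast) {
    CIImage *input = [CIImage imageWithCGImage:source];
    CIFilter *filter = [CIFilter filterWithName:@"CIColorControls"];
    [filter setDefaults];
    [filter setValue:input forKey:kCIInputImageKey];
    [filter setValue:[NSNumber numberWithFloat:brightness] forKey:@"inputBrightness"];
    [filter setValue:[NSNumber numberWithFloat:contrast] forKey:@"inputContrast"];
    CIImage *output = filter.outputImage;
    CIContext *context = [CIContext contextWithOptions:nil];
    return [context createCGImage:output fromRect:output.extent]; // +1, release when done
}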

Creating Thumbnail from Video - Improving Speed Performance - AVAsset - iPhone [duplicate]

This question already has an answer here:
Grabbing First Frame of a Video - Thumbnail Resolution - iPhone
(1 answer)
Closed 3 years ago.
I am using code based on the code in the following thread to generate a video thumbnail:
Getting a thumbnail from a video url or data in iPhone SDK
My code is as follows:
if (selectionType == kUserSelectedMedia) {
    NSURL *assetURL = [info objectForKey:UIImagePickerControllerReferenceURL];
    AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:assetURL options:nil];
    AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    generator.appliesPreferredTrackTransform = TRUE;
    [asset release];
    CMTime thumbTime = CMTimeMakeWithSeconds(0, 30);
    //NSLog(@"Starting Async Queue");
    AVAssetImageGeneratorCompletionHandler handler = ^(CMTime requestedTime, CGImageRef im, CMTime actualTime, AVAssetImageGeneratorResult result, NSError *error) {
        if (result != AVAssetImageGeneratorSucceeded) {
            NSLog(@"couldn't generate thumbnail, error:%@", error);
        }
        //NSLog(@"Updating UI");
        selectMediaButton.hidden = YES;
        selectedMedia.hidden = NO;
        cancelMediaChoiceButton.hidden = NO;
        whiteBackgroundMedia.hidden = NO;
        // Convert CGImage thumbnail to UIImage.
        UIImage *thumbnail = [UIImage imageWithCGImage:im];
        int checkSizeW = thumbnail.size.width;
        int checkSizeH = thumbnail.size.height;
        NSLog(@"Image width is %d", checkSizeW);
        NSLog(@"Image height is %d", checkSizeH);
        if (checkSizeW >= checkSizeH) {
            // This is a landscape image or video.
            NSLog(@"This is a landscape image - will resize automatically");
        }
        if (checkSizeH >= checkSizeW) {
            // This is a portrait image or video.
            selectedIntroHeight = thumbnail.size.height;
        }
        // Set the image once resized.
        selectedMedia.image = thumbnail;
        // Set our confirm button BOOL to YES and check if we need to display the confirm button.
        mediaReady = YES;
        [self checkIfConfirmButtonShouldBeDisplayed];
        //[button setImage:[UIImage imageWithCGImage:im] forState:UIControlStateNormal];
        //thumbImg = [[UIImage imageWithCGImage:im] retain];
        [generator release];
    };
    CGSize maxSize = CGSizeMake(320, 180);
    generator.maximumSize = maxSize;
    [generator generateCGImagesAsynchronouslyForTimes:[NSArray arrayWithObject:[NSValue valueWithCMTime:thumbTime]] completionHandler:handler];
}
The issue is that there is a delay of about 5-10 seconds in generating the thumbnail image. Is there any way I could improve the speed of this code and generate the thumbnail more quickly?
Thank you.
This is generic code: just pass the path of the media file and set the ratio between 0 and 1.0.
+ (UIImage *)previewFromFileAtPath:(NSString *)path ratio:(CGFloat)ratio
{
    AVAsset *asset = [AVURLAsset assetWithURL:[NSURL fileURLWithPath:path]];
    AVAssetImageGenerator *imageGenerator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    CMTime duration = asset.duration;
    CGFloat durationInSeconds = (CGFloat)duration.value / duration.timescale;
    // Use the asset's own timescale as the preferred timescale for the requested time.
    CMTime time = CMTimeMakeWithSeconds(durationInSeconds * ratio, duration.timescale);
    CGImageRef imageRef = [imageGenerator copyCGImageAtTime:time actualTime:NULL error:NULL];
    UIImage *thumbnail = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return thumbnail;
}
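Usage might look something like this (the class name and path variable are placeholders):
// Hypothetical call site; VideoThumbnailHelper is whatever class hosts the method above.
UIImage *thumbnail = [VideoThumbnailHelper previewFromFileAtPath:localVideoPath ratio:0.0];
selectedMedia.image = thumbnail;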
Swift solution:
func previewImageForLocalVideo(url: NSURL) -> UIImage?
{
    let asset = AVAsset(URL: url)
    let imageGenerator = AVAssetImageGenerator(asset: asset)
    imageGenerator.appliesPreferredTrackTransform = true
    var time = asset.duration
    // If possible, avoid taking the very first frame (it can be completely black or white in camera videos)
    time.value = min(time.value, 2)
    do {
        let imageRef = try imageGenerator.copyCGImageAtTime(time, actualTime: nil)
        return UIImage(CGImage: imageRef)
    }
    catch let error as NSError
    {
        print("Image generation failed with error \(error)")
        return nil
    }
}

CoreImage Memory problems in ios5

Does anyone know how to release memory while using the Core Image framework to apply hue changes to an image?
Here is my code:
CIImage *inputImage = [[CIImage alloc] initWithImage:currentImage];
CIFilter *controlsFilter = [CIFilter filterWithName:@"CIHueAdjust"];
[controlsFilter setValue:inputImage forKey:kCIInputImageKey];
[controlsFilter setValue:[NSNumber numberWithFloat:slider.value] forKey:@"inputAngle"];
CIImage *displayImage = controlsFilter.outputImage;
UIImage *finalImage = [UIImage imageWithCIImage:displayImage];
CIContext *context = [CIContext contextWithOptions:nil];
if (displayImage == nil || finalImage == nil) {
    // We did not get an output image. Display the original image itself.
    photoEditView.image = currentImage;
} else {
    // We got an output image. Display it.
    photoEditView.image = [UIImage imageWithCGImage:[context createCGImage:displayImage fromRect:displayImage.extent]];
}
context = nil;
[inputImage release];
I think you need to release this one as well:
[context createCGImage:displayImage fromRect:displayImage.extent]
by calling CGImageRelease() on it.
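Putting that into the else branch from the question, a sketch of the fix might look like this:
// Hold on to the CGImage so it can be released after the UIImage wraps it.
CGImageRef cgImage = [context createCGImage:displayImage fromRect:displayImage.extent];
photoEditView.image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage); // createCGImage returns a +1 reference that we own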

Convert UIImage to CVImageBufferRef

This code mostly works, but the resulting data seems to lose a color channel (or so I think), as the resulting image, when displayed, is tinted blue!
Here is the code:
UIImage *myImage = [UIImage imageNamed:@"sample1.png"];
CGImageRef imageRef = [myImage CGImage];
CVImageBufferRef pixelBuffer = [self pixelBufferFromCGImage:imageRef];
The method pixelBufferFromCGImage was grabbed from another post on Stack Overflow here: How do I export UIImage array as a movie? (although that application is unrelated to what I am trying to do). It is:
+ (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
{
    CGSize frameSize = CGSizeMake(CGImageGetWidth(image), CGImageGetHeight(image));
    NSDictionary *options = @{
        (__bridge NSString *)kCVPixelBufferCGImageCompatibilityKey: @(NO),
        (__bridge NSString *)kCVPixelBufferCGBitmapContextCompatibilityKey: @(NO)
    };
    CVPixelBufferRef pixelBuffer;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, frameSize.width,
                                          frameSize.height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)options,
                                          &pixelBuffer);
    if (status != kCVReturnSuccess) {
        return NULL;
    }
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    void *data = CVPixelBufferGetBaseAddress(pixelBuffer);
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(data, frameSize.width, frameSize.height,
                                                 8, CVPixelBufferGetBytesPerRow(pixelBuffer), rgbColorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaNoneSkipLast);
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                           CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    return pixelBuffer;
}
I am thinking it has something to do with the relationship between kCVPixelFormatType_32ARGB and kCGImageAlphaNoneSkipLast, though I have tried every combination and get either the same result or an application crash. Once again, this gets the UIImage data into a CVImageBufferRef, but when I display the image on screen, it appears to lose a color channel and shows up tinted blue. The image is a PNG.
The solution is that this code works perfectly as intended. :) The issue was in how the data was used to create an OpenGL texture, which is completely unrelated to this code. Anyone searching for how to convert a UIImage to a CVImageBufferRef: your answer is in the code above!
If anyone is still looking for a solution to this problem, I solved it by switching the BOOLs in the pixelBuffer's options:
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:NO], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:NO], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
From NO to YES:
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
I encountered the same problem and found some samples: http://www.cakesolutions.net/teamblogs/2014/03/08/cmsamplebufferref-from-cgimageref
Try changing the bitmap info to:
CGBitmapInfo bitmapInfo = (CGBitmapInfo)(kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
Here's what really works:
+ (CVPixelBufferRef)pixelBufferFromImage:(CGImageRef)image {
    CGSize frameSize = CGSizeMake(CGImageGetWidth(image), CGImageGetHeight(image)); // Not sure why this is even necessary; using CGImageGetWidth/Height directly seems to work fine too
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, frameSize.width, frameSize.height, kCVPixelFormatType_32BGRA, nil, &pixelBuffer);
    if (status != kCVReturnSuccess) {
        return NULL;
    }
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    void *data = CVPixelBufferGetBaseAddress(pixelBuffer);
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(data, frameSize.width, frameSize.height, 8, CVPixelBufferGetBytesPerRow(pixelBuffer), rgbColorSpace, (CGBitmapInfo)kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    return pixelBuffer;
}
You can change the pixel buffer back to a UIImage (and then display or save it) to confirm that it works with this method:
+ (UIImage *)imageFromPixelBuffer:(CVPixelBufferRef)pixelBuffer {
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef myImage = [context createCGImage:ciImage fromRect:CGRectMake(0, 0, CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer))];
    UIImage *image = [UIImage imageWithCGImage:myImage];
    CGImageRelease(myImage); // release the +1 CGImage now that the UIImage retains it
    // Uncomment the following lines to save the image to your application's documents directory
    //NSString *imageSavePath = [documentsDirectory stringByAppendingPathComponent:[NSString stringWithFormat:@"myImageFromPixelBuffer.png"]];
    //[UIImagePNGRepresentation(image) writeToFile:imageSavePath atomically:YES];
    return image;
}
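A hypothetical round-trip check, assuming both class methods live on the same helper class (the class name ImageHelper and the views are placeholders):
// Convert a UIImage to a pixel buffer and back, then display it to verify the colors survive.
CVPixelBufferRef buffer = [ImageHelper pixelBufferFromImage:myImage.CGImage];
UIImage *roundTripped = [ImageHelper imageFromPixelBuffer:buffer];
CVPixelBufferRelease(buffer); // CVPixelBufferCreate gave us ownership
imageView.image = roundTripped;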
Just to clarify the answer above: I ran into the same issue because my shader code was expecting two layered samples within an image buffer, while I used a single-layer buffer.
This line takes the RGB values from one sample and passes them on (to what, I don't know), but the end result is a fully colored image.
gl_FragColor = vec4(texture2D(SamplerY, texCoordVarying).rgb, 1);
It sounds like it might be that relationship. Could you try a JPG in RGB instead of a PNG with indexed colors?