Does anyone know how to release memory when using the Core Image framework to apply hue changes to an image?
Here is my code:
CIImage *inputImage = [[CIImage alloc] initWithImage:currentImage];
CIFilter *controlsFilter = [CIFilter filterWithName:@"CIHueAdjust"];
[controlsFilter setValue:inputImage forKey:kCIInputImageKey];
[controlsFilter setValue:[NSNumber numberWithFloat:slider.value] forKey:@"inputAngle"];
CIImage *displayImage = controlsFilter.outputImage;
UIImage *finalImage = [UIImage imageWithCIImage:displayImage];
CIContext *context = [CIContext contextWithOptions:nil];

if (displayImage == nil || finalImage == nil) {
    // We did not get an output image. Display the original image itself.
    photoEditView.image = currentImage;
} else {
    // We got an output image. Display it.
    photoEditView.image = [UIImage imageWithCGImage:[context createCGImage:displayImage fromRect:displayImage.extent]];
}
context = nil;
[inputImage release];
I think you need to release this one as well:
[context createCGImage:displayImage fromRect:displayImage.extent]
by calling CGImageRelease() on the returned CGImageRef, since createCGImage:fromRect: returns an image the caller owns.
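For instance, a minimal sketch of the else branch that keeps the CGImageRef around long enough to release it (assuming the same variable names as in the question):
// Hold the CGImage created by the context so it can be released after use.
CGImageRef cgImage = [context createCGImage:displayImage fromRect:displayImage.extent];
photoEditView.image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage); // createCGImage:fromRect: follows the Create rule, so the caller owns this image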
I have a problem. I am using two images: one downloaded from the internet, the other captured with the iPhone camera.
I use CIDetector to detect faces in both images. It works perfectly on the image downloaded from the internet, but on the other it either can't detect the face or detects it in the wrong place.
I have checked many images; the result is the same.
Try this
NSDictionary *options = [NSDictionary dictionaryWithObject:CIDetectorAccuracyLow forKey:CIDetectorAccuracy];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:options];

CIImage *ciImage = [CIImage imageWithCGImage:[image CGImage]];
NSNumber *orientation = [NSNumber numberWithInt:[image imageOrientation] + 1];
NSDictionary *fOptions = [NSDictionary dictionaryWithObject:orientation forKey:CIDetectorImageOrientation];

NSArray *features = [detector featuresInImage:ciImage options:fOptions];
for (CIFaceFeature *f in features) {
    NSLog(@"left eye found: %@", (f.hasLeftEyePosition ? @"YES" : @"NO"));
    NSLog(@"right eye found: %@", (f.hasRightEyePosition ? @"YES" : @"NO"));
    NSLog(@"mouth found: %@", (f.hasMouthPosition ? @"YES" : @"NO"));

    if (f.hasLeftEyePosition)
        NSLog(@"left eye position x = %f, y = %f", f.leftEyePosition.x, f.leftEyePosition.y);
    if (f.hasRightEyePosition)
        NSLog(@"right eye position x = %f, y = %f", f.rightEyePosition.x, f.rightEyePosition.y);
    if (f.hasMouthPosition)
        NSLog(@"mouth position x = %f, y = %f", f.mouthPosition.x, f.mouthPosition.y);
}
If you're always using the front camera in portrait, add this:
NSDictionary* imageOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:6] forKey:CIDetectorImageOrientation];
NSArray* features = [detector featuresInImage:image options:imageOptions];
For more info
sample: https://github.com/beetlebugorg/PictureMe
iOS Face Detection Issue
Face Detection issue using CIDetector
https://stackoverflow.com/questions/4332868/detect-face-in-iphone?rq=1
I tried the code above. It can detect faces in images captured by the iPhone, but it still can't detect faces in the image downloaded from the internet. This is my code:
NSDictionary *options = [NSDictionary dictionaryWithObject:CIDetectorAccuracyLow forKey:CIDetectorAccuracy];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:options];
CIImage *ciImage = [CIImage imageWithCGImage:[facePicture CGImage]];
NSDictionary *imageOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:6] forKey:CIDetectorImageOrientation];
NSArray *features = [detector featuresInImage:ciImage options:imageOptions];
And when a face is detected, I draw it with this code:
for (CIFaceFeature *feature in features) {
    // Highlight the detected face rectangle
    CGRect faceRect = [feature bounds];
    CGContextSetRGBFillColor(context, 0.0f, 0.0f, 0.0f, 0.5f);
    CGContextSetStrokeColorWithColor(context, [UIColor whiteColor].CGColor);
    CGContextSetLineWidth(context, 2.0f * scale);
    CGContextAddRect(context, feature.bounds);
    CGContextDrawPath(context, kCGPathFillStroke);
    CGContextDrawImage(context, faceRect, [imgDraw CGImage]);
}
But the rectangle isn't at the right position; it's shifted to the right by some distance.
I had the same problem. You can redraw (and resize) the image before detection; drawing it into a new graphics context bakes the EXIF orientation into the pixel data, which is likely why detection works afterwards.
CGSize size = CGSizeMake(cameraCaptureImage.size.width, cameraCaptureImage.size.height);
UIGraphicsBeginImageContext(size);
[cameraCaptureImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
cameraCaptureImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
I need to adjust the contrast and brightness of a CGImageRef by means of Core Graphics/Quartz.
Any ideas how to do it?
The Quartz guidelines and online searches didn't give many results.
Please don't refer me to an OpenGL solution.
You want Core Image. The filter for your purpose is CIColorControls.
Also, if you want to keep the UI responsive, you can do the rendering on a background queue with GCD. Enjoy!
CIContext *ctxt63 = [CIContext contextWithOptions:nil];
CIFilter *filter63 = [CIFilter filterWithName:@"CIColorControls"];
[filter63 setDefaults];
[filter63 setValue:input forKey:kCIInputImageKey];
[filter63 setValue:@1.8 forKey:kCIInputSaturationKey];
[filter63 setValue:[NSNumber numberWithFloat:0.8] forKey:@"inputBrightness"];
[filter63 setValue:[NSNumber numberWithFloat:3.0] forKey:@"inputContrast"];
CIImage *output63 = [filter63 outputImage];

// Apply the filter in the background
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    CGImageRef res63 = [ctxt63 createCGImage:output63 fromRect:[output63 extent]];
    dispatch_async(dispatch_get_main_queue(), ^{
        UIImage *img63 = [UIImage imageWithCGImage:res63];
        CGImageRelease(res63);
        self.photoView.image = img63;
    });
});
This question already has an answer here:
Grabbing First Frame of a Video - Thumbnail Resolution - iPhone
I am using code based on the code in the following thread to generate a video thumbnail:
Getting a thumbnail from a video url or data in iPhone SDK
My code is as follows:
if (selectionType == kUserSelectedMedia) {
    NSURL *assetURL = [info objectForKey:UIImagePickerControllerReferenceURL];
    AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:assetURL options:nil];
    AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    generator.appliesPreferredTrackTransform = TRUE;
    [asset release];
    CMTime thumbTime = CMTimeMakeWithSeconds(0, 30);
    //NSLog(@"Starting Async Queue");
    AVAssetImageGeneratorCompletionHandler handler = ^(CMTime requestedTime, CGImageRef im, CMTime actualTime, AVAssetImageGeneratorResult result, NSError *error) {
        if (result != AVAssetImageGeneratorSucceeded) {
            NSLog(@"couldn't generate thumbnail, error: %@", error);
        }
        //NSLog(@"Updating UI");
        selectMediaButton.hidden = YES;
        selectedMedia.hidden = NO;
        cancelMediaChoiceButton.hidden = NO;
        whiteBackgroundMedia.hidden = NO;
        // Convert the CGImage thumbnail to a UIImage.
        UIImage *thumbnail = [UIImage imageWithCGImage:im];
        int checkSizeW = thumbnail.size.width;
        int checkSizeH = thumbnail.size.height;
        NSLog(@"Image width is %d", checkSizeW);
        NSLog(@"Image height is %d", checkSizeH);
        if (checkSizeW >= checkSizeH) {
            // This is a landscape image or video.
            NSLog(@"This is a landscape image - will resize automatically");
        }
        if (checkSizeH >= checkSizeW) {
            // This is a portrait image or video.
            selectedIntroHeight = thumbnail.size.height;
        }
        // Set the image once resized.
        selectedMedia.image = thumbnail;
        // Set our confirm button BOOL to YES and check if we need to display the confirm button.
        mediaReady = YES;
        [self checkIfConfirmButtonShouldBeDisplayed];
        //[button setImage:[UIImage imageWithCGImage:im] forState:UIControlStateNormal];
        //thumbImg = [[UIImage imageWithCGImage:im] retain];
        [generator release];
    };
    CGSize maxSize = CGSizeMake(320, 180);
    generator.maximumSize = maxSize;
    [generator generateCGImagesAsynchronouslyForTimes:[NSArray arrayWithObject:[NSValue valueWithCMTime:thumbTime]] completionHandler:handler];
}
}
The issue is that there is a delay of about 5-10 seconds in generating the thumbnail image. Is there any way I could improve the speed of this code and generate the thumbnail more quickly?
Thank you.
This is generic code; just pass it the path of the media file and set the ratio between 0 and 1.0.
+ (UIImage *)previewFromFileAtPath:(NSString *)path ratio:(CGFloat)ratio
{
    AVAsset *asset = [AVURLAsset assetWithURL:[NSURL fileURLWithPath:path]];
    AVAssetImageGenerator *imageGenerator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    CMTime duration = asset.duration;
    CGFloat durationInSeconds = (CGFloat)duration.value / duration.timescale;
    // Use the asset's timescale (not its value) as the preferred timescale.
    CMTime time = CMTimeMakeWithSeconds(durationInSeconds * ratio, duration.timescale);
    CGImageRef imageRef = [imageGenerator copyCGImageAtTime:time actualTime:NULL error:NULL];
    UIImage *thumbnail = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return thumbnail;
}
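A possible call site for this helper (the class name, video path variable, and image view are illustrative, not from the original answer):
// Hypothetical usage: grab a preview frame from the middle of the video.
UIImage *preview = [ThumbnailHelper previewFromFileAtPath:videoPath ratio:0.5];
imageView.image = preview;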
Swift solution:
func previewImageForLocalVideo(url: NSURL) -> UIImage?
{
    let asset = AVAsset(URL: url)
    let imageGenerator = AVAssetImageGenerator(asset: asset)
    imageGenerator.appliesPreferredTrackTransform = true

    var time = asset.duration
    // If possible, don't take the very first frame (it can be completely black or white in camera videos)
    time.value = min(time.value, 2)

    do {
        let imageRef = try imageGenerator.copyCGImageAtTime(time, actualTime: nil)
        return UIImage(CGImage: imageRef)
    }
    catch let error as NSError
    {
        print("Image generation failed with error \(error)")
        return nil
    }
}
I am trying to process an image using Core Image. I have created a UIImage category to do it.
I have added the QuartzCore and CoreImage frameworks to the project, imported CoreImage/CoreImage.h, and used this code:
CIImage *inputImage = self.CIImage;
CIFilter *exposureAdjustmentFilter = [CIFilter filterWithName:@"CIExposureAdjust"];
[exposureAdjustmentFilter setDefaults];
[exposureAdjustmentFilter setValue:inputImage forKey:@"inputImage"];
[exposureAdjustmentFilter setValue:[NSNumber numberWithFloat:5.0f] forKey:@"inputEV"];
CIImage *outputImage = [exposureAdjustmentFilter valueForKey:@"outputImage"];

CIContext *myContext = [CIContext contextWithOptions:nil];
return [UIImage imageWithCGImage:[myContext createCGImage:outputImage fromRect:outputImage.extent]];
But I get a nil output image from the filter.
I have also tried CIHueAdjust with the same result.
Thank you in advance.
UPDATE: I have found the solution. It was necessary to alloc a new CIImage rather than just passing a reference to UIImage.CIImage, like this:
CIImage *inputImage = [[CIImage alloc] initWithImage:self];
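For reference, here is a minimal sketch of how the corrected category method might look (the category and method names are illustrative, and ARC is assumed):
// UIImage+Exposure (hypothetical category)
- (UIImage *)imageByAdjustingExposure:(CGFloat)ev
{
    // self.CIImage is nil for a CGImage-backed UIImage, so build a new CIImage instead.
    CIImage *inputImage = [[CIImage alloc] initWithImage:self];
    CIFilter *filter = [CIFilter filterWithName:@"CIExposureAdjust"];
    [filter setValue:inputImage forKey:kCIInputImageKey];
    [filter setValue:@(ev) forKey:@"inputEV"];

    CIImage *outputImage = filter.outputImage;
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [context createCGImage:outputImage fromRect:outputImage.extent];
    UIImage *result = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage); // we own the image returned by createCGImage:fromRect:
    return result;
}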
Try the following code:
CIImage *inputImage = [[CIImage alloc] initWithImage:[UIImage imageNamed:@"old-country-rain.jpg"]];
CIFilter *controlsFilter = [CIFilter filterWithName:@"CIExposureAdjust"];
[controlsFilter setValue:inputImage forKey:kCIInputImageKey];
[controlsFilter setValue:[NSNumber numberWithFloat:2.0f] forKey:@"inputEV"];
NSLog(@"%@", controlsFilter.attributes);
CIImage *displayImage = controlsFilter.outputImage;
UIImage *finalImage = [UIImage imageWithCIImage:displayImage];
CIContext *context = [CIContext contextWithOptions:nil];

if (displayImage == nil || finalImage == nil) {
    // We did not get an output image. Display the original image itself.
    imageView.image = [UIImage imageNamed:@"old-country-rain.jpg"];
} else {
    // We got an output image. Display it.
    imageView.image = [UIImage imageWithCGImage:[context createCGImage:displayImage fromRect:displayImage.extent]];
}
I want to know the file size of an image in the iPhone photo album selected with UIImagePickerController.
I've tried this code with a 1,571,299 byte JPEG image:
UIImage *selectedImage = [info objectForKey:@"UIImagePickerControllerOriginalImage"];
NSData *imageData;
if ( /* PNG IMAGE */ )
    imageData = UIImagePNGRepresentation(selectedImage);
else
    imageData = UIImageJPEGRepresentation(selectedImage, 1.0);
NSUInteger fileLength = [imageData length];
NSLog(@"file length : [%u]", fileLength);
But when I run the code, it prints 362788 bytes.
Does anybody know why?
If you have code like this to pick an image:
UIImagePickerController *controller = [[[UIImagePickerController alloc] init] autorelease];
controller.sourceType = UIImagePickerControllerSourceTypePhotoLibrary;
controller.delegate = self;
[self presentModalViewController:controller animated:YES];
Then you can retrieve the file size of the picked image in the following way:
NSURL *assetURL = [info objectForKey:@"UIImagePickerControllerReferenceURL"];
ALAssetsLibrary *library = [[[ALAssetsLibrary alloc] init] autorelease];
[library assetForURL:assetURL resultBlock:^(ALAsset *asset) {
    NSLog(@"Size: %lld", asset.defaultRepresentation.size);
} failureBlock:nil];
If the source type is UIImagePickerControllerSourceTypeCamera, you must save the in-memory image to disk before retrieving its file size.
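A minimal sketch of that save-then-measure approach (the temporary file name and JPEG quality here are arbitrary choices, not part of the original answer):
// Write the in-memory image to a temporary file, then ask the file system for its size.
UIImage *pickedImage = [info objectForKey:UIImagePickerControllerOriginalImage];
NSData *jpegData = UIImageJPEGRepresentation(pickedImage, 0.9); // quality chosen arbitrarily
NSString *tempPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"captured.jpg"];
if ([jpegData writeToFile:tempPath atomically:YES]) {
    NSDictionary *attributes = [[NSFileManager defaultManager] attributesOfItemAtPath:tempPath error:NULL];
    NSLog(@"Size on disk: %llu bytes", [attributes fileSize]);
}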
As some commenters have said, even if we assume the methodology is correct you are reprocessing the image anyway so the byte sizes will not match. I use the following method for JPG images, ymmv for PNG:
+ (NSInteger)bytesInImage:(UIImage *)image {
    CGImageRef imageRef = [image CGImage];
    return CGImageGetBytesPerRow(imageRef) * CGImageGetHeight(imageRef);
}
The above, as a commenter noted, does return the uncompressed size however.
Here it is in Swift 4:
extension UIImage {
    var memorySize: Int {
        guard let imageRef = self.cgImage else { return 0 }
        return imageRef.bytesPerRow * imageRef.height
    }
}