Applying CIFilter to UIImageView - iphone

In a UIViewController I'm displaying an image in a UIImageView, and I want to display it with some special effects using CoreImage.framework:
UIImageView *myImage = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"piano.png"]];
myImage.frame = CGRectMake(10, 50, 300, 360);
CIContext *context = [CIContext contextWithOptions:nil];
CIFilter *filter = [CIFilter filterWithName:@"CIVignette"];
CIImage *inputImage = [[CIImage alloc] initWithImage:[UIImage imageNamed:@"piano.png"]];
[filter setValue:inputImage forKey:@"inputImage"];
[filter setValue:[NSNumber numberWithFloat:18] forKey:@"inputIntensity"];
[filter setValue:[NSNumber numberWithFloat:0] forKey:@"inputRadius"];
[baseView addSubview:inputImage];
But it looks like I'm missing something or doing something wrong.

As the other post indicates, a CIImage is not a view, so it can't be added as one. CIImage is really only used for image processing; to display the filtered image you'll need to convert it back to a UIImage. To do this, get the output CIImage from the filter (not the input image). If you have multiple filters chained, use the last filter in the chain. Then convert the output CIImage to a CGImage, and from there to a UIImage. This code accomplishes those steps:
CIImage *result = [filter valueForKey:kCIOutputImageKey]; // Get the processed image from the filter
CGImageRef cgImage = [context createCGImage:result fromRect:[result extent]]; // Create a CGImage from the output CIImage
UIImage *outputImage = [UIImage imageWithCGImage:cgImage]; // Create a UIImage from the CGImage
Also remember that the UIImage will have to go into a UIImageView, as it's not a view itself!
For more information, see the Core Image programming guide: https://developer.apple.com/library/ios/#documentation/GraphicsImaging/Conceptual/CoreImaging/ci_intro/ci_intro.html
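Putting it together with the question's code, the whole pipeline would look roughly like this (a sketch only; baseView, the image name, and the parameter values are taken from the question):
// Build the vignette filter (values from the question).
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *inputImage = [[CIImage alloc] initWithImage:[UIImage imageNamed:@"piano.png"]];
CIFilter *filter = [CIFilter filterWithName:@"CIVignette"];
[filter setValue:inputImage forKey:kCIInputImageKey];
[filter setValue:[NSNumber numberWithFloat:18] forKey:@"inputIntensity"];
[filter setValue:[NSNumber numberWithFloat:0] forKey:@"inputRadius"];
// Render the output and wrap it in a UIImage.
CIImage *result = [filter valueForKey:kCIOutputImageKey];
CGImageRef cgImage = [context createCGImage:result fromRect:[result extent]];
UIImage *outputImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
// Put the UIImage into a UIImageView and add the view (not the CIImage) as the subview.
UIImageView *myImageView = [[UIImageView alloc] initWithImage:outputImage];
myImageView.frame = CGRectMake(10, 50, 300, 360);
[baseView addSubview:myImageView];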

CIImage cannot be added as a subview because it is not a view (a UIView subclass). You need a UIImageView with a UIImage assigned to its 'image' property (and that UIImage you can create from the CIImage, I believe).

Related

Same CIGaussianBlur effect achieves two highly different results depending on the background [duplicate]

I noticed that when CIGaussianBlur is applied to an image, the image's corners get blurred so that it looks smaller than the original. So I figured out that I need to crop it correctly to avoid having transparent edges. But how do I calculate how much I need to crop depending on the blur amount?
Example:
Original image:
Image with inputRadius 50 of CIGaussianBlur (blue is the background of everything):
Take the following code as an example...
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *inputImage = [[CIImage alloc] initWithImage:image];
CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
[filter setValue:inputImage forKey:kCIInputImageKey];
[filter setValue:[NSNumber numberWithFloat:5.0f] forKey:@"inputRadius"];
CIImage *result = [filter valueForKey:kCIOutputImageKey];
CGImageRef cgImage = [context createCGImage:result fromRect:[result extent]];
This results in the images you provided above. But if I instead use the original image's rect to create the CGImage from the context, the resulting image is the desired size.
CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];
There are two issues. The first is that the blur filter samples pixels outside the edges of the input image, and those pixels are transparent. That's where the transparent edges come from.
The trick is to extend the edges before you apply the blur filter. This can be done with a clamp filter, e.g. like this:
CIFilter *affineClampFilter = [CIFilter filterWithName:@"CIAffineClamp"];
CGAffineTransform xform = CGAffineTransformMakeScale(1.0, 1.0);
[affineClampFilter setValue:[NSValue valueWithBytes:&xform
                                           objCType:@encode(CGAffineTransform)]
                     forKey:@"inputTransform"];
This filter extends the edges infinitely and eliminates the transparency. The next step would be to apply the blur filter.
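For example, feeding the question's image through the clamp and then the blur might look like this (a sketch only, reusing inputImage and the radius from the snippet further above):
// Clamp first: extend the image's edge pixels outward.
[affineClampFilter setValue:inputImage forKey:kCIInputImageKey];
CIImage *clampedImage = [affineClampFilter valueForKey:kCIOutputImageKey];
// Then blur the clamped image, so no transparent pixels are pulled in at the edges.
CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[blurFilter setValue:clampedImage forKey:kCIInputImageKey];
[blurFilter setValue:[NSNumber numberWithFloat:5.0f] forKey:@"inputRadius"];
CIImage *blurredImage = [blurFilter valueForKey:kCIOutputImageKey];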
The second issue is a bit weird. Some renderers produce a bigger output image for the blur filter, and you must adapt the origin of the resulting CIImage by some offset, e.g. like this:
CGImageRef cgImage = [context createCGImage:outputImage
                                   fromRect:CGRectOffset([inputImage extent],
                                                         offset, offset)];
The software renderer on my iPhone needs three times the blur radius as the offset. The hardware renderer on the same iPhone does not need any offset at all. Maybe you could deduce the offset from the size difference of the input and output images, but I did not try...
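One untested way to follow that last suggestion would be to derive the offset from the two extents (a sketch only; it assumes the blurred outputImage still reports a finite extent, and the half-difference may or may not be the offset a given renderer wants):
// Compare the extents; the blur can grow the image on every side.
CGRect inExtent = [inputImage extent];
CGRect outExtent = [outputImage extent];
CGFloat offset = (outExtent.size.width - inExtent.size.width) / 2.0f;
CGImageRef cgImage = [context createCGImage:outputImage
                                   fromRect:CGRectOffset(inExtent, offset, offset)];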
To get a nice blurred version of an image with hard edges, you first need to apply a CIAffineClamp to the source image, extending its edges out, and then you need to ensure that you use the input image's extent when generating the output image.
The code is as follows:
CIContext *context = [CIContext contextWithOptions:nil];
UIImage *image = [UIImage imageNamed:@"Flower"];
CIImage *inputImage = [[CIImage alloc] initWithImage:image];
CIFilter *clampFilter = [CIFilter filterWithName:@"CIAffineClamp"];
[clampFilter setDefaults];
[clampFilter setValue:inputImage forKey:kCIInputImageKey];
CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[blurFilter setValue:clampFilter.outputImage forKey:kCIInputImageKey];
[blurFilter setValue:@10.0f forKey:@"inputRadius"];
CIImage *result = [blurFilter valueForKey:kCIOutputImageKey];
CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];
UIImage *blurredImage = [[UIImage alloc] initWithCGImage:cgImage scale:image.scale orientation:UIImageOrientationUp];
CGImageRelease(cgImage);
Note this code was tested on iOS. It should be similar for OS X (substituting NSImage for UIImage).
I saw some of the solutions and wanted to recommend a more modern one, based off some of the ideas shared here:
private lazy var coreImageContext = CIContext() // Re-use this.

func blurredImage(image: CIImage, radius: CGFloat) -> CGImage? {
    let blurredImage = image
        .clampedToExtent()
        .applyingFilter(
            "CIGaussianBlur",
            parameters: [
                kCIInputRadiusKey: radius,
            ]
        )
        .cropped(to: image.extent)
    return coreImageContext.createCGImage(blurredImage, from: blurredImage.extent)
}
If you need a UIImage afterward, you can of course get it like so:
let image = UIImage(cgImage: cgImage)
... For those wondering, the reason for returning a CGImage is (as noted in the Apple documentation):
Due to Core Image's coordinate system mismatch with UIKit, this filtering approach may yield unexpected results when displayed in a UIImageView with contentMode set. Be sure to back it with a CGImage so that it handles contentMode properly.
If you need a CIImage you could return that, but in this case if you're displaying the image, you'd probably want to be careful.
This works for me :)
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *inputImage = [[CIImage alloc] initWithImage:image];
CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[blurFilter setDefaults];
[blurFilter setValue:inputImage forKey:@"inputImage"];
CGFloat blurLevel = 20.0f; // Set blur level
[blurFilter setValue:[NSNumber numberWithFloat:blurLevel] forKey:@"inputRadius"]; // Set value for blur level
CIImage *outputImage = [blurFilter valueForKey:@"outputImage"];
CGRect rect = inputImage.extent; // Create rect
rect.origin.x += blurLevel;      // and set custom params
rect.origin.y += blurLevel;
rect.size.height -= blurLevel * 2.0f;
rect.size.width -= blurLevel * 2.0f;
CGImageRef cgImage = [context createCGImage:outputImage fromRect:rect]; // Then apply the new rect
imageView.image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
Here is the Swift 5 version of blurring the image. Set the clamp filter to its defaults so you don't need to supply a transform.
func applyBlurEffect() -> UIImage? {
    let context = CIContext(options: nil)
    let imageToBlur = CIImage(image: self)
    let clampFilter = CIFilter(name: "CIAffineClamp")!
    clampFilter.setDefaults()
    clampFilter.setValue(imageToBlur, forKey: kCIInputImageKey)

    // The CIAffineClamp filter sets the extent to infinity, which then confounds the context.
    // Save off the pre-clamp extent CGRect and supply that when creating the CGImage.
    let inputImageExtent = imageToBlur!.extent

    guard let currentFilter = CIFilter(name: "CIGaussianBlur") else {
        return nil
    }
    currentFilter.setValue(clampFilter.outputImage, forKey: kCIInputImageKey)
    currentFilter.setValue(10, forKey: "inputRadius")

    guard let output = currentFilter.outputImage, let cgimg = context.createCGImage(output, from: inputImageExtent) else {
        return nil
    }
    return UIImage(cgImage: cgimg)
}
Here is Swift version:
func applyBlurEffect(image: UIImage) -> UIImage {
    let context = CIContext(options: nil)
    let imageToBlur = CIImage(image: image)
    let blurfilter = CIFilter(name: "CIGaussianBlur")
    blurfilter!.setValue(imageToBlur, forKey: "inputImage")
    blurfilter!.setValue(5.0, forKey: "inputRadius")
    let resultImage = blurfilter!.valueForKey("outputImage") as! CIImage
    let cgImage = context.createCGImage(resultImage, fromRect: resultImage.extent)
    let blurredImage = UIImage(CGImage: cgImage)
    return blurredImage
}
Below are two implementations for Xamarin (C#).
1) Works for iOS 6
public static UIImage Blur(UIImage image)
{
    using (var blur = new CIGaussianBlur())
    {
        blur.Image = new CIImage(image);
        blur.Radius = 6.5f;

        using (CIImage output = blur.OutputImage)
        using (CIContext context = CIContext.FromOptions(null))
        using (CGImage cgimage = context.CreateCGImage(output, new RectangleF(0, 0, image.Size.Width, image.Size.Height)))
        {
            return UIImage.FromImage(cgimage);
        }
    }
}
2) Implementation for iOS 7
The approach shown above no longer works properly on iOS 7 (at least at the moment, with Xamarin 7.0.1), so I decided to add cropping another way (measures may depend on the blur radius).
private static UIImage BlurImage(UIImage image)
{
    using (var blur = new CIGaussianBlur())
    {
        blur.Image = new CIImage(image);
        blur.Radius = 6.5f;

        using (CIImage output = blur.OutputImage)
        using (CIContext context = CIContext.FromOptions(null))
        using (CGImage cgimage = context.CreateCGImage(output, new RectangleF(0, 0, image.Size.Width, image.Size.Height)))
        {
            return UIImage.FromImage(Crop(CIImage.FromCGImage(cgimage), image.Size.Width, image.Size.Height));
        }
    }
}

private static CIImage Crop(CIImage image, float width, float height)
{
    var crop = new CICrop
    {
        Image = image,
        Rectangle = new CIVector(10, 10, width - 20, height - 20)
    };
    return crop.OutputImage;
}
Try this, passing the input image's extent as -createCGImage:fromRect:'s parameter:
- (UIImage *)gaussianBlurImageWithRadius:(CGFloat)radius {
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *input = [CIImage imageWithCGImage:self.CGImage];
    CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
    [filter setValue:input forKey:kCIInputImageKey];
    [filter setValue:@(radius) forKey:kCIInputRadiusKey];
    CIImage *output = [filter valueForKey:kCIOutputImageKey];
    CGImageRef imgRef = [context createCGImage:output
                                      fromRect:input.extent];
    UIImage *outImage = [UIImage imageWithCGImage:imgRef
                                            scale:UIScreen.mainScreen.scale
                                      orientation:UIImageOrientationUp];
    CGImageRelease(imgRef);
    return outImage;
}

UIGraphicsContext memory leak

Hi, in my app I have a function that takes an image of the current view and turns it into a blurred image, then adds it to the current view. Although I remove the view using removeFromSuperview, the memory still stays high. I am using Core Graphics and set all of the UIImage references to nil.
I do get a memory leak warning
- (void)blurImage
{
    // Get a screen capture from the current view.
    UIGraphicsBeginImageContext(CGSizeMake(320, 450));
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImg = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Blur the image.
    CIImage *blurImg = [CIImage imageWithCGImage:viewImg.CGImage];
    CGAffineTransform transform = CGAffineTransformIdentity;
    CIFilter *clampFilter = [CIFilter filterWithName:@"CIAffineClamp"];
    [clampFilter setValue:blurImg forKey:@"inputImage"];
    [clampFilter setValue:[NSValue valueWithBytes:&transform objCType:@encode(CGAffineTransform)] forKey:@"inputTransform"];
    CIFilter *gaussianBlurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
    [gaussianBlurFilter setValue:clampFilter.outputImage forKey:@"inputImage"];
    [gaussianBlurFilter setValue:[NSNumber numberWithFloat:22.0f] forKey:@"inputRadius"];
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgImg = [context createCGImage:gaussianBlurFilter.outputImage fromRect:[blurImg extent]];
    UIImage *outputImg = [UIImage imageWithCGImage:cgImg];

    // Add a UIImageView to the current view.
    UIImageView *imgView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 320, 450)];
    [imgView setTag:1109];
    imgView.image = outputImg;
    [imgView setTag:1108];
    gaussianBlurFilter = nil;
    outputImg = nil;
    blurImg = nil;
    viewImg = nil;
    [self.view addSubview:imgView];
    UIGraphicsEndImageContext();
}
The static analyzer ("Analyze" on the Xcode "Product" menu) is informing you that you are missing a needed CGImageRelease(cgImg) at the end of your method. If you have a Core foundation object returned from a method/function with "Create" or "Copy" in the name, you are responsible for releasing it.
By the way, if you tap on the icon (once in the margin, and again on the version that appears in the error message), it will show you more information:
That can be helpful for tracking back to where the problem originated, in this case the call to createCGImage. If you look at the documentation for createCGImage, it confirms this diagnosis, reporting:
Return Value
A Quartz 2D image. You are responsible for releasing the returned image when you no longer need it.
For general counsel about releasing Core Foundation objects, see the Create Rule in the Memory Management Programming Guide for Core Foundation.
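In this case the fix is simply to release the CGImage after it has been wrapped in the UIImage (UIImage keeps its own reference), e.g.:
CGImageRef cgImg = [context createCGImage:gaussianBlurFilter.outputImage fromRect:[blurImg extent]];
UIImage *outputImg = [UIImage imageWithCGImage:cgImg];
CGImageRelease(cgImg); // balances the Create call and silences the analyzer warning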

Adding Filters to imageview

I am new to iOS development and am trying to use Core Image filters...
imgAnimation = [[UIImageView alloc] initWithFrame:frame];
imgAnimation.animationImages = _arrimg;
//animationImages = animationImages;
imgAnimation.contentMode = UIViewContentModeScaleAspectFit;
imgAnimation.animationDuration = 2.0f;
imgAnimation.animationRepeatCount = 0;
[imgAnimation startAnimating];
[self.view addSubview:imgAnimation];
My animation is working properly, but how can I apply filters like sepia or grayscale? I have found many tutorials, but they are for single images. Kindly help.
Sample Code :
+ (UIImage *)applyFilterToImage:(UIImage *)inputImage
{
    CIImage *beginImage = [CIImage imageWithCGImage:[inputImage CGImage]];
    CIContext *context = [CIContext contextWithOptions:nil];
    CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone" keysAndValues:kCIInputImageKey, beginImage, @"inputIntensity", [NSNumber numberWithFloat:0.8], nil];
    CIImage *outputImage = [filter outputImage];
    CGImageRef cgimg = [context createCGImage:outputImage fromRect:[outputImage extent]];
    UIImage *newImg = [UIImage imageWithCGImage:cgimg];
    CGImageRelease(cgimg);
    return newImg;
}
Basically, you have to pre-process all the images with the filter, load them into an array, and set that array as the animationImages property.
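For example, the frames could be pre-filtered roughly like this before starting the animation (a sketch only; it assumes applyFilterToImage: from the sample above is declared on the current class, and _arrimg is the array from the question):
NSMutableArray *filteredImages = [NSMutableArray arrayWithCapacity:_arrimg.count];
for (UIImage *frame in _arrimg) {
    // Run each frame through the Core Image filter once, up front.
    [filteredImages addObject:[[self class] applyFilterToImage:frame]];
}
imgAnimation.animationImages = filteredImages;
[imgAnimation startAnimating];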
For the various filters you can also refer to this project from GitHub: GPUImage, an open-source framework for image effects. Download the source code from there.
Hope it helps :)

CIGaussianBlur and CIAffineClamp on iOS 6

I am trying to blur an image using Core Image on iOS 6 without having a noticeable black border. Apple's documentation states that using a CIAffineClamp filter can achieve this, but I'm not able to get an output image from the filter. Here's what I tried; unfortunately an empty image is created when I access [clampFilter outputImage]. If I only perform the blur, an image is produced, but with the dark inset border.
CIImage *inputImage = [[CIImage alloc] initWithCGImage:self.CGImage];
CIContext *context = [CIContext contextWithOptions:nil];
CGAffineTransform transform = CGAffineTransformIdentity;
CIFilter *clampFilter = [CIFilter filterWithName:@"CIAffineClamp"];
[clampFilter setValue:inputImage forKey:kCIInputImageKey];
[clampFilter setValue:[NSValue valueWithBytes:&transform objCType:@encode(CGAffineTransform)] forKey:@"inputTransform"];
CIImage *outputImage = [clampFilter outputImage];
CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"
                                  keysAndValues:kCIInputImageKey, outputImage, @"inputRadius", [NSNumber numberWithFloat:radius], nil];
outputImage = [blurFilter outputImage];
CGImageRef cgimg = [context createCGImage:outputImage fromRect:[outputImage extent]];
UIImage *blurredImage = [UIImage imageWithCGImage:cgimg];
CGImageRelease(cgimg);
The CIAffineClamp filter is setting your extent as infinite, which then confounds your context. Try saving off the pre-clamp extent CGRect and then supplying that rect to the context's -createCGImage:fromRect: call.
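Applied to the question's code, that means capturing the extent before the clamp and rendering with it instead of the (now infinite) output extent, roughly like this (a sketch based on the code above):
CIImage *inputImage = [[CIImage alloc] initWithCGImage:self.CGImage];
CGRect preClampExtent = [inputImage extent]; // finite extent, saved before clamping
// ... set up clampFilter and blurFilter exactly as in the question ...
CIImage *outputImage = [blurFilter outputImage];
// Render using the saved rect, not [outputImage extent], which is infinite after the clamp.
CGImageRef cgimg = [context createCGImage:outputImage fromRect:preClampExtent];
UIImage *blurredImage = [UIImage imageWithCGImage:cgimg];
CGImageRelease(cgimg);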

Setting a UIImageView with ALAsset

I have a UIImageView and would like to set it to the contents of an ALAsset.
How are the ALAsset's contents used to set a UIImage?
Try this. asset is an ALAsset, by the way.
UIImage *image = [UIImage imageWithCGImage:[asset thumbnail] scale:1.0 orientation:[[asset defaultRepresentation] orientation]];
UIImageView *image_view = [[UIImageView alloc] initWithImage:image];
As mentioned by Clayton and Aaron Brager, the key is the imageWithCGImage: method of the UIImage class.
There are two ways to get the image content:
1) asset.thumbnail or asset.aspectRatioThumbnail
2) via a representation of the asset (there is a default representation, plus representations for specific UTIs), which gives you either:
- a prepared CGImage: CGImageWithOptions:, fullResolutionImage, or fullScreenImage
- the raw data: getBytes:fromOffset:length:error: (if you want to keep EXIF etc., use imageWithData: instead of imageWithCGImage:)
You may refer to http://developer.apple.com/library/ios/#documentation/AssetsLibrary/Reference/ALAssetRepresentation_Class/Reference/Reference.html#//apple_ref/doc/c_ref/ALAssetRepresentation for more details.
ALAssetRepresentation *rep = [self defaultRepresentation];
UIImage *image = [UIImage imageWithCGImage:[rep fullScreenImage]];
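If you need the full-size image rather than the screen-optimized one, the same pattern works with fullResolutionImage; passing the representation's scale and orientation keeps the result upright (a sketch only; asset is the ALAsset from the first snippet, and imageView is whatever UIImageView you are setting):
ALAssetRepresentation *rep = [asset defaultRepresentation];
// fullResolutionImage is not rotated or scaled, so pass the representation's values along.
UIImage *image = [UIImage imageWithCGImage:[rep fullResolutionImage]
                                     scale:[rep scale]
                               orientation:(UIImageOrientation)[rep orientation]];
imageView.image = image;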