How to duplicate (copy) a CIImage - iPhone

How can I duplicate a CIImage from another CIImage without first rendering it to a CGImage?

CIImage *copiedImage = [originalImage copy];
As you can read in the documentation, CIImage conforms to NSCopying.

Related

EAGLContext to UIImage

In Objective-C, I was able to render the EAGLContext to a UIImage by using glReadPixels to read the pixel data from the framebuffer, creating a CGImage from that pixel data, and then creating a UIImage from the CGImage.
I am trying a new approach in Swift, but it is returning an empty image. I would like to create a GLKView from the EAGLContext and then use its snapshot property to get the UIImage. The myEaglContext passed into the GLKView constructor has data on it.
let glkView = GLKView(frame: CGRect(x: 0, y: 0, width: self.frame.size.width, height: self.frame.size.height), context: myEaglContext)
glkView.bindDrawable()
glkView.display()
let image : UIImage = glkView.snapshot
Does anyone know why this approach is not working?
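For reference, a rough Objective-C sketch of the glReadPixels approach described above might look like the following (untested; the method name and the assumption of a bound framebuffer are illustrative):
- (UIImage *)imageFromCurrentFramebufferWithWidth:(GLint)width height:(GLint)height
{
    // Read the raw RGBA pixels out of the currently bound framebuffer.
    NSInteger dataLength = width * height * 4;
    GLubyte *rawData = (GLubyte *)malloc(dataLength);
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, rawData);

    // Wrap the pixel data in a CGImage.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, rawData, dataLength, NULL);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef cgImage = CGImageCreate(width, height, 8, 32, width * 4, colorSpace,
                                       kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                       provider, NULL, NO, kCGRenderingIntentDefault);

    // glReadPixels returns rows bottom-up, so flip vertically while drawing into a UIImage context.
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, height), NO, 1.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(context, 0, height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    CGImageRelease(cgImage);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);
    free(rawData);
    return image;
}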

Creating UIImage from CIImage

I am using some CoreImage filters to process an image. Applying the filter to my input image results in an output image called filterOutputImage of type CIImage.
I now wish to display that image, and tried doing:
self.modifiedPhoto = [UIImage imageWithCIImage:filterOutputImage];
self.photoImageView.image = self.modifiedPhoto;
The view however is blank - nothing is being displayed.
If I add logging statements that print out details about both filterOutputImage and self.modifiedPhoto, those logging statements are showing me that both those vars appear to contain legitimate image data: their size is being reported and the objects are not nil.
So after doing some Googling, I found a solution that requires going through a CGImage stage, viz.:
CGImageRef outputImageRef = [context createCGImage:filterOutputImage fromRect:[filterOutputImage extent]];
self.modifiedPhoto = [UIImage imageWithCGImage:outputImageRef scale:self.originalPhoto.scale orientation:self.originalPhoto.imageOrientation];
self.photoImageView.image = self.modifiedPhoto;
CGImageRelease(outputImageRef);
This second approach works: I am getting the correct image displayed in the view.
Can someone please explain to me why my first attempt failed? What am I doing wrong with the imageWithCIImage method that is resulting in an image that seems to exist but can't be displayed? Is it always necessary to "pass through" a CGImage stage in order to generate a UIImage from a CIImage?
Hoping someone can clear up my confusion :)
H.
This should do it!
- (UIImage *)makeUIImageFromCIImage:(CIImage *)ciImage
{
    self.cicontext = [CIContext contextWithOptions:nil];

    // Finally: render the CIImage through a CGImage, then wrap it in a UIImage.
    CGImageRef processedCGImage = [self.cicontext createCGImage:ciImage
                                                        fromRect:[ciImage extent]];
    UIImage *returnImage = [UIImage imageWithCGImage:processedCGImage];
    CGImageRelease(processedCGImage);
    return returnImage;
}
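In the question's setup, that helper could then be used roughly like this (assuming it lives on the same class that owns photoImageView):
self.modifiedPhoto = [self makeUIImageFromCIImage:filterOutputImage];
self.photoImageView.image = self.modifiedPhoto;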
I assume that self.photoImageView is a UIImageView? If so, ultimately, it is going to call -[UIImage CGImage] on the UIImage and then pass that CGImage as the contents property of a CALayer.
(See comments: my details were wrong)
Per the UIImage documentation for -[UIImage CGImage]:
If the UIImage object was initialized using a CIImage object, the
value of the property is NULL.
So the UIImageView calls -CGImage, but that results in NULL, so nothing is displayed.
I haven't tried this, but you could try making a custom UIView and then using UIImage's -draw... methods in -[UIView drawRect:] to draw the CIImage.
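An untested sketch of that idea, where CIImageView is just a hypothetical class name:
@interface CIImageView : UIView
@property (nonatomic, strong) CIImage *ciImage;
@end

@implementation CIImageView

- (void)setCiImage:(CIImage *)ciImage
{
    _ciImage = ciImage;
    [self setNeedsDisplay]; // redraw whenever the image changes
}

- (void)drawRect:(CGRect)rect
{
    if (self.ciImage == nil) {
        return;
    }
    // A CIImage-backed UIImage can still be drawn, even though its CGImage property is NULL.
    UIImage *wrapper = [UIImage imageWithCIImage:self.ciImage];
    [wrapper drawInRect:self.bounds];
}

@end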

How do I perform a fast pixellation filter on an image?

I have a little problem with my pixellation image processing algorithm.
At the beginning, I load the image into an array of type unsigned char *.
After that, when needed, I modify this data and have to update the image.
This updating takes too long. This is how I am doing it:
CGDataProviderRef dataProvider = CGDataProviderCreateWithData(.....);
CGImageRef cgImage = CGImageCreate(....);
[imageView setImage:[UIImage imageWithCGImage:cgImage]];
Everything is working but it's very slow to process a large image. I tried running this on a background thread, but that didn't help.
So basically, this takes too long. Does anyone have any idea how to improve it?
As others have suggested, you'll want to offload this work from the CPU to the GPU in order to have any kind of decent processing performance on these mobile devices.
To that end, I've created an open source framework for iOS called GPUImage that makes it relatively simple to do this kind of accelerated image processing. It does require OpenGL ES 2.0 support, but every iOS device sold for the last couple of years has this (stats show something like 97% of all iOS devices in the field do).
As part of that framework, one of the initial filters I've bundled is a pixellation one. The SimpleVideoFilter sample application shows how to use this, with a slider that controls the pixel width in the processed image.
This filter is the result of a fragment shader with the following GLSL code:
varying highp vec2 textureCoordinate;

uniform sampler2D inputImageTexture;
uniform highp float fractionalWidthOfPixel;

void main()
{
    highp vec2 sampleDivisor = vec2(fractionalWidthOfPixel);
    highp vec2 samplePos = textureCoordinate - mod(textureCoordinate, sampleDivisor);
    gl_FragColor = texture2D(inputImageTexture, samplePos);
}
In my benchmarks, GPU-based filters like this perform 6-24X faster than equivalent CPU-bound processing routines for images and video on iOS. The above-linked framework should be reasonably easy to incorporate in an application, and the source code is freely available for you to customize however you see fit.
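For still images (rather than video), a minimal sketch of using the framework's pixellation filter might look like this; the exact class and method names (GPUImagePixellateFilter, imageByFilteringImage:) may differ between versions of the framework, so check its current headers:
UIImage *inputImage = [UIImage imageNamed:@"test"];

GPUImagePixellateFilter *pixellateFilter = [[GPUImagePixellateFilter alloc] init];
pixellateFilter.fractionalWidthOfAPixel = 0.05; // larger values produce bigger pixels

UIImage *filteredImage = [pixellateFilter imageByFilteringImage:inputImage];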
How about using the Core Image filter named CIPixellate?
Here is a code snippet showing how I implemented it. You can play with kCIInputScaleKey to get the intensity you want:
// initialize context and image
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *logo = [CIImage imageWithData:UIImagePNGRepresentation([UIImage imageNamed:@"test"])];

// set filter and properties
CIFilter *filter = [CIFilter filterWithName:@"CIPixellate"];
[filter setValue:logo forKey:kCIInputImageKey];
[filter setValue:[[CIVector alloc] initWithX:150 Y:150] forKey:kCIInputCenterKey]; // default: 150, 150
[filter setValue:[NSNumber numberWithDouble:100.0] forKey:kCIInputScaleKey]; // default: 8.0

// render image
CIImage *result = (CIImage *)[filter valueForKey:kCIOutputImageKey];
CGRect extent = result.extent;
CGImageRef cgImage = [context createCGImage:result fromRect:extent];

// result
UIImage *image = [[UIImage alloc] initWithCGImage:cgImage];
CGImageRelease(cgImage); // the CGImageRef follows the Create rule and must be released
Here is the official Apple Filter Tutorial and a List of available Filters.
Update #1
I just wrote a method to execute the rendering work in the background:
- (void)pixelateImage:(UIImage *)image withIntensity:(NSNumber *)intensity completionHandler:(void (^)(UIImage *pixelatedImage))handler {
    // async task
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{

        // initialize context and image
        CIContext *context = [CIContext contextWithOptions:nil];
        CIImage *logo = [CIImage imageWithData:UIImagePNGRepresentation(image)];

        // set filter and properties
        CIFilter *filter = [CIFilter filterWithName:@"CIPixellate"];
        [filter setValue:logo forKey:kCIInputImageKey];
        [filter setValue:[[CIVector alloc] initWithX:150 Y:150] forKey:kCIInputCenterKey]; // default: 150, 150
        [filter setValue:intensity forKey:kCIInputScaleKey]; // default: 8.0

        // render image
        CIImage *result = (CIImage *)[filter valueForKey:kCIOutputImageKey];
        CGRect extent = result.extent;
        CGImageRef cgImage = [context createCGImage:result fromRect:extent];

        // result
        UIImage *pixelatedImage = [[UIImage alloc] initWithCGImage:cgImage];
        CGImageRelease(cgImage);

        // dispatch to main thread
        dispatch_async(dispatch_get_main_queue(), ^{
            handler(pixelatedImage);
        });
    });
}
Call it like this:
[self pixelateImage:[UIImage imageNamed:@"test"] withIntensity:[NSNumber numberWithDouble:100.0] completionHandler:^(UIImage *pixelatedImage) {
    self.logoImageView.image = pixelatedImage;
}];
The iPhone is not a great device for computationally intensive tasks like image manipulation. If you're looking to improve performance when displaying very high resolution images (possibly while performing some image processing at the same time), look into using CATiledLayer. It's made to display content in tiled chunks, so you can display and process data only as needed for individual tiles.
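A minimal sketch of a CATiledLayer-backed view (the class name is illustrative, and the drawing code would be whatever renders your content for a given sub-rect):
#import <QuartzCore/QuartzCore.h>

@interface TiledContentView : UIView
@end

@implementation TiledContentView

+ (Class)layerClass
{
    // Back this view with a CATiledLayer instead of a plain CALayer.
    return [CATiledLayer class];
}

- (void)drawRect:(CGRect)rect
{
    // With a CATiledLayer, drawRect: is called once per tile, on background threads,
    // so only the portion of the content intersecting `rect` needs to be drawn here.
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [UIColor lightGrayColor].CGColor);
    CGContextFillRect(context, rect);
    // ...draw the part of the high-resolution content that falls inside `rect`...
}

@end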
Converted @Kai Burghardt's answer to Swift 3:
func pixelateImage(_ image: UIImage, withIntensity intensity: Int) -> UIImage {
    // initialize context and image
    let context = CIContext(options: nil)
    let logo = CIImage(data: UIImagePNGRepresentation(image)!)!

    // set filter and properties
    let filter = CIFilter(name: "CIPixellate")
    filter?.setValue(logo, forKey: kCIInputImageKey)
    filter?.setValue(CIVector(x: 150, y: 150), forKey: kCIInputCenterKey)
    filter?.setValue(intensity, forKey: kCIInputScaleKey)

    // render image
    let result = filter?.value(forKey: kCIOutputImageKey) as! CIImage
    let extent = result.extent
    let cgImage = context.createCGImage(result, from: extent)

    // result
    let processedImage = UIImage(cgImage: cgImage!)
    return processedImage
}
Call this code like so:
self.myImageView.image = pixelateImage(UIImage(named: "test")!, withIntensity: 100)
Actually, it's as simple as this. A higher input scale value means more pixellation.
let filter = CIFilter(name: "CIPixellate")
filter?.setValue(inputImage, forKey: kCIInputImageKey)
filter?.setValue(30, forKey: kCIInputScaleKey)
let pixellatedCIImage = filter?.outputImage
The result is a CIImage (optional, since the filter and its output are optionals), which you can convert to a UIImage using
UIImage(ciImage: pixellatedCIImage!)
I agree with @Xorlev. The only thing I would add (provided that you are using a lot of floating-point operations) is to check whether you are building for armv6 with the Thumb ISA. In that case, compiling without the -mthumb option might improve performance.

How can I save a Bitmap Image, represented by a CGContextRef, to the iPhone's hard drive?

I have a little sketch program that I've managed to throw together using Quartz2D, but I'm very new to the iOS platform and I'm having trouble figuring out how to allow the user to save their sketch. I have a CGContextRef that contains the sketch; how can I save it so that it can later be retrieved? And once it's saved, how do I retrieve it?
Thanks so much in advance for your help! I'm going to continue reading up on this now...
There are different types of CGContexts. You most likely have a screen-based context, but you would need either a bitmap context or a PDF context to create a bitmap or a PDF file.
See the UIKit Functions Reference. Functions that should be of interest are:
UIGraphicsBeginImageContext()
UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
UIImageJPEGRepresentation()
UIImageWriteToSavedPhotosAlbum()
UIGraphicsBeginPDFContextToFile()
UIGraphicsEndPDFContext()
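A rough sketch of how those functions fit together to capture and save the current drawing (sketchView and filePath are illustrative names, not from the question):
UIGraphicsBeginImageContext(sketchView.bounds.size);
[sketchView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Write a JPEG to disk...
NSData *jpegData = UIImageJPEGRepresentation(snapshot, 0.9);
[jpegData writeToFile:filePath atomically:YES];

// ...or save it to the photo library instead.
UIImageWriteToSavedPhotosAlbum(snapshot, nil, NULL, NULL);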
Here's how I did it...
Saving
CGImageRef imageRef = CGBitmapContextCreateImage(bitmapContext);
UIImage *image = [[UIImage alloc] initWithCGImage:imageRef];
CGImageRelease(imageRef); // imageRef follows the Create rule, so release it here
NSData *imageData = UIImagePNGRepresentation(image);
[imageData writeToFile:filePath atomically:YES];
Retrieving
UIImage* image = [UIImage imageWithContentsOfFile:filePath];
CGImageRef imageRef = [image CGImage];

How to add a CGImageRef to a NSDictionary?

I have a CGImageRef variable and a CGRect (so no object pointers), and I need to add them to an NSDictionary. Like NSArray, an NSDictionary only accepts objects. How can you add a CGImageRef or a CGRect anyway?
CGImageRef is already a pointer, but it's not (AFAIK) a pointer to a valid Cocoa object. You can turn a CGImage into a Cocoa NSBitmapImageRep with [[NSBitmapImageRep alloc] initWithCGImage:someimageref]. And NSValue is there to wrap primitive types like CGRect.
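For example, on iOS a hedged sketch of that idea (using UIImage in place of the AppKit-only NSBitmapImageRep, with illustrative variable names) could look like:
UIImage *imageWrapper = [UIImage imageWithCGImage:someImageRef];
NSValue *rectValue = [NSValue valueWithCGRect:someRect];
NSDictionary *info = @{ @"image" : imageWrapper, @"rect" : rectValue };

// Reading the values back out:
CGImageRef retrievedImage = [info[@"image"] CGImage];
CGRect retrievedRect = [info[@"rect"] CGRectValue];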