Creating UIImage from CIImage - iOS 5

I am using some Core Image filters to process an image. Applying the filter to my input image results in an output image called filterOutputImage, of type CIImage.
I now wish to display that image, and tried doing:
self.modifiedPhoto = [UIImage imageWithCIImage:filterOutputImage];
self.photoImageView.image = self.modifiedPhoto;
The view however is blank - nothing is being displayed.
If I add logging statements that print details about both filterOutputImage and self.modifiedPhoto, they show that both variables appear to contain legitimate image data: their sizes are reported and the objects are not nil.
So after doing some Googling, I found a solution that requires going through a CGImage stage, viz.:
CGImageRef outputImageRef = [context createCGImage:filterOutputImage fromRect:[filterOutputImage extent]];
self.modifiedPhoto = [UIImage imageWithCGImage:outputImageRef scale:self.originalPhoto.scale orientation:self.originalPhoto.imageOrientation];
self.photoImageView.image = self.modifiedPhoto;
CGImageRelease(outputImageRef);
This second approach works: I am getting the correct image displayed in the view.
Can someone please explain why my first attempt failed? What am I doing wrong with the imageWithCIImage: method that results in an image that seems to exist but can't be displayed? Is it always necessary to "pass through" a CGImage stage in order to generate a UIImage from a CIImage?
Hoping someone can clear up my confusion :)
H.

This should do it!
- (UIImage *)makeUIImageFromCIImage:(CIImage *)ciImage
{
    self.cicontext = [CIContext contextWithOptions:nil];
    // Render the CIImage into a CGImage, then wrap that in a UIImage
    CGImageRef processedCGImage = [self.cicontext createCGImage:ciImage
                                                       fromRect:[ciImage extent]];
    UIImage *returnImage = [UIImage imageWithCGImage:processedCGImage];
    CGImageRelease(processedCGImage);
    return returnImage;
}
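Using the names from the question, the failing snippet would then become:

self.modifiedPhoto = [self makeUIImageFromCIImage:filterOutputImage];
self.photoImageView.image = self.modifiedPhoto;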

I assume that self.photoImageView is a UIImageView? If so, ultimately, it is going to call -[UIImage CGImage] on the UIImage and then pass that CGImage as the contents property of a CALayer.
(See comments: my details were wrong)
Per the UIImage documentation for -[UIImage CGImage]:
If the UIImage object was initialized using a CIImage object, the value of the property is NULL.
So the UIImageView calls -CGImage, but that results in NULL, so nothing is displayed.
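A quick way to verify this yourself (a hypothetical log statement in your own code):

UIImage *img = [UIImage imageWithCIImage:filterOutputImage];
// size is reported normally, but CGImage logs as 0x0 (NULL)
NSLog(@"size: %@, CGImage: %p", NSStringFromCGSize(img.size), img.CGImage);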
I haven't tried this, but you could try making a custom UIView and then using UIImage's -draw... methods in -[UIView drawRect:] to draw the CIImage.
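Untested, as noted, but a minimal sketch of that idea (the class name is made up; assumes ARC):

// Hypothetical UIView subclass that draws a CIImage-backed UIImage.
// UIImage's -drawInRect: can render it even though its .CGImage is NULL.
@interface CIBackedImageView : UIView
@property (nonatomic, strong) UIImage *image; // created via +imageWithCIImage:
@end

@implementation CIBackedImageView
- (void)drawRect:(CGRect)rect
{
    [self.image drawInRect:self.bounds];
}
@end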

Related

How do I take a screenshot of a view that has a transform?

I'm able to capture a cropped view with this code:
- (UIImage *)captureScreenInRect:(CGRect)captureFrame
{
    CALayer *layer = self.view.layer;
    UIGraphicsBeginImageContext(self.view.bounds.size);
    CGContextClipToRect(UIGraphicsGetCurrentContext(), captureFrame);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return screenImage;
}
But I have an image view zoomed in with a transform, and the capture doesn't show it to scale.
How do I capture EXACTLY what the user sees on the screen?
The Stack Overflow question "renderInContext:" and CATransform3D has more info, but the gist is:
QCCompositionLayer, CAOpenGLLayer, and QTMovieLayer layers are not rendered. Additionally, layers that use 3D transforms are not rendered, nor are layers that specify backgroundFilters, filters, compositingFilter, or a mask values.
(from the CALayer docs).
More info is also available in this technical Q&A: http://developer.apple.com/library/ios/#qa/qa1703/_index.html
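Condensed from that technical Q&A (it honors each window's 2D transform; 3D transforms are still not rendered, and details such as status-bar handling are omitted here):

UIGraphicsBeginImageContextWithOptions([UIScreen mainScreen].bounds.size, NO, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
for (UIWindow *window in [[UIApplication sharedApplication] windows]) {
    CGContextSaveGState(context);
    // apply each window's position, anchor point, and (2D) transform
    CGContextTranslateCTM(context, window.center.x, window.center.y);
    CGContextConcatCTM(context, window.transform);
    CGContextTranslateCTM(context,
                          -window.bounds.size.width * window.layer.anchorPoint.x,
                          -window.bounds.size.height * window.layer.anchorPoint.y);
    [window.layer renderInContext:context];
    CGContextRestoreGState(context);
}
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();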
If your app is not going to the app store you can use the undocumented UIGetScreenImage API:
// Define at top of implementation file
CGImageRef UIGetScreenImage(void);
...
- (void)buttonPressed:(UIButton *)button
{
    // Capture the screen
    CGImageRef screen = UIGetScreenImage();
    UIImage *image = [UIImage imageWithCGImage:screen];
    CGImageRelease(screen);
    // Save the captured image to the photo album
    UIImageWriteToSavedPhotosAlbum(image, self,
        @selector(image:didFinishSavingWithError:contextInfo:), nil);
}
(from John Muchow)
However, use of this API will cause your app to be rejected during App Store review.
I have been unable to find any other workarounds.

How to draw a Mirrored UIImage in UIView with drawRect?

I load an image and create a mirrored version this way:
originalImg = [UIImage imageNamed:@"ms06.png"];
mirrorImg = [UIImage imageWithCGImage:[originalImg CGImage] scale:1.0 orientation:UIImageOrientationUpMirrored];
Then I set the above UIImage object on a subclass of UIView and override drawRect::
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    CGAffineTransform t0 = CGContextGetCTM(context);
    CGContextConcatCTM(context, t0);
    CGContextDrawImage(context, self.bounds, [image CGImage]);
    CGContextRestoreGState(context);
}
No matter which image I set, the displayed image is always the original one; the mirrored image is never displayed by the UIView subclass.
I'm sure the mirrored image is set on the UIView correctly, because the debug info shows that its orientation member variable equals 4, which means UIImageOrientationUpMirrored, while the original image's equals 0.
Could anyone help me with this problem? Thanks.
I also tried displaying the mirrored image in a UIImageView with setImage:, and it works correctly. By the way, I found that the breakpoint in drawRect: is never hit when calling setImage: on the UIImageView. How can we define the drawing behavior (such as drawing a line above the image) when loading an image into a UIImageView?
You mirror the image at the UIImage level. This returns a new UIImage, but the underlying CGImage stays the same. If you do some NSLogs, you will notice this.
You can also apply the transformation at the UIImage level. If you use this approach, I would suggest using originalImg.scale instead of 1.0, so the code works on both retina and non-retina displays:
[UIImage imageWithCGImage:[originalImg CGImage] scale:originalImg.scale orientation:UIImageOrientationUpMirrored];
If you really need to mirror the CGImage, take a look at NYXImagesKit on GitHub (see UIImage+Rotating.m)
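The practical consequence: CGContextDrawImage draws the raw CGImage and ignores the UIImage's orientation flag, while UIImage's own drawing methods honor it. So the drawRect: from the question could simply be (image being the view's UIImage ivar, as above):

- (void)drawRect:(CGRect)rect
{
    // -drawInRect: honors imageOrientation, unlike CGContextDrawImage
    [image drawInRect:self.bounds];
}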

Loading images from documents asynchronously

I need to read images from NSDocumentDirectory into multiple UIImageViews asynchronously, so it won't block the UI.
I know I can use performSelectorInBackground: to load a UIImage, but then how can I associate it with the dynamic UIImageView?
One convenient way is to use blocks, something like:
[self loadFullImageAt:imagePath completion:^(UIImage *image){
    self.imageView.image = image;
}];
The loading method would read the image as data, since UIImage otherwise defers loading the image data until it is first accessed. It's also a good idea to decompress the image while still on the background thread, so the main thread doesn't have to do it when the image is first used.
- (void)loadFullImageAt:(NSString *)imageFilePath completion:(MBLoaderCompletion)completion {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
        NSData *imageData = [NSData dataWithContentsOfFile:imageFilePath];
        UIImage *image = nil;
        if (imageData) {
            image = [[[UIImage alloc] initWithData:imageData] decodedImage];
        }
        dispatch_async(dispatch_get_main_queue(), ^{
            completion(image);
        });
    });
}
The callback is defined as:
typedef void (^MBLoaderCompletion)(UIImage *image);
Here's an UIImage category that implements the decompression code:
UIImage+Decode.h
#import <UIKit/UIKit.h>

@interface UIImage (Decode)
- (UIImage *)decodedImage;
@end
UIImage+Decode.m
#import "UIImage+Decode.h"

@implementation UIImage (Decode)

- (UIImage *)decodedImage {
    CGImageRef imageRef = self.CGImage;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 CGImageGetWidth(imageRef),
                                                 CGImageGetHeight(imageRef),
                                                 8,
                                                 // width * 4 bytes per row is always enough for 32-bit pixels
                                                 CGImageGetWidth(imageRef) * 4,
                                                 // the system works in RGB, so set it explicitly
                                                 colorSpace,
                                                 // this byte order avoids an extra conversion when the image is displayed
                                                 kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
    CGColorSpaceRelease(colorSpace);
    if (!context) return nil;
    // drawing into the bitmap context forces the image data to be decompressed now
    CGRect rect = (CGRect){CGPointZero, {CGImageGetWidth(imageRef), CGImageGetHeight(imageRef)}};
    CGContextDrawImage(context, rect, imageRef);
    CGImageRef decompressedImageRef = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    UIImage *decompressedImage = [[UIImage alloc] initWithCGImage:decompressedImageRef scale:self.scale orientation:self.imageOrientation];
    CGImageRelease(decompressedImageRef);
    return decompressedImage;
}

@end
The sample code provided here assumes that we're using ARC.
When you say "dynamic" UIImageView, are these created programmatically on a UIScrollView? On a UITableView? samfisher is quite right on the basic question, but the details differ a little based upon how you created the UIImageView (e.g. for a UITableView, you need to make sure the cell is still visible and hasn't been dequeued; for a UIScrollView, you might want to load the image only if the UIImageView is still visible on screen, especially if the images are large or numerous).
But the basic idea is that you might do something like:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    UIImage *image = [self getTheImage];
    // now that you have the image, dispatch the update of the UI back to the main queue
    dispatch_async(dispatch_get_main_queue(), ^{
        // if the image view is still visible, update it
    });
});
Note that you invoke the retrieval of the image on some background queue or thread, but make sure to update the UI back on the main thread!
If you're updating a scrollview, you might want to do some checking that the view is still visible, such as contemplated here or here. If you're updating a tableview, perhaps something like this which checks if the cell is still visible. It all depends upon what you're trying to do.
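For the table view case, a minimal sketch of that visibility check (assuming tableView and indexPath are captured from the enclosing method):

dispatch_async(dispatch_get_main_queue(), ^{
    // cellForRowAtIndexPath: returns nil if the row is no longer visible
    UITableViewCell *cell = [tableView cellForRowAtIndexPath:indexPath];
    if (cell) {
        cell.imageView.image = image;
        [cell setNeedsLayout];
    }
});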
You can use NSThread or a dispatch queue to create threads that load up the images for your UIImageViews.

(iPhone) Am I creating a leak when creating a new image from an image?

I have the following code in a UIImage+Scale.h category.
- (UIImage *)scaleToSize:(CGSize)size
{
    UIGraphicsBeginImageContext(size);
    [self drawInRect:CGRectMake(0, 0, size.width, size.height)];
    // is this scaledImage auto-released?
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaledImage;
}
I use the image obtained as above like this:
UIImage *image = [[UIImage alloc] initWithData:myData];
image = [image scaleToSize:size]; // <- wouldn't this create a leak, since the image (before scaling) is lost somewhere?
I guess the above code works fine if the image was first created autoreleased. But if the image was created using alloc, it would create a leak, to my limited knowledge.
How should I change scaleToSize: to guard against it?
Thank you
EDIT -
I'd like to use alloc (or retain)/release on UIImage so that I can keep the number of UIImages in memory at any point small.
(I'm loading many UIImages in a loop, and the device can't take it.)
Notice that your code could be rewritten as:
UIImage *image = [[UIImage alloc] initWithData:myData];
UIImage *scaledImage = [image scaleToSize:size];
image = scaledImage;
so let’s see what happens:
image is obtained via alloc, hence you own that object;
scaledImage is obtained via UIGraphicsGetImageFromCurrentImageContext(), which returns an autoreleased object;
you own the original image but you don’t own scaledImage, so you are responsible for releasing the original image; otherwise you have a leak.
In your code, you use a single variable to refer to both objects: the original image and the scaled image. This doesn’t change the fact that you own the first image, hence you need to release it to avoid leaks. Since you lose the original image reference by using the same variable, one common idiom is to send -autorelease to the original object:
UIImage *image = [[[UIImage alloc] initWithData:myData] autorelease];
image = [image scaleToSize:size];
Or, if you’d rather release the original image instead of autoreleasing it,
UIImage *image = [[UIImage alloc] initWithData:myData];
UIImage *scaledImage = [image scaleToSize:size];
[image release];
// use scaledImage from this point on, or assign image = scaledImage
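And since the edit mentions loading many UIImages in a loop, draining an autorelease pool each iteration keeps the autoreleased scaled images from piling up. A sketch under manual reference counting (imageDataArray and storeScaledImage: are hypothetical names):

for (NSData *myData in imageDataArray) {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    UIImage *image = [[UIImage alloc] initWithData:myData];
    UIImage *scaledImage = [image scaleToSize:size];
    [image release];                     // release the full-size image promptly
    [self storeScaledImage:scaledImage]; // hypothetical consumer; it must retain scaledImage
    [pool drain];                        // flushes the autoreleased scaledImage
}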
IMO, it doesn’t make sense to change scaleToSize:. It is an instance method that creates an (autoreleased) image based on a given UIImage instance. It’s similar to -[NSString stringByAppendingString:], which creates a (an autoreleased) string based on a given NSString instance. It doesn’t and shouldn’t care about the ownership of the original string, and the same applies to your scaleToSize: method. How would the method know whether the caller wants to keep the original image?
I’d also rename scaleToSize: to imageByScalingToSize: to make it similar to Cocoa’s naming convention — you’re getting an image by applying an operation to an existing image.
Yeah, you definitely have a leak: the object previously stored in image is not referenced anymore, but it has not been deallocated yet.

Can any of the iPhone collection objects hold an image?

Actually, I want a custom cell that contains 2 image objects and 1 text object, and I decided to make a container for those objects.
So is it possible to hold an image in an object, insert that object into any of the collection objects, and later use that object to display the image inside a cell?
NSArray and NSDictionary both hold objects. These are most likely the collections you'll use with a table view.
The best way to implement what you are trying to do is to use the UIImage class. UIImage wraps a CGImage and does all the memory management for you (if your app is running low on memory, the image data is purged and automatically reloaded when you draw it; pretty cool, huh?). You can also read images from files very easily using this class (a whole bunch of formats are supported).
Look at the documentation for NSArray, NSMutableArray, and UIImage for more information.
// create a UIImage from a JPEG file
// (in practice, pass a full path, e.g. via NSBundle or the documents directory)
UIImage *myImage = [UIImage imageWithContentsOfFile:@"myImage.jpg"];
NSMutableArray *myArray = [NSMutableArray array]; // this is autoreleased, so if you need to keep it around, retain it
[myArray addObject:myImage];

// to draw this image in a UIView's drawRect: method
CGContextRef context = UIGraphicsGetCurrentContext(); // the context to draw into
UIImage *myImage = [myArray lastObject]; // gets the last object in the array; use objectAtIndex: to get an object at a specific index
CGImageRef myCGImage = [myImage CGImage];
CGContextDrawImage(context, rect, myCGImage); // rect is the CGRect passed to drawRect:
There should be no problem with that. Just make sure you are properly retaining it and what not in your class.
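For example, a hypothetical container for the custom cell's contents could declare retained properties (manual reference counting):

@interface CellContent : NSObject // hypothetical container class
@property (nonatomic, retain) UIImage *firstImage;
@property (nonatomic, retain) UIImage *secondImage;
@property (nonatomic, copy) NSString *labelText;
@end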