Can any of the iPhone collection objects hold an image? - iphone

Actually, I want a custom cell that contains two image objects and one text object, and I decided to make a container for those objects.
So is it possible to store an image in an object, insert that object into one of the collection classes, and later use that object to display inside the cell?

NSArray and NSDictionary both hold objects. These are most likely the collections you'll use with a table view.
The best way to implement what you are trying to do is to use the UIImage class. UIImage wraps a CGImage and does all the memory management for you (if your app is running low on memory, the image data is purged and automatically reloaded the next time you draw it; pretty cool, huh?). You can also read images from files very easily using this class (a whole bunch of formats are supported).
Look at the documentation for NSArray, NSMutableArray, and UIImage for more information.
// Create a UIImage from a JPEG in the app bundle.
// Note: imageWithContentsOfFile: needs a full path, not just a file name.
NSString *path = [[NSBundle mainBundle] pathForResource:@"myImage" ofType:@"jpg"];
UIImage *myImage = [UIImage imageWithContentsOfFile:path];
NSMutableArray *myArray = [NSMutableArray array]; // this is autoreleased, so if you need to keep it around, retain it
[myArray addObject:myImage];
// To draw this image in a UIView's drawRect: method:
CGContextRef context = UIGraphicsGetCurrentContext(); // you need a context to draw into
UIImage *myImage = [myArray lastObject]; // gets the last object in the array; use objectAtIndex: to get an object at a specific index
CGImageRef myCGImage = [myImage CGImage]; // CGImageRef is already a pointer type, so no extra *
CGContextDrawImage(context, rect, myCGImage); // rect is the CGRect passed to drawRect:

There should be no problem with that. Just make sure you are properly retaining it and what not in your class.

Related

Is it possible to determine if a UIImage is stretchable?

I'm trying to reuse a small chunk of code inside a custom button class. For this to work I need to pass in either non-stretchable images (an icon) or a stretchable image (a 'swoosh'). Within the code I need to set the rect to draw into, so ideally I'd like to simply determine whether the image is stretchable or not: if it isn't, I draw it at the size of the image; if it is, I draw it at the bounds of the containing rect.
From my investigation so far capInsets (iOS 5) or leftCapWidth/topCapHeight (pre iOS 5) are not useful for this.
Is there something buried in the core or quartz information I can use?
Just curious, for now I'm coding around it with an extra parameter.
**Edit:** I've since read through CGImageRef and the CIImage equivalent.
As far as I can tell there is no such information that we can access to identify such images, which begs the question how does the OS know?
There is no way to detect this unless you have some intense image analysis (which won't be 100% correct). UIImage is essentially some pixels with meta-information, all obtained from the file that you loaded it from. No file formats would have that information.
However, you can encode some information into the file name of the image. If you have an image called foo.png that is stretchable, why not call it foo.stretch.png? Your loading routines can analyse the file name and extract meta-information that you can associate with the UIImage (see http://labs.vectorform.com/2011/07/objective-c-associated-objects/ for associated objects) or by creating your own class that composites a UIImage with meta-information.
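A minimal sketch of the file-name convention combined with associated objects (the names LoadImageNamed, ImageIsStretchable, kStretchableKey, and the ".stretch." marker are all made up for illustration, not an established API):

```objc
#import <UIKit/UIKit.h>
#import <objc/runtime.h>

// Key for the associated "stretchable" flag; the name is an assumption.
static char kStretchableKey;

// Hypothetical loader: stretchability is encoded in the file name,
// e.g. "foo.stretch.png", as suggested above.
UIImage *LoadImageNamed(NSString *name)
{
    UIImage *image = [UIImage imageNamed:name];
    BOOL stretchable = ([name rangeOfString:@".stretch."].location != NSNotFound);
    // Attach the flag to the UIImage itself via an associated object.
    objc_setAssociatedObject(image, &kStretchableKey,
                             [NSNumber numberWithBool:stretchable],
                             OBJC_ASSOCIATION_RETAIN_NONATOMIC);
    return image;
}

BOOL ImageIsStretchable(UIImage *image)
{
    return [objc_getAssociatedObject(image, &kStretchableKey) boolValue];
}
```

The composite-class alternative mentioned above works the same way, just with an explicit wrapper object instead of runtime association.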
Good luck in your research.
When you create a UIImage, its .size property is absolute.
If you mean stretchable relative to your button's view, just check the scale, for example:
- (BOOL)stretchableImage:(UIImage *)aImage toView:(UIView *)aView
{
    // UIView has no .size property; use bounds.size.
    CGFloat scaleW = aView.bounds.size.width / aImage.size.width;
    CGFloat scaleH = aView.bounds.size.height / aImage.size.height;
    // Note: comparing floats with == is fragile; consider a small epsilon.
    if (scaleW == scaleH) {
        if (scaleW < 1)
            return YES;
    }
    return NO;
}
You can check its class.
UIImage *img = [UIImage imageNamed:@"back"];
NSString *imgClass = NSStringFromClass(img.class);
UIImage *imgStretch = [img stretchableImageWithLeftCapWidth:10 topCapHeight:10];
NSString *imgStrClass = NSStringFromClass(imgStretch.class);
NSLog(@"Normal class:\t%@\nStretchable class:\t%@", imgClass, imgStrClass);
Console:
Normal class: UIImage
Stretchable class: _UIResizableImage

Creating UIImage from CIImage

I am using some CoreImage filters to process an image. Applying the filter to my input image results in an output image called filterOutputImage of type CIImage.
I now wish to display that image, and tried doing:
self.modifiedPhoto = [UIImage imageWithCIImage:filterOutputImage];
self.photoImageView.image = self.modifiedPhoto;
The view however is blank - nothing is being displayed.
If I add logging statements that print out details about both filterOutputImage and self.modifiedPhoto, those logging statements are showing me that both those vars appear to contain legitimate image data: their size is being reported and the objects are not nil.
So after doing some Googling, I found a solution that requires going through a CGImage stage (context here is a CIContext):
CGImageRef outputImageRef = [context createCGImage:filterOutputImage fromRect:[filterOutputImage extent]];
self.modifiedPhoto = [UIImage imageWithCGImage:outputImageRef scale:self.originalPhoto.scale orientation:self.originalPhoto.imageOrientation];
self.photoImageView.image = self.modifiedPhoto;
CGImageRelease(outputImageRef);
This second approach works: I am getting the correct image displayed in the view.
Can someone please explain to me why my first attempt failed? What am I doing wrong with the imageWithCIImage method that is resulting in an image that seems to exist but can't be displayed? Is it always necessary to "pass through" a CGImage stage in order to generate a UIImage from a CIImage?
Hoping someone can clear up my confusion :)
H.
This should do it!
- (UIImage *)makeUIImageFromCIImage:(CIImage *)ciImage
{
    self.cicontext = [CIContext contextWithOptions:nil];
    // finally!
    UIImage *returnImage;
    CGImageRef processedCGImage = [self.cicontext createCGImage:ciImage
                                                       fromRect:[ciImage extent]];
    returnImage = [UIImage imageWithCGImage:processedCGImage];
    CGImageRelease(processedCGImage);
    return returnImage;
}
I assume that self.photoImageView is a UIImageView? If so, ultimately, it is going to call -[UIImage CGImage] on the UIImage and then pass that CGImage as the contents property of a CALayer.
(See comments: my details were wrong)
Per the UIImage documentation for -[UIImage CGImage]:
If the UIImage object was initialized using a CIImage object, the
value of the property is NULL.
So the UIImageView calls -CGImage, but that results in NULL, so nothing is displayed.
I haven't tried this, but you could try making a custom UIView and then using UIImage's -draw... methods in -[UIView drawRect:] to draw the CIImage.
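A sketch of that untested idea (the view class and the ciBackedImage property name are made up for illustration):

```objc
#import <UIKit/UIKit.h>

// Hypothetical custom view that draws a CIImage-backed UIImage directly.
@interface CIBackedImageView : UIView
@property (nonatomic, retain) UIImage *ciBackedImage; // created with imageWithCIImage:
@end

@implementation CIBackedImageView
- (void)drawRect:(CGRect)rect
{
    // UIImage's -drawInRect: can render CIImage-backed images even though
    // their CGImage property is NULL, so no CGImage round-trip is needed here.
    [self.ciBackedImage drawInRect:self.bounds];
}
@end
```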

(iphone) am I creating a leak when creating a new image from an image?

I have following code as UIImage+Scale.h category.
- (UIImage *)scaleToSize:(CGSize)size
{
    UIGraphicsBeginImageContext(size);
    [self drawInRect:CGRectMake(0, 0, size.width, size.height)];
    // is this scaledImage auto-released?
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaledImage;
}
I use image obtained as above and use it as following.
UIImage* image = [[UIImage alloc] initWithData: myData];
image = [image scaleToSize:size]; // <- wouldn't this create a leak, since the image (before scaling) is lost here?
I guess the code above works fine if the image was created autoreleased.
But if the image was created using alloc, it would leak, as far as I understand.
How should I change scaleToSize: to guard against it?
Thank you
EDIT -
I'd like to use alloc (or retain)/release on UIImage so that I can keep the number of UIImages in memory at any given point small.
(i'm loading many UIImages in a loop and device can't take it)
Notice that your code could be rewritten as:
UIImage *image = [[UIImage alloc] initWithData:myData];
UIImage *scaledImage = [image scaleToSize:size];
image = scaledImage;
so let’s see what happens:
image is obtained via alloc, hence you own that object
scaledImage is obtained via a method that returns an autoreleased object since UIGraphicsGetImageFromCurrentImageContext() returns an autoreleased object
you own the original image but you don’t own scaledImage. You are responsible for releasing the original image, otherwise you have a leak.
In your code, you use a single variable to refer to both objects: the original image and the scaled image. This doesn’t change the fact that you own the first image, hence you need to release it to avoid leaks. Since you lose the original image reference by using the same variable, one common idiom is to send -autorelease to the original object:
UIImage *image = [[[UIImage alloc] initWithData:myData] autorelease];
image = [image scaleToSize:size];
Or, if you’d rather release the original image instead of autoreleasing it,
UIImage *image = [[UIImage alloc] initWithData:myData];
UIImage *scaledImage = [image scaleToSize:size];
[image release];
// use scaledImage from this point on, or assign image = scaledImage
IMO, it doesn’t make sense to change scaleToSize:. It is an instance method that creates an (autoreleased) image based on a given UIImage instance. It’s similar to -[NSString stringByAppendingString:], which creates a (an autoreleased) string based on a given NSString instance. It doesn’t and shouldn’t care about the ownership of the original string, and the same applies to your scaleToSize: method. How would the method know whether the caller wants to keep the original image?
I’d also rename scaleToSize: to imageByScalingToSize to make it similar to Cocoa’s naming convention — you’re getting an image by applying an operation to an existing image.
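Following that naming suggestion, the category method might look like this (a sketch under manual reference counting, matching the rest of the answer):

```objc
#import <UIKit/UIKit.h>

// UIImage+Scale category, renamed per the Cocoa-style suggestion above.
@implementation UIImage (Scale)

- (UIImage *)imageByScalingToSize:(CGSize)size
{
    UIGraphicsBeginImageContext(size);
    [self drawInRect:CGRectMake(0, 0, size.width, size.height)];
    // The returned image is autoreleased, which matches the naming
    // convention: "imageBy..." implies the caller does not own the result.
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaledImage;
}

@end
```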
Yes, you definitely have a leak: the object previously stored in image is no longer referenced, but it has not been deallocated.

Duplicate UIViews

I'd like to display the same UIView multiple times. At the moment, I have my drawing in a primary UIView, then copy this into an image using renderInContext: and UIGraphicsGetImageFromCurrentImageContext. Then I set the contents of the other proxy UIViews to be this image.
UIGraphicsBeginImageContext(size);
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage * clonedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return [clonedImage CGImage];
I'm experiencing a bottleneck in the renderInContext: call, presumably because it has to copy the image of the view. I'm seeing hot spots in resample_byte_h_3cpp and resample_byte_v_Ncpp, but I'm not sure what these are doing.
Is it possible to display the same UIView multiple times to reduce this overhead? Or is there a more efficient way to render the image?
How about making a copy of the UIImage instead of generating new images over and over from the UIView?
//.. Create the clonedImage from UIView
CGImageRef cgImageRef = [clonedImage CGImage];
UIImage *twinImage = [[UIImage alloc] initWithCGImage:cgImageRef];
//.. Use the images
[clonedImage release]; // if needed
[twinImage release]; // if needed
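To make the single-render idea concrete, here is a sketch (sourceView and proxyViews are placeholder names): render once, then hand the same CGImage to every proxy layer, since CALayer simply retains the contents you assign and no per-view copy is made:

```objc
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// Render the source view exactly once...
UIGraphicsBeginImageContext(sourceView.bounds.size);
[sourceView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// ...then share the one resulting CGImage across all proxy layers,
// so the expensive renderInContext: call is not repeated per view.
for (UIView *proxy in proxyViews) {
    proxy.layer.contents = (id)[snapshot CGImage];
}
```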

Sharing Memory Between CGImageRef and UIImage

Is there any way to create a UIImage object from an offscreen surface without copying the underlying pixel data?
I would like to do something like this:
// These numbers are made up obviously...
CGContextRef offscreenContext = MyCreateBitmapContext(width, height);
// Draw into the offscreen buffer
// Draw commands not relevant...
// Convert offscreen into CGImage
// This consumes ~15MB
CGImageRef offscreenContextImage = CGBitmapContextCreateImage(offscreenContext);
// This allocates another ~15MB
// Is there any way to share the bits from the
// CGImageRef instead of copying the data???
UIImage * newImage = [[UIImage alloc] initWithCGImage:offscreenContextImage];
// Releases the original 15MB, but the spike of 30MB total kills the app.
CGImageRelease(offscreenContextImage);
CGContextRelease(offscreenContext);
The memory is released and levels out at the acceptable size, but the 30MB memory spike is what kills the application. Is there any way to share the pixel data?
I've considered saving the offscreen buffer to a file and loading the data again, but this is a hack and the convenience methods for the iPhone require a UIImage to save it...
You could try releasing the context right after you create the CGImage, freeing the memory used by the context, because CGBitmapContextCreateImage() makes a copy of the context's bitmap data (or a copy-on-write snapshot, depending on the OS version).
Like this:
CGImageRef offscreenContextImage = CGBitmapContextCreateImage(offscreenContext);
CGContextRelease(offscreenContext);
UIImage * newImage = [[UIImage alloc] initWithCGImage:offscreenContextImage];
// ...
CGImageRelease(offscreenContextImage);
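If the spike persists, another avenue (an untested sketch; buffer, width, and height stand for whatever backs your offscreen context) is to wrap your own pixel buffer in a CGImage via a data provider, so no copy of the pixels is made at all:

```objc
#import <UIKit/UIKit.h>

// Sketch: build a CGImage over an existing pixel buffer instead of copying it.
// Assumes a 32-bit premultiplied RGBA buffer you allocated for the context.
size_t bytesPerRow = width * 4;
CGDataProviderRef provider =
    CGDataProviderCreateWithData(NULL, buffer, bytesPerRow * height, NULL);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef image = CGImageCreate(width, height,
                                 8,               // bits per component
                                 32,              // bits per pixel
                                 bytesPerRow,
                                 colorSpace,
                                 kCGImageAlphaPremultipliedLast,
                                 provider,
                                 NULL,            // decode array
                                 false,           // no interpolation
                                 kCGRenderingIntentDefault);
UIImage *wrapped = [UIImage imageWithCGImage:image];
// Caveat: the buffer must stay valid (and unmodified) for the image's lifetime.
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);
CGImageRelease(image);
```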
Maybe:
UIImage *newImage = [[UIImage alloc] initWithData:offscreenData];
(Note that initWithData: expects an NSData of encoded image bytes, not a CGContextRef, so you would first have to serialize the offscreen buffer into data.)