Question about rotating UIImage - iPhone

The size of a UIImage in my app is (320, 460).
I created another UIImage object using
- (id)initWithCGImage:(CGImageRef)imageRef scale:(CGFloat)scale orientation:(UIImageOrientation)orientation
I passed UIImageOrientationLeft as the orientation.
Then I printed the new UIImage object's size; the result was (460, 320).
So it had already been rotated to the left.
I needed to store the UIImage in my document directory.
NSData *imageData = UIImagePNGRepresentation(rotateImageView);
NSString * path = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
[imageData writeToFile:[path stringByAppendingPathComponent:@"test.png"] atomically:NO];
But when I loaded the UIImage back from "test.png",
its size had changed to (320, 460); it had reverted to its original orientation.
I wanted it to be stored as (460, 320).
Did I make a mistake somewhere?
Thanks!

I've run into this problem as well. When you pass around image orientations within Apple code, you don't actually rotate any pixel data. Rather, there is basically an enum value stored with the image. Many of Apple's image renderers are smart enough to read this enum value and use it to display the image properly. So the code snippets you shared just change this enum value. The renderers that respect this value will display what you want, while many other renderers will ignore it.
There are a couple of solutions available.
First, if you're displaying an image through iOS, you can use the transform property of UIImageView along with CGAffineTransformMakeRotation to get the desired orientation.
Second, you could actually rotate the raw pixel data, which can be accomplished like this:
How to rotate image file?
I would recommend the first solution, since it's easier to code, and more efficient. However, if you will be sharing these images outside of iOS, the second approach will give more reliable results.
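For illustration, here is a minimal sketch of the first approach - rotating the view rather than the pixel data (the image and imageView names are just placeholders for your own objects):

UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
// Rotate the view by 90 degrees; flip the sign if it turns the wrong way for your case.
imageView.transform = CGAffineTransformMakeRotation(-M_PI_2);
imageView.center = self.view.center;
[self.view addSubview:imageView];

Keep in mind that this only changes how the image is displayed; the PNG you write to disk still contains the un-rotated pixel data.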

Related

How to decrease the size of a JPEG/PNG image

I write an image to disk using one of the following methods, passing two parameters: a UIImage object, imageData, and an NSString object, storeImage.
JPG: [UIImageJPEGRepresentation(imageData, 1.0) writeToFile:storeImage atomically:YES];
or
PNG: [UIImagePNGRepresentation(imageData) writeToFile:storeImage atomically:YES];
My problem is that the original image is 2.1 MB, but after using the above methods the saved image is 4.2 MB in the simulator.
I don't want to use any compression method, and I don't want to lose any image quality. I want to copy the image to the given path as it is, at its actual size.
Use the raw data object of your image. Try the following code, where imageData is an NSData object.
[imageData writeToFile:storeImage atomically:YES];
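For example, if the image already exists on disk, you can copy the bytes untouched (the sourcePath name here is hypothetical):

NSData *imageData = [NSData dataWithContentsOfFile:sourcePath]; // original, already-encoded bytes
[imageData writeToFile:storeImage atomically:YES];              // written as-is, no re-encoding

Since nothing is decoded and re-encoded, the file at storeImage stays the same size as the source.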
Could you tell me the original size and format? One way to reduce the size is to lose a little quality.
[UIImageJPEGRepresentation(imageData, 0.5) writeToFile:storeImage atomically:YES]
Another way could be to use the ImageIO framework.

Specifying scale for UIImage loaded from NSData

My app stores images as NSData objects. However, when these are loaded on an iPhone 4, they are displayed at double the size because the default scale factor is 1. I have two questions I would appreciate help with:
1. Is there any way to set the scale of the UIImage without using initWithCGImage:scale:orientation:?
2. If the answer to 1 is no, what is the most efficient way to load the NSData into a UIImage using the method above? At present it seems I will have to create a UIImage from the NSData and then create another UIImage using the method noted in 1 above.
Thank you.
UIImage is immutable, so I guess there is no way to do so without hacking.
UIImage is just a wrapper around CGImage, so I think using initWithCGImage: as you describe won't have any noticeable performance impact. If you are really worried about that, you can load the data into a CGImageRef first.
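A minimal sketch of that two-step approach, assuming imageData is the NSData you stored:

UIImage *raw = [UIImage imageWithData:imageData];          // scale defaults to 1.0
UIImage *scaled = [UIImage imageWithCGImage:raw.CGImage
                                      scale:[UIScreen mainScreen].scale
                                orientation:raw.imageOrientation];

Later SDKs also added imageWithData:scale: (and initWithData:scale:), which do this in a single call.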

Is it possible to tile images in a UIScrollView without having to manually create all the tiles?

In the iPhone sample code "PhotoScroller" from WWDC 2010, they show how to do a pretty good mimic of the Photos app with scrolling, zooming, and paging of images. They also tile the images to show how to display high-resolution images while maintaining good performance.
Tiling is implemented in the sample code by grabbing pre-scaled and pre-cut images for different resolutions and placing them in the grid that makes up the entire image.
My question is: is there a way to tile images without having to manually go through all your photos and create "tiles"? How is it the Photos app is able to display large images on the fly?
Edit
Here is the code from Deepa's answer below:
- (UIImage *)tileForScale:(float)scale row:(int)row col:(int)col size:(CGSize)tileSize image:(UIImage *)inImage
{
    // Cut the requested tile out of the full image with Quartz.
    CGRect subRect = CGRectMake(col * tileSize.width, row * tileSize.height, tileSize.width, tileSize.height);
    CGImageRef tiledImage = CGImageCreateWithImageInRect([inImage CGImage], subRect);
    UIImage *tileImage = [UIImage imageWithCGImage:tiledImage];
    CGImageRelease(tiledImage); // CGImageCreateWithImageInRect returns a +1 reference, so release it
    return tileImage;
}
Here is the code for tiled image generation:
In the PhotoScroller source code, replace tileForScale:row:col: with the following:
inImage - the image you want to create tiles from
- (UIImage *)tileForScale:(float)scale row:(int)row column:(int)col size:(CGSize)tileSize image:(UIImage *)inImage
{
    // Cut the requested tile out of the full image with Quartz.
    CGRect subRect = CGRectMake(col * tileSize.width, row * tileSize.height, tileSize.width, tileSize.height);
    CGImageRef tiledImage = CGImageCreateWithImageInRect([inImage CGImage], subRect);
    UIImage *tileImage = [UIImage imageWithCGImage:tiledImage];
    CGImageRelease(tiledImage); // release the +1 CGImageRef to avoid leaking a tile per call
    return tileImage;
}
Regards,
Deepa
I've found this which may be of help: http://www.mikelin.ca/blog/2010/06/iphone-splitting-image-into-tiles-for-faster-loading-with-imagemagick/
You just run it in the Terminal as a shell script on your Mac.
Sorry Jonah, but I think that you cannot do what you want to.
I have been implementing a comic app using the same example as a reference and had the same doubt. Finally, I realized that, even if you could load the image and cut it into tiles the first time that you use it, you shouldn't. There are two reasons for that:
1. You do the tiling to save time and be more responsive: loading and tiling a large image takes time.
2. The previous reason is particularly important the first time the user runs the app.
If these two reasons make no sense to you, and you still want to do it, I would use Quartz to create the tiles. The CGImage function CGImageCreateWithImageInRect would be my starting point.
Deepa's answer above will load the entire image into memory as a UIImage (the input variable in his function), defeating the purpose of tiling.
Many image formats support region-based decoding. Instead of loading the whole image into memory, decompressing the whole thing, and discarding all but the region of interest (ROI), you can load and decode only the ROI, on-demand. For the most part, this eliminates the need to pre-generate and save image tiles. I've never worked with ImageMagick but I'd be amazed if it couldn't do it. (I have done it using the Java Advanced Imaging (JAI) API, which isn't going to help you on the iPhone...)
I've played with the PhotoScroller example, and the way it works with pre-generated tiles is only there to demonstrate the idea behind CATiledLayer and to make a working, self-contained project. It's straightforward to replace the image tile loading strategy - just rewrite the TilingView tileForScale:row:col: method to return a UIImage tile from some other source, be it Quartz or ImageMagick or whatever.
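For what it's worth, here is a rough, self-contained sketch of that idea, modelled loosely on the sample's TilingView (the TilingView name and the image property are my own, and edge clamping plus per-scale downsampling are omitted for brevity): a CATiledLayer-backed view whose tileForScale:row:col: slices tiles out of a single UIImage with Quartz instead of reading pre-cut tile files.

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

@interface TilingView : UIView
@property (nonatomic, strong) UIImage *image; // full-size source image
@end

@implementation TilingView

+ (Class)layerClass
{
    // Backing the view with CATiledLayer makes UIKit ask for tiles lazily,
    // only for the regions that are actually visible.
    return [CATiledLayer class];
}

- (UIImage *)tileForScale:(CGFloat)scale row:(int)row col:(int)col size:(CGSize)tileSize
{
    CGRect subRect = CGRectMake(col * tileSize.width, row * tileSize.height,
                                tileSize.width, tileSize.height);
    CGImageRef tileRef = CGImageCreateWithImageInRect(self.image.CGImage, subRect);
    UIImage *tile = [UIImage imageWithCGImage:tileRef];
    CGImageRelease(tileRef);
    return tile;
}

- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGFloat scale = CGContextGetCTM(context).a; // current zoom scale

    CATiledLayer *tiledLayer = (CATiledLayer *)self.layer;
    CGSize tileSize = tiledLayer.tileSize;
    tileSize.width /= scale;
    tileSize.height /= scale;

    // Draw only the tiles that intersect the rect CATiledLayer asked for.
    int firstCol = floorf(CGRectGetMinX(rect) / tileSize.width);
    int lastCol  = floorf((CGRectGetMaxX(rect) - 1) / tileSize.width);
    int firstRow = floorf(CGRectGetMinY(rect) / tileSize.height);
    int lastRow  = floorf((CGRectGetMaxY(rect) - 1) / tileSize.height);

    for (int row = firstRow; row <= lastRow; row++) {
        for (int col = firstCol; col <= lastCol; col++) {
            UIImage *tile = [self tileForScale:scale row:row col:col size:tileSize];
            [tile drawInRect:CGRectMake(tileSize.width * col, tileSize.height * row,
                                        tileSize.width, tileSize.height)];
        }
    }
}

@end

Note that, as pointed out above, this still decodes the whole source image into memory, so it trades the pre-cutting step for a larger memory footprint.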

How can I adjust the RGB pixel data of a UIImage on the iPhone?

I want to be able to take a UIImage instance, modify some of its pixel data, and get back a new UIImage instance with that pixel data. I've made some progress with this, but when I try to change the array returned by CGBitmapContextGetData, the end result is always an overlay of vertical blue lines over the image. I want to be able to change the colors and alpha values of pixels within the image.
Can someone provide an end-to-end example of how to do this, starting with a UIImage and ending with a new UIImage with modified pixels?
This might answer your question: How to get the RGB values for a pixel on an image on the iphone
I guess you'd follow those instructions to get a copy of the data, modify it, and then create a new CGDataProvider (CGDataProviderCreateWithCFData), use that to create a new CGImage (CGImageCreate) and then create a new UIImage from that.
I haven't actually tried this (no access to a Mac at the moment), though.
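Along those lines, here is a hedged end-to-end sketch: instead of poking at the source image's own buffer (whose row padding and pixel format can cause exactly the striping artifacts described), it draws the image into a bitmap context with a known RGBA layout, edits the bytes, and wraps the result in a new UIImage. The ModifiedImage name and the "zero out the red channel" edit are placeholders for your own logic:

#import <UIKit/UIKit.h>

UIImage *ModifiedImage(UIImage *source)
{
    CGImageRef cgImage = source.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    size_t bytesPerRow = width * 4; // 4 bytes per pixel: R, G, B, A

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, bytesPerRow,
                                                 colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (!context) return nil;

    // Render the original image into a buffer whose layout we control.
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);

    // Modify the raw pixels; RGB values here are premultiplied by alpha.
    uint8_t *pixels = CGBitmapContextGetData(context);
    for (size_t i = 0; i < height * bytesPerRow; i += 4) {
        pixels[i] = 0; // zero the red byte of every pixel
    }

    // Wrap the modified buffer in a new UIImage.
    CGImageRef newCGImage = CGBitmapContextCreateImage(context);
    UIImage *result = [UIImage imageWithCGImage:newCGImage
                                          scale:source.scale
                                    orientation:source.imageOrientation];
    CGImageRelease(newCGImage);
    CGContextRelease(context);
    return result;
}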

Reduce UIImage size to a manageable size (reduce bytes)

I want to reduce the number of bytes of an image captured by the device, since I believe _imageScaledToSize does not reduce the number of bytes of the picture (or does it?). I want to store a thumbnail of the image in a local dictionary object and can't afford to put full-size images in the dictionary. Any ideas?
If you wish to simply compress your UIImage, you can use
NSData *dataForPNGFile = UIImagePNGRepresentation(yourImage);
to generate an NSData version of your image encoded as a PNG (easily inserted into an NSDictionary or written to disk), or you can use
NSData *dataForPNGFile = UIImageJPEGRepresentation(yourImage, 0.9f);
to do the same, only in a JPEG format. The second parameter is the image quality of the JPEG. Both of these should produce images that are smaller, memory-wise, than your UIImage.
Resizing a UIImage to create a smaller thumbnail (pixels-wise) using published methods is a little trickier. _imageScaledToSize is from the private API, and I'd highly recommend you not use it. For a means that works within the documented methods, see this post.
I ran into this problem the other day and did quite a bit of research. I found an awesome solution complete with code here:
http://vocaro.com/trevor/blog/2009/10/12/resize-a-uiimage-the-right-way/
You need to draw the image into a graphics context at a smaller size. Then, release the original image.
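A minimal sketch of that, assuming originalImage is your full-size UIImage and 150x150 is the thumbnail size you want:

CGSize thumbSize = CGSizeMake(150.0f, 150.0f);
UIGraphicsBeginImageContextWithOptions(thumbSize, NO, 0.0f); // 0.0 uses the screen's scale
[originalImage drawInRect:CGRectMake(0.0f, 0.0f, thumbSize.width, thumbSize.height)];
UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

(This stretches the image to 150x150; adjust the target rect if you need to preserve the aspect ratio.)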
When you say 'physical size', are you talking about a print? Because you can just change the printer page size.
Are you talking about the number of pixels used to capture the image? As in, if you have a pixel array of 3000x2000, and you only want 150x150, then you can crop the images. At the time of capture, if you have a scientific imager, then you can just set the area that will be captured. The camera driver would include instructions for that. If you want to capture 3000x2000 in 1500x1000, you can try to bin the image, if that's what you need.
Or, you can use resampling post-capture in order to make the image smaller. One such algorithm is bicubic resampling, also linear resampling-- there are many variations.
I'm thinking this last is what you're most interested in... in which case, check out this Wikipedia page on the algorithm. Or, you can go to FreeImage and get a library that will read in the image and can also resize images.
UIImageJPEGRepresentation does the trick but I find that using the ImageIO framework often gets significantly better compression results for the same quality setting. It may be slower, but depending on your use case this may not be an issue.
(Code adapted for NSData from this blog post by Zachary West).
#import <MobileCoreServices/MobileCoreServices.h>
#import <ImageIO/ImageIO.h>
...
+ (NSData *)JPEGDataFromImage:(UIImage *)image quality:(double)quality
{
    CFMutableDataRef outputImageDataRef = CFDataCreateMutable(kCFAllocatorDefault, 0);
    CGImageDestinationRef imageDestinationRef = CGImageDestinationCreateWithData(outputImageDataRef, kUTTypeJPEG, 1, NULL);

    // quality runs from 0.0 (maximum compression) to 1.0 (maximum quality)
    NSDictionary *properties = @{
        (__bridge NSString *)kCGImageDestinationLossyCompressionQuality : @(quality)
    };
    CGImageDestinationSetProperties(imageDestinationRef, (__bridge CFDictionaryRef)properties);

    CGImageDestinationAddImage(imageDestinationRef, image.CGImage, NULL);
    CGImageDestinationFinalize(imageDestinationRef);
    CFRelease(imageDestinationRef);

    NSData *imageData = CFBridgingRelease(outputImageDataRef);
    return imageData;
}
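Usage is then a single call; ImageUtils stands for whichever class you add the method to, and capturedImage / outputPath are placeholders:

NSData *jpegData = [ImageUtils JPEGDataFromImage:capturedImage quality:0.7];
[jpegData writeToFile:outputPath atomically:YES];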