Comparing two images to see whether they are the same (iOS) [duplicate] - iphone

Possible Duplicate:
How does one compare one image to another to see if they are similar by a certain percentage, on the iPhone?
I've found this code and am trying to understand it better:
UIImage *img1 = // Some photo;
UIImage *img2 = // Some photo;
NSData *imgdata1 = UIImagePNGRepresentation(img1);
NSData *imgdata2 = UIImagePNGRepresentation(img2);
if ([imgdata1 isEqualToData:imgdata2]) {
    NSLog(@"Same Image");
}
Will this confirm that image 1 is exactly the same as image 2? Is this method best practice, or is there a better approach to this?

Your code compares the two PNG encodings of the images byte by byte, so yes, it is an exact comparison.
If you need something faster, you can generate a hash from each UIImage and compare the two hashes, as explained here.
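A minimal sketch of that idea, assuming CommonCrypto (the helper name is mine): compare fixed-size SHA-256 digests of the PNG data instead of the data itself. Note this only pays off if you compute each digest once and cache it, since hashing still reads every byte.
#import <CommonCrypto/CommonDigest.h>

// Returns a 32-byte SHA-256 digest of the given data.
static NSData *SHA256OfData(NSData *data)
{
    unsigned char digest[CC_SHA256_DIGEST_LENGTH];
    CC_SHA256(data.bytes, (CC_LONG)data.length, digest);
    return [NSData dataWithBytes:digest length:CC_SHA256_DIGEST_LENGTH];
}

// Comparing cached digests is cheap: 32 bytes each.
if ([SHA256OfData(imgdata1) isEqual:SHA256OfData(imgdata2)]) {
    NSLog(@"Same Image");
}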

Take a look at this link; it talks all about sampling images to measure percentage similarity: How does one compare one image to another to see if they are similar by a certain percentage, on the iPhone?

Related

Percentage whiteness of UIImage [duplicate]

Possible Duplicate:
Good way to calculate ‘brightness’ of UIImage?
For a UIImage, how can you determine the percentage whiteness of the whole image?
Cheers
Depending on your definition of 'whiteness', you may be able to simply draw the image into a 1x1 CGBitmapContextRef, then check the whiteness of that single pixel.
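A minimal sketch of that idea, assuming 'whiteness' means the average of the RGB channels (the helper name is mine; drawing via CGImage ignores the UIImage orientation, which doesn't matter for an average):
#import <UIKit/UIKit.h>

// Draws the image into a 1x1 RGBA bitmap context so Core Graphics
// averages all pixels, then reads back the single resulting pixel.
static CGFloat averageWhiteness(UIImage *image)
{
    unsigned char pixel[4] = {0};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 4, colorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    CGContextDrawImage(context, CGRectMake(0, 0, 1, 1), image.CGImage);
    CGContextRelease(context);
    // 1.0 is pure white, 0.0 is pure black.
    return (pixel[0] + pixel[1] + pixel[2]) / (3.0 * 255.0);
}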

How to split an image into multiple parts [duplicate]

Possible Duplicate:
how to crop image in to pieces programmatically
How can I slice an image into multiple pieces? My image is 300x300 and I want to make 9 pieces of it.
Thanks.
The CWUIKit project, available at https://github.com/jayway/CWUIKit, has a category on UIImage that adds a method like this:
UIImage* subimage = [originalImage subimageWithRect:CGRectMake(0, 0, 100, 100)];
Should be useful for you.
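If you'd rather not pull in the whole library, here is a hedged sketch of how such a category might be implemented (the CWUIKit version may differ; the rect is in pixels of the underlying CGImage):
@interface UIImage (Subimage)
- (UIImage *)subimageWithRect:(CGRect)rect;
@end

@implementation UIImage (Subimage)
- (UIImage *)subimageWithRect:(CGRect)rect
{
    // CGImageCreateWithImageInRect returns a +1 reference, so release it.
    CGImageRef imageRef = CGImageCreateWithImageInRect(self.CGImage, rect);
    UIImage *subimage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return subimage;
}
@end
Cutting the 300x300 image into nine 100x100 pieces is then a double loop:
NSMutableArray *pieces = [NSMutableArray array];
for (int row = 0; row < 3; row++) {
    for (int col = 0; col < 3; col++) {
        CGRect rect = CGRectMake(col * 100, row * 100, 100, 100);
        [pieces addObject:[originalImage subimageWithRect:rect]];
    }
}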

Question about rotating UIImage

The size of a UIImage in my app is (320,460).
I created another UIImage object using
- (id)initWithCGImage:(CGImageRef)imageRef scale:(CGFloat)scale orientation:(UIImageOrientation)orientation
and assigned UIImageOrientationLeft as the orientation.
When I printed the new UIImage object's size, the result was (460,320).
It had already been rotated to the left.
I needed to store the UIImage in my document directory.
NSData *imageData = UIImagePNGRepresentation(rotateImageView);
NSString *path = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
[imageData writeToFile:[path stringByAppendingPathComponent:@"test.png"] atomically:NO];
But when I loaded the UIImage back from "test.png",
its size had changed back to (320,460); it had returned to its original orientation.
I wanted it to be stored as (460,320).
Did I make a mistake?
Thanks!
I've run into this problem as well. When you pass around image orientations within Apple code, you don't actually rotate any pixel data. Rather, an enum value is stored with the image. Many of Apple's image renderers are smart enough to read this enum value and use it to display the image properly. So the code snippet you shared just changes this enum value. Renderers that respect this value will display what you want, while many others will ignore it.
There are a couple solutions available.
First, if you're displaying the image through iOS, you can use the transform property of UIImageView along with CGAffineTransformMakeRotation to get the desired orientation (see the sketch after this list).
Second, you could actually rotate the raw pixel data, which can be accomplished like this:
How to rotate image file?
I would recommend the first solution, since it's easier to code, and more efficient. However, if you will be sharing these images outside of iOS, the second approach will give more reliable results.
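A minimal sketch of the first approach, assuming imageView is a UIImageView already showing your image:
#include <math.h>

// Rotate the view by 90 degrees; flip the sign for the other direction.
// The pixel data itself is untouched.
imageView.transform = CGAffineTransformMakeRotation(M_PI_2);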

Is it possible to tile images in a UIScrollView without having to manually create all the tiles?

In the iPhone sample code "PhotoScroller" from WWDC 2010, they show how to do a pretty good mimic of the Photos app with scrolling, zooming, and paging of images. They also tile the images to show how to display high-resolution images while maintaining good performance.
Tiling is implemented in the sample code by grabbing pre-scaled and pre-cut images for different resolutions and placing them in the grid that makes up the entire image.
My question is: is there a way to tile images without having to manually go through all your photos and create "tiles"? How is the Photos app able to display large images on the fly?
Edit
Here is the code from Deepa's answer below:
- (UIImage *)tileForScale:(float)scale row:(int)row col:(int)col size:(CGSize)tileSize image:(UIImage *)inImage
{
    CGRect subRect = CGRectMake(col * tileSize.width, row * tileSize.height, tileSize.width, tileSize.height);
    CGImageRef tiledImage = CGImageCreateWithImageInRect([inImage CGImage], subRect);
    UIImage *tileImage = [UIImage imageWithCGImage:tiledImage];
    // CGImageCreateWithImageInRect returns a +1 reference; release it to avoid a leak.
    CGImageRelease(tiledImage);
    return tileImage;
}
Here is the code for tiled image generation. In the PhotoScroller source code, replace tileForScale:row:col: with the following, where inImage is the image you want to create tiles from:
- (UIImage *)tileForScale:(float)scale row:(int)row column:(int)col size:(CGSize)tileSize image:(UIImage *)inImage
{
    CGRect subRect = CGRectMake(col * tileSize.width, row * tileSize.height, tileSize.width, tileSize.height);
    CGImageRef tiledImage = CGImageCreateWithImageInRect([inImage CGImage], subRect);
    UIImage *tileImage = [UIImage imageWithCGImage:tiledImage];
    // Release the +1 CGImageRef to avoid leaking one image per tile.
    CGImageRelease(tiledImage);
    return tileImage;
}
Regards,
Deepa
I've found this, which may be of help: http://www.mikelin.ca/blog/2010/06/iphone-splitting-image-into-tiles-for-faster-loading-with-imagemagick/
You just run it in the Terminal as a shell script on your Mac.
Sorry Jonah, but I think that you cannot do what you want to.
I have been implementing a comic app using the same example as a reference and had the same doubt. Finally, I realized that even if you could load the image and cut it into tiles the first time you use it, you shouldn't. There are two reasons for that:
You do the tiling to save time and be more responsive; loading and tiling a large image takes time.
The previous reason is particularly important the first time the user runs the app.
If these two reasons make no sense to you, and you still want to do it, I would use Quartz to create the tiles. CGImage function CGImageCreateWithImageInRect would be my starting point.
Deepa's answer above will load the entire image into memory as a UIImage (the input variable in his function), defeating the purpose of tiling.
Many image formats support region-based decoding. Instead of loading the whole image into memory, decompressing the whole thing, and discarding all but the region of interest (ROI), you can load and decode only the ROI, on-demand. For the most part, this eliminates the need to pre-generate and save image tiles. I've never worked with ImageMagick but I'd be amazed if it couldn't do it. (I have done it using the Java Advanced Imaging (JAI) API, which isn't going to help you on the iPhone...)
I've played with the PhotoScroller example, and it works with pre-generated tiles only to demonstrate the idea behind CATiledLayer and to make a working, self-contained project. It's straightforward to replace the image tile loading strategy: just rewrite the TilingView tileForScale:row:col: method to return a UIImage tile from some other source, be it Quartz or ImageMagick or whatever.

Reduce UIImage size to a manageable size (reduce bytes)

I want to reduce the number of bytes of an image captured by the device, since I believe _imageScaledToSize does not reduce the number of bytes of the picture (or does it?). I want to store a thumbnail of the image in a local dictionary object and can't afford to put full-size images in the dictionary. Any ideas?
If you wish to simply compress your UIImage, you can use
NSData *dataForPNGFile = UIImagePNGRepresentation(yourImage);
to generate an NSData version of your image encoded as a PNG (easily inserted into an NSDictionary or written to disk), or you can use
NSData *dataForPNGFile = UIImageJPEGRepresentation(yourImage, 0.9f);
to do the same, only in a JPEG format. The second parameter is the image quality of the JPEG. Both of these should produce images that are smaller, memory-wise, than your UIImage.
Resizing a UIImage to create a smaller thumbnail (pixel-wise) using published methods is a little trickier. _imageScaledToSize is private API, and I'd highly recommend you not use it. For an approach that works within the documented methods, see this post.
I ran into this problem the other day and did quite a bit of research. I found an awesome solution complete with code here:
http://vocaro.com/trevor/blog/2009/10/12/resize-a-uiimage-the-right-way/
You need to draw the image into a graphics context at a smaller size. Then, release the original image.
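A minimal sketch of that approach (the helper name is mine; the linked post has a fuller version that also handles orientation and edge cases):
// Draws the image into a smaller offscreen context and returns the result.
static UIImage *thumbnailOfImage(UIImage *image, CGSize targetSize)
{
    // Scale 0.0 means "use the device's screen scale".
    UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
    UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return thumbnail;
}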
When you say 'physical size', are you talking about a print? Because you can just change the printer page size.
Are you talking about the number of pixels used to capture the image? As in, if you have a pixel array of 3000x2000 and you only want 150x150, then you can crop the image. At the time of capture, if you have a scientific imager, you can just set the area that will be captured; the camera driver would include instructions for that. If you want to capture a 3000x2000 scene at 1500x1000, you can try binning the image, if that's what you need.
Or you can resample post-capture to make the image smaller. One such algorithm is bicubic resampling; linear resampling is another of the many variations.
I'm thinking this last option is what you're most interested in, in which case check out this Wikipedia page on the algorithm. Or you can go to FreeImage and get a library that will read in the image and can also resize it.
UIImageJPEGRepresentation does the trick but I find that using the ImageIO framework often gets significantly better compression results for the same quality setting. It may be slower, but depending on your use case this may not be an issue.
(Code adapted for NSData from this blog post by Zachary West).
#import <MobileCoreServices/MobileCoreServices.h>
#import <ImageIO/ImageIO.h>
...
+ (NSData *)JPEGDataFromImage:(UIImage *)image quality:(double)quality
{
    CFMutableDataRef outputImageDataRef = CFDataCreateMutable(kCFAllocatorDefault, 0);
    CGImageDestinationRef imageDestinationRef = CGImageDestinationCreateWithData(outputImageDataRef, kUTTypeJPEG, 1, NULL);
    NSDictionary *properties = @{
        (__bridge NSString *)kCGImageDestinationLossyCompressionQuality: @(quality)
    };
    CGImageDestinationSetProperties(imageDestinationRef, (__bridge CFDictionaryRef)properties);
    CGImageDestinationAddImage(imageDestinationRef, image.CGImage, NULL);
    CGImageDestinationFinalize(imageDestinationRef);
    CFRelease(imageDestinationRef);
    NSData *imageData = CFBridgingRelease(outputImageDataRef);
    return imageData;
}
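Usage is then a one-liner; ImageUtils is a hypothetical name for whatever class you declare the method on:
// quality runs from 0.0 (smallest) to 1.0 (best quality).
NSData *jpegData = [ImageUtils JPEGDataFromImage:photo quality:0.7];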