How do you get the cropped version of an image using ALAsset? - iphone

I'm trying to get the cropped version of an image that's pulled using ALAsset. Specifically, I'm selecting items from the user's Photo Library and then uploading them. The issue is that the library's thumbnail view shows the cropped version, but when I select that thumbnail and pull the image's asset using ALAsset, I get the full, uncropped image.
I did some research but couldn't find anything that exposes the crop rectangle (a second coordinate system describing where the cropping happened).
To reproduce this, you need iOS 5, which lets you edit images in your library. Select an image in your photo library, select "Edit", and crop it. When you get the ALAsset you'll still get the full image, and if you sync using iPhoto, iPhoto also pulls the full image. Also, you can re-edit the image and undo your crop.
This is how I'm getting the image:
UIImage *tmpImage = [UIImage imageWithCGImage:[[asset defaultRepresentation] fullResolutionImage]];
That gives me the full resolution image, obviously. There's also a fullScreenImage method, which scales the full resolution image down to the size of the screen. That's not what I want.
The ALAssetRepresentation class has a scale property, but that's a single float value, which isn't what I want either.
If anyone can tell me where this cropped coordinate system can be found, I'd appreciate it.

Your Options:
Option 1 (ALAssetLibrary)
Use the - (CGImageRef)fullScreenImage method of ALAssetRepresentation.
Pros:
All the hard work is done for you: you get an image that looks just like the one in the Photos app, including the crop and any other edits. Easy.
Cons:
The resolution is "screen size", only as big as the device you are using, not the full possible resolution of the cropped image. If this doesn't concern you, then this is the perfect option.
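In code, Option 1 is essentially a one-line change from the snippet in the question (a minimal sketch, assuming asset is the ALAsset you already have):
UIImage *editedImage = [UIImage imageWithCGImage:[[asset defaultRepresentation] fullScreenImage]];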
Option 2 (ALAssetLibrary)
Extract the cropping data using the AdjustmentXMP key in the image's metadata (what @tom is referring to). Apply the crop.
Pros:
It is possible to get a cropped image at the best possible resolution.
Cons:
You only get the cropping edits, not any other adjustments (like red-eye removal).
Who knows what Apple will support in "Edit" mode in the future; you may have to apply more kinds of edits down the road.
It's complicated: you first have to parse the XMP data to read the crop rectangle, crop the unrotated image, and then apply the rotation.
Option 3 (Wishful Thinking)
Beg Apple to include a method like fullResolutionEditedImage which gives you the best possible quality photo, with all edits applied.
Pros:
Everything magically solved.
Cons:
Apple may never add this method.
Option 4 (UIImagePickerController)
This option only applies if you are using the image picker; you can't use it directly with the asset library.
In the NSDictionary returned by -(void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
You can extract the full-sized, adjusted image from the UIImagePickerControllerOriginalImage key. Save this image somewhere. Then, instead of retrieving the image from the asset library, load the copy you made (see the sketch after this option's cons).
Pros:
You get the full size image, with adjustments
This is the only option Apple gives us for getting the full size image with all adjustments (like red-eye, etc), and not just the crop. This is particularly important in iOS 7 with the introduction of filters that can drastically alter the image.
Cons:
Can only be used with the image picker (not ALAssetRepresentation)
You must keep around a full-sized copy of the image. Depending on the number of such images, the disk usage by your app could grow substantially.
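A minimal sketch of Option 4 (the delegate method is assumed to live on your view controller, and the file name "picked-photo.jpg" is just an illustrative placeholder):
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    // Per this option, the image under this key already has the Photos edits applied.
    UIImage *pickedImage = [info objectForKey:UIImagePickerControllerOriginalImage];
    // Keep a full-sized copy so you never need to go back to the asset library.
    NSString *docsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
    NSString *savePath = [docsDir stringByAppendingPathComponent:@"picked-photo.jpg"];
    [UIImageJPEGRepresentation(pickedImage, 0.9) writeToFile:savePath atomically:YES];
    [picker dismissViewControllerAnimated:YES completion:nil];
}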
Update for iOS 7: you may wish to consider Option 4 or Option 1, as iOS 7 now supports many more operations, like filters, and your users will probably notice if they are missing. These two options support filters (and other edits), with Option 4 giving you a higher-resolution result.

When a photo has been cropped with the iOS Photos app, the cropping coordinates can be found in the ALAssetRepresentation's metadata dictionary. fullResolutionImage will give you the uncropped photo; you have to perform the cropping yourself.
The AdjustmentXMP metadata contains not only the cropping coordinates but also indicates if auto-enhance or remove-red-eyes has been applied.
As of iOS 6.0, CIFilter provides filterArrayFromSerializedXMP:inputImageExtent:error:. You can probably pass the ALAssetRepresentation's AdjustmentXMP metadata here and apply the resulting CIFilters to the ALAssetRepresentation's fullResolutionImage to recreate the modified image.
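A rough sketch of that idea (untested; it assumes the XMP shows up in the metadata dictionary under the "AdjustmentXMP" key and that the orientation cast matches your assets):
ALAssetRepresentation *rep = [asset defaultRepresentation];
NSString *xmpString = [[rep metadata] objectForKey:@"AdjustmentXMP"];
NSData *xmpData = [xmpString dataUsingEncoding:NSUTF8StringEncoding];
CIImage *image = [CIImage imageWithCGImage:[rep fullResolutionImage]];
NSError *error = nil;
NSArray *filters = [CIFilter filterArrayFromSerializedXMP:xmpData inputImageExtent:image.extent error:&error];
// Chain the filters that describe the Photos app edits (crop, enhance, red-eye).
for (CIFilter *filter in filters) {
    [filter setValue:image forKey:kCIInputImageKey];
    image = filter.outputImage;
}
// Render the adjusted CIImage back into a UIImage at full resolution.
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:image fromRect:image.extent];
UIImage *editedImage = [UIImage imageWithCGImage:cgImage scale:[rep scale] orientation:(UIImageOrientation)[rep orientation]];
CGImageRelease(cgImage);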
Be aware that the iOS Photos App handles JPG and RAW images differently. For JPG images a new ALAsset with the XMP metadata is stored in the Camera Roll. For RAW images an ALAssetRepresentation is added to the original ALAsset. I'm not sure if this additional ALAssetRepresentation is the modified image and if it has the AdjustmentXMP metadata. In addition to JPG and RAW images you should also test the behaviour for RAW+JPG images.

Related

Want to resize an image with better resolution in Swift

I have an image that was uploaded to the image cloud (Cloudinary) using the API. The upload response gives me the Cloudinary URL of the uploaded image.
One uploaded image, for example, is 120x67 pixels.
In my app, the image view's width matches the phone's screen width and its height is fixed at 324. I want to resize this 120x67 image to the width and height of my image view without losing clarity. The image view's content mode is set to scale-to-fill.
Generally, scaling up (non-vectorized) raster images without compromising their quality is not possible (short of machine-learning upscalers). Some programs have sophisticated upscaling algorithms, but even those help only to a limited extent.
You should upload a larger version of your image to Cloudinary, and request it downscaled to the desired resolution.
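For example, a delivery URL with transformation parameters (the cloud name, image name, and dimensions here are placeholders) asks Cloudinary to generate and serve a resized rendition on the fly:
https://res.cloudinary.com/<cloud name>/image/upload/w_640,h_324,c_fill/<image name>.jpg
Here w_ and h_ set the requested width and height and c_fill crops to fill that box; pick values that match (or are a retina multiple of) your image view's size.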
You should also check out responsive layout design.
If the image is vectorized (e.g. SVG), you need to make sure you're not requesting a rasterized version of it. For example,
https://res.cloudinary.com/<cloud name>/image/upload/fl_sanitize/<image name>.svg
will keep the image vectorized.

Can I detect if an imported ALAsset photo was taken via screenshot?

I have an imagePickerController that is used for importing photos from library into my app.
Inside the ALAssetsLibraryAssetForURLResultBlock, I'm trying to find out whether the ALAsset I've got in the block is a screenshot or a "genuine" photo taken by the camera.
I've tried to go through the ALAsset's metadata dictionaries but couldn't find any flag / indication that might fit.
Anyone have any ideas?
A screenshot's UTI is always "public.png" and its dimensions are the same as the screen's (be sure to multiply the screen bounds' width and height by [UIScreen mainScreen].scale). By checking just these two pieces of metadata you can easily identify a screenshot.
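A sketch of that check (assuming you already have the ALAsset in hand; verify the exact behaviour on your own devices):
ALAssetRepresentation *rep = [asset defaultRepresentation];
CGSize screenPoints = [UIScreen mainScreen].bounds.size;
CGFloat scale = [UIScreen mainScreen].scale;
CGSize screenPixels = CGSizeMake(screenPoints.width * scale, screenPoints.height * scale);
CGSize assetPixels = [rep dimensions];
BOOL isPNG = [[rep UTI] isEqualToString:@"public.png"];
// Compare against both orientations of the screen.
BOOL matchesScreen = (assetPixels.width == screenPixels.width && assetPixels.height == screenPixels.height) ||
                     (assetPixels.width == screenPixels.height && assetPixels.height == screenPixels.width);
BOOL probablyScreenshot = isPNG && matchesScreen;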
Add metadata to the UIImage while saving it to the Photo Library. That same metadata can then be used to tell whether it is a screenshot or not.
Refer to Save_Photo_to_Album_with_Metadata.
Well, I was researching and experimenting, and the closest solution I've found is based on the fact that iPhone screenshots don't yield EXIF records (while all other generated photos do).
Therefore, once a photo is selected in the picker, I check whether the photo's metadata contains an EXIF record; if it doesn't, I conclude that the photo was a screenshot.
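A sketch of that heuristic (assuming you have the ALAsset; the key below is the standard ImageIO constant, requiring ImageIO.framework, and some screenshots may still carry a minimal {Exif} block, so test on your own devices):
NSDictionary *metadata = [[asset defaultRepresentation] metadata];
NSDictionary *exif = [metadata objectForKey:(NSString *)kCGImagePropertyExifDictionary];
// Treat a missing or empty EXIF dictionary as a likely screenshot.
BOOL probablyScreenshot = (exif == nil || [exif count] == 0);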
I found it's the "as good as it gets" solution for now, although it's not an official one.
Cheers.

Retina images from server display incorrectly in table cells

I have a table where the cells' image views are being populated by images that have been previously pulled down off a server. So:
[[cell imageView] setImage:[[[UIImage alloc] initWithContentsOfFile:filePath] autorelease]];
Where "filePath" is the location of these images. Working beautifully, until I decided to be clever and add retina display images to my server. These images (double-sized, obviously) are being displayed, but are shrunk. I had labelled them image#2x.png, hoping that the iPhone would just know what to do with them, but obviously that doesn't work in this context.
I've looked at the discussions, and am guessing I need to do something with the contentsScale of the cell's imageView, like matching it to the screen scale, but I'm not sure exactly how to do this. Any help appreciated.
You cannot automatically download retina images from the server. You need a check on the client, something like
NSString *imageName = ([UIScreen mainScreen].scale >= 2.0) ? @"img@2x.png" : @"img.png";
to get the correct images.
You would need to set the scale factor of the image correctly. Please check the scale property in UIImage:
If you load an image from a file whose name includes the @2x modifier, the scale is set to 2.0. If the filename does not include the modifier but is in the PNG or JPEG format and has an associated DPI value, a corresponding scale factor is computed and reflected in this property. You can also specify an explicit scale factor when initializing an image from a Core Graphics image. All other images are assumed to have a scale factor of 1.0.
So you can read your image as above, get the CGImage from it and create a new UIImage using + (UIImage *)imageWithCGImage:(CGImageRef)imageRef scale:(CGFloat)scale orientation:(UIImageOrientation)orientation.
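For example (a sketch following the manual-retain-release style of the question's code; it assumes the file at filePath is the @2x-sized asset):
UIImage *rawImage = [[[UIImage alloc] initWithContentsOfFile:filePath] autorelease];
// Re-wrap the same pixels with the screen's scale so a 2x image renders at its point size.
UIImage *scaledImage = [UIImage imageWithCGImage:[rawImage CGImage] scale:[UIScreen mainScreen].scale orientation:rawImage.imageOrientation];
[[cell imageView] setImage:scaledImage];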
A nicer way could be to check the DPI of your retina image and make it 144dpi with a graphics program. According to the scale property documentation this might work, too.

How to create facebook wall posts and add retina version of picture

We're using the facebook graph API http://developers.facebook.com/docs/reference/api/post/ and adding the picture parameter. Our picture is a 30x30 pixel image, which is exactly the size we want for the facebook web version. However, the image will be pixelated when using the FB mobile app on an iPhone4 (retina display).
Is there any way to serve a 60x60 high resolution image, but render it always at 30x30 for facebook wall posts?
Well, as of this moment, here is what I have found out, along with a 'solution' that has worked for me in the time I've had to test and play with this concept. For all the readers out there who need a quick answer: I don't have the exact solution to the question, but… essentially, your 30x30 image is being scaled to 90x90, the 60x60 image is also being scaled to 90x90, and I cannot find a way around this.
Below is what I have tried. Feel free to add input.
Take your feed image, and stroke a 2-5px black line around the frame of the image.
Load up your app and initiate a wall feed on the device. With the image present, take a screenshot and mail yourself the image. Open it up in Photoshop (or another photo editing program), use a marquee tool to outline the image, cut it out of the screenshot, and paste it as a new image. What size is it? 90x90, right? (And obviously 180x180 if the image is retina.)
Create a 90x90 image. Copy your original 30x30 image and paste it anywhere you want within the new 90x90 image's frame. Upload it to the URL parameter's location and re-run your app. By re-running I mean you have to shut it down completely; the SDK appears to cache the image when the feed is first launched, and you can clear that cache by closing the app completely and relaunching it. When you do, you will see a significant improvement in how the image looks. It may not be a retina image, but at least it won't be 'fuzzy ugly'. At this point it boils down to how clean the illustration lines from the design process are, to minimize the aliasing produced by conversion to a raster graphic. I'm also not sure whether a different resampling method would produce even better results.
Some things i've tried:
I've also saved it as a PNG file with no transparency: 144 ppi at 90x90. In other words, save your 90x90 image with a higher resolution (pixels per inch). Remember not to constrain proportions as you resize the image. Note that if you are using Adobe products (i.e. Photoshop), don't use 'Save for Web'; just use 'Save As…', as this retains the ppi you specified. That said, I don't believe I see much of a difference in the displayed quality going this route, and it's best to keep the file size down, as this increases the overall file size by about 500% or more.
I've tried variations of hosting the image at twice the size (180x180) within the same hosted folder and naming it image@2x.png and image-large.png (just for the heck of it). This doesn't really solve the problem either.
Some other things I have not tried:
Monitoring your web server traffic for "not found" errors, to see if FB is trying to access a potential alternate resource when grabbing your image for display. The wall feed box that comes up is a webview, meaning web graphics. (It's FB's web page, meaning their rules, and I doubt the page's source is available to dabble with from within the SDK… so!)
Looking at the HTML of the feed itself with the Safari browser:
Inspecting the HTML around the final resulting image that is posted on my FB wall, I can see this:
<img class="img" src="http://platform.ak.fbcdn.net/www/app_full_proxy.php?app=153675474666495&v=1&size=z&cksum=773bba91f6146b2463eed0a0bb77dc42&src=http%3A%2F%2Fwww.thumbwizards.com%2Fspeakinapps%2Fgraphics%2Fboxed%2Faussie.png" alt="">
I am wondering:
Within HTML5 isn't there a mechanism to provide a toolkit type of javascript to display retina graphics from a web page?
Would it be possible to have that code run when grabbing the URL to the image (meaning the URL of the image would act as a pointer to the code)? I haven't tried playing with this, since my logic tells me that, per the URL above, FB is essentially taking control over the image at this point. I have noticed (without waiting long enough to confirm) that the image is apparently cached, and posting to the wall with a new image sometimes results in the older image still being used (and yes, I've cleared my browser cache)… perhaps it is simply cached in another location.
If there is another, unpublished parameter for the image type, I have not stumbled across it yet.
Can anyone figure out, from the source of
http://platform.ak.fbcdn.net/www/app_full_proxy.php
whether this PHP file is part of a publicly available image processor we can access to see what could be done?
Can anyone mention an app that uses a retina graphic in their feed post?
Just thoughts really; I've decided to not really give a crop. If you've made it this far, thanks for tuning in. So, Sulf, your 30x30 is being scaled to 90x90, making it UGLY!
Good luck.. If you figure anything else out, let me know!
Mark
Apple specifies that if you want the retina effect in your iOS app, the images you use should follow this format, i.e.:
sampleImag.png - 57x57 (size), 163 (DPI)
sampleImag@2x.png - 114x114 (size), 326 (DPI)
When you use these specific graphic images, your app will show the retina effect on the iPhone 4 and later generations.
Just point your code to a larger scaled image and Facebook will take care of the rest.

iPhone - access location information from a photo

Is it possible, in an iPhone app, to extract location information (geocode, I suppose it's called) from a photo taken with the iPhone camera?
If there is no API call to do it, is there any known way to parse the bytes of data to extract the information? Something I can roll on my own?
Unfortunately no.
The problem is this:
A JPEG file consists of several parts. For this question, the ones we are interested in are the image data and the EXIF data. The image data is the picture itself, and the EXIF data is where things like geocoding, shutter speed, camera type and so on are stored.
A UIImage (and a CGImage) contains only image data, no tags.
When the image picker selects an image (either from the library or the camera) it returns a UIImage, not a JPEG. This UIImage is created from the JPEG's image data, but the EXIF data in the JPEG is discarded.
This means this data is not in the UIImage at all and thus is not accessible.
I think the selected answer is wrong, actually. Well, not wrong. Everything it said is correct, but there is a way around that limitation.
UIImagePickerController passes a dictionary along with the UIImage it returns. One of the keys is UIImagePickerControllerMediaURL, which is "the filesystem URL for the movie". However, as noted here, in newer iOS versions it returns a URL for images as well. Couple that with the exif library mentioned by @Jasper and you might be able to pull geotags out of photos.
I haven't tried this method, but as @tomtaylor mentioned, this has to be possible somehow, as there are a few apps that do it. (e.g. Lab).
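A rough sketch of that combination, using ImageIO directly rather than a separate EXIF library (it assumes info is the dictionary from didFinishPickingMediaWithInfo: and that the URL actually points at the image file on your iOS version):
#import <ImageIO/ImageIO.h>
NSURL *mediaURL = [info objectForKey:UIImagePickerControllerMediaURL];
CGImageSourceRef source = CGImageSourceCreateWithURL((CFURLRef)mediaURL, NULL);
if (source) {
    CFDictionaryRef props = CGImageSourceCopyPropertiesAtIndex(source, 0, NULL);
    NSDictionary *gps = [(NSDictionary *)props objectForKey:(NSString *)kCGImagePropertyGPSDictionary];
    // Latitude/longitude come back as unsigned numbers plus N/S and E/W reference strings.
    NSLog(@"lat %@ %@, lon %@ %@",
          [gps objectForKey:(NSString *)kCGImagePropertyGPSLatitude],
          [gps objectForKey:(NSString *)kCGImagePropertyGPSLatitudeRef],
          [gps objectForKey:(NSString *)kCGImagePropertyGPSLongitude],
          [gps objectForKey:(NSString *)kCGImagePropertyGPSLongitudeRef]);
    if (props) CFRelease(props);
    CFRelease(source);
}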