In the iOS application I'm writing, I deal with PNGs because I deal with the alpha channel. For some reason, I can load a PNG into my imageView just fine, but when it comes time to either copy the image out of my application (onto the pasteboard) or save the image to my camera roll, the image rotates 90 degrees.
I've searched everywhere on this, and one of the things I learned is that if I used JPEGs, I apparently wouldn't have this problem, due to the EXIF orientation information they carry.
My app has full copy/paste functionality, and here's the kicker (I'll write this in steps so it is easier to follow):
1. Go to my camera roll and copy an image
2. Go into my app and press "Paste"; the image pastes just fine, and I can do that all day
3. Click the copy function I implemented, and then click "Paste", and the image pastes but is rotated.
I am 100% sure my copy and paste code isn't what is wrong here, because if I go back to Step 2 above and click "save", the photo saves to my library, but it is rotated 90 degrees!
What is even more strange is that it seems to work fine with images downloaded from the internet, but is very hit or miss with images I took with the phone myself. Some work, some don't...
Does anybody have any thoughts on this? Any possible workarounds I can use? I'm pretty confident in the code, because it works for about 75% of my images. I can post the code upon request, though.
For those who want a Swift solution, create an extension of UIImage and add the following method:
func correctlyOrientedImage() -> UIImage {
    if self.imageOrientation == .up {
        return self
    }

    UIGraphicsBeginImageContextWithOptions(size, false, scale)
    draw(in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
    let normalizedImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    return normalizedImage ?? self
}
If you're having trouble due to the existing image's imageOrientation property, you can construct an otherwise identical image with a different orientation like this:
CGImageRef imageRef = [sourceImage CGImage];
UIImage *rotatedImage = [UIImage imageWithCGImage:imageRef scale:1.0 orientation:UIImageOrientationUp];
You may need to experiment with just what orientation to set on your replacement images, possibly switching based on the orientation you started with.
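For instance, here is a hedged sketch of what that switching might look like; the mapping below is purely illustrative, and ImageWithCorrectedOrientation is a hypothetical helper, so adjust the cases to whatever your source images actually need:

#import <UIKit/UIKit.h>

// Sketch only: rebuild the image with a different orientation tag.
// Which cases you need (and what they map to) depends on how your
// images were captured; treat this mapping as a placeholder.
static UIImage *ImageWithCorrectedOrientation(UIImage *sourceImage) {
    UIImageOrientation corrected;
    switch (sourceImage.imageOrientation) {
        case UIImageOrientationRight:   // e.g. a typical portrait capture
        case UIImageOrientationDown:
            corrected = UIImageOrientationUp;
            break;
        default:
            corrected = sourceImage.imageOrientation;
            break;
    }
    return [UIImage imageWithCGImage:sourceImage.CGImage
                               scale:sourceImage.scale
                         orientation:corrected];
}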
Also keep an eye on your memory usage. Photography apps often run out, and this approach will double your storage per picture until you release the source image.
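One general pattern for keeping that in check (not from the answer above; ProcessImages and the normalize block are hypothetical stand-ins) is to wrap each per-image pass in its own autorelease pool so intermediate copies are reclaimed promptly:

#import <UIKit/UIKit.h>

// Sketch: process a batch of images without letting autoreleased
// intermediates accumulate across the whole loop.
static void ProcessImages(NSArray *images, UIImage *(^normalize)(UIImage *)) {
    for (UIImage *image in images) {
        @autoreleasepool {
            UIImage *fixed = normalize(image);
            // ...use or persist `fixed` before the pool drains...
            (void)fixed;
        }
    }
}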
Took a few days, but I finally figured it out thanks to the answer @Dondragmer posted. But I figured I'd post my full solution.
So basically I had to write a method to intelligently auto-rotate my images. The downside is that I have to call this method everywhere throughout my code and it is kind of processor intensive, especially when working on mobile devices, but the plus side is that I can take images, copy images, paste images, and save images, and they all rotate properly. Here's the code I ended up using (the method isn't 100% complete yet; I still need to fix memory leaks and the like).
I ended up learning that the very first time an image was inserted into my application (whether because the user pressed "take image", "paste image", or "select image"), for some reason it was inserted just fine, without auto-rotating. At that point, I stored whatever the rotation value was in a global variable called imageOrientationWhenAddedToScreen. This made my life easier because when it came time to manipulate the image and save it out of the program, I simply checked this cached global variable and determined whether I needed to properly rotate the image.
- (UIImage *)rotateImageAppropriately:(UIImage *)imageToRotate {
    //This method will properly rotate our image; we need to make sure
    //we call it pretty much everywhere...
    CGImageRef imageRef = [imageToRotate CGImage];
    UIImage *properlyRotatedImage;

    if (imageOrientationWhenAddedToScreen == 0) {
        //Don't rotate the image
        properlyRotatedImage = imageToRotate;
    } else if (imageOrientationWhenAddedToScreen == 3) {
        //We need to rotate the image back to a 3 (UIImageOrientationRight)
        properlyRotatedImage = [UIImage imageWithCGImage:imageRef
                                                   scale:1.0
                                             orientation:UIImageOrientationRight];
    } else if (imageOrientationWhenAddedToScreen == 1) {
        //We need to rotate the image back to a 1 (UIImageOrientationDown)
        properlyRotatedImage = [UIImage imageWithCGImage:imageRef
                                                   scale:1.0
                                             orientation:UIImageOrientationDown];
    }
    return properlyRotatedImage;
}
I am still not 100% sure why Apple has this weird image rotation behavior (try this: take your phone, turn it upside down, and take a picture; you'll notice that the final picture turns out right side up. Perhaps this is why Apple has this type of functionality?).
I know I spent a great deal of time figuring this out, so I hope it helps other people!
This "weird rotation" behavior is really not that weird at all. It is smart, and by smart I mean memory efficient. When you rotate an iOS device the camera hardware rotates with it. When you take a picture that picture will be captured however the camera is oriented. The UIImage is able to use this raw picture data without copying by just keeping track of the orientation it should be in. When you use UIImagePNGRepresentation() you lose this orientation data and get a PNG of the underlying image as it was taken by the camera. To fix this instead of rotating you can tell the original image to draw itself to a new context and get the properly oriented UIImage from that context.
UIImage *image = ...;

//Have the image draw itself in the correct orientation if necessary
if (!(image.imageOrientation == UIImageOrientationUp ||
      image.imageOrientation == UIImageOrientationUpMirrored))
{
    CGSize imgsize = image.size;
    UIGraphicsBeginImageContext(imgsize);
    [image drawInRect:CGRectMake(0.0, 0.0, imgsize.width, imgsize.height)];
    image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}

NSData *png = UIImagePNGRepresentation(image);
Here is one more way to achieve that:
@IBAction func rightRotateAction(sender: AnyObject) {
    let imgToRotate = CIImage(CGImage: sourceImageView.image?.CGImage)
    let transform = CGAffineTransformMakeRotation(CGFloat(M_PI_2))
    let rotatedImage = imgToRotate.imageByApplyingTransform(transform)
    let extent = rotatedImage.extent()
    let contex = CIContext(options: [kCIContextUseSoftwareRenderer: false])
    let cgImage = contex.createCGImage(rotatedImage, fromRect: extent)
    adjustedImage = UIImage(CGImage: cgImage)!
    UIView.transitionWithView(sourceImageView, duration: 0.5, options: UIViewAnimationOptions.TransitionCrossDissolve, animations: {
        self.sourceImageView.image = self.adjustedImage
    }, completion: nil)
}
You can use Image I/O to save a PNG image to a file (or NSMutableData) while respecting the orientation of the image. In the example below I save the PNG image to a file at path.
- (BOOL)savePngFile:(UIImage *)image toPath:(NSString *)path {
    NSData *data = UIImagePNGRepresentation(image);
    int exifOrientation = [UIImage cc_iOSOrientationToExifOrientation:image.imageOrientation];
    NSDictionary *metadata = @{(__bridge id)kCGImagePropertyOrientation: @(exifOrientation)};
    NSURL *url = [NSURL fileURLWithPath:path];

    CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)data, NULL);
    if (!source) {
        return NO;
    }

    CFStringRef UTI = CGImageSourceGetType(source);
    CGImageDestinationRef destination = CGImageDestinationCreateWithURL((__bridge CFURLRef)url, UTI, 1, NULL);
    if (!destination) {
        CFRelease(source);
        return NO;
    }

    CGImageDestinationAddImageFromSource(destination, source, 0, (__bridge CFDictionaryRef)metadata);
    BOOL success = CGImageDestinationFinalize(destination);

    CFRelease(destination);
    CFRelease(source);
    return success;
}
cc_iOSOrientationToExifOrientation: is a method in a UIImage category.
+ (int)cc_iOSOrientationToExifOrientation:(UIImageOrientation)iOSOrientation {
    int exifOrientation = -1;
    switch (iOSOrientation) {
        case UIImageOrientationUp:
            exifOrientation = 1;
            break;
        case UIImageOrientationDown:
            exifOrientation = 3;
            break;
        case UIImageOrientationLeft:
            exifOrientation = 8;
            break;
        case UIImageOrientationRight:
            exifOrientation = 6;
            break;
        case UIImageOrientationUpMirrored:
            exifOrientation = 2;
            break;
        case UIImageOrientationDownMirrored:
            exifOrientation = 4;
            break;
        case UIImageOrientationLeftMirrored:
            exifOrientation = 5;
            break;
        case UIImageOrientationRightMirrored:
            exifOrientation = 7;
            break;
        default:
            exifOrientation = -1;
    }
    return exifOrientation;
}
You can alternatively save the image to NSData using CGImageDestinationCreateWithData and pass NSMutableData instead of NSURL in CGImageDestinationCreateWithURL.
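In case it is useful, here is a rough sketch of that in-memory variant; pngDataForImage: is a hypothetical name, and it reuses the cc_iOSOrientationToExifOrientation: category from above:

#import <ImageIO/ImageIO.h>

// Sketch: same approach as savePngFile:toPath:, but writing into an
// NSMutableData via CGImageDestinationCreateWithData instead of a URL.
- (NSData *)pngDataForImage:(UIImage *)image {
    NSData *data = UIImagePNGRepresentation(image);
    int exifOrientation = [UIImage cc_iOSOrientationToExifOrientation:image.imageOrientation];
    NSDictionary *metadata = @{(__bridge id)kCGImagePropertyOrientation: @(exifOrientation)};

    CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)data, NULL);
    if (!source) {
        return nil;
    }

    NSMutableData *output = [NSMutableData data];
    CGImageDestinationRef destination =
        CGImageDestinationCreateWithData((__bridge CFMutableDataRef)output,
                                         CGImageSourceGetType(source), 1, NULL);
    if (!destination) {
        CFRelease(source);
        return nil;
    }

    CGImageDestinationAddImageFromSource(destination, source, 0, (__bridge CFDictionaryRef)metadata);
    BOOL success = CGImageDestinationFinalize(destination);
    CFRelease(destination);
    CFRelease(source);
    return success ? output : nil;
}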
I have a blog application that I'm making. To compose a new entry, there is a "Compose Entry" view where the user can select a photo and input text. For the photo, there is a UIImageView placeholder and upon clicking this, a custom ImagePicker comes up where the user can select up to 3 photos.
This is where the problem comes in. I don't need the full resolution photo from the ALAsset, but at the same time, the thumbnail is too low resolution for me to use.
So what I'm doing at this point is resizing the fullResolution photos to a smaller size. However, this takes some time, especially when resizing up to 3 photos at once.
Here is a code snippet to show what I'm doing:
ALAssetRepresentation *rep = [[dict objectForKey:@"assetObject"] defaultRepresentation];
CGImageRef iref = [rep fullResolutionImage];
if (iref)
{
    CGRect screenBounds = [[UIScreen mainScreen] bounds];
    UIImage *previewImage;
    UIImage *largeImage;

    if ([rep orientation] == ALAssetOrientationUp) //landscape image
    {
        largeImage = [[UIImage imageWithCGImage:iref] scaledToWidth:screenBounds.size.width];
        previewImage = [[UIImage imageWithCGImage:iref] scaledToWidth:300];
    }
    else // portrait image
    {
        previewImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:300] imageRotatedByDegrees:90];
        largeImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:screenBounds.size.height] imageRotatedByDegrees:90];
    }
}
Here, from the full-resolution image, I am creating two images: a preview image (max 300px on the long end) and a large image (max 960px or 640px on the long end). The preview image is what is shown in the app itself in the "new entry" preview. The large image is what will be used when uploading to the server.
The actual code I'm using to resize I grabbed from somewhere on here:
- (UIImage *)scaledToWidth:(float)i_width
{
    float oldWidth = self.size.width;
    float scaleFactor = i_width / oldWidth;

    float newHeight = self.size.height * scaleFactor;
    float newWidth = oldWidth * scaleFactor;

    UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight));
    [self drawInRect:CGRectMake(0, 0, newWidth, newHeight)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Am I doing things wrong here? As it stands, the ALAsset thumbnail is too low clarity, and at the same time, I don't need the entire full resolution. It's all working now, but the resizing takes some time. Is this just a necessary consequence?
Thanks!
It is a necessary consequence of resizing your image that it will take some amount of time. How much depends on the device, the resolution of the asset, and the format of the asset. But you don't have any control over that. What you do have control over is where the resizing takes place. I suspect that right now you are resizing the image on your main thread, which will cause the UI to grind to a halt while the resizing happens. Do enough images, and your app will appear hung for long enough that the user will just go off and do something else (perhaps check out competing apps in the App Store).
What you should be doing is performing the resizing off the main thread. With iOS 4 and later, this has become much simpler because you can use Grand Central Dispatch to do the resizing. You can take your original block of code from above and wrap it in a block like this:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
    ALAssetRepresentation *rep = [[dict objectForKey:@"assetObject"] defaultRepresentation];
    CGImageRef iref = [rep fullResolutionImage];
    if (iref)
    {
        CGRect screenBounds = [[UIScreen mainScreen] bounds];
        __block UIImage *previewImage;
        __block UIImage *largeImage;

        if ([rep orientation] == ALAssetOrientationUp) //landscape image
        {
            largeImage = [[UIImage imageWithCGImage:iref] scaledToWidth:screenBounds.size.width];
            previewImage = [[UIImage imageWithCGImage:iref] scaledToWidth:300];
        }
        else // portrait image
        {
            previewImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:300] imageRotatedByDegrees:90];
            largeImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:screenBounds.size.height] imageRotatedByDegrees:90];
        }

        dispatch_async(dispatch_get_main_queue(), ^{
            // do whatever you need to do on the main thread here once your image is resized.
            // this is going to be things like setting the UIImageViews to show your new images
            // or adding new views to your view hierarchy
        });
    }
});
You'll have to think about things a little differently this way. For example, what used to be a single step is now broken up into multiple steps. Code that used to run after this will now run before the image resize is complete, or before you actually do anything with the images, so you need to make sure you don't have any dependencies on those images, or you'll likely crash.
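A common way to manage those dependencies (a general pattern, not the poster's code; ResizeImageAsync and its blocks are hypothetical) is to funnel everything that needs the finished images through a completion block:

#import <UIKit/UIKit.h>

// Sketch: do the resize on a background queue and hand the result
// back on the main queue, so dependent code runs only once the
// resized image actually exists.
static void ResizeImageAsync(UIImage *source,
                             UIImage *(^resizeBlock)(UIImage *),
                             void (^completion)(UIImage *resized)) {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
        UIImage *resized = resizeBlock(source);
        dispatch_async(dispatch_get_main_queue(), ^{
            completion(resized); // safe to touch UIKit again here
        });
    });
}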
A late answer, but for those stumbling on this question, you might want to consider using the fullScreenImage rather than the fullResolutionImage of the defaultRepresentation. It's usually much smaller, but still large enough to maintain good quality for larger thumbnails.
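A rough sketch of what that swap might look like, mirroring the question's snippet (dict and the scaledToWidth: category come from the question; note that fullScreenImage is already rotated for display, so the orientation branch goes away):

#import <AssetsLibrary/AssetsLibrary.h>

// Sketch: same flow as the question, but with fullScreenImage.
ALAssetRepresentation *rep = [[dict objectForKey:@"assetObject"] defaultRepresentation];
CGImageRef iref = [rep fullScreenImage];
if (iref) {
    // No ALAssetOrientation branch needed: unlike fullResolutionImage,
    // fullScreenImage is already rotated correctly for display.
    UIImage *previewImage = [[UIImage imageWithCGImage:iref] scaledToWidth:300];
    // ...use previewImage...
}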
I've got a UIView which I'm rendering to a UIImage via the typical UIGraphicsBeginImageContextWithOptions method, using a scale of 2.0 so the image output will always be the "retina display" version of what would show up onscreen, regardless of the user's actual screen resolution.
The UIView I'm rendering contains both images and text (UIImages and UILabels). The image is appearing in the rendered UIImage at its full resolution, and looks great. But the UILabels appear to have been rasterized at a 1.0 scale and then upscaled to 2.0, resulting in blurry text.
Is there something I'm doing wrong, or is there some way to get the text to render nice and crisp at the higher scale level? Or is there some way to do this other than using the scaling parameter of UIGraphicsBeginImageContextWithOptions that would have better results? Thanks!
The solution is to change the label's contentsScale to 2 before you draw it, then set it back immediately thereafter. I just coded up a project to verify it, and it's working just fine, making a 2x image on a normal retina phone (simulator). [If you have a public place I can put it, let me know.]
EDIT: the extended code walks the subviews and any container UIViews to set/unset the scale
- (IBAction)snapShot:(id)sender
{
    [self changeScaleforView:snapView scale:2];
    UIGraphicsBeginImageContextWithOptions(snapView.bounds.size, snapView.opaque, 2);
    [snapView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    imageDisplay.image = img; // contentsScale
    imageDisplay.contentMode = UIViewContentModeScaleAspectFit;
    [self changeScaleforView:snapView scale:1];
}

- (void)changeScaleforView:(UIView *)aView scale:(CGFloat)scale
{
    [aView.subviews enumerateObjectsUsingBlock:^void(UIView *v, NSUInteger idx, BOOL *stop)
     {
         if ([v isKindOfClass:[UILabel class]]) {
             v.layer.contentsScale = scale;
         } else if ([v isKindOfClass:[UIImageView class]]) {
             // labels and images
             // v.layer.contentsScale = scale; won't work
             // if the image is not "@2x", you could subclass UIImageView and set the name of the @2x
             // on it as a property, then here you would set this imageNamed as the image, then undo it later
         } else if ([v isMemberOfClass:[UIView class]]) {
             // container view
             [self changeScaleforView:v scale:scale];
         }
     }];
}
Try rendering to an image at double size, and then create the scaled image from it:
UIGraphicsBeginImageContextWithOptions(size, NO, 1.0);
// Do stuff
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
newImage = [UIImage imageWithCGImage:[newImage CGImage] scale:2.0 orientation:UIImageOrientationUp];
Where:
size = realSize * scale;
I have been struggling with much the same oddities in the context of rendering a text view to PDF. I found out that there are some documented properties on the CALayer objects which make up the view. Maybe setting the rasterizationScale of the relevant (sub)layer(s) helps.
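If you want to experiment with that, a minimal sketch might look like this (RenderLayerAtScale is a hypothetical helper; a targetScale of 2.0 would match the retina snapshot discussed above):

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// Sketch: temporarily raise a layer's rasterization/contents scale
// while rendering it offscreen, then restore the old values.
static void RenderLayerAtScale(CALayer *layer, CGContextRef ctx, CGFloat targetScale) {
    CGFloat oldRasterizationScale = layer.rasterizationScale;
    CGFloat oldContentsScale = layer.contentsScale;
    layer.rasterizationScale = targetScale;
    layer.contentsScale = targetScale;

    [layer renderInContext:ctx];

    layer.rasterizationScale = oldRasterizationScale;
    layer.contentsScale = oldContentsScale;
}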
I have developed a tile game application for the iPhone 3.
In it, I take an image from my resources and divide it into a number of tiles using the CGImageCreateWithImageInRect(originalImage.CGImage, frame) function.
It works great on all iPhones, but now I want it to work on Retina displays too.
So as per this link, I have taken another image with double the size of the current image and renamed it by adding the suffix @2x. But the problem is that only the upper half of the retina image is used. I think that's because of the frame I set while using CGImageCreateWithImageInRect. So what should be done to make this work?
Any kind of help will be really appreciated.
Thanks in advance...
The problem is likely that the @2x image scale is only automatically set up properly for certain initializers of UIImage... Try loading your UIImages using code like this from Tasty Pixel. The entry at that link talks more about this issue.
Using the UIImage+TPAdditions category from the link, you'll implement it like so (after making sure that the images and their @2x counterparts are in your project):
NSString *baseImagePath = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents"];
NSString *myImagePath = [baseImagePath stringByAppendingPathComponent:@"myImage.png"]; // note no need to add @2x.png here
UIImage *myImage = [UIImage imageWithContentsOfResolutionIndependentFile:myImagePath];
Then you should be able to use CGImageCreateWithImageInRect(myImage.CGImage, frame);
Here's how I got it to work in an app I did:
//this is a method that takes a UIImage and slices it into 16 tiles (GridSize * GridSize)
#define GridSize 4
- (void)sliceImage:(UIImage *)image {
    CGSize imageSize = [image size];
    CGSize square = CGSizeMake(imageSize.width/GridSize, imageSize.height/GridSize);
    CGFloat scaleMultiplier = [image scale];
    square.width *= scaleMultiplier;
    square.height *= scaleMultiplier;
    CGFloat scale = ([self frame].size.width/GridSize)/square.width;
    CGImageRef source = [image CGImage];
    if (source != NULL) {
        for (int r = 0; r < GridSize; ++r) {
            for (int c = 0; c < GridSize; ++c) {
                CGRect slice = CGRectMake(c*square.width, r*square.height, square.width, square.height);
                CGImageRef sliceImage = CGImageCreateWithImageInRect(source, slice);
                if (sliceImage) {
                    //we have a tile (as a CGImageRef) from the source image
                    //do something with it
                    CFRelease(sliceImage);
                }
            }
        }
    }
}
The trick is using the -[UIImage scale] property to figure out how big of a rect you should be slicing.
I basically want to automatically create a tiled image from a bunch of source images, and then save that to the user's photo album. I'm not having any success drawing a bunch of small UIImages into one big UIImage. What's the best way to accomplish this? Currently I'm using UIGraphicsBeginImageContext() and [UIImage drawAtPoint:], etc. All I ever end up with is a 512x512 black square. How should I be doing this? I've looked at CGLayers, etc.; it seems there are a lot of options, but none that work particularly easily.
Let me actually put my code in:
CGSize size = CGSizeMake(512, 512);
UIGraphicsBeginImageContext(size);
UIGraphicsPushContext(UIGraphicsGetCurrentContext());

for (int i = 0; i < 4; i++)
{
    for (int j = 0; j < 4; j++)
    {
        UIImage *image = [self getImageAt:i :j];
        [image drawAtPoint:CGPointMake(i*128, j*128)];
    }
}

UIGraphicsPopContext();
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(result, nil, nil, nil);
I should note that the above is not exactly what happens in my code. What really happens is that I call every line up to and including UIGraphicsPushContext, then in an animation timer I slowly increment the drawing and draw into the context. Then after it's all done, I call everything from UIGraphicsPopContext onward.
Oh, then you can just save the onscreen view after it has been rendered on screen:
UIGraphicsBeginImageContext(myBigView.bounds.size);
// render the view's layer into the current context
// (don't call -drawRect: directly yourself)
[myBigView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
are you storing it back to an image?
UIImage *myBigImage = UIGraphicsGetImageFromCurrentImageContext();
To do exactly what I wanted to do...
Make your GLView as big as the total thing you want. Also make sure glOrtho and your viewport have the right size. Then just draw whatever you want wherever you want, and take a single OpenGL screenshot. Then you don't need to worry about combining into a single UIImage over multiple OpenGL rendering passes, which is no doubt what was causing my issue.
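For reference, that single OpenGL screenshot is usually taken with the standard glReadPixels pattern. The following is a sketch of that general approach, not the poster's actual code; GLViewSnapshot is a hypothetical helper and assumes an RGBA framebuffer is bound when it is called:

#import <UIKit/UIKit.h>
#import <OpenGLES/ES1/gl.h>

// Sketch: read the framebuffer into a malloc'd buffer, wrap it in a
// CGImage, then draw it into a UIKit context. The UIKit context's
// flipped geometry conveniently un-flips GL's bottom-left origin.
static UIImage *GLViewSnapshot(GLint width, GLint height) {
    NSInteger dataLength = (NSInteger)width * height * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength);

    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorSpace,
                                    kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                    provider, NULL, true, kCGRenderingIntentDefault);

    UIGraphicsBeginImageContext(CGSizeMake(width, height));
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(ctx, kCGBlendModeCopy);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), iref);
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    CGImageRelease(iref);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);
    free(data);
    return snapshot;
}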
I'm trying to write an animation on the iPhone, without much success; I'm getting crashes and nothing seems to work.
What I want to do appears simple: create a UIImage, and draw part of another UIImage into it. I got a bit confused with the contexts and layers and stuff.
Could someone please explain how to write something like that (efficiently), with example code?
For the record, this turns out to be fairly straightforward - everything you need to know is somewhere in the example below:
+ (UIImage *)addStarToThumb:(UIImage *)thumb
{
    CGSize size = CGSizeMake(50, 50);
    UIGraphicsBeginImageContext(size);

    CGPoint thumbPoint = CGPointMake(0, 25 - thumb.size.height / 2);
    [thumb drawAtPoint:thumbPoint];

    UIImage *starred = [UIImage imageNamed:@"starred.png"];
    CGPoint starredPoint = CGPointMake(0, 0);
    [starred drawAtPoint:starredPoint];

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return result;
}
I just want to add a comment about the answer above by dpjanes, because it is a good answer but will look blocky on iPhone 4 (with its high-resolution retina display), since UIGraphicsGetImageFromCurrentImageContext() does not render at the full resolution of an iPhone 4.
Use "...WithOptions()" instead. But since ...WithOptions() is not available until iOS 4.0, you could weak-link it (discussed here), then use the following code to use the hi-res version only when it is supported:
if (UIGraphicsBeginImageContextWithOptions != NULL) {
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
} else {
    UIGraphicsBeginImageContext(size);
}
Here is an example that merges two images of the same size into one. I don't know if this is the best way, or whether this kind of code is posted somewhere else. Here are my two cents.
+ (UIImage *)mergeBackImage:(UIImage *)backImage withFrontImage:(UIImage *)frontImage
{
    UIImage *newImage;
    CGRect rect = CGRectMake(0, 0, backImage.size.width, backImage.size.height);

    // Begin context
    UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0);

    // draw images
    [backImage drawInRect:rect];
    [frontImage drawInRect:rect];

    // grab context
    newImage = UIGraphicsGetImageFromCurrentImageContext();

    // end context
    UIGraphicsEndImageContext();

    return newImage;
}
Hope this helps.