Image not rotating correctly - iphone

I am trying to rotate my images by 90 degrees. The code works without the rotation, but when I rotate and translate, nothing shows up. What am I doing wrong?
NSString *filePath = [[NSBundle mainBundle] pathForResource:@"check" ofType:@"jpg"];
UIImage *newImage = [[UIImage alloc] initWithContentsOfFile:filePath];
UIGraphicsBeginImageContext(newImage.size);
//CGContextRef c = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(UIGraphicsGetCurrentContext(), newImage.size.width, 0);
//CGContextTranslateCTM(UIGraphicsGetCurrentContext(), 0, newImage.size.width);
//CGContextScaleCTM(UIGraphicsGetCurrentContext(), 1.0, -1.0);
CGContextRotateCTM(UIGraphicsGetCurrentContext(), 90);
CGRect imageRect = CGRectMake(0, 0, newImage.size.height, newImage.size.width);
CGContextDrawImage(UIGraphicsGetCurrentContext(), imageRect, newImage.CGImage);
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
ImageView.image = viewImage;

I would suggest using an image view and its transform property, like so (note that this rotates how the view is displayed; it does not change the underlying image data):
UIImage *newImage = [[UIImage alloc] initWithContentsOfFile:filePath];
UIImageView *newImageView = [[UIImageView alloc] initWithImage:newImage];
newImageView.transform = CGAffineTransformMakeRotation(M_PI_2);

You should do:
UIGraphicsBeginImageContext(newImage.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextRotateCTM(context, 90 * M_PI / 180); // the angle must be in radians, not degrees
[newImage drawAtPoint:CGPointMake(0, 0)];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
Good luck.
UPDATE:
Actually, this is a duplicate of How to Rotate a UIImage 90 degrees?
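For completeness, here is a sketch of a helper that does both the rotation and the translation so the drawing lands inside the new context; it assumes the source image carries no EXIF orientation, and you may need -M_PI_2 instead of M_PI_2 depending on the direction you want:
// Returns a copy of the image rotated by 90 degrees.
UIImage *RotateImage90(UIImage *image)
{
    CGSize rotatedSize = CGSizeMake(image.size.height, image.size.width);
    UIGraphicsBeginImageContextWithOptions(rotatedSize, NO, image.scale);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Rotate about the centre of the new canvas, then draw the image
    // centred on that point so it ends up fully inside the context.
    CGContextTranslateCTM(context, rotatedSize.width / 2, rotatedSize.height / 2);
    CGContextRotateCTM(context, M_PI_2);
    [image drawInRect:CGRectMake(-image.size.width / 2, -image.size.height / 2,
                                 image.size.width, image.size.height)];
    UIImage *rotated = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return rotated;
}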

The problem was my define statement, and I also needed to flip the image. Here is the completed code. (The image has to be exact because an OCR step processes it afterwards, and if it is even slightly off it gets rejected.)
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextSaveGState(c);
NSString *filePath = [[NSBundle mainBundle] pathForResource:@"check" ofType:@"jpg"];
UIImage *newImage = [[UIImage alloc] initWithContentsOfFile:filePath];
NSLog(#"Rect width: %#", NSStringFromCGSize(newImage.size) );
UIGraphicsBeginImageContext(newImage.size);
CGContextTranslateCTM(UIGraphicsGetCurrentContext(), 0, newImage.size.width);
CGContextScaleCTM(UIGraphicsGetCurrentContext(), 1.0, -1.0);
CGContextTranslateCTM( UIGraphicsGetCurrentContext(), 0.5f * newImage.size.width, 0.5f * newImage.size.height ) ;
CGContextRotateCTM( UIGraphicsGetCurrentContext(), radians( -90 ) ) ;
CGRect imageRect = CGRectMake(-(newImage.size.height* 0.5f) + 200, -(0.5f * newImage.size.width) + 200 , newImage.size.height - 200, newImage.size.width - 200);
CGContextDrawImage(UIGraphicsGetCurrentContext(), imageRect, newImage.CGImage);
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
ImageView.image = viewImage;
CGContextRestoreGState(c);
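For reference, the radians() macro used above (the "define statement" mentioned) would typically be defined along these lines:
// Assumed degree-to-radian conversion macro used by the code above.
#define radians(degrees) ((degrees) * M_PI / 180.0)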

Related

Splitting an UIImage in two different UIimageviews in CoreGraphics-iPhone

Problem: I have three UIImageViews. One holds a UIImage, and the other two, leftImageView and rightImageView, should hold the left and right halves of that image. I am trying to achieve this with Core Graphics, i.e. by drawing the image into the two image views.
But the right image view isn't showing up. Refer to the code below:
UIImage *image = imgView.image;
CGSize sz = [image size];
CGRect leftRect = CGRectMake(0, 0, imgView.frame.size.width/2, imgView.frame.size.height);
CGRect rightRect = CGRectMake(imgView.frame.size.width/2, 0, imgView.frame.size.width/2, imgView.frame.size.height);
CGImageRef leftReference = CGImageCreateWithImageInRect([image CGImage], leftRect);
CGImageRef rightReference = CGImageCreateWithImageInRect([image CGImage], rightRect);
// Left Image ...
UIGraphicsBeginImageContextWithOptions(CGSizeMake(leftRect.size.width, leftRect.size.height), NO, 0);
CGContextRef con = UIGraphicsGetCurrentContext();
CGContextDrawImage(con, leftRect,flip(leftReference));
imgViewLeft = [[UIImageView alloc] initWithImage:UIGraphicsGetImageFromCurrentImageContext()];
UIGraphicsEndImageContext();
[self.view addSubview:imgViewLeft];
// Right Image ...
UIGraphicsBeginImageContextWithOptions(CGSizeMake(rightRect.size.width, rightRect.size.height), NO, 0);
con = UIGraphicsGetCurrentContext();
CGContextDrawImage(con, rightRect,flip(rightReference));
imgViewRight = [[UIImageView alloc] initWithImage:UIGraphicsGetImageFromCurrentImageContext()];
UIGraphicsEndImageContext();
[self.view addSubview:imgViewRight];
I think you can use the following snippet.
UIImage *image = [UIImage imageNamed:@"xxx.png"]; // You can change this line.
BOOL isLeft = YES; // You can change this variable(isLeft = YES : left piece, NO : right piece).
UIGraphicsBeginImageContext(CGSizeMake(image.size.width / 2, image.size.height));
CGContextRef context = UIGraphicsGetCurrentContext();
if (isLeft)
CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, image.size.width, image.size.height), image.CGImage);
else
CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(- image.size.width / 2, 0, image.size.width, image.size.height), image.CGImage);
UIImage *imageCopy = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return imageCopy;
I can't run this code at the moment, but I think it is almost correct. Please try it.
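If it helps, the same idea wrapped into a self-contained helper would look something like this (the method name halfOfImage:left: is only illustrative):
// Returns the left or right half of the given image.
// A minimal sketch; it ignores image scale and orientation.
- (UIImage *)halfOfImage:(UIImage *)image left:(BOOL)isLeft
{
    UIGraphicsBeginImageContext(CGSizeMake(image.size.width / 2, image.size.height));
    // For the right half, shift the full image left so that its
    // right-hand portion falls inside the context.
    CGFloat xOffset = isLeft ? 0 : -image.size.width / 2;
    [image drawInRect:CGRectMake(xOffset, 0, image.size.width, image.size.height)];
    UIImage *half = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return half;
}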
Edit:
In your code:
// Right Image ...
UIGraphicsBeginImageContextWithOptions(CGSizeMake(rightRect.size.width, rightRect.size.height), NO, 0);
con = UIGraphicsGetCurrentContext();
CGContextDrawImage(con, rightRect,flip(rightReference));
imgViewRight = [[UIImageView alloc] initWithImage:UIGraphicsGetImageFromCurrentImageContext()];
UIGraphicsEndImageContext();
The third line is wrong:
CGContextDrawImage(con, rightRect, flip(rightReference));
It should be
CGContextDrawImage(con, leftRect, flip(rightReference));
because the new image context for the right half has its own origin at (0, 0). rightReference is already the cropped right half, so it must be drawn into a rect that starts at the origin (which is exactly what leftRect is); drawing it at rightRect's x offset pushes it outside the context, which is why the right image view stays empty.

How to combine UIImage and UILabel into one image and save

I have two UILabels and two images that I need to merge into a single UIImage to save.
I know I could do it with a screenshot, but my main image has rounded corners, so a rectangular capture would still show the sharp edges.
I can do this to combine the images :
//CGSize newImageSize = CGSizeMake(cropImage.frame.size.width, cropImage.frame.size.height);
CGSize newImageSize = CGSizeMake(480, 320);
NSLog(#"CGSize %#",NSStringFromCGSize(newImageSize));
UIGraphicsBeginImageContextWithOptions(newImageSize, NO, 0.0); //retina res
[self.viewForImg.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
NSData *imgData = UIImageJPEGRepresentation(image, 0.9); //UIImagePNGRepresentation ( image ); // get JPEG representation
UIImage * imagePNG = [UIImage imageWithData:imgData]; // wrap a UIImage around the JPEG data
UIGraphicsEndImageContext();
return imagePNG;
but I'm not sure how to add in the UILabels.
Any reply is much appreciated.
Use [myLabel.layer renderInContext:UIGraphicsGetCurrentContext()]; to draw the label into the current context.
For example:
UIGraphicsBeginImageContextWithOptions(newImageSize, NO, 0.0); //retina res
[self.viewForImg.layer renderInContext:UIGraphicsGetCurrentContext()];
[myLabel.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
Based on your comments, if you want to draw the label text in a particular frame, do it as follows:
[myLabel drawTextInRect:CGRectMake(0.0f, 0.0f, 100.0f, 50.0f)];
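Putting those pieces together, a minimal sketch of the whole capture (assuming the newImageSize, self.viewForImg and myLabel from the snippets above) would be:
UIGraphicsBeginImageContextWithOptions(newImageSize, NO, 0.0); // 0.0 = current screen scale
[self.viewForImg.layer renderInContext:UIGraphicsGetCurrentContext()];
// Draw the label's text at an explicit position in the same context.
[myLabel drawTextInRect:CGRectMake(0.0f, 0.0f, 100.0f, 50.0f)];
UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();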
If you want to fill the background with a color, try this:
CGRect drawRect = CGRectMake(rect.origin.x, rect.origin.y,rect.size.width, rect.size.height);
CGContextSetRGBFillColor(context, 100.0f/255.0f, 100.0f/255.0f, 100.0f/255.0f, 1.0f);
CGContextFillRect(context, drawRect);
or you can check this question Setting A CGContext Transparent Background.
UIEdgeInsets insets = UIEdgeInsetsMake(1, 1, 1, 1);
CGSize imageSizeWithBorder = CGSizeMake(view.frame.size.width + insets.left + insets.right, view.frame.size.height + insets.top + insets.bottom);
UIGraphicsBeginImageContextWithOptions(imageSizeWithBorder, UIEdgeInsetsEqualToEdgeInsets(insets, UIEdgeInsetsZero), 0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextClipToRect(context, (CGRect){{insets.left, insets.top}, view.frame.size});
CGContextTranslateCTM(context, -view.frame.origin.x + insets.left, -view.frame.origin.y + insets.top);
[view.layer renderInContext:context];
UIImage *viewCopy = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Try this!
UIGraphicsBeginImageContextWithOptions(newImageSize, NO, scale); //retina res
[COGI.layer renderInContext:UIGraphicsGetCurrentContext()];
[COGI.image drawInRect:CGRectMake(0, 0, 248, 290)];
[iconI.image drawInRect:CGRectMake(4, 20, 240, 240)];
[stampI.image drawInRect:CGRectMake(0, -5, 248, 290)];
[headerL drawTextInRect:CGRectMake(14, 35, 220, 40)];
[detailL drawTextInRect:CGRectMake(16, 200, 215, 65)];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
[[UIColor redColor] set];
NSData *imgData = UIImageJPEGRepresentation(image, 1.0); //UIImagePNGRepresentation ( image ); // get JPEG representation
UIImage * imagePNG = [UIImage imageWithData:imgData]; // wrap a UIImage around the JPEG data
UIGraphicsEndImageContext();
return imagePNG;

UIImageJPEGRepresentation giving 2x images on retina display

I have this code, which creates an image, and then adds some effects to it and sizes it down to make largeThumbnail.
UIImage *originalImage = [UIImage imageWithData:self.originalImage];
thumbnail = createLargeThumbnailFromImage(originalImage);
NSLog(#"thumbnail: %f", thumbnail.size.height);
NSData *thumbnailData = UIImageJPEGRepresentation(thumbnail, 1.0);
Later on:
UIImage *image = [UIImage imageWithData:self.largeThumbnail];
NSLog(#"thumbnail 2: %f", image.size.height);
NSLog returns:
thumbnail: 289.000000
thumbnail 2: 578.000000
As you can see, when it converts the image back from data, it makes it 2x the size. Any ideas why this may be happening?
Large thumbnail code:
UIImage *createLargeThumbnailFromImage(UIImage *image) {
    UIImage *resizedImage;
    resizedImage = [image imageScaledToFitSize:LARGE_THUMBNAIL_SIZE];
    CGRect largeThumbnailRect = CGRectMake(0, 0, resizedImage.size.width, resizedImage.size.height);
    UIGraphicsBeginImageContextWithOptions(resizedImage.size, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Image
    CGContextTranslateCTM(context, 0, resizedImage.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawImage(context, largeThumbnailRect, resizedImage.CGImage);

    // Border
    CGContextSaveGState(context);
    CGRect innerRect = rectForRectWithInset(largeThumbnailRect, 1.5);
    CGMutablePathRef borderPath = createRoundedRectForRect(innerRect, 0);
    CGContextSetStrokeColorWithColor(context, [[UIColor whiteColor] CGColor]);
    CGContextSetLineWidth(context, 3);
    CGContextAddPath(context, borderPath);
    CGContextStrokePath(context);
    CGContextRestoreGState(context);

    UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return thumbnail;
}
Try replacing the part where you load the second image:
UIImage *image = [UIImage imageWithData:self.largeThumbnail];
with this one:
UIImage *jpegImage = [UIImage imageWithData:self.largeThumbnail];
UIImage *image = [UIImage imageWithCGImage:jpegImage.CGImage scale:originalImage.scale orientation:jpegImage.imageOrientation];
What happens here is that +imageWithData: has no way to know the image's scale, so it defaults to 1.0. Your thumbnail was rendered at scale 2.0 on the Retina screen, so its pixel dimensions are reported as points and the size appears doubled. Recreating the image with the correct scale fixes the reported size.
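Alternatively, if you can require iOS 6 or later (an assumption about your deployment target), you can supply the scale directly when decoding the data:
// Decode the JPEG data at the screen's scale so points match the original.
UIImage *image = [UIImage imageWithData:self.largeThumbnail
                                  scale:[UIScreen mainScreen].scale];
NSLog(@"thumbnail 2: %f", image.size.height);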

Draw UIImage with transformations in iOS

I have some transformable UIImageViews (decorations on a photo) which the user can drag, pinch, and rotate. After editing, I want the app to generate the final image. How can I draw the UIImageViews with their transform property taken into account?
My first solution was renderInContext:, but that only gives me a 320x480 image. Can I generate an image at an arbitrary resolution?
Then I tried code like the following:
UIGraphicsBeginImageContext(CGSizeMake(640.0f, 896.0f));
CGContextRef currentContext = UIGraphicsGetCurrentContext();
UIImage *backgroundImage = [[UIImage alloc] initWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"fittingBackground" ofType:@"jpg"]];
[backgroundImage drawInRect:CGRectMake(0.0f, 0.0f, 640.0f, 896.0f)];
CGContextSaveGState(currentContext);
CGContextConcatCTM(currentContext, self.photoImageView.transform);
CGRect photoFrame = self.photoImageView.frame;
[self.photoImageView.image drawInRect:CGRectMake(photoFrame.origin.x * 2, photoFrame.origin.y * 2, photoFrame.size.width * 2, photoFrame.size.height * 2)];
CGContextRestoreGState(currentContext);
CGContextSaveGState(currentContext);
CGContextConcatCTM(currentContext, self.productImageView.transform);
CGRect productFrame = self.productImageView.frame;
[self.productImageView.image drawInRect:CGRectMake(productFrame.origin.x * 2, productFrame.origin.y * 2, productFrame.size.width * 2, productFrame.size.height * 2)];
CGContextRestoreGState(currentContext);
UIImage *resultImage = [UIImage imageWithCGImage:CGBitmapContextCreateImage(currentContext)];
UIGraphicsEndImageContext();
However, the result looks wrong. I've logged the frame, bounds, and center, but I don't know which rect I should use in drawInRect: once a transform has been applied.
Try this,
- (void)drawBackgroundInLayer:(CALayer *)destinationLayer {
    CGFloat scaledFactor = [[UIScreen mainScreen] scale];
    CGRect layerFrame = destinationLayer.frame;
    CGRect scaledRect = CGRectMake(layerFrame.origin.x,
                                   layerFrame.origin.y,
                                   layerFrame.size.width * scaledFactor,
                                   layerFrame.size.height * scaledFactor);
    // Get the size and do some drawing
    CGSize sizeBack = CGSizeMake(layerFrame.size.width * scaledFactor, layerFrame.size.height * scaledFactor);
    UIGraphicsBeginImageContext(sizeBack);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetRGBFillColor(context, 0., 0., 0., 1.);
    drawBorder(context, scaledRect);
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    if (destinationLayer) {
        [destinationLayer setContents:(id)img.CGImage];
    }
}
The most important lines are
UIGraphicsBeginImageContext(sizeBack);
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
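On the original question of which rect to use: once a view has a non-identity transform, its frame is no longer meaningful, so it is safer to work from center and bounds. A minimal sketch along those lines (with outputScale standing in for the 2x factor in your code) might look like this:
// Draw one transformed image view into the current image context.
// Assumes UIGraphicsBeginImageContext(...) has already been called.
- (void)drawImageView:(UIImageView *)imageView atScale:(CGFloat)outputScale {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    // Move to the view's centre (frame is undefined under a transform),
    // apply the view's transform, then draw centred on the origin.
    CGContextTranslateCTM(context, imageView.center.x * outputScale, imageView.center.y * outputScale);
    CGContextConcatCTM(context, imageView.transform);
    CGSize size = imageView.bounds.size;
    [imageView.image drawInRect:CGRectMake(-size.width * outputScale / 2,
                                           -size.height * outputScale / 2,
                                           size.width * outputScale,
                                           size.height * outputScale)];
    CGContextRestoreGState(context);
}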

Capturing Screen

I am trying to capture (screenshot) a view. For that I am using the piece of code shown below, which saves it to my documents directory as a PNG image.
UIGraphicsBeginImageContextWithOptions(highlightViewController.fhView.centerView.frame.size, YES, 1.0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *appFile = [documentsDirectory stringByAppendingPathComponent:@"1.png"];
NSData *imageData = UIImagePNGRepresentation(screenshot);
[imageData writeToFile:appFile atomically:YES];
UIGraphicsEndImageContext();
Question: can I capture only part of the view? In the code above I can't change the origin (frame). If anyone has another approach to capturing a particular part of a view, please share it.
You could crop the image:
http://iosdevelopertips.com/graphics/how-to-crop-an-image.html
CGRect rect = CGRectMake(0,0,10,10);
CGImageRef imageRef = CGImageCreateWithImageInRect([screenshot CGImage], rect);
UIImage *croppedScreenshot = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
Try this code. It definitely works, as I have used it in many of my projects:
- (UIImage *)image
{
    if (cachedImage == nil) {
        // YOU CAN CHANGE THE FRAME HERE TO WHATEVER YOU WANT TO CAPTURE
        CGRect imageFrame = CGRectMake(0, 0, 400, 300);
        UIView *imageView = [[UIView alloc] initWithFrame:imageFrame];
        [imageView setOpaque:YES];
        [imageView setUserInteractionEnabled:NO];
        [self renderInView:imageView withTheme:nil];

        UIGraphicsBeginImageContext(imageView.bounds.size);
        CGContextRef c = UIGraphicsGetCurrentContext();
        CGContextGetCTM(c);
        CGContextScaleCTM(c, 1, -1);
        CGContextTranslateCTM(c, 0, -imageView.bounds.size.height);
        [imageView.layer renderInContext:c];
        cachedImage = [UIGraphicsGetImageFromCurrentImageContext() retain];

        // rescale graph
        UIImage *bigImage = UIGraphicsGetImageFromCurrentImageContext();
        CGImageRef scaledImage = [self newCGImageFromImage:[bigImage CGImage] scaledToSize:CGSizeMake(100.0f, 75.0f)];
        cachedImage = [[UIImage imageWithCGImage:scaledImage] retain];
        CGImageRelease(scaledImage);
        UIGraphicsEndImageContext();

        [imageView release];
    }
    return cachedImage;
}
I hope this will help you.
See if you can specify the rect like this and then take the screenshot.
CGRect requiredRect = CGRectMake(urView.frame.origin.x, urView.frame.origin.y, urView.bounds.size.width, urView.bounds.size.height);
UIGraphicsBeginImageContext(requiredRect.size);
You can alter the origin and see if it works.
If this doesn't work out, you can try cropping the image as mentioned by @mcb
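Another option is to translate the context before rendering, so only the region you care about lands inside the bitmap; this is just a sketch, with captureRect standing in for the portion of the view you want:
// Capture only captureRect from self.view by shifting the layer
// before rendering, instead of cropping afterwards.
CGRect captureRect = CGRectMake(50, 100, 200, 150); // hypothetical region
UIGraphicsBeginImageContextWithOptions(captureRect.size, YES, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(context, -captureRect.origin.x, -captureRect.origin.y);
[self.view.layer renderInContext:context];
UIImage *partialScreenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();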
You can use this code
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGRect rect;
rect = CGRectMake(250, 61, 410, 255);
CGImageRef imageRef = CGImageCreateWithImageInRect([viewImage CGImage], rect);
UIImage *img = [UIImage imageWithCGImage:imageRef];
UIImageWriteToSavedPhotosAlbum(img, nil, nil, nil);
CGImageRelease(imageRef);