How to draw a Mirrored UIImage in UIView with drawRect? - iphone

I load an image and create a mirrored version like this:
originalImg = [UIImage imageNamed:@"ms06.png"];
mirrorImg = [UIImage imageWithCGImage:[originalImg CGImage] scale:1.0 orientation:UIImageOrientationUpMirrored];
Then I set the above UIImage object on a subclass of UIView and override drawRect::
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    CGAffineTransform t0 = CGContextGetCTM(context);
    CGContextConcatCTM(context, t0);
    CGContextDrawImage(context, self.bounds, [image CGImage]);
    CGContextRestoreGState(context);
}
No matter which image I draw, the displayed image is always the original one; the mirrored image is never shown when I set it on the UIView subclass.
I'm sure the mirrored image is set on the UIView correctly, because the debug info shows its orientation member equals 4, which means UIImageOrientationUpMirrored, while the original image's equals 0.
Could anyone help me with this problem? Thanks.
I also tried displaying the mirrored image in a UIImageView with setImage:, and it works correctly. By the way, I found that the breakpoint in drawRect: is never hit when calling setImage: on a UIImageView. How can we define custom drawing behavior (such as drawing a line above the image) when loading an image into a UIImageView?

You mirror the image at the UIKit level. This returns a new UIImage, but the underlying CGImage stays the same; a few NSLogs will confirm this.
You can also do transformations at the UIKit level. If you use this approach, I would suggest using originalImg.scale instead of 1.0, so the code works on both Retina and non-Retina displays:
[UIImage imageWithCGImage:[originalImg CGImage] scale:originalImg.scale orientation:UIImageOrientationUpMirrored];
If you really need to mirror the CGImage itself, take a look at NYXImagesKit on GitHub (see UIImage+Rotating.m).
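Note that CGContextDrawImage works directly on the CGImage and ignores the UIImage's imageOrientation, which is why the mirroring never shows up in drawRect:. A minimal sketch of a drawRect: that respects the orientation flag (assuming image is the view's UIImage ivar, as in the question) draws through UIKit instead:

```objectivec
- (void)drawRect:(CGRect)rect
{
    // -drawInRect: goes through UIKit, which applies the UIImage's
    // imageOrientation (UIImageOrientationUpMirrored here) and also
    // handles the flipped Core Graphics coordinate system for you.
    [image drawInRect:self.bounds];
}
```

Alternatively, you could keep CGContextDrawImage and flip the context yourself with CGContextTranslateCTM/CGContextScaleCTM before drawing.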

Related

Resize UIImage for editing and save at a high resolution?

Is there a way to load a picture from the library, or take a new one, resize it to a smaller size so that it can be edited, and then save it at the original size? I'm struggling with this and can't get it to work. I have the resize code set up like this:
firstImage = [firstImage resizedImageWithContentMode:UIViewContentModeScaleAspectFit bounds:CGSizeMake(960, 640) interpolationQuality:kCGInterpolationHigh];
and then I have a UIImageView:
[finalImage setFrame:CGRectMake(10, 100, 300, 232)];
finalImage.image = firstImage;
If I pass the picture's original size to CGSizeMake, the process is very slow. I've seen other apps that work on a smaller image, and their editing is fairly quick, even with effects. What's the right approach here?
You can refer to the Move and Scale Demo. It is a custom control that implements moving, scaling, and cropping an image, which could be really helpful to you.
This is also the simplest code for scaling an image to a given size. Refer to it here: Resize/Scale of an Image. The code is as follows:
// UIImage+Scale.h
@interface UIImage (scale)
- (UIImage *)scaleToSize:(CGSize)size;
@end
Implementation UIImage Scale Category
With the interface in place, let’s write the code for the method that will be added to the UIImage class.
// UIImage+Scale.m
#import "UIImage+Scale.h"
@implementation UIImage (scale)
- (UIImage *)scaleToSize:(CGSize)size
{
    // Create a bitmap graphics context.
    // This also makes it the current context.
    UIGraphicsBeginImageContext(size);
    // Draw the scaled image into the current context.
    [self drawInRect:CGRectMake(0, 0, size.width, size.height)];
    // Create a new image from the current context.
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    // Pop the current context from the stack.
    UIGraphicsEndImageContext();
    // Return the new scaled image.
    return scaledImage;
}
@end
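One caveat, assuming you need to support Retina devices: UIGraphicsBeginImageContext always creates a 1x context, so the result will look blurry on Retina screens. A scale-aware sketch of the same method could look like this:

```objectivec
- (UIImage *)scaleToSize:(CGSize)size
{
    // A scale of 0.0 tells UIKit to use the scale factor of the
    // device's main screen (1.0 or 2.0), instead of forcing 1x.
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
    [self drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaledImage;
}
```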
Using the UIImage Scale Method
Calling the new scale method we added to UIImage is as simple as this:
#import "UIImage+Scale.h"
...
// Create an image
UIImage *image = [UIImage imageNamed:@"myImage.png"];
// Scale the image
UIImage *scaledImage = [image scaleToSize:CGSizeMake(25.0f, 35.0f)];
Let me know if you want more help.
Hope this helps you.

Draw image constrain aspects

I have an image view; the image inside it is chosen from the photo roll. I also have a button; when you tap it, an image is added to a view with addSubview:. That piece of image is draggable, resizable, and rotatable.
One problem: when I'm done, I flatten everything with drawInRect:. This draws all the layers onto each other and creates a single image. However, the layers end up in the wrong place and at the wrong size, and they are never rotated. I don't know how to fix this; the relevant code is below. Is it possible to keep the original image size and still have the layers drawn in the same place I dragged them on the image view? If not, how do I choose a new size and still get the result I want? And how do I draw an image rotated?
UIGraphicsBeginImageContext(imageView2.image.size);
// Draw image1
[imageView2.image drawInRect:CGRectMake(0, 0, imageView2.image.size.width, imageView2.image.size.height)];
// Draw image2
for (UIImageView *viewsSub in [self.imageViewer subviews])
{
    [viewsSub.image drawInRect:CGRectMake(viewsSub.frame.origin.x, viewsSub.frame.origin.y, viewsSub.frame.size.width, viewsSub.frame.size.height)];
}
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
pld.imageChosen2 = resultingImage;
UIGraphicsEndImageContext();
So you want something like taking a "screenshot" of your actual image view (with its subviews included), don't you?
I used this piece of code to do something similar, but I don't know if it will work for you.
- (UIImage *)screenshot {
    CGFloat scale = [[UIScreen mainScreen] scale];
    if (scale > 1.5) {
        UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, scale);
    } else {
        UIGraphicsBeginImageContext(self.frame.size);
    }
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return screenshot;
}
You should add this method inside your imageview (the one that contains all the subviews you're adding).
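As a side note, the scale check can be dropped entirely by letting UIKit pick the screen scale itself; a shorter sketch of the same method:

```objectivec
- (UIImage *)screenshot {
    // Passing 0.0 as the scale means "use the device's main screen
    // scale", so this works on both Retina and non-Retina displays.
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
    // renderInContext: draws the layer tree, including all sublayers
    // contributed by subviews.
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return screenshot;
}
```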

How can I save a photo of part of the screen to the local iPhone's Photos?

I have put a UILabel that the user has chosen over a UIImageView that was also chosen by the user. I would like to put these two into a picture, kind of like a screenshot of a small part of the screen. I have absolutely no idea how to do this and have no experience in this. Any help is appreciated!!
You could set up a bitmap context with a clipping mask of the area you want to save, then use the backing layer's renderInContext: method to draw into that context.
CGSize imageSize = CGSizeMake(960, 580);
UIGraphicsBeginImageContext(imageSize);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextClipToRect(context, CGRectMake(10, 10, 200, 200)); // whatever rect you want
[self.layer renderInContext:context];
UIImage *myImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Save to camera roll
UIImageWriteToSavedPhotosAlbum(myImage, self, @selector(image:didFinishSavingWithError:contextInfo:), NULL);
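The completion selector passed to UIImageWriteToSavedPhotosAlbum must have the signature UIKit documents for it; a minimal sketch of that callback:

```objectivec
// Called by UIKit when the save finishes; error is nil on success.
- (void)image:(UIImage *)image
    didFinishSavingWithError:(NSError *)error
    contextInfo:(void *)contextInfo
{
    if (error) {
        NSLog(@"Saving failed: %@", error);
    } else {
        NSLog(@"Image saved to the Photos album.");
    }
}
```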

Capturing 3d object with its Background view in iPhone

I am new to OpenGL ES.
I'm stuck on capturing a 3D object together with its background image.
I am adding the CAEAGLLayer view as a subview of my view, and I am able to capture an image of the 3D object, but it comes out with a black background. I want to capture the whole view on which I'm showing the 3D object.
Please help me resolve this issue.
You need to give more information. What is the background?
If you use a single view and it is the EAGLView, then I would guess you should get the correct result.
If you use the EAGLView as a subview of your background view, you need to capture two images and then combine them into one:
Call [CALayer renderInContext:] to get an image of your background view.
Combine the two images:
+ (UIImage *)imageFromImage:(UIImage *)img1 andImage:(UIImage *)img2 {
    UIGraphicsBeginImageContext(img1.size);
    [img1 drawAtPoint:CGPointMake(0, 0)];
    [img2 drawAtPoint:CGPointMake(0, 0)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

drawInRect not preserving transform

UIGraphicsBeginImageContext(backgroundImageView.frame.size);
UIImageView *image1 = backgroundImageView;
UIImageView *image2 = boat;
// Draw image1
[image1.image drawInRect:CGRectMake(0, 0, image1.image.size.width, image1.image.size.height)];
// Draw image2
[image2.image drawInRect:CGRectMake(boat.center.x, boat.center.y, boat.image.size.width, boat.image.size.height)];
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
When resultingImage is displayed, the transform angle of image2 is not preserved. I thought that when I set image2 = boat, the transform of boat would carry over to image2.
I suspect this relates to the fact that I am working with UIImageView instead of UIView. I have tried many different things but have been unable to resolve this issue.
Any help would be appreciated.
Thanks.
When you draw an image into a graphics context, the context's transformation matrix is used, and that matrix is distinct from any transform applied to a view. So the transform on image2 (a UIImageView, presumably) is completely irrelevant to direct image drawing.
You can either specify a new transformation with the various CGContext and CGAffineTransform functions in Quartz, or you can directly apply a copy of the UIImageView's transform like so:
CGContextConcatCTM(UIGraphicsGetCurrentContext(), someView.transform);
[someView.image drawInRect: CGRectMake(...)];
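Since concatenating the CTM rotates around the context's origin rather than the view's center, a fuller sketch (assuming someView is the transformed image view) would translate to the view's center first and restore the context state afterwards:

```objectivec
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSaveGState(ctx);
// Rotate/scale around the view's center, not the context origin.
CGContextTranslateCTM(ctx, someView.center.x, someView.center.y);
CGContextConcatCTM(ctx, someView.transform);
CGContextTranslateCTM(ctx,
                      -someView.bounds.size.width / 2,
                      -someView.bounds.size.height / 2);
[someView.image drawInRect:(CGRect){CGPointZero, someView.bounds.size}];
// Undo the translation/concat so later drawing is unaffected.
CGContextRestoreGState(ctx);
```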