How to continue to rotate an image - iPhone

There is a button, and I want to rotate the image by 180 degrees each time I click it, but it only works the first time I click the button. How can I continue to rotate the image? Thanks in advance.
the following is my source code:
- (IBAction)touchButton_Refresh {
photo *photos = [photoArray objectAtIndex:photoIndex];
NSData *xmlData = [NSData dataWithContentsOfFile:photos.localPath];
UIImage *image;
if ([xmlData isKindOfClass:[NSData class]]) {
image = [[UIImage imageWithData:xmlData] retain];
}
SlideItem *slideItemRotate;
if (toOrientation == 1 || toOrientation == 2) {
slideItemRotate = [[SlideItem alloc] initWithFrame:CGRectMake(768*photoIndex, 0, 768, 980)];
}
else if (toOrientation == 3 || toOrientation == 4) {
slideItemRotate = [[SlideItem alloc] initWithFrame:CGRectMake(1024*photoIndex, 0, 1024, 700)];
}
slideItemRotate.photoImageView.image = image;
CGAffineTransform rotation = CGAffineTransformMakeRotation(3.14159265);
[slideItemRotate.photoImageView setTransform:rotation];
[[[slideScrollView subviews] objectAtIndex:0] removeFromSuperview];
[slideScrollView addSubview:slideItemRotate];
[slideItemRotate release];
}

You have to concatenate the transform; otherwise you just keep applying the same rotation (not actually rotating it any further). So, replace these lines:
CGAffineTransform rotation = CGAffineTransformMakeRotation(3.14159265);
[slideItemRotate.photoImageView setTransform:rotation];
with this:
CGAffineTransform rotation = CGAffineTransformConcat([[slideItemRotate photoImageView] transform], CGAffineTransformMakeRotation(3.14159265));
[slideItemRotate.photoImageView setTransform:rotation];
This will actually concatenate the rotation and keep the image rotating around. Another way to do it, if you're always going around 180 degrees, is to test for the identity transform: if the view's transform is the identity transform, apply the rotation; if not, reset the transform to the identity transform (effectively inverting the process).

Related

ImageView Rotation behaves weird after Zooming

I have a UISlider for zooming the image view (instead of UIPinchGesture) and I'm using UIRotationGesture; both of them work fine independently. Zooming without doing the rotation gesture works fine, but once I perform a rotation and then zoom the image view in or out, it behaves weirdly, as if it loses the rotation scale (that's what I guess!).
How do I fix this?
Well, I'm not good with this math stuff; I've been struggling with this for a few days and have searched through the forums without finding a solution. Kindly help me :)
For zooming (here I'm not handling the transform, as I'm unsure how to):
-(void)scale:(UISlider *)sender
{
float sliderValue = [sender value];
CGRect newFrame = placeRing.frame;
newFrame.size = CGSizeMake(sliderValue, sliderValue);
placeRing.frame = newFrame;
}
For rotation:
- (void)twoFingersRotate:(UIRotationGestureRecognizer *)recognizer
{
isRotated = TRUE;
if ([recognizer state] == UIGestureRecognizerStateBegan || [recognizer state] == UIGestureRecognizerStateChanged) {
rotation= recognizer.rotation;
rotatedTransform = CGAffineTransformRotate([placeRing transform], [recognizer rotation]);
placeRing.transform = rotatedTransform;
[recognizer setRotation:0];
}
}
The rotation applies a transform, which invalidates the frame property you are using to resize the view. Use the bounds and center properties to zoom instead.
See the warning box in: http://developer.apple.com/library/ios/#documentation/UIKit/Reference/UIView_Class/UIView/UIView.html#//apple_ref/doc/uid/TP40006816
-(void)scale:(UISlider *)sender
{
float sliderValue = [sender value];
placeRing.bounds = CGRectMake(0, 0, sliderValue, sliderValue);
}

What is the proper way to handle image rotations for iphone/ipad?

I have 2 images, one in portrait mode, and the other in landscape mode.
What is the best way to switch these images when a mobile device view rotation takes place?
Currently I just display the portrait image, and when the device rotates to landscape mode the portrait image is simply stretched.
Should I check within the orientation rotation handler and simply set the image manually based on the orientation?
Thanks!
I found three ways. I think the last one is best.
1: Autoresizing
Example:
UIImageView *myImageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"yourImage.png"]];
myImageView.frame = self.view.bounds;
myImageView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
myImageView.contentMode = UIViewContentModeScaleAspectFill;
[self.view addSubview:myImageView];
[myImageView release];
2: CGAffineTransformMakeRotation
Example:
-(void)willRotateToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation
duration:(NSTimeInterval)duration {
if (toInterfaceOrientation == UIInterfaceOrientationLandscapeLeft) {
myImageView.transform = CGAffineTransformMakeRotation(M_PI / 2);
}
else if (toInterfaceOrientation == UIInterfaceOrientationLandscapeRight){
myImageView.transform = CGAffineTransformMakeRotation(-M_PI / 2);
}
else {
myImageView.transform = CGAffineTransformMakeRotation(0.0);
}
}
3: Set the autoresizing of myImageView to auto-fill the screen in Interface Builder
Example:
-(void)didRotateFromInterfaceOrientation:(UIInterfaceOrientation)fromInterfaceOrientation {
if ((self.interfaceOrientation == UIInterfaceOrientationLandscapeLeft) || (self.interfaceOrientation == UIInterfaceOrientationLandscapeRight)) {
    myImageView.image = [UIImage imageNamed:@"myImage-landscape.png"];
} else if ((self.interfaceOrientation == UIInterfaceOrientationPortrait) || (self.interfaceOrientation == UIInterfaceOrientationPortraitUpsideDown)) {
    myImageView.image = [UIImage imageNamed:@"myImage-portrait.png"];
} }
see more solutions here
developer.apple solution is here

How to get a rotated, zoomed and panned image from an UIImageView at its full resolution?

I have a UIImageView which can be rotated, panned, and scaled with gesture recognizers. As a result it is cropped in its enclosing view.
Everything is working fine, but I don't know how to save the visible part of the picture at its full resolution. It's not a screen grab.
I know I can get the UIImage straight from the visible content of the UIImageView, but it is limited to the resolution of the screen.
I assume that I have to apply the same transformations to the UIImage and crop it. Is there an easy way to do that?
Update:
For example, I have a UIImageView with a high-resolution image, let's say an 8MP iPhone 4S camera photo, which is transformed with gestures, so it becomes scaled, rotated, and moved around in its enclosing view. Obviously there is some cropping going on, so only part of the image is displayed. There is a huge difference between the displayed screen resolution and the underlying image resolution; I need an image at the image's resolution. The UIImageView is in UIViewContentModeScaleAspectFit, but a solution with UIViewContentModeScaleAspectFill is also fine.
This is my code:
- (void)rotatePiece:(UIRotationGestureRecognizer *)gestureRecognizer {
if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
[gestureRecognizer view].transform = CGAffineTransformRotate([[gestureRecognizer view] transform], [gestureRecognizer rotation]);
[gestureRecognizer setRotation:0];
}
}
- (void)scalePiece:(UIPinchGestureRecognizer *)gestureRecognizer {
if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
[gestureRecognizer view].transform = CGAffineTransformScale([[gestureRecognizer view] transform], [gestureRecognizer scale], [gestureRecognizer scale]);
[gestureRecognizer setScale:1];
}
}
-(void)panGestureMoveAround:(UIPanGestureRecognizer *)gestureRecognizer
{
UIView *piece = [gestureRecognizer view];
//We pass in the gesture to a method that will help us align our touches so that the pan and pinch will seems to originate between the fingers instead of other points or center point of the UIView
if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
CGPoint translation = [gestureRecognizer translationInView:[piece superview]];
[piece setCenter:CGPointMake([piece center].x + translation.x, [piece center].y+translation.y)];
[gestureRecognizer setTranslation:CGPointZero inView:[piece superview]];
} else if([gestureRecognizer state] == UIGestureRecognizerStateEnded) {
//Put the code that you may want to execute when the UIView became larger than certain value or just to reset them back to their original transform scale
}
}
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer {
// if the gesture recognizers are on different views, don't allow simultaneous recognition
if (gestureRecognizer.view != otherGestureRecognizer.view)
return NO;
// if either of the gesture recognizers is the long press, don't allow simultaneous recognition
if ([gestureRecognizer isKindOfClass:[UILongPressGestureRecognizer class]] || [otherGestureRecognizer isKindOfClass:[UILongPressGestureRecognizer class]])
return NO;
return YES;
}
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view from its nib.
appDelegate = (AppDelegate *)[[UIApplication sharedApplication] delegate];
faceImageView.image = appDelegate.faceImage;
UIRotationGestureRecognizer *rotationGesture = [[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(rotatePiece:)];
[faceImageView addGestureRecognizer:rotationGesture];
[rotationGesture setDelegate:self];
UIPinchGestureRecognizer *pinchGesture = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(scalePiece:)];
[pinchGesture setDelegate:self];
[faceImageView addGestureRecognizer:pinchGesture];
UIPanGestureRecognizer *panRecognizer = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(panGestureMoveAround:)];
[panRecognizer setMinimumNumberOfTouches:1];
[panRecognizer setMaximumNumberOfTouches:2];
[panRecognizer setDelegate:self];
[faceImageView addGestureRecognizer:panRecognizer];
[[UIApplication sharedApplication] setStatusBarHidden:YES withAnimation:UIStatusBarAnimationNone];
[appDelegate fadeObject:moveIcons StartAlpha:0 FinishAlpha:1 Duration:2];
currentTimer = [NSTimer timerWithTimeInterval:4.0f target:self selector:@selector(fadeoutMoveicons) userInfo:nil repeats:NO];
[[NSRunLoop mainRunLoop] addTimer: currentTimer forMode: NSDefaultRunLoopMode];
}
The following code creates a snapshot of the enclosing view (superview of faceImageView with clipsToBounds set to YES) using a calculated scale factor.
It assumes that the content mode of faceImageView is UIViewContentModeScaleAspectFit and that the frame of faceImageView is set to the enclosingView's bounds.
- (UIImage *)captureView {
float imageScale = sqrtf(powf(faceImageView.transform.a, 2.f) + powf(faceImageView.transform.c, 2.f));
CGFloat widthScale = faceImageView.bounds.size.width / faceImageView.image.size.width;
CGFloat heightScale = faceImageView.bounds.size.height / faceImageView.image.size.height;
float contentScale = MIN(widthScale, heightScale);
float effectiveScale = imageScale * contentScale;
CGSize captureSize = CGSizeMake(enclosingView.bounds.size.width / effectiveScale, enclosingView.bounds.size.height / effectiveScale);
NSLog(@"effectiveScale = %0.2f, captureSize = %@", effectiveScale, NSStringFromCGSize(captureSize));
UIGraphicsBeginImageContextWithOptions(captureSize, YES, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextScaleCTM(context, 1/effectiveScale, 1/effectiveScale);
[enclosingView.layer renderInContext:context];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
Depending on the current transform the resulting image will have a different size. For example when you zoom in, the size gets smaller. You can also set effectiveScale to a constant value in order to get an image with a constant size.
Your gesture recognizer code does not limit the scale factor, i.e. you can zoom out/in without being limited. That can be very dangerous! My capture method can output really large images when you've zoomed out very much.
If you have zoomed out the background of the captured image will be black. If you want it to be transparent, you must set the opaque-parameter of UIGraphicsBeginImageContextWithOptions to NO.
Why capturing the view if you have the original image? Just apply the transformations to it. Something like this may be a start:
UIImage *image = [UIImage imageNamed:@"<# original #>"];
CIImage *cimage = [CIImage imageWithCGImage:image.CGImage];
// build the transform you want
CGAffineTransform t = CGAffineTransformIdentity;
CGFloat angle = [(NSNumber *)[self.faceImageView valueForKeyPath:@"layer.transform.rotation.z"] floatValue];
CGFloat scale = [(NSNumber *)[self.faceImageView valueForKeyPath:@"layer.transform.scale"] floatValue];
t = CGAffineTransformConcat(t, CGAffineTransformMakeScale(scale, scale));
t = CGAffineTransformConcat(t, CGAffineTransformMakeRotation(-angle));
// create a new CIImage using the transform, crop, filters, etc.
CIImage *timage = [cimage imageByApplyingTransform:t];
// draw the result
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef imageRef = [context createCGImage:timage fromRect:[timage extent]];
UIImage *result = [UIImage imageWithCGImage:imageRef];
// save to disk
NSData *png = UIImagePNGRepresentation(result);
NSString *path = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/result.png"];
if (png && [png writeToFile:path atomically:NO]) {
NSLog(@"\n%@", path);
}
CGImageRelease(imageRef);
You can easily crop the output if that's what you want (see -[CIImage imageByCroppingToRect]), take the translation into account, apply a Core Image filter, etc., depending on your exact needs.
I think the code below captures your current view...
- (UIImage *)captureView {
CGRect rect = [self.view bounds];
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[self.yourImage.layer renderInContext:context];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
I think you want to save the displayed screen and use it, so I posted this code...
Hope this helps you...
:)

Image rotation Issues in Iphone

I want to rotate an image (left and right rotation) by an angle of 90 degrees. I am using the following code, but it rotates left by an angle of 360 degrees. How can I rotate my image by 90 degrees?
-(IBAction)btnLeftRotateClicked:(id)sender
{
CATransform3D rotationTransform = CATransform3DMakeRotation(1.0f * M_PI, 0, 0, 1.0);
CABasicAnimation* rotationAnimation;
rotationAnimation = [CABasicAnimation animationWithKeyPath:@"transform"];
rotationAnimation.toValue = [NSValue valueWithCATransform3D:rotationTransform];
rotationAnimation.duration = 0.25f;
rotationAnimation.cumulative = YES;
rotationAnimation.repeatCount = 1;
[imageAdjustView.layer addAnimation:rotationAnimation forKey:@"rotationAnimation"];
}
Thanks.
Why don't you use an affine transform if you just want to rotate the image in 2D?
CGAffineTransformMakeRotation(-M_PI * 0.5);
A rotation of 360 degrees is no rotation at all. I guess you mean 180 degrees = π. Thus, 90 degrees = 0.5 * π.
If you want to use some easy drop-in classes that rotate a UIImage and also scale it, try looking at http://bit.ly/w3r5t6, a UIImage category called WBImage.
@M.S.B:
I think this link will help you.
How to Rotate a UIImage 90 degrees?
How to programmatically rotate image by 90 Degrees in iPhone?
You can refer to this answer by fbrereto in the above link
What about something like:
static inline double radians (double degrees) {return degrees * M_PI/180;}
UIImage* rotate(UIImage* src, UIImageOrientation orientation)
{
UIGraphicsBeginImageContext(src.size);
CGContextRef context = UIGraphicsGetCurrentContext();
if (orientation == UIImageOrientationRight) {
CGContextRotateCTM (context, radians(90));
} else if (orientation == UIImageOrientationLeft) {
CGContextRotateCTM (context, radians(-90));
} else if (orientation == UIImageOrientationDown) {
// NOTHING
} else if (orientation == UIImageOrientationUp) {
CGContextRotateCTM (context, radians(90));
}
// Note: the rotation happens around the context origin, so for a real
// 90-degree rotation you also need to translate the CTM (and swap the
// context's width and height) so the image lands inside the canvas.
[src drawAtPoint:CGPointMake(0, 0)];
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return result;
}
EDIT: You can refer to answer given by Sabobin under the link How to programmatically rotate image by 90 Degrees in iPhone?. I have posted the same code so that you can refer it from here:
UIImageView *myImageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"my_image.png"]];
//set point of rotation
myImageView.center = CGPointMake(100.0, 100.0);
//rotate rect
myImageView.transform = CGAffineTransformMakeRotation(3.14159265/2);
Hope this helps you.

Discrepancies between bounds, frame, and frame calculated through CGAffineTransformScale

I have a simple app which places an image view on the screen and allows the user to move and scale it using the pinch and pan gesture recognizers. In my UIPinchGestureRecognizer action method (scale:), I'm trying to calculate the value of the transformed frame before I actually perform the transformation using CGAffineTransformScale. I'm not getting the correct value, however, and get some weird result that isn't in line with what the transformed frame should be. Below is my method:
- (void)scale:(UIPinchGestureRecognizer *)sender
{
if([sender state] == UIGestureRecognizerStateBegan)
{
lastScale = [sender scale];
}
if ([sender state] == UIGestureRecognizerStateBegan ||
[sender state] == UIGestureRecognizerStateChanged)
{
CGFloat currentScale = [[[sender view].layer valueForKeyPath:@"transform.scale"] floatValue];
CGFloat newScale = 1 - (lastScale - [sender scale]) * (CropThePhotoViewControllerPinchSpeed);
newScale = MIN(newScale, CropThePhotoViewControllerMaxScale / currentScale);
newScale = MAX(newScale, CropThePhotoViewControllerMinScale / currentScale);
NSLog(@"currentBounds: %@", NSStringFromCGRect([[sender view] bounds]));
NSLog(@"currentFrame: %@", NSStringFromCGRect([[sender view] frame]));
CGAffineTransform transform = CGAffineTransformScale([[sender view] transform], newScale, newScale);
CGRect nextFrame = CGRectApplyAffineTransform([[sender view] frame], transform);
NSLog(@"nextFrame: %@", NSStringFromCGRect(nextFrame));
//NSLog(@"nextBounds: %@", NSStringFromCGRect(nextBounds));
[sender view].transform = transform;
lastScale = [sender scale];
}
}
Here is a printed result I get:
/* currentBounds: {{0, 0}, {316, 236.013}}
currentFrame: {{-115.226,-53.4392}, {543.452, 405.891}}
nextFrame: {{-202.566, -93.9454}, {955.382, 713.551}} */
With these results, the currentBounds origin is obviously (0, 0), as it always is, and the width and height are those of the original image before it ever gets transformed. This value seems to stay the same no matter how many transformations I do.
currentFrame has the correct coordinates and the correct width and height based on the current state of the image in its transformed state.
nextFrame is incorrect; it should match the currentFrame from the next set of values that gets printed, but it doesn't.
So I have some questions:
1) Why is currentFrame displaying the correct value for the frame? I thought the frame was invalid after you perform transformations? This set of values was displayed after many enlargements and minimizations I was doing on the image with my fingers. It seems like the height and width of currentBounds is what I'd expect for the currentFrame.
2) Why is my next frame value being calculated incorrectly, and how do I accurately calculate the value of the transformed frame before I actually implement the transformation?
You need to apply your new transform to the view's untransformed frame. Try this:
CGAffineTransform transform = CGAffineTransformScale([[sender view] transform], newScale, newScale);
CGAffineTransform iTransform = CGAffineTransformInvert([[sender view] transform]);
CGRect rawFrame = CGRectApplyAffineTransform([[sender view] frame], iTransform);
CGRect nextFrame = CGRectApplyAffineTransform(rawFrame, transform);