After applying a new scale to my picture, I then apply a rotation, and the scaled image snaps back to its old scale:
- (void)rotate:(UIRotationGestureRecognizer *)rotationRecognizer {
    double rotation = rotationRecognizer.rotation;
    // "Make" builds a fresh transform, so the previously applied scale is discarded here
    ((UIImageView *)[rotationRecognizer view]).transform = CGAffineTransformMakeRotation(rotation);
    self.photoParent = [self.pictures objectAtIndex:0];
    self.photoParent.rotation = [NSNumber numberWithDouble:rotation];
    NSLog(@"rotation %@", self.photoParent);
}
See my answer here.
Applying a "Make..." transform will wipe out any previously-applied transforms. You need to use the cumulative form, CGAffineTransformRotate.
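A minimal sketch of the cumulative version of the handler above (same names as the question's code); resetting the recognizer's rotation each time means only the incremental delta is applied on top of whatever scale/rotation is already there:
- (void)rotate:(UIRotationGestureRecognizer *)rotationRecognizer {
    UIImageView *view = (UIImageView *)[rotationRecognizer view];
    // Concatenate the new rotation onto the existing transform instead of replacing it
    view.transform = CGAffineTransformRotate(view.transform, rotationRecognizer.rotation);
    // Reset so the next callback delivers only the incremental rotation
    rotationRecognizer.rotation = 0.0;
}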
I have a simple rotation gesture implemented in my code, but the problem is that when I rotate the image it always drifts off the screen/out of the view to the right.
The center X of the image view being rotated keeps increasing (hence it moves right off the screen and out of the view).
I would like it to rotate around its current center, but the center is changing for some reason. Any ideas what is causing this?
Code Below:
- (void)viewDidLoad
{
    [super viewDidLoad];

    CALayer *l = [self.viewCase layer];
    [l setMasksToBounds:YES];
    [l setCornerRadius:30.0];

    self.imgUserPhoto.userInteractionEnabled = YES;
    [self.imgUserPhoto setClipsToBounds:NO];

    UIRotationGestureRecognizer *rotationRecognizer = [[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(rotationDetected:)];
    [self.view addGestureRecognizer:rotationRecognizer];
    rotationRecognizer.delegate = self;
}
- (void)rotationDetected:(UIRotationGestureRecognizer *)rotationRecognizer
{
    CGFloat angle = rotationRecognizer.rotation;
    self.imageView.transform = CGAffineTransformRotate(self.imageView.transform, angle);
    rotationRecognizer.rotation = 0.0;
}
You want to rotate the image around its center, but that's not what is actually happening. Rotation transforms take place around the origin, so what you have to do is apply a translate transform first to map the origin to the center of the image, and then apply the rotation transform, like so:
self.imageView.transform = CGAffineTransformTranslate(self.imageView.transform, self.imageView.bounds.size.width/2, self.imageView.bounds.size.height/2);
Please note that after rotating you'll probably have to undo the translate transform in order to correctly draw the image.
Hope this helps
Edit:
To quickly answer your question: to undo the translate transform, subtract the same offsets you added in the first place, for example:
// The next line will add a translate transform
self.imageView.transform = CGAffineTransformTranslate(self.imageView.transform, 10, 10);
self.imageView.transform = CGAffineTransformRotate(self.imageView.transform, radians);
// The next line will undo the translate transform
self.imageView.transform = CGAffineTransformTranslate(self.imageView.transform, -10, -10);
However, after creating this quick project I realized that when you apply a rotation transform using UIKit (like the way you're apparently doing it) the rotation actually takes place around the center. It is only when using CoreGraphics that the rotation happens around the origin. So now I'm not sure why your image goes off the screen. Anyway, take a look at the project and see if any code there helps you.
Let me know if you have any more questions.
The 'Firefox' image is drawn using UIKit; the blue rect is drawn using CoreGraphics.
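For reference, here is a minimal Core Graphics sketch (a hypothetical drawRect: implementation, not taken from the linked project) of rotating drawing around a rect's center rather than around the origin, using the translate-rotate-translate pattern described above:
- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGPoint center = CGPointMake(CGRectGetMidX(rect), CGRectGetMidY(rect));

    CGContextSaveGState(ctx);
    // Move the origin to the center, rotate, then move the origin back
    CGContextTranslateCTM(ctx, center.x, center.y);
    CGContextRotateCTM(ctx, M_PI_4); // 45 degrees, just as an example
    CGContextTranslateCTM(ctx, -center.x, -center.y);

    CGContextSetFillColorWithColor(ctx, [UIColor blueColor].CGColor);
    CGContextFillRect(ctx, CGRectInset(rect, 40.0, 40.0));
    CGContextRestoreGState(ctx);
}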
You aren't rotating the image around its centre. You'll need to correct this manually by translating it back to the correct position.
When the iPhone's screen orientation changes, I adjust the projection matrix of my 3D rendering to the new aspect value. However, doing this in either willRotateToInterfaceOrientation or didRotateFromInterfaceOrientation causes the aspect ratio to be wrong during the transition, because at the beginning of the animation the screen size is still the old one and only changes to the new bounds gradually while rotating. Therefore I want the aspect value used for my 3D projection matrix to change gradually as well. To achieve this, I retrieve the start time and duration of the rotation animation in willAnimateRotationToInterfaceOrientation:
- (void)willAnimateRotationToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation duration:(NSTimeInterval)duration
{
    _aspect.isChanging = YES;
    _aspect.startedChanging = [[NSDate date] retain];
    _aspect.changeDuration = duration;
    _aspect.oldValue = self.renderer.aspect;
    _aspect.currentValue = fabsf(self.view.bounds.size.width / self.view.bounds.size.height);
}
Note that the view bounds size is already set to the new value that will be valid after the animation.
Then, in update, I do the following:
- (void)update
{
    if (_aspect.isChanging) {
        float f = MIN(1, [[NSDate date] timeIntervalSinceDate:_aspect.startedChanging] / _aspect.changeDuration);
        self.renderer.aspect = _aspect.oldValue * (1 - f) + _aspect.currentValue * f;
    } else {
        self.renderer.aspect = _aspect.currentValue;
    }
    [self.renderer update];
}
This already works quite well, but the rendered aspect change does not match the actual aspect change, because I'm interpolating it linearly. Therefore I tried to match the easing of the actual aspect change by throwing math functions at the problem. The best result I could get was by adding the following line:
f = 0.5f - 0.5f*cosf(f*M_PI);
This results in almost no visible stretching of the image during the rotation; however, if you look closely, it still seems slightly mismatched somewhere in between. I guess the end user won't notice it, but I'm asking here in case there is a better solution. These are my questions:
What is the actual easing function used for the change in aspect ratio during the rotation change animation?
Is there a way to get the actual width and height of the view as it is displayed during the orientation change animation? This would allow me to retrieve the in-between aspect directly.
On the first bullet point, you can use CAMediaTimingFunction (probably with kCAMediaTimingFunctionEaseInEaseOut) to get the control points for the curve that defines the transition. Then just use a cubic bezier curve formula.
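A minimal sketch of that idea, assuming the system uses the standard ease-in/ease-out timing function (the helper is mine, not part of the question's code). Note that it plugs the time fraction straight in as the Bézier parameter, which is only an approximation; strictly you would solve x(t) = f for t and then evaluate y(t):
#import <QuartzCore/QuartzCore.h>

// Hypothetical helper: approximate the eased fraction for a linear time fraction f in [0, 1]
static float EasedFraction(float f)
{
    CAMediaTimingFunction *timing =
        [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut];

    float c1[2], c2[2];
    [timing getControlPointAtIndex:1 values:c1];
    [timing getControlPointAtIndex:2 values:c2];

    // Cubic Bezier with P0 = (0,0) and P3 = (1,1); evaluate the y component
    float t = f, mt = 1.0f - t;
    return 3.0f * mt * mt * t * c1[1] + 3.0f * mt * t * t * c2[1] + t * t * t;
}
The returned value could then replace f in the interpolation inside update.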
On the second bullet point, you can use [[view layer] presentationLayer] to get a version of that view's CALayer with all current animations applied as per their current state. So if you check the dimensions of that you should get the then current values — I guess if you act upon a CADisplayLink callback then you'll be at most one frame behind.
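A sketch of that second approach, assuming update is already being driven by a CADisplayLink as suggested:
- (void)update
{
    // The presentation layer reflects the bounds as currently displayed,
    // including the in-flight rotation animation
    CALayer *presentation = [self.view.layer presentationLayer];
    CGSize size = presentation ? presentation.bounds.size : self.view.bounds.size;
    self.renderer.aspect = fabsf(size.width / size.height);
    [self.renderer update];
}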
I have an image that I rotate to face the user's touchLocation, but when I do, the bounding box of the UIImageView gets larger, and this is ruining some collision detection.
My only plan is to code a new bounding-box system that gets the 4 corner points of the bounding box, rotates them myself, and then write collision-check code against those points.
But before I do that, is there an easier way to do this?
My rotate code:
- (void)ObjectPointAtTouch {
    // Get the angle
    objectAngle = [self findAngleToPoint:Object :touchLocation];
    // Convert to radians, +90 for image alignment
    double radian = (90 + objectAngle) / (180 / M_PI);
    // Transform by the radian angle
    Object.transform = CGAffineTransformMakeRotation(radian);
}
I figured it out. As seen in other tutorials, they also rescale the bounding box to fix this problem; CGAffineTransformScale, I think it is.
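For completeness, here is a minimal sketch of the manual approach described in the question (the helper is hypothetical, not from the original post): transform the view's four corners by its current transform and run the collision checks against those points instead of the enlarged axis-aligned frame. It assumes the view's default anchor point of (0.5, 0.5):
// Hypothetical helper: writes the view's four corners, in superview coordinates,
// into `corners`, taking the view's current transform (e.g. its rotation) into account
static void RotatedCorners(UIView *view, CGPoint corners[4])
{
    CGRect b = view.bounds;
    CGPoint raw[4] = {
        CGPointMake(CGRectGetMinX(b), CGRectGetMinY(b)),
        CGPointMake(CGRectGetMaxX(b), CGRectGetMinY(b)),
        CGPointMake(CGRectGetMaxX(b), CGRectGetMaxY(b)),
        CGPointMake(CGRectGetMinX(b), CGRectGetMaxY(b)),
    };
    for (int i = 0; i < 4; i++) {
        // Offset to the anchor point (the bounds center), apply the transform,
        // then offset back by the view's center in the superview
        CGPoint p = CGPointMake(raw[i].x - CGRectGetMidX(b), raw[i].y - CGRectGetMidY(b));
        p = CGPointApplyAffineTransform(p, view.transform);
        corners[i] = CGPointMake(p.x + view.center.x, p.y + view.center.y);
    }
}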
UIImageViews have a property called contentMode that you can use as
imageView.contentMode = UIViewContentModeScaleAspectFit;
and it will scale your image to fit the view without distorting it, padding with empty space if the aspect ratios differ.
Is there anything similar in Cocos2D? Sorry about the question, but I am new to Cocos2d.
I am creating the sprite like this:
CCTexture2D *textBack = [[CCTexture2D alloc] initWithImage:image];
CCSprite *sprite = [CCSprite spriteWithTexture:textBack];
thanks.
The closest equivalent to UIViewContentModeScaleAspectFit is the .scale property. When a CCNode (or any of its subclasses such as CCSprite etc.) is first created, the scale property is 1. Keep increasing it to scale the sprite up proportionally.
sprite.scale = 2.0f; // Scales the sprite proportionally at a factor of 2
As for fitting it to a specific size, you would have to write a routine (see the sketch after this list):
Pass in desired rect and CCSprite bounding box rect.
Scale the box rect to aspect fit the desired rect.
Return the scaling factor
The result can then be applied to the CCSprite.scale property.
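A minimal sketch of such a routine, assuming the plain cocos2d-iphone API (contentSize on the node, winSize on the director); the helper name is mine:
// Hypothetical helper: compute the scale factor that aspect-fits a sprite into targetSize
static float AspectFitScale(CCSprite *sprite, CGSize targetSize)
{
    CGSize spriteSize = sprite.contentSize;
    // Use the smaller of the two ratios so the whole sprite fits inside the target rect
    return MIN(targetSize.width / spriteSize.width,
               targetSize.height / spriteSize.height);
}

// Usage:
// sprite.scale = AspectFitScale(sprite, [[CCDirector sharedDirector] winSize]);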
You can certainly scale the sprite to do that...
sprite.scale = ?
sprite.scaleX = ?
sprite.scaleY = ?
but I don't believe there is a function to automatically fill the entire screen. If you don't get a definitive reply here I would suggest posting on the Cocos2D forums (http://www.cocos2d-iphone.org/forum/).
Me again. I have a simple question. I have a UIImageView like the one shown below.
(Image: http://img683.imageshack.us/img683/1999/volumen.png)
That UIImageView is supposed to be the knob that controls the volume in my iPhone project. My question is: how do I know the position of the bar on the UIImageView when it is rotated? The volume needs to be 0.5 when the little bar on the circle is vertical.
I have a piece of code (in the touchesMoved method):
float dx = locationT.x - imgVVolume.center.x;
float dy = locationT.y - imgVVolume.center.y;

CGFloat angleDif = 0.0f;
movedRotationAngle = atan2(dy, dx);

if (beganRotationAngle == 0.0) {
    beganRotationAngle = movedRotationAngle;
    initialTransform = imgVVolume.transform;
} else {
    angleDif = beganRotationAngle - movedRotationAngle;
    CGAffineTransform newTrans = CGAffineTransformRotate(initialTransform, -angleDif);
    imgVVolume.transform = newTrans;
}
Help please.
It depends on what input mechanism you want to use to control the rotation.
If the knob is to rotate based on a single finger touch dragging from side to side, then you can create a UIPanGestureRecognizer and attach it to the knob UIImageView. The translationInView: method returns a CGPoint which is the amount of X and Y movement from the touch-down point. You can feed that into a formula like the one you posted to get an angle of rotation. You'll want to keep track of the delta from the last position and also check for stop limits (like 0..360) to prevent over-rotation.
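A rough sketch of that single-finger variant; the handler and ivar names are mine, and it uses locationInView: instead of translationInView: so the angle can be computed directly around the knob's center (wrap-around at the atan2 boundary is not handled here):
// Hypothetical pan handler; knobAngle, startTouchAngle and startKnobAngle are ivars
- (void)handlePan:(UIPanGestureRecognizer *)pan
{
    CGPoint p = [pan locationInView:imgVVolume.superview];
    CGFloat angle = atan2(p.y - imgVVolume.center.y, p.x - imgVVolume.center.x);

    if (pan.state == UIGestureRecognizerStateBegan) {
        startTouchAngle = angle;     // angle of the initial touch
        startKnobAngle = knobAngle;  // knob angle when the touch began
        return;
    }

    // Accumulate the delta and clamp to the knob's stop limits (0..3/2*pi as an example)
    knobAngle = startKnobAngle + (angle - startTouchAngle);
    knobAngle = MAX(0.0, MIN(knobAngle, 1.5 * M_PI));

    imgVVolume.transform = CGAffineTransformMakeRotation(knobAngle);
    volume = knobAngle / (1.5 * M_PI); // map the clamped angle onto a 0..1 volume
}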
OTOH, if you're going to use two finger rotation then you'll want to use a UIRotationGestureRecognizer and look for the rotation value. Just feed that into a CGAffineTransformRotate and set it to the UIImageView transform. That takes care of all of the above for you. Again, you'll want to check for stop limits.
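And a minimal sketch of the two-finger variant (again, ivar names are mine), accumulating the recognizer's rotation, clamping it, and deriving the volume from the clamped angle:
// Hypothetical rotation handler; knobAngle is an ivar holding the accumulated rotation
- (void)handleRotation:(UIRotationGestureRecognizer *)rotation
{
    // Add the incremental rotation and clamp to the stop limits (0..3/2*pi as an example)
    knobAngle = MAX(0.0, MIN(knobAngle + rotation.rotation, 1.5 * M_PI));
    rotation.rotation = 0.0; // reset so the next callback delivers a fresh delta

    imgVVolume.transform = CGAffineTransformMakeRotation(knobAngle);
    volume = knobAngle / (1.5 * M_PI); // map the clamped angle onto a 0..1 volume
}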