Rotating UILabels from any position to upright - iPhone

I have a label that I rotate using
pieceBlack.transform = CGAffineTransformMakeRotation((M_PI * (180) / 180.0));
and that works perfectly, EXCEPT:
I rotate this label during the game to either right side up or upside down. How do I say, "Whatever angle you are at, go back to upright"? I'm thinking maybe something like:
int PreviousAngle = ?;
pieceBlack.transform = CGAffineTransformMakeRotation(degreesToRadian(0-PreviousAngle));
so I guess what I'm asking is how you query the current rotation angle. Or, alternatively, maybe there is a sort of
pieceBlack.transform = CGAffineTransformMakeRotation(RotateToUpright);

From what I remember, the transform is always relative to the upright (original) position, so 0.0f? So you can just do pieceBlack.transform = CGAffineTransformIdentity

What I did was to first position the (in my case) view in the "straight up" orientation. Then I used the CGAffineTransformMakeRotation to create the somewhat off-kilter view. Finally, I applied the identity transform to bring the view back to its straight-up position.
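In Objective-C that sequence might look roughly like this (a minimal sketch, reusing the pieceBlack label from the question):
// start from the untransformed, straight-up state
pieceBlack.transform = CGAffineTransformIdentity;
// tilt it off-kilter (here: upside down)
pieceBlack.transform = CGAffineTransformMakeRotation(M_PI);
// later, whatever angle it ended up at, snap it back to upright
pieceBlack.transform = CGAffineTransformIdentity;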

You don't want to set the transform, you want to modify it:
view.transform = CGAffineTransformRotate(view.transform, angle);
If you need to keep the old one around, save it off before modifying it.
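A minimal sketch of that idea (savedTransform is just an illustrative name, not from the original answer):
// save whatever transform is currently applied
CGAffineTransform savedTransform = view.transform;
// modify it relative to its current state
view.transform = CGAffineTransformRotate(view.transform, angle);
// ...later, restore exactly what was there before
view.transform = savedTransform;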

Related

Moving camera to proper position in Zoom function in Unity

Hi, I have a question that I'm hoping someone can help me work through. I've asked elsewhere to no avail, but it seems like a standard problem, so I'm not sure why I haven't been getting answers.
It's basically setting up a zoom function that mirrors Google Maps zoom: the camera zooms in/out onto where your mouse is. I know this probably gets asked a lot, but I think Unity's new Input System changed things up a bit since the 4-6 year old questions that I've found in my own research.
In any case, I've set up a parent GameObject that holds all the 2D sprites that will be in my scene, and an orthographic camera. I can set the orthographic size through code to change the zoom, but it's moving the camera to the proper place that I am having trouble with.
This was my 1st attempt:
public void Zoom(float direction, Vector2 mousePosition) {
    // zoom calcs
    float rate = 1 + direction * Time.deltaTime;
    float targetOrtho = Mathf.MoveTowards(mainCam.orthographicSize, mainCam.orthographicSize / rate, 0.1f);
    // move calcs
    mousePosition = mainCam.ScreenToWorldPoint(mousePosition);
    Vector2 deltaPosition = previousPosition - mousePosition;
    // move and zoom
    transform.position += new Vector3(deltaPosition.x, deltaPosition.y, 0);
    // zoomLevels are a generic struct that holds the max/min values.
    SetZoomLevel(Mathf.Clamp(targetOrtho, zoomLevels.min, zoomLevels.max));
    previousPosition = mousePosition;
}
This function gets called through my input controller, activated through Unity's Input System events. When the mouse wheel scrolls, the Zoom function is given a normalized value as direction (1 or -1) and the current mousePosition. When it's finished its calculation, the mousePosition is stored in previousPosition.
The code actually works -- except it is extremely jittery. This, of course, happens because there is no Time.deltaTime applied to the camera movement, nor is this in LateUpdate, both of which help to smooth the movements. Except, in the former case, multiplying Time.deltaTime into new Vector3(deltaPosition.x, deltaPosition.y, 0) seems to cause the zoom to occur at the camera's centre rather than the mouse position. When I put the zoom into LateUpdate, it creates a cool but unwanted vibration effect when the camera moves.
So, after doing some thinking and reading, I thought it may be best to calculate the difference between the mouse position and the camera's center point, then multiply it by a scale factor, which is the camera's orthographic size * 2 (maybe...??). Hence my updated code here:
public void Zoom(float direction, Vector2 mousePosition)
{
    // zoom
    float rate = 1 + direction * Time.unscaledDeltaTime * zoomSpeed;
    float orthoTarget = Mathf.MoveTowards(mainCam.orthographicSize, mainCam.orthographicSize * rate, maxZoomDelta);
    SetZoomLevel(Mathf.Clamp(orthoTarget, zoomLevels.min, zoomLevels.max));
    // movement
    if (mainCam.orthographicSize < zoomLevels.max && mainCam.orthographicSize > zoomLevels.min)
    {
        mousePosition = mainCam.ScreenToWorldPoint(mousePosition);
        Vector2 offset = (mousePosition - new Vector2(transform.position.x, transform.position.y)) / (mainCam.orthographicSize * 2);
        // panPositions are the same generic struct holding min/max values
        offset.x = Mathf.Clamp(offset.x, panPositions.min.x, panPositions.max.x);
        offset.y = Mathf.Clamp(offset.y, panPositions.min.y, panPositions.max.y);
        transform.position += new Vector3(offset.x, offset.y, 0) * Time.deltaTime;
    }
}
This seems a little closer to what I'm trying to achieve, but the camera still zooms in near its center point and zooms out on some point... I'm a bit lost as to what I am missing here.
Is anyone able to help guide my thinking about what I need to do to create a smooth zoom in/out on the point where the mouse currently is? Much appreciated & thanks for reading through this.
OK, I figured it out, for anyone who ever comes across the same problem. It is a standard problem that is easily solved once you know the math.
Basically, it's a matter of scaling and translating the camera. You can do one or the other first - it does not matter; the outcome is the same. Imagine your screen looks like this:
The green box is your camera viewport, the arrow is your cursor. When you zoom in, the orthographic size gets smaller and shrinks around its anchor point (usually P1(0,0)). This is the scaling aspect of the problem and the following image explains it well:
So, now we want to move the camera position to the new position:
So how do we do this? It's just a matter of getting the distance from the old camera position (P1(0, 0)) to the new camera position (P2(x,y)). Basically, we only want this:
My solution to find the length of the arrow in the picture above was basically to take the difference between the distance from the cursor position to the old camera position (oldLength) and the distance from the cursor position to the new camera position (newLength).
But how do you find newLength? Well, since we know the length will be scaled according to the size of the camera viewport, newLength will be either oldLength / scaleFactor or oldLength * scaleFactor, depending on whether you want to zoom in or out, respectively. The scale factor can be whatever you want (zoom in/out by 2, 4, 1.4... whatever).
From there, it's just a matter of subtracting newLength from oldLength and adding that difference to the current camera position. The pseudo code is below:
(Note that I renamed 'oldLength' to 'length' and 'newLength' to 'scaledLength')
// make sure you're working in world space
mousePosition = camera.ScreenToWorldPoint(mousePosition);
length = mousePosition - currentCameraPosition;
scaledLength = length / scaleFactor; // to zoom in; otherwise it's length * scaleFactor
deltaLength = length - scaledLength;
// change position: move the camera toward the cursor so the point under it stays put
cameraPosition = currentCameraPosition + deltaLength;
// do zoom
camera.orthographicSize /= scaleFactor; // to zoom in; otherwise orthographicSize *= scaleFactor
Works perfectly for me. Thanks to those who helped me in a discord coding community!

CGAffineTransformScale modified after device rotation

I have a view that I am performing a transform on
originalTransform = self.appContainer.transform;
self.appContainer.transform = CGAffineTransformMakeScale(.8, .8);
If I do not rotate the device I can revert this back to the original by using
self.appContainer.transform = CGAffineTransformScale(originalTransform, 1, 1);
Unfortunately, if I rotate the device, the transformation matrix for the view gets modified by the view rotation and breaks the original scale. If I undo the scale after rotation I am left with the UIView not returning to its original full size.
I'm sure that I need to make calculations between the original transformation matrix and the new one, but I am unsure how to do this. The only thing I have been able to do to make this work is to revert the scale back to 1 before rotation:
- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation
{
    self.appContainer.transform = CGAffineTransformScale(originalTransform, 1, 1);
    return YES;
}
And then rescale it with the newly modified transformation after the rotation finishes:
-(void)didRotateFromInterfaceOrientation:(UIInterfaceOrientation)fromInterfaceOrientation {
    self.appContainer.transform = CGAffineTransformMakeScale(.8, .8);
}
This makes it work as I expect, but instead of going back to full screen, and then back to .8 scale, I'd like it to go from its new transformation to the correct .8 scale.
If you want to revert it to original, try setting the transform to CGAffineTransformIdentity.
self.appContainer.transform = CGAffineTransformIdentity;
I think you have to remove all autoresizing masks using:
[self.appContainer setAutoresizingMask:UIViewAutoresizingNone];

CATransform3D transform

I'm trying to add perspective to a view by using CATransform3D. Currently, this is what I'm getting:
And this is what I wanna get:
I'm having a hard time doing that. I'm completely lost here. Here's my code:
CATransform3D t = CATransform3DIdentity;
t.m11 = 0.8;
t.m21 = 0.1;
t.m31 = -0.1;
t.m41 = 0.1;
[[viewWindow layer] setTransform:t];
Matrix element .m34 is responsible for perspective. It's not discussed much in the documentation, so you'll have to toy with it. This answer talks a little bit about how to use it: https://stackoverflow.com/a/7596326/1228525
To actually see the effects of that matrix you need to do two things:
1. Apply that perspective matrix to the parent view's sublayer transform
2. Rotate the child view (the one on which you want perspective) - otherwise it will remain flat and you won't be able to tell it now has a 3D perspective.
The numbers are arbitrary, make them whatever looks best:
CATransform3D t = CATransform3DIdentity;
t.m34 = .005;
parentView.layer.sublayerTransform = t;
childView.layer.transform = CATransform3DMakeRotation(45.0 * M_PI / 180.0, 1, 0, 0); // the rotation angle is in radians
The perspective will look different depending on where the child is in the parent view. If the child is in the center of the parent it will be like you are looking at the child view in 3D straight on. The further from the center it is, the more it will be like you are viewing from a glancing angle.
This is what I got using the above code and centering the child view: (apparently I'm not allowed to post pictures since I'm new, so you'll have to see the link) http://i.stack.imgur.com/BiYCS.png
It's very hard to tell what you're going for based on those pictures; a bit more explanation might be helpful if my answer isn't what you want. From what I can tell from the picture, the bottom one isn't perspective...
I was able to easily achieve the right CATransform3D using AGGeometryKit.
#import <AGGeometryKit/AGGeometryKit.h>
UIView *view = ...; // create a view
// setting anchorPoint to zero
view.layer.anchorPoint = CGPointZero;
view.layer.transform = CATransform3DMakeTranslation(-view.layer.bounds.size.width * .5, -view.layer.bounds.size.height * .5, 0);
// setting a trapezoid transform
AGKQuad quad = view.layer.quadrilateral;
quad.tl.x -= 10; // shift the top-left x-value by 10 pixels
view.layer.quadrilateral = quad; // the quad is converted to CATransform3D and applied

Moving/Rotating an object using its updated position and tilt?

This is basically a simple issue, which I can't get around ...
So, I have a UIImageView with a certain frame, on which I apply a CAAnimation with rotation and translation, after which it is at new coordinates (x,y) and has been rotated by some angle.
This animation works beautifully. But if I again do a rotation and movement from THAT state, I want the object to use the new frame and new properties from STEP 1 after its rotation and again rotate by the new angle.
As of now, when I try rotation again, it uses its standard state & frame size (during initialization) and performs rotation on it...
And by this I mean: if I have a square with frame (100, 200, 10, 10) and I rotate it by 45 degrees, the shape is now a different square, with a different frame and end points compared to the original square. If I then apply a new rotation of (say) 152 degrees, it needs to rotate the newer square, but it turns out that it uses the same frame as before (x, y, 10, 10).
How can I continue rotating / moving the object with its updated position and state?
Note: (if you need to see the code for animation)
This is the code for my animation, which involves simple rotation and movement ! http://pastebin.com/cR8zrKRq
You need to save the rotation step and update the object's rotation in the animationDidStop: method. So, in your case, you should apply:
-(void)animationDidStop:(CAAnimation *)anim finished:(BOOL)flag
{
    // angle is global
    CATransform3D rotationTransform = CATransform3DMakeRotation((angle % 4) * M_PI_4, 0, 0, 1);
    [object.layer setTransform:rotationTransform];
    object.center = tempFrame; // already there
}
where angle is an integer counter of animations (steps) with values 0, 1, 2, 3. My step is M_PI_4. There is probably a better solution to the problem, but this should do the trick.
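For context, a rough sketch of the kind of rotation animation this delegate method pairs with (the pastebin code isn't reproduced here, so object, angle, and the duration are assumptions rather than the asker's actual values):
CABasicAnimation *rotation = [CABasicAnimation animationWithKeyPath:@"transform.rotation.z"];
rotation.toValue = @((angle % 4) * M_PI_4); // target angle for this step
rotation.duration = 0.3;
rotation.delegate = self; // so animationDidStop:finished: above gets called
// hold the final frame until the model layer is updated in animationDidStop:
rotation.fillMode = kCAFillModeForwards;
rotation.removedOnCompletion = NO;
[object.layer addAnimation:rotation forKey:@"rotate"];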

Two-finger rotation gesture on the iPhone?

I'm working on an iPhone app with a lot of different gesture inputs that you can do. Currently there is single finger select / drag, two finger scroll, and two finger pinch zoom-in / zoom-out. I want to add in two finger rotation (your fingers rotate a point in between them), but I can't figure out how to get it to work right. All the other gestures were linear so they were only a matter of using the dot or cross product, pretty much.
I'm thinking I've got to store the slope between the previous two points of each finger, and if the angle between the vectors is near 90, then there is the possibility of a rotation. If the next finger movement angle is also near 90, and the direction of the vector on one finger changed positively and changed negatively, then you've got a rotation. The problem is, I need a really clean distinction between this gesture and the other ones - and the above isn't far enough removed.
Any suggestions?
EDIT: Here's how I did it in a vector analysis manner (as opposed to the suggestion below about matching pixels). Note that I use my own Vector struct here; you should be able to guess what each function does:
//First, find the vector formed by the first touch's previous and current positions.
struct Vector2f firstChange = getSubtractedVector([theseTouches get:0], [lastTouches get:0]);
//We're going to store whether or not we should scroll.
BOOL scroll = NO;
//If there was only one touch, then we'll scroll no matter what.
if ([theseTouches count] <= 1)
{
    scroll = YES;
}
//Otherwise, we might scroll, scale, or rotate.
else
{
    //In the case of multiple touches, we need to test the slope between the two touches.
    //If they're going in roughly the same direction, we should scroll. If not, zoom.
    struct Vector2f secondChange = getSubtractedVector([theseTouches get:1], [lastTouches get:1]);
    //Get the dot product of the two change vectors.
    float dotChanges = getDotProduct(&firstChange, &secondChange);
    //Get the 2D cross product of the two normalized change vectors.
    struct Vector2f normalFirst = getNormalizedVector(&firstChange);
    struct Vector2f normalSecond = getNormalizedVector(&secondChange);
    float crossChanges = getCrossProduct(&normalFirst, &normalSecond);
    //If the magnitude of the 2D cross product (the sine of the angle between the normalized vectors)
    //is at most sin(30 degrees) and the dot product is positive, the angle between them is 30 degrees or less.
    if (fabsf(crossChanges) <= SCROLL_MAX_CROSS && dotChanges > 0)
    {
        scroll = YES;
    }
    //Otherwise, they're in different directions so we should zoom or rotate.
    else
    {
        //Store the vectors represented by the two sets of touches.
        struct Vector2f previousDifference = getSubtractedVector([lastTouches get:1], [lastTouches get:0]);
        struct Vector2f currentDifference = getSubtractedVector([theseTouches get:1], [theseTouches get:0]);
        //Also find the normals of the two vectors.
        struct Vector2f previousNormal = getNormalizedVector(&previousDifference);
        struct Vector2f currentNormal = getNormalizedVector(&currentDifference);
        //Find the distance between the two previous points and the two current points.
        float previousDistance = getMagnitudeOfVector(&previousDifference);
        float currentDistance = getMagnitudeOfVector(&currentDifference);
        //Find the angles between the two previous points and the two current points.
        float angleBetween = atan2(previousNormal.y, previousNormal.x) - atan2(currentNormal.y, currentNormal.x);
        //If we had a short change in distance and the angle between touches is a big one, rotate.
        if (fabsf(previousDistance - currentDistance) <= ROTATE_MIN_DISTANCE && fabsf(angleBetween) >= ROTATE_MAX_ANGLE)
        {
            if (angleBetween > 0)
            {
                printf("Rotate right.\n");
            }
            else
            {
                printf("Rotate left.\n");
            }
        }
        else
        {
            //Get the dot product of the differences of the two points and the two vectors.
            struct Vector2f differenceChange = getSubtracted(&secondChange, &firstChange);
            float dotDifference = getDot(&previousDifference, &differenceChange);
            if (dotDifference > 0)
            {
                printf("Zoom in.\n");
            }
            else
            {
                printf("Zoom out.\n");
            }
        }
    }
}
if (scroll)
{
    printf("Scroll.\n");
}
You should note that if you're just doing image manipulation or direct rotation / zooming, then the above approach should be fine. However, if you're like me and you're using a gesture to cause something that takes time to load, then it's likely that you'll want to avoid doing the action until that gesture has been activated a few times in a row. The distinction between gestures in my code is still not perfectly clean, so occasionally in a bunch of zooms you'll get a rotation, or vice versa.
I've done that before by finding the previous and current distances between the two fingers, and the angle between the previous and current lines.
Then I picked some empirical thresholds for that distance delta and angle theta, and that has worked out pretty well for me.
If the distance was greater than my threshold, and the angle was less than my threshold, I scaled the image. Otherwise I rotated it.
2 finger scroll seems easy to distinguish.
BTW in case you are actually storing the values, the touches have previous point values already stored.
CGPoint previousPoint1 = [self scalePoint:[touch1 previousLocationInView:nil]];
CGPoint previousPoint2 = [self scalePoint:[touch2 previousLocationInView:nil]];
CGPoint currentPoint1 = [self scalePoint:[touch1 locationInView:nil]];
CGPoint currentPoint2 = [self scalePoint:[touch2 locationInView:nil]];
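To make the threshold idea above concrete, here is a hedged sketch using those four points (the threshold names and values are illustrative, not from the original answer):
// distances between the two fingers, before and after this move
CGFloat previousDistance = hypot(previousPoint2.x - previousPoint1.x, previousPoint2.y - previousPoint1.y);
CGFloat currentDistance = hypot(currentPoint2.x - currentPoint1.x, currentPoint2.y - currentPoint1.y);
// change in the angle of the finger-to-finger line
CGFloat previousAngle = atan2(previousPoint2.y - previousPoint1.y, previousPoint2.x - previousPoint1.x);
CGFloat currentAngle = atan2(currentPoint2.y - currentPoint1.y, currentPoint2.x - currentPoint1.x);
CGFloat angleDelta = currentAngle - previousAngle;
// empirical thresholds; tune to taste
CGFloat distanceThreshold = 10.0;            // points
CGFloat angleThreshold = 5.0 * M_PI / 180.0; // radians
if (fabs(currentDistance - previousDistance) > distanceThreshold && fabs(angleDelta) < angleThreshold) {
    // fingers mostly moved apart or together: scale
} else {
    // the finger-to-finger line mostly turned: rotate
}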
Two fingers, both moving, opposite(ish) directions. What gesture conflicts with this?
Pinch/zoom I guess comes close, but whereas pinch/zoom will start off moving away from a center point (if you trace backwards from each line, your lines will be parallel and close), rotate will initially have parallel lines (tracing backwards) that will be far away from each other and those lines will constantly change slope (while retaining distance).
edit: You know--both of these could be solved with the same algorithm.
Rather than calculating lines, calculate the pixel under each finger. If the fingers move, translate the image so that the two initial pixels are still under the two fingers.
This solves all two-finger actions including scroll.
Two-finger scroll or Zoom might look a little wobbly at times since it will do other operations as well, but this is how the map app seems to work (excluding the rotate which it doesn't have).
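If you go the pixel-matching route, one way to sketch it with Core Graphics is to build the similarity transform (translate, rotate, uniformly scale) that carries the two initial touch points onto the two current ones; all names here are illustrative:
// maps segment (a1,a2) onto segment (b1,b2): scale by the ratio of lengths,
// rotate by the change in angle, and translate so a1 lands on b1
static CGAffineTransform transformMappingTouches(CGPoint a1, CGPoint a2, CGPoint b1, CGPoint b2)
{
    CGFloat scale = hypot(b2.x - b1.x, b2.y - b1.y) / hypot(a2.x - a1.x, a2.y - a1.y);
    CGFloat angle = atan2(b2.y - b1.y, b2.x - b1.x) - atan2(a2.y - a1.y, a2.x - a1.x);
    CGAffineTransform t = CGAffineTransformMakeTranslation(b1.x, b1.y);
    t = CGAffineTransformRotate(t, angle);
    t = CGAffineTransformScale(t, scale, scale);
    t = CGAffineTransformTranslate(t, -a1.x, -a1.y);
    return t; // apply to the content so both initial pixels stay under the fingers
}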