I am creating an iPad app and want to update the user with the current size of a view. I had been doing this by calculating the scale of a CGAffineTransform and multiplying the x scale by the width of the related view and the y scale by its height. I'd like to keep doing this, but I'm no longer doing 2D transformations, and I'm not sure what equation to use to extract the scale information from a CATransform3D.
Can you help?
To get the current scale of the layer, you can just perform a valueForKeyPath: on the layer:
CGFloat currentScale = [[layer valueForKeyPath:@"transform.scale"] floatValue];
Other keys can be found in Apple's Core Animation Programming Guide.
I'm not familiar with the API but CATransform3D looks like a regular 4x4 transformation matrix for doing 3D transformations.
Assuming that it represents nothing more than a combination of scale, rotation and translation, the scale factors can be extracted by calculating the magnitudes of either the rows or columns of the upper left 3x3 depending on whether CATransform3D is row or column major respectively.
For example, if it is row-major, the scale in the x direction is sqrt(m11 * m11 + m12 * m12 + m13 * m13), i.e. the magnitude of the first row. The y and z scales would similarly be the magnitudes of the second and third rows.
From the documentation for CATransform3DMakeTranslation it appears that CATransform3D is indeed row-major.
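If you prefer to read the per-axis values straight from the matrix fields, here's a minimal Swift sketch of the row-magnitude approach described above (scaleComponents(of:) is just an illustrative helper name, and it assumes the transform contains no skew):

import QuartzCore

// Each scale factor is the magnitude of the corresponding row of the
// upper-left 3x3, assuming only scale, rotation, and translation.
func scaleComponents(of t: CATransform3D) -> (x: CGFloat, y: CGFloat, z: CGFloat) {
    let sx = (t.m11 * t.m11 + t.m12 * t.m12 + t.m13 * t.m13).squareRoot()
    let sy = (t.m21 * t.m21 + t.m22 * t.m22 + t.m23 * t.m23).squareRoot()
    let sz = (t.m31 * t.m31 + t.m32 * t.m32 + t.m33 * t.m33).squareRoot()
    return (sx, sy, sz)
}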
Here's an updated answer for Swift 4+:
guard let currentScaleX = view.layer.value(forKeyPath: "transform.scale.x") as? CGFloat else {
    return
}
print("the scale x is \(currentScaleX)")

guard let currentScaleY = view.layer.value(forKeyPath: "transform.scale.y") as? CGFloat else {
    return
}
print("the scale y is \(currentScaleY)")
I am trying to render a 2D triangle using user touches. So, I will let a user touch three points on the screen and those points will be used as vertices of a triangle.
You're already aware that you need to return clip-space coordinates (technically not normalized device coordinates) from your vertex shader. The question is how and where to go from UIKit coordinates to Metal's clip-space coordinates.
Let's start by defining these different spaces. Note that below, I actually am using NDC coordinates for the sake of simplicity, since in this particular case, we aren't introducing perspective by returning vertex positions with w != 1. (Here I'm referring to the w coordinate of the clip-space position; in the following discussion, w always refers to the view width).
We pass the vertices into our vertex shader in whatever space is convenient (this is often called model space). Since we're working in 2D, we don't need the usual series of transformations to world space, then eye space. Essentially, the coordinates of the UIKit view are our model space, world space, and eye space all in one.
We need some kind of orthographic projection matrix to move from this space into clip space. If we strip out the unnecessary parts related to the z axis and assume that our view bounds' origin is (0, 0), we come up with the following transformation, where w and h are the view's width and height:

              |  2/w    0    0 |
[ x  y  1 ] * |   0   -2/h   0 | = [ clipX  clipY  1 ]
              |  -1     1    1 |

Equivalently, clipX = 2x/w - 1 and clipY = -2y/h + 1.
We could pass this matrix into our vertex shader, or we could do the transformation prior to sending the vertices to the GPU. Considering how little data is involved, it really doesn't matter at this point. In fact, using a matrix at all is a little wasteful, since we can just transform each coordinate with a couple of multiplies and an add. Here's how that might look in a Metal vertex function:
float2 inverseViewSize(1.0f / width, 1.0f / height); // passed in a buffer
float clipX = (2.0f * in.position.x * inverseViewSize.x) - 1.0f;
float clipY = (2.0f * -in.position.y * inverseViewSize.y) + 1.0f;
float4 clipPosition(clipX, clipY, 0.0f, 1.0f);
Just to verify that we get the correct results from this transformation, let's plug in the upper-left and lower-right points of our view to ensure they wind up at the extremities of clip space (by linearity, if these points transform correctly, so will all others):

(0, 0) -> (2*0/w - 1, -2*0/h + 1) = (-1, 1), the upper-left corner of clip space
(w, h) -> (2*w/w - 1, -2*h/h + 1) = (1, -1), the lower-right corner of clip space
These points appear correct, so we're done. If you're concerned about the apparent distortion introduced by this transformation, note that it is exactly canceled by the viewport transformation that happens prior to rasterization.
Here is a function that will convert UIKit view-based coordinates to Metal's clip-space coordinates (based on warrenm's answer). It can be added directly to a shader file and called from the vertex shader function.
float2 convert_to_metal_coordinates(float2 point, float2 viewSize) {
    float2 inverseViewSize = 1.0f / viewSize;
    float clipX = (2.0f * point.x * inverseViewSize.x) - 1.0f;
    float clipY = (2.0f * -point.y * inverseViewSize.y) + 1.0f;
    return float2(clipX, clipY);
}
You'll want to pass the viewSize (UIKit's bounds) to Metal somehow, say via a buffer parameter on the vertex function.
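As a sketch of the host side (renderEncoder is assumed to be your MTLRenderCommandEncoder, and buffer index 1 is an arbitrary choice that must match the vertex function's [[buffer(1)]] parameter):

// Hand the view size to the vertex function as a small constant buffer.
var viewSize = SIMD2<Float>(Float(view.bounds.width), Float(view.bounds.height))
renderEncoder.setVertexBytes(&viewSize,
                             length: MemoryLayout<SIMD2<Float>>.stride,
                             index: 1)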
Translated Thompsonmachine's code to Swift, using SIMD values, which is what I need to pass to shaders:
import simd
import CoreGraphics

func convertToMetalCoordinates(point: CGPoint, viewSize: CGSize) -> simd_float2 {
    let inverseViewSize = CGSize(width: 1.0 / viewSize.width, height: 1.0 / viewSize.height)
    let clipX = Float((2.0 * point.x * inverseViewSize.width) - 1.0)
    let clipY = Float((2.0 * -point.y * inverseViewSize.height) + 1.0)
    return simd_float2(clipX, clipY)
}
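A usage sketch, assuming this lives in a view or view controller where metalView is the Metal-backed view receiving the touches:

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first else { return }
    // Convert the UIKit touch location to Metal clip-space coordinates.
    let point = touch.location(in: metalView)
    let clipPosition = convertToMetalCoordinates(point: point, viewSize: metalView.bounds.size)
    print("clip-space position: \(clipPosition)")
}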
Given the following code:
self.transform = CGAffineTransformRotate(self.transform, ?);
Is there a way to set the rotation back to 0 without knowing what the current rotation angle is? I have other transforms that I'd like to maintain, hence why I'm using CGAffineTransformRotate instead of CGAffineTransformMakeRotation.
If it's just scaling and rotation then your options are actually either (i) determine just the scale and make a brand new scaling matrix of that scale; or (ii) as you suggest, determine the rotation and adjust the transform you have by the opposite amount.
You can achieve either by looking at the components of the transform matrix. If you think about how matrix multiplication works, you've got the output x axis being (transform.a, transform.b) and the output y axis being (transform.c, transform.d).
So, for (i):
// use the Pythagorean theorem to get the length of either axis;
// that's the scale, e.g.
CGFloat scale = sqrtf(self.transform.a * self.transform.a +
                      self.transform.b * self.transform.b);
self.transform = CGAffineTransformMakeScale(scale, scale);
For (ii):
// use arctan to get the rotation of the x axis
CGFloat angle = atan2f(self.transform.b, self.transform.a);
self.transform = CGAffineTransformRotate(self.transform, -angle);
You could also invert the transform
CGAffineTransform CGAffineTransformInvert (
CGAffineTransform t
);
e.g.
// undo just the rotation by concatenating the inverse of the rotation matrix:
self.transform = CGAffineTransformConcat(CGAffineTransformInvert(CGAffineTransformMakeRotation(angle)), self.transform);
Suppose the current scale of my UIView is x, and suppose I apply a scale transformation of amount y, i.e.:
view.transform = CGAffineTransformScale(view.transform, y, y);
How do I determine the value of the scale of the UIView after the scale transformation occurs (in terms of x and y)?
The scale transform multiplies the current scale by your scale y.
If the scale was 2.0 (for Retina), it is y * 2.0 afterwards.
So x * y is the answer. But don't forget that the x-axis and y-axis scales can be different.
Using x and y as names for scale factors is confusing; in your code it's better to use s1 and s2, or sx and sy, if the scales differ on the two axes.
Scaling combines by multiplication, translation (movement) by addition, and rotation by matrix multiplication. All three can be combined into an affine transformation (a matrix with one more row than the dimensions of the space); these are combined by matrix multiplication. 2D affine transformations are 3x2 or 3x3 matrices; the extra column just makes them easier to work with.
Edit:
Using clearer names: if the current scale was (currxs, currys) and the scale applied is (xs, ys), the new scale is (currxs * xs, currys * ys). Note that applying a scale will also scale any translation component contained in the affine transformation, which is why the order of application is important.
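A short Swift sketch of that multiplication:

// Scales combine by multiplication: starting at 2.0 on both axes and
// applying a further 1.5 scale gives 3.0 on both axes.
var t = CGAffineTransform(scaleX: 2.0, y: 2.0)
t = t.scaledBy(x: 1.5, y: 1.5)
print(t.a, t.d) // 3.0 3.0 (with no rotation, a and d hold the x and y scales)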
It's quite simple: if you are just using CGAffineTransformScale and not other transformations such as rotation, you can use the view's frame and bounds sizes to calculate the resulting scale values.
float scaleX = view.frame.size.width/view.bounds.size.width;
float scaleY = view.frame.size.height/view.bounds.size.height;
Ok, so I realize I can find the scale value from a layer's CATransform3D like this:
float scale = [[layer valueForKeyPath:@"transform.scale"] floatValue];
But I can't for the life of me figure out how I would find the scale value from a CGAffineTransform. Say for instance I have this CGAffineTransform called "cameraTransform":
UIImagePickerController *imagePicker = [[UIImagePickerController alloc] init];
CGAffineTransform cameraTransform = imagePicker.cameraViewTransform;
Now how do I get the scale value from cameraTransform?
I'll try to give a general answer for all kinds of CGAffineTransforms, even rotated ones.
Assuming your CGAffineTransform contains (optionally)
rotation
translation
scaling
and
NO skew
then there's a general formula that gives you the scale factor:
CGAffineTransform transform = ...;
CGFloat scaleFactor = sqrt(fabs(transform.a * transform.d - transform.b * transform.c));
"Mirroring" or flipping coordinate directions will be ignored; that means (x --> -x; y --> y) will result in scaleFactor == 1 instead of -1. Also note that if the x and y scale factors differ, this formula returns their geometric mean, sqrt(sx * sy).
http://en.wikipedia.org/wiki/Determinant
"A geometric interpretation can be given to the value of the determinant of a square matrix with real entries: the absolute value of the determinant gives the scale factor by which area or volume is multiplied under the associated linear transformation, while its sign indicates whether the transformation preserves orientation. Thus a 2 × 2 matrix with determinant −2, when applied to a region of the plane with finite area, will transform that region into one with twice the area, while reversing its orientation."
The article goes on to give formulas for the determinant of a 3x3 matrix and a 2x2 matrix. CGAffineTransforms are 3x3 matrices, but their right column is always 0 0 1, so the determinant is equal to the determinant of the upper-left 2x2 square of the matrix. You can therefore use the values from the struct and compute the scale yourself.
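A minimal Swift sketch of that computation (scaleFactor(of:) is a hypothetical helper):

import CoreGraphics

// The determinant of the upper-left 2x2 is the area scale factor;
// with no skew, its square root is the linear scale factor.
func scaleFactor(of t: CGAffineTransform) -> CGFloat {
    let determinant = t.a * t.d - t.b * t.c
    return abs(determinant).squareRoot()
}

let t = CGAffineTransform(rotationAngle: .pi / 4).scaledBy(x: 3, y: 3)
print(scaleFactor(of: t)) // 3.0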
The Quartz 2D Programming Guide tells you where to find the scale values. Compare that with the definition of the CGAffineTransform structure.
I have a UIImageView that I applied a:
CGAffineTransformConcat(scaleTransform, rotateTransform);
Now I need to get the actual scale value and rotation degrees. I can get the rotation degrees, but the problem comes when getting the scale value: the scale value I get is not the right one unless I rotate the view back to 0.0.
I think this is happening because the matrix is now multiplied (scale * rotation).
Any ideas on how to get the right scale value?
I solved my problem.
To get the Angle:
float angle = atan2(imageView.transform.b, imageView.transform.a);
The scale transform applied to the imageView was uniform, so to get the Scale Value:
CATransform3D localScaleTransform = [(CALayer *)[imageView.layer presentationLayer] transform];
float scale = sqrt(pow(localScaleTransform.m11, 2) + pow(localScaleTransform.m12, 2));
If you applied different scale values, use:
scaleInX = sqrt(m11^2 + m12^2)
scaleInY = sqrt(m21^2 + m22^2)
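The same formulas read off a CGAffineTransform's struct fields directly; here's a Swift sketch (decompose is a hypothetical helper, and it assumes the transform contains no skew):

import CoreGraphics
import Foundation

// Recover per-axis scale and the rotation angle from a transform
// composed only of scale, rotation, and translation.
func decompose(_ t: CGAffineTransform) -> (scaleX: CGFloat, scaleY: CGFloat, angle: CGFloat) {
    let scaleX = (t.a * t.a + t.b * t.b).squareRoot()
    let scaleY = (t.c * t.c + t.d * t.d).squareRoot()
    let angle = CGFloat(atan2(Double(t.b), Double(t.a)))
    return (scaleX, scaleY, angle)
}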