The values of convenience initializer init(keyPath:) for CABasicAnimation - swift

I'm using CABasicAnimation for layer animations. In the convenience initializer init(keyPath:) I specify which values I want to animate, but I do it mostly intuitively. I mean, I know that it should animate the layer's position.x, for example, so I use that value. But where can I find the complete list of values? I checked the documentation for both the initializer and CABasicAnimation and found just some examples of values.

The resource you are looking for is the Key-Value Coding Extensions page of the Core Animation Programming Guide.
There are additions for properties of the types CGPoint, CGSize, CGRect, and CATransform3D.
CGPoint
For point properties you can use .x and .y. For example:
"position.x" // use a number
CGSize
For size properties you can use .width and .height. For example:
"shadowOffset.height" // use a number
CGRect
For rectangle properties you can use origin and size, as well as the point and size additions on those. For example:
"bounds.origin.x" // use a number
"frame.size.width" // use a number
"frame.origin" // use a point
CATransform3D
Core Animation transform properties have additions for scale (.x, .y, .z), rotation (.x, .y, .z), and translation (.x, .y, .z). For example:
"transform.rotation.z" // use a number
"transform.translation.x" // use a number
You can also just use .scale as a number that scales uniformly on all axes, .rotation as a number for the rotation around the z-axis (the same as .rotation.z), and .translation as a size that translates along the x- and y-axes.
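For example, here is a minimal sketch of animating one of these key paths (the function name and duration are illustrative, assuming you already have a CALayer):
import QuartzCore

func spin(_ layer: CALayer) {
    let animation = CABasicAnimation(keyPath: "transform.rotation.z")
    animation.fromValue = 0               // this key path takes a number
    animation.toValue = CGFloat.pi * 2    // one full turn, in radians
    animation.duration = 1.0
    layer.add(animation, forKey: "spin")
}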

Related

How to convert the position in CGDisplayBounds() to global coordinates (like NSEvent.mouseLocation) in Swift

I'm a newbie in Swift and macOS.
I'm looking for a way to get the exact display coordinate that
NSEvent.mouseLocation
reports. I have found this method in CoreGraphics:
func CGDisplayBounds(_ display: CGDirectDisplayID) -> CGRect
but its coordinate system is different.
I can work around this by converting the y coordinate mathematically, but is there a way to get or convert the position programmatically, so that it returns the same coordinate as NSEvent.mouseLocation?
Thanks for your attention.
As you noted, CoreGraphics has what Apple calls ‘flipped’ geometry, with the origin at the top left and the y coordinates increasing toward the bottom of the screen. This is the geometry used by most computer graphics systems.
AppKit prefers what Apple calls ‘non-flipped’, with the origin at the bottom left and the y coordinates increasing toward the top of the screen. This is the geometry normally used in mathematics.
The origin (0, 0) of the CoreGraphics global geometry is always at the top-left of the ‘main’ display (identified by CGMainDisplayID()). The origin of the AppKit global geometry is always at the bottom-left of the main display. To convert between the two geometries, subtract your y coordinate from the height of the main display.
That is:
import CoreGraphics

extension CGPoint {
    func convertedToAppKit() -> CGPoint {
        return .init(
            x: x,
            y: CGDisplayBounds(CGMainDisplayID()).height - y
        )
    }

    func convertedToCoreGraphics() -> CGPoint {
        return .init(
            x: x,
            y: CGDisplayBounds(CGMainDisplayID()).height - y
        )
    }
}
You may notice that these two functions have the same implementation. You don't really need two functions; you can just use one. It converts in both directions.
Calling CGDisplayBounds(CGMainDisplayID()) might also be inefficient. You might want to cache the value or batch your transformations if you're going to be doing a lot of them. But if you cache the value, you'll want to subscribe to NSApplication.didChangeScreenParametersNotification so you can update the cached value if it needs to change.
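A minimal sketch of that caching approach (the class name is illustrative, not part of any API):
import AppKit

final class MainDisplayHeightCache {
    private(set) var height = CGDisplayBounds(CGMainDisplayID()).height
    private var observer: NSObjectProtocol?

    init() {
        // Refresh the cached height whenever displays are added,
        // removed, or rearranged.
        observer = NotificationCenter.default.addObserver(
            forName: NSApplication.didChangeScreenParametersNotification,
            object: nil,
            queue: .main
        ) { [weak self] _ in
            self?.height = CGDisplayBounds(CGMainDisplayID()).height
        }
    }

    deinit {
        if let observer = observer {
            NotificationCenter.default.removeObserver(observer)
        }
    }
}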

Best way to obtain ARKit or RealityKit camera rotation?

I am writing in Swift and trying to obtain the RealityKit camera's rotation.
I've successfully gotten the position using:
xsettings.xcam = arView.session.currentFrame?.camera.transform.columns.3.x ?? 0
xsettings.ycam = arView.session.currentFrame?.camera.transform.columns.3.y ?? 0
xsettings.zcam = arView.session.currentFrame?.camera.transform.columns.3.z ?? 0
this works excellently, but I haven't found a rotation solution that seems to work as well.
Currently I am doing this:
xsettings.xcamrot = arView.session.currentFrame?.camera.eulerAngles[0] ?? 0
xsettings.ycamrot = arView.session.currentFrame?.camera.eulerAngles[1] ?? 0
xsettings.zcamrot = arView.session.currentFrame?.camera.eulerAngles[2] ?? 0
but it doesn't seem to work correctly; there is a lot of weirdness on the roll (eulerAngles[2]) and some inconsistency overall, at least compared to the positioning, which is excellent.
Just curious if there is a better way to access the camera's rotation?
It's not weird. The orientation of ARKit's or RealityKit's camera is expressed as roll (z), pitch (x), and yaw (y), so you can get the right values with the expressions you mentioned earlier:
arView.session.currentFrame?.camera.eulerAngles.x
arView.session.currentFrame?.camera.eulerAngles.y
arView.session.currentFrame?.camera.eulerAngles.z
However, the rotation order is ZYX.
A few words about subscript and dot notation. Each pair of lines below is identical:
DispatchQueue.main.asyncAfter(deadline: .now() + 4.0) {
    // Pitch
    print(arView.session.currentFrame?.camera.eulerAngles[0])    // -0.6444593
    print(arView.session.currentFrame?.camera.eulerAngles.x)     // -0.6444593
    // Yaw
    print(arView.session.currentFrame?.camera.eulerAngles[1])    // -0.69380975
    print(arView.session.currentFrame?.camera.eulerAngles.y)     // -0.69380975
    // Roll
    print(arView.session.currentFrame?.camera.eulerAngles[2])    // -1.5064332
    print(arView.session.currentFrame?.camera.eulerAngles.z)     // -1.5064332
}
That's because the ARView camera's eulerAngles instance property is of type SIMD3<Float> (a.k.a. simd_float3), which supports both subscripting and dot notation.
On the other hand, the eulerAngles instance property of ARSCNView's pointOfView is of type SCNVector3, which doesn't support subscripting but does support dot notation.
P.S.
You don't need to (and can't) assign a rotation order explicitly, because the rotation order is an implicit, internal mechanism.
You might be better off taking the quaternion for rotation, depending on what you're wanting to do with the output.
You can also use arView.cameraTransform to get the camera's transform. From there, translation can be taken from arView.cameraTransform.translation.{x,y,z}, and quaternion rotation with arView.cameraTransform.rotation. One of the benefits of a quaternion here is that you will not have a problem with rotation order.
If you still wanted to get Euler rotations, you can always use MDLTransform:
MDLTransform(matrix: self.cameraTransform.matrix).rotation.{x,y,z}
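Putting that together, a minimal sketch (assuming an existing ARView; MDLTransform requires importing ModelIO):
import RealityKit
import ModelIO

func logCameraPose(_ arView: ARView) {
    let transform = arView.cameraTransform

    // Translation as a SIMD3<Float>.
    let position = transform.translation
    print("position:", position.x, position.y, position.z)

    // Rotation as a quaternion; no rotation-order ambiguity here.
    print("rotation (quaternion):", transform.rotation)

    // Euler angles via ModelIO, if you still need them.
    let euler = MDLTransform(matrix: transform.matrix).rotation
    print("euler (radians):", euler.x, euler.y, euler.z)
}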

Can 2D and 3D transforms be applied to iOS controls?

Having never actually developed with iOS before, I am investigating the possibilities with standard iOS controls such as text fields, lists, etc., and would like to know which transforms can be applied to them.
Do these standard controls support 2D transforms such as scale, translate and rotate?
Do these standard controls also support 3D transforms such as scale, translate and rotate which include a z axis?
If the answer is yes to either question, what "level" of support exists? For example, with a text field, if I transform it in 3D coordinate space, can I still enter text into it?
Yes, and quite a lot. UITextField, for example, inherits from UIControl, which inherits from UIView. This means you have direct access to the view's 2D transform via its transform property:
[myTextField setTransform:CGAffineTransformMake(a, b, c, d, x, y)];
Or for 3D support, you can access the transform property of the view's layer to apply a CATransform3D:
[myTextField.layer setTransform:CATransform3DRotate(trans, theta, x, y, z)];
CATransform3D is defined as follows, and as @Xman pointed out, you'll need to import the QuartzCore framework using #import <QuartzCore/QuartzCore.h>, as well as link against it in your build phases.
struct CATransform3D
{
    CGFloat m11, m12, m13, m14;
    CGFloat m21, m22, m23, m24;
    CGFloat m31, m32, m33, m34;
    CGFloat m41, m42, m43, m44;
};
typedef struct CATransform3D CATransform3D;
In both of these cases, you can still interact with the text field after a transform has been applied to it.
More information can be found in Apple's documentation.
Also check CATransform3D:
CATransform3D yourTransform = CATransform3DIdentity;
yourTransform.m34 = 1.0 / -500;
// You can rotate the component by any angle around the x, y, or z axis.
// The line below rotates the component 60 degrees around the y-axis.
yourTransform = CATransform3DRotate(yourTransform, DEGREES_TO_RADIANS(60), 0.0f, 1.0f, 0.0f); // #define DEGREES_TO_RADIANS(d) (d * M_PI / 180)
// You can also translate the component along the x, y, or z axis.
// The line below translates the component by 50 on the y-axis.
yourTransform = CATransform3DTranslate(yourTransform, 0, 50, 0);
// Apply the transform to the component.
yourComponent.layer.transform = yourTransform;
Don't forget to import
#import <QuartzCore/QuartzCore.h>
Hope this helps.
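For completeness, the same idea in Swift: a minimal sketch of a 3D y-axis rotation with perspective applied to a text field (the function name is illustrative); as noted above, the field stays interactive afterwards.
import UIKit

func applyPerspectiveRotation(to textField: UITextField) {
    var transform = CATransform3DIdentity
    transform.m34 = 1.0 / -500.0                      // perspective
    transform = CATransform3DRotate(transform,
                                    60 * .pi / 180,   // 60 degrees
                                    0, 1, 0)          // around the y-axis
    textField.layer.transform = transform
}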

Draw image with CGAffineTransform and CGContextDrawImage

I want to draw a UIImage with a CGAffineTransform, but I get the wrong output with CGContextConcatCTM.
I have tried the code below:
CGAffineTransform t = CGAffineTransformMake(1.67822, -1.38952, 1.38952, 1.67822, 278.684, 209.129); // transformation of uiimageview
UIGraphicsBeginImageContext(CGSizeMake(1024, 768));
CGContextRef imageContext = UIGraphicsGetCurrentContext();
CGContextDrawImage(imageContext, dragView.frame, dragView.image.CGImage);
CGContextConcatCTM(imageContext, t);
NSLog(@"\n%@\n%@", NSStringFromCGAffineTransform(t), NSStringFromCGAffineTransform(CGContextGetCTM(imageContext)));
Output :
[1.67822, -1.38952, 1.38952, 1.67822, 278.684, 209.129] // imageview transformation
[1.67822, 1.38952, 1.38952, -1.67822, 278.684, 558.871] // drawn image transformation
CGAffineTransform CGAffineTransformMake(
    CGFloat a,
    CGFloat b,
    CGFloat c,
    CGFloat d,
    CGFloat tx,
    CGFloat ty
);
Parameters b, d, and ty changed. How can I solve this?
There is no problem to solve. Your log output is correct.
Comparing the two matrices, the difference between the two is this:
scale vertically by -1 (which affects two of the first four members)
translate vertically by 768 (which affects the last member: your ty of 209.129 became 768 - 209.129 = 558.871)
That 768 is exactly the height you gave the context, which is the clue to what happened.
A vertical scale and translate is how you would flip a context. This context has gone from lower-left origin to upper-left origin, or vice versa.
That happened before you concatenated your (unexplained, hard-coded) matrix in. Assuming you didn't flip the context yourself, it came that way: UIGraphicsBeginImageContext returns a context that is already flipped to match UIKit's top-left-origin geometry.
Concatenation (as in CGContextConcatCTM) does not replace the old transformation matrix with the new one; it is matrix multiplication. The matrix you have afterward is the product of the matrix you started with and the one you concatenated onto it. The resulting matrix is both flipped and then… whatever your matrix does.
You can see this for yourself by simply getting the CTM before you concatenate your matrix onto it, and logging that. You should see this:
[1, 0, 0, -1, 0, 768]
See also “The Math Behind the Matrices” in the Quartz 2D Programming Guide.
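A quick Swift sketch to verify that initial flipped CTM (the size matches the question; in Objective-C, use CGContextGetCTM instead):
import UIKit

UIGraphicsBeginImageContext(CGSize(width: 1024, height: 768))
if let context = UIGraphicsGetCurrentContext() {
    // Before concatenating anything, UIKit has already flipped the
    // context; expect a = 1, b = 0, c = 0, d = -1, tx = 0, ty = 768.
    print(context.ctm)
}
UIGraphicsEndImageContext()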

Transforming a Stroked CAShapeLayer

I have a CAShapeLayer which contains a CGMutablePath with a stroke drawn around it. In my app, I transform this CAShapeLayer to increase or decrease its size at certain times. I'm noticing that when I transform the CAShapeLayer, the stroke gets transformed as well. Ideally I'd like to keep the lineWidth of the stroke at 3 at all times, even when the CAShapeLayer is transformed.
I tried turning off the stroke before transforming and re-adding it afterwards, but it didn't work:
subLayerShapeLayer.lineWidth = 0;
subLayerShapeLayer.strokeColor = nil;
self.layer.sublayerTransform = CATransform3DScale(self.layer.sublayerTransform, graphicSize.width / self.graphic.size.width, graphicSize.height / self.graphic.size.height, 1);
shapeLayer.strokeColor = [UIColor colorWithRed:0 green:0 blue:0 alpha:1].CGColor;
shapeLayer.lineWidth = 3;
Does anyone know how I might be able to accomplish this? It seems as though it should be possible to redraw the stroke after transforming somehow.
Transform the CGPath itself and not its drawn representation (the CAShapeLayer).
Have a close look at CGPathCreateMutableCopyByTransformingPath - CGPath Reference
CGPathCreateMutableCopyByTransformingPath
Creates a mutable copy of a graphics path transformed by a transformation matrix.
CGMutablePathRef CGPathCreateMutableCopyByTransformingPath(
    CGPathRef path,
    const CGAffineTransform *transform
);
Parameters
path: The path to copy.
transform: A pointer to an affine transformation matrix, or NULL if no transformation is needed. If specified, Quartz applies the transformation to all elements of the new path.
Return Value
A new, mutable copy of the specified path transformed by the transform parameter. You are responsible for releasing this object.
Availability
Available in iOS 5.0 and later.
Declared In
CGPath.h
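In Swift, the same idea uses CGPath's mutableCopy(using:). A minimal sketch, assuming an existing CAShapeLayer with a non-nil path (the function name and scale factor are illustrative):
import UIKit

func scale(_ shapeLayer: CAShapeLayer, by factor: CGFloat) {
    guard let path = shapeLayer.path else { return }
    var transform = CGAffineTransform(scaleX: factor, y: factor)
    if let scaledPath = path.mutableCopy(using: &transform) {
        shapeLayer.path = scaledPath   // the geometry scales...
        shapeLayer.lineWidth = 3       // ...but the stroke width stays at 3
    }
}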