In ARKit, when I perform a hit-test, I get back an instance of ARHitTestResult. One of its properties is worldTransform, which I understand contains a 4x4 transformation matrix (a simd_float4x4) describing the position of the object.
As someone who is very unfamiliar with linear algebra and 3D graphics, how would I edit this matrix to, say, increase its y coordinate by 0.05?
If there is a blog post or something I could look at that would help me wrap my head around this, please let me know. I am not sure what terms I should be googling.
Sorry if my question is full of misunderstandings! As you can probably tell, I am not too familiar with these concepts.
Thank you to anyone who helps.
EDIT: The original question is best addressed by just adding 0.05 to the y component of the node's position. However, the original answer below does address a bit about composing transformation matrices, if that is something you are interested in.
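For instance, a one-liner along these lines (node is a placeholder for whatever SCNNode you placed at the hit-test location):

node.position.y += 0.05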
======================================================================
If you want to apply an operation to a matrix, the most immediately simple way is to make a matrix that does that operation, and then multiply your original matrix by that new matrix.
For a translation, assuming you want to translate by x, y, z, you can do this:
let translation = simd_float4x4(
    float4(1, 0, 0, 0),
    float4(0, 1, 0, 0),
    float4(0, 0, 1, 0),
    float4(x, y, z, 1)
)
Note that this is just an identity matrix (1s down the diagonal) with the last column set to contain the x/y/z values. Important: the float4 values above are columns, not rows, as they might visually appear. You can research homogeneous coordinates further, but for now think of this as simply how a translation is represented.
Then, in simd, just do this:
let newWorldTransform = translation * oldWorldTransform
and you will have the old world transform translated by your x/y/z translation values (in your example, [x, y, z] = [0, 0.05, 0]).
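For example, to raise a hit-test result by 0.05 m and place a node there (a sketch; hitResult and node are placeholder names for your ARHitTestResult and the SCNNode you are positioning):

// translation is the matrix from above, built with [x, y, z] = [0, 0.05, 0]
let newWorldTransform = translation * hitResult.worldTransform
node.simdWorldTransform = newWorldTransform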
However, it may be worth exploring why you want to edit your hit test results. I cannot think of a practical use case for that, so maybe if you explain a bit more about what you are trying to do I could suggest a more intuitive way to do it.
Matrices are the standard way to translate, rotate, scale and shear 3D objects. In ARKit, RealityKit and SceneKit, for consistent linear transformations you use simd_float4x4 matrices:
var myMatrix: simd_float4x4
A 4x4 translation matrix has 16 elements – four columns (each a float4) with indices 0, 1, 2 and 3. A translation matrix stores its offsets in the fourth column (index 3).
SceneKit example
Use the following code to place your model 25 cm above its default position SCNVector3(0,0,0):
let sphereNode = SCNNode()
sphereNode.geometry = SCNSphere(radius: 1.0)
sphereNode.geometry?.firstMaterial?.diffuse.contents = UIColor.red
scene.rootNode.addChildNode(sphereNode)

var translation = matrix_identity_float4x4
translation.columns.3.y = 0.25        // +25 cm along Y
sphereNode.simdWorldTransform = translation
RealityKit example
let model = ModelEntity(mesh: .generateBox(size: 0.3))
let anchor = AnchorEntity()
anchor.addChild(model)
let currentMatrix = anchor.transform.matrix
var positionMatrix = matrix_identity_float4x4   // start from identity; simd_float4x4() is all zeros
positionMatrix.columns.3.y = 0.25
let transform = simd_mul(currentMatrix, positionMatrix)
anchor.move(to: transform, relativeTo: model, duration: 1.0)
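Note that the anchor also has to be added to the scene before anything appears; assuming an ARView named arView, the final step would be:

arView.scene.addAnchor(anchor)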
Related
How to create a line between two points in 3d space with RealityKit?
There are examples of creating lines between two points in SceneKit; however, there are basically none using RealityKit.
To create the line, I've created a rectangle model entity and placed it between my first touched point and the current touched point. From here, all I would need to do is rotate the rectangle to face the current touched point. However, using simd_quatf(from:to:) doesn't work as intended.
rectangleModelEntity.transform.rotation = simd_quatf(from: firstTouchedPoint,
                                                     to: currTouchedPoint)
If I were to touch a point and then drag directly downwards, the rectangle model should be a straight line between the first touched and current touched point, but it stays horizontal with a slight tilt.
To solve this, I tried getting the angle between my initially horizontal line as a vector and a vector from the first touched point to the current touched point:
let startVec = currTouchedPoint - firstTouchedPoint
let endVec = endOfModelEntityPoint - modelEntityCenterPoint
let lengthVec = simd_length(cross(startVec, endVec))
let theta = atan2(lengthVec, dot(startVec, endVec))
This gives me the angle between two vectors in 3D space, which seems correct – when I checked, it gave me 90 degrees for a touch-and-drag directly below the starting point.
The problem is I don't know what the axis of rotation should be. Since this is 3D space, the line doesn't need to lie on a 2D plane; the current touched position can be downwards and in front of the starting touch position.
rectangleModelEntity.transform.rotation = simd_quatf(angle: theta, axis: ???)
Personally, I'm not even too sure the above is the correct approach to creating a line between two points. In theory it's rather basic: create a rectangle with low height/depth to mimic a line, position it at the center of the starting and current touch points, then rotate it so it's oriented correctly.
What should the axis be for the angle between the two vectors above?
Is there a better method of creating a line between two points in 3D space with RealityKit/ARKit?
I have implemented this using a box. Let me know if you have a better way.
let midPosition = SIMD3(x: (position1.x + position2.x) / 2,
                        y: (position1.y + position2.y) / 2,
                        z: (position1.z + position2.z) / 2)

let anchor = AnchorEntity()
anchor.position = midPosition
anchor.look(at: position1, from: midPosition, relativeTo: nil)

let meters = simd_distance(position1, position2)

let lineMaterial = SimpleMaterial(color: .red,
                                  roughness: 1,
                                  isMetallic: false)

let bottomLineMesh = MeshResource.generateBox(width: 0.025,
                                              height: 0.025 / 2.5,
                                              depth: meters)

let bottomLineEntity = ModelEntity(mesh: bottomLineMesh,
                                   materials: [lineMaterial])
bottomLineEntity.position = .init(0, 0.025, 0)
anchor.addChild(bottomLineEntity)
The axis is the cross product of the direction your object is facing at the beginning and the direction it should be facing now.
For example, if it's at position p1 = [x1, y1, z1], initially facing d1 = [0, 0, -1], and you want it to face a point p2 = [x2, y2, z2], the axis would be the cross product of the normalized directions: normalize(d1) ✕ normalize(p2 - p1).
You may have to swap the two operands, or just negate the angle, depending on how it works out.
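Here is a rough sketch of that idea with simd (the point values are placeholders, and note that simd_quatf(from:to:) itself expects normalized direction vectors, not positions – which is likely why the original attempt misbehaved):

import simd
import Foundation

let firstTouchedPoint = SIMD3<Float>(0, 0, 0)        // placeholder: where the drag started
let currTouchedPoint  = SIMD3<Float>(0.1, -0.2, 0)   // placeholder: where the drag is now

let initialDirection = SIMD3<Float>(0, 0, -1)        // the model's default facing
let targetDirection  = simd_normalize(currTouchedPoint - firstTouchedPoint)

// The axis is the cross product of the two directions (degenerate if they are parallel)
let axis  = simd_normalize(simd_cross(initialDirection, targetDirection))
let angle = acos(min(max(simd_dot(initialDirection, targetDirection), -1), 1))
let rotation = simd_quatf(angle: angle, axis: axis)
// rectangleModelEntity.transform.rotation = rotation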
I am working on an Augmented Reality project using ARCore. ARCore's coordinate system changes every time you launch the application, taking the initial position as the origin. I have 5 points in another coordinate system, and I can find 4 of these positions in Unity world space using an ARCore Augmented Image. These points have different values in my other coordinate system, of course. I have to find the position of the 5th point in Unity world space using its position in the other coordinate system.
I have followed this tutorial to achieve this. But since Unity does not support 3x3 matrices, I used the Accord.NET framework. Using the tutorial and Accord matrices, I can calculate a 3x3 rotation matrix and a translation vector.
However, when I tried to apply these to my 5th point using TestObject.transform.Translate(AccordtoUnity(Translation), Space.World) I ran into trouble. When the initial 4 objects and the reference objects have the same orientation, the translation works perfectly. However, when my reference objects are rotated, it does not, which makes sense of course, since I have only applied a translation. My question is: how can I apply rotation and translation to my 5th point? Or is there a way to convert my 3x3 rotation matrix and translation vector to a Unity Matrix4x4, since then I could use Matrix4x4.MultiplyPoint3x4? Or is it possible to convert my 3x3 rotation matrix to a Quaternion, which would let me use Matrix4x4.SetTRS? I am a bit confused about this conversion because Matrix4x4 includes scaling as well, but I am not doing any scaling.
I would be happy if someone could give me a hint or offer a better approach to finding the 5th point. Thanks!
EDIT:
I actually solved the problem based on Daveloper's answer. I constructed a Unity 4x4 matrix like this:
[ R00 R01 R02 T.x ]
[ R10 R11 R12 T.y ]
[ R20 R21 R22 T.z ]
[ 0 0 0 1 ]
I tested this by creating primitive objects in Unity and applying translation and rotation using the matrix above, like this:
TestObject.transform.position = TransformationMatrix.MultiplyPoint3x4(TestObject.transform.position);
TestObject.transform.rotation *= Quaternion.LookRotation(TransformationMatrix.GetColumn(2), TransformationMatrix.GetColumn(1));
To use a 3x3 rotation matrix and a translation vector to set a transform, use:
// rotationMatrixCV = your 3x3 rotation matrix; translation = your translation vector
var rotationMatrix = new Matrix4x4();
for (int i = 0; i < 3; i++)
{
    for (int j = 0; j < 3; j++)
    {
        rotationMatrix[i, j] = rotationMatrixCV[i, j];
    }
}
rotationMatrix[3, 3] = 1f;
var localToWorldMatrix = Matrix4x4.Translate(translation) * rotationMatrix;
Vector3 scale;
scale.x = new Vector4(localToWorldMatrix.m00, localToWorldMatrix.m10, localToWorldMatrix.m20, localToWorldMatrix.m30).magnitude;
scale.y = new Vector4(localToWorldMatrix.m01, localToWorldMatrix.m11, localToWorldMatrix.m21, localToWorldMatrix.m31).magnitude;
scale.z = new Vector4(localToWorldMatrix.m02, localToWorldMatrix.m12, localToWorldMatrix.m22, localToWorldMatrix.m32).magnitude;
transform.localScale = scale;
Vector3 position;
position.x = localToWorldMatrix.m03;
position.y = localToWorldMatrix.m13;
position.z = localToWorldMatrix.m23;
transform.position = position;
Vector3 forward;
forward.x = localToWorldMatrix.m02;
forward.y = localToWorldMatrix.m12;
forward.z = localToWorldMatrix.m22;
Vector3 upwards;
upwards.x = localToWorldMatrix.m01;
upwards.y = localToWorldMatrix.m11;
upwards.z = localToWorldMatrix.m21;
transform.rotation = Quaternion.LookRotation(forward, upwards);
NOTICE:
This is only useful if this rotation and translation define your 5th point's location in the world in the coordinate system that is actively being used...
If your rotation and translation mean anything else, you'll have to do more. Glad to help further if you can define what this rotation and translation mean exactly.
If I understand your question correctly, you want to be able to create a rotation quaternion from a 3x3 matrix.
You can think of a 3x3 rotation matrix as three vectors of length 1 all at 90 degrees to each other. e.g.:
| forward.x  forward.y  forward.z |
| up.x       up.y       up.z      |
| right.x    right.y    right.z   |
A pretty reliable way to do the conversion is to take the forward and up vectors out of your matrix and apply them to Unity's Quaternion.LookRotation method: https://docs.unity3d.com/ScriptReference/Quaternion.LookRotation.html
This will create a quaternion that corresponds to your matrix. Depending on your actual situation, you might need to use the inverse of the quaternion, but essentially this is what you need.
Note you only need the forward and up vectors, because right is always the cross product of those two and adds no information. Also take care that your matrix is a pure rotation matrix (i.e. no scaling or skewing), otherwise you might get unexpected results.
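For what it's worth, Swift's simd can do the equivalent conversion in one line, since simd_quatf initializes directly from a pure 3x3 rotation matrix (the identity columns below are placeholders):

import simd

let rotationMatrix = simd_float3x3(SIMD3<Float>(1, 0, 0),   // column 0
                                   SIMD3<Float>(0, 1, 0),   // column 1
                                   SIMD3<Float>(0, 0, 1))   // column 2
let quaternion = simd_quatf(rotationMatrix)   // valid only for pure rotation matrices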
The image below shows a rotated box that should be moved horizontally on the X and Z axes. Y should stay unaffected to simplify the scenario. The box could also be the SCNNode of the camera, so I guess a projection does not make sense at this point.
So let's say we want to move the box in the direction of the red arrow. How do we achieve this using SceneKit?
The red arrow indicates -Z direction of the box. It also shows us it is not parallel to the camera's projection or to the global axes that are shown as dark grey lines of the grid.
My last approach is the product of a translation matrix and a rotation matrix that results in a new transformation matrix. Do I have to add the current transform to the new transform then?
If yes, where is the SceneKit function for the addition of matrices, analogous to SCNMatrix4Mult for multiplication, or do I have to write it myself using Metal?
If no, what am I missing in the matrix calculations?
I don't want to make use of GLKit.
So my understanding is that you want to move the Box Node along its own X axis (not its parent's X axis). And because the Box Node is rotated, its X axis is not aligned with its parent's, so you have the problem of converting the translation between the two coordinate systems.
The node hierarchy is
parentNode
|
|----boxNode // rotated around Y (vertical) axis
Using Transformation Matrices
To move boxNode along its own X axis
// First let's get the current boxNode transformation matrix
SCNMatrix4 boxTransform = boxNode.transform;
// Let's make a new matrix for translation +2 along X axis
SCNMatrix4 xTranslation = SCNMatrix4MakeTranslation(2, 0, 0);
// Combine the two matrices, THE ORDER MATTERS !
// if you swap the parameters you will move it in parent's coord system
SCNMatrix4 newTransform = SCNMatrix4Mult(xTranslation, boxTransform);
// Apply the newly generated transform
boxNode.transform = newTransform;
Please Note: The order matters when multiplying matrices.
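The same composition can be written with simd types; as a sketch (with simd's column-vector convention, multiplying the translation on the right applies it in the node's own coordinate system):

var xTranslation = matrix_identity_float4x4
xTranslation.columns.3.x = 2   // +2 along the node's own X axis
boxNode.simdTransform = boxNode.simdTransform * xTranslation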
Another option:
Using the SCNNode coordinate conversion functions, which looks more straightforward to me.
// Get the boxNode current position in parent's coord system
SCNVector3 positionInParent = boxNode.position;
// Convert that coordinate to boxNode's own coord system
SCNVector3 positionInSelf = [boxNode convertPosition:positionInParent fromNode:parentNode];
// Translate along own X axis by +2 points
positionInSelf = SCNVector3Make(positionInSelf.x + 2,
                                positionInSelf.y,
                                positionInSelf.z);
// Convert that back to parent's coord system
positionInParent = [parentNode convertPosition: positionInSelf fromNode:boxNode];
// Apply the new position
boxNode.position = positionInParent;
Building on @Sulevus's correct answer, here's an extension to SCNNode that simplifies things by using the convertVector rather than the convertPosition transformation, in Swift.
I've done it as a var returning a unit vector, and supplied an SCNVector3 overload of multiply so you can say things like
let action = SCNAction.move(by: 2 * cameraNode.leftUnitVectorInParent, duration: 1)
public extension SCNNode {
    var leftUnitVectorInParent: SCNVector3 {
        let vectorInSelf = SCNVector3(x: 1, y: 0, z: 0)
        guard let parent = self.parent else { return vectorInSelf }
        // Convert to parent's coord system
        return parent.convertVector(vectorInSelf, from: self)
    }

    var forwardUnitVectorInParent: SCNVector3 {
        let vectorInSelf = SCNVector3(x: 0, y: 0, z: 1)
        guard let parent = self.parent else { return vectorInSelf }
        // Convert to parent's coord system
        return parent.convertVector(vectorInSelf, from: self)
    }
}

// Operator functions must be declared at global scope (or as static members),
// not as instance methods, to be usable as operators.
// (On iOS, SCNVector3 components are Float rather than CGFloat, so convert as needed.)
public func *(lhs: SCNVector3, rhs: CGFloat) -> SCNVector3 {
    return SCNVector3(x: lhs.x * rhs, y: lhs.y * rhs, z: lhs.z * rhs)
}

public func *(lhs: CGFloat, rhs: SCNVector3) -> SCNVector3 {
    return SCNVector3(x: lhs * rhs.x, y: lhs * rhs.y, z: lhs * rhs.z)
}
The far easier way this is usually done:
The usual, normal, and extremely easy way to do this in any game engine or 3D engine is:
You simply have a wrapper node, which holds the node in question.
This is indeed the entire point of transforms: they enable you to abstract out a certain motion.
That's the whole point of 3D engines - the GPU just multiplies out all the transforms on the way down to the object; it's wholly pointless to (A) figure out the math in your head and (B) do it manually (on the CPU, no less).
In Unity it's "game objects", in scene kit it's "nodes" and so on.
In all 3D engines, including scene kit, almost everything has one or more "holders" around it.
To repeat, the reasons for this are (A) it's the entire raison d'être of a game engine, to achieve performance by multiplying out the transforms of every vertex, and (B) sheer convenience and code solidity.
One of a million examples ...
Of course you can trivially do it in code,
cameraHolder.addChildNode(camera)
In the OP's example, it looks like you would use cameraHolder only to rotate the camera, and then, for the motion the OP is asking about, simply move the camera.
It's perfectly normal to have a chain of a number of nodes to get to an object.
This is often used for "effects". Say you have an object, which sometimes has to "vibrate up and down". You can have one node which only does that movement. Note that then, all the animations etc for that movement only have to be on that node. And critically, they can run independently of any other animations or movements. (And indeed you can just use the node elsewhere to jiggle something else.)
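A minimal SceneKit sketch of the holder pattern (all names are illustrative; scene is your SCNScene):

let cameraHolder = SCNNode()    // handles orientation only
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraHolder.addChildNode(cameraNode)
scene.rootNode.addChildNode(cameraHolder)

cameraHolder.eulerAngles.y = .pi / 4   // rotate the holder...
cameraNode.position.z -= 2             // ...and move the camera within the holder's frame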
I have two example objects in Unity structured as follows:
EmptyGameObject1: scale(-1, 1, 1)
- Child1: rotation(-4, 167, 179)
EmptyGameObject2: scale(1, -1, -1)
- Child2: rotation(-1, -10, 0)
Now I want to get the difference between the Euler angles of the children, taking the scale of their parents into account. Vector3.Distance returns quite a high value, but I can see in the scene view that the rotations of the children are very similar.
I know that a negative scale on the parent mirrors the child object – but
what does it mathematically do to the rotation?
How can I calculate this rotation difference in Unity for x, y and z?
Let's say that var rotation1 = Child1.transform.rotation; and var rotation2 = Child2.transform.rotation; (we're working in world space, right?). We want to find a rotation (let's call it difference) from Child1 to Child2 such that difference * rotation1 == rotation2. Because quaternion multiplication is not commutative, this means we can find it by calculating var difference = rotation2 * Quaternion.Inverse(rotation1);. Now that we know the rotation, we can just access its Euler angles property to determine the x, y and z angles: difference.eulerAngles.x and so on.
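The same quaternion algebra, sketched with Swift's simd types for concreteness (the angles are placeholders):

import simd

let rotation1 = simd_quatf(angle: .pi / 4, axis: SIMD3<Float>(0, 1, 0))
let rotation2 = simd_quatf(angle: .pi / 3, axis: SIMD3<Float>(0, 1, 0))

// difference * rotation1 == rotation2, so:
let difference = rotation2 * rotation1.inverse   // a pi/12 (15-degree) turn about Y here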
I am creating an iPad app and want to update the user with the current size of a view. I used to be doing this by calculating the scale of a CGAffineTransform and multiplying the xScale by the width (of the related view) and the yScale by the height. I'd like to keep doing this, but I'm no longer doing 2d transformations, and I'm not sure what equation to use to extract the scale information from CATransform3D.
Can you help?
To get the current scale of the layer you just perform a valueForKeyPath: on the layer:
CGFloat currentScale = [[layer valueForKeyPath:@"transform.scale"] floatValue];
Other keys can be found in Apple's Core Animation Programming Guide.
I'm not familiar with the API but CATransform3D looks like a regular 4x4 transformation matrix for doing 3D transformations.
Assuming that it represents nothing more than a combination of scale, rotation and translation, the scale factors can be extracted by calculating the magnitudes of either the rows or the columns of the upper-left 3x3, depending on whether CATransform3D is row- or column-major respectively.
For example, if it is row-major, the scale in the x-direction is sqrt((m11 * m11) + (m12 * m12) + (m13 * m13)). The y and z scales are similarly the magnitudes of the second and third rows.
From the documentation for CATransform3DMakeTranslation it appears that CATransform3D is indeed row-major.
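A sketch of that row-magnitude calculation in Swift, assuming the transform contains only rotation, scale and translation:

import QuartzCore

func scales(of t: CATransform3D) -> (x: CGFloat, y: CGFloat, z: CGFloat) {
    // The magnitude of each row of the upper-left 3x3 is that axis's scale factor
    return (sqrt(t.m11 * t.m11 + t.m12 * t.m12 + t.m13 * t.m13),
            sqrt(t.m21 * t.m21 + t.m22 * t.m22 + t.m23 * t.m23),
            sqrt(t.m31 * t.m31 + t.m32 * t.m32 + t.m33 * t.m33))
}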
Here's an updated answer for Swift 4+:
guard let currentScaleX = view.layer.value(forKeyPath: "transform.scale.x") as? CGFloat else {
    return
}
print("the scale x is \(currentScaleX)")

guard let currentScaleY = view.layer.value(forKeyPath: "transform.scale.y") as? CGFloat else {
    return
}
print("the scale y is \(currentScaleY)")