I'm trying to get the bone rotations relative to their parents, but I end up with pretty weird angles.
I've tried everything: matrix multiplications, offsets, axis swapping, and no luck.
guard let bodyAnchor = anchor as? ARBodyAnchor else { continue }
let skeleton = bodyAnchor.skeleton
let jointTransforms = skeleton.jointLocalTransforms
for (i, jointTransform) in jointTransforms.enumerated() {
    //RETRIEVE ANGLES HERE
}
In //RETRIEVE ANGLES HERE I've tried different approaches:
let n = SCNNode()
n.transform = SCNMatrix4(jointTransform)
print(n.eulerAngles)
In this try, I set the jointTransform as an SCNNode.transform so I can retrieve the eulerAngles to make them human-readable and try to understand what's happening.
Some joints come out right, but I think it's pure coincidence or luck, because the rest of the bones rotate very strangely.
In another try I get them using jointModelTransforms (model, instead of local) so all transforms are relative to the root bone of the skeleton.
With this approach I do matrix multiplications like this:
LocalMatrix = Inverse(JointModelMatrix) * (ParentJointModelMatrix)
to get the rotations relative to each joint's parent, but it's the same situation: some bones rotate fine, others rotate strangely. Pure coincidence, I bet.
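For reference, the conventional derivation puts the inverse on the parent, not the joint. A minimal sketch, assuming simd and ARKit's jointModelTransforms / parentIndices arrays:

import simd

// modelTransform = parentModel * localTransform, therefore:
// localTransform = inverse(parentModel) * jointModel
// (note the inverse goes on the parent, unlike the formula above)
let jointModel = jointModelTransforms[i]
let parentModel = jointModelTransforms[parentIndex]
let localMatrix = simd_mul(simd_inverse(parentModel), jointModel)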
Why do I want to get the bone rotations?
I'm trying to build a MoCap app with my phone that passes the rotations to Blender, building .BVH files from them, so I can use them in Blender.
This is my own rig:
I've done this before with Kinect, but I've been trying for days to do it on ARKit 3 with no luck :(
Using simd_quatf(from:to:) with the right input should do it. I had trouble with weird angles until I started normalizing the vectors:
guard let bodyAnchor = anchor as? ARBodyAnchor else { continue }
let skeleton = bodyAnchor.skeleton
let jointTransforms = skeleton.jointLocalTransforms

for (i, jointTransform) in jointTransforms.enumerated() {
    // First I filter out the root (hip) joint, because it doesn't have a parent
    let parentIndex = skeleton.definition.parentIndices[i]
    guard parentIndex >= 0 else { continue } // the root joint has a parent index of -1

    // RETRIEVE ANGLES HERE
    let jointVectorFromParent = simd_make_float3(jointTransform.columns.3)

    let referenceVector: SIMD3<Float>
    if skeleton.definition.parentIndices[parentIndex] >= 0 {
        referenceVector = simd_make_float3(jointTransforms[parentIndex].columns.3)
    } else {
        // The parent joint is the hip joint, which has a vector of 0 going
        // to itself. It's impossible to calculate an angle from a vector of
        // length 0, so we use a vector that just points up instead.
        referenceVector = SIMD3<Float>(x: 0, y: 1, z: 0)
    }

    // Normalizing is important because simd_quatf(from:to:) expects unit
    // vectors and gives weird results otherwise
    let jointNormalized = normalize(jointVectorFromParent)
    let referenceNormalized = normalize(referenceVector)

    let orientation = simd_quatf(from: referenceNormalized, to: jointNormalized)
    print("angle of joint \(i) = \(orientation.angle)")
}
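Since the goal is a .BVH file, here's a hedged sketch for turning the resulting simd_quatf into Euler angles (BVH stores per-channel rotations, usually in degrees). The helper name and the roll(X)-pitch(Y)-yaw(Z) convention are my assumptions, not part of the original answer:

import simd
import Foundation

// Hypothetical helper: converts a simd_quatf to Euler angles in radians,
// using the common roll(X)-pitch(Y)-yaw(Z) Tait-Bryan convention.
func eulerAngles(from q: simd_quatf) -> SIMD3<Float> {
    let (x, y, z, w) = (q.imag.x, q.imag.y, q.imag.z, q.real)
    let roll  = atan2f(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    let pitch = asinf(max(-1, min(1, 2 * (w * y - z * x)))) // clamped for safety
    let yaw   = atan2f(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return SIMD3<Float>(roll, pitch, yaw)
}

// Usage inside the loop above, converting to degrees for a BVH channel:
let degrees = eulerAngles(from: orientation) * (180 / Float.pi)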
One important thing to keep in mind though:
ARKit 3 tracks only some joints (AFAIK the named joints in ARSkeleton.JointName). The other joints are extrapolated from those using a standardized skeleton. This means that the angle you get for the elbow, for example, won't be the exact angle the tracked person's elbow is at.
Just a guess… does this do the job?
let skeleton = bodyAnchor.skeleton
let jointTransforms = skeleton.jointLocalTransforms

for (i, jointTransform) in jointTransforms.enumerated() {
    print(Transform(matrix: jointTransform).rotation)
}
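If that prints sensible quaternions, the angle/axis form can be easier to eyeball. Transform here is RealityKit's struct, and its rotation property is a simd_quatf, so a small sketch inside the same loop could be:

// Transform comes from RealityKit; .rotation is a simd_quatf
let q = Transform(matrix: jointTransform).rotation
print("joint \(i): \(q.angle) rad about axis \(q.axis)")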
Related
My Swift ARKit app needs the position and orientation of the face relative to the front-facing camera. If I set ARConfiguration.worldAlignment = .camera, all I need to do is read faceAnchor.transform, which works perfectly; but I need to run with the default worldAlignment = .gravity.

In this mode I can get faceAnchor.transform and camera.transform, which are both supplied in world coordinates. How can I use those transforms to get the face anchor in camera coordinates? I've tried multiplying them together, as well as multiplying one by the other's inverse, in all four order combinations, but none of the results works. I just don't understand matrix operations well enough to succeed here. Can someone shed light on this for me?
I finally figured this out using SceneKit functions!
let currentFaceTransform = currentFaceAnchor!.transform
let currentCameraTransform = frame.camera.transform
let newFaceMatrix = SCNMatrix4.init(currentFaceTransform)
let newCameraMatrix = SCNMatrix4.init(currentCameraTransform)
let cameraNode = SCNNode()
cameraNode.transform = newCameraMatrix
let originNode = SCNNode()
originNode.transform = SCNMatrix4Identity
//Converts a transform from the node’s local coordinate space to that of another node.
let transformInCameraSpace = originNode.convertTransform(newFaceMatrix, to: cameraNode)
let faceTransformFromCamera = simd_float4x4(transformInCameraSpace)
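For reference, the same result should be obtainable with plain simd, since expressing a world-space transform in camera coordinates is just a pre-multiplication by the inverse of the camera's transform. A minimal sketch using the variables above (the name faceFromCamera is mine):

import simd

// camera⁻¹ * face = the face anchor expressed in camera coordinates
let faceFromCamera = simd_mul(simd_inverse(currentCameraTransform),
                              currentFaceTransform)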
Hope this helps some others out there!
I am working on an ARKit playground project and I just can't get an SCNNode to move along the axes of sceneView.pointOfView. When I try constants like 0.04 it adjusts the position properly, but when I provide coordinates relative to the frame of pointOfView I can't position it anywhere but the centre.
Here is the code for that part:
let winglevMain = button(ButtonType: .wing)
let wingLevButton = winglevMain.button1
wingLevButton.name = "wing"
let x = (sceneView.pointOfView?.frame.width)!
let y = x/2
let z = x/5
let total = y+z
wingLevButton.position = SCNVector3(total, 0.12, -0.5)
sceneView.pointOfView?.addChildNode(wingLevButton)
P.S. I used separate constants to store each value because when I tried putting them directly into the arguments for position, I got an error signifying that the expression was too complex for the playground to calculate.
I've been trying to figure this out for a few days now.
Given an ARKit-based app where I track a user's face, how can I get the face's rotation in absolute terms, from its anchor?
I can get the transform of the ARAnchor, which is a simd_matrix4x4.
There's a lot of info on how to get the position out of that matrix (it's the fourth column, columns.3), but nothing on the rotation!
I want to be able to control a 3D object outside of the app, by passing YAW, PITCH and ROLL.
The latest thing I tried actually works somewhat:
let arFrame = session.currentFrame!
guard let faceAnchor = arFrame.anchors[0] as? ARFaceAnchor else { return }
let faceMatrix = SCNMatrix4.init(faceAnchor.transform)
let node = SCNNode()
node.transform = faceMatrix
let rotation = node.worldOrientation
rotation.x, .y, and .z have values I could use, but as I move my phone the values change. For instance, if I turn 180˚ and keep looking at the phone, the values change wildly based on the position of the phone.
I tried changing the world alignment in the ARConfiguration, but that didn't make a difference.
Am I reading the wrong parameters? This should have been a lot easier!
I've figured it out...
Once you have the face anchor, some calculations need to happen with its transform matrix, and the camera's transform.
Like this:
let arFrame = session.currentFrame!
guard let faceAnchor = arFrame.anchors[0] as? ARFaceAnchor else { return }
let projectionMatrix = arFrame.camera.projectionMatrix(for: .portrait, viewportSize: self.sceneView.bounds.size, zNear: 0.001, zFar: 1000)
let viewMatrix = arFrame.camera.viewMatrix(for: .portrait)
let projectionViewMatrix = simd_mul(projectionMatrix, viewMatrix)
let modelMatrix = faceAnchor.transform
let mvpMatrix = simd_mul(projectionViewMatrix, modelMatrix)
// This allows me to just get a .x .y .z rotation from the matrix, without having to do crazy calculations
let newFaceMatrix = SCNMatrix4.init(mvpMatrix)
let faceNode = SCNNode()
faceNode.transform = newFaceMatrix
let rotation = vector_float3(faceNode.worldOrientation.x, faceNode.worldOrientation.y, faceNode.worldOrientation.z)
rotation.x, .y, and .z will return the face's pitch, yaw, and roll (respectively).
I'm adding a small multiplier and inverting two of the axes, so it ends up like this:
yaw = -rotation.y*3
pitch = -rotation.x*3
roll = rotation.z*1.5
Phew!
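A possibly simpler variant of the same idea, sketched under the assumption that what's wanted is the rotation relative to the camera (probeNode is an illustrative name, not from the original answer):

// Express the anchor in camera space first, then read Euler angles off a
// throwaway SCNNode; the values stay stable as the phone moves because the
// rotation is camera-relative.
let relative = simd_mul(simd_inverse(arFrame.camera.transform),
                        faceAnchor.transform)
let probeNode = SCNNode()
probeNode.simdTransform = relative
let (pitch, yaw, roll) = (probeNode.eulerAngles.x,
                          probeNode.eulerAngles.y,
                          probeNode.eulerAngles.z)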
I understand that you are using the front camera and ARFaceTrackingConfiguration, which is not supposed to give you absolute values. I would try configuring a second ARSession for the back camera with ARWorldTrackingConfiguration, which does provide absolute values. The final solution will probably require values from both ARSessions. I haven't tested this hypothesis yet, but it seems to be the only way.
UPDATE: a quote from the ARWorldTrackingConfiguration documentation:
The ARWorldTrackingConfiguration class tracks the device's movement with six degrees of freedom (6DOF): specifically, the three rotation axes (roll, pitch, and yaw), and three translation axes (movement in x, y, and z). This kind of tracking can create immersive AR experiences: A virtual object can appear to stay in the same place relative to the real world, even as the user tilts the device to look above or below the object, or moves the device around to see the object's sides and back.
Apparently, other tracking configurations do not have this ability.
In ARSCNView I can access a property of the camera node called worldFront, which represents the direction the camera is facing. I would like to calculate a similar vector from CoreMotion values without using ARSCNView, just data from CoreMotion, so that I can get a vector that would be equal to worldFront if the camera were facing the same direction. Can someone explain to me how to calculate such a value?
The attitude property could probably help:
func rollCam(motion: CMDeviceMotion) {
    let attitude = motion.attitude
    let roll = Float(attitude.roll - M_PI/2)
    let yaw = Float(attitude.yaw)
    let pitch = Float(attitude.pitch)
    camNode.eulerAngles = SCNVector3Make(roll, -yaw, pitch)
}
With this piece of code, quite a long time ago, I experimented a bit with CoreMotion. I was trying to first detect human walking and then (with the startDeviceMotionUpdates data) move and roll the camera near an "anchored" SCNBox. Later on, ARKit solved my need with the ARAnchor class.
What feature are you after?
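For completeness, a minimal sketch of the setup that would feed a handler like rollCam(motion:), assuming the CMMotionManager is retained (e.g. as a property):

import CoreMotion

let motionManager = CMMotionManager() // must stay alive while updates run
motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
motionManager.startDeviceMotionUpdates(to: .main) { motion, error in
    guard let motion = motion else { return }
    rollCam(motion: motion)
}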
I have found the answer:
override var cameraFrontVector: double3 {
    guard let quaternion = motionService.deviceMotion?.attitude.quaternion else { return .zero }
    let x = 2 * -(quaternion.x * quaternion.z + quaternion.w * quaternion.y)
    let z = 2 * (quaternion.y * quaternion.z - quaternion.w * quaternion.x)
    let y = 2 * (quaternion.x * quaternion.x + quaternion.y * quaternion.y) - 1
    return double3(x: x, y: y, z: z)
}
This gives me values like worldFront in SCNNode. (These expressions appear to be the components of the forward axis rotated by the attitude quaternion, i.e. a column of its rotation matrix, remapped between CoreMotion's and SceneKit's axis conventions.)
The image below shows a rotated box that should be moved horizontally on the X and Z axes. Y should stay unaffected to simplify the scenario. The box could also be the SCNNode of the camera, so I guess a projection does not make sense at this point.
So let's say we want to move the box in the direction of the red arrow. How do we achieve this using SceneKit?
The red arrow indicates -Z direction of the box. It also shows us it is not parallel to the camera's projection or to the global axes that are shown as dark grey lines of the grid.
My last approach is the product of a translation matrix and a rotation matrix, which results in a new transformation matrix. Do I then have to add the current transform to the new transform?
If yes, where is the SceneKit function for adding matrices, like SCNMatrix4Mult for multiplication, or do I have to write it myself using Metal?
If no, what am I missing in the matrix calculations?
I don't want to make use of GLKit.
So my understanding is that you want to move the box node along its own X axis (not its parent's X axis). And because the box node is rotated, its X axis is not aligned with its parent's, so you have the problem of converting the translation between the two coordinate systems.
The node hierarchy is
parentNode
|
|----boxNode // rotated around Y (vertical) axis
Using Transformation Matrices
To move boxNode along its own X axis:
// First let's get the current boxNode transformation matrix
SCNMatrix4 boxTransform = boxNode.transform;
// Let's make a new matrix for translation +2 along X axis
SCNMatrix4 xTranslation = SCNMatrix4MakeTranslation(2, 0, 0);
// Combine the two matrices, THE ORDER MATTERS !
// if you swap the parameters you will move it in parent's coord system
SCNMatrix4 newTransform = SCNMatrix4Mult(xTranslation, boxTransform);
// Apply the newly generated transform
boxNode.transform = newTransform;
Please Note: The order matters when multiplying matrices
Another option:
Using the SCNNode coordinate conversion functions, which looks more straightforward to me:
// Get the boxNode current position in parent's coord system
SCNVector3 positionInParent = boxNode.position;
// Convert that coordinate to boxNode's own coord system
SCNVector3 positionInSelf = [boxNode convertPosition:positionInParent fromNode:parentNode];
// Translate along own X axis by +2 points
positionInSelf = SCNVector3Make(positionInSelf.x + 2,
positionInSelf.y,
positionInSelf.z);
// Convert that back to parent's coord system
positionInParent = [parentNode convertPosition: positionInSelf fromNode:boxNode];
// Apply the new position
boxNode.position = positionInParent;
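As a side note, SceneKit has shipped a convenience for exactly this since iOS 11 / macOS 10.13: SCNNode.localTranslate(by:), which saves the round trip through the parent's coordinate system. In Swift:

// One-line equivalent of the conversion dance above
boxNode.localTranslate(by: SCNVector3(2, 0, 0))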
Building on @Sulevus's correct answer, here's an extension to SCNNode that simplifies things by using convertVector rather than the convertPosition transformation, in Swift.
I've done it as vars returning unit vectors, and supplied SCNVector3 overloads of multiply so you can say things like
let action = SCNAction.move(by: 2 * cameraNode.leftUnitVectorInParent, duration: 1)
public extension SCNNode {
    var leftUnitVectorInParent: SCNVector3 {
        let vectorInSelf = SCNVector3(x: 1, y: 0, z: 0)
        guard let parent = self.parent else { return vectorInSelf }
        // Convert to the parent's coordinate system
        return parent.convertVector(vectorInSelf, from: self)
    }

    var forwardUnitVectorInParent: SCNVector3 {
        let vectorInSelf = SCNVector3(x: 0, y: 0, z: 1)
        guard let parent = self.parent else { return vectorInSelf }
        // Convert to the parent's coordinate system
        return parent.convertVector(vectorInSelf, from: self)
    }
}

// Operator functions can't be instance methods, so these live at global scope.
// The Float(...) conversions keep this compiling on iOS, where SCNVector3
// components are Float rather than CGFloat.
public func *(lhs: SCNVector3, rhs: CGFloat) -> SCNVector3 {
    return SCNVector3(x: lhs.x * Float(rhs), y: lhs.y * Float(rhs), z: lhs.z * Float(rhs))
}

public func *(lhs: CGFloat, rhs: SCNVector3) -> SCNVector3 {
    return SCNVector3(x: Float(lhs) * rhs.x, y: Float(lhs) * rhs.y, z: Float(lhs) * rhs.z)
}
The far easier way this is usually done:
The usual, normal, and extremely easy way to do this in any game engine or 3D engine is:
You simply have a wrapper node, which holds the node in question.
This is indeed the entire point of transforms, they enable you to abstract out a certain motion.
That's the whole point of 3D engines: the GPU just multiplies out all the quaternions on the way down to the object; it's wholly pointless to (A) figure out the math in your head and (B) do it manually (indeed, on the CPU).
In Unity it's "game objects", in scene kit it's "nodes" and so on.
In all 3D engines, including scene kit, almost everything has one or more "holders" around it.
To repeat, the reasons for this are (A) it's the entire raison d'être of a game engine, achieving performance in multiplying out the quaternions of every vertex, and (B) sheer convenience and code solidity.
One of a million examples ...
Of course you can trivially do it in code,
cameraHolder.addChildNode(camera)
In the OP's example, it looks like you would use cameraHolder only to rotate the camera. Then, for the motion the OP is asking about, simply move camera.
It's perfectly normal to have a chain of a number of nodes to get to an object.
This is often used for "effects". Say you have an object which sometimes has to "vibrate up and down". You can have one node which does only that movement. Note that all the animations etc. for that movement then live only on that node. And critically, they can run independently of any other animations or movements. (And indeed you can reuse that node to jiggle something else.)
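A minimal Swift sketch of the wrapper-node idea (all names here are illustrative):

import SceneKit

let scene = SCNScene()
let holderNode = SCNNode() // owns placement and rotation
let jiggleNode = SCNNode() // owns only the "vibrate up and down" effect
let boxNode = SCNNode(geometry: SCNBox(width: 1, height: 1,
                                       length: 1, chamferRadius: 0))

holderNode.addChildNode(jiggleNode)
jiggleNode.addChildNode(boxNode)
scene.rootNode.addChildNode(holderNode)

// Rotate via the holder; the jiggle runs independently on its own node.
holderNode.eulerAngles.y = .pi / 4
let up = SCNAction.moveBy(x: 0, y: 0.1, z: 0, duration: 0.2)
jiggleNode.runAction(.repeatForever(.sequence([up, up.reversed()])))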