SceneKit – Get direction of camera - swift

I need to find out which direction a camera is looking at, e.g. if it is looking towards Z+, Z-, X+, or X-.
I've tried using eulerAngles, but the range for yaw goes 0 -> 90 -> 0 -> -90 -> 0 which means I can only detect if the camera is looking towards Z or X, not if it's looking towards the positive or negative directions of those axes.

You can use the camera node's worldFront property to get a vector with the x, y, and z components of the direction it is facing.
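For example, here is a minimal sketch reading worldFront from the scene view's point of view (the sceneView name is an assumption):
// worldFront is the node's local -Z axis expressed in world space,
// i.e. the direction the camera node is looking.
if let cameraNode = sceneView.pointOfView {
    let front = cameraNode.worldFront // SCNVector3 in world coordinates
    print("Camera front:", front.x, front.y, front.z)
}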
Another way to do it is the approach this project uses:
// Credit to https://github.com/farice/ARShooter
func getUserVector() -> (SCNVector3, SCNVector3) { // (direction, position)
    if let frame = self.sceneView.session.currentFrame {
        let mat = SCNMatrix4(frame.camera.transform) // 4x4 transform matrix describing camera in world space
        let dir = SCNVector3(-1 * mat.m31, -1 * mat.m32, -1 * mat.m33) // orientation of camera in world space
        let pos = SCNVector3(mat.m41, mat.m42, mat.m43) // location of camera in world space
        return (dir, pos)
    }
    return (SCNVector3(0, 0, -1), SCNVector3(0, 0, -0.2))
}
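Either way, once you have a world-space direction vector, a small helper (an assumption, not part of the quoted project) can classify it into the nearest horizontal axis, which is what the question asks for:
// Classify a world-space direction into the dominant horizontal axis
// (X+ / X- / Z+ / Z-), ignoring the Y component.
func dominantAxis(of dir: SCNVector3) -> String {
    if abs(dir.x) > abs(dir.z) {
        return dir.x > 0 ? "X+" : "X-"
    } else {
        return dir.z > 0 ? "Z+" : "Z-"
    }
}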

Related

How to calculate the angle in between two points with regards to camera orientation in 3-D space

I am currently facing the problem that I want to calculate the angle in radians from the camera's position to a target position. However, this calculation needs to take into account the heading of the camera as well.
For example, when the camera is facing away from the object the function should return π. So far the function I have written works most of the time. However, when the user gets close to the X and Z axes, the arrow no longer points to the target; instead it points slightly to the left or right, depending on whether you are in positive or negative X and Z space.
Currently, I'm not sure why my function does not work. The only explanation I have for this behavior is gimbal lock. However, I'm not quite sure how to implement the same function using quaternions.
I also attached some photos to this post so that the issue is a little clearer.
Here is the function I'm using right now:
func getAngle() -> Float {
    guard let pointOfView = self.sceneView.session.currentFrame else { return 0.0 }
    let cameraPosition = pointOfView.camera.transform.columns.3
    let heading = getUserVector()
    let distance = SCNVector3Make(TargetPosition.x - cameraPosition.x, TargetPosition.y - cameraPosition.y - TargetPosition.y, TargetPosition.z - cameraPosition.z)
    let heading_scalar = sqrtf(heading.x * heading.x + heading.z * heading.z)
    let distance_scalar = sqrtf(distance.z * distance.z + distance.z * distance.z)
    let x = ((heading.x * distance.x) + (heading.z * distance.z) / (heading_scalar * distance_scalar))
    let theta = acos(max(min(x, 1), -1))
    if theta < 0.35 {
        return 0
    }
    if (heading.x * (distance.z / distance_scalar) - heading.z * (distance.x / distance_scalar)) > 0 {
        return theta
    } else {
        return -theta
    }
}
func getUserVector() -> (SCNVector3) { // (direction)
    if let frame = self.sceneView.session.currentFrame {
        let mat = SCNMatrix4(frame.camera.transform) // 4x4 transform matrix describing camera in world space
        let dir = SCNVector3(-1 * mat.m31, -1 * mat.m32, -1 * mat.m33) // orientation of camera in world space
        print(mat)
        return dir
    }
    return SCNVector3(0, 0, -1)
}
Consider the following image as an example. The arrow in the top right corner should be pointing straight up to follow the line to the center object, but instead it is pointing slightly to the left. Here I am aligned with the z-axis; the same behavior happens when aligning with the x-axis.
I figured out the answer to my problem. The solution was transforming the object into the perspective of the camera and then simply taking the atan2 to get the angle between the camera and the object. Hope this post will help future readers!
func getAngle() -> Float {
    guard let pointOfView = self.sceneView.session.currentFrame else { return 0.0 }
    let cameraPosition = pointOfView.camera.transform
    let targetPosition = simd_float4x4(targetNode.transform)
    let newTransform = simd_mul(cameraPosition.inverse, targetPosition).columns.3
    let theta = atan2(newTransform.z, newTransform.y)
    return theta + (Float.pi / 2)
}
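As a usage sketch (the arrow overlay is an assumption, not part of the original answer), the returned angle can be applied directly to an on-screen arrow:
// Hypothetical UIImageView overlay pointing toward the target.
// getAngle() returns radians in the camera's frame, so it can drive
// a 2D rotation of the arrow each frame.
func updateArrow() {
    arrowImageView.transform = CGAffineTransform(rotationAngle: CGFloat(getAngle()))
}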

Calculating worldFront like in ARSCNView

In ARSCNView I can access a property of the camera node called worldFront, which represents the camera's orientation. I would like to calculate a similar vector from CoreMotion values without using ARSCNView, just data from CoreMotion, so that I can get a vector that would be equal to worldFront in ARSCNView if the camera was facing the same direction. Can someone explain to me how to calculate such a value?
The attitude property probably could help
func rollCam(motion: CMDeviceMotion) {
    let attitude = motion.attitude
    let roll = Float(attitude.roll - M_PI/2)
    let yaw = Float(attitude.yaw)
    let pitch = Float(attitude.pitch)
    camNode.eulerAngles = SCNVector3Make(roll, -yaw, pitch)
}
With the piece of code above, quite a long time ago, I experimented a bit with CoreMotion. I was trying to first detect human walking and then (with the startDeviceMotionUpdates data) move and roll the camera near an "anchored" SCNBox. Later on, ARKit solved my need with the ARAnchor class.
What feature are you looking for?
I have found the answer:
override var cameraFrontVector: double3 {
    guard let quaternion = motionService.deviceMotion?.attitude.quaternion else { return .zero }
    let x = 2 * -(quaternion.x * quaternion.z + quaternion.w * quaternion.y)
    let z = 2 * (quaternion.y * quaternion.z - quaternion.w * quaternion.x)
    let y = 2 * (quaternion.x * quaternion.x + quaternion.y * quaternion.y) - 1
    return double3(x: x, y: y, z: z)
}
This gives me values like worldFront in SCNNode.
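For reference, the same idea can be expressed by letting the attitude quaternion act on the local -Z axis with simd. This is a sketch under the assumption that your CoreMotion reference frame matches SceneKit's world frame; in practice you may need to remap axes, as the answer above does:
import CoreMotion
import simd

// Rotate SceneKit's "front" axis (0, 0, -1) by the device attitude.
func frontVector(from motion: CMDeviceMotion) -> SIMD3<Double> {
    let q = motion.attitude.quaternion
    let rotation = simd_quatd(ix: q.x, iy: q.y, iz: q.z, r: q.w)
    return simd_act(rotation, SIMD3<Double>(0, 0, -1))
}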

How to move a rotated SCNNode in SceneKit, on its "own" axis?

The image below shows a rotated box that should be moved horizontally on the X and Z axes. Y should stay unaffected to simplify the scenario. The box could also be the SCNNode of the camera, so I guess a projection does not make sense at this point.
So let's say we want to move the box in the direction of the red arrow. How do we achieve this using SceneKit?
The red arrow indicates -Z direction of the box. It also shows us it is not parallel to the camera's projection or to the global axes that are shown as dark grey lines of the grid.
My last approach is the product of a translation matrix and a rotation matrix that results in a new transformation matrix. Do I have to add the current transform to the new transform then?
If yes, where is the SceneKit function for the addition of matrices, like SCNMatrix4Mult for multiplication, or do I have to write it myself using Metal?
If no, what am I missing in the matrix calculations?
I don't want to make use of GLKit.
So my understanding is that you want to move the Box Node along its own X axis (not its parent's X axis). And because the Box Node is rotated, its X axis is not aligned with its parent's, so you have the problem of converting the translation between the two coordinate systems.
The node hierarchy is
parentNode
|
|----boxNode // rotated around Y (vertical) axis
Using Transformation Matrices
To move boxNode along its own X axis
// First let's get the current boxNode transformation matrix
SCNMatrix4 boxTransform = boxNode.transform;
// Let's make a new matrix for translation +2 along X axis
SCNMatrix4 xTranslation = SCNMatrix4MakeTranslation(2, 0, 0);
// Combine the two matrices, THE ORDER MATTERS !
// if you swap the parameters you will move it in parent's coord system
SCNMatrix4 newTransform = SCNMatrix4Mult(xTranslation, boxTransform);
// Apply the newly generated transform
boxNode.transform = newTransform;
Please Note: The order matters when multiplying matrices
Another option:
Using SCNNode's coordinate conversion functions, which looks more straightforward to me
// Get the boxNode current position in parent's coord system
SCNVector3 positionInParent = boxNode.position;
// Convert that coordinate to boxNode's own coord system
SCNVector3 positionInSelf = [boxNode convertPosition:positionInParent fromNode:parentNode];
// Translate along own X axis by +2 points
positionInSelf = SCNVector3Make(positionInSelf.x + 2,
                                positionInSelf.y,
                                positionInSelf.z);
// Convert that back to parent's coord system
positionInParent = [parentNode convertPosition: positionInSelf fromNode:boxNode];
// Apply the new position
boxNode.position = positionInParent;
Building on Sulevus's correct answer, here's an extension to SCNNode that simplifies things by using the convertVector rather than the convertPosition transformation, in Swift.
I've done it as a var returning a unit vector, and supplied an SCNVector3 overload of multiply so you can say things like
let action = SCNAction.move(by: 2 * cameraNode.leftUnitVectorInParent, duration: 1)
public extension SCNNode {
    var leftUnitVectorInParent: SCNVector3 {
        let vectorInSelf = SCNVector3(x: 1, y: 0, z: 0)
        guard let parent = self.parent else { return vectorInSelf }
        // Convert to parent's coord system
        return parent.convertVector(vectorInSelf, from: self)
    }

    var forwardUnitVectorInParent: SCNVector3 {
        let vectorInSelf = SCNVector3(x: 0, y: 0, z: 1)
        guard let parent = self.parent else { return vectorInSelf }
        // Convert to parent's coord system
        return parent.convertVector(vectorInSelf, from: self)
    }

    // Operators declared inside a type extension must be static.
    static func * (lhs: SCNVector3, rhs: CGFloat) -> SCNVector3 {
        return SCNVector3(x: lhs.x * rhs, y: lhs.y * rhs, z: lhs.z * rhs)
    }

    static func * (lhs: CGFloat, rhs: SCNVector3) -> SCNVector3 {
        return SCNVector3(x: lhs * rhs.x, y: lhs * rhs.y, z: lhs * rhs.z)
    }
}
The far easier way this is usually done:
The usual, normal, and extremely easy way to do this in any game engine or 3D engine is:
You simply have a wrapper node, which holds the node in question.
This is indeed the entire point of transforms, they enable you to abstract out a certain motion.
That's the whole point of 3D engines: the GPU just multiplies out all the quaternions on the way down to the object; it's wholly pointless to (A) figure out the math in your head and (B) do it manually (indeed, on the CPU).
In Unity it's "game objects", in SceneKit it's "nodes", and so on.
In all 3D engines, including SceneKit, almost everything has one or more "holders" around it.
To repeat, the reasons for this are (A) it's the entire raison d'être of a game engine, to achieve performance in multiplying out the quaternions of every vertex, and (B) sheer convenience and code solidity.
One of a million examples ...
Of course you can trivially do it in code:
cameraHolder.addChildNode(camera)
In the OP's example, it looks like you would use cameraHolder only to rotate the camera. Then, for the motion the OP is asking about, simply move camera.
It's perfectly normal to have a chain of a number of nodes to get to an object.
This is often used for "effects". Say you have an object which sometimes has to "vibrate up and down". You can have one node which only does that movement. Note that all the animations etc. for that movement then only have to be on that node, and critically, they can run independently of any other animations or movements. (And indeed you can just use the node elsewhere to jiggle something else.)
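A minimal sketch of the wrapper-node idea (the scene and node names are assumptions): the holder owns the rotation, and the child only ever translates along its own local axis:
// Holder owns the rotation; the child only ever translates locally.
let cameraHolder = SCNNode()
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraHolder.addChildNode(cameraNode)
scene.rootNode.addChildNode(cameraHolder)

// Rotate the holder...
cameraHolder.eulerAngles.y = Float.pi / 4
// ...then moving the camera along its local -Z moves it along the
// rotated direction in world space, with no matrix math in your code.
cameraNode.position.z -= 2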

Use PhysicsBody to rotate an SCNNode to match the Thumbstick's radian

I'm working on a top-down space game built using Swift and SceneKit with the following setup:
SCNNode representing a spaceship
Rotation is constrained to the y axis; values range from -M_PI_2 to M_PI + M_PI_2
Movement is constrained to the x and z axes.
Game controller thumbstick input
Values range from -1.0 to 1.0 on the x and y axes.
When the game controller's thumbstick changes position, the spaceship should rotate using the physics body to match the thumbstick's radian.
The target radian of the thumbstick can be calculated with the following:
let targetRadian = M_PI_2 + atan2(-y, -x)
The current radian of the node can be obtained with the following:
let currentRadian = node.presentationNode.rotation.w * node.presentationNode.rotation.y
NSTimeInterval deltaTime provides the time in seconds since the last rotation calculation.
How can the node be rotated using angularVelocity, applyTorque, or another physics method to reach the targetRadian?
The difference between the targetRadian and the currentRadian ranges from 0.0 to -2π depending on the value of currentRadian. This equation will determine the shortest direction to turn, .Clockwise or .CounterClockwise, to reach the targetRadian:
let turnDirection = (radianDifference + (M_PI * 2)) % (M_PI * 2) < M_PI ? RotationDirection.CounterClockwise : RotationDirection.Clockwise
Using applyTorque, there is a possibility of over-rotating past the targetRadian, resulting in a wobbling effect, like a compass needle settling on a point, as the rotation changes direction back and forth to reach the targetRadian. The following, while not a perfect solution, dampens the effect:
let turnDampener = abs(radianDifference) < 1.0 ? abs(radianDifference) : 1.0
The complete solution is thus:
enum RotationDirection: Double {
    case Clockwise = -1.0
    case CounterClockwise = 1.0
}

func rotateNodeTowardDirectionalVector(node: SCNNode, targetDirectionalVector: (x: Double, y: Double), deltaTime: NSTimeInterval) {
    guard abs(targetDirectionalVector.x) > 0.0 || abs(targetDirectionalVector.y) > 0.0 else { return }

    let currentRadian = Double(node.presentationNode.rotation.w * node.presentationNode.rotation.y)
    let targetRadian = M_PI_2 + atan2(-targetDirectionalVector.y, -targetDirectionalVector.x)
    let radianDifference = targetRadian - currentRadian

    let π2 = M_PI * 2
    let turnDirection = (radianDifference + π2) % π2 < M_PI ? RotationDirection.CounterClockwise : RotationDirection.Clockwise

    let absRadianDifference = abs(radianDifference)
    let turnDampener = absRadianDifference < 1.0 ? absRadianDifference : 1.0

    node.physicsBody?.applyTorque(SCNVector4Make(0, CGFloat(turnDirection.rawValue), 0, CGFloat(deltaTime * turnDampener)), impulse: true)
}
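As a usage sketch (the controller wiring and frame timing are assumptions, not part of the answer, and it is written in current Swift, so the older Swift names above may need minor adjustment), the function might be fed from the extended gamepad's left thumbstick and called once per frame:
import GameController
import SceneKit

final class ShipInputController {
    // Latest thumbstick reading, updated by the handler below.
    private var thumbstick: (x: Double, y: Double) = (0, 0)

    func observeThumbstick() {
        guard let gamepad = GCController.controllers().first?.extendedGamepad else { return }
        gamepad.leftThumbstick.valueChangedHandler = { [weak self] _, x, y in
            self?.thumbstick = (x: Double(x), y: Double(y))
        }
    }

    // Call from the renderer delegate with the elapsed time since the last frame,
    // forwarding to rotateNodeTowardDirectionalVector(...) defined above.
    func update(shipNode: SCNNode, deltaTime: TimeInterval) {
        rotateNodeTowardDirectionalVector(node: shipNode,
                                          targetDirectionalVector: thumbstick,
                                          deltaTime: deltaTime)
    }
}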

Three.js camera rotation issue

I want to rotate the camera around the x-axis on the y-z plane while looking at the (0, 0, 0) point. It turns out the lookAt function behaves weirdly: after rotating 180°, the geometry jumps to the other side unexpectedly. Could you please explain why this happens and how to avoid it?
You can see the live demo on jsFiddle: http://jsfiddle.net/ysmood/dryEa/
class Stage
    constructor: ->
        window.requestAnimationFrame =
            window.requestAnimationFrame or
            window.webkitRequestAnimationFrame or
            window.mozRequestAnimationFrame

        @init_scene()
        @make_meshes()

    init_scene: ->
        @scene = new THREE.Scene

        # Renderer
        width = window.innerWidth
        height = window.innerHeight
        @renderer = new THREE.WebGLRenderer({
            canvas: document.querySelector('.scene')
        })
        @renderer.setSize(width, height)

        # Camera
        @camera = new THREE.PerspectiveCamera(
            45,             # fov
            width / height, # aspect
            1,              # near
            1000            # far
        )
        @scene.add(@camera)

    make_meshes: ->
        size = 20
        num = 1
        geo = new THREE.CylinderGeometry(0, size, size)
        material = new THREE.MeshNormalMaterial()
        mesh = new THREE.Mesh(geo, material)
        mesh.rotation.z = Math.PI / 2
        @scene.add(mesh)

    draw: =>
        angle = Date.now() * 0.001
        radius = 100
        @camera.position.set(
            0,
            radius * Math.cos(angle),
            radius * Math.sin(angle)
        )
        @camera.lookAt(new THREE.Vector3())
        @renderer.render(@scene, @camera)
        requestAnimationFrame(@draw)

stage = new Stage
stage.draw()
You are rotating the camera around the X-axis in the Y-Z plane. When the camera passes over the "north" and "south" poles, it flips so as to stay right-side-up. The camera's up-vector is (0, 1, 0) by default.
Set the camera x-position to 100 or so, and its behavior will appear correct to you. Add some axes to your demo for a frame of reference.
This is not a fault of the library. Have a look at the Camera.lookAt() source code.
If you want to set the camera orientation via its quaternion instead, you can do that.
three.js r.59