How far is the far plane in an ARKit session?

I have been using the unproject function on SCNSceneRenderer:
public func unprojectPoint(_ point: SCNVector3) -> SCNVector3
When I want to unproject a screen point I pass in Z = 1.
To check on things I also placed a node in the scene at the unprojected vector position. Things seem to check out.
In the process I have wondered how ARKit actually handles the near and far planes.
Logging the unprojected point on the far plane gives me this, with the camera pointed (as straight as possible) down the -Z world axis:
SCNVector3(x: 121.191811, y: -176.614227, z: -1111.88794)
Given that ARKit's unit is meters, does -1111 mean the far plane is about 1,000 m away?
I am trying to understand how the near and far planes are positioned in an ARKit session. Specifically: is the far plane always at a fixed distance from the camera, or does it change? And does roughly 1,000 meters seem to make sense?

The unprojectPoint function uses the same projection as the camera. If you want to know what the camera projection’s near and far planes are, ask the view for its pointOfView node, ask that node for its camera, and ask the camera for its zNear and zFar settings.
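A minimal sketch of that lookup, assuming an ARSCNView named sceneView (the screen point is a placeholder):

if let camera = sceneView.pointOfView?.camera {
    // these values are managed by the running session; the zFar reported
    // here should explain the roughly 1,000 m figure above
    print("zNear: \(camera.zNear), zFar: \(camera.zFar)")
}

// z = 0 unprojects onto the near plane, z = 1 onto the far plane
let screenX: Float = 200, screenY: Float = 400
let farPoint = sceneView.unprojectPoint(SCNVector3(screenX, screenY, 1))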

Related

What's the difference between ScreenToWorldPoint and ScreenPointToWorldPointInRectangle?

What's the difference between ScreenToWorldPoint and ScreenPointToWorldPointInRectangle? And when should we use which one?
Scenario:
I'm using the UI system to create a card game similar to Hearthstone. I want to transform mouse drag positions to world positions. RectTransformUtility.ScreenPointToWorldPointInRectangle(UIObjectBeingDragged.transform.parent as RectTransform, Input.mousePosition, Camera.main, out resultV3) works fine. But I also tried Camera.main.ScreenToWorldPoint(Input.mousePosition), and it gives a different and "wrong" result.
ScreenToWorldPoint
Gives you a world position (the return value) that is along a ray shot through the near plane of the camera (the Camera whose method is being called) at some given point (the x and y components of the position parameter) and a given distance from that near plane (the z component of the position parameter).
You should use this when you:
have a specific distance from the near plane of the camera you are interested in and
don't need to know if it hit inside some rectangle or not
You could think of this as a shortcut for Ray.GetPoint that uses the x and y of position and various info of the Camera to make the Ray, and the z component of position is the distance parameter.
ScreenPointToWorldPointInRectangle
Also gives you a world position (worldPoint) along a ray shot through the near plane of a camera (cam) at a given point (screenPoint). Only this time instead of giving you the point a given distance along the ray, it gives you the intersection point between that ray and a given rectangle (rect) if it exists, and tells you if such an intersection exists or not (the return value).
You should use this when you:
have a specific rectangle whose intersection with a camera ray you are interested in,
don't know the distance from the camera (or its near plane) to the intersection point, and
want to know whether the ray hits the rectangle at all.
You could think of this as a shortcut for Plane.Raycast, which uses cam and screenPoint to make the Ray and rect to make the Plane, and which also tells you whether the intersection falls outside the boundaries of the rect.
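Since the rest of this page is Swift, here is a hedged Swift analogue of the same two ideas (not the Unity API itself): picking the point a fixed distance along a ray, versus intersecting the ray with a plane. All names here are placeholders.

import simd

let rayOrigin = simd_float3(0, 0, 0)            // e.g. the camera / near-plane point
let rayDir    = simd_normalize(simd_float3(0, 0, -1))

// ScreenToWorldPoint-style: the point a given distance along the ray
let distance: Float = 5
let pointAtDistance = rayOrigin + rayDir * distance

// ScreenPointToWorldPointInRectangle-style: intersect the ray with the rectangle's plane
let planePoint  = simd_float3(0, 0, -10)        // any point on the plane
let planeNormal = simd_float3(0, 0, 1)
let denom = simd_dot(planeNormal, rayDir)
if abs(denom) > 1e-6 {
    let t = simd_dot(planeNormal, planePoint - rayOrigin) / denom
    if t >= 0 {
        let hit = rayOrigin + rayDir * t        // then test whether hit lies inside the rect
    }
}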

How to smoothly move a node in an ARKit scene view based on device motion?

Swift beginner here, struggling with moving a scene node in ARKit in response to device motion.
What I want to achieve is: First detect the floor plane, then place a sphere on the floor. From that point onwards depending on the movement of the device, I want to move the sphere along its x and z axis to move it around the floor of the room. (The sphere once created needs to be in the center of the device screen and locked to that view)
So far I can detect the floor and place a node without a problem. I can use device motion to obtain the device attitude (pitch, roll, and yaw), but how do I translate these values into meaningful x, y, and z positions with which to update my node?
Are there any formulas or methods used to calculate such information, or is this the wrong approach? I would appreciate a link to some info or an explanation of how to go about this. I am also unsure how to ensure the node stays at the center of the device screen.
As far as I understand, you want the following workflow:
Step 1. You create a sphere on a plane (already done)
Step 2. Move the sphere with respect to the camera's horizontal plane (i.e., along its x and z axes, to move it around the floor of the room as the device moves)
Assuming that Step 1 is done, here is what you can do:
Get the positions of the camera and the sphere
Do this first within the function that is invoked after the sphere is created (be it tapGestureRecognizer(), touchesBegan(), etc.).
Read the sphere's position from its SCNNode position property; for the camera's position and/or orientation, read sceneView.session.currentFrame's .camera.transform, which contains all the necessary parameters about the camera's current pose.
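A small sketch of that first step, assuming an ARSCNView named sceneView and a sphere node named sphereNode:

let spherePosition = sphereNode.simdPosition
if let frame = sceneView.session.currentFrame {
    let transform = frame.camera.transform              // simd_float4x4
    let cameraPosition = simd_float3(transform.columns.3.x,
                                     transform.columns.3.y,
                                     transform.columns.3.z)
    let offset = spherePosition - cameraPosition        // vector from camera to sphere
}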
Move the sphere as the camera moves
Having the sphere's position in the scene and the camera's transformation matrix, you can find the distance relation between them. Here you can find a good explanation of exactly how to do it.
Once you have those pieces, implement the appropriate logic within renderer(_:didUpdate:for:) to keep the ball continuously locked to the camera position.
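As a hedged sketch of what that per-frame logic might look like, here is a variant using SCNSceneRendererDelegate's renderer(_:updateAtTime:), which fires every frame; sphereNode, floorY (the detected plane's height), and distance (how far in front of the camera the ball should sit) are assumed properties:

func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    guard let frame = (renderer as? ARSCNView)?.session.currentFrame else { return }
    let cam = frame.camera.transform

    // the camera looks down its local -z axis
    let forward = -simd_float3(cam.columns.2.x, cam.columns.2.y, cam.columns.2.z)
    let cameraPosition = simd_float3(cam.columns.3.x, cam.columns.3.y, cam.columns.3.z)

    // point `distance` meters in front of the camera, clamped to the floor height
    var target = cameraPosition + forward * distance
    target.y = floorY
    sphereNode.simdPosition = target
}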
If you are interested in the math behind it, you can start by reading more about transformation matrices, which are a big part of image processing and many other areas.
Hope this helps!

ARKit: How to tell if user's face is parallel to camera

In my Swift / ARKit / SceneKit project, I need to tell if the user's face in front-facing camera is parallel to the camera.
I was able to detect horizontal parallelism by comparing the distances of the left and right eyes from the camera (using faceAnchor.leftEyeTransform and the worldPosition property).
But I am stuck on vertical parallelism. Any ideas how to achieve that?
Assuming you are using ARFaceTrackingConfiguration in your app, you can actually retrieve the transforms of both the ARFaceAnchor and the camera to determine their orientations. You can get a simd_float4x4 matrix of the head's orientation in world space by using ARFaceAnchor.transform property. Similarly, you can get the transform of the SCNCamera or ARCamera of your scene.
To compare the camera's and face's orientations relative to each other in a SceneKit app (though there are similar functions on the ARKit side of things), I get the world orientation of the node attached to each of them; call them faceNode, attached to the ARFaceAnchor, and cameraNode, representing ARSCNView.pointOfView. To find the angle between the camera and your face, you could do something like this:
let faceOrientation: simd_quatf = faceNode.simdWorldOrientation
let cameraOrientation: simd_quatf = cameraNode.simdWorldOrientation
let deltaOrientation: simd_quatf = faceOrientation.inverse * cameraOrientation
By looking at deltaOrientation.angle and deltaOrientation.axis you can determine the relative angle on each axis between the face and the camera. If you compute deltaOrientation.axis * deltaOrientation.angle, you get a simd_float3 vector giving you a sense of the pitch, yaw, and roll (in radians) of the head relative to the camera.
There are a number of ways you can do this using the face anchor and camera transforms, but this simd quaternion method works quite well for me. Hope this helps!
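For example, a hedged sketch of turning that delta into a "parallel enough" check (the 10° threshold is arbitrary):

let relative = deltaOrientation.axis * deltaOrientation.angle   // pitch/yaw/roll, in radians
let threshold: Float = 10 * .pi / 180
// parallel when the head is neither tilted up/down (x) nor turned left/right (y) beyond the threshold
let isParallel = abs(relative.x) < threshold && abs(relative.y) < threshold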

Orient SCNNode in an ARKit scene using real-world bearing

I have a simple SCNNode that I want to place at a real-world position; the node corresponds to a landmark with known coordinates.
The ARKit configuration has the worldAlignment property set to .gravityAndHeading so the x and z axes should be oriented with the world already.
After creating the node, I set its position 100 m away from the origin:
node.position = SCNVector3(0, 0, -100)
Then I would like to orient the node with the correct bearing (computed from the user's and the landmark's coordinates), so I am trying to rotate the node about the y-axis (yaw):
node.eulerAngles = SCNVector3Make(0, bearingRadians, 0)
However, the node still points north, no matter what value I use for bearingRadians.
Do I need to do an extra transformation?
With eulerAngles you just rotate the node in its own coordinate system.
What you actually need is to perform a full transform of the node relative to the camera position: a translation along the -z axis followed by a (negative) rotation about the y-axis according to your bearing.
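A minimal sketch of that combined transform, using the node and bearingRadians from the question (assuming bearingRadians is a Float measured clockwise from north, and that .gravityAndHeading makes -z north):

let rotation = simd_float4x4(simd_quatf(angle: -bearingRadians, axis: simd_float3(0, 1, 0)))
var translation = matrix_identity_float4x4
translation.columns.3.z = -100

// rotation * translation: push the node 100 m down -z (north), then rotate that offset to the bearing
node.simdTransform = rotation * translation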
Check out this ARKit WindRose repo, which aligns and projects the cardinal directions in the real world: https://github.com/vasile/ARKit-CompassRose

Scene Kit: How to make SCNNode only rotate horizontally in Swift 2.0

I managed to create a tree, and the program is supposed to allow users to rotate this tree only horizontally.
After looking at the docs, I used the following code:
treeNode.physicsBody?.angularVelocityFactor = SCNVector3Make(0, 0, 1.0)
But this didn't do anything; I was still able to rotate the tree in every direction.
What is the correct way to limit node rotation in only horizontal direction?
Are you really trying to restrict the node's rotation? Or do you want to restrict the camera's rotation?
If the former, you'll have to provide much more detail on your body's physics and structure. An approach using SCNPhysicsHingeJoint seems like it would work.
let joint = SCNPhysicsHingeJoint(body: treeNode.physicsBody!,          // the physics body to constrain
                                 axis: SCNVector3Make(0, 0, 1.0),      // the single axis it may rotate about
                                 anchor: SCNVector3Make(xpos, ypos, zpos))
If you're just trying to control the camera, though, you should turn off allowsCameraControl for the SCNView. That's only useful for quick and dirty testing. Then you can implement the technique described here (Rotate SCNCamera node looking at an object around an imaginary sphere) and modified here (SCNCamera limit arcball rotation).
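For completeness, a minimal sketch of the joint route, assuming treeNode already has a dynamic physics body and sceneView is your SCNView; the joint must be added to the scene's physics world to take effect. Note that angularVelocityFactor and joints only constrain physics-driven rotation; they have no effect on allowsCameraControl, which rotates the camera rather than the node:

// allow angular velocity only about the chosen axis
treeNode.physicsBody?.angularVelocityFactor = SCNVector3Make(0, 0, 1.0)

let joint = SCNPhysicsHingeJoint(body: treeNode.physicsBody!,
                                 axis: SCNVector3Make(0, 0, 1.0),
                                 anchor: SCNVector3Zero)
sceneView.scene?.physicsWorld.addBehavior(joint)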