Orient SCNNode in an ARKit scene using real-world bearing - swift

I have a simple SCNNode that I want to place at a real-world position; the node corresponds to a landmark with known coordinates.
The ARKit configuration has its worldAlignment property set to .gravityAndHeading, so the x and z axes should already be aligned with the world.
After creating the node, I set its position 100 m away from the origin:
node.position = SCNVector3(0, 0, -100)
Then I would like to project the node with the correct bearing (computed from the user's and the landmark's coordinates). I am trying to rotate the node around the y axis (yaw):
node.eulerAngles = SCNVector3Make(0, bearingRadians, 0)
However, the node still points north, no matter what value I use for bearingRadians.
Do I need to do an extra transformation?

With eulerAngles you only rotate the node in its own coordinate system.
What you actually need is a full transform of the node relative to the camera position: a translation along the -z axis followed by a (negative) rotation around the y axis according to your bearing.
Check out the ARKit-CompassRose repo, which aligns and projects the cardinal directions in the real world: https://github.com/vasile/ARKit-CompassRose
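A minimal sketch of that transform, assuming bearingRadians is measured clockwise from north and the session uses .gravityAndHeading (so -z points true north); the helper name is made up for illustration:

```swift
import SceneKit

// Hypothetical helper: place `node` `distance` meters from the origin
// at a compass bearing (radians, clockwise from north).
func place(_ node: SCNNode, bearingRadians: Float, distance: Float) {
    let translation = SCNMatrix4MakeTranslation(0, 0, -distance)
    let rotation = SCNMatrix4MakeRotation(-bearingRadians, 0, 1, 0)
    // Translate along -z first, then rotate the result around the y axis.
    // SCNMatrix4Mult applies its first argument first.
    node.transform = SCNMatrix4Mult(translation, rotation)
}
```

Setting position and eulerAngles separately rotates the node in place; composing the matrices instead swings the node around the origin to the bearing, which is what you want here.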

Related

How to drag SCNNode along specific axis after rotation?

I am currently working on a Swift ARKit project.
I am trying to figure out how I can drag a node along a specific axis even after some rotations. For example, I want to move a node along its y axis after a rotation, but its axis directions stay the same, so even if I change the y position it still moves along the world y axis. SCNNode.localUp is static and returns SCNVector3(0, 1, 0), and as far as I can see there is no function for a node's local up. If I remember correctly, in Unity it was enough to increase the local axis value to drag after rotating.
Node object before rotation
Before applying any rotation, dragging the object is just a matter of increasing or decreasing the value on the specific axis.
Node object after rotation
After the rotation the green y axis rotates too, but when I increase or decrease the local y value, the object still moves along the world y axis.
Sorry for my bad English. Thanks for your help.
Out of curiosity, how are you currently applying the rotation?
A straightforward way to achieve this without needing to dig into quaternion math would be to wrap your node in question inside a parent node, and apply those transformations separately. You can apply the rotation to the parent node, and then the drag motion along the axis to the child node.
If introducing this layer would be problematic outside of this operation, you can add/rotate/translate/remove as a single atomic operation, using node.convertPosition(_:to:) to interchange between local and world coordinates after you've applied all the transformations.
let parent = SCNNode()
rootNode.addChildNode(parent)
// Move the node's world position onto the parent, then reparent the node.
parent.simdPosition = node.simdPosition
node.simdPosition = .zero
parent.addChildNode(node)
// Rotate the parent; the child's local axes rotate with it.
parent.simdRotation = /../
// Shift the child along its (now rotated) local y axis.
node.simdPosition = simd_float3(0, localYAxisShift, 0)
// Bake the result back into root-node coordinates and discard the parent.
node.simdPosition = rootNode.simdConvertPosition(node.simdPosition, from: parent)
rootNode.addChildNode(node)
parent.removeFromParentNode()
I didn't test the above code, but the general approach should work. Compound motion like you describe is a bit more complex to do directly on the node itself; under the hood, SceneKit does all of that for you when you use this approach.
Edit
Here's a version that does the matrix transform directly rather than relying on the built-in accommodations.
let currentTransform = node.transform
let yShift = SCNMatrix4MakeTranslation(0, localYAxisShift, 0)
node.transform = SCNMatrix4Mult(yShift, currentTransform)
This should shift your object along the 'local' y axis. Note that matrix multiplication is non-commutative, i.e. the order of parameters in the SCNMatrix4Mult call is important (try reversing them to illustrate).
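The same local-space shift can also be written against simdTransform; note that the simd matrices use the column-vector convention, so the multiplication order is reversed relative to SCNMatrix4Mult (a sketch, with localYAxisShift assumed defined):

```swift
import SceneKit
import simd

// Shift the node along its local y axis using simd types.
var yShift = matrix_identity_float4x4
yShift.columns.3.y = localYAxisShift
// Post-multiplying applies the shift in the node's local space.
node.simdTransform = simd_mul(node.simdTransform, yShift)
```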

Size of 3D models in AR

I am trying to place large 3D models (SCNNode) in an ARSCNView using ARKit.
The approximate size of the model is 99 m × 184 m × 43 m.
I have been through the following links :
is-there-any-size-limitations-on-3d-files-loaded-using-arkit
load-large-3d-object-scn-file-in-arscnview-aspect-fit-in-to-the-screen-arkit-sw
As per the upvoted answer by alex papa in the second link above, the model gets placed in the scene, but it seems to hang in the air above the ground: the 3D object floats instead of sitting on the horizontal plane detected/tapped via hit test.
The x and z positions are right, but y is some meters above the horizontal plane.
I need the scale to be 1.0. Without scaling the 3D model down, is it possible to place and visualise it correctly?
Any help or leads would be appreciated. Please provide valuable inputs!
The scale of ARKit, SceneKit and RealityKit is in meters, so your model's size is 99 m × 184 m × 43 m. The solution is simple – take one hundredth of the nominal scale:
let scaleFactor: Float = 0.01
node.scale = SCNVector3(scaleFactor, scaleFactor, scaleFactor)
And here you can read about positioning the pivot point.
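As a sketch of that pivot approach (hitPosition is an assumed name for your hit-test result): shifting the pivot to the bottom-center of the bounding box makes the model sit on the plane instead of floating above it.

```swift
import SceneKit

// Move the pivot to the bottom-center of the model's bounding box,
// so `position` refers to the point that should touch the ground.
let (minBounds, maxBounds) = node.boundingBox
node.pivot = SCNMatrix4MakeTranslation(
    (minBounds.x + maxBounds.x) / 2,
    minBounds.y,
    (minBounds.z + maxBounds.z) / 2
)
// Now placing the node at the hit-test point rests it on the plane.
node.position = hitPosition
```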

What's the difference between ScreenToWorldPoint and ScreenPointToWorldPointInRectangle?

What's the difference between ScreenToWorldPoint and ScreenPointToWorldPointInRectangle? And when should we use which one?
Scenario:
I'm using the UI system to create a card game similar to Hearthstone. I want to transform my mouse-drag positions to world positions. RectTransformUtility.ScreenPointToWorldPointInRectangle(UIObjectBeingDragged.transform.parent as RectTransform, Input.mousePosition, Camera.main, out resultV3) works fine. But I also tried Camera.main.ScreenToWorldPoint(Input.mousePosition), and it gives a different and "wrong" result.
ScreenToWorldPoint
Gives you a world position (the return value) that is along a ray shot through the near plane of the camera (the Camera whose method is being called) at some given point (the x and y components of the position parameter) and a given distance from that near plane (the z component of the position parameter).
You should use this when you:
have a specific distance from the near plane of the camera you are interested in and
don't need to know if it hit inside some rectangle or not
You could think of this as a shortcut for Ray.GetPoint that uses the x and y of position (plus various info from the Camera) to make the Ray, and uses the z component of position as the distance parameter.
ScreenPointToWorldPointInRectangle
Also gives you a world position (worldPoint) along a ray shot through the near plane of a camera (cam) at a given point (screenPoint). Only this time instead of giving you the point a given distance along the ray, it gives you the intersection point between that ray and a given rectangle (rect) if it exists, and tells you if such an intersection exists or not (the return value).
You should use this when you:
have a specific rectangle whose intersection with the camera ray you are interested in,
don't know the distance between the camera (or its near plane) and the intersection point, and
want to know whether the ray hits that rectangle or not.
You could think of this as a shortcut for Plane.Raycast which uses cam and screenPoint to make the Ray, and rect to make the Plane, and also gives some more information of if it would intersect outside the boundaries of the rect.
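Language aside, the geometric difference can be sketched in a few lines (Swift simd is used here purely for illustration; the names are placeholders, not Unity API):

```swift
import simd

// Stand-ins for the camera ray through the screen point.
let origin = simd_float3(0, 0, 0)
let direction = simd_normalize(simd_float3(0, 0, 1))

// ScreenToWorldPoint: a fixed distance along the ray (the z you pass in).
let distance: Float = 5
let pointAtDistance = origin + direction * distance   // (0, 0, 5)

// ScreenPointToWorldPointInRectangle: intersect the ray with the rect's plane.
let planePoint = simd_float3(0, 0, 3)    // a point on the rectangle's plane
let planeNormal = simd_float3(0, 0, -1)  // the plane's facing direction
let denom = simd_dot(direction, planeNormal)
if denom != 0 {
    let t = simd_dot(planePoint - origin, planeNormal) / denom
    let intersection = origin + direction * t   // (0, 0, 3)
}
```

The first gives you "the point z meters in"; the second gives you "where the ray meets the rectangle", which depends on the rectangle's pose, not on any distance you supply.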

Clipping node to camera for only one axis

I would like to have a node in the scene that is somehow clipped to the camera node for the y axis only, so that when the camera moves, the node stays at the same y but moves in x and z with the rest of the scene. Is there some special way to do something like this, or is the only option to move the node every time the camera moves?
You can use SCNTransformConstraint to have a block called every frame that takes the node's transform and returns a new transform satisfying your criteria.
There are conversion utilities such as convertPosition:toNode: that let you express the node's position in the camera's coordinate system, and then convert back to the coordinate system of the node's parent after you've modified the y coordinate. Just remember to use the presentation nodes if there are animations, actions or physics in your scene.
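A sketch of that constraint, with cameraNode and fixedY as assumed names; every frame it pins the node's y in the camera's coordinate space while leaving x and z alone:

```swift
import SceneKit

// Per-frame constraint: rewrite only the y coordinate, measured in the
// camera's coordinate system, then convert back to the parent's space.
let constraint = SCNTransformConstraint(inWorldSpace: false) { node, transform in
    guard let parent = node.parent else { return transform }
    var position = parent.presentation.convertPosition(node.presentation.position,
                                                       to: cameraNode.presentation)
    position.y = fixedY
    position = parent.presentation.convertPosition(position,
                                                   from: cameraNode.presentation)
    var result = transform
    result.m41 = position.x
    result.m42 = position.y
    result.m43 = position.z
    return result
}
node.constraints = [constraint]
```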

How far is the far plane in an ARKit session?

So,
I have been using the unproject function on the SCNSceneRenderer:
public func unprojectPoint(_ point: SCNVector3) -> SCNVector3
When I want to unproject a screen point I pass in Z = 1.
To check on things I also placed a node in the scene at the unprojected vector position. Things seem to check out.
In the process I have wondered how ARKit really handles the near and far planes.
When I log the unprojected point on the far plane, pointing the camera as straight as possible down the -z axis in world coordinates, I get this:
SCNVector3(x: 121.191811, y: -176.614227, z: -1111.88794)
Given that ARKit's unit is meters, does -1111 mean that the far plane is about 1 km away?
I am trying to understand how the near and far planes are positioned in an ARKit session. Specifically, is the far plane at a fixed distance from the camera, or does it change? And does roughly 1,000 meters seem to make sense?
The unprojectPoint function uses the same projection as the camera. If you want to know what the camera projection’s near and far planes are, ask the view for its pointOfView node, ask that node for its camera, and ask the camera for its zNear and zFar settings.
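In code (sceneView assumed to be your ARSCNView):

```swift
import ARKit

// Read the clipping planes off the camera node driving the AR view.
if let camera = sceneView.pointOfView?.camera {
    print("zNear: \(camera.zNear), zFar: \(camera.zFar)")
}
```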