How to set ARSCNView to non-mirroring? - swift

When ARSCNView is configured with ARFaceTrackingConfiguration, how can it be set to non-mirroring?

The selfie camera's matrix is mirrored correctly, by design.
ARFaceTrackingConfiguration uses the selfie camera, which is oriented 180 degrees away from the rear camera. Such an orientation places the user's face in the positive Z direction, with the user's right along the negative X axis. Thus, when combining a scene driven by ARWorldTrackingConfiguration with ARFaceTrackingConfiguration, we get a correct 3D environment.
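If you still want the on-screen preview to read non-mirrored, one purely cosmetic option (not part of this answer's point, and it does not change ARKit's coordinate space) is to flip the view horizontally. A minimal sketch, assuming `sceneView` is your ARSCNView and this runs somewhere like viewDidLoad:

import ARKit

// Flip the ARSCNView horizontally so the preview reads "non-mirrored".
// The face anchor's transform and ARKit's world space are unaffected.
sceneView.transform = CGAffineTransform(scaleX: -1, y: 1)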

Related

Get Angle from a rotation

I have a cube in Unity. I'm moving and rotating the cube and I want to track the rotation around a specific axis.
The application is a HoloLens 2 application and I'm grabbing an object with my hand. I then want to rotate the object in my hand around one of its local axes. I need that rotation around that axis, as a float, so I can rotate another object around its own local axis by the same amount.
Any idea how to achieve this?

Automatically calculating new position of camera after we increase our chessboard size but want it still to stay in shot

Say my camera is rotated around the X axis 60 degrees and looking down on a 9x9 block chess board. As we adjust board size, I want to zoom out the camera. Say for arguments sake the camera's position is (4,20,-7) and like this the whole board is visible and taking up the full screen.
If I adjust my board size to say 11x11 blocks I will now need to zoom out the camera. Say I want to maintain the same 60 degree angle and want the board to fill as much of the screen as it did before. What should the camera's new position be and how do you calculate it?
The X part is easy since you simply give the camera the same X position as the middle of the board. I'm not sure how to calculate the new Y and Z positions, though.
Any advice appreciated. Thanks.
Edit: and if I wanted to change the angle of the camera as well as zoom out, is that possible to calculate? This is less important since I'll probably stick with the same angle, but I'm interested to know the maths behind it anyway.
The Transform.Translate() method moves the transform relative to its own rotation, so you don't have to worry about the direction your camera is looking at; just
yourCamera.transform.Translate(Vector3.forward * moveAmount);
will move your camera forward, which means zooming in. If you want to zoom out, just make the value negative.
When I didn't know this, I used Mathf.Sin() and Mathf.Cos() to calculate the y and z world coordinates by hand, which sucks.
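For the numeric side of the question, the key observation is that with a fixed camera angle and field of view, the camera-to-board-centre distance scales linearly with the board's width. Below is a minimal Swift/simd sketch of that idea (the question is Unity, so treat it as pseudocode; the function and parameter names are illustrative and not from the original answer):

import simd

// Keep the same viewing angle and scale the camera's offset from the board
// centre by the ratio of the new board size to the old one.
func newCameraPosition(oldPosition: SIMD3<Float>,
                       oldBoardCenter: SIMD3<Float>,
                       newBoardCenter: SIMD3<Float>,
                       oldBoardSize: Float,                       // e.g. 9 blocks
                       newBoardSize: Float) -> SIMD3<Float> {     // e.g. 11 blocks
    let offset = oldPosition - oldBoardCenter   // includes the 60-degree tilt
    let scale = newBoardSize / oldBoardSize     // similar triangles: distance grows linearly
    return newBoardCenter + offset * scale
}

With the numbers from the question, the offset from the board centre to (4, 20, -7) is simply multiplied by 11/9, which moves the camera up and back along its existing line of sight while keeping the same 60 degree angle.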

How to smoothly move a node in an ARKit scene view based on device motion?

I'm a Swift beginner struggling with moving a scene node in ARKit in response to device motion.
What I want to achieve is: First detect the floor plane, then place a sphere on the floor. From that point onwards depending on the movement of the device, I want to move the sphere along its x and z axis to move it around the floor of the room. (The sphere once created needs to be in the center of the device screen and locked to that view)
So far I can detect the floor and place a node, no problem. I can use device motion to obtain the device attitude (pitch, roll and yaw), but how do I translate these values into meaningful x, y, z positions that I can update my node with?
Are there any formulas or methods that are used to calculate such information, or is this the wrong approach? I would appreciate a link to some info or an explanation of how to go about this. Also, I am unsure how to ensure the node would always be at the center of the device screen.
So, as far as I understand, you want the following workflow:
Step 1. You create a sphere on a plane (which is already done)
Step 2. Move the sphere with respect to the camera's horizontal plane (i.e. along its x and z axes, to move it around the floor of the room depending on the movement of the device)
Assuming that Step 1 is done, here is what you can do:
Get the position of the camera and the sphere
This should first be done within the function that is invoked after sphere creation (be it a tap gesture recognizer's action, touchesBegan(), etc.).
You can get the sphere's location from the position property of its SCNNode, and the camera's position and/or orientation from sceneView.session.currentFrame's .camera.transform, which contains all the necessary parameters about the current position of the camera.
Move the sphere as the camera moves
Having the sphere's position in the scene and the transformation matrix of the camera, you can find the distance relation between them; there are good explanations available of exactly how to do it.
After you have those pieces, you should implement the appropriate logic within renderer(_:didUpdate:for:) to keep the ball continuously locked with respect to the camera position, as sketched below.
If you are interested in the math behind it, you can start by reading more about transformation matrices, which are a big part of image processing and many other areas.
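As a rough illustration of the two steps above (not from the original answer): `sceneView` and `sphereNode` are assumed to already exist from Step 1, `ViewController` is a placeholder name, and the sketch uses SCNSceneRendererDelegate's per-frame renderer(_:updateAtTime:) callback rather than the anchor-update callback mentioned above.

import ARKit
import SceneKit

extension ViewController: SCNSceneRendererDelegate {

    // Called every frame; one place to keep the sphere locked relative to the camera.
    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        guard let frame = sceneView.session.currentFrame,
              let sphereNode = sphereNode else { return }

        // "Get the position of the camera and the sphere"
        let cameraTransform = frame.camera.transform                    // simd_float4x4
        let cameraPosition = SIMD3<Float>(cameraTransform.columns.3.x,
                                          cameraTransform.columns.3.y,
                                          cameraTransform.columns.3.z)
        // The camera looks down its negative z axis (third column of the transform).
        let cameraForward = -SIMD3<Float>(cameraTransform.columns.2.x,
                                          cameraTransform.columns.2.y,
                                          cameraTransform.columns.2.z)

        // "Move the sphere as the camera moves": keep it a fixed distance ahead of
        // the camera on the x/z plane, leaving its height on the detected floor.
        let distanceAhead: Float = 1.0   // metres; arbitrary choice
        let target = cameraPosition + cameraForward * distanceAhead
        sphereNode.simdPosition = SIMD3<Float>(target.x,
                                               sphereNode.simdPosition.y,
                                               target.z)
    }
}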
Hope that this will help!

ARKit: How to tell if user's face is parallel to camera

In my Swift / ARKit / SceneKit project, I need to tell if the user's face in the front-facing camera is parallel to the camera.
I was able to detect horizontal parallelism by comparing the distances of the left and right eyes (using faceAnchor.leftEyeTransform and the worldPosition property) from the camera.
But I am stuck on vertical parallelism. Any ideas how to achieve that?
Assuming you are using ARFaceTrackingConfiguration in your app, you can actually retrieve the transforms of both the ARFaceAnchor and the camera to determine their orientations. You can get a simd_float4x4 matrix of the head's orientation in world space by using ARFaceAnchor.transform property. Similarly, you can get the transform of the SCNCamera or ARCamera of your scene.
To compare the camera's and face's orientations relative to each other in a SceneKit app (though there are similar functions on the ARKit side of things), I get the world orientation of the node attached to each of them; let's call them faceNode, attached to the ARFaceAnchor, and cameraNode, representing the ARSCNView.pointOfView. To find the angle between the camera and your face, for example, you could do something like this:
let faceOrientation: simd_quatf = faceNode.simdWorldOrientation
let cameraOrientation: simd_quatf = cameraNode.simdWorldOrientation
let deltaOrientation: simd_quatf = faceOrientation.inverse * cameraOrientation
By looking at deltaOrientation.angle and deltaOrientation.axis you can determine the relative angles on each axis between the face and the camera. If you do something like deltaOrientation.axis * deltaOrientation.angle, you have a simd_float3 vector giving you a sense of the pitch, yaw and roll (in radians) of the head relative to the camera.
There are a number of ways you can do this using the face anchor and camera transforms, but this simd quaternion method works quite well for me. Hope this helps!
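As a small follow-on (not part of the original answer), the axis-angle product mentioned above can be unpacked like this:

// Rough pitch/yaw/roll-style breakdown (in radians) of the head's rotation
// relative to the camera, following the axis * angle idea described above.
let relativeRotation: simd_float3 = deltaOrientation.axis * deltaOrientation.angle
let pitch = relativeRotation.x   // nodding up/down relative to the camera
let yaw   = relativeRotation.y   // turning left/right relative to the camera
let roll  = relativeRotation.z   // tilting the head relative to the camera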

Counting rotations on the iPhone 4

I'm using Core Motion and would like to count rotations, so if I place the iPhone on the table and rotate it clockwise through 360 degrees, I would get 1 rotation.
What should I use from Core Motion: yaw, roll, pitch, gravity, rotationMatrix, or something else?
Please help me.
Cheers.
You'd want to watch yaw: that's rotation around a notional line coming straight upwards, out of the screen. It wraps around from +180 to -180 degrees (CMAttitude actually reports it in radians, from +π to -π), but for your purposes, if you're rotating clockwise with the screen facing upward, you can just watch for the number to suddenly get higher instead of lower; that jump tells you the value has gone past -180 and reappeared somewhere just below +180. Alternatively, look for any absolute change between readings greater than, say, 180, if you want to be able to detect rotations clockwise or anticlockwise.
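Here is a minimal sketch of counting full turns from yaw. The class name, update interval and unwrap logic are my own additions rather than part of the answer, and note that CMAttitude reports yaw in radians:

import CoreMotion

// Counts full rotations of the device while it lies flat on a table.
final class RotationCounter {
    private let motionManager = CMMotionManager()
    private var previousYaw: Double?
    private var accumulated: Double = 0          // total signed rotation, in radians
    private(set) var fullRotations = 0

    func start() {
        guard motionManager.isDeviceMotionAvailable else { return }
        motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
        motionManager.startDeviceMotionUpdates(to: .main) { [weak self] motion, _ in
            guard let self = self, let yaw = motion?.attitude.yaw else { return }  // radians, -π...π
            defer { self.previousYaw = yaw }
            guard let previous = self.previousYaw else { return }

            // Difference between consecutive samples; unwrap the ±π discontinuity
            // so a clockwise turn keeps accumulating smoothly.
            var delta = yaw - previous
            if delta > .pi { delta -= 2 * .pi }
            if delta < -.pi { delta += 2 * .pi }

            self.accumulated += delta
            self.fullRotations = Int(abs(self.accumulated) / (2 * .pi))
        }
    }

    func stop() {
        motionManager.stopDeviceMotionUpdates()
    }
}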