I am writing in Swift and trying to obtain the RealityKit camera's rotation.
I've successfully gotten the position using:
xsettings.xcam = arView.session.currentFrame?.camera.transform.columns.3.x ?? 0
xsettings.ycam = arView.session.currentFrame?.camera.transform.columns.3.y ?? 0
xsettings.zcam = arView.session.currentFrame?.camera.transform.columns.3.z ?? 0
This works excellently, but I haven't found a rotation solution that works as well.
Currently I am doing this:
xsettings.xcamrot = arView.session.currentFrame?.camera.eulerAngles[0] ?? 0;
xsettings.ycamrot = arView.session.currentFrame?.camera.eulerAngles[1] ?? 0;
xsettings.zcamrot = arView.session.currentFrame?.camera.eulerAngles[2] ?? 0;
but it doesn't seem to work correctly: there is a lot of weirdness on the roll (eulerAngles[2]) and some inconsistency overall, at least compared to the positioning, which is excellent.
Just curious if there is a better way to access the camera's rotation?
It's not weird. The orientation of ARKit's or RealityKit's camera is expressed as roll (z), pitch (x), and yaw (y), so you can easily get the right values with the expressions you mentioned earlier:
arView.session.currentFrame?.camera.eulerAngles.x
arView.session.currentFrame?.camera.eulerAngles.y
arView.session.currentFrame?.camera.eulerAngles.z
However, the order of rotation is ZYX.
A few words about subscript and dot notation: each pair of lines below is identical:
DispatchQueue.main.asyncAfter(deadline: .now() + 4.0) {
// Pitch
print(arView.session.currentFrame?.camera.eulerAngles[0]) // -0.6444593
print(arView.session.currentFrame?.camera.eulerAngles.x) // -0.6444593
// Yaw
print(arView.session.currentFrame?.camera.eulerAngles[1]) // -0.69380975
print(arView.session.currentFrame?.camera.eulerAngles.y) // -0.69380975
// Roll
print(arView.session.currentFrame?.camera.eulerAngles[2]) // -1.5064332
print(arView.session.currentFrame?.camera.eulerAngles.z) // -1.5064332
}
That's because the eulerAngles instance property of ARView's camera (an ARCamera) is of type SIMD3<Float> (a.k.a. simd_float3), which supports both subscripting and dot notation.
On the other hand, the eulerAngles property of ARSCNView's pointOfView is of type SCNVector3, which doesn't support subscripting but does support dot notation.
P.S.
You don't need to (and can't) assign a rotation order explicitly, because it's an implicit internal rotation mechanism.
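As a side note, since all three components come from the same SIMD3<Float>, you can unwrap the frame once instead of falling back to 0 three separate times. A minimal sketch, assuming the arView and xsettings names from the question:

if let camera = arView.session.currentFrame?.camera {
    let angles = camera.eulerAngles      // SIMD3<Float>: pitch (x), yaw (y), roll (z)
    xsettings.xcamrot = angles.x         // pitch
    xsettings.ycamrot = angles.y         // yaw
    xsettings.zcamrot = angles.z         // roll
}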
You might be better off taking the quaternion for rotation, depending on what you're wanting to do with the output.
You can also use arView.cameraTransform to get the camera's transform. From there, translation can be taken from arView.cameraTransform.translation.{x,y,z}, and quaternion rotation with arView.cameraTransform.rotation. One of the benefits of a quaternion here is that you will not have a problem with rotation order.
If you still wanted to get Euler rotations, you can always use MDLTransform:
MDLTransform(matrix: self.cameraTransform.matrix).rotation.{x,y,z}
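Putting that together, here is a rough sketch of reading the full camera pose from cameraTransform each frame; it assumes an existing arView, and the Euler conversion via MDLTransform requires importing ModelIO:

import RealityKit
import ModelIO

func readCameraPose(from arView: ARView) {
    let cam = arView.cameraTransform

    // Position in world space
    let position: SIMD3<Float> = cam.translation

    // Orientation as a quaternion -- no rotation-order ambiguity here
    let orientation: simd_quatf = cam.rotation

    // If you still need Euler angles, MDLTransform can derive them
    // from the full 4x4 matrix (values are in radians)
    let euler = MDLTransform(matrix: cam.matrix).rotation

    print(position, orientation, euler)
}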
Related
I am making a scientific visualization app of the Galaxy. Among other things, it displays where certain deep sky objects (stars, star clusters, nebulae, etc) are located in and around the Galaxy.
There are 6 or 7 classes of object types (stars, star clusters, nebulae, globular clusters, etc). Each object within a class looks the same (i.e. using the same image).
I've tried creating GameObjects for each deep sky object, but the system can get bogged down with many objects (~10,000). So instead I create a particle system for each class of deep sky object, setting the specific image to display for each class.
Each particle (i.e. deep sky object) is created at the appropriate location and then I do a SetParticles() to add them to that class's particle system. This works really well and I can have 100,000 objects (particles) with decent performance.
However, I need to allow the user to click/tap on an object to select it. I have not found any examples of how to do hit testing on individual particles in the particle system. Is this possible in Unity?
Thanks,
Bill
You'll have to do the raycasting yourself.
Just implement a custom raycasting algorithm using a simple line-rectangle intersection: assume a small rectangle at each particle's position. Since you do not rely on Unity's built-in methods, you can do this asynchronously. For performance optimization, you can also cluster the possible targets at simulation start, allowing you to eliminate whole clusters when their bounding box is not hit by your ray.
Note: Imho you should choose a completely different approach for your data rendering.
Take a look at Unity's Entity Component System. It allows for large amounts of data, but comes with some disadvantages (e.g. when using Unity's physics engine), which I suppose will not be relevant for your case.
I ended up rolling my own solution.
In Update(), upon detecting a click, I iterate through all the particles. For each particle, I calculate its size on the screen based on the particle's size and its distance from the camera.
Then I take the particle's position and translate that into screen coordinates. I use the screen size to generate a bounding rectangle and then test to see if the mouse point is inside it.
As I iterate through the particles I keep track of which is the closest hit. At the end, that is my answer.
if (Input.GetMouseButtonDown(0))
{
    Particle? closestHitParticle = null;
    var closestHitDist = float.MaxValue;

    foreach (var particle in gcParticles)
    {
        var pos = particle.position;
        var size = particle.GetCurrentSize(gcParticleSystem);
        var distance = Vector3.Distance(pos, camera.transform.position);
        var screenSize = Utility.angularSizeOnScreen(size, distance, camera);
        var screenPos = camera.WorldToScreenPoint(pos);
        var screenRect = new Rect(screenPos.x - screenSize / 2, screenPos.y - screenSize / 2, screenSize, screenSize);

        if (screenRect.Contains(Input.mousePosition) && distance < closestHitDist)
        {
            closestHitParticle = particle;
            closestHitDist = distance;
        }
    }

    if (closestHitDist < float.MaxValue)
    {
        Debug.Log($"Hit particle at {closestHitParticle?.position}");
    }
}
Here is the angularSizeOnScreen method:
public static float angularSizeOnScreen (float diam, float dist, Camera cam)
{
var aSize = (diam / dist) * Mathf.Rad2Deg;
var pSize = ((aSize * Screen.height) / cam.fieldOfView);
return pSize;
}
I want to determine an angle in a local, rotated axis system. Basically, I want to obtain the angle within a plane of a rotated axis system. The best way to explain it is graphically.
I can do that by projecting the direction from origin to target onto my plane, and then using Vector3.Angle(origin forward direction, projected direction in plane).
Is there a way to obtain this in a similar fashion to Quaternion.FromToRotation(from, to).eulerAngles, but with the Euler angles expressed with respect to a coordinate system that is not the world's, but the local, rotated one (the one represented by the rotated plane in the picture above)?
So that the desired angle, for the rotation about the local y axis, would be Quaternion.FromToRotation(from, to).localEulerAngles.y, as the local Euler angles would be (0, -desiredAngle, 0), based on this approach.
Or is there a more direct way than the way I achieved it?
If I understand you correctly, there are probably many possible ways to go.
I think you could, e.g., use Quaternion.ToAngleAxis, which returns an angle and the axis around which the rotation occurs. You can then convert this axis into the local space of your object:
public Vector3 GetLocalEulerAngles(Transform obj, Vector3 vector)
{
// As you had it already, still in worldspace
var rotation = Quaternion.FromToRotation(obj.forward, vector);
rotation.ToAngleAxis(out var angle, out var axis);
// Now convert the axis from currently world space into the local space
// Afaik localAxis should already be normalized
var localAxis = obj.InverseTransformDirection(axis);
// Or make it float and only return the angle if you don't need the rest anyway
return localAxis * angle;
}
As an alternative, as mentioned, I guess yes: you could also simply convert the other vector into local space first; then Quaternion.FromToRotation is already in local space:
public Vector3 GetLocalEulerAngles(Transform obj, Vector3 vector)
{
var localVector = obj.InverseTransformDirection(vector);
// Now this already is a rotation in local space
var rotation = Quaternion.FromToRotation(Vector3.forward, localVector);
return rotation.eulerAngles;
}
I've been trying to figure this out for a few days now.
Given an ARKit-based app where I track a user's face, how can I get the face's rotation in absolute terms, from its anchor?
I can get the transform of the ARAnchor, which is a simd_matrix4x4.
There's a lot of info on how to get the position out of that matrix (it's in the last column, columns.3), but nothing on the rotation!
I want to be able to control a 3D object outside of the app, by passing YAW, PITCH and ROLL.
The latest thing I tried actually works somewhat:
let arFrame = session.currentFrame!
guard let faceAnchor = arFrame.anchors[0] as? ARFaceAnchor else { return }
let faceMatrix = SCNMatrix4.init(faceAnchor.transform)
let node = SCNNode()
node.transform = faceMatrix
let rotation = node.worldOrientation
rotation.x, .y, and .z have values I could use, but as I move my phone, the values change. For instance, if I turn 180° and keep looking at the phone, the values change wildly based on the position of the phone.
I tried changing the world alignment in the ARConfiguration, but that didn't make a difference.
Am I reading the wrong parameters? This should have been a lot easier!
I've figured it out...
Once you have the face anchor, some calculations need to happen with its transform matrix, and the camera's transform.
Like this:
let arFrame = session.currentFrame!
guard let faceAnchor = arFrame.anchors[0] as? ARFaceAnchor else { return }
let projectionMatrix = arFrame.camera.projectionMatrix(for: .portrait, viewportSize: self.sceneView.bounds.size, zNear: 0.001, zFar: 1000)
let viewMatrix = arFrame.camera.viewMatrix(for: .portrait)
let projectionViewMatrix = simd_mul(projectionMatrix, viewMatrix)
let modelMatrix = faceAnchor.transform
let mvpMatrix = simd_mul(projectionViewMatrix, modelMatrix)
// This allows me to just get a .x .y .z rotation from the matrix, without having to do crazy calculations
let newFaceMatrix = SCNMatrix4.init(mvpMatrix)
let faceNode = SCNNode()
faceNode.transform = newFaceMatrix
let rotation = vector_float3(faceNode.worldOrientation.x, faceNode.worldOrientation.y, faceNode.worldOrientation.z)
rotation.x, .y, and .z will return the face's pitch, yaw, and roll (respectively).
I'm adding a small multiplier and inverting two of the axes, so it ends up like this:
yaw = -rotation.y*3
pitch = -rotation.x*3
roll = rotation.z*1.5
Phew!
I understand that you are using the front camera and ARFaceTrackingConfiguration, which is not supposed to give you absolute values. I would try configuring a second ARSession for the back camera with ARWorldTrackingConfiguration, which does provide absolute values. The final solution will probably require values from both ARSessions. I haven't tested this hypothesis yet, but it seems to be the only way.
UPDATE: a quote from the ARWorldTrackingConfiguration documentation:
The ARWorldTrackingConfiguration class tracks the device's movement with six degrees of freedom (6DOF): specifically, the three rotation axes (roll, pitch, and yaw), and three translation axes (movement in x, y, and z). This kind of tracking can create immersive AR experiences: A virtual object can appear to stay in the same place relative to the real world, even as the user tilts the device to look above or below the object, or moves the device around to see the object's sides and back.
Apparently, other tracking configurations do not have this ability.
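For reference, here is a hedged sketch (mine, not from either answer) of pulling a rotation straight out of an anchor's 4x4 transform, either as a simd quaternion or by letting an SCNNode decompose it into Euler angles. Whether the result reads as "absolute" still depends on the session configuration and world alignment discussed above:

import ARKit
import SceneKit

func orientation(of anchor: ARAnchor) -> (quaternion: simd_quatf, euler: simd_float3) {
    // Quaternion taken directly from the 4x4 transform (the translation column is ignored)
    let quaternion = simd_quatf(anchor.transform)

    // Alternatively, let SceneKit decompose the matrix into Euler angles (radians)
    let node = SCNNode()
    node.simdTransform = anchor.transform
    return (quaternion, node.simdEulerAngles)
}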
The image below shows a rotated box that should be moved horizontally on the X and Z axes. Y should stay unaffected to simplify the scenario. The box could also be the SCNNode of the camera, so I guess a projection does not make sense at this point.
So let's say we want to move the box in the direction of the red arrow. How can this be achieved using SceneKit?
The red arrow indicates -Z direction of the box. It also shows us it is not parallel to the camera's projection or to the global axes that are shown as dark grey lines of the grid.
My last approach is the product of a translation matrix and a rotation matrix that results in a new transformation matrix. Do I have to add the current transform to the new transform then?
If yes, where is the SceneKit function for adding matrices (analogous to SCNMatrix4Mult for multiplication), or do I have to write it myself using Metal?
If no, what I'm missing out with the matrix calculations?
I don't want to make use of GLKit.
So my understanding is that you want to move the box node along its own X axis (not its parent's X axis). And because the box node is rotated, its X axis is not aligned with its parent's, so you have the problem of converting the translation between the two coordinate systems.
The node hierarchy is
parentNode
|
|----boxNode // rotated around Y (vertical) axis
Using Transformation Matrices
To move boxNode along its own X axis
// First let's get the current boxNode transformation matrix
SCNMatrix4 boxTransform = boxNode.transform;
// Let's make a new matrix for translation +2 along X axis
SCNMatrix4 xTranslation = SCNMatrix4MakeTranslation(2, 0, 0);
// Combine the two matrices, THE ORDER MATTERS !
// if you swap the parameters you will move it in parent's coord system
SCNMatrix4 newTransform = SCNMatrix4Mult(xTranslation, boxTransform);
// Apply the newly generated transform
boxNode.transform = newTransform;
Please Note: The order matters when multiplying matrices
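For a Swift project, the same idea can be written with simd types. A rough sketch, assuming the same boxNode; note that simd uses the column-vector convention, so the local translation goes on the right:

import SceneKit
import simd

// Move boxNode +2 along its OWN x axis using simd matrices
var localTranslation = matrix_identity_float4x4
localTranslation.columns.3 = SIMD4<Float>(2, 0, 0, 1)

// Existing transform on the left, local translation on the right,
// so the translation is applied in the node's own coordinate space
boxNode.simdTransform = boxNode.simdTransform * localTranslation

// SceneKit also offers a convenience method for exactly this:
// boxNode.simdLocalTranslate(by: SIMD3<Float>(2, 0, 0))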
Another option:
Using SCNNode's coordinate conversion functions, which looks more straightforward to me
// Get the boxNode current position in parent's coord system
SCNVector3 positionInParent = boxNode.position;
// Convert that coordinate to boxNode's own coord system
SCNVector3 positionInSelf = [boxNode convertPosition:positionInParent fromNode:parentNode];
// Translate along own X axis by +2 points
positionInSelf = SCNVector3Make(positionInSelf.x + 2,
positionInSelf.y,
positionInSelf.z);
// Convert that back to parent's coord system
positionInParent = [parentNode convertPosition: positionInSelf fromNode:boxNode];
// Apply the new position
boxNode.position = positionInParent;
Building on @Sulevus's correct answer, here's an extension to SCNNode, in Swift, that simplifies things by using the convertVector transformation rather than convertPosition.
I've done it as a var returning a unit vector, and supplied an SCNVector3 overload of multiply so you can say things like
let action = SCNAction.move(by: 2 * cameraNode.leftUnitVectorInParent, duration: 1)
public extension SCNNode {
    var leftUnitVectorInParent: SCNVector3 {
        let vectorInSelf = SCNVector3(x: 1, y: 0, z: 0)
        guard let parent = self.parent else { return vectorInSelf }
        // Convert to parent's coord system
        return parent.convertVector(vectorInSelf, from: self)
    }

    var forwardUnitVectorInParent: SCNVector3 {
        let vectorInSelf = SCNVector3(x: 0, y: 0, z: 1)
        guard let parent = self.parent else { return vectorInSelf }
        // Convert to parent's coord system
        return parent.convertVector(vectorInSelf, from: self)
    }
}

// The operator overloads live at file scope so they are found for any operands
public func *(lhs: SCNVector3, rhs: CGFloat) -> SCNVector3 {
    return SCNVector3(x: lhs.x * rhs, y: lhs.y * rhs, z: lhs.z * rhs)
}

public func *(lhs: CGFloat, rhs: SCNVector3) -> SCNVector3 {
    return SCNVector3(x: lhs * rhs.x, y: lhs * rhs.y, z: lhs * rhs.z)
}
The far easier way this is usually done:
The usual, normal, and extremely easy way to do this in any game engine or 3D engine is:
You simply have a wrapper node, which holds the node in question.
This is indeed the entire point of transforms: they enable you to abstract out a certain motion.
That's the whole point of 3D engines: the GPU just multiplies out all the quaternions on the way down to the object; it's wholly pointless to (A) figure out the math in your head and (B) do it manually (indeed, on the CPU).
In Unity it's "game objects", in scene kit it's "nodes" and so on.
In all 3D engines, including scene kit, almost everything has one or more "holders" around it.
To repeat, the reasons for this are (A) it's the entire raison d'etre of a game engine, to achieve performance in multiplying out the quaternions of every vertex and (B) sheer convenience and code solidity.
One of a million examples ...
Of course you can trivially do it in code,
cameraHolder.addChildNode(camera)
In the OP's example, it looks like you would use cameraHolder only to rotate the camera; then, for the motion the OP is asking about, simply move camera.
It's perfectly normal to have a chain of a number of nodes to get to an object.
This is often used for "effects". Say you have an object, which sometimes has to "vibrate up and down". You can have one node which only does that movement. Note that then, all the animations etc for that movement only have to be on that node. And critically, they can run independently of any other animations or movements. (And indeed you can just use the node elsewhere to jiggle something else.)
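A minimal SceneKit sketch of that holder pattern (the names are mine): rotate only the holder, then move the child along its own local axis and the motion automatically follows the rotated direction:

import SceneKit

let scene = SCNScene()
let cameraHolder = SCNNode()
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()

cameraHolder.addChildNode(cameraNode)
scene.rootNode.addChildNode(cameraHolder)

// Rotate only the holder...
cameraHolder.simdEulerAngles = SIMD3<Float>(0, .pi / 4, 0)

// ...and move only the camera node inside it. Because the holder is rotated,
// this -z translation already happens along the rotated (red arrow) direction.
cameraNode.simdPosition.z -= 2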
I am implementing a boids simulation using Swift and SceneKit. Implementing the simulation itself has been fairly straightforward; however, I have been unable to make my models face the direction they are flying (at least not consistently and correctly). To see the full project, you can get it here: https://github.com/kingreza/Swift-Boids
Here is what I am doing to rotate the models to face the direction they are going:
func rotateShipToFaceForward(ship: Ship, positionToBe: SCNVector3)
{
var source = (ship.node.position - ship.velocity).normalized();
// positionToBe is ship.node.position + ship.velocity which is assigned to ship.position at the end of this call
var destination = (positionToBe - ship.node.position).normalized();
var dot = source.dot(destination)
var rotAngle = GLKMathDegreesToRadians(acos(dot));
var rotAxis = source.cross(destination);
rotAxis.normalize();
var q = GLKQuaternionMakeWithAngleAndAxis(Float(rotAngle), Float(rotAxis.x), Float(rotAxis.y), Float(rotAxis.z))
ship.node.rotation = SCNVector4(x: CGFloat(q.x), y: CGFloat(q.y), z: CGFloat(q.z), w: CGFloat(q.w))
}
Here is how they are behaving right now
https://youtu.be/9k07wxod3yI
Three years too late to help the original questioner, and the original YouTube video is gone, but you can see one at the project's GitHub page.
The original Boids code stored orientation as the three basis vectors of the boid's local coordinate space, which can be thought of as the columns of a 3x3 rotation matrix. Each frame, a behavioral "steering force" would act on the current velocity to produce a new velocity. Assuming "velocity alignment", this new velocity is parallel to the new forward (z) vector. It did a cross product of the old up (y) vector and the new forward vector to produce a new side vector, then crossed the new side and forward vectors to get a new up vector. FYI, the code for that can be found in OpenSteer.
Since it looks like you want orientation as a quaternion, there is probably a constructor for your quaternion class that takes a rotation matrix as an argument.
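To make that concrete, here is a sketch in Swift using simd (my own adaptation, so treat the axis conventions as an assumption): build the boid's basis from its new velocity and old up vector with two cross products, pack the axes into a 3x3 rotation matrix, and let simd convert it to a quaternion. It assumes SceneKit's convention that a node "looks" along its local -z axis:

import SceneKit
import simd

func orientation(forVelocity newVelocity: SIMD3<Float>, oldUp: SIMD3<Float>) -> simd_quatf {
    // The node should look along the velocity, and SceneKit nodes look down -z
    // (assumes the velocity is non-zero and not parallel to oldUp)
    let zAxis = -simd_normalize(newVelocity)
    // Side vector: cross of the old up vector and the new z axis
    let xAxis = simd_normalize(simd_cross(oldUp, zAxis))
    // New up vector: cross of z and side, guaranteeing an orthonormal basis
    let yAxis = simd_cross(zAxis, xAxis)

    // Columns are the local x, y, z axes expressed in the parent's space
    let rotation = simd_float3x3(columns: (xAxis, yAxis, zAxis))
    return simd_quatf(rotation)
}

// e.g. ship.node.simdOrientation = orientation(forVelocity: newVelocity, oldUp: oldUp)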