I am new to ARKit and I've been trying to figure out the easiest way to convert an ARFaceAnchor's lookAtPoint into world coordinates, since it is provided relative to the face's current orientation (I believe). I've attempted this myself, but I'm not confident the result is correct. I know there's some variability, but the value seems to change too much if I stare at the same point while moving and rotating my head:
let faceMatrix = SCNMatrix4(currentFaceAnchor.transform)
let node = SCNNode()
node.transform = faceMatrix
let eyeLocation = SCNVector3(currentFaceAnchor.lookAtPoint)
node.localTranslate(by: eyeLocation)
let worldEyeLocation = node.worldPosition
Thanks in advance for any help!
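For reference, since lookAtPoint is expressed in the face anchor's own coordinate space, the conversion can also be done directly with simd by running the point through the anchor's transform. A minimal sketch, assuming currentFaceAnchor is the tracked ARFaceAnchor:
let local = currentFaceAnchor.lookAtPoint                      // simd_float3 in face-anchor space
let world = simd_mul(currentFaceAnchor.transform,              // face-anchor-to-world transform
                     simd_float4(local.x, local.y, local.z, 1))
let lookAtPointInWorld = simd_float3(world.x, world.y, world.z)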
This is confusing to me, and I would be grateful if anyone could help me with it.
I have a shadow plane to show the shadow below the AR object. I read some articles where they define this shadow plane in viewDidLoad and add it as a child node to sceneView.scene. The question is: should it be defined only once for the floor surface?
For instance, I can add the shadow plane in renderer(_:didAdd:for:), which is called once when a new surface is detected. That works well for me. But should the position of the shadow plane be updated as well? Can someone explain where it should be defined and where/when it should be updated?
Here is how I define the shadow plane:
private func addShadowPlane(node: SCNNode, planeAnchor: ARPlaneAnchor) {
    let anchorX = planeAnchor.center.x
    let anchorY = planeAnchor.center.y
    let anchorZ = planeAnchor.center.z

    let floor = SCNFloor()
    let floorNode = SCNNode(geometry: floor)
    floorNode.position = SCNVector3(anchorX, anchorY, anchorZ)
    floor.length = CGFloat(planeAnchor.extent.z)
    floor.width = CGFloat(planeAnchor.extent.x)
    floor.reflectivity = 0
    floor.materials = [shadowMaterialStandard()]
    node.addChildNode(floorNode)
}
func shadowMaterialStandard() -> SCNMaterial {
    let material = SCNMaterial()
    material.lightingModel = .physicallyBased
    material.writesToDepthBuffer = true
    material.readsFromDepthBuffer = true
    material.colorBufferWriteMask = []
    return material
}
The issue you might run into is deciding whether you want a single shadow plane in an initially defined position that then stays there (or can be repositioned), or lots of shadow planes, one on every surface ARKit detects. The problem with the latter is that those planes will only roughly match the surfaces they are created on. You can generate more accurate shapes for surfaces, but they are built up in an ongoing process and need more time to complete (imagine scanning a table by walking around it).

I have also built some AR apps with shadow planes. I usually create one single large shadow plane (say 20x20 meters) on request using a focus square: I fetch the worldPosition from the focus square, then add a plane at that location using SceneKit (rather than in the renderer callback for plane anchors). Keep in mind there are many ways to do this; there is no single best way.
Try to study this Apple Sample App for more information on placing objects, casting shadows etc:
https://developer.apple.com/documentation/arkit/environmental_analysis/placing_objects_and_handling_3d_interaction
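A minimal sketch of that single-plane approach, reusing shadowMaterialStandard() from the question; the focus-square world position passed in is an assumption based on a typical focus-square implementation:
// Call once, e.g. when the user confirms placement via the focus square.
func addSingleShadowPlane(to sceneView: ARSCNView, at focusSquareWorldPosition: SCNVector3) {
    let floor = SCNFloor()
    floor.width = 20                       // large enough to catch shadows anywhere nearby
    floor.length = 20
    floor.reflectivity = 0
    floor.materials = [shadowMaterialStandard()]

    let floorNode = SCNNode(geometry: floor)
    floorNode.worldPosition = focusSquareWorldPosition
    sceneView.scene.rootNode.addChildNode(floorNode)
}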
How to create a line between two points in 3d space with RealityKit?
There are examples of creating lines between two points in SceneKit; however, there are basically none using RealityKit.
To create the line, I've created a rectangle model entity and placed it between my first touched point and the current touched point. From here, all I would need to do is rotate the rectangle to face the current touched point. However, using simd_quatf(from:to:) doesn't work as intended:
rectangleModelEntity.transform.rotation = simd_quatf(from: firstTouchedPoint,
                                                     to: currTouchedPoint)
If I touch a point and then drag directly downwards, the rectangle model should be a straight line between the first touched point and the current touched point, but it stays horizontal with a slight tilt.
To solve this, I tried getting the angle between my initially horizontal line as a vector and the vector from the first touched point to the current touched point:
let startVec = currTouchedPoint - firstTouchedPoint
let endVec = endOfModelEntityPoint - modelEntityCenterPoint
let lengthVec = simd_length(cross(startVec, endVec))
let theta = atan2(lengthVec, dot(startVec, endVec))
This gives me the angle between two vectors in 3D space, which seems correct; when I checked, it gave me 90 degrees when touching and then dragging directly beneath the first point.
The problem is I don't know what the axis to rotate it on should be. Since this is 3d space, the line doesn't need to be on a 2d plane, the current touched position can be downwards and in front of the starting touch position.
rectangleModelEntity.transform.rotation = simd_quatf(angle: theta, axis: ???)
Personally, I'm not even sure the above is the correct approach to creating a line between two points. In theory it's rather basic: create a rectangle with low height/depth to mimic a line, position it at the center of the starting and current touch points, then rotate it so it's oriented correctly.
What should the axis be for the angle between the two vectors above?
Is there a better method of creating a line between two points in 3D space with RealityKit/ARKit?
I have implemented this using a box. Let me know if you have a better way.
// Anchor at the midpoint between the two positions, oriented to look at one end.
let midPosition = SIMD3(x: (position1.x + position2.x) / 2,
                        y: (position1.y + position2.y) / 2,
                        z: (position1.z + position2.z) / 2)

let anchor = AnchorEntity()
anchor.position = midPosition
anchor.look(at: position1, from: midPosition, relativeTo: nil)

// The box's depth spans the full distance between the two points.
let meters = simd_distance(position1, position2)

let lineMaterial = SimpleMaterial(color: .red,
                                  roughness: 1,
                                  isMetallic: false)

let bottomLineMesh = MeshResource.generateBox(width: 0.025,
                                              height: 0.025 / 2.5,
                                              depth: meters)

let bottomLineEntity = ModelEntity(mesh: bottomLineMesh,
                                   materials: [lineMaterial])
bottomLineEntity.position = .init(0, 0.025, 0)
anchor.addChild(bottomLineEntity)
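To actually display it, the anchor still needs to be added to the scene; for example (assuming arView is your ARView):
arView.scene.addAnchor(anchor)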
The axis is the cross product of the direction your object is facing at the beginning and the direction it should be facing now.
So if the object is at position p1 = [x1, y1, z1], initially facing d1 = [0, 0, -1], and you want it to face a point p2 = [x, y, z], the axis would be the cross product of the normalized vectors: normalize(d1) ✕ normalize(p2 - p1).
You may have to swap the two operands, or just negate the angle, depending on how it works out.
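A minimal sketch of that idea with simd, assuming the box's long axis initially points along -Z (the names firstTouchedPoint, currTouchedPoint and rectangleModelEntity are taken from the question). Note that simd_quatf(from:to:) expects normalized direction vectors rather than positions, which may be why it behaved unexpectedly:
let initialDirection = SIMD3<Float>(0, 0, -1)    // direction the box faces before rotating
let targetDirection = simd_normalize(currTouchedPoint - firstTouchedPoint)

// Axis is the cross product of the two directions; angle comes from their dot product.
// (If the directions are parallel, the cross product is zero and no rotation is needed.)
let axis = simd_normalize(simd_cross(initialDirection, targetDirection))
let angle = acos(min(max(simd_dot(initialDirection, targetDirection), -1), 1))

rectangleModelEntity.transform.rotation = simd_quatf(angle: angle, axis: axis)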
My Swift ARKit app needs the position and orientation of the face relative to the front-facing camera. If I set ARConfiguration.worldAlignment = .camera all I need to do is call for the faceAnchor.transform, which works perfectly; but I need to run in the default worldAlignment = .gravity. In this mode I can get faceAnchor.transform and camera.transform, which are both supplied in world coordinates. How can I use those transforms to get the face anchor in camera coordinates? I've tried multiplying those together as well as multiplying one by the other's inverse, in all four order combinations, but none of these results works. I just don't understand matrix operations well enough to succeed here. Can someone shed light on this for me?
I finally figured this out using SceneKit functions!
let currentFaceTransform = currentFaceAnchor!.transform
let currentCameraTransform = frame.camera.transform
let newFaceMatrix = SCNMatrix4.init(currentFaceTransform)
let newCameraMatrix = SCNMatrix4.init(currentCameraTransform)
let cameraNode = SCNNode()
cameraNode.transform = newCameraMatrix
let originNode = SCNNode()
originNode.transform = SCNMatrix4Identity
//Converts a transform from the node’s local coordinate space to that of another node.
let transformInCameraSpace = originNode.convertTransform(newFaceMatrix, to: cameraNode)
let faceTransformFromCamera = simd_float4x4(transformInCameraSpace)
Hope this helps some others out there!
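For what it's worth, the same result can be computed directly with simd, without the intermediate SCNNodes: expressing a world-space transform in camera space amounts to pre-multiplying by the inverse of the camera transform. A sketch, using the same currentFaceTransform and currentCameraTransform as above:
let faceTransformFromCamera = simd_mul(currentCameraTransform.inverse, currentFaceTransform)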
I've been trying to figure this out for a few days now.
Given an ARKit-based app where I track a user's face, how can I get the face's rotation in absolute terms, from its anchor?
I can get the transform of the ARAnchor, which is a simd_matrix4x4.
There's a lot of info on how to get the position out of that matrix (it's in the last column, columns.3), but nothing on the rotation!
I want to be able to control a 3D object outside of the app, by passing YAW, PITCH and ROLL.
The latest thing I tried actually works somewhat:
let arFrame = session.currentFrame!
guard let faceAnchor = arFrame.anchors[0] as? ARFaceAnchor else { return }
let faceMatrix = SCNMatrix4.init(faceAnchor.transform)
let node = SCNNode()
node.transform = faceMatrix
let rotation = node.worldOrientation
rotation.x, .y and .z have values I could use, but as I move my phone the values change. For instance, if I turn 180˚ and keep looking at the phone, the values change wildly based on the position of the phone.
I tried changing the world alignment in the ARConfiguration, but that didn't make a difference.
Am I reading the wrong parameters? This should have been a lot easier!
I've figured it out...
Once you have the face anchor, some calculations need to happen with its transform matrix, and the camera's transform.
Like this:
let arFrame = session.currentFrame!
guard let faceAnchor = arFrame.anchors[0] as? ARFaceAnchor else { return }
let projectionMatrix = arFrame.camera.projectionMatrix(for: .portrait, viewportSize: self.sceneView.bounds.size, zNear: 0.001, zFar: 1000)
let viewMatrix = arFrame.camera.viewMatrix(for: .portrait)
let projectionViewMatrix = simd_mul(projectionMatrix, viewMatrix)
let modelMatrix = faceAnchor.transform
let mvpMatrix = simd_mul(projectionViewMatrix, modelMatrix)
// This allows me to just get a .x .y .z rotation from the matrix, without having to do crazy calculations
let newFaceMatrix = SCNMatrix4.init(mvpMatrix)
let faceNode = SCNNode()
faceNode.transform = newFaceMatrix
let rotation = vector_float3(faceNode.worldOrientation.x, faceNode.worldOrientation.y, faceNode.worldOrientation.z)
rotation.x, .y and .z will return the face's pitch, yaw and roll (respectively).
I'm adding a small multiplier and inverting 2 of the axes, so it ends up like this:
yaw = -rotation.y*3
pitch = -rotation.x*3
roll = rotation.z*1.5
Phew!
I understand that you are using the front camera and ARFaceTrackingConfiguration, which is not supposed to give you absolute values. I would try configuring a second ARSession for the back camera with ARWorldTrackingConfiguration, which does provide absolute values. The final solution will probably require values from both ARSessions. I haven't tested this hypothesis yet, but it seems to be the only way.
UPDATE: a quote from the ARWorldTrackingConfiguration documentation -
The ARWorldTrackingConfiguration class tracks the device's movement with six degrees of freedom (6DOF): specifically, the three rotation axes (roll, pitch, and yaw), and three translation axes (movement in x, y, and z). This kind of tracking can create immersive AR experiences: A virtual object can appear to stay in the same place relative to the real world, even as the user tilts the device to look above or below the object, or moves the device around to see the object's sides and back.
Apparently, other tracking configurations do not have this ability.
I have a coordinate region that I have determined contains the limits of what I want to show in my app. I have set this up as an MKCoordinateRegion with a center point (latitude, longitude) and a span. How do I determine whether the current userLocation is inside my coordinate region?
Use map rects. Here's an example using the map's current visible rect. With regard to your question, note that convertRegion:toRectToView: returns a CGRect in view coordinates, not an MKMapRect; to test against your own region, first build an MKMapRect from the region's corner coordinates (see the sketch after the Swift version below) and check against that instead of the visible rect.
MKMapPoint userPoint = MKMapPointForCoordinate(mapView.userLocation.location.coordinate);
MKMapRect mapRect = mapView.visibleMapRect;
BOOL inside = MKMapRectContainsPoint(mapRect, userPoint);
Swift 3 version of firstresponder's answer:
let userPoint = MKMapPointForCoordinate(mapView.userLocation.coordinate)
let mapRect = mapView.visibleMapRect
let inside = MKMapRectContainsPoint(mapRect, userPoint)
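If you want to test against your own MKCoordinateRegion rather than the visible rect, here is a minimal sketch of the conversion; the helper name mapRect(for:) and the myRegion variable are my own:
import MapKit

// Build an MKMapRect from an MKCoordinateRegion by converting its two corners.
func mapRect(for region: MKCoordinateRegion) -> MKMapRect {
    let topLeft = CLLocationCoordinate2D(
        latitude: region.center.latitude + region.span.latitudeDelta / 2,
        longitude: region.center.longitude - region.span.longitudeDelta / 2)
    let bottomRight = CLLocationCoordinate2D(
        latitude: region.center.latitude - region.span.latitudeDelta / 2,
        longitude: region.center.longitude + region.span.longitudeDelta / 2)
    let a = MKMapPointForCoordinate(topLeft)
    let b = MKMapPointForCoordinate(bottomRight)
    return MKMapRectMake(min(a.x, b.x), min(a.y, b.y), abs(a.x - b.x), abs(a.y - b.y))
}

let userPoint = MKMapPointForCoordinate(mapView.userLocation.coordinate)
let inside = MKMapRectContainsPoint(mapRect(for: myRegion), userPoint)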
Pretty much the same. This API has not been Swift-ified (i.e., updated to conform to the Swift API design guidelines) yet. It really should be...
let userPoint = mapView.userLocation.coordinate.mapPoint
let inside = mapView.visibleMapRect.contains(userPoint)
There is a simple solution for deciding whether a point is inside your area if the area is given by a polygon: the ray casting algorithm. See http://en.wikipedia.org/wiki/Point_in_polygon
As a starting point for the ray, use a location guaranteed to be outside your region, e.g. the (geographic) North Pole.
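A minimal Swift sketch of that ray-casting idea, using the common variant that casts a horizontal ray and counts edge crossings (an odd count means the point is inside). It treats latitude/longitude as a flat plane, which is fine for small regions; polygon is assumed to be your array of CLLocationCoordinate2D vertices:
func contains(point: CLLocationCoordinate2D, polygon: [CLLocationCoordinate2D]) -> Bool {
    var inside = false
    var j = polygon.count - 1
    for i in 0..<polygon.count {
        let a = polygon[i], b = polygon[j]
        // Does the edge a-b straddle the point's latitude?
        if (a.latitude > point.latitude) != (b.latitude > point.latitude) {
            // Longitude at which the edge crosses that latitude.
            let crossing = (b.longitude - a.longitude) *
                (point.latitude - a.latitude) / (b.latitude - a.latitude) + a.longitude
            if point.longitude < crossing {
                inside = !inside
            }
        }
        j = i
    }
    return inside
}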