ARKit's ARFaceTrackingConfiguration places an ARFaceAnchor, carrying information about the position and orientation of the face, onto the scene. Among other properties, this anchor has the lookAtPoint property that I'm interested in. I know that this vector is relative to the face. How can I draw a point on the screen for this position, i.e. how can I translate this point's coordinates?
The .lookAtPoint instance property is only for estimating the gaze direction.
Apple's documentation says that .lookAtPoint is a position in face coordinate space that estimates only the direction of the face's gaze. It's a vector of three scalar values, and it's gettable only, not settable:
var lookAtPoint: SIMD3<Float> { get }
In other words, this is the resulting vector derived from two quantities – the .rightEyeTransform and .leftEyeTransform instance properties (which are also gettable only):
var rightEyeTransform: simd_float4x4 { get }
var leftEyeTransform: simd_float4x4 { get }
Here's a hypothetical example of how you could use this instance property:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {

    if let faceAnchor = anchor as? ARFaceAnchor,
       let faceGeometry = node.geometry as? ARSCNFaceGeometry {

        if faceAnchor.lookAtPoint.x >= 0 {      // Looking towards +X
            faceGeometry.firstMaterial?.diffuse.contents = UIImage(named: "redTexture.png")
        } else {                                // Looking towards -X
            faceGeometry.firstMaterial?.diffuse.contents = UIImage(named: "cyanTexture.png")
        }

        faceGeometry.update(from: faceAnchor.geometry)
        facialExpression(anchor: faceAnchor)

        DispatchQueue.main.async {
            self.label.text = self.textBoard
        }
    }
}
And here's an image showing axis directions for ARFaceTrackingConfiguration():
Answering your question:
I could say that you can't manage this point's coordinates directly, because it's a gettable-only property (and it carries only an XYZ orientation, not an XYZ translation).
So if you need both – translation and rotation – use the .rightEyeTransform and .leftEyeTransform instance properties instead.
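For instance, here's a minimal sketch (the marker nodes and their names are illustrative) that pins small spheres to both eyes inside the renderer(_:didUpdate:for:) callback, using the eye transforms, which are expressed in the face anchor's coordinate space:
let leftEyeNode  = SCNNode(geometry: SCNSphere(radius: 0.005))
let rightEyeNode = SCNNode(geometry: SCNSphere(radius: 0.005))
// `node` is the face node supplied by renderer(_:didUpdate:for:)
node.addChildNode(leftEyeNode)
node.addChildNode(rightEyeNode)
// The eye transforms are relative to the face anchor, so applying them
// to children of the face node places the markers on the eyes
leftEyeNode.simdTransform  = faceAnchor.leftEyeTransform
rightEyeNode.simdTransform = faceAnchor.rightEyeTransform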
There are two projection methods:
FIRST. In SceneKit/ARKit, use the following instance method for projecting a point onto the 2D view (called on a sceneView instance):
func projectPoint(_ point: SCNVector3) -> SCNVector3
or:
let sceneView = ARSCNView()
sceneView.projectPoint(myPoint)
SECOND. In ARKit, use the following instance method for projecting a point onto the 2D view (called on an ARCamera instance):
func projectPoint(_ point: simd_float3,
                  orientation: UIInterfaceOrientation,
                  viewportSize: CGSize) -> CGPoint
or:
if let camera = sceneView.session.currentFrame?.camera {
    let screenPoint = camera.projectPoint(myPoint, orientation: myOrientation, viewportSize: vpSize)
}
This method helps you project a point from the 3D world coordinate system of the scene to the 2D pixel coordinate system of the renderer.
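Putting the projection idea together with the face anchor, here's a hedged sketch (not a definitive implementation) that converts the face-local lookAtPoint into world space via the anchor's transform and then projects it onto the view; sceneView and faceAnchor are assumed to exist in your class:
func screenPoint(for faceAnchor: ARFaceAnchor, in sceneView: ARSCNView) -> CGPoint {
    // Promote the face-space vector to a homogeneous point (w = 1)
    let localPoint = simd_float4(faceAnchor.lookAtPoint.x,
                                 faceAnchor.lookAtPoint.y,
                                 faceAnchor.lookAtPoint.z, 1)
    // Face-anchor space -> world space
    let worldPoint = faceAnchor.transform * localPoint
    // World space -> 2D view coordinates (the z component of the result is depth)
    let projected = sceneView.projectPoint(SCNVector3(worldPoint.x, worldPoint.y, worldPoint.z))
    return CGPoint(x: CGFloat(projected.x), y: CGFloat(projected.y))
}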
There's also the opposite method (for unprojecting a point):
func unprojectPoint(_ point: SCNVector3) -> SCNVector3
...and ARKit's opposite method (for unprojecting a point):
@nonobjc func unprojectPoint(_ point: CGPoint,
                             ontoPlane planeTransform: simd_float4x4,
                             orientation: UIInterfaceOrientation,
                             viewportSize: CGSize) -> simd_float3?
I am working on a project where I have to place a green dot that always stays in the center of the screen, even when the camera rotates, in ARKit. I am using ARSCNView and I have added the node; so far everything is good. Now I know I need to modify the position of the node in
func session(_ session: ARSession, didUpdate frame: ARFrame)
but I have no idea how to do that. I saw an example that was close to what I have, but it does not run as it's supposed to.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    let location = sceneView.center
    let hitTest = sceneView.hitTest(location, types: .featurePoint)
    if hitTest.isEmpty {
        print("No Plane Detected")
        return
    } else {
        let columns = hitTest.first?.worldTransform.columns.3
        let position = SCNVector3(x: columns!.x, y: columns!.y, z: columns!.z)
        var node = sceneView.scene.rootNode.childNode(withName: "CenterShip", recursively: false) ?? nil
        if node == nil {
            let scene = SCNScene(named: "art.scnassets/ship.scn")!
            node = scene.rootNode.childNode(withName: "ship", recursively: false)
            node?.opacity = 0.7
            let columns = hitTest.first?.worldTransform.columns.3
            node!.name = "CenterShip"
            node!.position = SCNVector3(x: columns!.x, y: columns!.y, z: columns!.z)
            sceneView.scene.rootNode.addChildNode(node!)
        }
        let position2 = node?.position
        if position == position2! {
            return
        } else {
            // action
            let action = SCNAction.move(to: position, duration: 0.1)
            node?.runAction(action)
        }
    }
}
No matter how I rotate the camera, this dot must stay in the middle.
It's not clear exactly what you're trying to do, but I assume it's one of the following:
A) Place the green dot centered in front of the camera at a fixed distance, e.g. always exactly 1 meter in front of the camera.
B) Place the green dot centered in front of the camera at the depth of the nearest detected plane, i.e. using the results of a raycast from the midpoint of the ARSCNView.
I would have assumed A, but your example code uses the (now deprecated) sceneView.hitTest() function, which in this case would give you the depth of whatever is behind the pixel at sceneView.center.
Anyway, here are both:
Fixed Depth Solution
This is pretty straightforward, though there are a few options. The simplest is to make the green dot a child node of the scene's camera node, and give it a position with a negative z value, since z increases as a position moves toward the camera.
cameraNode.addChildNode(textNode)
textNode.position = SCNVector3(x: 0, y: 0, z: -1)
As the camera moves, so too will its child nodes. More details are in this very thorough answer.
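For completeness, a minimal sketch of that idea, assuming sceneView is your ARSCNView and using its pointOfView as the camera node (dotNode is an illustrative name):
// A green dot pinned 1 m in front of the camera
let dotNode = SCNNode(geometry: SCNSphere(radius: 0.005))
dotNode.geometry?.firstMaterial?.diffuse.contents = UIColor.green
// pointOfView is the node SceneKit renders the camera from
sceneView.pointOfView?.addChildNode(dotNode)
dotNode.position = SCNVector3(0, 0, -1)   // negative z = in front of the camera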
Scene Depth Solution
To determine the estimated depth behind a pixel, you should use ARSession.raycast instead of sceneView.hitTest, because the latter is deprecated.
Note that if the raycast() (or hitTest()) method returns an empty result set (not uncommon, given the complexity of the scene estimation going on in ARKit), you won't have a position to update the node, and thus it might not be centered in every frame. Handling that case is a bit more complex, as you'd need to decide exactly what you want to do when it happens.
The SCNAction is unnecessary and potentially causes problems. These delegate methods run at 60 fps, so simply updating the position directly will produce smooth results.
Adapting and simplifying the code you posted:
func createCenterShipNode() -> SCNNode {
    let scene = SCNScene(named: "art.scnassets/ship.scn")!
    let node = scene.rootNode.childNode(withName: "ship", recursively: false)
    node!.opacity = 0.7
    node!.name = "CenterShip"
    sceneView.scene.rootNode.addChildNode(node!)
    return node!
}

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // Check the docs for what the different raycast query parameters mean, but these
    // give you the depth of anything ARKit has detected
    guard let query = sceneView.raycastQuery(from: sceneView.center,
                                             allowing: .estimatedPlane,
                                             alignment: .any) else {
        return
    }
    let results = session.raycast(query)
    if let hit = results.first {
        let node = sceneView.scene.rootNode.childNode(withName: "CenterShip", recursively: false) ?? createCenterShipNode()
        let pos = hit.worldTransform.columns.3
        node.simdPosition = simd_float3(pos.x, pos.y, pos.z)
    }
}
See also: ARRaycastQuery
One last note: you generally don't want to do scene manipulation within this delegate method. It runs on a different thread than the SceneKit rendering thread, and SceneKit is very thread sensitive. This will likely work fine, but anything beyond adding or moving a node will certainly cause crashes from time to time. Ideally you'd store the new position, and then update the actual scene contents from within the renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) delegate method.
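A hedged sketch of that pattern might look like this (pendingPosition and centerShipNode are illustrative names, and in production you'd still want to protect the shared value against concurrent access):
var pendingPosition: simd_float3?
var centerShipNode: SCNNode?

// Store the latest raycast result from the session delegate...
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    guard let query = sceneView.raycastQuery(from: sceneView.center,
                                             allowing: .estimatedPlane,
                                             alignment: .any),
          let hit = session.raycast(query).first else { return }
    let pos = hit.worldTransform.columns.3
    pendingPosition = simd_float3(pos.x, pos.y, pos.z)
}

// ...and apply it on SceneKit's rendering thread.
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    guard let position = pendingPosition else { return }
    centerShipNode?.simdPosition = position
}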
I am trying to get a real-world human body height from ARBodyAnchor. I understand that I can get the real-world distance between body joints, for example from the hip to the foot joint, as in the code below. But how do I get the distance from the top of the head to the bottom of the foot?
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    if anchor is ARBodyAnchor {
        let footIndex = ARSkeletonDefinition.defaultBody3D.index(for: .leftFoot)
        let footTransform = ARSkeletonDefinition.defaultBody3D.neutralBodySkeleton3D!.jointModelTransforms[footIndex]
        let distanceFromHipOnY = abs(footTransform.columns.3.y)
        print(distanceFromHipOnY)
    }
}
The default height of the ARSkeleton3D, from right_toes_joint (or, if you wish, left_toes_joint) to head_joint, is 1.71 meters. And since head_joint is the uppermost point in Apple's skeletal definition, you can add a typical skull height on top of it, roughly the distance from the eye line to the crown.
In other words, the distance from neck_3_joint to head_joint in the virtual model's skeleton is approximately the same as from head_joint to the crown.
There are 91 joints in ARSkeleton3D:
print(bodyAnchor.skeleton.jointModelTransforms.count) // 91
Code:
extension ViewController: ARSessionDelegate {

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for anchor in anchors {
            guard let bodyAnchor = anchor as? ARBodyAnchor
            else { return }

            let skeleton = bodyAnchor.skeleton

            for (i, joint) in skeleton.definition.jointNames.enumerated() {
                print(i, joint)
                // [10] right_toes_joint
                // [51] head_joint
            }

            let toesJointPos = skeleton.jointModelTransforms[10].columns.3.y
            let headJointPos = skeleton.jointModelTransforms[51].columns.3.y

            print(headJointPos - toesJointPos)    // 1.6570237 m
        }
    }
}
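Following the suggestion above, a hedged sketch of adding a head-to-crown estimate on top of the toes-to-head distance (using the neck_3_joint-to-head_joint distance as my assumed approximation of skull height; verify the joint name against the printed list for your ARKit version):
// Look up neck_3_joint by name in the printed joint list
if let neck3Index = skeleton.definition.jointNames.firstIndex(of: "neck_3_joint") {
    let neck3JointPos = skeleton.jointModelTransforms[neck3Index].columns.3.y
    // Approximate the crown as sitting one neck_3-to-head distance above head_joint
    let crownOffset = headJointPos - neck3JointPos
    print((headJointPos - toesJointPos) + crownOffset)   // approximate full model height
}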
However, there is a compensating factor:
bodyAnchor.estimatedScaleFactor
ARKit must know the height of a person in the camera feed to estimate an accurate world position for the person's body anchor. ARKit uses the value of estimatedScaleFactor to correct the body anchor's position in the physical environment.
The default real-world body height that ARKit assumes is 1.8 meters (note the mismatch with the 1.71 m skeleton).
The default value of estimatedScaleFactor is 1.0.
If you set:
let config = ARBodyTrackingConfiguration()
config.automaticSkeletonScaleEstimationEnabled = true
arView.session.run(config, options: [])
ARKit sets this property to a value between 0.0 and 1.0.
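As a hedged sketch (the scaling step below is my assumption about how to apply the factor, not an Apple-documented formula), you could then approximate the real-world height like this:
// Model-space height between the two joints (from the snippet above)
let modelHeight = headJointPos - toesJointPos
// Assumed correction: scale the model-space height by the estimated factor
let approxRealHeight = modelHeight * Float(bodyAnchor.estimatedScaleFactor)
print(approxRealHeight)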
I have a list of images coming from a server and stored in the gallery. I want to pick any image and place it on a real wall using ARKit, and I want to turn the images into 3D objects to perform operations like zooming, moving the image, etc.
Can anybody please guide me on how to create a custom object in AR?
To detect vertical surfaces (e.g. walls) in ARKit you first need to set up an ARWorldTrackingConfiguration and then enable planeDetection within your app.
So under your class declaration you would create the following variables:
@IBOutlet var augmentedRealityView: ARSCNView!
let augmentedRealitySession = ARSession()
let configuration = ARWorldTrackingConfiguration()
Then initialise your ARSession, for example in viewDidLoad:
override func viewDidLoad() {
    super.viewDidLoad()

    //1. Set Up Our ARSession
    augmentedRealityView.session = augmentedRealitySession

    //2. Assign The ARSCNViewDelegate
    augmentedRealityView.delegate = self

    //3. Set Up Plane Detection
    configuration.planeDetection = .vertical

    //4. Run Our Configuration
    augmentedRealitySession.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
Now that you are all set to detect vertical planes, you need to hook into the following ARSCNViewDelegate method:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) { }
Which simply:
Tells the delegate that a SceneKit node corresponding to a new AR anchor has been added to the scene.
In this method we are going to look explicitly for any ARPlaneAnchor that has been detected, which provides us with:
Information about the position and orientation of a real-world flat surface detected in a world-tracking AR session.
As such, placing an SCNPlane onto a detected vertical plane is as simple as this:
//-------------------------
//MARK: - ARSCNViewDelegate
//-------------------------

extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {

        //1. Check We Have Detected An ARPlaneAnchor
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }

        //2. Get The Size Of The ARPlaneAnchor
        let width = CGFloat(planeAnchor.extent.x)
        let height = CGFloat(planeAnchor.extent.z)

        //3. Create An SCNPlane Which Matches The Size Of The ARPlaneAnchor
        let imageHolder = SCNNode(geometry: SCNPlane(width: width, height: height))

        //4. Rotate It So It Lies Flat Against The Anchor
        imageHolder.eulerAngles.x = -.pi / 2

        //5. Set Its Colour To Red
        imageHolder.geometry?.firstMaterial?.diffuse.contents = UIColor.red

        //6. Add It To Our Node & Thus The Hierarchy
        node.addChildNode(imageHolder)
    }
}
Applying This To Your Case:
In your case we need to do some additional work, since you want to allow the user to apply an image to the vertical plane.
As such, your best bet is to keep a reference to the node you have just added, e.g.:
class ViewController: UIViewController {

    @IBOutlet var augmentedRealityView: ARSCNView!
    let augmentedRealitySession = ARSession()
    let configuration = ARWorldTrackingConfiguration()
    var nodeWeCanChange: SCNNode?
}
As such, your delegate callback might look like this:
//-------------------------
//MARK: - ARSCNViewDelegate
//-------------------------

extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {

        //1. If We Haven't Created Our Interactive Node Then Proceed
        if nodeWeCanChange == nil {

            //a. Check We Have Detected An ARPlaneAnchor
            guard let planeAnchor = anchor as? ARPlaneAnchor else { return }

            //b. Get The Size Of The ARPlaneAnchor
            let width = CGFloat(planeAnchor.extent.x)
            let height = CGFloat(planeAnchor.extent.z)

            //c. Create An SCNPlane Which Matches The Size Of The ARPlaneAnchor
            nodeWeCanChange = SCNNode(geometry: SCNPlane(width: width, height: height))

            //d. Rotate It So It Lies Flat Against The Anchor
            nodeWeCanChange?.eulerAngles.x = -.pi / 2

            //e. Set Its Colour To Red
            nodeWeCanChange?.geometry?.firstMaterial?.diffuse.contents = UIColor.red

            //f. Add It To Our Node & Thus The Hierarchy
            node.addChildNode(nodeWeCanChange!)
        }
    }
}
Now that you have a reference to nodeWeCanChange, setting its image at any time is simple!
Each SCNGeometry has a set of materials, which are:
A set of shading attributes that define the appearance of a geometry's surface when rendered.
In our case we are looking for the material's diffuse property, which is:
An object that manages the material's diffuse response to lighting.
And then its contents property, which is:
The visual contents of the material property: a color, image, or source of animated content.
Obviously you need to handle the full logistics of this yourself; however, a very basic example might look like this, assuming you stored your images in an array of UIImage, e.g.:
let imageGallery = [UIImage(named: "StackOverflow"), UIImage(named: "GitHub")]
I have created an IBAction which changes the image of our SCNNode's geometry based on the tag of the UIButton pressed, e.g.:
/// Changes The Material Of Our SCNNode's Geometry To The Image Selected By The User
///
/// - Parameter sender: UIButton
@IBAction func changeNodesImage(_ sender: UIButton) {
    guard let imageToApply = imageGallery[sender.tag], let node = nodeWeCanChange else { return }
    node.geometry?.firstMaterial?.diffuse.contents = imageToApply
}
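For the zooming part of your question, a hedged sketch using a UIPinchGestureRecognizer might look like this (adding the recogniser to the ARSCNView in viewDidLoad is assumed):
/// Scales Our SCNNode Incrementally As The User Pinches
///
/// - Parameter gesture: UIPinchGestureRecognizer
@objc func scaleNode(_ gesture: UIPinchGestureRecognizer) {
    guard let node = nodeWeCanChange, gesture.state == .changed else { return }
    let pinch = Float(gesture.scale)
    node.scale = SCNVector3(node.scale.x * pinch,
                            node.scale.y * pinch,
                            node.scale.z * pinch)
    gesture.scale = 1    // Reset So Scaling Stays Incremental
}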
This is more than enough to point you in the right direction... Hope it helps...
I need to convert a point in the 2D coordinate space of my ARSCNView to a coordinate in 3D space: basically a ray from the point of view to the touched location (up to a set distance away).
I wanted to use arView.unprojectPoint(vec2d) for that, but the point returned always seems to be located in the center of the view.
vec2d is an SCNVector3 created from a 2D coordinate like this:
SCNVector3(x, y, 0) // 0 specifies camera near plane
What am I doing wrong? How do I get the desired result?
I think you have at least 2 possible solutions:
First
Use the hitTest(_:types:) instance method (now deprecated in newer ARKit versions in favour of raycasting):
This method searches for real-world objects or AR anchors in the captured camera image corresponding to a point in the SceneKit view.
let sceneView = ARSCNView()

func calculateVector(point: CGPoint) -> SCNVector3? {
    let hitTestResults = sceneView.hitTest(point, types: [.existingPlane])
    if let result = hitTestResults.first {
        return SCNVector3(SIMD3(result.worldTransform.columns.3.x,
                                result.worldTransform.columns.3.y,
                                result.worldTransform.columns.3.z))
    }
    return nil
}

calculateVector(point: yourPoint)
Second
Use the unprojectPoint(_:ontoPlane:) instance method:
This method returns the projection of a point from 2D view onto a plane in the 3D world space detected by ARKit.
@nonobjc func unprojectPoint(_ point: CGPoint,
                             ontoPlane planeTransform: simd_float4x4) -> simd_float3?
or:
let point = CGPoint()
var planeTransform = simd_float4x4()
sceneView.unprojectPoint(point,
ontoPlane: planeTransform)
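Note that an empty simd_float4x4() isn't a meaningful plane; in practice you'd pass the transform of a detected plane. A hedged usage sketch, assuming you already have a planeAnchor: ARPlaneAnchor and a touch location touchPoint: CGPoint (both names are illustrative):
if let worldPosition = sceneView.unprojectPoint(touchPoint,
                                                ontoPlane: planeAnchor.transform) {
    // Place a small marker where the touch ray meets the detected plane
    let marker = SCNNode(geometry: SCNSphere(radius: 0.01))
    marker.simdPosition = worldPosition
    sceneView.scene.rootNode.addChildNode(marker)
}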
Add an empty node in front of the camera at an 'x' cm offset and make it a child of the camera.
// Add a node in front of the camera, just after creating the scene
hitNode = SCNNode()
hitNode!.position = SCNVector3Make(0, 0, -0.25)   // 25 cm offset
sceneView.pointOfView?.addChildNode(hitNode!)

func unprojectedPosition(touch: CGPoint) -> SCNVector3 {
    guard let hitNode = self.hitNode else {
        return SCNVector3Zero
    }
    let projectedOrigin = sceneView.projectPoint(hitNode.worldPosition)
    let offset = sceneView.unprojectPoint(SCNVector3Make(Float(touch.x), Float(touch.y), projectedOrigin.z))
    return offset
}
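A minimal usage sketch, assumed to live in the same view controller:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first else { return }
    let location = touch.location(in: sceneView)
    // World-space point 25 cm in front of the camera, under the finger
    let worldPosition = unprojectedPosition(touch: location)
    print(worldPosition)
}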
See the Justaline GitHub implementation of the code here
In my program, I am trying to make myself collidable with other objects so that I can test collisions. Is there a way in ARKit to add a physics body that moves with me? I can make one that sits at my position at the beginning of the AR session, but I'm sure there is a way to attach it to my actual self.
Good day!
There is a solution to your problem.
First of all you need to create a collider. For example:
func addCollider(position: SCNVector3, name: String) {
    let collider = SCNNode(geometry: SCNSphere(radius: 0.5))
    collider.position = position
    collider.geometry?.firstMaterial?.diffuse.contents = UIColor.clear
    // A physics body must be attached before the bit masks have any effect
    collider.physicsBody = SCNPhysicsBody(type: .kinematic, shape: nil)
    collider.physicsBody?.categoryBitMask = 0    // example
    collider.physicsBody?.contactTestBitMask = 1 // example
    collider.name = name
    sceneView.scene.rootNode.addChildNode(collider)
}
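For example, you might create the collider once after the scene is set up (the name string is illustrative; reuse the same string in the render loop below):
// e.g. at the end of viewDidLoad
addCollider(position: SCNVector3(0, 0, 0), name: "PlayerCollider")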
Don't forget to add proper bit masks, to collide with other objects.
Next you need to conform to the ARSCNViewDelegate protocol and implement the renderer(_:willRenderScene:atTime:) method. In it, take your point of view and read its transform matrix: extract the third column to get the current orientation (the direction the camera is facing) and the fourth column to get the current location. Finally, combine the location and orientation to get your current position.
To add two SCNVector3 values you need to write a global "+" function. Example:
func + (left: SCNVector3, right: SCNVector3) -> SCNVector3 {
    return SCNVector3Make(left.x + right.x, left.y + right.y, left.z + right.z)
}
Be sure to write this function outside of any class or struct.
And the final touch: a function that moves the collider according to your current position.
func renderer(_ renderer: SCNSceneRenderer, willRenderScene scene: SCNScene, atTime time: TimeInterval) {
    guard let pointOfView = self.sceneView.pointOfView else { return }
    let transform = pointOfView.transform
    let orientation = SCNVector3(-transform.m31, -transform.m32, -transform.m33)
    let location = SCNVector3(transform.m41, transform.m42, transform.m43)
    let currentPosition = orientation + location
    for node in sceneView.scene.rootNode.childNodes {
        if node.name == "your collider name" {
            node.position = currentPosition
        }
    }
}
P.S. Don't forget to add
self.sceneView.delegate = self
to your viewDidLoad.
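And if you want to actually observe the collisions, a hedged sketch of the contact delegate might look like this (remember to also set sceneView.scene.physicsWorld.contactDelegate = self, e.g. in viewDidLoad):
extension ViewController: SCNPhysicsContactDelegate {
    func physicsWorld(_ world: SCNPhysicsWorld, didBegin contact: SCNPhysicsContact) {
        // Fired when two bodies whose category/contact bit masks overlap start touching
        print("Contact between \(contact.nodeA.name ?? "?") and \(contact.nodeB.name ?? "?")")
    }
}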
Best of luck. I hope it helped!