I am trying to get real-world human body height from ARBodyAnchor. I understand that I can get the real-world distance between body joints, for example from hip to foot joint, as in the code below. But how do I get the distance from the top of the head to the bottom of the foot?
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
if anchor is ARBodyAnchor {
let footIndex = ARSkeletonDefinition.defaultBody3D.index(for: .leftFoot)
let footTransform = ARSkeletonDefinition.defaultBody3D.neutralBodySkeleton3D!.jointModelTransforms[footIndex]
let distanceFromHipOnY = abs(footTransform.columns.3.y)
print(distanceFromHipOnY)
}
}
The default height of ARSkeleton3D from right_toes_joint (or, if you prefer, left_toes_joint) to head_joint is 1.71 meters. And since head_joint is the uppermost point in Apple's skeletal definition, you can add a typical skull height on top – from the eye line to the crown.
In other words, the distance from neck_3_joint to head_joint in the virtual model's skeleton is approximately the same as the distance from head_joint to the crown.
There are 91 joints in ARSkeleton3D:
print(bodyAnchor.skeleton.jointModelTransforms.count) // 91
Code:
extension ViewController: ARSessionDelegate {
func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
for anchor in anchors {
guard let bodyAnchor = anchor as? ARBodyAnchor
else { return }
let skeleton = bodyAnchor.skeleton
for (i, joint) in skeleton.definition.jointNames.enumerated() {
print(i, joint)
// [10] right_toes_joint
// [51] head_joint
}
let toesJointPos = skeleton.jointModelTransforms[10].columns.3.y
let headJointPos = skeleton.jointModelTransforms[51].columns.3.y
print(headJointPos - toesJointPos) // 1.6570237 m
}
}
}
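Putting the skull-height approximation above into code, here is a minimal sketch. It assumes the joint names printed above (right_toes_joint, head_joint, neck_3_joint) and that the neck_3_joint-to-head_joint distance is a reasonable stand-in for the missing head_joint-to-crown distance:
func estimatedSkeletonHeight(for bodyAnchor: ARBodyAnchor) -> Float? {
    let skeleton = bodyAnchor.skeleton
    let names = skeleton.definition.jointNames

    // Look the indices up by name so we don't depend on hard-coded positions
    guard let toesIndex = names.firstIndex(of: "right_toes_joint"),
          let headIndex = names.firstIndex(of: "head_joint"),
          let neckIndex = names.firstIndex(of: "neck_3_joint") else { return nil }

    let transforms = skeleton.jointModelTransforms
    let toesY = transforms[toesIndex].columns.3.y
    let headY = transforms[headIndex].columns.3.y
    let neckY = transforms[neckIndex].columns.3.y

    let toesToHead = headY - toesY      // ~1.66 m on the default skeleton
    let headToCrown = headY - neckY     // stand-in for the top half of the skull
    return toesToHead + headToCrown
}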
However, there is a compensating value:
bodyAnchor.estimatedScaleFactor
ARKit must know the height of a person in the camera feed to estimate an accurate world position for the person's body anchor. ARKit uses the value of estimatedScaleFactor to correct the body anchor's position in the physical environment.
The default real-world body is assumed to be 1.8 meters tall (note the mismatch with the ~1.66–1.71 m measured on the default skeleton).
The default value of estimatedScaleFactor is 1.0.
If you set:
let config = ARBodyTrackingConfiguration()
config.automaticSkeletonScaleEstimationEnabled = true
arView.session.run(config, options: [])
ARKit sets this property to a value between 0.0 and 1.0.
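So, to turn the default-skeleton measurement into a real-world estimate, one option is to scale it by estimatedScaleFactor. This is only a sketch under an assumption: that the joint model transforms stay at the default skeleton scale and that the factor maps that default onto the observed person.
func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
    for anchor in anchors {
        guard let bodyAnchor = anchor as? ARBodyAnchor else { continue }

        // Height measured on the default-sized skeleton (see the sketch above)
        guard let defaultHeight = estimatedSkeletonHeight(for: bodyAnchor) else { continue }

        // Assumption: estimatedScaleFactor rescales the default skeleton to the
        // tracked person, so multiplying gives an approximate real height in metres
        let realHeight = defaultHeight * Float(bodyAnchor.estimatedScaleFactor)
        print("Estimated person height: \(realHeight) m")
    }
}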
I am working on a project where I have to place a green dot that always stays in the center, even when the camera is rotated, in ARKit. I am using ARSCNView and I have added the node; so far everything is good. Now I know I need to modify the position of the node in
func session(_ session: ARSession, didUpdate frame: ARFrame)
But I have no idea how to do that. I saw an example that was close to what I have, but it does not run as it's supposed to.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
let location = sceneView.center
let hitTest = sceneView.hitTest(location, types: .featurePoint)
if hitTest.isEmpty {
print("No Plane Detected")
return
} else {
let columns = hitTest.first?.worldTransform.columns.3
let position = SCNVector3(x: columns!.x, y: columns!.y, z: columns!.z)
var node = sceneView.scene.rootNode.childNode(withName: "CenterShip", recursively: false) ?? nil
if node == nil {
let scene = SCNScene(named: "art.scnassets/ship.scn")!
node = scene.rootNode.childNode(withName: "ship", recursively: false)
node?.opacity = 0.7
let columns = hitTest.first?.worldTransform.columns.3
node!.name = "CenterShip"
node!.position = SCNVector3(x: columns!.x, y: columns!.y, z: columns!.z)
sceneView.scene.rootNode.addChildNode(node!)
}
let position2 = node?.position
if position == position2! {
return
} else {
//action
let action = SCNAction.move(to: position, duration: 0.1)
node?.runAction(action)
}
}
}
It doesn't matter how I rotate the camera; this dot must stay in the middle.
It's not clear exactly what you're trying to do, but I assume it's one of the following:
A) Place the green dot centered in front of the camera at a fixed distance, eg. always exactly 1 meter in front of the camera.
B) Place the green dot centered in front of the camera at the depth of the nearest detected plane, i.e. using the results of a raycast from the midpoint of the ARSCNView
I would have assumed A, but your example code is using the (now deprecated) sceneView.hitTest() function, which in this case would give you the depth of whatever is behind the pixel at sceneView.center.
Anyway, here are both:
Fixed Depth Solution
This is pretty straightforward, though there are a few options. The simplest is to make the green dot a child node of the scene's camera node and give it a position with a negative z value, since the camera looks down its negative z axis.
cameraNode.addChildNode(textNode)
textNode.position = SCNVector3(x: 0, y: 0, z: -1)
As the camera moves, so too will its child nodes. More details in this very thorough answer
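For the green dot specifically, a minimal sketch of that idea (the sphere radius and the 1 m distance are just illustrative; pointOfView is the camera node ARSCNView manages for you):
func addCenterDot(to sceneView: ARSCNView) {
    guard let cameraNode = sceneView.pointOfView else { return }

    // A small green sphere 1 m in front of the camera. Because it is a child of
    // the camera node, it stays centred however the camera moves or rotates.
    let dot = SCNNode(geometry: SCNSphere(radius: 0.005))
    dot.geometry?.firstMaterial?.diffuse.contents = UIColor.green
    dot.position = SCNVector3(0, 0, -1)   // negative z = in front of the camera

    cameraNode.addChildNode(dot)
}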
Scene Depth Solution
To determine the estimated depth behind a pixel, you should use ARSession.raycast instead of sceneView.hitTest, because the latter is deprecated.
Note that if the raycast() (or still hitTest()) methods return an empty result set (not uncommon, given the complexity of the scene estimation going on in ARKit), you won't have a position to update the node with, and thus it might not be perfectly centered in every frame. Handling this is a bit more complex, as you'd need to decide exactly what you want to do in that case.
The SCNAction is unnecessary and potentially causing problems. These delegate methods run at 60 fps, so simply updating the position directly will produce smooth results.
Adapting and simplifying the code you posted:
func createCenterShipNode() -> SCNNode {
let scene = SCNScene(named: "art.scnassets/ship.scn")!
let node = scene.rootNode.childNode(withName: "ship", recursively: false)
node!.opacity = 0.7
node!.name = "CenterShip"
sceneView.scene.rootNode.addChildNode(node!)
return node!
}
func session(_ session: ARSession, didUpdate frame: ARFrame) {
// Check the docs for what the different raycast query parameters mean, but these
// give you the depth of anything ARKit has detected
guard let query = sceneView.raycastQuery(from: sceneView.center, allowing: .estimatedPlane, alignment: .any) else {
return
}
let results = session.raycast(query)
if let hit = results.first {
let node = sceneView.scene.rootNode.childNode(withName: "CenterShip", recursively: false) ?? createCenterShipNode()
let pos = hit.worldTransform.columns.3
node.simdPosition = simd_float3(pos.x, pos.y, pos.z)
}
}
See also: ARRaycastQuery
One last note - you generally don't want to do scene manipulation within this delegate method. It runs on a different thread than the SceneKit rendering thread, and SceneKit is very thread sensitive. This will likely work fine, but anything beyond adding or moving a node will certainly cause crashes from time to time. You'd ideally want to store the new position, and then update the actual scene contents from within the renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) delegate method.
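A rough sketch of that pattern, assuming a stored property (here called pendingShipPosition, a hypothetical name) to hand the result from the session delegate over to the render loop:
// Assumed stored property on the view controller:
// var pendingShipPosition: simd_float3?

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    guard let query = sceneView.raycastQuery(from: sceneView.center,
                                             allowing: .estimatedPlane,
                                             alignment: .any),
          let hit = session.raycast(query).first else { return }

    // Only record the result here; don't touch the scene graph on this thread
    let pos = hit.worldTransform.columns.3
    pendingShipPosition = simd_float3(pos.x, pos.y, pos.z)
}

func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    // Apply the stored position from SceneKit's render loop
    guard let position = pendingShipPosition else { return }
    let node = sceneView.scene.rootNode.childNode(withName: "CenterShip", recursively: false)
        ?? createCenterShipNode()
    node.simdPosition = position
}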
The ARFaceTrackingConfiguration of ARKit places ARFaceAnchor with information about the position and orientation of the face onto the scene. Among others, this anchor has the lookAtPoint property that I'm interested in. I know that this vector is relative to the face. How can I draw a point on the screen for this position, meaning how can I translate this point's coordinates?
The .lookAtPoint instance property is for estimating gaze direction only.
Apple's documentation says that .lookAtPoint is a position in face coordinate space estimating only the direction of the face's gaze. It's a vector of three scalar values, and it's gettable only, not settable:
var lookAtPoint: SIMD3<Float> { get }
In other words, this vector is derived from two other quantities – the .rightEyeTransform and .leftEyeTransform instance properties (which are also gettable only):
var rightEyeTransform: simd_float4x4 { get }
var leftEyeTransform: simd_float4x4 { get }
Here's a hypothetical situation showing how you could use this instance property:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
if let faceAnchor = anchor as? ARFaceAnchor,
let faceGeometry = node.geometry as? ARSCNFaceGeometry {
if (faceAnchor.lookAtPoint.x >= 0) { // Looking (+X)
faceGeometry.firstMaterial?.diffuse.contents = UIImage(named: "redTexture.png")
} else { // Looking (-X)
faceGeometry.firstMaterial?.diffuse.contents = UIImage(named: "cyanTexture.png")
}
faceGeometry.update(from: faceAnchor.geometry)
facialExrpession(anchor: faceAnchor)
DispatchQueue.main.async {
self.label.text = self.textBoard
}
}
}
And here's an image showing axis directions for ARFaceTrackingConfiguration():
Answering your question:
You can't manage this point's coordinates directly, because it's a gettable-only property (and it carries only an XYZ direction, not an XYZ translation).
So if you need both – translation and rotation – use the .rightEyeTransform and .leftEyeTransform instance properties instead.
There are two projection methods:
FIRST. In SceneKit/ARKit, use the following instance method to project a point onto the 2D view (on a sceneView instance):
func projectPoint(_ point: SCNVector3) -> SCNVector3
or:
let sceneView = ARSCNView()
sceneView.projectPoint(myPoint)
SECOND. In ARKit, use the following instance method to project a point onto the 2D view (on an ARCamera instance):
func projectPoint(_ point: simd_float3,
orientation: UIInterfaceOrientation,
viewportSize: CGSize) -> CGPoint
or:
let camera = sceneView.session.currentFrame!.camera
camera.projectPoint(myPoint, orientation: myOrientation, viewportSize: vpSize)
This method helps you project a point from the 3D world coordinate system of the scene to the 2D pixel coordinate system of the renderer.
There's also the opposite method (for unprojecting a point):
func unprojectPoint(_ point: SCNVector3) -> SCNVector3
...and ARKit's opposite method (for unprojecting a point):
@nonobjc func unprojectPoint(_ point: CGPoint,
ontoPlane planeTransform: simd_float4x4,
orientation: UIInterfaceOrientation,
viewportSize: CGSize) -> simd_float3?
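To actually get a screen point for the gaze, here is a hedged sketch: it assumes you first move lookAtPoint from face space into world space using the face anchor's transform, and that sceneView.bounds.size is an acceptable viewport size.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    guard let faceAnchor = frame.anchors.compactMap({ $0 as? ARFaceAnchor }).first else { return }

    // 1. lookAtPoint is expressed in the face's local coordinate space,
    //    so transform it into world space first
    let local = faceAnchor.lookAtPoint
    let world = faceAnchor.transform * simd_float4(local.x, local.y, local.z, 1)

    // 2. Project the world-space point into 2D view coordinates
    let screenPoint = frame.camera.projectPoint(simd_float3(world.x, world.y, world.z),
                                                orientation: .portrait,
                                                viewportSize: sceneView.bounds.size)
    print("Gaze point on screen: \(screenPoint)")
}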
I have a list of images coming from a server and stored in the gallery. I want to pick any image and place it on a live wall using ARKit, and I want to convert the images into 3D images to perform operations like zooming, moving the image, etc.
Can anybody please guide me on how I can create a custom object in AR?
To detect vertical surfaces (e.g. walls) in ARKit you first need to set up an ARWorldTrackingConfiguration and then enable planeDetection within your app.
So under your Class Declaration you would create the following variables:
@IBOutlet var augmentedRealityView: ARSCNView!
let augmentedRealitySession = ARSession()
let configuration = ARWorldTrackingConfiguration()
And then initialise your ARSession, in viewDidLoad for example:
override func viewDidLoad() {
super.viewDidLoad()
//1. Set Up Our ARSession
augmentedRealityView.session = augmentedRealitySession
//2. Assign The ARSCNViewDelegate
augmentedRealityView.delegate = self
//3. Set Up Plane Detection
configuration.planeDetection = .vertical
//4. Run Our Configuration
augmentedRealitySession.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
Now that you are all set to detect vertical planes, you need to hook into the following ARSCNViewDelegate method:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) { }
Which simply:
Tells the delegate that a SceneKit node corresponding to a new AR
anchor has been added to the scene.
In this method we are going to explicitly look for any ARPlaneAnchors that have been detected, which provide us with:
Information about the position and orientation of a real-world flat
surface detected in a world-tracking AR session.
As such, placing an SCNPlane onto a detected vertical plane is as simple as this:
//-------------------------
//MARK: - ARSCNViewDelegate
//-------------------------
extension ViewController: ARSCNViewDelegate{
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
//1. Check We Have Detected An ARPlaneAnchor
guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
//2. Get The Size Of The ARPlaneAnchor
let width = CGFloat(planeAnchor.extent.x)
let height = CGFloat(planeAnchor.extent.z)
//3. Create An SCNPlane Which Matches The Size Of The ARPlaneAnchor
let imageHolder = SCNNode(geometry: SCNPlane(width: width, height: height))
//4. Rotate It
imageHolder.eulerAngles.x = -.pi/2
//5. Set Its Colour To Red
imageHolder.geometry?.firstMaterial?.diffuse.contents = UIColor.red
//6. Add It To Our Node & Thus The Hierarchy
node.addChildNode(imageHolder)
}
}
Applying This To Your Case:
In your case we need to do some additional work, as you want to allow the user to apply an image to the vertical plane.
As such, your best bet is to store the node you have just added in a variable, e.g.:
class ViewController: UIViewController {
@IBOutlet var augmentedRealityView: ARSCNView!
let augmentedRealitySession = ARSession()
let configuration = ARWorldTrackingConfiguration()
var nodeWeCanChange: SCNNode?
}
As such, your delegate callback might look like so:
//-------------------------
//MARK: - ARSCNViewDelegate
//-------------------------
extension ViewController: ARSCNViewDelegate{
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
//1. If We Haven't Created Our Interactive Node Then Proceed
if nodeWeCanChange == nil{
//a. Check We Have Detected An ARPlaneAnchor
guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
//b. Get The Size Of The ARPlaneAnchor
let width = CGFloat(planeAnchor.extent.x)
let height = CGFloat(planeAnchor.extent.z)
//c. Create An SCNPlane Which Matches The Size Of The ARPlaneAnchor
nodeWeCanChange = SCNNode(geometry: SCNPlane(width: width, height: height))
//d. Rotate It
nodeWeCanChange?.eulerAngles.x = -.pi/2
//e. Set Its Colour To Red
nodeWeCanChange?.geometry?.firstMaterial?.diffuse.contents = UIColor.red
//f. Add It To Our Node & Thus The Hierarchy
node.addChildNode(nodeWeCanChange!)
}
}
}
Now that you have a reference to nodeWeCanChange, setting its image at any time is simple!
Each SCNGeometry has a set of Materials which are a:
A set of shading attributes that define the appearance of a geometry's
surface when rendered.
In our case we are looking for the material's diffuse property, which is:
An object that manages the material’s diffuse response to lighting.
And then the contents property, which is:
The visual contents of the material property—a color, image, or source
of animated content.
Obviously you need to handle the full logistics of this; however, a very basic example might look like so, assuming you stored your images in an array of UIImage, e.g.:
let imageGallery = [UIImage(named: "StackOverflow"), UIImage(named: "GitHub")]
I have created an IBAction which will change the image of our SCNNode's geometry based on the tag of the UIButton pressed, e.g.:
/// Changes The Material Of Our SCNNode's Geometry To The Image Selected By The User
///
/// - Parameter sender: UIButton
@IBAction func changeNodesImage(_ sender: UIButton){
guard let imageToApply = imageGallery[sender.tag], let node = nodeWeCanChange else { return }
node.geometry?.firstMaterial?.diffuse.contents = imageToApply
}
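For the zooming part of your question, here is a hedged sketch using a standard UIPinchGestureRecognizer to scale nodeWeCanChange (the gesture wiring and the idea of resetting gesture.scale each callback are illustrative, not the only way):
// Call once, e.g. from viewDidLoad, to let the user pinch-zoom the placed image
func addPinchGesture() {
    let pinch = UIPinchGestureRecognizer(target: self, action: #selector(handlePinch(_:)))
    augmentedRealityView.addGestureRecognizer(pinch)
}

@objc func handlePinch(_ gesture: UIPinchGestureRecognizer) {
    guard let node = nodeWeCanChange, gesture.state == .changed else { return }

    // Multiply the node's current scale by the incremental pinch scale
    let pinchScale = Float(gesture.scale)
    node.scale = SCNVector3(node.scale.x * pinchScale,
                            node.scale.y * pinchScale,
                            node.scale.z * pinchScale)
    gesture.scale = 1   // reset so the next callback is incremental
}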
This is more than enough to point you in the right direction... Hope it helps...
I am using ARKit (with SceneKit) and am trying to find a way to get the intersection between an ARReferenceImage and a horizontal detected plane, so I can display a 3D character on the surface directly in front of the detected image, e.g. spawn inside the red circle (see image below).
At the moment I am able to get the character to spawn in front of the detected image; however, the character is floating in the air instead of standing on the surface.
let realWorldPositon = SCNVector3Make(anchor.transform.columns.3.x, anchor.transform.columns.3.y, anchor.transform.columns.3.z)
let hitTest = self.sceneView.scene.rootNode.hitTestWithSegment(from: self.sceneView.scene.rootNode.worldPosition, to: realWorldPositon, options: nil)
overlayNode.position = SCNVector3Make((hitTest.first?.worldCoordinates.x)!, 0, (hitTest.first?.worldCoordinates.z)!)
self.sceneView.scene.rootNode.addChildNode(overlayNode)
Any help on this would be greatly appreciated, thanks!
Example project
I think you were on the right lines using the hitTestWithSegment function to detect an intersection between the ARImageAnchor and the ARPlaneAnchor.
Rather than trying to explain each step of my attempt at an answer, I have provided code which is fully commented, so it should be fairly self-explanatory.
My example works fairly well (although it's certainly not perfect) and will definitely need some tweaking.
For example, you will need to look at determining more accurately the distance from the ARReferenceImage to the ARPlaneAnchor etc.
I can get the model (a Pokemon) to place at the correct level and fairly close to the front of the ARReferenceImage, although it will need tweaking.
Having said this, I think this will be a fairly good base for you to start refining the code and getting more accurate results.
Of note, however, is that I have only enabled one ARPlaneAnchor to be detected (just for simplicity's sake) and have assumed that you will be detecting a plane in front of your image marker.
I haven't taken rotation or anything like that into account. And of course, based on your proposed scenario, it also assumes your image would be on a desk or some other flat surface.
Anyway, here is my answer (hopefully it should be fairly self-explanatory):
import UIKit
import ARKit
//-----------------------
//MARK: ARSCNViewDelegate
//-----------------------
extension ViewController: ARSCNViewDelegate{
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
//1. If We Have Detected Our ImageTarget Then Create A Plane To Visualize It
if let currentImageAnchor = anchor as? ARImageAnchor {
createReferenceImagePlaneForNode(currentImageAnchor, node: node)
allowTracking = true
}
//2. If We Have Detected A Horizontal Plane Then Create One
if let currentPlaneAnchor = anchor as? ARPlaneAnchor{
if planeNode == nil && !createdModel{ createReferencePlaneForNode(currentPlaneAnchor, node: node) }
}
}
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
//1. Check To See Whether An ARPlaneAnchor Has Been Updated
guard let anchor = anchor as? ARPlaneAnchor,
//2. Check It Is Our PlaneNode
let existingPlane = planeNode,
//3. Get The Geometry Of The PlaneNode
let planeGeometry = existingPlane.geometry as? SCNPlane else { return }
//4. Adjust It's Size & Positions
planeGeometry.width = CGFloat(anchor.extent.x)
planeGeometry.height = CGFloat(anchor.extent.z)
planeNode?.position = SCNVector3Make(anchor.center.x, 0.01, anchor.center.z)
}
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
//1. Detect The Intersection Of The ARPlaneAnchor & ARImageAnchor
if allowTracking { detectIntersetionOfImageTarget() }
}
}
//---------------------------------------
//MARK: Model Generation & Identification
//---------------------------------------
extension ViewController {
/// Detects If We Have Intersected A Valid Image Target
func detectIntersetionOfImageTarget(){
//If We Haven't Created Our Model Then Check To See If We Have Detected An Existing Plane
if !createdModel{
//a. Perform A HitTest On The Center Of The Screen For Any Existing Planes
guard let planeHitTest = self.augmentedRealityView.hitTest(screenCenter, types: .existingPlaneUsingExtent).first,
let planeAnchor = planeHitTest.anchor as? ARPlaneAnchor else { return }
//b. Get The Transform Of The ARPlane Anchor
let x = planeAnchor.transform.columns.3.x
let y = planeAnchor.transform.columns.3.y
let z = planeAnchor.transform.columns.3.z
//b. Create The Anchors Vector
let anchorVector = SCNVector3(x,y, z)
//Perform Another HitTest From The ImageAnchor Vector To The Anchors Vector
if let _ = self.augmentedRealityView.scene.rootNode.hitTestWithSegment(from: imageAnchorVector, to: anchorVector, options: nil).first?.node {
//a. If We Haven't Created The Model Then Place It As Soon As An Intersection Occurs
if createdModel == false{
//b. Load The Model
loadModelAtVector(SCNVector3(imageAnchorVector.x, y, imageAnchorVector.z))
createdModel = true
planeNode?.removeFromParentNode()
}
}
}
}
}
class ViewController: UIViewController {
//1. Reference To Our ImageTarget Bundle
let AR_BUNDLE = "AR Resources"
//2. Vector To Store The Position Of Our Detected Image
var imageAnchorVector: SCNVector3!
//3. Variables To Allow Tracking & To Determine Whether Our Model Has Been Placed
var allowTracking = false
var createdModel = false
//4. Create A Reference To Our ARSCNView In Our Storyboard Which Displays The Camera Feed
@IBOutlet weak var augmentedRealityView: ARSCNView!
//5. Create Our ARWorld Tracking Configuration
let configuration = ARWorldTrackingConfiguration()
//6. Create Our Session
let augmentedRealitySession = ARSession()
//7. ARReference Images
lazy var staticReferenceImages: Set<ARReferenceImage> = {
let images = ARReferenceImage.referenceImages(inGroupNamed: AR_BUNDLE, bundle: nil)
return images!
}()
//8. Screen Center Reference
var screenCenter: CGPoint!
//9. PlaneNode
var planeNode: SCNNode?
//--------------------
//MARK: View LifeCycle
//--------------------
override func viewDidLoad() {
super.viewDidLoad()
//1. Get Reference To The Center Of The Screen For RayCasting
DispatchQueue.main.async { self.screenCenter = CGPoint(x: self.view.bounds.width/2, y: self.view.bounds.height/2) }
//2. Setup Our ARSession
setupARSessionWithStaticImages()
}
override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() }
//---------------------------------
//MARK: ARImageAnchor Vizualization
//---------------------------------
/// Creates An SCNPlane For Visualizing The Detected ARImageAnchor
///
/// - Parameters:
/// - imageAnchor: ARImageAnchor
/// - node: SCNNode
func createReferenceImagePlaneForNode(_ imageAnchor: ARImageAnchor, node: SCNNode){
//1. Get The Targets Width & Height
let width = imageAnchor.referenceImage.physicalSize.width
let height = imageAnchor.referenceImage.physicalSize.height
//2. Create A Plane Geometry To Cover The ARImageAnchor
let planeNode = SCNNode()
let planeGeometry = SCNPlane(width: width, height: height)
planeGeometry.firstMaterial?.diffuse.contents = UIColor.white
planeNode.opacity = 0.5
planeNode.geometry = planeGeometry
//3. Rotate The PlaneNode To Horizontal
planeNode.eulerAngles.x = -.pi/2
//4. The Node Is Centered In The Anchor (0,0,0)
node.addChildNode(planeNode)
//5. Store The Vector Of The ARImageAnchor
imageAnchorVector = SCNVector3(imageAnchor.transform.columns.3.x, imageAnchor.transform.columns.3.y, imageAnchor.transform.columns.3.z)
let fadeOutAction = SCNAction.fadeOut(duration: 5)
planeNode.runAction(fadeOutAction)
}
//-------------------------
//MARK: Plane Visualization
//-------------------------
/// Creates An SCNPlane For Visualizing The Detected ARAnchor
///
/// - Parameters:
/// - imageAnchor: ARAnchor
/// - node: SCNNode
func createReferencePlaneForNode(_ anchor: ARPlaneAnchor, node: SCNNode){
//1. Get The Anchors Width & Height
let width = CGFloat(anchor.extent.x)
let height = CGFloat(anchor.extent.z)
//2. Create A Plane Geometry To Cover The ARImageAnchor
planeNode = SCNNode()
let planeGeometry = SCNPlane(width: width, height: height)
planeGeometry.firstMaterial?.diffuse.contents = UIColor.white
planeNode?.opacity = 0.5
planeNode?.geometry = planeGeometry
//3. Rotate The PlaneNode To Horizontal
planeNode?.eulerAngles.x = -.pi/2
//4. The Node Is Centered In The Anchor (0,0,0)
node.addChildNode(planeNode!)
}
//-------------------
//MARK: Model Loading
//-------------------
/// Loads Our Model Based On The Resulting Vector Of Our ARAnchor
///
/// - Parameter worldVector: SCNVector3
func loadModelAtVector(_ worldVector: SCNVector3) {
let modelPath = "ARModels.scnassets/Scatterbug.scn"
//1. Get The Reference To Our SCNScene & Get The Model Root Node
guard let model = SCNScene(named: modelPath),
let pokemonModel = model.rootNode.childNode(withName: "RootNode", recursively: false) else { return }
//2. Scale & Position The Scatterbug At The Resulting Vector
pokemonModel.scale = SCNVector3(0.003, 0.003, 0.003)
pokemonModel.position = worldVector
//3. Add It To Our SCNView
augmentedRealityView.scene.rootNode.addChildNode(pokemonModel)
}
//---------------
//MARK: ARSession
//---------------
/// Sets Up The AR Session With Static Or Dynamic ARImages
func setupARSessionWithStaticImages(){
//1. Set Our Configuration
configuration.detectionImages = staticReferenceImages
configuration.planeDetection = .horizontal
//2. Run The Configuration
augmentedRealitySession.run(configuration, options: [.resetTracking, .removeExistingAnchors])
//3. Set The Session & Delegate
augmentedRealityView?.session = augmentedRealitySession
self.augmentedRealityView?.delegate = self
}
}
Hope it points you in the right direction...
I would like to check whether the ARReferenceImage is no longer visible in the camera's view. At the moment I can check if the image's node is in the camera's view, but this node is still visible in the camera's view when the ARReferenceImage is covered with another image or when the image is removed.
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
guard let node = self.currentImageNode else { return }
if let pointOfView = sceneView.pointOfView {
let isVisible = sceneView.isNode(node, insideFrustumOf: pointOfView)
print("Is node visible: \(isVisible)")
}
}
So I need to check whether the image itself is no longer visible, rather than the visibility of the image's node, but I can't find out whether this is possible. The first screenshot shows three boxes that are added when the image beneath them is found. When the found image is covered (see screenshot 2), I would like to remove the boxes.
I managed to fix the problem! I used a little bit of Maybe1's code and his concept for solving the problem, but in a different way. The following line of code is still used to reactivate the image recognition.
// Delete anchor from the session to reactivate the image recognition
sceneView.session.remove(anchor: anchor)
Let me explain. First we need to add some variables.
// The scnNodeBarn variable will be the node to be added when the barn image is found. Add another scnNode when you have another image.
var scnNodeBarn: SCNNode = SCNNode()
// This variable holds the currently added scnNode (in this case scnNodeBarn when the barn image is found)
var currentNode: SCNNode? = nil
// This variable holds the UUID of the found Image Anchor that is used to add a scnNode
var currentARImageAnchorIdentifier: UUID?
// This variable is used to call a function when there is no new anchor added for 0.6 seconds
var timer: Timer!
The complete code with comments below.
/// - Tag: ARImageAnchor-Visualizing
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
guard let imageAnchor = anchor as? ARImageAnchor else { return }
let referenceImage = imageAnchor.referenceImage
// The following timer fires after 0.6 seconds, but every time an anchor is found the timer is stopped.
// So when no ARImageAnchor is found, the timer will complete, the current scene node will be deleted, and the variable will be set to nil
DispatchQueue.main.async {
if(self.timer != nil){
self.timer.invalidate()
}
self.timer = Timer.scheduledTimer(timeInterval: 0.6 , target: self, selector: #selector(self.imageLost(_:)), userInfo: nil, repeats: false)
}
// Check whether a new image has been found based on the ARImageAnchor identifier; when one is found, delete the current scene node and set the variable to nil
if(self.currentARImageAnchorIdentifier != imageAnchor.identifier &&
self.currentARImageAnchorIdentifier != nil
&& self.currentNode != nil){
//found new image
self.currentNode!.removeFromParentNode()
self.currentNode = nil
}
updateQueue.async {
//If currentNode is nil, there is currently no scene node
if(self.currentNode == nil){
switch referenceImage.name {
case "barn":
self.scnNodeBarn.transform = node.transform
self.sceneView.scene.rootNode.addChildNode(self.scnNodeBarn)
self.currentNode = self.scnNodeBarn
default: break
}
}
self.currentARImageAnchorIdentifier = imageAnchor.identifier
// Delete anchor from the session to reactivate the image recognition
self.sceneView.session.remove(anchor: anchor)
}
}
Delete the node when the timer finishes, indicating that no new ARImageAnchor was found.
@objc
func imageLost(_ sender:Timer){
self.currentNode!.removeFromParentNode()
self.currentNode = nil
}
In this way the currently added scnNode will be deleted when the image is covered or when a new image is found.
Unfortunately, this solution does not solve the positioning problem of images, because of the following:
ARKit doesn’t track changes to the position or orientation of each detected image.
I don't think this is currently possible.
From the Recognizing Images in an AR Experience documentation:
Design your AR experience to use detected images as a starting point for virtual content.
ARKit doesn’t track changes to the position or orientation of each detected image. If you try to place virtual content that stays attached to a detected image, that content may not appear to stay in place correctly. Instead, use detected images as a frame of reference for starting a dynamic scene.
New Answer for iOS 12.0
ARKit 2.0 and iOS 12 finally add this feature, either via ARImageTrackingConfiguration or via the ARWorldTrackingConfiguration.detectionImages property, which now also tracks the position of the images.
The Apple documentation for ARImageTrackingConfiguration lists the advantages of both methods:
With ARImageTrackingConfiguration, ARKit establishes a 3D space not by tracking the motion of the device relative to the world, but solely by detecting and tracking the motion of known 2D images in view of the camera. ARWorldTrackingConfiguration can also detect images, but each configuration has its own strengths:
World tracking has a higher performance cost than image-only tracking, so your session can reliably track more images at once with ARImageTrackingConfiguration.
Image-only tracking lets you anchor virtual content to known images only when those images are in view of the camera. World tracking with image detection lets you use known images to add virtual content to the 3D world, and continues to track the position of that content in world space even after the image is no longer in view.
World tracking works best in a stable, nonmoving environment. You can use image-only tracking to add virtual content to known images in more situations—for example, an advertisement inside a moving subway car.
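A minimal setup sketch for the image-only option; the asset group name "AR Resources" and the sceneView outlet are assumptions from the other snippets in this thread:
let configuration = ARImageTrackingConfiguration()

if let trackingImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                         bundle: nil) {
    configuration.trackingImages = trackingImages
    // How many images ARKit should track simultaneously
    configuration.maximumNumberOfTrackedImages = 2
}

sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])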
The correct way to check whether an image you are tracking is currently not being tracked by ARKit is to use the isTracked property of the ARImageAnchor in the didUpdate node for anchor delegate function.
For that, I use the next struct:
struct TrackedImage {
var name : String
var node : SCNNode?
}
And then an array of that struct with the names of all the images.
var trackedImages : [TrackedImage] = [ TrackedImage(name: "image_1", node: nil) ]
Then, in didAdd node for anchor, add the new content to the scene and also assign the node to the corresponding element in the trackedImages array:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
// Check if the added anchor is a recognized ARImageAnchor
if let imageAnchor = anchor as? ARImageAnchor{
// Get the reference ar image
let referenceImage = imageAnchor.referenceImage
// Create a plane to match the detected image.
let plane = SCNPlane(width: referenceImage.physicalSize.width, height: referenceImage.physicalSize.height)
plane.firstMaterial?.diffuse.contents = UIColor(red: 1, green: 1, blue: 1, alpha: 0.5)
// Create SCNNode from the plane
let planeNode = SCNNode(geometry: plane)
planeNode.eulerAngles.x = -.pi / 2
// Add the plane to the scene.
node.addChildNode(planeNode)
// Add the node to the tracked images
for (index, trackedImage) in trackedImages.enumerated(){
if(trackedImage.name == referenceImage.name){
trackedImages[index].node = planeNode
}
}
}
}
Finally, in the didUpdate node for anchor function, we search for the anchor's reference image name in our array and check whether the isTracked property is false.
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
// Uses the trackedImages instance property declared above
if let imageAnchor = anchor as? ARImageAnchor{
// Search the corresponding node for the ar image anchor
for (index, trackedImage) in trackedImages.enumerated(){
if(trackedImage.name == imageAnchor.referenceImage.name){
// Check if track is lost on ar image
if(imageAnchor.isTracked){
// The image is being tracked
trackedImage.node?.isHidden = false // Show or add content
}else{
// The image is lost
trackedImage.node?.isHidden = true // Hide or delete content
}
break
}
}
}
}
This solution works when you want to track multiple images at the same time and know when any of them is lost.
Note: For this solution to work the maximumNumberOfTrackedImages in the AR configuration must be set to a nonzero number.
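For reference, a sketch of such a configuration with world tracking; the asset group name is an assumption:
let configuration = ARWorldTrackingConfiguration()

if let detectionImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                          bundle: nil) {
    configuration.detectionImages = detectionImages
}
// Must be nonzero, otherwise isTracked won't reflect live tracking of the images
configuration.maximumNumberOfTrackedImages = 4

sceneView.session.run(configuration)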
For what it's worth, I spent hours trying to figure out how to constantly check for image references. The didUpdate function was the answer. Then you just need to test whether the reference image is being tracked using the .isTracked property. At that point, you can set the .isHidden property to true or false. Here's my example:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
let trackedNode = node
if let imageAnchor = anchor as? ARImageAnchor{
if (imageAnchor.isTracked) {
trackedNode.isHidden = false
print("\(trackedNode.name)")
}else {
trackedNode.isHidden = true
//print("\(trackedImageName)")
print("No image in view")
}
}
}
I'm not entirely sure I have understood what you're asking (so apologies), but if I have, then perhaps this might help...
It seems that for isNode(_:insideFrustumOf:) to work correctly, there must be some SCNGeometry associated with the node (an empty SCNNode alone will not suffice).
For example if we do something like this in the delegate callback and save the added SCNNode into an array:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
//1. If Our Target Image Has Been Detected Then Get The Corresponding Anchor
guard let currentImageAnchor = anchor as? ARImageAnchor else { return }
//2. Print The Anchor ID & It's Associated Node
print("""
Anchor With ID Has Been Detected \(currentImageAnchor.identifier)
Associated Node Details = \(node)
""")
//3. Store The Node
imageTargets.append(node)
}
And then use the isNode(_:insideFrustumOf:) method, 99% of the time it will say that the node is in view even when we know it shouldn't be.
However, if we do something like this (whereby we create a transparent marker node, i.e. one that has some geometry):
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
//1. If Our Target Image Has Been Detected Then Get The Corresponding Anchor
guard let currentImageAnchor = anchor as? ARImageAnchor else { return }
//2. Print The Anchor ID & It's Associated Node
print("""
Anchor With ID Has Been Detected \(currentImageAnchor.identifier)
Associated Node Details = \(node)
""")
//3. Create A Transparent Geometry
node.geometry = SCNSphere(radius: 0.1)
node.geometry?.firstMaterial?.diffuse.contents = UIColor.clear
//3. Store The Node
imageTargets.append(node)
}
And then call the following method, it does detect whether the ARReferenceImage is in view:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
//1. Get The Current Point Of View
guard let pointOfView = augmentedRealityView.pointOfView else { return }
//2. Loop Through Our Image Target Markers
for addedNode in imageTargets{
if augmentedRealityView.isNode(addedNode, insideFrustumOf: pointOfView){
print("Node Is Visible")
}else{
print("Node Is Not Visible")
}
}
}
In regard to your other point about an SCNNode being occluded by another one, the Apple docs state that isNode(_:insideFrustumOf:):
does not perform occlusion testing. That is, it returns
true if the tested node lies within the specified viewing frustum
regardless of whether that node’s contents are obscured by other
geometry.
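If you also need an occlusion-aware check against your own SceneKit content, one possible workaround (a sketch only, and it cannot see real-world objects covering the printed image) is to cast a segment from the camera to the node and check whether the node is the first thing hit:
// Returns true if no other SceneKit geometry sits between the camera and the node.
// Note: this only tests against geometry you added to the scene, not the real world.
func isNodeUnoccluded(_ node: SCNNode, in sceneView: ARSCNView) -> Bool {
    guard let pointOfView = sceneView.pointOfView else { return false }

    let hits = sceneView.scene.rootNode.hitTestWithSegment(from: pointOfView.worldPosition,
                                                           to: node.worldPosition,
                                                           options: nil)
    // If the first hit along the segment is our node, nothing occludes it
    return hits.first?.node === node
}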
Again, apologies if I haven't understood you correctly, but hopefully it might help to some extent...
Update:
Now that I fully understand your question, I agree with @orangenkopf that this isn't possible, since, as the docs state:
ARKit doesn’t track changes to the position or orientation of each
detected image.
From the Recognizing Images in an AR Experience documentation:
ARKit adds an image anchor to a session exactly once for each
reference image in the session configuration’s detectionImages array.
If your AR experience adds virtual content to the scene when an image
is detected, that action will by default happen only once. To allow
the user to experience that content again without restarting your app,
call the session’s remove(anchor:) method to remove the corresponding
ARImageAnchor. After the anchor is removed, ARKit will add a new
anchor the next time it detects the image.
So, maybe you can find a workaround for your case:
Let's say we have a structure that saves the detected ARImageAnchor and its associated virtual content:
struct ARImage {
var anchor: ARImageAnchor
var node: SCNNode
}
Then, when renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) is called, you save the detected image into a temporary list of ARImage:
...
var tmpARImages: [ARImage] = []
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
guard let imageAnchor = anchor as? ARImageAnchor else { return }
let referenceImage = imageAnchor.referenceImage
// If the ARImage does not exist
if !tmpARImages.contains(where: {$0.anchor.referenceImage.name == referenceImage.name}) {
let virtualContent = SCNNode(...)
node.addChildNode(virtualContent)
tmpARImages.append(ARImage(anchor: imageAnchor, node: virtualContent))
}
// Delete anchor from the session to reactivate the image recognition
sceneView.session.remove(anchor: anchor)
}
To be clear: while your camera is pointed at the image/marker, the delegate function will keep being called over and over (because we removed the anchor from the session).
The idea is to combine the image recognition loop, the detected images saved in the tmp list, and the sceneView.isNode(node, insideFrustumOf: pointOfView) function to determine whether the detected image/marker is no longer in view, as in the rough sketch below.
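A rough sketch of the frustum-check half of that idea, reusing the tmpARImages list from the snippet above (how you react when a marker leaves the view is up to you):
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    guard let pointOfView = sceneView.pointOfView else { return }

    for image in tmpARImages {
        // While the marker keeps being re-detected, didAdd keeps firing and the
        // content stays in the list; once its node leaves the camera frustum,
        // treat the image/marker as no longer in view
        let visible = sceneView.isNode(image.node, insideFrustumOf: pointOfView)
        image.node.isHidden = !visible
    }
}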
I hope it was clear...
This code works only if you hold the device strictly horizontally or vertically. If you hold the iPhone tilted, or start to tilt it, this code doesn't work:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
//1. Get The Current Point Of View
guard let pointOfView = augmentedRealityView.pointOfView else { return }
//2. Loop Through Our Image Target Markers
for addedNode in imageTargets{
if augmentedRealityView.isNode(addedNode, insideFrustumOf: pointOfView){
print("Node Is Visible")
}else{
print("Node Is Not Visible")
}
}
}