ARAnchor for SCNNode - Swift

I'm trying to get a hold of the anchor after adding an SCNNode to the scene of an ARSCNView. My understanding is that the anchor should be created automatically, but I can't seem to retrieve it.
Below is how I add it. The node is saved in a variable called testNode.
let node = SCNNode()
node.geometry = SCNBox(width: 0.5, height: 0.1, length: 0.3, chamferRadius: 0)
node.geometry?.firstMaterial?.diffuse.contents = UIColor.green
sceneView.scene.rootNode.addChildNode(node)
testNode = node
Here is how I try to retrieve it. It always prints nil.
if let testNode = testNode {
    print(sceneView.anchor(for: testNode))
}
Does it not create the anchor? If it does: is there another method I can use to retrieve it?

If you have a look at the Apple docs, they state that:
To track the positions and orientations of real or virtual objects
relative to the camera, create anchor objects and use the add(anchor:)
method to add them to your AR session.
As such, I think that since you aren't using plane detection, you would need to create an ARAnchor manually if one is needed:
Whenever you place a virtual object, always add an ARAnchor representing its position and orientation to the ARSession. After moving a virtual object, remove the anchor at the old position and create a new anchor at the new position. Adding an anchor tells ARKit that a position is important, improving world tracking quality in that area and helping virtual objects appear to stay in place relative to real-world surfaces.
You can read more about this in the following thread What's the difference between using ARAnchor to insert a node and directly insert a node?
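As an aside, here is a minimal sketch of that quoted guidance (this is not from the original answer; currentAnchor is an assumed property holding the anchor of the placed object, and augmentedRealitySession is the ARSession):

func move(_ node: SCNNode, to transform: simd_float4x4) {
    // Remove the anchor at the old position...
    if let oldAnchor = currentAnchor {
        augmentedRealitySession.remove(anchor: oldAnchor)
    }
    // ...and create a new anchor at the new position.
    let newAnchor = ARAnchor(transform: transform)
    augmentedRealitySession.add(anchor: newAnchor)
    currentAnchor = newAnchor
    node.simdTransform = transform
}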
Anyway, in order to get you started I began by creating an SCNNode called currentNode:
var currentNode: SCNNode?
Then using a UITapGestureRecognizer I created an ARAnchor manually at the touchLocation:
@objc func handleTap(_ gesture: UITapGestureRecognizer){

    //1. Get The Current Touch Location
    let currentTouchLocation = gesture.location(in: self.augmentedRealityView)

    //2. If We Have Hit A Feature Point Get The Result
    if let hitTest = augmentedRealityView.hitTest(currentTouchLocation, types: [.featurePoint]).last {

        //3. Create An Anchor At The World Transform
        let anchor = ARAnchor(transform: hitTest.worldTransform)

        //4. Add It To The Session
        augmentedRealitySession.add(anchor: anchor)
    }
}
Having added the anchor, I then used the ARSCNViewDelegate callback to create the currentNode like so:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    if currentNode == nil {
        currentNode = SCNNode()
        let nodeGeometry = SCNBox(width: 0.2, height: 0.2, length: 0.2, chamferRadius: 0)
        nodeGeometry.firstMaterial?.diffuse.contents = UIColor.cyan
        currentNode?.geometry = nodeGeometry
        currentNode?.position = SCNVector3(anchor.transform.columns.3.x, anchor.transform.columns.3.y, anchor.transform.columns.3.z)
        node.addChildNode(currentNode!)
    }
}
In order to test that it works, e.g. being able to log the corresponding ARAnchor, I changed the tapGesture method to include this at the end:
if let anchorHitTest = augmentedRealityView.hitTest(currentTouchLocation, options: nil).first {
    print(augmentedRealityView.anchor(for: anchorHitTest.node))
}
Which in my console log prints:
Optional(<ARAnchor: 0x1c0535680 identifier="23CFF447-68E9-451D-A64D-17C972EB5F4B" transform=<translation=(-0.006610 -0.095542 -0.357221) rotation=(-0.00° 0.00° 0.00°)>>)
Hope it helps...

Related

ARKit SCNNode always in the center when camera move

I am working on a project where I have to place a green dot that stays in the center even when we rotate the camera in ARKit. I am using ARSCNView and I have added the node; so far everything is good. Now I know I need to modify the position of the node in
func session(_ session: ARSession, didUpdate frame: ARFrame)
But I have no idea how to do that. I saw an example which was close to what I have, but it does not run as it's supposed to.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    let location = sceneView.center
    let hitTest = sceneView.hitTest(location, types: .featurePoint)
    if hitTest.isEmpty {
        print("No Plane Detected")
        return
    } else {
        let columns = hitTest.first?.worldTransform.columns.3
        let position = SCNVector3(x: columns!.x, y: columns!.y, z: columns!.z)
        var node = sceneView.scene.rootNode.childNode(withName: "CenterShip", recursively: false) ?? nil
        if node == nil {
            let scene = SCNScene(named: "art.scnassets/ship.scn")!
            node = scene.rootNode.childNode(withName: "ship", recursively: false)
            node?.opacity = 0.7
            let columns = hitTest.first?.worldTransform.columns.3
            node!.name = "CenterShip"
            node!.position = SCNVector3(x: columns!.x, y: columns!.y, z: columns!.z)
            sceneView.scene.rootNode.addChildNode(node!)
        }
        let position2 = node?.position
        if position == position2! {
            return
        } else {
            // action
            let action = SCNAction.move(to: position, duration: 0.1)
            node?.runAction(action)
        }
    }
}
It doesn't matter how I rotate the camera this dot must be in the middle.
It's not clear exactly what you're trying to do, but I assume it's one of the following:
A) Place the green dot centered in front of the camera at a fixed distance, e.g. always exactly 1 meter in front of the camera.
B) Place the green dot centered in front of the camera at the depth of the nearest detected plane, i.e. using the results of a raycast from the midpoint of the ARSCNView.
I would have assumed A, but your example code uses the (now deprecated) sceneView.hitTest() function, which in this case would give you the depth of whatever is behind the pixel at sceneView.center.
Anyway here's both:
Fixed Depth Solution
This is pretty straightforward, though there are a few options. The simplest is to make the green dot a child node of the scene's camera node, and give it a position with a negative z value, since z increases as a position moves toward the camera.
cameraNode.addChildNode(textNode)
textNode.position = SCNVector3(x: 0, y: 0, z: -1)
As the camera moves, so too will its child nodes. More details in this very thorough answer
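For completeness, a slightly fuller sketch of option A (assuming sceneView is your ARSCNView; dotNode here stands for the green-dot node, i.e. textNode in the snippet above, and pointOfView is the node SceneKit renders the camera from):

if let cameraNode = sceneView.pointOfView {
    // Child nodes move with the camera automatically.
    cameraNode.addChildNode(dotNode)
    // One meter in front of the camera; the camera looks down its -z axis.
    dotNode.position = SCNVector3(0, 0, -1)
}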
Scene Depth Solution
To determine the estimated depth behind a pixel, you should use ARSession.raycast instead of ARSCNView.hitTest, because the latter is definitely deprecated.
Note that if the raycast() (or legacy hitTest()) methods return an empty result set (not uncommon given the complexity of the scene estimation going on in ARKit), you won't have a position to update the node with, and thus it might not be perfectly centered in every frame. Handling this is a bit more complex, as you'd need to decide exactly what you want to do in that case.
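One simple option, sketched below (lastKnownPosition is an assumed stored property, not part of your code): cache the last successful raycast result and fall back to it on frames where the raycast comes back empty.

var lastKnownPosition: simd_float3?

func updatePosition(of node: SCNNode, with results: [ARRaycastResult]) {
    if let hit = results.first {
        let pos = hit.worldTransform.columns.3
        lastKnownPosition = simd_float3(pos.x, pos.y, pos.z)
    }
    // Reuse the last successful result when the current frame has none.
    if let pos = lastKnownPosition {
        node.simdPosition = pos
    }
}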
The SCNAction is unnecessary and potentially causing problems. These delegate methods run at 60 fps, so simply updating the position directly will produce smooth results.
Adapting and simplifying the code you posted:
func createCenterShipNode() -> SCNNode {
    let scene = SCNScene(named: "art.scnassets/ship.scn")!
    let node = scene.rootNode.childNode(withName: "ship", recursively: false)
    node!.opacity = 0.7
    node!.name = "CenterShip"
    sceneView.scene.rootNode.addChildNode(node!)
    return node!
}

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // Check the docs for what the different raycast query parameters mean, but these
    // give you the depth of anything ARKit has detected
    guard let query = sceneView.raycastQuery(from: sceneView.center, allowing: .estimatedPlane, alignment: .any) else {
        return
    }
    let results = session.raycast(query)
    if let hit = results.first {
        let node = sceneView.scene.rootNode.childNode(withName: "CenterShip", recursively: false) ?? createCenterShipNode()
        let pos = hit.worldTransform.columns.3
        node.simdPosition = simd_float3(pos.x, pos.y, pos.z)
    }
}
See also: ARRaycastQuery
One last note: you generally don't want to do scene manipulation within this delegate method. It runs on a different thread than the SceneKit rendering thread, and SceneKit is very thread-sensitive. This will likely work fine, but anything beyond adding or moving a node will certainly cause crashes from time to time. You'd ideally want to store the new position, and then update the actual scene contents from within the renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) delegate method.
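A rough sketch of that pattern (pendingPosition and shipNode are assumed properties, and synchronization between the two threads is omitted for brevity):

var pendingPosition: simd_float3?

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // Compute the target position here, but don't touch the scene.
    guard let query = sceneView.raycastQuery(from: sceneView.center, allowing: .estimatedPlane, alignment: .any),
          let hit = session.raycast(query).first else { return }
    let pos = hit.worldTransform.columns.3
    pendingPosition = simd_float3(pos.x, pos.y, pos.z)
}

func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    // Runs on the SceneKit rendering thread, where scene mutation is safe.
    if let pos = pendingPosition {
        shipNode?.simdPosition = pos
    }
}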

ARKit detecting intersection between planes

I am using ARKit (with SceneKit) and am trying to find a way to get the intersection between an ARReferenceImage and a horizontal ARPlaneAnchor, so I can display a 3D character on the surface directly in front of the detected image, e.g. spawning inside the red circle (see image below).
At the moment I am able to get the character to spawn in front of the detected image; however, the character is floating in the air instead of standing on the surface.
let realWorldPosition = SCNVector3Make(anchor.transform.columns.3.x, anchor.transform.columns.3.y, anchor.transform.columns.3.z)

let hitTest = self.sceneView.scene.rootNode.hitTestWithSegment(from: self.sceneView.scene.rootNode.worldPosition, to: realWorldPosition, options: nil)

overlayNode.position = SCNVector3Make((hitTest.first?.worldCoordinates.x)!, 0, (hitTest.first?.worldCoordinates.z)!)
self.sceneView.scene.rootNode.addChildNode(overlayNode)
Any help on this would be greatly appreciated, thanks!
Example project
I think you were on the right lines using the hitTestWithSegment function to detect an intersection between the ARImageAnchor and the ARPlaneAnchor.
Rather than trying to explain each step of my attempt at an answer, I have provided code which is fully commented, so it should be fairly self explanatory.
My example works fairly well (although it's certainly not perfect) and will definitely need some tweaking.
For example, you will need to look at determining more accurately the distance from the ARReferenceImage to the ARPlaneAnchor, etc.
I can get the model (a Pokemon) to place at the correct level and fairly close to the front of the ARReferenceImage, although it will need tweaking.
Having said this, I think this will be a fairly good base for you to start refining the code and getting more accurate results.
Of note, however, is that I have only enabled one ARPlaneAnchor to be detected (just for simplicity's sake), and have assumed that you will be detecting a plane in front of your image marker.
I haven't taken into account rotation or anything like that. And of course, based on your proposed scenario; it also assumes your image would be on a desk or some other flat surface.
Anyway, here is my answer (hopefully it should be fairly self explanatory):
import UIKit
import ARKit

//-----------------------
//MARK: ARSCNViewDelegate
//-----------------------

extension ViewController: ARSCNViewDelegate{

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {

        //1. If We Have Detected Our ImageTarget Then Create A Plane To Visualize It
        if let currentImageAnchor = anchor as? ARImageAnchor {
            createReferenceImagePlaneForNode(currentImageAnchor, node: node)
            allowTracking = true
        }

        //2. If We Have Detected A Horizontal Plane Then Create One
        if let currentPlaneAnchor = anchor as? ARPlaneAnchor{
            if planeNode == nil && !createdModel{ createReferencePlaneForNode(currentPlaneAnchor, node: node) }
        }
    }

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {

        //1. Check To See Whether An ARPlaneAnchor Has Been Updated
        guard let anchor = anchor as? ARPlaneAnchor,
            //2. Check It Is Our PlaneNode
            let existingPlane = planeNode,
            //3. Get The Geometry Of The PlaneNode
            let planeGeometry = existingPlane.geometry as? SCNPlane else { return }

        //4. Adjust Its Size & Position
        planeGeometry.width = CGFloat(anchor.extent.x)
        planeGeometry.height = CGFloat(anchor.extent.z)
        planeNode?.position = SCNVector3Make(anchor.center.x, 0.01, anchor.center.z)
    }

    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {

        //1. Detect The Intersection Of The ARPlaneAnchor & ARImageAnchor
        if allowTracking { detectIntersectionOfImageTarget() }
    }
}
//---------------------------------------
//MARK: Model Generation & Identification
//---------------------------------------

extension ViewController {

    /// Detects If We Have Intersected A Valid Image Target
    func detectIntersectionOfImageTarget(){

        //If We Haven't Created Our Model Then Check To See If We Have Detected An Existing Plane
        if !createdModel{

            //a. Perform A HitTest On The Center Of The Screen For Any Existing Planes
            guard let planeHitTest = self.augmentedRealityView.hitTest(screenCenter, types: .existingPlaneUsingExtent).first,
                let planeAnchor = planeHitTest.anchor as? ARPlaneAnchor else { return }

            //b. Get The Transform Of The ARPlaneAnchor
            let x = planeAnchor.transform.columns.3.x
            let y = planeAnchor.transform.columns.3.y
            let z = planeAnchor.transform.columns.3.z

            //c. Create The Anchor's Vector
            let anchorVector = SCNVector3(x, y, z)

            //d. Perform Another HitTest From The ImageAnchor Vector To The Anchor's Vector
            if let _ = self.augmentedRealityView.scene.rootNode.hitTestWithSegment(from: imageAnchorVector, to: anchorVector, options: nil).first?.node {

                //e. If We Haven't Created The Model Then Place It As Soon As An Intersection Occurs
                if createdModel == false{

                    //f. Load The Model
                    loadModelAtVector(SCNVector3(imageAnchorVector.x, y, imageAnchorVector.z))
                    createdModel = true
                    planeNode?.removeFromParentNode()
                }
            }
        }
    }
}
class ViewController: UIViewController {

    //1. Reference To Our ImageTarget Bundle
    let AR_BUNDLE = "AR Resources"

    //2. Vector To Store The Position Of Our Detected Image
    var imageAnchorVector: SCNVector3!

    //3. Variables To Allow Tracking & To Determine Whether Our Model Has Been Placed
    var allowTracking = false
    var createdModel = false

    //4. Create A Reference To Our ARSCNView In Our Storyboard Which Displays The Camera Feed
    @IBOutlet weak var augmentedRealityView: ARSCNView!

    //5. Create Our ARWorld Tracking Configuration
    let configuration = ARWorldTrackingConfiguration()

    //6. Create Our Session
    let augmentedRealitySession = ARSession()

    //7. ARReference Images
    lazy var staticReferenceImages: Set<ARReferenceImage> = {
        let images = ARReferenceImage.referenceImages(inGroupNamed: AR_BUNDLE, bundle: nil)
        return images!
    }()

    //8. Screen Center Reference
    var screenCenter: CGPoint!

    //9. PlaneNode
    var planeNode: SCNNode?

    //--------------------
    //MARK: View LifeCycle
    //--------------------

    override func viewDidLoad() {
        super.viewDidLoad()

        //1. Get Reference To The Center Of The Screen For RayCasting
        DispatchQueue.main.async { self.screenCenter = CGPoint(x: self.view.bounds.width/2, y: self.view.bounds.height/2) }

        //2. Setup Our ARSession
        setupARSessionWithStaticImages()
    }
    override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() }

    //---------------------------------
    //MARK: ARImageAnchor Visualization
    //---------------------------------

    /// Creates An SCNPlane For Visualizing The Detected ARImageAnchor
    ///
    /// - Parameters:
    ///   - imageAnchor: ARImageAnchor
    ///   - node: SCNNode
    func createReferenceImagePlaneForNode(_ imageAnchor: ARImageAnchor, node: SCNNode){

        //1. Get The Target's Width & Height
        let width = imageAnchor.referenceImage.physicalSize.width
        let height = imageAnchor.referenceImage.physicalSize.height

        //2. Create A Plane Geometry To Cover The ARImageAnchor
        let planeNode = SCNNode()
        let planeGeometry = SCNPlane(width: width, height: height)
        planeGeometry.firstMaterial?.diffuse.contents = UIColor.white
        planeNode.opacity = 0.5
        planeNode.geometry = planeGeometry

        //3. Rotate The PlaneNode To Horizontal
        planeNode.eulerAngles.x = -.pi/2

        //4. The Node Is Centered In The Anchor (0,0,0)
        node.addChildNode(planeNode)

        //5. Store The Vector Of The ARImageAnchor
        imageAnchorVector = SCNVector3(imageAnchor.transform.columns.3.x, imageAnchor.transform.columns.3.y, imageAnchor.transform.columns.3.z)

        let fadeOutAction = SCNAction.fadeOut(duration: 5)
        planeNode.runAction(fadeOutAction)
    }
    //-------------------------
    //MARK: Plane Visualization
    //-------------------------

    /// Creates An SCNPlane For Visualizing The Detected ARPlaneAnchor
    ///
    /// - Parameters:
    ///   - anchor: ARPlaneAnchor
    ///   - node: SCNNode
    func createReferencePlaneForNode(_ anchor: ARPlaneAnchor, node: SCNNode){

        //1. Get The Anchor's Width & Height
        let width = CGFloat(anchor.extent.x)
        let height = CGFloat(anchor.extent.z)

        //2. Create A Plane Geometry To Cover The ARPlaneAnchor
        planeNode = SCNNode()
        let planeGeometry = SCNPlane(width: width, height: height)
        planeGeometry.firstMaterial?.diffuse.contents = UIColor.white
        planeNode?.opacity = 0.5
        planeNode?.geometry = planeGeometry

        //3. Rotate The PlaneNode To Horizontal
        planeNode?.eulerAngles.x = -.pi/2

        //4. The Node Is Centered In The Anchor (0,0,0)
        node.addChildNode(planeNode!)
    }
    //-------------------
    //MARK: Model Loading
    //-------------------

    /// Loads Our Model Based On The Resulting Vector Of Our ARAnchor
    ///
    /// - Parameter worldVector: SCNVector3
    func loadModelAtVector(_ worldVector: SCNVector3) {

        let modelPath = "ARModels.scnassets/Scatterbug.scn"

        //1. Get The Reference To Our SCNScene & Get The Model Root Node
        guard let model = SCNScene(named: modelPath),
            let pokemonModel = model.rootNode.childNode(withName: "RootNode", recursively: false) else { return }

        //2. Scale The Scatterbug & Set Its Position
        pokemonModel.scale = SCNVector3(0.003, 0.003, 0.003)
        pokemonModel.position = worldVector

        //3. Add It To Our SCNView
        augmentedRealityView.scene.rootNode.addChildNode(pokemonModel)
    }

    //---------------
    //MARK: ARSession
    //---------------

    /// Sets Up The AR Session With Static Or Dynamic ARImages
    func setupARSessionWithStaticImages(){

        //1. Set Our Configuration
        configuration.detectionImages = staticReferenceImages
        configuration.planeDetection = .horizontal

        //2. Run The Configuration
        augmentedRealitySession.run(configuration, options: [.resetTracking, .removeExistingAnchors])

        //3. Set The Session & Delegate
        augmentedRealityView?.session = augmentedRealitySession
        self.augmentedRealityView?.delegate = self
    }
}
Hope it points you in the right direction...

Check whether the ARReferenceImage is no longer visible in the camera's view

I would like to check whether the ARReferenceImage is no longer visible in the camera's view. At the moment I can check if the image's node is in the camera's view, but this node is still visible in the camera's view when the ARReferenceImage is covered with another image or when the image is removed.
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    guard let node = self.currentImageNode else { return }
    if let pointOfView = sceneView.pointOfView {
        let isVisible = sceneView.isNode(node, insideFrustumOf: pointOfView)
        print("Is node visible: \(isVisible)")
    }
}
So I need to check if the image is no longer visible instead of the image's node visibility. But I can't find out if this is possible. The first screenshot shows three boxes that are added when the image beneath is found. When the found image is covered (see screenshot 2) I would like to remove the boxes.
I managed to fix the problem! I used a little bit of Maybe1's code and his concept to solve the problem, but in a different way. The following line of code is still used to reactivate the image recognition.
// Delete anchor from the session to reactivate the image recognition
sceneView.session.remove(anchor: anchor)
Let me explain. First we need to add some variables.
// The scnNodeBarn variable will be the node to be added when the barn image is found. Add another scnNode when you have another image.
var scnNodeBarn: SCNNode = SCNNode()
// This variable holds the currently added scnNode (in this case scnNodeBarn when the barn image is found)
var currentNode: SCNNode? = nil
// This variable holds the UUID of the found Image Anchor that is used to add a scnNode
var currentARImageAnchorIdentifier: UUID?
// This variable is used to call a function when there is no new anchor added for 0.6 seconds
var timer: Timer!
The complete code with comments below.
/// - Tag: ARImageAnchor-Visualizing
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    let referenceImage = imageAnchor.referenceImage

    // The following timer fires after 0.6 seconds, but every time an anchor is found the timer is restarted.
    // So when no ARImageAnchor is found, the timer completes, the current scene node is deleted and the variable is set to nil.
    DispatchQueue.main.async {
        if(self.timer != nil){
            self.timer.invalidate()
        }
        self.timer = Timer.scheduledTimer(timeInterval: 0.6, target: self, selector: #selector(self.imageLost(_:)), userInfo: nil, repeats: false)
    }

    // Check whether a new image was found on the basis of the ARImageAnchor identifier; if so, delete the current scene node and set the variable to nil.
    if(self.currentARImageAnchorIdentifier != imageAnchor.identifier &&
        self.currentARImageAnchorIdentifier != nil
        && self.currentNode != nil){
        // Found a new image
        self.currentNode!.removeFromParentNode()
        self.currentNode = nil
    }

    updateQueue.async {
        // If currentNode is nil, there is currently no scene node
        if(self.currentNode == nil){
            switch referenceImage.name {
            case "barn":
                self.scnNodeBarn.transform = node.transform
                self.sceneView.scene.rootNode.addChildNode(self.scnNodeBarn)
                self.currentNode = self.scnNodeBarn
            default: break
            }
        }
        self.currentARImageAnchorIdentifier = imageAnchor.identifier

        // Delete the anchor from the session to reactivate the image recognition
        self.sceneView.session.remove(anchor: anchor)
    }
}
Delete the node when the timer finishes, indicating that no new ARImageAnchor was found.
@objc
func imageLost(_ sender: Timer){
    self.currentNode?.removeFromParentNode()
    self.currentNode = nil
}
In this way, the currently added scnNode will be deleted when the image is covered or when a new image is found.
Unfortunately, this solution does not solve the positioning problem of images, because of the following:
ARKit doesn’t track changes to the position or orientation of each detected image.
I don't think this is currently possible.
From the Recognizing Images in an AR Experience documentation:
Design your AR experience to use detected images as a starting point for virtual content.
ARKit doesn’t track changes to the position or orientation of each detected image. If you try to place virtual content that stays attached to a detected image, that content may not appear to stay in place correctly. Instead, use detected images as a frame of reference for starting a dynamic scene.
New Answer for iOS 12.0
ARKit 2.0 and iOS 12 finally adds this feature, either via ARImageTrackingConfiguration or via the ARWorldTrackingConfiguration.detectionImages property that now also tracks the position of the images.
The Apple documentation to ARImageTrackingConfiguration lists advantages of both methods:
With ARImageTrackingConfiguration, ARKit establishes a 3D space not by tracking the motion of the device relative to the world, but solely by detecting and tracking the motion of known 2D images in view of the camera. ARWorldTrackingConfiguration can also detect images, but each configuration has its own strengths:
World tracking has a higher performance cost than image-only tracking, so your session can reliably track more images at once with ARImageTrackingConfiguration.
Image-only tracking lets you anchor virtual content to known images only when those images are in view of the camera. World tracking with image detection lets you use known images to add virtual content to the 3D world, and continues to track the position of that content in world space even after the image is no longer in view.
World tracking works best in a stable, nonmoving environment. You can use image-only tracking to add virtual content to known images in more situations—for example, an advertisement inside a moving subway car.
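For reference, a minimal sketch of setting up the image-only configuration (assuming your reference images live in an asset group named "AR Resources" and sceneView is an ARSCNView):

let configuration = ARImageTrackingConfiguration()
if let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) {
    configuration.trackingImages = referenceImages
    // How many images ARKit should track simultaneously.
    configuration.maximumNumberOfTrackedImages = 4
}
sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])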
The correct way to check whether an image you are tracking is no longer tracked by ARKit is to use the isTracked property of the ARImageAnchor inside the renderer(_:didUpdate:for:) delegate function.
For that, I use the following struct:
struct TrackedImage {
    var name: String
    var node: SCNNode?
}
And then an array of that struct with the names of all the images:
var trackedImages : [TrackedImage] = [ TrackedImage(name: "image_1", node: nil) ]
Then, in renderer(_:didAdd:for:), add the new content to the scene, and also store the node in the corresponding element of the trackedImages array:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    // Check if the added anchor is a recognized ARImageAnchor
    if let imageAnchor = anchor as? ARImageAnchor{
        // Get the reference AR image
        let referenceImage = imageAnchor.referenceImage
        // Create a plane to match the detected image.
        let plane = SCNPlane(width: referenceImage.physicalSize.width, height: referenceImage.physicalSize.height)
        plane.firstMaterial?.diffuse.contents = UIColor(red: 1, green: 1, blue: 1, alpha: 0.5)
        // Create an SCNNode from the plane
        let planeNode = SCNNode(geometry: plane)
        planeNode.eulerAngles.x = -.pi / 2
        // Add the plane to the scene.
        node.addChildNode(planeNode)
        // Store the node with the corresponding tracked image
        for (index, trackedImage) in trackedImages.enumerated(){
            if(trackedImage.name == referenceImage.name){
                trackedImages[index].node = planeNode
            }
        }
    }
}
Finally, in the renderer(_:didUpdate:for:) function, we search for the anchor's name in our array and check whether the isTracked property is false.
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    if let imageAnchor = anchor as? ARImageAnchor{
        // Search the corresponding node for the AR image anchor
        for trackedImage in trackedImages{
            if(trackedImage.name == imageAnchor.referenceImage.name){
                // Check if tracking is lost on the AR image
                if(imageAnchor.isTracked){
                    // The image is being tracked
                    trackedImage.node?.isHidden = false // Show or add content
                }else{
                    // The image is lost
                    trackedImage.node?.isHidden = true // Hide or delete content
                }
                break
            }
        }
    }
}
This solution works when you want to track multiple images at the same time and know when any of them is lost.
Note: For this solution to work the maximumNumberOfTrackedImages in the AR configuration must be set to a nonzero number.
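For example, with world tracking the setup might look like this (detectionImages here stands for whatever ARReferenceImage set you load; sceneView is assumed to be your ARSCNView):

let configuration = ARWorldTrackingConfiguration()
configuration.detectionImages = detectionImages
// Per the note above: this must be nonzero for isTracked to update.
configuration.maximumNumberOfTrackedImages = 2
sceneView.session.run(configuration)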
For what it's worth, I spent hours trying to figure out how to constantly check for image references. The didUpdate function was the answer. Then you just need to test whether the reference image is being tracked using the isTracked property. At that point, you can set the isHidden property to true or false. Here's my example:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    let trackedNode = node
    if let imageAnchor = anchor as? ARImageAnchor{
        if (imageAnchor.isTracked) {
            trackedNode.isHidden = false
            print("\(trackedNode.name ?? "unnamed node")")
        }else {
            trackedNode.isHidden = true
            print("No image in view")
        }
    }
}
I'm not entirely sure I have understood what you're asking (so apologies), but if I have then perhaps this might help...
It seems that for isNode(_:insideFrustumOf:) to work correctly, there must be some SCNGeometry associated with the node (an empty SCNNode alone will not suffice).
For example if we do something like this in the delegate callback and save the added SCNNode into an array:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {

    //1. If Our Target Image Has Been Detected Then Get The Corresponding Anchor
    guard let currentImageAnchor = anchor as? ARImageAnchor else { return }

    //2. Print The Anchor ID & Its Associated Node
    print("""
    Anchor With ID Has Been Detected \(currentImageAnchor.identifier)
    Associated Node Details = \(node)
    """)

    //3. Store The Node
    imageTargets.append(node)
}
And then use the isNode(_:insideFrustumOf:) method, 99% of the time it will report that the node is in view even when we know it shouldn't be.
However if we do something like this (whereby we create a transparent marker node e.g. one that has some geometry):
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {

    //1. If Our Target Image Has Been Detected Then Get The Corresponding Anchor
    guard let currentImageAnchor = anchor as? ARImageAnchor else { return }

    //2. Print The Anchor ID & Its Associated Node
    print("""
    Anchor With ID Has Been Detected \(currentImageAnchor.identifier)
    Associated Node Details = \(node)
    """)

    //3. Create A Transparent Geometry
    node.geometry = SCNSphere(radius: 0.1)
    node.geometry?.firstMaterial?.diffuse.contents = UIColor.clear

    //4. Store The Node
    imageTargets.append(node)
}
And then call the following method, it does detect whether the ARReferenceImage is in view:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {

    //1. Get The Current Point Of View
    guard let pointOfView = augmentedRealityView.pointOfView else { return }

    //2. Loop Through Our Image Target Markers
    for addedNode in imageTargets{

        if augmentedRealityView.isNode(addedNode, insideFrustumOf: pointOfView){
            print("Node Is Visible")
        }else{
            print("Node Is Not Visible")
        }
    }
}
In regard to your other point about an SCNNode being occluded by another one, the Apple docs state that isNode(_:insideFrustumOf:):
does not perform occlusion testing. That is, it returns
true if the tested node lies within the specified viewing frustum
regardless of whether that node’s contents are obscured by other
geometry.
Again, apologies if I haven't understood you correctly, but hopefully it might help to some extent...
Update:
Now that I fully understand your question, I agree with @orangenkopf that this isn't possible. As the docs state:
ARKit doesn’t track changes to the position or orientation of each
detected image.
From the Recognizing Images in an AR Experience documentation:
ARKit adds an image anchor to a session exactly once for each
reference image in the session configuration’s detectionImages array.
If your AR experience adds virtual content to the scene when an image
is detected, that action will by default happen only once. To allow
the user to experience that content again without restarting your app,
call the session’s remove(anchor:) method to remove the corresponding
ARImageAnchor. After the anchor is removed, ARKit will add a new
anchor the next time it detects the image.
So, maybe you can find a workaround for your case:
Let's say we have a structure which saves the detected ARImageAnchor and its associated virtual content:
struct ARImage {
    var anchor: ARImageAnchor
    var node: SCNNode
}
Then, when renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) is called, you save the detected image into a temporary list of ARImage:
...
var tmpARImages: [ARImage] = []

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    let referenceImage = imageAnchor.referenceImage

    // If the ARImage does not exist yet
    if !tmpARImages.contains(where: {$0.anchor.referenceImage.name == referenceImage.name}) {
        let virtualContent = SCNNode(...)
        node.addChildNode(virtualContent)
        tmpARImages.append(ARImage(anchor: imageAnchor, node: virtualContent))
    }

    // Delete the anchor from the session to reactivate the image recognition
    sceneView.session.remove(anchor: anchor)
}
Because we removed the anchor from the session, the delegate function will keep firing for as long as the camera can see the image/marker.
The idea is to combine the image-recognition loop, the detected images saved into the tmp list, and the sceneView.isNode(node, insideFrustumOf: pointOfView) function to determine whether the detected image/marker is no longer in view.
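A rough sketch of that idea, building on the snippets above (tmpARImages and sceneView as before; this only logs the result, since exactly what to do when an image leaves view is up to you):

func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    guard let pointOfView = sceneView.pointOfView else { return }
    for image in tmpARImages {
        // The node can still be inside the frustum while the marker is covered,
        // so you'd combine this with how recently didAdd last fired for this image.
        let inView = sceneView.isNode(image.node, insideFrustumOf: pointOfView)
        print("\(image.anchor.referenceImage.name ?? "unknown") in view: \(inView)")
    }
}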
I hope it was clear...
This code works only if you hold the device strictly horizontally or vertically. If you hold the iPhone tilted, or start to tilt it, this code doesn't work:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {

    //1. Get The Current Point Of View
    guard let pointOfView = augmentedRealityView.pointOfView else { return }

    //2. Loop Through Our Image Target Markers
    for addedNode in imageTargets{

        if augmentedRealityView.isNode(addedNode, insideFrustumOf: pointOfView){
            print("Node Is Visible")
        }else{
            print("Node Is Not Visible")
        }
    }
}

Get name of reference image swift ARKit 1.5

Does someone know how I can get the name of the reference image read by the camera in AR?
I think the anchor reads the identifier and connects it to the reference image, which returns the name and physical size.
I want to have one AR Resources folder where I can put different images, and then, based on what the camera recognizes, display one model instead of another.
Thank you very much!
An ARReferenceImage has the following properties which you can access:
var name: String?
A descriptive name for the image.
var physicalSize: CGSize
The real-world dimensions, in meters, of the image.
As such, in order to get the name of the reference image and other properties you can use the following delegate callback:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {

    //1. If Our Target Image Has Been Detected Then Get The Corresponding Anchor
    guard let currentImageAnchor = anchor as? ARImageAnchor else { return }

    //2. Get The Target's Name
    let name = currentImageAnchor.referenceImage.name!

    //3. Get The Target's Width & Height
    let width = currentImageAnchor.referenceImage.physicalSize.width
    let height = currentImageAnchor.referenceImage.physicalSize.height

    //4. Log The Reference Image's Information
    print("""
    Image Name = \(name)
    Image Width = \(width)
    Image Height = \(height)
    """)
}
Note that this code assumes that your images have a name set in the AR Resources asset folder.
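If you would rather not use the asset catalog at all, ARReferenceImage can also be created in code from a CGImage, and its name property is settable. A hedged sketch (the image name "myMarker" and the 0.1 m physical width are placeholder assumptions):

if let cgImage = UIImage(named: "myMarker")?.cgImage {
    // physicalWidth is the real-world width of the printed image, in meters.
    let referenceImage = ARReferenceImage(cgImage, orientation: .up, physicalWidth: 0.1)
    referenceImage.name = "ImageOne"
    configuration.detectionImages = [referenceImage]
}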
You can then put logic within the callback or call another function which adds an SCNNode or SCNScene at the transform of the ARImageAnchor e.g:
//1. Create An SCNNode
let nodeHolder = SCNNode()

//2. Determine Which ImageTarget Has Been Detected
if name == "ImageOne"{
    let nodeGeometry = SCNBox(width: 0.02, height: 0.02, length: 0.02, chamferRadius: 0)
    nodeGeometry.firstMaterial?.diffuse.contents = UIColor.cyan
    nodeHolder.geometry = nodeGeometry
}else if name == "ImageTwo"{
    let nodeGeometry = SCNSphere(radius: 0.02)
    nodeGeometry.firstMaterial?.diffuse.contents = UIColor.cyan
    nodeHolder.geometry = nodeGeometry
}

//3. Add The SCNNode At The Position Of The Anchor
nodeHolder.position = SCNVector3(currentImageAnchor.transform.columns.3.x,
                                 currentImageAnchor.transform.columns.3.y,
                                 currentImageAnchor.transform.columns.3.z)

//4. Add It To The Scene
augmentedRealityView?.scene.rootNode.addChildNode(nodeHolder)

Unable to differentiate between plane detected by ARKit and a digital object to be placed using HitTest

I'm fairly new to iOS Swift programming. I'm using ARKit to build a very basic app to detect a horizontal plane and place, translate, rotate, modify or delete an object on it.
My main concern is to differentiate between the plane detected by ARKit and a digital object that I've placed. My thinking was to use hitTest(_:options:) to select the object (if any) and hitTest(_:types:) to select the plane through a tap gesture. I'm attaching the relevant code snippet below.
@objc func tapped(_ gesture: UITapGestureRecognizer){
    let sceneView = gesture.view as! ARSCNView
    let location = gesture.location(in: sceneView)
    let hitTestOptions: [SCNHitTestOption: Any] = [.boundingBoxOnly: true]
    let existingNodeHitTest = sceneView.hitTest(location, options: hitTestOptions)
    if let existingNode = existingNodeHitTest.first?.node {
        // Move, rotate, modify or delete the object
    } else {
        // Option to add other objects
        let hitTest = sceneView.hitTest(location, types: .existingPlaneUsingExtent)
        if !hitTest.isEmpty {
            let node = findNode(at: location)
            if node !== selectedNode {
                self.addItems(hitTestResult: hitTest.first!)
            }
        }
    }
}
func addItems(hitTestResult: ARHitTestResult) {
    let scene = SCNScene(named: "BuildingModels.scnassets/model/model.scn")
    let itemNode = (scene?.rootNode.childNode(withName: "SketchUp", recursively: false))!
    let transform = hitTestResult.worldTransform
    let position = SCNVector3(transform.columns.3.x, transform.columns.3.y, transform.columns.3.z)
    itemNode.position = position
    // self.sceneView.scene.lightingEnvironment.contents = scene.lightingEnvironment.contents
    self.sceneView.scene.rootNode.addChildNode(itemNode)
    selectedNode = itemNode
}
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    let gridNode = createGrid(planeAnchor: planeAnchor)
    node.addChildNode(gridNode)
}

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    node.enumerateChildNodes { (childNode, _) in
        childNode.removeFromParentNode()
    }
    let gridNode = createGrid(planeAnchor: planeAnchor)
    node.addChildNode(gridNode)
}
When I run the code, hitTest(_:options:) returns the detected plane. Is there any way to select only the SCNNodes (objects) that I place, and not the detected plane? Am I missing something? Any help is highly appreciated.
Thanks,
Sourabh.
Looking at your question you are already halfway there.
The way to handle this in its entirety is to make use of the following hit-test functions within your UITapGestureRecognizer function:
(1) An ARSCNHitTest which:
Searches for real-world objects or AR anchors in the captured camera image corresponding to a point in the SceneKit view.
(2) An SCNHitTest which:
Looks for SCNGeometry objects along the ray you specify. For each intersection between the ray and a geometry, SceneKit creates a hit-test result to provide information about both the SCNNode object containing the geometry and the location of the intersection on the geometry's surface.
Using your UITapGestureRecognizer as an example therefore, you can differentiate between an ARPlaneAnchor (detectedPlane) and any SCNNode within your scene like so:
@objc func handleTap(_ gesture: UITapGestureRecognizer){

    //1. Get The Current Touch Location
    let currentTouchLocation = gesture.location(in: self.augmentedRealityView)

    //2. Perform An ARSCNHitTest To See If We Have Hit An ARPlaneAnchor
    if let planeHitTest = augmentedRealityView.hitTest(currentTouchLocation, types: .existingPlane).first,
        let planeAnchor = planeHitTest.anchor as? ARPlaneAnchor{
        print("User Has Tapped On An Existing Plane = \(planeAnchor.identifier)")
        return
    }

    //3. Perform An SCNHitTest To See If We Have Hit An SCNNode
    if let nodeHitTest = augmentedRealityView.hitTest(currentTouchLocation, options: nil).first {
        let nodeTapped = nodeHitTest.node
        print("An SCNNode Has Been Tapped = \(nodeTapped)")
        return
    }
}
If you make use of the name property for any of your SCNNodes, this will also help you further, e.g:
if let name = nodeTapped.name{
    print("An SCNNode Named \(name) Has Been Tapped")
}
Additionally, if you ONLY want to detect objects you have added, e.g. SCNNodes, then you can simply remove part two of the gestureRecognizer function.
Hope it helps...
To fix this issue, you should loop through your scene's nodes; after that you can manipulate the node you want. Example:
for node in sceneView.scene.rootNode.childNodes {
    if node.name == "yourNodeName" {
        // do your manipulations
    }
}
Don't forget to add a name to your nodes. Example:
node.name = "yourNodeName"
I hope it helped!