ARKit Object not created from saved file - swift

I am working on a project using ARKit. I need to save an object's position. For example, I place a chair/object in the centre of my room; when I come back a few hours later, I want to see that chair/object in the same place it was. Is this possible in ARKit?
I placed my chair/object and saved it to a file, and the file saved successfully. But when I retrieve the saved file and reload the objects, the object is not visible in the ARSession.
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal]
let options: ARSession.RunOptions = [.resetTracking, .removeExistingAnchors]

if let worldMap = worldMap {
    configuration.initialWorldMap = worldMap
    print("Found saved world map.")
    self.showAlert("Found saved world map.", "")
} else {
    print("Move camera around to map your surrounding space.")
}

sceneView.session.run(configuration, options: options)
sceneView.delegate = self

When you place the chair, you need to add an anchor to your scene:
let anchor = ARAnchor(name: "chair", transform: transform)
session.add(anchor: anchor)
Then you implement the ARSCNViewDelegate and add your actual model to the anchor.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard anchor.name == "chair" else { return }
    node.addChildNode(chairNode)
}
So what is happening? When you save an ARWorldMap, it contains all anchors but no SceneKit data.
That's why we first add the anchor that later gets saved, and then add the geometry in the delegate. The delegate is called both when you manually add an anchor and when the system restores anchors after you run the session with an initial world map.
From the documentation:
The same ARSCNView delegate method renderer(_:didAdd:for:) fires both when you directly add an anchor to the session and when the session restores anchors from a world map. To determine which saved anchor represents the virtual object, this app uses the ARAnchor name property.
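For completeness, here's a minimal sketch of the saving/loading side (not from the original answer; worldMapURL is a file URL you choose). getCurrentWorldMap plus NSKeyedArchiver is the standard way to persist an ARWorldMap:
// Sketch: persist the current ARWorldMap to disk (worldMapURL is an assumed file URL).
sceneView.session.getCurrentWorldMap { worldMap, error in
    guard let worldMap = worldMap else {
        print("Can't get current world map: \(error?.localizedDescription ?? "unknown error")")
        return
    }
    do {
        let data = try NSKeyedArchiver.archivedData(withRootObject: worldMap, requiringSecureCoding: true)
        try data.write(to: worldMapURL, options: [.atomic])
    } catch {
        print("Can't save world map: \(error.localizedDescription)")
    }
}

// Sketch: load it back before running the session (this feeds the worldMap variable used above).
let worldMap: ARWorldMap? = {
    guard let data = try? Data(contentsOf: worldMapURL) else { return nil }
    return try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: data)
}()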


Where is the .camera AnchorEntity located?

When adding a child to my AnchorEntity(.camera), it appears as if the child is spawning behind my camera (meaning I can only see my child when I turn around). I have also tried to add a mesh to my anchor directly, but unfortunately ARKit / RealityKit does not render the mesh when you are inside of it (which, because it's centered around the camera, is theoretically always the case; however, it could also be that it's always located behind the screen [where the user is] and I'm never able to see it).
Also, oddly enough the child entity does not move with the camera AnchorEntity despite setting the translation transform to (0,0,0).
My two questions are:
Is the .camera anchor actually located right where the physical iPad / camera is located or is it located further back (perhaps where the user would normally hold the iPad)?
How do you get a child entity of the AnchorEntity(.camera) to move as the iPad / camera moves in real space?
Answer to the first question
In the RealityKit and ARKit frameworks, ARCamera has a pivot point like other entities (nodes) do, and it's located at the point where the lens is attached to the camera body (at bayonet level). This pivot can tether AnchorEntity(.camera). In other words, the virtual camera and the real-world camera have that pivot point at approximately the same place.
So, if you attach RealityKit's AnchorEntity to the camera's pivot, you place it at the coordinates where the camera's bayonet is located. And this AnchorEntity(.camera) will be tracked automatically, without a need to implement the session(_:didUpdate:) method.
However, if you attach ARKit's ARAnchor to the camera's pivot, you have to implement the session(_:didUpdate:) method to constantly update the position and orientation of that anchor for every ARFrame.
Answer to the second question
If you want to constantly update the model's position in RealityKit at 60 fps (as ARCamera moves and rotates), you need to use the following approach:
import ARKit
import RealityKit

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()

        let box = MeshResource.generateBox(size: 0.25)
        let material = SimpleMaterial(color: .systemPink, isMetallic: true)
        let boxEntity = ModelEntity(mesh: box, materials: [material])

        let cameraAnchor = AnchorEntity(.camera)        // ARCamera anchor
        cameraAnchor.addChild(boxEntity)
        arView.scene.addAnchor(cameraAnchor)

        boxEntity.transform.translation = [0, 0, -0.5]  // Box offset 0.5 m
    }
}
Or you can use ARKit's great old .currentFrame instance property in session(_:didUpdate:) delegate method:
extension ViewController: ARSessionDelegate {

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let transform = arView.session.currentFrame?.camera.transform
        else { return }

        let arkitAnchor = ARAnchor(transform: transform)
        arView.session.add(anchor: arkitAnchor)     // add to session

        let anchor = AnchorEntity(anchor: arkitAnchor)
        anchor.addChild(boxEntity)
        arView.scene.addAnchor(anchor)              // add to scene
    }
}

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!
    var boxEntity = ModelEntity(...)

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.session.delegate = self              // Session's delegate
    }
}
To find out how to save the ARCamera Pose over time, read the following post.
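As a rough sketch of what saving the pose over time involves (my addition, not taken from the linked post), you could accumulate the camera transform for every frame inside your existing ARSessionDelegate:
// Assumes a property like var cameraPoses: [simd_float4x4] = [] in your view controller.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    cameraPoses.append(frame.camera.transform)   // one 4x4 pose per rendered frame
}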

Multi-face detection in RealityKit

I have added content to the face anchor in Reality Composer. Later on, after loading the Experience that I created in Reality Composer, I create a face tracking session like this:
guard ARFaceTrackingConfiguration.isSupported else { return }
let configuration = ARFaceTrackingConfiguration()
configuration.maximumNumberOfTrackedFaces = ARFaceTrackingConfiguration.supportedNumberOfTrackedFaces
configuration.isLightEstimationEnabled = true
arView.session.delegate = self
arView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
It is not adding the content to all of the faces it detects, and I know it is detecting more than one face because the other faces occlude the content that is stuck to the first face. Is this a limitation of RealityKit, or am I missing something in the composer? It is actually pretty hard to miss something since it is so basic and simple.
Thanks.
You can't succeed in multi-face tracking in RealityKit if you use models with an embedded Face Anchor, i.e. models that came from Reality Composer's Face Tracking preset (you can use just one model with a .face anchor, not three). Or you MAY USE such models, but you need to delete these embedded AnchorEntity(.face) anchors. Although there's a better approach – simply load models in .usdz format.
Let's see what Apple documentation says about embedded anchors:
You can manually load and anchor Reality Composer scenes using code, like you do with other ARKit content. When you anchor a scene in code, RealityKit ignores the scene's anchoring information.
Reality Composer supports 5 anchor types: Horizontal, Vertical, Image, Face & Object. It displays a different set of guides for each anchor type to help you place your content. You can change the anchor type later if you choose the wrong option or change your mind about how to anchor your scene.
There are two options:
In a new Reality Composer project, deselect the Create with default content checkbox at the bottom left of the action sheet you see at startup.
In RealityKit code, delete the existing Face Anchor and assign a new one. The latter option is not great because you need to recreate the objects' positions from scratch:
boxAnchor.removeFromParent()
Nevertheless, I've achieved multi-face tracking using AnchorEntity() with the ARAnchor initializer inside the session(_:didUpdate:) instance method (just like SceneKit's renderer() instance method).
Here's my code:
import ARKit
import RealityKit

extension ViewController: ARSessionDelegate {

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let faceAnchor = anchors.first as? ARFaceAnchor
        else { return }

        let anchor1 = AnchorEntity(anchor: faceAnchor)
        let anchor2 = AnchorEntity(anchor: faceAnchor)

        anchor1.addChild(model01)
        anchor2.addChild(model02)

        arView.scene.anchors.append(anchor1)
        arView.scene.anchors.append(anchor2)
    }
}

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    let model01 = try! Entity.load(named: "angryFace")        // USDZ file
    let model02 = try! FacialExpression.loadSmilingFace()     // RC scene

    override func viewDidLoad() {
        super.viewDidLoad()

        arView.session.delegate = self

        guard ARFaceTrackingConfiguration.isSupported else {
            fatalError("Alas, Face Tracking isn't supported")
        }
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        let config = ARFaceTrackingConfiguration()
        config.maximumNumberOfTrackedFaces = 2
        arView.session.run(config)
    }
}

Is it possible to detect the same image twice or more with ARKit's image recognition?

I'm trying to build an app which detects the same image twice or more and adds these detected images to an array to do something with them later.
I added my image and its size to the AR Resources folder in Assets.xcassets. The image gets recognized and that works fine. But when I want to scan the same image twice, it only recognises one image.
I didn't find any specific documentation for this problem on the internet. I also suspect that it isn't possible with ARKit and that I probably need to use a machine learning model.
If anyone has encountered this problem and has a solution without CoreML and Vision, it would be appreciated. Otherwise I'll try to make it work with Vision and CoreML.
Below is the code which recognises an image and adds a transparent plane above it.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    let referenceImage = imageAnchor.referenceImage
    let imageName = referenceImage.name ?? "no name"
    print("Image Anchor: ", imageAnchor)

    let plane = SCNPlane(width: referenceImage.physicalSize.width, height: referenceImage.physicalSize.height)
    let planeNode = SCNNode(geometry: plane)
    planeNode.opacity = 0.20
    planeNode.eulerAngles.x = -.pi / 2
    node.addChildNode(planeNode)

    detectedImages.append(imageAnchor)

    DispatchQueue.main.async {
        self.label.text = "Image detected: \"\(imageName)\""
    }
}
The code you posted does not recognise anything; it is called when ARKit has already added a default node for an anchor that was added for a recognised image, and that indeed happens only once. To circumvent this limitation, follow the documentation:
Consider when to allow detection of each image to trigger (or repeat) AR interactions. ARKit adds an image anchor to a session exactly once for each reference image in the session configuration’s detectionImages array. If your AR experience adds virtual content to the scene when an image is detected, that action will by default happen only once. To allow the user to experience that content again without restarting your app, call the session’s remove(anchor:) method to remove the corresponding ARImageAnchor. After the anchor is removed, ARKit will add a new anchor the next time it detects the image.
https://developer.apple.com/documentation/arkit/recognizing_images_in_an_ar_experience
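As a minimal sketch of that advice (my illustration, reusing sceneView and detectedImages from the question's code), removing the anchor right after you have handled it lets ARKit detect the same image again:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }

    detectedImages.append(imageAnchor)            // your existing bookkeeping

    // Removing the anchor allows ARKit to add a new one the next time it sees this image.
    sceneView.session.remove(anchor: imageAnchor)
}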
Yes, it's possible.
You need to add an extra variable that stores the previously detected image anchor and then compare it with the current one.
If they do not match, the information of the previously detected image is replaced with the new one. This way, if you have more than one image, you can keep looking from one to the other and the content always shows up.
Last detected image anchor storage:
var lastImageAnchor: ARAnchor!

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }

    if self.lastImageAnchor != nil && self.lastImageAnchor != imageAnchor {
        self.sceneView.session.remove(anchor: self.lastImageAnchor)
    }

    let referenceImage = imageAnchor.referenceImage
    let imageName = referenceImage.name ?? "no name"

    // ... (insert the rest of your code here) ...

    self.lastImageAnchor = imageAnchor
}
This should work.

Check whether the ARReferenceImage is no longer visible in the camera's view

I would like to check whether the ARReferenceImage is no longer visible in the camera's view. At the moment I can check if the image's node is in the camera's view, but this node is still reported as visible when the ARReferenceImage is covered with another image or when the image is removed.
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    guard let node = self.currentImageNode else { return }

    if let pointOfView = sceneView.pointOfView {
        let isVisible = sceneView.isNode(node, insideFrustumOf: pointOfView)
        print("Is node visible: \(isVisible)")
    }
}
So I need to check if the image is no longer visible instead of the image's node visibility. But I can't find out if this is possible. The first screenshot shows three boxes that are added when the image beneath is found. When the found image is covered (see screenshot 2) I would like to remove the boxes.
I managed to fix the problem! I used a little bit of Maybe1's code and his concept for solving the problem, but in a different way. The following line of code is still used to reactivate the image recognition.
// Delete anchor from the session to reactivate the image recognition
sceneView.session.remove(anchor: anchor)
Let me explain. First we need to add some variables.
// The scnNodeBarn variable will be the node to be added when the barn image is found. Add another scnNode when you have another image.
var scnNodeBarn: SCNNode = SCNNode()
// This variable holds the currently added scnNode (in this case scnNodeBarn when the barn image is found)
var currentNode: SCNNode? = nil
// This variable holds the UUID of the found Image Anchor that is used to add a scnNode
var currentARImageAnchorIdentifier: UUID?
// This variable is used to call a function when there is no new anchor added for 0.6 seconds
var timer: Timer!
The complete code with comments below.
/// - Tag: ARImageAnchor-Visualizing
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    let referenceImage = imageAnchor.referenceImage

    // The following timer fires after 0.6 seconds, but every time an anchor is found the timer is restarted.
    // So when no ARImageAnchor is found, the timer completes, the current scene node is deleted and the variable is set to nil.
    DispatchQueue.main.async {
        if self.timer != nil {
            self.timer.invalidate()
        }
        self.timer = Timer.scheduledTimer(timeInterval: 0.6, target: self, selector: #selector(self.imageLost(_:)), userInfo: nil, repeats: false)
    }

    // Check whether a new image was found based on the ARImageAnchor identifier; if so, delete the current scene node and set the variable to nil.
    if self.currentARImageAnchorIdentifier != imageAnchor.identifier &&
       self.currentARImageAnchorIdentifier != nil &&
       self.currentNode != nil {
        // Found a new image
        self.currentNode!.removeFromParentNode()
        self.currentNode = nil
    }

    updateQueue.async {
        // If currentNode is nil, there is currently no scene node
        if self.currentNode == nil {
            switch referenceImage.name {
            case "barn":
                self.scnNodeBarn.transform = node.transform
                self.sceneView.scene.rootNode.addChildNode(self.scnNodeBarn)
                self.currentNode = self.scnNodeBarn
            default: break
            }
        }

        self.currentARImageAnchorIdentifier = imageAnchor.identifier

        // Delete the anchor from the session to reactivate the image recognition
        self.sceneView.session.remove(anchor: anchor)
    }
}
Delete the node when the timer finishes, indicating that no new ARImageAnchor was found.
@objc
func imageLost(_ sender: Timer) {
    self.currentNode!.removeFromParentNode()
    self.currentNode = nil
}
In this way, the currently added scnNode will be deleted when the image is covered or when a new image is found.
Unfortunately, this solution does not solve the positioning problem of images, because of the following:
ARKit doesn’t track changes to the position or orientation of each detected image.
I don't think this is currently possible.
From the Recognizing Images in an AR Experience documentation:
Design your AR experience to use detected images as a starting point for virtual content.
ARKit doesn’t track changes to the position or orientation of each detected image. If you try to place virtual content that stays attached to a detected image, that content may not appear to stay in place correctly. Instead, use detected images as a frame of reference for starting a dynamic scene.
New Answer for iOS 12.0
ARKit 2.0 and iOS 12 finally add this feature, either via ARImageTrackingConfiguration or via the ARWorldTrackingConfiguration.detectionImages property, which now also tracks the position of the images.
The Apple documentation for ARImageTrackingConfiguration lists the advantages of both methods:
With ARImageTrackingConfiguration, ARKit establishes a 3D space not by tracking the motion of the device relative to the world, but solely by detecting and tracking the motion of known 2D images in view of the camera. ARWorldTrackingConfiguration can also detect images, but each configuration has its own strengths:
World tracking has a higher performance cost than image-only tracking, so your session can reliably track more images at once with ARImageTrackingConfiguration.
Image-only tracking lets you anchor virtual content to known images only when those images are in view of the camera. World tracking with image detection lets you use known images to add virtual content to the 3D world, and continues to track the position of that content in world space even after the image is no longer in view.
World tracking works best in a stable, nonmoving environment. You can use image-only tracking to add virtual content to known images in more situations—for example, an advertisement inside a moving subway car.
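A minimal setup sketch for the image-only configuration (my addition; assumes the reference images live in an asset catalog group named "AR Resources" and that sceneView is your ARSCNView):
let configuration = ARImageTrackingConfiguration()
if let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) {
    configuration.trackingImages = referenceImages
    configuration.maximumNumberOfTrackedImages = 4   // number of images tracked simultaneously
}
sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])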
The correct way to check whether an image you are tracking is no longer being tracked by ARKit is to use the isTracked property of the ARImageAnchor in the renderer(_:didUpdate:for:) delegate method.
For that, I use the following struct:
struct TrackedImage {
    var name: String
    var node: SCNNode?
}
And then an array of that struct with the name of all the images.
var trackedImages : [TrackedImage] = [ TrackedImage(name: "image_1", node: nil) ]
Then, in renderer(_:didAdd:for:), add the new content to the scene and also assign the node to the corresponding element in the trackedImages array:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    // Check if the added anchor is a recognized ARImageAnchor
    if let imageAnchor = anchor as? ARImageAnchor {
        // Get the reference AR image
        let referenceImage = imageAnchor.referenceImage
        // Create a plane to match the detected image.
        let plane = SCNPlane(width: referenceImage.physicalSize.width, height: referenceImage.physicalSize.height)
        plane.firstMaterial?.diffuse.contents = UIColor(red: 1, green: 1, blue: 1, alpha: 0.5)
        // Create an SCNNode from the plane
        let planeNode = SCNNode(geometry: plane)
        planeNode.eulerAngles.x = -.pi / 2
        // Add the plane to the scene.
        node.addChildNode(planeNode)
        // Store the node with the corresponding tracked image
        for (index, trackedImage) in trackedImages.enumerated() {
            if trackedImage.name == referenceImage.name {
                trackedImages[index].node = planeNode
            }
        }
    }
}
Finally, in renderer(_:didUpdate:for:), we search for the anchor's reference image name in our array and check whether the isTracked property is false:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    if let imageAnchor = anchor as? ARImageAnchor {
        // Search for the corresponding node of the AR image anchor
        for trackedImage in trackedImages {
            if trackedImage.name == imageAnchor.referenceImage.name {
                // Check if tracking is lost on the AR image
                if imageAnchor.isTracked {
                    // The image is being tracked
                    trackedImage.node?.isHidden = false // Show or add content
                } else {
                    // The image is lost
                    trackedImage.node?.isHidden = true  // Hide or delete content
                }
                break
            }
        }
    }
}
This solution works when you want to track multiple images at the same time and know when any of them is lost.
Note: For this solution to work the maximumNumberOfTrackedImages in the AR configuration must be set to a nonzero number.
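For example (a sketch of mine; adjust to whichever configuration type your session actually runs):
let configuration = ARWorldTrackingConfiguration()
configuration.detectionImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil)
configuration.maximumNumberOfTrackedImages = 2   // must be nonzero for isTracked updates
sceneView.session.run(configuration)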
For what it's worth, I spent hours trying to figure out how to constantly check for image references. The didUpdate function was the answer. Then you just need to test whether the reference image is being tracked using the .isTracked property. At that point, you can set the .isHidden property to true or false. Here's my example:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    let trackedNode = node
    if let imageAnchor = anchor as? ARImageAnchor {
        if imageAnchor.isTracked {
            trackedNode.isHidden = false
            print("\(trackedNode.name)")
        } else {
            trackedNode.isHidden = true
            //print("\(trackedImageName)")
            print("No image in view")
        }
    }
}
I'm not entirely sure I have understood what you're asking (so apologies), but if I have then perhaps this might help...
It seems that for isNode(_:insideFrustumOf:) to work correctly, there must be some SCNGeometry associated with the node (an SCNNode alone will not suffice).
For example, if we do something like this in the delegate callback and save the added SCNNode into an array:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    //1. If Our Target Image Has Been Detected Then Get The Corresponding Anchor
    guard let currentImageAnchor = anchor as? ARImageAnchor else { return }

    //2. Print The Anchor ID & Its Associated Node
    print("""
    Anchor With ID Has Been Detected \(currentImageAnchor.identifier)
    Associated Node Details = \(node)
    """)

    //3. Store The Node
    imageTargets.append(node)
}
And then use the isNode(_:insideFrustumOf:) method, 99% of the time it will say that the node is in view even when we know it shouldn't be.
However if we do something like this (whereby we create a transparent marker node e.g. one that has some geometry):
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    //1. If Our Target Image Has Been Detected Then Get The Corresponding Anchor
    guard let currentImageAnchor = anchor as? ARImageAnchor else { return }

    //2. Print The Anchor ID & Its Associated Node
    print("""
    Anchor With ID Has Been Detected \(currentImageAnchor.identifier)
    Associated Node Details = \(node)
    """)

    //3. Create A Transparent Geometry
    node.geometry = SCNSphere(radius: 0.1)
    node.geometry?.firstMaterial?.diffuse.contents = UIColor.clear

    //4. Store The Node
    imageTargets.append(node)
}
And then call the following method, it does detect if the ARReferenceImage is in view:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    //1. Get The Current Point Of View
    guard let pointOfView = augmentedRealityView.pointOfView else { return }

    //2. Loop Through Our Image Target Markers
    for addedNode in imageTargets {
        if augmentedRealityView.isNode(addedNode, insideFrustumOf: pointOfView) {
            print("Node Is Visible")
        } else {
            print("Node Is Not Visible")
        }
    }
}
In regard to your other point about an SCNNode being occluded by another one, the Apple Docs state that isNode(_:insideFrustumOf:):
does not perform occlusion testing. That is, it returns true if the tested node lies within the specified viewing frustum regardless of whether that node's contents are obscured by other geometry.
Again, apologies if I haven't understood you correctly, but hopefully it might help to some extent...
Update:
Now that I fully understand your question, I agree with @orangenkopf that this isn't possible. As the docs state:
ARKit doesn't track changes to the position or orientation of each detected image.
From the Recognizing Images in an AR Experience documentation:
ARKit adds an image anchor to a session exactly once for each reference image in the session configuration's detectionImages array. If your AR experience adds virtual content to the scene when an image is detected, that action will by default happen only once. To allow the user to experience that content again without restarting your app, call the session's remove(anchor:) method to remove the corresponding ARImageAnchor. After the anchor is removed, ARKit will add a new anchor the next time it detects the image.
So, maybe you can find a workaround for your case:
Let's say we have this structure, which saves our detected ARImageAnchor and the associated virtual content:
struct ARImage {
    var anchor: ARImageAnchor
    var node: SCNNode
}
Then, when renderer(_:didAdd:for:) is called, you save the detected image into a temporary list of ARImage:
...
var tmpARImages: [ARImage] = []

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    let referenceImage = imageAnchor.referenceImage

    // If the ARImage does not exist yet
    if !tmpARImages.contains(where: { $0.anchor.referenceImage.name == referenceImage.name }) {
        let virtualContent = SCNNode(...)
        node.addChildNode(virtualContent)
        tmpARImages.append(ARImage(anchor: imageAnchor, node: virtualContent))
    }

    // Delete the anchor from the session to reactivate the image recognition
    sceneView.session.remove(anchor: anchor)
}
If you followed, the delegate function will loop endlessly as the image/marker keeps being re-detected (because we removed the anchor from the session).
The idea is to combine the image recognition loop, the detected images saved into the tmp list, and the sceneView.isNode(node, insideFrustumOf: pointOfView) function to determine whether the detected image/marker is no longer in view (a rough sketch follows).
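A possible wiring of that combination (my own sketch, reusing tmpARImages and the frustum check from the other answer):
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    guard let pointOfView = sceneView.pointOfView else { return }

    for image in tmpARImages {
        let inView = sceneView.isNode(image.node, insideFrustumOf: pointOfView)
        image.node.isHidden = !inView   // hide the content while its marker is out of view
    }
}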
I hope it was clear...
This code works only if you hold the device strictly horizontally or vertically. If you hold the iPhone tilted, or start to tilt it, this code doesn't work:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    //1. Get The Current Point Of View
    guard let pointOfView = augmentedRealityView.pointOfView else { return }

    //2. Loop Through Our Image Target Markers
    for addedNode in imageTargets {
        if augmentedRealityView.isNode(addedNode, insideFrustumOf: pointOfView) {
            print("Node Is Visible")
        } else {
            print("Node Is Not Visible")
        }
    }
}

ARAnchor for SCNNode

I'm trying to get a hold of the anchor after adding an SCNNode to the scene of an ARSCNView. My understanding is that the anchor should be created automatically, but I can't seem to retrieve it.
Below is how I add it. The node is saved in a variable called testNode.
let node = SCNNode()
node.geometry = SCNBox(width: 0.5, height: 0.1, length: 0.3, chamferRadius: 0)
node.geometry?.firstMaterial?.diffuse.contents = UIColor.green
sceneView.scene.rootNode.addChildNode(node)
testNode = node
Here is how I try to retrieve it. It always prints nil.
if let testNode = testNode {
    print(sceneView.anchor(for: testNode))
}
Does it not create the anchor? If it does: is there another method I can use to retrieve it?
If you have a look at the Apple Docs it states that:
To track the positions and orientations of real or virtual objects relative to the camera, create anchor objects and use the add(anchor:) method to add them to your AR session.
As such, I think that since you aren't using planeDetection, you would need to create an ARAnchor manually if one is needed:
Whenever you place a virtual object, always add an ARAnchor representing its position and orientation to the ARSession. After moving a virtual object, remove the anchor at the old position and create a new anchor at the new position. Adding an anchor tells ARKit that a position is important, improving world tracking quality in that area and helping virtual objects appear to stay in place relative to real-world surfaces.
You can read more about this in the following thread What's the difference between using ARAnchor to insert a node and directly insert a node?
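As a small sketch of the quoted advice about moving an already placed object (my own illustration; currentAnchor and newWorldTransform are hypothetical names, not from the original answer):
// After moving a virtual object: retire the old anchor and register the new pose.
if let oldAnchor = currentAnchor {
    augmentedRealitySession.remove(anchor: oldAnchor)
}
let newAnchor = ARAnchor(transform: newWorldTransform)   // newWorldTransform: the object's new pose
augmentedRealitySession.add(anchor: newAnchor)
currentAnchor = newAnchor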
Anyway, in order to get you started I began by creating an SCNNode called currentNode:
var currentNode: SCNNode?
Then using a UITapGestureRecognizer I created an ARAnchor manually at the touchLocation:
@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    //1. Get The Current Touch Location
    let currentTouchLocation = gesture.location(in: self.augmentedRealityView)

    //2. If We Have Hit A Feature Point Get The Result
    if let hitTest = augmentedRealityView.hitTest(currentTouchLocation, types: [.featurePoint]).last {
        //3. Create An Anchor At The World Transform
        let anchor = ARAnchor(transform: hitTest.worldTransform)

        //4. Add It To The Scene
        augmentedRealitySession.add(anchor: anchor)
    }
}
Having added the anchor, I then used the ARSCNViewDelegate callback to create the currentNode like so:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    if currentNode == nil {
        currentNode = SCNNode()
        let nodeGeometry = SCNBox(width: 0.2, height: 0.2, length: 0.2, chamferRadius: 0)
        nodeGeometry.firstMaterial?.diffuse.contents = UIColor.cyan
        currentNode?.geometry = nodeGeometry
        currentNode?.position = SCNVector3(anchor.transform.columns.3.x, anchor.transform.columns.3.y, anchor.transform.columns.3.z)
        node.addChildNode(currentNode!)
    }
}
In order to test that it works, e.g being able to log the corresponding ARAnchor, I changed the tapGesture method to include this at the end:
if let anchorHitTest = augmentedRealityView.hitTest(currentTouchLocation, options: nil).first {
    print(augmentedRealityView.anchor(for: anchorHitTest.node))
}
Which in my ConsoleLog prints:
Optional(<ARAnchor: 0x1c0535680 identifier="23CFF447-68E9-451D-A64D-17C972EB5F4B" transform=<translation=(-0.006610 -0.095542 -0.357221) rotation=(-0.00° 0.00° 0.00°)>>)
Hope it helps...