How to place a 3D object at GPS coordinates with ARKit - Swift

I'm working with Project Dent, trying to put a 3D object at absolute GPS coordinates.
The README shows how to put a 2D information annotation into AR space, but I can't get it to place a 3D object at GPS coordinates.
Project Dent doesn't use a standard ARSCNView directly, which makes it hard to follow most of the tutorials out there; it uses SceneLocationView, based on ARCL.
Here's the sample code for a 2D annotation:
let coordinate = CLLocationCoordinate2D(latitude: 51.504571, longitude: -0.019717)
let location = CLLocation(coordinate: coordinate, altitude: 300)
let view = UIView() // or a custom UIView subclass
let annotationNode = LocationAnnotationNode(location: location, view: view)
sceneLocationView.addLocationNodeWithConfirmedLocation(locationNode: annotationNode)
Here's what I've been trying to do to get it to work with a 3D object:
let coordinate = CLLocationCoordinate2D(latitude: 51.504571, longitude: -0.019717)
let location = CLLocation(coordinate: coordinate, altitude: 300)
let box = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)
let objectNode = LocationNode(location: location, SCNbox: box)
sceneLocationView.addLocationNodeWithConfirmedLocation(locationNode: objectNode)
Ideally, I'd like this code to simply place a 3D box at these GPS coordinates in AR space.
Sadly, I can't even get it to build at present.
As an update to this, I've done the following: created a new class in Nodes, based on LocationNode, called ThreeDNode:
open class ThreeDNode: LocationNode {

    // Class for placing 3D objects in AR space
    public let threeDObjectNode: LocationNode

    public init(location: CLLocation?, scene: SCNScene) {
        let boxGeometry = SCNBox(width: 1, height: 1, length: 1, chamferRadius: 0)
        //let boxNode = SCNNode(geometry: boxGeometry)
        threeDObjectNode = LocationNode(location: location)
        threeDObjectNode.geometry = boxGeometry
        threeDObjectNode.removeFlicker()
        super.init(location: location)
        addChildNode(threeDObjectNode)
    }

    required public init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
and then in POIViewController, tried to place the 3D object in AR space with the following code:
//example using 3d box object
let coordinate2 = CLLocationCoordinate2D(latitude: 52.010339, longitude: -8.351157)
let location2 = CLLocation(coordinate: coordinate2, altitude: 300)
let asset = SCNScene(named: "art.scnassets/ship.scn")!
let object = ThreeDNode(location: location2, scene: asset)
//add to scene with confirmed location
sceneLocationView.addLocationNodeWithConfirmedLocation(locationNode: object)
No joy :( Any help, much appreciated.

You'll want to create a location node, and add your box as its geometry, or as a child node:
let boxNode = SCNNode(geometry: box)
let locationNode = LocationNode(location: location)
locationNode.addChildNode(boxNode)
sceneLocationView.addLocationNodeWithConfirmedLocation(locationNode: locationNode)
I didn't run this code in Xcode, so you may need to tweak it.
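Putting that together with the coordinates from your question, a complete sketch might look like this (untested, assuming the same ARCL API used above):
// Build the location from the question's coordinates
let coordinate = CLLocationCoordinate2D(latitude: 51.504571, longitude: -0.019717)
let location = CLLocation(coordinate: coordinate, altitude: 300)

// Create the box geometry and wrap it in a plain SCNNode
let box = SCNBox(width: 1, height: 1, length: 1, chamferRadius: 0)
box.firstMaterial?.diffuse.contents = UIColor.red
let boxNode = SCNNode(geometry: box)

// Attach the box to a LocationNode and add it at the confirmed location
let locationNode = LocationNode(location: location)
locationNode.addChildNode(boxNode)
sceneLocationView.addLocationNodeWithConfirmedLocation(locationNode: locationNode)
Note that a 0.1 m box will be practically invisible at real-world distances; a box a meter or more across is much easier to spot while testing.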

Related

How can I add GARAnchor to RealityKit’s scene?

I'm using the newest Geospatial API from ARCore and trying to build with SwiftUI and RealityKit. I have the SDK and API key set up properly, and all coordinate and accuracy info is updated properly every frame. But whenever I use the GARSession.createAnchor method, it returns a GARAnchor. I used the GARAnchor's transform property to create an ARAnchor(transform: GARAnchor.transform), then I created an AnchorEntity with this ARAnchor and added the AnchorEntity to ARView.scene. However, the model never showed up. I have checked the coordinates and altitude, still no luck at all. Is there anyone who could help me out? Thank you so much.
do {
    let garAnchor = try parent.garSession.createAnchor(coordinate: CLLocationCoordinate2D(latitude: xx.xxxxxxx, longitude: xx.xxxxxx), altitude: xxx, eastUpSouthQAnchor: simd_quatf(ix: 0, iy: 0, iz: 0, r: 0))
    if garAnchor.hasValidTransform && garAnchor.trackingState == .tracking {
        let arAnchor = ARAnchor(transform: garAnchor.transform)
        let anchorEntity = AnchorEntity(anchor: arAnchor)
        let mesh = MeshResource.generateSphere(radius: 2)
        let material = SimpleMaterial(color: .red, isMetallic: true)
        let sphere = ModelEntity(mesh: mesh, materials: [material])
        anchorEntity.addChild(sphere)
        parent.arView.scene.addAnchor(anchorEntity)
        print("Anchor has valid transform, and anchor is tracking")
    } else {
        print("Anchor has invalid transform")
    }
} catch {
    print("Add garAnchor failed: \(error.localizedDescription)")
}
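One thing worth double-checking in the snippet above: simd_quatf(ix: 0, iy: 0, iz: 0, r: 0) is an all-zero quaternion, which is not a valid rotation. The identity (no rotation) quaternion has r: 1:
let eastUpSouthQAnchor = simd_quatf(ix: 0, iy: 0, iz: 0, r: 1) // identity: no rotation relative to the east-up-south frame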

ARKit detecting intersection between planes

I am using ARKit (with SceneKit) and am trying to find a way to get the intersection between an ARReferenceImage and a horizontal ARPlaneAnchor, in order to display a 3D character on the surface directly in front of the detected image, e.g. spawning inside the red circle (see image below).
At the moment I am able to get the character to spawn in front of the detected image; however, the character floats in the air instead of standing on the surface.
let realWorldPosition = SCNVector3Make(anchor.transform.columns.3.x, anchor.transform.columns.3.y, anchor.transform.columns.3.z)
let hitTest = self.sceneView.scene.rootNode.hitTestWithSegment(from: self.sceneView.scene.rootNode.worldPosition, to: realWorldPosition, options: nil)
overlayNode.position = SCNVector3Make((hitTest.first?.worldCoordinates.x)!, 0, (hitTest.first?.worldCoordinates.z)!)
self.sceneView.scene.rootNode.addChildNode(overlayNode)
Any help on this would be greatly appreciated, thanks!
Example project
I think you were on the right lines using the hitTestWithSegment function to detect an intersection between the ARImageAnchor and the ARPlaneAnchor.
Rather than trying to explain each step of my attempt at an answer, I have provided code which is fully commented, so it should be fairly self-explanatory.
My example works fairly well (although it's certainly not perfect) and will definitely need some tweaking.
For example, you will need to look at determining more accurately the distance from the ARReferenceImage to the ARPlaneAnchor, etc.
I can get the model (a Pokemon) to place at the correct level and fairly close to the front of the ARReferenceImage, although it will need tweaking.
Having said this, I think this will be a fairly good base for you to start refining the code and getting more accurate results.
Of note, however, is that I have only enabled one ARPlaneAnchor to be detected (just for simplicity's sake) and have assumed that you will be detecting a plane in front of your image marker.
I haven't taken into account rotation or anything like that. And of course, based on your proposed scenario, it also assumes your image would be on a desk or some other flat surface.
Anyway, here is my answer (hopefully it is fairly self-explanatory):
import UIKit
import ARKit

//-----------------------
//MARK: ARSCNViewDelegate
//-----------------------

extension ViewController: ARSCNViewDelegate{

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {

        //1. If We Have Detected Our ImageTarget Then Create A Plane To Visualize It
        if let currentImageAnchor = anchor as? ARImageAnchor {
            createReferenceImagePlaneForNode(currentImageAnchor, node: node)
            allowTracking = true
        }

        //2. If We Have Detected A Horizontal Plane Then Create One
        if let currentPlaneAnchor = anchor as? ARPlaneAnchor{
            if planeNode == nil && !createdModel{ createReferencePlaneForNode(currentPlaneAnchor, node: node) }
        }
    }

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {

        //1. Check To See Whether An ARPlaneAnchor Has Been Updated
        guard let anchor = anchor as? ARPlaneAnchor,
            //2. Check It Is Our PlaneNode
            let existingPlane = planeNode,
            //3. Get The Geometry Of The PlaneNode
            let planeGeometry = existingPlane.geometry as? SCNPlane else { return }

        //4. Adjust Its Size & Position
        planeGeometry.width = CGFloat(anchor.extent.x)
        planeGeometry.height = CGFloat(anchor.extent.z)
        planeNode?.position = SCNVector3Make(anchor.center.x, 0.01, anchor.center.z)
    }

    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {

        //1. Detect The Intersection Of The ARPlaneAnchor & ARImageAnchor
        if allowTracking { detectIntersectionOfImageTarget() }
    }
}
//---------------------------------------
//MARK: Model Generation & Identification
//---------------------------------------

extension ViewController {

    /// Detects If We Have Intersected A Valid Image Target
    func detectIntersectionOfImageTarget(){

        //If We Haven't Created Our Model Then Check To See If We Have Detected An Existing Plane
        if !createdModel{

            //a. Perform A HitTest On The Center Of The Screen For Any Existing Planes
            guard let planeHitTest = self.augmentedRealityView.hitTest(screenCenter, types: .existingPlaneUsingExtent).first,
                let planeAnchor = planeHitTest.anchor as? ARPlaneAnchor else { return }

            //b. Get The Transform Of The ARPlaneAnchor
            let x = planeAnchor.transform.columns.3.x
            let y = planeAnchor.transform.columns.3.y
            let z = planeAnchor.transform.columns.3.z

            //c. Create The Anchor's Vector
            let anchorVector = SCNVector3(x, y, z)

            //d. Perform Another HitTest From The ImageAnchor Vector To The Anchor's Vector
            if let _ = self.augmentedRealityView.scene.rootNode.hitTestWithSegment(from: imageAnchorVector, to: anchorVector, options: nil).first?.node {

                //e. If We Haven't Created The Model Then Place It As Soon As An Intersection Occurs
                if createdModel == false{

                    //f. Load The Model
                    loadModelAtVector(SCNVector3(imageAnchorVector.x, y, imageAnchorVector.z))
                    createdModel = true
                    planeNode?.removeFromParentNode()
                }
            }
        }
    }
}
class ViewController: UIViewController {

    //1. Reference To Our ImageTarget Bundle
    let AR_BUNDLE = "AR Resources"

    //2. Vector To Store The Position Of Our Detected Image
    var imageAnchorVector: SCNVector3!

    //3. Variables To Allow Tracking & To Determine Whether Our Model Has Been Placed
    var allowTracking = false
    var createdModel = false

    //4. Create A Reference To Our ARSCNView In Our Storyboard Which Displays The Camera Feed
    @IBOutlet weak var augmentedRealityView: ARSCNView!

    //5. Create Our ARWorld Tracking Configuration
    let configuration = ARWorldTrackingConfiguration()

    //6. Create Our Session
    let augmentedRealitySession = ARSession()

    //7. ARReference Images
    lazy var staticReferenceImages: Set<ARReferenceImage> = {
        let images = ARReferenceImage.referenceImages(inGroupNamed: AR_BUNDLE, bundle: nil)
        return images!
    }()

    //8. Screen Center Reference
    var screenCenter: CGPoint!

    //9. PlaneNode
    var planeNode: SCNNode?

    //--------------------
    //MARK: View LifeCycle
    //--------------------

    override func viewDidLoad() {
        super.viewDidLoad()

        //1. Get Reference To The Center Of The Screen For RayCasting
        DispatchQueue.main.async { self.screenCenter = CGPoint(x: self.view.bounds.width/2, y: self.view.bounds.height/2) }

        //2. Setup Our ARSession
        setupARSessionWithStaticImages()
    }

    override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() }
    //---------------------------------
    //MARK: ARImageAnchor Visualization
    //---------------------------------

    /// Creates An SCNPlane For Visualizing The Detected ARImageAnchor
    ///
    /// - Parameters:
    ///   - imageAnchor: ARImageAnchor
    ///   - node: SCNNode
    func createReferenceImagePlaneForNode(_ imageAnchor: ARImageAnchor, node: SCNNode){

        //1. Get The Target's Width & Height
        let width = imageAnchor.referenceImage.physicalSize.width
        let height = imageAnchor.referenceImage.physicalSize.height

        //2. Create A Plane Geometry To Cover The ARImageAnchor
        let planeNode = SCNNode()
        let planeGeometry = SCNPlane(width: width, height: height)
        planeGeometry.firstMaterial?.diffuse.contents = UIColor.white
        planeNode.opacity = 0.5
        planeNode.geometry = planeGeometry

        //3. Rotate The PlaneNode To Horizontal
        planeNode.eulerAngles.x = -.pi/2

        //4. The Node Is Centered In The Anchor (0,0,0)
        node.addChildNode(planeNode)

        //5. Store The Vector Of The ARImageAnchor & Fade Out The Visualization Plane
        imageAnchorVector = SCNVector3(imageAnchor.transform.columns.3.x, imageAnchor.transform.columns.3.y, imageAnchor.transform.columns.3.z)

        let fadeOutAction = SCNAction.fadeOut(duration: 5)
        planeNode.runAction(fadeOutAction)
    }
    //-------------------------
    //MARK: Plane Visualization
    //-------------------------

    /// Creates An SCNPlane For Visualizing The Detected ARPlaneAnchor
    ///
    /// - Parameters:
    ///   - anchor: ARPlaneAnchor
    ///   - node: SCNNode
    func createReferencePlaneForNode(_ anchor: ARPlaneAnchor, node: SCNNode){

        //1. Get The Anchor's Width & Height
        let width = CGFloat(anchor.extent.x)
        let height = CGFloat(anchor.extent.z)

        //2. Create A Plane Geometry To Cover The ARPlaneAnchor
        planeNode = SCNNode()
        let planeGeometry = SCNPlane(width: width, height: height)
        planeGeometry.firstMaterial?.diffuse.contents = UIColor.white
        planeNode?.opacity = 0.5
        planeNode?.geometry = planeGeometry

        //3. Rotate The PlaneNode To Horizontal
        planeNode?.eulerAngles.x = -.pi/2

        //4. The Node Is Centered In The Anchor (0,0,0)
        node.addChildNode(planeNode!)
    }
    //-------------------
    //MARK: Model Loading
    //-------------------

    /// Loads Our Model Based On The Resulting Vector Of Our ARAnchor
    ///
    /// - Parameter worldVector: SCNVector3
    func loadModelAtVector(_ worldVector: SCNVector3) {

        let modelPath = "ARModels.scnassets/Scatterbug.scn"

        //1. Get The Reference To Our SCNScene & Get The Model Root Node
        guard let model = SCNScene(named: modelPath),
            let pokemonModel = model.rootNode.childNode(withName: "RootNode", recursively: false) else { return }

        //2. Scale The Scatterbug & Set Its Position
        pokemonModel.scale = SCNVector3(0.003, 0.003, 0.003)
        pokemonModel.position = worldVector

        //3. Add It To Our SCNView
        augmentedRealityView.scene.rootNode.addChildNode(pokemonModel)
    }
    //---------------
    //MARK: ARSession
    //---------------

    /// Sets Up The ARSession With Static ARReferenceImages
    func setupARSessionWithStaticImages(){

        //1. Set Our Configuration
        configuration.detectionImages = staticReferenceImages
        configuration.planeDetection = .horizontal

        //2. Run The Configuration
        augmentedRealitySession.run(configuration, options: [.resetTracking, .removeExistingAnchors])

        //3. Set The Session & Delegate
        augmentedRealityView?.session = augmentedRealitySession
        self.augmentedRealityView?.delegate = self
    }
}
Hope it points you in the right direction...

Get name of reference image swift ARKit 1.5

Does anyone know how I can get the name of the reference image read by the camera in AR?
I think the anchor reads the identifier and connects it to the reference image, which returns its name and physical size.
I want to have one AR Resources folder where I can put different images and then, based on what the camera recognizes, display one model instead of another.
Thank you very much!
An ARReferenceImage has the following properties which you can access:
var name: String?
A descriptive name for the image.
var physicalSize: CGSize
The real-world dimensions, in meters, of the image.
As such, in order to get the name of the reference image and other properties you can use the following delegate callback:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {

    //1. If Our Target Image Has Been Detected Then Get The Corresponding Anchor
    guard let currentImageAnchor = anchor as? ARImageAnchor else { return }

    //2. Get The Target's Name
    let name = currentImageAnchor.referenceImage.name!

    //3. Get The Target's Width & Height
    let width = currentImageAnchor.referenceImage.physicalSize.width
    let height = currentImageAnchor.referenceImage.physicalSize.height

    //4. Log The Reference Image's Information
    print("""
    Image Name = \(name)
    Image Width = \(width)
    Image Height = \(height)
    """)
}
Note that this code assumes that your images have a name set in the AR Resources asset catalog, e.g.:
You can then put logic within the callback or call another function which adds an SCNNode or SCNScene at the transform of the ARImageAnchor e.g:
//1. Create An SCNNode
let nodeHolder = SCNNode()

//2. Determine Which ImageTarget Has Been Detected
if name == "ImageOne"{
    let nodeGeometry = SCNBox(width: 0.02, height: 0.02, length: 0.02, chamferRadius: 0)
    nodeGeometry.firstMaterial?.diffuse.contents = UIColor.cyan
    nodeHolder.geometry = nodeGeometry
}else if name == "ImageTwo"{
    let nodeGeometry = SCNSphere(radius: 0.02)
    nodeGeometry.firstMaterial?.diffuse.contents = UIColor.cyan
    nodeHolder.geometry = nodeGeometry
}

//3. Add The SCNNode At The Position Of The Anchor
nodeHolder.position = SCNVector3(currentImageAnchor.transform.columns.3.x,
                                 currentImageAnchor.transform.columns.3.y,
                                 currentImageAnchor.transform.columns.3.z)

//4. Add It To The Scene
augmentedRealityView?.scene.rootNode.addChildNode(nodeHolder)
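If you have more than a couple of targets, the same logic can be driven from a dictionary keyed on the image name rather than an if/else chain. A small sketch along the same lines (the names "ImageOne" and "ImageTwo" are placeholders matching the example above):
//1. Map Each Image Name To A Geometry Builder
let geometryForImageName: [String: () -> SCNGeometry] = [
    "ImageOne": { SCNBox(width: 0.02, height: 0.02, length: 0.02, chamferRadius: 0) },
    "ImageTwo": { SCNSphere(radius: 0.02) }
]

//2. Look Up The Detected Name & Place The Corresponding Node
if let name = currentImageAnchor.referenceImage.name,
    let makeGeometry = geometryForImageName[name] {
    let nodeHolder = SCNNode(geometry: makeGeometry())
    nodeHolder.position = SCNVector3(currentImageAnchor.transform.columns.3.x,
                                     currentImageAnchor.transform.columns.3.y,
                                     currentImageAnchor.transform.columns.3.z)
    augmentedRealityView?.scene.rootNode.addChildNode(nodeHolder)
}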

ARAnchor for SCNNode

I'm trying to get a hold of the anchor after adding an SCNNode to the scene of an ARSCNView. My understanding is that the anchor should be created automatically, but I can't seem to retrieve it.
Below is how I add it. The node is saved in a variable called testNode.
let node = SCNNode()
node.geometry = SCNBox(width: 0.5, height: 0.1, length: 0.3, chamferRadius: 0)
node.geometry?.firstMaterial?.diffuse.contents = UIColor.green
sceneView.scene.rootNode.addChildNode(node)
testNode = node
Here is how I try to retrieve it. It always prints nil.
if let testNode = testNode {
    print(sceneView.anchor(for: testNode))
}
Does it not create the anchor? If it does: is there another method I can use to retrieve it?
If you have a look at the Apple Docs it states that:
To track the positions and orientations of real or virtual objects
relative to the camera, create anchor objects and use the add(anchor:)
method to add them to your AR session.
As such, I think that since you aren't using plane detection, you would need to create an ARAnchor manually if one is needed:
Whenever you place a virtual object, always add an ARAnchor representing its position and orientation to the ARSession. After moving a virtual object, remove the anchor at the old position and create a new anchor at the new position. Adding an anchor tells ARKit that a position is important, improving world tracking quality in that area and helping virtual objects appear to stay in place relative to real-world surfaces.
You can read more about this in the following thread What's the difference between using ARAnchor to insert a node and directly insert a node?
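Following that advice, if you later move a placed object you would remove the old anchor and add a fresh one at the new transform. A minimal sketch (the moveAnchor helper and the stored currentAnchor property are my own naming, not part of ARKit):
var currentAnchor: ARAnchor?

func moveAnchor(to transform: simd_float4x4, in session: ARSession) {
    //1. Remove The Stale Anchor If One Exists
    if let oldAnchor = currentAnchor { session.remove(anchor: oldAnchor) }

    //2. Create A New Anchor At The New Transform & Add It To The Session
    let newAnchor = ARAnchor(transform: transform)
    session.add(anchor: newAnchor)
    currentAnchor = newAnchor
}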
Anyway, in order to get you started I began by creating an SCNNode called currentNode:
var currentNode: SCNNode?
Then using a UITapGestureRecognizer I created an ARAnchor manually at the touchLocation:
@objc func handleTap(_ gesture: UITapGestureRecognizer){

    //1. Get The Current Touch Location
    let currentTouchLocation = gesture.location(in: self.augmentedRealityView)

    //2. If We Have Hit A Feature Point Get The Result
    if let hitTest = augmentedRealityView.hitTest(currentTouchLocation, types: [.featurePoint]).last {

        //3. Create An Anchor At The World Transform
        let anchor = ARAnchor(transform: hitTest.worldTransform)

        //4. Add It To The Session
        augmentedRealitySession.add(anchor: anchor)
    }
}
Having added the anchor, I then used the ARSCNViewDelegate callback to create the currentNode like so:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {

    if currentNode == nil{
        currentNode = SCNNode()
        let nodeGeometry = SCNBox(width: 0.2, height: 0.2, length: 0.2, chamferRadius: 0)
        nodeGeometry.firstMaterial?.diffuse.contents = UIColor.cyan
        currentNode?.geometry = nodeGeometry
        currentNode?.position = SCNVector3(anchor.transform.columns.3.x, anchor.transform.columns.3.y, anchor.transform.columns.3.z)
        node.addChildNode(currentNode!)
    }
}
In order to test that it works, e.g. being able to log the corresponding ARAnchor, I changed the tapGesture method to include this at the end:
if let anchorHitTest = augmentedRealityView.hitTest(currentTouchLocation, options: nil).first {
    print(augmentedRealityView.anchor(for: anchorHitTest.node))
}
Which in my console log prints:
Optional(<ARAnchor: 0x1c0535680 identifier="23CFF447-68E9-451D-A64D-17C972EB5F4B" transform=<translation=(-0.006610 -0.095542 -0.357221) rotation=(-0.00° 0.00° 0.00°)>>)
Hope it helps...

Stop sharing node's geometry with its clone programmatically

When you create a copy of an object, its geometry and the geometry's properties (materials, ...) are shared with that object. In the Xcode Scene Editor you can easily disable that by setting Geometry Sharing (under the Attributes Inspector) to Unshare.
I want to achieve the same thing programmatically, but can't find any similar property in the SceneKit documentation. I have found a similar post where someone proposed copying the object, its geometry and its material. I tried doing so but had no success.
This is the relevant part of my code:
let randomColors: [UIColor] = [UIColor.blue, UIColor.red, UIColor.yellow, UIColor.gray]
let obstacleScene = SCNScene(named: "art.scnassets/Scenes/obstacleNormal.scn")
let obstacle = obstacleScene?.rootNode.childNode(withName: "obstacle", recursively: true)

for i in 1...15 {
    let randomPosition = SCNVector3(x: Float(i) * 3.5, y: 0.15, z: sign * Float(arc4random_uniform(UInt32(Int(playgroundZ/2 - 2.0))) + 1))
    let randomColor = randomColors[Int(arc4random_uniform(UInt32(3)))]
    let obstacleCopy = obstacle?.clone()
    obstacleCopy?.position = randomPosition
    obstacleCopy?.geometry?.materials.first?.diffuse.contents = randomColor
    obstacleCopy?.eulerAngles = SCNVector3(x: 10.0 * Float(i), y: Float(30 - i), z: 5.0 * Float(i) * sign) // somewhat random
    obstacleCopy?.physicsBody = SCNPhysicsBody(type: .dynamic, shape: nil)
    obstacleCopy?.physicsBody?.isAffectedByGravity = false
    obstacleCopy?.physicsBody?.categoryBitMask = PhysicsCategory.obstacle
    obstacleCopy?.physicsBody?.collisionBitMask = PhysicsCategory.none
    obstacleCopy?.physicsBody?.contactTestBitMask = PhysicsCategory.car1 | PhysicsCategory.car2 | PhysicsCategory.barrier
    obstacleArray.append(obstacleCopy!)
    raceScene!.rootNode.addChildNode(obstacleCopy!)
}
I want to set different attributes on those objects but can't, because their geometry is shared. I tried both copying and cloning the object, but couldn't see any difference between the two.
Is there a property you can use to achieve geometry unsharing, similar to the option in the Scene Editor, OR should the method of also copying the object's geometry and its materials work?
According to the clone() API reference, you can copy the geometry after cloning, this will create a new unshared geometry for your node.
let newNode = node.clone()
newNode.geometry = node.geometry?.copy() as? SCNGeometry
The material attached to the copied geometry is still the same one being used on the original, so any changes will still affect both.
Either create a new material, or make a copy.
if let newMaterial = newNode.geometry?.materials.first?.copy() as? SCNMaterial {
    //make changes to material
    newNode.geometry?.materials = [newMaterial]
}
You could deep clone an SCNNode, which creates an unshared geometry and unshared materials from the original.
fileprivate func deepCopyNode(node: SCNNode) -> SCNNode {
    let clone = node.clone()
    clone.geometry = node.geometry?.copy() as? SCNGeometry
    if let g = node.geometry {
        clone.geometry?.materials = g.materials.map { $0.copy() as! SCNMaterial }
    }
    return clone
}
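Applied to the loop from your question, each obstacle would then go through this helper instead of a plain clone(), so the color change only affects that copy (a sketch reusing the variable names from your snippet):
// Deep copy: geometry & materials are now unshared
let obstacleCopy = deepCopyNode(node: obstacle!)
obstacleCopy.position = randomPosition
// This now tints only this copy, not every clone
obstacleCopy.geometry?.materials.first?.diffuse.contents = randomColor
raceScene!.rootNode.addChildNode(obstacleCopy)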