ARKit – How to put 3D Object on QRCode? - swift

I'm trying to place a 3D object on a QR code with ARKit.
To do that, I use an AVCaptureDevice to detect the QR code and establish its area, which gives me a CGRect.
Then I perform a hitTest on every point of the CGRect to get the average 3D coordinates, like so:
positionGiven = SCNVector3(0, 0, 0)
cpts = 0 // counter used for averaging (assumed declared as a Float property)
for column in Int(qrZone.origin.x)...2 * Int(qrZone.origin.x + qrZone.width) {
    for row in Int(qrZone.origin.y)...2 * Int(qrZone.origin.y + qrZone.height) {
        // Sample the QR zone at half-pixel steps and accumulate the hit-test results
        for result in sceneView.hitTest(CGPoint(x: CGFloat(column) / 2, y: CGFloat(row) / 2),
                                        types: [.existingPlaneUsingExtent, .featurePoint]) {
            positionGiven.x += result.worldTransform.columns.3.x
            positionGiven.y += result.worldTransform.columns.3.y
            positionGiven.z += result.worldTransform.columns.3.z
            cpts += 1
        }
    }
}
positionGiven.x /= cpts
positionGiven.y /= cpts
positionGiven.z /= cpts
But the hitTest doesn't return any results and freezes the camera, whereas a hitTest triggered by a touch on the screen works.
Do you have any idea why it's not working?
Do you have another idea that could help me achieve what I want to do?
I already thought about a 3D translation with CoreMotion, which can give me the tilt of the device, but that seems really tedious.
I also heard about ARWorldAlignmentCamera, which can lock the scene coordinates to match the orientation of the camera, but I don't know how to use it!
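For reference, a minimal sketch of how that alignment is set (assuming an ARWorldTrackingConfiguration and an ARSCNView named sceneView; in Swift the option is spelled ARConfiguration.WorldAlignment.camera):
let configuration = ARWorldTrackingConfiguration()
// Lock the scene's coordinate system to the orientation of the camera
configuration.worldAlignment = .camera
sceneView.session.run(configuration)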
Edit: I tried moving my 3D object every time I touch the screen and the hitTest is positive, and it's pretty accurate! I really don't understand why a hitTest over an area of pixels doesn't work...
Edit 2: Here is the code of the hitTest, which works with 2-5 touches on the screen:
@objc func touch(sender: UITapGestureRecognizer) {
    for result in sceneView.hitTest(CGPoint(x: sender.location(in: view).x, y: sender.location(in: view).y),
                                    types: [.existingPlaneUsingExtent, .featurePoint]) {
        // Pop-up message for testing
        alert("\(sender.location(in: view))", message: "\(result.worldTransform.columns.3)")
        // Move the 3D object to the new coordinates
        let objectList = sceneView.scene.rootNode.childNodes
        for object: SCNNode in objectList {
            object.removeFromParentNode()
        }
        addObject(SCNVector3(result.worldTransform.columns.3.x,
                             result.worldTransform.columns.3.y,
                             result.worldTransform.columns.3.z))
    }
}
Edit 3:
I managed to partially resolve my problem.
I take the camera's transform matrix (session.currentFrame.camera.transform) so that the object is in front of the camera.
Then I apply a translation on (x, y) using the position of the CGRect.
However, I can't translate along the z-axis because I don't have enough information,
and I will probably need an estimation of the z coordinate like the one the hitTest provides.
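For illustration, a minimal sketch of that Edit 3 approach (the helper name placeNode and its parameters are hypothetical; offsetX/offsetY would be derived from the QR code's CGRect, and depth is the missing z estimate):
func placeNode(_ node: SCNNode, in sceneView: ARSCNView, offsetX: Float, offsetY: Float, depth: Float) {
    guard let camera = sceneView.session.currentFrame?.camera else { return }
    // Translate relative to the camera: x/y from the QR rect, negative z = in front of the camera
    var translation = matrix_identity_float4x4
    translation.columns.3.x = offsetX
    translation.columns.3.y = offsetY
    translation.columns.3.z = -depth
    node.simdTransform = simd_mul(camera.transform, translation)
}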
Thanks in advance! :)

You could use Apple's Vision API to detect the QR code and place an anchor.
To start detecting QR codes, use:
var qrRequests = [VNRequest]()
var detectedDataAnchor: ARAnchor?
var processing = false

func startQrCodeDetection() {
    // Create a barcode detection request
    let request = VNDetectBarcodesRequest(completionHandler: self.requestHandler)
    // Set it to recognize QR codes only
    request.symbologies = [.QR]
    self.qrRequests = [request]
}
In the ARSession delegate's session(_:didUpdate:) method:
public func session(_ session: ARSession, didUpdate frame: ARFrame) {
    DispatchQueue.global(qos: .userInitiated).async {
        do {
            if self.processing {
                return
            }
            self.processing = true
            // Create a request handler using the captured image from the ARFrame
            let imageRequestHandler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                                            options: [:])
            // Process the request
            try imageRequestHandler.perform(self.qrRequests)
        } catch {
            // Allow the next frame to be processed if the request fails
            self.processing = false
        }
    }
}
Handle the Vision QR request and trigger the hit test:
func requestHandler(request: VNRequest, error: Error?) {
    // Get the first result out of the results, if there are any
    if let results = request.results, let result = results.first as? VNBarcodeObservation {
        // Make sure the QR code actually carries a payload
        guard let payload = result.payloadStringValue else { return }
        // Get the bounding box for the barcode and find its center
        var rect = result.boundingBox
        // Flip coordinates (Vision's origin is the bottom-left corner)
        rect = rect.applying(CGAffineTransform(scaleX: 1, y: -1))
        rect = rect.applying(CGAffineTransform(translationX: 0, y: 1))
        // Get the center
        let center = CGPoint(x: rect.midX, y: rect.midY)
        DispatchQueue.main.async {
            self.hitTestQrCode(center: center)
            self.processing = false
        }
    } else {
        self.processing = false
    }
}
func hitTestQrCode(center: CGPoint) {
    // `latestFrame` is assumed to hold the most recent ARFrame; ARFrame's hitTest
    // takes a point in normalized image coordinates, which is exactly what the
    // Vision bounding box provides
    if let hitTestResults = self.latestFrame?.hitTest(center, types: [.featurePoint]),
       let hitTestResult = hitTestResults.first {
        if let detectedDataAnchor = self.detectedDataAnchor,
           let node = self.sceneView.node(for: detectedDataAnchor) {
            // Move the existing node to the new position
            node.transform = SCNMatrix4(hitTestResult.worldTransform)
        } else {
            // Create an anchor. The node will be created in delegate methods
            self.detectedDataAnchor = ARAnchor(transform: hitTestResult.worldTransform)
            self.sceneView.session.add(anchor: self.detectedDataAnchor!)
        }
    }
}
Then handle adding the node when the anchor is added:
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    // If this is our anchor, create a node
    if self.detectedDataAnchor?.identifier == anchor.identifier {
        // Note: the radius is in meters, so 1.0 gives a very large sphere
        let sphere = SCNSphere(radius: 1.0)
        sphere.firstMaterial?.diffuse.contents = UIColor.red
        let sphereNode = SCNNode(geometry: sphere)
        sphereNode.transform = SCNMatrix4(anchor.transform)
        return sphereNode
    }
    return nil
}
Source

Related

How to move and rotate SCNNode using ARKit and Gesture Recognizer?

I am working on an AR-based iOS app using ARKit (SceneKit). I used the Apple sample code https://developer.apple.com/documentation/arkit/handling_3d_interaction_and_ui_controls_in_augmented_reality as the base for this. Using it, I am able to move or rotate the whole virtual object.
But I want to select and move/rotate a child node of the virtual object with a finger, similar to how the whole virtual object is moved/rotated.
I tried the two links below, but they only move the child node along a particular axis instead of following the finger freely wherever it moves.
ARKit - Drag a node along a specific axis (not on a plane)
Dragging SCNNode in ARKit Using SceneKit
I also tried replacing the virtual object, which is an SCNReferenceNode, with an SCNNode so that whatever functionality exists for the virtual object applies to the child node as well, but it is not working.
Can anyone please help me freely move/rotate not only the virtual object but also a child node of it?
Please find the code I am currently using below:
let tapPoint: CGPoint = gesture.location(in: sceneView)
let result = sceneView.hitTest(tapPoint, options: nil)
if result.count == 0 {
    return
}
let scnHitResult: SCNHitTestResult? = result.first
movedObject = scnHitResult?.node //.parent?.parent
let hitResults = self.sceneView.hitTest(tapPoint, types: .existingPlane)
if !hitResults.isEmpty {
    guard let hitResult = hitResults.last else { return }
    movedObject?.position = SCNVector3Make(hitResult.worldTransform.columns.3.x,
                                           hitResult.worldTransform.columns.3.y,
                                           hitResult.worldTransform.columns.3.z)
}
To move an object:
Perform a hitTest to check where you have touched, detect which plane you touched, and get a position. Then move your SCNNode to that position by setting node.position with an SCNVector3.
Code:
@objc func panDetected(recognizer: UIPanGestureRecognizer) {
    let loc = recognizer.location(in: self.arSceneView) // the touch location; missing from the original snippet
    let hitResult = self.arSceneView.hitTest(loc, types: .existingPlane)
    if !hitResult.isEmpty {
        guard let hitResult = hitResult.last else { return }
        self.yourNode.position = SCNVector3Make(hitResult.worldTransform.columns.3.x,
                                                hitResult.worldTransform.columns.3.y,
                                                hitResult.worldTransform.columns.3.z)
    }
}
The above code is enough to move your node over a detected plane, anywhere you touch, and not just along a single axis.
Rotating a node to follow your gesture is a very difficult task, and I have worked on a solution for quite some time without ever reaching a perfect output.
However, I came across this repository on GitHub, which allows you to do just that with a very impressive result:
https://github.com/Xartec/ScreenSpaceRotationAndPan
The Swift version of the code you need to rotate your node using your gesture would be:
var previousLoc: CGPoint?
var touchCount: Int?

@objc func panDetected(recognizer: UIPanGestureRecognizer) {
    let loc = recognizer.location(in: self.view)
    var delta = recognizer.translation(in: self.view)
    if recognizer.state == .began {
        previousLoc = loc
        touchCount = recognizer.numberOfTouches
    } else if recognizer.state == .changed {
        guard let previous = previousLoc else { return }
        delta = CGPoint(x: 2 * (loc.x - previous.x), y: 2 * (loc.y - previous.y))
        previousLoc = loc
        if touchCount != recognizer.numberOfTouches {
            return
        }
        // Build a rotation matrix from the pan delta (1/100 radian per point)
        let rotX = SCNMatrix4Rotate(SCNMatrix4Identity, Float((1.0 / 100) * delta.y), 1, 0, 0)
        let rotY = SCNMatrix4Rotate(SCNMatrix4Identity, Float((1.0 / 100) * delta.x), 0, 1, 0)
        let rotMatrix = SCNMatrix4Mult(rotX, rotY)
        // Remove the node's own translation
        let transMatrix = SCNMatrix4MakeTranslation(yourNode.position.x, yourNode.position.y, yourNode.position.z)
        self.yourNode.transform = SCNMatrix4Mult(self.yourNode.transform, SCNMatrix4Invert(transMatrix))
        // Move into the parent's rotation frame
        let parentNodeTranslationMatrix = SCNMatrix4MakeTranslation((self.yourNode.parent?.worldPosition.x)!, (self.yourNode.parent?.worldPosition.y)!, (self.yourNode.parent?.worldPosition.z)!)
        let parentNodeMatWOTrans = SCNMatrix4Mult((self.yourNode.parent?.worldTransform)!, SCNMatrix4Invert(parentNodeTranslationMatrix))
        self.yourNode.transform = SCNMatrix4Mult(self.yourNode.transform, parentNodeMatWOTrans)
        // Apply the rotation in the camera's (screen-space) frame
        let camorbitNodeTransMat = SCNMatrix4MakeTranslation((self.arSceneView.pointOfView?.worldPosition.x)!, (self.arSceneView.pointOfView?.worldPosition.y)!, (self.arSceneView.pointOfView?.worldPosition.z)!)
        let camorbitNodeMatWOTrans = SCNMatrix4Mult((self.arSceneView.pointOfView?.worldTransform)!, SCNMatrix4Invert(camorbitNodeTransMat))
        self.yourNode.transform = SCNMatrix4Mult(self.yourNode.transform, SCNMatrix4Invert(camorbitNodeMatWOTrans))
        self.yourNode.transform = SCNMatrix4Mult(self.yourNode.transform, rotMatrix)
        self.yourNode.transform = SCNMatrix4Mult(self.yourNode.transform, camorbitNodeMatWOTrans)
        // Undo the frame changes, restoring the node's translation
        self.yourNode.transform = SCNMatrix4Mult(self.yourNode.transform, SCNMatrix4Invert(parentNodeMatWOTrans))
        self.yourNode.transform = SCNMatrix4Mult(self.yourNode.transform, transMatrix)
    }
}

ARKit detecting intersection between planes

I am using ARKit (with SceneKit) and am trying to find a way to get the intersection between an ARReferenceImage and a horizontal ARPlaneAnchor, so I can display a 3D character on the surface directly in front of the detected image (e.g., spawning inside a red circle marked on the surface in the original screenshot).
At the moment I am able to get the character to spawn in front of the detected image; however, the character floats in the air instead of standing on the surface.
let realWorldPosition = SCNVector3Make(anchor.transform.columns.3.x, anchor.transform.columns.3.y, anchor.transform.columns.3.z)
let hitTest = self.sceneView.scene.rootNode.hitTestWithSegment(from: self.sceneView.scene.rootNode.worldPosition, to: realWorldPosition, options: nil)
overlayNode.position = SCNVector3Make((hitTest.first?.worldCoordinates.x)!, 0, (hitTest.first?.worldCoordinates.z)!)
self.sceneView.scene.rootNode.addChildNode(overlayNode)
Any help on this would be greatly appreciated, thanks!
Example project
I think you were on the right track using the hitTestWithSegment function to detect an intersection between the ARImageAnchor and the ARPlaneAnchor.
Rather than trying to explain each step of my attempt at an answer, I have provided code that is fully commented, so it should be fairly self-explanatory.
My example works fairly well (although it's certainly not perfect) and will definitely need some tweaking.
For example, you will need to determine more accurately the distance from the ARReferenceImage to the ARPlaneAnchor, etc.
I can get the model (a Pokémon) to place at the correct level and fairly close to the front of the ARReferenceImage, although it will need tweaking.
Having said this, I think this will be a fairly good base for you to start refining the code and getting more accurate results.
Of note, however, is that I have only enabled one ARPlaneAnchor to be detected (for simplicity's sake) and have assumed that you will be detecting a plane in front of your image marker.
I haven't taken rotation or anything like that into account. And of course, based on your proposed scenario, it also assumes your image will be on a desk or some other flat surface.
Anyway, here is my answer (hopefully it should be fairly self-explanatory):
import UIKit
import ARKit

//-----------------------
//MARK: ARSCNViewDelegate
//-----------------------

extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        //1. If We Have Detected Our ImageTarget Then Create A Plane To Visualize It
        if let currentImageAnchor = anchor as? ARImageAnchor {
            createReferenceImagePlaneForNode(currentImageAnchor, node: node)
            allowTracking = true
        }
        //2. If We Have Detected A Horizontal Plane Then Create One
        if let currentPlaneAnchor = anchor as? ARPlaneAnchor {
            if planeNode == nil && !createdModel { createReferencePlaneForNode(currentPlaneAnchor, node: node) }
        }
    }

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        //1. Check To See Whether An ARPlaneAnchor Has Been Updated
        guard let anchor = anchor as? ARPlaneAnchor,
            //2. Check It Is Our PlaneNode
            let existingPlane = planeNode,
            //3. Get The Geometry Of The PlaneNode
            let planeGeometry = existingPlane.geometry as? SCNPlane else { return }
        //4. Adjust Its Size & Position
        planeGeometry.width = CGFloat(anchor.extent.x)
        planeGeometry.height = CGFloat(anchor.extent.z)
        planeNode?.position = SCNVector3Make(anchor.center.x, 0.01, anchor.center.z)
    }

    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        //1. Detect The Intersection Of The ARPlaneAnchor & ARImageAnchor
        if allowTracking { detectIntersectionOfImageTarget() }
    }
}
//---------------------------------------
//MARK: Model Generation & Identification
//---------------------------------------

extension ViewController {

    /// Detects If We Have Intersected A Valid Image Target
    func detectIntersectionOfImageTarget() {
        //If We Haven't Created Our Model Then Check To See If We Have Detected An Existing Plane
        if !createdModel {
            //a. Perform A HitTest On The Center Of The Screen For Any Existing Planes
            guard let planeHitTest = self.augmentedRealityView.hitTest(screenCenter, types: .existingPlaneUsingExtent).first,
                let planeAnchor = planeHitTest.anchor as? ARPlaneAnchor else { return }
            //b. Get The Transform Of The ARPlaneAnchor
            let x = planeAnchor.transform.columns.3.x
            let y = planeAnchor.transform.columns.3.y
            let z = planeAnchor.transform.columns.3.z
            //c. Create The Anchor's Vector
            let anchorVector = SCNVector3(x, y, z)
            //d. Perform Another HitTest From The ImageAnchor Vector To The Anchor's Vector
            if let _ = self.augmentedRealityView.scene.rootNode.hitTestWithSegment(from: imageAnchorVector, to: anchorVector, options: nil).first?.node {
                //e. If We Haven't Created The Model Then Place It As Soon As An Intersection Occurs
                if createdModel == false {
                    //f. Load The Model
                    loadModelAtVector(SCNVector3(imageAnchorVector.x, y, imageAnchorVector.z))
                    createdModel = true
                    planeNode?.removeFromParentNode()
                }
            }
        }
    }
}
class ViewController: UIViewController {

    //1. Reference To Our ImageTarget Bundle
    let AR_BUNDLE = "AR Resources"
    //2. Vector To Store The Position Of Our Detected Image
    var imageAnchorVector: SCNVector3!
    //3. Variables To Allow Tracking & To Determine Whether Our Model Has Been Placed
    var allowTracking = false
    var createdModel = false
    //4. Create A Reference To Our ARSCNView In Our Storyboard Which Displays The Camera Feed
    @IBOutlet weak var augmentedRealityView: ARSCNView!
    //5. Create Our ARWorld Tracking Configuration
    let configuration = ARWorldTrackingConfiguration()
    //6. Create Our Session
    let augmentedRealitySession = ARSession()
    //7. ARReference Images
    lazy var staticReferenceImages: Set<ARReferenceImage> = {
        let images = ARReferenceImage.referenceImages(inGroupNamed: AR_BUNDLE, bundle: nil)
        return images!
    }()
    //8. Screen Center Reference
    var screenCenter: CGPoint!
    //9. PlaneNode
    var planeNode: SCNNode?
    //--------------------
    //MARK: View LifeCycle
    //--------------------

    override func viewDidLoad() {
        super.viewDidLoad()
        //1. Get Reference To The Center Of The Screen For RayCasting
        DispatchQueue.main.async { self.screenCenter = CGPoint(x: self.view.bounds.width / 2, y: self.view.bounds.height / 2) }
        //2. Setup Our ARSession
        setupARSessionWithStaticImages()
    }

    override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() }
    //---------------------------------
    //MARK: ARImageAnchor Visualization
    //---------------------------------

    /// Creates An SCNPlane For Visualizing The Detected ARImageAnchor
    ///
    /// - Parameters:
    ///   - imageAnchor: ARImageAnchor
    ///   - node: SCNNode
    func createReferenceImagePlaneForNode(_ imageAnchor: ARImageAnchor, node: SCNNode) {
        //1. Get The Target's Width & Height
        let width = imageAnchor.referenceImage.physicalSize.width
        let height = imageAnchor.referenceImage.physicalSize.height
        //2. Create A Plane Geometry To Cover The ARImageAnchor
        let planeNode = SCNNode()
        let planeGeometry = SCNPlane(width: width, height: height)
        planeGeometry.firstMaterial?.diffuse.contents = UIColor.white
        planeNode.opacity = 0.5
        planeNode.geometry = planeGeometry
        //3. Rotate The PlaneNode To Horizontal
        planeNode.eulerAngles.x = -.pi / 2
        //4. The Node Is Centered In The Anchor (0,0,0)
        node.addChildNode(planeNode)
        //5. Store The Vector Of The ARImageAnchor
        imageAnchorVector = SCNVector3(imageAnchor.transform.columns.3.x, imageAnchor.transform.columns.3.y, imageAnchor.transform.columns.3.z)
        let fadeOutAction = SCNAction.fadeOut(duration: 5)
        planeNode.runAction(fadeOutAction)
    }
    //-------------------------
    //MARK: Plane Visualization
    //-------------------------

    /// Creates An SCNPlane For Visualizing The Detected ARPlaneAnchor
    ///
    /// - Parameters:
    ///   - anchor: ARPlaneAnchor
    ///   - node: SCNNode
    func createReferencePlaneForNode(_ anchor: ARPlaneAnchor, node: SCNNode) {
        //1. Get The Anchor's Width & Height
        let width = CGFloat(anchor.extent.x)
        let height = CGFloat(anchor.extent.z)
        //2. Create A Plane Geometry To Cover The ARPlaneAnchor
        planeNode = SCNNode()
        let planeGeometry = SCNPlane(width: width, height: height)
        planeGeometry.firstMaterial?.diffuse.contents = UIColor.white
        planeNode?.opacity = 0.5
        planeNode?.geometry = planeGeometry
        //3. Rotate The PlaneNode To Horizontal
        planeNode?.eulerAngles.x = -.pi / 2
        //4. The Node Is Centered In The Anchor (0,0,0)
        node.addChildNode(planeNode!)
    }
    //-------------------
    //MARK: Model Loading
    //-------------------

    /// Loads Our Model Based On The Resulting Vector Of Our ARAnchor
    ///
    /// - Parameter worldVector: SCNVector3
    func loadModelAtVector(_ worldVector: SCNVector3) {
        let modelPath = "ARModels.scnassets/Scatterbug.scn"
        //1. Get The Reference To Our SCNScene & Get The Model's Root Node
        guard let model = SCNScene(named: modelPath),
            let pokemonModel = model.rootNode.childNode(withName: "RootNode", recursively: false) else { return }
        //2. Scale The Scatterbug & Set Its Position
        pokemonModel.scale = SCNVector3(0.003, 0.003, 0.003)
        pokemonModel.position = worldVector
        //3. Add It To Our SCNView
        augmentedRealityView.scene.rootNode.addChildNode(pokemonModel)
    }
    //---------------
    //MARK: ARSession
    //---------------

    /// Sets Up The ARSession With Static ARReferenceImages
    func setupARSessionWithStaticImages() {
        //1. Set Our Configuration
        configuration.detectionImages = staticReferenceImages
        configuration.planeDetection = .horizontal
        //2. Run The Configuration
        augmentedRealitySession.run(configuration, options: [.resetTracking, .removeExistingAnchors])
        //3. Set The Session & Delegate
        augmentedRealityView?.session = augmentedRealitySession
        self.augmentedRealityView?.delegate = self
    }
}
Hope it points you in the right direction...

Placing an object in front of camera at Touch Location

The following code places the node in front of the camera, but always at the center, 10 cm away from the camera position. I want to place the node 10 cm away in the z-direction but at the x and y coordinates of the touch, so touching different parts of the screen should place the node 10 cm in front of the camera at the x and y location of the touch, not always at the center.
var cameraRelativePosition = SCNVector3(0, 0, -0.1)
let sphere = SCNNode()
sphere.geometry = SCNSphere(radius: 0.0025)
sphere.geometry?.firstMaterial?.diffuse.contents = UIColor.white
Service.addChildNode(sphere, toNode: self.sceneView.scene.rootNode,
                     inView: self.sceneView,
                     cameraRelativePosition: cameraRelativePosition)
Service.swift
class Service: NSObject {
    static func addChildNode(_ node: SCNNode, toNode: SCNNode, inView: ARSCNView, cameraRelativePosition: SCNVector3) {
        guard let currentFrame = inView.session.currentFrame else { return }
        let camera = currentFrame.camera
        let transform = camera.transform
        var translationMatrix = matrix_identity_float4x4
        translationMatrix.columns.3.x = cameraRelativePosition.x
        translationMatrix.columns.3.y = cameraRelativePosition.y
        translationMatrix.columns.3.z = cameraRelativePosition.z
        let modifiedMatrix = simd_mul(transform, translationMatrix)
        node.simdTransform = modifiedMatrix
        toNode.addChildNode(node)
    }
}
The result should look exactly like this : https://justaline.withgoogle.com
We can use the unprojectPoint(_:) method of SCNSceneRenderer (SCNView and ARSCNView both conform to this protocol) to convert a point on the screen to a 3D point.
When tapping the screen we can calculate a ray this way:
func getRay(for point: CGPoint, in view: SCNSceneRenderer) -> SCNVector3 {
    // Unproject the tap location at the far (z = 1) and near (z = 0) clipping planes
    let farPoint = view.unprojectPoint(SCNVector3(Float(point.x), Float(point.y), 1))
    let nearPoint = view.unprojectPoint(SCNVector3(Float(point.x), Float(point.y), 0))
    let ray = SCNVector3Make(farPoint.x - nearPoint.x, farPoint.y - nearPoint.y, farPoint.z - nearPoint.z)
    // Normalize the ray
    let length = sqrt(ray.x * ray.x + ray.y * ray.y + ray.z * ray.z)
    return SCNVector3Make(ray.x / length, ray.y / length, ray.z / length)
}
The ray has a length of 1, so multiplying it by 0.1 and adding the camera location gives the point you were searching for. For example:
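As a minimal sketch of that last step (the helper worldPosition(for:in:distance:) is not from the original answer), combining the ray with the camera position:
func worldPosition(for point: CGPoint, in sceneView: ARSCNView, distance: Float) -> SCNVector3? {
    // The camera's location is the point of view's world position
    guard let camera = sceneView.pointOfView else { return nil }
    let ray = getRay(for: point, in: sceneView)
    // Walk `distance` meters along the ray (0.1 for the 10 cm in the question)
    return SCNVector3Make(camera.worldPosition.x + ray.x * distance,
                          camera.worldPosition.y + ray.y * distance,
                          camera.worldPosition.z + ray.z * distance)
}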

Unable to differentiate between plane detected by ARKit and a digital object to be placed using HitTest

I'm fairly new to iOS Swift programming. I'm using ARKit to build a very basic app that detects a horizontal plane and places, translates, rotates, modifies, or deletes an object on it.
My main concern is to differentiate between the plane detected by ARKit and a digital object that I've placed. My thinking was to use hitTest(_:options:) to select the object (if any) and hitTest(_:types:) to select the plane through a tap gesture. I'm attaching the relevant code snippet below.
@objc func tapped(_ gesture: UITapGestureRecognizer) {
    let sceneView = gesture.view as! ARSCNView
    let location = gesture.location(in: sceneView)
    let hitTestOptions: [SCNHitTestOption: Any] = [.boundingBoxOnly: true]
    let existingNodeHitTest = sceneView.hitTest(location, options: hitTestOptions)
    if let existingNode = existingNodeHitTest.first?.node {
        // Move, rotate, modify or delete the object
    } else {
        // Option to add other objects
        let hitTest = sceneView.hitTest(location, types: .existingPlaneUsingExtent)
        if !hitTest.isEmpty {
            let node = findNode(at: location)
            if node !== selectedNode {
                self.addItems(hitTestResult: hitTest.first!)
            }
        }
    }
}
func addItems(hitTestResult: ARHitTestResult) {
    let scene = SCNScene(named: "BuildingModels.scnassets/model/model.scn")
    let itemNode = (scene?.rootNode.childNode(withName: "SketchUp", recursively: false))!
    let transform = hitTestResult.worldTransform
    let position = SCNVector3(transform.columns.3.x, transform.columns.3.y, transform.columns.3.z)
    itemNode.position = position
    // self.sceneView.scene.lightingEnvironment.contents = scene.lightingEnvironment.contents
    self.sceneView.scene.rootNode.addChildNode(itemNode)
    selectedNode = itemNode
}
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    let gridNode = createGrid(planeAnchor: planeAnchor)
    node.addChildNode(gridNode)
}

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    node.enumerateChildNodes { (childNode, _) in
        childNode.removeFromParentNode()
    }
    let gridNode = createGrid(planeAnchor: planeAnchor)
    node.addChildNode(gridNode)
}
When I run the code, hitTest(_:options:) returns the detected plane. Is there any way to select only the SCNNodes (objects) that I place, and not the detected plane? Am I missing something? Any help is highly appreciated.
Thanks,
Sourabh.
Looking at your question, you are already halfway there.
The way to handle this in its entirety is to make use of the following hit-test functions within your UITapGestureRecognizer function:
(1) An ARSCNHitTest, which:
"Searches for real-world objects or AR anchors in the captured camera image corresponding to a point in the SceneKit view."
(2) An SCNHitTest, which:
"Looks for SCNGeometry objects along the ray you specify. For each intersection between the ray and a geometry, SceneKit creates a hit-test result to provide information about both the SCNNode object containing the geometry and the location of the intersection on the geometry's surface."
Using your UITapGestureRecognizer as an example, therefore, you can differentiate between an ARPlaneAnchor (detected plane) and any SCNNode within your scene like so:
@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    //1. Get The Current Touch Location
    let currentTouchLocation = gesture.location(in: self.augmentedRealityView)
    //2. Perform An ARSCNHitTest To See If We Have Hit An ARPlaneAnchor
    if let planeHitTest = augmentedRealityView.hitTest(currentTouchLocation, types: .existingPlane).first,
       let planeAnchor = planeHitTest.anchor as? ARPlaneAnchor {
        print("User Has Tapped On An Existing Plane = \(planeAnchor.identifier)")
        return
    }
    //3. Perform An SCNHitTest To See If We Have Hit An SCNNode
    if let nodeHitTest = augmentedRealityView.hitTest(currentTouchLocation, options: nil).first {
        let nodeTapped = nodeHitTest.node
        print("An SCNNode Has Been Tapped = \(nodeTapped)")
        return
    }
}
If you make use of the name property for any of your SCNNodes, this will also help you further, e.g.:
if let name = nodeTapped.name {
    print("An SCNNode Named \(name) Has Been Tapped")
}
Additionally, if you ONLY want to detect objects you have added (e.g. SCNNodes), then you can simply remove part two of the gestureRecognizer function.
Hope it helps...
To fix this issue, you should loop through your scene's nodes; after that, you can manipulate the node you want. Example:
for node in sceneView.scene.rootNode.childNodes {
    if node.name == "yourNodeName" {
        // do your manipulations
    }
}
Don't forget to add a name to your nodes. Example:
node.name = "yourNodeName"
I hope it helped!

Add SCNNode after rotating rootNode

I'm trying to add a node (a sphere) to a body model, but it doesn't work properly after I rotate the model through a pan gesture.
Here's how I'm adding the node (using a long-press gesture):
func addSphere(sender: UILongPressGestureRecognizer) {
    switch sender.state {
    case .Began:
        let location = sender.locationInView(bodyView)
        let hitResults = bodyView.hitTest(location, options: nil)
        if hitResults.count > 0 {
            let result = hitResults.first!
            let secondSphereGeometry = SCNSphere(radius: 0.015)
            secondSphereGeometry.firstMaterial?.diffuse.contents = UIColor.redColor()
            let secondSphereNode = SCNNode(geometry: secondSphereGeometry)
            let vpWithZ = SCNVector3(x: Float(result.worldCoordinates.x), y: Float(result.worldCoordinates.y), z: Float(result.worldCoordinates.z))
            secondSphereNode.position = vpWithZ
            bodyView.scene!.rootNode.addChildNode(secondSphereNode)
        }
    default:
        break
    }
}
Here is how I rotate the view:
func rotateGesture(sender: UIPanGestureRecognizer) {
    let translation = sender.translationInView(sender.view)
    var newZAngle = (Float)(translation.x) * (Float)(M_PI) / 180.0
    newZAngle += currentZAngle
    bodyView.scene!.rootNode.transform = SCNMatrix4MakeRotation(newZAngle, 0, 0, 1)
    if sender.state == .Ended {
        currentZAngle = newZAngle
    }
}
And to load the 3D model I just do:
bodyView.scene = SCNScene(named: "male_body.dae") // bodyView is a SCNView in the storyboard
I found something related to the worldTransform property and also the function convertPosition:toNode:, but couldn't find an example that works well.
The problem is that if I rotate the model, the spheres are not positioned properly; they're always positioned as if the model were in its initial state.
If I turn the body and long-press its arm (on the side), the sphere is added somewhere floating in front of the body.
I don't know how to fix this. I'd appreciate it if someone could help me. Thanks!
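A hedged sketch of the convertPosition route mentioned in the question, reusing the names from addSphere above: since the spheres are parented to the rotated rootNode, the hit-test's world coordinates would need to be converted into rootNode's local space first.
// Untested sketch: express the hit's world coordinates in the rotated
// rootNode's local space before parenting the sphere to it
let localPosition = bodyView.scene!.rootNode.convertPosition(result.worldCoordinates, fromNode: nil)
secondSphereNode.position = localPosition
bodyView.scene!.rootNode.addChildNode(secondSphereNode)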