I use the following method to apply a constant force to my SCNNode in the scene:
func applyForceToDrone(powerApply: Double, isHovering: Bool, gravity: SCNVector3) {
    let nodo = arrayDrone[0] // load the first drone in the scene
    guard let lhBlade = nodo.childNode(withName: "Rotor_L_2", recursively: true) else { fatalError() }
    guard let rhBlade = nodo.childNode(withName: "Rotor_R_2", recursively: true) else { fatalError() }
    if isHovering {
        print("gravity = \(gravity.x), \(gravity.y), \(gravity.z)")
        let force = SCNVector3(gravity.x, abs(gravity.y), gravity.z)
        nodo.physicsBody?.applyForce(force, asImpulse: false)
    } else {
        print("apply power \(powerApply)")
        let positionBladeLH = lhBlade.position
        let positionBladeRH = rhBlade.position
        let force = SCNVector3(0, powerApply, 0)
        nodo.physicsBody?.applyForce(force, at: positionBladeLH, asImpulse: false)
        nodo.physicsBody?.applyForce(force, at: positionBladeRH, asImpulse: false)
        let alt = nodo.position.y // position is wrong ... it is always the starting position
        print(alt)
    }
}
I then use this func in the renderer method to fly my drone.
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {}
I want to read the current position of the node every time the force is applied.
How can I get the updated position while the renderer is applying the force?
I tried with:
let alt = nodo.position.y
but it always gives me the starting position of the node, not the one after the force is applied.
Thanks
Use the node's presentation object. Instead of
nodo.position.y
use
nodo.presentation.position.y // local space
or
nodo.presentation.worldPosition.y // world space
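As a minimal sketch (reusing the arrayDrone lookup from your own method), reading the simulated altitude inside the renderer callback might look like:

func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    let nodo = arrayDrone[0]
    // The physics engine writes its per-frame results to the presentation
    // node; nodo.position keeps the model value you last set yourself.
    let alt = nodo.presentation.worldPosition.y
    print("current altitude: \(alt)")
}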
I am using ARKit (with SceneKit) and am trying to find a way to get the intersection between an ARReferenceImage and a horizontal ARPlaneAnchor, so I can display a 3D character on the surface directly in front of the detected image (e.g., spawn inside the red circle, see image below).
At the moment I am able to get the character to spawn in front of the detected image; however, the character floats in the air instead of standing on the surface.
let realWorldPosition = SCNVector3Make(anchor.transform.columns.3.x, anchor.transform.columns.3.y, anchor.transform.columns.3.z)
let hitTest = self.sceneView.scene.rootNode.hitTestWithSegment(from: self.sceneView.scene.rootNode.worldPosition, to: realWorldPosition, options: nil)
overlayNode.position = SCNVector3Make((hitTest.first?.worldCoordinates.x)!, 0, (hitTest.first?.worldCoordinates.z)!)
self.sceneView.scene.rootNode.addChildNode(overlayNode)
Any help on this would be greatly appreciated, thanks!
Example project
I think you were on the right lines using the hitTestWithSegment function to detect an intersection between the ARImageAnchor and the ARPlaneAnchor.
Rather than trying to explain each step of my attempt at an answer, I have provided code which is fully commented, so it should be fairly self explanatory.
My example works fairly well (although it's certainly not perfect) and will definitely need some tweaking.
For example, you will need to look at determining the distance from the ARReferenceImage to the ARPlaneAnchor more accurately, etc.
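For that distance check, a helper could be as simple as this sketch (plain Euclidean distance between two world-space vectors):

func distance(from a: SCNVector3, to b: SCNVector3) -> Float {
    // Straight-line distance between e.g. the ARImageAnchor position
    // and the ARPlaneAnchor position.
    let dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z
    return sqrtf(dx * dx + dy * dy + dz * dz)
}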
I can get the model (a Pokemon) to place at the correct level and fairly close to the front of the ARReferenceImage, although it will need tweaking.
Having said this, I think this will be a fairly good base for you to start refining the code and getting more accurate results.
Of note, however, is that I have only enabled one ARPlaneAnchor to be detected (just for simplicity's sake) and have assumed that you will be detecting a plane in front of your image marker.
I haven't taken rotation or anything like that into account. And of course, based on your proposed scenario, it also assumes your image would be on a desk or some other flat surface.
Anyway, here is my answer (hopefully it should be fairly self explanatory):
import UIKit
import ARKit

//-----------------------
//MARK: ARSCNViewDelegate
//-----------------------

extension ViewController: ARSCNViewDelegate{

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {

        //1. If We Have Detected Our ImageTarget Then Create A Plane To Visualize It
        if let currentImageAnchor = anchor as? ARImageAnchor {
            createReferenceImagePlaneForNode(currentImageAnchor, node: node)
            allowTracking = true
        }

        //2. If We Have Detected A Horizontal Plane Then Create One
        if let currentPlaneAnchor = anchor as? ARPlaneAnchor{
            if planeNode == nil && !createdModel{ createReferencePlaneForNode(currentPlaneAnchor, node: node) }
        }
    }

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {

        //1. Check To See Whether An ARPlaneAnchor Has Been Updated
        guard let anchor = anchor as? ARPlaneAnchor,
            //2. Check It Is Our PlaneNode
            let existingPlane = planeNode,
            //3. Get The Geometry Of The PlaneNode
            let planeGeometry = existingPlane.geometry as? SCNPlane else { return }

        //4. Adjust Its Size & Position
        planeGeometry.width = CGFloat(anchor.extent.x)
        planeGeometry.height = CGFloat(anchor.extent.z)
        planeNode?.position = SCNVector3Make(anchor.center.x, 0.01, anchor.center.z)
    }

    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {

        //1. Detect The Intersection Of The ARPlaneAnchor & ARImageAnchor
        if allowTracking { detectIntersectionOfImageTarget() }
    }
}
//---------------------------------------
//MARK: Model Generation & Identification
//---------------------------------------

extension ViewController {

    /// Detects If We Have Intersected A Valid Image Target
    func detectIntersectionOfImageTarget(){

        //If We Haven't Created Our Model Then Check To See If We Have Detected An Existing Plane
        if !createdModel{

            //a. Perform A HitTest On The Center Of The Screen For Any Existing Planes
            guard let planeHitTest = self.augmentedRealityView.hitTest(screenCenter, types: .existingPlaneUsingExtent).first,
                let planeAnchor = planeHitTest.anchor as? ARPlaneAnchor else { return }

            //b. Get The Transform Of The ARPlaneAnchor
            let x = planeAnchor.transform.columns.3.x
            let y = planeAnchor.transform.columns.3.y
            let z = planeAnchor.transform.columns.3.z

            //c. Create The Anchor's Vector
            let anchorVector = SCNVector3(x, y, z)

            //d. Perform Another HitTest From The ImageAnchor Vector To The Anchor's Vector
            if let _ = self.augmentedRealityView.scene.rootNode.hitTestWithSegment(from: imageAnchorVector, to: anchorVector, options: nil).first?.node {

                //e. If We Haven't Created The Model Then Place It As Soon As An Intersection Occurs
                if createdModel == false{

                    //f. Load The Model
                    loadModelAtVector(SCNVector3(imageAnchorVector.x, y, imageAnchorVector.z))
                    createdModel = true
                    planeNode?.removeFromParentNode()
                }
            }
        }
    }
}
class ViewController: UIViewController {

    //1. Reference To Our ImageTarget Bundle
    let AR_BUNDLE = "AR Resources"

    //2. Vector To Store The Position Of Our Detected Image
    var imageAnchorVector: SCNVector3!

    //3. Variables To Allow Tracking & To Determine Whether Our Model Has Been Placed
    var allowTracking = false
    var createdModel = false

    //4. Create A Reference To Our ARSCNView In Our Storyboard Which Displays The Camera Feed
    @IBOutlet weak var augmentedRealityView: ARSCNView!

    //5. Create Our ARWorld Tracking Configuration
    let configuration = ARWorldTrackingConfiguration()

    //6. Create Our Session
    let augmentedRealitySession = ARSession()

    //7. ARReference Images
    lazy var staticReferenceImages: Set<ARReferenceImage> = {
        let images = ARReferenceImage.referenceImages(inGroupNamed: AR_BUNDLE, bundle: nil)
        return images!
    }()

    //8. Screen Center Reference
    var screenCenter: CGPoint!

    //9. PlaneNode
    var planeNode: SCNNode?

    //--------------------
    //MARK: View LifeCycle
    //--------------------

    override func viewDidLoad() {
        super.viewDidLoad()

        //1. Get Reference To The Center Of The Screen For RayCasting
        DispatchQueue.main.async { self.screenCenter = CGPoint(x: self.view.bounds.width/2, y: self.view.bounds.height/2) }

        //2. Setup Our ARSession
        setupARSessionWithStaticImages()
    }

    override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() }
    //---------------------------------
    //MARK: ARImageAnchor Visualization
    //---------------------------------

    /// Creates An SCNPlane For Visualizing The Detected ARImageAnchor
    ///
    /// - Parameters:
    ///   - imageAnchor: ARImageAnchor
    ///   - node: SCNNode
    func createReferenceImagePlaneForNode(_ imageAnchor: ARImageAnchor, node: SCNNode){

        //1. Get The Target's Width & Height
        let width = imageAnchor.referenceImage.physicalSize.width
        let height = imageAnchor.referenceImage.physicalSize.height

        //2. Create A Plane Geometry To Cover The ARImageAnchor
        let planeNode = SCNNode()
        let planeGeometry = SCNPlane(width: width, height: height)
        planeGeometry.firstMaterial?.diffuse.contents = UIColor.white
        planeNode.opacity = 0.5
        planeNode.geometry = planeGeometry

        //3. Rotate The PlaneNode To Horizontal
        planeNode.eulerAngles.x = -.pi/2

        //4. The Node Is Centered In The Anchor (0,0,0)
        node.addChildNode(planeNode)

        //5. Store The Vector Of The ARImageAnchor
        imageAnchorVector = SCNVector3(imageAnchor.transform.columns.3.x, imageAnchor.transform.columns.3.y, imageAnchor.transform.columns.3.z)

        let fadeOutAction = SCNAction.fadeOut(duration: 5)
        planeNode.runAction(fadeOutAction)
    }
    //-------------------------
    //MARK: Plane Visualization
    //-------------------------

    /// Creates An SCNPlane For Visualizing The Detected ARPlaneAnchor
    ///
    /// - Parameters:
    ///   - anchor: ARPlaneAnchor
    ///   - node: SCNNode
    func createReferencePlaneForNode(_ anchor: ARPlaneAnchor, node: SCNNode){

        //1. Get The Anchor's Width & Height
        let width = CGFloat(anchor.extent.x)
        let height = CGFloat(anchor.extent.z)

        //2. Create A Plane Geometry To Cover The ARPlaneAnchor
        planeNode = SCNNode()
        let planeGeometry = SCNPlane(width: width, height: height)
        planeGeometry.firstMaterial?.diffuse.contents = UIColor.white
        planeNode?.opacity = 0.5
        planeNode?.geometry = planeGeometry

        //3. Rotate The PlaneNode To Horizontal
        planeNode?.eulerAngles.x = -.pi/2

        //4. The Node Is Centered In The Anchor (0,0,0)
        node.addChildNode(planeNode!)
    }
    //-------------------
    //MARK: Model Loading
    //-------------------

    /// Loads Our Model Based On The Resulting Vector Of Our ARAnchor
    ///
    /// - Parameter worldVector: SCNVector3
    func loadModelAtVector(_ worldVector: SCNVector3) {

        let modelPath = "ARModels.scnassets/Scatterbug.scn"

        //1. Get The Reference To Our SCNScene & Get The Model Root Node
        guard let model = SCNScene(named: modelPath),
            let pokemonModel = model.rootNode.childNode(withName: "RootNode", recursively: false) else { return }

        //2. Scale & Position The Scatterbug
        pokemonModel.scale = SCNVector3(0.003, 0.003, 0.003)
        pokemonModel.position = worldVector

        //3. Add It To Our SCNView
        augmentedRealityView.scene.rootNode.addChildNode(pokemonModel)
    }
    //---------------
    //MARK: ARSession
    //---------------

    /// Sets Up The ARSession With Our Static ARReferenceImages
    func setupARSessionWithStaticImages(){

        //1. Set Our Configuration
        configuration.detectionImages = staticReferenceImages
        configuration.planeDetection = .horizontal

        //2. Run The Configuration
        augmentedRealitySession.run(configuration, options: [.resetTracking, .removeExistingAnchors])

        //3. Set The Session & Delegate
        augmentedRealityView?.session = augmentedRealitySession
        self.augmentedRealityView?.delegate = self
    }
}
Hope it points you in the right direction...
I'm fairly new to iOS Swift programming. I'm using ARKit to build a very basic app to detect a horizontal plane and place,translate,rotate,modify or delete an object on it.
My main concern is to differentiate between the plane detected by ARKit and a digital object that I've placed. My thinking was to use hitTest(_:options:) to select the object (if any) and hitTest(_:types:) to select the plane through a tap gesture. I'm attaching the relevant code snippet below.
@objc func tapped(_ gesture: UITapGestureRecognizer){
    let sceneView = gesture.view as! ARSCNView
    let location = gesture.location(in: sceneView)
    let hitTestOptions: [SCNHitTestOption: Any] = [.boundingBoxOnly: true]
    let existingNodeHitTest = sceneView.hitTest(location, options: hitTestOptions)
    if let existingNode = existingNodeHitTest.first?.node {
        // Move, rotate, modify or delete the object
    } else {
        // Option to add other objects
        let hitTest = sceneView.hitTest(location, types: .existingPlaneUsingExtent)
        if !hitTest.isEmpty {
            let node = findNode(at: location)
            if node !== selectedNode {
                self.addItems(hitTestResult: hitTest.first!)
            }
        }
    }
}
func addItems(hitTestResult: ARHitTestResult) {
    let scene = SCNScene(named: "BuildingModels.scnassets/model/model.scn")
    let itemNode = (scene?.rootNode.childNode(withName: "SketchUp", recursively: false))!
    let transform = hitTestResult.worldTransform
    let position = SCNVector3(transform.columns.3.x, transform.columns.3.y, transform.columns.3.z)
    itemNode.position = position
    // self.sceneView.scene.lightingEnvironment.contents = scene.lightingEnvironment.contents
    self.sceneView.scene.rootNode.addChildNode(itemNode)
    selectedNode = itemNode
}

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    let gridNode = createGrid(planeAnchor: planeAnchor)
    node.addChildNode(gridNode)
}

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    node.enumerateChildNodes { (childNode, _) in
        childNode.removeFromParentNode()
    }
    let gridNode = createGrid(planeAnchor: planeAnchor)
    node.addChildNode(gridNode)
}
When I run the code, the hitTest(_:options:) returns the plane detected. Is there any way to select only the SCNNodes (objects) that I place and not the detected plane? Am I missing something? Any help is highly appreciated.
Thanks,
Sourabh.
Looking at your question you are already half way there.
The way to handle this in its entirety is to make use of the following hit-test functions within your UITapGestureRecognizer function:
(1) An ARSCNHitTest which:
Searches for real-world objects or AR anchors in the captured camera image corresponding to a point in the SceneKit view.
(2) An SCNHitTest which:
Looks for SCNGeometry objects along the ray you specify. For each intersection between the ray and a geometry, SceneKit creates a hit-test result to provide information about both the SCNNode object containing the geometry and the location of the intersection on the geometry's surface.
Using your UITapGestureRecognizer as an example therefore, you can differentiate between an ARPlaneAnchor (detectedPlane) and any SCNNode within your scene like so:
@objc func handleTap(_ gesture: UITapGestureRecognizer){

    //1. Get The Current Touch Location
    let currentTouchLocation = gesture.location(in: self.augmentedRealityView)

    //2. Perform An ARSCNHitTest To See If We Have Hit An ARPlaneAnchor
    if let planeHitTest = augmentedRealityView.hitTest(currentTouchLocation, types: .existingPlane).first,
        let planeAnchor = planeHitTest.anchor as? ARPlaneAnchor{
        print("User Has Tapped On An Existing Plane = \(planeAnchor.identifier)")
        return
    }

    //3. Perform An SCNHitTest To See If We Have Hit An SCNNode
    if let nodeHitTest = augmentedRealityView.hitTest(currentTouchLocation, options: nil).first {
        let nodeTapped = nodeHitTest.node
        print("An SCNNode Has Been Tapped = \(nodeTapped)")
        return
    }
}
If you make use of the name property for any of your SCNNodes, this will also help you further, e.g.:
if let name = nodeTapped.name{
    print("An SCNNode Named \(name) Has Been Tapped")
}
Additionally, if you ONLY want to detect objects you have added, e.g. SCNNodes, then you can simply remove part two of the gestureRecognizer function.
Hope it helps...
To fix this issue, you should loop through your scene's nodes; after that you can manipulate the node you want. Example:
for node in sceneView.scene.rootNode.childNodes {
    if node.name == "yourNodeName" {
        // do your manipulations
    }
}
Don't forget to add a name to your nodes. Example:
node.name = "yourNodeName"
I hope it helped!
As soon as a node starts moving, for some reason, the position of the node stops being tracked. Here's what I mean:
internal func launchNode(force: Float) {
    let force = force * 0.004
    currNode.physicsBody = SCNPhysicsBody(type: .dynamic, shape: nil)
    currNode.physicsBody?.mass = 0.05
    let direction = currNode.worldFront + SCNVector3(force, -force, -force)
    currNode.physicsBody?.applyForce(direction, asImpulse: true)
}
The Node is moving, which is great, but I'd like to track the current position of the node while it's moving:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    guard currNode != nil else { return }
    print("simdWorldPosition: \(currNode.simdWorldPosition.y) position: \(currNode.position.y) simdPosition: \(currNode.simdPosition.y)")
}
None of these properties is updating the location, so I know I'm missing something. I'd like to stop the node from moving when it gets to a certain position (in the y coordinate). If anyone has had success tracking the location of a node after .applyForce, I'd much appreciate it if you could point out what I did wrong, and perhaps what worked for you. Thanks.
UPDATE
Here's some code I'm using to test it.
I'm declaring the node at the beginning of this ViewController:
var currNode = SCNNode()
You can try the following to test tracking the location of a node with the default ARKit project:
// call this once your scene is set
func addTapGestureToSceneView() {
    let tapGestureRecognizer = UITapGestureRecognizer(target: self, action: #selector(self.didTap(withGestureRecognizer:)))
    sceneView.addGestureRecognizer(tapGestureRecognizer)
}
@objc func didTap(withGestureRecognizer recognizer: UIGestureRecognizer) {
    let tapLocation = recognizer.location(in: sceneView)
    let hitTestResults = sceneView.hitTest(tapLocation)
    guard let node = hitTestResults.first?.node else { return }
    if node.name == "shipMesh" {
        currNode = node
        moveShip()
    }
}
internal func moveShip() {
    let force = 2 * 0.004
    currNode.physicsBody = SCNPhysicsBody(type: .dynamic, shape: nil)
    currNode.physicsBody?.mass = 0.05
    let direction = currNode.worldFront + SCNVector3(force, -force, -force)
    currNode.physicsBody?.applyForce(direction, asImpulse: true)
}
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    print("simdWorldPosition: \(currNode.simdWorldPosition.y) position: \(currNode.position.y) simdPosition: \(currNode.simdPosition.y)")
}
So the answer to your question is one line:
use the node.presentation property
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    print("simdWorldPosition: \(currNode.presentation.simdWorldPosition.y) position: \(currNode.presentation.position.y) simdPosition: \(currNode.presentation.simdPosition.y)")
}
From Apple Docs
When you use implicit animation (see SCNTransaction) to change a node’s properties, those node properties are set immediately to their target values, even though the animated node content appears to transition from the old property values to the new. During the animation SceneKit maintains a copy of the node, called the presentation node, whose properties reflect the transitory values determined by any in-flight animations currently affecting the node.
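Since you also want to stop the node once it reaches a certain y coordinate, a sketch of that check might look like this (targetY is a made-up threshold; flip the comparison if your node moves upward):

func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    let targetY: Float = 0.5 // hypothetical threshold; adjust for your scene
    // Compare against the simulated (presentation) value, not the model value.
    if currNode.presentation.position.y <= targetY {
        // Zero the velocities and clear forces so the node stops where it is.
        currNode.physicsBody?.velocity = SCNVector3Zero
        currNode.physicsBody?.angularVelocity = SCNVector4Zero
        currNode.physicsBody?.clearAllForces()
    }
}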
Hope it is helpful
I'm actually trying to put a 3D object on a QR code with ARKit.
For that I use an AVCaptureDevice to detect a QR code and establish the area of the QR code, which gives me a CGRect.
Then I make a hitTest on every point of the CGRect to get the average 3D coordinates, like so:
positionGiven = SCNVector3(0, 0, 0)
for column in Int(qrZone.origin.x)...2*Int(qrZone.origin.x + qrZone.width) {
    for row in Int(qrZone.origin.y)...2*Int(qrZone.origin.y + qrZone.height) {
        for result in sceneView.hitTest(CGPoint(x: CGFloat(column)/2, y: CGFloat(row)/2), types: [.existingPlaneUsingExtent, .featurePoint]) {
            positionGiven.x += result.worldTransform.columns.3.x
            positionGiven.y += result.worldTransform.columns.3.y
            positionGiven.z += result.worldTransform.columns.3.z
            cpts += 1
        }
    }
}
positionGiven.x = positionGiven.x / cpts
positionGiven.y = positionGiven.y / cpts
positionGiven.z = positionGiven.z / cpts
But the hitTest doesn't return any result and freezes the camera, whereas when I make a hitTest with a touch on the screen it works.
Do you have any idea why it's not working?
Do you have another idea that could help me achieve what I want to do?
I already thought about a 3D translation with CoreMotion, which can give me the tilt of the device, but that seems really tedious.
I also heard about ARWorldAlignmentCamera, which can lock the scene coordinates to match the orientation of the camera, but I don't know how to use it!
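(For reference, worldAlignment is a property on the session configuration; a minimal sketch of setting it, assuming a standard world-tracking setup:)

let configuration = ARWorldTrackingConfiguration()
// .camera locks the scene's coordinate system to the orientation of the camera.
configuration.worldAlignment = .camera
sceneView.session.run(configuration)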
Edit: I tried moving my 3D object every time I touch the screen and the hitTest is positive, and it's pretty accurate! I really don't understand why a hitTest on an area of pixels doesn't work...
Edit 2: Here is the code of the hitTest that works with 2-5 touches on the screen:
@objc func touch(sender: UITapGestureRecognizer) {
    for result in sceneView.hitTest(CGPoint(x: sender.location(in: view).x, y: sender.location(in: view).y), types: [.existingPlaneUsingExtent, .featurePoint]) {
        // Pop-up message for testing
        alert("\(sender.location(in: view))", message: "\(result.worldTransform.columns.3)")
        // Moving the 3D object to the new coordinates
        let objectList = sceneView.scene.rootNode.childNodes
        for object: SCNNode in objectList {
            object.removeFromParentNode()
        }
        addObject(SCNVector3(result.worldTransform.columns.3.x, result.worldTransform.columns.3.y, result.worldTransform.columns.3.z))
    }
}
Edit 3:
I managed to resolve my problem partially.
I take the transform matrix of the camera (session.currentFrame.camera.transform) so that the object is in front of the camera.
Then I apply a translation on (x, y) with the position of the CGRect.
However, I can't translate the z-axis because I don't have enough information,
and I will probably need an estimation of the z coordinate like the hitTest provides.
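For what it's worth, composing that translation with the camera transform might look something like the sketch below; placeNode, offsetX and offsetY are made-up names, and the -0.5 z value is an arbitrary placeholder, since estimating the real depth is exactly the missing piece:

func placeNode(_ node: SCNNode, in session: ARSession, offsetX: Float, offsetY: Float) {
    guard let camera = session.currentFrame?.camera else { return }
    // offsetX/offsetY would be derived from the QR code's CGRect;
    // -0.5 is a placeholder depth in front of the camera, in metres.
    var translation = matrix_identity_float4x4
    translation.columns.3.x = offsetX
    translation.columns.3.y = offsetY
    translation.columns.3.z = -0.5
    node.simdTransform = matrix_multiply(camera.transform, translation)
}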
Thanks in advance! :)
You could use Apple's Vision API to detect the QR code and place an anchor.
To start detecting QR codes, use:
var qrRequests = [VNRequest]()
var detectedDataAnchor: ARAnchor?
var processing = false

func startQrCodeDetection() {
    // Create a Barcode Detection Request
    let request = VNDetectBarcodesRequest(completionHandler: self.requestHandler)
    // Set it to recognize QR codes only
    request.symbologies = [.QR]
    self.qrRequests = [request]
}
In ARSession's didUpdate frame delegate method:
public func session(_ session: ARSession, didUpdate frame: ARFrame) {
    DispatchQueue.global(qos: .userInitiated).async {
        do {
            if self.processing {
                return
            }
            self.processing = true
            // Create a request handler using the captured image from the ARFrame
            let imageRequestHandler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                                            options: [:])
            // Process the request
            try imageRequestHandler.perform(self.qrRequests)
        } catch {
        }
    }
}
Handle the Vision QR request and trigger the hit test
func requestHandler(request: VNRequest, error: Error?) {
    // Get the first result out of the results, if there are any
    if let results = request.results, let result = results.first as? VNBarcodeObservation {
        guard let payload = result.payloadStringValue else { return }
        // Get the bounding box for the bar code and find the center
        var rect = result.boundingBox
        // Flip coordinates
        rect = rect.applying(CGAffineTransform(scaleX: 1, y: -1))
        rect = rect.applying(CGAffineTransform(translationX: 0, y: 1))
        // Get center
        let center = CGPoint(x: rect.midX, y: rect.midY)
        DispatchQueue.main.async {
            self.hitTestQrCode(center: center)
            self.processing = false
        }
    } else {
        self.processing = false
    }
}
func hitTestQrCode(center: CGPoint) {
    if let hitTestResults = self.latestFrame?.hitTest(center, types: [.featurePoint]),
        let hitTestResult = hitTestResults.first {
        if let detectedDataAnchor = self.detectedDataAnchor,
            let node = self.sceneView.node(for: detectedDataAnchor) {
            // Move the existing node to the new hit-test position
            node.transform = SCNMatrix4(hitTestResult.worldTransform)
        } else {
            // Create an anchor. The node will be created in delegate methods
            self.detectedDataAnchor = ARAnchor(transform: hitTestResult.worldTransform)
            self.sceneView.session.add(anchor: self.detectedDataAnchor!)
        }
    }
}
Then handle adding the node when the anchor is added.
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    // If this is our anchor, create a node
    if self.detectedDataAnchor?.identifier == anchor.identifier {
        let sphere = SCNSphere(radius: 1.0)
        sphere.firstMaterial?.diffuse.contents = UIColor.red
        let sphereNode = SCNNode(geometry: sphere)
        sphereNode.transform = SCNMatrix4(anchor.transform)
        return sphereNode
    }
    return nil
}
Source
In my program, I am trying to make myself collidable with other objects so that I can test collisions. Is there a way in the new ARKit to add a physics body that moves with you and follows you? I can make one that is at my position at the beginning of the AR session, but I'm sure there is a way to attach it to your actual self.
Good day!
There is a solution to your problem.
First of all you need to create a collider. For example:
func addCollider(position: SCNVector3, name: String) {
    let collider = SCNNode(geometry: SCNSphere(radius: 0.5))
    collider.position = position
    collider.geometry?.firstMaterial?.diffuse.contents = UIColor.clear
    // A physics body must be attached before the bit masks can be set
    collider.physicsBody = SCNPhysicsBody(type: .kinematic, shape: nil)
    collider.physicsBody?.categoryBitMask = 0 // example
    collider.physicsBody?.contactTestBitMask = 1 // example
    collider.name = name
    sceneView.scene.rootNode.addChildNode(collider)
}
Don't forget to add proper bit masks so you can collide with other objects.
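As a sketch of what those bit masks might look like (the category values are arbitrary examples, and ViewController stands in for whatever your view controller class is named), you could define categories and a contact delegate like this; remember to also set sceneView.scene.physicsWorld.contactDelegate = self:

// Hypothetical categories; any distinct powers of two will do.
struct CollisionCategory {
    static let player = 1 << 0
    static let obstacle = 1 << 1
}

// On the collider:
// collider.physicsBody?.categoryBitMask = CollisionCategory.player
// collider.physicsBody?.contactTestBitMask = CollisionCategory.obstacle

// Implement SCNPhysicsContactDelegate to be notified of contacts:
extension ViewController: SCNPhysicsContactDelegate {
    func physicsWorld(_ world: SCNPhysicsWorld, didBegin contact: SCNPhysicsContact) {
        print("Contact between \(contact.nodeA.name ?? "?") and \(contact.nodeB.name ?? "?")")
    }
}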
Next you need to implement the ARSCNViewDelegate protocol and its renderer(_:willRenderScene:atTime:) method. In it, locate your point of view and take its transform matrix. Then extract the values from the third column of the matrix to get the current orientation, and the values from the fourth column to get the current location. Finally, combine your location and orientation to get your current position.
To combine two SCNVector3 values you need to write a global "+" function. Example:
func +(left: SCNVector3, right: SCNVector3) -> SCNVector3 {
    return SCNVector3Make(left.x + right.x, left.y + right.y, left.z + right.z)
}
Be sure that you write this function outside of any class or struct.
Final step! A function that will move the collider according to your current position.
func renderer(_ renderer: SCNSceneRenderer, willRenderScene scene: SCNScene, atTime time: TimeInterval) {
    guard let pointOfView = self.sceneView.pointOfView else { return }
    let transform = pointOfView.transform
    let orientation = SCNVector3(-transform.m31, -transform.m32, -transform.m33)
    let location = SCNVector3(transform.m41, transform.m42, transform.m43)
    let currentPosition = orientation + location
    for node in sceneView.scene.rootNode.childNodes {
        if node.name == "your collider name" {
            node.position = currentPosition
        }
    }
}
P.S. Don't forget to add
self.sceneView.delegate = self
to your viewDidLoad.
Best of luck. I hope it helped!