Giving yourself a physics body in ARKit? - swift

In my program, I am trying to make myself collidable with other objects so that I can test collisions. Is there a way in ARKit to add a physics body that moves along with you? I can make one that sits at my initial position when the AR session starts, but I'm sure there is a way to attach one to your actual self.

Good day!
There is a solution to your problem.
First of all, you need to create a collider. For example:
func addCollider(position: SCNVector3, name: String) {
    let collider = SCNNode(geometry: SCNSphere(radius: 0.5))
    collider.position = position
    collider.geometry?.firstMaterial?.diffuse.contents = UIColor.clear
    // Create the physics body first; otherwise physicsBody is nil and the
    // bit-mask assignments below silently do nothing. A kinematic body is
    // appropriate here because we move the node manually every frame.
    collider.physicsBody = SCNPhysicsBody(type: .kinematic, shape: nil)
    collider.physicsBody?.categoryBitMask = 1 // example
    collider.physicsBody?.contactTestBitMask = 2 // example
    collider.name = name
    sceneView.scene.rootNode.addChildNode(collider)
}
Don't forget to set proper bit masks so that you actually collide with the other objects, as sketched below.
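For reference, SceneKit reports a contact when one body's contactTestBitMask ANDed with the other body's categoryBitMask is non-zero. A minimal sketch with made-up category values (obstacleNode is a hypothetical node for whatever you want to hit):

// Illustrative categories; define your own scheme.
let playerCategory = 1 << 0   // 1
let obstacleCategory = 1 << 1 // 2

// The player's collider reports contacts with obstacles...
collider.physicsBody?.categoryBitMask = playerCategory
collider.physicsBody?.contactTestBitMask = obstacleCategory

// ...and the obstacle is configured the other way around.
obstacleNode.physicsBody?.categoryBitMask = obstacleCategory
obstacleNode.physicsBody?.contactTestBitMask = playerCategory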
Next, conform to the ARSCNViewDelegate protocol and implement its renderer(_:willRenderScene:atTime:) method. In it, locate your point of view and read its transform matrix. The third column of that matrix gives the current orientation (negated, since the camera looks down the negative z-axis), and the fourth column gives the current location. Finally, combine your location and orientation to get your current position in front of the camera.
To combine two SCNVector3 values you need to write a global "+" function. Example:
func +(left: SCNVector3, right: SCNVector3) -> SCNVector3 {
    return SCNVector3Make(left.x + right.x, left.y + right.y, left.z + right.z)
}
Be sure to declare this function at file scope, outside of any class or struct.
Final accord! A function that moves the collider along with your current position:
func renderer(_ renderer: SCNSceneRenderer, willRenderScene scene: SCNScene, atTime time: TimeInterval) {
    guard let pointOfView = self.sceneView.pointOfView else { return }
    let transform = pointOfView.transform
    let orientation = SCNVector3(-transform.m31, -transform.m32, -transform.m33)
    let location = SCNVector3(transform.m41, transform.m42, transform.m43)
    let currentPosition = orientation + location
    for node in sceneView.scene.rootNode.childNodes {
        if node.name == "your collider name" {
            node.position = currentPosition
        }
    }
}
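Since the goal is to test collisions, you will also want to be told when a contact actually happens. A minimal sketch, assuming your view controller (called ViewController here purely for illustration) sets sceneView.scene.physicsWorld.contactDelegate = self, e.g. in viewDidLoad:

extension ViewController: SCNPhysicsContactDelegate {
    func physicsWorld(_ world: SCNPhysicsWorld, didBegin contact: SCNPhysicsContact) {
        // Fires when two bodies with matching masks begin touching.
        print("Contact: \(contact.nodeA.name ?? "?") <-> \(contact.nodeB.name ?? "?")")
    }
}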
P.S. Don't forget to add
self.sceneView.delegate = self
to your viewDidLoad.
Best of luck. I hope it helped!

Related

ARKit SCNNode always in the center when camera move

I am working on a project where I have to place a green dot that stays in the center even when the camera rotates in ARKit. I am using ARSCNView and I have added the node; so far everything is good. Now I know I need to modify the position of the node in
func session(_ session: ARSession, didUpdate frame: ARFrame)
But I have no idea how to do that. I saw an example which was close to what I have, but it does not run as it's supposed to.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    let location = sceneView.center
    let hitTest = sceneView.hitTest(location, types: .featurePoint)
    if hitTest.isEmpty {
        print("No Plane Detected")
        return
    } else {
        let columns = hitTest.first?.worldTransform.columns.3
        let position = SCNVector3(x: columns!.x, y: columns!.y, z: columns!.z)
        var node = sceneView.scene.rootNode.childNode(withName: "CenterShip", recursively: false) ?? nil
        if node == nil {
            let scene = SCNScene(named: "art.scnassets/ship.scn")!
            node = scene.rootNode.childNode(withName: "ship", recursively: false)
            node?.opacity = 0.7
            let columns = hitTest.first?.worldTransform.columns.3
            node!.name = "CenterShip"
            node!.position = SCNVector3(x: columns!.x, y: columns!.y, z: columns!.z)
            sceneView.scene.rootNode.addChildNode(node!)
        }
        let position2 = node?.position
        if position == position2! {
            return
        } else {
            //action
            let action = SCNAction.move(to: position, duration: 0.1)
            node?.runAction(action)
        }
    }
}
It doesn't matter how I rotate the camera this dot must be in the middle.
It's not clear exactly what you're trying to do, but I assume it's one of the following:
A) Place the green dot centered in front of the camera at a fixed distance, e.g. always exactly 1 meter in front of the camera.
B) Place the green dot centered in front of the camera at the depth of the nearest detected plane, i.e. using the results of a raycast from the midpoint of the ARSCNView.
I would have assumed A, but your example code uses the (now deprecated) sceneView.hitTest() function, which in this case would give you the depth of whatever is behind the pixel at sceneView.center.
Anyway here's both:
Fixed Depth Solution
This is pretty straightforward, though there are a few options. The simplest is to make the green dot a child node of the scene's camera node and give it a position with a negative z value, since z increases as a position moves toward the camera.
cameraNode.addChildNode(textNode)
textNode.position = SCNVector3(x: 0, y: 0, z: -1)
As the camera moves, so too will its child nodes. More details in this very thorough answer
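To make that concrete, here is a minimal sketch for the green dot itself, assuming sceneView is your ARSCNView and the session is already running (the node and geometry choices are just illustrative):

// A small green dot pinned 1 m in front of the camera.
let dot = SCNNode(geometry: SCNSphere(radius: 0.01))
dot.geometry?.firstMaterial?.diffuse.contents = UIColor.green

// pointOfView is the node SceneKit renders the camera from.
if let cameraNode = sceneView.pointOfView {
    cameraNode.addChildNode(dot)
    dot.position = SCNVector3(x: 0, y: 0, z: -1) // 1 m in front
}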
Scene Depth Solution
To determine the estimated depth behind a pixel, you should use ARSession.raycast instead of ARSCNView.hitTest, because the latter is definitely deprecated.
Note that if the raycast() (or legacy hitTest()) method returns an empty result set (not uncommon, given the complexity of the scene estimation going on in ARKit), you won't have a position to update the node with, and thus it might not be exactly centered in every frame. Handling this is a bit more complex, as you'd need to decide exactly what you want to do in that case; one option is sketched below.
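Purely as a sketch of one such option, using the same names (query, node) as the adapted code further down: fall back to a fixed depth in front of the camera whenever the raycast comes back empty.

// Illustrative fallback: if the raycast found nothing, place the node
// 1 m along the camera's forward axis instead of skipping the frame.
if let hit = session.raycast(query).first {
    let p = hit.worldTransform.columns.3
    node.simdPosition = simd_float3(p.x, p.y, p.z)
} else if let frame = session.currentFrame {
    let cam = frame.camera.transform
    // Column 2 of the camera transform points backward, so negate it.
    let forward = -simd_float3(cam.columns.2.x, cam.columns.2.y, cam.columns.2.z)
    let origin = simd_float3(cam.columns.3.x, cam.columns.3.y, cam.columns.3.z)
    node.simdPosition = origin + forward * 1.0
}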
The SCNAction is unnecessary and potentially causing problems. These delegate methods run at 60 fps, so simply updating the position directly will produce smooth results.
Adapting and simplifying the code you posted:
func createCenterShipNode() -> SCNNode {
    let scene = SCNScene(named: "art.scnassets/ship.scn")!
    let node = scene.rootNode.childNode(withName: "ship", recursively: false)
    node!.opacity = 0.7
    node!.name = "CenterShip"
    sceneView.scene.rootNode.addChildNode(node!)
    return node!
}

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // Check the docs for what the different raycast query parameters mean,
    // but these give you the depth of anything ARKit has detected.
    guard let query = sceneView.raycastQuery(from: sceneView.center, allowing: .estimatedPlane, alignment: .any) else {
        return
    }
    let results = session.raycast(query)
    if let hit = results.first {
        let node = sceneView.scene.rootNode.childNode(withName: "CenterShip", recursively: false) ?? createCenterShipNode()
        let pos = hit.worldTransform.columns.3
        node.simdPosition = simd_float3(pos.x, pos.y, pos.z)
    }
}
See also: ARRaycastQuery
One last note: you generally don't want to do scene manipulation within this delegate method. It runs on a different thread than the SceneKit rendering thread, and SceneKit is very thread-sensitive. This will likely work fine, but anything much beyond adding or moving a node will certainly cause crashes from time to time. Ideally you'd store the new position, and then update the actual scene contents from within the renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) delegate method.
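A minimal sketch of that hand-off (the property name pendingShipPosition is illustrative, and for brevity the cross-thread write is left unsynchronized):

var pendingShipPosition: simd_float3?

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // ... run the raycast as above, but only record the result:
    // pendingShipPosition = simd_float3(pos.x, pos.y, pos.z)
}

func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    // It's safe to touch the scene graph here, on SceneKit's render thread.
    guard let position = pendingShipPosition else { return }
    let node = sceneView.scene.rootNode.childNode(withName: "CenterShip", recursively: false) ?? createCenterShipNode()
    node.simdPosition = position
    pendingShipPosition = nil
}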

ARSCNView unprojectPoint

I need to convert a point in the 2d coordinate space of my ARSCNView to a coordinate in 3d space. Basically a ray from the point of view to the touched location (up to a set distance away).
I wanted to use arView.unprojectPoint(vec2d) for that, but the point returned always seems to be located in the center of the view.
vec2d is an SCNVector3 created from a 2d coordinate like this:
SCNVector3(x, y, 0) // 0 specifies camera near plane
What am I doing wrong? How do I get the desired result?
I think you have at least 2 possible solutions:
First
Use hitTest(_:types:) instance method:
This method searches for real-world objects or AR anchors in the captured camera image corresponding to a point in the SceneKit view.
let sceneView = ARSCNView()

func calculateVector(point: CGPoint) -> SCNVector3? {
    let hitTestResults = sceneView.hitTest(point, types: [.existingPlane])
    if let result = hitTestResults.first {
        return SCNVector3(SIMD3(result.worldTransform.columns.3.x,
                                result.worldTransform.columns.3.y,
                                result.worldTransform.columns.3.z))
    }
    return nil
}
calculateVector(point: yourPoint)
Second
Use unprojectPoint(_:ontoPlane:) instance method:
This method returns the projection of a point from 2D view onto a plane in the 3D world space detected by ARKit.
@nonobjc func unprojectPoint(_ point: CGPoint,
                             ontoPlane planeTransform: simd_float4x4) -> simd_float3?
or:
let point = CGPoint()
var planeTransform = simd_float4x4()
sceneView.unprojectPoint(point, ontoPlane: planeTransform)
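In practice the plane transform would come from a detected plane rather than an empty matrix. A hedged sketch (the function name is illustrative; planeAnchor is assumed to come from ARKit's plane detection):

// Intersect a ray through the screen point with the plane's infinite extension.
func worldPosition(at screenPoint: CGPoint, on planeAnchor: ARPlaneAnchor) -> SCNVector3? {
    guard let world = sceneView.unprojectPoint(screenPoint, ontoPlane: planeAnchor.transform) else {
        return nil // the ray never intersects the plane
    }
    return SCNVector3(world)
}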
Add an empty node in front of the camera at an 'x' cm offset, making it a child of the camera.
// Add a node in front of the camera just after creating the scene
hitNode = SCNNode()
hitNode!.position = SCNVector3Make(0, 0, -0.25) // 25 cm offset
sceneView.pointOfView?.addChildNode(hitNode!)
func unprojectedPosition(touch: CGPoint) -> SCNVector3 {
    guard let hitNode = self.hitNode else {
        return SCNVector3Zero
    }
    let projectedOrigin = sceneView.projectPoint(hitNode.worldPosition)
    let offset = sceneView.unprojectPoint(SCNVector3Make(Float(touch.x), Float(touch.y), projectedOrigin.z))
    return offset
}
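For example, you might call it from a tap handler (a hypothetical sketch; the handler name is illustrative):

@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    let touchPoint = gesture.location(in: sceneView)
    let position = unprojectedPosition(touch: touchPoint)
    // `position` lies on the plane 25 cm in front of the camera,
    // directly under the user's finger.
}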
See the Justaline GitHub implementation of the code here

ARAnchor for SCNNode

I'm trying to get a hold of the anchor after adding an SCNNode to the scene of an ARSCNView. My understanding is that the anchor should be created automatically, but I can't seem to retrieve it.
Below is how I add it. The node is saved in a variable called testNode.
let node = SCNNode()
node.geometry = SCNBox(width: 0.5, height: 0.1, length: 0.3, chamferRadius: 0)
node.geometry?.firstMaterial?.diffuse.contents = UIColor.green
sceneView.scene.rootNode.addChildNode(node)
testNode = node
Here is how I try to retrieve it. It always prints nil.
if let testNode = testNode {
    print(sceneView.anchor(for: testNode))
}
Does it not create the anchor? If it does: is there another method I can use to retrieve it?
If you have a look at the Apple Docs it states that:
To track the positions and orientations of real or virtual objects
relative to the camera, create anchor objects and use the add(anchor:)
method to add them to your AR session.
As such, I think that since you aren't using planeDetection, you would need to create an ARAnchor manually if one is needed:
Whenever you place a virtual object, always add an ARAnchor representing its position and orientation to the ARSession. After moving a virtual object, remove the anchor at the old position and create a new anchor at the new position. Adding an anchor tells ARKit that a position is important, improving world tracking quality in that area and helping virtual objects appear to stay in place relative to real-world surfaces.
You can read more about this in the following thread What's the difference between using ARAnchor to insert a node and directly insert a node?
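In code, the remove-and-re-add dance from that quote would look something like this sketch (oldAnchor and newTransform are hypothetical names for the object's previous anchor and its new world transform):

// After moving a virtual object, re-anchor it at the new position.
augmentedRealitySession.remove(anchor: oldAnchor)
let newAnchor = ARAnchor(transform: newTransform)
augmentedRealitySession.add(anchor: newAnchor)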
Anyway, in order to get you started I began by creating an SCNNode called currentNode:
var currentNode: SCNNode?
Then using a UITapGestureRecognizer I created an ARAnchor manually at the touchLocation:
@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    //1. Get The Current Touch Location
    let currentTouchLocation = gesture.location(in: self.augmentedRealityView)
    //2. If We Have Hit A Feature Point Get The Result
    if let hitTest = augmentedRealityView.hitTest(currentTouchLocation, types: [.featurePoint]).last {
        //3. Create An Anchor At The World Transform
        let anchor = ARAnchor(transform: hitTest.worldTransform)
        //4. Add It To The Scene
        augmentedRealitySession.add(anchor: anchor)
    }
}
Having added the anchor, I then used the ARSCNViewDelegate callback to create the currentNode like so:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    if currentNode == nil {
        currentNode = SCNNode()
        let nodeGeometry = SCNBox(width: 0.2, height: 0.2, length: 0.2, chamferRadius: 0)
        nodeGeometry.firstMaterial?.diffuse.contents = UIColor.cyan
        currentNode?.geometry = nodeGeometry
        currentNode?.position = SCNVector3(anchor.transform.columns.3.x, anchor.transform.columns.3.y, anchor.transform.columns.3.z)
        node.addChildNode(currentNode!)
    }
}
In order to test that it works, e.g. being able to log the corresponding ARAnchor, I changed the tapGesture method to include this at the end:
if let anchorHitTest = augmentedRealityView.hitTest(currentTouchLocation, options: nil).first {
    print(augmentedRealityView.anchor(for: anchorHitTest.node))
}
Which in my ConsoleLog prints:
Optional(<ARAnchor: 0x1c0535680 identifier="23CFF447-68E9-451D-A64D-17C972EB5F4B" transform=<translation=(-0.006610 -0.095542 -0.357221) rotation=(-0.00° 0.00° 0.00°)>>)
Hope it helps...

Unable to differentiate between plane detected by ARKit and a digital object to be placed using HitTest

I'm fairly new to iOS Swift programming. I'm using ARKit to build a very basic app to detect a horizontal plane and place, translate, rotate, modify, or delete an object on it.
My main concern is to differentiate between the plane detected by ARKit and a digital object that I've placed. My thinking was to use hitTest(_:options:) to select the object (if any) and hitTest(_:types:) to select the plane through a tap gesture. I'm attaching the relevant code snippet below.
@objc func tapped(_ gesture: UITapGestureRecognizer) {
    let sceneView = gesture.view as! ARSCNView
    let location = gesture.location(in: sceneView)
    let hitTestOptions: [SCNHitTestOption: Any] = [.boundingBoxOnly: true]
    let existingNodeHitTest = sceneView.hitTest(location, options: hitTestOptions)
    if let existingNode = existingNodeHitTest.first?.node {
        // Move, rotate, modify or delete the object
    } else {
        // Option to add other objects
        let hitTest = sceneView.hitTest(location, types: .existingPlaneUsingExtent)
        if !hitTest.isEmpty {
            let node = findNode(at: location)
            if node !== selectedNode {
                self.addItems(hitTestResult: hitTest.first!)
            }
        }
    }
}
func addItems(hitTestResult: ARHitTestResult) {
    let scene = SCNScene(named: "BuildingModels.scnassets/model/model.scn")
    let itemNode = (scene?.rootNode.childNode(withName: "SketchUp", recursively: false))!
    let transform = hitTestResult.worldTransform
    let position = SCNVector3(transform.columns.3.x, transform.columns.3.y, transform.columns.3.z)
    itemNode.position = position
    // self.sceneView.scene.lightingEnvironment.contents = scene.lightingEnvironment.contents
    self.sceneView.scene.rootNode.addChildNode(itemNode)
    selectedNode = itemNode
}
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    let gridNode = createGrid(planeAnchor: planeAnchor)
    node.addChildNode(gridNode)
}

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    node.enumerateChildNodes { (childNode, _) in
        childNode.removeFromParentNode()
    }
    let gridNode = createGrid(planeAnchor: planeAnchor)
    node.addChildNode(gridNode)
}
When I run the code, the hitTest(_:options:) returns the plane detected. Is there any way to select only the SCNNodes (objects) that I place, and not the detected plane? Am I missing something? Any help is highly appreciated.
Thanks,
Sourabh.
Looking at your question, you are already halfway there.
The way to handle this in its entirety is to make use of the following hit-test functions within your UITapGestureRecognizer function:
(1) An ARSCNHitTest which:
Searches for real-world objects or AR anchors in the captured camera image corresponding to a point in the SceneKit view.
(2) An SCNHitTest which:
Looks for SCNGeometry objects along the ray you specify. For each intersection between the ray and a geometry, SceneKit creates a hit-test result to provide information about both the SCNNode object containing the geometry and the location of the intersection on the geometry's surface.
Using your UITapGestureRecognizer as an example therefore, you can differentiate between an ARPlaneAnchor (detectedPlane) and any SCNNode within your scene like so:
@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    //1. Get The Current Touch Location
    let currentTouchLocation = gesture.location(in: self.augmentedRealityView)
    //2. Perform An ARSCNHitTest To See If We Have Hit An ARPlaneAnchor
    if let planeHitTest = augmentedRealityView.hitTest(currentTouchLocation, types: .existingPlane).first,
       let planeAnchor = planeHitTest.anchor as? ARPlaneAnchor {
        print("User Has Tapped On An Existing Plane = \(planeAnchor.identifier)")
        return
    }
    //3. Perform An SCNHitTest To See If We Have Hit An SCNNode
    if let nodeHitTest = augmentedRealityView.hitTest(currentTouchLocation, options: nil).first {
        let nodeTapped = nodeHitTest.node
        print("An SCNNode Has Been Tapped = \(nodeTapped)")
        return
    }
}
If you make use of the name property for any of your SCNNodes, this will also help you further, e.g.:
if let name = nodeTapped.name {
    print("An SCNNode Named \(name) Has Been Tapped")
}
Additionally, if you ONLY want to detect objects you have added, e.g. SCNNodes, then you can simply remove part two of the gestureRecognizer function.
Hope it helps...
To fix this issue, you should loop through your scene's nodes; after that you can manipulate the node you want. Example:
for node in sceneView.scene.rootNode.childNodes {
    if node.name == "yourNodeName" {
        // do your manipulations
    }
}
Don't forget to give your nodes names. Example:
node.name = "yourNodeName"
I hope it helped!

ARKit – How to put 3D Object on QRCode?

I'm trying to put a 3D object on a QR code with ARKit.
For that I use an AVCaptureDevice to detect a QR code and establish the area of the QR code, which gives me a CGRect.
Then I make a hitTest on every point of the CGRect to get the average 3D coordinates, like so:
positionGiven = SCNVector3(0, 0, 0)
for column in Int(qrZone.origin.x)...2*Int(qrZone.origin.x + qrZone.width) {
    for row in Int(qrZone.origin.y)...2*Int(qrZone.origin.y + qrZone.height) {
        for result in sceneView.hitTest(CGPoint(x: CGFloat(column)/2, y: CGFloat(row)/2), types: [.existingPlaneUsingExtent, .featurePoint]) {
            positionGiven.x += result.worldTransform.columns.3.x
            positionGiven.y += result.worldTransform.columns.3.y
            positionGiven.z += result.worldTransform.columns.3.z
            cpts += 1
        }
    }
}
positionGiven.x = positionGiven.x / cpts
positionGiven.y = positionGiven.y / cpts
positionGiven.z = positionGiven.z / cpts
But the hitTest doesn't detect any result and freezes the camera, whereas when I make a hitTest from a touch on the screen it works.
Do you have any idea why it's not working?
Do you have another idea that could help me achieve what I want to do?
I already thought about a 3D translation with CoreMotion, which can give me the tilt of the device, but that seems really tedious.
I also heard about ARWorldAlignmentCamera, which can lock the scene coordinates to match the orientation of the camera, but I don't know how to use it!
Edit: I tried moving my 3D object every time I touch the screen and the hitTest is positive, and it's pretty accurate! I really don't understand why the hitTest on an area of pixels doesn't work...
Edit 2: Here is the code of the hitTest that works with 2-5 touches on the screen:
@objc func touch(sender: UITapGestureRecognizer) {
    for result in sceneView.hitTest(CGPoint(x: sender.location(in: view).x, y: sender.location(in: view).y), types: [.existingPlaneUsingExtent, .featurePoint]) {
        // Pop-up message for testing
        alert("\(sender.location(in: view))", message: "\(result.worldTransform.columns.3)")
        // Moving the 3D object to the new coordinates
        let objectList = sceneView.scene.rootNode.childNodes
        for object: SCNNode in objectList {
            object.removeFromParentNode()
        }
        addObject(SCNVector3(result.worldTransform.columns.3.x, result.worldTransform.columns.3.y, result.worldTransform.columns.3.z))
    }
}
Edit 3:
I managed to resolve my problem partially.
I take the transform matrix of the camera (session.currentFrame.camera.transform) so that the object is in front of the camera.
Then I apply a translation on (x, y) using the position of the CGRect.
However, I can't translate along the z-axis because I don't have enough information, and I will probably need an estimate of the z coordinate like the hitTest provides.
Thanks in advance ! :)
You could use Apple's Vision API to detect the QR code and place an anchor.
To start detecting QR codes, use:
var qrRequests = [VNRequest]()
var detectedDataAnchor: ARAnchor?
var processing = false

func startQrCodeDetection() {
    // Create a Barcode Detection Request
    let request = VNDetectBarcodesRequest(completionHandler: self.requestHandler)
    // Set it to recognize QR code only
    request.symbologies = [.QR]
    self.qrRequests = [request]
}
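You would typically set this up once before frames start arriving; a hypothetical sketch, assuming this view controller is also the session delegate:

override func viewDidLoad() {
    super.viewDidLoad()
    // The session delegate must be set for session(_:didUpdate:) below to fire.
    sceneView.session.delegate = self
    startQrCodeDetection()
}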
In ARSession's didUpdate Frame
public func session(_ session: ARSession, didUpdate frame: ARFrame) {
    DispatchQueue.global(qos: .userInitiated).async {
        do {
            if self.processing {
                return
            }
            self.processing = true
            // Create a request handler using the captured image from the ARFrame
            let imageRequestHandler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                                            options: [:])
            // Process the request
            try imageRequestHandler.perform(self.qrRequests)
        } catch {
            // Vision errors are ignored here; the next frame will retry
        }
    }
}
Handle the Vision QR request and trigger the hit test
func requestHandler(request: VNRequest, error: Error?) {
    // Get the first result out of the results, if there are any
    if let results = request.results, let result = results.first as? VNBarcodeObservation {
        guard let payload = result.payloadStringValue else { return }
        // Get the bounding box for the bar code and find the center
        var rect = result.boundingBox
        // Flip coordinates
        rect = rect.applying(CGAffineTransform(scaleX: 1, y: -1))
        rect = rect.applying(CGAffineTransform(translationX: 0, y: 1))
        // Get center
        let center = CGPoint(x: rect.midX, y: rect.midY)
        DispatchQueue.main.async {
            self.hitTestQrCode(center: center)
            self.processing = false
        }
    } else {
        self.processing = false
    }
}
func hitTestQrCode(center: CGPoint) {
    if let hitTestResults = self.latestFrame?.hitTest(center, types: [.featurePoint]),
       let hitTestResult = hitTestResults.first {
        if let detectedDataAnchor = self.detectedDataAnchor,
           let node = self.sceneView.node(for: detectedDataAnchor) {
            node.transform = SCNMatrix4(hitTestResult.worldTransform)
        } else {
            // Create an anchor. The node will be created in delegate methods
            self.detectedDataAnchor = ARAnchor(transform: hitTestResult.worldTransform)
            self.sceneView.session.add(anchor: self.detectedDataAnchor!)
        }
    }
}
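One thing to note: latestFrame isn't defined in the snippets above. Presumably it is the most recent ARFrame, cached at the top of the session(_:didUpdate:) callback shown earlier, along these lines:

var latestFrame: ARFrame?

public func session(_ session: ARSession, didUpdate frame: ARFrame) {
    latestFrame = frame // cache the newest frame for hit testing on the main queue
    // ... then the Vision dispatch shown earlier ...
}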
Then handle adding the node when the anchor is added.
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    // If this is our anchor, create a node
    if self.detectedDataAnchor?.identifier == anchor.identifier {
        let sphere = SCNSphere(radius: 1.0)
        sphere.firstMaterial?.diffuse.contents = UIColor.red
        let sphereNode = SCNNode(geometry: sphere)
        sphereNode.transform = SCNMatrix4(anchor.transform)
        return sphereNode
    }
    return nil
}
Source