I have a SpriteKit-based ARKit application that shows SKSpriteNodes at various anchor points in AR space. I am providing the node for each anchor using the ARSKViewDelegate method
func view(_ view: ARSKView, nodeFor anchor: ARAnchor) -> SKNode?
That works when I first add the anchor, but at some point during runtime I would like to switch the nodes for some anchors to different ones. How do I force the ARSKView to "refresh" and call that node-for-anchor method again?
Use the view(_:didUpdate:for:) instance method.
This method tells the delegate that a SpriteKit node's properties have been updated to match the current state of its corresponding anchor.
optional func view(_ view: ARSKView,
                   didUpdate node: SKNode,
                   for anchor: ARAnchor)
And don't forget about the delegate property that allows you to use the aforementioned method:
weak var delegate: ARSKViewDelegate? { get set }
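As an illustration, here's a minimal sketch of swapping a node's contents inside that callback; the container setup and the spriteName(for:) helper are assumptions made for this example, not part of the original question:
import ARKit
import SpriteKit

extension ViewController: ARSKViewDelegate {

    func view(_ view: ARSKView, nodeFor anchor: ARAnchor) -> SKNode? {
        // Wrap the sprite in a container so its contents can be replaced later.
        let container = SKNode()
        let name = spriteName(for: anchor)          // assumed helper deciding which image to show
        let sprite = SKSpriteNode(imageNamed: name)
        sprite.name = name
        container.addChild(sprite)
        return container
    }

    func view(_ view: ARSKView, didUpdate node: SKNode, for anchor: ARAnchor) {
        // Rebuild the container's contents only when the desired sprite has changed.
        let desired = spriteName(for: anchor)
        guard node.childNode(withName: desired) == nil else { return }
        node.removeAllChildren()
        let sprite = SKSpriteNode(imageNamed: desired)
        sprite.name = desired
        node.addChild(sprite)
    }
}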
I am trying to create an app where I can use the depth functionalities of RealityKit but the AR drawing capabilities from SceneKit. What I would like to do is recognize an object and place a 3D model over it (which works already).
When that is completed I would like the user to be able to draw on top of that 3D model (which works fine with SceneKit, but makes the 3D model jitter). I found SCNLine to do the drawing, but since it uses SceneKit I cannot use it in the ARView of RealityKit.
I have seen this already, but it does not cover fully what I would like.
Is it possible to use both?
SceneKit and RealityKit are incompatible due to a complete dissimilarity – different scene hierarchies, different renderers and physics engines, different component content. What's stopping you from using SceneKit + ARKit (the ARSCNView class)?
ARKit 6.0 has a built-in Depth API (the same API is available in RealityKit) that uses a LiDAR scanner to more accurately determine distances in the surrounding environment, allowing us to use plane detection, raycasting and object occlusion more efficiently.
For that, use the sceneReconstruction instance property and ARMeshAnchors.
import ARKit
import SceneKit

class ViewController: UIViewController {

    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()

        sceneView.scene = SCNScene()
        sceneView.delegate = self

        let config = ARWorldTrackingConfiguration()
        config.sceneReconstruction = .mesh
        config.planeDetection = .horizontal
        sceneView.session.run(config)
    }
}
Delegate's method:
extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode,
                                                for anchor: ARAnchor) {
        guard let meshAnchor = anchor as? ARMeshAnchor else { return }
        let meshGeo = meshAnchor.geometry
        // logic ...
        node.addChildNode(someModel)
    }
}
P. S.
This post will be helpful for you.
I'm trying to make my AR experience more user-friendly.
I have this SCNNode (objectNodeToPlace) that I want to place over the node created/updated by the renderer methods whenever an imageReference is detected by the camera:
class ViewController: UIViewController, ARSCNViewDelegate {

    var objectNodeToPlace: SCNNode
    ...

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor else { return }
        placeObject(object: objectNodeToPlace, at: node, ...)
    }

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor else { return }
        placeObject(object: objectNodeToPlace, at: node, ...)
    }
}
My placeObject function is pretty simple; I'm just changing the orientation of the objectNodeToPlace:
private func placeObject(object objectNodeToPlace: SCNNode, at node: SCNNode, ...) {
    ...
    node.addChildNode(objectNodeToPlace)
}
Everything works well, my object is always placed on the latest detected imageReference.
But when my object is placed onto an image and a new one is detected, the object jumps to the newest one; it's not great, very jittery / jerky.
My goal is to make this transition smoother.
What I currently have
Here's what it looks like right now:
What I have now
In red, the imageReference.
What I want
Here’s what I would like:
What I would like
What I've found so far
I've found this package, SceneKit Bezier Animations, to animate the movement between 2 points, but it looks a little bit overkill for what I want.
I also read this topic, SceneKit Rotate and Animate a SCNNode; one response suggests using CABasicAnimation and another suggests SCNAction.
I feel like SCNAction is the best way to go for the quick, not-that-precise animation that I want. I'm not sure of what I'm doing, so I would be happy to hear from more experienced developers.
Edit
I've found what I'm looking for on the Apple documentation Animating SceneKit Content
It's called implicit animation, and it should work with only one line of code that determines the animation duration of my changes:
SCNTransaction.animationDuration = 1.0
I tried that line just before I change the Euler angle (which is "animatable") of my node in my place() function:
SCNTransaction.animationDuration = 1.0
objectNodeToPlace.eulerAngles.x = radian
but that didn't work. I'm pretty sure I'm just missing something simple, but I can't find examples online, even in the documentation of SCNTransaction.
Does someone have an idea?
Axel,
SCNTransaction needs a minimum of three statements.
SCNTransaction.begin()
SCNTransaction.animationDuration = 1.0
// commands
SCNTransaction.commit()
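Applied to the eulerAngles change from your question (using the objectNodeToPlace and radian names from above), that would look roughly like this:
SCNTransaction.begin()
SCNTransaction.animationDuration = 1.0
objectNodeToPlace.eulerAngles.x = radian   // animatable change made inside the transaction
SCNTransaction.commit()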
I've written a dozen articles on SceneKit recently, which you'll find here.
I am trying to add a .rcproject to my SCNView. I am working with SwiftUI and totally lost. I have no idea how to add it.
Currently I am able to detect my objects in the room with ARKit. But I also want to add my Scene from RealityKit at this anchor point.
Is there a way to do so?
func renderer(_ renderer: SCNSceneRenderer,
              didAdd node: SCNNode,
              for anchor: ARAnchor) {

    if let objectAnchor = anchor as? ARObjectAnchor {
        let name = objectAnchor.referenceObject.name!
        print("You found a \(name) object")

        let titleNode = createTitleNode(name)
        node.addChildNode(titleNode)

        let example_scene = try! RealityExample.loadScene()
        arView.scene.anchors.append(example_scene)
        // not possible, because this is not a SCNScene
    }
}
Thanks a lot.
You can't load a Reality Composer project (.rcproject) into ARSCNView's scene (.scn). That's because SceneKit isn't able to handle RealityKit's objects and hierarchy. In SceneKit there are nodes (SCNNode class) connected to the scene's root node (however, if you're using SceneKit with ARKit, nodes must also be tethered to ARAnchors), but in RealityKit there are entities (ModelEntity class) connected to the scene through AnchorEntities. These two frameworks are totally different.
The only file format RealityKit and SceneKit can share is Pixar's .usdz.
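So, as a rough workaround sketch, you could export the Reality Composer content as .usdz and load it into SceneKit; the "Model.usdz" file name and the helper below are assumptions for illustration only:
import SceneKit

func addUSDZContent(to node: SCNNode) {
    // Assumed: the Reality Composer content was exported as "Model.usdz" and added to the app bundle.
    guard let url = Bundle.main.url(forResource: "Model", withExtension: "usdz"),
          let usdzScene = try? SCNScene(url: url, options: nil)
    else { return }

    // Re-parent the usdz contents under the ARKit-provided node,
    // e.g. the node from renderer(_:didAdd:for:).
    for child in usdzScene.rootNode.childNodes {
        node.addChildNode(child)
    }
}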
Let's say I have a single photo (taken with an iOS camera) that contains a known image target (e.g. a square QR code that is 5cm x 5cm) lying on a flat plane. Can I use the Apple Vision framework to calculate the 6DoF pose of the image target?
I'm unfamiliar with the framework, but it seems to me that this problem is similar to the tracking of AR Targets, and so I'm hoping that there is a solution in there somewhere!
In fact, what I actually want to do is detect shapes in the static image (using an existing cloud-hosted OpenCV app) and display those shapes in AR using ARKit. I was hoping that I could have the same image targets present in both the static images and the AR video feed.
Obtaining ARCamera position
In ARKit you can acquire ARCamera's position through ARFrame using dot notation. Each ARFrame (out of 60 frames per second) contains a 4x4 camera matrix. To update ARCamera's position, use the instance method renderer(_:didUpdate:for:).
Here's the "initial" method, renderer(_:didAdd:for:):
extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer,
                  didAdd node: SCNNode,
                  for anchor: ARAnchor) {

        let frame = sceneView.session.currentFrame

        print(frame?.camera.transform.columns.3.x as Any)
        print(frame?.camera.transform.columns.3.y as Any)
        print(frame?.camera.transform.columns.3.z as Any)
        // ...
    }
}
Obtaining anchor coordinates and image size
When you're using Vision and ARKit together, the simplest way to obtain the coordinates of a tracked image in ARKit is to use the transform instance property of ARImageAnchor, expressed as a SIMD 4x4 matrix.
var transform: simd_float4x4 { get }
This matrix encodes the position, orientation, and scale of the anchor relative to the world coordinate space of the AR session the anchor is placed in.
Here's how your code might look:
extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer,
                  didAdd node: SCNNode,
                  for anchor: ARAnchor) {

        guard let imageAnchor = anchor as? ARImageAnchor else { return }

        print(imageAnchor.transform.columns.3.x)
        print(imageAnchor.transform.columns.3.y)
        print(imageAnchor.transform.columns.3.z)
        // ...
    }
}
If you want to know what a SIMD 4x4 matrix is, read this post.
Also, to obtain the physical size (in meters) of a tracked photo, use this property:
// set in Xcode's `AR Resources` Group
imageAnchor.referenceImage.physicalSize
To calculate the factor between the initial size and the estimated physical size, use this property:
imageAnchor.estimatedScaleFactor
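For example, a small sketch combining both properties to estimate the tracked photo's real-world dimensions:
let size  = imageAnchor.referenceImage.physicalSize   // meters, as declared in AR Resources
let scale = imageAnchor.estimatedScaleFactor          // runtime correction factor
let estimatedWidth  = size.width  * scale
let estimatedHeight = size.height * scale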
Updating anchor coordinates and image size
To constantly update the coordinates of ARImageAnchor and the image size, use the second method coming from ARSCNViewDelegate:
optional func renderer(_ renderer: SCNSceneRenderer,
                       didUpdate node: SCNNode,
                       for anchor: ARAnchor)
To obtain a bounding box (CGRect type) of your photo in Vision, use this instance property:
VNDetectedObjectObservation().boundingBox
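For instance, here's a minimal sketch of running a rectangle-detection request and reading boundingBox from the resulting observations (VNRectangleObservation is a subclass of VNDetectedObjectObservation); the request type is just one option among several:
import Vision

func detectRectangles(in cgImage: CGImage) {
    let request = VNDetectRectanglesRequest { request, _ in
        guard let observations = request.results as? [VNRectangleObservation] else { return }
        for observation in observations {
            // boundingBox is normalized (0...1) with the origin at the bottom-left of the image.
            print(observation.boundingBox)
        }
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}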
I have a simple SCNNode that I want to place at a real-world position; the node corresponds to a landmark with known coordinates. I want to keep the SCNNode still at its location, however it tends to move with the camera. I cannot use plane detection or a hit-test to place the node in the real world, I can only use the real-world coordinates. My current solution creates an ARAnchor using the SCNNode's world transform.
showNode(node: Node, location: convertedPoint)
let anchor = ARAnchor(transform: Node.simdWorldTransform)
self.sceneView.session.add(anchor: anchor)
I thought this would be enough to anchor the node. Is there a solution to anchor the node without using plane detection or a hit-test?
Thanks
First, even if you stabilize your node with an anchor, it's never going to be perfect, just better.
However, while an ARAnchor is being created at the coordinates you specify, the node you want stabilized is not actually being attached to that anchor.
You need to implement the renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) function in your ARSCNViewDelegate and return the target node you want to be stabilized by the anchor. (Alternatively, you can add your node as a child node of the default node created for the anchor by implementing the renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) function; a sketch of that alternative appears after the code below.)
If your nodeToBeStabilized object is available as a property within the ViewController, then a very dumb implementation might be something like this:
extension WAViewController: ARSCNViewDelegate, ARSessionDelegate {

    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        return nodeToBeStabilized
    }
}
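And here's a sketch of the alternative mentioned above, using renderer(_:didAdd:for:) instead of nodeFor (use one or the other, not both, otherwise the node would be re-parented into itself):
extension WAViewController {

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        // Attach the node to the empty node ARKit created for the anchor.
        node.addChildNode(nodeToBeStabilized)
    }
}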