Implicitly animating SCNNode's transition from one ARImageAnchor to another - swift

I’m trying to make my AR experience more user friendly.
I have an SCNNode (objectNodeToPlace) that I want to place over the node created/updated by the renderer whenever a reference image is detected by the camera:
class ViewController: UIViewController, ARSCNViewDelegate {
    var objectNodeToPlace: SCNNode
    ...
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor else { return }
        placeObject(object: objectNodeToPlace, at: node, ...)
    }

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor else { return }
        placeObject(object: objectNodeToPlace, at: node, ...)
    }
}
My placeObject function is pretty simple, I’m just changing the orientation of the “objectNodeToPlace”:
private func placeObject(object objectNodeToPlace: SCNNode, at node: SCNNode, ...) {
    ...
    node.addChildNode(objectNodeToPlace)
}
Everything works well: my object is always placed on the latest detected reference image.
But when my object is placed on one image and a new one is detected, the object jumps to the newest; it's not great, very jittery / jerky.
My goal is to make this transition smoother.
What I currently have
Here's what it looks like right now:
[Image: what I have now]
In red, the reference image.
What I want
Here's what I would like:
[Image: what I would like]
What I’ve found so far
I’ve found this package, SceneKit Bezier Animations, to animate the movement between two points, but it looks a little overkill for what I want.
I also read this topic, SceneKit Rotate and Animate a SCNNode, where one response suggests using CABasicAnimation and another suggests SCNAction.
I feel like SCNAction is the best way to go for the quick, not-that-precise animation that I want, but I'm not sure of what I'm doing, so I will be happy to hear from more experienced developers.
Edit
I've found what I'm looking for in the Apple documentation, Animating SceneKit Content.
It's called implicit animation, and it should work with only one line of code that determines the animation duration of my changes:
SCNTransaction.animationDuration = 1.0
I tried that line just before changing the Euler angle (which is "animatable") of my node in my place() function:
SCNTransaction.animationDuration = 1.0
objectNodeToPlace.eulerAngles.x = radian
but that didn't work. I'm pretty sure I'm just missing a simple thing, but I can't find examples online, even in the documentation of SCNTransaction.
Does someone have an idea?

Axel,
SCNTransaction needs a minimum of three statements:
SCNTransaction.begin()
SCNTransaction.animationDuration = 1.0
// commands
SCNTransaction.commit()
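Applied to your place() function, that might look like this (a minimal sketch using the objectNodeToPlace and radian from your snippet):
SCNTransaction.begin()
SCNTransaction.animationDuration = 1.0
// Changes to animatable properties inside the transaction are implicitly animated
objectNodeToPlace.eulerAngles.x = radian
SCNTransaction.commit()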
I've written a dozen articles on SceneKit recently; you'll find them here.

Related

Add a RealityKit's "rcproject" to ARKit's SCNScene

I am trying to add a .rcproject to my SCNView. I am working with SwiftUI and totally lost. I have no idea how to add it.
Currently I am able to detect my objects in the room with ARKit. But I also want to add my Scene from RealityKit at this anchor point.
Is there a way to do so?
func renderer(_ renderer: SCNSceneRenderer,
              didAdd node: SCNNode,
              for anchor: ARAnchor) {
    if let objectAnchor = anchor as? ARObjectAnchor {
        let name = objectAnchor.referenceObject.name!
        print("You found a \(name) object")
        let titleNode = createTitleNode(name)
        node.addChildNode(titleNode)
        let example_scene = try! RealityExample.loadScene()
        arView.scene.anchors.append(example_scene)
        // not possible, because this is not an SCNScene
    }
}
Thanks a lot.
You can't load a Reality Composer project (.rcproject) into ARSCNView's scene (.scn). That's because SceneKit isn't able to handle RealityKit's objects and hierarchy. In SceneKit there are nodes (SCNNode class) connected to the scene's root node (however, if you're using SceneKit with ARKit, nodes must also be tethered to ARAnchors), but in RealityKit there are entities (ModelEntity class) connected to the scene through AnchorEntities. These two frameworks are totally different.
The only file format RealityKit and SceneKit can share is Pixar's .usdz.
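So one workaround is to export your Reality Composer scene as .usdz and load that into SceneKit instead, then attach it to the anchor's node in your renderer callback (a minimal sketch; "model.usdz" is a hypothetical file name):
if let url = Bundle.main.url(forResource: "model", withExtension: "usdz"),
   let referenceNode = SCNReferenceNode(url: url) {
    // SCNReferenceNode loads the scene at the given URL on demand
    referenceNode.load()
    node.addChildNode(referenceNode)
}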

Apple Vision – Is it possible to obtain camera position from static image?

Let's say I have a single photo (taken with the iOS camera) that contains a known image target (e.g. a square QR code that is 5cm x 5cm) lying on a flat plane. Can I use the Apple Vision framework to calculate the 6DoF pose of the image target?
I'm unfamiliar with the framework, but it seems to me that this problem is similar to the tracking of AR Targets, and so I'm hoping that there is a solution in there somewhere!
In fact what I actually want to do is to detect shapes in the static image (using an existing cloud hosted open cv app) and to display those shapes in AR using ARKit. I was hoping that I could have the same image targets present in both the static images and in the AR video feed.
Obtaining ARCamera position
In ARKit you can acquire the ARCamera's position through ARFrame's dot notation. Each ARFrame (out of 60 frames per second) contains a 4x4 camera matrix. To update the ARCamera's position, use an instance method called renderer(_:didUpdate:for:).
Here's "initial" method called renderer(_:didAdd:for:):
extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer,
                  didAdd node: SCNNode,
                  for anchor: ARAnchor) {
        let frame = sceneView.session.currentFrame
        print(frame?.camera.transform.columns.3.x as Any)
        print(frame?.camera.transform.columns.3.y as Any)
        print(frame?.camera.transform.columns.3.z as Any)
        // ...
    }
}
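The updating counterpart might look like this (a sketch along the same lines):
func renderer(_ renderer: SCNSceneRenderer,
              didUpdate node: SCNNode,
              for anchor: ARAnchor) {
    // Called every time the anchor's node is updated (i.e. every tracked frame)
    guard let transform = sceneView.session.currentFrame?.camera.transform
    else { return }
    print(transform.columns.3.x, transform.columns.3.y, transform.columns.3.z)
}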
Obtaining anchor coordinates and image size
When you're using Vision and ARKit together, the simplest way to obtain the coordinates of a tracked image in ARKit is to use the transform instance property of ARImageAnchor, expressed as a SIMD 4x4 matrix.
var transform: simd_float4x4 { get }
This matrix encodes the position, orientation, and scale of the anchor relative to the world coordinate space of the AR session the anchor is placed in.
Here's how your code may look:
extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer,
                  didAdd node: SCNNode,
                  for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor
        else { return }
        print(imageAnchor.transform.columns.3.x)
        print(imageAnchor.transform.columns.3.y)
        print(imageAnchor.transform.columns.3.z)
        // ...
    }
}
If you want to know what a SIMD 4x4 matrix is, read this post.
Also, to obtain the physical size (in meters) of a tracked photo, use this property:
// set in Xcode's `AR Resources` Group
imageAnchor.referenceImage.physicalSize
To calculate the factor between the initial size and the estimated physical size, use this property:
imageAnchor.estimatedScaleFactor
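For example, to size an overlay plane to the image's estimated real-world size (a sketch; imageAnchor is assumed to come from the renderer callback):
let size = imageAnchor.referenceImage.physicalSize
let plane = SCNPlane(width: size.width * imageAnchor.estimatedScaleFactor,
                     height: size.height * imageAnchor.estimatedScaleFactor)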
Updating anchor coordinates and image size
To constantly update the coordinates of ARImageAnchor and the image size, use the second method coming from ARSCNViewDelegate:
optional func renderer(_ renderer: SCNSceneRenderer,
                       didUpdate node: SCNNode,
                       for anchor: ARAnchor)
To obtain the bounding box (CGRect type) of your photo in Vision, use the boundingBox instance property on the VNDetectedObjectObservation your request returns:
observation.boundingBox
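Here's where such an observation might come from (a sketch, assuming a cgImage of your photo and a rectangle-detection request):
import Vision

let request = VNDetectRectanglesRequest { request, _ in
    // VNRectangleObservation is a subclass of VNDetectedObjectObservation
    guard let observation = request.results?.first as? VNRectangleObservation
    else { return }
    // Normalized coordinates (0...1) with the origin at the lower-left corner
    print(observation.boundingBox)
}
let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
try? handler.perform([request])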

SCNNode is not being anchored at real world coordinates

I have a simple SCNNode that I want to place at a real-world position; the node corresponds to a landmark with known coordinates. I want to keep the SCNNode still at its location, however it tends to move with the camera. I cannot use plane detection or a hit-test to place the node in the real world, I can only use the real-world coordinates. My current solution creates an ARAnchor using the SCNNode's world transform.
showNode(node: Node, location: convertedPoint)
let anchor = ARAnchor(transform: Node.simdWorldTransform)
self.sceneView.session.add(anchor: anchor)
I thought this would be enough to anchor the node. Is there a solution to anchor the node without using plane detection or a hit-test?
Thanks
First, even if you stabilize your node with an anchor, it's never going to be perfect, just better.
However, while an ARAnchor is being created at the coordinates you specify, the node you want stabilized is not actually being attached to that anchor.
You need to implement the renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) function in your ARSCNViewDelegate and return the target node you want stabilized by the anchor. (Alternatively, you can add your node as a childNode of the node created by default for the anchor, by implementing the renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) function; see the sketch after the example below.)
If your nodeToBeStabilized object is available as a property within the ViewController, then a very dumb implementation might be something like this:
extension WAViewController: ARSCNViewDelegate, ARSessionDelegate {

    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        return nodeToBeStabilized
    }
}
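The didAdd alternative might look like this (a sketch under the same assumption that nodeToBeStabilized is a property of the view controller):
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    // Attach the node to the anchor's auto-created node so ARKit keeps it in place
    node.addChildNode(nodeToBeStabilized)
}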

ARKit – Poster as a window to a virtual room

I am working on an iOS app using ARKit.
In the real world, there is a poster on the wall. The poster is a fixed thing, so any needed preprocessing may be applied.
The goal is to make this poster a window into a virtual room, so that when the user approaches the poster, he can look "through" it at some virtual 3D environment (room). Of course, the user cannot go through the "window" and then wander in that 3D environment. He can only observe the virtual room by looking "through" the poster.
I know that it's possible to make this poster detectable by ARKit, and to play some visual effects around it, or even a movie on top of it.
But I did not find information on how to turn it into a window into a virtual 3D world.
Any ideas and links to sample projects are greatly appreciated.
Look at this video posted on the Augmented Images webpage (use the Chrome browser to watch it).
It's easy to create that type of virtual cube. All you need is a 3D model of a simple cube primitive without a front polygon (in order to see its inner surface). You also need a plane with a square hole. Assign an out-of-the-box RealityKit occlusion material or a hand-made SceneKit occlusion material to this plane, and it will hide all the outer walls of the cube behind it (look at the picture below).
In Autodesk Maya, an occlusion material is the Hold-Out option in Render Stats (for Viewport 2.0 only).
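In SceneKit, a hand-made occlusion material might look like this (a sketch; maskingPlane is the hypothetical node holding the plane with the square hole):
let occlusionMaterial = SCNMaterial()
occlusionMaterial.colorBufferWriteMask = []      // write no color...
occlusionMaterial.writesToDepthBuffer = true     // ...but still write depth
maskingPlane.geometry?.firstMaterial = occlusionMaterial
maskingPlane.renderingOrder = -10                // render before the cube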
When you're tracking your poster on a wall (with the detectionImages option activated), your app must recognize the picture and "load" the 3D cube and its masking plane with the occlusion shader. Since the ARImageAnchor on the poster and the pivot point of the 3D cube must meet, the cube's pivot point has to be located on the front face of the cube (at the same level as the wall's surface).
If you wish to download Apple's sample code containing an Image Detection experience, just click the blue button on the same webpage with detectionImages.
Here is a short example of my code:
@IBOutlet var sceneView: ARSCNView!

override func viewDidLoad() {
    super.viewDidLoad()
    sceneView.delegate = self // for using renderer() methods of ARSCNViewDelegate
    sceneView.scene = SCNScene()
}

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    resetTrackingConfiguration()
}

func resetTrackingConfiguration() {
    guard let refImage = ARReferenceImage.referenceImages(inGroupNamed: "Poster",
                                                          bundle: nil)
    else { return }
    let config = ARWorldTrackingConfiguration()
    config.detectionImages = refImage
    config.maximumNumberOfTrackedImages = 1
    let options = [ARSession.RunOptions.removeExistingAnchors,
                   ARSession.RunOptions.resetTracking]
    sceneView.session.run(config, options: ARSession.RunOptions(options))
}
...and, of course, SceneKit's renderer() instance method:
func renderer(_ renderer: SCNSceneRenderer,
              didAdd node: SCNNode,
              for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor,
          let _ = imageAnchor.referenceImage.name
    else { return }
    anchorsArray.append(imageAnchor)
    if anchorsArray.first != nil {
        node.addChildNode(portalNode)
    }
}
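A hedged sketch of how the portalNode itself might be built (the names and dimensions are assumptions):
// A cube whose inner surface is visible thanks to front-face culling
let room = SCNBox(width: 1.0, height: 1.0, length: 1.0, chamferRadius: 0)
room.firstMaterial?.diffuse.contents = UIColor.darkGray
room.firstMaterial?.cullMode = .front
let portalNode = SCNNode(geometry: room)
// The cube must be placed so its front face lies at the poster's surface;
// the exact offset and rotation depend on the image anchor's orientation.
portalNode.position.z = -0.5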

Check whether the ARReferenceImage is no longer visible in the camera's view

I would like to check whether the ARReferenceImage is no longer visible in the camera's view. At the moment I can check if the image's node is in the camera's view, but this node is still visible in the camera's view when the ARReferenceImage is covered with another image or when the image is removed.
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    guard let node = self.currentImageNode else { return }
    if let pointOfView = sceneView.pointOfView {
        let isVisible = sceneView.isNode(node, insideFrustumOf: pointOfView)
        print("Is node visible: \(isVisible)")
    }
}
So I need to check if the image is no longer visible instead of the image's node visibility. But I can't find out if this is possible. The first screenshot shows three boxes that are added when the image beneath is found. When the found image is covered (see screenshot 2) I would like to remove the boxes.
I managed to fix the problem! I used a little bit of Maybe1's code and his concept for solving the problem, but in a different way. The following line of code is still used to reactivate the image recognition.
// Delete anchor from the session to reactivate the image recognition
sceneView.session.remove(anchor: anchor)
Let me explain. First we need to add some variables.
// The scnNodeBarn variable will be the node to be added when the barn image is found. Add another SCNNode when you have another image.
var scnNodeBarn: SCNNode = SCNNode()

// This variable holds the currently added SCNNode (in this case scnNodeBarn when the barn image is found)
var currentNode: SCNNode? = nil

// This variable holds the UUID of the found image anchor that is used to add an SCNNode
var currentARImageAnchorIdentifier: UUID?

// This variable is used to call a function when no new anchor has been added for 0.6 seconds
var timer: Timer!
The complete code with comments is below.
/// - Tag: ARImageAnchor-Visualizing
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    let referenceImage = imageAnchor.referenceImage

    // The following timer fires after 0.6 seconds, but it is stopped every time an anchor is found.
    // So when no ARImageAnchor is found, the timer completes, the current scene node is deleted
    // and the variable is set to nil.
    DispatchQueue.main.async {
        if self.timer != nil {
            self.timer.invalidate()
        }
        self.timer = Timer.scheduledTimer(timeInterval: 0.6, target: self, selector: #selector(self.imageLost(_:)), userInfo: nil, repeats: false)
    }

    // Check whether a new image was found, based on the ARImageAnchor identifier;
    // if so, delete the current scene node and set the variable to nil.
    if self.currentARImageAnchorIdentifier != imageAnchor.identifier &&
       self.currentARImageAnchorIdentifier != nil &&
       self.currentNode != nil {
        // Found a new image
        self.currentNode!.removeFromParentNode()
        self.currentNode = nil
    }

    updateQueue.async {
        // If currentNode is nil, there is currently no scene node
        if self.currentNode == nil {
            switch referenceImage.name {
            case "barn":
                self.scnNodeBarn.transform = node.transform
                self.sceneView.scene.rootNode.addChildNode(self.scnNodeBarn)
                self.currentNode = self.scnNodeBarn
            default: break
            }
        }
        self.currentARImageAnchorIdentifier = imageAnchor.identifier
        // Delete the anchor from the session to reactivate the image recognition
        self.sceneView.session.remove(anchor: anchor)
    }
}
Delete the node when the timer finishes, indicating that no new ARImageAnchor was found.
@objc
func imageLost(_ sender: Timer) {
    self.currentNode!.removeFromParentNode()
    self.currentNode = nil
}
In this way the currently added SCNNode will be deleted when the image is covered or when a new image is found.
Unfortunately, this solution does not solve the positioning problem of images, because of the following:
ARKit doesn’t track changes to the position or orientation of each detected image.
I don't think this is currently possible.
From the Recognizing Images in an AR Experience documentation:
Design your AR experience to use detected images as a starting point for virtual content.
ARKit doesn’t track changes to the position or orientation of each detected image. If you try to place virtual content that stays attached to a detected image, that content may not appear to stay in place correctly. Instead, use detected images as a frame of reference for starting a dynamic scene.
New Answer for iOS 12.0
ARKit 2.0 and iOS 12 finally add this feature, either via ARImageTrackingConfiguration or via the ARWorldTrackingConfiguration.detectionImages property, which now also tracks the position of the images.
The Apple documentation for ARImageTrackingConfiguration lists the advantages of both methods:
With ARImageTrackingConfiguration, ARKit establishes a 3D space not by tracking the motion of the device relative to the world, but solely by detecting and tracking the motion of known 2D images in view of the camera. ARWorldTrackingConfiguration can also detect images, but each configuration has its own strengths:
World tracking has a higher performance cost than image-only tracking, so your session can reliably track more images at once with ARImageTrackingConfiguration.
Image-only tracking lets you anchor virtual content to known images only when those images are in view of the camera. World tracking with image detection lets you use known images to add virtual content to the 3D world, and continues to track the position of that content in world space even after the image is no longer in view.
World tracking works best in a stable, nonmoving environment. You can use image-only tracking to add virtual content to known images in more situations—for example, an advertisement inside a moving subway car.
The correct way to check whether an image you are tracking is currently not tracked by ARKit is to use the isTracked property of the ARImageAnchor inside the renderer(_:didUpdate:for:) function.
For that, I use the following struct:
struct TrackedImage {
    var name: String
    var node: SCNNode?
}
And then an array of that struct with the names of all the images:
var trackedImages : [TrackedImage] = [ TrackedImage(name: "image_1", node: nil) ]
Then, in didAdd node for anchor, add the new content to the scene and also assign the node to the corresponding element in the trackedImages array:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    // Check if the added anchor is a recognized ARImageAnchor
    if let imageAnchor = anchor as? ARImageAnchor {
        // Get the reference AR image
        let referenceImage = imageAnchor.referenceImage
        // Create a plane to match the detected image
        let plane = SCNPlane(width: referenceImage.physicalSize.width, height: referenceImage.physicalSize.height)
        plane.firstMaterial?.diffuse.contents = UIColor(red: 1, green: 1, blue: 1, alpha: 0.5)
        // Create an SCNNode from the plane
        let planeNode = SCNNode(geometry: plane)
        planeNode.eulerAngles.x = -.pi / 2
        // Add the plane to the scene
        node.addChildNode(planeNode)
        // Store the node with the corresponding tracked image
        for (index, trackedImage) in trackedImages.enumerated() {
            if trackedImage.name == referenceImage.name {
                trackedImages[index].node = planeNode
            }
        }
    }
}
Finally, in the didUpdate node for anchor function, we search for the anchor's name in our array and check whether the isTracked property is false:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    if let imageAnchor = anchor as? ARImageAnchor {
        let referenceImage = imageAnchor.referenceImage
        // Search for the node corresponding to the AR image anchor
        for trackedImage in trackedImages {
            if trackedImage.name == referenceImage.name {
                // Check whether tracking is lost on the AR image
                if imageAnchor.isTracked {
                    // The image is being tracked
                    trackedImage.node?.isHidden = false // show or add content
                } else {
                    // The image is lost
                    trackedImage.node?.isHidden = true // hide or delete content
                }
                break
            }
        }
    }
}
This solution works when you want to track multiple images at the same time and know when any of them is lost.
Note: For this solution to work the maximumNumberOfTrackedImages in the AR configuration must be set to a nonzero number.
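For example (a sketch; referenceImages is assumed to be your loaded set of ARReferenceImages):
let configuration = ARImageTrackingConfiguration()
configuration.trackingImages = referenceImages
configuration.maximumNumberOfTrackedImages = 4 // must be nonzero for isTracked updates
sceneView.session.run(configuration)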
For what it's worth, I spent hours trying to figure out how to constantly check for reference images. The didUpdate function was the answer. Then you just need to test whether the reference image is being tracked, using the isTracked property. At that point, you can set the isHidden property to true or false. Here's my example:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    let trackedNode = node
    if let imageAnchor = anchor as? ARImageAnchor {
        if imageAnchor.isTracked {
            trackedNode.isHidden = false
            print("\(trackedNode.name)")
        } else {
            trackedNode.isHidden = true
            //print("\(trackedImageName)")
            print("No image in view")
        }
    }
}
I'm not entirely sure I have understood what you're asking (so apologies), but if I have, then perhaps this might help...
It seems that for isNode(_:insideFrustumOf:) to work correctly, there must be some SCNGeometry associated with the node (an SCNNode alone will not suffice).
For example, if we do something like this in the delegate callback and save the added SCNNode into an array:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    //1. If Our Target Image Has Been Detected Then Get The Corresponding Anchor
    guard let currentImageAnchor = anchor as? ARImageAnchor else { return }

    //2. Print The Anchor ID & Its Associated Node
    print("""
    Anchor With ID Has Been Detected \(currentImageAnchor.identifier)
    Associated Node Details = \(node)
    """)

    //3. Store The Node
    imageTargets.append(node)
}
And then use the insideFrustumOf method, 99% of the time it will say that the node is in view, even when we know it shouldn't be.
However, if we do something like this (whereby we create a transparent marker node, e.g. one that has some geometry):
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    //1. If Our Target Image Has Been Detected Then Get The Corresponding Anchor
    guard let currentImageAnchor = anchor as? ARImageAnchor else { return }

    //2. Print The Anchor ID & Its Associated Node
    print("""
    Anchor With ID Has Been Detected \(currentImageAnchor.identifier)
    Associated Node Details = \(node)
    """)

    //3. Create A Transparent Geometry
    node.geometry = SCNSphere(radius: 0.1)
    node.geometry?.firstMaterial?.diffuse.contents = UIColor.clear

    //4. Store The Node
    imageTargets.append(node)
}
And then call the following method, it does detect whether the ARReferenceImage is in view:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    //1. Get The Current Point Of View
    guard let pointOfView = augmentedRealityView.pointOfView else { return }

    //2. Loop Through Our Image Target Markers
    for addedNode in imageTargets {
        if augmentedRealityView.isNode(addedNode, insideFrustumOf: pointOfView) {
            print("Node Is Visible")
        } else {
            print("Node Is Not Visible")
        }
    }
}
In regard to your other point about an SCNNode being occluded by another one, the Apple Docs state that isNode(_:insideFrustumOf:):
does not perform occlusion testing. That is, it returns true if the tested node lies within the specified viewing frustum regardless of whether that node's contents are obscured by other geometry.
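If you do need occlusion awareness, one possible workaround (a sketch of my own, not from the docs) is to project the node into screen space and hit-test that point; if the frontmost hit isn't your node, something is covering it:
let projected = augmentedRealityView.projectPoint(addedNode.worldPosition)
let screenPoint = CGPoint(x: CGFloat(projected.x), y: CGFloat(projected.y))
// The first hit result is the geometry closest to the camera
let isOccluded = augmentedRealityView.hitTest(screenPoint, options: nil)
    .first.map { $0.node !== addedNode } ?? false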
Again, apologies if I haven't understood you correctly, but hopefully it might help to some extent...
Update:
Now that I fully understand your question, I agree with @orangenkopf that this isn't possible, since as the docs state:
ARKit doesn't track changes to the position or orientation of each detected image.
From the Recognizing Images in an AR Experience documentation:
ARKit adds an image anchor to a session exactly once for each reference image in the session configuration's detectionImages array. If your AR experience adds virtual content to the scene when an image is detected, that action will by default happen only once. To allow the user to experience that content again without restarting your app, call the session's remove(anchor:) method to remove the corresponding ARImageAnchor. After the anchor is removed, ARKit will add a new anchor the next time it detects the image.
So, maybe you can find a workaround for your case:
Let's say we have this structure, which saves our detected ARImageAnchor and the virtual content associated with it:
struct ARImage {
    var anchor: ARImageAnchor
    var node: SCNNode
}
Then, when renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) is called, you save the detected image into a temporary list of ARImage:
...
var tmpARImages: [ARImage] = []

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    let referenceImage = imageAnchor.referenceImage

    // If the ARImage does not exist yet
    if !tmpARImages.contains(where: { $0.anchor.referenceImage.name == referenceImage.name }) {
        let virtualContent = SCNNode(...)
        node.addChildNode(virtualContent)
        tmpARImages.append(ARImage(anchor: imageAnchor, node: virtualContent))
    }

    // Delete the anchor from the session to reactivate the image recognition
    sceneView.session.remove(anchor: anchor)
}
As you may have noticed, while your camera points away from the image/marker, the delegate function will be called over and over (because we removed the anchor from the session).
The idea is to combine the image recognition loop, the detected images saved into the tmp list, and the sceneView.isNode(node, insideFrustumOf: pointOfView) function to determine whether the detected image/marker is no longer in view.
I hope that was clear...
This code works only if you hold the device strictly horizontally or vertically. If you hold the iPhone tilted, or start to tilt it, this code doesn't work:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    //1. Get The Current Point Of View
    guard let pointOfView = augmentedRealityView.pointOfView else { return }

    //2. Loop Through Our Image Target Markers
    for addedNode in imageTargets {
        if augmentedRealityView.isNode(addedNode, insideFrustumOf: pointOfView) {
            print("Node Is Visible")
        } else {
            print("Node Is Not Visible")
        }
    }
}