Stereo ARSCNView to make a VR and AR mix - Swift

I want to make a mix of virtual reality and augmented reality.
The goal is a stereo camera view (one image for each eye).
I tried to put two ARSCNViews in a view controller, but it seems ARKit allows only one ARWorldTrackingSessionConfiguration at a time. How can I do that?
I also looked into copying the rendered output of one view and passing it to another view, but I couldn't find a way to do it. Please help me find a solution.
I found this link; maybe it can shed some light:
ARKit with multiple users
Here's a sample of my issue:
https://www.youtube.com/watch?v=d6LOqNnYm5s
PS: before downvoting my post, please comment why!

This is basically what Hal describes in his answer below. I previously wrote a few lines on GitHub that might help you get started. (Simple code, no barrel distortion, no adjustment for the narrow FOV - yet.)
Essentially, we connect the same scene to the second ARSCNView (so both ARSCNViews see the same scene). There's no need to get ARWorldTrackingSessionConfiguration working with two ARSCNViews. Then we offset that view's pointOfView so it's positioned as the second eye.
https://github.com/hanleyweng/iOS-Stereoscopic-ARKit-Template

The ARSession documentation says that ARSession is a shared object.
Every AR experience built with ARKit requires a single ARSession object. If you use an ARSCNView or ARSKView object to easily build the visual part of your AR experience, the view object includes an ARSession instance. If you build your own renderer for AR content, you'll need to instantiate and maintain an ARSession object yourself.
So there's a clue in that last sentence. Instead of two ARSCNView instances, use SCNView and share the single ARSession between them.
I expect this is a common use case, so it's worth filing a Radar to request stereo support.
How to do it right now?
The (singleton) session has only one delegate. You need two different delegate instances, one for each view. You could solve that with an object that sends the delegate messages to each view; solvable but a bit of extra work.
There's also the problem of needing two slightly different camera locations, one for each eye, for stereo vision. ARKit uses one camera, placed at the iOS device's location, so you'll have to fuzz that.
Then you have to deal with the different barrel distortions for each eye.
This, for me, adds up to writing my own custom object to intercept ARKit delegate messages, convert the coordinates to what I'd see from two different cameras, and manage the two distinct SCNViews (not ARSCNViews). Or perhaps use one ARSCNView (one eye), intercept its frame updates, and pass those frames on to a SCNView (the other eye).
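A minimal sketch of that intermediary might look like this (the class name SessionDelegateSplitter is hypothetical, not part of ARKit; it simply relays the optional ARSessionDelegate callbacks to several observers, e.g. one per eye):
import ARKit

// Hypothetical relay object: forwards ARSessionDelegate callbacks to several
// observers. ARSessionDelegate methods are optional, hence the optional calls.
final class SessionDelegateSplitter: NSObject, ARSessionDelegate {
    private let targets: [ARSessionDelegate]

    init(targets: [ARSessionDelegate]) {
        self.targets = targets
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        targets.forEach { $0.session?(session, didUpdate: frame) }
    }

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        targets.forEach { $0.session?(session, didAdd: anchors) }
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        targets.forEach { $0.session?(session, didUpdate: anchors) }
    }
}
You would then run a single ARSession yourself, make this splitter its delegate, and have each per-eye view react to the forwarded frames (with one camera offset by roughly half the interpupillary distance).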
File the Radar, post the number, and I'll dupe it.

To accomplish this, please use the following code:
import UIKit
import SceneKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet weak var sceneView: ARSCNView!
    @IBOutlet weak var sceneView2: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
        sceneView.showsStatistics = true
        let scene = SCNScene(named: "art.scnassets/ship.scn")!
        sceneView.scene = scene
        sceneView.isPlaying = true

        // SceneView2 Setup
        sceneView2.scene = scene
        sceneView2.showsStatistics = sceneView.showsStatistics
        // Now sceneView2 starts receiving updates
        sceneView2.isPlaying = true
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }
}
And don't forget to activate the .isPlaying instance property on both ARSCNViews.
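If you also want the second view to act as a separate eye, the offset can be applied every frame from the render delegate. Below is a minimal Swift sketch of that step, mirroring the Objective-C version in the next answer; it assumes the sceneView / sceneView2 outlets above, requires import GLKit at the top of the file, and uses 0.066 m as a rough interpupillary distance:
// Add to the ViewController above (requires `import GLKit`).
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    DispatchQueue.main.async {
        // Clone the left eye's point of view and shift it along its local x axis.
        guard let pov = self.sceneView.pointOfView?.clone() else { return }

        let o = pov.orientation
        let q = GLKQuaternionMake(o.x, o.y, o.z, o.w)
        let rotatedEye = GLKQuaternionRotateVector3(q, GLKVector3Make(1, 0, 0))

        let mag: Float = 0.066   // rough interpupillary distance in metres
        pov.position = SCNVector3(x: pov.position.x + rotatedEye.x * mag,
                                  y: pov.position.y,
                                  z: pov.position.z)
        self.sceneView2.pointOfView = pov
    }
}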

An Objective-C version of Han's GitHub code, with the scene views created programmatically and the y + z positions left un-updated - all credit to Han:
- (void)setup {
    // left
    leftSceneView = [ARSCNView new];
    leftSceneView.frame = CGRectMake(0, 0, w, h/2);
    leftSceneView.delegate = self;
    leftSceneView.autoenablesDefaultLighting = true;
    [self.view addSubview:leftSceneView];

    // right
    rightSceneView = [ARSCNView new];
    rightSceneView.frame = CGRectMake(0, h/2, w, h/2);
    rightSceneView.playing = true;
    rightSceneView.autoenablesDefaultLighting = true;
    [self.view addSubview:rightSceneView];

    // scene
    SCNScene *scene = [SCNScene new];
    leftSceneView.scene = scene;
    rightSceneView.scene = scene;

    // tracking
    ARWorldTrackingConfiguration *configuration = [ARWorldTrackingConfiguration new];
    configuration.planeDetection = ARPlaneDetectionHorizontal;
    [leftSceneView.session runWithConfiguration:configuration];
}

- (void)renderer:(id<SCNSceneRenderer>)renderer updateAtTime:(NSTimeInterval)time {
    dispatch_async(dispatch_get_main_queue(), ^{
        // update right eye
        SCNNode *pov = self->leftSceneView.pointOfView.clone;
        SCNQuaternion orientation = pov.orientation;
        GLKQuaternion orientationQuaternion = GLKQuaternionMake(orientation.x, orientation.y, orientation.z, orientation.w);
        GLKVector3 eyePosition = GLKVector3Make(1, 0, 0);
        GLKVector3 rotatedEyePosition = GLKQuaternionRotateVector3(orientationQuaternion, eyePosition);
        SCNVector3 rotatedEyePositionSCNV = SCNVector3Make(rotatedEyePosition.x, rotatedEyePosition.y, rotatedEyePosition.z);

        float mag = 0.066f; // approximate interpupillary distance in metres
        float rotatedX = pov.position.x + rotatedEyePositionSCNV.x * mag;
        float rotatedY = pov.position.y; // + rotatedEyePositionSCNV.y * mag;
        float rotatedZ = pov.position.z; // + rotatedEyePositionSCNV.z * mag;
        [pov setPosition:SCNVector3Make(rotatedX, rotatedY, rotatedZ)];

        self->rightSceneView.pointOfView = pov;
    });
}


Where is the .camera AnchorEntity located?

When adding a child to my AnchorEntity(.camera), it appears as if the child is spawning behind my camera (meaning I can only see my child when I turn around). I have also tried to add a mesh to my anchor directly, but unfortunately ARKit / RealityKit does not render the mesh when you are inside of it (which, because it's centered around the camera, is theoretically always the case; however, it could also be that it's always located behind the screen [where the user is] and I'm never able to see it).
Also, oddly enough the child entity does not move with the camera AnchorEntity despite setting the translation transform to (0,0,0).
My two questions are:
Is the .camera anchor actually located right where the physical iPad / camera is located or is it located further back (perhaps where the user would normally hold the iPad)?
How do you get a child entity of the AnchorEntity(.camera) to move as the iPad / camera moves in real space?
Answer to the first question
In the RealityKit and ARKit frameworks, ARCamera has a pivot point just like other entities (nodes) do, and it's located at the point where the lens meets the camera body (at bayonet level). AnchorEntity(.camera) can be tethered to this pivot. In other words, the virtual camera and the real-world camera have their pivot points at approximately the same place.
So, if you attach RealityKit's AnchorEntity to the camera's pivot, you place it at the coordinates where the camera's bayonet is located, and this AnchorEntity(.camera) will be tracked automatically, without any need to implement the session(_:didUpdate:) method.
However, if you attach ARKit's ARAnchor to the camera's pivot, you have to implement the session(_:didUpdate:) method to constantly update the position and orientation of that anchor for every ARFrame.
Answer to the second question
If you want to constantly update the model's position in RealityKit at 60 fps (as the ARCamera moves and rotates), use the following approach:
import ARKit
import RealityKit

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()

        let box = MeshResource.generateBox(size: 0.25)
        let material = SimpleMaterial(color: .systemPink, isMetallic: true)
        let boxEntity = ModelEntity(mesh: box, materials: [material])

        let cameraAnchor = AnchorEntity(.camera)     // ARCamera anchor
        cameraAnchor.addChild(boxEntity)
        arView.scene.addAnchor(cameraAnchor)

        boxEntity.transform.translation = [0, 0, -0.5]   // Box offset 0.5 m
    }
}
Or you can use ARKit's great old .currentFrame instance property in session(_:didUpdate:) delegate method:
extension ViewController: ARSessionDelegate {

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {

        guard let transform = arView.session.currentFrame?.camera.transform
        else { return }

        let arkitAnchor = ARAnchor(transform: transform)
        arView.session.add(anchor: arkitAnchor)      // add to session

        let anchor = AnchorEntity(anchor: arkitAnchor)
        anchor.addChild(boxEntity)
        arView.scene.addAnchor(anchor)               // add to scene
    }
}

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!
    var boxEntity = ModelEntity(...)

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.session.delegate = self               // Session's delegate
    }
}
To find out how to save the ARCamera Pose over time, read the following post.
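For reference, the gist of such pose recording can be sketched as follows (this is my own assumption of the approach, not code from that post): collect camera.transform for every ARFrame via ARSessionDelegate.
import ARKit

// Minimal sketch: accumulate the ARCamera pose for every frame.
final class PoseRecorder: NSObject, ARSessionDelegate {
    private(set) var poses: [simd_float4x4] = []

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        poses.append(frame.camera.transform)   // 4x4 camera pose for this frame
    }
}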

ARKit – How to display the feed from a virtual SCNCamera placed on SCNPlane?

I put some objects in AR space using ARKit and SceneKit. That works well. Now I'd like to add an additional camera (SCNCamera) that is placed elsewhere in the scene, attached to and positioned by an ordinary SCNNode. It is oriented to show me the current scene from another (fixed) perspective.
Now I'd like to show this additional SCNCamera feed on, for example, an SCNPlane (as the first material's diffuse contents) - like a TV screen. Of course I'm aware that it will only display the SceneKit content that is in that camera's view, not the rest of the ARKit image (which is only available from the main camera). A simple colored background would be fine.
I have seen tutorials that describe how to play a video file on a virtual display in AR space, but I need a real-time camera feed from my own current scene.
I defined these objects:
let camera = SCNCamera()
let cameraNode = SCNNode()
Then in viewDidLoad I do this:
camera.usesOrthographicProjection = true
camera.orthographicScale = 9
camera.zNear = 0
camera.zFar = 100
cameraNode.camera = camera
sceneView.scene.rootNode.addChildNode(cameraNode)
Then I call my setup function to place the virtual display next to all my AR stuff and to position the cameraNode as well (pointing in the direction where the objects sit in the scene):
cameraNode.position = SCNVector3(initialStartPosition.x, initialStartPosition.y + 0.5, initialStartPosition.z)
let cameraPlane = SCNNode(geometry: SCNPlane(width: 0.5, height: 0.3))
cameraPlane.geometry?.firstMaterial?.diffuse.contents = cameraNode.camera
cameraPlane.position = SCNVector3(initialStartPosition.x - 1.0, initialStartPosition.y + 0.5, initialStartPosition.z)
sceneView.scene.rootNode.addChildNode(cameraPlane)
Everything compiles and loads... The display shows up at the given position, but it stays entirely gray. Nothing is displayed at all from the SCNCamera I put in the scene. Everything else in the AR scene works well, I just don't get any feed from that camera.
Does anyone have an approach to get this scenario working?
To visualize it even better, I've added some more screenshots.
The following shows the image through the SCNCamera according to ARGeo's input. But it takes the whole screen instead of displaying its contents on an SCNPlane, like I need.
The next screenshot shows the current AR view result as I got it using my posted code. As you can see, the gray display plane remains gray - it shows nothing.
The last screenshot is a photomontage showing the expected result I'd like to get.
How could this be realized? Am I missing something fundamental here?
After some research and sleep, I came to the following working solution (including some inexplicable obstacles):
Currently, the additional SCNCamera feed is not linked to an SCNMaterial on an SCNPlane, as was the initial idea; instead I use an additional SCNView (for the moment).
In the definitions I add an other view like so:
let overlayView = SCNView() // (also tested with ARSCNView(), no difference)
let camera = SCNCamera()
let cameraNode = SCNNode()
Then, in viewDidLoad, I set things up like so...
camera.automaticallyAdjustsZRange = true
camera.usesOrthographicProjection = false
cameraNode.camera = camera
cameraNode.camera?.focalLength = 50
sceneView.scene.rootNode.addChildNode(cameraNode) // add the node to the default scene
overlayView.scene = scene // the same scene as sceneView
overlayView.allowsCameraControl = false
overlayView.isUserInteractionEnabled = false
overlayView.pointOfView = cameraNode // this links the new SCNView to the created SCNCamera
self.view.addSubview(overlayView) // don't forget to add as subview
// Size and place the view on the bottom
overlayView.frame = CGRect(x: 0, y: 0, width: self.view.bounds.width * 0.8, height: self.view.bounds.height * 0.25)
overlayView.center = CGPoint(x: self.view.bounds.width * 0.5, y: self.view.bounds.height - 175)
Then, in some other function, I place the node containing the SCNCamera at my desired position and angle.
// (exemplary)
cameraNode.position = initialStartPosition + SCNVector3(x: -0.5, y: 0.5, z: -(Float(shiftCurrentDistance * 2.0 - 2.0)))
cameraNode.eulerAngles = SCNVector3(-15.0.degreesToRadians, -15.0.degreesToRadians, 0.0)
The result is a kind of window (the new SCNView) at the bottom of the screen, displaying the same SceneKit content as the main sceneView, viewed through the perspective of the SCNCamera at its node's position, and it works very nicely.
In a typical iOS/Swift/ARKit project, this construct generates some side effects that one may run into.
1) The new SCNView shows the SceneKit content from the desired perspective, but the background is always the actual physical camera feed. I could not figure out how to make the background a static color while still displaying all the SceneKit content. Changing the new scene's background property also affects the whole main scene, which is NOT desired.
2) It might sound confusing, but as soon as the following code gets included (which is essential to make it work):
overlayView.scene = scene
the animation speed of both scenes DOUBLES! (Why?)
I corrected this by adding/changing the following property, which restores the animation speed to almost its normal (default) behaviour:
// add or change this in the scene setup
scene.physicsWorld.speed = 0.5
3) If there are actions like SCNAction.playAudio in the project, none of those effects will play any more - unless I do this:
overlayView.scene = nil
Of course, the additional SCNView then stops working, but everything else goes back to normal.
Use this code (as a starting point) to find out how to set up a virtual camera.
Just create a default ARKit project in Xcode and copy-paste my code:
import UIKit
import SceneKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
        sceneView.showsStatistics = true
        let scene = SCNScene(named: "art.scnassets/ship.scn")!
        sceneView.scene = scene

        let cameraNode = SCNNode()
        cameraNode.camera = SCNCamera()
        cameraNode.position = SCNVector3(0, 0, 1)
        cameraNode.camera?.focalLength = 70
        cameraNode.camera?.categoryBitMask = 1
        scene.rootNode.addChildNode(cameraNode)
        sceneView.pointOfView = cameraNode
        sceneView.allowsCameraControl = true
        sceneView.backgroundColor = UIColor.darkGray

        let plane = SCNNode(geometry: SCNPlane(width: 0.8, height: 0.45))
        plane.position = SCNVector3(0, 0, -1.5)

        // ASSIGN A VIDEO STREAM FROM SCENEKIT-RECORDER TO YOUR MATERIAL
        plane.geometry?.materials.first?.diffuse.contents = capturedVideoFromSceneKitRecorder
        scene.rootNode.addChildNode(plane)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        sceneView.session.run(configuration)
    }
}
UPDATED:
Here's a SceneKit Recorder App that you can tailor to your needs (you don't need to write a video to disk, just use a CVPixelBuffer stream and assign it as a texture for a diffuse material).
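If you go the CVPixelBuffer route, one possible way to push each buffer into the plane's material is to convert it to a CGImage first. This is only a sketch; the helper name and the assumption that the recorder hands you one pixel buffer per frame are mine:
import SceneKit
import CoreImage

let ciContext = CIContext()   // reuse one context; creating it per frame is expensive

// Hypothetical helper: turn the recorder's latest CVPixelBuffer into the
// plane's diffuse texture.
func updatePlaneTexture(of plane: SCNNode, with pixelBuffer: CVPixelBuffer) {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else { return }
    plane.geometry?.firstMaterial?.diffuse.contents = cgImage
}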
Hope this helps.
I'm a little late to the party, but I've had a similar issue recently.
As far as I can tell, you cannot directly connect a camera to a node's material. You can, however, use a scene's layer as a texture for a node.
The code below is not verified, but should be more or less ok:
class MyViewController: UIViewController {

    override func loadView() {
        let projectedScene = createProjectedScene()
        let receivingScene = createReceivingScene()
        let projectionPlane = receivingScene.scene!.rootNode.childNode(withName: "ProjectionPlane", recursively: true)!

        // Here's the important part:
        // You can't directly connect a camera to a material's diffuse texture.
        // But you can connect a scene view's layer as a texture.
        projectionPlane.geometry?.firstMaterial?.diffuse.contents = projectedScene.layer
        projectedScene.layer.contentsScale = 1

        // Note how we only need to connect the receiving view to the controller.
        // The projected view is not directly connected as a subview,
        // but updates in projectedScene will still be reflected in receivingScene.
        self.view = receivingScene
    }

    func createProjectedScene() -> SCNView {
        let view = SCNView()
        // ... set up scene ...
        return view
    }

    func createReceivingScene() -> SCNView {
        let view = SCNView()
        // ... set up scene ...
        let projectionPlane = SCNNode(geometry: SCNPlane(width: 2, height: 2))
        projectionPlane.name = "ProjectionPlane"
        view.scene?.rootNode.addChildNode(projectionPlane)
        return view
    }
}

How to use vertical detection in ARKit?

How can I insert an SCNScene anchored on a vertical plane in ARKit? I can't get the detection to work and insert the object in a vertical position.
override func viewDidLoad() {
    super.viewDidLoad()

    // Set the view's delegate
    sceneView?.delegate = self

    // Show statistics such as fps and timing information
    sceneView?.showsStatistics = true

    // Create a new scene
    let scene = SCNScene(named: "art.scnassets/tabellone.scn")
    sceneView?.debugOptions = [SCNDebugOptions.showFeaturePoints]

    // Set the scene to the view
    sceneView?.scene = scene!
}

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)

    // Create a session configuration
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = .vertical
    configuration.isLightEstimationEnabled = true

    // Run the view's session
    sceneView?.session.run(configuration)
}
You need an appropriate vertical surface for tracking. A wall with a solid color (with no distinguishable features on it) is a very bad candidate (you can see such a vertical surface in the left image). The most robust candidates for vertical-surface tracking are a well-lit brick wall, a wall with pictures on it, a wall with a distinguishable pattern, etc.
So the right image is quite a good example for tracking and vertical plane detection.
If you want to explore how to place a 3D object onto a detected plane, look at Apple's Handling 3D Interaction and UI Controls in AR sample project.
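As a quick starting point, a minimal sketch (my own, not Apple's sample code) that visualizes a detected vertical plane and lets you parent content to it could look like this:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor,
          planeAnchor.alignment == .vertical else { return }

    // Semi-transparent plane matching the detected extent.
    let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                         height: CGFloat(planeAnchor.extent.z))
    plane.firstMaterial?.diffuse.contents = UIColor.cyan.withAlphaComponent(0.3)

    let planeNode = SCNNode(geometry: plane)
    planeNode.simdPosition = planeAnchor.center
    planeNode.eulerAngles.x = -.pi / 2   // lay the SCNPlane into the anchor's plane
    node.addChildNode(planeNode)

    // Your own content (e.g. the tabellone scene's nodes) can be added
    // as a child of `node` here as well.
}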
Hope this helps.

Assigning SCNScene to SCNView - found nil while unwrapping Optional value

Recently, I decided to apply my previous knowledge of C++ and Python to learning Swift, after which I decided to see what I could do with the SceneKit framework. After hours of checking through the documentation and consulting a tutorial, I have to wonder what's going wrong with my code:
class GameViewController: UIViewController {

    var gameView: SCNView!
    var gameScene: SCNScene!
    var cameraNode: SCNNode!

    override func viewDidLoad() {
        super.viewDidLoad()
        initScene()
        initView()
        initCamera()
    }

    func initView() {
        // initialize the game view - this view holds everything else in the game!
        gameView = self.view as! SCNView
        // allow the camera to move with gestures - mainly for testing purposes
        gameView.allowsCameraControl = true
        // use default lighting while still practicing
        gameView.autoenablesDefaultLighting = true
    }

    func initScene() {
        // initialize the scene
        gameScene = SCNScene()
        // set the scene in the gameView object to the scene created by this function
        gameView.scene = gameScene
    }

    func initCamera() {
        // create a node that will become the camera
        cameraNode = SCNNode()
        // since a node can be any object in the scene, this needs to be set up as a camera
        cameraNode.camera = SCNCamera()
        cameraNode.position = SCNVector3(x: 0, y: 5, z: 15)
    }
}
After more checking through the documentation and making sure that I was now copying from the tutorial directly to get it to work, I still have no luck with this. According to a lot of the other questions I found here on StackOverflow, it looks like it has something to do with the forced unwrapping (the exclamation points), but I'm not exactly sure why that is.
I've probably been staring the answer in the face combing through this documentation, but I'm not quite seeing what the problem is.
Also, apologies if my comments are a bit long and/or distracting.
You have the following problems:
1) You should re-order the initializations in your viewDidLoad, like so:
initView() // must be initialized before the scene
initScene() // you have been crashing here on getting `gameView.scene`, but gameView was nil
initCamera()
2) cameraNode is not attached to the rootNode, so you should add the following code at the end of initCamera:
gameScene.rootNode.addChildNode(cameraNode)
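Putting both fixes together, the corrected methods (a sketch based on the code in the question) would look like this:
override func viewDidLoad() {
    super.viewDidLoad()
    initView()    // the view must exist before initScene() touches gameView.scene
    initScene()
    initCamera()
}

func initCamera() {
    cameraNode = SCNNode()
    cameraNode.camera = SCNCamera()
    cameraNode.position = SCNVector3(x: 0, y: 5, z: 15)
    gameScene.rootNode.addChildNode(cameraNode)   // attach the camera node to the scene
}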

How do I programmatically move an ARAnchor?

I'm trying out the new ARKit to replace another similar solution I have. It's pretty great! But I can't seem to figure out how to move an ARAnchor programmatically. I want to slowly move the anchor to the left of the user.
Creating the anchor to be 2 meters in front of the user:
var translation = matrix_identity_float4x4
translation.columns.3.z = -2.0
let transform = simd_mul(currentFrame.camera.transform, translation)
let anchor = ARAnchor(transform: transform)
sceneView.session.add(anchor: anchor)
later, moving the object to the left/right of the user (x-axis)...
anchor.transform.columns.3.x = anchor.transform.columns.3.x + 0.1
repeated every 50 milliseconds (or whatever).
The above does not work because transform is a get-only property.
I need a way to change the position of an AR object in space relative to the user in a way that keeps the AR experience intact - meaning, if you move your device, the AR object will be moving but also won't be "stuck" to the camera like it's simply painted on, but moves like you would see a person move while you were walking by - they are moving and you are moving and it looks natural.
Please note the scope of this question relates only to how to move an object in space in relation to the user (ARAnchor), not in relation to a plane (ARPlaneAnchor) or to another detected surface (ARHitTestResult).
Thanks!
You don't need to move anchors. (hand wave) That's not the API you're looking for...
Adding ARAnchor objects to a session is effectively about "labeling" a point in real-world space so that you can refer to it later. The point (1,1,1) (for example) is always the point (1,1,1) — you can't move it someplace else because then it's not the point (1,1,1) anymore.
To make a 2D analogy: anchors are reference points sort of like the bounds of a view. The system (or another piece of your code) tells the view where its boundaries are, and the view draws its content relative to those boundaries. Anchors in AR give you reference points you can use for drawing content in 3D.
What you're asking is really about moving (and animating the movement of) virtual content between two points. And ARKit itself really isn't about displaying or animating virtual content — there are plenty of great graphics engines out there, so ARKit doesn't need to reinvent that wheel. What ARKit does is provide a real-world frame of reference for you to display or animate content using an existing graphics technology like SceneKit or SpriteKit (or Unity or Unreal, or a custom engine built with Metal or GL).
Since you mentioned trying to do this with SpriteKit... beware, it gets messy. SpriteKit is a 2D engine, and while ARSKView provides some ways to shoehorn a third dimension in there, those ways have their limits.
ARSKView automatically updates the xScale, yScale, and zRotation of each sprite associated with an ARAnchor, providing the illusion of 3D perspective. But that applies only to nodes attached to anchors, and as noted above, anchors are static.
You can, however, add other nodes to your scene, and use those same properties to make those nodes match the ARSKView-managed nodes. Here's some code you can add/replace in the ARKit/SpriteKit Xcode template project to do that. We'll start with some basic logic to run a bouncing animation on the third tap (after using the first two taps to place anchors).
var anchors: [ARAnchor] = []

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    // Start bouncing on touch after placing 2 anchors (don't allow more)
    if anchors.count > 1 {
        startBouncing(time: 1)
        return
    }
    // Create anchor using the camera's current position
    guard let sceneView = self.view as? ARSKView else { return }
    if let currentFrame = sceneView.session.currentFrame {

        // Create a transform with a translation of 30 cm in front of the camera
        var translation = matrix_identity_float4x4
        translation.columns.3.z = -0.3
        let transform = simd_mul(currentFrame.camera.transform, translation)

        // Add a new anchor to the session
        let anchor = ARAnchor(transform: transform)
        sceneView.session.add(anchor: anchor)
        anchors.append(anchor)
    }
}
Then, some SpriteKit fun for making that animation happen:
var ballNode: SKLabelNode = {
    let labelNode = SKLabelNode(text: "🏀")
    labelNode.horizontalAlignmentMode = .center
    labelNode.verticalAlignmentMode = .center
    return labelNode
}()

func startBouncing(time: TimeInterval) {
    guard
        let sceneView = self.view as? ARSKView,
        let first = anchors.first, let start = sceneView.node(for: first),
        let last = anchors.last, let end = sceneView.node(for: last)
        else { return }

    if ballNode.parent == nil {
        addChild(ballNode)
    }
    ballNode.setScale(start.xScale)
    ballNode.zRotation = start.zRotation
    ballNode.position = start.position

    let scale = SKAction.scale(to: end.xScale, duration: time)
    let rotate = SKAction.rotate(toAngle: end.zRotation, duration: time)
    let move = SKAction.move(to: end.position, duration: time)

    let scaleBack = SKAction.scale(to: start.xScale, duration: time)
    let rotateBack = SKAction.rotate(toAngle: start.zRotation, duration: time)
    let moveBack = SKAction.move(to: start.position, duration: time)

    let action = SKAction.repeatForever(.sequence([
        .group([scale, rotate, move]),
        .group([scaleBack, rotateBack, moveBack])
    ]))
    ballNode.removeAllActions()
    ballNode.run(action)
}
Here's a video so you can see this code in action. You'll notice that the illusion only works as long as you don't move the camera — not so great for AR. When using SKAction, we can't adjust the start/end states of the animation while animating, so the ball keeps bouncing back and forth between its original (screen-space) positions/rotations/scales.
You could do better by animating the ball directly, but it's a lot of work. You'd need to, on every frame (or every view(_:didUpdate:for:) delegate callback):
Save off the updated position, rotation, and scale values for the anchor-based nodes at each end of the animation. You'll need to do this twice per didUpdate callback, because you'll get one callback for each node.
Work out position, rotation, and scale values for the node being animated, by interpolating between the two endpoint values based on the current time.
Set the new attributes on the node. (Or maybe animate it to those attributes over a very short duration, so it doesn't jump too much in one frame?)
That's kind of a lot of work to shoehorn a fake 3D illusion into a 2D graphics toolkit — hence my comments about SpriteKit not being a great first step into ARKit.
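If you did want to go that route, the interpolation step might look roughly like this (a sketch; the helper name and the normalized time parameter t are mine):
// Interpolate the ball between the two anchor-backed nodes' screen-space states.
func interpolateBall(_ ball: SKNode, from start: SKNode, to end: SKNode, t: CGFloat) {
    ball.position = CGPoint(x: start.position.x + (end.position.x - start.position.x) * t,
                            y: start.position.y + (end.position.y - start.position.y) * t)
    ball.zRotation = start.zRotation + (end.zRotation - start.zRotation) * t
    ball.setScale(start.xScale + (end.xScale - start.xScale) * t)
}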
If you want 3D positioning and animation for your AR overlays, it's a lot easier to use a 3D graphics toolkit. Here's a repeat of the previous example, but using SceneKit instead. Start with the ARKit/SceneKit Xcode template, take the spaceship out, and paste the same touchesBegan function from above into the ViewController. (Change the as ARSKView casts to as ARSCNView, too.)
Then, some quick code for placing 2D billboarded sprites, matching via SceneKit the behavior of the ARKit/SpriteKit template:
// in global scope
func makeBillboardNode(image: UIImage) -> SCNNode {
    let plane = SCNPlane(width: 0.1, height: 0.1)
    plane.firstMaterial!.diffuse.contents = image
    let node = SCNNode(geometry: plane)
    node.constraints = [SCNBillboardConstraint()]
    return node
}

// inside ViewController
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    // emoji to image based on https://stackoverflow.com/a/41021662/957768
    let billboard = makeBillboardNode(image: "⛹️".image())
    node.addChildNode(billboard)
}
Finally, adding the animation for the bouncing ball:
let ballNode = makeBillboardNode(image: "🏀".image())

func startBouncing(time: TimeInterval) {
    guard
        let sceneView = self.view as? ARSCNView,
        let first = anchors.first, let start = sceneView.node(for: first),
        let last = anchors.last, let end = sceneView.node(for: last)
        else { return }

    if ballNode.parent == nil {
        sceneView.scene.rootNode.addChildNode(ballNode)
    }

    let animation = CABasicAnimation(keyPath: #keyPath(SCNNode.transform))
    animation.fromValue = start.transform
    animation.toValue = end.transform
    animation.duration = time
    animation.autoreverses = true
    animation.repeatCount = .infinity
    ballNode.removeAllAnimations()
    ballNode.addAnimation(animation, forKey: nil)
}
This time the animation code is a lot shorter than the SpriteKit version.
Here's how it looks in action.
Because we're working in 3D to start with, we're actually animating between two 3D positions — unlike in the SpriteKit version, the animation stays where it's supposed to. (And without the extra work for directly interpolating and animating attributes.)