How to use vertical plane detection in ARKit? - swift

How can I insert an SCNScene onto a vertical plane in ARKit? I can't get the detection to work and place the object in a vertical position.
override func viewDidLoad() {
    super.viewDidLoad()
    // Set the view's delegate
    sceneView?.delegate = self
    // Show statistics such as fps and timing information
    sceneView?.showsStatistics = true
    // Create a new scene
    let scene = SCNScene(named: "art.scnassets/tabellone.scn")
    sceneView?.debugOptions = [SCNDebugOptions.showFeaturePoints]
    // Set the scene to the view
    sceneView?.scene = scene!
}

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    // Create a session configuration
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = .vertical
    configuration.isLightEstimationEnabled = true
    // Run the view's session
    sceneView?.session.run(configuration)
}

You need an appropriate vertical surface for tracking. A wall with a solid color (with no distinguishable features on it) is a very bad candidate (you can see such a vertical surface in the left image). The most robust surfaces for vertical plane detection are a well-lit brick wall, a wall with pictures on it, a wall with a distinguishable pattern, etc.
So the right image is quite a good example for tracking and vertical plane detection.
If you want to explore how to place a 3D object onto a detected plane, look at Apple's Handling 3D Interaction and UI Controls in AR sample project.
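In practice, a minimal sketch of placing your content on a detected vertical plane via the standard ARSCNViewDelegate callback might look like this (the scene name comes from your question; the child-node lookup and positioning are assumptions about your asset):

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    // React only to vertical planes
    guard let planeAnchor = anchor as? ARPlaneAnchor,
          planeAnchor.alignment == .vertical,
          let objectScene = SCNScene(named: "art.scnassets/tabellone.scn"),
          let objectNode = objectScene.rootNode.childNodes.first
    else { return }
    // Center the object on the plane; ARKit keeps `node` aligned
    // with the detected wall as tracking refines.
    objectNode.simdPosition = simd_float3(planeAnchor.center.x, 0, planeAnchor.center.z)
    node.addChildNode(objectNode)
}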
Hope this helps.

Related

SceneKit / ARKit updating a node every frame

I'm working with ARKit / SceneKit and I'm trying to have an arrow point to an arbitrary position I set in the world, but I'm having a bit of trouble. In my sceneView I have a scene set up to load in my arrow.
override func viewDidLoad() {
    super.viewDidLoad()
    // Set the view's delegate
    sceneView.delegate = self
    guard let arrowScene = SCNScene(named: "art.scnassets/arrowScene.scn") else {
        fatalError("Scene arrowScene.scn not found")
    }
    let worldAnchor = ARAnchor(name: "World Anchor", transform: simd_float4x4(1))
    sceneView.session.add(anchor: worldAnchor)
    sceneView.scene = arrowScene
}
I wish to then use SCNNode.look(at:) to point my arrow at the specified anchor; however, I am unsure how to get this to occur every frame. I know that I have access to special delegate methods provided by ARSCNView, such as renderer(_:willUpdate:for:), but I am unsure how to use these when the anchor is not changing position whereas the arrow is.
Thanks for the help!
You could do so using the renderer(_:updateAtTime:) delegate method (a sketch of that approach follows after the code below), but I strongly recommend using an SCNConstraint instead.
let lookAtConstraint = SCNLookAtConstraint(target: myLookAtNode)
lookAtConstraint.isGimbalLockEnabled = true // useful for cameras, probably also for your arrow

// In addition you can use a position constraint
let positionConstraint = SCNReplicatorConstraint(target: myLookAtNode)
positionConstraint.positionOffset = yourOffsetSCNVector3
positionConstraint.replicatesOrientation = false
positionConstraint.replicatesScale = false
positionConstraint.replicatesPosition = true

// Then add the constraints to your arrow node
arrowNode.constraints = [lookAtConstraint, positionConstraint]
This will automatically adjust your node for each rendered frame.
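For completeness, a minimal sketch of the per-frame alternative mentioned above (assuming arrowNode and myLookAtNode are the same nodes as in the constraint example):

// SCNSceneRendererDelegate callback, invoked once per rendered frame
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    // Re-orient the arrow toward the target's current world position
    arrowNode.look(at: myLookAtNode.worldPosition)
}

The constraint approach is usually preferable because SceneKit evaluates constraints at the right point in the render loop for you.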

Where is the .camera AnchorEntity located?

When adding a child to my AnchorEntity(.camera), it appears as if the child is spawning behind my camera (meaning I can only see the child when I turn around). I have also tried to add a mesh to my anchor directly, but unfortunately ARKit / RealityKit does not render the mesh when you are inside of it (which, because it's centered around the camera, is theoretically always the case; however, it could also be that it's always located behind the screen [where the user is] and I'm never able to see it).
Also, oddly enough the child entity does not move with the camera AnchorEntity despite setting the translation transform to (0,0,0).
My two questions are:
Is the .camera anchor actually located right where the physical iPad / camera is located or is it located further back (perhaps where the user would normally hold the iPad)?
How do you get a child entity of the AnchorEntity(.camera) to move as the iPad / camera moves in real space?
Answer to the first question
In the RealityKit and ARKit frameworks, ARCamera has a pivot point like other entities (nodes) have, and it's located at the point where the lens is attached to the camera body (at bayonet level). This pivot can tether AnchorEntity(.camera). In other words, the virtual camera and the real-world camera have that pivot point at approximately the same place.
So, if you attach RealityKit's AnchorEntity to the camera's pivot, you place it at the coordinates where the camera's bayonet is located. And this AnchorEntity(.camera) will be tracked automatically, without a need to implement the session(_:didUpdate:) method.
However, if you attach ARKit's ARAnchor to the camera's pivot, you have to implement the session(_:didUpdate:) method to constantly update the position and orientation of that anchor for every ARFrame.
Answer to the second question
If you want to constantly update the model's position in RealityKit at 60 fps (as the ARCamera moves and rotates), you need to use the following approach:
import ARKit
import RealityKit

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()
        let box = MeshResource.generateBox(size: 0.25)
        let material = SimpleMaterial(color: .systemPink, isMetallic: true)
        let boxEntity = ModelEntity(mesh: box, materials: [material])
        let cameraAnchor = AnchorEntity(.camera) // ARCamera anchor
        cameraAnchor.addChild(boxEntity)
        arView.scene.addAnchor(cameraAnchor)
        boxEntity.transform.translation = [0, 0, -0.5] // box offset 0.5 m in front of camera
    }
}
Or you can use ARKit's great old .currentFrame instance property in session(_:didUpdate:) delegate method:
extension ViewController: ARSessionDelegate {

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let transform = arView.session.currentFrame?.camera.transform
        else { return }
        let arkitAnchor = ARAnchor(transform: transform)
        arView.session.add(anchor: arkitAnchor)        // add to session
        let anchor = AnchorEntity(anchor: arkitAnchor)
        anchor.addChild(boxEntity)
        arView.scene.addAnchor(anchor)                 // add to scene
    }
}

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!
    var boxEntity = ModelEntity(...)

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.session.delegate = self                 // assign the session's delegate
    }
}
To find out how to save the ARCamera Pose over time, read the following post.

ARKit – Poster as a window to a virtual room

I am working on an iOS app using ARKit.
In real world, there is a poster on the wall. The poster is a fixed thing, so any needed preprocessing may be applied.
The goal is to make this poster a window into a virtual room, so that when the user approaches the poster, they can look "through" it at some virtual 3D environment (room). Of course, the user cannot go through the "window" and wander around in that 3D environment; they can only observe the virtual room by looking "through" the poster.
I know that it's possible to make this poster detectable by ARKit, and to play some visual effects around it, or even a movie on top of it.
But I did not find information how to turn it into a window into virtual 3D world.
Any ideas and links to sample projects are greatly appreciated.
Look at this video posted on the Augmented Images webpage (use a Chrome browser to watch the video).
It's easy to create that type of virtual cube. All you need is a 3D model of a simple cube primitive without a front polygon (in order to see its inner surface). You also need a plane with a square hole. Assign an out-of-the-box RealityKit occlusion material, or a hand-made SceneKit occlusion material, to this plane and it will hide all the outer walls of the cube behind it (look at the picture below).
In Autodesk Maya, an occlusion material is a Hold-Out option in Render Stats (for Viewport 2.0 only):
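SceneKit has no ready-made occlusion material, but the standard trick is a material that writes depth without color. A minimal sketch of just the material settings (maskingPlaneNode is an illustrative name for the node holding the hole-punched plane described above):

// A "hold-out" material: invisible itself, but hides whatever is behind it
let occlusionMaterial = SCNMaterial()
occlusionMaterial.colorBufferWriteMask = []   // draw no color...
occlusionMaterial.writesToDepthBuffer = true  // ...but still occlude

maskingPlaneNode.geometry?.firstMaterial = occlusionMaterial
maskingPlaneNode.renderingOrder = -10         // render the mask before the cube's walls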
When you're tracking your poster on a wall (with the detectionImages option activated), your app must recognize the picture and "load" the 3D cube and its masking plane with the occlusion shader. Since the ARImageAnchor on the poster and the pivot point of the 3D cube must meet, the cube's pivot point has to be located on the front face of the cube (at the same level as the wall's surface).
If you wish to download Apple's sample code containing the image detection experience, just click the blue button on the same detectionImages webpage.
Here is a short example of my code:
@IBOutlet var sceneView: ARSCNView!

override func viewDidLoad() {
    super.viewDidLoad()
    sceneView.delegate = self // for using renderer() methods of ARSCNViewDelegate
    sceneView.scene = SCNScene()
}

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    resetTrackingConfiguration()
}

func resetTrackingConfiguration() {
    guard let refImage = ARReferenceImage.referenceImages(inGroupNamed: "Poster",
                                                          bundle: nil)
    else { return }
    let config = ARWorldTrackingConfiguration()
    config.detectionImages = refImage
    config.maximumNumberOfTrackedImages = 1
    let options = [ARSession.RunOptions.removeExistingAnchors,
                   ARSession.RunOptions.resetTracking]
    sceneView.session.run(config, options: ARSession.RunOptions(options))
}
...and, of course, SceneKit's renderer() instance method:
func renderer(_ renderer: SCNSceneRenderer,
              didAdd node: SCNNode,
              for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor,
          let _ = imageAnchor.referenceImage.name
    else { return }
    anchorsArray.append(imageAnchor)
    if anchorsArray.first != nil {
        node.addChildNode(portalNode)
    }
}
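Note that portalNode and anchorsArray aren't defined in the snippet above (anchorsArray is assumed to be a stored property, e.g. var anchorsArray: [ARImageAnchor] = []). A minimal sketch of how portalNode might be assembled (the asset name portalRoom.scn is purely illustrative; the open-front cube and hole-punched masking plane would come from your modeling tool):

let portalNode = SCNNode()

// Hypothetical asset: a cube with no front polygon (its inner surfaces
// form the room) plus the masking plane carrying the occlusion material
if let portalScene = SCNScene(named: "art.scnassets/portalRoom.scn") {
    for child in portalScene.rootNode.childNodes {
        portalNode.addChildNode(child)
    }
}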

Stereo ARSCNview to make VR and AR mix

I want to make a mix of virtual reality and augmented reality.
The goal is to have a stereo camera view (one for each eye).
I tried to put two ARSCNViews in a view controller, but it seems ARKit allows only one ARWorldTrackingSessionConfiguration at a time. How can I do that?
I tried to find a way to copy the graphical representation of one view into another view, but couldn't find one. Please help me find a solution.
I found this link; maybe it can enlighten us:
ARKit with multiple users
Here's a sample of my issue:
https://www.youtube.com/watch?v=d6LOqNnYm5s
PS: before you downvote my post, comment to say why!
The following code is basically what Hal said. I previously wrote a few lines on GitHub that might help you get started. (Simple code, no barrel distortion, no adjustment for the narrow FOV - yet.)
Essentially, we connect the same scene to the second ARSCNView (so both ARSCNViews see the same scene). There's no need to get ARWorldTrackingSessionConfiguration working with two ARSCNViews. Then we offset its pointOfView so it's positioned as the second eye.
https://github.com/hanleyweng/iOS-Stereoscopic-ARKit-Template
The ARSession documentation says that ARSession is a shared object.
Every AR experience built with ARKit requires a single ARSession object. If you use an ARSCNView or ARSKView object to easily build the visual part of your AR experience, the view object includes an ARSession instance. If you build your own renderer for AR content, you'll need to instantiate and maintain an ARSession object yourself.
So there's a clue in that last sentence. Instead of two ARSCNView instances, use SCNView and share the single ARSession between them.
I expect this is a common use case, so it's worth filing a Radar to request stereo support.
How to do it right now?
The (singleton) session has only one delegate. You need two different delegate instances, one for each view. You could solve that with an object that forwards the delegate messages to each view; solvable, but a bit of extra work (a sketch follows below).
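A minimal sketch of such a forwarding object (the class name is mine, and you'd forward whichever ARSessionDelegate methods your views actually need; a production version would likely hold its targets weakly):

// Fans ARSessionDelegate callbacks out to several downstream delegates
class SessionDelegateSplitter: NSObject, ARSessionDelegate {
    private let targets: [ARSessionDelegate]

    init(targets: [ARSessionDelegate]) {
        self.targets = targets
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Optional @objc requirement, hence the optional call syntax
        targets.forEach { $0.session?(session, didUpdate: frame) }
    }

    // ...forward the other delegate methods you care about in the same way
}

You'd set one splitter as the session's delegate and let it drive both per-eye views.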
There's also the problem of needing two slightly different camera locations, one for each eye, for stereo vision. ARKit uses one camera, placed at the iOS device's location, so you'll have to fuzz that.
Then you have to deal with the different barrel distortions for each eye.
This, for me, adds up to writing my own custom object to intercept ARKit delegate messages, convert the coordinates to what I'd see from two different cameras, and manage the two distinct SCNViews (not ARSCNViews). Or perhaps use one ARSCNView (one eye), intercept its frame updates, and pass those frames on to a SCNView (the other eye).
File the Radar, post the number, and I'll dupe it.
To accomplish this, please use the following code:
import UIKit
import SceneKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet weak var sceneView: ARSCNView!
    @IBOutlet weak var sceneView2: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
        sceneView.showsStatistics = true
        let scene = SCNScene(named: "art.scnassets/ship.scn")!
        sceneView.scene = scene
        sceneView.isPlaying = true

        // SceneView2 setup
        sceneView2.scene = scene
        sceneView2.showsStatistics = sceneView.showsStatistics
        // Now sceneView2 starts receiving updates
        sceneView2.isPlaying = true
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }
}
And don't forget to activate the .isPlaying instance property for both ARSCNViews.
Objective-C version of Han's GitHub code, with the scene views created programmatically and the y and z positions not updated - all credit to Han:
- (void)setup {
    // left
    leftSceneView = [ARSCNView new];
    leftSceneView.frame = CGRectMake(0, 0, w, h/2);
    leftSceneView.delegate = self;
    leftSceneView.autoenablesDefaultLighting = true;
    [self.view addSubview:leftSceneView];

    // right
    rightSceneView = [ARSCNView new];
    rightSceneView.frame = CGRectMake(0, h/2, w, h/2);
    rightSceneView.playing = true;
    rightSceneView.autoenablesDefaultLighting = true;
    [self.view addSubview:rightSceneView];

    // scene
    SCNScene *scene = [SCNScene new];
    leftSceneView.scene = scene;
    rightSceneView.scene = scene;

    // tracking
    ARWorldTrackingConfiguration *configuration = [ARWorldTrackingConfiguration new];
    configuration.planeDetection = ARPlaneDetectionHorizontal;
    [leftSceneView.session runWithConfiguration:configuration];
}
- (void)renderer:(id<SCNSceneRenderer>)renderer updateAtTime:(NSTimeInterval)time {
    dispatch_async(dispatch_get_main_queue(), ^{
        // update right eye
        SCNNode *pov = self->leftSceneView.pointOfView.clone;
        SCNQuaternion orientation = pov.orientation;
        GLKQuaternion orientationQuaternion = GLKQuaternionMake(orientation.x, orientation.y, orientation.z, orientation.w);
        GLKVector3 eyePosition = GLKVector3Make(1, 0, 0);
        GLKVector3 rotatedEyePosition = GLKQuaternionRotateVector3(orientationQuaternion, eyePosition);
        SCNVector3 rotatedEyePositionSCNV = SCNVector3Make(rotatedEyePosition.x, rotatedEyePosition.y, rotatedEyePosition.z);
        float mag = 0.066f; // approximate interpupillary distance, in meters
        float rotatedX = pov.position.x + rotatedEyePositionSCNV.x * mag;
        float rotatedY = pov.position.y; // + rotatedEyePositionSCNV.y * mag;
        float rotatedZ = pov.position.z; // + rotatedEyePositionSCNV.z * mag;
        [pov setPosition:SCNVector3Make(rotatedX, rotatedY, rotatedZ)];
        self->rightSceneView.pointOfView = pov;
    });
}

How do I programmatically move an ARAnchor?

I'm trying out the new ARKit to replace another similar solution I have. It's pretty great! But I can't seem to figure out how to move an ARAnchor programmatically. I want to slowly move the anchor to the left of the user.
Creating the anchor to be 2 meters in front of the user:
var translation = matrix_identity_float4x4
translation.columns.3.z = -2.0
let transform = simd_mul(currentFrame.camera.transform, translation)
let anchor = ARAnchor(transform: transform)
sceneView.session.add(anchor: anchor)
later, moving the object to the left/right of the user (x-axis)...
anchor.transform.columns.3.x = anchor.transform.columns.3.x + 0.1
repeated every 50 milliseconds (or whatever).
The above does not work because transform is a get-only property.
I need a way to change the position of an AR object in space relative to the user in a way that keeps the AR experience intact - meaning that if you move your device, the AR object will be moving too, but it won't be "stuck" to the camera as if simply painted on; it moves the way you would see a person move while you were walking by - they are moving and you are moving, and it looks natural.
Please note the scope of this question relates only to how to move an object in space in relation to the user (ARAnchor), not in relation to a plane (ARPlaneAnchor) or to another detected surface (ARHitTestResult).
Thanks!
You don't need to move anchors. (hand wave) That's not the API you're looking for...
Adding ARAnchor objects to a session is effectively about "labeling" a point in real-world space so that you can refer to it later. The point (1,1,1) (for example) is always the point (1,1,1) — you can't move it someplace else because then it's not the point (1,1,1) anymore.
To make a 2D analogy: anchors are reference points sort of like the bounds of a view. The system (or another piece of your code) tells the view where its boundaries are, and the view draws its content relative to those boundaries. Anchors in AR give you reference points you can use for drawing content in 3D.
What you're asking is really about moving (and animating the movement of) virtual content between two points. And ARKit itself really isn't about displaying or animating virtual content — there are plenty of great graphics engines out there, so ARKit doesn't need to reinvent that wheel. What ARKit does is provide a real-world frame of reference for you to display or animate content using an existing graphics technology like SceneKit or SpriteKit (or Unity or Unreal, or a custom engine built with Metal or GL).
Since you mentioned trying to do this with SpriteKit... beware, it gets messy. SpriteKit is a 2D engine, and while ARSKView provides some ways to shoehorn a third dimension in there, those ways have their limits.
ARSKView automatically updates the xScale, yScale, and zRotation of each sprite associated with an ARAnchor, providing the illusion of 3D perspective. But that applies only to nodes attached to anchors, and as noted above, anchors are static.
You can, however, add other nodes to your scene, and use those same properties to make those nodes match the ARSKView-managed nodes. Here's some code you can add/replace in the ARKit/SpriteKit Xcode template project to do that. We'll start with some basic logic to run a bouncing animation on the third tap (after using the first two taps to place anchors).
var anchors: [ARAnchor] = []

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    // Start bouncing on touch after placing 2 anchors (don't allow more)
    if anchors.count > 1 {
        startBouncing(time: 1)
        return
    }
    // Create anchor using the camera's current position
    guard let sceneView = self.view as? ARSKView else { return }
    if let currentFrame = sceneView.session.currentFrame {
        // Create a transform with a translation of 30 cm in front of the camera
        var translation = matrix_identity_float4x4
        translation.columns.3.z = -0.3
        let transform = simd_mul(currentFrame.camera.transform, translation)
        // Add a new anchor to the session
        let anchor = ARAnchor(transform: transform)
        sceneView.session.add(anchor: anchor)
        anchors.append(anchor)
    }
}
Then, some SpriteKit fun for making that animation happen:
var ballNode: SKLabelNode = {
    let labelNode = SKLabelNode(text: "🏀")
    labelNode.horizontalAlignmentMode = .center
    labelNode.verticalAlignmentMode = .center
    return labelNode
}()

func startBouncing(time: TimeInterval) {
    guard
        let sceneView = self.view as? ARSKView,
        let first = anchors.first, let start = sceneView.node(for: first),
        let last = anchors.last, let end = sceneView.node(for: last)
    else { return }

    if ballNode.parent == nil {
        addChild(ballNode)
    }
    ballNode.setScale(start.xScale)
    ballNode.zRotation = start.zRotation
    ballNode.position = start.position

    let scale = SKAction.scale(to: end.xScale, duration: time)
    let rotate = SKAction.rotate(toAngle: end.zRotation, duration: time)
    let move = SKAction.move(to: end.position, duration: time)

    let scaleBack = SKAction.scale(to: start.xScale, duration: time)
    let rotateBack = SKAction.rotate(toAngle: start.zRotation, duration: time)
    let moveBack = SKAction.move(to: start.position, duration: time)

    let action = SKAction.repeatForever(.sequence([
        .group([scale, rotate, move]),
        .group([scaleBack, rotateBack, moveBack])
    ]))
    ballNode.removeAllActions()
    ballNode.run(action)
}
Here's a video so you can see this code in action. You'll notice that the illusion only works as long as you don't move the camera — not so great for AR. When using SKAction, we can't adjust the start/end states of the animation while animating, so the ball keeps bouncing back and forth between its original (screen-space) positions/rotations/scales.
You could do better by animating the ball directly, but it's a lot of work. On every frame (or every view(_:didUpdate:for:) delegate callback), you'd need to (see the sketch after this list):
1. Save off the updated position, rotation, and scale values for the anchor-based nodes at each end of the animation. You'll need to do this twice per didUpdate callback, because you'll get one callback for each node.
2. Work out position, rotation, and scale values for the node being animated, by interpolating between the two endpoint values based on the current time.
3. Set the new attributes on the node. (Or maybe animate it to those attributes over a very short duration, so it doesn't jump too much in one frame?)
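A minimal sketch of the interpolation in step 2 (the function and parameter names are my own, purely illustrative):

// Linearly interpolate the ball between the two anchor-managed nodes;
// progress runs from 0 to 1 over the animation's duration
func interpolateBall(from start: SKNode, to end: SKNode, progress: CGFloat) {
    ballNode.position = CGPoint(
        x: start.position.x + (end.position.x - start.position.x) * progress,
        y: start.position.y + (end.position.y - start.position.y) * progress)
    ballNode.setScale(start.xScale + (end.xScale - start.xScale) * progress)
    ballNode.zRotation = start.zRotation + (end.zRotation - start.zRotation) * progress
}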
That's kind of a lot of work to shoehorn a fake 3D illusion into a 2D graphics toolkit — hence my comments about SpriteKit not being a great first step into ARKit.
If you want 3D positioning and animation for your AR overlays, it's a lot easier to use a 3D graphics toolkit. Here's a repeat of the previous example, but using SceneKit instead. Start with the ARKit/SceneKit Xcode template, take the spaceship out, and paste the same touchesBegan function from above into the ViewController. (Change the as ARSKView casts to as ARSCNView, too.)
Then, some quick code for placing 2D billboarded sprites, matching via SceneKit the behavior of the ARKit/SpriteKit template:
// in global scope
func makeBillboardNode(image: UIImage) -> SCNNode {
    let plane = SCNPlane(width: 0.1, height: 0.1)
    plane.firstMaterial!.diffuse.contents = image
    let node = SCNNode(geometry: plane)
    node.constraints = [SCNBillboardConstraint()]
    return node
}

// inside ViewController
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    // emoji to image based on https://stackoverflow.com/a/41021662/957768
    let billboard = makeBillboardNode(image: "⛹ī¸".image())
    node.addChildNode(billboard)
}
Finally, adding the animation for the bouncing ball:
let ballNode = makeBillboardNode(image: "🏀".image())

func startBouncing(time: TimeInterval) {
    guard
        let sceneView = self.view as? ARSCNView,
        let first = anchors.first, let start = sceneView.node(for: first),
        let last = anchors.last, let end = sceneView.node(for: last)
    else { return }

    if ballNode.parent == nil {
        sceneView.scene.rootNode.addChildNode(ballNode)
    }

    let animation = CABasicAnimation(keyPath: #keyPath(SCNNode.transform))
    animation.fromValue = start.transform
    animation.toValue = end.transform
    animation.duration = time
    animation.autoreverses = true
    animation.repeatCount = .infinity
    ballNode.removeAllAnimations()
    ballNode.addAnimation(animation, forKey: nil)
}
This time the animation code is a lot shorter than the SpriteKit version.
Here's how it looks in action.
Because we're working in 3D to start with, we're actually animating between two 3D positions — unlike in the SpriteKit version, the animation stays where it's supposed to. (And without the extra work for directly interpolating and animating attributes.)