ARKit – Get current position of ARCamera in a scene - swift

I'm in the process of learning both ARKit and SceneKit concurrently, and it's been a bit of a challenge.
With an ARWorldTrackingSessionConfiguration session created, I was wondering if anyone knows a way to get the position of the user's 'camera' in the scene session. The idea is that I want to animate an object towards the user's current position.
let reaperScene = SCNScene(named: "reaper.dae")!
let reaperNode = reaperScene.rootNode.childNode(withName: "reaper", recursively: true)!
reaperNode.position = SCNVector3Make(0, 0, -1)
let scene = SCNScene()
scene.rootNode.addChildNode(reaperNode)
// some unknown amount of time later
let currentCameraPosition = sceneView.pointOfView?.position
let moveAction = SCNAction.move(to: currentCameraPosition!, duration: 1.0)
reaperNode.runAction(moveAction)
However, it seems that currentCameraPosition is always [0, 0, 0], even though I am moving the camera around. Any idea what I'm doing wrong? Eventually the idea is that I would rotate the object around an invisible sphere until it is in front of the camera and then animate it in, doing something similar to this: Rotate SCNCamera node looking at an object around an imaginary sphere (that way the user sees the object animate towards them).
Thanks for any help.

Set yourself as the ARSession.delegate. Then you can implement session(_:didUpdate:), which will give you an ARFrame for every frame processed in your session. The frame has a camera property that holds information on the camera's transform, rotation, and position.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // Do something with the new transform
    let currentTransform = frame.camera.transform
    doSomething(with: currentTransform)
}
As rickster pointed out, you can always get the current ARFrame, and the camera position through it, by calling session.currentFrame.
This is useful if you need the position just once, e.g. to move a node to where the camera has been, but you should use the delegate method if you want to get updates on the camera's position.
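For example, a minimal one-shot sketch (assuming an ARSCNView named sceneView and the reaperNode from the question):
if let transform = sceneView.session.currentFrame?.camera.transform {
    // The fourth column of the camera transform is its position in world space
    let cameraPosition = SCNVector3(transform.columns.3.x,
                                    transform.columns.3.y,
                                    transform.columns.3.z)
    reaperNode.runAction(SCNAction.move(to: cameraPosition, duration: 1.0))
}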

I know this has already been solved, but I have a neat little solution for it.
I would prefer adding a renderer delegate method; it's a method in ARSCNViewDelegate:
func renderer(_ renderer: SCNSceneRenderer, willRenderScene scene: SCNScene, atTime time: TimeInterval) {
    guard let pointOfView = sceneView.pointOfView else { return }
    let transform = pointOfView.transform
    let orientation = SCNVector3(-transform.m31, -transform.m32, transform.m33) // orientation vector derived from the transform's third column
    let location = SCNVector3(transform.m41, transform.m42, transform.m43)      // camera position (the transform's fourth column)
    let currentPositionOfCamera = orientation + location                        // camera position offset along the orientation vector
    print(currentPositionOfCamera)
}
Of course, you can't add two SCNVector3 values together out of the box, so you need to add the following operator function outside the class:
func + (lhv: SCNVector3, rhv: SCNVector3) -> SCNVector3 {
    return SCNVector3(lhv.x + rhv.x, lhv.y + rhv.y, lhv.z + rhv.z)
}

ARKit + SceneKit
For your convenience, you can create a ViewController extension with an instance method session(_:didUpdate:), where the updates occur:
import ARKit
import SceneKit

extension ViewController: ARSessionDelegate {

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let transform = frame.camera.transform
        let position = transform.columns.3
        print(position.x, position.y, position.z)    // UPDATING
    }
}

class ViewController: UIViewController {

    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        sceneView.session.delegate = self             // ARSESSION DELEGATE
        let config = ARWorldTrackingConfiguration()
        sceneView.session.run(config)
    }
}
RealityKit
In RealityKit, the ARView object exposes the camera's transform as well:
import RealityKit
import UIKit
import Combine

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!
    var subs: [AnyCancellable] = []

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.scene.subscribe(to: SceneEvents.Update.self) { _ in
            let camTransform = self.arView.cameraTransform.matrix
            print(camTransform)    // UPDATING
        }.store(in: &subs)
    }
}

Related

Attach sphere at the center of the device screen when moving

I am trying to attach a sphere at the center of the device screen, and as I move the device around, the sphere should stay in the centre of the screen (like a crosshair).
I have attached a sphere entity and added it to sphere_anchor like this in the makeUIView function:
sphere_anchor.addChild(modelEntity)
But as I move my device, the sphere just stays in the initial frame where the entity was attached. Hoping someone could point me to the correct way of doing this.
// Implement the ARSession didUpdate delegate method
public func session(_ session: ARSession, didUpdate frame: ARFrame) {
    let transform = frame.camera.transform
    if self.scene.findEntity(named: "sphere") != nil {
        let position = simd_make_float3(transform.columns.3)
        // print(position)
        sphere_anchor.position = position
        sphere_anchor.orientation = Transform(matrix: transform).rotation
    }
}
Try AnchorEntity(.camera). If you use it, there's no need for the session(_:didUpdate:) instance method, because RealityKit's anchor automatically tracks the ARCamera position.
@IBOutlet var arView: ARView!

override func viewDidLoad() {
    super.viewDidLoad()
    let mesh = MeshResource.generateSphere(radius: 0.1)
    let sphere = ModelEntity(mesh: mesh)
    let anchor = AnchorEntity(.camera)
    sphere.setParent(anchor)
    arView.scene.addAnchor(anchor)
    sphere.transform.translation.z = -0.75
}
AnchorEntity(.camera) works only when a real iOS device is chosen in the Active Scheme.

ARKit – How to know if 3d object is in the center of a screen?

I place a 3D object in world space. After that I move the camera around randomly. Once I know the object has come inside the frustum (via the isNode(_:insideFrustumOf:) method), I need to know whether the object is in the center, top, or bottom of the camera view.
For a solution that's not a hack, you can use the projectPoint(_:) API.
It's probably better to work with pixel coordinates, because this method uses the actual camera's settings to determine where the object appears on screen.
let projectedPoint = sceneView.projectPoint(self.sphereNode.worldPosition)
let xOffset = projectedPoint.x - screenCenter.x
let yOffset = projectedPoint.y - screenCenter.y
if xOffset * xOffset + yOffset * yOffset < R_squared {
    // inside a disc of radius 'R' at the center of the screen
}
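Here is a fuller sketch of the same idea with the missing pieces filled in (the helper name, the node and view names, and the 100-point radius are assumptions, not part of the original answer):
func isNodeNearScreenCenter(_ node: SCNNode, in sceneView: ARSCNView, radius: CGFloat = 100) -> Bool {
    // Project the node's world position into screen coordinates
    let projected = sceneView.projectPoint(node.worldPosition)
    let screenCenter = CGPoint(x: sceneView.bounds.midX, y: sceneView.bounds.midY)
    let xOffset = CGFloat(projected.x) - screenCenter.x
    let yOffset = CGFloat(projected.y) - screenCenter.y
    // True when the projection falls inside a disc of the given radius around the screen center
    return xOffset * xOffset + yOffset * yOffset < radius * radius
}
Something like isNodeNearScreenCenter(sphereNode, in: sceneView) could then be called from renderer(_:updateAtTime:).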
Solution
To achieve this you need to use a trick. Create a new SCNCamera, make it a child of the default pointOfView camera, and set its field of view to approximately 10 degrees.
Then, inside the renderer(_:updateAtTime:) instance method, use the isNode(_:insideFrustumOf:) method.
Here's working code:
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate, SCNSceneRendererDelegate {

    @IBOutlet var sceneView: ARSCNView!
    @IBOutlet var label: UILabel!

    let cameraNode = SCNNode()
    let sphereNode = SCNNode()
    let config = ARWorldTrackingConfiguration()

    public func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        DispatchQueue.main.async {
            if self.sceneView.isNode(self.sphereNode, insideFrustumOf: self.cameraNode) {
                self.label.text = "In the center..."
            } else {
                self.label.text = "Out OF CENTER"
            }
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
        sceneView.allowsCameraControl = true
        let scene = SCNScene()
        sceneView.scene = scene

        cameraNode.camera = SCNCamera()
        cameraNode.camera?.fieldOfView = 10
        DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) {
            self.sceneView.pointOfView!.addChildNode(self.cameraNode)
        }

        sphereNode.geometry = SCNSphere(radius: 0.05)
        sphereNode.geometry?.firstMaterial?.diffuse.contents = UIColor.red
        sphereNode.position.z = -1.0
        sceneView.scene.rootNode.addChildNode(sphereNode)

        sceneView.session.run(config)
    }
}
Also, in this solution you can turn on orthographic projection for the child camera instead of a perspective one. It helps when a model is far from the camera.
cameraNode.camera?.usesOrthographicProjection = true
Here's how your screen might look:
Next steps
In the same way, you can add two additional SCNCameras, place them above and below the central SCNCamera, and test your object with two extra isNode(_:insideFrustumOf:) calls, as sketched below.
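A rough sketch of that extension (the node names and the 0.1 m offsets are assumptions, picked only to illustrate the idea):
let topCameraNode = SCNNode()
topCameraNode.camera = SCNCamera()
topCameraNode.camera?.fieldOfView = 10
topCameraNode.position.y = 0.1        // narrow frustum placed above the central one

let bottomCameraNode = SCNNode()
bottomCameraNode.camera = SCNCamera()
bottomCameraNode.camera?.fieldOfView = 10
bottomCameraNode.position.y = -0.1    // narrow frustum placed below the central one

cameraNode.addChildNode(topCameraNode)
cameraNode.addChildNode(bottomCameraNode)

// Then, inside renderer(_:updateAtTime:), test each frustum separately:
// sceneView.isNode(sphereNode, insideFrustumOf: topCameraNode)       // object is inside the upper frustum
// sceneView.isNode(sphereNode, insideFrustumOf: bottomCameraNode)    // object is inside the lower frustum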
I solved the problem another way:
let results = self.sceneView.hitTest(screenCenter!, options: [SCNHitTestOption.rootNode: parentnode])
where parentnode is the parent of the target node, because I have multiple nodes.
func nodeInCenter() -> SCNNode? {
    // Compare the screen-space projections of the camera and the node
    let cameraPoint = sceneView.projectPoint(sceneView.pointOfView!.worldPosition)
    let spherePoint = sceneView.projectPoint(sphereNode.worldPosition)
    let dx = cameraPoint.x - spherePoint.x
    let dy = cameraPoint.y - spherePoint.y
    // The node counts as centered when the squared offset on each axis is below 9 (within 3 points)
    if dx * dx < 9 && dy * dy < 9 {
        return sphereNode
    }
    return nil
}

SceneKit: z value of tap location?

I am using SceneKit in my project and would like to add a red sphere (as a marker) at the location that the user taps on a 3D model of a human body (see picture below). With the code I currently have, the sphere is added in the correct position; however, it is not added on top of the human body, but rather extremely close to the camera (the z value is off). How can I change the z value of the red sphere so that it is added on top of the human body rather than in front of the camera? Thank you so much :)
import UIKit
import QuartzCore
import SceneKit

class GameViewController: UIViewController {

    var selectedNode: SCNNode!
    var markerSphereNode: SCNNode!
    var bodyNode: SCNNode!

    @IBOutlet weak var sceneView: SCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        let scene = SCNScene(named: "art.scnassets/femaleBodySceneKit Scene.scn")
        markerSphereNode = scene?.rootNode.childNode(withName: "markerSphere", recursively: true)
        bodyNode = scene?.rootNode.childNode(withName: "Body_M_GeoRndr", recursively: true)
        sceneView.scene = scene
        sceneView.allowsCameraControl = true
        _ = sceneView.pointOfView
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        let touch = touches.first!
        let touchPoint = touch.location(in: sceneView)
        if sceneView.hitTest(touch.location(in: sceneView), options: nil).first != nil {
            markerSphereNode.position = sceneView.unprojectPoint(
                SCNVector3(x: Float(touchPoint.x),
                           y: Float(touchPoint.y),
                           z: 0.56182814))
        }
    }
}
How the red sphere appears when you tap on a location on the body.
How far it is from the body when you rotate the camera (it is in the background if you look closely).
How the red dot SHOULD appear.
SCNHitTestResult already provides you with localCoordinates and worldCoordinates. There's no need to use the unprojectPoint method.
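A minimal sketch of that approach in the touch handler (using the question's sceneView and markerSphereNode):
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first else { return }
    let touchPoint = touch.location(in: sceneView)
    if let hit = sceneView.hitTest(touchPoint, options: nil).first {
        // worldCoordinates is the tapped point on the model's surface, already in scene space
        markerSphereNode.position = hit.worldCoordinates
    }
}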

Can I do ARKit "Continuous Image Tracking" in a World Tracking Configuration with RealityKit?

UPDATE: My premise that "continuous image tracking" is not possible out of the box with RealityKit ARViews was incorrect. All I needed to do was correctly create the AnchorEntity for the continuously tracked reference image.
The anchor entity needs to be created using the init(anchor: ARAnchor) initializer. (The init(world: SIMD3<Float>) initializer is correct for anchors stuck to the real world, but not ones that should track the reference image.)
Using ARKit and RealityKit with an ARWorldTrackingConfiguration, I am trying to do "continuous image tracking" (where the reference image is tracked each frame, and virtual objects can be anchored to it, appearing to be attached to and move with the reference image). Because reference images are only recognized once in world tracking (as opposed to ARImageTrackingConfiguration, where reference images are continuously tracked as long as they are in frame), this is not possible out of the box.
To get the same results in a world tracking configuration, I am anchoring virtual objects to the reference image in the session(_:didAdd:) delegate method, and using the session(_:didUpdate:) delegate method as an opportunity to remove the ARImageAnchor after each time it is identified. This causes the reference image to be re-recognized over and over, allowing virtual objects to be anchored to the image and appear to track it frame-to-frame.
In the example below, I am placing two ball markers to track the position of the reference image. First marker is placed only once, at the location where the reference image is initially detected. The other marker is re-positioned each time the reference image is re-detected, appearing to follow it.
This works. Virtual content tracks the reference image in the ARWorldTrackingConfiguration the same way it would in an image tracking config. But while the "animation" in ARImageTrackingConfiguration is very smooth, the animation in world tracking is much less smooth, more jumpy, as if it was running at 10 or 15 frames per second. (Actual FPS as reported by .showStatistics stays near 60 FPS in both configurations.)
I assume the difference in smoothness results from the time it takes ARKit to do the work of repeatedly re-recognizing and removing the reference image anchor on each didAdd/didUpdate cycle.
I would like to know if there is a better technique to get "continuous image tracking" in an ARWorldTrackingConfiguration, and/or if there is any way I can improve the code in the delegate methods to achieve this effect.
import ARKit
import RealityKit

class ViewController: UIViewController, ARSessionDelegate {

    @IBOutlet var arView: ARView!

    // originalImageAnchor is used to visualize the first-detected location of reference image
    // currentImageAnchor should be continuously updated to match current position of ref image
    var originalImageAnchor: AnchorEntity!
    var currentImageAnchor: AnchorEntity!

    let ballRadius: Float = 0.02

    override func viewDidLoad() {
        super.viewDidLoad()
        guard let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
            bundle: nil) else { fatalError("Missing expected asset catalog resources.") }

        arView.session.delegate = self
        arView.automaticallyConfigureSession = false
        arView.debugOptions = [.showStatistics]
        arView.renderOptions = [.disableCameraGrain, .disableHDR, .disableMotionBlur,
                                .disableDepthOfField, .disableFaceOcclusions, .disablePersonOcclusion,
                                .disableGroundingShadows, .disableAREnvironmentLighting]

        let configuration = ARWorldTrackingConfiguration()
        configuration.detectionImages = referenceImages
        configuration.maximumNumberOfTrackedImages = 1 // there is one ref image named "coaster_rb"
        arView.session.run(configuration)
    }

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        guard let imageAnchor = anchors[0] as? ARImageAnchor else { return }

        // Reference image detected. This will happen multiple times because
        // we delete ARImageAnchor in session(_:didUpdate:)
        if let imageName = imageAnchor.name, imageName == "coaster_rb" {

            // If originalImageAnchor is nil, create an anchor and
            // add a marker at initial position of reference image.
            if originalImageAnchor == nil {
                originalImageAnchor = AnchorEntity(world: imageAnchor.transform)
                let originalImageMarker = generateBallMarker(radius: ballRadius, color: .systemPink)
                originalImageMarker.position.y = ballRadius + (ballRadius * 2)
                originalImageAnchor.addChild(originalImageMarker)
                arView.scene.addAnchor(originalImageAnchor)
            }

            // If currentImageAnchor is nil, add an anchor and marker at reference image position.
            // If currentImageAnchor has already been added, adjust its position to match ref image.
            if currentImageAnchor == nil {
                currentImageAnchor = AnchorEntity(world: imageAnchor.transform)
                let currentImageMarker = generateBallMarker(radius: ballRadius, color: .systemTeal)
                currentImageMarker.position.y = ballRadius
                currentImageAnchor.addChild(currentImageMarker)
                arView.scene.addAnchor(currentImageAnchor)
            } else {
                currentImageAnchor.setTransformMatrix(imageAnchor.transform, relativeTo: nil)
            }
        }
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let imageAnchor = anchors[0] as? ARImageAnchor else { return }

        // Delete reference image anchor to allow for ongoing tracking as it moves
        if let imageName = imageAnchor.name, imageName == "coaster_rb" {
            arView.session.remove(anchor: anchors[0])
        }
    }

    func generateBallMarker(radius: Float, color: UIColor) -> ModelEntity {
        let ball = ModelEntity(mesh: .generateSphere(radius: radius),
                               materials: [SimpleMaterial(color: color, isMetallic: false)])
        return ball
    }
}
Continuous image tracking does work out of the box with RealityKit ARViews in world tracking configurations. A mistake in my original code led me to think otherwise.
Incorrect anchor entity initialization (for what I was trying to accomplish):
currentImageAnchor = AnchorEntity(world: imageAnchor.transform)
Since I wanted to track the ARImageAnchor assigned to the matched reference image, I should have done it like this:
currentImageAnchor = AnchorEntity(anchor: imageAnchor)
The corrected example below places one virtual marker that is fixed to the reference image's initial position, and another that smoothly tracks the reference image in a world tracking configuration:
import ARKit
import RealityKit

class ViewController: UIViewController, ARSessionDelegate {

    @IBOutlet var arView: ARView!

    let ballRadius: Float = 0.02

    override func viewDidLoad() {
        super.viewDidLoad()
        guard let referenceImages = ARReferenceImage.referenceImages(
            inGroupNamed: "AR Resources", bundle: nil) else {
            fatalError("Missing expected asset catalog resources.")
        }

        arView.session.delegate = self
        arView.automaticallyConfigureSession = false
        arView.debugOptions = [.showStatistics]
        arView.renderOptions = [.disableCameraGrain, .disableHDR,
                                .disableMotionBlur, .disableDepthOfField,
                                .disableFaceOcclusions, .disablePersonOcclusion,
                                .disableGroundingShadows, .disableAREnvironmentLighting]

        let configuration = ARWorldTrackingConfiguration()
        configuration.detectionImages = referenceImages
        configuration.maximumNumberOfTrackedImages = 1
        arView.session.run(configuration)
    }

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        guard let imageAnchor = anchors[0] as? ARImageAnchor else { return }

        if let imageName = imageAnchor.name, imageName == "target_image" {

            // AnchorEntity(world: imageAnchor.transform) results in anchoring
            // virtual content to the real world. Content anchored like this
            // will remain in position even if the reference image moves.
            let originalImageAnchor = AnchorEntity(world: imageAnchor.transform)
            let originalImageMarker = makeBall(radius: ballRadius, color: .systemPink)
            originalImageMarker.position.y = ballRadius + (ballRadius * 2)
            originalImageAnchor.addChild(originalImageMarker)
            arView.scene.addAnchor(originalImageAnchor)

            // AnchorEntity(anchor: imageAnchor) results in anchoring
            // virtual content to the ARImageAnchor that is attached to the
            // reference image. Content anchored like this will appear
            // stuck to the reference image.
            let currentImageAnchor = AnchorEntity(anchor: imageAnchor)
            let currentImageMarker = makeBall(radius: ballRadius, color: .systemTeal)
            currentImageMarker.position.y = ballRadius
            currentImageAnchor.addChild(currentImageMarker)
            arView.scene.addAnchor(currentImageAnchor)
        }
    }

    func makeBall(radius: Float, color: UIColor) -> ModelEntity {
        let ball = ModelEntity(mesh: .generateSphere(radius: radius),
                               materials: [SimpleMaterial(color: color, isMetallic: false)])
        return ball
    }
}

Where is the .camera AnchorEntity located?

When adding a child to my AnchorEntity(.camera), it appears as if the child is spawning behind my camera (meaning I can only see the child when I turn around). I have also tried to add a mesh to my anchor directly, but unfortunately ARKit / RealityKit does not render the mesh when you are inside of it (which, because it's centered around the camera, is theoretically always the case; however, it could also be that it's always located behind the screen [where the user is] and I'm never able to see it).
Also, oddly enough, the child entity does not move with the camera AnchorEntity despite setting the translation transform to (0, 0, 0).
My two questions are:
Is the .camera anchor actually located right where the physical iPad / camera is located or is it located further back (perhaps where the user would normally hold the iPad)?
How do you get a child entity of the AnchorEntity(.camera) to move as the iPad / camera moves in real space?
Answer to the first question
In the RealityKit and ARKit frameworks, the ARCamera has a pivot point like other entities (nodes) do, and it's located at the point where the lens is attached to the camera body (at the bayonet level). AnchorEntity(.camera) is tethered to this pivot. In other words, the virtual camera and the real-world camera have this pivot point at approximately the same place.
So, if you attach RealityKit's AnchorEntity to the camera's pivot, you place it at the coordinates where the camera's bayonet is located. This AnchorEntity(.camera) will be tracked automatically, without a need to implement the session(_:didUpdate:) method.
However, if you attach ARKit's ARAnchor to the camera's pivot, you have to implement the session(_:didUpdate:) method to constantly update the position and orientation of that anchor for every ARFrame.
Answer to the second question
If you want to constantly update a model's position in RealityKit at 60 fps (as the ARCamera moves and rotates), you need to use the following approach:
import ARKit
import RealityKit

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()
        let box = MeshResource.generateBox(size: 0.25)
        let material = SimpleMaterial(color: .systemPink, isMetallic: true)
        let boxEntity = ModelEntity(mesh: box, materials: [material])

        let cameraAnchor = AnchorEntity(.camera)       // ARCamera anchor
        cameraAnchor.addChild(boxEntity)
        arView.scene.addAnchor(cameraAnchor)

        boxEntity.transform.translation = [0, 0, -0.5] // Box offset 0.5 m
    }
}
Or you can use ARKit's good old currentFrame instance property in the session(_:didUpdate:) delegate method:
extension ViewController: ARSessionDelegate {

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let transform = arView.session.currentFrame?.camera.transform else { return }

        let arkitAnchor = ARAnchor(transform: transform)
        arView.session.add(anchor: arkitAnchor)        // add to session

        let anchor = AnchorEntity(anchor: arkitAnchor)
        anchor.addChild(boxEntity)
        arView.scene.addAnchor(anchor)                 // add to scene
    }
}

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!
    var boxEntity = ModelEntity(...)

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.session.delegate = self                 // Session's delegate
    }
}
To find out how to save the ARCamera Pose over time, read the following post.