I am trying to attach a sphere at the center of the device screen so that, as I move the device around, the sphere stays in the center of the screen (like a crosshair).
I have attached a sphere entity and added it to sphere_anchor like this in the makeUIView function:
sphere_anchor.addChild(modelEntity)
But as I move my device, the sphere just stays in the initial frame the entity was attached to. Hoping someone could point me to the correct way of doing this.
// Implement the ARSession didUpdate frame delegate method
public func session(_ session: ARSession, didUpdate frame: ARFrame) {
let transform = frame.camera.transform
if self.scene.findEntity(named: "sphere") != nil {
let position = simd_make_float3(transform.columns.3)
//print(position)
sphere_anchor.position = position
sphere_anchor.orientation = Transform(matrix: transform).rotation
}
}
Try AnchorEntity(.camera). If you use it, there's no need for the session(_:didUpdate:) instance method, because RealityKit's anchor automatically tracks the ARCamera position.
#IBOutlet var arView: ARView!
override func viewDidLoad() {
super.viewDidLoad()
let mesh = MeshResource.generateSphere(radius: 0.1)
let sphere = ModelEntity(mesh: mesh)
let anchor = AnchorEntity(.camera)
sphere.setParent(anchor)
arView.scene.addAnchor(anchor)
sphere.transform.translation.z = -0.75
}
AnchorEntity(.camera) works only when a real iOS device is chosen in the Active Scheme.
I am using SceneKit in my project and would like to add a red sphere (as a marker) at the location the user taps on a 3D model of a human body (see picture below). With the code I currently have, the sphere is added in the correct position; however, it is not added on top of the human body but rather extremely close to the camera (the z value is off). How can I change the z value of the red sphere so that it is added on top of the human body rather than in front of the camera? Thank you so much :)
import UIKit
import QuartzCore
import SceneKit
class GameViewController: UIViewController {
var selectedNode: SCNNode!
var markerSphereNode: SCNNode!
var bodyNode: SCNNode!
#IBOutlet weak var sceneView: SCNView!
override func viewDidLoad() {
super.viewDidLoad()
let scene = SCNScene(named: "art.scnassets/femaleBodySceneKit Scene.scn")
markerSphereNode = scene?.rootNode.childNode(withName: "markerSphere", recursively: true)
bodyNode = scene?.rootNode.childNode(withName: "Body_M_GeoRndr", recursively: true)
sceneView.scene = scene
sceneView.allowsCameraControl = true
_ = sceneView.pointOfView
}
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
let touch = touches.first!
let touchPoint = touch.location(in: sceneView)
if sceneView.hitTest(touch.location(in: sceneView), options: nil).first != nil {
markerSphereNode.position = sceneView.unprojectPoint(
SCNVector3(x: Float(touchPoint.x),
y: Float(touchPoint.y),
z: 0.56182814))
}
}
}
[Image: how the red sphere appears when you tap on a location on the body]
[Image: how far it is from the body when you rotate the camera (it is in the background if you look closely)]
[Image: how the red dot SHOULD appear]
SCNHitTestResult already provides you with localCoordinates and worldCoordinates. There's no need to use the unprojectPoint method.
UPDATE: My premise that "continuous image tracking" is not possible out of the box with RealityKit ARViews was incorrect. All I needed to do was correctly create the AnchorEntity for the continuously tracked reference image.
The anchor entity needs to be created using the init(anchor: ARAnchor) initializer. (The init(world:) initializer is correct for anchors fixed to a location in the real world, but not for ones that should track the reference image.)
Using ARKit and RealityKit with an ARWorldTrackingConfiguration, I am trying to do "continuous image tracking" (where the reference image is tracked each frame, and virtual objects can be anchored to it, appearing to be attached to and move with the reference image). Because reference images are only recognized once in world tracking (as opposed to ARImageTrackingConfiguration, where reference images are continuously tracked as long as they are in frame), this is not possible out of the box.
To get the same results in a world tracking configuration, I am anchoring virtual objects to the reference image in the session(_:didAdd:) delegate method, and using the session(_:didUpdate:) delegate method as an opportunity to remove the ARImageAnchor after each time it is identified. This causes the reference image to be re-recognized over and over, allowing virtual objects to be anchored to the image and appear to track it frame-to-frame.
In the example below, I am placing two ball markers to track the position of the reference image. First marker is placed only once, at the location where the reference image is initially detected. The other marker is re-positioned each time the reference image is re-detected, appearing to follow it.
This works. Virtual content tracks the reference image in the ARWorldTrackingConfiguration the same way it would in an image tracking config. But while the "animation" in ARImageTrackingConfiguration is very smooth, the animation in world tracking is much less smooth, more jumpy, as if it was running at 10 or 15 frames per second. (Actual FPS as reported by .showStatistics stays near 60 FPS in both configurations.)
I assume the difference in smoothness results from the time it takes ARKit to do the work of repeatedly re-recognizing and removing the reference image anchor on each didAdd/didUpdate cycle.
I would like to know if there is a better technique to get "continuous image tracking" in an ARWorldTrackingConfiguration, and/or if there is any way I can improve the code in the delegate methods to achieve this effect.
import ARKit
import RealityKit
class ViewController: UIViewController, ARSessionDelegate {
#IBOutlet var arView: ARView!
// originalImageAnchor is used to visualize the first-detected location of reference image
// currentImageAnchor should be continuously updated to match current position of ref image
var originalImageAnchor: AnchorEntity!
var currentImageAnchor: AnchorEntity!
let ballRadius: Float = 0.02
override func viewDidLoad() {
super.viewDidLoad()
guard let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
bundle: nil) else { fatalError("Missing expected asset catalog resources.") }
arView.session.delegate = self
arView.automaticallyConfigureSession = false
arView.debugOptions = [.showStatistics]
arView.renderOptions = [.disableCameraGrain, .disableHDR, .disableMotionBlur,
.disableDepthOfField, .disableFaceOcclusions, .disablePersonOcclusion,
.disableGroundingShadows, .disableAREnvironmentLighting]
let configuration = ARWorldTrackingConfiguration()
configuration.detectionImages = referenceImages
configuration.maximumNumberOfTrackedImages = 1 // there is one ref image named "coaster_rb"
arView.session.run(configuration)
}
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
guard let imageAnchor = anchors[0] as? ARImageAnchor else { return }
// Reference image detected. This will happen multiple times because
// we delete ARImageAnchor in session(_:didUpdate:)
if let imageName = imageAnchor.name, imageName == "coaster_rb" {
// If originalImageAnchor is nil, create an anchor and
// add a marker at initial position of reference image.
if originalImageAnchor == nil {
originalImageAnchor = AnchorEntity(world: imageAnchor.transform)
let originalImageMarker = generateBallMarker(radius: ballRadius, color: .systemPink)
originalImageMarker.position.y = ballRadius + (ballRadius * 2)
originalImageAnchor.addChild(originalImageMarker)
arView.scene.addAnchor(originalImageAnchor)
}
// If currentImageAnchor is nil, add an anchor and marker at reference image position
// If currentImageAnchor has already been added, adjust its position to match ref image
if currentImageAnchor == nil {
currentImageAnchor = AnchorEntity(world: imageAnchor.transform)
let currentImageMarker = generateBallMarker(radius: ballRadius, color: .systemTeal)
currentImageMarker.position.y = ballRadius
currentImageAnchor.addChild(currentImageMarker)
arView.scene.addAnchor(currentImageAnchor)
} else {
currentImageAnchor.setTransformMatrix(imageAnchor.transform, relativeTo: nil)
}
}
}
func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
guard let imageAnchor = anchors[0] as? ARImageAnchor else { return }
// Delete reference image anchor to allow for ongoing tracking as it moves
if let imageName = imageAnchor.name, imageName == "coaster_rb" {
arView.session.remove(anchor: anchors[0])
}
}
func generateBallMarker(radius: Float, color: UIColor) -> ModelEntity {
let ball = ModelEntity(mesh: .generateSphere(radius: radius),
materials: [SimpleMaterial(color: color, isMetallic: false)])
return ball
}
}
Continuous image tracking does work out of the box with RealityKit ARViews in world tracking configurations. A mistake in my original code led me to think otherwise.
Incorrect anchor entity initialization (for what I was trying to accomplish):
currentImageAnchor = AnchorEntity(world: imageAnchor.transform)
Since I wanted to track the ARImageAnchor assigned to the matched reference image, I should have done it like this:
currentImageAnchor = AnchorEntity(anchor: imageAnchor)
The corrected example below places one virtual marker that is fixed to the reference image's initial position, and another that smoothly tracks the reference image in a world tracking configuration:
import ARKit
import RealityKit
class ViewController: UIViewController, ARSessionDelegate {
#IBOutlet var arView: ARView!
let ballRadius: Float = 0.02
override func viewDidLoad() {
super.viewDidLoad()
guard let referenceImages = ARReferenceImage.referenceImages(
inGroupNamed: "AR Resources", bundle: nil) else {
fatalError("Missing expected asset catalog resources.")
}
arView.session.delegate = self
arView.automaticallyConfigureSession = false
arView.debugOptions = [.showStatistics]
arView.renderOptions = [.disableCameraGrain, .disableHDR,
.disableMotionBlur, .disableDepthOfField,
.disableFaceOcclusions, .disablePersonOcclusion,
.disableGroundingShadows, .disableAREnvironmentLighting]
let configuration = ARWorldTrackingConfiguration()
configuration.detectionImages = referenceImages
configuration.maximumNumberOfTrackedImages = 1
arView.session.run(configuration)
}
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
guard let imageAnchor = anchors[0] as? ARImageAnchor else { return }
if let imageName = imageAnchor.name, imageName == "target_image" {
// AnchorEntity(world: imageAnchor.transform) results in anchoring
// virtual content to the real world. Content anchored like this
// will remain in position even if the reference image moves.
let originalImageAnchor = AnchorEntity(world: imageAnchor.transform)
let originalImageMarker = makeBall(radius: ballRadius, color: .systemPink)
originalImageMarker.position.y = ballRadius + (ballRadius * 2)
originalImageAnchor.addChild(originalImageMarker)
arView.scene.addAnchor(originalImageAnchor)
// AnchorEntity(anchor: imageAnchor) results in anchoring
// virtual content to the ARImageAnchor that is attached to the
// reference image. Content anchored like this will appear
// stuck to the reference image.
let currentImageAnchor = AnchorEntity(anchor: imageAnchor)
let currentImageMarker = makeBall(radius: ballRadius, color: .systemTeal)
currentImageMarker.position.y = ballRadius
currentImageAnchor.addChild(currentImageMarker)
arView.scene.addAnchor(currentImageAnchor)
}
}
func makeBall(radius: Float, color: UIColor) -> ModelEntity {
let ball = ModelEntity(mesh: .generateSphere(radius: radius),
materials: [SimpleMaterial(color: color, isMetallic: false)])
return ball
}
}
When adding a child to my AnchorEntity(.camera), it appears as if the child is spawning behind my camera (meaning I can only see the child when I turn around). I have also tried adding a mesh to my anchor directly, but unfortunately ARKit / RealityKit does not render the mesh when you are inside of it (which, because it's centered around the camera, is theoretically always the case; it could also be that it's always located behind the screen, where the user is, and I'm never able to see it).
Also, oddly enough, the child entity does not move with the camera AnchorEntity despite the translation transform being set to (0,0,0).
My two questions are:
Is the .camera anchor actually located right where the physical iPad / camera is located or is it located further back (perhaps where the user would normally hold the iPad)?
How do you get a child entity of the AnchorEntity(.camera) to move as the iPad / camera moves in real space?
Answer to the first question
In the RealityKit and ARKit frameworks, ARCamera has a pivot point like other entities (nodes) have, and it's located at the point where the lens is attached to the camera body (at bayonet level). This pivot can tether AnchorEntity(.camera). In other words, the virtual camera and the real-world camera have that pivot point at approximately the same place.
So, if you attach RealityKit's AnchorEntity to the camera's pivot, you place it at the coordinates where the camera's bayonet is located. And this AnchorEntity(.camera) will be tracked automatically, with no need to implement the session(_:didUpdate:) method.
However, if you attach ARKit's ARAnchor to the camera's pivot, you have to implement the session(_:didUpdate:) method to constantly update the position and orientation of that anchor for every ARFrame.
Answer to the second question
If you want to constantly update a model's position in RealityKit at 60 fps (as the ARCamera moves and rotates), you need to use the following approach:
import ARKit
import RealityKit
class ViewController: UIViewController {
#IBOutlet var arView: ARView!
override func viewDidLoad() {
super.viewDidLoad()
let box = MeshResource.generateBox(size: 0.25)
let material = SimpleMaterial(color: .systemPink, isMetallic: true)
let boxEntity = ModelEntity(mesh: box, materials: [material])
let cameraAnchor = AnchorEntity(.camera) // ARCamera anchor
cameraAnchor.addChild(boxEntity)
arView.scene.addAnchor(cameraAnchor)
boxEntity.transform.translation = [0, 0,-0.5] // Box offset 0.5 m
}
}
Or you can use ARKit's good old .currentFrame instance property in the session(_:didUpdate:) delegate method:
extension ViewController: ARSessionDelegate {
func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
guard let transform = arView.session.currentFrame?.camera.transform
else { return }
// Pin an ARAnchor to the camera's current transform...
let arkitAnchor = ARAnchor(transform: transform)
arView.session.add(anchor: arkitAnchor) // add to session
// ...and tether a RealityKit anchor entity to it
let anchor = AnchorEntity(anchor: arkitAnchor)
anchor.addChild(boxEntity)
arView.scene.addAnchor(anchor) // add to scene
// Note: a new anchor is added every time this delegate fires; earlier anchors are not removed here
}
}
class ViewController: UIViewController {
#IBOutlet var arView: ARView!
var boxEntity = ModelEntity(...)
override func viewDidLoad() {
super.viewDidLoad()
arView.session.delegate = self // Session's delegate
}
}
To find out how to save the ARCamera Pose over time, read the following post.
I have a list of images coming from a server and stored in the gallery. I want to pick any image and place it on a live wall using ARKit, and I want to convert the images into 3D objects to perform operations like zooming, moving the image, etc.
Can anybody please guide me on how to create a custom object in AR?
To detect vertical surfaces (e.g. walls) in ARKit, you first need to set up an ARWorldTrackingConfiguration and then enable planeDetection within your app.
So under your Class Declaration you would create the following variables:
#IBOutlet var augmentedRealityView: ARSCNView!
let augmentedRealitySession = ARSession()
let configuration = ARWorldTrackingConfiguration()
Then initialise your ARSession, in viewDidLoad for example:
override func viewDidLoad() {
super.viewDidLoad()
//1. Set Up Our ARSession
augmentedRealityView.session = augmentedRealitySession
//2. Assign The ARSCNViewDelegate
augmentedRealityView.delegate = self
//3. Set Up Plane Detection
configuration.planeDetection = .vertical
//4. Run Our Configuration
augmentedRealitySession.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
Now that you are all set to detect vertical planes, you need to hook into the following ARSCNViewDelegate method:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) { }
Which simply:
Tells the delegate that a SceneKit node corresponding to a new AR
anchor has been added to the scene.
In this method we are going to explicitly look for any ARPlaneAnchors which have been detected, which provide us with:
Information about the position and orientation of a real-world flat
surface detected in a world-tracking AR session.
As such, placing an SCNPlane onto a detected vertical plane is as simple as this:
//-------------------------
//MARK: - ARSCNViewDelegate
//-------------------------
extension ViewController: ARSCNViewDelegate{
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
//1. Check We Have Detected An ARPlaneAnchor
guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
//2. Get The Size Of The ARPlaneAnchor
let width = CGFloat(planeAnchor.extent.x)
let height = CGFloat(planeAnchor.extent.z)
//3. Create An SCNPlane Which Matches The Size Of The ARPlaneAnchor
let imageHolder = SCNNode(geometry: SCNPlane(width: width, height: height))
//4. Rotate It
imageHolder.eulerAngles.x = -.pi/2
//5. Set Its Colour To Red
imageHolder.geometry?.firstMaterial?.diffuse.contents = UIColor.red
//6. Add It To Our Node & Thus The Hierarchy
node.addChildNode(imageHolder)
}
}
Applying This To Your Case:
In your case we need to do some additional work, as you want to allow the user to apply an image to the vertical plane.
As such, your best bet is to make the node you have just added a variable, e.g.:
class ViewController: UIViewController {
#IBOutlet var augmentedRealityView: ARSCNView!
let augmentedRealitySession = ARSession()
let configuration = ARWorldTrackingConfiguration()
var nodeWeCanChange: SCNNode?
}
As such your Delegate Callback might look like so:
//-------------------------
//MARK: - ARSCNViewDelegate
//-------------------------
extension ViewController: ARSCNViewDelegate{
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
//1. If We Haven't Created Our Interactive Node Then Proceed
if nodeWeCanChange == nil{
//a. Check We Have Detected An ARPlaneAnchor
guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
//b. Get The Size Of The ARPlaneAnchor
let width = CGFloat(planeAnchor.extent.x)
let height = CGFloat(planeAnchor.extent.z)
//c. Create An SCNPlane Which Matches The Size Of The ARPlaneAnchor
nodeWeCanChange = SCNNode(geometry: SCNPlane(width: width, height: height))
//d. Rotate It
nodeWeCanChange?.eulerAngles.x = -.pi/2
//e. Set Its Colour To Red
nodeWeCanChange?.geometry?.firstMaterial?.diffuse.contents = UIColor.red
//f. Add It To Our Node & Thus The Hierarchy
node.addChildNode(nodeWeCanChange!)
}
}
}
Now that you have a reference to nodeWeCanChange, setting its image at any time is simple!
Each SCNGeometry has a set of Materials which are:
A set of shading attributes that define the appearance of a geometry's
surface when rendered.
In our case we are looking for the material's diffuse property, which is:
An object that manages the material’s diffuse response to lighting.
And then the contents property, which is:
The visual contents of the material property—a color, image, or source
of animated content.
Obviously you need to handle the full logistics of this; however, a very basic example might look like so, assuming you stored your images in an array of UIImage, e.g.:
let imageGallery = [UIImage(named: "StackOverflow"), UIImage(named: "GitHub")]
I have created an IBAction which will change the image of our SCNNode's geometry based on the tag of the UIButton pressed, e.g.:
/// Changes The Material Of Our SCNNode's Geometry To The Image Selected By The User
///
/// - Parameter sender: UIButton
#IBAction func changeNodesImage(_ sender: UIButton){
guard let imageToApply = imageGallery[sender.tag], let node = nodeWeCanChange else { return}
node.geometry?.firstMaterial?.diffuse.contents = imageToApply
}
This is more than enough to point you in the right direction... Hope it helps...
I'm in the process of learning both ARKit and SceneKit concurrently, and it's been a bit of a challenge.
With an ARWorldTrackingSessionConfiguration session created, I was wondering if anyone knew of a way to get the position of the user's 'camera' in the scene session. The idea is that I want to animate an object towards the user's current position.
let reaperScene = SCNScene(named: "reaper.dae")!
let reaperNode = reaperScene.rootNode.childNode(withName: "reaper", recursively: true)!
reaperNode.position = SCNVector3Make(0, 0, -1)
let scene = SCNScene()
scene.rootNode.addChildNode(reaperNode)
// some unknown amount of time later
let currentCameraPosition = sceneView.pointOfView?.position
let moveAction = SCNAction.move(to: currentCameraPosition!, duration: 1.0)
reaperNode.runAction(moveAction)
However, it seems that currentCameraPosition is always [0,0,0], even though I am moving the camera around. Any idea what I'm doing wrong? Eventually the idea is I would rotate the object around an invisible sphere until it is in front of the camera and then animate it in, doing something similar to this: Rotate SCNCamera node looking at an object around an imaginary sphere (that way the user sees the object animate towards them)
Thanks for any help.
Set yourself as the ARSession.delegate. Then you can implement session(_:didUpdate:), which will give you an ARFrame for every frame processed in your session. The frame has a camera property that holds information on the camera's transform, rotation and position.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
// Do something with the new transform
let currentTransform = frame.camera.transform
doSomething(with: currentTransform)
}
As rickster pointed out, you can always get the current ARFrame, and the camera position through it, by calling session.currentFrame.
This is useful if you need the position just once, e.g. to move a node to where the camera has been, but you should use the delegate method if you want to get updates on the camera's position.
I know it has already been solved, but I have a neat little solution for it.
I would prefer adding a renderer delegate method; it's a method in ARSCNViewDelegate:
func renderer(_ renderer: SCNSceneRenderer, willRenderScene scene: SCNScene, atTime time: TimeInterval) {
guard let pointOfView = sceneView.pointOfView else { return }
let transform = pointOfView.transform
// Camera's forward direction (all three components negated, since the camera looks down its -Z axis)
let orientation = SCNVector3(-transform.m31, -transform.m32, -transform.m33)
// Camera's position in world space
let location = SCNVector3(transform.m41, transform.m42, transform.m43)
// A point one unit in front of the camera
let currentPositionOfCamera = orientation + location
print(currentPositionOfCamera)
}
Of course, you can't add two SCNVector3 values together out of the box, so you need to paste the following outside the class:
func +(lhv:SCNVector3, rhv:SCNVector3) -> SCNVector3 {
return SCNVector3(lhv.x + rhv.x, lhv.y + rhv.y, lhv.z + rhv.z)
}
ARKit + SceneKit
For your convenience, you can create a ViewController extension with an instance method session(_:didUpdate:), in which the update occurs.
import ARKit
import SceneKit
extension ViewController: ARSessionDelegate {
func session(_ session: ARSession, didUpdate frame: ARFrame) {
let transform = frame.camera.transform
let position = transform.columns.3
print(position.x, position.y, position.z) // UPDATING
}
}
class ViewController: UIViewController {
#IBOutlet var sceneView: ARSCNView!
override func viewWillAppear(_ animated: Bool) {
super.viewWillAppear(animated)
sceneView.session.delegate = self // ARSESSION DELEGATE
let config = ARWorldTrackingConfiguration()
sceneView.session.run(config)
}
}
RealityKit
In RealityKit, the ARView object exposes the camera's transform as well:
import RealityKit
import UIKit
import Combine
class ViewController: UIViewController {
#IBOutlet var arView: ARView!
var subs: [AnyCancellable] = []
override func viewDidLoad() {
super.viewDidLoad()
arView.scene.subscribe(to: SceneEvents.Update.self) { _ in
let camTransform = self.arView.cameraTransform.matrix
print(camTransform) // UPDATING
}.store(in: &subs)
}
}