Vertical plane detection in ARKit is not very good, so I am using the proximity sensor to place a detected vertical plane. The UX is as follows:
Ask the user to place the front of the device against the wall.
When the proximity sensor is triggered, create a vertical plane using the AR camera transform.
The issue I am currently facing is that when the front sensor is triggered, everything comes to a halt: all the CoreMotion sensors and the ARSCNViewDelegate render callbacks stop. This causes the world origin to drift from its original point, and the placed item moves along with it.
Is there a way to get proximity sensor data without shutting everything down? Is there a better way to place vertical items?
Asynchronous functions let you perform two or more tasks almost simultaneously, without waiting for the dispatched block to finish executing.
So you have to use an approach like this:
class ViewController: UIViewController {

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        DispatchQueue.main.async {
            self.proximitySensorActivation()
        }
    }

    func proximitySensorActivation() {
        let device = UIDevice.current
        device.isProximityMonitoringEnabled = true

        if device.isProximityMonitoringEnabled {
            NotificationCenter.default.addObserver(self,
                                         selector: #selector(proximityChanged),
                                             name: UIDevice.proximityStateDidChangeNotification,
                                           object: device)
        }
    }

    @objc func proximityChanged(notification: NSNotification) {
        if let device = notification.object as? UIDevice {
            print(device)
        }
    }
}
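For the plane-placement step itself, the proximityChanged handler above could be extended along these lines. This is a minimal sketch, assuming an ARSCNView outlet named sceneView; the plane size and material are placeholders, not a definitive implementation:

@objc func proximityChanged(notification: NSNotification) {
    guard let device = notification.object as? UIDevice,
          device.proximityState,                       // sensor covered: device is on the wall
          let cameraTransform = sceneView.session.currentFrame?.camera.transform
    else { return }

    // Hypothetical 1 x 1 m plane standing in for the detected wall
    let wallPlane = SCNPlane(width: 1.0, height: 1.0)
    wallPlane.firstMaterial?.diffuse.contents = UIColor.cyan.withAlphaComponent(0.5)

    let wallNode = SCNNode(geometry: wallPlane)
    wallNode.simdTransform = cameraTransform           // place it exactly where the device is right now
    sceneView.scene.rootNode.addChildNode(wallNode)
}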
When adding a child to my AnchorEntity(.camera), it appears as if the child is spawning behind my camera (meaning I can only see the child when I turn around). I have also tried to add a mesh to my anchor directly, but unfortunately ARKit / RealityKit does not render the mesh when you are inside of it (which, because it's centered around the camera, is theoretically always the case; it could also be that it's always located behind the screen, where the user is, so I'm never able to see it).
Also, oddly enough, the child entity does not move with the camera AnchorEntity despite setting the translation transform to (0, 0, 0).
My two questions are:
Is the .camera anchor actually located right where the physical iPad / camera is located or is it located further back (perhaps where the user would normally hold the iPad)?
How do you get a child entity of the AnchorEntity(.camera) to move as the iPad / camera moves in real space?
Answer to the first question
In the RealityKit and ARKit frameworks, ARCamera has a pivot point just like other entities (nodes) do, and it is located at the point where the lens is attached to the camera body (at bayonet level). AnchorEntity(.camera) is tethered to this pivot. In other words, the virtual camera and the real-world camera have their pivot points at approximately the same place.
So if you attach a RealityKit AnchorEntity to the camera's pivot, you place it at the coordinates where the camera's bayonet is located. This AnchorEntity(.camera) is tracked automatically, with no need to implement the session(_:didUpdate:) method.
However, if you attach an ARKit ARAnchor to the camera's pivot, you have to implement the session(_:didUpdate:) method to constantly update the anchor's position and orientation for every ARFrame.
Answer to the second question
If you want the model's position to be constantly updated in RealityKit at 60 fps (as the ARCamera moves and rotates), use the following approach:
import ARKit
import RealityKit

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()

        let box = MeshResource.generateBox(size: 0.25)
        let material = SimpleMaterial(color: .systemPink, isMetallic: true)
        let boxEntity = ModelEntity(mesh: box, materials: [material])

        let cameraAnchor = AnchorEntity(.camera)        // ARCamera anchor
        cameraAnchor.addChild(boxEntity)
        arView.scene.addAnchor(cameraAnchor)

        boxEntity.transform.translation = [0, 0, -0.5]  // box offset 0.5 m in front of the camera
    }
}
Or you can use ARKit's good old currentFrame instance property in the session(_:didUpdate:) delegate method:
extension ViewController: ARSessionDelegate {

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let transform = arView.session.currentFrame?.camera.transform
        else { return }

        let arkitAnchor = ARAnchor(transform: transform)
        arView.session.add(anchor: arkitAnchor)         // add to session

        let anchor = AnchorEntity(anchor: arkitAnchor)
        anchor.addChild(boxEntity)
        arView.scene.addAnchor(anchor)                  // add to scene
    }
}

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!
    var boxEntity = ModelEntity(...)

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.session.delegate = self                  // session's delegate
    }
}
To find out how to save the ARCamera Pose over time, read the following post.
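As a rough illustration, saving the camera pose over time might look like this (a minimal sketch; the cameraPoses array is hypothetical and assumes the ARSessionDelegate setup shown above):

var cameraPoses: [simd_float4x4] = []    // hypothetical storage for camera poses

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // Record the camera transform for every incoming ARFrame
    cameraPoses.append(frame.camera.transform)
}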
I am working on an iOS app using ARKit.
In the real world, there is a poster on the wall. The poster is a fixed thing, so any needed preprocessing may be applied.
The goal is to make this poster a window into a virtual room, so that when the user approaches the poster, they can look "through" it at some virtual 3D environment (room). Of course, the user cannot go through the "window" and wander around that 3D environment. They can only observe the virtual room by looking "through" the poster.
I know that it's possible to make this poster detectable by ARKit, and to play some visual effects around it, or even a movie on top of it.
But I did not find information on how to turn it into a window into a virtual 3D world.
Any ideas and links to sample projects are greatly appreciated.
Look at this video posted on the Augmented Images webpage (use the Chrome browser to watch it).
It's easy to create that type of virtual cube. All you need is a 3D model of a simple cube primitive without a front face (so that its inner surface is visible), plus a plane with a square hole. Assign an out-of-the-box RealityKit occlusion material, or a hand-made SceneKit occlusion material, to this plane, and it will hide all the outer walls of the cube behind it (see the picture below).
In Autodesk Maya, the Occlusion material is the Hold-Out option in Render Stats (for Viewport 2.0 only).
When your app tracks the poster on the wall (with the detectionImages option activated), it must recognize the picture and load the 3D cube along with its masking plane carrying the occlusion shader. Since the ARImageAnchor on the poster and the pivot point of the 3D cube must coincide, the cube's pivot point has to sit on the front face of the cube (at the same level as the wall's surface).
If you wish to download Apple's sample code containing the Image Detection experience, just click the blue button on the same webpage as detectionImages.
Here is a short example of my code:
@IBOutlet var sceneView: ARSCNView!

override func viewDidLoad() {
    super.viewDidLoad()
    sceneView.delegate = self       // for using renderer() methods of ARSCNViewDelegate
    sceneView.scene = SCNScene()
}

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    resetTrackingConfiguration()
}

func resetTrackingConfiguration() {
    guard let refImage = ARReferenceImage.referenceImages(inGroupNamed: "Poster",
                                                             bundle: nil)
    else { return }

    let config = ARWorldTrackingConfiguration()
    config.detectionImages = refImage
    config.maximumNumberOfTrackedImages = 1

    let options = [ARSession.RunOptions.removeExistingAnchors,
                   ARSession.RunOptions.resetTracking]

    sceneView.session.run(config, options: ARSession.RunOptions(options))
}
...and, of course, an ARSCNViewDelegate renderer() instance method:
func renderer(_ renderer: SCNSceneRenderer,
              didAdd node: SCNNode,
              for anchor: ARAnchor) {

    guard let imageAnchor = anchor as? ARImageAnchor,
          let _ = imageAnchor.referenceImage.name
    else { return }

    // anchorsArray and portalNode are properties declared elsewhere in the class
    anchorsArray.append(imageAnchor)

    if anchorsArray.first != nil {
        node.addChildNode(portalNode)
    }
}
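The portalNode above is assumed to be declared elsewhere. Here is a minimal sketch of how such a node could be assembled in SceneKit, using a hand-made occlusion material (colorBufferWriteMask = []) on a masking frame around the opening. The makePortalNode() helper and all sizes are hypothetical, and depending on the image anchor's orientation the node may still need an extra rotation:

// A hypothetical helper that assembles a portal node: a room box behind the wall
// plus an occlusion "frame" forming a plane with a square hole around a 1 x 1 m opening.
func makePortalNode() -> SCNNode {
    let portalNode = SCNNode()

    // Room: a 1 m cube seen from the inside (cull front faces so the inner surface is visible).
    // It is shifted back so the cube's front face sits at the portal plane (z = 0).
    let room = SCNBox(width: 1, height: 1, length: 1, chamferRadius: 0)
    room.firstMaterial?.diffuse.contents = UIColor.darkGray
    room.firstMaterial?.cullMode = .front
    let roomNode = SCNNode(geometry: room)
    roomNode.position = SCNVector3(0, 0, -0.5)
    portalNode.addChildNode(roomNode)

    // Hand-made occlusion material: writes depth but no color, so whatever is behind it is hidden.
    func occlusionStrip(width: CGFloat, height: CGFloat, x: Float, y: Float) -> SCNNode {
        let plane = SCNPlane(width: width, height: height)
        plane.firstMaterial?.colorBufferWriteMask = []
        plane.firstMaterial?.isDoubleSided = true
        let node = SCNNode(geometry: plane)
        node.position = SCNVector3(x, y, 0)
        node.renderingOrder = -1        // render the mask before the room
        return node
    }

    // Four strips around the 1 x 1 m opening, together acting as a "plane with a square hole"
    portalNode.addChildNode(occlusionStrip(width: 3, height: 1, x: -2, y:  0))  // left
    portalNode.addChildNode(occlusionStrip(width: 3, height: 1, x:  2, y:  0))  // right
    portalNode.addChildNode(occlusionStrip(width: 7, height: 3, x:  0, y:  2))  // top
    portalNode.addChildNode(occlusionStrip(width: 7, height: 3, x:  0, y: -2))  // bottom

    return portalNode
}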
The following attributes are returning false for me, but I am not able to understand why.
ARImageTrackingConfiguration.isSupported
ARWorldTrackingConfiguration.isSupported
I am testing it on an iPhone Xs with iOS 12.1.1, with the code built with Xcode 10.1.
Note that ARConfiguration.isSupported does return true.
Any ideas why this might be happening?
Only one tracking configuration can be run at a given time.
You should write your code this way:
import UIKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet var sceneView: ARSCNView!
    var configuration: ARConfiguration?

    //.........
    //.........

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        if ARWorldTrackingConfiguration.isSupported {
            configuration = ARWorldTrackingConfiguration()          // 6-DoF
        } else {
            configuration = AROrientationTrackingConfiguration()    // 3-DoF
        }
        sceneView.session.run(configuration!)
    }
}
Also, read carefully about these 3 types of tracking configuration:
ARWorldTrackingConfiguration() (rotation and translation x-y-z) 6-DoF
AROrientationTrackingConfiguration() (only rotation x-y-z) 3-DoF
ARImageTrackingConfiguration() 6-DoF, but image-only tracking lets you anchor virtual content to known images only while those images are in the camera's view.
Because 3-DoF tracking creates limited AR experiences, you should generally not use the AROrientationTrackingConfiguration() class directly. Instead, use the subclass ARWorldTrackingConfiguration() for tracking with six degrees of freedom (6-DoF), plane detection, and hit-testing. Use 3-DoF tracking only as a fallback in situations where 6-DoF tracking is temporarily unavailable.
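If image tracking is what you actually need, a minimal sketch of running ARImageTrackingConfiguration could look like this (it assumes an AR Resource Group named "AR Resources" in the asset catalog and the sceneView outlet from above):

if ARImageTrackingConfiguration.isSupported {
    let imageConfig = ARImageTrackingConfiguration()
    if let refImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                        bundle: nil) {
        imageConfig.trackingImages = refImages          // images to track
        imageConfig.maximumNumberOfTrackedImages = 1
    }
    sceneView.session.run(imageConfig)
}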
Hope this helps.
Desired behavior: when an action is removed from a node (with removeAction(forKey:), for instance), it stops animating and all the changes caused by the action are discarded, so the node returns to its previous state. In other words, I want to achieve behavior similar to CAAnimation.
But when an SKAction is removed, the node remains changed. That's not good, because to restore its state I need to know exactly which action was removed. And if I later change the action, I will also need to update the node's state restoration.
Update:
The particular purpose is to show a possible move in a match-3 game. When I show a move, the pieces start pulsating (a scale action, repeating forever). When the user moves, I want to stop showing the move, so I remove the action. As a result, the pieces may remain scaled down. Later I would like to add more fancy and complicated animations, so I want to be able to edit them easily.
Thanks to the helpful comment and answer, I came to my own solution. I think a state machine would be a bit too heavy here. Instead, I created a wrapper node whose main purpose is to run the animation. It also has state: an isAnimating property. But, first of all, it keeps the startAnimating() and stopAnimating() methods close to each other and encapsulated, so it's harder to mess up.
class ShowMoveAnimNode: SKNode {

    let animKey = "showMove"

    var isAnimating: Bool = false {
        didSet {
            guard oldValue != isAnimating else { return }
            if isAnimating {
                startAnimating()
            } else {
                stopAnimating()
            }
        }
    }

    private func startAnimating() {
        let shortPeriod = 0.2
        let scaleDown = SKAction.scale(by: 0.75, duration: shortPeriod)
        let seq = SKAction.sequence([scaleDown,
                                     scaleDown.reversed(),
                                     scaleDown,
                                     scaleDown.reversed(),
                                     SKAction.wait(forDuration: shortPeriod * 6)])
        let repeated = SKAction.repeatForever(seq)
        run(repeated, withKey: animKey)
    }

    private func stopAnimating() {
        removeAction(forKey: animKey)
        xScale = 1
        yScale = 1
    }
}
Usage: just add everything that should be animated to this node. It works well with simple animations like fade, scale and move.
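A hedged usage sketch (pieceSprite and scene are hypothetical; any node added as a child pulses together with the wrapper):

let showMoveNode = ShowMoveAnimNode()
showMoveNode.addChild(pieceSprite)       // pieceSprite: an SKSpriteNode you want to highlight
scene.addChild(showMoveNode)

showMoveNode.isAnimating = true          // start pulsating
// ... later, when the user makes a move:
showMoveNode.isAnimating = false         // stop and restore the scale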
As @Knight0fDragon suggested, you would be better off using the GKStateMachine functionality. I will give you an example.
First, declare the states of your player/character in your scene:
lazy var playerState: GKStateMachine = GKStateMachine(states: [
    Idle(scene: self),
    Run(scene: self)
])
Then you need to create a class for each of these states. In this example I will show you only the Idle class:
import SpriteKit
import GameplayKit

class Idle: GKState {

    weak var scene: GameScene?

    init(scene: SKScene) {
        self.scene = scene as? GameScene
        super.init()
    }

    override func didEnter(from previousState: GKState?) {
        // Here you can make changes to your character when it enters this state,
        // for example, change its texture.
    }

    override func isValidNextState(_ stateClass: AnyClass) -> Bool {
        // As the method name states: which states the character can go to from this state.
        return stateClass is Run.Type
    }

    override func update(deltaTime seconds: TimeInterval) {
        // The update method for this state. Say you have a button that controls your character's
        // velocity: if the player goes over a certain velocity, make it enter the Run state.
        if playerVelocity > 500 {   // playerVelocity is just an example of a variable to check the player's velocity
            scene?.playerState.enter(Run.self)
        }
    }
}
Now, of course, you need to do two things in your scene. First, initialize the character to a certain state, or else it will remain stateless; you can do this in the didMove method.
override func didMove(to view: SKView) {
    playerState.enter(Idle.self)
}
And last but not least, make sure the scene's update method calls the state machine's update method.
override func update(_ currentTime: TimeInterval) {
    playerState.update(deltaTime: currentTime)
}
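Note that GKState's update(deltaTime:) expects the time elapsed since the previous update, while SKScene's update passes an absolute timestamp. If you need a true delta, you could track the previous timestamp yourself (a sketch; lastUpdateTime is a hypothetical property on the scene):

var lastUpdateTime: TimeInterval = 0     // hypothetical property on the scene

override func update(_ currentTime: TimeInterval) {
    let delta = lastUpdateTime == 0 ? 0 : currentTime - lastUpdateTime
    lastUpdateTime = currentTime
    playerState.update(deltaTime: delta)
}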
I'm looking for a way to detect when spatial tracking is "working/not working" in ARKit, i.e. when ARKit has enough visual information to start 3D spatial tracking.
In other apps I've tried, the user gets prompted to look around with the phone/camera to resume spatial tracking if ARKit doesn't get enough information from the camera. I have even seen apps with a progress bar showing how much more the user needs to move the device around to resume tracking.
Would a good way to detect whether tracking is available be to check how many rawFeaturePoints the ARSession's current frame has? E.g. if the current frame has more than, say, 100 rawFeaturePoints, we can assume that spatial tracking is working.
Would this be a good approach, or is there built-in functionality or a better way in ARKit to detect whether spatial tracking is working that I don't know about?
You could use feature points, but I think that is probably overkill; something like this might be a good start:
Using the currentFrame of an ARSession, you can get the current tracking state like so:
//------------------------------------------------
// MARK: ARSession Extension To Log Tracking States
//------------------------------------------------

extension ARSession {

    /// Returns the status of the current ARSession
    ///
    /// - Returns: String
    func sessionStatus() -> String? {

        // 1. Get the current frame
        guard let frame = self.currentFrame else { return nil }

        var status = "Preparing Device.."

        // 2. Return the current tracking state & lighting conditions
        switch frame.camera.trackingState {
        case .normal:                         status = ""
        case .notAvailable:                   status = "Tracking Unavailable"
        case .limited(.excessiveMotion):      status = "Please Slow Your Movement"
        case .limited(.insufficientFeatures): status = "Try To Point At A Flat Surface"
        case .limited(.initializing):         status = "Initializing"
        case .limited(.relocalizing):         status = "Relocalizing"
        }

        guard let lightEstimate = frame.lightEstimate?.ambientIntensity else { return nil }

        if lightEstimate < 100 { status = "Lighting Is Too Dark" }

        return status
    }
}
You would then call it in an ARSCNViewDelegate callback, something like this:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {

    DispatchQueue.main.async {
        // 1. Update the tracking status (augmentedRealitySession is the ARSession in use)
        print(self.augmentedRealitySession.sessionStatus())
    }
}
There are also other delegate callbacks you can use, e.g.:
func session(_ session: ARSession, didFailWithError error: Error) {
    print("The ARSession Failed")
}

func sessionWasInterrupted(_ session: ARSession) {
    print("ARSession Was Interrupted")
}
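There is also a dedicated ARSessionObserver callback for tracking-state changes, which may be a cleaner hook than polling in the render loop; a minimal sketch:

func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
    // Fires whenever the tracking quality changes (e.g. limited -> normal)
    print("Tracking state changed to: \(camera.trackingState)")
}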
These ARKit Guidelines also provide some useful information on how to handle these states: Apple Guidelines
If you do actually want to track the number of featurePoints, however, you can do something like this:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {

    guard let currentFrame = self.augmentedRealitySession.currentFrame,
          let featurePointCount = currentFrame.rawFeaturePoints?.points.count else { return }

    print("Number Of Feature Points In Current Session = \(featurePointCount)")
}
And if you want to see an example you can have a look here: Feature Points Example
Hope it helps...