RealityKit – Image recognition and working with many scenes

I've created an app using the RealityKit template file. Inside Reality Composer there are multiple scenes, and all of them use image recognition to activate some animations.
Inside Xcode I have to load all the scenes as anchors and append those anchors to the arView.scene.anchors array. The issue is an obvious one: as I present the physical 2D images one after the other, I get multiple anchors piled on top of each other, which is not desirable. I'm aware of arView.scene.anchors.removeAll() prior to loading the new anchor, but my issue is this:
How do I check when a certain image has appeared, so that I can remove the existing anchor and load the correct one? I've tried to look for something like ARKit's didUpdate, but I can't see anything similar in RealityKit.
Many thanks

Foreword
RealityKit's AnchorEntity(.image), coming from Reality Composer, matches ARKit's ARImageTrackingConfiguration. When an iOS device recognizes a reference image, it creates an image anchor (which conforms to the ARTrackable protocol) that tethers a corresponding 3D model. And, as you understand, you must show just one reference image at a time (in your particular case, the AR app can't operate normally when you give it two or more images simultaneously).
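For reference, here is a rough sketch of what the equivalent manual ARKit setup could look like (assuming a reference-image group named "AR Resources" exists in your asset catalog; RealityKit normally configures this for you behind the scenes):

import ARKit

// A sketch only: RealityKit sets this up automatically for AnchorEntity(.image).
let config = ARImageTrackingConfiguration()

if let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                          bundle: .main) {
    config.trackingImages = referenceImages
}
config.maximumNumberOfTrackedImages = 1     // track one reference image at a time

// arView.session.run(config)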
Here is a code snippet showing what the if-condition logic might look like:
import SwiftUI
import RealityKit

struct ContentView : View {
    var body: some View {
        ARViewContainer().edgesIgnoringSafeArea(.all)
    }
}

struct ARViewContainer: UIViewRepresentable {

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)

        let id02Scene = try! Experience.loadID2()
        print(id02Scene)        // prints scene hierarchy

        let anchor = id02Scene.children[0]
        print(anchor.components[AnchoringComponent.self] as Any)

        if anchor.components[AnchoringComponent.self] == AnchoringComponent(
                    .image(group: "Experience.reality",
                            name: "assets/MainID_4b51de84.jpeg")) {
            arView.scene.anchors.removeAll()
            print("LOAD SCENE")
            arView.scene.anchors.append(id02Scene)
        }
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) { }
}
ID2 scene hierarchy printed in console (screenshot omitted).
P.S.
You should implement a SwiftUI Coordinator class (read about it here), and inside the Coordinator use ARSessionDelegate's session(_:didUpdate:) instance method to update the anchors' properties at 60 fps (a wiring sketch follows the snippet below).
Also, you may use the following logic: if the anchor of scene 1 or the anchor of scene 3 is active, just delete all anchors from the collection and load scene 2.
var arView = ARView(frame: .zero)

let id01Scene = try! Experience.loadID1()
let id02Scene = try! Experience.loadID2()
let id03Scene = try! Experience.loadID3()

func makeUIView(context: Context) -> ARView {
    arView.session.delegate = context.coordinator
    arView.scene.anchors.append(id01Scene)
    arView.scene.anchors.append(id02Scene)
    arView.scene.anchors.append(id03Scene)
    return arView
}

...

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    if arView.scene.anchors[0].isActive || arView.scene.anchors[2].isActive {
        arView.scene.anchors.removeAll()
        arView.scene.anchors.append(id02Scene)
        print("Load Scene Two")
    }
}
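And here is a minimal sketch of the Coordinator wiring mentioned in the P.S. (it only shows the delegate plumbing and omits the scene loading from the snippet above; the names are illustrative):

import SwiftUI
import RealityKit
import ARKit

struct ARViewContainer: UIViewRepresentable {

    func makeCoordinator() -> Coordinator {
        Coordinator()
    }

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.session.delegate = context.coordinator
        context.coordinator.arView = arView      // give the delegate access to the view
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) { }

    class Coordinator: NSObject, ARSessionDelegate {
        weak var arView: ARView?

        func session(_ session: ARSession, didUpdate frame: ARFrame) {
            // arView?.scene.anchors is available here at 60 fps.
            // Check which anchors are active and swap scenes,
            // exactly as in the snippet above.
        }
    }
}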


Compass Heading information in RealityKit

I'm new to SwiftUI, and I've learned some basics of RealityKit and ARKit for my project. I just want to display an arrow that always faces north, or at least get heading information displayed as text when I open the camera (AR experience).
I'm waiting for someone to solve this fundamental problem.
Thanks in advance!
Use the following code to create an AR experience that depends on the device's geographic heading (compass direction).
Reality Composer (screenshot omitted)
Code
import SwiftUI
import RealityKit
import ARKit

struct ARViewContainer: UIViewRepresentable {

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.cameraMode = .ar
        arView.automaticallyConfigureSession = false

        let config = ARWorldTrackingConfiguration()
        config.worldAlignment = .gravityAndHeading      // case = 1
        arView.session.run(config)

        let arrowScene = try! Experience.loadNorth()
        arView.scene.anchors.append(arrowScene)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) { }
}

struct ContentView : View {
    var body: some View {
        ZStack {
            ARViewContainer().ignoresSafeArea()
        }
    }
}
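If you prefer to skip Reality Composer entirely, here is a minimal sketch of the same idea in code. With .gravityAndHeading the world's -Z axis points to true north and +X points east, so an entity stretched along -Z from a world anchor marks north (the red box below is just a stand-in for a real arrow model):

import SwiftUI
import RealityKit
import ARKit

func makeUIView(context: Context) -> ARView {
    let arView = ARView(frame: .zero)
    arView.automaticallyConfigureSession = false

    let config = ARWorldTrackingConfiguration()
    config.worldAlignment = .gravityAndHeading   // -Z = true north, +X = east, +Y = up
    arView.session.run(config)

    // A stand-in "arrow": a thin box stretched along the north axis.
    let pointer = ModelEntity(mesh: .generateBox(size: [0.02, 0.02, 0.5]),
                              materials: [SimpleMaterial(color: .red, isMetallic: false)])
    pointer.position = [0, 0, -0.25]             // the box spans from the origin half a meter north

    let worldAnchor = AnchorEntity(world: [0, -0.5, 0])   // half a meter below the session origin
    worldAnchor.addChild(pointer)
    arView.scene.anchors.append(worldAnchor)
    return arView
}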
Settings
On the device, go to Settings > Privacy > Location Services and switch it on. After that, in Xcode, add the Privacy – Location When In Use Usage Description (NSLocationWhenInUseUsageDescription) and Privacy – Camera Usage Description (NSCameraUsageDescription) keys in the target's Info tab.

RealityKit – Difference between loading model using `.rcproject` vs `.usdz`

I'm building a simple app that adds a hat on top of the user's face. I've seen examples of two different approaches:
Adding the object as a scene to Experience.rcproject
Reading the object from the bundle directly as a .usdz file
Approach #1
struct ARViewContainer: UIViewRepresentable {

    func makeUIView(context: Context) -> ARView {
        arView = ARView(frame: .zero)
        arView.automaticallyConfigureSession = false
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {
        let arConfiguration = ARFaceTrackingConfiguration()
        uiView.session.run(arConfiguration,
                      options: [.resetTracking, .removeExistingAnchors])
        let arAnchor = try! Experience.loadHat()
        uiView.scene.anchors.append(arAnchor)
    }
}
Approach #2
struct ARViewContainer: UIViewRepresentable {

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)

        let modelEntity = try! ModelEntity.load(named: "hat.usdz")
        modelEntity.position = SIMD3(0, 0, -8)
        modelEntity.orientation = simd_quatf.init(angle: 0, axis: SIMD3(-90, 0, 0))
        modelEntity.scale = SIMD3(0.02, 0.02, 0.02)

        arView.session.run(ARFaceTrackingConfiguration())

        let anchor = AnchorEntity(.face)
        anchor.position.y += 0.25
        anchor.addChild(modelEntity)
        arView.scene.addAnchor(anchor)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {
        let arConfiguration = ARFaceTrackingConfiguration()
        uiView.session.run(arConfiguration,
                      options: [.resetTracking, .removeExistingAnchors])

        let fileName = "hat.usdz"
        let modelEntity = try! ModelEntity.loadModel(named: fileName)
        modelEntity.position = SIMD3(0, 0, -8)
        modelEntity.orientation = simd_quatf.init(angle: 0, axis: SIMD3(-90, 0, 0))
        modelEntity.scale = SIMD3(0.02, 0.02, 0.02)

        let arAnchor = AnchorEntity(.face)
        arAnchor.addChild(modelEntity)
        uiView.scene.anchors.append(arAnchor)
    }
}
What is the main difference between these approaches? Approach #1 works, but approach #2 doesn't even work for me: the object simply doesn't load into the scene. Could anyone explain a bit?
Thanks!
The difference between .rcproject and .usdz is quite obvious: a Reality Composer file already has an anchor for the model (and it's at the top of the hierarchy). When you prototype in Reality Composer, you can visually control the scale of your models. .usdz models very often have a huge scale, which you need to reduce by a factor of 100.
As a rule, a .usdz model doesn't have a floor, while an .rcproject scene has a floor by default, and this floor acts as a shadow catcher. Also, note that an .rcproject file is larger than a .usdz file.
let scene = try! Experience.loadHat()
arView.scene.anchors.append(scene)
print(scene)
When loading a .usdz model into a scene, you have to create an anchor programmatically. It also makes sense to use .reality files, as they are optimized for faster loading (see the sketch after the next snippet).
let model = try! ModelEntity.load(named: "hat.usdz")
let anchor = AnchorEntity(.face)
anchor.addChild(model)
arView.scene.anchors.append(anchor)
print(model)
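And since .reality files were mentioned, here is a minimal sketch of loading one by URL (the file name hat.reality is hypothetical):

if let url = Bundle.main.url(forResource: "hat", withExtension: "reality") {
    do {
        // .reality files are pre-optimized, so loading is faster than with .usdz
        let entity = try Entity.load(contentsOf: url)
        let anchor = AnchorEntity(.face)
        anchor.addChild(entity)
        arView.scene.anchors.append(anchor)
    } catch {
        print("Failed to load hat.reality:", error)
    }
}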
Also, put the face tracking config inside the makeUIView method:
import SwiftUI
import RealityKit
import ARKit

func makeUIView(context: Context) -> ARView {
    let arView = ARView(frame: .zero)

    let model = try! ModelEntity.load(named: "hat.usdz")

    arView.session.run(ARFaceTrackingConfiguration())

    let anchor = AnchorEntity(.face)
    anchor.position.y += 0.25
    anchor.addChild(model)
    arView.scene.addAnchor(anchor)
    return arView
}
Also, check if the following render options are disabled.
arView.renderOptions = [.disableFaceMesh, .disablePersonOcclusion]
And check the position of the pivot point in the hat model (see the sketch below).
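A quick way to do that is to inspect the model's visual bounds (a sketch; hat.usdz is the model from the question):

let model = try! ModelEntity.load(named: "hat.usdz")

// Bounding box of the model: reveals both its real size and its pivot offset.
let bounds = model.visualBounds(relativeTo: nil)
print("extents:", bounds.extents)    // extents in tens of meters? scale the model down
print("center:", bounds.center)      // far from zero? the pivot sits away from the geometry

// Compensate for an off-center pivot by shifting the model back onto the anchor.
model.position = -bounds.center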
For approach #2, try removing the position you set on the modelEntity. You set it to (0, 0, -8); those values are in meters, so the hat ends up 8 meters away from the face anchor. Remove the position and see if the model appears.

Play animation when image anchor detected

In RealityKit I have an image anchor. When the image anchor is detected, I would like to display an object and play the animation it has. I created the animation in Reality Composer; it's a simple "Ease Out" animation that comes built into Reality Composer.
Currently, my code looks like this:
struct ARViewContainer: UIViewRepresentable {

    func makeUIView(context: Context) -> ARView {
        let arView = CustomARView(frame: .zero)

        // generate image anchor
        let anchor = AnchorEntity(.image(group: "AR Resources", name: "imageAnchor"))

        // load 3D model from Experience.rcproject file
        let box = try! Experience.loadBox()

        // add 3D model to anchor
        anchor.addChild(box)

        // add anchor to scene
        arView.scene.addAnchor(anchor)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) { }
}
The solution is easy. Choose Reality Composer's image anchor (supply it with a corresponding .jpg or .png image). Then assign a custom Behavior to your model: use Scene Start as the trigger, then apply any desired Action.
Your code will be frighteningly simple:
struct ARViewContainer: UIViewRepresentable {

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        let scene = try! Experience.loadCylinder()
        arView.scene.addAnchor(scene)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) { }
}
The Action will be played automatically, immediately after the image anchor appears.
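If you'd prefer to start the animation from code instead of a Scene Start behavior, one possible approach (a sketch; the Cylinder scene is the one loaded above) is to subscribe to the scene's anchoring event and play whatever animations the entities carry:

import SwiftUI
import RealityKit
import Combine

struct ARViewContainer: UIViewRepresentable {

    // The Coordinator keeps the event subscription alive.
    class Coordinator {
        var subscription: Cancellable?
    }

    func makeCoordinator() -> Coordinator { Coordinator() }

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        let scene = try! Experience.loadCylinder()
        arView.scene.addAnchor(scene)

        // Fires when the scene's image anchor is found (or lost) by the session.
        context.coordinator.subscription = arView.scene.subscribe(
            to: SceneEvents.AnchoredStateChanged.self, on: scene) { event in
                guard event.isAnchored else { return }
                // Play every animation baked into the scene's entities.
                for entity in scene.children {
                    for animation in entity.availableAnimations {
                        entity.playAnimation(animation)
                    }
                }
        }
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) { }
}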

Multi-face detection in RealityKit

I have added content to the face anchor in Reality Composer. Later on, after loading the Experience that I created in Reality Composer, I create a face tracking session like this:
guard ARFaceTrackingConfiguration.isSupported else { return }
let configuration = ARFaceTrackingConfiguration()
configuration.maximumNumberOfTrackedFaces = ARFaceTrackingConfiguration.supportedNumberOfTrackedFaces
configuration.isLightEstimationEnabled = true
arView.session.delegate = self
arView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
It is not adding the content to all the faces it detects, and I know it is detecting more than one face, because the other faces occlude the content that is stuck to the first face. Is this a limitation of RealityKit, or am I missing something in the Composer? It's actually pretty hard to miss something, since it is so basic and simple.
Thanks.
You can't succeed in multi-face tracking in RealityKit if you use models with an embedded Face Anchor, i.e. models that came from Reality Composer's Face Tracking preset (you can use just one model with a .face anchor, not three). Or you may use such models, but you need to delete the embedded AnchorEntity(.face) anchors. There's a better approach, though: simply load models in .usdz format.
Let's see what Apple documentation says about embedded anchors:
You can manually load and anchor Reality Composer scenes using code, like you do with other ARKit content. When you anchor a scene in code, RealityKit ignores the scene's anchoring information.
Reality Composer supports 5 anchor types: Horizontal, Vertical, Image, Face & Object. It displays a different set of guides for each anchor type to help you place your content. You can change the anchor type later if you choose the wrong option or change your mind about how to anchor your scene.
There are two options:
In a new Reality Composer project, deselect the Create with default content checkbox at the bottom left of the action sheet you see at startup.
In RealityKit code, delete the existing Face Anchor and assign a new one. The latter option is not great, because you need to recreate the objects' positions from scratch, as sketched below:
boxAnchor.removeFromParent()
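Here is a minimal sketch of that second option (boxAnchor stands for the scene loaded from Reality Composer; positions and offsets would still need to be re-applied by hand):

// Keep references to the content, then detach the scene's embedded Face Anchor.
let models = Array(boxAnchor.children)
boxAnchor.removeFromParent()

// Re-parent the content under a fresh anchor created in code.
let newFaceAnchor = AnchorEntity(.face)
for model in models {
    newFaceAnchor.addChild(model)
}
arView.scene.anchors.append(newFaceAnchor)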
Nevertheless, I've achieved multi-face tracking using AnchorEntity() with the ARAnchor initializer inside the session(_:didUpdate:) instance method (much like SceneKit's renderer() instance methods).
Here's my code:
import UIKit
import ARKit
import RealityKit

extension ViewController: ARSessionDelegate {

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let faceAnchor = anchors.first as? ARFaceAnchor
        else { return }

        let anchor1 = AnchorEntity(anchor: faceAnchor)
        let anchor2 = AnchorEntity(anchor: faceAnchor)

        anchor1.addChild(model01)
        anchor2.addChild(model02)

        // NB: in a production app, guard against appending duplicates,
        // because this delegate method fires on every frame.
        arView.scene.anchors.append(anchor1)
        arView.scene.anchors.append(anchor2)
    }
}

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    let model01 = try! Entity.load(named: "angryFace")            // USDZ file
    let model02 = try! FacialExpression.loadSmilingFace()         // Reality Composer scene

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.session.delegate = self

        guard ARFaceTrackingConfiguration.isSupported else {
            fatalError("Alas, Face Tracking isn't supported")
        }
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        let config = ARFaceTrackingConfiguration()
        config.maximumNumberOfTrackedFaces = 2
        arView.session.run(config)
    }
}

Can't preview SwiftUI Canvas with RealityKit scene

I only created a new property of a RealityKit type, and now Xcode can't preview the SwiftUI canvas, although the project still builds successfully. I created the app in Xcode: I chose the Augmented Reality App template and set User Interface to SwiftUI. The project works normally.
import SwiftUI
import RealityKit

struct ContentView : View {
    var body: some View {
        return VStack {
            Text("123")
            ARViewContainer().edgesIgnoringSafeArea(.all)
        }
    }
}

struct ARViewContainer: UIViewRepresentable {

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)

        // Load the "Box" scene from the "Experience" Reality File
        let boxAnchor = try! Experience.loadBox()

        // Add the box anchor to the scene
        arView.scene.anchors.append(boxAnchor)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}

#if DEBUG
struct ContentView_Previews : PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}
#endif
I only created a new property of a RealityKit type in the makeUIView function:
let test: AnchoringComponent.Target.Alignment = .horizontal
The project can no longer preview the canvas, and this error appears:
'Alignment' is not a member type of 'AnchoringComponent.Target'
The project still compiles successfully. I'm confused about what I'm running into. Has anyone met the same issue?
You have to fix several issues:
You can't use an anchoring component in the iOS Simulator or in a SwiftUI Canvas Preview, because it can only be used for pinning virtual content to real-world surfaces. So, no Simulator for AR apps.
RealityKit anchors are useless in iOS Simulator mode and SwiftUI Canvas Preview mode.
// Use it only for compiled AR app, not simulated...
let _: AnchoringComponent.Target.Alignment = .horizontal
Not only are anchors useless in iOS Simulator mode and SwiftUI Preview mode, but so are other session-oriented properties (including ARView.session), like the ones shown in the picture (omitted here). A sketch of how to guard them follows below.
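If you want the same makeUIView to keep building for Canvas Preview, one option (a sketch) is to guard the session-oriented calls with a compile-time check:

import SwiftUI
import RealityKit
import ARKit

func makeUIView(context: Context) -> ARView {
    let arView = ARView(frame: .zero)

    #if !targetEnvironment(simulator)
    // Session-oriented calls only make sense on a real device;
    // there's no camera feed in the Simulator or in Canvas Preview.
    let config = ARWorldTrackingConfiguration()
    arView.session.run(config)
    #endif

    let boxAnchor = try! Experience.loadBox()
    arView.scene.anchors.append(boxAnchor)
    return arView
}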
Change ARView's .backgroundColor to any other desirable one. The default color sometimes doesn't allow you to see a RealityKit scene; it looks like a bug.
func makeUIView(context: Context) -> ARView {
    let arView = ARView(frame: .zero)

    let boxAnchor = try! Experience.loadBox()
    boxAnchor.steelBox?.scale = SIMD3(5, 5, 5)
    boxAnchor.steelBox?.orientation = simd_quatf(angle: Float.pi/4, axis: SIMD3(1, 1, 0))

    arView.backgroundColor = .orange
    arView.scene.anchors.append(boxAnchor)
    return arView
}
And here's what you can see in the SwiftUI preview area now (screenshot omitted).
And, of course, you have to grant camera permission before using an AR app; it doesn't matter whether you're using Storyboard or SwiftUI.
You need to add the Camera Usage Description property and the arkit string to the Info.plist file:
The XML version looks like this:
<!-- Info.plist -->
<key>NSCameraUsageDescription</key>
<string>Camera access for AR experience</string>
<key>UIRequiredDeviceCapabilities</key>
<array>
    <string>armv7</string>
    <string>arkit</string>
</array>
After fixing these issues, the app compiles and works as expected (without any errors).