Play animation when image anchor is detected - Swift

In RealityKit I have an image anchor. When the image anchor is detected, I would like to display an object and play the animation it has. I created the animation in Reality Composer; it's a simple "Ease Out" animation that comes built into Reality Composer.
Currently, my code looks like this:
struct ARViewContainer: UIViewRepresentable {

    func makeUIView(context: Context) -> ARView {
        let arView = CustomARView(frame: .zero)
        // generate image anchor
        let anchor = AnchorEntity(.image(group: "AR Resources", name: "imageAnchor"))
        // load 3D model from Experience.rcproject file
        let box = try! Experience.loadBox()
        // add 3D model to anchor
        anchor.addChild(box)
        // add anchor to scene
        arView.scene.addAnchor(anchor)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) { }
}

The solution is easy. Choose Reality Composer's image anchor (supply it with a corresponding .jpg or .png image). Then assign a Custom Behavior to your model: as a trigger use Scene Start, then apply any desired Action.
Your code will be frighteningly simple:
struct ARViewContainer: UIViewRepresentable {

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        let scene = try! Experience.loadCylinder()
        arView.scene.addAnchor(scene)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) { }
}
The Action will play automatically (immediately after the image anchor is detected).
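If you'd rather start the action from code than at scene start, Reality Composer's Notification trigger is an alternative: the generated Experience class exposes each notification identifier as a property you can post. A minimal sketch, assuming a Behavior whose trigger is a Notification with the identifier playAnimation (the scene and identifier names here are hypothetical):

import SwiftUI
import RealityKit

struct ARViewContainer: UIViewRepresentable {

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        let scene = try! Experience.loadCylinder()
        arView.scene.addAnchor(scene)
        // Fire the Reality Composer action manually, e.g. two seconds after loading.
        DispatchQueue.main.asyncAfter(deadline: .now() + 2.0) {
            scene.notifications.playAnimation.post()
        }
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) { }
}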


How can I import a model with animations from Blender into RealityKit?

I made a simple cube model with animation in Blender and exported it as a .fbx file with the Bake Animation option turned on.
With Reality Converter I converted the .fbx file to .usdz, which I imported into my Xcode project. But I am not getting the animation back in my project (see the code below).
import SwiftUI
import RealityKit

struct ContentView : View {
    var body: some View {
        ARViewContainer().edgesIgnoringSafeArea(.all)
    }
}

struct ARViewContainer: UIViewRepresentable {

    let arView = ARView(frame: .zero)

    func makeUIView(context: Context) -> ARView {
        let anchorEntity = AnchorEntity()
        // get the local .usdz file that is in the Xcode project
        do {
            let cubeModel = try ModelEntity.load(named: "cube")
            print(cubeModel)
            print(cubeModel.availableAnimations) // here I get an empty array
            anchorEntity.addChild(cubeModel)
            arView.scene.addAnchor(anchorEntity)
        } catch {
            print(error)
        }
        return arView // the cube is visible on my iPad
    }

    func updateUIView(_ uiView: ARView, context: Context) { }
}
From what I understand, it has to be possible to import the animations. Am I missing something?
FBX model with character animation
Make sure you exported the model from Blender with animation enabled. To play the animation in RealityKit, use an AnimationPlaybackController object. To test a character animation in RealityKit, use this fbx model.
To convert the .fbx model to .usdz, use Reality Converter.
import SwiftUI
import RealityKit

struct ContentView : View {
    var body: some View {
        ARViewContainer()
            .ignoresSafeArea()
    }
}

struct ARViewContainer: UIViewRepresentable {

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        let model = try! Entity.loadModel(named: "walking.usdz")
        let anchor = AnchorEntity(world: [0, -1, -2])
        model.setParent(anchor)
        arView.scene.anchors.append(anchor)

        let animation: AnimationResource = model.availableAnimations[0]
        let controller = model.playAnimation(animation.repeat())
        controller.blendFactor = 0.5
        controller.speed = 1.7
        return arView
    }

    func updateUIView(_ view: ARView, context: Context) { }
}
I found the answer: in Blender I did the export in USD format with animations on, then converted the file in Reality Converter to .usdz format. Now the model and the animations are in my Xcode project.

Compass Heading information in RealityKit

I'm new to SwiftUI, and I've learned some basics of RealityKit and ARKit for my project. I just want to display an arrow that always points north, or at least get heading information displayed as text, when I open the camera (AR experience).
Waiting for someone to solve this fundamental problem.
Thanks in advance!
Use the following code to create an AR experience that depends on the device's geographic heading.
Reality Composer
Code
import SwiftUI
import RealityKit
import ARKit

struct ARViewContainer: UIViewRepresentable {

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.cameraMode = .ar
        arView.automaticallyConfigureSession = false

        let config = ARWorldTrackingConfiguration()
        config.worldAlignment = .gravityAndHeading // case 1: Y up, X east, -Z points to true north
        arView.session.run(config)

        let arrowScene = try! Experience.loadNorth()
        arView.scene.anchors.append(arrowScene)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) { }
}

struct ContentView : View {
    var body: some View {
        ZStack {
            ARViewContainer().ignoresSafeArea()
        }
    }
}
Settings
On the device, go to Settings > Privacy > Location Services and switch it on. After that, in Xcode, add the NSLocationWhenInUseUsageDescription (Privacy - Location When In Use Usage Description) and NSCameraUsageDescription (Privacy - Camera Usage Description) keys in the target's Info tab.
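The question also asks for the heading as text. RealityKit itself doesn't expose compass data, but Core Location does, and its value can be overlaid on the ARView in SwiftUI. A minimal sketch, assuming heading updates are all you need (HeadingProvider is a hypothetical name; the CLLocationManager calls are standard Core Location API):

import SwiftUI
import CoreLocation

// Publishes the device's compass heading so SwiftUI can display it as text.
final class HeadingProvider: NSObject, ObservableObject, CLLocationManagerDelegate {
    @Published var headingInDegrees: Double = 0
    private let manager = CLLocationManager()

    override init() {
        super.init()
        manager.delegate = self
        manager.requestWhenInUseAuthorization()
        manager.startUpdatingHeading()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateHeading newHeading: CLHeading) {
        // trueHeading is negative when unavailable; fall back to the magnetic heading.
        headingInDegrees = newHeading.trueHeading >= 0
            ? newHeading.trueHeading
            : newHeading.magneticHeading
    }
}

In ContentView you could then hold @StateObject var heading = HeadingProvider() and add Text(String(format: "%.0f°", heading.headingInDegrees)) to the ZStack.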

RealityKit – Difference between loading model using `.rcproject` vs `.usdz`

I'm building a simple app that adds a hat on top of the user's face. I've seen examples of 2 different approaches:
1. Adding the object as a scene to Experience.rcproject
2. Reading the object from the bundle directly as a .usdz file
Approach #1
struct ARViewContainer: UIViewRepresentable {

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.automaticallyConfigureSession = false
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {
        let arConfiguration = ARFaceTrackingConfiguration()
        uiView.session.run(arConfiguration,
                           options: [.resetTracking, .removeExistingAnchors])
        let arAnchor = try! Experience.loadHat()
        uiView.scene.anchors.append(arAnchor)
    }
}
Approach #2
struct ARViewContainer: UIViewRepresentable {

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        let modelEntity = try! ModelEntity.load(named: "hat.usdz")
        modelEntity.position = SIMD3(0, 0, -8)
        modelEntity.orientation = simd_quatf(angle: 0, axis: SIMD3(-90, 0, 0))
        modelEntity.scale = SIMD3(0.02, 0.02, 0.02)
        arView.session.run(ARFaceTrackingConfiguration())

        let anchor = AnchorEntity(.face)
        anchor.position.y += 0.25
        anchor.addChild(modelEntity)
        arView.scene.addAnchor(anchor)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {
        let arConfiguration = ARFaceTrackingConfiguration()
        uiView.session.run(arConfiguration,
                           options: [.resetTracking, .removeExistingAnchors])
        let fileName = "hat.usdz"
        let modelEntity = try! ModelEntity.loadModel(named: fileName)
        modelEntity.position = SIMD3(0, 0, -8)
        modelEntity.orientation = simd_quatf(angle: 0, axis: SIMD3(-90, 0, 0))
        modelEntity.scale = SIMD3(0.02, 0.02, 0.02)

        let arAnchor = AnchorEntity(.face)
        arAnchor.addChild(modelEntity)
        uiView.scene.anchors.append(arAnchor)
    }
}
What is the main difference between these approaches? Approach #1 works, but approach #2 doesn't work for me at all: the object simply doesn't load into the scene. Could anyone explain a bit?
Thanks!
The difference between .rcproject and .usdz is quite obvious: the Reality Composer file already has an anchor for the model (and it's at the top of the scene hierarchy). When you prototype in Reality Composer, you can visually control the scale of your models; .usdz models very often have a huge scale, which you may need to reduce by a factor of 100.
As a rule, a .usdz model doesn't have a floor, while an .rcproject scene has one by default, and this floor acts as a shadow catcher. Also note that the .rcproject file is larger than the .usdz file.
let scene = try! Experience.loadHat()
arView.scene.anchors.append(scene)
print(scene)
When loading a .usdz model into a scene, you have to create an anchor for it programmatically. It also makes sense to use .reality files, since they are optimized for faster loading.
let model = try! ModelEntity.load(named: "hat.usdz")
let anchor = AnchorEntity(.face)
anchor.addChild(model)
arView.scene.anchors.append(anchor)
print(model)
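Two quick follow-ups to the points above, as hedged sketches (the scale factor and the file name Hat.reality are example values, not rules):

// Scale an oversized .usdz model down by a factor of 100.
model.scale /= 100

// A .reality file can also be loaded directly; this grabs its first anchor.
let realityAnchor = try! Entity.loadAnchor(named: "Hat")
arView.scene.anchors.append(realityAnchor)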
Also, run the face tracking configuration inside the makeUIView method:
import SwiftUI
import RealityKit
import ARKit

func makeUIView(context: Context) -> ARView {
    let arView = ARView(frame: .zero)
    let model = try! ModelEntity.load(named: "hat.usdz")
    arView.session.run(ARFaceTrackingConfiguration())

    let anchor = AnchorEntity(.face)
    anchor.position.y += 0.25
    anchor.addChild(model)
    arView.scene.addAnchor(anchor)
    return arView
}
Also, consider setting the following render options, which switch off the face mesh and person occlusion:
arView.renderOptions = [.disableFaceMesh, .disablePersonOcclusion]
And check the position of the pivot point in the hat model.
For approach #2, try removing the position you set on the modelEntity. Those position values are in meters, so the hat may end up far away from the face anchor; remove the offset and see if it appears.

RealityKit – Image recognition and working with many scenes

I've created an app using the RealityKit template file. Inside Reality Composer there are multiple scenes, and all the scenes use image recognition that activates some animations.
Inside Xcode I have to load all the scenes as anchors and append those anchors to the arView.scene.anchors array. The issue is an obvious one: as I present the physical 2D images one after another, I get multiple anchors piled on top of each other, which is not desirable. I'm aware of calling arView.scene.anchors.removeAll() prior to loading the new anchor, but my issue is this:
How do I check when a certain image has appeared, so that I can remove the existing anchor and load the correct one? I've tried to look for something like ARKit's didUpdate, but I can't see anything similar in RealityKit.
Many thanks
Foreword
RealityKit's AnchorEntity(.image), coming from Reality Composer, matches ARKit's ARImageTrackingConfiguration. When an iOS device recognizes a reference image, it creates an image anchor (conforming to the ARTrackable protocol) that tethers a corresponding 3D model. And, as you understand, you must show just one reference image at a time; in your particular case, the AR app can't operate normally when you give it two or more images simultaneously.
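For context, this is roughly the manual ARKit setup that AnchorEntity(.image) corresponds to; a sketch assuming a reference image group named "AR Resources" in the asset catalog:

let config = ARImageTrackingConfiguration()
if let refImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                    bundle: nil) {
    config.trackingImages = refImages
    config.maximumNumberOfTrackedImages = 1   // track one image at a time
}
arView.session.run(config)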
Here's a code snippet showing what the if-condition logic might look like:
import SwiftUI
import RealityKit

struct ContentView : View {
    var body: some View {
        return ARViewContainer().edgesIgnoringSafeArea(.all)
    }
}

struct ARViewContainer: UIViewRepresentable {

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        let id02Scene = try! Experience.loadID2()
        print(id02Scene) // prints scene hierarchy

        let anchor = id02Scene.children[0]
        print(anchor.components[AnchoringComponent.self] as Any)

        if anchor.components[AnchoringComponent.self] == AnchoringComponent(
            .image(group: "Experience.reality",
                   name: "assets/MainID_4b51de84.jpeg")) {
            arView.scene.anchors.removeAll()
            print("LOAD SCENE")
            arView.scene.anchors.append(id02Scene)
        }
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) { }
}
ID2 scene hierarchy printed in console:
P.S.
You should implement a SwiftUI Coordinator class (read about it here), and inside the Coordinator use ARSessionDelegate's session(_:didUpdate:) instance method to update the anchors' properties at 60 fps.
Also, you may use the following logic: if the anchor of scene 1 or the anchor of scene 3 is active, just delete all the anchors from the collection and load scene 2.
var arView = ARView(frame: .zero)

let id01Scene = try! Experience.loadID1()
let id02Scene = try! Experience.loadID2()
let id03Scene = try! Experience.loadID3()

func makeUIView(context: Context) -> ARView {
    arView.session.delegate = context.coordinator
    arView.scene.anchors.append(id01Scene)
    arView.scene.anchors.append(id02Scene)
    arView.scene.anchors.append(id03Scene)
    return arView
}

...

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    if arView.scene.anchors[0].isActive || arView.scene.anchors[2].isActive {
        arView.scene.anchors.removeAll()
        arView.scene.anchors.append(id02Scene)
        print("Load Scene Two")
    }
}
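For completeness, a minimal sketch of the Coordinator this snippet assumes, nested inside ARViewContainer (the wiring below is illustrative, not the author's exact code):

class Coordinator: NSObject, ARSessionDelegate {
    var parent: ARViewContainer
    init(_ parent: ARViewContainer) { self.parent = parent }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let anchors = parent.arView.scene.anchors
        // Guard against the collection having been emptied by a previous pass.
        if anchors.count > 2, anchors[0].isActive || anchors[2].isActive {
            parent.arView.scene.anchors.removeAll()
            parent.arView.scene.anchors.append(parent.id02Scene)
            print("Load Scene Two")
        }
    }
}

func makeCoordinator() -> Coordinator {
    Coordinator(self)
}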

How to set a view to NOT update when a state changes

I am trying to integrate an ARKit view that processes frames with machine learning and shows the results on the screen. I have gotten the ARKit view to work with UIViewRepresentable, and everything works until a state changes. How do I make the AR view static so that it doesn't update when a state changes? I only want to update the label that shows the result.
This is the error that I receive when the state changes: [CAMetalLayer nextDrawable] returning nil because allocation failed.
This presumably happens because the arView is being constantly reloaded as it processes the frames? I'm not too sure, though.
This is the code for the view:
struct ARControlView: View {
    @EnvironmentObject var resultHandler: ResultHandler

    var body: some View {
        let arView = ARViewContainer() // This is the UIViewRepresentable containing the ARKit view.
        return ZStack {
            arView
            VStack {
                Text(self.resultHandler.gesture.rawValue)
            }
            .onAppear {
                arView.restartARSession()
            }
            .onDisappear {
                arView.pauseArSession()
            }
        }
    }
}
This is for the ARViewContainer:
struct ARViewContainer: UIViewRepresentable {
    var arView = ARView(frame: .zero)
    @EnvironmentObject var resultHandler: ResultHandler

    func makeUIView(context: Context) -> ARView {
        arView.session.delegate = context.coordinator
        arView.session.run(AROrientationTrackingConfiguration())
        return arView
    }

    func pauseArSession() {
        arView.session.pause()
    }

    func restartARSession() {
        arView.session.run(AROrientationTrackingConfiguration())
    }

    func updateUIView(_ uiView: ARView, context: Context) { }

    class Coordinator: NSObject, ARSessionDelegate {
        var parent: ARViewContainer
        init(_ parent: ARViewContainer) { self.parent = parent }
        // Process frames here...
    }

    func makeCoordinator() -> ARViewContainer.Coordinator {
        Coordinator(self)
    }
}
Every time there is a state change in resultHandler, body is re-evaluated in ARControlView. This causes a new ARViewContainer to be instantiated, because let arView = ARViewContainer() sits inside the body variable. If you move let arView = ARViewContainer() outside of body, arView won't be re-instantiated every time there is a state change.
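A minimal sketch of that fix applied to the ARControlView from the question (same names as above):

struct ARControlView: View {
    @EnvironmentObject var resultHandler: ResultHandler
    // Created once per view identity, not on every body evaluation.
    let arView = ARViewContainer()

    var body: some View {
        ZStack {
            arView
            VStack {
                Text(self.resultHandler.gesture.rawValue)
            }
            .onAppear { arView.restartARSession() }
            .onDisappear { arView.pauseArSession() }
        }
    }
}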
I found out that the issue actually wasn't what it seemed. The error I was receiving was due to the frame being set to .zero, which for some reason made nextDrawable return nil. I also set the view to not automatically configure the session, because automatic configuration created a weird issue that caused the image to be stretched.
This was the line that I changed:
From:
var arView = ARView(frame: .zero)
To:
var arView = ARView(frame: .init(x: 1, y: 1, width: 1, height: 1), cameraMode: .ar, automaticallyConfigureSession: false)
Thanks for everyone else's help, I appreciate it!