Reality Composer - Custom Collision Between Entities of Different Scenes - swift

I'm pretty new to RealityKit and ARKit. I have two scenes in Reality Composer: one with a book image anchor and one with a horizontal plane anchor. The first scene, with the image anchor, has a cube attached to the top of it, and the second scene, built on a horizontal plane, has two rings. All objects have fixed collision. I'd like to run an animation when the rings and the cube touch. I couldn't find a way to do this in Reality Composer, so I made two attempts in code, to no avail (I'm printing "collision started" just to test the collision handling before adding the animation). Would appreciate help on this.
Attempt #1:
func makeUIView(context: Context) -> ARView {
    let arView = ARView(frame: .zero)

    // Add the anchors from the Reality Composer scenes
    let componentBreakdownAnchor = try! CC.loadComponentBreakdown()
    arView.scene.anchors.append(componentBreakdownAnchor)
    let bookAnchor = try! CC.loadBook()
    arView.scene.anchors.append(bookAnchor)
    let ringsAnchor = try! CC.loadRings()
    arView.scene.anchors.append(ringsAnchor)

    let _ = ringsAnchor.scene?.subscribe(
        to: CollisionEvents.Began.self,
        on: bookAnchor
    ) { event in
        print("collision started")
    }
    return arView
}
Attempt #2
func makeUIView(context: Context) -> ARView {
    let arView = ARView(frame: .zero)

    // Add the anchors from the Reality Composer scenes
    let componentBreakdownAnchor = try! CC.loadComponentBreakdown()
    arView.scene.anchors.append(componentBreakdownAnchor)
    let bookAnchor = try! CC.loadBook()
    arView.scene.anchors.append(bookAnchor)
    let ringsAnchor = try! CC.loadRings()
    arView.scene.anchors.append(ringsAnchor)

    arView.scene.subscribe(
        to: CollisionEvents.Began.self,
        on: bookAnchor
    ) { event in
        print("collision started")
    }
    return arView
}

RealityKit scene
If you want to use collisions for models built from scratch in RealityKit, you first need to adopt the HasCollision protocol.
Let's see what the developer documentation says about it:
The HasCollision protocol is an interface used for ray casting and collision detection.
Here's how your implementation might look if you generate models in RealityKit:
import Cocoa
import RealityKit

class CustomCollision: Entity, HasModel, HasCollision {

    let color: NSColor = .gray
    let collider: ShapeResource = .generateSphere(radius: 0.5)
    let sphere: MeshResource = .generateSphere(radius: 0.5)

    required init() {
        super.init()

        let material = SimpleMaterial(color: color, isMetallic: true)

        self.components[ModelComponent.self] = ModelComponent(mesh: sphere,
                                                              materials: [material])

        self.components[CollisionComponent.self] = CollisionComponent(shapes: [collider],
                                                                       mode: .trigger,
                                                                       filter: .default)
    }
}
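For completeness, here's a minimal usage sketch of the class above (it also needs import Combine for Cancellable; the anchor position and the second sphere are illustrative assumptions, and the token returned by subscribe(to:on:) must be retained, for example in a property, or the subscription is deallocated):
var collisionSubscription: Cancellable?           // keep the subscription token alive

func addSpheres(to arView: ARView) {
    let sphereA = CustomCollision()
    let sphereB = CustomCollision()
    sphereB.position.x = 1.2                      // start the spheres apart

    let anchor = AnchorEntity(world: [0, 0, -2])  // 2 m in front of the camera
    anchor.addChild(sphereA)
    anchor.addChild(sphereB)
    arView.scene.anchors.append(anchor)

    // Fires when sphereA begins colliding with another entity
    collisionSubscription = arView.scene.subscribe(to: CollisionEvents.Began.self,
                                                   on: sphereA) { event in
        print("collision started")
    }
}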
Reality Composer scene
And here's how your code might look if you use models from Reality Composer:
import UIKit
import RealityKit
import Combine
class ViewController: UIViewController {

    @IBOutlet var arView: ARView!
    var subscriptions: [Cancellable] = []

    override func viewDidLoad() {
        super.viewDidLoad()

        let groundSphere = try! Experience.loadStaticSphere()
        let upperSphere = try! Experience.loadDynamicSphere()

        let gsEntity = groundSphere.children[0].children[0].children[0]
        let usEntity = upperSphere.children[0].children[0].children[0]

        // CollisionComponent exists only if you turn on the
        // "Participates" property in the Reality Composer app
        print(gsEntity)

        var gsComp: CollisionComponent = gsEntity.components[CollisionComponent.self]!
        var usComp: CollisionComponent = usEntity.components[CollisionComponent.self]!

        gsComp.shapes = [.generateBox(size: [0.05, 0.07, 0.05])]
        usComp.shapes = [.generateBox(size: [0.05, 0.05, 0.05])]

        gsEntity.components.set(gsComp)
        usEntity.components.set(usComp)

        let subscription = self.arView.scene.subscribe(to: CollisionEvents.Began.self,
                                                       on: gsEntity) { event in
            print("Balls' collision occurred!")
        }
        self.subscriptions.append(subscription)

        arView.scene.anchors.append(upperSphere)
        arView.scene.anchors.append(groundSphere)
    }
}
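Applied to the book-and-rings setup from the question, the same pattern might look roughly like this; the child indices are assumptions, so print the loaded anchors and use whatever path your hierarchy actually shows:
// A sketch, assuming the cube and ring model entities sit a few levels
// deep inside the anchors loaded from the CC Reality Composer project.
let bookAnchor = try! CC.loadBook()
let ringsAnchor = try! CC.loadRings()

print(bookAnchor)     // inspect the hierarchies to locate the model entities
print(ringsAnchor)

let cubeEntity = bookAnchor.children[0].children[0].children[0]   // assumed path
let ringEntity = ringsAnchor.children[0].children[0].children[0]  // assumed path

// Retain the token (e.g. in the subscriptions array above),
// otherwise the subscription is deallocated immediately.
subscriptions.append(arView.scene.subscribe(to: CollisionEvents.Began.self,
                                            on: cubeEntity) { event in
    print("collision started")
})

arView.scene.anchors.append(bookAnchor)
arView.scene.anchors.append(ringsAnchor)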

Related

Can't play USDZ animation with RealityKit

I've followed the answer in this SO question regarding playing the animation of a USDZ file with the following code:
@IBOutlet var arView: ARView!
override func viewDidLoad() {
super.viewDidLoad()
// Do any additional setup after loading the view.
arView = ARView(frame: self.view.frame)
self.view.addSubview(arView)
arView.cameraMode = .nonAR
let newAnchor = AnchorEntity(world: .zero)
let newEnt = try! Entity.load(named: "Vintage Toy Robot")
newAnchor.addChild(newEnt)
arView.scene.addAnchor(newAnchor)
print(newEnt.availableAnimations)
for anim in newEnt.availableAnimations {
newEnt.playAnimation(anim.repeat(duration: .infinity),
transitionDuration: 1.25, startsPaused: false)
}
}
However, the animation does not play, the USDZ file is just static.
Download the animated robot model from the AR Quick Look page. Your code is working. Do not use the non-animated robot from the Reality Composer library.
let entity = try! Entity.load(named: "toy_robot_vintage.usdz")
And delete these two lines of code (you've got two ARViews, and the second one is blocking the animation):
arView = ARView(frame: self.view.frame)
self.view.addSubview(arView)
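With those two lines removed, the corrected viewDidLoad looks roughly like this (a sketch, assuming the arView outlet is wired up in the storyboard and the animated robot file is in the bundle):
@IBOutlet var arView: ARView!

override func viewDidLoad() {
    super.viewDidLoad()
    arView.cameraMode = .nonAR

    let newAnchor = AnchorEntity(world: .zero)
    let newEnt = try! Entity.load(named: "toy_robot_vintage.usdz")   // robot from AR Quick Look
    newAnchor.addChild(newEnt)
    arView.scene.addAnchor(newAnchor)

    for anim in newEnt.availableAnimations {
        newEnt.playAnimation(anim.repeat(duration: .infinity),
                             transitionDuration: 1.25,
                             startsPaused: false)
    }
}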

RealityKit – Difference between loading model using `.rcproject` vs `.usdz`

I'm building a simple app that adds a hat on top of the user's face. I've seen examples of 2 different approaches:
Adding the object as a scene to Experience.rcproject
Reading the object from the bundle directly as a .usdz file
Approach #1
struct ARViewContainer: UIViewRepresentable {
func makeUIView(context: Context) -> ARView {
arView = ARView(frame: .zero)
arView.automaticallyConfigureSession = false
return arView
}
func updateUIView(_ uiView: ARView, context: Context) {
let arConfiguration = ARFaceTrackingConfiguration()
uiView.session.run(arConfiguration,
options:[.resetTracking, .removeExistingAnchors])
let arAnchor = try! Experience.loadHat()
uiView.scene.anchors.append(arAnchor)
}
}
Approach #2
struct ARViewContainer: UIViewRepresentable {
func makeUIView(context: Context) -> ARView {
let arView = ARView(frame: .zero)
let modelEntity = try! ModelEntity.load(named: "hat.usdz")
modelEntity.position = SIMD3(0, 0, -8)
modelEntity.orientation = simd_quatf.init(angle: 0, axis: SIMD3(-90, 0, 0))
modelEntity.scale = SIMD3(0.02, 0.02, 0.02)
arView.session.run(ARFaceTrackingConfiguration())
let anchor = AnchorEntity(.face)
anchor.position.y += 0.25
anchor.addChild(modelEntity)
arView.scene.addAnchor(anchor)
return arView
}
func updateUIView(_ uiView: ARView, context: Context) {
let arConfiguration = ARFaceTrackingConfiguration()
uiView.session.run(arConfiguration,
options:[.resetTracking, .removeExistingAnchors])
let fileName = "hat.usdz"
let modelEntity = try! ModelEntity.loadModel(named: fileName)
modelEntity.position = SIMD3(0, 0, -8)
modelEntity.orientation = simd_quatf.init(angle: 0, axis: SIMD3(-90, 0, 0))
modelEntity.scale = SIMD3(0.02, 0.02, 0.02)
let arAnchor = AnchorEntity(.face)
arAnchor.addChild(modelEntity)
uiView.scene.anchors.append(arAnchor)
}
}
What is the main difference between these approaches? Approach #1 works, but the issue is that approach #2 doesn't even work for me - the object simply doesn't load into the scene. Could anyone explain a bit?
Thanks!
The difference between .rcproject and .usdz is quite obvious: the Reality Composer file already has an anchor for the model (and it sits at the top of the hierarchy). When you prototype in Reality Composer, you can visually control the scale of your models. .usdz models very often have a huge scale, which you need to reduce by a factor of 100.
As a rule, a .usdz model doesn't have a floor, while an .rcproject scene has a floor by default, and this floor acts as a shadow catcher. Also, note that an .rcproject file is larger than a .usdz file.
let scene = try! Experience.loadHat()
arView.scene.anchors.append(scene)
print(scene)
When loading a .usdz into a scene, you have to create an anchor programmatically. It also makes sense to use .reality files, as they are optimized for faster loading.
let model = try! ModelEntity.load(named: "hat.usdz")
let anchor = AnchorEntity(.face)
anchor.addChild(model)
arView.scene.anchors.append(anchor)
print(model)
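If you go the .reality route mentioned above, loading is similar; a minimal sketch, assuming a file named Hat.reality is bundled with the app (Entity.loadAnchor(named:) returns an AnchorEntity, so no extra anchor needs to be created):
let realityAnchor = try! Entity.loadAnchor(named: "Hat")   // loads Hat.reality from the bundle
arView.scene.anchors.append(realityAnchor)
print(realityAnchor)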
Also, put a face tracking config inside makeUIView method:
import SwiftUI
import RealityKit
import ARKit
func makeUIView(context: Context) -> ARView {
let arView = ARView(frame: .zero)
let model = try! ModelEntity.load(named: "hat.usdz")
arView.session.run(ARFaceTrackingConfiguration())
let anchor = AnchorEntity(.face)
anchor.position.y += 0.25
anchor.addChild(model)
arView.scene.addAnchor(anchor)
return arView
}
Also, make sure the following render options are disabled:
arView.renderOptions = [.disableFaceMesh, .disablePersonOcclusion]
And check the position of the pivot point in the hat model.
For approach #2, try removing the position for the modelEntity. You provided the position as 0, -4.9 and 11.8, and those values are in meters. So try removing it and see if the model appears.

Is there a way to programmatically change the material of an Entity that was created in Reality Composer?

I want to change the color of an entity programmatically after it was created in Reality Composer.
As Reality Composer does not create a ModelEntity (it creates a generic Entity), it does not appear that I have access to change its color. When I typecast it to a ModelEntity, I have access to the ModelComponent's materials. However, when I try to add that to the scene I get a Thread 1: signal SIGABRT error: Could not cast value of type 'RealityKit.Entity' (0x1fcebe6e8) to 'RealityKit.ModelEntity' (0x1fceba970). Sample code below.
import UIKit
import RealityKit
class ViewController: UIViewController {
@IBOutlet var arView: ARView!
override func viewDidLoad() {
super.viewDidLoad()
// Load the "Box" scene from the "Experience" Reality File
let boxAnchor = try! Experience.loadBox()
// Typecast Steelbox as ModelEntity to change its color
let boxModelEntity = boxAnchor.steelBox as! ModelEntity
// Remove materials and create new material
boxModelEntity.model?.materials.removeAll()
let blueMaterial = SimpleMaterial(color: .blue, isMetallic: false)
boxModelEntity.model?.materials.append(blueMaterial)
// Add the box anchor to the scene
arView.scene.anchors.append(boxAnchor)
}
}
The model entity is stored deeper in the Reality Composer scene's hierarchy, and, as you said, it's an Entity, not a ModelEntity. So use downcasting to access its mesh and materials:
import UIKit
import RealityKit
class ViewController: UIViewController {
@IBOutlet var arView: ARView!
override func viewDidLoad() {
super.viewDidLoad()
let boxScene = try! Experience.loadBox()
print(boxScene)
let modelEntity = boxScene.steelBox?.children[0] as! ModelEntity
let material = SimpleMaterial(color: .green, isMetallic: false)
modelEntity.model?.materials = [material]
let anchor = AnchorEntity()
anchor.scale = [5,5,5]
modelEntity.setParent(anchor)
arView.scene.anchors.append(anchor)
}
}
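If you'd rather not rely on a hard-coded children[0] index, a hedged alternative is to look the entity up by name; the name "Steel Box" below is an assumption, so use whatever print(boxScene) reports for your scene:
// A sketch: find the model entity by name instead of by index.
if let modelEntity = boxScene.findEntity(named: "Steel Box") as? ModelEntity {
    modelEntity.model?.materials = [SimpleMaterial(color: .green, isMetallic: false)]
}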

Play USDZ animation in RealityKit

I spent two days trying to understand how to play an animation properly in my RealityKit project.
I followed many tips from other Stack Overflow topics, but without success. I know that with RealityKit 2 we can only play the first animation from the USDZ file. I'm trying to play the first animation of "toy_robot_vintage.usdz", delivered by Apple directly in Reality Composer.
Here is my complete code:
func loadModel(named: String, result: ARRaycastResult) {
var usdzToLoad: String = ""
switch named {
case "ROBOT":
usdzToLoad = "toy_robot_vintage.usdz"
default:
break;
}
DispatchQueue.main.async {
let modelToLoad = try! ModelEntity.loadModel(named: usdzToLoad)
switch named {
case "ROBOT":
modelToLoad.name = "ROBOT"
default:
break;
}
let anchor = AnchorEntity(plane: .horizontal, classification: .any, minimumBounds: [0.1, 0.1])
anchor.position.y = 0.01
anchor.addChild(modelToLoad)
// Create a "Physics" model of the toy in order to add physics mode
guard let modelEntity = anchor.children.first as? (Entity & HasPhysics)
else { return }
self.arView.installGestures([.rotation], for: modelEntity)
modelEntity.generateCollisionShapes(recursive: true)
modelEntity.physicsBody = PhysicsBodyComponent(shapes: [.generateBox(size: .one)],
mass: 1.0,
material: .default,
mode: .kinematic)
self.currentEntity = modelEntity
self.anchorsEntities.append(anchor)
self.arView.scene.addAnchor(anchor)
// self.currentEntity!.availableAnimations.forEach { self.currentEntity!.playAnimation($0.repeat()) }
let robotAnimationResource = self.currentEntity?.availableAnimations.first
self.currentEntity!.playAnimation(robotAnimationResource!.repeat(duration: .infinity),
transitionDuration: 1.25,
startsPaused: false)
}
robotAnimationResource is always nil, which of course results in a fatal error when I try to play the animation.
Any idea? Thanks in advance for your help and support.
Change ModelEntity.loadModel to ModelEntity.load and it should now have the animations.
It's very weird and I don't know why, but that has worked for me in the past. (The likely reason: loadModel(named:) flattens the file into a single ModelEntity, which drops the animations, whereas load(named:) preserves the full entity hierarchy.)
Also, HasPhysics inherits from Entity, so to save yourself digging through the anchor's children etc., you should be able to replace that guard let modelEntity... line with this:
guard let modelEntity = modelToLoad as? HasPhysics else { return }
EDIT:
I just ran this in playground and the animation runs fine:
import PlaygroundSupport
import UIKit
import RealityKit
let arview = ARView(frame: .zero, cameraMode: .nonAR, automaticallyConfigureSession: true)
arview.environment.lighting.intensityExponent = 3
let newAnchor = AnchorEntity(world: .zero)
let newEnt = try! Entity.load(named: "toy_robot_vintage")
newAnchor.addChild(newEnt)
arview.scene.addAnchor(newAnchor)
for anim in newEnt.availableAnimations {
newEnt.playAnimation(anim.repeat(duration: .infinity), transitionDuration: 1.25, startsPaused: false)
}
PlaygroundSupport.PlaygroundPage.current.liveView = arview
The issue is that a model imported this way does not conform to HasPhysics (it would have been useful if you'd mentioned that's where it was now failing for you).
Apply the ModelComponent to another entity class, or to a ModelEntity, instead.
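One way to act on that, adapted to the question's code, is to host the physics on a ModelEntity (which conforms to HasPhysics) and keep the animated entity as its child, so playAnimation keeps working; a hedged sketch, with the anchor, shapes, and file name illustrative:
let animatedEntity = try! Entity.load(named: "toy_robot_vintage.usdz")  // keeps its animations

let physicsHost = ModelEntity()                 // ModelEntity conforms to HasPhysics
physicsHost.addChild(animatedEntity)
physicsHost.generateCollisionShapes(recursive: true)
physicsHost.physicsBody = PhysicsBodyComponent(shapes: [.generateBox(size: .one)],
                                               mass: 1.0,
                                               material: .default,
                                               mode: .kinematic)

let anchor = AnchorEntity(plane: .horizontal)
anchor.addChild(physicsHost)
arView.scene.addAnchor(anchor)
arView.installGestures([.rotation], for: physicsHost)

// Play the animation on the loaded entity, not on the physics host
animatedEntity.availableAnimations.first.map {
    animatedEntity.playAnimation($0.repeat(duration: .infinity),
                                 transitionDuration: 1.25,
                                 startsPaused: false)
}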

Can I do ARKit "Continuous Image Tracking" in a World Tracking Configuration with RealityKit?

UPDATE: My premise that "continuous image tracking" is not possible out of the box with RealityKit ARViews was incorrect. All I needed to do was correctly create the AnchorEntity for the continuously tracked reference image.
The anchor entity needs to be created using the init(anchor: ARAnchor) initializer. (The init(world: SIMD3<Float>) initializer is correct for anchors stuck to the real world, but not ones that should track the reference image.)
Using ARKit and RealityKit with an ARWorldTrackingConfiguration, I am trying to do "continuous image tracking" (where the reference image is tracked each frame, and virtual objects can be anchored to it, appearing to be attached to and move with the reference image). Because reference images are only recognized once in world tracking (as opposed to ARImageTrackingConfiguration, where reference images are continuously tracked as long as they are in frame), this is not possible out of the box.
To get the same results in a world tracking configuration, I am anchoring virtual objects to the reference image in the session(_:didAdd:) delegate method, and using the session(_:didUpdate:) delegate method as an opportunity to remove the ARImageAnchor after each time it is identified. This causes the reference image to be re-recognized over and over, allowing virtual objects to be anchored to the image and appear to track it frame-to-frame.
In the example below, I am placing two ball markers to track the position of the reference image. First marker is placed only once, at the location where the reference image is initially detected. The other marker is re-positioned each time the reference image is re-detected, appearing to follow it.
This works. Virtual content tracks the reference image in the ARWorldTrackingConfiguration the same way it would in an image tracking config. But while the "animation" in ARImageTrackingConfiguration is very smooth, the animation in world tracking is much less smooth, more jumpy, as if it was running at 10 or 15 frames per second. (Actual FPS as reported by .showStatistics stays near 60 FPS in both configurations.)
I assume the difference in smoothness results from the time it takes ARKit to do the work of repeatedly re-recognizing and removing the reference image anchor on each didAdd/didUpdate cycle.
I would like to know if there is a better technique to get "continuous image tracking" in an ARWorldTrackingConfiguration, and/or if there is any way I can improve the code in the delegate methods to achieve this effect.
import ARKit
import RealityKit
class ViewController: UIViewController, ARSessionDelegate {
@IBOutlet var arView: ARView!
// originalImageAnchor is used to visualize the first-detected location of reference image
// currentImageAnchor should be continuously updated to match current position of ref image
var originalImageAnchor: AnchorEntity!
var currentImageAnchor: AnchorEntity!
let ballRadius: Float = 0.02
override func viewDidLoad() {
super.viewDidLoad()
guard let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
bundle: nil) else { fatalError("Missing expected asset catalog resources.") }
arView.session.delegate = self
arView.automaticallyConfigureSession = false
arView.debugOptions = [.showStatistics]
arView.renderOptions = [.disableCameraGrain, .disableHDR, .disableMotionBlur,
.disableDepthOfField, .disableFaceOcclusions, .disablePersonOcclusion,
.disableGroundingShadows, .disableAREnvironmentLighting]
let configuration = ARWorldTrackingConfiguration()
configuration.detectionImages = referenceImages
configuration.maximumNumberOfTrackedImages = 1 // there is one ref image named "coaster_rb"
arView.session.run(configuration)
}
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
guard let imageAnchor = anchors[0] as? ARImageAnchor else { return }
// Reference image detected. This will happen multiple times because
// we delete ARImageAnchor in session(_:didUpdate:)
if let imageName = imageAnchor.name, imageName == "coaster_rb" {
// If originalImageAnchor is nil, create an anchor and
// add a marker at initial position of reference image.
if originalImageAnchor == nil {
originalImageAnchor = AnchorEntity(world: imageAnchor.transform)
let originalImageMarker = generateBallMarker(radius: ballRadius, color: .systemPink)
originalImageMarker.position.y = ballRadius + (ballRadius * 2)
originalImageAnchor.addChild(originalImageMarker)
arView.scene.addAnchor(originalImageAnchor)
}
// If currentImageAnchor is nil, add an anchor and marker at reference image position
// If currentImageAnchor has already been added, adjust its position to match ref image
if currentImageAnchor == nil {
currentImageAnchor = AnchorEntity(world: imageAnchor.transform)
let currentImageMarker = generateBallMarker(radius: ballRadius, color: .systemTeal)
currentImageMarker.position.y = ballRadius
currentImageAnchor.addChild(currentImageMarker)
arView.scene.addAnchor(currentImageAnchor)
} else {
currentImageAnchor.setTransformMatrix(imageAnchor.transform, relativeTo: nil)
}
}
}
func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
guard let imageAnchor = anchors[0] as? ARImageAnchor else { return }
// Delete reference image anchor to allow for ongoing tracking as it moves
if let imageName = imageAnchor.name, imageName == "coaster_rb" {
arView.session.remove(anchor: anchors[0])
}
}
func generateBallMarker(radius: Float, color: UIColor) -> ModelEntity {
let ball = ModelEntity(mesh: .generateSphere(radius: radius),
materials: [SimpleMaterial(color: color, isMetallic: false)])
return ball
}
}
Continuous image tracking does work out of the box with RealityKit ARViews in world tracking configurations. A mistake in my original code lead me to think otherwise.
Incorrect anchor entity initialization (for what I was trying to accomplish):
currentImageAnchor = AnchorEntity(world: imageAnchor.transform)
Since I wanted to track the ARImageAnchor assigned to the matched reference image, I should have done it like this:
currentImageAnchor = AnchorEntity(anchor: imageAnchor)
The corrected example below places one virtual marker that is fixed to the reference image's initial position, and another that smoothly tracks the reference image in a world tracking configuration:
import ARKit
import RealityKit
class ViewController: UIViewController, ARSessionDelegate {
@IBOutlet var arView: ARView!
let ballRadius: Float = 0.02
override func viewDidLoad() {
super.viewDidLoad()
guard let referenceImages = ARReferenceImage.referenceImages(
inGroupNamed: "AR Resources", bundle: nil) else {
fatalError("Missing expected asset catalog resources.")
}
arView.session.delegate = self
arView.automaticallyConfigureSession = false
arView.debugOptions = [.showStatistics]
arView.renderOptions = [.disableCameraGrain, .disableHDR,
.disableMotionBlur, .disableDepthOfField,
.disableFaceOcclusions, .disablePersonOcclusion,
.disableGroundingShadows, .disableAREnvironmentLighting]
let configuration = ARWorldTrackingConfiguration()
configuration.detectionImages = referenceImages
configuration.maximumNumberOfTrackedImages = 1
arView.session.run(configuration)
}
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
guard let imageAnchor = anchors[0] as? ARImageAnchor else { return }
if let imageName = imageAnchor.name, imageName == "target_image" {
// AnchorEntity(world: imageAnchor.transform) results in anchoring
// virtual content to the real world. Content anchored like this
// will remain in position even if the reference image moves.
let originalImageAnchor = AnchorEntity(world: imageAnchor.transform)
let originalImageMarker = makeBall(radius: ballRadius, color: .systemPink)
originalImageMarker.position.y = ballRadius + (ballRadius * 2)
originalImageAnchor.addChild(originalImageMarker)
arView.scene.addAnchor(originalImageAnchor)
// AnchorEntity(anchor: imageAnchor) results in anchoring
// virtual content to the ARImageAnchor that is attached to the
// reference image. Content anchored like this will appear
// stuck to the reference image.
let currentImageAnchor = AnchorEntity(anchor: imageAnchor)
let currentImageMarker = makeBall(radius: ballRadius, color: .systemTeal)
currentImageMarker.position.y = ballRadius
currentImageAnchor.addChild(currentImageMarker)
arView.scene.addAnchor(currentImageAnchor)
}
}
func makeBall(radius: Float, color: UIColor) -> ModelEntity {
let ball = ModelEntity(mesh: .generateSphere(radius: radius),
materials: [SimpleMaterial(color: color, isMetallic: false)])
return ball
}
}