I added a USDZ with animation to a Reality Composer scene (.rcproject). After loading the scene and adding it to the view, I tried to install gestures such as rotate, translate and scale, but they don't work:
let ganGes = gangnim?.gnagnumObject as? (Entity & HasCollision)
arView.installGestures([.rotation,.translation,.scale], for: ganGes!)
How can I install gestures on an entity loaded from Reality Composer?
To implement RealityKit's translate, rotate and scale gestures, you also need to call the generateCollisionShapes(recursive:) instance method, which prepares the model's shape for collision detection.
guard let ganGes = gangnim.gnagnumObject as? ModelEntity else { return }
ganGes.generateCollisionShapes(recursive: true)
arView.installGestures([.all], for: ganGes as (Entity & HasCollision))
I'm trying to reverse engineer the 3d Scanner App using RealityKit and am having real trouble getting just a basic model working with all gestures. When I run the code below, I get a cube with scale and rotation (about the y axis only), but no translation interaction. I'm trying to figure out how to get rotation about an arbitrary axis as well as translation, like in the 3d Scanner App above. I'm relatively new to iOS and read that one should use RealityKit because Apple isn't really supporting SceneKit anymore, but I'm now wondering if SceneKit would be the way to go, as RealityKit is still young. Or does anyone know of an extension to RealityKit's ModelEntity objects that gives them better interaction capabilities?
I've got my app taking a scan with the LiDAR sensor and saving it to disk as a .usda mesh, per this tutorial, but when I load the mesh as a ModelEntity and attach gestures to it, I don't get any interaction at all.
The example code below recreates the limited gestures for a box ModelEntity, and I have some commented lines showing where I would load my .usda model from disk; but again, while that model renders, it gets no gesture interaction.
Any help appreciated!
// ViewController.swift
import UIKit
import RealityKit
class ViewController: UIViewController {

    var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        arView = ARView(frame: view.frame, cameraMode: .nonAR, automaticallyConfigureSession: false)
        view.addSubview(arView)

        // create pointlight
        let pointLight = PointLight()
        pointLight.light.intensity = 10000

        // create light anchor
        let lightAnchor = AnchorEntity(world: [0, 0, 0])
        lightAnchor.addChild(pointLight)
        arView.scene.addAnchor(lightAnchor)

        // eventually want to load my model from disk and give it gestures.
        // guard let scanEntity = try? Entity.loadModel(contentsOf: urlOBJ) else {
        //     print("couldn't load scan in this format")
        //     return
        // }

        // entity to add gestures to
        let cubeMaterial = SimpleMaterial(color: .blue, isMetallic: true)
        let myEntity = ModelEntity(mesh: .generateBox(width: 0.1, height: 0.2, depth: 0.3, cornerRadius: 0.01, splitFaces: false), materials: [cubeMaterial])
        myEntity.generateCollisionShapes(recursive: false)

        let myAnchor = AnchorEntity(world: .zero)
        myAnchor.addChild(myEntity)

        // add collision and interaction
        let scanEntityBounds = myEntity.visualBounds(relativeTo: myAnchor)
        myEntity.collision = CollisionComponent(shapes: [.generateBox(size: scanEntityBounds.extents).offsetBy(translation: scanEntityBounds.center)])

        arView.installGestures(for: myEntity).forEach { gestureRecognizer in
            gestureRecognizer.addTarget(self, action: #selector(handleGesture(_:)))
        }
        arView.scene.addAnchor(myAnchor)

        // without this, get no gestures at all
        let camera = PerspectiveCamera()
        let cameraAnchor = AnchorEntity(world: [0, 0, 0.2])
        cameraAnchor.addChild(camera)
        arView.scene.addAnchor(cameraAnchor)
    }

    @objc private func handleGesture(_ recognizer: UIGestureRecognizer) {
        if recognizer is EntityTranslationGestureRecognizer {
            print("translation!")
        } else if recognizer is EntityScaleGestureRecognizer {
            print("scale!")
        } else if recognizer is EntityRotationGestureRecognizer {
            print("rotation!")
        }
    }
}
To extend ModelEntity's gesture interaction capabilities, set up your own 2D gestures. There are 8 screen gestures in UIKit, and in SwiftUI you have 5 principal gestures plus the Sequence, Simultaneous and Exclusive variations.
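For example, here's a minimal sketch of a custom UIKit pan gesture that rotates an entity about an arbitrary axis, which installGestures(_:for:) doesn't offer. It assumes the myEntity from the question's code is stored as a property on the view controller; the helper names are made up for illustration:
import UIKit
import RealityKit

extension ViewController {
    func addCustomRotationGesture() {
        // Attach a plain UIKit pan gesture to the ARView.
        let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
        arView.addGestureRecognizer(pan)
    }

    @objc func handlePan(_ recognizer: UIPanGestureRecognizer) {
        // Turn the vertical pan distance into an incremental angle (radians).
        let translation = recognizer.translation(in: arView)
        let angle = Float(translation.y) * 0.01

        // Rotate about the world X axis; any axis could be used here.
        // (myEntity is assumed to be a stored property, not a local as in the question.)
        let delta = simd_quatf(angle: angle, axis: [1, 0, 0])
        myEntity.transform.rotation = delta * myEntity.transform.rotation

        // Reset so the next callback delivers only the new delta.
        recognizer.setTranslation(.zero, in: arView)
    }
}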
From what I understand, the gestures are working for the box but not for your .usdz file/model. If that's the case, the issue is that the model does not have a collision shape (HasCollision). If you are using Reality Composer to edit your models, you could do the following:
Click on the model
Under the Physics dropdown, click Participate
Under Collision Shape, select Automatic
Overall, make sure that the model has collision and that, in code, you cast it to a type that has collision:
guard let myEntity = try? Entity.loadModel(named: "fileName") as (Entity & HasCollision) else { return }
I am currently looking into options for changing an object's colour from Swift. The object has been added to the scene from Reality Composer.
I found in the documentation that I can change position, rotation and scale; however, I am unable to find a way to change colour.
Xcode 14.2, RealityKit 2.0, Target iOS 16.2
Use the following code to change the color of the box model (found in the Xcode RealityKit template):
let boxScene = try! Experience.loadBox() // Reality Composer scene
let modelEntity = boxScene.steelBox!.children[0] as! ModelEntity
var material = SimpleMaterial()
material.color = .init(tint: .green)
modelEntity.model?.materials[0] = material
arView.scene.anchors.append(boxScene)
print(boxScene)
A downcast to ModelEntity is necessary for accessing the model's components.
If you need to change the transparency of your model, use the following approach.
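Here's a minimal sketch of one such approach (an assumption on my part, using RealityKit 2's PhysicallyBasedMaterial and reusing the modelEntity from the snippet above):
var pbrMaterial = PhysicallyBasedMaterial()
pbrMaterial.baseColor = .init(tint: .green)
// A transparent blending mode with 50% opacity makes the model semi-transparent.
pbrMaterial.blending = .transparent(opacity: .init(floatLiteral: 0.5))
modelEntity.model?.materials[0] = pbrMaterial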
Look into changing the material of the object.
I'm trying to create an AR experience with RealityKit but I'm finding that by default, entities will move into each other and overlap when they are moved by user interaction.
I want to prevent the objects from overlapping and entering each other, so that when they are moved by the user they just hit/bounce off without overlapping.
I'm loading the entities from a Reality Composer file like this and adding them to the scene (inside a do/catch block, with other code not shown in this simplified version):
let entity = try Experience.loadBallSort()
anchorEntity.addChild(entity)
// anchorEntity is an AnchorEntity that is already attached to the scene
I'm using the default gestures like this to enable user interaction; this is how the objects come to overlap, because they don't stop once they touch:
arView.installGestures([.rotation, .translation], for: entity)
Within Reality Composer, I've got Physics enabled with a Static motion type, and the default physics material/collision shape for each object. I've also tried to use generateCollisionShapes like this, but it doesn't change the collision behaviour:
entity.generateCollisionShapes(recursive: true)
How can I prevent entities from overlapping in RealityKit?
There's no overlapping when objects collide.
To implement such a scenario, let's take two objects: one dynamic and the other kinematic.
PhysicsBodyMode.dynamic
Forces and collisions control body movement.
PhysicsBodyMode.kinematic
The user controls body movement. This type of physics body is unaffected by forces or collisions, but it can cause collisions that affect other bodies when moved.
Code:
var arView = ARView(frame: .zero)
arView.frame = self.view.frame
self.view.addSubview(arView)
let scene = try! Experience.loadModels()
// Kinematic
let red = scene.redBox!.children[0] as! (Entity & HasCollision & HasPhysicsBody)
red.physicsBody = .init()
red.physicsBody?.massProperties.mass = 5
red.physicsBody?.mode = .kinematic
red.generateCollisionShapes(recursive: true)
arView.installGestures([.translation], for: red)
// Dynamic
let green = scene.greenCube!.children[0] as! (Entity & HasCollision & HasPhysicsBody)
green.physicsBody = .init()
green.physicsBody?.massProperties.mass = 5
green.physicsBody?.mode = .dynamic
green.generateCollisionShapes(recursive: true)
arView.scene.anchors.append(scene)    // add the Reality Composer scene to the view
P.S.
Don't apply physics in Reality Composer; apply it programmatically in RealityKit.
I have added content to the face anchor in Reality Composer. Later on, after loading the Experience that I created in Reality Composer, I create a face-tracking session like this:
guard ARFaceTrackingConfiguration.isSupported else { return }
let configuration = ARFaceTrackingConfiguration()
configuration.maximumNumberOfTrackedFaces = ARFaceTrackingConfiguration.supportedNumberOfTrackedFaces
configuration.isLightEstimationEnabled = true
arView.session.delegate = self
arView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
It is not adding the content to all the faces it is detecting, and I know it is detecting more than one face because the other faces occlude the content that is stuck to the first face. Is this a limitation of RealityKit, or am I missing something in the composer? It's actually pretty hard to miss something, since it is so basic and simple.
Thanks.
You can't get multi-face tracking working in RealityKit if you use models with an embedded face anchor, i.e. models that came from Reality Composer's Face Tracking preset (you can use just one model with a .face anchor, not three). You may still use such models, but you need to delete these embedded AnchorEntity(.face) anchors. A better approach, though, is to simply load models in .usdz format.
Let's see what Apple documentation says about embedded anchors:
You can manually load and anchor Reality Composer scenes using code, like you do with other ARKit content. When you anchor a scene in code, RealityKit ignores the scene's anchoring information.
Reality Composer supports 5 anchor types: Horizontal, Vertical, Image, Face & Object. It displays a different set of guides for each anchor type to help you place your content. You can change the anchor type later if you choose the wrong option or change your mind about how to anchor your scene.
There are two options:
In a new Reality Composer project, deselect the Create with default content checkbox at the bottom left of the action sheet you see at startup.
In RealityKit code, delete the existing face anchor and assign a new one. The latter option is not great because you need to recreate the objects' positions from scratch:
boxAnchor.removeFromParent()
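As a rough sketch of what could follow the removeFromParent() call above (boxAnchor and its content are assumptions based on the Xcode RealityKit template, not code from the original answer):
// Create a replacement anchor: either a plain .face target, or
// AnchorEntity(anchor:) with an ARFaceAnchor from the session delegate.
let newFaceAnchor = AnchorEntity(.face)

// Re-parent the Reality Composer content; positions that were authored
// relative to the old embedded face anchor may need to be set again.
for child in boxAnchor.children {
    newFaceAnchor.addChild(child.clone(recursive: true))
}
arView.scene.addAnchor(newFaceAnchor)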
Nevertheless, I've achieved multi-face tracking using AnchorEntity(anchor:) with its ARAnchor initializer inside the session(_:didUpdate:) delegate method (much like SceneKit's renderer() delegate methods).
Here's my code:
import ARKit
import RealityKit
extension ViewController: ARSessionDelegate {

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let faceAnchor = anchors.first as? ARFaceAnchor else { return }

        let anchor1 = AnchorEntity(anchor: faceAnchor)
        let anchor2 = AnchorEntity(anchor: faceAnchor)

        anchor1.addChild(model01)
        anchor2.addChild(model02)

        arView.scene.anchors.append(anchor1)
        arView.scene.anchors.append(anchor2)
    }
}

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    let model01 = try! Entity.load(named: "angryFace")        // USDZ file
    let model02 = try! FacialExpression.loadSmilingFace()     // RC scene

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.session.delegate = self

        guard ARFaceTrackingConfiguration.isSupported else {
            fatalError("Alas, Face Tracking isn't supported")
        }
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        let config = ARFaceTrackingConfiguration()
        config.maximumNumberOfTrackedFaces = 2
        arView.session.run(config)
    }
}
In my ARKit app I have an animated character (stored as a T-Bone model in an SCN file). The animations are taken from several DAE files and applied to the model using SCNAnimationPlayer, like so:
let myAnimation = Animations.configMyAnimationFunction()
myAnimation.stop()
enemyNode.childNodes[2].addAnimationPlayer(myAnimation, forKey: "myKey")
enemyNode.childNodes[2].animationPlayer(forKey: "myKey")?.play()
The animation plays perfectly.
Now I do a hit test against the animated geometry, like this:
let currentTouchPoint = touches.first?.location(in: self.sceneView)
let hitTest = sceneView.hitTest(currentTouchPoint!, options: [SCNHitTestOption.categoryBitMask: NodeCategory.catEnemy.rawValue, SCNHitTestOption.searchMode: SCNHitTestSearchMode.any.rawValue as NSNumber])
let hitObject = hitTest.first?.node // let that stores the hitTest
if hitObject != nil {
    // code...
    let hitLocation = hitTest.first?.worldCoordinates
    // code...
}
I want to use the result from worldCoordinates. But it seems that the result always contains coordinates from the static T-Bone model, instead of the location it occupies while the animation is running.
Imagine the animated model is clapping its hands (as a humanoid character) or touching the ground. When I touch the model's hands, the hit test works and returns a result, but at the wrong coordinates.
Apple's documentation describes worldCoordinates as "The point of intersection between the geometry and the search ray, in the scene's world coordinate system." I also tried localCoordinates, but with even less success.
How can I determine the real coordinates at the current touch location on the geometry while the model is being animated?