How to installGestures in Reality Composer – Swift

Is there a way to add installGestures in Reality Composer?
I am able to do it in code, but I don't want to mix too much code with the UI parts, so I am looking for a way to do the drag-and-drop part in Reality Composer.
Here is the code snippet showing how I do it in code and what I did so far:
arView.installGestures([.translation], for: modelEntity)
Does anyone know how to enable this in Reality Composer?
Thanks

I solved it with your hint, like this:
let startScene = try! RCProjectFile.loadStartScene()
if let cube = startScene.cube {
    if let cube = cube as? Entity & HasCollision {
        cube.generateCollisionShapes(recursive: true)
        arView.installGestures(for: cube)
    }
}

Reality Composer 1.5 only lets you implement gestures implicitly, through animated behaviors. At the moment there's no explicit way to turn on Translation, Rotation or Pinch gestures in Reality Composer – only via RealityKit, as you indicated.
arView.installGestures([.all], for: entity)
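Note that installGestures(_:for:) only works on an entity that conforms to HasCollision and actually has collision shapes. A minimal sketch in code (arView and the entity setup below are assumptions, not part of your project):
import RealityKit
import UIKit

// Minimal sketch: gestures only respond once the entity has a CollisionComponent,
// e.g. generated via generateCollisionShapes(recursive:).
let box = ModelEntity(mesh: .generateBox(size: 0.1),
                      materials: [SimpleMaterial(color: .systemBlue, isMetallic: false)])
box.generateCollisionShapes(recursive: true)

let anchor = AnchorEntity(plane: .horizontal)
anchor.addChild(box)
arView.scene.anchors.append(anchor)

arView.installGestures([.translation, .rotation, .scale], for: box)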
P.S.
Also, this post will show you how raycasting works in conjunction with RealityKit gestures.

Related

RealityKit – Playing multiple animations in USDZ file

Has anyone found a workflow to create multiple animations for a skeletal mesh packaged in a USDZ file and play back the animations using RealityKit?
I have a skeletal mesh with two animations (idle & run). I want to package them into a single USDZ file (or even multiple USDZ files if I have to) to be used in RealityKit.
I have been able to create an FBX export of my skeletal mesh and the animations, and ship them up to Sketchfab for a valid USDZ export that RealityKit can understand. I do not know how to package the second animation into a single USDZ file and then use Swift to play back specific animations based on specific events.
There seem to be a lot of posts from about a year ago on the topic with no real answers and little activity since. Any pointers would be appreciated.
Although SceneKit lets you play multiple animations from a .dae model, RealityKit 2.0 still gives you no way to play multiple animations found in a .usdz model. Look at this post and this post.
Only one animation is currently accessible, using the following code:
let robot = try ModelEntity.load(named: "drummer")
let anchor = AnchorEntity()
anchor.children.append(robot)
arView.scene.anchors.append(anchor)

robot.playAnimation(robot.availableAnimations[0].repeat(duration: .infinity),
                    transitionDuration: 0.5,
                    startsPaused: false)
If you try to access the second or third element of the collection (assuming it even exists), your app crashes:
modelWithMultipleAnimations.availableAnimations[1]
modelWithMultipleAnimations.availableAnimations[2]
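A commonly used workaround, sketched below (the file names are assumptions): export one .usdz per animation clip, keep one model in the scene, and swap models when an event requires a different clip.
import RealityKit

// Hedged workaround sketch: one .usdz per clip ("drummer_idle" / "drummer_run" are
// assumed file names). Each file then exposes exactly one animation in availableAnimations.
let idleModel = try! Entity.load(named: "drummer_idle")
let runModel  = try! Entity.load(named: "drummer_run")

let anchor = AnchorEntity(plane: .horizontal)
anchor.addChild(idleModel)
arView.scene.anchors.append(anchor)

// Call this when the "run" event fires.
func switchToRun() {
    idleModel.removeFromParent()
    anchor.addChild(runModel)
    runModel.playAnimation(runModel.availableAnimations[0].repeat(duration: .infinity),
                           transitionDuration: 0.3,
                           startsPaused: false)
}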

How can I zoom out using RealityKit?

I am using RealityKit for human body detection.
let configuration = ARBodyTrackingConfiguration()
arView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
A person always has to stand quite far from the back camera, more than 4 m away. Can I apply a negative zoom? With the system camera app, a person can stand only 3 m away.
Thank you in advance.
Sadly, you can't zoom out with RealityKit. RealityKit uses the camera feed provided by ARKit, and it's fixed at a focal length of 28 mm.

RealityKit and ARKit – What is an AR project looking for when the app starts?

You will understand this question better if you open Xcode, create a new Augmented Reality Project and run that project.
After the project starts running on the device, you will see the image from the rear camera showing your room.
After 3 or 4 seconds, a cube appears.
My questions are:
What was the app doing before the cube appeared? I mean, I suppose the app was looking for tracking points in the scene so it could anchor the cube, right?
If this is true, what elements is the app looking for?
Suppose I am not satisfied with the point where the cube appeared. Is there any function I can trigger with a tap on the screen, so the tracking searches for new points again near the location I tapped?
I know my question is generic, so please, just give me the right direction.
ARKit and RealityKit stages
There are three stages in ARKit and RealityKit when you launch an AR app:
Tracking
Scene Understanding
Rendering
Each stage may considerably increase the time required for model placement (+1...+4 seconds, depending on the device). Let's talk about each of them.
Tracking
This is the initial state of your AR app. Here the iPhone fuses visual data coming through the RGB rear camera at 60 fps with transform data coming from the IMU sensors (accelerometer, gyroscope and compass) at 1000 fps. Automatically generated Feature Points help ARKit and RealityKit track the surrounding environment and build a tracking map (whether it's World Tracking or, for example, Face Tracking). Feature Points are spontaneously generated on high-contrast edges of real-world objects and textures, in well-lit environments. If you already have a previously saved World Map, it reduces the time for model placement into the scene. Also, you may use an ARCoachingOverlayView for useful visual instructions that guide the user during session initialization and recovery.
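As an aside, here's a minimal sketch of the coaching overlay mentioned above (arView is assumed to be your ARView); it shows Apple's standard on-screen hints while tracking initializes or recovers:
import ARKit
import RealityKit
import UIKit

func addCoachingOverlay(to arView: ARView) {
    let coachingOverlay = ARCoachingOverlayView()
    coachingOverlay.session = arView.session
    coachingOverlay.goal = .horizontalPlane           // guide the user toward finding a plane
    coachingOverlay.activatesAutomatically = true     // appears whenever tracking is limited
    coachingOverlay.frame = arView.bounds
    coachingOverlay.autoresizingMask = [.flexibleWidth, .flexibleHeight]
    arView.addSubview(coachingOverlay)
}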
Scene Understanding
The second stage can include horizontal and vertical Plane Detection, Ray-Casting (or Hit-Testing) and Light Estimation. If you have activated the Plane Detection feature, it takes some time to detect a plane with a corresponding ARPlaneAnchor (or AnchorEntity(.plane)) that must tether the virtual model – the cube in your case. There's also advanced scene understanding, which lets you use the Scene Reconstruction feature. Scene Reconstruction requires a device with a LiDAR scanner and gives you an improved depth channel for compositing elements in a scene, as well as People Occlusion. You can always enable the Image/Object Detection feature, but keep in mind that it's built on machine learning algorithms, which increase the model's placement time in a scene.
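A hedged sketch of what enabling these second-stage features can look like in code (every option here is optional, and arView is assumed; enable only what your scene really needs, since each feature adds to the placement time):
import ARKit
import RealityKit

let config = ARWorldTrackingConfiguration()
config.planeDetection = [.horizontal, .vertical]      // Plane Detection
config.isLightEstimationEnabled = true                // Light Estimation (on by default)
config.environmentTexturing = .automatic              // environment probes for reflections

// Scene Reconstruction only works on LiDAR-equipped devices.
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    config.sceneReconstruction = .mesh
}
// People Occlusion is also device-dependent.
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    config.frameSemantics.insert(.personSegmentationWithDepth)
}

arView.session.run(config)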
Rendering
The last stage renders the virtual geometry in your scene. Scenes can contain models with shaders and textures, transform or asset animations, dynamics and sound. Surrounding HDR reflections for metallic shaders are calculated by neural modules. ARKit itself can't render an AR scene; for 3D rendering you have to use a framework such as RealityKit, SceneKit or Metal, each of which has its own rendering engine.
By default, RealityKit enables high-quality rendering effects like Motion Blur and Ray-traced shadows that require additional computational power. Take that into consideration.
Tip
To significantly reduce the time for placing an object in the AR scene, use a device with a LiDAR scanner, which works at nanosecond speed. If your device has no LiDAR, then track only surroundings where lighting conditions are good, all real-world objects are clearly distinguishable and their textures are rich and free of repetitive patterns. Also, try not to use geometry with more than 10K polygons or hi-res textures in your project (a jpeg or png of 1024x1024 is considered normal).
Also, RealityKit by default has several heavy options enabled – Depth channel Compositing, Motion Blur and Ray-traced Contact Shadows (on A11 and earlier, Projected Shadows). If you don't need all these features, just disable them; after that your app will be much faster.
Practical Solution I
(shadows, motion blur, depth comp, etc. are disabled)
Use the following properties to disable processor-intensive effects:
override func viewDidLoad() {
    super.viewDidLoad()

    arView.renderOptions = [.disableDepthOfField,
                            .disableHDR,
                            .disableMotionBlur,
                            .disableFaceOcclusions,
                            .disablePersonOcclusion,
                            .disableGroundingShadows]

    let boxAnchor = try! Experience.loadBox()
    arView.scene.anchors.append(boxAnchor)
}
Practical Solution II
(shadows, depth comp, etc. are enabled by default)
When you use the following code in RealityKit:
override func viewDidLoad() {
    super.viewDidLoad()

    let boxAnchor = try! Experience.loadBox()
    arView.scene.anchors.append(boxAnchor)
}
you get Reality Composer's preconfigured scene containing a horizontal plane detection property and an AnchorEntity with the following settings:
AnchorEntity(.plane(.horizontal,
                    classification: .any,
                    minimumBounds: [0.25, 0.25]))
Separating Tracking and Scene Understanding from Model Loading and Rendering
The problem you're having is a time lag that occurs the moment your app launches. At that same moment world tracking starts (first stage), then the app tries to detect a horizontal plane (second stage), and then it renders the metallic shader of the cube (third stage). To get rid of this time lag, use this very simple approach (when the app launches, you track the room first and then tap on the screen to load the model):
override func viewDidLoad() {
    super.viewDidLoad()

    let tap = UITapGestureRecognizer(target: self,
                                     action: #selector(self.tapped))
    arView.addGestureRecognizer(tap)
}

@objc func tapped(_ sender: UITapGestureRecognizer) {
    let boxAnchor = try! Experience.loadBox()
    arView.scene.anchors.append(boxAnchor)
}
This way you reduce the simultaneous load on the CPU and GPU, so your cube loads faster.
P.S.
Also, as an alternative, you can use the loadModelAsync(named:in:) type method, which allows you to load a model entity from a file in a bundle asynchronously:
static func loadModelAsync(named name: String,
                           in bundle: Bundle?) -> LoadRequest<ModelEntity>
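A hedged usage sketch of that method ("drummer" is an assumed file name, and arView is assumed): the returned LoadRequest is a Combine publisher, so keep the cancellable alive until loading finishes.
import Combine
import RealityKit

var cancellables = Set<AnyCancellable>()

ModelEntity.loadModelAsync(named: "drummer")
    .sink(receiveCompletion: { completion in
        if case .failure(let error) = completion {
            print("Unable to load the model:", error)   // loading failed
        }
    }, receiveValue: { model in
        // Anchor and display the model once it has loaded off the main thread.
        let anchor = AnchorEntity(plane: .horizontal)
        anchor.addChild(model)
        arView.scene.anchors.append(anchor)
    })
    .store(in: &cancellables)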
In the default Experience.rcproject the cube has an AnchoringComponent with a horizontal plane, so the cube will not display until the ARSession finds a horizontal plane in your scene (for example the floor or a table). Once it finds one, the cube will appear.
If instead you want to create an anchor and set it as the target when catching a tap event, you could perform a raycast. Using the raycast result, you can grab its worldTransform and set the cube's AnchoringComponent to that transform:
Something like this:
boxAnchor.anchoring = AnchoringComponent(.world(transform: raycastResult.worldTransform))
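A hedged sketch of that approach (boxAnchor is assumed to be loaded and appended already, e.g. stored as a property): raycast from the tap location and re-anchor at the hit's world transform.
import ARKit
import RealityKit
import UIKit

@objc func handleTap(_ sender: UITapGestureRecognizer) {
    // Raycast from the screen point against estimated horizontal planes.
    let point = sender.location(in: arView)
    guard let result = arView.raycast(from: point,
                                      allowing: .estimatedPlane,
                                      alignment: .horizontal).first else { return }

    // Re-anchor the Reality Composer scene at the hit location.
    boxAnchor.anchoring = AnchoringComponent(.world(transform: result.worldTransform))
}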

Moving a Reality Composer entity after loading from Xcode?

I have used Reality Composer to build an AR scene, which currently has one object (as I understand, this is an entity). Using Xcode, I am loading this Reality Composer scene, which functions as expected. However, I would like my user to have the ability to scale or move the object, while still retaining all of my animations and Reality Composer setup.
I am using this code to load my object:
override func viewDidLoad() {
    super.viewDidLoad()

    // Load the "Box" scene from the "Experience" Reality File
    let boxAnchor = try! Experience.loadBox()
    boxAnchor.generateCollisionShapes(recursive: true)
    arView.scene.anchors.append(boxAnchor)
}
I have attempted to implement a traditional UIPinchGestureRecognizer and UITapGestureRecognizer, to no avail. I do see options such as EntityScaleGestureRecognizer, though I've yet to figure out how to implement them accordingly. I see, from some reading, that my "entity" needs to conform to HasCollision, but it seems I might be missing something, as I'd imagine Reality Composer must offer some sort of interaction functionality, given how simple it makes building AR experiences.
Thanks!
let boxAnchor = try! Experience.loadBox()
boxAnchor.generateCollisionShapes(recursive: true)
let box = boxAnchor.group as? Entity & HasCollision
arView.installGestures(for: box!)
Set Physics for the box in Reality Composer.
See https://forums.developer.apple.com/thread/119773
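A slightly safer variant of the same idea (the entity name group comes from that particular Reality Composer scene, so treat it as an assumption about your project):
let boxAnchor = try! Experience.loadBox()
boxAnchor.generateCollisionShapes(recursive: true)

// Avoid the force unwrap: only install gestures if the entity exists and conforms to HasCollision.
if let box = boxAnchor.group as? Entity & HasCollision {
    arView.installGestures([.translation, .rotation, .scale], for: box)
} else {
    print("Entity 'group' is missing or does not conform to HasCollision")
}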

Can I render the same SKScene to two different views simultaneously?

I want to render two SpriteKit SKViews at the same time which share a common SKScene. I'd like each SKView to show a different part of the scene (e.g. from a different SKCameraNode). Is this possible?
What I've tried: I've instantiated two SKViews and called .presentScene(mySharedScene) on both of them. I can render those views simultaneously and animations work just fine. But since the camera position is set on the SKScene itself via the .camera property, I can't assign a different camera to each SKView.
Ultimately, I'd like to create a simple bouncing ball that leverages SpriteKit's physics engine. Each SKView will be displayed on a different physical monitor, and the ball should be able to bounce between them. I'm doing this purely as a learning exercise.
Yes, you can – I have done split screens before, but let me tell you, it is a real pain. All of your updates get called twice, so you will have to develop a system that works around that. Instead, for a simple experience, I recommend copying your scene to your second view and then updating its camera to the new location.
// Inside your SKScene subclass
override func didFinishUpdate() {
    let copy = self.copy() as! SKScene       // copy the whole scene graph
    copy.camera!.position = newPosition      // reposition the copy's camera
    view2.presentScene(copy)                 // show the copy in the second view
}