How do I make an entity a physics entity in RealityKit? - swift

I am not able to figure out how to make the "ball" entity a physics entity/body and apply a force to it.
// I'm using UIKit for the user interface and RealityKit +
// the models made in Reality Composer for the augmented reality and code
import RealityKit
import ARKit

class ViewController: UIViewController {

    var ball: (Entity & HasPhysics)? {
        try? Entity.load(named: "golfball") as? Entity & HasPhysics
    }

    @IBOutlet var arView: ARView!

    // referencing the Play Now button on the home screen
    @IBAction func playNow(_ sender: Any) { }

    // referencing the slider in the AR view - this slider will be used to
    // control the power of the swing. The slider values range from 10% to
    // 100% of swing power with a default value of 55%. The user will have
    // to gain experience in the game to know how much power to use.
    @IBAction func slider(_ sender: Any) { }

    // The following code will fire when the view loads
    override func viewDidLoad() {
        super.viewDidLoad()

        // defining the anchor - it looks for a flat surface 0.3 by 0.3
        // meters (about a foot by a foot) - on this surface, it anchors
        // the golf course and ball when you tap
        let anchor = AnchorEntity(plane: .horizontal, minimumBounds: [0.3, 0.3])

        // placing the anchor in the scene
        arView.scene.addAnchor(anchor)

        // defining my golf course entity - using ModelEntity so it
        // participates in the physics of the scene
        let entity = try? ModelEntity.load(named: "golfarnew")

        // defining the ball entity - again using ModelEntity so it
        // participates in the physics of the scene
        let ball = try? ModelEntity.load(named: "golfball")

        // loading my golf course entity
        anchor.addChild(entity!)

        // loading the golf ball
        anchor.addChild(ball!)

        // applying a force to the ball at the ball's position; the
        // force is relative to the ball
        ball.physicsBody(SIMD3(1.0, 1.0, 1.0), at: ball.position, relativeTo: ball)

        // TODO: sounds, add physics body to ball, iPad for shot direction,
        // connect slider to impulse force
    }
}

Use the following code to find out how to implement RealityKit's physics.
Pay particular attention: Participates in Physics must be ON in Reality Composer.
import ARKit
import RealityKit

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        let boxScene = try! Experience.loadBox()
        let secondBoxAnchor = try! Experience.loadBox()

        let boxEntity = boxScene.steelBox as! (Entity & HasPhysics)

        let kinematics: PhysicsBodyComponent = .init(massProperties: .default,
                                                     material: nil,
                                                     mode: .kinematic)

        let motion: PhysicsMotionComponent = .init(linearVelocity: [0.1, 0, 0],
                                                   angularVelocity: [3, 3, 3])

        boxEntity.components.set(kinematics)
        boxEntity.components.set(motion)

        let anchor = AnchorEntity()
        anchor.addChild(boxEntity)

        arView.scene.addAnchor(anchor)
        arView.scene.addAnchor(secondBoxAnchor)

        print(boxEntity.isActive) // Entity must be active!
    }
}
Also, look at THIS POST to find out how to implement RealityKit's physics with a custom class.
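For the question's actual goal (pushing the ball), a dynamic body plus an impulse is the usual route. Below is a minimal sketch, assuming the "golfball" asset and the anchor from the question; the friction, restitution, radius and impulse values are illustrative assumptions, not values from the original post.

// A sketch, not the asker's final code: make the ball a dynamic
// physics body and push it once it is anchored in the scene.
let anchor = AnchorEntity(plane: .horizontal, minimumBounds: [0.3, 0.3])
arView.scene.addAnchor(anchor)

// loadModel(named:) returns a ModelEntity, which conforms to HasPhysics.
let ball = try! ModelEntity.loadModel(named: "golfball")

// A dynamic body is fully driven by the physics simulation.
ball.physicsBody = PhysicsBodyComponent(
    massProperties: .default,
    material: .generate(friction: 0.4, restitution: 0.6),  // assumed values
    mode: .dynamic)

// Physics needs a collision shape; a sphere approximates a golf ball.
ball.collision = CollisionComponent(shapes: [.generateSphere(radius: 0.021)])

anchor.addChild(ball)

// Apply a one-off impulse (or call addForce(_:relativeTo:) every frame).
// The entity must be active in the scene for this to take effect.
ball.applyLinearImpulse([0.1, 0, 0], relativeTo: nil)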

Related

RealityKit .nonAR installGestures is missing translation and rotation is y axis only

I'm trying to reverse engineer the 3d Scanner App using RealityKit and am having real trouble getting a basic model working with all gestures. When I run the code below, I get a cube with scale and rotation (about the y axis only), but no translation interaction. I'm trying to figure out how to get rotation about an arbitrary axis as well as translation, like in the 3d Scanner App above. I'm relatively new to iOS; I read that one should use RealityKit as Apple isn't really supporting SceneKit anymore, but I'm now wondering whether SceneKit would be the way to go, since RealityKit is still young. Or does anyone know of an extension to RealityKit ModelEntity objects to give them better interaction capabilities?
I've got my app taking a scan with the LiDAR sensor and saving it to disk as a .usda mesh, per this tutorial, but when I load the mesh as a ModelEntity and attach gestures to it, I don't get any interaction at all.
The example code below recreates the limited gestures for a box ModelEntity, and I have some commented lines showing where I would load my .usda model from disk; but again, while it renders, it gets no gesture interaction.
Any help appreciated!
// ViewController.swift
import UIKit
import RealityKit

class ViewController: UIViewController {

    var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        arView = ARView(frame: view.frame,
                        cameraMode: .nonAR,
                        automaticallyConfigureSession: false)
        view.addSubview(arView)

        // create point light
        let pointLight = PointLight()
        pointLight.light.intensity = 10000

        // create light anchor
        let lightAnchor = AnchorEntity(world: [0, 0, 0])
        lightAnchor.addChild(pointLight)
        arView.scene.addAnchor(lightAnchor)

        // eventually want to load my model from disk and give it gestures.
        // guard let scanEntity = try? Entity.loadModel(contentsOf: urlOBJ) else {
        //     print("couldn't load scan in this format")
        //     return
        // }

        // entity to add gestures to
        let cubeMaterial = SimpleMaterial(color: .blue, isMetallic: true)
        let myEntity = ModelEntity(mesh: .generateBox(width: 0.1,
                                                      height: 0.2,
                                                      depth: 0.3,
                                                      cornerRadius: 0.01,
                                                      splitFaces: false),
                                   materials: [cubeMaterial])
        myEntity.generateCollisionShapes(recursive: false)

        let myAnchor = AnchorEntity(world: .zero)
        myAnchor.addChild(myEntity)

        // add collision and interaction
        let scanEntityBounds = myEntity.visualBounds(relativeTo: myAnchor)
        myEntity.collision = CollisionComponent(
            shapes: [.generateBox(size: scanEntityBounds.extents)
                        .offsetBy(translation: scanEntityBounds.center)])

        arView.installGestures(for: myEntity).forEach { gestureRecognizer in
            gestureRecognizer.addTarget(self, action: #selector(handleGesture(_:)))
        }
        arView.scene.addAnchor(myAnchor)

        // without this, get no gestures at all
        let camera = PerspectiveCamera()
        let cameraAnchor = AnchorEntity(world: [0, 0, 0.2])
        cameraAnchor.addChild(camera)
        arView.scene.addAnchor(cameraAnchor)
    }

    @objc private func handleGesture(_ recognizer: UIGestureRecognizer) {
        if recognizer is EntityTranslationGestureRecognizer {
            print("translation!")
        } else if recognizer is EntityScaleGestureRecognizer {
            print("scale!")
        } else if recognizer is EntityRotationGestureRecognizer {
            print("rotation!")
        }
    }
}
To extend ModelEntity's gesture interaction capabilities, set up your own 2D gestures. There are 8 screen gestures in UIKit, and in SwiftUI you have 5 principal gestures plus the Sequence, Simultaneous and Exclusive variations. A sketch of a custom drag gesture follows.
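Here is a minimal sketch of such a custom gesture, assuming the arView from the question and that myEntity has been promoted to a stored property on the ViewController; the screen-points-to-meters factor is an arbitrary assumption you would tune.

import UIKit
import RealityKit

extension ViewController {

    // Call once after the entity is set up, e.g. at the end of viewDidLoad().
    func installCustomDrag() {
        let pan = UIPanGestureRecognizer(target: self,
                                         action: #selector(handlePan(_:)))
        arView.addGestureRecognizer(pan)
    }

    // Drags the entity in its XZ plane; works in .nonAR mode too.
    @objc func handlePan(_ recognizer: UIPanGestureRecognizer) {
        let delta = recognizer.translation(in: arView)
        // 0.001 m per screen point is an assumed, tunable factor.
        myEntity.position += [Float(delta.x) * 0.001, 0, Float(delta.y) * 0.001]
        recognizer.setTranslation(.zero, in: arView)
    }
}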
From what I understand, the gestures are working for the box but not for your .usdz file/model. If that is the case, the issue is that the model does not have a collision mesh (HasCollision). If you are using Reality Composer to edit your models, you can do the following:
click on the model
under the Physics dropdown, click Participate
under Collision Shape, select Automatic
Overall, make sure that the model has collision and that, within the code, you cast it to a type that has collision:
let myEntity = try? Entity.loadModel(named: "fileName") as? (Entity & HasCollision)
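If the asset can't be edited in Reality Composer, a collision shape can also be generated in code. A minimal sketch, reusing the "fileName" placeholder above and the handleGesture(_:) selector from the question:

// Generate collision in code instead of in Reality Composer.
if let scanEntity = try? Entity.loadModel(named: "fileName") {
    scanEntity.generateCollisionShapes(recursive: true)
    arView.installGestures(for: scanEntity).forEach {
        $0.addTarget(self, action: #selector(handleGesture(_:)))
    }
}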

RealityKit – Which Entity is intersecting with other Entity

let height: Float = 1
let width: Float = 0.5
let box = MeshResource.generateBox(width: 0.02, height: height, depth: width)
This box will have a real-time position matching the current camera position. In the AR world I will have multiple boxes with different shapes, and I want to identify which object is intersecting with the current real-time box.
I cannot do this with position matching (the nearest one); I literally want to know which object is touching/intersecting the real-time box.
Thanks in advance.
You can easily do that using the subscribe() method. The following code is a reference
(physics for both objects was enabled in Reality Composer):
import UIKit
import RealityKit
import Combine

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!
    var subscriptions: [Cancellable] = []

    override func viewDidLoad() {
        super.viewDidLoad()

        let boxScene = try! Experience.loadBox()
        arView.scene.anchors.append(boxScene)

        let floorEntity = boxScene.children[0].children[1]

        let subscribe = arView.scene.subscribe(to: CollisionEvents.Began.self,
                                               on: floorEntity) { (event) in
            print("Collision Occurred")
            print(event.entityA.name)
            print(event.entityB.name)
        }
        self.subscriptions += [subscribe]
    }
}
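For the box in the question, which is built in code rather than in Reality Composer, collision shapes have to be generated manually before any events fire. A minimal sketch, assuming the box mesh from the question and the subscriptions array above; the other entities also need collision shapes.

// Build the camera-following box and give it a collision shape.
let boxEntity = ModelEntity(mesh: box, materials: [SimpleMaterial()])
boxEntity.generateCollisionShapes(recursive: true)

let cameraAnchor = AnchorEntity(.camera)  // box tracks the camera pose
cameraAnchor.addChild(boxEntity)
arView.scene.addAnchor(cameraAnchor)

// Subscribing on the box itself reports every entity it overlaps.
let token = arView.scene.subscribe(to: CollisionEvents.Began.self,
                                   on: boxEntity) { event in
    print("Intersecting:", event.entityB.name)
}
subscriptions.append(token)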

Is there a way to programmatically change the material of an Entity that was created in Reality Composer?

I want to change the color of an entity programmatically after it was created in Reality Composer.
As Reality Composer does not create a ModelEntity (it creates a generic Entity), I do not appear to have access to change its color. When I typecast to a ModelEntity, I have access to the ModelComponent materials; however, when I try to add that to the scene I get a Thread 1: signal SIGABRT error: Could not cast value of type 'RealityKit.Entity' (0x1fcebe6e8) to 'RealityKit.ModelEntity' (0x1fceba970). Sample code below.
import UIKit
import RealityKit

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()

        // Load the "Box" scene from the "Experience" Reality file
        let boxAnchor = try! Experience.loadBox()

        // Typecast steelBox as ModelEntity to change its color
        let boxModelEntity = boxAnchor.steelBox as! ModelEntity

        // Remove materials and create a new material
        boxModelEntity.model?.materials.removeAll()
        let blueMaterial = SimpleMaterial(color: .blue, isMetallic: false)
        boxModelEntity.model?.materials.append(blueMaterial)

        // Add the box anchor to the scene
        arView.scene.anchors.append(boxAnchor)
    }
}
The model entity is stored deeper in RealityKit's hierarchy and, as you said, it's an Entity, not a ModelEntity. So use downcasting on its child to access the mesh and materials:
import UIKit
import RealityKit

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()

        let boxScene = try! Experience.loadBox()
        print(boxScene)

        let modelEntity = boxScene.steelBox?.children[0] as! ModelEntity

        let material = SimpleMaterial(color: .green, isMetallic: false)
        modelEntity.model?.materials = [material]

        let anchor = AnchorEntity()
        anchor.scale = [5, 5, 5]
        modelEntity.setParent(anchor)
        arView.scene.anchors.append(anchor)
    }
}
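A slightly safer variant is to search the hierarchy by name rather than relying on a fixed child index. The entity name "Steel Box" is an assumption here; print the scene first to confirm the real name.

// Look the entity up by name; print(boxScene) reveals the actual names.
if let modelEntity = boxScene.findEntity(named: "Steel Box")?
                             .children.first as? ModelEntity {
    modelEntity.model?.materials = [SimpleMaterial(color: .green,
                                                   isMetallic: false)]
}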

Anchoring Multiple Scenes in RealityKit

While loading multiple scenes (from Reality Composer) into arView, the scenes are not anchored in the same space.
In this example, scene1 is loaded when the app starts. After the button is pressed, scene2 is added to the scene. In both scenes, the models are placed at the origin and are expected to overlap when scene2 is added to the view. However, the positions of scene1 and scene2 differ when they are added to the arView.
import UIKit
import RealityKit

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!
    @IBOutlet weak var button: UIButton!

    var scene1: Experience.Scene1!
    var scene2: Experience.Scene2!

    override func viewDidLoad() {
        super.viewDidLoad()

        // Load both scenes from the "Experience" Reality file
        scene1 = try! Experience.loadScene1()
        scene2 = try! Experience.loadScene2()

        // Add the first scene's anchor to the scene
        arView.scene.addAnchor(scene1)
    }

    @IBAction func buttonPressed(_ sender: Any) {
        arView.scene.addAnchor(scene2)
    }
}
Note: this issue does not happen when both scenes are added simultaneously.
How do I make sure that both scenes are anchored at the same ARAnchor?
Use the following approach:
let scene01 = try! Cube.loadCube()
let scene02 = try! Ball.loadSphere()
let cubeEntity: Entity = scene01.steelCube!.children[0]
let ballEntity: Entity = scene02.glassBall!.children[0]
// var cubeComponent: ModelComponent = cubeEntity.components[ModelComponent].self!
// var ballComponent: ModelComponent = ballEntity.components[ModelComponent].self!
let anchor = AnchorEntity()
anchor.addChild(cubeEntity)
anchor.addChild(ballEntity)
// scene01.steelCube!.components.set(cubeComponent)
// scene02.glassBall!.components.set(ballComponent)
arView.scene.anchors.append(anchor)
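If both models must hang off one concrete ARAnchor (as the question asks), the shared AnchorEntity can be created from an ARAnchor. A minimal sketch, assuming ARKit is imported and a horizontal surface is in view:

import ARKit

// Raycast from the screen center to find a real-world surface.
let center = CGPoint(x: arView.bounds.midX, y: arView.bounds.midY)
if let result = arView.raycast(from: center,
                               allowing: .estimatedPlane,
                               alignment: .horizontal).first {
    // One ARAnchor shared by both models.
    let arAnchor = ARAnchor(transform: result.worldTransform)
    arView.session.add(anchor: arAnchor)

    let sharedAnchor = AnchorEntity(anchor: arAnchor)
    sharedAnchor.addChild(cubeEntity)
    sharedAnchor.addChild(ballEntity)
    arView.scene.anchors.append(sharedAnchor)
}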

Where is the .camera AnchorEntity located?

When adding a child to my AnchorEntity(.camera), it appears as if the child is spawning behind my camera (meaning I can only see the child when I turn around). I have also tried to add a mesh to my anchor directly, but unfortunately ARKit / RealityKit does not render the mesh when you are inside of it (which, because it's centered around the camera, is theoretically always the case; it could also be that it's always located behind the screen, where the user is, and I'm never able to see it).
Also, oddly enough, the child entity does not move with the camera AnchorEntity despite the translation transform being set to (0, 0, 0).
My two questions are:
Is the .camera anchor actually located right where the physical iPad / camera is located or is it located further back (perhaps where the user would normally hold the iPad)?
How do you get a child entity of the AnchorEntity(.camera) to move as the iPad / camera moves in real space?
Answer to the first question
In the RealityKit and ARKit frameworks, ARCamera has a pivot point like other entities (nodes) have, and it's located at the point where the lens is attached to the camera body (at bayonet level). AnchorEntity(.camera) tethers to this pivot. In other words, the virtual camera and the real-world camera have that pivot point at approximately the same place.
So, if you attach RealityKit's AnchorEntity to the camera's pivot, you place it at the coordinates where the camera's bayonet is located, and this AnchorEntity(.camera) is tracked automatically, without a need to implement the session(_:didUpdate:) method.
However, if you attach ARKit's ARAnchor to the camera's pivot, you have to implement the session(_:didUpdate:) method to constantly update the position and orientation of that anchor for every ARFrame.
Answer to the second question
If you want to constantly update a model's position in RealityKit at 60 fps (as the ARCamera moves and rotates), use the following approach:
import ARKit
import RealityKit

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()

        let box = MeshResource.generateBox(size: 0.25)
        let material = SimpleMaterial(color: .systemPink, isMetallic: true)
        let boxEntity = ModelEntity(mesh: box, materials: [material])

        let cameraAnchor = AnchorEntity(.camera) // ARCamera anchor
        cameraAnchor.addChild(boxEntity)
        arView.scene.addAnchor(cameraAnchor)

        boxEntity.transform.translation = [0, 0, -0.5] // Box offset 0.5 m
    }
}
Or you can use ARKit's good old currentFrame instance property in the session(_:didUpdate:) delegate method:
extension ViewController: ARSessionDelegate {

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let transform = arView.session.currentFrame?.camera.transform
        else { return }

        let arkitAnchor = ARAnchor(transform: transform)
        arView.session.add(anchor: arkitAnchor) // add to session

        let anchor = AnchorEntity(anchor: arkitAnchor)
        anchor.addChild(boxEntity)
        arView.scene.addAnchor(anchor) // add to scene
    }
}

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!
    var boxEntity = ModelEntity(...)

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.session.delegate = self // Session's delegate
    }
}
To find out how to save the ARCamera Pose over time, read the following post.