RealityKit .nonAR installGestures is missing translation, and rotation is y-axis only - swift

I'm trying to reverse engineer the 3d Scanner App using RealityKit and am having real trouble getting just a basic model working with all gestures. When I run the code below, I get a cube with scale and rotation (about the y axis only), but no translation interaction. I'm trying to figure out how to get rotation about an arbitrary axis as well as translation, like in the 3d Scanner App above. I'm relatively new to iOS and read that one should use RealityKit since Apple isn't really supporting SceneKit anymore, but now I'm wondering whether SceneKit would be the way to go, as RealityKit is still young. Or perhaps someone knows of an extension to RealityKit ModelEntity objects that gives them better interaction capabilities.
I've got my app taking a scan with the LiDAR sensor and saving it to disk as a .usda mesh, per this tutorial, but when I load the mesh as a ModelEntity and attach gestures to it, I don't get any interaction at all.
The example code below recreates the limited gestures with a box ModelEntity, and includes some commented lines showing where I would load my .usda model from disk; again, while the model renders, it gets no gesture interaction at all.
Any help appreciated!
// ViewController.swift
import UIKit
import RealityKit

class ViewController: UIViewController {

    var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        arView = ARView(frame: view.frame, cameraMode: .nonAR, automaticallyConfigureSession: false)
        view.addSubview(arView)

        // create point light
        let pointLight = PointLight()
        pointLight.light.intensity = 10000

        // create light anchor
        let lightAnchor = AnchorEntity(world: [0, 0, 0])
        lightAnchor.addChild(pointLight)
        arView.scene.addAnchor(lightAnchor)

        // eventually want to load my model from disk and give it gestures.
        // guard let scanEntity = try? Entity.loadModel(contentsOf: urlOBJ) else {
        //     print("couldn't load scan in this format")
        //     return
        // }

        // entity to add gestures to
        let cubeMaterial = SimpleMaterial(color: .blue, isMetallic: true)
        let myEntity = ModelEntity(mesh: .generateBox(width: 0.1, height: 0.2, depth: 0.3, cornerRadius: 0.01, splitFaces: false),
                                   materials: [cubeMaterial])
        myEntity.generateCollisionShapes(recursive: false)

        let myAnchor = AnchorEntity(world: .zero)
        myAnchor.addChild(myEntity)

        // add collision and interaction
        let scanEntityBounds = myEntity.visualBounds(relativeTo: myAnchor)
        myEntity.collision = CollisionComponent(shapes: [.generateBox(size: scanEntityBounds.extents)
                                                             .offsetBy(translation: scanEntityBounds.center)])

        arView.installGestures(for: myEntity).forEach { gestureRecognizer in
            gestureRecognizer.addTarget(self, action: #selector(handleGesture(_:)))
        }
        arView.scene.addAnchor(myAnchor)

        // without this, get no gestures at all
        let camera = PerspectiveCamera()
        let cameraAnchor = AnchorEntity(world: [0, 0, 0.2])
        cameraAnchor.addChild(camera)
        arView.scene.addAnchor(cameraAnchor)
    }

    @objc private func handleGesture(_ recognizer: UIGestureRecognizer) {
        if recognizer is EntityTranslationGestureRecognizer {
            print("translation!")
        } else if recognizer is EntityScaleGestureRecognizer {
            print("scale!")
        } else if recognizer is EntityRotationGestureRecognizer {
            print("rotation!")
        }
    }
}

To extend a ModelEntity's gesture interaction capabilities, set up your own 2D gestures. There are 8 screen gestures in UIKit, and in SwiftUI you have 5 principal gestures plus the Sequence, Simultaneous and Exclusive compositions. A sketch of this approach is shown below.
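For example, here is a minimal sketch (assuming the arView and myEntity from the question, with the camera looking down the world's -Z axis as in the .nonAR setup) of a UIKit pan gesture that translates the hit entity; the point-to-metre factor is an arbitrary value to tune:

// Register the custom pan recognizer, e.g. at the end of viewDidLoad().
let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
arView.addGestureRecognizer(pan)

@objc func handlePan(_ recognizer: UIPanGestureRecognizer) {
    // Hit-test the screen point; this requires the entity to have a CollisionComponent.
    guard let entity = arView.entity(at: recognizer.location(in: arView)) else { return }
    let translation = recognizer.translation(in: arView)
    // Map screen points to metres in the world's XY plane (the 0.001 factor is a guess to tune).
    entity.position += SIMD3<Float>(Float(translation.x), -Float(translation.y), 0) * 0.001
    recognizer.setTranslation(.zero, in: arView)
}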

From what I have understood, the gestures are working for the box but not for your .usdz file/model. If this is the case, the issue is that the model does not have a collision mesh (HasCollision). If you are using Reality Composer to edit your models, you could do the following:
click on the model
under the Physics dropdown, click Participate
under Collision Shape select Automatic
Overall, make sure that the model has collision and that, in code, you cast it to a type that has collision:
let myEntity = try? Entity.loadModel(named: "fileName") as! HasCollision
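Alternatively, here is a minimal sketch (reusing the urlOBJ placeholder from the question's commented-out code) of giving a loaded model its collision shapes in code, so installGestures can hit-test it:

if let scanEntity = try? Entity.loadModel(contentsOf: urlOBJ) {
    // ModelEntity already conforms to HasCollision; it just needs actual shapes.
    scanEntity.generateCollisionShapes(recursive: true)
    arView.installGestures([.translation, .rotation, .scale], for: scanEntity)
    let anchor = AnchorEntity(world: .zero)
    anchor.addChild(scanEntity)
    arView.scene.addAnchor(anchor)
}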

Related

Applying downward force to an object using RealityKit

Here is my previous question about applying force to a certain point of an AR object in general, which had a perfect answer.
With a little bit of tinkering I have managed to apply force to a given point with exactly the effect I wanted. Let me also show some code.
I get the AR object from Experience like:
if let skateAnchor = try? Experience.loadSkateboard(),
   let skateEntity = skateAnchor.skateboard {
    guard let entity = skateEntity as? HasPhysicsBody else { return }
    skateAnchor.generateCollisionShapes(recursive: true)
    entity.collision?.filter.mask = [.sceneUnderstanding]
    skateboard = entity
}
Afterwards I set up the plane and the LiDAR scanner and add some gestures to it like:
let arViewTap = UITapGestureRecognizer(target: self,
                                       action: #selector(tapped(sender:)))
arView.addGestureRecognizer(arViewTap)

let arViewLongPress = UILongPressGestureRecognizer(target: self,
                                                   action: #selector(longPressed(sender:)))
arView.addGestureRecognizer(arViewLongPress)
So far so good, on tap gesture I apply the logic from the previously linked answer and apply force impulse like:
if let sk8 = skateboard as? HasPhysics {
    sk8.applyImpulse(direction, at: position, relativeTo: nil)
}
My issue comes with my "catching" logic, where I do want to use the long press, and apply downward force to my skateboard AR object like this:
@objc func longPressed(sender: UILongPressGestureRecognizer) {
    if sender.state == .began || sender.state == .changed {
        let location = sender.location(in: arView)
        if arView.entity(at: location) is HasPhysics {
            if let ray = arView.ray(through: location) {
                let results = arView.scene.raycast(origin: ray.origin,
                                                   direction: ray.direction,
                                                   length: 100.0,
                                                   query: .nearest,
                                                   mask: .all,
                                                   relativeTo: nil)
                if let _ = results.first,
                   let position = results.first?.position,
                   let normal = results.first?.normal {
                    // test different kind of forces
                    let direction = SIMD3<Float>(0, -20, 0)
                    if let sk8 = skateboard as? HasPhysics {
                        sk8.addForce(direction, at: position, relativeTo: nil)
                    }
                }
            }
        }
    }
}
Right now I know that I am ignoring the raycast results, but this is in a pure development state. My issue is that when I apply a positive/negative x/z force the object responds well: it slides back and forth or left and right. Positive y also works, dragging the board into the air. The only error-prone force direction, and the one I am striving to achieve, is the downward-facing negative y. The object just sits there with no effect at all.
Let me also share how my object is defined inside Reality Composer:
Ollie trick
In real life, if you shift your entire body's weight to the nose of the skateboard's deck (like doing the Ollie Maneuver), the skateboard's center of mass shifts from the middle towards the point where the force is being applied. In RealityKit, if you need to tear the rear (front) wheels of the skateboard off the floor, move the model's center of mass towards the slope.
The repositioning of the center of mass occurs in a local coordinate system.
import SwiftUI
import RealityKit

struct ContentView : View {
    var body: some View {
        ARViewContainer().ignoresSafeArea()
    }
}

struct ARViewContainer: UIViewRepresentable {

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.debugOptions = .showPhysics   // shape visualization

        let scene = try! Experience.loadScene()
        let name = "skateboard_01_base_stylized_lod0"

        typealias ModelPack = ModelEntity & HasPhysicsBody & HasCollision

        let model = scene.findEntity(named: name) as! ModelPack
        model.physicsBody = .init()
        model.generateCollisionShapes(recursive: true)
        model.physicsBody?.massProperties.centerOfMass.position = [0, 0, -27]

        arView.scene.anchors.append(scene)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) { }
}
Physics shape
The second problem you need to solve is replacing the physics body's box shape (RealityKit and Reality Composer generate this type of shape by default). Quite obviously, the shape cannot be a monolithic box, because a box-shaped body does not allow the force to be applied appropriately. You need a shape similar to the outline of the model.
So, you can use the following code to create a custom shape:
(four spheres for wheels and box for deck)
let shapes: [ShapeResource] = [
.generateBox(size: [ 20, 4, 78])
.offsetBy(translation: [ 0.0, 11, 0.0]),
.generateSphere(radius: 3.1)
.offsetBy(translation: [ 7.5, 3, 21.4]),
.generateSphere(radius: 3.1)
.offsetBy(translation: [ 7.5, 3,-21.4]),
.generateSphere(radius: 3.1)
.offsetBy(translation: [-7.5, 3, 21.4]),
.generateSphere(radius: 3.1)
.offsetBy(translation: [-7.5, 3,-21.4])
]
// model.physicsBody = PhysicsBodyComponent(shapes: shapes, mass: 4.5)
model.collision = CollisionComponent(shapes: shapes)
P.S.
Reality Composer model's settings (I used Xcode 14.0 RC 1).

Where is the .camera AnchorEntity located?

When adding a child to my AnchorEntity(.camera), it appears as if the child is spawning behind my camera (meaning I can only see my child when I turn around). I have also tried to add a mesh to my anchor directly, but unfortunately ARKit / RealityKit does not render the mesh when you are inside of it (which, because it's centered around the camera, is theoretically always the case; however, it could also be that it's always located behind the screen [where the user is] and I'm never able to see it).
Also, oddly enough the child entity does not move with the camera AnchorEntity despite setting the translation transform to (0,0,0).
My two questions are:
Is the .camera anchor actually located right where the physical iPad / camera is located or is it located further back (perhaps where the user would normally hold the iPad)?
How do you get a child entity of the AnchorEntity(.camera) to move as the iPad / camera moves in real space?
Answer to the first question
In the RealityKit and ARKit frameworks, ARCamera has a pivot point like other entities (nodes) have, and it's located at the point where the lens is attached to the camera body (at bayonet level). This pivot can tether AnchorEntity(.camera). In other words, the virtual camera and the real-world camera have that pivot point at approximately the same place.
So, if you attach RealityKit's AnchorEntity to the camera's pivot, you place it at the coordinates where the camera's bayonet is located. This AnchorEntity(.camera) will be tracked automatically, without any need to implement the session(_:didUpdate:) method.
However, if you attach ARKit's ARAnchor to the camera's pivot, you have to implement the session(_:didUpdate:) method to constantly update the position and orientation of that anchor for every ARFrame.
Answer to the second question
If you want to constantly update the model's position in RealityKit at 60 fps (as the ARCamera moves and rotates), use the following approach:
import ARKit
import RealityKit

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()

        let box = MeshResource.generateBox(size: 0.25)
        let material = SimpleMaterial(color: .systemPink, isMetallic: true)
        let boxEntity = ModelEntity(mesh: box, materials: [material])

        let cameraAnchor = AnchorEntity(.camera)        // ARCamera anchor
        cameraAnchor.addChild(boxEntity)
        arView.scene.addAnchor(cameraAnchor)

        boxEntity.transform.translation = [0, 0, -0.5]  // Box offset 0.5 m
    }
}
Or you can use ARKit's great old .currentFrame instance property in session(_:didUpdate:) delegate method:
extension ViewController: ARSessionDelegate {

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let transform = arView.session.currentFrame?.camera.transform
        else { return }

        let arkitAnchor = ARAnchor(transform: transform)
        arView.session.add(anchor: arkitAnchor)     // add to session

        let anchor = AnchorEntity(anchor: arkitAnchor)
        anchor.addChild(boxEntity)
        arView.scene.addAnchor(anchor)              // add to scene
    }
}

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!
    var boxEntity = ModelEntity(...)

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.session.delegate = self              // Session's delegate
    }
}
To find out how to save the ARCamera Pose over time, read the following post.
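For reference, a minimal sketch (an assumption, not taken from that post) of collecting the camera pose for every rendered frame via the session delegate:

// e.g. add to the ViewController that is already the session's delegate
var cameraPoses: [simd_float4x4] = []

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // fires for every rendered ARFrame (roughly 60 fps)
    cameraPoses.append(frame.camera.transform)
}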

How do I make an entity a physics entity in RealityKit?

I am not able to figure out how to make the "ball" entity a physics entity/body and apply a force to it.
// I'm using UIKit for the user interface and RealityKit +
// the models made in Reality Composer for the Augmented Reality and code
import RealityKit
import ARKit

class ViewController: UIViewController {

    var ball: (Entity & HasPhysics)? {
        try? Entity.load(named: "golfball") as? Entity & HasPhysics
    }

    @IBOutlet var arView: ARView!

    // referencing the play now button on the home screen
    @IBAction func playNow(_ sender: Any) { }

    // referencing the slider in the AR View - this slider will be used to
    // control the power of the swing. The slider values range from 10% to
    // 100% of swing power with a default value of 55%. The user will have
    // to gain experience in the game to know how much power to use.
    @IBAction func slider(_ sender: Any) { }

    // The following code will fire when the view loads
    override func viewDidLoad() {
        super.viewDidLoad()

        // defining the Anchor - it looks for a flat surface .3 by .3
        // meters so about a foot by a foot - on this surface, it anchors
        // the golf course and ball when you tap
        let anchor = AnchorEntity(plane: .horizontal, minimumBounds: [0.3, 0.3])
        // placing the anchor in the scene
        arView.scene.addAnchor(anchor)

        // defining my golf course entity - using modelentity so it
        // participates in the physics of the scene
        let entity = try? ModelEntity.load(named: "golfarnew")
        // defining the ball entity - again using modelentity so it
        // participates in the physics of the scene
        let ball = try? ModelEntity.load(named: "golfball")

        // loading my golf course entity
        anchor.addChild(entity!)
        // loading the golf ball
        anchor.addChild(ball!)

        // applying a force to the ball at the ball's position and the
        // force is relative to the ball
        ball.physicsBody(SIMD3(1.0, 1.0, 1.0), at: ball.position, relativeTo: ball)
        // sounds, add physics body to ball, iPad for shot direction,
        // connect slider to impulse force
    }
}
Use the following code to find out how to implement RealityKit's physics.
Pay particular attention: Participates in Physics is ON in Reality Composer.
import ARKit
import RealityKit

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        let boxScene = try! Experience.loadBox()
        let secondBoxAnchor = try! Experience.loadBox()

        let boxEntity = boxScene.steelBox as! (Entity & HasPhysics)

        let kinematics: PhysicsBodyComponent = .init(massProperties: .default,
                                                     material: nil,
                                                     mode: .kinematic)

        let motion: PhysicsMotionComponent = .init(linearVelocity: [0.1, 0, 0],
                                                   angularVelocity: [3, 3, 3])
        boxEntity.components.set(kinematics)
        boxEntity.components.set(motion)

        let anchor = AnchorEntity()
        anchor.addChild(boxEntity)
        arView.scene.addAnchor(anchor)
        arView.scene.addAnchor(secondBoxAnchor)

        print(boxEntity.isActive)   // Entity must be active!
    }
}
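The kinematic example above moves the box with a velocity component. For the original golf-ball question, a different mode is needed; here is a minimal sketch (not the answer's code; the "golfball" model name comes from the question, while the impulse value is an assumption) of making the ball a dynamic body and kicking it with an impulse:

if let ball = try? Entity.loadModel(named: "golfball") {
    // a dynamic body is moved by forces and impulses, unlike the kinematic one above
    ball.generateCollisionShapes(recursive: true)
    ball.physicsBody = PhysicsBodyComponent(massProperties: .default,
                                            material: .default,
                                            mode: .dynamic)
    let ballAnchor = AnchorEntity(plane: .horizontal)
    ballAnchor.addChild(ball)
    arView.scene.addAnchor(ballAnchor)

    // strength is arbitrary; hook it up to the slider value
    // (in a real app, wait until the entity is anchored and active before firing)
    ball.applyLinearImpulse([0, 0, -0.3], relativeTo: nil)
}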
Also, look at THIS POST to find out how to implement RealityKit's physics with a custom class.

Where can I set the maximum number of markers for an ARView?

I'm using multiple unique marker anchors in a scene that each get a ModelEntity displayed on them. I have no problem detecting each marker individually, but once one is tracked and the model appears, the others won't track. If the tracked marker moves out of frame then suddenly another marker will start being tracked.
My suspicion is that there exists a setting for the max number of markers and it's set to 1. (Like the maximumNumberOfTrackedImages from SceneKit.) Is there a setting I'm missing here, is this a limitation of RealityKit, or am I just messing something up when I add my anchors to the scene?
I'm calling the following function for each item in an array:
class RealityViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()

        let arView = ARView(frame: UIScreen.main.bounds)
        view.addSubview(arView)

        let targets = ["image1", "image2", "image3"]

        for target in targets {
            addTarget(target: target, arView: arView)
        }
    }

    func addTarget(target: String, arView: ARView) {
        let imageAnchor = AnchorEntity(.image(group: "Markers", name: target))
        arView.scene.addAnchor(imageAnchor)

        let plane = MeshResource.generatePlane(width: 0.05, height: 0.05, cornerRadius: 0.0)
        let material = SimpleMaterial(color: .blue, roughness: 1.0, isMetallic: false)
        let model = ModelEntity(mesh: plane, materials: [material])
        imageAnchor.addChild(model)
    }
}
Update:
While @ARGeo's answer did solve the original question, during further testing I found that with the updated code I was only able to track a maximum of 4 targets at a time. Again, I'm not sure whether this is a hard limit of RealityKit, but if anyone has any insight please add it to the accepted answer.
Below you can see only 4 of 6 unique markers being tracked:
There's no "number of markers being tracked" property in ARKit and RealityKit.
So, to correct the situation, you need to use this code for adding anchors to the ARView:
arView.scene.anchors.append(imageAnchor)
You also might try this code for the for-in loop (because the Xcode 11 beta might run the loop incorrectly):
for i in 0..<targets.count {
addTarget(target: targets[i], arView: arView)
}
P.S.
Look at this post. ARKit 5.0 is now able to track more than 4 images at a time (at the moment, up to 100 images simultaneously).
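For reference, a minimal sketch (the "Markers" resource group name comes from the question; the limit of 6 is arbitrary) of running your own configuration with a higher image-tracking limit instead of relying on the automatically configured session:

import ARKit

let config = ARWorldTrackingConfiguration()
if let markers = ARReferenceImage.referenceImages(inGroupNamed: "Markers", bundle: nil) {
    config.detectionImages = markers
    config.maximumNumberOfTrackedImages = 6   // raise the simultaneous-tracking limit
}
// running a custom configuration overrides ARView's automatic session configuration
arView.session.run(config)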

SceneKit – How to get animations for a .dae model?

OK, I am working with ARKit and SceneKit here, and despite looking at the other questions dealing with just SceneKit, I am having trouble taking a model in .dae format and loading in various animations to make that model run - now that we're in iOS 11, it seems some solutions don't work.
Here is how I get my model - from a base .dae scene where no animations are applied. I am importing these with Maya -
var modelScene = SCNScene(named: "art.scnassets/ryderFinal3.dae")!
if let d = modelScene.rootNode.childNodes.first {
    theDude.node = d
    theDude.setupNode()
}
Then in Dude class:
func setupNode() {
    node.scale = SCNVector3(x: modifier, y: modifier, z: modifier)
    center(node: node)
}
The scaling and centering is needed because my model was just not at the origin. That worked. Now, with a different scene called "Idle.dae", I try to load in an animation to later run on the model:
func animationFromSceneNamed(path: String) -> CAAnimation? {
    let scene = SCNScene(named: path)
    var animation: CAAnimation?
    scene?.rootNode.enumerateChildNodes({ child, stop in
        if let animKey = child.animationKeys.first {
            animation = child.animation(forKey: animKey)
            stop.pointee = true
        }
    })
    return animation
}
I was going to do this for all the animation scenes that I import into Xcode, and store all the animations in
var animations = [CAAnimation]()
First, Xcode says animation(forKey:) is deprecated. Second, this does not work: it seems (from what I can tell) to de-center and de-scale my model back to the huge size it was. It screws up the model's position; I expect that an animation which makes the model move, for example, would make the instantiated model in my game snap to that same position.
Other attempts cause crashes. I am very new to SceneKit and trying to get a grip on how to properly animate a .dae model that I instantiate anywhere in the scene:
How in iOS11 does one load in an array of animations to apply to their SCNNode?
How do you make it so those animations are run on the model WHEREVER THE MODEL IS (not snapping it to some other position)?
First of all, I should confirm that some Core Animation-based methods, like the animation(forKey:) instance method, really are deprecated in iOS and macOS. But parts of that Core Animation functionality are now implemented in SceneKit and other modules. In iOS 11+ and macOS 10.13+ you can use the SCNAnimation class:
let animation = CAAnimation(scnAnimation: SCNAnimation)
and the SCNAnimation class has three useful initializers:
SCNAnimation(caAnimation: CAAnimation)
SCNAnimation(contentsOf: URL)
SCNAnimation(named: String)
In addition, you can use not only animations baked into the .dae file format, but also ones in .abc, .scn and .usdz.
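For instance, here is a minimal sketch (reusing the "Idle.dae" scene name and the theDude.node from the question; everything else is an assumption) of extracting a non-deprecated SCNAnimationPlayer instead of a CAAnimation:

func animationPlayerFromScene(named name: String) -> SCNAnimationPlayer? {
    guard let scene = SCNScene(named: name) else { return nil }
    var player: SCNAnimationPlayer?
    scene.rootNode.enumerateChildNodes { child, stop in
        if let key = child.animationKeys.first {
            // animationPlayer(forKey:) is the non-deprecated counterpart of animation(forKey:)
            player = child.animationPlayer(forKey: key)
            stop.pointee = true
        }
    }
    return player
}

// Usage: attach the player to the already placed node, wherever it is,
// so the model animates in place instead of snapping elsewhere.
// if let idlePlayer = animationPlayerFromScene(named: "art.scnassets/Idle.dae") {
//     theDude.node.addAnimationPlayer(idlePlayer, forKey: "idle")
//     idlePlayer.play()
// }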
Also, you can use the SCNSceneSource class (iOS 8+ and macOS 10.8+) to examine the contents of an SCNScene file or to selectively extract certain elements of a scene without keeping the entire scene and all the assets it contains.
Here's how code using SCNSceneSource might look:
@IBOutlet var sceneView: ARSCNView!

var animations = [String: CAAnimation]()
var idle: Bool = true

override func viewDidLoad() {
    super.viewDidLoad()
    sceneView.delegate = self
    let scene = SCNScene()
    sceneView.scene = scene
    loadMultipleAnimations()
}

func loadMultipleAnimations() {
    let idleScene = SCNScene(named: "art.scnassets/model.dae")!
    let node = SCNNode()

    for child in idleScene.rootNode.childNodes {
        node.addChildNode(child)
    }
    node.position = SCNVector3(0, 0, -5)
    node.scale = SCNVector3(0.45, 0.45, 0.45)
    sceneView.scene.rootNode.addChildNode(node)

    loadAnimation(withKey: "walking",
                  sceneName: "art.scnassets/walk_straight",
                  animationIdentifier: "walk_version02")
}
...
func loadAnimation(withKey: String, sceneName: String, animationIdentifier: String) {
    let sceneURL = Bundle.main.url(forResource: sceneName, withExtension: "dae")
    let sceneSource = SCNSceneSource(url: sceneURL!, options: nil)

    if let animationObj = sceneSource?.entryWithIdentifier(animationIdentifier,
                                                           withClass: CAAnimation.self) {
        animationObj.repeatCount = 1
        animationObj.fadeInDuration = CGFloat(1)
        animationObj.fadeOutDuration = CGFloat(0.5)
        animations[withKey] = animationObj
    }
}
...
func playAnimation(key: String) {
    sceneView.scene.rootNode.addAnimation(animations[key]!, forKey: key)
}
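For completeness, a hypothetical call site (the "walking" key matches the one registered above); the fadeOutDuration variant of removeAnimation is the non-deprecated one:

playAnimation(key: "walking")
// later, to stop it with a smooth blend-out:
sceneView.scene.rootNode.removeAnimation(forKey: "walking", fadeOutDuration: 0.5)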