How to prevent Entities from overlapping? - swift

I'm trying to create an AR experience with RealityKit but I'm finding that by default, entities will move into each other and overlap when they are moved by user interaction.
I want to prevent the objects from overlapping and entering each other, so that when they are moved by the user they just hit/bounce off without overlapping.
I'm loading the entities from a Reality Composer file like this and adding them to the scene (inside a do/catch block, with other code not shown in this simplified version):
let entity = try Experience.loadBallSort()
anchorEntity.addChild(entity)
// anchorEntity is an AnchorEntity that is already attached to the scene
I'm enabling user interaction with the default gestures like this; this is how the objects end up overlapping, because they don't stop once they touch:
arView.installGestures([.rotation, .translation], for: entity)
Within Reality Composer, I've got Physics enabled with a Static motion type and the default physics material/collision shape for each object. I've also tried using generateCollisionShapes like this, but it doesn't change the collision behaviour:
entity.generateCollisionShapes(recursive: true)
How can I prevent entities from overlapping in RealityKit?

There's no overlapping when collisions are set up properly
To implement such a scenario, let's take two objects: one dynamic and the other kinematic.
PhysicsBodyMode.dynamic
Forces and collisions control body movement.
PhysicsBodyMode.kinematic
The user controls body movement. This type of physics body is unaffected by forces or collisions, but it can cause collisions that affect other bodies when it's moved.
Code:
var arView = ARView(frame: .zero)
arView.frame = self.view.frame
self.view.addSubview(arView)
let scene = try! Experience.loadModels()
// Kinematic
let red = scene.redBox!.children[0] as! (Entity & HasCollision & HasPhysicsBody)
red.physicsBody = .init()
red.physicsBody?.massProperties.mass = 5
red.physicsBody?.mode = .kinematic
red.generateCollisionShapes(recursive: true)
arView.installGestures([.translation], for: red)
// Dynamic
let green = scene.greenCube!.children[0] as! (Entity & HasCollision & HasPhysicsBody)
green.physicsBody = .init()
green.physicsBody?.massProperties.mass = 5
green.physicsBody?.mode = .dynamic
green.generateCollisionShapes(recursive: true)
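If you also want a noticeable bounce when the bodies hit, rather than the dynamic cube just stopping, you can give the dynamic body a physics material with some restitution. A minimal sketch with arbitrary values, used in place of the plain .init() above:
// Optional: a bouncier dynamic body. The friction/restitution values are placeholders.
let bouncy = PhysicsMaterialResource.generate(friction: 0.5, restitution: 0.7)
green.physicsBody = PhysicsBodyComponent(massProperties: .default,
                                         material: bouncy,
                                         mode: .dynamic)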
P.S.
Don't apply physics in Reality Composer; apply it programmatically in RealityKit.

Related

RealityKit .nonAR installGestures is missing translation and rotation is y axis only

I'm trying to reverse engineer the 3D Scanner App using RealityKit and am having real trouble getting just a basic model working with all gestures. When I run the code below, I get a cube with scale and rotation (about the y axis only), but no translation interaction. I'm trying to figure out how to get rotation about an arbitrary axis as well as translation, like in the 3D Scanner App above. I'm relatively new to iOS and read that one should use RealityKit since Apple isn't really supporting SceneKit anymore, but am now wondering whether SceneKit would be the way to go, as RealityKit is still young. Or perhaps someone knows of an extension to RealityKit ModelEntity objects that gives them better interaction capabilities.
I've got my app taking a scan with the LiDAR sensor and saving it to disk as a .usda mesh, per this tutorial, but when I load the mesh as a ModelEntity and attach gestures to it, I don't get any interaction at all.
The below example code recreates the limited gestures for a box ModelEntity, and I have some commented lines showing where I would load my .usda model from disk, but again while it will render, it gets no interaction with gestures.
Any help appreciated!
// ViewController.swift
import UIKit
import RealityKit
class ViewController: UIViewController {
var arView: ARView!
override func viewDidLoad() {
super.viewDidLoad()
// Do any additional setup after loading the view.
arView = ARView(frame: view.frame, cameraMode: .nonAR, automaticallyConfigureSession: false)
view.addSubview(arView)
// create pointlight
let pointLight = PointLight()
pointLight.light.intensity = 10000
// create light anchor
let lightAnchor = AnchorEntity(world: [0, 0, 0])
lightAnchor.addChild(pointLight)
arView.scene.addAnchor(lightAnchor)
// eventually want to load my model from disk and give it gestures.
// guard let scanEntity = try? Entity.loadModel(contentsOf: urlOBJ) else {
// print("couldn't load scan in this format")
// return
// }
// entity to add gestures to
let cubeMaterial = SimpleMaterial(color: .blue, isMetallic: true)
let myEntity = ModelEntity(mesh: .generateBox(width: 0.1, height: 0.2, depth: 0.3, cornerRadius: 0.01, splitFaces: false), materials: [cubeMaterial])
myEntity.generateCollisionShapes(recursive: false)
let myAnchor = AnchorEntity(world: .zero)
myAnchor.addChild(myEntity)
// add collision and interaction
let scanEntityBounds = myEntity.visualBounds(relativeTo: myAnchor)
myEntity.collision = CollisionComponent(shapes: [.generateBox(size: scanEntityBounds.extents).offsetBy(translation: scanEntityBounds.center)])
arView.installGestures(for: myEntity).forEach { gestureRecognizer in
gestureRecognizer.addTarget(self, action: #selector(handleGesture(_:)))
}
arView.scene.addAnchor(myAnchor)
// without this, get no gestures at all
let camera = PerspectiveCamera()
let cameraAnchor = AnchorEntity(world: [0, 0, 0.2])
cameraAnchor.addChild(camera)
arView.scene.addAnchor(cameraAnchor)
}
@objc private func handleGesture(_ recognizer: UIGestureRecognizer) {
if recognizer is EntityTranslationGestureRecognizer {
print("translation!")
} else if recognizer is EntityScaleGestureRecognizer {
print("scale!")
} else if recognizer is EntityRotationGestureRecognizer {
print("rotation!")
}
}
}
To extend ModelEntity's gesture interaction capabilities, set up your own 2D gestures. There are eight screen gestures in UIKit, and in SwiftUI you have five principal gestures plus the Sequence, Simultaneous and Exclusive variations.
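For example, here's a minimal sketch of one such custom gesture: a UIPanGestureRecognizer added to the ARView that rotates the model about arbitrary axes (something the built-in rotation gesture won't do). It assumes the myEntity from the question's code has been promoted to a stored property; the 0.01 sensitivity factor is an arbitrary placeholder.
// Add to the ViewController above. Horizontal drags rotate about Y, vertical drags about X.
func setupCustomRotationGesture() {
    let pan = UIPanGestureRecognizer(target: self, action: #selector(handleCustomPan(_:)))
    arView.addGestureRecognizer(pan)
}

@objc private func handleCustomPan(_ recognizer: UIPanGestureRecognizer) {
    let translation = recognizer.translation(in: arView)
    let aroundY = simd_quatf(angle: Float(translation.x) * 0.01, axis: [0, 1, 0])
    let aroundX = simd_quatf(angle: Float(translation.y) * 0.01, axis: [1, 0, 0])
    // Compose the incremental rotation with the entity's current orientation.
    myEntity.transform.rotation = aroundY * aroundX * myEntity.transform.rotation
    // Reset so each callback delivers an incremental delta.
    recognizer.setTranslation(.zero, in: arView)
}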
From what I have understood, the gestures are working for the box but not for your .usdz file/model. If this is the case, the issue is that the model does not have a collision mesh (HasCollision). If you are using Reality Composer to edit your models, you could do the following:
click on the model
under the Physics dropdown, click Participate
under collision shape select automatic
Overall, make sure that the model has collision and that, in your code, you cast it to a type that has collision:
let myEntity = try? Entity.loadModel(named: "fileName") as! HasCollision

How to install gestures for Reality Composer?

I added a USDZ with animation in Reality Composer (.rcproject), then loaded the scene and added it to the ARView.
I tried to install gestures like rotation and scale, but they won't work:
let ganGes = gangnim?.gnagnumObject as? (Entity & HasCollision)
arView.installGestures([.rotation,.translation,.scale], for: ganGes!)
How can I install gestures for a Reality Composer scene?
To implement RealityKit's translate, rotate and scale gestures, you also need to call the generateCollisionShapes(recursive:) instance method to prepare the model's shape for collision detection.
guard let ganGes = gangnim.gnagnumObject as? ModelEntity else { return }
ganGes.generateCollisionShapes(recursive: true)
arView.installGestures([.all], for: ganGes as (Entity & HasCollision))

How do I spin and add a linear force to an Entity loaded from Reality Composer?

I've constructed a scene in Reality Composer that has a ball that starts the scene floating in the air. I'm attempting to programmatically throw the ball while simultaneously spinning it.
I tried to do this through behaviors in Reality Composer, but I can't get both behaviors to work simultaneously; also, the ball immediately falls to the ground once I start the animation.
My second attempt was to forgo the behavior route and do this programmatically, but I cannot add a force, because the loaded ball is an Entity and not a ModelEntity. What am I doing wrong?
I want to spin the ball, apply a force and enable gravity simultaneously.
Use the following code to create a custom class Physics that conforms to RealityKit's physics protocols (for controlling mass, velocity and kinematics/dynamics mode).
Here's what the physics protocols' conformance hierarchy looks like:
HasPhysics: HasPhysicsBody, HasPhysicsMotion
|
|- HasCollision: HasTransform
Here's the code:
import ARKit
import RealityKit
class Physics: Entity, HasPhysicsBody, HasPhysicsMotion {
required init() {
super.init()
self.physicsBody = PhysicsBodyComponent(massProperties: .default,
material: nil,
mode: .kinematic)
self.physicsMotion = PhysicsMotionComponent(linearVelocity: [0.1, 0, 0],
angularVelocity: [1, 3, 5])
}
}
Then create an instance of that class in ViewController:
class ViewController: UIViewController {
@IBOutlet var arView: ARView!
override func viewDidLoad() {
super.viewDidLoad()
let physics = Physics()
arView.backgroundColor = .darkGray
let boxAnchor = try! Experience.loadBox()
boxAnchor.steelBox?.scale = [5, 5, 5]
let boxEntity = boxAnchor.children[0].children[0].children[0]
boxEntity.name = "CUBE"
print(boxEntity)
let kinematicComponent: PhysicsBodyComponent = physics.physicsBody!
let motionComponent: PhysicsMotionComponent = physics.physicsMotion!
boxEntity.components.set(kinematicComponent)
boxEntity.components.set(motionComponent)
arView.scene.anchors.append(boxAnchor)
}
}
Also, look at THIS POST to find out how to implement physics without a custom Physics class.
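The gist of that componentized approach (a rough sketch, not necessarily the linked post's exact code) is to build the components inline and assign them straight to the entity, with no subclass involved:
// A minimal sketch: same result as the Physics class above, without subclassing.
let bodyComponent = PhysicsBodyComponent(massProperties: .default,
                                         material: nil,
                                         mode: .kinematic)
let motionComponent = PhysicsMotionComponent(linearVelocity: [0.1, 0, 0],
                                             angularVelocity: [1, 3, 5])
boxEntity.components.set(bodyComponent)
boxEntity.components.set(motionComponent)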
To add a force, the entity needs to conform to RealityKit's physics protocols. To see if your Entity imported from Reality Composer can have forces applied to it, check whether (myEntity as? HasPhysics) returns nil or a value.
If that returns nil, make your own Entity subclass that conforms to the HasPhysics protocol, and set your entity as a child of it. If you want it to collide with other things in your scene, you'll also want the HasCollision protocol.
All the things you mentioned can be achieved from this point!
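A rough sketch of what that can look like once the physics body is in place, using the impulse API on HasPhysicsBody rather than physicsMotion velocities; ballScene stands for your loaded Reality Composer anchor, and the entity name and impulse values are placeholders:
// Assumes the ball node can be cast to ModelEntity (which already adopts
// RealityKit's physics protocols); "ball" and the numbers below are placeholders.
guard let ball = ballScene.findEntity(named: "ball") as? ModelEntity else { return }
ball.generateCollisionShapes(recursive: true)
ball.physicsBody = PhysicsBodyComponent(massProperties: .default,
                                        material: nil,
                                        mode: .dynamic)   // dynamic, so gravity applies
// One-shot impulses: throw it forward and up while spinning it.
ball.applyLinearImpulse([0, 1, -2], relativeTo: nil)
ball.applyAngularImpulse([0, 0, 3], relativeTo: nil)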

How do I programmatically move an ARAnchor?

I'm trying out the new ARKit to replace another similar solution I have. It's pretty great! But I can't seem to figure out how to move an ARAnchor programmatically. I want to slowly move the anchor to the left of the user.
Creating the anchor to be 2 meters in front of the user:
var translation = matrix_identity_float4x4
translation.columns.3.z = -2.0
let transform = simd_mul(currentFrame.camera.transform, translation)
let anchor = ARAnchor(transform: transform)
sceneView.session.add(anchor: anchor)
later, moving the object to the left/right of the user (x-axis)...
anchor.transform.columns.3.x = anchor.transform.columns.3.x + 0.1
repeated every 50 milliseconds (or whatever).
The above does not work because transform is a get-only property.
I need a way to change the position of an AR object in space relative to the user in a way that keeps the AR experience intact - meaning, if you move your device, the AR object will be moving but also won't be "stuck" to the camera like it's simply painted on, but moves like you would see a person move while you were walking by - they are moving and you are moving and it looks natural.
Please note the scope of this question relates only to how to move an object in space in relation to the user (ARAnchor), not in relation to a plane (ARPlaneAnchor) or to another detected surface (ARHitTestResult).
Thanks!
You don't need to move anchors. (hand wave) That's not the API you're looking for...
Adding ARAnchor objects to a session is effectively about "labeling" a point in real-world space so that you can refer to it later. The point (1,1,1) (for example) is always the point (1,1,1) — you can't move it someplace else because then it's not the point (1,1,1) anymore.
To make a 2D analogy: anchors are reference points, sort of like the bounds of a view. The system (or another piece of your code) tells the view where its boundaries are, and the view draws its content relative to those boundaries. Anchors in AR give you reference points you can use for drawing content in 3D.
What you're asking is really about moving (and animating the movement of) virtual content between two points. And ARKit itself really isn't about displaying or animating virtual content — there are plenty of great graphics engines out there, so ARKit doesn't need to reinvent that wheel. What ARKit does is provide a real-world frame of reference for you to display or animate content using an existing graphics technology like SceneKit or SpriteKit (or Unity or Unreal, or a custom engine built with Metal or GL).
Since you mentioned trying to do this with SpriteKit... beware, it gets messy. SpriteKit is a 2D engine, and while ARSKView provides some ways to shoehorn a third dimension in there, those ways have their limits.
ARSKView automatically updates the xScale, yScale, and zRotation of each sprite associated with an ARAnchor, providing the illusion of 3D perspective. But that applies only to nodes attached to anchors, and as noted above, anchors are static.
You can, however, add other nodes to your scene, and use those same properties to make those nodes match the ARSKView-managed nodes. Here's some code you can add/replace in the ARKit/SpriteKit Xcode template project to do that. We'll start with some basic logic to run a bouncing animation on the third tap (after using the first two taps to place anchors).
var anchors: [ARAnchor] = []
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
// Start bouncing on touch after placing 2 anchors (don't allow more)
if anchors.count > 1 {
startBouncing(time: 1)
return
}
// Create anchor using the camera's current position
guard let sceneView = self.view as? ARSKView else { return }
if let currentFrame = sceneView.session.currentFrame {
// Create a transform with a translation of 30 cm in front of the camera
var translation = matrix_identity_float4x4
translation.columns.3.z = -0.3
let transform = simd_mul(currentFrame.camera.transform, translation)
// Add a new anchor to the session
let anchor = ARAnchor(transform: transform)
sceneView.session.add(anchor: anchor)
anchors.append(anchor)
}
}
Then, some SpriteKit fun for making that animation happen:
var ballNode: SKLabelNode = {
let labelNode = SKLabelNode(text: "🏀")
labelNode.horizontalAlignmentMode = .center
labelNode.verticalAlignmentMode = .center
return labelNode
}()
func startBouncing(time: TimeInterval) {
guard
let sceneView = self.view as? ARSKView,
let first = anchors.first, let start = sceneView.node(for: first),
let last = anchors.last, let end = sceneView.node(for: last)
else { return }
if ballNode.parent == nil {
addChild(ballNode)
}
ballNode.setScale(start.xScale)
ballNode.zRotation = start.zRotation
ballNode.position = start.position
let scale = SKAction.scale(to: end.xScale, duration: time)
let rotate = SKAction.rotate(toAngle: end.zRotation, duration: time)
let move = SKAction.move(to: end.position, duration: time)
let scaleBack = SKAction.scale(to: start.xScale, duration: time)
let rotateBack = SKAction.rotate(toAngle: start.zRotation, duration: time)
let moveBack = SKAction.move(to: start.position, duration: time)
let action = SKAction.repeatForever(.sequence([
.group([scale, rotate, move]),
.group([scaleBack, rotateBack, moveBack])
]))
ballNode.removeAllActions()
ballNode.run(action)
}
Here's a video so you can see this code in action. You'll notice that the illusion only works as long as you don't move the camera — not so great for AR. When using SKAction, we can't adjust the start/end states of the animation while animating, so the ball keeps bouncing back and forth between its original (screen-space) positions/rotations/scales.
You could do better by animating the ball directly, but it's a lot of work. You'd need to, on every frame (or every view(_:didUpdate:for:) delegate callback):
Save off the updated position, rotation, and scale values for the anchor-based nodes at each end of the animation. You'll need to do this twice per didUpdate callback, because you'll get one callback for each node.
Work out position, rotation, and scale values for the node being animated, by interpolating between the two endpoint values based on the current time (a rough sketch of this follows the list).
Set the new attributes on the node. (Or maybe animate it to those attributes over a very short duration, so it doesn't jump too much in one frame?)
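A very rough sketch of that bookkeeping, assuming the scene is also the ARSKView's delegate (or that the delegate forwards these callbacks to it); the ping-pong timing math is illustrative, not tested:
// Endpoint nodes kept fresh by the didUpdate callback; the ball is interpolated in update(_:).
var startNode: SKNode?
var endNode: SKNode?
var animationStart: TimeInterval = 0
let animationDuration: TimeInterval = 1

func view(_ view: ARSKView, didUpdate node: SKNode, for anchor: ARAnchor) {
    // Called once per node per frame; stash the latest state of each endpoint.
    if anchor.identifier == anchors.first?.identifier { startNode = node }
    if anchor.identifier == anchors.last?.identifier { endNode = node }
}

override func update(_ currentTime: TimeInterval) {
    guard let start = startNode, let end = endNode else { return }
    if animationStart == 0 { animationStart = currentTime }
    // Progress that ping-pongs between 0 and 1.
    let elapsed = (currentTime - animationStart).truncatingRemainder(dividingBy: 2 * animationDuration)
    let t = CGFloat(elapsed < animationDuration ? elapsed / animationDuration
                                                : 2 - elapsed / animationDuration)
    // Interpolate position, scale, and rotation between the two endpoints.
    ballNode.position = CGPoint(x: start.position.x + (end.position.x - start.position.x) * t,
                                y: start.position.y + (end.position.y - start.position.y) * t)
    ballNode.setScale(start.xScale + (end.xScale - start.xScale) * t)
    ballNode.zRotation = start.zRotation + (end.zRotation - start.zRotation) * t
}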
That's kind of a lot of work to shoehorn a fake 3D illusion into a 2D graphics toolkit — hence my comments about SpriteKit not being a great first step into ARKit.
If you want 3D positioning and animation for your AR overlays, it's a lot easier to use a 3D graphics toolkit. Here's a repeat of the previous example, but using SceneKit instead. Start with the ARKit/SceneKit Xcode template, take the spaceship out, and paste the same touchesBegan function from above into the ViewController. (Change the as ARSKView casts to as ARSCNView, too.)
Then, some quick code for placing 2D billboarded sprites, matching via SceneKit the behavior of the ARKit/SpriteKit template:
// in global scope
func makeBillboardNode(image: UIImage) -> SCNNode {
let plane = SCNPlane(width: 0.1, height: 0.1)
plane.firstMaterial!.diffuse.contents = image
let node = SCNNode(geometry: plane)
node.constraints = [SCNBillboardConstraint()]
return node
}
// inside ViewController
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
// emoji to image based on https://stackoverflow.com/a/41021662/957768
let billboard = makeBillboardNode(image: "⛹️".image())
node.addChildNode(billboard)
}
Finally, adding the animation for the bouncing ball:
let ballNode = makeBillboardNode(image: "🏀".image())
func startBouncing(time: TimeInterval) {
guard
let sceneView = self.view as? ARSCNView,
let first = anchors.first, let start = sceneView.node(for: first),
let last = anchors.last, let end = sceneView.node(for: last)
else { return }
if ballNode.parent == nil {
sceneView.scene.rootNode.addChildNode(ballNode)
}
let animation = CABasicAnimation(keyPath: #keyPath(SCNNode.transform))
animation.fromValue = start.transform
animation.toValue = end.transform
animation.duration = time
animation.autoreverses = true
animation.repeatCount = .infinity
ballNode.removeAllAnimations()
ballNode.addAnimation(animation, forKey: nil)
}
This time the animation code is a lot shorter than the SpriteKit version.
Here's how it looks in action.
Because we're working in 3D to start with, we're actually animating between two 3D positions — unlike in the SpriteKit version, the animation stays where it's supposed to. (And without the extra work for directly interpolating and animating attributes.)

Trying to get Physics Shape That Matches the Graphical Representation

What I'm trying to achieve is for the node to have the same shape as its physics body/texture (the fire has a complicated shape), so that only the fireImage is touchable. So far, when I touch outside of the fireImage on the screen, it still makes a sound. It seems that I'm touching the rectangular node, but I want to touch only the sprite/texture.
Would appreciate your help.
The code is below:
import SpriteKit
import AVFoundation
private var backgroundMusicPlayer: AVAudioPlayer!
class GameScene2: SKScene {
var wait1 = SKAction.waitForDuration(1)
override func didMoveToView(view: SKView) {
setUpScenery()
}
private func setUpScenery() {
//I'm creating a Fire Object here and trying to set its Node a physicsBody/texture shape:
let fireLayerTexture = SKTexture(imageNamed: fireImage)
let fireLayer = SKSpriteNode(texture: fireLayerTexture)
fireLayer.anchorPoint = CGPointMake(1, 0)
fireLayer.position = CGPointMake(size.width, 0)
fireLayer.zPosition = Layer.Z4st
var firedown = SKAction.moveToY(-200, duration: 0)
var fireup1 = SKAction.moveToY(10, duration: 0.8)
var fireup2 = SKAction.moveToY(0, duration: 0.2)
fireLayer.physicsBody = SKPhysicsBody(texture: fireLayerTexture, size: fireLayer.texture!.size())
fireLayer.physicsBody!.affectedByGravity = false
fireLayer.name = "fireNode"
fireLayer.runAction(SKAction.sequence([firedown, wait1, fireup1, fireup2]))
addChild(fireLayer)
}
//Here, I'm calling a Node I want to touch. I assume that it has a shape of a PhysicsBody/texture:
override func touchesEnded(touches: NSSet, withEvent event: UIEvent) {
for touch: AnyObject in touches {
let touch = touches.anyObject() as UITouch
let touchLocation = touch.locationInNode(self)
let node: SKNode = nodeAtPoint(touchLocation)
if node.name == "fireNode" {
var playguitar = SKAction.playSoundFileNamed("fire.wav", waitForCompletion: true)
node.runAction(playguitar)
}
}
}
}
Physics bodies and their shapes are for physics — that is, for simulating things like collisions between sprites in your scene.
Touch handling (i.e. nodeAtPoint) doesn't go through physics. That's probably the way it should be — if you had a sprite with a very small collision surface, you might not necessarily want it to be difficult to touch, and you also might want to be able to touch nodes that don't have physics. So your physics body doesn't affect the behavior of nodeAtPoint.
An API that lets you define a hit-testing area for a node — that's independent of the node's contents or physics — might not be a bad idea, but such a thing doesn't currently exist in SpriteKit. They do take feature requests, though.
In the meantime, fine-grained hit testing is something you'd have to do yourself. There are at least a couple of ways to do it:
If you can define the touch area as a path (CGPath or UIBezierPath/NSBezierPath), like you would for creating an SKShapeNode or a physics body using bodyWithPolygonFromPath:, you can hit-test against the path. See CGPathContainsPoint (or the containsPoint: methods on UIBezierPath/NSBezierPath) for the kind of path you're dealing with, and be sure to convert to the right local node coordinate space first. (A rough sketch of this approach is at the end of this answer.)
If you want to hit-test against individual pixels... that'd be really slow, probably, but in theory the new SKMutableTexture class in iOS 8 / OS X v10.10 could help you. There's no saying you have to use the modifyPixelDataWithBlock: callback to actually change the texture data... you could use it once in setup to get your own copy of the texture data, then write your own hit-testing code to decide which RGBA values count as a "hit".
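For the path-based option, here's a rough sketch in current Swift; the oval firePath is just a placeholder for whatever path approximates the flame outline, defined in the fire node's local coordinate space:
// Placeholder path roughly outlining the fire image, in fireNode's local coordinates.
let firePath: CGPath = UIBezierPath(ovalIn: CGRect(x: -100, y: 0, width: 100, height: 150)).cgPath

override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first,
          let fireLayer = childNode(withName: "fireNode") else { return }
    // Convert the touch into the fire node's local coordinate space...
    let locationInFire = touch.location(in: fireLayer)
    // ...and only react when it falls inside the path, not the node's full rectangle.
    if firePath.contains(locationInFire) {
        fireLayer.run(SKAction.playSoundFileNamed("fire.wav", waitForCompletion: true))
    }
}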