Applying downward force to an object using RealityKit (Swift)

Here is my previous question about applying force at a certain point of an AR object, which got a perfect answer.
With a little bit of tinkering I have managed to apply force to a given point with exactly the effect I wanted. Let me also show some code.
I get the AR object from Experience like:
if let skateAnchor = try? Experience.loadSkateboard(),
   let skateEntity = skateAnchor.skateboard {
    guard let entity = skateEntity as? HasPhysicsBody else { return }
    skateAnchor.generateCollisionShapes(recursive: true)
    entity.collision?.filter.mask = [.sceneUnderstanding]
    skateboard = entity
}
Afterwards I set up the plane and the LiDAR scanner and add some gestures to it like:
let arViewTap = UITapGestureRecognizer(target: self,
                                       action: #selector(tapped(sender:)))
arView.addGestureRecognizer(arViewTap)
let arViewLongPress = UILongPressGestureRecognizer(target: self,
                                                   action: #selector(longPressed(sender:)))
arView.addGestureRecognizer(arViewLongPress)
So far so good. On the tap gesture I apply the logic from the previously linked answer and apply a force impulse:
if let sk8 = skateboard as? HasPhysics {
    sk8.applyImpulse(direction, at: position, relativeTo: nil)
}
My issue comes with my "catching" logic, where I want to use the long press and apply a downward force to my skateboard AR object like this:
@objc func longPressed(sender: UILongPressGestureRecognizer) {
    if sender.state == .began || sender.state == .changed {
        let location = sender.location(in: arView)
        if arView.entity(at: location) is HasPhysics {
            if let ray = arView.ray(through: location) {
                let results = arView.scene.raycast(origin: ray.origin,
                                                   direction: ray.direction,
                                                   length: 100.0,
                                                   query: .nearest,
                                                   mask: .all,
                                                   relativeTo: nil)
                if let _ = results.first,
                   let position = results.first?.position,
                   let normal = results.first?.normal {
                    // test different kinds of forces
                    let direction = SIMD3<Float>(0, -20, 0)
                    if let sk8 = skateboard as? HasPhysics {
                        sk8.addForce(direction, at: position, relativeTo: nil)
                    }
                }
            }
        }
    }
}
Right now I know that I am ignoring the raycast results, but this is in a pure development state. My issue is that when I apply positive or negative force along x or z the object responds well: it slides back and forth, or left and right. Positive y also works, dragging the board into the air. The only error-prone force direction, the very one I am striving for, is the downward-facing negative y: the object just sits there with no effect at all.
Let me also share how my object is defined inside Reality Composer:

Ollie trick
In real life, if you shift your entire body's weight to the nose of the skateboard's deck (like doing the Ollie Maneuver), the skateboard's center of mass shifts from the middle towards the point where the force is being applied. In RealityKit, if you need to tear the rear (front) wheels of the skateboard off the floor, move the model's center of mass towards the slope.
The repositioning of the center of mass occurs in a local coordinate system.
import SwiftUI
import RealityKit

struct ContentView : View {
    var body: some View {
        ARViewContainer().ignoresSafeArea()
    }
}

struct ARViewContainer: UIViewRepresentable {

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.debugOptions = .showPhysics    // shape visualization
        let scene = try! Experience.loadScene()
        let name = "skateboard_01_base_stylized_lod0"
        typealias ModelPack = ModelEntity & HasPhysicsBody & HasCollision
        let model = scene.findEntity(named: name) as! ModelPack
        model.physicsBody = .init()
        model.generateCollisionShapes(recursive: true)
        // shift the center of mass towards the tail (in the model's local coordinates)
        model.physicsBody?.massProperties.centerOfMass.position = [0, 0, -27]
        arView.scene.anchors.append(scene)
        return arView
    }
    func updateUIView(_ uiView: ARView, context: Context) { }
}
Physics shape
The second problem that you need to solve is replacing the model's box-shaped physics body (RealityKit and Reality Composer generate this type of shape by default). Quite obviously, a monolithic box does not allow force to be applied appropriately; you need a compound shape that resembles the outline of the model.
So, you can use the following code to create a custom shape (four spheres for the wheels and a box for the deck):
let shapes: [ShapeResource] = [
    .generateBox(size: [20, 4, 78])
        .offsetBy(translation: [ 0.0, 11,  0.0]),
    .generateSphere(radius: 3.1)
        .offsetBy(translation: [ 7.5,  3,  21.4]),
    .generateSphere(radius: 3.1)
        .offsetBy(translation: [ 7.5,  3, -21.4]),
    .generateSphere(radius: 3.1)
        .offsetBy(translation: [-7.5,  3,  21.4]),
    .generateSphere(radius: 3.1)
        .offsetBy(translation: [-7.5,  3, -21.4])
]
// model.physicsBody = PhysicsBodyComponent(shapes: shapes, mass: 4.5)
model.collision = CollisionComponent(shapes: shapes)
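One design note, as far as I understand the two components involved: the shapes in CollisionComponent are what raycasts, gestures and contact detection work against, while the shapes handed to PhysicsBodyComponent (the commented-out line above) also drive the body's mass distribution, so giving both the same compound shape keeps the collision behavior and the physical response consistent.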
P.S.
Reality Composer model's settings (I used Xcode 14.0 RC 1).

Related

Is there a way to let an EntityTranslationGestureRecognizer recognize touches on the entity when another entity is in front of it?

In RealityKit there is the default EntityTranslationGestureRecognizer which you can install to Entities to allow dragging them along their anchoring plane. In my use-case, I will only allow moving one selected entity at a time. As such, I would like to enable the user to drag the selected entity even while it is behind another entity from the POV of the camera.
I have tried setting a delegate on the EntityTranslationGestureRecognizer and implementing gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer, shouldReceive touch: UITouch) -> Bool, but the gesture recognizer still does not receive the touch when another entity is in front.
My assumption is that behind the scenes it is doing a HitTest, and possibly only considering the first Entity that is hit. I'm not sure if that is correct though. Were that the case, ideally there would be some way to set a CollisionMask or something on the hit test that the translation gesture is doing, but I have not found anything of the sort.
Do I just need to re-implement the entire behavior myself with a normal UIPanGestureRecognizer?
Thanks for any suggestions.
Hypersimple solution
The easiest way to control a model with RealityKit's transform gestures, even if it's occluded by another model, is to assign a collision shape only for the controlled model.
modelOne.generateCollisionShapes(recursive: false)
arView.installGestures(.translation, for: modelOne as! (Entity & HasCollision))
Advanced solution
However, if both models have collision shapes, the solution should be as follows. This example implements EntityTranslationGestureRecognizer, TapGesture, CollisionCastHit collection, EntityScaleGestureRecognizer and collision masks.
I've implemented a SwiftUI 2D tap gesture to deactivate the cube's collision shape in a special way. TapGesture() calls the raycasting method, which fires a 3D ray from the center of the screen. If the ray does not hit any model with the required collision mask, the "Raycasted" string does not appear on the screen, and therefore you will not be able to use RealityKit's drag gesture for model translation.
import RealityKit
import SwiftUI
import ARKit
import PlaygroundSupport    // iPadOS Swift Playgrounds app version

struct ContentView: View {
    @State private var arView = ARView(frame: .zero)
    @State var mask1 = CollisionGroup(rawValue: 1 << 0)
    @State var mask2 = CollisionGroup(rawValue: 1 << 1)
    @State var text: String = ""

    var body: some View {
        ZStack {
            ARContainer(arView: $arView, mask1: $mask1, mask2: $mask2)
                .gesture(
                    TapGesture().onEnded { raycasting() }
                )
            Text(text).font(.largeTitle)
        }
    }

    func raycasting() {
        guard let ray = arView.ray(through: arView.center) else { return }
        let castHits = arView.scene.raycast(origin: ray.origin,
                                            direction: ray.direction)
        for result in castHits {
            if (result.entity as! (Entity & HasCollision))
                                 .collision?.filter.mask == mask1 {
                text = "Raycasted"
            } else {
                (result.entity as! ModelEntity).model?.materials[0] =
                        UnlitMaterial(color: .green.withAlphaComponent(0.7))
                (result.entity as! (Entity & HasCollision)).collision = nil
            }
        }
    }
}
struct ARContainer: UIViewRepresentable {
    @Binding var arView: ARView
    @Binding var mask1: CollisionGroup
    @Binding var mask2: CollisionGroup

    func makeUIView(context: Context) -> ARView {
        arView.cameraMode = .ar
        arView.renderOptions = [.disablePersonOcclusion, .disableDepthOfField]

        let model1 = ModelEntity(mesh: .generateSphere(radius: 0.2))
        model1.generateCollisionShapes(recursive: false)
        model1.collision?.filter.mask = mask1

        let model2 = ModelEntity(mesh: .generateBox(size: 0.2),
                                 materials: [UnlitMaterial(color: .green)])
        model2.position.z = 0.4
        model2.generateCollisionShapes(recursive: false)
        model2.collision?.filter.mask = mask2

        let anchor = AnchorEntity(world: [0, 0, -1])
        anchor.addChild(model1)
        anchor.addChild(model2)
        arView.scene.anchors.append(anchor)

        arView.installGestures(.translation,
                               for: model1 as! (Entity & HasCollision))
        arView.installGestures(.scale,
                               for: model2 as! (Entity & HasCollision))
        return arView
    }
    func updateUIView(_ view: ARView, context: Context) { }
}

PlaygroundPage.current.needsIndefiniteExecution = true
PlaygroundPage.current.setLiveView(ContentView())

ARKit – Tap node with raycastQuery instead of hitTest, which is deprecated

In iOS 14, hitTest(_:types:) was deprecated. It seems that you are supposed to use raycastQuery(from:allowing:alignment:) now. From the documentation:
Raycasting is the preferred method for finding positions on surfaces in the real-world environment, but the hit-testing functions remain present for compatibility. With tracked raycasting, ARKit continues to refine the results to increase the position accuracy of virtual content you place with a raycast.
However, how can I hit test SCNNodes with raycasting? I only see options to hit test a plane: the raycastQuery method documentation offers only plane variants for the allowing: parameter.
This is my current code, which uses hit-testing to detect taps on the cube node and turn it blue.
class ViewController: UIViewController {
    @IBOutlet weak var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()

        /// Run the configuration
        let worldTrackingConfiguration = ARWorldTrackingConfiguration()
        sceneView.session.run(worldTrackingConfiguration)

        /// Make the red cube
        let cube = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)
        cube.materials.first?.diffuse.contents = UIColor.red
        let cubeNode = SCNNode(geometry: cube)
        cubeNode.position = SCNVector3(0, 0, -0.2)    /// 20 cm in front of the camera
        cubeNode.name = "ColorCube"

        /// Add the node to the ARKit scene
        sceneView.scene.rootNode.addChildNode(cubeNode)
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesBegan(touches, with: event)
        guard let location = touches.first?.location(in: sceneView) else { return }

        let results = sceneView.hitTest(location, options: [SCNHitTestOption.searchMode: 1])
        for result in results.filter({ $0.node.name == "ColorCube" }) {    /// See if the beam hit the cube
            let cubeNode = result.node
            cubeNode.geometry?.firstMaterial?.diffuse.contents = UIColor.blue    /// change to blue
        }
    }
}
How can I replace let results = sceneView.hitTest(location, options: [SCNHitTestOption.searchMode : 1]) with the equivalent raycastQuery code?
About Hit-Testing
The official documentation says that only ARKit's hitTest(_:types:) instance method is deprecated in iOS 14; however, in iOS 15 you can still use it. ARKit's hit-testing method is supposed to be replaced with raycasting methods.
Deprecated hit-testing:
let results: [ARHitTestResult] = sceneView.hitTest(sceneView.center,
                                                   types: .existingPlaneUsingGeometry)
Raycasting equivalent:
let raycastQuery: ARRaycastQuery? = sceneView.raycastQuery(from: sceneView.center,
                                                           allowing: .estimatedPlane,
                                                           alignment: .any)

let results: [ARRaycastResult] = sceneView.session.raycast(raycastQuery!)
If you prefer a raycasting method for hitting a node (entity), use the RealityKit module instead of SceneKit:
let arView = ARView(frame: .zero)
let query: CollisionCastQueryType = .nearest
let mask: CollisionGroup = .default

let raycasts: [CollisionCastHit] = arView.scene.raycast(from: [0, 0, 0],
                                                        to: [5, 6, 7],
                                                        query: query,
                                                        mask: mask,
                                                        relativeTo: nil)

guard let raycast: CollisionCastHit = raycasts.first else { return }
print(raycast.entity.name)
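By the way, if all you need is the entity under a finger tap rather than a full ray into the scene, ARView also offers the entity(at:) instance method. A minimal sketch (the gesture handler name is just for illustration):

@objc func tapped(_ sender: UITapGestureRecognizer) {
    let location = sender.location(in: arView)
    // Returns the nearest entity with a collision shape under this screen point
    if let hitEntity = arView.entity(at: location) {
        print(hitEntity.name)
    }
}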
P.S.
There is no need to look for a replacement for SceneKit's hitTest(_:options:) instance method returning [SCNHitTestResult], because it works fine and is not deprecated.

Visualize detected plane problem in RealityKit

I want to visualize the detected plane in RealityKit using the code below, but the rendered plane floats as the camera moves (not dramatically, just a bit, but noticeably). So my question is: how do I solve this problem? Can anybody help?
struct ARViewContainer: UIViewRepresentable {

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = .horizontal
        arView.debugOptions = [.showFeaturePoints, .showWorldOrigin]
        arView.session.run(config, options: [])
        arView.session.delegate = arView
        arView.CreatePlane()
        return arView
    }
    func updateUIView(_ uiView: ARView, context: Context) { }
}
var planeMesh = MeshResource.generatePlane(width: 0, depth: 0)
var planeEntity = ModelEntity(mesh: planeMesh)

extension ARView: ARSessionDelegate {

    func CreatePlane() {
        let planeAnchor = AnchorEntity(plane: .horizontal)
        // planeEntity.transform.translation = SIMD3(0, 0, 0)
        planeAnchor.addChild(planeEntity)
        self.scene.addAnchor(planeAnchor)
    }

    public func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let planeAnchor = anchors[0] as? ARPlaneAnchor else { return }
        DispatchQueue.main.async {
            // toTranslation() / toQuaternion() are custom simd_float4x4 helpers
            let position = planeAnchor.transform.toTranslation()
            let orientation = planeAnchor.transform.toQuaternion()
            let rotatedCenter = orientation.act(planeAnchor.center)
            planeEntity.model?.mesh = MeshResource.generatePlane(
                width: planeAnchor.extent.x,
                depth: planeAnchor.extent.z
            )
            planeEntity.transform.translation = position + rotatedCenter
            planeEntity.transform.rotation = orientation
            planeEntity.model?.materials = [SimpleMaterial(color: UIColor.white.withAlphaComponent(0.5),
                                                           isMetallic: false)]
        }
    }
}
Maybe I have not made myself clear. I used the code above to visualize the detected plane in RealityKit, and yes, it works: I can see the plane, and the plane updates when the ARAnchor updates, that is, its position, orientation and size update as exploration goes on. But there is a problem: the rendered plane is not fixed in space. After I scan the table, the rendered plane is not always fixed to the table; it floats left, right or below the table when I move the camera left, right or below the table, especially along the Y axis.
So my question is: why does this happen, and how do I solve it?
You can try turning off planeDetection once your ARPlane already has an anchor and is in a position that satisfies you.
ARKit will then stop updating anchors, so your plane anchor won't be adjusted anymore and should stay fixed to the surface better.
You can do it by adding a button that stops the updates, or by checking whether your plane already has an anchor:
planeAnchor.anchor!.isAnchored == true
In either case, just switch to an ARWorldTrackingConfiguration without planeDetection:
let config = ARWorldTrackingConfiguration()
config.planeDetection = []
arView.session.run(config, options: [])
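As a minimal sketch of the anchored check, assuming the planeEntity global from the question's code, you might wrap this in a helper that reruns the session without plane detection once the entity is actually anchored (this helper is illustrative, not part of the original answer):

// Call this once the rendered plane looks right; world tracking keeps running,
// but ARKit stops refining (and thus shifting) the existing plane anchors.
func freezePlaneDetection(in arView: ARView) {
    guard planeEntity.isAnchored else { return }
    let config = ARWorldTrackingConfiguration()
    config.planeDetection = []
    arView.session.run(config, options: [])
}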

How to set a known position and orientation as a starting point of ARKit

I am starting to use ARKit and I have a use case where I want to know the motion from one known position to another.
So I was wondering: is it possible (as with every tracking solution) to set a known position and orientation as the starting point of tracking in ARKit?
Regards
There are at least six approaches allowing you to set a starting point for a model. But bear in mind that using no ARAnchors at all in your AR scene is considered a bad AR experience (although Apple's Augmented Reality app template has no ARAnchors in its code).
First approach
This is the approach that Apple engineers propose in the Augmented Reality app template in Xcode. This approach doesn't use anchoring, so all you need to do is place a model in midair at coordinates like (x: 0, y: 0, z: -0.5); in other words, your model will be 50 cm away from the camera.
override func viewDidLoad() {
    super.viewDidLoad()
    sceneView.scene = SCNScene(named: "art.scnassets/ship.scn")!
    let model = sceneView.scene.rootNode.childNode(withName: "ship",
                                                   recursively: true)
    model?.position.z = -0.5
    sceneView.session.run(ARWorldTrackingConfiguration())
}
Second approach
The second approach is almost the same as the first one, except that it uses an ARKit anchor:
guard let sceneView = self.view as? ARSCNView else { return }

if let currentFrame = sceneView.session.currentFrame {
    var translation = matrix_identity_float4x4
    translation.columns.3.z = -0.5
    let transform = simd_mul(currentFrame.camera.transform, translation)
    let anchor = ARAnchor(transform: transform)
    sceneView.session.add(anchor: anchor)
}
Third approach
You can also create a pre-defined model's position pinned with an ARAnchor using the third approach, where you need to import the RealityKit module as well:
func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
    let model = ModelEntity(mesh: MeshResource.generateSphere(radius: 1.0))

    // ARKit's anchor (identity transform; note that a 4x4 diagonal needs four values)
    let anchor = ARAnchor(transform: simd_float4x4(diagonal: [1, 1, 1, 1]))

    // RealityKit's anchor based on the position of the ARAnchor
    let anchorEntity = AnchorEntity(anchor: anchor)
    anchorEntity.addChild(model)
    arView.scene.anchors.append(anchorEntity)
}
Fourth approach
If you have turned on the plane-detection feature, you can use ray-casting or hit-testing methods. As a target object you can use a little sphere (located at 0, 0, 0) that will be repositioned by the raycast.
// screenCenter could be, for example, arView.center;
// on ARView the factory method is makeRaycastQuery(from:allowing:alignment:),
// while ARSCNView's equivalent is raycastQuery(from:allowing:alignment:)
if let query = arView.makeRaycastQuery(from: screenCenter,
                                       allowing: .estimatedPlane,
                                       alignment: .any) {
    let raycast = arView.session.trackedRaycast(query) { results in
        if let result = results.first {
            // object is assumed to be a RealityKit Entity here
            object.transform = Transform(matrix: result.worldTransform)
        }
    }
    // raycast?.stopTracking() ends the tracked raycast when it's no longer needed
}
Fifth approach
This approach is focused on saving and sharing ARKit's world maps.
func writeWorldMap(_ worldMap: ARWorldMap, to url: URL) throws {
    let data = try NSKeyedArchiver.archivedData(withRootObject: worldMap,
                                                requiringSecureCoding: true)
    try data.write(to: url)
}

func loadWorldMap(from url: URL) throws -> ARWorldMap {
    let mapData = try Data(contentsOf: url)
    guard let worldMap = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                                from: mapData)
    else {
        throw ARError(.invalidWorldMap)
    }
    return worldMap
}
Sixth approach
In ARKit 4.0 a new ARGeoTrackingConfiguration is implemented with the help of the MapKit module, so now you can use pre-defined GPS data.
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    for geoAnchor in anchors.compactMap({ $0 as? ARGeoAnchor }) {
        // placemarkEntity(for:) is a helper from Apple's geo-tracking sample code
        arView.scene.addAnchor(Entity.placemarkEntity(for: geoAnchor))
    }
}
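And a hedged sketch of how such a geo anchor could be created from known GPS data (the coordinates are placeholders; geo tracking only works on supported devices in supported areas):

import ARKit
import CoreLocation

func addGeoAnchor(to session: ARSession) {
    // Placeholder coordinates for illustration only
    let coordinate = CLLocationCoordinate2D(latitude: 37.3349, longitude: -122.0090)
    session.add(anchor: ARGeoAnchor(coordinate: coordinate))
}

// Check support before running the configuration:
// ARGeoTrackingConfiguration.checkAvailability { available, error in ... }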

Add SCNNode after rotating rootNode

I'm trying to add a node (a sphere) to a body model but it doesn't work properly after I rotate the model through a pan gesture.
Here's how I'm adding the node (using a long tap gesture):
func addSphere(sender: UILongPressGestureRecognizer) {
    switch sender.state {
    case .began:
        let location = sender.location(in: bodyView)
        let hitResults = bodyView.hitTest(location, options: nil)
        if hitResults.count > 0 {
            let result = hitResults.first!
            let secondSphereGeometry = SCNSphere(radius: 0.015)
            secondSphereGeometry.firstMaterial?.diffuse.contents = UIColor.red
            let secondSphereNode = SCNNode(geometry: secondSphereGeometry)
            let vpWithZ = SCNVector3(x: result.worldCoordinates.x,
                                     y: result.worldCoordinates.y,
                                     z: result.worldCoordinates.z)
            secondSphereNode.position = vpWithZ
            bodyView.scene!.rootNode.addChildNode(secondSphereNode)
        }
    default:
        break
    }
}
Here is how I rotate the view:
func rotateGesture(sender: UIPanGestureRecognizer) {
    let translation = sender.translation(in: sender.view)
    var newZAngle = Float(translation.x) * Float.pi / 180.0
    newZAngle += currentZAngle
    bodyView.scene!.rootNode.transform = SCNMatrix4MakeRotation(newZAngle, 0, 0, 1)
    if sender.state == .ended {
        currentZAngle = newZAngle
    }
}
And to load the 3D model I just do:
bodyView.scene = SCNScene(named: "male_body.dae") // bodyView is a SCNView in the storyboard
I found something related to the worldTransform property and also the function convertPosition(_:to:), but couldn't find an example that works well.
The problem is that if I rotate the model, the spheres are not positioned properly; they are always positioned as if the model were in its initial state.
If I turn the body and long-press its arm (on the side), the sphere is added somewhere floating in front of the body.
I don't know how to fix this. I'd appreciate it if someone could help. Thanks!
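For what it's worth, one commonly suggested direction (an illustrative sketch, not a verified fix for this exact scene): SceneKit treats the root node's coordinate space as world space, so rotating rootNode itself is discouraged. Rotating an intermediate container node instead, and converting the hit point into that node's space with convertPosition(_:from:), keeps new child nodes attached correctly. The "body" node name below is hypothetical:

// Assumed setup: bodyNode is a container holding the body model,
// and the pan gesture rotates bodyNode instead of rootNode.
let bodyNode = bodyView.scene!.rootNode.childNode(withName: "body",
                                                  recursively: false)!

// When adding the sphere, convert the world-space hit point into
// bodyNode's local space (from: nil means world space), then attach
// the sphere to bodyNode so it follows future rotations.
secondSphereNode.position = bodyNode.convertPosition(result.worldCoordinates,
                                                     from: nil)
bodyNode.addChildNode(secondSphereNode)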