There are three methods for detecting intersections in the RealityKit framework, but I don't know how to use them in my project.
1.
func raycast(origin: SIMD3<Float>,
             direction: SIMD3<Float>,
             length: Float,
             query: CollisionCastQueryType,
             mask: CollisionGroup,
             relativeTo: Entity?) -> [CollisionCastHit]
2.
func raycast(from: SIMD3<Float>,
             to: SIMD3<Float>,
             query: CollisionCastQueryType,
             mask: CollisionGroup,
             relativeTo: Entity?) -> [CollisionCastHit]
3.
func convexCast(convexShape: ShapeResource,
                fromPosition: SIMD3<Float>,
                fromOrientation: simd_quatf,
                toPosition: SIMD3<Float>,
                toOrientation: simd_quatf,
                query: CollisionCastQueryType,
                mask: CollisionGroup,
                relativeTo: Entity?) -> [CollisionCastHit]
Simple Ray-Casting
If you want to find out how to position a model made in Reality Composer in a RealityKit scene (on a detected horizontal plane) using the ray-casting method, use the following code:
import RealityKit
import ARKit
class ViewController: UIViewController {

    @IBOutlet var arView: ARView!
    let scene = try! Experience.loadScene()

    @IBAction func onTap(_ sender: UITapGestureRecognizer) {
        scene.steelBox!.name = "Parcel"

        let tapLocation: CGPoint = sender.location(in: arView)
        let estimatedPlane: ARRaycastQuery.Target = .estimatedPlane
        let alignment: ARRaycastQuery.TargetAlignment = .horizontal

        let result: [ARRaycastResult] = arView.raycast(from: tapLocation,
                                                   allowing: estimatedPlane,
                                                  alignment: alignment)

        guard let rayCast: ARRaycastResult = result.first
        else { return }

        let anchor = AnchorEntity(world: rayCast.worldTransform)
        anchor.addChild(scene)
        arView.scene.anchors.append(anchor)

        print(rayCast)
    }
}
Pay attention to the ARRaycastQuery class: it comes from ARKit, not from RealityKit.
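If you prefer to build that ARKit query explicitly and run it through the session yourself, here is a minimal sketch under the same assumptions as the code above (tapLocation and the arView outlet already exist); makeRaycastQuery(from:allowing:alignment:) is ARView's factory method for an ARRaycastQuery:
if let query = arView.makeRaycastQuery(from: tapLocation,
                                   allowing: .estimatedPlane,
                                  alignment: .horizontal) {

    let results: [ARRaycastResult] = arView.session.raycast(query)

    if let first = results.first {
        // Pin an empty RealityKit anchor to the raycast hit
        arView.scene.anchors.append(AnchorEntity(world: first.worldTransform))
    }
}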
Convex-Ray-Casting
A convex-ray-casting method like raycast(from:to:query:mask:relativeTo:) is the operation of sweeping a convex shape along a straight line and stopping at the very first intersection with any collision shape in the scene. The scene's raycast() methods perform hit-tests against all entities with collision shapes in the scene; entities without a collision shape are ignored.
You can use the following code to perform a convex-ray-cast from a start position to an end position:
import RealityKit
let startPosition: SIMD3<Float> = [0, 0, 0]
let endPosition: SIMD3<Float> = [5, 5, 5]
let query: CollisionCastQueryType = .all
let mask: CollisionGroup = .all

let raycasts: [CollisionCastHit] = arView.scene.raycast(from: startPosition,
                                                          to: endPosition,
                                                       query: query,
                                                        mask: mask,
                                                  relativeTo: nil)

guard let rayCast: CollisionCastHit = raycasts.first
else { return }

print(rayCast.distance)     /* The distance from the ray origin to the hit */
print(rayCast.entity.name)  /* The entity's name that was hit */
The CollisionCastHit structure represents a single hit produced by a collision cast in a RealityKit scene.
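If you need a true convex cast, i.e. sweeping a whole volume rather than an infinitely thin ray, the convexCast(convexShape:fromPosition:fromOrientation:toPosition:toOrientation:query:mask:relativeTo:) method does that. A minimal sketch, assuming a sphere-shaped volume and arbitrary start and end positions:
let sphereShape: ShapeResource = .generateSphere(radius: 0.1)
let identity = simd_quatf(angle: 0, axis: [0, 1, 0])

let hits: [CollisionCastHit] = arView.scene.convexCast(convexShape: sphereShape,
                                                      fromPosition: [0, 0, 0],
                                                   fromOrientation: identity,
                                                        toPosition: [5, 5, 5],
                                                     toOrientation: identity,
                                                             query: .all,
                                                              mask: .all,
                                                        relativeTo: nil)

if let firstHit = hits.first {
    print(firstHit.entity.name, firstHit.distance)  // first collision shape the volume sweeps into
}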
P.S.
When you use the raycast(from:to:query:mask:relativeTo:) method for measuring a distance from the camera to an entity, the orientation of the ARCamera doesn't matter; only its position in world coordinates matters.
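A minimal sketch of that measurement, assuming the "Parcel" box from the scene above has a collision shape (cameraTransform is ARView's current camera transform):
let cameraPosition: SIMD3<Float> = arView.cameraTransform.translation
let boxPosition: SIMD3<Float> = scene.steelBox!.position(relativeTo: nil)

let hits = arView.scene.raycast(from: cameraPosition,
                                  to: boxPosition,
                               query: .nearest,
                                mask: .all,
                          relativeTo: nil)

if let hit = hits.first, hit.entity.name == "Parcel" {
    print("Distance from camera to Parcel:", hit.distance)  // in meters
}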
Related
Here is my previous question about applying force to a certain point of an AR object, which had a perfect answer.
I have managed to apply force to a given point with a little bit of tinkering, and it gives exactly the effect I want. Let me also show some code.
I get the AR object from the Experience like this:
if let skateAnchor = try? Experience.loadSkateboard(),
   let skateEntity = skateAnchor.skateboard {
    guard let entity = skateEntity as? HasPhysicsBody else { return }
    skateAnchor.generateCollisionShapes(recursive: true)
    entity.collision?.filter.mask = [.sceneUnderstanding]
    skateboard = entity
}
Afterwards I set up the plane and the LiDAR scanner and add some gestures to it like:
let arViewTap = UITapGestureRecognizer(target: self,
action: #selector(tapped(sender:)))
arView.addGestureRecognizer(arViewTap)
let arViewLongPress = UILongPressGestureRecognizer(target: self,
action: #selector(longPressed(sender:)))
arView.addGestureRecognizer(arViewLongPress)
So far so good. On the tap gesture I apply the logic from the previously linked answer and apply a force impulse like:
if let sk8 = skateboard as? HasPhysics {
    sk8.applyImpulse(direction, at: position, relativeTo: nil)
}
My issue comes with my "catching" logic, where I want to use the long press and apply a downward force to my skateboard AR object like this:
@objc func longPressed(sender: UILongPressGestureRecognizer) {
    if sender.state == .began || sender.state == .changed {
        let location = sender.location(in: arView)
        if arView.entity(at: location) is HasPhysics {
            if let ray = arView.ray(through: location) {
                let results = arView.scene.raycast(origin: ray.origin,
                                                direction: ray.direction,
                                                   length: 100.0,
                                                    query: .nearest,
                                                     mask: .all,
                                               relativeTo: nil)
                if let _ = results.first,
                   let position = results.first?.position,
                   let normal = results.first?.normal {
                    // test different kinds of forces
                    let direction = SIMD3<Float>(0, -20, 0)
                    if let sk8 = skateboard as? HasPhysics {
                        sk8.addForce(direction, at: position, relativeTo: nil)
                    }
                }
            }
        }
    }
}
Right now I know that I am ignoring the raycast results, but this is in a pure development state. My issue is that when I apply a positive or negative x/z force the object responds well: it slides back and forth, or left and right. A positive y also works, dragging the board up into the air. The only error-prone force direction, and the one I am striving to achieve, is the downward-facing negative y. The object just sits there with no effect at all.
Let me also share how my object is defined inside Reality Composer:
Ollie trick
In real life, if you shift your entire body's weight to the nose of the skateboard's deck (like doing the Ollie Maneuver), the skateboard's center of mass shifts from the middle towards the point where the force is being applied. In RealityKit, if you need to tear the rear (front) wheels of the skateboard off the floor, move the model's center of mass towards the slope.
The repositioning of the center of mass occurs in a local coordinate system.
import SwiftUI
import RealityKit
struct ContentView : View {
    var body: some View {
        ARViewContainer().ignoresSafeArea()
    }
}

struct ARViewContainer: UIViewRepresentable {

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.debugOptions = .showPhysics    // shape visualization

        let scene = try! Experience.loadScene()
        let name = "skateboard_01_base_stylized_lod0"

        typealias ModelPack = ModelEntity & HasPhysicsBody & HasCollision
        let model = scene.findEntity(named: name) as! ModelPack

        model.physicsBody = .init()
        model.generateCollisionShapes(recursive: true)
        model.physicsBody?.massProperties.centerOfMass.position = [0, 0, -27]

        arView.scene.anchors.append(scene)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) { }
}
Physics shape
The second problem you need to solve is replacing the model's box-shaped physics body (RealityKit and Reality Composer generate this type of shape by default). Quite obviously, its shape cannot be a monolithic box, because a box-shaped form does not allow the force to be applied at the right spot. You need a shape similar to the outline of the model.
So, you can use the following code to create a custom shape (four spheres for the wheels and a box for the deck):
let shapes: [ShapeResource] = [
    .generateBox(size: [20, 4, 78])
        .offsetBy(translation: [ 0.0, 11,  0.0]),
    .generateSphere(radius: 3.1)
        .offsetBy(translation: [ 7.5,  3,  21.4]),
    .generateSphere(radius: 3.1)
        .offsetBy(translation: [ 7.5,  3, -21.4]),
    .generateSphere(radius: 3.1)
        .offsetBy(translation: [-7.5,  3,  21.4]),
    .generateSphere(radius: 3.1)
        .offsetBy(translation: [-7.5,  3, -21.4])
]

// model.physicsBody = PhysicsBodyComponent(shapes: shapes, mass: 4.5)
model.collision = CollisionComponent(shapes: shapes)
P.S.
Reality Composer model's settings (I used Xcode 14.0 RC 1).
In iOS 14, hitTest(_:types:) was deprecated. It seems that you are supposed to use raycastQuery(from:allowing:alignment:) now. From the documentation:
Raycasting is the preferred method for finding positions on surfaces in the real-world environment, but the hit-testing functions remain present for compatibility. With tracked raycasting, ARKit continues to refine the results to increase the position accuracy of virtual content you place with a raycast.
However, how can I hit test SCNNodes with raycasting? I only see options to hit test a plane.
raycastQuery method documentation
Only choices for allowing: are planes
This is my current code, which uses hit-testing to detect taps on the cube node and turn it blue.
class ViewController: UIViewController {

    @IBOutlet weak var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()

        /// Run the configuration
        let worldTrackingConfiguration = ARWorldTrackingConfiguration()
        sceneView.session.run(worldTrackingConfiguration)

        /// Make the red cube
        let cube = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)
        cube.materials.first?.diffuse.contents = UIColor.red
        let cubeNode = SCNNode(geometry: cube)
        cubeNode.position = SCNVector3(0, 0, -0.2) /// 20 cm in front of the camera
        cubeNode.name = "ColorCube"

        /// Add the node to the ARKit scene
        sceneView.scene.rootNode.addChildNode(cubeNode)
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesBegan(touches, with: event)
        guard let location = touches.first?.location(in: sceneView) else { return }

        let results = sceneView.hitTest(location, options: [SCNHitTestOption.searchMode: 1])
        for result in results.filter({ $0.node.name == "ColorCube" }) { /// See if the beam hit the cube
            let cubeNode = result.node
            cubeNode.geometry?.firstMaterial?.diffuse.contents = UIColor.blue /// change to blue
        }
    }
}
How can I replace let results = sceneView.hitTest(location, options: [SCNHitTestOption.searchMode : 1]) with the equivalent raycastQuery code?
About Hit-Testing
The official documentation says that only ARKit's hitTest(_:types:) instance method is deprecated in iOS 14; in iOS 15 you can still use it. ARKit's hit-testing method is supposed to be replaced with raycasting methods.
Deprecated hit-testing:
let results: [ARHitTestResult] = sceneView.hitTest(sceneView.center,
types: .existingPlaneUsingGeometry)
Raycasting equivalent
let raycastQuery: ARRaycastQuery? = sceneView.raycastQuery(
from: sceneView.center,
allowing: .estimatedPlane,
alignment: .any)
let results: [ARRaycastResult] = sceneView.session.raycast(raycastQuery!)
If you prefer a raycasting method for hitting a node (entity), use the RealityKit module instead of SceneKit:
let arView = ARView(frame: .zero)
let query: CollisionCastQueryType = .nearest
let mask: CollisionGroup = .default
let raycasts: [CollisionCastHit] = arView.scene.raycast(from: [0, 0, 0],
to: [5, 6, 7],
query: query,
mask: mask,
relativeTo: nil)
guard let raycast: CollisionCastHit = raycasts.first else { return }
print(raycast.entity.name)
P.S.
There is no need to look for a replacement for SceneKit's hitTest(_:options:) instance method returning [SCNHitTestResult], because it works fine and it has not been deprecated.
I'm pretty new to RealityKit and ARKit. I have two scenes in Reality Composer, one with a book image anchor and one with a horizontal plane anchor. The first scene with an image anchor has a cube attached to the top of it and the second scene built on a horizontal plane has two rings. All objects have a fixed collision. I'd like to run an animation when the rings and the cube touch. I couldn't find a way to do this in Reality Composer, so I made two attempts within the code to no avail. (I'm printing "collision started" just to test the collision code without the animation) Unfortunately, it didn't work. Would appreciate help on this.
Attempt #1:
func makeUIView(context: Context) -> ARView {
    let arView = ARView(frame: .zero)

    // Add the componentBreakdown, book and rings anchors to the scene
    let componentBreakdownAnchor = try! CC.loadComponentBreakdown()
    arView.scene.anchors.append(componentBreakdownAnchor)

    let bookAnchor = try! CC.loadBook()
    arView.scene.anchors.append(bookAnchor)

    let ringsAnchor = try! CC.loadRings()
    arView.scene.anchors.append(ringsAnchor)

    let _ = ringsAnchor.scene?.subscribe(
        to: CollisionEvents.Began.self,
        on: bookAnchor
    ) { event in
        print("collision started")
    }

    return arView
}
Attempt #2
func makeUIView(context: Context) -> ARView {
    let arView = ARView(frame: .zero)

    // Add the componentBreakdown, book and rings anchors to the scene
    let componentBreakdownAnchor = try! CC.loadComponentBreakdown()
    arView.scene.anchors.append(componentBreakdownAnchor)

    let bookAnchor = try! CC.loadBook()
    arView.scene.anchors.append(bookAnchor)

    let ringsAnchor = try! CC.loadRings()
    arView.scene.anchors.append(ringsAnchor)

    arView.scene.subscribe(
        to: CollisionEvents.Began.self,
        on: bookAnchor
    ) { event in
        print("collision started")
    }

    return arView
}
RealityKit scene
If you want to use collisions between models made from scratch in a RealityKit scene, first you need to implement the HasCollision protocol.
Let's see what the developer documentation says about it:
HasCollision protocol is an interface used for ray casting and collision detection.
Here's what your implementation should look like if you generate models in RealityKit:
import Cocoa
import RealityKit
class CustomCollision: Entity, HasModel, HasCollision {

    let color: NSColor = .gray
    let collider: ShapeResource = .generateSphere(radius: 0.5)
    let sphere: MeshResource = .generateSphere(radius: 0.5)

    required init() {
        super.init()

        let material = SimpleMaterial(color: color, isMetallic: true)

        self.components[ModelComponent.self] = ModelComponent(mesh: sphere,
                                                         materials: [material])

        self.components[CollisionComponent.self] = CollisionComponent(shapes: [collider],
                                                                        mode: .trigger,
                                                                      filter: .default)
    }
}
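A minimal usage sketch for the class above (names and positions are arbitrary examples, and arView is assumed to exist): two spheres are anchored side by side and a collision subscription is stored so it isn't deallocated.
let ballOne = CustomCollision()
let ballTwo = CustomCollision()
ballTwo.position.x = 1.1

let anchor = AnchorEntity(world: [0, 0, -2])
anchor.addChild(ballOne)
anchor.addChild(ballTwo)
arView.scene.anchors.append(anchor)

// Keep a strong reference to the subscription, e.g. in a property
let subscription = arView.scene.subscribe(to: CollisionEvents.Began.self,
                                          on: ballOne) { event in
    print("Spheres touched:", event.entityA.name, event.entityB.name)
}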
Reality Composer scene
And here's what your code should look like if you use models from Reality Composer:
import UIKit
import RealityKit
import Combine
class ViewController: UIViewController {

    @IBOutlet var arView: ARView!
    var subscriptions: [Cancellable] = []

    override func viewDidLoad() {
        super.viewDidLoad()

        let groundSphere = try! Experience.loadStaticSphere()
        let upperSphere = try! Experience.loadDynamicSphere()

        let gsEntity = groundSphere.children[0].children[0].children[0]
        let usEntity = upperSphere.children[0].children[0].children[0]

        // CollisionComponent exists in case you turn on
        // the "Participates" property in the Reality Composer app
        print(gsEntity)

        var gsComp: CollisionComponent = gsEntity.components[CollisionComponent.self]!
        var usComp: CollisionComponent = usEntity.components[CollisionComponent.self]!

        gsComp.shapes = [.generateBox(size: [0.05, 0.07, 0.05])]
        usComp.shapes = [.generateBox(size: [0.05, 0.05, 0.05])]

        gsEntity.components.set(gsComp)
        usEntity.components.set(usComp)

        let subscription = self.arView.scene.subscribe(to: CollisionEvents.Began.self,
                                                       on: gsEntity) { event in
            print("Balls' collision occurred!")
        }
        self.subscriptions.append(subscription)

        arView.scene.anchors.append(upperSphere)
        arView.scene.anchors.append(groundSphere)
    }
}
I am starting to use ARKit and I have a use case where I want to know the motion from a known position to another one.
So I was wondering if it is possible (like in every tracking solution) to set a known position and orientation as the starting point of the tracking in ARKit?
Regards
There are at least six approaches allowing you to set a starting point for a model. But using no ARAnchors at all in your AR scene is considered a bad AR experience (although Apple's Augmented Reality app template has no ARAnchors in its code).
First approach
This is the approach that Apple engineers propose in the Augmented Reality app template in Xcode. This approach doesn't use anchoring, so all you need to do is place a model in the air with coordinates like (x: 0, y: 0, z: -0.5); in other words, your model will be 50 cm away from the camera.
override func viewDidLoad() {
    super.viewDidLoad()

    sceneView.scene = SCNScene(named: "art.scnassets/ship.scn")!
    let model = sceneView.scene.rootNode.childNode(withName: "ship",
                                                recursively: true)
    model?.position.z = -0.5

    sceneView.session.run(ARWorldTrackingConfiguration())
}
Second approach
The second approach is almost the same as the first one, except it uses an ARKit anchor:
guard let sceneView = self.view as? ARSCNView
else { return }

if let currentFrame = sceneView.session.currentFrame {
    var translation = matrix_identity_float4x4
    translation.columns.3.z = -0.5
    let transform = simd_mul(currentFrame.camera.transform, translation)
    let anchor = ARAnchor(transform: transform)
    sceneView.session.add(anchor: anchor)
}
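The anchor itself is invisible, so give it content in the ARSCNViewDelegate callback. A minimal sketch, assuming the view controller is set as the sceneView's delegate:
func renderer(_ renderer: SCNSceneRenderer,
              didAdd node: SCNNode,
              for anchor: ARAnchor) {

    let box = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)
    node.addChildNode(SCNNode(geometry: box))
}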
Third approach
You can also create a predefined model position pinned with an ARAnchor using the third approach, where you need to import the RealityKit module as well:
func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {

    let model = ModelEntity(mesh: MeshResource.generateSphere(radius: 1.0))

    // ARKit's anchor (identity transform)
    let anchor = ARAnchor(transform: simd_float4x4(diagonal: [1, 1, 1, 1]))

    // RealityKit's anchor based on the position of the ARAnchor
    let anchorEntity = AnchorEntity(anchor: anchor)
    anchorEntity.addChild(model)
    arView.scene.anchors.append(anchorEntity)
}
Fourth approach
If you've turned on the plane detection feature, you can use ray-casting or hit-testing methods. As a target object you can use a little sphere (located at 0, 0, 0) that will be repositioned by the ray-cast.
// Assuming arView is a RealityKit ARView and `object` is an entity in the scene
if let query = arView.makeRaycastQuery(from: screenCenter,
                                   allowing: .estimatedPlane,
                                  alignment: .any) {

    let raycast = session.trackedRaycast(query) { results in
        if let result = results.first {
            object.transform = Transform(matrix: result.worldTransform)
        }
    }
}
Fifth approach
This approach focuses on saving and sharing ARKit's world maps.
func writeWorldMap(_ worldMap: ARWorldMap, to url: URL) throws {
    let data = try NSKeyedArchiver.archivedData(withRootObject: worldMap,
                                          requiringSecureCoding: true)
    try data.write(to: url)
}

func loadWorldMap(from url: URL) throws -> ARWorldMap {
    let mapData = try Data(contentsOf: url)
    guard let worldMap = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                                   from: mapData)
    else {
        throw ARError(.invalidWorldMap)
    }
    return worldMap
}
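A minimal sketch of how these two helpers could be used, assuming a sceneView and a writable file URL called mapURL: capture the current map, then hand it to a fresh session run as its initial world map.
sceneView.session.getCurrentWorldMap { worldMap, error in
    guard let map = worldMap else { return }
    try? writeWorldMap(map, to: mapURL)
}

let configuration = ARWorldTrackingConfiguration()
configuration.initialWorldMap = try? loadWorldMap(from: mapURL)
sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])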
Sixth approach
In ARKit 4.0, a new ARGeoTrackingConfiguration was introduced with the help of the MapKit module. So now you can use predefined GPS data.
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    for geoAnchor in anchors.compactMap({ $0 as? ARGeoAnchor }) {
        arView.scene.addAnchor(Entity.placemarkEntity(for: geoAnchor))
    }
}
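A minimal sketch of adding such a geo anchor yourself (the coordinate is an arbitrary example, CLLocationCoordinate2D comes from CoreLocation, and geo tracking only works in supported regions):
import CoreLocation

ARGeoTrackingConfiguration.checkAvailability { available, error in
    guard available else { return }

    DispatchQueue.main.async {
        arView.session.run(ARGeoTrackingConfiguration())

        let coordinate = CLLocationCoordinate2D(latitude: 37.3349, longitude: -122.0090)
        let geoAnchor = ARGeoAnchor(coordinate: coordinate)
        arView.session.add(anchor: geoAnchor)
    }
}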
I want to achieve something similar to ARCore's raycast method, which takes an arbitrary ray in world-space coordinates instead of a screen-space point:
List<HitResult> hitTest (float[] origin3, int originOffset, float[] direction3, int directionOffset)
I see ARKit itself has no method like that, but maybe someone has an idea anyway!
Thanks.
In Apple's RealityKit and ARKit frameworks you can find three main types of raycast methods: ARView raycast, ARSession raycast, and Scene raycast (or World raycast). All methods are written in Swift:
ARView.raycast(from:allowing:alignment:)
This instance method performs a ray cast, where a ray is cast into the scene from the center of the camera through a point in the view, and the results are immediately returned. You can use this type of raycast in ARKit.
func raycast(from point: CGPoint,
             allowing target: ARRaycastQuery.Target,
             alignment: ARRaycastQuery.TargetAlignment) -> [ARRaycastResult]
ARView.scene.raycast(origin:direction:length:query:mask:relativeTo:)
WORLD RAYCAST
This instance method performs a convex ray cast against all the geometry in the scene for a ray of a given origin, direction, and length.
func raycast(origin: SIMD3<Float>,
             direction: SIMD3<Float>,
             length: Float,
             query: CollisionCastQueryType,
             mask: CollisionGroup,
             relativeTo: Entity?) -> [CollisionCastHit]
ARView.session.trackedRaycast(_:updateHandler:)
This instance method repeats a ray-cast query over time to notify you of updated surfaces in the physical environment. You can use this type of raycast in ARKit 3.5.
func trackedRaycast(_ query: ARRaycastQuery,
                    updateHandler: @escaping ([ARRaycastResult]) -> Void) -> ARTrackedRaycast?
ARView.trackedRaycast(from:allowing:alignment:updateHandler:)
This RealityKit's instance method also performs a tracked ray cast, but here a ray is cast into the scene from the center of the camera through a point in the view.
func trackedRaycast(from point: CGPoint,
                    allowing target: ARRaycastQuery.Target,
                    alignment: ARRaycastQuery.TargetAlignment,
                    updateHandler: @escaping ([ARRaycastResult]) -> Void) -> ARTrackedRaycast?
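A minimal sketch of this last method, assuming a RealityKit ARView and a screenCenter point; the handler keeps firing while ARKit refines the surface estimate, and the returned ARTrackedRaycast is kept so it can be stopped later:
let trackedRaycast = arView.trackedRaycast(from: screenCenter,
                                       allowing: .estimatedPlane,
                                      alignment: .any) { results in
    if let result = results.first {
        print(result.worldTransform)
    }
}

// Later, when updates are no longer needed:
trackedRaycast?.stopTracking()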
Code snippet 01:
import RealityKit
let startPosition: SIMD3<Float> = [3, -2, 0]
let endPosition: SIMD3<Float> = [10, 7, -5]
let query: CollisionCastQueryType = .all
let mask: CollisionGroup = .all

let raycasts: [CollisionCastHit] = arView.scene.raycast(from: startPosition,
                                                          to: endPosition,
                                                       query: query,
                                                        mask: mask,
                                                  relativeTo: nil)

guard let rayCast: CollisionCastHit = raycasts.first
else { return }
Code snippet 02:
import ARKit
import RealityKit

// Assuming arView is a RealityKit ARView and `object` is an entity in the scene
if let query = arView.makeRaycastQuery(from: screenCenter,
                                   allowing: .estimatedPlane,
                                  alignment: .any) {

    let raycast = session.trackedRaycast(query) { results in
        if let result = results.first {
            object.transform = Transform(matrix: result.worldTransform)
        }
    }

    // Stop the tracked raycast when you no longer need updates
    raycast?.stopTracking()
}