In my code, I am currently loading an Entity using the .loadAsync(named: String) method, then adding it to my existing AnchorEntity. As a test, I then rotate my Entity 90° and would like to determine how to read back the current angle of rotation.
The long-term intent is to allow users to rotate a model, but to limit the rotation to a certain range (i.e., the user can rotate the pitch of the model to 90° or -90°, but no further than that). Without a way to know the Entity's current angle of rotation, I am unsure what logic I could use to enforce this limit.
Entity.loadAsync(named: "myModel.usdz")
    .receive(on: RunLoop.main)
    .sink { completion in
        // ...
    } receiveValue: { [weak self] entity in
        guard let self = self else { return }
        self.objectAnchor.addChild(entity)
        self.scene.addAnchor(self.objectAnchor)
        let transform = Transform(pitch: .pi / 2,
                                  yaw: .zero,
                                  roll: .zero)
        entity.setOrientation(transform.rotation, relativeTo: nil)
        print(entity.orientation)
        // Sample output: simd_quatf(real: 0.7071069,
        //                           imag: SIMD3<Float>(0.7071067, 0.0, 0.0))
    }
    .store(in: &subscriptions)
I would have expected entity.orientation to give me something like 90.0 or 1.57 (.pi / 2), but I'm unsure how to get the current rotation of the Entity in a form that aligns with expected angles.
simd_quatf
In RealityKit 2.0, when you retrieve a simd_quatf structure's values from the .orientation and .transform.rotation instance properties, the default initializer exposes the real (scalar) and imaginary (vector) parts of a quaternion:
public init(real: Float, imag: SIMD3<Float>)
Hamilton's quaternion expression looks like this:
q = a + bi + cj + dk
where the scalar part is a = cos(θ/2) and the vector part is (b, c, d) = sin(θ/2) · axis. In your case, the value 0.707 is cos(45°), so setting both real (a) and imag.x (b) to 0.707 yields a total rotation angle of 90 degrees about the X axis.
It's easy to check:
let quaternion = simd_quatf(real: 0.707, imag: [0.707, 0, 0]) // 90 degrees
entity.orientation = quaternion
To read the model's orientation in a more familiar angle/axis form, use the parameters of another initializer:
public init(angle: Float, axis: SIMD3<Float>)
print(entity.orientation.angle) // 1.5707964
print(entity.orientation.axis) // SIMD3<Float>(0.99999994, 0.0, 0.0)
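With the angle/axis form available, the rotation limit described in the question becomes a simple clamp. Here's a minimal sketch (my assumption, not part of the original answer), where applyPitch is a hypothetical helper and desiredPitch comes from whatever gesture handler you use, in radians:
func applyPitch(_ desiredPitch: Float, to entity: Entity) {
    // Clamp the requested pitch to the [-90°, +90°] range before applying it
    let clampedPitch = max(-.pi / 2, min(.pi / 2, desiredPitch))
    entity.orientation = simd_quatf(angle: clampedPitch, axis: [1, 0, 0])
}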
Related
I'm trying to animate the movement of an entity, but at the end of the animation the entity disappears. This occurs when animating the scale or translation, but not the rotation. I'm not sure whether it's a bug or expected behaviour, but I would like to find a way to stop it.
let transform = Transform(scale: simd_float3.one,
                          rotation: simd_quatf(),
                          translation: [0.05, 0, 0])

let animationDefinition = FromToByAnimation<Transform>(by: transform,
                                                       duration: 1.0,
                                                       bindTarget: .transform)

if let animationResource = try? AnimationResource.generate(with: animationDefinition) {
    entity.playAnimation(animationResource)
}
I know you can use entity.move() and that works fine, but I want to explore other ways to animate entities.
This transform animation works as expected. Fix the opacity of your model if it has translucent materials (use the USDZ Python Tools commands fixOpacity and usdARKitChecker for that). Also, check whether any transformations are already applied to the entity on which you are running the transform animation.
let boxScene = try! Experience.loadBox()
boxScene.children[0].scale *= 3
arView.scene.anchors.append(boxScene)

let entity = boxScene.children[0].children[0]

// Note: the rotation axis must be a unit vector, hence normalize()
let transform = Transform(scale: simd_float3(2, 2, 2),
                          rotation: simd_quatf(angle: .pi, axis: normalize([1, 1, 1])),
                          translation: [0.4, 0, 0])

let animationDefinition = FromToByAnimation<Transform>(by: transform,
                                                       duration: 2.0,
                                                       bindTarget: .transform)

if let anime = try? AnimationResource.generate(with: animationDefinition) {
    entity.playAnimation(anime)
}
As you can see from this example, after applying the animation, the entity does not disappear.
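If the relative by: form still misbehaves in your scene, a hedged alternative (a sketch, not part of the original answer) is to give the animation explicit endpoints via from: and to:, which makes the final transform unambiguous:
let fromToDefinition = FromToByAnimation<Transform>(from: entity.transform,
                                                    to: transform,
                                                    duration: 1.0,
                                                    bindTarget: .transform)
if let resource = try? AnimationResource.generate(with: fromToDefinition) {
    entity.playAnimation(resource)
}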
I am trying to create an AR experience.
I load the Model with animations as an Entity. Let's call it a Toy.
I create an AnchorEntity.
I attach the Toy to the AnchorEntity. Up to this point everything works great.
I want the Toy to walk in random directions, and it does the first time. Then it gets interesting; allow me to share my code.
The first method creates a new Transform for the Toy with modified X and Z translation values to make the Toy move, and that is it.
func walk(completion: @escaping () -> Void) {
    guard let robot = robot else { return }
    let currentTransform = robot.transform
    guard let path = randomPath(from: currentTransform) else { return }
    let (newTranslation, travelTime) = path
    let newTransform = Transform(scale: currentTransform.scale,
                                 rotation: currentTransform.rotation,
                                 translation: newTranslation)
    robot.move(to: newTransform, relativeTo: nil, duration: travelTime)
    DispatchQueue.main.asyncAfter(deadline: .now() + travelTime + 1) {
        completion()
    }
}
We get that new Transform from the method below.
func randomPath(from currentTransform: Transform) -> (SIMD3<Float>, TimeInterval)? {
    // Get the robot's current translation
    let robotTranslation = currentTransform.translation
    // Generate random distances for the model to cross, relative to origin
    let randomXTranslation = Float.random(in: 0.1...0.4) * [-1.0, 1.0].randomElement()!
    let randomZTranslation = Float.random(in: 0.1...0.4) * [-1.0, 1.0].randomElement()!
    // Create a translation relative to the current transform
    let relativeXTranslation = robotTranslation.x + randomXTranslation
    let relativeZTranslation = robotTranslation.z + randomZTranslation
    // Find the length of the path (a square root is always non-negative)
    let path = (randomXTranslation * randomXTranslation + randomZTranslation * randomZTranslation).squareRoot()
    // Calculate the walking time based on the distance and default speed
    let timeOfWalking: Float = path / settings.robotSpeed
    // Based on the old translation, calculate the new one
    let newTranslation: SIMD3<Float> = [relativeXTranslation,
                                        Float(0),
                                        relativeZTranslation]
    return (newTranslation, TimeInterval(timeOfWalking))
}
The problem is that the value of Entity.transform.translation.y grows from 0 to some random value less than 1, always from the second time the walk() method is called.
As you can see, every time the method is called, newTranslation sets the Y value to 0. And yet the Toy's Y translation ends up nonzero.
I am out of ideas; any help is appreciated. I can share the whole code if needed.
I managed to fix the issue by specifying the relativeTo parameter as the Toy's AnchorEntity:
toy.move(to: newTransform, relativeTo: anchorEntity, duration: travelTime)
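This is consistent with how move(to:relativeTo:duration:) works: robot.transform is expressed relative to the Toy's parent, while relativeTo: nil interprets the target transform in world space, so any offset in the AnchorEntity's own transform leaks into the Toy's local translation. An equivalent fix (a sketch, assuming anchorEntity is the Toy's parent) converts the target into world space explicitly:
// Convert the parent-relative target into world space, then move in world space
let worldTarget = anchorEntity.convert(transform: newTransform, to: nil)
toy.move(to: worldTarget, relativeTo: nil, duration: travelTime)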
I want to position an object in front of the camera without changing its parent. The object should be in the center of the screen, at a specified distance distanceFromCamera.
The object is stored as cursorEntity and is a child of sceneEntity.
A reference to the ARView is stored as arView, and the position of the cursorEntity gets updated in the function updateCursorPosition.
First, add a forward property in an extension on float4x4 that gives the forward-facing direction vector of a transform matrix.
extension float4x4 {
    var forward: SIMD3<Float> {
        normalize(SIMD3<Float>(-columns.2.x, -columns.2.y, -columns.2.z))
    }
}
Then, implement the following 4 steps:
func updateCursorPosition() {
    let cameraTransform: Transform = arView.cameraTransform
    // 1. Calculate the camera position local to sceneEntity
    let localCameraPosition: SIMD3<Float> = sceneEntity.convert(position: cameraTransform.translation, from: nil)
    // 2. Get the camera's forward-facing direction vector using the extension above
    //    (this is a world-space direction; it matches local space as long as
    //    sceneEntity carries no rotation or scale)
    let cameraForwardVector: SIMD3<Float> = cameraTransform.matrix.forward
    // 3. Calculate the final local position of the cursor using distanceFromCamera
    let finalPosition: SIMD3<Float> = localCameraPosition + cameraForwardVector * distanceFromCamera
    // 4. Apply the translation
    cursorEntity.transform.translation = finalPosition
}
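To keep the cursor pinned to the screen center, updateCursorPosition() has to run every frame. One way to wire that up (a sketch and an assumption on my part, not part of the original answer; updateSubscription is a hypothetical stored Cancellable property) is to subscribe to the scene's per-frame update event:
// Run the cursor update once per rendered frame
updateSubscription = arView.scene.subscribe(to: SceneEvents.Update.self) { [weak self] _ in
    self?.updateCursorPosition()
}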
I want to make the physics world work without friction and damping.
I tried setting the scene's gravity to (0, 0, 0), created a box and a ball, and applied a force on tap. I want the ball to move forever, but it stops after some time.
How can I set the entities' friction to zero?
Apply a new physics material to your model entity.
For this, use the generate(friction:restitution:) type method:
static func generate(friction: Float = 0,
                     restitution: Float = 0) -> PhysicsMaterialResource
The coefficient of friction is in the range [0, infinity], and the coefficient of restitution is in the range [0, 1].
Here's the code:
arView.environment.background = .color(.darkGray)

let mesh = MeshResource.generateSphere(radius: 0.5)
let material = SimpleMaterial()
let model = ModelEntity(mesh: mesh,
                        materials: [material]) as (ModelEntity & HasPhysics)

let physicsResource: PhysicsMaterialResource = .generate(friction: 0,
                                                         restitution: 0)

model.components[PhysicsBodyComponent.self] = PhysicsBodyComponent(
    shapes: [.generateSphere(radius: 0.51)],
    mass: 20,          // in kilograms
    material: physicsResource,
    mode: .dynamic)

model.generateCollisionShapes(recursive: true)

let anchor = AnchorEntity()
anchor.addChild(model)
arView.scene.anchors.append(anchor)
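To set the frictionless ball in motion, as described in the question, you can give the dynamic body a one-off kick. A minimal sketch (my assumption, not part of the original answer; the impulse vector is arbitrary):
// With friction 0, restitution 0 and gravity (0, 0, 0), the ball keeps
// this velocity until it collides with something
model.applyLinearImpulse([0.5, 0, 0], relativeTo: nil)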
P.S. Due to some imperfections of the physics engine in RealityKit, I suppose there is no way to create eternal bouncing. Hopefully a future RealityKit update will improve the physics engine.
How does one extract the SceneKit depth buffer? I'm making an AR-based app that runs Metal, and I'm really struggling to find any info on how to extract a 2D depth buffer so I can render out fancy 3D photos of my scenes. Any help is greatly appreciated.
Your question is unclear, but I'll try to answer.
Depth pass from VR view
If you need to render a Depth pass from SceneKit's 3D environment then you should use, for instance, a SCNGeometrySource.Semantic structure. There are vertex, normal, texcoord, color and tangent type properties. Let's see what a vertex type property is:
static let vertex: SCNGeometrySource.Semantic
This semantic identifies data containing the positions of each vertex in the geometry. For a custom shader program, you use this semantic to bind SceneKit’s vertex position data to an input attribute of the shader. Vertex position data is typically an array of three- or four-component vectors.
Here's a code excerpt from Apple's iOS Depth Sample project.
UPDATED: Using this code you can get a position for every point in an SCNScene and assign a color to these points (this is what a zDepth channel really is):
import SceneKit

struct PointCloudVertex {
    var x: Float, y: Float, z: Float
    var r: Float, g: Float, b: Float
}

@objc class PointCloud: NSObject {

    var pointCloud: [SCNVector3] = []
    var colors: [UInt8] = []

    public func pointCloudNode() -> SCNNode {
        let points = self.pointCloud
        var vertices = Array(repeating: PointCloudVertex(x: 0, y: 0, z: 0,
                                                         r: 0, g: 0, b: 0),
                             count: points.count)

        for i in 0..<points.count {
            let p = points[i]
            vertices[i].x = Float(p.x)
            vertices[i].y = Float(p.y)
            vertices[i].z = Float(p.z)
            vertices[i].r = Float(colors[i * 4]) / 255.0
            vertices[i].g = Float(colors[i * 4 + 1]) / 255.0
            vertices[i].b = Float(colors[i * 4 + 2]) / 255.0
        }

        return buildNode(points: vertices)
    }

    private func buildNode(points: [PointCloudVertex]) -> SCNNode {
        let vertexData = NSData(bytes: points,
                                length: MemoryLayout<PointCloudVertex>.size * points.count)

        let positionSource = SCNGeometrySource(data: vertexData as Data,
                                               semantic: .vertex,
                                               vectorCount: points.count,
                                               usesFloatComponents: true,
                                               componentsPerVector: 3,
                                               bytesPerComponent: MemoryLayout<Float>.size,
                                               dataOffset: 0,
                                               dataStride: MemoryLayout<PointCloudVertex>.size)

        let colorSource = SCNGeometrySource(data: vertexData as Data,
                                            semantic: .color,
                                            vectorCount: points.count,
                                            usesFloatComponents: true,
                                            componentsPerVector: 3,
                                            bytesPerComponent: MemoryLayout<Float>.size,
                                            dataOffset: MemoryLayout<Float>.size * 3,
                                            dataStride: MemoryLayout<PointCloudVertex>.size)

        let element = SCNGeometryElement(data: nil,
                                         primitiveType: .point,
                                         primitiveCount: points.count,
                                         bytesPerIndex: MemoryLayout<Int>.size)
        element.pointSize = 1
        element.minimumPointScreenSpaceRadius = 1
        element.maximumPointScreenSpaceRadius = 5

        let pointsGeometry = SCNGeometry(sources: [positionSource, colorSource],
                                         elements: [element])
        return SCNNode(geometry: pointsGeometry)
    }
}
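A usage sketch (my assumption, not from the sample project): fill both arrays, generate the node, and attach it to the scene graph. Here scannedPoints and scannedColors are hypothetical inputs, and sceneView is an ARSCNView:
let cloud = PointCloud()
cloud.pointCloud = scannedPoints    // [SCNVector3], one entry per point
cloud.colors = scannedColors        // [UInt8], 4 RGBA bytes per point
sceneView.scene.rootNode.addChildNode(cloud.pointCloudNode())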
Depth pass from AR view
If you need to render a depth pass from ARSCNView, that's possible only if you're using ARFaceTrackingConfiguration with the front-facing camera. If so, you can employ the capturedDepthData instance property, which brings you a depth map captured along with the video frame.
var capturedDepthData: AVDepthData? { get }
But this depth map is captured at only 15 fps and at a lower resolution than the corresponding RGB image, which is captured at 60 fps.
Face-based AR uses the front-facing, depth-sensing camera on compatible devices. When running such a configuration, frames vended by the session contain a depth map captured by the depth camera in addition to the color pixel buffer (see capturedImage) captured by the color camera. This property’s value is always nil when running other AR configurations.
And real code could look like this:
extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        DispatchQueue.global().async {
            guard let frame = self.sceneView.session.currentFrame else { return }
            if let depthData = frame.capturedDepthData {
                // AVDepthData is not a CVImageBuffer itself; its depth map
                // lives in the depthDataMap property (a CVPixelBuffer)
                self.depthImage = depthData.depthDataMap
            }
        }
    }
}
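If you then want to inspect or display the map, a minimal follow-up sketch (an assumption, not part of the original answer; it presumes depthImage is declared as a CVPixelBuffer) is to wrap it in a CIImage:
// Wrap the stored depth map in a CIImage for further Core Image processing
let ciDepth = CIImage(cvPixelBuffer: depthImage)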
Depth pass from Video view
Also, you can extract a true depth pass using the two back-facing cameras and the AVFoundation framework.
Look at the Image Depth Map tutorial, which introduces the concept of disparity.