Within RealityKit, how can I make a world without friction?

I want to make a physics world without friction or damping.
I tried setting the scene's gravity to (0, 0, 0), created a square and a ball, and applied a force on tap. I want the ball to move forever, but it stops after some time.
How can I set the entities' friction to zero?

Apply a new physics material to your model entity.
For that, use the generate(friction:restitution:) type method:
static func generate(friction: Float = 0,
                     restitution: Float = 0) -> PhysicsMaterialResource
/* The coefficient of friction is in the range [0, +infinity), */
/* and the coefficient of restitution is in the range [0, 1]. */
Here's the code:
arView.environment.background = .color(.darkGray)

let mesh = MeshResource.generateSphere(radius: 0.5)
let material = SimpleMaterial()
let model = ModelEntity(mesh: mesh,
                        materials: [material]) as (ModelEntity & HasPhysics)

let physicsResource: PhysicsMaterialResource = .generate(friction: 0,
                                                          restitution: 0)

model.components[PhysicsBodyComponent.self] = PhysicsBodyComponent(
    shapes: [.generateSphere(radius: 0.51)],
    mass: 20,                              // in kilograms
    material: physicsResource,
    mode: .dynamic)

model.generateCollisionShapes(recursive: true)

let anchor = AnchorEntity()
anchor.addChild(model)
arView.scene.anchors.append(anchor)
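Since the question applies a force on tap, here is a rough sketch (not from the original answer) of how that tap handler might look, assuming a UITapGestureRecognizer has been added to arView with this selector; applyLinearImpulse(_:relativeTo:) works on entities with a dynamic physics body in newer RealityKit versions:
@objc func handleTap(_ recognizer: UITapGestureRecognizer) {
    let location = recognizer.location(in: arView)
    // Hit-test the tap against entities in the scene
    if let tapped = arView.entity(at: location) as? ModelEntity {
        // Push the frictionless ball along +X (impulse in kg·m/s)
        tapped.applyLinearImpulse([1, 0, 0], relativeTo: tapped.parent)
    }
}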
P.S. Due to some imperfections of the physics engine in RealityKit, I suppose there's no way to create an eternal bounce. Hopefully the next RealityKit update will fix these imperfections.

Related

Entity disappears at the end of the animation

I'm trying to animate the movement of an entity, but at the end of the animation the entity disappears. This occurs if you animate the scale or translation, but not rotation. I'm not sure if it's a bug or expected behaviour, but I would like to find a way to stop it.
let transform = Transform(scale: simd_float3.one,
                          rotation: simd_quatf(),
                          translation: [0.05, 0, 0])
let animationDefinition = FromToByAnimation<Transform>(by: transform,
                                                       duration: 1.0,
                                                       bindTarget: .transform)
if let animationResource = try? AnimationResource.generate(with: animationDefinition) {
    entity.playAnimation(animationResource)
}
I know you can use entity.move() and that works fine but I want to explore other ways to animate entities.
This transform animation works as expected. Fix the opacity of your model if it has translucent materials (for that, use the USDZ Python Tools commands fixOpacity and usdARKitChecker). Also, check whether any transformations are already applied to the entity on which you are running the transform animation.
let boxScene = try! Experience.loadBox()
boxScene.children[0].scale *= 3
arView.scene.anchors.append(boxScene)

let entity = boxScene.children[0].children[0]

let transform = Transform(scale: simd_float3(2, 2, 2),
                          rotation: simd_quatf(angle: .pi, axis: [1, 1, 1]),
                          translation: [0.4, 0, 0])
let animationDefinition = FromToByAnimation<Transform>(by: transform,
                                                       duration: 2.0,
                                                       bindTarget: .transform)
if let anime = try? AnimationResource.generate(with: animationDefinition) {
    entity.playAnimation(anime)
}
As you can see from this example, after applying the animation, the entity does not disappear.
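For completeness, the entity.move() approach the question mentions would look roughly like this (a sketch, not part of the answer above; note that the target is an absolute transform, so the offset is composed with the current transform first):
// Build a target transform 0.4 m along +X from where the entity is now,
// then let RealityKit animate the move over two seconds.
var target = entity.transform
target.translation += [0.4, 0, 0]
entity.move(to: target,
            relativeTo: entity.parent,
            duration: 2.0,
            timingFunction: .easeInOut)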

I've tried to give a PhysicsBodyComponent to a ModelEntity, and it just falls deep down

In RealityKit, I've tried to give a PhysicsBodyComponent to a ModelEntity.
But as soon as I place the ModelEntity in the real world, it just falls down.
Is there any way to fix this?
You need to create a floor mesh with a PhysicsBodyComponent:
let floor = ModelEntity(mesh: .generateBox(size: [1000, 0, 1000]),
                        materials: [SimpleMaterial()])
floor.generateCollisionShapes(recursive: true)
if let collisionComponent = floor.components[CollisionComponent.self] as? CollisionComponent {
    floor.components[PhysicsBodyComponent.self] = PhysicsBodyComponent(shapes: collisionComponent.shapes,
                                                                       mass: 0,
                                                                       material: nil,
                                                                       mode: .static)
    floor.components[ModelComponent.self] = nil   // make the floor invisible
}
scene?.addChild(floor)
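If the invisible floor should sit on a detected real-world floor rather than at the world origin, one option (a sketch, not part of the original answer, assuming arView is your ARView and plane detection is enabled in the session configuration) is to parent it to a plane anchor:
// Anchor the floor entity to a detected horizontal plane classified as a floor.
let floorAnchor = AnchorEntity(.plane(.horizontal,
                                      classification: .floor,
                                      minimumBounds: [0.5, 0.5]))
floorAnchor.addChild(floor)
arView.scene.anchors.append(floorAnchor)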
Then, when you load your entities, you also give them a PhysicsBodyComponent (and they need a non-zero mass, otherwise they will fall through anyway, which is what eluded me for a long time):
// Requires `import Combine` for AnyCancellable and sink.
var loadModelCancellable: AnyCancellable? = nil
loadModelCancellable = Entity.loadModelAsync(named: modelUri)
    .sink(receiveCompletion: { _ in
        loadModelCancellable?.cancel()
    }, receiveValue: { entity in
        entity.generateCollisionShapes(recursive: true)
        if let collisionComponent = entity.components[CollisionComponent.self] as? CollisionComponent {
            entity.components[PhysicsBodyComponent.self] = PhysicsBodyComponent(shapes: collisionComponent.shapes,
                                                                                mass: 1,
                                                                                material: nil,
                                                                                mode: .dynamic)
        }
        scene.addChild(entity)
        loadModelCancellable?.cancel()
    })
In the end, adding physics to my project had too many unintended consequences for what I was trying to do (just preventing models from overlapping): models pushing each other around, movement code needing to be redone completely, and so on. So I didn't get further than this, but at least it should let you add physics to your models without them falling indefinitely under gravity.

iPad Pro LiDAR - Export Geometry & Texture

I would like to be able to export a mesh and texture from the iPad Pro LiDAR.
There are examples here of how to export a mesh, but I'd like to be able to export the environment texture too:
ARKit 3.5 – How to export OBJ from new iPad Pro with LiDAR?
ARMeshGeometry stores the vertices for the mesh; would it be the case that one would have to 'record' the textures as one scans the environment, and manually apply them?
This post seems to show a way to get texture coordinates, but I can't see a way to do that with ARMeshGeometry: Save ARFaceGeometry to OBJ file
Any pointer in the right direction, or things to look at, is greatly appreciated!
Chris
You need to compute the texture coordinates for each vertex, apply them to the mesh and supply a texture as a material to the mesh.
let geom = meshAnchor.geometry
let vertices = geom.vertices          // ARGeometrySource holding the vertex positions
let size = arFrame.camera.imageResolution
let camera = arFrame.camera
let modelMatrix = meshAnchor.transform

// Project every vertex into the camera image to get its texture coordinate.
// vertex(at:) is assumed here to be a small helper that reads vertex `index`
// out of the ARGeometrySource buffer and returns a SIMD3<Float>.
let textureCoordinates = (0..<vertices.count).map { index -> vector_float2 in
    let vertex = geom.vertex(at: UInt32(index))
    let vertex4 = vector_float4(vertex.x, vertex.y, vertex.z, 1)
    let world_vertex4 = simd_mul(modelMatrix, vertex4)
    let world_vector3 = simd_float3(x: world_vertex4.x, y: world_vertex4.y, z: world_vertex4.z)
    let pt = camera.projectPoint(world_vector3,
                                 orientation: .portrait,
                                 viewportSize: CGSize(width: CGFloat(size.height),
                                                      height: CGFloat(size.width)))
    let v = 1.0 - Float(pt.x) / Float(size.height)
    let u = Float(pt.y) / Float(size.width)
    return vector_float2(u, v)
}

// Construct the vertex, normal and face geometry sources from the source geometry
// directly, wrap the computed texture coordinates in their own geometry source
// (a sketch of this is shown below), and then apply the texture.
let scnGeometry = SCNGeometry(sources: [verticesSource, textureCoordinatesSource, normalsSource],
                              elements: [facesSource])

// UIImage(pixelBuffer:) is assumed to be a small helper that converts the
// captured CVPixelBuffer (e.g. via CIImage) into a UIImage.
let texture = UIImage(pixelBuffer: frame.capturedImage)
let imageMaterial = SCNMaterial()
imageMaterial.isDoubleSided = false
imageMaterial.diffuse.contents = texture
scnGeometry.materials = [imageMaterial]
let pcNode = SCNNode(geometry: scnGeometry)
pcNode, if added to your scene, will contain the mesh with the texture applied.
Texture coordinates computation from here
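The snippet above references verticesSource, normalsSource, facesSource and a texture-coordinate source without constructing them. Below is a minimal sketch of one way they could be built from the ARMeshAnchor, using the Metal buffers that ARGeometrySource exposes; the variable names are mine, not part of the original answer:
let geometry = meshAnchor.geometry

// Vertex positions and normals can be wrapped directly from their Metal buffers.
let verticesSource = SCNGeometrySource(buffer: geometry.vertices.buffer,
                                       vertexFormat: geometry.vertices.format,
                                       semantic: .vertex,
                                       vertexCount: geometry.vertices.count,
                                       dataOffset: geometry.vertices.offset,
                                       dataStride: geometry.vertices.stride)
let normalsSource = SCNGeometrySource(buffer: geometry.normals.buffer,
                                      vertexFormat: geometry.normals.format,
                                      semantic: .normal,
                                      vertexCount: geometry.normals.count,
                                      dataOffset: geometry.normals.offset,
                                      dataStride: geometry.normals.stride)

// The texture coordinates computed earlier, converted into a geometry source.
let textureCoordinatesSource = SCNGeometrySource(textureCoordinates:
    textureCoordinates.map { CGPoint(x: CGFloat($0.x), y: CGFloat($0.y)) })

// The face indices, copied into Data and turned into a triangle element.
let faces = geometry.faces
let indexData = Data(bytes: faces.buffer.contents(),
                     count: faces.count * faces.indexCountPerPrimitive * faces.bytesPerIndex)
let facesSource = SCNGeometryElement(data: indexData,
                                     primitiveType: .triangles,
                                     primitiveCount: faces.count,
                                     bytesPerIndex: faces.bytesPerIndex)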
Check out my answer over here
It's a description of this project: MetalWorldTextureScan which demonstrates how to scan your environment and create a textured mesh using ARKit and Metal.

How to apply a 3D Model on detected face by Apple Vision "NO AR"

With the iPhone X TrueDepth camera it's possible to get the 3D coordinates of any object and use that information to position and scale the object, but with older iPhones we don't have access to AR on the front-facing camera. What I've done so far is detect the face using the Apple Vision framework and draw some 2D paths around the face or landmarks.
I've made a scene view and applied it as the front layer of my view with a clear background, and beneath it is an AVCaptureVideoPreviewLayer. After detecting the face, my 3D object appears on the screen, but positioning and scaling it correctly according to the face's boundingBox requires unprojecting and other steps that I got stuck on. I've also tried converting the 2D bounding box to 3D using CATransform3D, but I failed! I'm wondering whether what I want to achieve is even possible. I remember Snapchat was doing this before ARKit was available on the iPhone, if I'm not wrong!
override func viewDidLoad() {
    super.viewDidLoad()
    self.view.addSubview(self.sceneView)
    self.sceneView.frame = self.view.bounds
    self.sceneView.backgroundColor = .clear
    self.node = self.scene.rootNode.childNode(withName: "face",
                                              recursively: true)!
}

fileprivate func updateFaceView(for result: VNFaceObservation, twoDFace: Face2D) {
    let box = convert(rect: result.boundingBox)
    defer {
        DispatchQueue.main.async {
            self.faceView.setNeedsDisplay()
        }
    }
    faceView.boundingBox = box
    self.sceneView.scene?.rootNode.addChildNode(self.node)
    let unprojectedBox = SCNVector3(box.origin.x, box.origin.y, 0.8)
    let worldPoint = sceneView.unprojectPoint(unprojectedBox)
    self.node.position = worldPoint
    /* Here I have to do the unprojection to convert the value
       from a 2D point to a 3D point; this is also where the issue is. */
}
The only way to achieve this is to use SceneKit with an orthographic camera and use SCNGeometrySource to match the landmarks from Vision to the vertices of the mesh.
First, you need a mesh with the same number of vertices as Vision provides (66 to 77, depending on which Vision revision you're on). You can create one using a tool like Blender.
The mesh in Blender
Then, in code, each time you process your landmarks, follow these steps:
1- Get the mesh vertices:
func getVertices() -> [SCNVector3] {
    var result = [SCNVector3]()
    let planeSources = shape!.geometry?.sources(for: SCNGeometrySource.Semantic.vertex)
    if let planeSource = planeSources?.first {
        let stride = planeSource.dataStride
        let offset = planeSource.dataOffset
        let componentsPerVector = planeSource.componentsPerVector
        let bytesPerVector = componentsPerVector * planeSource.bytesPerComponent
        let vectors = [SCNVector3](repeating: SCNVector3Zero, count: planeSource.vectorCount)
        // Read each vertex out of the raw geometry-source data
        let vertices = vectors.enumerated().map({ (index: Int, element: SCNVector3) -> SCNVector3 in
            var vectorData = [Float](repeating: 0, count: componentsPerVector)
            let byteRange = NSMakeRange(index * stride + offset, bytesPerVector)
            let data = planeSource.data
            (data as NSData).getBytes(&vectorData, range: byteRange)
            return SCNVector3(x: vectorData[0], y: vectorData[1], z: vectorData[2])
        })
        result = vertices
    }
    return result
}
2- Unproject each landmark captured by Vision and keep them in a SCNVector3 array:
let unprojectedLandmark = sceneView.unprojectPoint(SCNVector3(landmarks[i].x, landmarks[i].y, 0))
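Spelled out over all landmark points, that might look something like this (a sketch; Vision's normalized, bottom-left-origin coordinates usually need converting to the scene view's coordinate space first, which is glossed over here):
var unprojectedLandmarks = [SCNVector3]()
if let points = result.landmarks?.allPoints?.pointsInImage(imageSize: sceneView.bounds.size) {
    for point in points {
        // Unproject each 2D landmark at a fixed depth (z = 0, the near plane)
        let unprojected = sceneView.unprojectPoint(SCNVector3(Float(point.x),
                                                              Float(point.y),
                                                              0))
        unprojectedLandmarks.append(unprojected)
    }
}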
3- Modify the geometry using the new vertices:
func reshapeGeometry(_ vertices: [SCNVector3]) {
    // Replace the vertex source with the new (unprojected) vertices,
    // keeping every other source (normals, texture coordinates, ...) intact.
    let source = SCNGeometrySource(vertices: vertices)
    var newSources = [SCNGeometrySource]()
    newSources.append(source)
    for source in shape!.geometry!.sources {
        if source.semantic != SCNGeometrySource.Semantic.vertex {
            newSources.append(source)
        }
    }
    let geometry = SCNGeometry(sources: newSources, elements: shape!.geometry?.elements)
    let material = shape!.geometry?.firstMaterial
    shape!.geometry = geometry
    shape!.geometry?.firstMaterial = material
}
I was able to do that and that was my method.
Hope this helps!
I would suggest looking at Google's ARCore products, which support an AR scene on Apple devices with the back- or front-facing camera, and which add some functionality beyond Apple's when it comes to devices without a face-depth camera.
Apple's Vision framework is almost the same as Google's equivalent, which returns 2D points representing the eyes/mouth/nose etc., plus a face-tilt component.
However, if you want a way to simply apply 2D textures to a responsive 3D face, or alternatively attach 3D models to points on the face, then take a look at Google's ARCore Augmented Faces framework. It has great sample code for both iOS and Android.

How to combine to two or more SCNGeometry / SCNNode into one

Say I would like to create a custom shape by combining an SCNBox, an SCNPyramid, etc. I can put them together by setting the right positions and geometries. However, I just cannot find a way to combine them into a single unit that can be modified, or that reacts as one body, in the physics world.
With the code below I would like to create a simple house-shaped SCNNode, and I would like the nodes to stay attached to each other when affected by any collisions and gravity.
Can anyone give some hints?
let boxGeo = SCNBox(width: 5, height: 5, length: 5, chamferRadius: 0)
boxGeo.firstMaterial?.diffuse.contents = UIColor.blue
let box = SCNNode(geometry: boxGeo)
box.position = SCNVector3Make(0, -2.5, 0)
scene.rootNode.addChildNode(box)

let pyramidGeo = SCNPyramid(width: 7, height: 7, length: 7)
pyramidGeo.firstMaterial?.diffuse.contents = UIColor.green
let pyramid = SCNNode(geometry: pyramidGeo)
pyramid.position = SCNVector3Make(0, 0, 0)
scene.rootNode.addChildNode(pyramid)
Make a container node, simply an empty node without any geometry. Let's call it "houseNode" since that's what it looks like you're building.
let houseNode = SCNNode()
Now make your other two nodes children of this.
houseNode.addChildNode(pyramid)
houseNode.addChildNode(box)
Now use the container node anytime you want to act on the two combined nodes.
Edit: You can effect changes to the geometry of the objects in your container by enumeration:
houseNode.enumerateChildNodes { node, stop in
    // change the color of all the children
    node.geometry?.firstMaterial?.diffuse.contents = UIColor.purple
    // I'm not sure on this next one, I've yet to use "physics".
    houseNode.physicsBody?.isAffectedByGravity = true
}
Thanks bpedit!
With the child-node method and the following code to set a physics shape on the container, I found the solution.
houseNode.physicsBody = SCNPhysicsBody(type: .dynamic,
                                       shape: SCNPhysicsShape(node: houseNode,
                                                              options: [SCNPhysicsShape.Option.keepAsCompound: true]))
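As a side note (not part of the accepted approach above), if you only need the combined shape as a single renderable and physical unit, and don't need to keep editing the individual children, SceneKit can also merge a node hierarchy into one node:
// flattenedClone() bakes houseNode and all of its children into a single node
// with one combined geometry, which can then be given a single physics body.
let combined = houseNode.flattenedClone()
combined.physicsBody = SCNPhysicsBody(type: .dynamic,
                                      shape: SCNPhysicsShape(node: combined, options: nil))
scene.rootNode.addChildNode(combined)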