Getting plane size from RayCasting - swift

According to Apple's article Ray-Casting and Hit-Testing, I should use the ray casting provided by RealityKit to detect surfaces instead of the hit testing provided by ARKit, since, as Apple says,
the hit-testing functions remain present for compatibility
However, I can't find a way to know the extent of the surface detected by the raycast query.
So, given this code:
func startRayCasting() {
    guard let raycastQuery = arView.makeRaycastQuery(from: arView.center,
                                                     allowing: .estimatedPlane,
                                                     alignment: .vertical) else {
        return
    }
    guard let result = arView.session.raycast(raycastQuery).first else {
        return
    }
    let transformation = Transform(matrix: result.worldTransform)
    let plane = Plane(color: .green, transformation: transformation)
    plane.transform = transformation
    let raycastAnchor = AnchorEntity(raycastResult: result)
    raycastAnchor.addChild(plane)
    arView.scene.addAnchor(raycastAnchor)
}
I would expect the plane I am creating to get the size and position of the detected plane. However, this does not happen.
So my question is: is ray casting suitable for detecting a surface's size and location, or is it just for checking the 2D location of a point on a surface?

Apple's documentation says here:
The raycast instance method performs a convex ray cast against all the geometry in the scene for a ray of a given origin, direction, and length.
and here:
The raycast instance method performs a convex ray cast against all the geometry in the scene for a ray between two end points.
In both cases the raycast methods are used for detecting intersections, and in both cases they return an array of collision cast hit results.
That's all raycast was made for.
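
If what you actually need is the detected surface's size, the raycast result itself won't carry it, but the anchor behind the result can. Below is a minimal sketch (my assumption, not part of the quoted documentation): target existing plane geometry and read the extent of the backing ARPlaneAnchor. This requires plane detection to be enabled on the session configuration.

import ARKit
import RealityKit

// Sketch: read a detected plane's size from its ARPlaneAnchor rather than
// from the raycast hit itself. Results only carry an ARPlaneAnchor when
// plane detection is enabled and the query targets existing planes.
func detectedPlaneExtent(in arView: ARView) -> simd_float3? {
    guard let result = arView.raycast(from: arView.center,
                                      allowing: .existingPlaneGeometry,
                                      alignment: .vertical).first,
          let planeAnchor = result.anchor as? ARPlaneAnchor else {
        return nil
    }
    // extent.x and extent.z are the plane's estimated width and depth in meters
    return planeAnchor.extent
}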

Related

SCNKit: Hit test doesn't hit node's dynamically modified geometry

I'm facing an issue where SCNView.hitTest does not detect hits against geometry that I'm modifying dynamically on the CPU.
Here's the overview: I have a node that uses an SCNGeometry created from an MTLBuffer of vertices:
func createGeometry(vertexBuffer: MTLBuffer, vertexCount: Int) -> SCNGeometry {
    let geometry = SCNGeometry(sources: [
        SCNGeometrySource(
            buffer: vertexBuffer,
            vertexFormat: .float3,
            semantic: .vertex,
            vertexCount: vertexCount,
            dataOffset: 0,
            dataStride: MemoryLayout<SIMD3<Float>>.stride),
    ], elements: [
        SCNGeometryElement(indices: ..., primitiveType: .triangles)
    ])
    return geometry
}
let vertexBuffer: MTLBuffer = // shared buffer
let vertexCount = ...
let node = SCNNode(geometry: createGeometry(vertexBuffer: vertexBuffer, vertexCount: vertexCount))
As the app is running, I then dynamically modify the vertex buffer in the SceneKit update loop:
// In the SceneKit update function
let ptr = vertexBuffer.contents().bindMemory(to: SIMD3<Float>.self, capacity: vertexCount)
for i in 0..<vertexCount {
    ptr[i] = // modify vertex
}
This dynamic geometry is correctly rendered by SceneKit. However, when I then try hit testing against the node using SCNView.hitTest, no hits are detected against the modified geometry.
I can work around this by re-creating the node's geometry after modifying the data:
// after updating data
node.geometry = createGeometry(vertexBuffer: vertexBuffer, vertexCount: vertexCount)
However, this feels like a hack.
What is the proper way to have hit testing work reliably for a node with dynamically changing SCNGeometry?
I think there's no proper way to make hit-testing work reliably in your situation. It would only be possible if hit-testing didn't depend on the SceneKit/Metal render loop and delegate pattern. But since it entirely depends on them, you're left with recreating SCNGeometry instances, which, as you said earlier, is an expensive operation. So I totally agree with @HamidYusifli.
When you perform a hit-test search, SceneKit looks for SCNGeometry objects along the ray you specify. For each intersection between the ray and a geometry, SceneKit creates a hit-test result to provide information about both the SCNNode object containing the geometry and the location of the intersection on the geometry’s surface.
The problem in your case is that when you modify the buffer's contents (MTLBuffer) at render time, SceneKit does not know about it and therefore cannot update the SCNGeometry object that is used for performing the hit-test.
So the only way I can see to solve this issue is to recreate your SCNGeometry object.
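
For completeness, here is a sketch of that recreate step wired into the render loop, reusing the question's createGeometry(vertexBuffer:vertexCount:) helper (the names and the placeholder mutation are assumptions):

// Sketch: rebuild the geometry after mutating the shared buffer so that
// SceneKit's hit-testing data reflects the new vertices.
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    let ptr = vertexBuffer.contents().bindMemory(to: SIMD3<Float>.self,
                                                 capacity: vertexCount)
    for i in 0..<vertexCount {
        ptr[i].y += 0.001 // placeholder mutation; your real update goes here
    }
    // Re-assigning a freshly created geometry is what makes subsequent
    // SCNView.hitTest calls see the modified vertices.
    node.geometry = createGeometry(vertexBuffer: vertexBuffer,
                                   vertexCount: vertexCount)
}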

iOS RealityKit : get world position of placed object from raycast center of screen

Using raycast and the tap gesture, I have successfully placed multiple objects (entities and anchors) in my ARView.
Now I am trying to get the entity that is closest to us, using the center of the screen, in order to place an object near it. So you can imagine that every time an anchor is close to the center of the phone screen, a new object "spawns".
For that I am trying to use raycast, but with my code:
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    if let hitEntity = arView.entity(at: self.arView.center) {
        print("hitEntity IS FOUND !")
        print(hitEntity.name) // this is the object I previously placed
        guard let result = arView.raycast(from: self.arView.center,
                                          allowing: .estimatedPlane,
                                          alignment: .any).first else { return }
        // here result is the surface behind/below the object, not the object I want
        return
    }
}
the raycast result triggers on a surface, but I can't manage to get the world transform of the object (entity) rather than the surface behind it.
Do you have an idea?
Thanks
To get the worldTransform of the entity, you can call hitEntity.position(relativeTo: nil).
If I'm not understanding your question correctly, this might help too:
To get the position of result relative to the entity's local space, you can call hitEntity.convert(position: Transform(matrix: result.worldTransform).translation, from: nil), since convert(position:from:) expects a position rather than a full transform.
Convert is a super useful method in RealityKit. There are a few different variants, but here's the documentation for the one I used above:
https://developer.apple.com/documentation/realitykit/hastransform/3244194-convert/
I hope one of those helps you!
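
Putting the two suggestions together, a short sketch using the names from the question (hitEntity and result are assumed to come from the code above):

// World-space position of the entity under the screen center
let entityWorldPosition = hitEntity.position(relativeTo: nil)

// World-space position of the raycast hit, extracted from its transform
let hitWorldPosition = Transform(matrix: result.worldTransform).translation

// The same hit point expressed in hitEntity's local space
let hitInEntitySpace = hitEntity.convert(position: hitWorldPosition, from: nil)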

SceneKit: Rotate node but define target angles

EDIT: I have solved the problem and will post the solution in the next couple of days.
I'm building 3D achievements similar to Apple's Activity app.
I've already loaded my 3D model (a scene with a single node), can show it, and can tap on it to apply a rotational force:
@objc func objectTapped(_ gesture: UITapGestureRecognizer) {
    let tapLocation = gesture.location(in: scnView)
    let hitResults = scnView.hitTest(tapLocation, options: [:])
    if let tappedNode = (hitResults.first { $0.node === badgeNode })?.node {
        let pos = Float(tapLocation.x) - tappedNode.boundingBox.max.x
        let tappedVector = SCNVector4(x: 0, y: pos, z: 0, w: 0.1)
        tappedNode.physicsBody?.applyTorque(tappedVector, asImpulse: true)
    }
}
This works fine. Now to the tricky part:
I want the node to rotate until it shows either its front or its back side (like in the Activity app), and then stop. It should stop naturally, which means it can overshoot a bit and then return.
To describe it with pictures - here I am holding the node in this position...
...and if I let go of the node, it will rotate to show the front side, which includes a little bit of overshooting. This is the ending position:
Since I'm quite new to SceneKit, I have trouble figuring out how to achieve this effect. It seems like I could achieve it with SceneKit objects such as gravity fields, without having to calculate a whole lot of stuff myself, or at least that's what I'm hoping for.
I'm not necessarily asking for a full solution; I basically just need a pointer in the right direction. Thanks in advance!
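
Not a full solution, but one common direction (a sketch of my own, not the solution the author eventually posted): treat the settling as a damped spring. Each frame, apply a torque proportional to the angle error toward the nearest front-or-back orientation, minus a damping term; this naturally produces a small overshoot and return. The constants are placeholders to tune.

// Damped-spring settle toward the nearest multiple of pi around the y-axis.
// Assumes badgeNode from the question and iOS, where SCNVector components
// are Float.
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    guard let body = badgeNode.physicsBody else { return }
    // Use the presentation node: it reflects the physics-driven pose.
    let angle = badgeNode.presentation.eulerAngles.y
    // The nearest "front or back" orientation is a multiple of pi.
    let target = round(angle / .pi) * .pi
    // y-component of the angular velocity (axis scaled by magnitude)
    let angularSpeedY = body.angularVelocity.y * body.angularVelocity.w
    let stiffness: Float = 2.0 // pulls toward the target
    let damping: Float = 0.5   // bleeds off angular velocity
    let torque = stiffness * (target - angle) - damping * angularSpeedY
    body.applyTorque(SCNVector4(x: 0, y: 1, z: 0, w: torque), asImpulse: false)
}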

How are the ARKit People Occlusion samples being done?

This may be an obscure question, but I see lots of very cool samples online of how people are using the new people-occlusion technology in ARKit 3 to effectively "separate" people from the background and apply some sort of filtering to the "people" (see here).
In looking at Apple's provided source code and documentation, I see that I can retrieve the segmentationBuffer from an ARFrame, which I've done like so:
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    let image = frame.capturedImage
    if let segmentationBuffer = frame.segmentationBuffer {
        // Get the segmentation's width
        let segmentedWidth = CVPixelBufferGetWidth(segmentationBuffer)
        // Create the mask from that pixel buffer.
        let segmentationMaskImage = CIImage(cvPixelBuffer: segmentationBuffer, options: [:])
        // Smooth edges to create an alpha matte, then upscale it to the RGB resolution.
        let alphaUpscaleFactor = Float(CVPixelBufferGetWidth(image)) / Float(segmentedWidth)
        let alphaMatte = segmentationMaskImage.clampedToExtent()
            .applyingFilter("CIGaussianBlur", parameters: ["inputRadius": 2.0])
            .cropped(to: segmentationMaskImage.extent)
            .applyingFilter("CIBicubicScaleTransform", parameters: ["inputScale": alphaUpscaleFactor])
        // Unknown...
    }
}
In the "unknown" section, I am trying to determine how I would render my new "blurred" person on top of the original camera feed. There does not seem to be any methods to draw the new CIImage on "top" of the original camera feed, as the ARView has no way of being manually updated.
In the following code snippet you can see the personSegmentationWithDepth frame-semantics property used for depth compositing (there are RGB, alpha, and depth channels):
// Automatically segmenting and then compositing foreground (people),
// middle-ground (3D model) and background.
let session = ARSession()

if let configuration = session.configuration as? ARWorldTrackingConfiguration {
    configuration.frameSemantics.insert(.personSegmentationWithDepth)
    session.run(configuration)
}
You can manually access the depth data of world tracking as a CVPixelBuffer (depth values for the performed segmentation):
let image = frame.estimatedDepthData
And you can manually access the depth data of face tracking as a CVPixelBuffer (from the TrueDepth camera):
let image = session.currentFrame?.capturedDepthData?.depthDataMap
Also, there's a generateDilatedDepth instance method in ARKit 3.0:
func generateDilatedDepth(from frame: ARFrame,
                          commandBuffer: MTLCommandBuffer) -> MTLTexture
In your case you have to use estimatedDepthData because the Apple documentation says:
It's a buffer that represents the estimated depth values from the camera feed that you use to occlude virtual content.
var estimatedDepthData: CVPixelBuffer? { get }
If you multiply the depth data from this buffer (you first have to convert the depth channel to RGB) by the RGB or alpha data using compositing techniques, you'll get awesome effects.
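
As an illustration of that compositing idea, here is a sketch of the "Unknown..." step from the question using Core Image's blend-with-mask filter (alphaMatte is the matte built in the question's code; the blur radius is an arbitrary choice):

// Blur the whole camera frame, then use the person matte to keep the
// blurred pixels only where people were segmented.
let cameraImage = CIImage(cvPixelBuffer: frame.capturedImage)
let blurredPeople = cameraImage.applyingFilter("CIGaussianBlur",
                                               parameters: ["inputRadius": 8.0])
let composited = blurredPeople.applyingFilter("CIBlendWithMask", parameters: [
    kCIInputBackgroundImageKey: cameraImage,
    kCIInputMaskImageKey: alphaMatte
])
// Render `composited` yourself (e.g. with a CIContext into a Metal drawable);
// ARView does not expose a hook for replacing its camera feed.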
Look at these six images: the lower row represents three RGB images corrected with the depth channel: depth grading, depth blurring, and a depth point position pass.
The Bringing People into AR WWDC session has some information, especially about ARMatteGenerator, and it also comes with sample code.

SpriteKit physics in Swift - Ball slides against wall instead of reflecting

I have been creating my own very simple test game based on Breakout while learning SpriteKit (using iOS Games by Tutorials by Ray Wenderlich et al.) to see if I can apply the concepts I have learned. I have decided to simplify my code by using an .sks file to create the sprite nodes and by replacing my manual bounds checking and collision with physics bodies.
However, my ball keeps running parallel to walls/other rectangles (simply sliding up and down them) any time it collides with them at a steep angle. Here is the relevant code; I have moved the physics body properties into code to make them more visible:
import SpriteKit

struct PhysicsCategory {
    static let None: UInt32 = 0       // 0
    static let Edge: UInt32 = 0b1     // 1
    static let Paddle: UInt32 = 0b10  // 2
    static let Ball: UInt32 = 0b100   // 4
}

var paddle: SKSpriteNode!
var ball: SKSpriteNode!

class GameScene: SKScene, SKPhysicsContactDelegate {
    override func didMoveToView(view: SKView) {
        physicsWorld.gravity = CGVector.zeroVector

        let edge = SKNode()
        edge.physicsBody = SKPhysicsBody(edgeLoopFromRect: frame)
        edge.physicsBody!.usesPreciseCollisionDetection = true
        edge.physicsBody!.categoryBitMask = PhysicsCategory.Edge
        edge.physicsBody!.friction = 0
        edge.physicsBody!.restitution = 1
        edge.physicsBody!.angularDamping = 0
        edge.physicsBody!.linearDamping = 0
        edge.physicsBody!.dynamic = false
        addChild(edge)

        ball = childNodeWithName("ball") as SKSpriteNode
        ball.physicsBody = SKPhysicsBody(rectangleOfSize: ball.size)
        ball.physicsBody!.usesPreciseCollisionDetection = true
        ball.physicsBody!.categoryBitMask = PhysicsCategory.Ball
        ball.physicsBody!.collisionBitMask = PhysicsCategory.Edge | PhysicsCategory.Paddle
        ball.physicsBody!.allowsRotation = false
        ball.physicsBody!.friction = 0
        ball.physicsBody!.restitution = 1
        ball.physicsBody!.angularDamping = 0
        ball.physicsBody!.linearDamping = 0

        physicsWorld.contactDelegate = self
    }
}
Forgot to mention this before, but I added a simple touchesBegan function to debug the bounces - it just adjusts the velocity to point the ball at the touch point:
override func touchesBegan(touches: NSSet, withEvent event: UIEvent) {
    let touch = touches.anyObject() as UITouch
    let moveToward = touch.locationInNode(self)
    let targetVector = (moveToward - ball.position).normalized() * 300.0
    ball.physicsBody!.velocity = CGVector(point: targetVector)
}
The normalized() function just reduces the ball/touch position delta to a unit vector, and there is an override of the minus operator that allows for CGPoint subtraction.
The ball/edge collisions should always reflect the ball at a precisely opposite angle, but for some reason the ball really seems to have a thing for right angles. I can of course implement some workaround to reflect the ball's angle manually, but the point is that I want to do this all using the built-in physics functionality in SpriteKit. Is there something obvious that I am missing?
This appears to be an issue with collision detection. Most have found solutions by using didBeginContact and reapplying the force in the opposite direction. Note that he says didMoveToView but corrects himself in a later comment to didBeginContact.
See the comments at the bottom of the Ray Wenderlich tutorial here:
I have a fix for the problem with the ball "riding the rail" if it strikes at a shallow angle (@aziz76 and @colinf). I added another category, "BorderCategory", and assigned it to the border PhysicsBody we create in didMoveToView.
and a similar SO question here explaining why it is happening:
Even if you do that, though, many physics engines (including SpriteKit's) have trouble with situations like this because of floating point rounding errors. I've found that when I want a body to keep a constant speed after a collision, it's best to force it to: use a didEndContact: or didSimulatePhysics handler to reset the moving body's velocity so it's going the same speed it was before the collision (but in the opposite direction).
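
A sketch of that quoted suggestion, assuming a ballSpeed constant holding the ball's intended speed (written in the same Swift vintage as the question):

// Rescale the ball's velocity back to a constant speed once the contact ends.
func didEndContact(contact: SKPhysicsContact) {
    let body = ball.physicsBody!
    let v = body.velocity
    let magnitude = sqrt(v.dx * v.dx + v.dy * v.dy)
    if magnitude > 0 {
        let scale = ballSpeed / magnitude
        body.velocity = CGVector(dx: v.dx * scale, dy: v.dy * scale)
    }
}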
Also, another thing I noticed is that you are using a square instead of a circle for your ball; you may want to consider using...
ball.physicsBody = SKPhysicsBody(circleOfRadius: ball.size.width/2)
So it turns out you aren't crazy, which is always good to hear from someone else, and hopefully this will help you find a solution that works best for your application.
I came up with a temporary solution that is working surprisingly well: simply apply a very small impulse opposite to the border. You may need to change the strength based on the masses in your system.
func didBeginContact(contact: SKPhysicsContact) {
    let otherNode = contact.bodyA.node == ball.sprite ? contact.bodyB.node : contact.bodyA.node
    if let obstacle = otherNode as? Obstacle {
        ball.onCollision(obstacle)
    } else if let border = otherNode as? SKSpriteNode {
        assert(border.name == "border", "Bad assumption")
        let strength = 1.0 * (ball.sprite.position.x < frame.width / 2 ? 1 : -1)
        let body = ball.sprite.physicsBody!
        body.applyImpulse(CGVector(dx: strength, dy: 0))
    }
}
In reality, this should not be necessary since, as described in the question, a frictionless, fully elastic collision dictates that the ball should rebound by inverting its x velocity (assuming side borders) no matter how small the collision angle is.
Instead, what happens in the game is as if SpriteKit ignores the x velocity when it is smaller than a certain value, making the ball slide along the wall without rebounding.
Final Note
After reading this and this, it's obvious to me that the real answer is that for any serious physics game you should be using Box2D instead. You get way too many perks from the migration.
This problem only seems to occur when the velocity is small in either direction. However, to reduce the effect it is possible to decrease the speed of the physicsWorld, e.g.,
physicsWorld.speed = 0.1
and then increase the velocity of the physicsBody, e.g.,
let targetVector = (moveToward - ball.position).normalized() * 300.0 * 10
ball.physicsBody!.velocity = CGVector(point: targetVector)
Add the code below:
let border = SKPhysicsBody(edgeLoopFrom: self.frame)
border.friction = 0
border.restitution = 1
self.physicsBody = border
which will make your ball bounce back when it collides with the wall.
Restitution is the bounciness of the physics body, so setting it to 1 will bounce the ball back.
I was seeing exactly the same issue, but the fix for me was not related to the collision detection issues mentioned in the other answers. It turns out I was setting the ball into motion using an SKAction that repeats forever. I eventually discovered that this conflicts with SpriteKit's physics simulation, leading to the node/ball travelling along the wall instead of bouncing off it.
I'm assuming that the repeating SKAction continues to be applied and overrides/conflicts with the physics simulation's automatic adjustment of the ball's physicsBody.velocity property.
The fix was to set the ball into motion by setting the velocity on its physicsBody property. Once I'd done this, the ball began bouncing correctly. I'm guessing that moving it via forces and impulses on the physicsBody will also work, given that they are part of the physics simulation.
It took me an embarrassing amount of time to realise this issue, so I'm posting this here in case I can save anyone else some time. Thank you to 0x141e! Your comment put me (and my ball) on the right path.
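
In code, the difference this answer describes looks roughly like this (the velocity value is arbitrary):

// Conflicts with the physics simulation: a repeating move action keeps
// overriding the body's velocity, so the ball hugs the wall.
// ball.runAction(SKAction.repeatActionForever(moveAction))

// Works: let the physics simulation own the motion by setting the body's
// velocity (or applying forces/impulses) instead.
ball.physicsBody!.velocity = CGVector(dx: 200.0, dy: 200.0)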
The problem is twofold: 1) it will not be solved by altering the friction/restitution of the physics bodies, and 2) it will not be reliably addressed by a return impulse in the renderer() loop, because the contact is reported after the body has already begun decelerating.
Issue 1: Adjusting physics properties has no effect.
Because the angular component of the collision is below some predetermined threshold, the physics engine does not register it as a physical collision, and therefore the bodies do not react per the physics properties you've set. In this case, restitution is not considered, regardless of the setting.
Issue 2: Applying an impulse force when the collision is detected will not produce consistent results. This is because, in order to simulate restitution, one needs the velocity of the object just prior to impact.
For instance, if an object hits the floor at -10 m/s and you want to simulate 0.8 restitution, you would want that object to be propelled at 8 m/s in the opposite direction.
Unfortunately, due to the render loop, the velocity registered when the collision occurs is much lower, since the object has already decelerated.
For example, in the simulations I was running, a ball hitting a floor at a low angle was arriving at -9 m/s, but the velocity registered when the collision was detected was -2 m/s.
This matters because, to create a consistent representation of restitution, we must know the pre-collision velocity in order to arrive at the desired post-collision velocity; you can't ascertain this in the Swift collision callback delegate.
Solution:
Step 1: During the render cycle, record the velocity of the object.
// Prior to the extension, define two variables:
var objectNode: SCNNode!
var objectVelocity: SCNVector3!

// Then, in the renderer delegate, capture the velocity of the object:
extension GameViewController: SCNSceneRendererDelegate {
    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        if objectNode != nil {
            // Capture the object's velocity here; it is saved prior to the collision
            if objectNode.physicsBody != nil {
                objectVelocity = objectNode.physicsBody!.velocity
            }
        }
    }
}
Step 2: Apply a return impulse when the object collides, using the velocity saved prior to the collision. In this example, I am only using the y-component since I am simulating restitution in that axis.
extension GameViewController: SCNPhysicsContactDelegate {
    func physicsWorld(_ world: SCNPhysicsWorld, didBegin contact: SCNPhysicsContact) {
        // The bounce-back factor is in essence restitution. It is negative,
        // signifying that the direction of the vector will oppose the impact.
        let bounceBackFactor: Float = -0.8
        // The slowest impact registered before the restitution no longer takes place
        let minYVelocity: Float = -2.5
        // The smallest return force that can be applied (optional)
        let minBounceBack: Float = 2.5

        if contact.nodeA.name == "YourMovingObjectName" && contact.nodeB.name == "Border" {
            // Using the velocity saved during the render loop
            let yVel = objectVelocity.y
            let vel = contact.nodeA.physicsBody?.velocity
            let bounceBack: Float = yVel * bounceBackFactor
            if yVel < minYVelocity {
                // Here, the opposite force is applied (in the y axis in this example)
                contact.nodeA.physicsBody?.velocity = SCNVector3(x: vel!.x, y: bounceBack, z: vel!.z)
            }
        }
        if contact.nodeB.name == "YourMovingObjectName" && contact.nodeA.name == "Border" {
            // Using the velocity saved during the render loop
            let yVel = objectVelocity.y
            let vel = contact.nodeB.physicsBody?.velocity
            let bounceBack: Float = yVel * bounceBackFactor
            if yVel < minYVelocity {
                // Here, the opposite force is applied (in the y axis in this example)
                contact.nodeB.physicsBody?.velocity = SCNVector3(x: vel!.x, y: bounceBack, z: vel!.z)
            }
        }
    }
}
}