ARKit: Calculate distance from a wall to the camera

I’m developing a project with ARKit. I want to calculate the distance from a wall to the camera, and have it update as I move closer or farther away.
Right now, I have plane detection enabled for horizontal and vertical surfaces. When a surface is detected, I calculate the distance between the camera position and the center of the surface, using the Euclidean distance between two points in 3D space, d = sqrt((x2-x1)^2 + (y2-y1)^2 + (z2-z1)^2).
https://math.stackexchange.com/questions/42640/calculate-distance-in-3d-space
Is it correct? Can you help me?
class ViewController: UIViewController, ARSCNViewDelegate, ARSessionDelegate {
let configuration = ARWorldTrackingConfiguration()
override func viewDidAppear(_ animated: Bool) {
super.viewDidAppear(animated)
configuration.planeDetection = [.horizontal, .vertical]
sceneView.session.run(configuration)
......
}
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x), height: CGFloat(planeAnchor.extent.z))
let planeNode = SCNNode(geometry: plane)
planeNode.simdPosition = float3(planeAnchor.center.x, 0, planeAnchor.center.z)
planeNode.eulerAngles.x = -.pi / 2
node.addChildNode(planeNode)
let distance = distanceFromCamera(x: planeAnchor.center.x, y: 0, z: planeAnchor.center.z)
let formatted = String(format: "Distance: %.2f", distance)
print(formatted)
}
private func distanceFromCamera(x: Float, y:Float, z:Float) -> Float {
let cameraPosition = self.sceneView.session.currentFrame!.camera.transform.columns.3
print("Camera: \(cameraPosition)")
let vector = SCNVector3Make(cameraPosition.x - x, cameraPosition.y - y, cameraPosition.z - z)
// Scene units map to meters in ARKit.
return sqrtf(vector.x * vector.x + vector.y * vector.y + vector.z * vector.z)
}
}
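Note that planeAnchor.center is expressed in the plane anchor's local coordinate space, not in world coordinates, so before measuring against the camera's world position it should be moved through the anchor's transform. A minimal sketch of that conversion (distanceToPlane is just an illustrative name, not part of the code above):
import ARKit
import simd

func distanceToPlane(planeAnchor: ARPlaneAnchor, frame: ARFrame) -> Float {
    // Promote the local center to a homogeneous coordinate and move it to world space.
    let center = planeAnchor.center
    let worldCenter = planeAnchor.transform * simd_float4(center.x, center.y, center.z, 1)
    // The camera's world position is the last column of its transform.
    let cameraPosition = frame.camera.transform.columns.3
    return simd_distance(simd_float3(worldCenter.x, worldCenter.y, worldCenter.z),
                         simd_float3(cameraPosition.x, cameraPosition.y, cameraPosition.z))
}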

Add the following method:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
guard let currentBall = self.currentBall else {return}
DispatchQueue.main.async {
if let centerPosition = self.hitTestCenterVector() {
let startPositionOfBall = currentBall.position
let distance = self.getDistanceBetween(vector1: centerPosition, vector2: startPositionOfBall)
self.lblDistance.text = String(format: "%.1f", distance) //meter
}
}
}
Just replace self.currentBall in the guard statement with the SCNNode you want to measure the distance from.
These are the methods that do the calculations:
func hitTestCenterVector () -> SCNVector3? {
let results = self.sceneView.hitTest(self.sceneView.center, types: .existingPlane)
if let firstObject = results.first {
return SCNVector3(firstObject.worldTransform.columns.3.x, firstObject.worldTransform.columns.3.y, firstObject.worldTransform.columns.3.z)
}
return nil
}
func getDistanceBetween(vector1:SCNVector3, vector2:SCNVector3) -> CGFloat {
return CGFloat(sqrt((vector1.x - vector2.x) * (vector1.x - vector2.x)
+ (vector1.y - vector2.y) * (vector1.y - vector2.y)
+ (vector1.z - vector2.z) * (vector1.z - vector2.z)))
}
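As an aside, the same Euclidean distance can be computed with the simd helpers; a small equivalent sketch (SCNVector3 components are Float on iOS):
import simd

func getDistanceBetweenSimd(vector1: SCNVector3, vector2: SCNVector3) -> CGFloat {
    // simd_distance computes the square root of the summed squared component differences.
    let a = simd_float3(vector1.x, vector1.y, vector1.z)
    let b = simd_float3(vector2.x, vector2.y, vector2.z)
    return CGFloat(simd_distance(a, b))
}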
Hope it is helpful

Related

Face position using VisionKit in ARKit

I added VisionKit face detection on an ARSCNView; it can detect the face. Here is how I did it:
public func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
let faceDetectionRequest = VNDetectFaceRectanglesRequest(completionHandler: { (request: VNRequest, error: Error?) in
DispatchQueue.main.async {
self.faceLayers.forEach { drawing in
drawing.removeFromSuperlayer()
}
if let observations = request.results as? [VNFaceObservation] {
self.handleFaceDetectionObservations(observations: observations)
}
}
})
guard let capturedImage = sceneView.session.currentFrame?.capturedImage else { return }
let imageRequestHandler = VNImageRequestHandler(cvPixelBuffer: capturedImage, orientation: .leftMirrored, options: [:])
do {
try imageRequestHandler.perform([faceDetectionRequest])
} catch {
print("perform fail, error: ", error.localizedDescription)
}
}
fileprivate func handleFaceDetectionObservations(observations: [VNFaceObservation]) {
for observation in observations {
let newWidth = sceneView.bounds.width * observation.boundingBox.width
let newHeight = sceneView.bounds.height * observation.boundingBox.height
let newX = sceneView.bounds.width * observation.boundingBox.origin.x
let newY = sceneView.bounds.height * observation.boundingBox.origin.y
let faceRectConverted = CGRect(x: newX, y: newY, width: newWidth, height: newHeight)
let faceRectanglePath = CGPath(rect: faceRectConverted, transform: nil)
let faceLayer = CAShapeLayer()
faceLayer.path = faceRectanglePath
faceLayer.fillColor = UIColor.black.cgColor
self.faceLayers.append(faceLayer)
self.sceneView.layer.addSublayer(faceLayer)
}
}
The only issue I have is that the face position in the view is calculated wrong. The problem seems to come from camera mirroring: when the face goes right, the rectangle goes left, and when the face goes up, the rectangle goes down. I don't know how to do the right calculation to tie the observation rect to the right place in the sceneView. Could anyone help me with that?
I also have the same problem in landscape; the rectangle height is more compact there...
Thanks
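For what it's worth, a common cause of exactly this kind of offset is that Vision's boundingBox is normalized with a lower-left origin, while UIKit layers use an upper-left origin, so the Y coordinate has to be flipped during conversion. A minimal sketch, assuming the captured image fills the view:
func convertToViewRect(_ boundingBox: CGRect, viewSize: CGSize) -> CGRect {
    // Vision's origin.y measures from the bottom of the image, so flip it.
    let x = boundingBox.origin.x * viewSize.width
    let y = (1 - boundingBox.origin.y - boundingBox.height) * viewSize.height
    return CGRect(x: x, y: y,
                  width: boundingBox.width * viewSize.width,
                  height: boundingBox.height * viewSize.height)
}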

Projecting the ARKit face tracking 3D mesh to 2D image coordinates

I am collecting face mesh 3D vertices using ARKit. I have read: Mapping image onto 3D face mesh and Tracking and Visualizing Faces.
I have the following struct:
struct CaptureData {
var vertices: [SIMD3<Float>]
var verticesformatted: String {
let verticesDescribed = vertices.map({ "\($0.x):\($0.y):\($0.z)" }).joined(separator: "~")
return "<\(verticesDescribed)>"
}
}
I have a Start button to capture vertices:
#IBAction private func startPressed() {
captureData = [] // Clear data
currentCaptureFrame = 0 // initial capture frame
fpsTimer = Timer.scheduledTimer(withTimeInterval: 1/fps, repeats: true, block: {(timer) -> Void in self.recordData()})
}
private var fpsTimer = Timer()
private var captureData: [CaptureData] = [CaptureData]()
private var currentCaptureFrame = 0
And a stop button to stop capturing (save the data):
#IBAction private func stopPressed() {
do {
fpsTimer.invalidate() //turn off the timer
let capturedData = captureData.map{$0.verticesformatted}.joined(separator:"")
let dir: URL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).last! as URL
let url = dir.appendingPathComponent("facedata.txt")
try capturedData.appendLineToURL(fileURL: url as URL)
}
catch {
print("Could not write to file")
}
}
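appendLineToURL is a custom helper rather than a Foundation API; a minimal sketch of such a String extension might look like this:
extension String {
    // Append this string as a line to the file, creating the file if it does not exist yet.
    func appendLineToURL(fileURL: URL) throws {
        let line = self + "\n"
        if let handle = try? FileHandle(forWritingTo: fileURL) {
            defer { handle.closeFile() }
            handle.seekToEndOfFile()
            handle.write(line.data(using: .utf8)!)
        } else {
            try line.write(to: fileURL, atomically: true, encoding: .utf8)
        }
    }
}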
Function for recording data:
private func recordData() {
guard let data = getFrameData() else { return }
captureData.append(data)
currentCaptureFrame += 1
}
Function to get frame data:
private func getFrameData() -> CaptureData? {
let arFrame = sceneView?.session.currentFrame!
guard let anchor = arFrame?.anchors[0] as? ARFaceAnchor else {return nil}
let vertices = anchor.geometry.vertices
let data = CaptureData(vertices: vertices)
return data
}
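As a side note, anchors[0] assumes the face anchor is always the first anchor in the frame; a more defensive sketch of the same lookup could be:
private func getFrameDataSafely() -> CaptureData? {
    // Look for the face anchor explicitly instead of force-unwrapping and indexing.
    guard let arFrame = sceneView?.session.currentFrame,
          let anchor = arFrame.anchors.compactMap({ $0 as? ARFaceAnchor }).first
    else { return nil }
    return CaptureData(vertices: anchor.geometry.vertices)
}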
ARSCNViewDelegate extension:
extension ViewController: ARSCNViewDelegate {
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
guard let faceAnchor = anchor as? ARFaceAnchor else { return }
currentFaceAnchor = faceAnchor
if node.childNodes.isEmpty, let contentNode = selectedContentController.renderer(renderer, nodeFor: faceAnchor) {
node.addChildNode(contentNode)
}
selectedContentController.session = sceneView?.session
selectedContentController.sceneView = sceneView
}
/// - Tag: ARFaceGeometryUpdate
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
guard anchor == currentFaceAnchor,
let contentNode = selectedContentController.contentNode,
contentNode.parent == node
else { return }
selectedContentController.session = sceneView?.session
selectedContentController.sceneView = sceneView
selectedContentController.renderer(renderer, didUpdate: contentNode, for: anchor)
}
}
I am trying to use the example code from Tracking and Visualizing Faces:
// Transform the vertex to the camera coordinate system.
float4 vertexCamera = scn_node.modelViewTransform * _geometry.position;
// Camera projection and perspective divide to get normalized viewport coordinates (clip space).
float4 vertexClipSpace = scn_frame.projectionTransform * vertexCamera;
vertexClipSpace /= vertexClipSpace.w;
// XY in clip space is [-1,1]x[-1,1], so adjust to UV texture coordinates: [0,1]x[0,1].
// Image coordinates are Y-flipped (upper-left origin).
float4 vertexImageSpace = float4(vertexClipSpace.xy * 0.5 + 0.5, 0.0, 1.0);
vertexImageSpace.y = 1.0 - vertexImageSpace.y;
// Apply ARKit's display transform (device orientation * front-facing camera flip).
float4 transformedVertex = displayTransform * vertexImageSpace;
// Output as texture coordinates for use in later rendering stages.
_geometry.texcoords[0] = transformedVertex.xy;
I also read about projectPoint (but I am not sure which approach is more applicable yet):
func projectPoint(_ point: SCNVector3) -> SCNVector3
My question is: how can I use the example code above to transform the collected 3D face mesh vertices into 2D image coordinates?
I would like to get the 3D mesh vertices together with their corresponding 2D coordinates.
Currently, I can capture the face mesh points like so: <mesh_x:mesh_y:mesh_z:...>
I would like to convert my mesh points to image coordinates and show them together like so:
Expected result: <mesh_x:mesh_y:mesh_z:img_x:img_y...>
Any suggestions? Thanks in advance!
Maybe you can use the projectPoint function of the SCNSceneRenderer.
extension ARFaceAnchor{
// struct to store the 3d vertex and the 2d projection point
struct VerticesAndProjection {
var vertex: SIMD3<Float>
var projected: CGPoint
}
// return a struct with vertices and projection
func verticeAndProjection(to view: ARSCNView) -> [VerticesAndProjection]{
let points = geometry.vertices.compactMap({ (vertex) -> VerticesAndProjection? in
let col = SIMD4<Float>(SCNVector4())
let pos = SIMD4<Float>(SCNVector4(vertex.x, vertex.y, vertex.z, 1))
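// Build a 4x4 matrix whose last column is the homogeneous vertex, so that
// multiplying by the anchor's transform yields that vertex in world space.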
let pworld = transform * simd_float4x4(col, col, col, pos)
let vect = view.projectPoint(SCNVector3(pworld.position.x, pworld.position.y, pworld.position.z))
let p = CGPoint(x: CGFloat(vect.x), y: CGFloat(vect.y))
return VerticesAndProjection(vertex:vertex, projected: p)
})
return points
}
}
Here is a convenient way to get the position:
extension matrix_float4x4 {
/// Get the position of the transform matrix.
public var position: SCNVector3 {
get{
return SCNVector3(self[3][0], self[3][1], self[3][2])
}
}
}
If you want to check that the projection is correct, add a debug subview to the ARSCNView instance, and then use a couple of other extensions to draw the 2D points on a view, such as:
extension UIView{
private struct drawCircleProperty{
static let circleFillColor = UIColor.green
static let circleStrokeColor = UIColor.black
static let circleRadius: CGFloat = 3.0
}
func drawCircle(point: CGPoint) {
let circlePath = UIBezierPath(arcCenter: point, radius: drawCircleProperty.circleRadius, startAngle: CGFloat(0), endAngle: CGFloat(Double.pi * 2.0), clockwise: true)
let shapeLayer = CAShapeLayer()
shapeLayer.path = circlePath.cgPath
shapeLayer.fillColor = drawCircleProperty.circleFillColor.cgColor
shapeLayer.strokeColor = drawCircleProperty.circleStrokeColor.cgColor
self.layer.addSublayer(shapeLayer)
}
func drawCircles(points: [CGPoint]){
self.clearLayers()
for point in points{
self.drawCircle(point: point)
}
}
func clearLayers(){
if let subLayers = self.layer.sublayers {
for subLayer in subLayers {
subLayer.removeFromSuperlayer()
}
}
}
}
You can compute the projection, and draw the points with:
let points:[ARFaceAnchor.VerticesAndProjection] = faceAnchor.verticeAndProjection(to: sceneView)
// keep only the projected points
let projected = points.map{ $0.projected}
// draw the points !
self.debugView?.drawCircles(points: projected)
I can see all the 3D vertices projected on the 2D screen (picture generated by https://thispersondoesnotexist.com).
I added this code to the Apple demo project, available here: https://github.com/hugoliv/projectvertices.git

Add SCNNode or UIView to the center of SCNView in order to detect other SCNNodes

I am trying to center a UIView in my SCNView in order to detect the other added SCNTorus nodes in my scene.
I added a view to the center of my sceneView like below:
var focusPoint: CGPoint {
return CGPoint(
x: sceneView.bounds.size.width / 2,
y: sceneView.bounds.size.height - (sceneView.bounds.size.height / 1.618))
}
Then I tried two ways:
1 -
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
DispatchQueue.main.async { [weak self] in
guard let strongSelf = self else { return }
if !strongSelf.inEditMode { return }
for node in strongSelf.selectionRingsNodes {
let projectedPoint = renderer.projectPoint(node.position)
let projectedCGPoint = CGPoint(x: CGFloat(projectedPoint.x), y: CGFloat(projectedPoint.y))
let distance = projectedCGPoint.distance(to: strongSelf.focusPoint)
if distance < 20 {
print(node.name)
}
}
}
}
2 -
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
DispatchQueue.main.async { [weak self] in
guard let strongSelf = self else { return }
if !strongSelf.inEditMode { return }
for node in strongSelf.selectionRingsNodes {
let (min, max) = node.boundingBox
let projectedMinPoint = renderer.projectPoint(min)
let projectedMinCGPoint = CGPoint(x: CGFloat(projectedMinPoint.x), y: CGFloat(projectedMinPoint.y))
let projectedMaxPoint = renderer.projectPoint(max)
let projectedMaxCGPoint = CGPoint(x: CGFloat(projectedMaxPoint.x), y: CGFloat(projectedMaxPoint.y))
let minX = CGFloat(projectedMinCGPoint.x)
let maxX = CGFloat(projectedMaxCGPoint.x)
let minY = CGFloat(projectedMinCGPoint.y)
let maxY = CGFloat(projectedMaxCGPoint.y)
let nodeRect = CGRect(x: minX, y: minY, width: maxX - minX, height: maxY - minY)
if nodeRect.contains(strongSelf.focusPoint) {
print(node.name)
}
}
}
}
These two methods return wrong results: a very big distance, and very big x and y values.
Finally I got the solution!
It turned out that I should convert the position to the scene’s world coordinate space, using the method convertPosition(_:to:).
Here is the complete code:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
DispatchQueue.main.async { [weak self] in
guard let strongSelf = self else { return }
if !strongSelf.inEditMode { return }
for node in strongSelf.selectionRingsNodes {
let position = node.convertPosition(SCNVector3Zero, to: nil)
let projectedPoint = renderer.projectPoint(position)
let projectedCGPoint = CGPoint(x: CGFloat(projectedPoint.x), y: CGFloat(projectedPoint.y))
let distance = projectedCGPoint.distance(to: strongSelf.focusPoint)
if distance < 50 {
strongSelf.showToast(message: node.getTopMostParentNode().name!, font: .systemFont(ofSize: 30))
}
}
}
}
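As a note, projectPoint expects world-space coordinates, while node.position is expressed in the parent node's coordinate space, which is why the conversion above is needed. Converting SCNVector3Zero to world space should be equivalent to reading the node's worldPosition property (available since iOS 11):
// Equivalent, slightly shorter form:
let projectedPoint = renderer.projectPoint(node.worldPosition)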

Bounce rays with enumerateBodies alongRayStart

I want to trace the path where a bullet will move in my SpriteKit GameScene.
I'm using "enumerateBodies(alongRayStart", I can easily calculate the first collision with a physics body.
I don't know how to calculate the angle of reflection, given the contact point and the contact normal.
I want to calculate the path, over 5 reflections/bounces, so first I:
Cast a ray, get all the bodies it intersects with, and get the closest one.
I then use that contact point as the start of my next reflection/bounce....but I'm struggling with what the end point should be set to....
What I think I should be doing is getting the angle between the contact point and the contact normal, and then calculating a new point opposite to that...
var points: [CGPoint] = []
var start: CGPoint = renderComponent.node.position
var end: CGPoint = crossHairComponent.node.position
points.append(start)
var closestNormal: CGVector = .zero
for i in 0...5 {
closestNormal = .zero
var closestLength: CGFloat? = nil
var closestContact: CGPoint!
// Get the closest contact point.
self.physicsWorld.enumerateBodies(alongRayStart: start, end: end) { (physicsBody, contactPoint, contactNormal, stop) in
let len = start.distance(point: contactPoint)
if closestContact == nil {
closestNormal = contactNormal
closestLength = len
closestContact = contactPoint
} else {
if len <= closestLength! {
closestLength = len
closestNormal = contactNormal
closestContact = contactPoint
}
}
}
// This is where the code is just plain wrong and my math fails me.
if closestContact != nil {
// Calculate intersection angle...doesn't seem right?
let v1: CGVector = (end - start).normalized().toCGVector()
let v2: CGVector = closestNormal.normalized()
var angle = acos(v1.dot(v2)) * (180 / .pi)
let v1perp = CGVector(dx: -v1.dy, dy: v1.dx)
if(v2.dot(v1perp) > 0) {
angle = 360.0 - angle
}
angle = angle.degreesToRadians
// Set the new start point
start = closestContact
// Calculate a new end point somewhere in the distance to cast a ray to, so we can repeat the process again
let x = closestContact.x + cos(angle)*100
let y = closestContact.y + sin(-angle)*100
end = CGPoint(x: x, y: y)
// Add points to array to draw them on the screen
points.append(closestContact)
points.append(end)
}
}
I guess you are looking for something like this, right?
1. Working code
First of all, let me post the full working code. Just create a new Xcode project based on the SpriteKit template, then:
In GameViewController.swift set
scene.scaleMode = .resizeFill
Remove the usual label you find in GameScene.sks
Replace GameScene.swift with the following code
import SpriteKit
class GameScene: SKScene {
override func didMove(to view: SKView) {
self.physicsBody = SKPhysicsBody(edgeLoopFrom: frame)
}
var angle: CGFloat = 0
override func update(_ currentTime: TimeInterval) {
removeAllChildren()
drawRayCasting(angle: angle)
angle += 0.001
}
private func drawRayCasting(angle: CGFloat) {
let colors: [UIColor] = [.red, .green, .blue, .orange, .white]
var start: CGPoint = .zero
var direction: CGVector = CGVector(angle: angle)
for i in 0...4 {
guard let result = rayCast(start: start, direction: direction) else { return }
let vector = CGVector(from: start, to: result.destination)
// draw
drawVector(point: start, vector: vector, color: colors[i])
// prepare for next iteration
start = result.destination
direction = vector.normalized().bounced(withNormal: result.normal.normalized()).normalized()
}
}
private func rayCast(start: CGPoint, direction: CGVector) -> (destination:CGPoint, normal: CGVector)? {
let endVector = CGVector(
dx: start.x + direction.normalized().dx * 4000,
dy: start.y + direction.normalized().dy * 4000
)
let endPoint = CGPoint(x: endVector.dx, y: endVector.dy)
var closestPoint: CGPoint?
var normal: CGVector?
physicsWorld.enumerateBodies(alongRayStart: start, end: endPoint) {
(physicsBody:SKPhysicsBody,
point:CGPoint,
normalVector:CGVector,
stop:UnsafeMutablePointer<ObjCBool>) in
guard start.distanceTo(point) > 1 else {
return
}
guard let newClosestPoint = closestPoint else {
closestPoint = point
normal = normalVector
return
}
guard start.distanceTo(point) < start.distanceTo(newClosestPoint) else {
return
}
closestPoint = point
normal = normalVector
}
guard let p = closestPoint, let n = normal else { return nil }
return (p, n)
}
private func drawVector(point: CGPoint, vector: CGVector, color: SKColor) {
let start = point
let destX = (start.x + vector.dx)
let destY = (start.y + vector.dy)
let to = CGPoint(x: destX, y: destY)
let path = CGMutablePath()
path.move(to: start)
path.addLine(to: to)
path.closeSubpath()
let line = SKShapeNode(path: path)
line.strokeColor = color
line.lineWidth = 6
addChild(line)
}
}
extension CGVector {
init(angle: CGFloat) {
self.init(dx: cos(angle), dy: sin(angle))
}
func normalized() -> CGVector {
let len = length()
return len>0 ? self / len : CGVector.zero
}
func length() -> CGFloat {
return sqrt(dx*dx + dy*dy)
}
static func / (vector: CGVector, scalar: CGFloat) -> CGVector {
return CGVector(dx: vector.dx / scalar, dy: vector.dy / scalar)
}
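// Reflection of a vector about a normal: r = d - 2(d·n)n. Call sites pass
// `self` already normalized, so using the normalized dot product below
// matches the reflection formula.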
func bounced(withNormal normal: CGVector) -> CGVector {
let dotProduct = self.normalized() * normal.normalized()
let dx = self.dx - 2 * (dotProduct) * normal.dx
let dy = self.dy - 2 * (dotProduct) * normal.dy
return CGVector(dx: dx, dy: dy)
}
init(from:CGPoint, to:CGPoint) {
self = CGVector(dx: to.x - from.x, dy: to.y - from.y)
}
static func * (left: CGVector, right: CGVector) -> CGFloat {
return (left.dx * right.dx) + (left.dy * right.dy)
}
}
extension CGPoint {
func length() -> CGFloat {
return sqrt(x*x + y*y)
}
func distanceTo(_ point: CGPoint) -> CGFloat {
return (self - point).length()
}
static func - (left: CGPoint, right: CGPoint) -> CGPoint {
return CGPoint(x: left.x - right.x, y: left.y - right.y)
}
}
2. How does it work?
Let's have a look at what this code does. We'll start from the bottom.
3. CGPoint and CGVector extensions
These are just simple extensions (mainly taken from Ray Wenderlich's repository on GitHub) to simplify the geometrical operations we are going to perform.
4. drawVector(point:vector:color)
This is a simple method to draw a vector with a given color starting from a given point.
Nothing fancy here.
private func drawVector(point: CGPoint, vector: CGVector, color: SKColor) {
let start = point
let destX = (start.x + vector.dx)
let destY = (start.y + vector.dy)
let to = CGPoint(x: destX, y: destY)
let path = CGMutablePath()
path.move(to: start)
path.addLine(to: to)
path.closeSubpath()
let line = SKShapeNode(path: path)
line.strokeColor = color
line.lineWidth = 6
addChild(line)
}
5. rayCast(start:direction) -> (destination:CGPoint, normal: CGVector)?
This method performs a ray cast and returns the ALMOST closest point where the ray collides with a physics body.
private func rayCast(start: CGPoint, direction: CGVector) -> (destination:CGPoint, normal: CGVector)? {
let endVector = CGVector(
dx: start.x + direction.normalized().dx * 4000,
dy: start.y + direction.normalized().dy * 4000
)
let endPoint = CGPoint(x: endVector.dx, y: endVector.dy)
var closestPoint: CGPoint?
var normal: CGVector?
physicsWorld.enumerateBodies(alongRayStart: start, end: endPoint) {
(physicsBody:SKPhysicsBody,
point:CGPoint,
normalVector:CGVector,
stop:UnsafeMutablePointer<ObjCBool>) in
guard start.distanceTo(point) > 1 else {
return
}
guard let newClosestPoint = closestPoint else {
closestPoint = point
normal = normalVector
return
}
guard start.distanceTo(point) < start.distanceTo(newClosestPoint) else {
return
}
closestPoint = point
normal = normalVector
}
guard let p = closestPoint, let n = normal else { return nil }
return (p, n)
}
What does ALMOST the closest mean?
It means that the destination point must be at least 1 point away from the start point:
guard start.distanceTo(point) > 1 else {
return
}
OK, but why?
Because without this rule the ray would get stuck inside a physics body and would never be able to get out of it.
6. drawRayCasting(angle)
This method basically keeps the local variables up to date to properly generate 5 segments.
private func drawRayCasting(angle: CGFloat) {
let colors: [UIColor] = [.red, .green, .blue, .orange, .white]
var start: CGPoint = .zero
var direction: CGVector = CGVector(angle: angle)
for i in 0...4 {
guard let result = rayCast(start: start, direction: direction) else { return }
let vector = CGVector(from: start, to: result.destination)
// draw
drawVector(point: start, vector: vector, color: colors[i])
// prepare next direction
start = result.destination
direction = vector.normalized().bounced(withNormal: result.normal.normalized()).normalized()
}
}
The first segment starts at point zero, with a direction given by the angle parameter.
Segments 2 to 5 use the end point and the "mirrored direction" of the previous segment, as the sketch below illustrates.
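To make the "mirrored direction" concrete, here is a tiny sanity check of the reflection formula r = d - 2(d·n)n, using the bounced(withNormal:) extension from the code above (the specific numbers are just an illustration):
// A ray travelling down-right hits a floor whose normal points straight up.
let incoming = CGVector(dx: 1, dy: -1).normalized()
let floorNormal = CGVector(dx: 0, dy: 1)
let reflected = incoming.bounced(withNormal: floorNormal)
// reflected ≈ (0.707, 0.707): horizontal component kept, vertical flipped.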
7. update(_ currentTime: TimeInterval)
Here I am just calling drawRayCasting every frame, passing the current angle value and then increasing the angle by 0.001.
var angle: CGFloat = 0
override func update(_ currentTime: TimeInterval) {
removeAllChildren()
drawRayCasting(angle: angle)
angle += 0.001
}
8. didMove(to view: SKView)
Finally, here I create a physics body around the scene in order to make the ray bounce off the borders.
override func didMove(to view: SKView) {
self.physicsBody = SKPhysicsBody(edgeLoopFrom: frame)
}
9. Wrap up
I hope the explanation is clear.
Should you have any doubts, let me know.
Update
There was a bug in the bounced function. It was preventing a proper calculation of the reflected ray.
It is now fixed.

ARKit: anchor or node visible in camera and sitting to left or right of frustum

How can I detect whether an ARAnchor is currently visible in the camera? I need to test this whenever the camera view changes.
I want to put arrows on the edge of the screen that point in the direction of the anchor when it is not on screen, so I need to know whether the node sits to the left or right of the frustum.
I am currently doing this, but it says the pin is visible when it is not, and the X values do not seem right. Maybe the renderer frustum does not match the screen camera?
var deltaTime = TimeInterval()
public func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
deltaTime = time - lastUpdateTime
if deltaTime>1{
if let annotation = annotationsByNode.first {
let node = annotation.key.childNodes[0]
if !renderer.isNode(node, insideFrustumOf: renderer.pointOfView!)
{
print("Pin is not visible");
}else {
print("Pin is visible");
}
let pnt = renderer.projectPoint(node.position)
print("pos ", pnt.x, " ", renderer.pointOfView!.position)
}
lastUpdateTime = time
}
}
Update: the code works to show whether the node is visible or not; but how can I tell in which direction, left or right, a node sits in relation to the camera frustum?
Update 2: following the answer suggested by Bhanu Birani:
let screenWidth = UIScreen.main.bounds.width
let screenHeight = UIScreen.main.bounds.height
let leftPoint = CGPoint(x: 0, y: screenHeight/2)
let rightPoint = CGPoint(x: screenWidth,y: screenHeight/2)
let leftWorldPos = renderer.unprojectPoint(SCNVector3(leftPoint.x,leftPoint.y,0))
let rightWorldPos = renderer.unprojectPoint(SCNVector3(rightPoint.x,rightPoint.y,0))
let distanceLeft = node.position - leftWorldPos
let distanceRight = node.position - rightWorldPos
let dir = (isVisible) ? "visible" : ( (distanceLeft.x<distanceRight.x) ? "left" : "right")
I finally got it working, using Bhanu Birani's idea of the left and right edges of the screen, but I get the world position differently (via unprojectPoint) and compare scalar distance values to decide the left/right direction. Maybe there is a better way of doing it, but it worked for me:
public func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
deltaTime = time - lastUpdateTime
if deltaTime>0.25{
if let annotation = annotationsByNode.first {
guard let pointOfView = renderer.pointOfView else {return}
let node = annotation.key.childNodes[0]
let isVisible = renderer.isNode(node, insideFrustumOf: pointOfView)
let screenWidth = UIScreen.main.bounds.width
let screenHeight = UIScreen.main.bounds.height
let leftPoint = CGPoint(x: 0, y: screenHeight/2)
let rightPoint = CGPoint(x: screenWidth,y: screenHeight/2)
let leftWorldPos = renderer.unprojectPoint(SCNVector3(leftPoint.x, leftPoint.y,0))
let rightWorldPos = renderer.unprojectPoint(SCNVector3(rightPoint.x, rightPoint.y,0))
let distanceLeft = node.worldPosition.distance(vector: leftWorldPos)
let distanceRight = node.worldPosition.distance(vector: rightWorldPos)
//let pnt = renderer.projectPoint(node.worldPosition)
//guard let pnt = renderer.pointOfView!.convertPosition(node.position, to: nil) else {return}
let dir = (isVisible) ? "visible" : ( (distanceLeft<distanceRight) ? "left" : "right")
print("dir" , dir, " ", leftWorldPos , " ", rightWorldPos)
lastDir=dir
delegate?.nodePosition?(node:node, pos: dir)
}else {
delegate?.nodePosition?(node:nil, pos: lastDir )
}
lastUpdateTime = time
}
}
extension SCNVector3
{
/**
* Returns the length (magnitude) of the vector described by the SCNVector3
*/
func length() -> Float {
return sqrtf(x*x + y*y + z*z)
}
/**
* Calculates the distance between two SCNVector3. Pythagoras!
*/
func distance(vector: SCNVector3) -> Float {
return (self - vector).length()
}
// Minus operator used above (added so this extension compiles on its own):
static func - (l: SCNVector3, r: SCNVector3) -> SCNVector3 {
return SCNVector3(l.x - r.x, l.y - r.y, l.z - r.z)
}
}
Project the ray from the following screen positions:
leftPoint = CGPoint(0, screenHeight/2) (centre left of the screen)
rightPoint = CGPoint(screenWidth, screenHeight/2) (centre right of the screen)
Convert CGPoint to world position:
leftWorldPos = convertCGPointToWorldPosition(leftPoint)
rightWorldPos = convertCGPointToWorldPosition(rightPoint)
Calculate the distance of the node from both world positions:
distanceLeft = node.position - leftWorldPos
distanceRight = node.position - rightWorldPos
Compare the distances to find the shortest distance to the node, and use the shortest-distance vector to position the direction arrow for the object (a sketch of the convertCGPointToWorldPosition helper used above follows below).
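convertCGPointToWorldPosition is pseudocode here; one hedged way to implement it with SceneKit, matching what the accepted code below does, is:
func convertCGPointToWorldPosition(_ point: CGPoint, renderer: SCNSceneRenderer) -> SCNVector3 {
    // Unproject the screen point at the near clipping plane (z = 0 in normalized depth).
    return renderer.unprojectPoint(SCNVector3(point.x, point.y, 0))
}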
Here is the code from tsukimi to check whether the object is on the right side of the screen or on the left:
public func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
deltaTime = time - lastUpdateTime
if deltaTime>0.25{
if let annotation = annotationsByNode.first {
guard let pointOfView = renderer.pointOfView else {return}
let node = annotation.key.childNodes[0]
let isVisible = renderer.isNode(node, insideFrustumOf: pointOfView)
let screenWidth = UIScreen.main.bounds.width
let screenHeight = UIScreen.main.bounds.height
let leftPoint = CGPoint(x: 0, y: screenHeight/2)
let rightPoint = CGPoint(x: screenWidth,y: screenHeight/2)
let leftWorldPos = renderer.unprojectPoint(SCNVector3(leftPoint.x, leftPoint.y,0))
let rightWorldPos = renderer.unprojectPoint(SCNVector3(rightPoint.x, rightPoint.y,0))
let distanceLeft = node.worldPosition.distance(vector: leftWorldPos)
let distanceRight = node.worldPosition.distance(vector: rightWorldPos)
//let pnt = renderer.projectPoint(node.worldPosition)
//guard let pnt = renderer.pointOfView!.convertPosition(node.position, to: nil) else {return}
let dir = (isVisible) ? "visible" : ( (distanceLeft<distanceRight) ? "left" : "right")
print("dir" , dir, " ", leftWorldPos , " ", rightWorldPos)
lastDir=dir
delegate?.nodePosition?(node:node, pos: dir)
}else {
delegate?.nodePosition?(node:nil, pos: lastDir )
}
lastUpdateTime = time
}
}
Following is an extension to help with performing operations on vectors:
extension SCNVector3 {
init(_ vec: vector_float3) {
self.x = vec.x
self.y = vec.y
self.z = vec.z
}
func length() -> Float {
return sqrtf(x * x + y * y + z * z)
}
mutating func setLength(_ length: Float) {
self.normalize()
self *= length
}
mutating func setMaximumLength(_ maxLength: Float) {
if self.length() <= maxLength {
return
} else {
self.normalize()
self *= maxLength
}
}
mutating func normalize() {
self = self.normalized()
}
func normalized() -> SCNVector3 {
if self.length() == 0 {
return self
}
return self / self.length()
}
static func positionFromTransform(_ transform: matrix_float4x4) -> SCNVector3 {
return SCNVector3Make(transform.columns.3.x, transform.columns.3.y, transform.columns.3.z)
}
func friendlyString() -> String {
return "(\(String(format: "%.2f", x)), \(String(format: "%.2f", y)), \(String(format: "%.2f", z)))"
}
func dot(_ vec: SCNVector3) -> Float {
return (self.x * vec.x) + (self.y * vec.y) + (self.z * vec.z)
}
func cross(_ vec: SCNVector3) -> SCNVector3 {
return SCNVector3(self.y * vec.z - self.z * vec.y, self.z * vec.x - self.x * vec.z, self.x * vec.y - self.y * vec.x)
}
// Operators used above (added so this extension compiles on its own):
static func / (vector: SCNVector3, scalar: Float) -> SCNVector3 {
return SCNVector3(vector.x / scalar, vector.y / scalar, vector.z / scalar)
}
static func *= (vector: inout SCNVector3, scalar: Float) {
vector = SCNVector3(vector.x * scalar, vector.y * scalar, vector.z * scalar)
}
}
extension SCNVector3{
func distance(receiver:SCNVector3) -> Float{
let xd = receiver.x - self.x
let yd = receiver.y - self.y
let zd = receiver.z - self.z
// sqrt is never negative, so the distance can be returned directly.
return Float(sqrt(xd * xd + yd * yd + zd * zd))
}
}
Here is the code snippet to convert tap location or any CGPoint to world transform.
@objc func handleTap(_ sender: UITapGestureRecognizer) {
// Take the screen space tap coordinates and pass them to the hitTest method on the ARSCNView instance
let tapPoint = sender.location(in: sceneView)
let result = sceneView.hitTest(tapPoint, types: ARHitTestResult.ResultType.existingPlaneUsingExtent)
// If the intersection ray passes through any plane geometry they will be returned, with the planes
// ordered by distance from the camera
if (result.count > 0) {
// If there are multiple hits, just pick the closest plane
if let hitResult = result.first {
let finalPosition = SCNVector3Make(hitResult.worldTransform.columns.3.x + insertionXOffset,
hitResult.worldTransform.columns.3.y + insertionYOffset,
hitResult.worldTransform.columns.3.z + insertionZOffset
);
}
}
}
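Note that this types-based hit test has since been deprecated in favour of raycast queries; on iOS 13+ a rough equivalent sketch could be:
@objc func handleTapWithRaycast(_ sender: UITapGestureRecognizer) {
    let tapPoint = sender.location(in: sceneView)
    // Build a raycast query against existing plane geometry and take the closest hit.
    guard let query = sceneView.raycastQuery(from: tapPoint,
                                             allowing: .existingPlaneGeometry,
                                             alignment: .any),
          let hit = sceneView.session.raycast(query).first else { return }
    let position = SCNVector3Make(hit.worldTransform.columns.3.x,
                                  hit.worldTransform.columns.3.y,
                                  hit.worldTransform.columns.3.z)
    print("world position: \(position)")
}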
Following is the code to get hit test results when there's no plane found.
// check what nodes are tapped
let p = gestureRecognize.location(in: scnView)
let hitResults = scnView.hitTest(p, options: [:])
// check that we clicked on at least one object
if hitResults.count > 0 {
// retrieved the first clicked object
let result = hitResults[0]
}
This answer is a bit late, but it can be useful for someone needing to know where a node is in camera space relative to the center (e.g. top left corner, centered, ...).
You can get your node position in camera space using scene.rootNode.convertPosition(node.position, to: pointOfView).
In camera space:
(isVisible && x ≈ 0 && y ≈ 0) means that your node is straight in front of the camera.
(isVisible && x ≈ 0.1) means that the node is a little bit to the right.
Some sample code:
public func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
deltaTime = time - lastUpdateTime
if deltaTime>0.25{
if let annotation = annotationsByNode.first {
guard let pointOfView = renderer.pointOfView else {return}
let node = annotation.key.childNodes[0]
let isVisible = renderer.isNode(node, insideFrustumOf: pointOfView)
// Translate node to camera space
let nodeInCameraSpace = scene.rootNode.convertPosition(node.position, to: pointOfView)
let isCentered = isVisible && (abs(nodeInCameraSpace.x) < 0.1) && (abs(nodeInCameraSpace.y) < 0.1) // abs() so nodes far to the left or bottom do not count as centered
let isOnTheRight = isVisible && (nodeInCameraSpace.x > 0.1)
// ...
delegate?.nodePosition?(node:node, pos: dir)
}else {
delegate?.nodePosition?(node:nil, pos: lastDir )
}
lastUpdateTime = time
}
}