Placing an object in front of the camera at a touch location - Swift

The following code places a node in front of the camera, but always at the center, 10 cm away from the camera position. I want to place the node 10 cm away along the z-direction but at the x and y coordinates of where I touch the screen, so that touching different parts of the screen results in a node placed 10 cm in front of the camera at the touch location, not always at the center.
let cameraRelativePosition = SCNVector3(0, 0, -0.1)
let sphere = SCNNode()
sphere.geometry = SCNSphere(radius: 0.0025)
sphere.geometry?.firstMaterial?.diffuse.contents = UIColor.white
Service.addChildNode(sphere, toNode: self.sceneView.scene.rootNode,
                     inView: self.sceneView,
                     cameraRelativePosition: cameraRelativePosition)
Service.swift
class Service: NSObject {
    static func addChildNode(_ node: SCNNode, toNode: SCNNode,
                             inView: ARSCNView, cameraRelativePosition: SCNVector3) {
        guard let currentFrame = inView.session.currentFrame else { return }
        let camera = currentFrame.camera
        let transform = camera.transform
        var translationMatrix = matrix_identity_float4x4
        translationMatrix.columns.3.x = cameraRelativePosition.x
        translationMatrix.columns.3.y = cameraRelativePosition.y
        translationMatrix.columns.3.z = cameraRelativePosition.z
        let modifiedMatrix = simd_mul(transform, translationMatrix)
        node.simdTransform = modifiedMatrix
        toNode.addChildNode(node)
    }
}
The result should look exactly like this: https://justaline.withgoogle.com

We can use the unprojectPoint(_:) method of SCNSceneRenderer (SCNView and ARSCNView both conform to this protocol) to convert a point on the screen to a 3D point.
When tapping the screen we can calculate a ray this way:
func getRay(for point: CGPoint, in view: SCNSceneRenderer) -> SCNVector3 {
    let farPoint = view.unprojectPoint(SCNVector3(Float(point.x), Float(point.y), 1))
    let nearPoint = view.unprojectPoint(SCNVector3(Float(point.x), Float(point.y), 0))
    let ray = SCNVector3Make(farPoint.x - nearPoint.x,
                             farPoint.y - nearPoint.y,
                             farPoint.z - nearPoint.z)
    // Normalize the ray
    let length = sqrt(ray.x * ray.x + ray.y * ray.y + ray.z * ray.z)
    return SCNVector3Make(ray.x / length, ray.y / length, ray.z / length)
}
The ray has a length of 1, so multiplying it by 0.1 and adding the camera's position gives the point you were looking for: 10 cm from the camera, under the touch location.
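Putting it together, a minimal sketch of the touch handler (assuming sceneView is your ARSCNView and getRay(for:in:) is the function above):

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first,
          let frame = sceneView.session.currentFrame else { return }
    let ray = getRay(for: touch.location(in: sceneView), in: sceneView)
    // The camera position is the translation column of the camera transform
    let cameraPosition = frame.camera.transform.columns.3
    // Scale the unit ray to 10 cm and offset it from the camera
    let position = SCNVector3(cameraPosition.x + ray.x * 0.1,
                              cameraPosition.y + ray.y * 0.1,
                              cameraPosition.z + ray.z * 0.1)
    let sphere = SCNNode(geometry: SCNSphere(radius: 0.0025))
    sphere.geometry?.firstMaterial?.diffuse.contents = UIColor.white
    sphere.position = position
    sceneView.scene.rootNode.addChildNode(sphere)
}

Because the point lies on the view ray through the touch, the node appears directly under the finger, 10 cm from the camera, as in the Just a Line app.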

Related

ARKit cannot rotate an SCNNode correctly on a vertical plane

I want to rotate an SCNNode, which is a painting image.
I can rotate it on the floor, but I cannot rotate it correctly on a wall.
(I am using UIRotationGestureRecognizer)
...
func addPainting(_ hitResult: ARHitTestResult, _ grid: Grid) {
    ...
    // Add the painting
    let newPaintingNode = SCNNode(geometry: planeGeometry)
    newPaintingNode.transform = SCNMatrix4(hitResult.anchor!.transform)
    newPaintingNode.eulerAngles = SCNVector3(newPaintingNode.eulerAngles.x + (-Float.pi / 2),
                                             newPaintingNode.eulerAngles.y,
                                             newPaintingNode.eulerAngles.z)
    newPaintingNode.position = SCNVector3(hitResult.worldTransform.columns.3.x,
                                          hitResult.worldTransform.columns.3.y,
                                          hitResult.worldTransform.columns.3.z)
    self.paintingNode = newPaintingNode
    self.currentAngleY = newPaintingNode.eulerAngles.y
    self.currentAngleZ = newPaintingNode.eulerAngles.z
    augmentedRealityView.scene.rootNode.addChildNode(self.paintingNode!)
    grid.removeFromParentNode()
}
...
@objc func rotateNode(_ gesture: UIRotationGestureRecognizer) {
    if let currentNode = self.paintingNode {
        // 1. Get the current rotation from the gesture
        let rotation = Float(gesture.rotation)
        if self.paintingAlignment == "horizontal" {
            log.verbose("rotate horizontal!")
            // 2. If the gesture state has changed, set the node's eulerAngles.y
            if gesture.state == .changed {
                currentNode.eulerAngles.y = currentAngleY + rotation
            }
            // 3. If the gesture has ended, store the last angle
            if gesture.state == .ended {
                currentAngleY = currentNode.eulerAngles.y
            }
        } else if self.paintingAlignment == "vertical" {
            log.verbose("rotate vertical!")
            // 2. If the gesture state has changed, set the node's eulerAngles.z
            if gesture.state == .changed {
                currentNode.eulerAngles.z = currentAngleZ + rotation
            }
            // 3. If the gesture has ended, store the last angle
            if gesture.state == .ended {
                currentAngleZ = currentNode.eulerAngles.z
            }
        }
    }
}
Does anyone know how I can rotate it correctly on a wall? Thank you!
You're using the eulerAngles instance property in your code:
var eulerAngles: SCNVector3 { get set }
According to Apple documentation:
SceneKit applies eulerAngles rotations relative to the node’s pivot property in the reverse order of the components: first roll (Z), then yaw (Y), then pitch (X).
...but three-component rotation can lead to Gimbal Lock.
Gimbal lock is the loss of one degree of freedom in a three-dimensional, three-gimbal mechanism that occurs when the axes of two of the three gimbals are driven into a parallel configuration, "locking" the system into rotation in a degenerate two-dimensional space.
So you need to use a four-component rotation property:
var rotation: SCNVector4 { get set }
The four-component rotation vector specifies the direction of the rotation axis in the first three components (XYZ) and the angle of rotation, expressed in radians, in the fourth (W).
currentNode.rotation = SCNVector4(x: 1,
                                  y: 0,
                                  z: 0,
                                  w: -Float.pi / 2)
If you want to know more about SCNVector4 structure and four-component rotation (and its W component expressed in radians) look at THIS POST and THIS POST.
P.S.
In the RealityKit framework, instead of the SCNVector4 structure you use the simd_quatf quaternion type:
var rotation: simd_quatf { get set }
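A minimal RealityKit sketch (entity here is a placeholder for your own anchored Entity instance):

// Rotate the entity 90 degrees about the X axis using a quaternion
entity.transform.rotation = simd_quatf(angle: -.pi / 2, axis: SIMD3<Float>(1, 0, 0))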
I tried SCNNode's eulerAngles and rotation and had no luck...
I guess that is because I set SCNNode.transform when I add the painting; modifying eulerAngles or rotation afterwards interferes with that initial transform.
I finally got it working using SCNMatrix4Rotate, without touching eulerAngles or rotation.
func addPainting(_ hitResult: ARHitTestResult, _ grid: Grid) {
    ...
    let newPaintingNode = SCNNode(geometry: planeGeometry)
    newPaintingNode.transform = SCNMatrix4(hitResult.anchor!.transform)
    newPaintingNode.transform = SCNMatrix4Rotate(newPaintingNode.transform, -Float.pi / 2.0, 1.0, 0.0, 0.0)
    newPaintingNode.position = SCNVector3(hitResult.worldTransform.columns.3.x,
                                          hitResult.worldTransform.columns.3.y,
                                          hitResult.worldTransform.columns.3.z)
    self.paintingNode = newPaintingNode
    ...
}
...
@objc func rotateNode(_ gesture: UIRotationGestureRecognizer) {
    if let _ = self.paintingNode {
        // Get the current rotation from the gesture
        let rotation = Float(gesture.rotation)
        if gesture.state == .began {
            self.rotationZ = rotation
        }
        if gesture.state == .changed {
            let diff = rotation - self.rotationZ
            self.paintingNode?.transform = SCNMatrix4Rotate(self.paintingNode!.transform,
                                                            diff, 0.0, 0.0, 1.0)
            self.rotationZ = rotation
        }
    }
}
And the above code works on both vertical and horizontal planes.

How to move and rotate an SCNNode using ARKit and a gesture recognizer?

I am working on an AR-based iOS app using ARKit (SceneKit). I used the Apple sample code https://developer.apple.com/documentation/arkit/handling_3d_interaction_and_ui_controls_in_augmented_reality as the base for this. With it I am able to move or rotate the whole virtual object.
But I want to select and move/rotate a child node of the virtual object with a finger, similar to how the whole virtual object is moved/rotated.
I tried the two links below, but they only move the child node along a particular axis instead of following the finger freely:
ARKit - Drag a node along a specific axis (not on a plane)
Dragging SCNNode in ARKit Using SceneKit
I also tried replacing the virtual object, which is an SCNReferenceNode, with an SCNNode so that whatever functionality exists for the virtual object applies to the child node as well, but it is not working.
Can anyone please help me with how to freely move/rotate not only the virtual object but also a child node of it?
Please find the code I am currently using below:
let tapPoint: CGPoint = gesture.location(in: sceneView)
let result = sceneView.hitTest(tapPoint, options: nil)
if result.count == 0 {
    return
}
let scnHitResult: SCNHitTestResult? = result.first
movedObject = scnHitResult?.node //.parent?.parent
let hitResults = self.sceneView.hitTest(tapPoint, types: .existingPlane)
if !hitResults.isEmpty {
    guard let hitResult = hitResults.last else { return }
    movedObject?.position = SCNVector3Make(hitResult.worldTransform.columns.3.x,
                                           hitResult.worldTransform.columns.3.y,
                                           hitResult.worldTransform.columns.3.z)
}
To move an object:
Perform a hit test to check where you touched and which plane you hit, and get a position. Then move your SCNNode to that position by setting node.position with an SCNVector3.
Code:
@objc func panDetected(recognizer: UIPanGestureRecognizer) {
    let loc = recognizer.location(in: self.arSceneView)
    let hitResult = self.arSceneView.hitTest(loc, types: .existingPlane)
    if !hitResult.isEmpty {
        guard let hitResult = hitResult.last else { return }
        self.yourNode.position = SCNVector3Make(hitResult.worldTransform.columns.3.x,
                                                hitResult.worldTransform.columns.3.y,
                                                hitResult.worldTransform.columns.3.z)
    }
}
The above code is enough to move your node over a detected plane, anywhere you touch, and not just along a single axis.
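As a side note (not part of the original answer): hitTest(_:types:) has since been deprecated, and on iOS 13+ the same move can be written with a raycast. A sketch, reusing the arSceneView and yourNode names from above:

@objc func panDetected(recognizer: UIPanGestureRecognizer) {
    let loc = recognizer.location(in: arSceneView)
    // Build a raycast query against detected plane geometry and take the first hit
    guard let query = arSceneView.raycastQuery(from: loc,
                                               allowing: .existingPlaneGeometry,
                                               alignment: .any),
          let result = arSceneView.session.raycast(query).first else { return }
    yourNode.simdPosition = SIMD3(result.worldTransform.columns.3.x,
                                  result.worldTransform.columns.3.y,
                                  result.worldTransform.columns.3.z)
}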
Rotating a node to follow your gesture is a much harder task; I worked on a solution for quite some time without ever reaching a perfect output.
But I came across this repository on GitHub which lets you do just that, with a very impressive result:
https://github.com/Xartec/ScreenSpaceRotationAndPan
The Swift version of the code you need to rotate your node with the gesture would be:
var previousLoc = CGPoint.zero
var touchCount = 0

@objc func panDetected(recognizer: UIPanGestureRecognizer) {
    let loc = recognizer.location(in: self.view)
    var delta = recognizer.translation(in: self.view)
    if recognizer.state == .began {
        previousLoc = loc
        touchCount = recognizer.numberOfTouches
    } else if recognizer.state == .changed {
        delta = CGPoint(x: 2 * (loc.x - previousLoc.x), y: 2 * (loc.y - previousLoc.y))
        previousLoc = loc
        if touchCount != recognizer.numberOfTouches {
            return
        }
        // Build a rotation matrix from the pan deltas (screen x -> yaw, screen y -> pitch)
        let rotX = SCNMatrix4Rotate(SCNMatrix4Identity, Float((1.0 / 100) * delta.y), 1, 0, 0)
        let rotY = SCNMatrix4Rotate(SCNMatrix4Identity, Float((1.0 / 100) * delta.x), 0, 1, 0)
        let rotMatrix = SCNMatrix4Mult(rotX, rotY)
        // Remove the node's own translation so we rotate about its center
        let transMatrix = SCNMatrix4MakeTranslation(yourNode.position.x, yourNode.position.y, yourNode.position.z)
        self.yourNode.transform = SCNMatrix4Mult(self.yourNode.transform, SCNMatrix4Invert(transMatrix))
        // Strip the translation from the parent's world transform
        let parentNodeTranslationMatrix = SCNMatrix4MakeTranslation((self.yourNode.parent?.worldPosition.x)!,
                                                                    (self.yourNode.parent?.worldPosition.y)!,
                                                                    (self.yourNode.parent?.worldPosition.z)!)
        let parentNodeMatWOTrans = SCNMatrix4Mult((self.yourNode.parent?.worldTransform)!,
                                                  SCNMatrix4Invert(parentNodeTranslationMatrix))
        self.yourNode.transform = SCNMatrix4Mult(self.yourNode.transform, parentNodeMatWOTrans)
        // Strip the translation from the camera's world transform
        let camorbitNodeTransMat = SCNMatrix4MakeTranslation((self.arSceneView.pointOfView?.worldPosition.x)!,
                                                             (self.arSceneView.pointOfView?.worldPosition.y)!,
                                                             (self.arSceneView.pointOfView?.worldPosition.z)!)
        let camorbitNodeMatWOTrans = SCNMatrix4Mult((self.arSceneView.pointOfView?.worldTransform)!,
                                                    SCNMatrix4Invert(camorbitNodeTransMat))
        self.yourNode.transform = SCNMatrix4Mult(self.yourNode.transform, SCNMatrix4Invert(camorbitNodeMatWOTrans))
        // Apply the screen-space rotation, then restore the camera, parent, and node translations
        self.yourNode.transform = SCNMatrix4Mult(self.yourNode.transform, rotMatrix)
        self.yourNode.transform = SCNMatrix4Mult(self.yourNode.transform, camorbitNodeMatWOTrans)
        self.yourNode.transform = SCNMatrix4Mult(self.yourNode.transform, SCNMatrix4Invert(parentNodeMatWOTrans))
        self.yourNode.transform = SCNMatrix4Mult(self.yourNode.transform, transMatrix)
    }
}

ARSCNView unprojectPoint

I need to convert a point in the 2D coordinate space of my ARSCNView to a coordinate in 3D space, basically a ray from the point of view through the touched location (up to a set distance away).
I wanted to use arView.unprojectPoint(vec2d) for that, but the returned point always seems to be located at the center of the view.
vec2d is an SCNVector3 created from a 2D coordinate like this:
SCNVector3(x, y, 0) // 0 specifies the camera near plane
What am I doing wrong? How do I get the desired result?
I think you have at least two possible solutions:
First
Use the hitTest(_:types:) instance method:
This method searches for real-world objects or AR anchors in the captured camera image corresponding to a point in the SceneKit view.
let sceneView = ARSCNView()

func calculateVector(point: CGPoint) -> SCNVector3? {
    let hitTestResults = sceneView.hitTest(point, types: [.existingPlane])
    if let result = hitTestResults.first {
        return SCNVector3(result.worldTransform.columns.3.x,
                          result.worldTransform.columns.3.y,
                          result.worldTransform.columns.3.z)
    }
    return nil
}
calculateVector(point: yourPoint)
Second
Use the unprojectPoint(_:ontoPlane:) instance method:
This method returns the projection of a point from the 2D view onto a plane in the 3D world space detected by ARKit.
@nonobjc func unprojectPoint(_ point: CGPoint,
                             ontoPlane planeTransform: simd_float4x4) -> simd_float3?
or:
let point = CGPoint()
var planeTransform = simd_float4x4()
sceneView.unprojectPoint(point, ontoPlane: planeTransform)
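For the second approach, here is a hedged sketch of where the plane transform would typically come from; the stored planeAnchor is an assumption (e.g. saved in renderer(_:didAdd:for:) when ARKit detects the plane):

// Unproject a screen point onto a detected plane
// `planeAnchor` is assumed to be an ARPlaneAnchor you stored earlier
func worldPosition(for point: CGPoint, on planeAnchor: ARPlaneAnchor) -> SCNVector3? {
    guard let result = sceneView.unprojectPoint(point, ontoPlane: planeAnchor.transform) else {
        return nil
    }
    return SCNVector3(result.x, result.y, result.z)
}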
Add an empty node in front of the camera at a fixed offset (here 25 cm), making it a child of the camera.
// Add a node in front of the camera just after creating the scene
hitNode = SCNNode()
hitNode!.position = SCNVector3Make(0, 0, -0.25) // 25 cm offset
sceneView.pointOfView?.addChildNode(hitNode!)

func unprojectedPosition(touch: CGPoint) -> SCNVector3 {
    guard let hitNode = self.hitNode else {
        return SCNVector3Zero
    }
    let projectedOrigin = sceneView.projectPoint(hitNode.worldPosition)
    let offset = sceneView.unprojectPoint(SCNVector3Make(Float(touch.x), Float(touch.y), projectedOrigin.z))
    return offset
}
See the Justaline GitHub implementation of the code here
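For reference, a hedged usage sketch of this approach, placing a small marker wherever the finger moves, always at hitNode's depth:

override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first else { return }
    // Unproject the touch at the depth of hitNode (25 cm in front of the camera)
    let position = unprojectedPosition(touch: touch.location(in: sceneView))
    let marker = SCNNode(geometry: SCNSphere(radius: 0.0025))
    marker.position = position
    sceneView.scene.rootNode.addChildNode(marker)
}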

SCNNode facing towards the camera

I am trying to put an SCNCylinder node in the scene at the touch point. I always want to show the cylinder's diameter facing toward the camera. It works fine for a horizontal scene, but there is a problem in a vertical scene: there I can see the cylinder's sides, but I want the full diameter facing the camera no matter what the camera orientation is. I know some transformation needs to be applied depending on the camera transform, but I don't know how. I am not using plane detection; the node is added directly to the scene.
Vertical Image:
Horizontal Image:
The code to insert the node is as follows:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first else {
        return
    }
    let result = sceneView.hitTest(touch.location(in: sceneView),
                                   types: [ARHitTestResult.ResultType.featurePoint])
    guard let hitResult = result.last else {
        print("returning because couldn't find the touch point")
        return
    }
    let hitTransform = SCNMatrix4(hitResult.worldTransform)
    let position = SCNVector3Make(hitTransform.m41, hitTransform.m42, hitTransform.m43)
    let ballShape = SCNCylinder(radius: 0.02, height: 0.01)
    let ballNode = SCNNode(geometry: ballShape)
    ballNode.position = position
    sceneView.scene.rootNode.addChildNode(ballNode)
}
Any help would be appreciated.
I'm not certain this is the right way to handle what you need, but here is something that may help you.
CoreMotion could be useful to determine whether the device is being held at a horizontal or vertical angle.
This class has a property called attitude, which describes the rotation of our device in terms of roll, pitch, and yaw. If we are holding our phone in portrait orientation, the roll describes the angle of rotation about the axis that runs through the top and bottom of the phone. The pitch describes the angle of rotation about the axis that runs through the sides of your phone (where the volume buttons are). And finally, the yaw describes the angle of rotation about the axis that runs through the front and back of your phone. With these three values, we can determine how the user is holding their phone in reference to what would be level ground (Stephan Baker).
Begin by importing CoreMotion:
import CoreMotion
Then create the following variables:
let deviceMotionDetector = CMMotionManager()
var currentAngle: Double!
We will then create a function which will check the angle of our device like so:
/// Detects the angle of the device
func detectDeviceAngle() {
    if deviceMotionDetector.isDeviceMotionAvailable {
        deviceMotionDetector.deviceMotionUpdateInterval = 0.1
        let queue = OperationQueue()
        deviceMotionDetector.startDeviceMotionUpdates(to: queue) { (motion, error) in
            if let attitude = motion?.attitude {
                DispatchQueue.main.async {
                    let pitch = attitude.pitch * 180.0 / Double.pi
                    self.currentAngle = pitch
                    print(pitch)
                }
            }
        }
    } else {
        print("Device Motion Unavailable")
    }
}
This only needs to be called once, for example in viewDidLoad:
detectDeviceAngle()
In your touchesBegan method you can add this to the end:
// 1. If we are holding the device above 60 degrees, rotate the node
if currentAngle > 60 {
    // 2a. Get the X, Y, Z values of the desired rotation axis
    let rotation = SCNVector3(1, 0, 0)
    let vector3x = rotation.x
    let vector3y = rotation.y
    let vector3z = rotation.z
    let degreesToRotate: Float = 90
    // 2b. Set the rotation of the object (W is in radians, so convert from degrees)
    ballNode.rotation = SCNVector4Make(vector3x, vector3y, vector3z, degreesToRotate * .pi / 180)
}
I am sure there are better ways to achieve what you need (and I would be very interested in hearing them too), but I hope this gets you started.
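One such alternative, offered here only as a hedged sketch: SceneKit's SCNBillboardConstraint keeps a node oriented toward the camera automatically, which may remove the need to track the device angle at all:

// Keep the node facing the point of view automatically.
// A cylinder's flat face points along its local Y axis, so the geometry
// may need a one-time local rotation (e.g. eulerAngles.x = .pi / 2) first.
let billboard = SCNBillboardConstraint()
billboard.freeAxes = .all // allow rotation about all axes
ballNode.constraints = [billboard]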

ARKit – How to put 3D Object on QRCode?

I'm trying to put a 3D object on a QR code with ARKit.
For that I use an AVCaptureDevice to detect a QR code and establish its area, which gives me a CGRect.
Then I hit-test every point of the CGRect to get the average 3D coordinates, like so:
var positionGiven = SCNVector3(0, 0, 0)
var cpts: Float = 0 // number of hit-test results accumulated

for column in Int(qrZone.origin.x)...2 * Int(qrZone.origin.x + qrZone.width) {
    for row in Int(qrZone.origin.y)...2 * Int(qrZone.origin.y + qrZone.height) {
        for result in sceneView.hitTest(CGPoint(x: CGFloat(column) / 2, y: CGFloat(row) / 2),
                                        types: [.existingPlaneUsingExtent, .featurePoint]) {
            positionGiven.x += result.worldTransform.columns.3.x
            positionGiven.y += result.worldTransform.columns.3.y
            positionGiven.z += result.worldTransform.columns.3.z
            cpts += 1
        }
    }
}
positionGiven.x /= cpts
positionGiven.y /= cpts
positionGiven.z /= cpts
But the hit test doesn't return any results and freezes the camera, whereas a hit test from a touch on the screen works.
Do you have any idea why it's not working?
Do you have another idea that could help me achieve what I want?
I already thought about a 3D translation with CoreMotion, which can give me the tilt of the device, but that seems really tedious.
I also heard about ARWorldAlignmentCamera, which can lock the scene coordinates to match the orientation of the camera, but I don't know how to use it!
Edit: I tried moving my 3D object every time I touch the screen while the hit test is positive, and it's pretty accurate! I really don't understand why the hit test over an area of pixels doesn't work...
Edit 2: Here is the hit-test code that works with 2-5 touches on the screen:
@objc func touch(sender: UITapGestureRecognizer) {
    for result in sceneView.hitTest(CGPoint(x: sender.location(in: view).x,
                                            y: sender.location(in: view).y),
                                    types: [.existingPlaneUsingExtent, .featurePoint]) {
        // Pop-up message for testing
        alert("\(sender.location(in: view))", message: "\(result.worldTransform.columns.3)")
        // Move the 3D object to the new coordinates
        let objectList = sceneView.scene.rootNode.childNodes
        for object: SCNNode in objectList {
            object.removeFromParentNode()
        }
        addObject(SCNVector3(result.worldTransform.columns.3.x,
                             result.worldTransform.columns.3.y,
                             result.worldTransform.columns.3.z))
    }
}
Edit 3:
I managed to resolve my problem partially.
I take the transform matrix of the camera (session.currentFrame.camera.transform) so that the object is in front of the camera.
Then I apply a translation on (x, y) using the position of the CGRect.
However, I can't translate along the z-axis because I don't have enough information, and I will probably need an estimation of the z coordinate like the hit test provides.
Thanks in advance! :)
You could use Apple's Vision API to detect the QR code and place an anchor.
To start detecting QR codes, use:
var qrRequests = [VNRequest]()
var detectedDataAnchor: ARAnchor?
var processing = false

func startQrCodeDetection() {
    // Create a barcode detection request
    let request = VNDetectBarcodesRequest(completionHandler: self.requestHandler)
    // Set it to recognize QR codes only
    request.symbologies = [.QR]
    self.qrRequests = [request]
}
In ARSession's session(_:didUpdate:) delegate method:
public func session(_ session: ARSession, didUpdate frame: ARFrame) {
    DispatchQueue.global(qos: .userInitiated).async {
        do {
            if self.processing {
                return
            }
            self.processing = true
            // Create a request handler using the captured image from the ARFrame
            let imageRequestHandler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                                            options: [:])
            // Process the request
            try imageRequestHandler.perform(self.qrRequests)
        } catch {
            // If Vision fails, allow the next frame to be processed
            self.processing = false
        }
    }
}
Handle the Vision QR request and trigger the hit test
func requestHandler(request: VNRequest, error: Error?) {
    // Get the first result out of the results, if there are any
    if let results = request.results, let result = results.first as? VNBarcodeObservation {
        guard result.payloadStringValue != nil else {
            self.processing = false
            return
        }
        // Get the bounding box for the barcode and find the center
        var rect = result.boundingBox
        // Flip coordinates (Vision's origin is bottom-left, UIKit's is top-left)
        rect = rect.applying(CGAffineTransform(scaleX: 1, y: -1))
        rect = rect.applying(CGAffineTransform(translationX: 0, y: 1))
        // Get the center
        let center = CGPoint(x: rect.midX, y: rect.midY)
        DispatchQueue.main.async {
            self.hitTestQrCode(center: center)
            self.processing = false
        }
    } else {
        self.processing = false
    }
}

func hitTestQrCode(center: CGPoint) {
    if let hitTestResults = self.latestFrame?.hitTest(center, types: [.featurePoint]),
       let hitTestResult = hitTestResults.first {
        if let detectedDataAnchor = self.detectedDataAnchor,
           let node = self.sceneView.node(for: detectedDataAnchor) {
            // The anchor already exists: move its node to the new hit-test position
            node.transform = SCNMatrix4(hitTestResult.worldTransform)
        } else {
            // Create an anchor. The node will be created in the delegate method below.
            self.detectedDataAnchor = ARAnchor(transform: hitTestResult.worldTransform)
            self.sceneView.session.add(anchor: self.detectedDataAnchor!)
        }
    }
}
Then handle adding the node when the anchor is added.
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    // If this is our anchor, create a node
    if self.detectedDataAnchor?.identifier == anchor.identifier {
        let sphere = SCNSphere(radius: 1.0)
        sphere.firstMaterial?.diffuse.contents = UIColor.red
        let sphereNode = SCNNode(geometry: sphere)
        sphereNode.transform = SCNMatrix4(anchor.transform)
        return sphereNode
    }
    return nil
}
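For these delegate methods to fire, the view controller must be set as both the scene view's delegate and the session delegate; a minimal setup sketch, assuming the class conforms to ARSCNViewDelegate and ARSessionDelegate:

override func viewDidLoad() {
    super.viewDidLoad()
    sceneView.delegate = self          // needed for renderer(_:nodeFor:)
    sceneView.session.delegate = self  // needed for session(_:didUpdate:)
    startQrCodeDetection()
}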