Rotate SCNNode with velocity – Swift

I have an SCNNode that I rotate with a UIPanGestureRecognizer around its Y axis. This works great, but I'd like to apply inertia to the avatar so it keeps spinning (and slows down) once the user lifts their finger, kind of like a scroll view's deceleration. Is this possible?
func addMoveAvatarGestures(sceneView: SCNView) {
    let pan = gestureWithRootNode(target: self, action: #selector(rotateAvatar(_:)))
    if let scene = sceneView.scene {
        pan.rootNode = scene.rootNode
    }
    pan.maximumNumberOfTouches = 1
    pan.minimumNumberOfTouches = 1
    pan.delegate = self
    self.rotationGesture = pan
    sceneView.addGestureRecognizer(pan)
}
@objc func rotateAvatar(_ sender: gestureWithRootNode) {
    guard let nodeToRotate = sender.rootNode else {
        return
    }
    let translation = sender.translation(in: sender.view!)
    let xToAngle = Float(self.view.frame.width) / Float(360)
    let newAngleY = Float(translation.x) * xToAngle
    let velocity = sender.velocity(in: sender.view!)
    if abs(velocity.x) > abs(velocity.y) {
        sender.setTranslation(CGPoint.zero, in: sender.view!)
        nodeToRotate.eulerAngles.y += nodeExtensions.deg2rad(newAngleY)
    }
}

You can use the physics body's applyTorque(_:asImpulse:) method for this, which changes the node's angular momentum. It affects the node's angularVelocity, and you can constrain or restrict the effect by setting the angularVelocityFactor.
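A minimal sketch of how that could look (not the asker's exact setup: it assumes the avatar node is given a dynamic physics body, and the damping and scaling constants are illustrative):

func enableSpinPhysics(for avatarNode: SCNNode) {
    // A dynamic body is required for applyTorque to have any effect.
    avatarNode.physicsBody = SCNPhysicsBody(type: .dynamic, shape: nil)
    avatarNode.physicsBody?.isAffectedByGravity = false
    // Restrict the resulting spin to the Y axis, matching the pan rotation.
    avatarNode.physicsBody?.angularVelocityFactor = SCNVector3(0, 1, 0)
    // Damping makes the spin decay over time, like a scroll view decelerating.
    avatarNode.physicsBody?.angularDamping = 0.3
}

Then, when the pan ends, convert the fling velocity into an impulsive torque:

if sender.state == .ended {
    let velocity = sender.velocity(in: sender.view!)
    // Horizontal fling speed becomes a one-off torque around Y; tune the divisor to taste.
    nodeToRotate.physicsBody?.applyTorque(SCNVector4(0, 1, 0, Float(velocity.x) / 500), asImpulse: true)
}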

ARKit – How to know if 3d object is in the center of a screen?

I place a 3D object in world space and then move the camera around. Once I know the object is inside the frustum (via the isNode(_:insideFrustumOf:) method), I need to know whether the object is in the center, top, or bottom of the camera's view.
For a solution that's not a hack, you can use the projectPoint(_:) API. It's better to work in pixel coordinates, because this method uses the actual camera settings to determine where the object appears on screen.
let projectedPoint = sceneView.projectPoint(self.sphereNode.worldPosition)
let xOffset = CGFloat(projectedPoint.x) - screenCenter.x
let yOffset = CGFloat(projectedPoint.y) - screenCenter.y
if xOffset * xOffset + yOffset * yOffset < rSquared {
    // inside a disc of radius 'r' at the center of the screen
}
Solution
To achieve this you need to use a trick: create a new SCNCamera, make it a child of the default pointOfView camera, and set its field of view to approximately 10 degrees.
Then, inside the renderer(_:updateAtTime:) instance method, use the isNode(_:insideFrustumOf:) method.
Here's working code:
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate, SCNSceneRendererDelegate {

    @IBOutlet var sceneView: ARSCNView!
    @IBOutlet var label: UILabel!

    let cameraNode = SCNNode()
    let sphereNode = SCNNode()
    let config = ARWorldTrackingConfiguration()

    public func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        DispatchQueue.main.async {
            if self.sceneView.isNode(self.sphereNode, insideFrustumOf: self.cameraNode) {
                self.label.text = "In the center..."
            } else {
                self.label.text = "Out OF CENTER"
            }
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
        sceneView.allowsCameraControl = true
        let scene = SCNScene()
        sceneView.scene = scene

        // Narrow-FoV child camera used only for the frustum test
        cameraNode.camera = SCNCamera()
        cameraNode.camera?.fieldOfView = 10
        DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) {
            self.sceneView.pointOfView!.addChildNode(self.cameraNode)
        }

        sphereNode.geometry = SCNSphere(radius: 0.05)
        sphereNode.geometry?.firstMaterial?.diffuse.contents = UIColor.red
        sphereNode.position.z = -1.0
        sceneView.scene.rootNode.addChildNode(sphereNode)

        sceneView.session.run(config)
    }
}
Also, in this solution you can turn on an orthographic projection for the child camera instead of a perspective one; this helps when the model is far from the camera.
cameraNode.camera?.usesOrthographicProjection = true
Next steps
In the same way you can add two additional SCNCameras, place them above and below the central SCNCamera, and test your object with two extra isNode(_:insideFrustumOf:) calls, as sketched below.
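A rough sketch of that idea (the tilt angles and node names are illustrative, not part of the answer above):

// Two extra narrow-FoV cameras tilted up and down relative to the central one.
let topCameraNode = SCNNode()
topCameraNode.camera = SCNCamera()
topCameraNode.camera?.fieldOfView = 10
topCameraNode.eulerAngles.x = 10 * Float.pi / 180     // tilted up

let bottomCameraNode = SCNNode()
bottomCameraNode.camera = SCNCamera()
bottomCameraNode.camera?.fieldOfView = 10
bottomCameraNode.eulerAngles.x = -10 * Float.pi / 180 // tilted down

sceneView.pointOfView!.addChildNode(topCameraNode)
sceneView.pointOfView!.addChildNode(bottomCameraNode)

// Then, inside renderer(_:updateAtTime:), isNode(_:insideFrustumOf:) against
// topCameraNode or bottomCameraNode tells you whether the object sits in the
// top or bottom region of the view.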
I solved the problem another way:
let results = self.sceneView.hitTest(screenCenter!, options: [SCNHitTestOption.rootNode: parentnode])
where parentnode is the parent of target node, because I have multiple nodes.
func nodeInCenter() -> SCNNode? {
    // Project both the camera and the sphere into screen space and compare.
    let cameraPoint = sceneView.projectPoint(sceneView.pointOfView!.worldPosition)
    let spherePoint = sceneView.projectPoint(sphereNode.worldPosition)
    let dx = cameraPoint.x - spherePoint.x
    let dy = cameraPoint.y - spherePoint.y
    // Within 3 screen points of the center on both axes
    if dx * dx < 9 && dy * dy < 9 {
        return sphereNode
    }
    return nil
}

How to move and rotate SCNNode using ARKit and Gesture Recognizer?

I am working on an AR-based iOS app using ARKit (SceneKit). I used the Apple sample code https://developer.apple.com/documentation/arkit/handling_3d_interaction_and_ui_controls_in_augmented_reality as the base for this; with it I am able to move or rotate the whole virtual object.
But I want to select and move/rotate a child node of the virtual object with a finger, similar to how the whole virtual object is moved/rotated.
I tried the two links below, but they only move the child node along a particular axis, not freely wherever the user moves the finger.
ARKit - Drag a node along a specific axis (not on a plane)
Dragging SCNNode in ARKit Using SceneKit
I also tried replacing the virtual object, which is an SCNReferenceNode, with an SCNNode so that whatever functionality exists for the virtual object would apply to the child node as well, but it is not working.
Can anyone please help me with how to freely move/rotate not only the virtual object but also a child node of the virtual object?
Please find the code I am currently using below:
let tapPoint: CGPoint = gesture.location(in: sceneView)
let result = sceneView.hitTest(tapPoint, options: nil)
if result.count == 0 {
    return
}
let scnHitResult: SCNHitTestResult? = result.first
movedObject = scnHitResult?.node //.parent?.parent

let hitResults = self.sceneView.hitTest(tapPoint, types: .existingPlane)
if !hitResults.isEmpty {
    guard let hitResult = hitResults.last else { return }
    movedObject?.position = SCNVector3Make(hitResult.worldTransform.columns.3.x,
                                           hitResult.worldTransform.columns.3.y,
                                           hitResult.worldTransform.columns.3.z)
}
To move an object:
Perform a hit test to check where you touched and which plane you hit, and take the resulting position. Then move your SCNNode to that position by setting node.position with an SCNVector3.
Code:
@objc func panDetected(recognizer: UIPanGestureRecognizer) {
    let loc = recognizer.location(in: self.arSceneView)
    let hitResult = self.arSceneView.hitTest(loc, types: .existingPlane)
    if !hitResult.isEmpty {
        guard let hitResult = hitResult.last else { return }
        self.yourNode.position = SCNVector3Make(hitResult.worldTransform.columns.3.x,
                                                hitResult.worldTransform.columns.3.y,
                                                hitResult.worldTransform.columns.3.z)
    }
}
The above code is enough to move your node over a detected plane, anywhere you touch, and not just along a single axis.
Rotating a node to follow your gesture is a very difficult task, and I worked on a solution for quite some time without ever reaching a perfect output.
But I came across this repository on GitHub which lets you do just that, with a very impressive result:
https://github.com/Xartec/ScreenSpaceRotationAndPan
The Swift version of the code you require to rotate your node using your gesture would be:
var previousLoc: CGPoint?
var touchCount: Int?

@objc func panDetected(recognizer: UIPanGestureRecognizer) {
    let loc = recognizer.location(in: self.view)
    var delta = recognizer.translation(in: self.view)
    if recognizer.state == .began {
        previousLoc = loc
        touchCount = recognizer.numberOfTouches
    } else if recognizer.state == .changed {
        guard let lastLoc = previousLoc else { return }
        delta = CGPoint(x: 2 * (loc.x - lastLoc.x), y: 2 * (loc.y - lastLoc.y))
        previousLoc = loc
        if touchCount != recognizer.numberOfTouches {
            return
        }

        // Build the rotation from the pan delta: vertical movement rotates
        // around X, horizontal movement around Y.
        let rotX = SCNMatrix4Rotate(SCNMatrix4Identity, Float((1.0 / 100) * delta.y), 1, 0, 0)
        let rotY = SCNMatrix4Rotate(SCNMatrix4Identity, Float((1.0 / 100) * delta.x), 0, 1, 0)
        let rotMatrix = SCNMatrix4Mult(rotX, rotY)

        // Remove the node's own translation so it rotates about its center.
        let transMatrix = SCNMatrix4MakeTranslation(yourNode.position.x, yourNode.position.y, yourNode.position.z)
        self.yourNode.transform = SCNMatrix4Mult(self.yourNode.transform, SCNMatrix4Invert(transMatrix))

        // Factor out the parent's world rotation.
        let parentNodeTransMatrix = SCNMatrix4MakeTranslation((self.yourNode.parent?.worldPosition.x)!, (self.yourNode.parent?.worldPosition.y)!, (self.yourNode.parent?.worldPosition.z)!)
        let parentNodeMatWOTrans = SCNMatrix4Mult((self.yourNode.parent?.worldTransform)!, SCNMatrix4Invert(parentNodeTransMatrix))
        self.yourNode.transform = SCNMatrix4Mult(self.yourNode.transform, parentNodeMatWOTrans)

        // Apply the rotation in the camera's orientation space, so the node
        // rotates relative to the screen rather than its own axes.
        let camOrbitNodeTransMat = SCNMatrix4MakeTranslation((self.arSceneView.pointOfView?.worldPosition.x)!, (self.arSceneView.pointOfView?.worldPosition.y)!, (self.arSceneView.pointOfView?.worldPosition.z)!)
        let camOrbitNodeMatWOTrans = SCNMatrix4Mult((self.arSceneView.pointOfView?.worldTransform)!, SCNMatrix4Invert(camOrbitNodeTransMat))
        self.yourNode.transform = SCNMatrix4Mult(self.yourNode.transform, SCNMatrix4Invert(camOrbitNodeMatWOTrans))
        self.yourNode.transform = SCNMatrix4Mult(self.yourNode.transform, rotMatrix)
        self.yourNode.transform = SCNMatrix4Mult(self.yourNode.transform, camOrbitNodeMatWOTrans)

        // Undo the parent-rotation factoring and restore the translation.
        self.yourNode.transform = SCNMatrix4Mult(self.yourNode.transform, SCNMatrix4Invert(parentNodeMatWOTrans))
        self.yourNode.transform = SCNMatrix4Mult(self.yourNode.transform, transMatrix)
    }
}

Detect SCNNode with camera only

Let's say I draw a circle in the middle of my screen to use as a target. If I point this circle at a node, how can ARKit detect that node?
For now I'm using the tap method:
@IBAction func tapHandler(_ sender: UITapGestureRecognizer) {
    let viewTouchLocation: CGPoint = sender.location(in: sceneView)
    guard let result = sceneView.hitTest(viewTouchLocation, options: nil).first else {
        return
    }
    // ...etc
}
which works really well, but it would be so much better to detect a node just by pointing the camera at it.
let screenRect = UIScreen.main.bounds
let screenWidth = screenRect.size.width
let screenHeight = screenRect.size.height
let location = CGPoint(x: screenWidth / 2, y: screenHeight / 2)
Then use this location in the hit test instead of the tap location.
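If you want this to happen continuously rather than on tap, the same hit test can run in the render loop (a sketch; it assumes the view controller is the scene renderer delegate and sceneView is your ARSCNView):

func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    // View hit-testing and UIKit geometry belong on the main thread.
    DispatchQueue.main.async {
        let center = CGPoint(x: self.sceneView.bounds.midX, y: self.sceneView.bounds.midY)
        if let result = self.sceneView.hitTest(center, options: nil).first {
            // This node is currently under the target circle at the screen's center.
            print("Targeting: \(result.node.name ?? "unnamed node")")
        }
    }
}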

ARKit – How to put 3D Object on QRCode?

I'm trying to put a 3D object on a QR code with ARKit.
For that I use an AVCaptureDevice to detect the QR code and establish its area, which gives me a CGRect.
Then I make a hitTest on every point of the CGRect to get the average 3D coordinates, like so:
positionGiven = SCNVector3(0, 0, 0)
for column in Int(qrZone.origin.x)...2 * Int(qrZone.origin.x + qrZone.width) {
    for row in Int(qrZone.origin.y)...2 * Int(qrZone.origin.y + qrZone.height) {
        for result in sceneView.hitTest(CGPoint(x: CGFloat(column) / 2, y: CGFloat(row) / 2),
                                        types: [.existingPlaneUsingExtent, .featurePoint]) {
            positionGiven.x += result.worldTransform.columns.3.x
            positionGiven.y += result.worldTransform.columns.3.y
            positionGiven.z += result.worldTransform.columns.3.z
            cpts += 1
        }
    }
}
positionGiven.x /= cpts
positionGiven.y /= cpts
positionGiven.z /= cpts
But the hitTest doesn't return any results and freezes the camera, whereas a hitTest from a touch on the screen works.
Do you have any idea why it's not working?
Do you have another idea that could help me achieve what I want?
I already thought about a 3D translation with CoreMotion, which can give me the tilt of the device, but that seems really tedious.
I also heard about ARWorldAlignmentCamera, which can lock the scene coordinates to match the orientation of the camera, but I don't know how to use it!
Edit: I tried moving my 3D object every time I touch the screen while the hitTest is positive, and it's pretty accurate! I really don't understand why a hitTest over an area of pixels doesn't work...
Edit 2: Here is the hitTest code that works with 2-5 touches on the screen:
@objc func touch(sender: UITapGestureRecognizer) {
    for result in sceneView.hitTest(CGPoint(x: sender.location(in: view).x, y: sender.location(in: view).y),
                                    types: [.existingPlaneUsingExtent, .featurePoint]) {
        // Pop-up message for testing
        alert("\(sender.location(in: view))", message: "\(result.worldTransform.columns.3)")
        // Moving the 3D object to the new coordinates
        let objectList = sceneView.scene.rootNode.childNodes
        for object: SCNNode in objectList {
            object.removeFromParentNode()
        }
        addObject(SCNVector3(result.worldTransform.columns.3.x,
                             result.worldTransform.columns.3.y,
                             result.worldTransform.columns.3.z))
    }
}
Edit 3:
I managed to partially resolve my problem.
I take the camera's transform matrix (session.currentFrame.camera.transform) so that the object is in front of the camera.
Then I apply a translation on (x, y) using the position of the CGRect.
However, I can't translate along the z-axis, because I don't have enough information; I'll probably need an estimate of the z coordinate like the one the hitTest provides.
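For reference, the "camera transform plus translation" placement described in this edit looks roughly like this (a sketch; node is the object being placed, and the z offset is illustrative):

if let cameraTransform = sceneView.session.currentFrame?.camera.transform {
    // Start from the camera's pose and push the object half a meter forward;
    // the camera looks down its negative z-axis.
    var translation = matrix_identity_float4x4
    translation.columns.3.z = -0.5
    node.simdTransform = matrix_multiply(cameraTransform, translation)
}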
Thanks in advance! :)
You could use Apple's Vision API to detect the QR code and place an anchor.
To start detecting QR codes, use:
var qrRequests = [VNRequest]()
var detectedDataAnchor: ARAnchor?
var processing = false

func startQrCodeDetection() {
    // Create a barcode detection request
    let request = VNDetectBarcodesRequest(completionHandler: self.requestHandler)
    // Set it to recognize QR codes only
    request.symbologies = [.QR]
    self.qrRequests = [request]
}
In the ARSession delegate's session(_:didUpdate:) method:
public func session(_ session: ARSession, didUpdate frame: ARFrame) {
    DispatchQueue.global(qos: .userInitiated).async {
        do {
            if self.processing {
                return
            }
            self.processing = true
            // Create a request handler using the captured image from the ARFrame
            let imageRequestHandler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                                            options: [:])
            // Process the request
            try imageRequestHandler.perform(self.qrRequests)
        } catch {
            print("Error processing Vision request: \(error)")
        }
    }
}
Handle the Vision QR request and trigger the hit test:
func requestHandler(request: VNRequest, error: Error?) {
    // Get the first result out of the results, if there are any
    if let results = request.results, let result = results.first as? VNBarcodeObservation {
        guard let payload = result.payloadStringValue else { return }
        // Get the bounding box for the barcode and find its center
        var rect = result.boundingBox
        // Flip the normalized Vision coordinates (origin at bottom-left) to UIKit's
        rect = rect.applying(CGAffineTransform(scaleX: 1, y: -1))
        rect = rect.applying(CGAffineTransform(translationX: 0, y: 1))
        // Get the center
        let center = CGPoint(x: rect.midX, y: rect.midY)
        DispatchQueue.main.async {
            self.hitTestQrCode(center: center)
            self.processing = false
        }
    } else {
        self.processing = false
    }
}
func hitTestQrCode(center: CGPoint) {
    if let hitTestResults = self.latestFrame?.hitTest(center, types: [.featurePoint]),
       let hitTestResult = hitTestResults.first {
        if let detectedDataAnchor = self.detectedDataAnchor,
           let node = self.sceneView.node(for: detectedDataAnchor) {
            // Move the existing node to the new hit-test position
            node.transform = SCNMatrix4(hitTestResult.worldTransform)
        } else {
            // Create an anchor. The node will be created in delegate methods
            self.detectedDataAnchor = ARAnchor(transform: hitTestResult.worldTransform)
            self.sceneView.session.add(anchor: self.detectedDataAnchor!)
        }
    }
}
Then handle adding the node when the anchor is added.
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    // If this is our anchor, create a node
    if self.detectedDataAnchor?.identifier == anchor.identifier {
        let sphere = SCNSphere(radius: 1.0)
        sphere.firstMaterial?.diffuse.contents = UIColor.red
        let sphereNode = SCNNode(geometry: sphere)
        sphereNode.transform = SCNMatrix4(anchor.transform)
        return sphereNode
    }
    return nil
}

Add SCNNode after rotating rootNode

I'm trying to add a node (a sphere) to a body model, but it doesn't work properly after I rotate the model with a pan gesture.
Here's how I'm adding the node (using a long-press gesture):
func addSphere(sender: UILongPressGestureRecognizer) {
    switch sender.state {
    case .Began:
        let location = sender.locationInView(bodyView)
        let hitResults = bodyView.hitTest(location, options: nil)
        if hitResults.count > 0 {
            let result = hitResults.first!
            let secondSphereGeometry = SCNSphere(radius: 0.015)
            secondSphereGeometry.firstMaterial?.diffuse.contents = UIColor.redColor()
            let secondSphereNode = SCNNode(geometry: secondSphereGeometry)
            let vpWithZ = SCNVector3(x: Float(result.worldCoordinates.x),
                                     y: Float(result.worldCoordinates.y),
                                     z: Float(result.worldCoordinates.z))
            secondSphereNode.position = vpWithZ
            bodyView.scene!.rootNode.addChildNode(secondSphereNode)
        }
    default:
        break
    }
}
Here is how I rotate the view:
func rotateGesture(sender: UIPanGestureRecognizer) {
    let translation = sender.translationInView(sender.view)
    var newZAngle = Float(translation.x) * Float(M_PI) / 180.0
    newZAngle += currentZAngle
    bodyView.scene!.rootNode.transform = SCNMatrix4MakeRotation(newZAngle, 0, 0, 1)
    if sender.state == .Ended {
        currentZAngle = newZAngle
    }
}
And to load the 3D model I just do:
bodyView.scene = SCNScene(named: "male_body.dae") // bodyView is a SCNView in the storyboard
I found something related to the worldTransform property and also the convertPosition(_:toNode:) function, but couldn't find an example that works well.
The problem is that if I rotate the model, the spheres are not positioned properly; they are always positioned as if the model were in its initial state.
If I turn the body and long-press its arm (on the side), the sphere is added somewhere floating in front of the body.
I don't know how to fix this. I'd appreciate it if someone could help. Thanks!
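For what it's worth, the convertPosition route mentioned above would amount to mapping the world-space hit point into the rotated root node's local space before parenting the sphere, along these lines (a sketch in the question's Swift 2 style, untested against the original project):

// Inside addSphere, after obtaining `result`:
let rootNode = bodyView.scene!.rootNode
// result.worldCoordinates is in world space; rootNode has been rotated,
// so convert the point into rootNode's local space before adding the child.
secondSphereNode.position = rootNode.convertPosition(result.worldCoordinates, fromNode: nil)
rootNode.addChildNode(secondSphereNode)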