macOS: make the trackpad reflect the display size for NSTouch events

I am trying to make touch events treat the trackpad as if it were the display.
What I mean by this:
Consider a Mac laptop display, and imagine it were exactly the size of the trackpad; that is how I want the touch events to behave. NSEvent used to have a normalised x/y coordinate property that might have given me this, but it seems to have been deprecated.
Below are my touchesBegan and touchesMoved overrides:
override func touchesBegan(with event: NSEvent) {
    // Warp the cursor to the touch's location in the overlay scene.
    CGDisplayMoveCursorToPoint(CGMainDisplayID(), event.location(in: overlayScene))
    let player = SCNAudioPlayer(source: tick)
    sineNode.addAudioPlayer(player)
}
override func touchesMoved(with event: NSEvent) {
    let location = event.location(in: overlayScene)
    guard let node = overlayScene.nodes(at: location).first else {
        currentSKNode = nil
        return
    }
    if currentSKNode != node {
        currentSKNode = node
        print(node.name ?? "unnamed node")
        let player = SCNAudioPlayer(source: tick)
        sineNode.addAudioPlayer(player)
    }
}
Imagine a graph reader whose x and y axes are centred at exactly width/2 and height/2 of the trackpad. When I touch anywhere on it, that should be reflected exactly in the app's window, which is set to full screen.
Currently the mouse cursor only travels partway across the display when I make a full left-to-right swipe, hence my attempt to reposition the cursor to the position in the NSView, or in this case the SKScene.
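For reference, here is a minimal sketch (an assumption, not code from the post) of mapping the normalized trackpad position onto the full main display. NSTouch.normalizedPosition runs from (0, 0) at the lower left of the trackpad to (1, 1) at the upper right, while CGDisplayMoveCursorToPoint expects display-local coordinates with the origin at the upper left, hence the flipped y:

import CoreGraphics

// Move the cursor to the display point matching a trackpad-normalized touch.
func moveCursor(toNormalized normalized: CGPoint) {
    let display = CGMainDisplayID()
    let width = CGFloat(CGDisplayPixelsWide(display))
    let height = CGFloat(CGDisplayPixelsHigh(display))
    let target = CGPoint(x: normalized.x * width,
                         y: (1 - normalized.y) * height) // flip y for the top-left origin
    CGDisplayMoveCursorToPoint(display, target)
}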

OK, so it turns out, as Kent mentioned, that the normalised position is not deprecated; I was looking at NSEvent rather than NSTouch.
Here's the code for anyone who stumbles upon the same requirement. It works fine; I just need to figure out how to make it as responsive as possible.
override func touchesMoved(with event: NSEvent) {
    DispatchQueue.main.async {
        guard let touch = event.touches(matching: .moved, in: self.sceneView).first else { return }
        let normalised = touch.normalizedPosition
        // Map the normalised (0...1) trackpad position into scene coordinates
        // centred on the middle of the scene.
        let translated = CGPoint(x: self.width * normalised.x - self.widthCenter,
                                 y: self.height * normalised.y - self.heightCenter)
        guard let node = self.overlayScene.nodes(at: translated).first else {
            self.currentSKNode = nil
            return
        }
        if self.currentSKNode != node {
            self.currentSKNode = node
            let player = SCNAudioPlayer(source: self.tick)
            self.sineNode.addAudioPlayer(player)
        }
    }
}
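One setup detail the post does not show (so this is an assumption about the configuration): an NSView only receives touchesBegan/touchesMoved for trackpad contacts once it opts in to indirect touch events.

// Assumed setup, e.g. in viewDidLoad; sceneView is the view used above.
sceneView.allowedTouchTypes = [.indirect] // deliver trackpad (indirect) touches
sceneView.wantsRestingTouches = false     // ignore fingers resting on the pad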

Related

SceneKit: rotate camera to tap point (3D)

I have a camera node.
Around the camera node there is another big node (an .obj file) of a building.
The user can move inside the building.
The user can perform a long-press gesture, and an additional node (say, a sphere) appears on the wall of the building. I want to rotate my camera toward this new node (toward the tap location).
I don't know how to do it. Can someone help me?
Other answers did not work for me; the camera just rotated in random directions.
I've found a way!
I take the location of the tap (or whatever coordinates you need to turn toward):
@objc private func handleLongPress(pressRec: UILongPressGestureRecognizer) {
    let finishedStates: [UIGestureRecognizer.State] = [.cancelled, .ended, .failed]
    if !finishedStates.contains(pressRec.state) {
        let touchPoint = pressRec.location(in: sceneView)
        let hitResults = sceneView.hitTest(touchPoint, options: [:])
        if let result: SCNHitTestResult = hitResults.first {
            createAnnotation(result.worldCoordinates)
            pressRec.state = .cancelled
        }
    }
}
The function that turns the camera:
func turnCameraTo(worldCoordinates: SCNVector3) {
    SCNTransaction.begin()
    SCNTransaction.animationDuration = C.hotspotAnimationDuration
    // Animate the camera to face the tapped world position.
    cameraNode.look(at: worldCoordinates)
    sceneView.defaultCameraController.clearRoll()
    SCNTransaction.commit()
}
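For completeness, the recognizer itself has to be registered somewhere; a hypothetical wiring (not shown in the answer) could be:

// E.g. in viewDidLoad; sceneView is the SCNView from the answer's code.
let longPress = UILongPressGestureRecognizer(target: self,
                                             action: #selector(handleLongPress(pressRec:)))
sceneView.addGestureRecognizer(longPress)

createAnnotation(_:) would then presumably call turnCameraTo(worldCoordinates:) with the hit's world coordinates.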

hitTest(_:options:) doesn't recognize nodes behind ARKit planes

I place an object on a wall, then try to recognize a tap on it, but the hit test returns 0 objects. When I change the Z position of the object and place it a little closer to the camera, it is recognized fine, but this isn't a solution, because the planes are always changing and one can cover the object at any moment. How can I make hitTest work correctly and recognize my nodes behind planes? Or am I using the wrong method?
fileprivate func addNode(atPoint point: CGPoint) {
    let hits = sceneView.hitTest(point, types: .existingPlaneUsingExtent)
    if let firstHit = hits.first, let originNode = originNode {
        let node = originNode.clone()
        sceneView.scene.rootNode.addChildNode(node)
        // The translation lives in the fourth column of the world transform.
        node.position = SCNVector3Make(firstHit.worldTransform.columns.3.x,
                                       firstHit.worldTransform.columns.3.y,
                                       firstHit.worldTransform.columns.3.z)
        let resize = simd_float4x4(SCNMatrix4MakeScale(0.2, 0.2, 0.2))
        let rotation = simd_float4x4(SCNMatrix4MakeRotation(.pi / 2, -1, 0, 0))
        let transform = simd_mul(firstHit.worldTransform, resize)
        let finalTransform = simd_mul(transform, rotation)
        node.simdTransform = finalTransform
        addedNodes.insert(node)
    }
}
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first else {
        print("Unable to identify touches on any plane. Ignoring interaction...")
        return
    }
    let touchPoint = touch.location(in: sceneView)
    let hits = sceneView.hitTest(touchPoint, options: [SCNHitTestOption.boundingBoxOnly: true])
    // Keep only hits on the nodes we placed ourselves.
    let filtered = hits.filter({ addedNodes.contains($0.node) })
    print("\(hits.count) vs \(filtered.count), \(hits.first?.node.name ?? "no name")")
    if let node = filtered.first?.node {
        node.removeFromParentNode()
        addedNodes.remove(node)
        return
    }
    addPictureToPlane(atPoint: touchPoint)
}
addedNodes is a set containing the added objects. When I added a translation that moved the Z coordinate by at least 0.05 toward the camera, detection worked well, at least until the plane changed and moved in front of the node.
I believe what you need to do is change your SCNHitTestSearchMode parameter, which lets you set:

Possible values for the searchMode option used with hit-testing methods.

static let searchMode: SCNHitTestOption

whereby:

The value for this key is an NSNumber object containing the raw integer value of an SCNHitTestSearchMode constant.

From the Apple docs there are three possible options you can use here:
case all
The hit test should return all possible results, sorted from nearest to farthest.
case any
The hit test should return only the first object found, regardless of distance.
case closest
The hit test should return only the closest object found.
Based on your question, therefore, you would likely need to utilise the all case.
As such, your hitTest function would probably need to look something like this (remembering that self.augmentedRealityView refers to an ARSCNView):
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    //1. Get the current touch location.
    guard let currentTouchLocation = touches.first?.location(in: self.augmentedRealityView) else { return }
    //2. Perform an SCNHitTest with searchMode set to .all (raw value 1),
    //   which returns a list of results sorted from nearest to farthest.
    if #available(iOS 11.0, *) {
        let hitTestResults = self.augmentedRealityView.hitTest(currentTouchLocation,
                                                               options: [SCNHitTestOption.searchMode: SCNHitTestSearchMode.all.rawValue])
        //3. Loop through the results and get the nodes.
        for result in hitTestResults {
            print(result.node)
        }
    }
}
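Combining that searchMode with the filtering from the question might look like this sketch (addedNodes and sceneView are assumed from the question's own code):

let hits = sceneView.hitTest(touchPoint,
                             options: [SCNHitTestOption.searchMode: SCNHitTestSearchMode.all.rawValue])
// Take the nearest hit that belongs to one of our own nodes, even if an
// ARKit plane sits in front of it.
if let hit = hits.first(where: { addedNodes.contains($0.node) }) {
    hit.node.removeFromParentNode()
    addedNodes.remove(hit.node)
}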

SCNNode facing towards the camera

I am trying to put an SCNCylinder node in the scene at the touch point. I always want the cylinder's diameter facing the camera. It works fine in a horizontal scene, but in a vertical scene I can see the cylinder's sides, whereas I want the full diameter facing the camera no matter what the camera orientation is. I know some transformation needs to be applied depending on the camera transform, but I don't know which. I am not using plane detection; it is a simple node added directly to the scene.
Vertical Image:
Horizontal Image:
The code to insert the node is as follows:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first else {
        return
    }
    let result = sceneView.hitTest(touch.location(in: sceneView), types: [ARHitTestResult.ResultType.featurePoint])
    guard let hitResult = result.last else {
        print("returning because couldn't find the touch point")
        return
    }
    let hitTransform = SCNMatrix4(hitResult.worldTransform)
    // The translation lives in the fourth row of the SCNMatrix4.
    let position = SCNVector3Make(hitTransform.m41, hitTransform.m42, hitTransform.m43)
    let ballShape = SCNCylinder(radius: 0.02, height: 0.01)
    let ballNode = SCNNode(geometry: ballShape)
    ballNode.position = position
    sceneView.scene.rootNode.addChildNode(ballNode)
}
Any help would be appreciated.
I'm not certain this is the right way to handle what you need, but here is something that may help.
I think CoreMotion could be useful for determining whether the device is at a horizontal or vertical angle.
This class has a property called attitude, which describes the rotation of our device in terms of roll, pitch, and yaw. If we are holding our phone in portrait orientation, the roll describes the angle of rotation about the axis that runs through the top and bottom of the phone. The pitch describes the angle of rotation about the axis that runs through the sides of your phone (where the volume buttons are). And finally, the yaw describes the angle of rotation about the axis that runs through the front and back of your phone. With these three values, we can determine how the user is holding their phone in reference to what would be level ground (Stephan Baker).
Begin by importing CoreMotion:
import CoreMotion
Then create the following variables:
let deviceMotionDetector = CMMotionManager()
var currentAngle: Double!
We will then create a function which will check the angle of our device like so:
/// Detects the angle of the device.
func detectDeviceAngle() {
    if deviceMotionDetector.isDeviceMotionAvailable {
        deviceMotionDetector.deviceMotionUpdateInterval = 0.1
        let queue = OperationQueue()
        deviceMotionDetector.startDeviceMotionUpdates(to: queue, withHandler: { (motion, error) -> Void in
            if let attitude = motion?.attitude {
                DispatchQueue.main.async {
                    // Convert the pitch from radians to degrees.
                    let pitch = attitude.pitch * 180.0 / Double.pi
                    self.currentAngle = pitch
                    print(pitch)
                }
            }
        })
    } else {
        print("Device Motion Unavailable")
    }
}
This only needs to be called once for example in viewDidLoad:
detectDeviceAngle()
In your touchesBegan method you can add this to the end:
//1. If We Are Holding The Device Above 60 Degress Change The Node
if currentAngle > 60 {
//2a. Get The X, Y, Z Values Of The Desired Rotation
let rotation = SCNVector3(1, 0, 0)
let vector3x = rotation.x
let vector3y = rotation.y
let vector3z = rotation.z
let degreesToRotate:Float = 90
//2b. Set The Position & Rotation Of The Object
sphereNode.rotation = SCNVector4Make(vector3x, vector3y, vector3z, degreesToRotate * 180 / .pi)
}else{
}
I am sure there are better ways to achieve what you need (and I would be very interested in hearing them too), but I hope this gets you started.
Here is the result:
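As an alternative to steering the rotation manually with CoreMotion, SCNBillboardConstraint (a different technique from the answer above) keeps a node oriented toward the current point of view automatically:

// Sketch: a billboard constraint re-orients the node toward the camera
// each frame; ballNode is the cylinder node from the question's code.
let billboard = SCNBillboardConstraint()
billboard.freeAxes = .all // allow rotation about every axis
ballNode.constraints = [billboard]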

TouchesMoved Lag with Pathfinding

I'm making a game where the player follows a touch. The game has obstacles and pathfinding, and I'm experiencing extreme lag in touchesMoved. Upon loading the game everything works, but after a few seconds of dragging my finger around (especially around obstacles) the game freezes, to the point where moving a touch freezes all physics (0 fps) until the touch ends.
I'm assuming the culprit is my function makeGraph(), which finds a path for the player from player.position (a CGPoint) to player.destination (a CGPoint), storing the path in player.goto (an array of CGPoint); the update function then takes care of moving to the next goto point.
This function is called on every touchesMoved cycle, so the player can switch direction if the finger moves over an obstacle.
My questions are: 1) what in this code is causing the unbearable lag, and 2) how can I make this code more efficient?
My code:
Initializing the variables:
var obstacles = [GKPolygonObstacle(points: [float2()])]
var navgraph = GKObstacleGraph()
Setting up the graph (called at the start of the level):
func setupGraph() {
    var wallnodes: [SKSpriteNode] = [SKSpriteNode]()
    self.enumerateChildNodes(withName: "wall") { node, stop in
        wallnodes.append(node as! SKSpriteNode)
    }
    self.enumerateChildNodes(withName: "crate") { node, stop in
        wallnodes.append(node as! SKSpriteNode)
    }
    obstacles = SKNode.obstacles(fromNodeBounds: wallnodes)
    navgraph = GKObstacleGraph(obstacles: obstacles, bufferRadius: Float(gridSize/2))
}
touchesMoved: I keep track of multiple touches using strings; only one finger may act as the MoveFinger. This string is created in touchesBegan and emptied in touchesEnded.
override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
    touchloop: for touch in touches {
        if MoveFinger == String(format: "%p", touch) {
            let location = touch.location(in: self)
            player.destination = location
            makeGraph()
            player.moving = true
            player.UpdateMoveMode()
            continue touchloop
        }
        // .... more if statements ....
    }
}
And the graph function (called on every touchesMoved cycle):
func makeGraph() {
    let startNode = GKGraphNode2D(point: float2(Float(player.position.x), Float(player.position.y)))
    let endNode = GKGraphNode2D(point: float2(Float(player.destination.x), Float(player.destination.y)))
    // Note: GKObstacleGraph is a class, so this is a reference to navgraph,
    // not a copy; the nodes connected below are added to the shared graph.
    let graphcopy = navgraph
    graphcopy.connectUsingObstacles(node: startNode)
    graphcopy.connectUsingObstacles(node: endNode)
    let path = graphcopy.findPath(from: startNode, to: endNode)
    player.goto = []
    for node: GKGraphNode in path {
        if let point2d = node as? GKGraphNode2D {
            let point = CGPoint(x: CGFloat(point2d.position.x), y: CGFloat(point2d.position.y))
            player.goto.append(point)
        }
    }
    if player.goto.count == 0 {
        // If the path is empty, go straight to the destination.
        player.goto = [player.destination]
    } else {
        // If the path is not empty, remove the first point (the start point).
        player.goto.remove(at: 0)
    }
}
Any ideas?
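One pattern worth checking, offered as a hedged sketch rather than a confirmed diagnosis: connectUsingObstacles(node:) permanently adds the start and end nodes to the graph, and since makeGraph() runs on every touchesMoved cycle, the shared graph grows without bound unless those nodes are removed after the search:

// Sketch (assumption): clean up the temporary path nodes after each search.
// GKObstacleGraph is a class, so graphcopy aliases navgraph; without this,
// every call leaves two more nodes connected into the shared graph.
let path = graphcopy.findPath(from: startNode, to: endNode)
graphcopy.remove([startNode, endNode])

Throttling how often makeGraph() runs (for example, only when the destination has moved more than a grid cell) would also cut the per-move cost.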

SpriteKit - why SKNodes are not being touch-detected

I have reviewed countless references trying to understand why my scene is not behaving the way I expected, such as this one.
Here is my very simple SKScene (two child nodes):
The scene has an SKSpriteNode which covers the entire scene as a background image. This has zPosition = 0.
The scene has a second node (an SKNode) which itself has another child (up to two levels deep). This has zPosition = 2.
ALL nodes have .userInteractionEnabled = false.
Issue:
When I click anywhere, all I see is that the first child (the sprite node) is touched. The second child (the SKNode) is never touch-detected.
Note that the z-ordering of the nodes renders as I expect it; it is the touch detection that doesn't appear to be working.
Snippet of my touchesBegan method:
for touch in touches {
    let touchLocation = touch.locationInNode(self)
    // Convert the scene point into view coordinates, then hit-test with it.
    let sceneTouchPoint = self.convertPointToView(touchLocation)
    let touchedNode = self.nodeAtPoint(sceneTouchPoint)
    if (touchedNode.name != nil) {
        print("Touched = \(touchedNode.name! as String)")
    }
}
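One thing worth flagging in that snippet (a hedged observation, since the thread never states it outright): convertPointToView(_:) returns view coordinates, but nodeAtPoint(_:) expects a point in the scene's own coordinate space, so the lookup probes the wrong location. A sketch of the same loop without the conversion:

for touch in touches {
    // Hit-test directly with the scene-space location.
    let touchLocation = touch.locationInNode(self)
    let touchedNode = self.nodeAtPoint(touchLocation)
    if let name = touchedNode.name {
        print("Touched = \(name)")
    }
}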
I had a similar issue (a background at z: 999 plus spawned "duck" nodes at z < 999) that I solved with the following code in Swift 4:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first else { return }
    let positionInScene = touch.location(in: self)
    let touchedNodes = self.nodes(at: positionInScene)
    for node in touchedNodes {
        // Remove any spawned "duck" node that was tapped.
        if let name = node.name, name.hasPrefix("pato_") {
            node.removeFromParent()
        }
    }
}
I had several layers of nodes because I used a mask over my game with buttons to make selections and move forward. The buttons did not work until I added a startState: Bool = true flag and set it to false once the start screen had been clicked through; each button on the start page then also required startState == true for its clicks to be handled. It may be that your clicks are being recorded, but not on the node you think you are using. I would put a print("NodeXXX") on each entry in touches, giving each a unique name, so you can see where the touches are actually happening, as in the sketch below.
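A sketch of that gating pattern (names like startButton are illustrative, not from the original):

var startState = true // true while the start screen is showing

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let location = touches.first?.location(in: self) else { return }
    let touched = atPoint(location)
    print("Touched node: \(touched.name ?? "unnamed")") // see where touches actually land
    if touched.name == "startButton" && startState {
        startState = false // the start screen has been clicked through
        // ... dismiss the mask and continue to the game ...
    }
}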
Hope that helps.
Best regards,
mpe